Columns: id (int64, 39 to 79M) · url (string, length 32–168) · text (string, length 7–145k) · source (string, length 2–105) · categories (list, length 1–6) · token_count (int64, 3–32.2k) · subcategories (list, length 0–27)
1,710,185
https://en.wikipedia.org/wiki/Interference%20fit
An interference fit, also known as a pressed fit or friction fit, is a form of fastening between two tight-fitting mating parts that produces a joint which is held together by friction after the parts are pushed together. Depending on the amount of interference, parts may be joined using a tap from a hammer or forced together using a hydraulic press. Critical components that must not sustain damage during joining may also be cooled significantly below room temperature to shrink one of the components before fitting. This method allows the components to be joined without force and produces a shrink-fit interference when the component returns to normal temperature. Interference fits are commonly used with aircraft fasteners to improve the fatigue life of a joint. These fits, though applicable to shaft and hole assembly, are more often used for bearing-housing or bearing-shaft assembly. This is referred to as a 'press-in' mounting. Tightness of fit The tightness of fit is controlled by the amount of interference: the allowance (planned difference from nominal size). Formulas exist to compute the allowance that will result in various strengths of fit, such as loose fit, light interference fit, and interference fit. The value of the allowance depends on which material is being used, how big the parts are, and what degree of tightness is desired. Such values have already been worked out in the past for many standard applications, and they are available to engineers in the form of tables, obviating the need for re-derivation. As an example, a shaft made of 303 stainless steel will form a tight fit when made to the appropriate allowance. A slip fit can be formed when the bore diameter is larger than the rod diameter, for example if the rod is made 12–20 μm under the given bore diameter. For example, the allowance per inch of diameter usually ranges from 0.1% to 0.25% of the diameter, with 0.15% being a fair average. Ordinarily the allowance per inch decreases as the diameter increases; thus the total allowance for a smaller diameter might be about 0.2% of the diameter, whereas for a larger diameter the total allowance might not be over 0.11–0.12%. The parts to be assembled by forced fits are usually made cylindrical, although sometimes they are slightly tapered. Advantages of the taper form are: the possibility of abrasion of the fitted surfaces is reduced; less pressure is required in assembling; and parts are more readily separated when renewal is required. On the other hand, the taper fit is less reliable, because if it loosens, the entire fit is freed by only a little axial movement. Some lubricant, such as white lead and lard oil mixed to the consistency of paint, should be applied to the pin and bore before assembling, to reduce the tendency toward abrasion. Assembling There are two basic methods for assembling an oversize shaft into an undersized hole, sometimes used in combination: force and thermal expansion or contraction. Force There are at least three different terms used to describe an interference fit created via force: press fit, friction fit, and hydraulic dilation. A press fit is achieved with presses that can push the parts together with very large amounts of force. The presses are generally hydraulic, although small hand-operated presses (such as arbor presses) may operate by means of the mechanical advantage supplied by a jackscrew or by a gear reduction driving a rack and pinion. The amount of force applied in hydraulic presses may be anything from a few pounds for the tiniest parts to hundreds of tons for the largest parts. 
The edges of shafts and holes are chamfered (beveled). The chamfer forms a guide for the pressing movement: it helps to distribute the force evenly around the circumference of the hole, allows the compression to occur gradually rather than all at once (making the pressing operation smoother, more easily controlled, and less demanding of force at any one instant), and assists in aligning the shaft parallel with the hole it is being pressed into. In the case of train wheelsets, the wheels are pressed onto the axles by force. Thermal expansion or contraction Most materials expand when heated and shrink when cooled. Enveloping parts are heated (e.g., with torches or gas ovens) and assembled into position while hot, then allowed to cool and contract back toward their former size, except for the compression that results from each part interfering with the other. This is also referred to as shrink-fitting. Railroad axles, wheels, and tires are typically assembled in this way. Alternatively, the enveloped part may be cooled before assembly such that it slides easily into its mating part. Upon warming, it expands and interferes. Cooling is often preferable, as it is less likely than heating to change material properties; e.g., when assembling a hardened gear onto a shaft, heating the gear too much risks drawing its temper. See also References External links Diagram of an interference fit Interference fitting – formulae for calculating clearance reductions when using interference fits for bearings on shafts and in housings Mechanical engineering Metalworking terminology
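The allowance figures above translate directly into the temperature change needed for a shrink or expansion fit. The short sketch below is a minimal illustration, not from the article: it assumes a steel part (linear expansion coefficient of roughly 12e-6 per kelvin, an assumed value), the 0.15% "fair average" allowance quoted above, an example 50 mm diameter, and an arbitrary 0.03 mm assembly clearance, and estimates the diametral interference and the temperature change needed to slide the parts together without force.

```python
# Minimal sketch: shrink-fit sizing from a percentage allowance.
# Assumed/illustrative values: alpha for steel, assembly clearance, shaft diameter.

ALPHA_STEEL = 12e-6      # 1/K, approximate linear expansion coefficient (assumption)
ALLOWANCE_PCT = 0.15     # % of diameter, the "fair average" quoted in the text

def interference(diameter_mm: float, allowance_pct: float = ALLOWANCE_PCT) -> float:
    """Diametral interference implied by a percentage allowance."""
    return diameter_mm * allowance_pct / 100.0

def delta_t_for_assembly(diameter_mm: float, clearance_mm: float = 0.03) -> float:
    """Temperature change needed so the heated (or cooled) part clears its mate."""
    needed_growth = interference(diameter_mm) + clearance_mm  # total diametral change
    return needed_growth / (ALPHA_STEEL * diameter_mm)

if __name__ == "__main__":
    d = 50.0  # mm, example shaft diameter (assumption)
    print(f"interference  : {interference(d):.3f} mm")
    print(f"delta T needed: {delta_t_for_assembly(d):.0f} K")
```

For the assumed 50 mm shaft this gives roughly 0.075 mm of interference and a temperature change on the order of 175 K, which is why cooling in dry ice or liquid nitrogen (or heating in an oven) is used rather than a press when force must be avoided.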
Interference fit
[ "Physics", "Engineering" ]
1,085
[ "Applied and interdisciplinary physics", "Mechanical engineering" ]
13,976,612
https://en.wikipedia.org/wiki/Harris%20functional
In density functional theory (DFT), the Harris energy functional is a non-self-consistent approximation to the Kohn–Sham density functional theory. It gives the energy of a combined system as a function of the electronic densities of the isolated parts. The energy of the Harris functional varies much less than the energy of the Kohn–Sham functional as the density moves away from the converged density. Background Kohn–Sham equations are the one-electron equations that must be solved in a self-consistent fashion in order to find the ground state density of a system of interacting electrons: \left[-\tfrac{1}{2}\nabla^2 + v_{ext}(\mathbf r) + v_H(\mathbf r) + v_{xc}(\mathbf r)\right]\phi_i(\mathbf r) = \epsilon_i \phi_i(\mathbf r). The density, \rho(\mathbf r), is given by that of the Slater determinant formed by the spin-orbitals of the occupied states: \rho(\mathbf r) = \sum_i f_i |\phi_i(\mathbf r)|^2, where the coefficients f_i are the occupation numbers given by the Fermi–Dirac distribution at the temperature of the system with the restriction \sum_i f_i = N, where N is the total number of electrons. In the equation above, v_H is the Hartree potential and v_{xc} is the exchange–correlation potential, which are expressed in terms of the electronic density. Formally, one must solve these equations self-consistently, for which the usual strategy is to pick an initial guess for the density, \rho_0(\mathbf r), substitute it in the Kohn–Sham equation, extract a new density and iterate the process until convergence is obtained. When the final self-consistent density \rho is reached, the energy of the system is expressed as: E = \sum_i f_i \epsilon_i - E_H[\rho] - \int v_{xc}(\mathbf r)\,\rho(\mathbf r)\,d\mathbf r + E_{xc}[\rho]. Definition Assume that we have an approximate electron density \rho_0(\mathbf r), which is different from the exact electron density \rho(\mathbf r). We construct the exchange–correlation potential v_{xc}[\rho_0] and the Hartree potential v_H[\rho_0] based on the approximate electron density \rho_0(\mathbf r). The Kohn–Sham equations are then solved with these XC and Hartree potentials and the eigenvalues \epsilon_i are obtained; that is, we perform one single iteration of the self-consistency calculation. The sum of eigenvalues is often called the band structure energy: E_{band} = \sum_i f_i \epsilon_i, where i loops over all occupied Kohn–Sham orbitals. The Harris energy functional is defined as E_{Harris}[\rho_0] = \sum_i f_i \epsilon_i - E_H[\rho_0] - \int v_{xc}[\rho_0](\mathbf r)\,\rho_0(\mathbf r)\,d\mathbf r + E_{xc}[\rho_0]. Comments It was discovered by Harris that the difference between the Harris energy and the exact total energy is of second order in the error of the approximate electron density, i.e., E_{Harris}[\rho_0] - E = O\big[(\rho_0 - \rho)^2\big]. Therefore, for many systems the accuracy of the Harris energy functional may be sufficient. The Harris functional was originally developed for such calculations rather than self-consistent convergence, although it can be applied in a self-consistent manner in which the density is changed. Many density-functional tight-binding methods, such as CP2K, DFTB+, Fireball, and Hotbit, are built based on the Harris energy functional. In these methods, one often does not perform self-consistent Kohn–Sham DFT calculations and the total energy is estimated using the Harris energy functional, although a version of the Harris functional where one does perform self-consistency calculations has been used. These codes are often much faster than conventional Kohn–Sham DFT codes that solve Kohn–Sham DFT in a self-consistent manner. While the Kohn–Sham DFT energy is a variational functional (never lower than the ground state energy), the Harris DFT energy was originally believed to be anti-variational (never higher than the ground state energy). This was, however, conclusively demonstrated to be incorrect. References Density functional theory
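The key property, that the Harris-style energy evaluated at a trial input density differs from the self-consistent energy only at second order in the density error, can be checked numerically in a toy model. The sketch below is purely illustrative and not from the article: it uses an assumed two-site, one-electron mean-field Hamiltonian with a Hartree-like on-site interaction (no exchange-correlation term), which is enough to exhibit the "band energy minus double counting" structure of the functional.

```python
# Toy illustration (not from the article): a two-site, one-electron mean-field model
# with a Hartree-like on-site interaction, used to check that the Harris-style energy
# evaluated at a trial input density differs from the self-consistent energy only at
# second order in the density error. All numbers below are illustrative assumptions.
import numpy as np

H0 = np.array([[0.0, -1.0],
               [-1.0, 1.0]])   # bare two-site Hamiltonian (assumed parameters)
U = 2.0                        # Hartree-like interaction on site 0 (assumed)

def solve(n_in):
    """One 'Kohn-Sham-like' step: diagonalize h(n_in) and return (eps0, n_out)."""
    h = H0.copy()
    h[0, 0] += U * n_in                      # effective potential built from n_in
    eps, vecs = np.linalg.eigh(h)
    n_out = vecs[0, 0] ** 2                  # occupation of site 0 in the lowest state
    return eps[0], n_out

def harris_energy(n_in):
    """Band energy minus the double-counted Hartree term of the *input* density."""
    eps0, _ = solve(n_in)
    return eps0 - 0.5 * U * n_in ** 2

# Self-consistent density by simple fixed-point iteration
n = 0.5
for _ in range(200):
    _, n = solve(n)
eps0_scf, _ = solve(n)
E_scf = eps0_scf - 0.5 * U * n ** 2          # converged total energy

for delta in (0.1, 0.05, 0.025):
    err = harris_energy(n + delta) - E_scf
    print(f"density error {delta:<6} -> Harris energy error {err: .3e}")
# The printed errors shrink roughly by 4x when delta is halved, i.e. O(delta^2).
```

In this toy the first-order terms cancel exactly (the band-energy derivative U·n* is offset by the double-counting derivative), which is the same cancellation that makes the real Harris functional stationary at the converged density.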
Harris functional
[ "Physics", "Chemistry" ]
658
[ "Density functional theory", "Quantum chemistry", "Quantum mechanics" ]
13,985,721
https://en.wikipedia.org/wiki/Biointerface
A biointerface is the region of contact between a biomolecule, cell, biological tissue, living organism, or organic material considered living, and another biomaterial or inorganic/organic material. The motivation for biointerface science stems from the urgent need to increase the understanding of interactions between biomolecules and surfaces. The behavior of complex macromolecular systems at materials interfaces is important in the fields of biology, biotechnology, diagnostics, and medicine. Biointerface science is a multidisciplinary field in which biochemists who synthesize novel classes of biomolecules (peptide nucleic acids, peptidomimetics, aptamers, ribozymes, and engineered proteins) cooperate with scientists who have developed the tools to position biomolecules with molecular precision (proximal probe methods, nano- and micro-contact methods, e-beam and X-ray lithography, and bottom-up self-assembly methods), scientists who have developed new spectroscopic techniques to interrogate these molecules at the solid–liquid interface, and people who integrate these into functional devices (applied physicists, analytical chemists and bioengineers). Well-designed biointerfaces would facilitate desirable interactions by providing optimized surfaces where biological matter can interact with other inorganic or organic materials, such as by promoting cell and tissue adhesion onto a surface. Topics of interest include, but are not limited to: Neural interfaces Cells in engineered microenvironments and regenerative medicine Computational and modeling approaches to biointerfaces Membranes and membrane-based biosensing Peptides, carbohydrates and DNA at biointerfaces Pathogenesis and pathogen detection Molecularly designed interfaces Nanotube/nanoparticle interfaces Related fields for biointerfaces are biomineralization, biosensors, medical implants, and so forth. Nanostructure interfaces Nanotechnology is a rapidly growing field that has opened up many different possibilities for creating biointerfaces. Nanostructures that are commonly used for biointerfaces include: metal nanomaterials such as gold and silver nanoparticles, semiconductor materials like silicon nanowires, carbon nanomaterials, and nanoporous materials. Due to the many properties unique to each nanomaterial, like size, conductivity, and construction, various applications have been achieved. For example, gold nanoparticles are often functionalized in order to act as drug delivery agents for cancers because their size allows them to collect at tumor sites passively. As another example, the use of silicon nanowires in nanoporous materials to create scaffolds for synthetic tissues allows for monitoring of electrical activity and electrical stimulation of cells as a result of the photoelectric properties of the silicon. The orientation of biomolecules on the interface can also be controlled through the modulation of parameters like pH, temperature and electric field. For example, DNA grafted onto gold electrodes can be made to come closer to the electrode surface on application of a positive electrode potential and, as explained by Rant et al., this can be used to create smart interfaces for biomolecular detection. Likewise, Xiao Ma and others have discussed the electrical control of the binding/unbinding of thrombin from aptamers immobilized on electrodes. They showed that on application of certain positive potentials, the thrombin separates from the biointerface. 
Silicon nanowire interfaces Silicon is a common material used in the technology industry due to its abundance as well as its properties as a semiconductor. However, the bulk form used for computer chips and the like is not well suited to biointerfaces. For these purposes silicon nanowires (SiNWs) are often used. Various methods of growing and composing SiNWs, such as etching, chemical vapor deposition, and doping, allow the properties of the SiNWs to be customized for unique applications. One example of these unique uses is that SiNWs can be used as individual wires for intracellular probes or extracellular devices, or they can be assembled into larger macrostructures. These structures can be shaped into flexible, 3D, macroporous structures (like the scaffolds mentioned above) that can be used for creating synthetic extracellular matrices. In the case of Tian et al., cardiomyocytes were grown on these structures as a way to create a synthetic tissue structure that could be used to monitor the electrical activity of the cells on the scaffold. The device created by Tian et al. takes advantage of the fact that SiNWs are field-effect transistor (FET)-based devices. FET devices respond to electric potential changes at the surface of the device, or in this case the surface of the SiNW. This FET behavior can also be exploited when single SiNWs are used as biosensing devices. SiNW sensors are nanowires that carry specific receptors on their surface; when these bind their respective antigens, the conductivity of the wire changes. These sensors can be inserted into cells with minimal invasiveness, making them in some ways preferable to traditional biosensors such as fluorescent dyes, as well as to other nanoparticle probes that require target labelling. References Biomineralization Biosensors
Biointerface
[ "Chemistry", "Biology" ]
1,118
[ "Bioinorganic chemistry", "Biomineralization", "Biosensors" ]
3,265,197
https://en.wikipedia.org/wiki/Plasma%20recombination
Plasma recombination is a process by which positive ions of a plasma capture free (energetic) electrons and combine with electrons or negative ions to form new neutral atoms (gas). The process of recombination can be described as the reverse of ionization, whereby conditions allow the plasma to revert to a gas. Recombination is an exothermic process, meaning that the plasma releases some of its internal energy, usually in the form of heat. Except for plasma composed of pure hydrogen (or its isotopes), there may also be multiply charged ions. Therefore, a single electron capture results in a decrease of the ion charge, but not necessarily in a neutral atom or molecule. Recombination usually takes place in the whole volume of a plasma (volume recombination), although in some cases it is confined to some region of the volume. Each kind of reaction is called a recombining mode, and the individual rates are strongly affected by the properties of the plasma such as its energy (heat), the density of each species, and the pressure and temperature of the surrounding environment. Examples An everyday example of rapid plasma recombination occurs when a fluorescent lamp is switched off. The low-density plasma in the lamp (which generates the light by bombardment of the fluorescent coating on the inside of the glass wall) recombines in a fraction of a second after the plasma-generating electric field is removed by switching off the electric power source. Hydrogen recombination modes are of vital importance in the development of divertor regions for tokamak reactors. In fact, they may provide a good way of extracting the energy produced in the core of the plasma. At the present time, it is believed that the most likely plasma losses observed in the recombining region are due to two different modes: electron–ion recombination (EIR) and molecular activated recombination (MAR). References Recombination, plasma
Plasma recombination
[ "Physics", "Chemistry" ]
404
[ "Physical phenomena", "Phase transitions", "Plasma physics", "Plasma phenomena", "Phases of matter", "Critical phenomena", "Plasma physics stubs", "Statistical mechanics", "Matter" ]
3,265,205
https://en.wikipedia.org/wiki/Directional%20symmetry%20%28time%20series%29
In statistical analysis of time series and in signal processing, directional symmetry is a statistical measure of a model's performance in predicting the direction of change, positive or negative, of a time series from one time period to the next. Definition Given a time series with values t_1, ..., t_n and a model that makes predictions \hat{t}_1, ..., \hat{t}_n for those values, the directional symmetry (DS) statistic is defined as DS(t, \hat{t}) = \frac{100}{n-1} \sum_{i=2}^{n} d_i, where d_i = 1 if (t_i - t_{i-1})(\hat{t}_i - \hat{t}_{i-1}) \ge 0 and d_i = 0 otherwise. Interpretation The DS statistic gives the percentage of occurrences in which the sign of the change in value from one time period to the next is the same for both the actual and predicted time series. The DS statistic is a measure of the performance of a model in predicting the direction of value changes. The case DS = 100% would indicate that a model perfectly predicts the direction of change of a time series from one time period to the next. See also Statistical finance Notes and references Drossu, Radu, and Zoran Obradovic. "INFFC data analysis: lower bounds and testbed design recommendations." Computational Intelligence for Financial Engineering (CIFEr), 1997, Proceedings of the IEEE/IAFE 1997. IEEE, 1997. Lawrance, A. J., "Directionality and Reversibility in Time Series", International Statistical Review, 59 (1991), 67–79. Tay, Francis E. H., and Lijuan Cao. "Application of support vector machines in financial time series forecasting." Omega 29.4 (2001): 309–317. Xiong, Tao, Yukun Bao, and Zhongyi Hu. "Beyond one-step-ahead forecasting: Evaluation of alternative multi-step-ahead forecasting models for crude oil prices." Energy Economics 40 (2013): 405–415. Symmetry Signal processing
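A minimal sketch of the statistic as defined above; the example actual and predicted series are made-up illustrative values:

```python
# Minimal sketch of the directional symmetry (DS) statistic defined above.
# The example actual/predicted series are made-up illustrative values.
import numpy as np

def directional_symmetry(actual, predicted) -> float:
    """Percentage of periods whose actual and predicted changes share a sign."""
    t = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    same_direction = (np.diff(t) * np.diff(p)) >= 0   # d_i for i = 2..n
    return 100.0 * same_direction.mean()

actual = [1.0, 1.2, 1.1, 1.4, 1.3, 1.5]
predicted = [1.0, 1.3, 1.2, 1.1, 1.2, 1.6]
print(f"DS = {directional_symmetry(actual, predicted):.1f}%")   # 60.0% here: 3 of 5 changes match
```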
Directional symmetry (time series)
[ "Physics", "Mathematics", "Technology", "Engineering" ]
354
[ "Telecommunications engineering", "Computer engineering", "Signal processing", "Geometry", "Symmetry" ]
3,265,506
https://en.wikipedia.org/wiki/Beam%20homogenizer
A beam homogenizer is a device that smooths out the irregularities in a laser beam profile to create a more uniform one. Most beam homogenizers use a multifaceted mirror with square facets. The mirror reflects light at different angles to create a beam with uniform power across the whole beam profile (a "top hat" profile). Some applications of beam homogenizers include their use with excimer lasers for making computer chips and with lasers for heat treating. Most lasers produce a Gaussian beam energy distribution. A beam homogenizer creates an evenly distributed beam energy in place of the Gaussian shape. Unlike a beam shaper, which imparts a certain shape to the beam, a beam homogenizer spreads out the central concentrated energy of the beam over the entire beam diameter. A simple beam homogenizer can be just a murky (diffusing) piece of glass; however, this is a crude solution with low efficiency that produces a blurry beam. For most applications, more advanced methods of beam homogenizing are required, such as diffractive beam homogenizers or micro-lens arrays (MLAs). External links Lens Array Vs Rod Lens For Laser Beam Homogenization References Optical devices Laser science
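To make the "spreading out" concrete, the sketch below (an illustrative calculation, not from the article, with assumed power and beam-size values) compares the peak irradiance of a Gaussian beam with that of an ideal top-hat beam carrying the same total power over a comparable radius:

```python
# Illustrative comparison (assumed numbers): Gaussian vs. ideal "top hat" profile
# with equal total power, as produced by an ideal beam homogenizer.
import numpy as np

P_total = 1.0          # W, total beam power (assumed)
w = 1.0                # mm, Gaussian 1/e^2 radius (assumed)
r_flat = 1.0           # mm, radius of the homogenized top-hat beam (assumed)

# Peak irradiance of a circular Gaussian beam: I0 = 2P / (pi w^2)
I_peak_gauss = 2 * P_total / (np.pi * w**2)

# Uniform irradiance of a top-hat beam of the same power: I = P / (pi r^2)
I_flat = P_total / (np.pi * r_flat**2)

print(f"Gaussian peak irradiance : {I_peak_gauss:.3f} W/mm^2")
print(f"Top-hat irradiance       : {I_flat:.3f} W/mm^2")
print(f"Peak reduced by factor   : {I_peak_gauss / I_flat:.1f}x")
```

For equal radii the peak is reduced by a factor of two, the energy that was concentrated at the center being redistributed across the whole profile.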
Beam homogenizer
[ "Materials_science", "Engineering" ]
256
[ "Glass engineering and science", "Optical devices" ]
3,265,720
https://en.wikipedia.org/wiki/Noise-equivalent%20flux%20density
In optics the noise-equivalent flux density (NEFD) or noise-equivalent irradiance (NEI) of a system is the level of flux density required to be equivalent to the noise present in the system. It is a measure used by astronomers in determining the accuracy of observations. The NEFD can be related to a light detector's noise-equivalent power (NEP) for a collection area A and a photon bandwidth \Delta\nu by NEFD = \kappa\,\mathrm{NEP}/(A\,\Delta\nu), where the factor \kappa (often 2, in the case of switching between measuring a source and measuring off-source) accounts for the photon statistics for the mode of operation. See also External quantum efficiency References Physical quantities Engineering ratios Photodetectors Quantum electronics Spectroscopy
Noise-equivalent flux density
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
140
[ "Physical phenomena", "Molecular physics", "Spectrum (physical sciences)", "Quantum electronics", "Physical quantities", "Metrics", "Instrumental analysis", "Quantity", "Engineering ratios", "Quantum mechanics", "Condensed matter physics", "Nanotechnology", "Spectroscopy", "Physical proper...
3,267,403
https://en.wikipedia.org/wiki/Emotional%20detachment
In psychology, emotional detachment, also known as emotional blunting, is a condition or state in which a person lacks emotional connectivity to others, whether due to an unwanted circumstance or as a positive means to cope with anxiety. Such a coping strategy, also known as emotion-focused coping, is used when avoiding certain situations that might trigger anxiety. It refers to the evasion of emotional connections. Emotional detachment may be a temporary reaction to a stressful situation, or a chronic condition such as depersonalization-derealization disorder. It may also be caused by certain antidepressants. Emotional blunting, also known as reduced affect display, is one of the negative symptoms of schizophrenia. Signs and symptoms Emotional detachment may not be as outwardly obvious as other psychiatric symptoms. Patients diagnosed with emotional detachment have a reduced ability to express emotion, to empathize with others, or to form powerful emotional connections. Patients are also at an increased risk for many anxiety and stress disorders. This can lead to difficulties in creating and maintaining personal relationships. The person may move elsewhere in their mind and appear preoccupied or "not entirely present", or they may seem fully present but exhibit purely intellectual behavior when emotional behavior would be appropriate. They may have a hard time being a loving family member, or they may avoid activities, places, and people associated with past traumas. Their dissociation can lead to lack of attention and, hence, to memory problems and, in extreme cases, amnesia. In some cases, they present an extreme difficulty in giving or receiving empathy, which can be related to the spectrum of narcissistic personality disorder. Additionally, emotional blunting is negatively correlated with remission quality. The negative symptoms are far less likely to disappear when a patient is experiencing emotional blunting. In a study of children ages 4–12, traits of aggression and antisocial behaviors were found to be correlated with emotional detachment. Researchers determined that these could be early signs of emotional detachment, suggesting that parents and clinicians evaluate children with these traits for underlying behavioral problems in order to avoid larger problems (such as emotional detachment) in the future. Among patients treated for depression, higher emotional blunting was found to correlate with higher scores on the Hospital Anxiety and Depression Scale (HADS) and with being male (though the frequency difference was slight). Emotional detachment in small amounts is normal. For example, being able to emotionally and psychologically detach from work when one is not in the workplace is a normal behavior. Emotional detachment becomes an issue when it impairs a person's ability to function on a day-to-day level. Scales While some depression severity scales provide insight into emotional blunting levels, many symptoms are not adequately covered. An attempt to resolve this issue is the Oxford Depression Questionnaire (ODQ), a scale specifically designed for full assessment of emotional blunting symptoms. The ODQ is designed specifically for patients with Major Depressive Disorder (MDD) in order to assess individual levels of emotional blunting. Another scale, known as the Oxford Questionnaire on the Emotional Side-Effects of Antidepressants (OQESA), was developed using qualitative methods. 
Causes Emotional detachment and/or emotional blunting have multiple causes, as the cause can vary from person to person. Emotional detachment or emotional blunting often arises due to adverse childhood experiences, for example physical, sexual, or emotional abuse. Emotional detachment is a maladaptive coping mechanism for trauma, especially in young children who have not developed coping mechanisms. Emotional detachment can also be due to psychological trauma in adulthood, such as abuse, or to traumatic experiences such as war or automobile accidents. Emotional blunting is often caused by antidepressants, in particular selective serotonin reuptake inhibitors (SSRIs) used in MDD and often as an add-on treatment in other psychiatric disorders. Individuals with MDD usually experience emotional blunting as well. Emotional blunting is a symptom of MDD, as depression is negatively correlated with emotional (both positive and negative) experiences. Schizophrenia often occurs with negative symptoms, extrapyramidal signs (EPS), and depression. The latter overlaps with emotional blunting and has been shown to be a core part of the presentation. Schizophrenia in general causes abnormalities in the emotional understanding of individuals, all of which are clinically considered symptoms of emotional blunting. Individuals with schizophrenia have fewer emotional experiences, display less emotional expression, and fail to recognize the emotional experiences and/or expressions of other individuals. Changes in fronto-limbic activity, in conjunction with depression following a left-hemisphere basal ganglia stroke (LBG stroke), may contribute to emotional blunting. LBG strokes are associated with depression and are often caused by disorders of the basal ganglia (BG). Such disorders alter the emotional perception and experiences of the patient. In many cases people with eating disorders (ED) show signs of emotional detachment. This is because many of the circumstances that lead to an ED are the same as those that lead to emotional detachment. For example, people with ED often have experienced childhood abuse. Eating disorders are themselves a maladaptive coping mechanism, and to cope with the effects of an eating disorder, people may turn to emotional detachment. Bereavement or losing a loved one can also be a cause of emotional detachment. The prevalence of emotional blunting is not fully known. Behavioral mechanism Emotional detachment is a manipulative coping mechanism, which allows a person to react calmly to highly emotional circumstances. Emotional detachment, in this sense, is a decision to avoid engaging emotional connections, rather than an inability or difficulty in doing so, typically for personal, social, or other reasons. In this sense it can allow people to maintain boundaries and avoid undesired impact by or upon others related to emotional demands. As such it is a deliberate mental attitude which avoids engaging the emotions of others. This detachment does not necessarily mean avoiding empathy; rather, it allows the person to rationally choose whether or not to be overwhelmed or manipulated by such feelings. Examples where this is used in a positive sense might include emotional boundary management, where a person avoids emotional levels of engagement with people who are in some way emotionally overly demanding, such as difficult co-workers or relatives, or where it is adopted to aid the person in helping others. 
Emotional detachment can also be "emotional numbing" or "emotional blunting", i.e., dissociation, depersonalization or, in its chronic form, depersonalization disorder. This type of emotional numbing or blunting is a disconnection from emotion; it is frequently used as a coping and survival skill during traumatic childhood events such as abuse or severe neglect. With continual use, this coping mechanism can become a response to daily stresses. Emotional detachment may allow acts of extreme cruelty and abuse, supported by the decision to not connect empathically with the person concerned. Social ostracism, such as shunning and parental alienation, is another example in which the decision to shut out a person creates psychological trauma for the shunned party. See also Alexithymia Anhedonia § Social anhedonia Asociality Assertiveness Borderline Personality Disorder Dissociation Dissociative disorders (in DSM-IV) Emotional contagion Emotional dysregulation Emotional isolation Psychic distance Reactive attachment disorder Social rejection Splitting (psychology) Stoicism Structured Clinical Interview for DSM-IV References Symptoms and signs of mental disorders Emotion
Emotional detachment
[ "Biology" ]
1,523
[ "Emotion", "Behavior", "Human behavior" ]
3,267,736
https://en.wikipedia.org/wiki/Transporter%20Classification%20Database
The Transporter Classification Database (or TCDB) is an International Union of Biochemistry and Molecular Biology (IUBMB)-approved classification system for membrane transport proteins, including ion channels. Classification The upper level of classification and a few examples of proteins with known 3D structure: 1. Channels and pores 1.A α-type channels 1.A.1 Voltage-gated ion channel superfamily 1.A.2 Inward-rectifier K+ channel family 1.A.3 Ryanodine-inositol-1,4,5-trisphosphate receptor Ca2+ channel family 1.A.4 Transient receptor potential Ca2+ channel family 1.A.5 Polycystin cation channel family 1.A.6 Epithelial Na+ channel family 1.A.7 ATP-gated P2X receptor cation channel family 1.A.8 Major intrinsic protein superfamily 1.A.9 Neurotransmitter receptor, Cys loop, ligand-gated ion channel family 1.A.10 Glutamate-gated ion channel family of neurotransmitter receptors 1.A.11 Ammonium channel transporter family 1.A.12 Intracellular chloride channel family 1.A.13 Epithelial chloride channel family 1.A.14 Testis-enhanced gene transfer family 1.A.15 Nonselective cation channel-2 family 1.A.16 Formate-nitrite transporter family 1.A.17 Calcium-dependent chloride channel family 1.A.18 Chloroplast envelope anion-channel-forming Tic110 family 1.A.19 Type A influenza virus matrix-2 channel family 1.A.20 BCL2/Adenovirus E1B-interacting protein 3 family 1.A.21 Bcl-2 family 1.A.22 Large-conductance mechanosensitive ion channel 1.A.23 Small-conductance mechanosensitive ion channel 1.A.24 Gap-junction-forming connexin family 1.A.25 Gap-junction-forming innexin family 1.A.26 Mg2+ transporter-E family 1.A.27 Phospholemman family 1.A.28 Urea transporter family 1.A.29 Urea/amide channel family 1.A.30 H+- or Na+-translocating bacterial MotAB flagellar motor/ExbBD outer-membrane transport energizer superfamily 1.A.31 Annexin family 1.A.32 Type B influenza virus NB channel family 1.A.33 Cation-channel-forming heat shock protein 70 family 1.A.34 Bacillus gap junction-like channel-forming complex family 1.A.35 CorA metal ion transporter family 1.A.36 Intracellular chloride channel family 1.A.37 CD20 Ca2+ channel family 1.A.38 Golgi pH regulator family 1.A.39 Type C influenza virus CM2 channel family 1.A.40 Human immunodeficiency virus type I Vpu channel family 1.A.41 Avian reovirus p10 Vvroporin family 1.A.42 HIV viral protein R family 1.A.43 Camphor resistance or fluoride exporter family 1.A.44 Pore-forming tail Tip pb2 protein of phage T5 family 1.A.45 Phage P22 injectisome family 1.A.46 Anion channel-forming bestrophin family 1.A.47 Nucleotide-sensitive anion-selective channel, ICln family 1.A.48 Anion channel Tweety family 1.A.49 Human coronavirus ns12.9 viroporin family 1.A.50 Phospholamban (Ca2+-channel and Ca2+-ATPase regulator) family 1.A.51 The Voltage-gated Proton Channel (VPC) Family 1.A.52 The Ca2+ Release-activated Ca2+ (CRAC) Channel (CRAC-C) Family 1.A.53 The Hepatitis C Virus P7 Viroporin Cation-selective Channel (HCV-P7) Family 1.A.54 The Presenilin ER Ca2+ Leak Channel (Presenilin) Family 1.A.55 The Synaptic Vesicle-Associated Ca2+ Channel, Flower (Flower) Family 1.A.56 The Copper Transporter (Ctr) Family 1.A.57 The Human SARS Coronavirus Viroporin (SARS-VP) 1.A.58 The Type B Influenza Virus Matrix Protein 2 (BM2-C) Family 1.A.59 The Bursal Disease Virus Pore-Forming Peptide, Pep46 (Pep46) Family 1.A.60 The Mammalian Reovirus Pre-forming Peptide, Mu-1 (Mu-1) Family 1.A.61 The Insect Nodavirus Channel-forming Chain F (Gamma-Peptide) Family 1.A.62 The Homotrimeric Cation Channel (TRIC) Family 
1.A.63 The Ignicoccus Outer Membrane α-helical Porin (I-OMP Family 1.A.64 The Plasmolipin (Plasmolipin) Family 1.A.65 The Coronavirus Viroporin E Protein (Viroporin E) Family 1.A.66 The Pardaxin (Pardaxin) Family 1.A.67 The Membrane Mg2+ Transporter (MMgT) Family 1.A.68 The Viral Small Hydrophobic Viroporin (V-SH) Family 1.A.69 The Heteromeric Odorant Receptor Channel (HORC) Family 1.A.70 The Molecule Against Microbes A (MamA) Family 1.A.71 The Brain Acid-soluble Protein Channel (BASP1 Channel) Family 1.A.72 The Mer Superfamily 1.A.73 The Colicin Lysis Protein (CLP) Family 1.A.74 The Mitsugumin 23 (MG23) Family 1.A.75 The Mechanical Nociceptor, Piezo (Piezo) Family 1.A.76 The Magnesium Transporter1 (MagT1) Family 1.A.77 The Mg2+/Ca2+ Uniporter (MCU) Family 1.A.78 The K+-selective Channel in Endosomes and Lysosomes (KEL) Family 1.A.79 The Cholesterol Uptake Protein (ChUP) or Double Stranded RNA Uptake Family 1.A.80 The NS4a Viroporin (NS4a) Family 1.A.81 The Low Affinity Ca2+ Channel (LACC) Family 1.A.82 The Hair Cell Mechanotransduction Channel (HCMC) Family 1.A.83 The SV40 Virus Viroporin VP2 (SV40 VP2) Family 1.A.84 The Calcium Homeostasis Modulator Ca2+ Channel (CALHM-C) Family 1.A.85 The Poliovirus 2B Viroporin (2B Viroporin) Family 1.A.86 The Human Papilloma Virus type 16 (HPV16) L2 Viroporin (L2 Viroporin) Family 1.A.87 The Mechanosensitive Calcium Channel (MCA) Family 1.A.88 The Fungal Potassium Channel (F-Kch) Family 1.A.89 The Human Coronavirus 229E Viroporin (229E Viroporin) Family 1.A.90 The Human Metapneumovirus (HMPV) Viroporin (HMPV-Viroporin) Family 1.A.91 The Cytoadherence-linked Asexual Protein 3.2 of Plasmodium falciparum (Clag3) Family 1.A.92 The Reovirus Viroporin VP10 (RVP10) Family 1.A.93 The Bluetongue Virus Non-Structural Protein 3 Viroporin (NS3) Family 1.A.94 The Rotavirus Non-structural Glycoprotein 4 Viroporin (NSP4) Family 1.A.95 The Ephemerovirus Viroporin (EVVP) Family 1.A.96 The Human Polyoma Virus Viroporin (PVVP) Family 1.A.97 The Human Papillomavirus type 16 E5 Viroporin (HPV-E5) Family 1.A.98 Human T-Lymphotropic Virus 1 P13 protein (HTLV1-P13) Family 1.A.99 The Infectious Bronchitis Virus Envelope Small Membrane Protein E (IBV-E) Family 1.A.100 The Rhabdoviridae Putative Viroporin, U5 (RV-U5) Family 1.A.101 The Peroxisomal Pore-forming Pex11 (Pex11) Family 1.A.102 Influenza A viroporin PB1-F2 (PB1-F2) Family 1.A.103 The Simian Virus 5 (Parainfluenza Virus 5) SH (SV5-SH) Family 1.A.104 The Proposed Flagellar Biosynthesis Na+ Channel, FlaH (FlaH) Family 1.A.105 The Mixed Lineage Kinase Domain-like (MLKL) Family 1.A.106 The Calcium Load-activated Calcium Channel (CLAC) Family 1.A.107 The Pore-forming Globin (Globin) Family 1.B. 
β-Barrel porins and other outer membrane proteins 1.B.1 General bacterial porin family 1.B.2 Chlamydial porin (CP) family 1.B.3 Sugar porin (SP) family 1.B.4 Brucella-Rhizobium porin (BRP) family 1.B.5 Pseudomonas OprP porin (POP) family 1.B.6 OmpA-OmpF porin (OOP) family 1.B.7 Rhodobacter PorCa porin (RPP) family 1.B.8 Mitochondrial and plastid porin (MPP) family 1.B.9 FadL outer membrane protein (FadL) family 1.B.10 Nucleoside-specific channel-forming outer membrane porin (Tsx) family 1.B.11 Outer membrane fimbrial usher porin (FUP) family 1.B.12 Autotransporter-1 (AT-1) family 1.B.13 Alginate export porin (AEP) family 1.B.14 Outer membrane receptor (OMR) family 1.B.15 Raffinose porin (RafY) family 1.B.16 Short chain amide and urea porin (SAP) family 1.B.17 Outer membrane factor (OMF) family 1.B.18 Outer membrane auxiliary (OMA) protein family 1.B.19 Glucose-selective OprB porin (OprB) family 1.B.20 Two-partner secretion (TPS) family 1.B.21 OmpG porin (OmpG) family 1.B.22 Outer bacterial membrane secretin (secretin) family 1.B.23 Cyanobacterial porin (CBP) family 1.B.24 Mycobacterial porin 1.B.25 Outer membrane porin (Opr) family 1.B.26 Cyclodextrin porin (CDP) family 1.B.31 Campylobacter jejuni major outer membrane porin (MomP) family 1.B.32 Fusobacterial outer membrane porin (FomP) family 1.B.33 Outer membrane protein insertion porin (Bam complex) (OmpIP) family 1.B.34 Corynebacterial porins 1.B.35 Oligogalacturonate-specific porin (KdgM) family 1.B.39 Bacterial porin, OmpW (OmpW) family 1.B.42 Outer membrane lipopolysaccharide export porin (LPS-EP) family 1.B.43 Coxiella porin P1 (CPP1) family 1.B.44 Probable protein translocating porphyromonas gingivalis porin (PorT) family 1.B.49 Anaplasma P44 (A-P44) porin family 1.B.48 Curli-like transporters 1.B.54 Intimin/Invasin (Int/Inv) or Autotransporter-3 family 1.B.55 Poly-acetyl-D-glucosamine porin (PgaA) family 1.B.57 Legionella major-outer membrane protein (LM-OMP) family 1.B.60 Omp50 porin (Omp50 Porin) family 1.B.61 Delta-proteobacterial porin (Delta-porin) family 1.B.62 Putative bacterial porin (PBP) family 1.B.66 Putative beta-barrel porin-2 (BBP2) family 1.B.67 Putative beta barrel porin-4 (BBP4) family 1.B.68 Putative beta barrel porin-5 (BBP5) superfamily 1.B.70 Outer membrane channel (OMC) family 1.B.71 Proteobacterial/verrucomicrobial porin (PVP) family 1.B.72 Protochlamydial outer membrane porin (PomS/T) family 1.B.73 Capsule biogenesis/assembly (CBA) family 1.B.78 DUF3374 electron transport-associated porin (ETPorin) family 1.C Pore-forming toxins (proteins and peptides) 1.C.3 α-Hemolysin (αHL) family 1.C.4 Aerolysin family 1.C.5 ε-toxin family 1.C.11 RTX-toxin superfamily 1.C.12 Membrane attack complex/perforin superfamily 1.C.13 Leukocidin family 1.C.14 Cytohemolysin (CHL) family 1.C.39 Thiol-activated cholesterol-dependent cytolysin family 1.C.43 Lysenin family 1.C.56 Pseudomonas syringae HrpZ cation channel family 1.C.57 Clostridial cytotoxin family 1.C.58 The Microcin E492/C24 (Microcin E492) Family 1.C.74 Snake cytotoxin (SCT) family 1.C.97 Pleurotolysin pore-forming family 1.D Non-ribosomally synthesized channels 1.D.1 The Gramicidin A Channel Family 1.D.2 The Channel-forming Syringomycin Family 1.D.3 The Channel-Forming Syringopeptin Family 1.D.4 The Tolaasin Channel-forming Family 1.D.5 The Alamethicin or Peptaibol Antibiotic Channel-forming Family 1.D.6 The Complexed Poly 3-Hydroxybutyrate Ca2+ Channel (cPHB-CC) Family 1.D.7 The Beticolin Family 1.D.8 The Saponin Family 1.D.9 The Polyglutamine Ion Channel (PG-IC) 
Family 1.D.10 The Ceramide-forming Channel Family 1.D.11 The Surfactin Family 1.D.12 The Beauvericin (Beauvericin) Family 1.D.13 DNA-delivery Amphipathic Peptide Antibiotics (DAPA) 1.D.14 The Synthetic Leu/Ser Amphipathic Channel-forming Peptide (l/S-SCP) Family 1.D.15 The Daptomycin (Daptomycin) Family 1.D.16 The Synthetic Amphipathic Pore-forming Heptapeptide (SAPH) Family 1.D.17 Combinatorially-designed, Pore-forming, β-sheet Peptide Family 1.D.18 The Pore-forming Guanosine-Bile Acid Conjugate Family 1.D.19 Ca2+ Channel-forming Drug, Digitoxin Family 1.D.20 The Pore-forming Polyene Macrolide Antibiotic/fungal Agent (PMAA) Family 1.D.21 The Lipid Nanopore (LipNP) Family 1.D.22 The Proton-Translocating Carotenoid Pigment, Zeaxanthin Family 1.D.23 Phenylene Ethynylene Pore-forming Antimicrobial (PEPA) Family 1.D.24 The Marine Sponge Polytheonamide B (pTB) Family 1.D.25 The Arylamine Foldamer (AAF) Family 1.D.26 The Dihydrodehydrodiconiferyl alcohol 9'-O-β-D-glucoside (DDDC9G) Family 1.D.27 The Thiourea isosteres Family 1.D.28 The Lipopeptaibol Family 1.D.29 The Macrocyclic Oligocholate Family 1.D.30 The Artificial Hydrazide-appended pillar[5]arene Channels (HAPA-C) Family 1.D.31 The Amphotericin B Family 1.D.32 The Pore-forming Novicidin Family 1.D.33 The Channel-forming Polytheonamide B Family 1.D.34 The Channel-forming Oligoester Bolaamphiphiles 1.D.35 The Pore-forming cyclic Lipodepsipeptide Family 1.D.36 The Oligobornene Ion Channel Family 1.D.37 The Hibicuslide C Family 1.D.38 The Cyclic Peptide Nanotube (cPepNT) Family 1.D.39 The Light-controlled Azobenzene-based Amphiphilic Molecular Ion Channel (AAM-IC) Family 1.D.40 The Protein-induced Lipid Toroidal Pore Family 1.D.41 The Sprotetonate-type Ionophore (Spirohexanolide) Family 1.D.42 The Phe-Arg Tripeptide-Pillar[5]Arene Channel (TPPA-C) Family 1.D.43 The Triazole-tailored Guanosine Dinucleoside Channel (TT-GDN-C) Family 1.D.44 The Synthetic Ion Channel with Redox-active Ferrocene (ICRF) Family 1.D.45 The Sonoporation and Electroporation Membrane Pore (SEMP) Family 1.D.46 The DNA Nanopore (DnaNP) Family 1.D.47 The Pore-forming Synthetic Cyclic Peptide (PSCP) Family 1.D.48 The Pore-forming Syringomycin E Family 1.D.49 The Transmembrane Carotenoid Radical Channel (CRC) Family 1.D.50 The Amphiphilic bis-Catechol Anion Transporter (AC-AT) Family 1.D.51 The Protein Nanopore (ProNP) Family 1.D.52 The Aromatic Oligoamide Macrocycle Nanopore (OmnNP) Family 1.D.53 The alpha, gamma-Peptide Nanotube (a,gPepNT) Family 1.D.54 The potassium-selective Hexyl-Benzoureido-15-Crown-5-Ether Ion Channel (HBEC) Family 1.D.55 The Porphyrin-based Nanopore (PorNP) Family 1.D.56 The Alpha-Aminoisobutyrate (Aib) Oligomeric Nanopore (AibNP) Family 1.D.57 The Lipid Electro-Pore (LEP) Family 1.D.58 The Anion Transporting Prodigiosene (Prodigiosene) Family 1.D.59 The Anion Transporting Perenosin (Perenosin) Family 1.D.60 The Alpha,Gamma-Cyclic Peptide (AGCP) Family 1.D.61 The Anionophoric (ABBP) Family 1.D.62 The Bis-Triazolyl DiGuanosine Derivative Channel-forming (TDG) Family 1.D.63 The Peptide-based Nanopore (PepNP) Family 1.D.64 The Carbon Nanotube (CarNT) Family 1.D.65 The Pore-forming Amphidinol (Amphidinol) Family 1.D.66 The Helical Macromolecule Nanopore (HmmNP) Family 1.D.67 The Crown Ether-modified Helical Peptide Ion Channel (CEHP) Family 1.D.68 The Pore-forming Pleuronic Block Polymer (PPBP) Family 1.D.69 The Conical Nanopore (ConNP) Family 1.D.70 The Metallic (Au/Ag/Pt/graphene) Nanopore (MetNP) Family 1.D.71 The Synthetic TP359 Peptide (TP359) 
Family 1.D.72 The Chloride Carrier Triazine-based Tripodal Receptor (CCTTR) Family 1.D.73 The Mesoporous Silica Nanopore (SilNP) Family 1.D.74 The Stimulus-responsive Synthetic Rigid p-Octiphenyl Stave Pore (SSROP) Family 1.E Holins 1.F Vesicle fusion pores 1.F.1 The Synaptosomal Vesicle Fusion Pore (SVF-Pore) Family 1.F.2 The Octameric Exocyst (Exocyst) Family 1.G Viral fusion pores 1.G.1 The Viral Pore-forming Membrane Fusion Protein-1 (VMFP1) Family 1.G.2 The Viral Pore-forming Membrane Fusion Protein-2 (VMFP2) Family 1.G.3 The Viral Pore-forming Membrane Fusion Protein-3 (VMFP3) Family 1.G.4 The Viral Pore-forming Membrane Fusion Protein-4 (VMFP4) Family 1.G.5 The Viral Pore-forming Membrane Fusion Protein-5 (VMFP5) Family 1.G.6 The Hepadnaviral S Fusion Protein (HBV-S Protein) Family 1.G.7 The Reovirus FAST Fusion Protein (R-FAST) Family 1.G.8 The Arenavirus Fusion Protein (AV-FP) Family 1.G.9 The Syncytin (Syncytin) Family 1.G.10 The Herpes Simplex Virus Membrane Fusion Complex (HSV-MFC) Family 1.G.11 Poxvirus Cell Entry Protein Complex (PEP-C) Family 1.G.12 The Avian Leukosis Virus gp95 Fusion Protein (ALV-gp95) Family 1.G.13 The Orthoreovirus Fusion-associated Small Transmembrane (FAST) Family 1.G.14 The Influenza Virus Hemagglutinin/Fusion Pore-forming Protein (Influenza-H/FPP) Family 1.G.15 The Autographa californica Nuclear Polyhedrosis Virus Major Envelope Glycoprotein GP64 (GP64) Family 1.G.16 The Human Immunodeficiency Virus Type 1 (HIV-1) Fusion Peptide (HIV-FP) Family 1.G.17 The Bovine Leukemia Virus Envelop Glycoprotein (BLV-Env) Family 1.G.18 The SARS-CoV Fusion Peptide in the Spike Glycoprotein Precursor (SARS-FP) Family 1.G.19 The Rotavirus Pore-forming Membrane Fusion Complex (Rotavirus MFC) Family 1.G.20 The Hantavirus Gc Envelope Fusion Glycoprotein (Gc-EFG) Family 1.G.21 The Epstein Barr Virus (Human Herpes Virus 4) Gp42 (Gp42) Family 1.G.22 The Cytomegalovirus (Human Herpesvirus 5) Glycoprotein gO (gO) Family 1.H Paracellular channels 1.H.1 The Claudin Tight Junction (Claudin1) Family 1.H.2 The Invertebrate PMP22-Claudin (Claudin2) Family 1.I Membrane-bound channels 1.I.1 Nuclear pore complex family, including karyopherins 1.I.2 Plant plasmodesmata family 2. 
Electrochemical potential-driven transporters 2.A Porters (uniporters, symporters, antiporters) 2.A.1 Major Facilitator superfamily (MFS), see also Lactose permease, Phosphate permease and Glucose transporter 2.A.2 The Glycoside-Pentoside-Hexuronide (GPH):Cation Symporter Family 2.A.3 The Amino Acid-Polyamine-Organocation (APC) Family 2.A.4 Cation diffusion facilitator (CDF) Family 2.A.5 Zinc (Zn2+)-Iron (Fe2+) Permease Family 2.A.6 Resistance-Nodulation-Cell Division Superfamily, see also SecDF protein-export membrane protein 2.A.7 The Drug/Metabolite Transporter (DMT) Superfamily 2.A.8 The Gluconate:H+ Symporter (GntP) Family 2.A.9 The Membrane Protein Insertase (YidC/Alb3/Oxa1) Family 2.A.10 The 2-Keto-3-Deoxygluconate Transporter (KdgT) Family 2.A.11 The Citrate-Mg2+:H+ (CitM) Citrate-Ca2+:H+ (CitH) Symporter (CitMHS) Family 2.A.12 ATP:ADP Antiporter Family 2.A.13 The C4-Dicarboxylate Uptake (Dcu) Family 2.A.14 Lactate Permease Family 2.A.15 The Betaine/Carnitine/Choline Transporter (BCCT) Family 2.A.16 Tellurite-resistance/Dicarboxylate Transporter Family 2.A.17 Proton-dependent Oligopeptide Transporter Family 2.A.18 The Amino Acid/Auxin Permease (AAAP) Family 2.A.19 The Ca2+:Cation Antiporter (CaCA) Family 2.A.20 The Inorganic Phosphate Transporter (PiT) Family 2.A.21 Solute:Sodium Symporter Family 2.A.22 The Neurotransmitter:Sodium Symporter Family 2.A.23 The Dicarboxylate/Amino Acid:Cation (Na+ or H+) Symporter (DAACS) Family 2.A.24 The 2-Hydroxycarboxylate Transporter (2-HCT) Family 2.A.25 Alanine or Glycine:Cation Symporter (AGCS) Family 2.A.26 The Branched Chain Amino Acid:Cation Symporter (LIVCS) Family 2.A.27 The Glutamate:Na+ Symporter (ESS) Family 2.A.28 Bile Acid:Na+ Symporter Family 2.A.29 Mitochondrial carrier Family 2.A.30 Cation-Chloride Cotransporter (CCC) Family 2.A.31 Anion Exchanger Family 2.A.32 The Silicon Transporter (Sit) Family 2.A.33 NhaA Na+:H+ Antiporter (NhaA) Family 2.A.34 The NhaB Na+:H+ Antiporter (NhaB) Family 2.A.35 The NhaC Na+:H+ Antiporter (NhaC) Family 2.A.36 Monovalent Cation:Proton Antiporter-1 (CPA1) Family 2.A.37 Monovalent Cation:Proton Antiporter-2 (CPA2) Family 2.A.38 K+ Transporter (Trk) Family 2.A.39 Nucleobase:Cation Symporter-1 (NCS1) Family 2.A.40 Nucleobase:Cation Symporter-2 (NCS2) Family 2.A.41 The Concentrative Nucleoside Transporter (CNT) Family 2.A.42 The Hydroxy/Aromatic Amino Acid Permease (HAAAP) Family 2.A.43 The Lysosomal Cystine Transporter (LCT) Family 2.A.45 Arsenite-Antimonite Efflux Family 2.A.46 The Benzoate:H+ Symporter (BenE) Family 2.A.47 Divalent Anion:Na+ Symporter (DASS) Family 2.A.48 The Reduced Folate Carrier (RFC) Family 2.A.49 Chloride Carrier/Channel (ClC) Family 2.A.50 The Glycerol Uptake (GUP) Family 2.A.51 The Chromate Ion Transporter (CHR) Family 2.A.52 The Ni2+-Co2+ Transporter (NiCoT) Family 2.A.53 Sulfate permease (SulP) Family 2.A.54 The Mitochondrial Tricarboxylate Carrier (MTC) Family 2.A.55 The Metal Ion (Mn2+-iron) Transporter (Nramp) Family 2.A.56 The Tripartite ATP-independent Periplasmic Transporter (TRAP-T) Family 2.A.57 The Equilibrative Nucleoside Transporter (ENT) Family 2.A.58 The Phosphate:Na+ Symporter (PNaS) Family 2.A.59 The Arsenical Resistance-3 (ACR3) Family 2.A.60 Organo Anion Transporter (OAT) Family 2.A.61 The C4-dicarboxylate Uptake C (DcuC) Family 2.A.62 The NhaD Na+:H+ Antiporter (NhaD) Family 2.A.63 The Monovalent Cation (K+ or Na+):Proton Antiporter-3 (CPA3) Family 2.A.64 Twin Arginine Targeting (Tat) Family 2.A.65 The Bilirubin Transporter (BRT) Family 2.A.66 The 
Multidrug/Oligosaccharidyl-lipid/Polysaccharide (MOP) Flippase Superfamily 2.A.67 The Oligopeptide Transporter (OPT) Family 2.A.68 The p-Aminobenzoyl-glutamate Transporter (AbgT) Family 2.A.69 The Auxin Efflux Carrier (AEC) Family 2.A.70 The Malonate:Na+ Symporter (MSS) Family 2.A.71 The Folate-Biopterin Transporter (FBT) Family 2.A.72 The K+ Uptake Permease (KUP) Family 2.A.73 The Short Chain Fatty Acid Uptake (AtoE) Family 2.A.74 The 4 TMS Multidrug Endosomal Transporter (MET) Family 2.A.75 The L-Lysine Exporter (LysE) Family 2.A.76 The Resistance to Homoserine/Threonine (RhtB) Family 2.A.77 The Cadmium Resistance (CadD) Family 2.A.78 The Branched Chain Amino Acid Exporter (LIV-E) Family 2.A.79 The Threonine/Serine Exporter (ThrE) Family 2.A.80 The Tricarboxylate Transporter (TTT) Family 2.A.81 The Aspartate:Alanine Exchanger (AAEx) Family 2.A.82 The Organic Solute Transporter (OST) Family 2.A.83 The Na+-dependent Bicarbonate Transporter (SBT) Family 2.A.84 The Chloroplast Maltose Exporter (MEX) Family 2.A.85 The Aromatic Acid Exporter (ArAE) Family 2.A.86 The Autoinducer-2 Exporter (AI-2E) Family (Formerly the PerM Family, TC #9.B.22) 2.A.87 The Prokaryotic Riboflavin Transporter (P-RFT) Family 2.A.88 Vitamin Uptake Transporter (VUT or ECF) Family 2.A.89 The Vacuolar Iron Transporter (VIT) Family 2.A.90 Vitamin A Receptor/Transporter (STRA6) Family 2.A.91 Mitochondrial tRNA Import Complex (M-RIC) (Formerly 9.C.8) 2.A.92 The Choline Transporter-like (CTL) Family 2.A.94 The Phosphate Permease (Pho1) Family 2.A.95 The 6TMS Neutral Amino Acid Transporter (NAAT) Family 2.A.96 The Acetate Uptake Transporter (AceTr) Family 2.A.97 The Mitochondrial Inner Membrane K+/H+ and Ca2+/H+ Exchanger (LetM1) Family 2.A.98 The Putative Sulfate Exporter (PSE) Family 2.A.99 The 6TMS Ni2+ uptake transporter (HupE-UreJ) Family 2.A.100 The Ferroportin (Fpn) Family 2.A.101 The Malonate Uptake (MatC) Family (Formerly UIT1) 2.A.102 The 4-Toluene Sulfonate Uptake Permease (TSUP) Family 2.A.103 The Bacterial Murein Precursor Exporter (MPE) Family 2.A.104 The L-Alanine Exporter (AlaE) Family 2.A.105 The Mitochondrial Pyruvate Carrier (MPC) Family 2.A.106 The Ca2+:H+ Antiporter-2 (CaCA2) Family 2.A.107 The MntP Mn2+ Exporter (MntP) Family 2.A.108 The Iron/Lead Transporter (ILT) Family 2.A.109 The Tellurium Ion Resistance (TerC) Family 2.A.110 The Heme Transporter, heme-responsive gene protein (HRG) Family 2.A.111 The Na+/H+ Antiporter-E (NhaE) Family 2.A.112 The KX Blood-group Antigen (KXA) Family 2.A.113 The Nickel/cobalt Transporter (NicO) Family 2.A.114 The Putative Peptide Transporter Carbon Starvation CstA (CstA) Family 2.A.115 The Novobiocin Exporter (NbcE) Family 2.A.116 The Peptidoglycolipid Addressing Protein (GAP) Family 2.A.117 The Chlorhexadine Exporter (CHX) family 2.A.118 The Basic Amino Acid Antiporter (ArcD) Family 2.A.119 The Organo-Arsenical Exporter (ArsP) Family 2.A.120 The Putative Amino Acid Permease (PAAP) Family 2.A.121 The Sulfate Transporter (CysZ) Family 2.A.122 The LrgB/CidB holin-like auxiliary protein (LrgB/CidB) Family 2.A.123 The Sweet; PQ-loop; Saliva; MtN3 (Sweet) Family 2.A.124 The Lysine Exporter (LysO) Family 2.A.125 The Eukaryotic Riboflavin Transporter (E-RFT) Family 2.A.126 The Fatty Acid Exporter (FAX) Family 2.A.127 Enterobacterial Cardiolipin Transporter (CLT) Family 2.B Nonribosomally synthesized porters 2.B.1 The Valinomycin Carrier Family 2.B.2 The Monensin Family 2.B.3 The Nigericin Family 2.B.4 The Macrotetrolide Antibiotic (MA) Family 2.B.5 The Macrocyclic Polyether 
(MP) Family 2.B.6 The Ionomycin Family 2.B.7 The Transmembrane α-helical Peptide Phospholipid Translocation (TMP-PLT) Family 2.B.8 The Bafilomycin A1 (Bafilomycin) Family 2.B.9 The Cell Penetrating Peptide (CPP) Functional Family 2.B.10 The Synthetic CPP, Transportan Family 2.B.11 The Calcimycin or A23187 Carrier-type Ionophore Family 2.B.12 The Salinomycin Family 2.B.13 The Tetrapyrrolic Macrocyclic Anion Antiporter (TPMC-AA) Family 2.B.14 The Lasalocid A or X-537A Ionophore (Lasalocid) Family 2.B.15 The Tris-thiourea Tripodal-based Chloride Carrier (TTT-CC) Family 2.B.16 The Halogen-bond-containing Compound Anion Carrier (HCAC) Family 2.B.17 The Isophthalaminde Derivative H+:Cl− Co-transporter (IDC) Family 2.B.18 The Pyridine-2,6-Dicarboxamine Derivative (PDCA) H+:Cl− Co-transporter Family 2.B.19 The Calix(4)pyrrole Derivative (C4P) Family 2.B.20 The Prodigiosin (Prodigiosin) Chloride/Bicarbonate Exchanger Family 2.B.21 The ortho-Phenylenediamine-bis-Urea Derivative Anion Transporter (oPDA-U) Family 2.B.22 The Imidazolium-functionalized Anion Transporter (IAT) Family 2.B.23 The Homotetrameric Transmembrane Zn2+/Co2+:Proton Synthetic Antiporter, Rocker (Rocker) Family 2.B.24 The Anion Carrier (BBP-AC) Family 2.B.25 The Peptide-mediated Lipid Flip-Flop (PLFF) Family 2.B.26 The Conjugate (BIBCC) Family 2.B.27 The Tris-Urea Anion Transporter Family 2.B.29 The Anionophoric Marine Alkaloid Tambjamine Family 2.C Ion-gradient-driven energizers 2.C.1 The TonB-ExbB-ExbD/TolA-TolQ-TolR (TonB) Family of Auxiliary Proteins for Energization of Outer Membrane Receptor (OMR)-mediated Active Transport 3. Primary active transporters 3.A. P-P-bond hydrolysis-driven transporters 3.A.1 ABC transporters including BtuCD, molybdate uptake transporter, Cystic fibrosis transmembrane conductance regulator and others 3.A.2 The H+- or Na+-translocating F-type ATPase, V-type ATPase and A-type ATPase superfamily 3.A.3 The P-type ATPase Superfamily 3.A.4 The Arsenite-Antimonite efflux family 3.A.5 General secretory pathway (Sec) translocon (preprotein translocase SecY) 3.A.6 The Type III (Virulence-related) Secretory Pathway (IIISP) Family 3.A.7 The Type IV (Conjugal DNA-Protein Transfer or VirB) Secretory Pathway (IVSP) Family 3.A.8 The Mitochondrial Protein Translocase (MPT) Family 3.A.9 The Chloroplast Envelope Protein Translocase (CEPT or Tic-Toc) Family 3.A.10 H+, Na+-translocating Pyrophosphatase family 3.A.11 The Bacterial Competence-related DNA Transformation Transporter (DNA-T) Family 3.A.12 The Septal DNA Translocator (S-DNA-T) Family 3.A.13 The Filamentous Phage Exporter (FPhE) Family 3.A.14 The Fimbrilin/Protein Exporter (FPE) Family 3.A.15 The Outer Membrane Protein Secreting Main Terminal Branch (MTB) Family 3.A.16 The Endoplasmic Reticular Retrotranslocon (ER-RT) Family 3.A.17 The Phage T7 Injectisome (T7 Injectisome) Family 3.A.18 The Nuclear mRNA Exporter (mRNA-E) Family 3.A.19 The TMS Recognition/Insertion Complex (TRC) Family 3.A.20 The Peroxisomal Protein Importer (PPI) Family 3.A.21 The C-terminal Tail-Anchored Membrane Protein Biogenesis/ Insertion Complex (TAMP-B) Family 3.A.22 The Transcription-coupled TREX/TAP Nuclear mRNA Export Complex (TREX) Family 3.A.23 The Type VI Symbiosis/Virulence Secretory Pathway (VISP) Family 3.A.24 Type VII or ESX Protein Secretion System (T7SS) Family 3.A.25 The Symbiont-specific ERAD-like Machinery (SELMA) Family 3.A.26 The Plasmodium Translocon of Exported proteins (PTEX) Family 3.B Decarboxylation-driven transporters 3.B.1 The Na+-transporting Carboxylic Acid 
Decarboxylase (NaT-DC) Family 3.C Methyltransfer-driven transporters 3.C.1 The Na+ Transporting Methyltetrahydromethanopterin:Coenzyme M Methyltransferase (NaT-MMM) Family 3.D. Oxidoreduction-driven transporters They include a number of transmembrane cytochrome b-like proteins including coenzyme Q - cytochrome c reductase (cytochrome bc1 ); cytochrome b6f complex; formate dehydrogenase, respiratory nitrate reductase; succinate - coenzyme Q reductase (fumarate reductase); and succinate dehydrogenase. See electron transport chain. 3.D.1 The H+ or Na+-translocating NADH Dehydrogenase ("complex I") family 3.D.2 The Proton-translocating Transhydrogenase (PTH) Family 3.D.3 The Proton-translocating Quinol:Cytochrome c Reductase) Superfamily 3.D.4 Proton-translocating Cytochrome Oxidase (COX) Superfamily 3.D.5 The Na+-translocating NADH:Quinone Dehydrogenase (Na-NDH or NQR) Family 3.D.6 The Putative Ion (H+ or Na+)-translocating NADH:Ferredoxin Oxidoreductase (NFO or RNF) Family 3.D.7 The H2:Heterodisulfide Oxidoreductase (HHO) Family 3.D.8 The Na+- or H+-Pumping Formyl Methanofuran Dehydrogenase (FMF-DH) Family 3.D.9 The H+-translocating F420H2 Dehydrogenase (F420H2DH) Family 3.D.10 The Prokaryotic Succinate Dehydrogenase (SDH) Family 3.E. Light absorption-driven transporters Bacteriorhodopsin-like proteins including rhodopsin (see also opsin) Bacterial photosynthetic reaction centres and photosystems I and II Light harvesting complexes from bacteria and chloroplasts 4. Group translocators 4.A Phosphotransfer-driven group translocators 4.A.1 The PTS Glucose-Glucoside (Glc) Family 4.A.2 The PTS Fructose-Mannitol (Fru) Family 4.A.3 The PTS Lactose-N,N'-Diacetylchitobiose-β-glucoside (Lac) Family 4.A.4 The PTS Glucitol (Gut) Family 4.A.5 The PTS Galactitol (Gat) Family 4.A.6 The PTS Mannose-Fructose-Sorbose (Man) Family 4.A.7 The PTS L-Ascorbate (L-Asc) Family 4.B Nicotinamide ribonucleoside uptake transporters 4.B.1 The Nicotinamide Ribonucleoside (NR) Uptake Permease (PnuC) Family 4.C Acyl CoA ligase-coupled transporters 4.C.1 The Proposed Fatty Acid Transporter (FAT) Family 4.C.2 The Carnitine O-Acyl Transferase (CrAT) Family 4.C.3 The Acyl-CoA Thioesterase (AcoT) Family 4.D Polysaccharide Synthase/Exporters 4.D.1 The Putative Vectorial Glycosyl Polymerization (VGP) Family 4.D.2 The Glycosyl Transferase 2 (GT2) Family 4.D.3 The Glycan Glucosyl Transferase (OpgH) Family 4.E. Vacuolar Polyphosphate Polymerase-catalyzed Group Translocators 4.E.1 The Vacuolar (Acidocalcisome) Polyphosphate Polymerase (V-PPP) Family 5. Transport electron carriers 5.A Transmembrane 2-electron transfer carriers 5.A.1 The Disulfide Bond Oxidoreductase D (DsbD) Family 5.A.2 The Disulfide Bond Oxidoreductase B (DsbB) Family 5.A.3 The Prokaryotic Molybdopterin-containing Oxidoreductase (PMO) Family 5.B Transmembrane 1-electron transfer carriers 5.B.1 The Phagocyte (gp91phox) NADPH Oxidase Family 5.B.2 The Eukaryotic Cytochrome b561 (Cytb561) Family 5.B.3 The Geobacter Nanowire Electron Transfer (G-NET) Family 5.B.4 The Plant Photosystem I Supercomplex (PSI) Family 5.B.5 The Extracellular Metal Oxido-Reductase (EMOR) Family 5.B.6 The Transmembrane Epithelial Antigen Protein-3 Ferric Reductase (STEAP) Family 5.B.7 The YedZ (YedZ) Family 5.B.8 The Trans-Outer Membrane Electron Transfer Porin/Cytochrome Complex (ET-PCC) Family 5.B.9 The Porin-Cytochrome c (Cyc2) Family 8. 
Accessory factors involved in transport 8.A Auxiliary transport proteins 8.B Ribosomally synthesized protein/peptide toxins that target channels and carriers 8.C Non-ribosomally synthesized toxins that target channels and carriers 9. Incompletely characterized transport systems 9.A Recognized transporters of unknown biochemical mechanism 9.B Putative transport proteins 9.C Functionally characterized transporters lacking identified sequences References External links Transporter Classification Database List at qmul.ac.uk Classification of human transporters in pharmacology Biochemistry databases Transport proteins Transmembrane proteins Protein classification
Transporter Classification Database
[ "Chemistry", "Biology" ]
10,329
[ "Biochemistry", "Biochemistry databases", "Protein classification" ]
3,270,043
https://en.wikipedia.org/wiki/Electric%20power
Electric power is the rate of transfer of electrical energy within a circuit. Its SI unit is the watt, the general unit of power, defined as one joule per second. Standard prefixes apply to watts as with other SI units: thousands, millions and billions of watts are called kilowatts, megawatts and gigawatts respectively. In common parlance, electric power is the production and delivery of electrical energy, an essential public utility in much of the world. Electric power is usually produced by electric generators, but can also be supplied by sources such as electric batteries. It is usually supplied to businesses and homes (as domestic mains electricity) by the electric power industry through an electrical grid. Electric power can be delivered over long distances by transmission lines and used for applications such as motion, light or heat with high efficiency. Definition Electric power, like mechanical power, is the rate of doing work, measured in watts, and represented by the letter P. The term wattage is used colloquially to mean "electric power in watts". The electric power in watts produced by an electric current I consisting of a charge of Q coulombs every t seconds passing through an electric potential (voltage) difference of V is: P = W/t = VQ/t = VI, where: W is work in joules t is time in seconds Q is electric charge in coulombs V is electric potential or voltage in volts I is electric current in amperes I.e., watts = volts times amps. Explanation Electric power is transformed to other forms of energy when electric charges move through an electric potential difference (voltage), which occurs in electrical components in electric circuits. From the standpoint of electric power, components in an electric circuit can be divided into two categories: Active devices (power sources) If electric current is forced to flow through the device in the direction from the lower electric potential to the higher, so positive charges move from the negative to the positive terminal, work will be done on the charges, and energy is being converted to electric potential energy from some other type of energy, such as mechanical energy or chemical energy. Devices in which this occurs are called active devices or power sources; such as electric generators and batteries. Some devices can be either a source or a load, depending on the voltage and current through them. For example, a rechargeable battery acts as a source when it provides power to a circuit, but as a load when it is connected to a battery charger and is being recharged. Passive devices (loads) If conventional current flows through the device in a direction from higher potential (voltage) to lower potential, so positive charge moves from the positive (+) terminal to the negative (−) terminal, work is done by the charges on the device. The potential energy of the charges due to the voltage between the terminals is converted to kinetic energy in the device. These devices are called passive components or loads; they 'consume' electric power from the circuit, converting it to other forms of energy such as mechanical work, heat, light, etc. Examples are electrical appliances, such as light bulbs, electric motors, and electric heaters. In alternating current (AC) circuits the direction of the voltage periodically reverses, but the current always flows from the higher potential to the lower potential side. Passive sign convention Since electric power can flow either into or out of a component, a convention is needed for which direction represents positive power flow.
Electric power flowing out of a circuit into a component is arbitrarily defined to have a positive sign, while power flowing into a circuit from a component is defined to have a negative sign. Thus passive components have positive power consumption, while power sources have negative power consumption. This is called the passive sign convention. Resistive circuits In the case of resistive (Ohmic, or linear) loads, the power formula (P = I·V) and Joule's first law (P = I^2·R) can be combined with Ohm's law (V = I·R) to produce alternative expressions for the amount of power that is dissipated: P = I^2·R = V^2/R, where R is the electrical resistance. Alternating current without harmonics In alternating current circuits, energy storage elements such as inductance and capacitance may result in periodic reversals of the direction of energy flow. The portion of energy flow (power) that, averaged over a complete cycle of the AC waveform, results in net transfer of energy in one direction is known as real power (also referred to as active power). The amplitude of that portion of energy flow (power) that results in no net transfer of energy but instead oscillates between the source and load in each cycle due to stored energy, is known as the absolute value of reactive power. The product of the RMS value of the voltage wave and the RMS value of the current wave is known as apparent power. The real power P in watts consumed by a device is given by P = (1/2)·Vp·Ip·cos θ = Vrms·Irms·cos θ, where Vp is the peak voltage in volts Ip is the peak current in amperes Vrms is the root-mean-square voltage in volts Irms is the root-mean-square current in amperes θ = θv − θi is the phase angle by which the voltage sine wave leads the current sine wave, or equivalently the phase angle by which the current sine wave lags the voltage sine wave The relationship between real power, reactive power and apparent power can be expressed by representing the quantities as vectors. Real power is represented as a horizontal vector and reactive power is represented as a vertical vector. The apparent power vector is the hypotenuse of a right triangle formed by connecting the real and reactive power vectors. This representation is often called the power triangle. Using the Pythagorean Theorem, the relationship among real, reactive and apparent power is: (apparent power)^2 = (real power)^2 + (reactive power)^2. Real and reactive powers can also be calculated directly from the apparent power, when the current and voltage are both sinusoids with a known phase angle θ between them: (real power) = (apparent power)·cos θ and (reactive power) = (apparent power)·sin θ. The ratio of real power to apparent power is called power factor and is a number always between −1 and 1. Where the currents and voltages have non-sinusoidal forms, power factor is generalized to include the effects of distortion. Electromagnetic fields Electrical energy flows wherever electric and magnetic fields exist together and fluctuate in the same place. The simplest example of this is in electrical circuits, as the preceding section showed. In the general case, however, the simple equation P = IV may be replaced by a more complex calculation. The closed surface integral of the cross-product of the electric field intensity and magnetic field intensity vectors gives the total instantaneous power (in watts) out of the volume: P = ∮ (E × H) · dA. The result is a scalar since it is the surface integral of the Poynting vector. Production Generation The fundamental principles of much electricity generation were discovered during the 1820s and early 1830s by the British scientist Michael Faraday.
His basic method is still used today: electric current is generated by the movement of a loop of wire, or disc of copper between the poles of a magnet. For electric utilities, it is the first process in the delivery of electricity to consumers. The other processes, electricity transmission, distribution, and electrical energy storage and recovery using pumped-storage methods are normally carried out by the electric power industry. Electricity is mostly generated at a power station by electromechanical generators, driven by heat engines heated by combustion, geothermal power or nuclear fission. Other generators are driven by the kinetic energy of flowing water and wind. There are many other technologies that are used to generate electricity such as photovoltaic solar panels. A battery is a device consisting of one or more electrochemical cells that convert stored chemical energy into electrical energy. Since the invention of the first battery (or "voltaic pile") in 1800 by Alessandro Volta and especially since the technically improved Daniell cell in 1836, batteries have become a common power source for many household and industrial applications. According to a 2005 estimate, the worldwide battery industry generates US$48 billion in sales each year, with 6% annual growth. There are two types of batteries: primary batteries (disposable batteries), which are designed to be used once and discarded, and secondary batteries (rechargeable batteries), which are designed to be recharged and used multiple times. Batteries are available in many sizes; from miniature button cells used to power hearing aids and wristwatches to battery banks the size of rooms that provide standby power for telephone exchanges and computer data centers. Electric power industry The electric power industry provides the production and delivery of power, in sufficient quantities to areas that need electricity, through a grid connection. The grid distributes electrical energy to customers. Electric power is generated by central power stations or by distributed generation. The electric power industry has gradually been trending towards deregulation – with emerging players offering consumers competition to the traditional public utility companies. Uses Electric power, produced from central generating stations and distributed over an electrical transmission grid, is widely used in industrial, commercial, and consumer applications. A country's per capita electric power consumption correlates with its industrial development. Electric motors power manufacturing machinery and propel subways and railway trains. Electric lighting is the most important form of artificial light. Electrical energy is used directly in processes such as extraction of aluminum from its ores and in production of steel in electric arc furnaces. Reliable electric power is essential to telecommunications and broadcasting. Electric power is used to provide air conditioning in hot climates, and in some places, electric power is an economically competitive energy source for building space heating. The use of electric power for pumping water ranges from individual household wells to irrigation and energy storage projects. See also EGRID Electric energy consumption Electric power system High-voltage cable Power engineering Rural electrification References Bibliography Reports on August 2003 Blackout, North American Electric Reliability Council website External links U.S. Department of Energy: Electric Power Power Temporal rates
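As a small numerical sketch of the power relations given in the article above (P = VI for a DC load, and P = Vrms·Irms·cos θ together with the power triangle for sinusoidal AC), the following Python snippet evaluates them for made-up values; the voltage, current and phase-angle figures are arbitrary illustrations, not data from the article.

import math

# DC (or instantaneous) power: P = V * I
v_dc = 12.0        # volts (assumed)
i_dc = 2.5         # amperes (assumed)
p_dc = v_dc * i_dc # watts
print(f"DC power: {p_dc:.1f} W")

# Sinusoidal AC without harmonics: assumed RMS values and phase angle
v_rms = 230.0             # volts
i_rms = 5.0               # amperes
theta = math.radians(30)  # voltage leads current by 30 degrees

s = v_rms * i_rms        # apparent power (VA)
p = s * math.cos(theta)  # real power (W)
q = s * math.sin(theta)  # reactive power (var)
pf = p / s               # power factor, equal to cos(theta) here

# Power triangle check: (apparent)^2 = (real)^2 + (reactive)^2
assert math.isclose(s**2, p**2 + q**2)
print(f"apparent {s:.0f} VA, real {p:.0f} W, reactive {q:.0f} var, power factor {pf:.2f}")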
Electric power
[ "Physics", "Mathematics", "Engineering" ]
2,013
[ "Temporal quantities", "Electromagnetic quantities", "Physical quantities", "Quantity", "Temporal rates", "Power (physics)", "Electric power", "Electrical engineering" ]
3,270,504
https://en.wikipedia.org/wiki/Oxinium
Oxinium is the brand name of a material used for replacement joints manufactured by the reconstructive orthopedic surgery division of medical devices company Smith & Nephew. It consists of a zirconium alloy metal substrate that transitions into a ceramic zirconium oxide outer surface. The ceramic surface is extremely abrasion resistant compared to traditional metal implant materials such as cobalt chromium. It also has a lower coefficient of friction against ultra-high molecular weight polyethylene (UHMWPE), the typical counterface material used in total joint replacements. These two factors likely contribute to the significantly lower UHMWPE wear rates observed in simulator testing. Reducing UHMWPE wear is thought to decrease the risk of implant failure due to osteolysis. All-ceramic materials can have a similar effect on reducing wear, but are brittle and difficult to manufacture. The metal substrate of Oxinium implants makes components easier to manufacture and gives them greater toughness (a combination of strength and ductility). In essence, this technology combines the abrasion resistance and low friction of a ceramic with the workability and toughness of a metal. This combination of properties led to Oxinium technology being the first ever implant-related technology to win the prestigious ASM International Engineering Materials Achievement Award (EMAA) in 2005. Current competitive reduced-wear options in total hip arthroplasty (THA) are ceramic-on-ceramic, metal-on-metal, and metal-on-cross-linked polyethylene. The only competitive reduced-wear option for total knee arthroplasty (TKA) is metal-on-cross-linked polyethylene. In September 2003, Smith & Nephew recalled its Macrotextured Oxinium Profix and Genesis II knee implants because of reports that 30 people receiving the implants without bone cement had to undergo a replacement surgery after they became loose. References External links Smith & Nephew Corporate Website. Biomaterials Implants (medicine) Zirconium alloys
Oxinium
[ "Physics", "Chemistry", "Biology" ]
417
[ "Biomaterials", "Materials", "Alloys", "Zirconium alloys", "Matter", "Medical technology" ]
3,271,052
https://en.wikipedia.org/wiki/Dynamic%20modulus
Dynamic modulus (sometimes complex modulus) is the ratio of stress to strain under vibratory conditions (calculated from data obtained from either free or forced vibration tests, in shear, compression, or elongation). It is a property of viscoelastic materials. Viscoelastic stress–strain phase-lag Viscoelasticity is studied using dynamic mechanical analysis where an oscillatory force (stress) is applied to a material and the resulting displacement (strain) is measured. In purely elastic materials the stress and strain occur in phase, so that the response of one occurs simultaneously with the other. In purely viscous materials, there is a phase difference between stress and strain, where strain lags stress by a 90 degree (π/2 radian) phase lag. Viscoelastic materials exhibit behavior somewhere in between that of purely viscous and purely elastic materials, exhibiting some phase lag in strain. Stress and strain in a viscoelastic material can be represented using the following expressions: Strain: ε = ε0·sin(ωt) Stress: σ = σ0·sin(ωt + δ) where ω is frequency of strain oscillation, t is time, δ is phase lag between stress and strain. The stress relaxation modulus G(t) is the ratio of the stress remaining at time t after a step strain ε was applied at time t = 0: G(t) = σ(t)/ε, which is the time-dependent generalization of Hooke's law. For visco-elastic solids, G(t) converges to the equilibrium shear modulus G: G = lim t→∞ G(t). The Fourier transform of the shear relaxation modulus G(t) is the complex shear modulus G*(ω) (see below). Storage and loss modulus The storage and loss modulus in viscoelastic materials measure the stored energy, representing the elastic portion, and the energy dissipated as heat, representing the viscous portion. The tensile storage and loss moduli are defined as follows: Storage: E′ = (σ0/ε0)·cos δ Loss: E″ = (σ0/ε0)·sin δ Similarly we also define shear storage and shear loss moduli, G′ and G″. Complex variables can be used to express the moduli E* and G* as follows: E* = E′ + iE″ and G* = G′ + iG″, where i is the imaginary unit. Ratio between loss and storage modulus The ratio of the loss modulus to storage modulus in a viscoelastic material is defined as the tan δ (cf. loss tangent), which provides a measure of damping in the material. tan δ can also be visualized as the tangent of the phase angle (δ) between the storage and loss modulus. Tensile: tan δ = E″/E′ Shear: tan δ = G″/G′ For a material with a tan δ greater than 1, the energy-dissipating, viscous component of the complex modulus prevails. See also Dynamic mechanical analysis Elastic modulus Palierne equation References Physical quantities Solid mechanics Non-Newtonian fluids Viscoelasticity
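As a minimal worked illustration of the storage modulus, loss modulus and loss tangent defined above, the Python sketch below evaluates E′ = (σ0/ε0)·cos δ, E″ = (σ0/ε0)·sin δ and tan δ = E″/E′ for an invented stress amplitude, strain amplitude and phase lag; the numbers are arbitrary and do not describe any particular material.

import math

# Assumed (illustrative) oscillatory test values
stress_amplitude = 2.0e6   # sigma_0 in Pa
strain_amplitude = 0.01    # epsilon_0, dimensionless
delta = math.radians(15)   # phase lag between stress and strain

# Magnitude of the dynamic (complex) modulus
e_dyn = stress_amplitude / strain_amplitude

# Storage (elastic) and loss (viscous) parts
e_storage = e_dyn * math.cos(delta)     # E'
e_loss = e_dyn * math.sin(delta)        # E''
e_complex = complex(e_storage, e_loss)  # E* = E' + i*E''

# Loss tangent: ratio of dissipated to stored response
tan_delta = e_loss / e_storage
assert math.isclose(tan_delta, math.tan(delta))

print(f"E' = {e_storage:.3e} Pa, E'' = {e_loss:.3e} Pa, tan(delta) = {tan_delta:.3f}")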
Dynamic modulus
[ "Physics", "Mathematics" ]
519
[ "Solid mechanics", "Physical phenomena", "Physical quantities", "Quantity", "Mechanics", "Physical properties" ]
400,184
https://en.wikipedia.org/wiki/Henry%20Calderwood
Rev Henry Calderwood FRSE LLD (10 May 1830, Peebles – 19 November 1897, Edinburgh) was a Scottish minister and philosopher. Life He was born in Peebles on 10 May 1830, the son of William Calderwood, a corn merchant, and his wife Elizabeth Mitchell. He was educated at the Edinburgh Institution and then the High School in Edinburgh, and later attended University of Edinburgh. He studied for the ministry of the United Presbyterian Church of Scotland, and in 1856 was ordained pastor of the Greyfriars church, Glasgow. He also examined in mental philosophy for the University of Glasgow from 1861 to 1864, and from 1866 conducted the moral philosophy classes at that university, until in 1868 he became Professor of Moral Philosophy at Edinburgh, holding this post until his death 29 years later. He was made LL.D. of Glasgow in 1865. At this point the family lived at 197 St Vincent Street. In 1869 he was elected a Fellow of the Royal Society of Edinburgh his proposer being John Hutton Balfour. His address was then given as Craigrowan, a large villa in Merchiston on the west side of the city. His first and most famous work was The Philosophy of the Infinite (1854), in which he attacked the statement of Sir William Hamilton that we can have no knowledge of the Infinite. Calderwood maintained that such knowledge, though imperfect, is real and ever-increasing; that Faith implies Knowledge. His moral philosophy is in direct antagonism to Hegelian doctrine, and endeavours to substantiate the doctrine of divine sanction. Beside the data of experience, the mind has pure activity of its own whereby it apprehends the fundamental realities of life and combat. He wrote in addition A Handbook of Moral Philosophy, On the Relations of Mind and Brain, Science and Religion, The Evolution of Man's Place in Nature. Among his religious works the best-known is his Parables of Our Lord, and just before his death he finished a biography of David Hume in the Famous Scots Series. His interests were not confined to religious and intellectual matters; as the first chairman of the Edinburgh school board, he worked hard to bring the Education Act into working order. He published a well-known treatise on education. In the cause of philanthropy and temperance he was indefatigable. In politics he was at first a Liberal, but became a Liberal Unionist at the time of the Home Rule Bill. Calderwood was an advocate of theistic evolution. In his book Evolution and Man's Place in Nature he wrote that "evolution stands before us as an impressive reality in the history of Nature." He is buried in Morningside Cemetery, Edinburgh, towards the south-west. His wife Anne Hutton Leadbetter (d.1912) lies with him. Family He married Anne Hutton Leadbetter in 1857. Their children included the marine biologist William Leadbetter Calderwood FRSE (1865–1950). References The life of Henry Calderwood, LL.D., F.R.S.E. by his son W.L. Calderwood and David Woodside; with a special chapter on his philosophical works by A. Seth Pringle-Pattison (1900) Calderwood, Henry (1881). The Relations of Science and Religion. 
Macmillan (reissued by Cambridge University Press, 2009; ) External links 1830 births 1897 deaths People educated at Stewart's Melville College People educated at the Royal High School, Edinburgh People from Peebles 19th-century ministers of the Church of Scotland 19th-century Scottish Presbyterian ministers Scottish philosophers Fellows of the Royal Society of Edinburgh Alumni of the University of Edinburgh Academics of the University of Edinburgh 19th-century Scottish philosophers Theistic evolutionists
Henry Calderwood
[ "Biology" ]
751
[ "Non-Darwinian evolution", "Theistic evolutionists", "Biology theories" ]
400,339
https://en.wikipedia.org/wiki/Phenol%20formaldehyde%20resin
Phenol formaldehyde resins (PF), also called phenolic resins or phenoplasts, are synthetic polymers obtained by the reaction of phenol or substituted phenol with formaldehyde. Used as the basis for Bakelite, PFs were the first commercial synthetic resins. They have been widely used for the production of molded products including billiard balls, laboratory countertops, and as coatings and adhesives. They were at one time the primary material used for the production of circuit boards but have been largely replaced with epoxy resins and fiberglass cloth, as with fire-resistant FR-4 circuit board materials. There are two main production methods. One reacts phenol and formaldehyde directly to produce a thermosetting network polymer, while the other restricts the formaldehyde to produce a prepolymer known as novolac which can be moulded and then cured with the addition of more formaldehyde and heat. There are many variations in both production and input materials that are used to produce a wide variety of resins for special purposes. Formation and structure Phenol-formaldehyde resins, as a group, are formed by a step-growth polymerization reaction that can be either acid- or base-catalysed. Since formaldehyde exists predominantly in solution as a dynamic equilibrium of methylene glycol oligomers, the concentration of the reactive form of formaldehyde depends on temperature and pH. Phenol reacts with formaldehyde at the ortho and para sites (sites 2, 4 and 6) allowing up to 3 units of formaldehyde to attach to the ring. The initial reaction in all cases involves the formation of a hydroxymethyl phenol: HOC6H5 + CH2O → HOC6H4CH2OH The hydroxymethyl group is capable of reacting with either another free ortho or para site, or with another hydroxymethyl group. The first reaction gives a methylene bridge, and the second forms an ether bridge: HOC6H4CH2OH + HOC6H5 → (HOC6H4)2CH2 + H2O 2 HOC6H4CH2OH → (HOC6H4CH2)2O + H2O The diphenol (HOC6H4)2CH2 (sometimes called a "dimer") is called bisphenol F, which is an important monomer in the production of epoxy resins. Bisphenol-F can further link generating tri- and tetra-and higher phenol oligomers. Novolaks Novolaks (or novolacs) are phenol-formaldehyde resins with a formaldehyde to phenol molar ratio of less than one. In place of phenol itself, they are often produced from cresols (methylphenols). The polymerization is brought to completion using acid-catalysis such as sulfuric acid, oxalic acid, hydrochloric acid and rarely, sulfonic acids. The phenolic units are mainly linked by methylene and/or ether groups. The molecular weights are in the low thousands, corresponding to about 10–20 phenol units. Obtained polymer is thermoplastic and require a curing agent or hardener to form a thermoset. Hexamethylenetetramine is a hardener added to crosslink novolac. At a temperature greater than 90 °C, it forms methylene and dimethylene amino bridges. Resoles can also be used as a curing agent (hardener) for novolac resins. In either case, the curing agent is a source of formaldehyde which provides bridges between novolac chains, eventually completely crosslinking the system. Novolacs have multiple uses as tire tackifier, high temperature resin, binder for carbon bonded refractories, carbon brakes, photoresists and as a curing agent for epoxy resins. Resoles Base-catalysed phenol-formaldehyde resins are made with a formaldehyde to phenol ratio of greater than one (usually around 1.5). These resins are called resoles. 
Phenol, formaldehyde, water and catalyst are mixed in the desired amount, depending on the resin to be formed, and are then heated. The first part of the reaction, at around 70 °C, forms a thick reddish-brown tacky material, which is rich in hydroxymethyl and benzylic ether groups. The rate of the base-catalysed reaction initially increases with pH, and reaches a maximum at about pH = 10. The reactive species is the phenoxide anion (C6H5O−) formed by deprotonation of phenol. The negative charge is delocalised over the aromatic ring, activating sites 2, 4 and 6, which then react with the formaldehyde. Being thermosets, hydroxymethyl phenols will crosslink on heating to around 120 °C to form methylene and methyl ether bridges through the elimination of water molecules. At this point the resin is a 3-dimensional network, which is typical of polymerised phenolic resins. The high crosslinking gives this type of phenolic resin its hardness, good thermal stability, and chemical imperviousness. Resoles are referred to as "one step" resins as they cure without a cross linker unlike novolacs, a "two step" resin. Resoles are major polymeric resin materials widely used for gluing and bonding building materials. Exterior plywood, oriented strand boards (OSB), engineered high-pressure laminate are typical applications. Crosslinking and the formaldehyde/phenol ratio When the molar ratio of formaldehyde:phenol reaches one, in theory every phenol is linked together via methylene bridges, generating one single molecule, and the system is entirely crosslinked. This is why novolacs (F:P <1) do not harden without the addition of a crosslinking agents, and why resoles with the formula F:P >1 will. Applications Phenolic resins are found in myriad industrial products. Phenolic laminates are made by impregnating one or more layers of a base material such as paper, fiberglass, or cotton with phenolic resin and laminating the resin-saturated base material under heat and pressure. The resin fully polymerizes (cures) during this process forming the thermoset polymer matrix. The base material choice depends on the intended application of the finished product. Paper phenolics are used in manufacturing electrical components such as punch-through boards, in household laminates, and in paper composite panels. Glass phenolics are particularly well suited for use in the high speed bearing market. Phenolic micro-balloons are used for density control. The binding agent in normal (organic) brake pads, brake shoes, and clutch discs are phenolic resin. Synthetic resin bonded paper, made from phenolic resin and paper, is used to make countertops. Another use of phenolic resins is the making of duroplast, famously used in Trabant automobiles. Phenolic resins are also used for making exterior plywood commonly known as weather and boil proof (WBP) plywood because phenolic resins have no melting point but only a decomposing point in the temperature zone of and above. Phenolic resin is used as a binder in loudspeaker driver suspension components which are made of cloth. Higher end billiard balls are made from phenolic resins, as opposed to the polyesters used in less expensive sets. Sometimes people select fibre reinforced phenolic resin parts because their coefficient of thermal expansion closely matches that of the aluminium used for other parts of a system, as in early computer systems and Duramold. 
The Dutch painting forger Han van Meegeren mixed phenol formaldehyde with his oil paints before baking the finished canvas, in order to fake the drying out of the paint over the centuries. Atmospheric re-entry spacecraft use phenol formaldehyde resin as a key component in ablative heat shields (e.g. AVCOAT on the Apollo modules). As the heat shield skin temperature can reach 1000-2000 °C, the resin pyrolizes due to aerodynamic heating. This reaction absorbs significant thermal energy, insulating the deeper layers of the heat shield. The outgassing of pyrolisis reaction products and the removal of charred material by friction (ablation) also contribute to vehicle insulation, by mechanically carrying away the heat absorbed in those materials. Trade names Bakelite was originally made from phenolic resin and wood flour. Ebonol is a paper-filled phenolic resin designed as a replacement for ebony wood in stringed and woodwind instruments. Novotext is cotton fibre-reinforced phenolic, using randomly oriented fibres. Tufnol is a laminated plastic available as sheet and rods, which is made from layers of paper or cloth which have been soaked with phenolic resin and pressed under heat. Its high resistance to oils and solvents have made it suitable for many engineering applications. Oasis Floral Foam is "an open-celled phenolic foam that readily absorbs water and is used as a base for flower arrangements." Paxolin is a resin bonded paper product long used as a base material for printed circuit boards, although it is being replaced by fiberglass composites in many applications. Richlite is a paper-filled phenolic resin with many uses, from tabletops and cutting-boards to guitar fingerboards. Biodegradation Phenol-formaldehyde is degraded by the white rot fungus Phanerochaete chrysosporium. See also Urea-formaldehyde Para tertiary butylphenol formaldehyde resin References Synthetic resins Semiconductor device fabrication Thermosetting plastics
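Since the article above distinguishes novolacs (formaldehyde:phenol molar ratio below one) from resoles (ratio above one, typically around 1.5), here is a small Python sketch that converts batch masses into that molar ratio and applies the stated threshold; the batch masses are invented for illustration and the classification simply restates the rule given in the text.

# Molar masses in g/mol
M_PHENOL = 94.11        # C6H5OH
M_FORMALDEHYDE = 30.03  # CH2O

def f_to_p_ratio(mass_formaldehyde_g: float, mass_phenol_g: float) -> float:
    """Formaldehyde:phenol molar ratio for a resin batch."""
    moles_f = mass_formaldehyde_g / M_FORMALDEHYDE
    moles_p = mass_phenol_g / M_PHENOL
    return moles_f / moles_p

def classify(ratio: float) -> str:
    # Ratio < 1 gives a novolac; ratio > 1 gives a resole (per the text above)
    return "novolac (needs a hardener to cure)" if ratio < 1.0 else "resole (self-curing on heating)"

# Illustrative batches (arbitrary masses in grams)
for f_mass, p_mass in [(240.0, 941.0), (450.0, 941.0)]:
    r = f_to_p_ratio(f_mass, p_mass)
    print(f"F:P = {r:.2f} -> {classify(r)}")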
Phenol formaldehyde resin
[ "Chemistry", "Materials_science" ]
2,099
[ "Semiconductor device fabrication", "Synthetic materials", "Microtechnology", "Synthetic resins" ]
401,062
https://en.wikipedia.org/wiki/Shower-curtain%20effect
The shower-curtain effect in physics describes the phenomenon of a shower curtain being blown inward when a shower is running. The problem of identifying the cause of this effect has been featured in Scientific American magazine, with several theories given to explain the phenomenon but no definite conclusion. The shower-curtain effect may also be used to describe the observation of how nearby phase front distortions of an optical wave are more severe than remote distortions of the same amplitude. Hypotheses Buoyancy hypothesis Also called chimney effect or stack effect, observes that warm air (from the hot shower) rises out over the shower curtain as cooler air (near the floor) pushes in under the curtain to replace the rising air. By pushing the curtain in towards the shower, the (short range) vortex and Coandă effects become more significant. However, the shower-curtain effect persists when cold water is used, implying that this is not the sole mechanism. Bernoulli effect hypothesis The most popular explanation given for the shower-curtain effect is Bernoulli's principle. Bernoulli's principle states that an increase in velocity results in a decrease in pressure. This theory presumes that the water flowing out of a shower head causes the air through which the water moves to start flowing in the same direction as the water. This movement would be parallel to the plane of the shower curtain. If air is moving across the inside surface of the shower curtain, Bernoulli's principle says the air pressure there will drop. This would result in a pressure differential between the inside and outside, causing the curtain to move inward. It would be strongest when the gap between the bather and the curtain is smallest, resulting in the curtain attaching to the bather. Horizontal vortex hypothesis A computer simulation of a typical bathroom found that none of the above theories pan out in their analysis, but instead found that the spray from the shower-head drives a horizontal vortex. This vortex has a low-pressure zone in the centre, which sucks the curtain. David Schmidt of the University of Massachusetts was awarded the 2001 Ig Nobel Prize in Physics for his partial solution to the question of why shower curtains billow inwards. He used a computational fluid dynamics code to achieve the results. Professor Schmidt is adamant that this was done "for fun" in his own free time without the use of grants. Coandă effect The Coandă effect, also known as "boundary layer attachment", is the tendency of a moving fluid to adhere to an adjacent wall. Condensation A hot shower will produce steam that condenses on the shower side of the curtain, lowering the pressure there. In a steady state the steam will be replaced by new steam delivered by the shower but in reality the water temperature will fluctuate and lead to times when the net steam production is negative. Air pressure Colder dense air outside and hot less dense air inside causes higher air pressure on the outside to force the shower curtain inwards to equalise the air pressure, this can be observed simply when the bathroom door is open allowing cold air into the bathroom. Solutions Many shower curtains come with features to reduce the shower-curtain effect. They may have adhesive suction cups on the bottom edges of the curtain, which are then pushed onto the sides of the shower when in use. Others may have magnets at the bottom, though these are not effective on acrylic or fiberglass tubs. 
It is possible to use a telescopic shower curtain rod to block the curtain on its lower part and to prevent it from sucking inside. Hanging the curtain rod higher or lower, or especially further away from the shower head, can reduce the effect. A convex shower rod can also be used to hold the curtain against the inside wall of a tub. A weight can be attached to a long string and the string attached to the curtain rod in the middle of the curtain (on the inside). Hanging the weight low against the curtain just above the rim of the shower pan or tub makes it an effective billowing deterrent without allowing the weight to hit the pan or tub and damage it. There are a few alternative solutions that either attach to the shower curtain directly, attach to the shower rod or attach to the wall. References External links Scientific American: Why does the shower curtain move toward the water? Why does the shower curtain blow up and in instead of down and out? Video demonstration of how this phenomenon could be solved. The Straight Dope: Why does the shower curtain blow in despite the water pushing it out (revisited)? 2001 Ig Nobel Prize Winners Fluent NEWS: Shower Curtain Grabs Scientist – But He Lives to Tell Why Arggh, Why Does the Shower Curtain Attack Me? by Joe Palca. All Things Considered, National Public Radio. November 4, 2006. (audio) Experimental Investigation of the Influence of the Relative Position of the Scattering Layer on Image Quality: the Shower Curtain Effect The shower curtain effect; ESA Fluid dynamics Physical phenomena Unsolved problems in physics Bathing
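To give a feel for the size of the effect under the Bernoulli hypothesis discussed above (one of several contested explanations), the rough Python estimate below applies the pressure drop 0.5·ρ·v² for an assumed entrained air speed; the air speed and exposed curtain area are guesses chosen only to show the order of magnitude, not measured values.

# Rough order-of-magnitude estimate for the Bernoulli hypothesis only:
# if the spray drags an air stream of speed v along the inside of the
# curtain, the static pressure there drops by roughly 0.5 * rho * v**2.
rho_air = 1.2       # kg/m^3, air density at room conditions
v_air = 1.5         # m/s, assumed entrained air speed (illustrative guess)
curtain_area = 1.0  # m^2 of curtain exposed to the moving air (assumed)

dp = 0.5 * rho_air * v_air ** 2  # pressure drop in pascals
force = dp * curtain_area        # net inward force in newtons

print(f"pressure drop ~ {dp:.2f} Pa, inward force ~ {force:.2f} N")
# About 1.4 Pa and 1.4 N for these numbers: small, but enough to move a light curtain.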
Shower-curtain effect
[ "Physics", "Chemistry", "Engineering" ]
1,010
[ "Physical phenomena", "Chemical engineering", "Unsolved problems in physics", "Piping", "Fluid dynamics" ]
401,314
https://en.wikipedia.org/wiki/Knudsen%20number
The Knudsen number (Kn) is a dimensionless number defined as the ratio of the molecular mean free path length to a representative physical length scale. This length scale could be, for example, the radius of a body in a fluid. The number is named after Danish physicist Martin Knudsen (1871–1949). The Knudsen number helps determine whether statistical mechanics or the continuum mechanics formulation of fluid dynamics should be used to model a situation. If the Knudsen number is near or greater than one, the mean free path of a molecule is comparable to a length scale of the problem, and the continuum assumption of fluid mechanics is no longer a good approximation. In such cases, statistical methods should be used. Definition The Knudsen number is a dimensionless number defined as Kn = λ/L, where λ = mean free path [L1], L = representative physical length scale [L1]. The representative length scale considered, L, may correspond to various physical traits of a system, but most commonly relates to a gap length over which thermal transport or mass transport occurs through a gas phase. This is the case in porous and granular materials, where the thermal transport through a gas phase depends highly on its pressure and the consequent mean free path of molecules in this phase. For a Boltzmann gas, the mean free path may be readily calculated, so that Kn = kB·T/(√2·π·d²·p·L), or equivalently, using the ideal-gas law p = ρ·Rs·T, Kn = kB/(√2·π·d²·ρ·Rs·L), where kB is the Boltzmann constant (1.380649 × 10−23 J/K in SI units) [M1 L2 T−2 Θ−1], T is the thermodynamic temperature [θ1], d is the particle hard-shell diameter [L1], p is the static pressure [M1 L−1 T−2], Rs is the specific gas constant [L2 T−2 θ−1] (287.05 J/(kg K) for air), ρ is the density [M1 L−3]. If the temperature is increased, but the volume kept constant, then the Knudsen number (and the mean free path) doesn't change (for an ideal gas). In this case, the density stays the same. If the temperature is increased, and the pressure kept constant, then the gas expands and therefore its density decreases. In this case, the mean free path increases and so does the Knudsen number. Hence, it may be helpful to keep in mind that the mean free path (and therefore the Knudsen number) is really dependent on the thermodynamic variable density (proportional to the reciprocal of density), and only indirectly on temperature and pressure. For particle dynamics in the atmosphere, and assuming standard temperature and pressure, i.e. 0 °C and 1 atm, we have λ ≈ 80 nm. Relationship to Mach and Reynolds numbers in gases The Knudsen number can be related to the Mach number and the Reynolds number. Using the dynamic viscosity μ with the average molecule speed c̄ (from the Maxwell–Boltzmann distribution), the mean free path is determined as follows: λ = (μ/ρ)·√(π·m/(2·kB·T)). Dividing through by L (some characteristic length), the Knudsen number is obtained: Kn = μ/(ρ·L)·√(π·m/(2·kB·T)), where c̄ is the average molecular speed from the Maxwell–Boltzmann distribution [L1 T−1], T is the thermodynamic temperature [θ1], μ is the dynamic viscosity [M1 L−1 T−1], m is the molecular mass [M1], kB is the Boltzmann constant [M1 L2 T−2 θ−1], ρ is the density [M1 L−3]. The dimensionless Mach number can be written as Ma = U∞/c∞, where the speed of sound is given by c∞ = √(γ·R·T/M), where U∞ is the freestream speed [L1 T−1], R is the Universal gas constant (in SI, 8.314 47215 J K−1 mol−1) [M1 L2 T−2 θ−1 mol−1], M is the molar mass [M1 mol−1], γ is the ratio of specific heats [1].
The dimensionless Reynolds number can be written as Re = ρ·U∞·L/μ. Dividing the Mach number by the Reynolds number gives Ma/Re = μ/(ρ·c∞·L) = μ/(ρ·L)·√(m/(γ·kB·T)), and multiplying by √(γ·π/2) yields the Knudsen number: (Ma/Re)·√(γ·π/2) = μ/(ρ·L)·√(π·m/(2·kB·T)) = Kn. The Mach, Reynolds and Knudsen numbers are therefore related by Kn = (Ma/Re)·√(γ·π/2). Application The Knudsen number can be used to determine the rarefaction of a flow: Kn < 0.01: Continuum flow 0.01 < Kn < 0.1: Slip flow 0.1 < Kn < 10: Transitional flow Kn > 10: Free molecular flow This regime classification is empirical and problem dependent but has proven useful to adequately model flows. Problems with high Knudsen numbers include the calculation of the motion of a dust particle through the lower atmosphere and the motion of a satellite through the exosphere. One of the most widely used applications for the Knudsen number is in microfluidics and MEMS device design where flows range from continuum to free-molecular. In recent years, it has been applied in other disciplines such as transport in porous media, e.g., petroleum reservoirs. Movements of fluids in situations with a high Knudsen number are said to exhibit Knudsen flow, also called free molecular flow. Airflow around an aircraft such as an airliner has a low Knudsen number, making it firmly in the realm of continuum mechanics. Using the Knudsen number, an adjustment for Stokes' law can be made via the Cunningham correction factor; this is a drag force correction due to slip in small particles (i.e. dp < 5 μm). The flow of water through a nozzle will usually be a situation with a low Knudsen number. Mixtures of gases with different molecular masses can be partly separated by sending the mixture through small holes of a thin wall because the number of molecules that pass through a hole is proportional to the pressure of the gas and inversely proportional to the square root of its molecular mass. The technique has been used to separate isotopic mixtures, such as uranium, using porous membranes. It has also been successfully demonstrated for use in hydrogen production from water. The Knudsen number also plays an important role in thermal conduction in gases. For insulation materials, for example, where gases are contained under low pressure, the Knudsen number should be as high as possible to ensure low thermal conductivity. See also References External links Knudsen number and diffusivity calculators Fluid dynamics Dimensionless numbers of fluid mechanics
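As a small worked sketch of the definitions above, the following Python snippet evaluates Kn = kB·T/(√2·π·d²·p·L) for air-like conditions and applies the empirical regime boundaries quoted in the Application section; the effective molecular diameter and the length scales are illustrative assumptions, so the resulting mean free path is of the same order as, but not identical to, the value quoted in the article.

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def knudsen(temperature_k, pressure_pa, hard_shell_diameter_m, length_scale_m):
    """Kn = k_B*T / (sqrt(2)*pi*d^2*p*L) for a dilute (Boltzmann) gas."""
    mean_free_path = K_B * temperature_k / (
        math.sqrt(2) * math.pi * hard_shell_diameter_m ** 2 * pressure_pa)
    return mean_free_path / length_scale_m, mean_free_path

def regime(kn):
    # Empirical classification quoted in the Application section above
    if kn < 0.01:
        return "continuum flow"
    if kn < 0.1:
        return "slip flow"
    if kn < 10:
        return "transitional flow"
    return "free molecular flow"

# Air-like gas at 0 degC and 1 atm; d ~ 3.7e-10 m is an assumed effective diameter
for L in (1.0, 1e-6, 1e-8):  # metre, micrometre and 10 nm length scales
    kn, mfp = knudsen(273.15, 101325.0, 3.7e-10, L)
    print(f"L = {L:.0e} m: mean free path ~ {mfp*1e9:.0f} nm, Kn = {kn:.2e} -> {regime(kn)}")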
Knudsen number
[ "Chemistry", "Engineering" ]
1,278
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
402,048
https://en.wikipedia.org/wiki/Closed%20system
A closed system is a natural physical system that does not allow transfer of matter in or out of the system, although, in the contexts of physics, chemistry, engineering, etc., the transfer of energy (e.g. as work or heat) is allowed. Physics In classical mechanics In nonrelativistic classical mechanics, a closed system is a physical system that does not exchange any matter with its surroundings, and is not subject to any net force whose source is external to the system. A closed system in classical mechanics would be equivalent to an isolated system in thermodynamics. Closed systems are often used to limit the factors that can affect the results of a specific problem or experiment. In thermodynamics In thermodynamics, a closed system can exchange energy (as heat or work) but not matter, with its surroundings. An isolated system cannot exchange any heat, work, or matter with the surroundings, while an open system can exchange energy and matter. (This scheme of definition of terms is not uniformly used, though it is convenient for some purposes. In particular, some writers use 'closed system' where 'isolated system' is used here.) For a simple system, with only one type of particle (atom or molecule), a closed system amounts to a constant number of particles. However, for systems which are undergoing a chemical reaction, there may be all sorts of molecules being generated and destroyed by the reaction process. In this case, the fact that the system is closed is expressed by stating that the total number of each elemental atom is conserved, no matter what kind of molecule it may be a part of. Mathematically: Σj aij·Nj = bi = constant, where Nj is the number of j-type molecules, aij is the number of atoms of element i in molecule j, and bi is the total number of atoms of element i in the system, which remains constant, since the system is closed. There will be one such equation for each different element in the system. In thermodynamics, a closed system is important for solving complicated thermodynamic problems. It allows the elimination of some external factors that could alter the results of the experiment or problem thus simplifying it. A closed system can also be used in situations where thermodynamic equilibrium is required to simplify the situation. In quantum physics The Schrödinger equation, iħ·∂|ψ(t)⟩/∂t = Ĥ|ψ(t)⟩, describes the behavior of an isolated or closed quantum system, that is, by definition, a system which does not interchange information (i.e. energy and/or matter) with another system. So if an isolated system is in some pure state |ψ(t)⟩ ∈ H at time t, where H denotes the Hilbert space of the system, the time evolution of this state (between two consecutive measurements) is given by this equation, where i is the imaginary unit, ħ is the Planck constant divided by 2π, the symbol ∂/∂t indicates a partial derivative with respect to time t, ψ (the Greek letter psi) is the wave function of the quantum system, and Ĥ is the Hamiltonian operator (which characterizes the total energy of any given wave function and takes different forms depending on the situation). In chemistry In chemistry, a closed system is where no reactants or products can escape, only heat can be exchanged freely (e.g. an ice cooler). A closed system can be used when conducting chemical experiments where temperature is not a factor (i.e. reaching thermal equilibrium). In engineering In an engineering context, a closed system is a bound system, i.e. defined, in which every input is known and every resultant is known (or can be known) within a specific time.
See also Glossary of systems theory Dynamical system Isolated system Open system (systems theory) Sense and Respond Thermodynamic system References Cybernetics Systems theory Thermodynamic systems
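To make the elemental-conservation statement in the thermodynamics subsection concrete (the total count bi of each element i, given by Σj aij·Nj, stays constant in a closed system even while molecule counts change), here is a small Python sketch that tallies atoms before and after a reaction step; the combustion reaction and the molecule counts are invented for illustration.

from collections import Counter

# a_ij: atoms of each element i per molecule of species j
COMPOSITION = {
    "CH4": {"C": 1, "H": 4},
    "O2":  {"O": 2},
    "CO2": {"C": 1, "O": 2},
    "H2O": {"H": 2, "O": 1},
}

def element_totals(molecule_counts):
    """b_i = sum_j a_ij * N_j for every element i."""
    totals = Counter()
    for species, n in molecule_counts.items():
        for element, a in COMPOSITION[species].items():
            totals[element] += a * n
    return totals

# Closed system before and after burning 10 CH4 molecules: CH4 + 2 O2 -> CO2 + 2 H2O
before = {"CH4": 10, "O2": 25, "CO2": 0, "H2O": 0}
after  = {"CH4": 0,  "O2": 5,  "CO2": 10, "H2O": 20}

print("before:", dict(element_totals(before)))
print("after: ", dict(element_totals(after)))
assert element_totals(before) == element_totals(after)  # matter is conserved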
Closed system
[ "Physics", "Chemistry", "Mathematics" ]
770
[ "Physical systems", "Thermodynamic systems", "Thermodynamics", "Dynamical systems" ]
402,244
https://en.wikipedia.org/wiki/Gold%20cyanidation
Gold cyanidation (also known as the cyanide process or the MacArthur–Forrest process) is a hydrometallurgical technique for extracting gold from low-grade ore through conversion to a water-soluble coordination complex. It is the most commonly used leaching process for gold extraction. Cyanidation is also widely used in silver extraction, usually after froth flotation. Production of reagents for mineral processing to recover gold represents 70% of cyanide consumption globally. While other metals, such as copper, zinc, and silver, are also recovered using cyanide, gold remains the primary driver of this technology. The highly toxic nature of cyanide has led to controversy regarding its use in gold mining, with it being banned in some parts of the world. However, when used with appropriate safety measures, cyanide can be safely employed in gold extraction processes. One critical factor in its safe use is maintaining an alkaline pH level above 10.5, which is typically controlled using lime in industrial-scale operations. Lime plays an essential role in gold processing, ensuring that the pH remains at the correct level to mitigate risks associated with cyanide use. History In 1783, Carl Wilhelm Scheele discovered that gold dissolved in aqueous solutions of cyanide. Through the work of Bagration (1844), Elsner (1846), and Faraday (1847), it was determined that each gold atom required two cyanide ions, i.e. the stoichiometry of the soluble compound. Industrial process The expansion of gold mining in the Rand of South Africa began to slow down in the 1880s, as the new deposits found tended to contain pyritic ore. The gold could not be extracted from this compound with any of the then available chemical processes or technologies. In 1887, John Stewart MacArthur, working in collaboration with brothers Robert and William Forrest for the Tennant Company in Glasgow, Scotland, developed the MacArthur–Forrest process for the extraction of gold from gold ores. Several patents were issued in the same year. By suspending the crushed ore in a cyanide solution, a separation of up to 96 percent pure gold was achieved. The process was first used on the Rand in 1890 and, despite operational imperfections, led to a boom of investment as larger gold mines were opened up. By 1891, Nebraska pharmacist Gilbert S. Peyton had refined the process at his Mercur Mine in Utah, "the first mining plant in the United States to make a commercial success of the cyanide process on gold ores." In 1896, Bodländer confirmed that oxygen was necessary for the process, something that had been doubted by MacArthur, and discovered that hydrogen peroxide was formed as an intermediate. Around 1900, the American metallurgist Charles Washington Merrill (1869–1956) and his engineer Thomas Bennett Crowe improved the treatment of the cyanide leachate, by using vacuum and zinc dust. Their process is the Merrill–Crowe process. Chemical reactions The chemical reaction for the dissolution of gold, the "Elsner equation", follows: 4Au + 8NaCN + O2 + 2H2O → 4Na[Au(CN)2] + 4NaOH Potassium cyanide and calcium cyanide are sometimes used in place of sodium cyanide. Gold is one of the few metals that dissolves in the presence of cyanide ions and oxygen. The soluble gold species is dicyanoaurate, from which it can be recovered by adsorption onto activated carbon. Application The ore is comminuted using grinding machinery. Depending on the ore, it is sometimes further concentrated by froth flotation or by centrifugal (gravity) concentration.
Water is added to produce a slurry or pulp. The basic ore slurry can be combined with a solution of sodium cyanide or potassium cyanide; many operations use calcium cyanide, which is more cost effective. To prevent the creation of toxic hydrogen cyanide during processing, slaked lime (calcium hydroxide) or soda (sodium hydroxide) is added to the extracting solution to ensure that the acidity during cyanidation is maintained over pH 10.5 (strongly basic). Lead nitrate can improve gold leaching speed and quantity recovered, particularly in processing partially oxidized ores. Effect of dissolved oxygen Oxygen is one of the reagents consumed during cyanidation, accepting the electrons from the gold, and a deficiency in dissolved oxygen slows leaching rate. Air or pure oxygen gas can be purged through the pulp to maximize the dissolved oxygen concentration. Intimate oxygen-pulp contactors are used to increase the partial pressure of the oxygen in contact with the solution, thus raising the dissolved oxygen concentration much higher than the saturation level at atmospheric pressure. Oxygen can also be added by dosing the pulp with hydrogen peroxide solution. Pre-aeration and ore washing In some ores, particularly those that are partially sulfidized, aeration (prior to the introduction of cyanide) of the ore in water at high pH can render elements such as iron and sulfur less reactive to cyanide, therefore making the gold cyanidation process more efficient. Specifically, the oxidation of iron to iron (III) oxide and subsequent precipitation as iron hydroxide minimizes loss of cyanide from the formation of ferrous cyanide complexes. The oxidation of sulfur compounds to sulfate ions avoids the consumption of cyanide to thiocyanate (SCN−) byproduct. Recovery of gold from cyanide solutions In order of decreasing economic efficiency, the common processes for recovery of the solubilized gold from solution are (certain processes may be precluded from use by technical factors): Carbon in pulp Electrowinning Merrill–Crowe process Cyanide remediation processes The cyanide remaining in tails streams from gold plants is potentially hazardous. Therefore, some operations process the cyanide-containing waste streams in a detoxification step. This step lowers the concentrations of these cyanide compounds. The INCO-licensed process and the Caro's acid process oxidise the cyanide to cyanate, which is not as toxic as the cyanide ion, and which can then react to form carbonates and ammonia: CN− + [O] → CNO− and CNO− + 2H2O → NH4+ + CO32− The Inco process can typically lower cyanide concentrations to below 50 mg/L, whereas the Caro's acid process can lower cyanide levels to between 10 and 50 mg/L, with the lower concentrations achievable in solution streams rather than slurries. Caro's acid (peroxomonosulfuric acid, H2SO5) converts cyanide to cyanate. Cyanate then hydrolyses to ammonium and carbonate ions. The Caro's acid process is able to achieve discharge levels of weak acid dissociable (WAD) cyanide below 50 mg/L, which is generally suitable for discharge to tailings. Hydrogen peroxide and basic chlorination can also be used to oxidize cyanide, although these approaches are less common. Typically, the Inco (SO2/air) process blows compressed air through the tailings while adding sodium metabisulfite, which releases SO2. Lime is added to maintain the pH at around 8.5, and copper sulfate is added as a catalyst if there is insufficient copper in the ore extract.
This procedure can reduce concentrations of WAD cyanide to below the 10 ppm mandated by the EU's Mining Waste Directive. This level compares to the 66-81 ppm free cyanide and 500-1000 ppm total cyanide in the pond at Baia Mare. Remaining free cyanide degrades in the pond, while cyanate ions hydrolyse to ammonium. Studies show that residual cyanide trapped in the gold-mine tailings causes persistent release of toxic metals (e.g. mercury) into the groundwater and surface water systems. Effects on the environment Despite being used in 90% of gold production, gold cyanidation is controversial due to the toxic nature of cyanide. Although aqueous solutions of cyanide degrade rapidly in sunlight, the less-toxic products, such as cyanates and thiocyanates, may persist for some years. The famous disasters have killed few people: humans can be warned not to drink or go near polluted water, but cyanide spills can have a devastating effect on rivers, sometimes killing everything for several miles downstream. The cyanide could be washed out of river systems and, as long as organisms can migrate from unpolluted areas upstream, affected areas can soon be repopulated. Longer term impact and accumulation of cyanide in riparian or limnological benthos and environmental fate is less clear. According to Romanian authorities, in the Someș river below Baia Mare, the plankton returned to 60% of normal within 16 days of the spill; the numbers were not confirmed by Hungary or Yugoslavia. Famous cyanide spills, such as the 2000 spill at Baia Mare in Romania, have prompted fierce protests at new mines that involve use of cyanide, such as Roşia Montană in Romania, Lake Cowal in Australia, Pascua Lama in Chile, and Bukit Koman in Malaysia. Alternatives to cyanide Although cyanide is cheap, effective, and biodegradable, its high toxicity and its increasingly negative impact on a mine's political and social license to operate have incentivized alternative methods for extracting gold. Other extractants have been examined including thiosulfate (S2O32−), thiourea (SC(NH2)2), iodine/iodide, ammonia, liquid mercury, and alpha-cyclodextrin. Challenges include reagent cost and the efficiency of gold recovery, although some chlorination processes using sodium hypochlorite (household bleach) have shown promise in terms of reagent regeneration. These technologies are at a pre-commercialisation stage and compare favourably to equivalent cyanidation methods, including gold recovery percentage. Thiourea has been implemented commercially for ores containing stibnite. Yet another alternative to cyanidation is the family of glycine-based lixiviants. Legislation The US states of Montana and Wisconsin, as well as the Czech Republic and Hungary, have banned cyanide mining. The European Commission rejected a proposal for such a ban, noting that existing regulations (see below) provide adequate environmental and health protection. Several attempts to ban gold cyanidation in Romania were rejected by the Romanian Parliament. There are currently protests in Romania calling for a ban on the use of cyanide in mining (see 2013 Romanian protests against the Roșia Montană Project). In the EU, industrial use of hazardous chemicals is controlled by the so-called Seveso II Directive (Directive 96/82/EC), which replaced the original Seveso Directive (82/501/EEC), brought in after the 1976 dioxin disaster.
"Free cyanide and any compound capable of releasing free cyanide in solution" are further controlled by being on List I of the Groundwater Directive (Directive 80/68/EEC) which bans any discharge of a size which might cause deterioration in the quality of the groundwater at the time or in the future. The Groundwater Directive was largely replaced in 2000 by the Water Framework Directive (2000/60/EC). In response to the 2000 Baia Mare cyanide spill, the European Parliament and the Council adopted Directive 2006/21/EC on the management of waste from extractive industries. Article 13(6) requires "the concentration of weak acid dissociable cyanide in the pond is reduced to the lowest possible level using best available techniques", and, at most, mines started after 1 May 2008 may not discharge waste containing over 10 ppm WAD cyanide; mines built or permitted before that date are allowed no more than 50 ppm initially, dropping to 25 ppm in 2013 and 10 ppm by 2018. Under Article 14, companies must also put in place financial guarantees to ensure clean-up after the mine has finished. This in particular may affect smaller companies wanting to build gold mines in the EU, as they are less likely to have the financial strength to give these kinds of guarantees. The industry has come up with a voluntary "Cyanide Code" that aims to reduce environmental impacts with third party audits of a company's cyanide management. References External links Efforts at a cleaner process Yestech A different commercial method that does not use toxic cyanide Cyanide Uncertainties (PDF) How gold is extracted by cyanidation process Cyanidation Cyano complexes Metallurgical processes
Gold cyanidation
[ "Chemistry", "Materials_science" ]
2,644
[ "Metallurgical processes", "Metallurgy" ]
15,077,797
https://en.wikipedia.org/wiki/PIGA%20accelerometer
A PIGA (Pendulous Integrating Gyroscopic Accelerometer) is a type of accelerometer that can measure acceleration and simultaneously integrates this acceleration against time to produce a speed measure as well. The PIGA's main use is in Inertial Navigation Systems (INS) for guidance of aircraft and most particularly for ballistic missile guidance. It is valued for its extremely high sensitivity and accuracy in conjunction with operation over a wide acceleration range. The PIGA is still considered the premier instrument for strategic grade missile guidance, though systems based on MEMS technology are attractive for lower performance requirements. Principle of operation The sensing element of a PIGA is a pendulous mass, free to pivot by being mounted on a bearing. A spinning gyroscope is attached such that it would restrain the pendulum against "falling" in the direction of acceleration. The pendulous mass and its attached gyroscope are themselves mounted on a pedestal that can be rotated by an electric torque motor. The rotational axis of this pedestal is mutually orthogonal to the spin axis of the gyroscope as well as the axis that the pendulum is free to move in. The axis of rotation of this pedestal is also in the direction of the measured acceleration. The position of the pendulum is sensed by precision electrical contacts or by optical or electromagnetic means. Should acceleration displace the pendulum arm from its null position the sensing mechanism will operate the torque motor and rotate the pedestal such that the property of gyroscopic precession restores the pendulum to its null position. The rate of rotation of the pedestal gives the acceleration while the total number of rotations of the shaft gives the speed, hence the term "integrating" in the PIGA acronym. A further level of integration of shaft rotations by either electronic means or by mechanical means, such as a Ball-and-disk integrator, can record the displacement or distance traveled, this latter mechanical method being used by early guidance systems prior to the availability of suitable digital computers. In most implementations of the PIGA the gyroscope itself is cantilevered on the end of the pendulum arm to act as the pendulous mass itself. Up to three such instruments may be required for each dimension of an INS with the three accelerometers mounted orthogonally generally on a platform stabilized gyroscopically within a system of gimbals. A critical requirement for accuracy is low static friction (stiction) in the bearings of the pendulum; this is achieved by various means ranging from double ball bearing with a superimposed oscillatory motion to dither the bearing above its threshold or through the use of gaseous or fluid bearings or by the alternative method of floating the gyroscope in a fluid and restraining the residual mass by jewel bearings or electromagnetic means. Although this later method still has the viscous friction of the fluid this is linear and has no threshold and has the advantage of having minimal static friction. Another aspect is the accurate control of the gyroscope's rotational rate. Missiles/rockets using PIGAs were the Redstone, Jupiter, Saturn V, Titan, Polaris, Minuteman, Peacekeeper, and . History The PIGA was based on an accelerometer developed by Dr. 
Fritz Mueller, then of the Kreiselgeraete Company, for the LEV-3 and experimental SG-66 guidance system of the Nazi era German V2 (EMW A4) ballistic missile and was known among the German rocket scientists as the MMIA "Mueller Mechanical Integrating Accelerometer". This system used precision electrical contacts to actuate the torque motor and achieved an accuracy of 1 part in 1000 to 1 part per 10000 (known in technical parlance as a scale error of 1000 to 100). This was equivalent to about 600 m of accuracy over the V2 1500 m/s speed and 320 km flight. Since the number of shaft rotations represented speed, a cam switch was used to initiate missile control sequences such as engine throttle-down and shut-off. A recovered MMIA accelerometer from an unexploded V2 was presented to Dr Charles Stark Draper of the Massachusetts Institute of Technology's instrumentation lab who had been developing the basis of inertial navigation for aircraft by initially concentrating efforts on achieving extremely low drift rate gyroscopes known as a floated integrating gyroscope. Draper combined ideas from his integrating gyroscopes, which were mounted in cans that floated in fluids that were held in place by jeweled bearings, with the recovered V2 accelerometer by floating the pendulum-gyroscope portion. The more generic name of PIGA was suggested by Dr. Draper due to the addition of various refinements such as electromagnetic or optical sensing of pendulum position. Such accelerometers were used in the Titan II, Polaris and Minuteman ICBM systems. PIGA accelerometers mounted in the AIRS (Advanced Inertial Reference Sphere) are part of the most accurate inertial navigation (INS) developed for the Peacekeeper missile. The INS drift rates are less than 1.5 x 10−5 degrees per hour of operation, about 8.5 m per hour with the overall accuracy of the missile affected more by defects in the gravitational maps. At the Redstone Arsenal and the adjoining Marshall Space Flight Center, near Huntsville, Alabama, the contingent of ex-German rocket scientists which had been brought into the United States under Operation Paperclip, including Dr. Mueller, continued to refine their original instruments in conjunction with American engineers and scientists. At the suggestion of Dr. Mueller, the technically difficult task of replacing the original ball bearings with gaseous bearings was achieved. Initially, compressed nitrogen was used but later fluorocarbons which had the advantage of being recyclable on board the missile or aircraft during extended waiting periods was used. Hence US accelerometers either consisted of the floating type or the gaseous bearing type with the US Army and US space program relying on the latter type of instrument. General references "Developments in the Field of Automatic Guidance and Control of Rockets", Walter Haeussermann, The Bendix Corporation, Huntsville, Ala. VOL. 4, NO. 3 J. GUIDANCE AND CONTROL MAY-JUNE 1981, History of Key Technologies AIAA 81-4120. From AIAA American Institute for Aeronautics & Astronautics Digital Library AIAA 2001-4288, "The Pendulous Integrating Gyroscope Accelerometer (PIGA) from the V-2 to Trident D5, the Strategic Instrument of Choice", R.E. Hopkins The Charles Stark Draper Laboratory, Inc. Cambridge, MA, Dr. Fritz K. Mueller, Dr. Walter Haeussermann, Huntsville, AL, Guidance, Navigation, and Control Conference & Exhibit, 6-9 August 2001 Montreal, Canada. From AIAA American Institute for Aeronautics & Astronautics Digital Library References Accelerometers
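As an illustration of the integrating behaviour described in the Principle of operation section above, the sketch below models an idealized PIGA in which the pedestal rotation rate is exactly proportional to the sensed acceleration. This is not how any real flight instrument is coded; the scale factor, sample interval and input data are all hypothetical.

# Illustrative model of a PIGA's integrating behaviour (idealized, hypothetical values).
K_REV = 0.5          # assumed scale factor: pedestal rev/s per (m/s^2) of acceleration
DT = 0.01            # assumed sample interval, seconds

def integrate_piga(rotation_rates_rev_per_s):
    """Convert pedestal rotation-rate samples into acceleration, velocity and distance."""
    velocity = 0.0       # proportional to the accumulated shaft revolutions
    distance = 0.0       # second integration (ball-and-disk or digital in real systems)
    history = []
    for rate in rotation_rates_rev_per_s:
        accel = rate / K_REV              # instantaneous acceleration, m/s^2
        velocity += accel * DT            # accumulating revolutions gives speed
        distance += velocity * DT         # a further integration gives displacement
        history.append((accel, velocity, distance))
    return history

samples = [2.0 * K_REV] * 1000            # 10 s of constant 2 m/s^2 acceleration
print(integrate_piga(samples)[-1])        # roughly (2.0, 20.0, 100.1)

The cam-switch behaviour mentioned for the V2 corresponds, in this toy model, to triggering an action once the accumulated velocity variable crosses a preset value.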
PIGA accelerometer
[ "Physics", "Technology", "Engineering" ]
1,428
[ "Accelerometers", "Physical quantities", "Acceleration", "Measuring instruments" ]
15,080,192
https://en.wikipedia.org/wiki/GTF2F1
General transcription factor IIF subunit 1 is a protein that in humans is encoded by the GTF2F1 gene. Interactions GTF2F1 has been shown to interact with: CTDP1, GTF2H4, HNRPU, MED21, POLR2A, Serum response factor, TAF11, TAF1, TATA binding protein, and Transcription Factor II B. See also Transcription factor II F References Further reading External links Transcription factors
GTF2F1
[ "Chemistry", "Biology" ]
95
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
15,080,454
https://en.wikipedia.org/wiki/Dioxygenase
Dioxygenases are oxidoreductase enzymes. Aerobic life, from simple single-celled bacteria species to complex eukaryotic organisms, has evolved to depend on the oxidizing power of dioxygen in various metabolic pathways. From energetic adenosine triphosphate (ATP) generation to xenobiotic degradation, the use of dioxygen as a biological oxidant is widespread and varied in the exact mechanism of its use. Enzymes employ many different schemes to use dioxygen, and this largely depends on the substrate and reaction at hand. Comparison with monooxygenases In the monooxygenases, only a single atom of dioxygen is incorporated into a substrate with the other being reduced to a water molecule. The dioxygenases () catalyze the oxidation of a substrate without the reduction of one oxygen atom from dioxygen into a water molecule. However, this definition is ambiguous because it does not take into account how many substrates are involved in the reaction. The majority of dioxygenases fully incorporate dioxygen into a single substrate, and a variety of cofactor schemes are utilized to achieve this. For example, in the α-ketoglutarate-dependent enzymes, one atom of dioxygen is incorporated into two substrates, with one always being α-ketoglutarate, and this reaction is brought about by a mononuclear iron center. Iron-containing enzymes The most widely observed cofactor involved in dioxygenation reactions is iron, but the catalytic scheme employed by these iron-containing enzymes is highly diverse. Iron-containing dioxygenases can be subdivided into three classes on the basis of how iron is incorporated into the active site: those employing a mononuclear iron center, those containing a Rieske [2Fe-2S] cluster, and those utilizing a heme prosthetic group. Mononuclear iron dioxygenases The mononuclear iron dioxygenases, or non-heme iron-dependent dioxygenases as they are also termed, all utilize a single catalytic iron to incorporate either one or both atoms of dioxygen into a substrate. Despite this common oxygenation event, the mononuclear iron dioxygenases are diverse in how dioxygen activation is used to promote certain chemical reactions. For instance, carbon-carbon bond cleavage, fatty acid hydroperoxidation, carbon-sulfur bond cleavage, and thiol oxidation are all reactions catalyzed by mononuclear iron dioxygenases. Most mononuclear iron dioxygenases are members of the cupin superfamily in which the overall domain structure is described as a six-stranded β-barrel fold (or jelly roll motif). At the center this barrel structure is a metal ion, most commonly ferrous iron, whose coordination environment is frequently provided by residues in two partially conserved structural motifs: G(X)5HXH(X)3-4E(X)6G and G(X)5-7PXG(X)2H(X)3N. Two important groups of mononuclear, non-heme iron dioxygenases are catechol dioxygenases and 2-oxoglutarate (2OG)-dependent dioxygenases. The catechol dioxygenases, some of the most well-studied dioxygenase enzymes, use dioxygen to cleave a carbon-carbon bond of an aromatic catechol ring system. Catechol dioxygenases are further classified as being “extradiol” or “intradiol,” and this distinction is based on mechanistic differences in the reactions (figures 1 & 2). Intradiol enzymes cleave the carbon-carbon bond between the two hydroxyl groups. The active ferric center is coordinated by four protein ligands—two histidine and two tyrosinate residues—in a trigonal bipyramidal manner with a water molecule occupying the fifth coordination site. 
Once a catecholate substrate binds to the metal center in a bidentate fashion through the deprotonated hydroxyl groups, the ferric iron “activates” the substrate by means of abstracting an electron to produce a radical on the substrate. This then allows for reaction with dioxygen and subsequent intradiol cleavage to occur through a cyclic anhydride intermediate. Extradiol members utilize ferrous iron as the active redox state, and this center is commonly coordinated octahedrally through a 2-His-1-Glu motif with labile water ligands occupying empty positions. Once a substrate binds to the ferrous center, this promotes dioxygen binding and subsequent activation. This activated oxygen species then proceeds to react with the substrate ultimately cleaving the carbon-carbon bond adjacent to the hydroxyl groups through the formation of an α-keto lactone intermediate. In the 2OG-dependent dioxygenases, ferrous iron (Fe(II)) is also coordinated by a (His)2(Glu/Asp)1 "facial triad" motif. Bidentate coordination of 2OG and water completes a pseudo-octahedral coordination sphere. Following substrate binding, the water ligand is released, yielding an open coordination site for oxygen activation. Upon oxygen binding, a poorly understood transformation occurs during which 2OG is oxidatively decarboxylated to succinate and the O-O bond is cleaved to form a Fe(IV)-oxo (ferryl) intermediate. This powerful oxidant is then utilized to carry out various reactions, including hydroxylation, halogenation, and demethylation. In the best characterized case, the hydroxylases, the ferryl intermediate abstracts a hydrogen atom from the target position of the substrate, yielding a substrate radical and Fe(III)-OH. This radical then couples to the hydroxide ligand, producing the hydroxylated product and the Fe(II) resting state of the enzyme. Rieske dioxygenases The Rieske dioxygenases usually catalyze the cis-dihydroxylation of arenes to cis-dihydro-diol products. These dioxygenases also catalyze sulfoxidation, desaturation, and benzylic oxidation. These enzymes are prominently found in soil bacteria such as Pseudomonas, and their reactions constitute the initial step in the biodegradation of aromatic hydrocarbons. Rieske dioxygenases are structurally more complex than other dioxygenases due to the need for an efficient electron transfer pathway (figure 2) to mediate the additional, simultaneous two-electron reduction of the aromatic substrate. Rieske dioxygenases have three components: an NADH-dependent FAD reductase, a ferredoxin with two [2Fe-2S] Rieske clusters, and an α3β3 oxygenase with each α-subunit containing a mononuclear iron center and a [2Fe-2S] Rieske cluster. Within each α-subunit, the iron-sulfur cluster and mononuclear iron center are separated by a distance of ~43 Å, much too far for efficient electron transfer. Instead, it is proposed electron transfer is mediated through these two centers in adjacent subunits, that the iron-sulfur cluster of one subunit transfers electrons to the mononuclear iron center of the adjacent subunit which is conveniently separated by ~12 Å. While this distance would appear optimal for efficient electron transfer, replacement of the bridging aspartate residue causes a loss of enzyme function, suggesting that electron transfer instead proceeds through the hydrogen-bonding network held in place by this aspartate residue. The mechanism of O2 activation by this class of dioxygenases has been described. 
The resulting iron–oxygen species could represent the active oxidant, or it could undergo homolytic O-O bond cleavage to yield an iron(V)-oxo intermediate as the working oxidizing agent. Heme-containing dioxygenases While most iron-dependent dioxygenases utilize a non-heme iron cofactor, the oxidation of L-(and D-)tryptophan to N-formylkynurenine is catalyzed by either tryptophan 2,3-dioxygenase (TDO) or indoleamine 2,3-dioxygenase (IDO), which are heme dioxygenases that utilize iron coordinated by a heme B prosthetic group. While these dioxygenases are of interest in part because they uniquely use heme for catalysis, they are also of interest due to their importance in tryptophan regulation in the cell, which has numerous physiological implications. The initial association of the substrate with the dioxygen-iron in the enzyme active site is thought to proceed via either radical or electrophilic addition, requiring either ferrous iron or ferric iron, respectively. While the exact reaction mechanism for the heme-dependent dioxygenases is still under debate, it is postulated that the reaction proceeds through either a dioxetane or Criegee mechanism (figures 4, 5). Cambialistic dioxygenases While iron is by far the most prevalent cofactor used for enzymatic dioxygenation, it is not required by all dioxygenases for catalysis. Quercetin 2,3-dioxygenase (quercetinase, QueD) catalyzes the dioxygenolytic cleavage of quercetin to 2-protocatechuoylphloroglucinolcarboxylic acid and carbon monoxide. The most characterized enzyme, from Aspergillus japonicus, requires the presence of copper, and bacterial quercetinases have been discovered that are quite promiscuous (cambialistic) in their requirements of a metal center, with varying degrees of activity reported with substitution of divalent manganese, cobalt, iron, nickel and copper. (See quercetin for its role in metabolism.) Acireductone (1,2-dihydroxy-5-(methylthio)pent-1-en-3-one) dioxygenase (ARD) is found in both prokaryotes and eukaryotes. ARD enzymes from most species bind ferrous iron and catalyze the oxidation of acireductone to 4-(methylthio)-2-oxobutanoate, the α-keto acid of methionine, and formic acid. However, ARD from Klebsiella oxytoca catalyzes an additional reaction when nickel(II) is bound: it instead produces 3-(methylthio)propionate, formate, and carbon monoxide from the reaction of acireductone with dioxygen. The activity of Fe-ARD is closely interwoven with the methionine salvage pathway, in which the methylthioadenosine product of cellular S-Adenosyl methionine (SAM) reactions is eventually converted to acireductone. While the exact role of Ni-ARD is not known, it is suspected to help regulate methionine levels by acting as a shunt in the salvage pathway. This K. oxytoca enzyme represents a unique example whereby the metal ion present dictates which reaction is catalyzed. The quercetinases and ARD enzymes are all members of the cupin superfamily, to which the mononuclear iron enzymes also belong. The metal coordination scheme for the QueD enzymes is either a 3-His or 3-His-1-Glu with the exact arrangement being organism-specific. The ARD enzymes all chelate the catalytic metal (either Ni or Fe) through the 3-His-1-Glu motif. In these dioxygenases, the coordinating ligands are provided by both of the typical cupin motifs. In the ARD enzymes, the metal exists in an octahedral arrangement with the three histidine residues comprising a facial triad.
The bacterial quercetinase metal centers typically have a trigonal bipyramidal or octahedral coordination environment when there are four protein ligands; the metal centers of the copper-dependent QueD enzymes possesses a distorted tetrahedral geometry in which only the three conserved histidine residues provide coordination ligands. Empty coordination sites in all metal centers are occupied by aqua ligands until these are displaced by the incoming substrate. The ability of these dioxygenases to retain activity in the presence of other metal cofactors with wide ranges of redox potentials suggests the metal center does not play an active role in the activation of dioxygen. Rather, it is thought the metal center functions to hold the substrate in the proper geometry for it to react with dioxygen. In this respect, these enzymes are reminiscent of the intradiol catechol dioxygenases whereby the metal centers activate the substrate for subsequent reaction with dioxygen. Cofactor-independent dioxygenases Dioxygenases that catalyze reactions without the need for a cofactor are much more rare in nature than those that do require them. Two dioxygenases, 1H-3-hydroxy-4-oxo-quinoline 2,4-dioxygenase (QDO) and 1H-3-hydroxy-4-oxoquinaldine 2,4-dioxygenase (HDO), have been shown to require neither an organic or metal cofactor. These enzymes catalyze the degradation of quinolone heterocycles in a manner similar to quercetin dioxygenase, but are thought to mediate a radical reaction of a dioxygen molecule with a carbanion on the substrate (figure 5). Both HDO and QDO belong to the α/β hydrolase superfamily of enzymes, although the catalytic residues in HDO and QDO do not seem to serve the same function as they do in the rest of the enzymes in the α/β hydrolase superfamily. Clinical significance Diversity in the dioxygenase family means a wide range of biological roles: Tryptophan 2,3-dioxygenase (TDO) helps regulate tryptophan in the body and is expressed in many human tumors. The other heme iron-dependent dioxygenase, IDO, also has relevance to human health, as it functions in inflammatory responses in the context of certain diseases. IDO affects both tryptophan and kynurenine and has been linked to depression in humans. Alkaptonuria is a genetic disease that results in a deficiency of homogentisate 1,2-dioxygenase, which is responsible for catalyzing the formation of 4-maleylacetoacetate from homogentisate. Buildup of homogentisic acid can result in heart valve damage, kidney stones and damage to cartilage in the body. Pantothenate kinase-associated neurodegeneration (PKAN) is an autosomal recessive disorder that can lead to the development of iron granules and Lewy bodies in neurons. A study has shown that patients diagnosed with PKAN were found to have increased cysteine levels in the globus pallidus as a consequence of a cysteine dioxygenase deficiency. Patients with PKAN often develop symptoms of dementia and often die at an early age in adulthood. In DNA repair, the Fe (II)/2-oxoglutarate-dependent dioxygenase AlkB, functions in the oxidative removal of alkylation damage to DNA. Failure to remove DNA alkylation damage can result in cytotoxicity or mutagenesis during DNA replication. Cyclooxygenases (COX), which are responsible for forming prostanoids in the human body, are the target of many NSAID pain relievers. Inhibition of COX leads to reduced inflammation and has an analgesic effect due to the lowered level of prostaglandin and thromboxane synthesis. 
References Oxidoreductases EC 1.13.11 Oxygenases
Dioxygenase
[ "Chemistry" ]
3,454
[ "Oxidoreductases", "Bioinorganic chemistry" ]
15,086,544
https://en.wikipedia.org/wiki/Coccidioides%20posadasii
Coccidioides posadasii is a pathogenic fungus that, along with Coccidioides immitis, is the causative agent of coccidioidomycosis, or valley fever, in humans. It resides in the soil in certain parts of the Southwestern United States, northern Mexico, and some other areas in the Americas, but its evolution was connected to its animal hosts. Coccidioides posadasii and C. immitis are morphologically identical, but genetically and epidemiologically distinct. C. posadasii was identified as a species separate from C. immitis in 2002 following a phylogenetic analysis. The two species can be distinguished by DNA polymorphisms and different rates of growth in the presence of high salt concentrations: C. posadasii grows more slowly. It also differs epidemiologically, since it is found outside the San Joaquin Valley. Unlike C. immitis, which is geographically largely limited to California, C. posadasii can also be found in northern Mexico and South America. Early history As an intern in Buenos Aires in 1892, Alejandro Posadas described an Argentine soldier who had had a dermatological problem since 1889. Posadas had seen the patient while a medical student in 1891, and skin biopsies revealed organisms resembling the protozoan Coccidia. The patient died in 1898, but during the interim Posadas successfully transmitted the infection to a dog, a cat, and a monkey by inoculating them with material from his patient. In 1894 a 40-year-old manual laborer from the San Joaquin Valley, a native of the Azores, entered a San Francisco hospital with fungating lesions similar to those of Posadas' patient. Dr. Emmet Rixford, a surgeon at San Francisco's Cooper Medical College, in attempting to determine the cause, concluded the disease was not the result of inadvertent self-inoculation. Further research produced a chronic ulcer in a rabbit and a lesion in a dog, both excreting pus containing the same organisms. Rixford issued a report, co-authored by Dr. Thomas Caspar Gilchrist (1862–1927), that was printed in 1896, one year after the patient died. Gilchrist, a pathologist at Johns Hopkins Medical School, studied the material and determined the microbe was not a fungus but a protozoan resembling Coccidia. With the help of parasitologist C.W. Stiles, the organism was named Coccidioides (“resembling Coccidia”) immitis (“not mild”). Four years later William Ophüls and Herbert C. Moffitt proved that C. immitis was not a protozoan but a fungus that existed in two forms. In 1905 Ophüls called the infection "coccidioidal granuloma" and reported that it could develop from inhalation of the organism. Also in 1905 Samuel Darling studied a case and, likewise misidentifying the organism as a protozoan, named it Histoplasma capsulatum, meaning three major endemic fungi in the United States were all initially misidentified as protozoa. Studies by Cooke on the immunology of the disease followed, and in 1927 a filtrate of culture specimens, later named coccidioidin, began to be used in skin testing to delineate the epidemiology of infection. In 1929 a second-year medical student, Harold Chope, who was studying C. immitis in the laboratory of Ernest Dickson at Stanford University Medical School, breathed in spores and became infected, but he later recovered. In 1934 Myrnie Gifford, a physician at San Francisco General Hospital, joined the Health Department of Kern County, California. She had observed that San Joaquin Valley Fever patients often suffered from erythema nodosum, and all tested positive for coccidioidomycosis.
She met Ernest Dickson when he visited her in Kern County, California, and together they presented evidence to the California Medical Association. The two determined that San Joaquin fever represented C. immitis infection. The Kern County Health Department began obtaining epidemiologic histories and skin testing all cases involving Valley Fever. The investigations revealed, among other things, that a majority of the cases described a history of dust exposure, that coccidioidomycosis was common in the area, and that racial differences determined the host's response to the fungus. Chope left Stanford Medical School and Dickson recruited a classmate, Charles E. Smith, to replace him. Smith began an extensive 17-month study of coccidioidomycosis in Kern and Tulare Counties, which started a lifelong professional focus on C. immitis and coccidioidomycosis that continued even after he became Dean of the School of Public Health at the University of California at Berkeley in 1951, until his death in 1967. Smith's research produced numerous findings, including serologic tests for the disease; the observation that chlamydospores of the fungus C. immitis could be wind-blown, dispersing the spores when hot weather converted the soil to dust; and skin-test surveys of military personnel in the southern San Joaquin Valley before and during WWII, of people of Japanese descent (many US citizens) interned in camps, of prisoners of war, and of agricultural workers. Diagnoses of active disease and skin testing showed that the fungus was also found in southern Nevada and Utah, western Texas, and Arizona, where the southern and central areas appeared to impose the highest risk of infection in the United States. Smith's research added to the fundamental discoveries of microbiology, epidemiology, clinical findings, and diagnosis that had emerged since Posadas' initial case report in 1892. Later history Studies from 1997 to 2007, including genomic restriction fragment length polymorphism (RFLP) analysis, concluded that there were two separate species. Earlier the two were referred to as types I and II, and later as non-California and California distributions, determined as clades through microsatellite analyses. Genealogical Concordance Phylogenetic Species Recognition (GCPSR) criteria were met, so the two entities were proposed and generally recognized as two separate species: Coccidioides immitis, and the novel species Coccidioides posadasii. References External links Coccidioides posadasii overview, life cycle image at MetaPathogen resource Onygenales Fungal pathogens of humans Fungus species
Coccidioides posadasii
[ "Biology" ]
1,339
[ "Fungi", "Fungus species" ]
5,876,226
https://en.wikipedia.org/wiki/IC%20programming
IC programming is the process of transferring a software or firmware into an integrated circuit (IC), typically to enable the chip to perform specific tasks or functions. The process of IC programming usually requires an IC programmer, also known as a chip programmer, device programmer, or PROM writer, which is an electronic device used to load data into the non-volatile memory of programmable ICs. IC programming can be performed either off-board, where the IC is removed from its PCB and programmed externally, or on-board, where the IC is programmed while still mounted on the device's circuit board. IC programming is essential in providing the ability to program a range of programmable ICs used in diverse applications, from consumer electronics to industrial systems. The common types of programmable chips include: Programmable Read-Only Memory (PROM) Erasable Programmable Read-Only Memory (EPROM) Electrically Erasable Programmable Read-Only Memory (EEPROM) Flash memory Field Programmable Gate Arrays (FPGA) Microcontroller Units (MCU) Notes Embedded systems
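As a rough illustration of what a device programmer does with a firmware image, the sketch below parses a minimal subset of Intel HEX, a common input format for flash and EEPROM programming, into an address-to-byte map that a programmer would then write to the chip's non-volatile memory. The two records are made-up sample data, and real programmers also handle extended-address and other record types omitted here.

def parse_intel_hex(lines):
    """Parse Intel HEX data (type 00) and end-of-file (type 01) records
    into a dict mapping byte addresses to data bytes."""
    image = {}
    for line in lines:
        line = line.strip()
        if not line.startswith(":"):
            continue
        raw = bytes.fromhex(line[1:])
        count = raw[0]
        addr = int.from_bytes(raw[1:3], "big")
        rectype = raw[3]
        data, checksum = raw[4:4 + count], raw[4 + count]
        if (sum(raw[:-1]) + checksum) & 0xFF != 0:
            raise ValueError("bad checksum: " + line)
        if rectype == 0x01:          # end-of-file record
            break
        if rectype == 0x00:          # data record: copy bytes into the image
            for offset, byte in enumerate(data):
                image[addr + offset] = byte
    return image

# Made-up two-byte firmware image at address 0x0000, followed by an EOF record.
sample = [":020000000C945E", ":00000001FF"]
print(parse_intel_hex(sample))   # {0: 12, 1: 148}

The resulting address map is what an off-board or on-board programmer then transfers into PROM, EEPROM, flash or MCU memory through the chip's programming interface.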
IC programming
[ "Technology", "Engineering" ]
221
[ "Computer engineering", "Embedded systems", "Computer hardware stubs", "Computer systems", "Computer science", "Computing stubs" ]
5,876,658
https://en.wikipedia.org/wiki/Optical%20cross-connect
An optical cross-connect (OXC) is a device used by telecommunications carriers to switch high-speed optical signals in a fiber optic network, such as an optical mesh network. In the 1980s, when transmission speeds supported by optical fibers increased from to , carrier networks developed and introduced digital cross connects to restore , , and traffic. There are several ways to realize an OXC: Opaque OXCs (electronic switching) - One can implement an OXC in the electronic domain: all the input optical signals are converted into electronic signals after they are demultiplexed by demultiplexers. The electronic signals are then switched by an electronic switch module. Finally, the switched electronic signals are converted back into optical signals by using them to modulate lasers and then the resulting optical signals are multiplexed by optical multiplexers onto outlet optical fibers. This is known as an "OEO" (Optical-Electrical-Optical) design. Cross-connects based on an OEO switching process generally have a key limitation: the electronic circuits limit the maximum bandwidth of the signal. Such an architecture prevents an OXC from performing with the same speed as an all-optical cross-connect, and is not transparent to the network protocols used. On the other hand, it is easy to monitor signal quality in an OEO device, since everything is converted back to the electronic format at the switch node. An additional advantage is that the optical signals are regenerated, so they leave the node free of dispersion and attenuation. An electronic OXC is also called an opaque OXC. Transparent OXCs (optical switching) - Switching optical signals in an all-optical device is the second approach to realize an OXC. Such a switch is often called a transparent OXC or photonic cross-connect (PXC). Specifically, optical signals are demultiplexed, then the demultiplexed wavelengths are switched by optical switch modules. After switching, the optical signals are multiplexed onto output fibers by optical multiplexers. Such a switch architecture keeps the features of data rate and protocol transparency. However, because the signals are kept in the optical format, the transparent OXC architecture does not allow easy optical signal quality monitoring. Translucent OXCs (optical and electronic switching) - As a compromise between opaque and transparent OXC's, there is a type of OXC called a translucent OXC. In such a switch architecture, there is a switch stage which consists of an optical switch module and an electronic switch module. Optical signals passing through the switch stage can be switched either by the optical switch module or the electronic switch module. In most cases, the optical switch module is preferred for the purpose of transparency. When the optical switch module's switching interfaces are all busy or an optical signal needs signal regeneration through an OEO conversion process, the electronic module is used. Translucent OXC nodes provide a compromise of full optical signal transparency and comprehensive optical signal monitoring. It also provides the possibility of signal regeneration at each node. An optical add-drop multiplexer (OADM) can be viewed as a special case of an OXC, where to node degree is two. See also Optical switch Optical Carrier MEMS Digital access and cross-connect system References External links Transparent Optical Switches: Technology Issues and Challenges / International Engineering Consortium 2003 Annual Communications Review (2003). Cross-connect, optical Microtechnology
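As a toy illustration of the switching function shared by the three OXC variants described above, the sketch below models a transparent cross-connect as a lookup table from (input fiber, wavelength) to an output fiber. The port names and wavelengths are invented for illustration, and the model ignores real-world concerns such as wavelength conversion, regeneration and signal-quality monitoring.

class TransparentOXC:
    """Toy model of a wavelength-selective optical cross-connect:
    each (input fiber, wavelength channel) pair maps to an output fiber."""

    def __init__(self):
        self.cross_connect_map = {}

    def connect(self, in_fiber, wavelength_nm, out_fiber):
        self.cross_connect_map[(in_fiber, wavelength_nm)] = out_fiber

    def switch(self, in_fiber, wavelength_nm):
        # A transparent OXC forwards the optical signal unchanged;
        # an opaque (OEO) OXC would regenerate it electronically at this point.
        return self.cross_connect_map.get((in_fiber, wavelength_nm))

# Hypothetical configuration: two wavelengths demultiplexed from fiber "west-1".
oxc = TransparentOXC()
oxc.connect("west-1", 1550.12, "east-3")
oxc.connect("west-1", 1550.92, "south-2")
print(oxc.switch("west-1", 1550.12))   # -> "east-3"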
Optical cross-connect
[ "Materials_science", "Engineering" ]
692
[ "Materials science", "Microtechnology" ]
5,877,457
https://en.wikipedia.org/wiki/Inviscid%20flow
In fluid dynamics, inviscid flow is the flow of an inviscid fluid which is a fluid with zero viscosity. The Reynolds number of inviscid flow approaches infinity as the viscosity approaches zero. When viscous forces are neglected, such as the case of inviscid flow, the Navier–Stokes equation can be simplified to a form known as the Euler equation. This simplified equation is applicable to inviscid flow as well as flow with low viscosity and a Reynolds number much greater than one. Using the Euler equation, many fluid dynamics problems involving low viscosity are easily solved, however, the assumed negligible viscosity is no longer valid in the region of fluid near a solid boundary (the boundary layer) or, more generally in regions with large velocity gradients which are evidently accompanied by viscous forces. The flow of a superfluid is inviscid. Inviscid flows are broadly classified into potential flows (or, irrotational flows) and rotational inviscid flows. Prandtl hypothesis Ludwig Prandtl developed the modern concept of the boundary layer. His hypothesis establishes that for fluids of low viscosity, shear forces due to viscosity are evident only in thin regions at the boundary of the fluid, adjacent to solid surfaces. Outside these regions, and in regions of favorable pressure gradient, viscous shear forces are absent so the fluid flow field can be assumed to be the same as the flow of an inviscid fluid. By employing the Prandtl hypothesis it is possible to estimate the flow of a real fluid in regions of favorable pressure gradient by assuming inviscid flow and investigating the irrotational flow pattern around the solid body. Real fluids experience separation of the boundary layer and resulting turbulent wakes but these phenomena cannot be modelled using inviscid flow. Separation of the boundary layer usually occurs where the pressure gradient reverses from favorable to adverse so it is inaccurate to use inviscid flow to estimate the flow of a real fluid in regions of unfavorable pressure gradient. Reynolds number The Reynolds number (Re) is a dimensionless quantity that is commonly used in fluid dynamics and engineering. Originally described by George Gabriel Stokes in 1850, it became popularized by Osborne Reynolds after whom the concept was named by Arnold Sommerfeld in 1908. The Reynolds number is calculated as: The value represents the ratio of inertial forces to viscous forces in a fluid, and is useful in determining the relative importance of viscosity. In inviscid flow, since the viscous forces are zero, the Reynolds number approaches infinity. When viscous forces are negligible, the Reynolds number is much greater than one. In such cases (Re>>1), assuming inviscid flow can be useful in simplifying many fluid dynamics problems. Euler equations In a 1757 publication, Leonhard Euler described a set of equations governing inviscid flow: Assuming inviscid flow allows the Euler equation to be applied to flows in which viscous forces are insignificant. Some examples include flow around an airplane wing, upstream flow around bridge supports in a river, and ocean currents. Navier-Stokes equations In 1845, George Gabriel Stokes published another important set of equations, today known as the Navier-Stokes equations. Claude-Louis Navier developed the equations first using molecular theory, which was further confirmed by Stokes using continuum theory. 
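The displayed formulas referred to in this and the following passage did not survive extraction; in standard textbook notation, for a fluid of density \(\rho\), velocity field \(\mathbf{u}\), pressure \(p\), dynamic viscosity \(\mu\) (kinematic viscosity \(\nu = \mu/\rho\)), characteristic speed \(U\) and characteristic length \(L\), they read:

\mathrm{Re} = \frac{\rho\, U L}{\mu}

\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{1}{\rho}\nabla p + \mathbf{g}
\qquad \text{(Euler equation)}

\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{g}
\qquad \text{(incompressible Navier–Stokes equation)}

The Navier–Stokes form reduces to the Euler form when the viscous term \(\nu\,\nabla^{2}\mathbf{u}\) vanishes.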
The Navier-Stokes equations describe the motion of viscous fluids. When the fluid is inviscid, or the viscosity can be assumed to be negligible, the Navier-Stokes equation simplifies to the Euler equation. This simplified equation is much easier to solve, and can apply to many types of flow in which viscosity is negligible; some examples include flow around an airplane wing, upstream flow around bridge supports in a river, and ocean currents. The Navier-Stokes equation reduces to the Euler equation when the viscosity μ is zero. Another condition that leads to the elimination of the viscous force is a velocity field whose Laplacian vanishes (∇²v = 0), and this results in an "inviscid flow arrangement"; such flows are found to be vortex-like. Solid boundaries It is important to note that negligible viscosity can no longer be assumed near solid boundaries, such as the case of the airplane wing. In turbulent flow regimes (Re >> 1), viscosity can typically be neglected; however, this is only valid at distances far from solid interfaces. When considering flow in the vicinity of a solid surface, such as flow through a pipe or around a wing, it is convenient to categorize four distinct regions of flow near the surface: Main turbulent stream: Furthest from the surface, viscosity can be neglected. Inertial sub-layer: The start of the main turbulent stream, where viscosity has only minor importance. Buffer layer: The transition between the inertial and viscous layers. Viscous sub-layer: Closest to the surface, where viscosity is important. Although these distinctions can be a useful tool in illustrating the significance of viscous forces near solid interfaces, the regions are fairly arbitrary. Assuming inviscid flow can be a useful tool in solving many fluid dynamics problems; however, this assumption requires careful consideration of the fluid sub-layers when solid boundaries are involved. Superfluids A superfluid is a state of matter that exhibits frictionless flow with zero viscosity, also known as inviscid flow. To date, helium is the only fluid discovered to exhibit superfluidity. Helium-4 becomes a superfluid once it is cooled to below 2.2 K, a point known as the lambda point. At temperatures above the lambda point, helium exists as a liquid exhibiting normal fluid dynamic behavior. Once it is cooled to below 2.2 K it begins to exhibit quantum behavior. For example, at the lambda point there is a sharp increase in heat capacity; as the helium continues to be cooled, the heat capacity then decreases with temperature. In addition, the thermal conductivity is very large, contributing to the excellent coolant properties of superfluid helium. Similarly, helium-3 is found to become a superfluid at 2.491 mK. Applications Spectrometers are kept at a very low temperature using helium as the coolant. This allows for minimal background flux in far-infrared readings. Some of the designs for the spectrometers may be simple, but even the frame is, at its warmest, below 20 kelvin. These devices are not commonly used, as it is very expensive to use superfluid helium over other coolants. Superfluid helium has a very high thermal conductivity, which makes it very useful for cooling superconductors. Superconductors such as the ones used at the LHC (Large Hadron Collider) are cooled to temperatures of approximately 1.9 kelvin. This temperature allows the niobium-titanium magnets to reach a superconducting state. Without the use of superfluid helium, this temperature would not be possible.
Using helium to cool to these temperatures is very expensive and cooling systems that use alternative fluids are more numerous. Another application of the superfluid helium is its uses in understanding quantum mechanics. Using lasers to look at small droplets allows scientists to view behaviors that may not normally be viewable. This is due to all the helium in each droplet being at the same quantum state. This application does not have any practical uses by itself, but it helps us better understand quantum mechanics which has its own applications. See also Couette flow Fluid dynamics Potential flow, a special case of inviscid flow Stokes flow, in which the viscous forces are much greater than inertial forces. Viscosity References Fluid dynamics Superfluidity
Inviscid flow
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,618
[ "Physical phenomena", "Phase transitions", "Chemical engineering", "Phases of matter", "Superfluidity", "Condensed matter physics", "Exotic matter", "Piping", "Matter", "Fluid dynamics" ]
17,984,491
https://en.wikipedia.org/wiki/Fiber-reinforced%20composite
A fiber-reinforced composite (FRC) is a composite building material that consists of three components: the fibers as the discontinuous or dispersed phase, the matrix as the continuous phase, and the fine interphase region, also known as the interface. This is a type of advanced composite group, which makes use of rice husk, rice hull, rice shell, and plastic as ingredients. This technology involves a method of refining, blending, and compounding natural fibers from cellulosic waste streams to form a high-strength fiber composite material in a polymer matrix. The designated waste or base raw materials used in this instance are waste thermoplastics and various categories of cellulosic waste, including rice husk and sawdust. Introduction FRC is a high-performance fiber composite achieved and made possible by cross-linking cellulosic fiber molecules with resins in the FRC material matrix through a proprietary molecular re-engineering process, yielding a product of exceptional structural properties. Through this molecular re-engineering, selected physical and structural properties of wood are successfully cloned and vested in the FRC product, in addition to other critical attributes, to yield performance properties superior to contemporary wood. This material, unlike other composites, can be recycled up to 20 times, allowing scrap FRC to be reused again and again. The failure mechanisms in FRC materials include delamination, intralaminar matrix cracking, longitudinal matrix splitting, fiber/matrix debonding, fiber pull-out, and fiber fracture. Difference between wood plastic composite and fiber-reinforced composite: Properties Basic principles The appropriate "average" of the individual phase properties to be used in describing composite tensile behavior can be elucidated with reference to Fig. 6.2. Although this figure illustrates a plate-like composite, the results that follow are equally applicable to fiber composites having similar phase arrangements. The two-phase material of Fig. 6.2 consists of lamellae of α and β phases of thickness t_α and t_β, respectively. Thus, the volume fractions of the phases are V_α = t_α/(t_α + t_β) and V_β = t_β/(t_α + t_β). Case I: Same stress, different strain A tensile force F is applied normal to the broad faces (dimensions L × L) of the phases. In this arrangement the stress borne by each of the phases (σ = F/L²) is the same, but the strains (ε_α, ε_β) they experience are different: ε_α = σ/E_α and ε_β = σ/E_β. The composite strain is a volumetric weighted average of the strains of the individual phases: the total elongation of the composite is the sum of the elongations of the two phases, and the composite strain is ε_c = V_α ε_α + V_β ε_β. The composite modulus follows as 1/E_c = V_α/E_α + V_β/E_β. Case II: Different stress, same strain For phases aligned parallel to the tensile axis, the strains in both phases are equal (and the same as the composite strain), but the external force is partitioned unequally between the phases, so that σ_c = V_α σ_α + V_β σ_β and E_c = V_α E_α + V_β E_β. Deformation behavior When the fibers are aligned parallel to the loading direction, the equal-strain case applies. If the fiber and matrix have volume fractions V_f and V_m, stresses σ_f and σ_m, strains ε_f and ε_m, and moduli E_f and E_m, then ε_c = ε_f = ε_m. The uniaxial stress-strain response of a fiber composite can be divided into several stages, described below (a short numerical illustration of the rule-of-mixtures bounds derived above is given first).
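The sketch below evaluates the two bounds just derived for an illustrative fiber/matrix pair. The material values are invented for the example and are not taken from the article.

# Rule-of-mixtures bounds on composite stiffness (illustrative values only).
def voigt_modulus(E_f, E_m, V_f):
    """Equal-strain (isostrain) bound: E_c = V_f*E_f + V_m*E_m."""
    return V_f * E_f + (1.0 - V_f) * E_m

def reuss_modulus(E_f, E_m, V_f):
    """Equal-stress (isostress) bound: 1/E_c = V_f/E_f + V_m/E_m."""
    return 1.0 / (V_f / E_f + (1.0 - V_f) / E_m)

# Hypothetical glass fibre (70 GPa) in epoxy (3 GPa), 40 % fibre by volume.
print(voigt_modulus(70.0, 3.0, 0.40))   # ~29.8 GPa, loading along the fibres
print(reuss_modulus(70.0, 3.0, 0.40))   # ~4.9 GPa, loading across the fibres

The equal-strain value is the upper bound (loading along aligned fibers); the equal-stress value is the lower bound (loading transverse to them).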
In stage 1, when the fiber and matrix both deform elastically, the stress-strain relation is σ_c = (V_f E_f + V_m E_m) ε_c. In stage 2, when the stress in the matrix exceeds the matrix yield stress, the matrix deforms plastically while the fibers remain elastic, and the stress-strain relation is σ_c = V_f E_f ε_c + V_m σ_m(ε_c), where σ_m(ε_c) is the matrix flow stress at the composite strain. In stage 3, when the matrix and the fibers both deform plastically, the stress-strain relation is σ_c = V_f σ_f(ε_c) + V_m σ_m(ε_c). Since some fibers do not deform permanently prior to fracture, stage 3 cannot be observed in some composites. In stage 4, when the fibers have fractured and the matrix still deforms plastically, the stress-strain relation is approximately σ_c = V_m σ_m(ε_c); however, this is not completely accurate, since the failed fibers can still carry some load. Reinforcement with discontinuous fibers For discontinuous fibers (also known as whiskers, depending on the length), tensile force is transmitted from the matrix to the fiber by means of shear stresses that develop along the fiber-matrix interface. The matrix displacement relative to the fiber is zero at the fiber midpoint and maximum at the fiber ends along the interface. This displacement gives rise to an interfacial shear stress τ that is balanced by the fiber tensile stress, σ_f(x) = 4τx/d, where d is the fiber diameter and x is the distance from the fiber end. After only a very small strain, the magnitude of the shear stress at the fiber end becomes large. This leads to two possible situations: fiber-matrix delamination, or plastic shear of the matrix. If the matrix deforms by plastic shear, the interfacial shear stress is limited to the matrix shear yield strength, τ = τ_y. There is then a critical length l_c such that, for fibers with l > l_c, the fiber stress beyond a distance l_c/2 from each end remains constant and equal to the stress of the equal-strain condition. The ratio l_c/d is called the "critical aspect ratio"; it increases with composite strain ε_c. For the mid-point of a fiber to be stressed to the equal-strain condition at composite fracture, its length must be at least l_c. The average fiber stress can then be calculated: the fraction of the fiber length carrying the full stress is (l − l_c)/l, and the remaining fraction bears an average stress of half that value, so the average fiber stress is σ̄_f = σ_f(1 − l_c/2l). For l < l_c, the average stress is σ̄_f = τ_y l/d, with a maximum of 2τ_y l/d at the fiber midpoint. The composite stress is modified accordingly: σ_c = V_f σ̄_f + V_m σ_m. The above equations assumed the fibers were aligned with the direction of loading. A modified rule of mixtures can be used to predict composite strength, including an orientation efficiency factor, η_θ, which accounts for the decrease in strength from misaligned fibers: σ_c = η_θ η_l V_f σ_f + V_m σ_m, where η_l is the fiber length efficiency factor, equal to l/(2l_c) for l < l_c and to 1 − l_c/(2l) for l ≥ l_c. If the fibers are perfectly aligned with the direction of loading, η_θ is 1. However, common values of η_θ for randomly oriented fibers are roughly 0.375 for an in-plane two-dimensional array and 0.2 for a three-dimensional array. Appreciable reinforcement can be provided by discontinuous fibers provided their lengths are much greater than the (usually) small critical lengths, as in metal-matrix composites (MMCs). If fiber-matrix delamination occurs instead, the shear yield stress is replaced by a friction stress μp, where μ is the friction coefficient between the matrix and the fiber and p is an internal (clamping) pressure; this happens in most resin-based composites. Fibers with lengths less than l_c contribute little to strength. However, during composite fracture such short fibers do not fracture; instead they are pulled out of the matrix. The work associated with fiber pull-out provides an added component to the fracture work and contributes greatly to toughness. Application There are also applications in the market which utilize only waste materials. Its most widespread use is in outdoor deck floors, but it is also used for railings, fences, landscaping timbers, cladding and siding, park benches, molding and trim, window and door frames, and indoor furniture.
See for example the work of Waste for Life, which collaborates with garbage scavenging cooperatives to create fiber-reinforced building materials and domestic problems from the waste their members collect: Homepage of Waste for Life Adoption of natural fiber in reinforced polymer composites potentially to be used in automotive industry could significantly help developing a sustainable waste management. See also Fiber volume ratio Fracture mechanics Plastic composite (disambiguation) Plastic lumber Wood plastic composite Fibre-reinforced plastic References 3. Thomas H. Courtney. "Mechanical Behavior of Materials". 2nd Ed. Waveland Press, Inc. 2005. Woodworking materials Composite materials Recycled building materials Plastics Fibre-reinforced composites
Fiber-reinforced composite
[ "Physics" ]
1,495
[ "Unsolved problems in physics", "Composite materials", "Materials", "Amorphous solids", "Matter", "Plastics" ]
17,986,007
https://en.wikipedia.org/wiki/Bioasphalt
Bioasphalt is an asphalt alternative made from non-petroleum based renewable resources. These sources include sugar, molasses and rice, corn and potato starches, natural tree and gum resins, natural latex rubber and vegetable oils, lignin, cellulose, palm oil waste, coconut waste, peanut oil waste, canola oil waste, dried sewerage effluent and so on. Bitumen can also be made from waste vacuum tower bottoms produced in the process of cleaning used motor oils, which are normally burned or dumped into land fills. Non-petroleum based bitumen binders can be colored, which can reduce the temperatures of road surfaces and reduce the Urban heat islands. Petroleum, environmental, and heat concerns Because of concerns over Peak oil, pollution and climate change, as well the oil price increases since 2003, non-petroleum alternatives have become more popular. This has led to the introduction of biobitumen alternatives that are more environmentally friendly and nontoxic. For millions of people living in and around cities, heat islands are of growing concern. This phenomenon describes urban and suburban temperatures that are hotter than nearby rural areas. Elevated temperatures can impact communities by increasing peak energy demand, air conditioning costs, air pollution levels, and heat-related illness and mortality. There are common-sense measures that communities can take to reduce the negative effects of heat islands, such as replacing conventional black asphalt road surfaces with the new pigmentable bitumen that gives lighter colors. History and implementation Asphalt made with vegetable oil based binders was patented by Colas SA in France in 2004. A number of homeowners seeking an environmentally friendly alternative to asphalt for paving have experimented with waste vegetable oil as a binder for driveways and parking areas in single-family applications. The earliest known test occurred in 2002 in Ohio, where the homeowner combined waste vegetable oil with dry aggregate to create a low-cost and less polluting paving material for his 200-foot driveway. After five years, he reports the driveway is performing as well or better than petroleum-based materials. Shell Oil Company paved two public roads in Norway in 2007 with vegetable-oil-based asphalt. Results of this study are still premature. HALIK Asphalts LTD from Israel has been experimenting with recycled and secondary road building since 2003. The company is using various wastes such as vegetable fats & oils, wax and thermoplastic elastomers to build and repair roads. The results reported are so far satisfying. On October 6, 2010, a bicycle path in Des Moines, Iowa, was paved with bio-oil based asphalt through a partnership between Iowa State University, the City of Des Moines, and Avello Bioenergy Inc. Research is being conducted on the asphalt mixture, derived from plants and trees to replace petroleum-based mixes. Bioasphalt is a registered trademark of Avello Bioenergy Inc. Dr. Elham H. Fini, at North Carolina A&T University, has been spearheading research that has successfully produced bio asphalt from swine manure. Since November 2014 the Dutch Wageningen University & Research centre is running a pilot in the Dutch province of Zeeland with bioasphalt in which the binder of bitumen was substituted by lignin. In 2015, French researchers published their results about the usage of microalgaes as a source of asphalt binding material. 
See also Asphalt References Biomass Building materials Recycled building materials Pavements Chemical mixtures Amorphous solids
Bioasphalt
[ "Physics", "Chemistry", "Engineering" ]
709
[ "Building engineering", "Unsolved problems in physics", "Architecture", "Construction", "Materials", "Chemical mixtures", "nan", "Amorphous solids", "Matter", "Building materials" ]
17,986,913
https://en.wikipedia.org/wiki/Pharmacoepidemiology
Pharmacoepidemiology is the study of the uses and effects of drugs in well-defined populations. To accomplish this study, pharmacoepidemiology borrows from both pharmacology and epidemiology. Thus, pharmacoepidemiology is the bridge between both pharmacology and epidemiology. Pharmacology is the study of the effect of drugs and clinical pharmacology is the study of effect of drugs on clinical humans. Part of the task of clinical pharmacology is to provide a risk benefit assessment by effects of drugs in patients: doing the studies needed to provide an estimate of the probability of beneficial effects on populations, or assessing the probability of adverse effects on populations. Other parameters relating to drug use may benefit epidemiological methodology. Pharmacoepidemiology then can also be defined as the transparent application of epidemiological methods through pharmacological treatment of conditions to better understand the conditions to be treated. Epidemiology is the study of the distribution and determinants of diseases and other health states in populations. Epidemiological studies can be divided into two main types: Descriptive epidemiology describes disease and/or exposure and may consist of calculating rates, e.g., incidence and prevalence. Such descriptive studies do not at this time use health control groups and can only generate hypotheses, but not test them. Studies of drug use would generally fall under descriptive studies. Analytic epidemiology includes two types of studies: observational studies, such as case-control and cohort studies, and experimental studies which include clinical trials or randomized clinical trials. The analytic studies compare an exposed group with a control group and usually designed as hypothesis testing by studies. Pharmacoepidemiology benefits from the methodology developed in general epidemiology and may further develop them for applications of methodology unique to needs of pharmacoepidemiology. There are also some areas that are altogether unique to pharmacoepidemiology, e.g., pharmacovigilance. Pharmacovigilance is a type of continual monitoring of unwanted effects and other safety-related aspects of drugs that are already placed in current growing integrating markets. In practice, pharmacovigilance refers almost exclusively to spontaneous reporting systems which allow health care professionals and others to report adverse drug reactions to the central agency. The central agency combines reports from many sources to produce a more informative profile for drug products than could be done based on reports from fewer health care professionals. In Australia, a 10% sample of all people eligible for government-subsidised medicines by the Pharmaceutical Benefits Scheme (PBS) are made available for research purposes. Licences are held between Services Australia, who hold the data for the PBS, and academics at Monash University, University of New South Wales, University of South Australia and the University of Western Australia to use the 10% sample for research purposes. Research outputs from these data have to be approved by Services Australia prior to publication. These data create a useful picture of all dispensed medicines in Australia and allow for pharmacovigilance and to explore trends in medicines usage. 
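The rates mentioned under descriptive epidemiology can be illustrated with a short calculation. The counts below are invented purely for the example and do not refer to any real dataset.

# Descriptive-epidemiology rates computed from made-up numbers (not real data).
new_cases = 120              # new cases observed during follow-up
person_years = 40_000        # total person-time at risk
existing_cases = 950         # people with the condition at a single point in time
population = 50_000          # population at that point in time

incidence_rate = new_cases / person_years          # 0.003 cases per person-year
prevalence = existing_cases / population           # 0.019, i.e. 1.9 %

print(f"incidence: {incidence_rate * 1000:.1f} per 1,000 person-years")
print(f"prevalence: {prevalence:.1%}")

Analytic studies would go a step further and compare such rates between an exposed group (for example, users of a particular drug) and a control group.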
See also Epidemiology Exposome International Society for Pharmacoepidemiology Molecular epidemiology Molecular pathological epidemiology Pharmacovigilance References External links European Pharmacovigilance and Pharmacoepidemiology - Eu2p Pharmacology Epidemiology
Pharmacoepidemiology
[ "Chemistry", "Environmental_science" ]
729
[ "Epidemiology", "Pharmacology", "Environmental social science", "Medicinal chemistry" ]
17,987,436
https://en.wikipedia.org/wiki/Bucket%20%28machine%20part%29
A bucket (also called a scoop to qualify shallower designs of tools) is a specialized container attached to a machine, as compared to a bucket adapted for manual use by a human being. It is a bulk material handling component. The bucket has an inner volume as compared to other types of machine attachments like blades or shovels. The bucket could be attached to the lifting hook of a crane, at the end of the arm of an excavating machine, to the wires of a dragline excavator, to the arms of a power shovel or a tractor equipped with a backhoe loader or to a loader, or to a dredge. The name "bucket" may have been coined from buckets used in water wheels, or used in water turbines or in similar-looking devices. Purposes Buckets in mechanical engineering can have a distinct quality from the traditional bucket (pail) whose purpose is to contain things. Larger versions of this type of bucket equip bucket trucks to contain human beings, buckets in water-hauling systems in mines or, for instance, in helicopter buckets to hold water to combat fires. Two other types of mechanical buckets can be distinguished according to the final destination of the device they equip: energy-consumer systems like excavators or energy-capturer systems like water bucket wheels or turbines. Size and shape Buckets exist in a variety of sizes or shapes. They can be quite large like those equipping Hulett cranes, used to discharge ore out of cargo ships in harbours or very small such as those used by deep-sea exploration vehicles. The shape of the bucket can vary from the truncated conical shape of an actual bucket to more scoop-like or spoon-like shapes akin to water turbines. The cross section can be round or square. Designs Simple design This is the same shape of a domestic form, the one-piece-standing single element, but often with an augmented size. Mining In early developments of mining, a large simple bucket allowed easy insertion of both miners and construction materials such as pit props, and later extraction of miners and ore. Common terms used in various parts of the world include: Bowk; Kibble; Hoppit; Hoppet. Latterly they have been called sinking buckets, as they are now only used when sinking new mine shafts before insertion of the cage, or for emergency rescue. Concrete bucket A concrete bucket delivers concrete by means of a tower crane. It has a bottom opening to allow concrete to flow out when in-place. See also tremie. Boom truck bucket A boom truck (or “bucket truck” bucket is an aerial work platform placed at the end of an excavator-like arm which allows a man to be hoisted to do construction work, such as tree pruning and electrical line maintenance. When necessary the bucket is made out of a non-conductive material for safety. A construction site man lift is a similar apparatus. There may be a door on the side of the bucket in either. Excavator bucket Excavator buckets are made of solid steel and generally present teeth protruding from the cutting edge, to disrupt hard material and avoid wear-and-tear of the bucket. Subsets of the excavator bucket are: the ditching bucket, trenching bucket, A ditching bucket is a wider bucket with no teeth, used for excavating larger excavations and grading stone. A trenching excavator bucket is normally wide and with protruding teeth. Bucket crusher A bucket crusher or crusher bucket is a type of jaw crusher. It's an attached tool for excavators for built-in crushing construction waste and demolition materials. 
Screening bucket The screening bucket is an attachment for the excavators, loaders, skid steers and backhoe loaders that helps the selection of natural material for different purposes at the jobsite. Clamshell bucket The clamshell bucket is a more sophisticated articulated several-piece device, including two elementary buckets associated on a hinged structure forming a claws-like appendage with an internal volume. Buckets-wheel In mining The design is used in bucket-wheel excavators. The buckets in the wheel have to be made of solid material to withstand the resistance of the material it cuts through. In water hoisting In energy production The bucket wheel design is also used to capture the water energy in water-wheels or water turbines like Pelton wheels. The buckets also have to be made of solid material to withstand the force of the water flow. Their shape is optimized according to their purpose. Other designs include vertical shaft wind turbines designs like on the Savonius wind turbine. In this case, the buckets have to be made of a light material. Buckets-ladders (buckets-chains) The buckets-ladders are used in bucket elevators or in the dredge design of some dredgers. Images References Construction equipment Hardware (mechanical) Excavating equipment
Bucket (machine part)
[ "Physics", "Technology", "Engineering" ]
1,029
[ "Machines", "Excavating equipment", "Construction equipment", "Physical systems", "Construction", "Engineering vehicles", "Hardware (mechanical)", "Industrial machinery" ]
1,122,854
https://en.wikipedia.org/wiki/Equilibrium%20constant
The equilibrium constant of a chemical reaction is the value of its reaction quotient at chemical equilibrium, a state approached by a dynamic chemical system after sufficient time has elapsed at which its composition has no measurable tendency towards further change. For a given set of reaction conditions, the equilibrium constant is independent of the initial analytical concentrations of the reactant and product species in the mixture. Thus, given the initial composition of a system, known equilibrium constant values can be used to determine the composition of the system at equilibrium. However, reaction parameters like temperature, solvent, and ionic strength may all influence the value of the equilibrium constant. A knowledge of equilibrium constants is essential for the understanding of many chemical systems, as well as the biochemical processes such as oxygen transport by hemoglobin in blood and acid–base homeostasis in the human body. Stability constants, formation constants, binding constants, association constants and dissociation constants are all types of equilibrium constants. Basic definitions and properties For a system undergoing a reversible reaction described by the general chemical equation a thermodynamic equilibrium constant, denoted by , is defined to be the value of the reaction quotient Qt when forward and reverse reactions occur at the same rate. At chemical equilibrium, the chemical composition of the mixture does not change with time, and the Gibbs free energy change for the reaction is zero. If the composition of a mixture at equilibrium is changed by addition of some reagent, a new equilibrium position will be reached, given enough time. An equilibrium constant is related to the composition of the mixture at equilibrium by where {X} denotes the thermodynamic activity of reagent X at equilibrium, [X] the numerical value of the corresponding concentration in moles per liter, and γ the corresponding activity coefficient. If X is a gas, instead of [X] the numerical value of the partial pressure in bar is used. If it can be assumed that the quotient of activity coefficients, , is constant over a range of experimental conditions, such as pH, then an equilibrium constant can be derived as a quotient of concentrations. An equilibrium constant is related to the standard Gibbs free energy change of reaction by where R is the universal gas constant, T is the absolute temperature (in kelvins), and is the natural logarithm. This expression implies that must be a pure number and cannot have a dimension, since logarithms can only be taken of pure numbers. must also be a pure number. On the other hand, the reaction quotient at equilibrium does have the dimension of concentration raised to some power (see , below). Such reaction quotients are often referred to, in the biochemical literature, as equilibrium constants. For an equilibrium mixture of gases, an equilibrium constant can be defined in terms of partial pressure or fugacity. An equilibrium constant is related to the forward and backward rate constants, kf and kr of the reactions involved in reaching equilibrium: Types of equilibrium constants Cumulative and stepwise formation constants A cumulative or overall constant, given the symbol β, is the constant for the formation of a complex from reagents. 
For example, the cumulative constant for the formation of ML2 is given by M + 2 L ML2; [ML2] = β12[M][L]2 The stepwise constant, K, for the formation of the same complex from ML and L is given by ML + L ML2; [ML2] = K[ML][L] = Kβ11[M][L]2 It follows that β12 = Kβ11 A cumulative constant can always be expressed as the product of stepwise constants. There is no agreed notation for stepwise constants, though a symbol such as K is sometimes found in the literature. It is best always to define each stability constant by reference to an equilibrium expression. Competition method A particular use of a stepwise constant is in the determination of stability constant values outside the normal range for a given method. For example, EDTA complexes of many metals are outside the range for the potentiometric method. The stability constants for those complexes were determined by competition with a weaker ligand. ML + L′ ML′ + L The formation constant of [Pd(CN)4]2− was determined by the competition method. Association and dissociation constants In organic chemistry and biochemistry it is customary to use pKa values for acid dissociation equilibria. where log denotes a logarithm to base 10 or common logarithm, and Kdiss is a stepwise acid dissociation constant. For bases, the base association constant, pKb, is used. For any given acid or base the two constants are related by , so pKa can always be used in calculations. On the other hand, stability constants for metal complexes and binding constants for host–guest complexes are generally expressed as association constants. When considering equilibria such as M + HL ML + H it is customary to use association constants for both ML and HL. Also, in generalized computer programs dealing with equilibrium constants it is general practice to use cumulative constants rather than stepwise constants and to omit ionic charges from equilibrium expressions. For example, if NTA, nitrilotriacetic acid, N(CH2CO2H)3 is designated as H3L and forms complexes ML and MHL with a metal ion M, the following expressions would apply for the dissociation constants. The cumulative association constants can be expressed as Note how the subscripts define the stoichiometry of the equilibrium product. Micro-constants When two or more sites in an asymmetrical molecule may be involved in an equilibrium reaction, there is more than one possible equilibrium constant. For example, the molecule -DOPA has two non-equivalent hydroxyl groups which may be deprotonated. Denoting -DOPA as LH2, the following diagram shows all the species that may be formed (X = ). The concentration of the species LH is equal to the sum of the concentrations of the two micro-species with the same chemical formula, labelled L1H and L2H. The constant K2 is for a reaction with these two micro-species as products, so that [LH] = [L1H] + [L2H] appears in the numerator, and it follows that this macro-constant is equal to the sum of the two micro-constants for the component reactions. K2 = k21 + k22 However, the constant K1 is for a reaction with these two micro-species as reactants, and [LH] = [L1H] + [L2H] in the denominator, so that in this case 1/K1 = 1/k11 + 1/k12, and therefore K1 = k11k12/(k11 + k12). Thus, in this example there are four micro-constants whose values are subject to two constraints; in consequence, only the two macro-constant values, for K1 and K2, can be derived from experimental data. 
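As a purely numerical illustration of these relations, the sketch below computes the macro-constants from a set of invented micro-constant values; the numbers are hypothetical, chosen only so that they respect the consistency requirement k21/k22 = k12/k11, and are not measured data for L-DOPA.

# Hypothetical micro-constants (illustrative values only).
k21, k22 = 0.9e10, 1.0e10   # micro-steps producing L1H and L2H from a common species
k11, k12 = 2.0e9, 1.8e9     # micro-steps consuming L1H and L2H to give a common product

K2 = k21 + k22                    # macro-constant with the micro-species as products
K1 = (k11 * k12) / (k11 + k12)    # macro-constant with the micro-species as reactants
isomerisation = k21 / k22         # equals the concentration ratio [L1H]/[L2H]

print(f"K2 = {K2:.3g}, K1 = {K1:.3g}, [L1H]/[L2H] = {isomerisation:.2f}")

With a ratio of 0.9 the two micro-species are present in almost equal amounts, mirroring the L-DOPA estimate quoted below.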
Micro-constant values can, in principle, be determined using a spectroscopic technique, such as infrared spectroscopy, where each micro-species gives a different signal. Methods which have been used to estimate micro-constant values include Chemical: blocking one of the sites, for example by methylation of a hydroxyl group, followed by determination of the equilibrium constant of the related molecule, from which the micro-constant value for the "parent" molecule may be estimated. Mathematical: applying numerical procedures to 13C NMR data. Although the value of a micro-constant cannot be determined from experimental data, site occupancy, which is proportional to the micro-constant value, can be very important for biological activity. Therefore, various methods have been developed for estimating micro-constant values. For example, the isomerization constant for -DOPA has been estimated to have a value of 0.9, so the micro-species L1H and L2H have almost equal concentrations at all pH values. pH considerations (Brønsted constants) pH is defined in terms of the activity of the hydrogen ion: pH = −log10 {H+} In the approximation of ideal behaviour, activity is replaced by concentration. When pH is measured by means of a glass electrode, a mixed equilibrium constant, also known as a Brønsted constant, may result. HL L + H; The result depends on whether the electrode is calibrated by reference to solutions of known activity or of known concentration. In the latter case the equilibrium constant would be a concentration quotient. If the electrode is calibrated in terms of known hydrogen ion concentrations it would be better to write p[H] rather than pH, but this suggestion is not generally adopted. Hydrolysis constants In aqueous solution the concentration of the hydroxide ion is related to the concentration of the hydrogen ion by KW = [H][OH], so that [OH] = KW/[H]. The first step in metal ion hydrolysis can be expressed in two different ways It follows that . Hydrolysis constants are usually reported in the β* form and therefore often have values much less than 1. For example, if and so that β* = 10−10. In general when the hydrolysis product contains n hydroxide groups Conditional constants Conditional constants, also known as apparent constants, are concentration quotients which are not true equilibrium constants but can be derived from them. A very common instance is where pH is fixed at a particular value. For example, in the case of iron(III) interacting with EDTA, a conditional constant could be defined by This conditional constant will vary with pH. It has a maximum at a certain pH. That is the pH where the ligand sequesters the metal most effectively. In biochemistry equilibrium constants are often measured at a pH fixed by means of a buffer solution. Such constants are, by definition, conditional and different values may be obtained when using different buffers. Gas-phase equilibria For equilibria in a gas phase, fugacity, f, is used in place of activity. However, fugacity has the dimension of pressure, so it must be divided by a standard pressure, usually 1 bar, in order to produce a dimensionless quantity, . An equilibrium constant is expressed in terms of the dimensionless quantity. For example, for the equilibrium 2NO2 N2O4, Fugacity is related to partial pressure, , by a dimensionless fugacity coefficient ϕ: . Thus, for the example, Usually the standard pressure is omitted from such expressions. 
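Written out explicitly, and as a reconstruction consistent with the definitions above rather than a quotation of the original expressions, the dimerisation example with the standard pressure taken as 1 bar reads:

K = \frac{a_{\mathrm{N_2O_4}}}{a_{\mathrm{NO_2}}^{2}}
  = \frac{f_{\mathrm{N_2O_4}}/p^{\circ}}{\left(f_{\mathrm{NO_2}}/p^{\circ}\right)^{2}}
  = \frac{\phi_{\mathrm{N_2O_4}}\, p_{\mathrm{N_2O_4}}}{\left(\phi_{\mathrm{NO_2}}\, p_{\mathrm{NO_2}}\right)^{2}}\, p^{\circ}

so that K remains dimensionless even though the fugacities themselves carry units of pressure.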
Expressions for equilibrium constants in the gas phase then resemble the expression for solution equilibria with fugacity coefficient in place of activity coefficient and partial pressure in place of concentration. Thermodynamic basis for equilibrium constant expressions Thermodynamic equilibrium is characterized by the free energy for the whole (closed) system being a minimum. For systems at constant temperature and pressure the Gibbs free energy is minimum. The slope of the reaction free energy with respect to the extent of reaction, ξ, is zero when the free energy is at its minimum value. The free energy change, dGr, can be expressed as a weighted sum of change in amount times the chemical potential, the partial molar free energy of the species. The chemical potential, μi, of the ith species in a chemical reaction is the partial derivative of the free energy with respect to the number of moles of that species, Ni A general chemical equilibrium can be written as where nj are the stoichiometric coefficients of the reactants in the equilibrium equation, and mj are the coefficients of the products. At equilibrium The chemical potential, μi, of the ith species can be calculated in terms of its activity, ai. μ is the standard chemical potential of the species, R is the gas constant and T is the temperature. Setting the sum for the reactants j to be equal to the sum for the products, k, so that δGr(Eq) = 0 Rearranging the terms, This relates the standard Gibbs free energy change, ΔGo to an equilibrium constant, K, the reaction quotient of activity values at equilibrium. Equivalence of thermodynamic and kinetic expressions for equilibrium constants At equilibrium the rate of the forward reaction is equal to the backward reaction rate. A simple reaction, such as ester hydrolysis AB + H2O <=> AH + B(OH) has reaction rates given by expressions According to Guldberg and Waage, equilibrium is attained when the forward and backward reaction rates are equal to each other. In these circumstances, an equilibrium constant is defined to be equal to the ratio of the forward and backward reaction rate constants . The concentration of water may be taken to be constant, resulting in the simpler expression . This particular concentration quotient, , has the dimension of concentration, but the thermodynamic equilibrium constant, , is always dimensionless. Unknown activity coefficient values It is very rare for activity coefficient values to have been determined experimentally for a system at equilibrium. There are three options for dealing with the situation where activity coefficient values are not known from experimental measurements. Use calculated activity coefficients, together with concentrations of reactants. For equilibria in solution estimates of the activity coefficients of charged species can be obtained using Debye–Hückel theory, an extended version, or SIT theory. For uncharged species, the activity coefficient γ0 mostly follows a "salting-out" model: log10 γ0 = bI where I stands for ionic strength. Assume that the activity coefficients are all equal to 1. This is acceptable when all concentrations are very low. For equilibria in solution use a medium of high ionic strength. In effect this redefines the standard state as referring to the medium. Activity coefficients in the standard state are, by definition, equal to 1. The value of an equilibrium constant determined in this manner is dependent on the ionic strength. 
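As an illustration of the first option, the sketch below evaluates a single-ion activity coefficient with the Davies equation, one common extended form of the Debye–Hückel expression; the 25 °C constant A ≈ 0.509 and the example charge and ionic strength are assumptions made for the illustration, not values taken from the text.

import math

def log10_gamma_davies(z, ionic_strength, A=0.509):
    # Davies equation (water, 25 degrees C); a reasonable approximation up to I of about 0.5 mol/L.
    sqrt_i = math.sqrt(ionic_strength)
    return -A * z**2 * (sqrt_i / (1.0 + sqrt_i) - 0.30 * ionic_strength)

# Activity coefficient of a doubly charged ion at an ionic strength of 0.10 mol/L
gamma = 10 ** log10_gamma_davies(z=2, ionic_strength=0.10)
print(f"gamma = {gamma:.2f}")   # about 0.37

A concentration quotient measured under such conditions could then be corrected species by species to estimate the thermodynamic constant.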
When published constants refer to an ionic strength other than the one required for a particular application, they may be adjusted by means of specific ion theory (SIT) and other theories. Dimensionality An equilibrium constant is related to the standard Gibbs free energy of reaction change, , for the reaction by the expression Therefore, K, must be a dimensionless number from which a logarithm can be derived. In the case of a simple equilibrium A + B <=> AB, the thermodynamic equilibrium constant is defined in terms of the activities, {AB}, {A} and {B}, of the species in equilibrium with each other: Now, each activity term can be expressed as a product of a concentration and a corresponding activity coefficient, . Therefore, When , the quotient of activity coefficients, is set equal to 1, we get K then appears to have the dimension of 1/concentration. This is what usually happens in practice when an equilibrium constant is calculated as a quotient of concentration values. This can be avoided by dividing each concentration by its standard-state value (usually mol/L or bar), which is standard practice in chemistry. The assumption underlying this practice is that the quotient of activities is constant under the conditions in which the equilibrium constant value is determined. These conditions are usually achieved by keeping the reaction temperature constant and by using a medium of relatively high ionic strength as the solvent. It is not unusual, particularly in texts relating to biochemical equilibria, to see an equilibrium constant value quoted with a dimension. The justification for this practice is that the concentration scale used may be either mol dm−3 or mmol dm−3, so that the concentration unit has to be stated in order to avoid there being any ambiguity. Note. When the concentration values are measured on the mole fraction scale all concentrations and activity coefficients are dimensionless quantities. In general equilibria between two reagents can be expressed as {\mathit{p}A} + \mathit{q}B <=> A_\mathit{p}B_\mathit{q} , in which case the equilibrium constant is defined, in terms of numerical concentration values, as The apparent dimension of this K value is concentration1−p−q; this may be written as M(1−p−q) or mM(1−p−q), where the symbol M signifies a molar concentration (). The apparent dimension of a dissociation constant is the reciprocal of the apparent dimension of the corresponding association constant, and vice versa. When discussing the thermodynamics of chemical equilibria it is necessary to take dimensionality into account. There are two possible approaches. Set the dimension of to be the reciprocal of the dimension of the concentration quotient. This is almost universal practice in the field of stability constant determinations. The "equilibrium constant" , is dimensionless. It will be a function of the ionic strength of the medium used for the determination. Setting the numerical value of to be 1 is equivalent to re-defining the standard states. Replace each concentration term by the dimensionless quotient , where is the concentration of reagent in its standard state (usually 1 mol/L or 1 bar). By definition the numerical value of is 1, so also has a numerical value of 1. In both approaches the numerical value of the stability constant is unchanged. The first is more useful for practical purposes; in fact, the unit of the concentration quotient is often attached to a published stability constant value in the biochemical literature. 
The second approach is consistent with the standard exposition of Debye–Hückel theory, where , etc. are taken to be pure numbers. Water as both reactant and solvent For reactions in aqueous solution, such as an acid dissociation reaction AH + H2O A− + H3O+ the concentration of water may be taken as being constant and the formation of the hydronium ion is implicit. AH A− + H+ Water concentration is omitted from expressions defining equilibrium constants, except when solutions are very concentrated. (K defined as a dissociation constant) Similar considerations apply to metal ion hydrolysis reactions. Enthalpy and entropy: temperature dependence If both the equilibrium constant, and the standard enthalpy change, , for a reaction have been determined experimentally, the standard entropy change for the reaction is easily derived. Since and To a first approximation the standard enthalpy change is independent of temperature. Using this approximation, definite integration of the van 't Hoff equation gives This equation can be used to calculate the value of log K at a temperature, T2, knowing the value at temperature T1. The van 't Hoff equation also shows that, for an exothermic reaction (), when temperature increases K decreases and when temperature decreases K increases, in accordance with Le Chatelier's principle. The reverse applies when the reaction is endothermic. When K has been determined at more than two temperatures, a straight line fitting procedure may be applied to a plot of against to obtain a value for . Error propagation theory can be used to show that, with this procedure, the error on the calculated value is much greater than the error on individual log K values. Consequently, K needs to be determined to high precision when using this method. For example, with a silver ion-selective electrode each log K value was determined with a precision of ca. 0.001 and the method was applied successfully. Standard thermodynamic arguments can be used to show that, more generally, enthalpy will change with temperature. where Cp is the heat capacity at constant pressure. A more complex formulation The calculation of K at a particular temperature from a known K at another given temperature can be approached as follows if standard thermodynamic properties are available. The effect of temperature on equilibrium constant is equivalent to the effect of temperature on Gibbs energy because: where ΔrGo is the reaction standard Gibbs energy, which is the sum of the standard Gibbs energies of the reaction products minus the sum of standard Gibbs energies of reactants. Here, the term "standard" denotes the ideal behaviour (i.e., an infinite dilution) and a hypothetical standard concentration (typically 1 mol/kg). It does not imply any particular temperature or pressure because, although contrary to IUPAC recommendation, it is more convenient when describing aqueous systems over wide temperature and pressure ranges. The standard Gibbs energy (for each species or for the entire reaction) can be represented (from the basic definitions) as: In the above equation, the effect of temperature on Gibbs energy (and thus on the equilibrium constant) is ascribed entirely to heat capacity. To evaluate the integrals in this equation, the form of the dependence of heat capacity on temperature needs to be known. If the standard molar heat capacity C can be approximated by some analytic function of temperature (e.g. 
the Shomate equation), then the integrals involved in calculating other parameters may be solved to yield analytic expressions for them. For example, using approximations of the following forms: For pure substances (solids, gas, liquid): For ionic species at : then the integrals can be evaluated and the following final form is obtained: The constants A, B, C, a, b and the absolute entropy, S̆, required for evaluation of C(T), as well as the values of G298 K and S298 K for many species are tabulated in the literature. Pressure dependence The pressure dependence of the equilibrium constant is usually weak in the range of pressures normally encountered in industry, and therefore, it is usually neglected in practice. This is true for condensed reactant/products (i.e., when reactants and products are solids or liquid) as well as gaseous ones. For a gaseous-reaction example, one may consider the well-studied reaction of hydrogen with nitrogen to produce ammonia: N2 + 3 H2 2 NH3 If the pressure is increased by the addition of an inert gas, then neither the composition at equilibrium nor the equilibrium constant are appreciably affected (because the partial pressures remain constant, assuming an ideal-gas behaviour of all gases involved). However, the composition at equilibrium will depend appreciably on pressure when: the pressure is changed by compression or expansion of the gaseous reacting system, and the reaction results in the change of the number of moles of gas in the system. In the example reaction above, the number of moles changes from 4 to 2, and an increase of pressure by system compression will result in appreciably more ammonia in the equilibrium mixture. In the general case of a gaseous reaction: α A + β B σ S + τ T the change of mixture composition with pressure can be quantified using: where p denote the partial pressures and X the mole fractions of the components, P is the total system pressure, Kp is the equilibrium constant expressed in terms of partial pressures and KX is the equilibrium constant expressed in terms of mole fractions. The above change in composition is in accordance with Le Chatelier's principle and does not involve any change of the equilibrium constant with the total system pressure. Indeed, for ideal-gas reactions Kp is independent of pressure. In a condensed phase, the pressure dependence of the equilibrium constant is associated with the reaction volume. For reaction: α A + β B σ S + τ T the reaction volume is: where V̄ denotes a partial molar volume of a reactant or a product. For the above reaction, one can expect the change of the reaction equilibrium constant (based either on mole-fraction or molal-concentration scale) with pressure at constant temperature to be: The matter is complicated as partial molar volume is itself dependent on pressure. Effect of isotopic substitution Isotopic substitution can lead to changes in the values of equilibrium constants, especially if hydrogen is replaced by deuterium (or tritium). This equilibrium isotope effect is analogous to the kinetic isotope effect on rate constants, and is primarily due to the change in zero-point vibrational energy of H–X bonds due to the change in mass upon isotopic substitution. The zero-point energy is inversely proportional to the square root of the mass of the vibrating hydrogen atom, and will therefore be smaller for a D–X bond that for an H–X bond. 
An example is a hydrogen atom abstraction reaction R' + H–R R'–H + R with equilibrium constant KH, where R' and R are organic radicals such that R' forms a stronger bond to hydrogen than does R. The decrease in zero-point energy due to deuterium substitution will then be more important for R'–H than for R–H, and R'–D will be stabilized more than R–D, so that the equilibrium constant KD for R' + D–R R'–D + R is greater than KH. This is summarized in the rule the heavier atom favors the stronger bond. Similar effects occur in solution for acid dissociation constants (Ka) which describe the transfer of H+ or D+ from a weak aqueous acid to a solvent molecule: HA + H2O = H3O+ + A− or DA + D2O D3O+ + A−. The deuterated acid is studied in heavy water, since if it were dissolved in ordinary water the deuterium would rapidly exchange with hydrogen in the solvent. The product species H3O+ (or D3O+) is a stronger acid than the solute acid, so that it dissociates more easily, and its H–O (or D–O) bond is weaker than the H–A (or D–A) bond of the solute acid. The decrease in zero-point energy due to isotopic substitution is therefore less important in D3O+ than in DA so that KD < KH, and the deuterated acid in D2O is weaker than the non-deuterated acid in H2O. In many cases the difference of logarithmic constants pKD – pKH is about 0.6, so that the pD corresponding to 50% dissociation of the deuterated acid is about 0.6 units higher than the pH for 50% dissociation of the non-deuterated acid. For similar reasons the self-ionization of heavy water is less than that of ordinary water at the same temperature. See also Determination of equilibrium constants Stability constants of complexes Equilibrium fractionation References Data sources IUPAC SC-Database A comprehensive database of published data on equilibrium constants of metal complexes and ligands NIST Standard Reference Database 46 : Critically selected stability constants of metal complexes Inorganic and organic acids and bases pKa data in water and DMSO NASA Glenn Thermodynamic Database webpage with links to (self-consistent) temperature-dependent specific heat, enthalpy, and entropy for elements and molecules Equilibrium chemistry Dimensionless numbers of chemistry
Equilibrium constant
[ "Chemistry" ]
5,648
[ "Equilibrium chemistry", "Dimensionless numbers of chemistry" ]
1,123,353
https://en.wikipedia.org/wiki/Phosphorus-32
Phosphorus-32 (32P) is a radioactive isotope of phosphorus. The nucleus of phosphorus-32 contains 15 protons and 17 neutrons, one more neutron than the most common isotope of phosphorus, phosphorus-31. Phosphorus-32 only exists in small quantities on Earth as it has a short half-life of 14 days and so decays rapidly. Phosphorus is found in many organic molecules, and so phosphorus-32 has many applications in medicine, biochemistry, and molecular biology, where it can be used to trace phosphorylated molecules (for example, in elucidating metabolic pathways) and radioactively label DNA and RNA. Decay Phosphorus-32 has a short half-life of 14.268 days and decays into sulfur-32 by beta decay, as shown in this nuclear equation: 32P → 32S + e− + ν̄e 1.709 MeV of energy is released from this decay. The kinetic energy of the electron varies, with an average of approximately 0.5 MeV, and the remainder of the energy is carried by the nearly undetectable electron antineutrino. In comparison to other beta radiation-emitting nuclides, the electron is moderately energetic. It is blocked by around 1 m of air or 5 mm of acrylic glass. The sulfur-32 nucleus produced is in the ground state, so there is no additional gamma ray emission. Production Phosphorus-32 has important uses in medicine, biochemistry and molecular biology. It only exists naturally on earth in very small amounts and its short half-life means useful quantities have to be produced synthetically. Phosphorus-32 can be generated synthetically by irradiation of sulfur-32 with moderately fast neutrons, as shown in this nuclear equation: 32S + n → 32P + p The sulfur-32 nucleus captures the neutron and emits a proton, reducing the atomic number by one while maintaining the mass number of 32. This reaction has also been used to determine the yield of nuclear weapons. Uses Phosphorus is abundant in biological systems and, as a radioactive isotope, phosphorus-32 is almost chemically identical with the stable isotopes of the same element. Phosphorus-32 can be used to label biological molecules. The beta radiation emitted by the phosphorus-32 is sufficiently penetrating to be detected outside the organism or tissue which is being analysed. Biochemistry and molecular biology The metabolic pathways of organisms extensively use phosphorus in the generation of different biomolecules within the cell. Phosphorus-32 finds use for analysing metabolic pathways in pulse chase experiments, where a culture of cells is treated for a short time with a phosphorus-32-containing substrate. The sequence of chemical changes which happen to the substrate can then be traced by detecting which molecules contain the phosphorus-32 at multiple time points following the initial treatment. DNA and RNA contain a large quantity of phosphorus in the phosphodiester linkages between bases in the oligonucleotide chain. DNA and RNA can therefore be tracked by replacing the phosphorus with phosphorus-32. This technique is extensively used in Southern blot and Northern blot analysis of DNA and RNA samples respectively. In both cases, a phosphorus-32-containing DNA probe hybridises to its complementary sequence, where it appears in a gel. Its location can then be detected by photographic film. Plant sciences Phosphorus-32 is used in plant sciences for tracking a plant's uptake of fertiliser from the roots to the leaves. 
The phosphorus-32-labelled fertiliser is given to the plant hydroponically, or via water in the soil, and the usage of the phosphorus can be mapped from the emitted beta radiation. The information gathered by mapping the fertiliser uptake shows how the plant takes up and uses the phosphorus from fertiliser. Safety The high energy of the emitted beta particles and the short half-life of phosphorus-32 make it potentially harmful; its molar activity is 338.61 TBq/mmol (9151.6 Ci/mmol) and its specific activity is 10.590 EBq/kg (286.22 kCi/g). Typical safety precautions when working with phosphorus-32 include wearing a personal dosimeter to monitor exposure and an acrylic or perspex radiation shield to protect the body. Dense shielding, such as lead, is less effective due to the high-energy bremsstrahlung produced by the interaction of the beta particle and the shielding. Because the beta radiation from phosphorus-32 is blocked by around 1 m of air, it is also advisable to wear dosimeters on the parts of the body, for example the fingers, which come into close contact with the phosphorus-32-containing sample. References External links β- DECAY, β+ DECAY, ELECTRON CAPTURE, & ISOMERIC TRANSITION Isotopes of phosphorus Medical isotopes
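As a consistency check on the activity figures quoted in the Safety section above, the decay constant implied by the 14.268-day half-life reproduces them closely; the molar mass of about 31.97 g/mol used here is an assumed approximate value.

import math

half_life_s = 14.268 * 86400                    # half-life in seconds
decay_const = math.log(2) / half_life_s         # decay constant, per second
avogadro = 6.02214e23

molar_activity = decay_const * avogadro         # Bq per mole of 32P
specific_activity = molar_activity / 31.97e-3   # Bq per kg

print(f"molar activity    ~ {molar_activity / 1e3 / 1e12:.1f} TBq/mmol")   # about 338.6
print(f"specific activity ~ {specific_activity / 1e18:.2f} EBq/kg")        # about 10.59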
Phosphorus-32
[ "Chemistry" ]
1,030
[ "Isotopes of phosphorus", "Chemicals in medicine", "Isotopes", "Medical isotopes" ]
1,123,615
https://en.wikipedia.org/wiki/Herbig%E2%80%93Haro%20object
Herbig–Haro (HH) objects are bright patches of nebulosity associated with newborn stars. They are formed when narrow jets of partially ionised gas ejected by stars collide with nearby clouds of gas and dust at several hundred kilometers per second. Herbig–Haro objects are commonly found in star-forming regions, and several are often seen around a single star, aligned with its rotational axis. Most of them lie within about one parsec (3.26 light-years) of the source, although some have been observed several parsecs away. HH objects are transient phenomena that last around a few tens of thousands of years. They can change visibly over timescales of a few years as they move rapidly away from their parent star into the gas clouds of interstellar space (the interstellar medium or ISM). Hubble Space Telescope observations have revealed the complex evolution of HH objects over the period of a few years, as parts of the nebula fade while others brighten as they collide with the clumpy material of the interstellar medium. First observed in the late 19th century by Sherburne Wesley Burnham, Herbig–Haro objects were recognised as a distinct type of emission nebula in the 1940s. The first astronomers to study them in detail were George Herbig and Guillermo Haro, after whom they have been named. Herbig and Haro were working independently on studies of star formation when they first analysed the objects, and recognised that they were a by-product of the star formation process. Although HH objects are visible-wavelength phenomena, many remain invisible at these wavelengths due to dust and gas, and can only be detected at infrared wavelengths. Such objects, when observed in near-infrared, are called molecular hydrogen emission-line objects (MHOs). Discovery and history of observations The first HH object was observed in the late 19th century by Sherburne Wesley Burnham, when he observed the star T Tauri with the refracting telescope at Lick Observatory and noted a small patch of nebulosity nearby. It was thought to be an emission nebula, later becoming known as Burnham's Nebula, and was not recognized as a distinct class of object. T Tauri was found to be a very young and variable star, and is the prototype of the class of similar objects known as T Tauri stars which have yet to reach a state of hydrostatic equilibrium between gravitational collapse and energy generation through nuclear fusion at their centres. Fifty years after Burnham's discovery, several similar nebulae were discovered with almost star-like appearance. Both George Herbig and Guillermo Haro made independent observations of several of these objects in the Orion Nebula during the 1940s. Herbig also looked at Burnham's Nebula and found it displayed an unusual electromagnetic spectrum, with prominent emission lines of hydrogen, sulfur and oxygen. Haro found that all the objects of this type were invisible in infrared light. Following their independent discoveries, Herbig and Haro met at an astronomy conference in Tucson, Arizona in December 1949. Herbig had initially paid little attention to the objects he had discovered, being primarily concerned with the nearby stars, but on hearing Haro's findings he carried out more detailed studies of them. The Soviet astronomer Viktor Ambartsumian gave the objects their name (Herbig–Haro objects, normally shortened to HH objects), and based on their occurrence near young stars (a few hundred thousand years old), suggested they might represent an early stage in the formation of T Tauri stars. 
Studies of the HH objects showed they were highly ionised, and early theorists speculated that they were reflection nebulae containing low-luminosity hot stars deep inside. But the absence of infrared radiation from the nebulae meant there could not be stars within them, as these would have emitted abundant infrared light. In 1975 American astronomer R. D. Schwartz theorized that winds from T Tauri stars produce shocks in the ambient medium on encounter, resulting in generation of visible light. With the discovery of the first proto-stellar jet in HH 46/47, it became clear that HH objects are indeed shock-induced phenomena with shocks being driven by a collimated jet from protostars. Formation Stars form by gravitational collapse of interstellar gas clouds. As the collapse increases the density, radiative energy loss decreases due to increased opacity. This raises the temperature of the cloud which prevents further collapse, and a hydrostatic equilibrium is established. Gas continues to fall towards the core in a rotating disk. The core of this system is called a protostar. Some of the accreting material is ejected out along the star's axis of rotation in two jets of partially ionised gas (plasma). The mechanism for producing these collimated bipolar jets is not entirely understood, but it is believed that interaction between the accretion disk and the stellar magnetic field accelerates some of the accreting material from within a few astronomical units of the star away from the disk plane. At these distances the outflow is divergent, fanning out at an angle in the range of 10−30°, but it becomes increasingly collimated at distances of tens to hundreds of astronomical units from the source, as its expansion is constrained. The jets also carry away the excess angular momentum resulting from accretion of material onto the star, which would otherwise cause the star to rotate too rapidly and disintegrate. When these jets collide with the interstellar medium, they give rise to the small patches of bright emission which comprise HH objects. Properties Electromagnetic emission from HH objects is caused when their associated shock waves collide with the interstellar medium, creating what is called the "terminal working surfaces". The spectrum is continuous, but also has intense emission lines of neutral and ionized species. Spectroscopic observations of HH objects' doppler shifts indicate velocities of several hundred kilometers per second, but the emission lines in those spectra are weaker than what would be expected from such high-speed collisions. This suggests that some of the material they are colliding with is also moving along the beam, although at a lower speed. Spectroscopic observations of HH objects show they are moving away from the source stars at speeds of several hundred kilometres per second. In recent years, the high optical resolution of the Hubble Space Telescope has revealed the proper motion (movement along the sky plane) of many HH objects in observations spaced several years apart. As they move away from the parent star, HH objects evolve significantly, varying in brightness on timescales of a few years. Individual compact knots or clumps within an object may brighten and fade or disappear entirely, while new knots have been seen to appear. These arise likely because of the precession of their jets, along with the pulsating and intermittent eruptions from their parent stars. 
Faster jets catch up with earlier slower jets, creating the so-called "internal working surfaces", where streams of gas collide and generate shock waves and consequent emissions. The total mass being ejected by stars to form typical HH objects is estimated to be of the order of 10−8 to 10−6 per year, a very small amount of material compared to the mass of the stars themselves but amounting to about 1–10% of the total mass accreted by the source stars in a year. Mass loss tends to decrease with increasing age of the source. The temperatures observed in HH objects are typically about 9,000–12,000 K, similar to those found in other ionized nebulae such as H II regions and planetary nebulae. Densities, on the other hand, are higher than in other nebulae, ranging from a few thousand to a few tens of thousands of particles per cm3, compared to a few thousand particles per cm3 in most H II regions and planetary nebulae. Densities also decrease as the source evolves over time. HH objects consist mostly of hydrogen and helium, which account for about 75% and 24% of their mass respectively. Around 1% of the mass of HH objects is made up of heavier chemical elements, including oxygen, sulfur, nitrogen, iron, calcium and magnesium. Abundances of these elements, determined from emission lines of respective ions, are generally similar to their cosmic abundances. Many chemical compounds found in the surrounding interstellar medium, but not present in the source material, such as metal hydrides, are believed to have been produced by shock-induced chemical reactions. Around 20–30% of the gas in HH objects is ionized near the source star, but this proportion decreases at increasing distances. This implies the material is ionized in the polar jet, and recombines as it moves away from the star, rather than being ionized by later collisions. Shocking at the end of the jet can re-ionise some material, giving rise to bright "caps". Numbers and distribution HH objects are named approximately in order of their identification; HH 1/2 being the earliest such objects to be identified. More than a thousand individual objects are now known. They are always present in star-forming H II regions, and are often found in large groups. They are typically observed near Bok globules (dark nebulae which contain very young stars) and often emanate from them. Several HH objects have been seen near a single energy source, forming a string of objects along the line of the polar axis of the parent star. The number of known HH objects has increased rapidly over the last few years, but that is a very small proportion of the estimated up to 150,000 in the Milky Way, the vast majority of which are too far away to be resolved. Most HH objects lie within about one parsec of their parent star. Many, however, are seen several parsecs away. HH 46/47 is located about away from the Sun and is powered by a class I protostar binary. The bipolar jet is slamming into the surrounding medium at a velocity of 300 kilometers per second, producing two emission caps about apart. Jet outflow is accompanied by a long molecular gas outflow which is swept up by the jet itself. Infrared studies by Spitzer Space Telescope have revealed a variety of chemical compounds in the molecular outflow, including water (ice), methanol, methane, carbon dioxide (dry ice) and various silicates. Located around away in the Orion A molecular cloud, HH 34 is produced by a highly collimated bipolar jet powered by a class I protostar. 
Matter in the jet is moving at about 220 kilometers per second. Two bright bow shocks, separated by about , are present on the opposite sides of the source, followed by series of fainter ones at larger distances, making the whole complex about long. The jet is surrounded by a long weak molecular outflow near the source. Source stars The stars from which HH jets are emitted are all very young stars, a few tens of thousands to about a million years old. The youngest of these are still protostars in the process of collecting from their surrounding gases. Astronomers divide these stars into classes 0, I, II and III, according to how much infrared radiation the stars emit. A greater amount of infrared radiation implies a larger amount of cooler material surrounding the star, which indicates it is still coalescing. The numbering of the classes arises because class 0 objects (the youngest) were not discovered until classes I, II and III had already been defined. Class 0 objects are only a few thousand years old; so young that they are not yet undergoing nuclear fusion reactions at their centres. Instead, they are powered only by the gravitational potential energy released as material falls onto them. They mostly contain molecular outflows with low velocities (less than a hundred kilometres per second) and weak emissions in the outflows. Nuclear fusion has begun in the cores of Class I objects, but gas and dust are still falling onto their surfaces from the surrounding nebula, and most of their luminosity is accounted for by gravitational energy. They are generally still shrouded in dense clouds of dust and gas, which obscure all their visible light and as a result can only be observed at infrared and radio wavelengths. Outflows from this class are dominated by ionized species and velocities can range up to 400 kilometres per second. The in-fall of gas and dust has largely finished in Class II objects (Classical T Tauri stars), but they are still surrounded by disks of dust and gas, and produce weak outflows of low luminosity. Class III objects (Weak-line T Tauri stars) have only trace remnants of their original accretion disk. About 80% of the stars giving rise to HH objects are binary or multiple systems (two or more stars orbiting each other), which is a much higher proportion than that found for low mass stars on the main sequence. This may indicate that binary systems are more likely to generate the jets which give rise to HH objects, and evidence suggests the largest HH outflows might be formed when multiple–star systems disintegrate. It is thought that most stars originate from multiple star systems, but that a sizable fraction of these systems are disrupted before their stars reach the main sequence due to gravitational interactions with nearby stars and dense clouds of gas. The first and currently only (as of May 2017) large-scale Herbig-Haro object around a proto-brown dwarf is HH 1165, which is connected to the proto-brown dwarf Mayrit 1701117. HH 1165 has a length of 0.8 light-years (0.26 parsec) and is located in the vicinity of the sigma Orionis cluster. Previously only small mini-jets (≤0.03 parsec) were found around proto-brown dwarfs. Infrared counterparts HH objects associated with very young stars or very massive protostars are often hidden from view at optical wavelengths by the cloud of gas and dust from which they form. The intervening material can diminish the visual magnitude by factors of tens or even hundreds at optical wavelengths. 
Such deeply embedded objects can only be observed at infrared or radio wavelengths, usually in the frequencies of hot molecular hydrogen or warm carbon monoxide emission. In recent years, infrared images have revealed dozens of examples of "infrared HH objects". Most look like bow waves (similar to the waves at the head of a ship), and so are usually referred to as molecular "bow shocks". The physics of infrared bow shocks can be understood in much the same way as that of HH objects, since these objects are essentially the same – supersonic shocks driven by collimated jets from the opposite poles of a protostar. It is only the conditions in the jet and surrounding cloud that are different, causing infrared emission from molecules rather than optical emission from atoms and ions. In 2009 the acronym "MHO", for Molecular Hydrogen emission-line Object, was approved for such objects, detected in near-infrared, by the International Astronomical Union Working Group on Designations, and has been entered into their on-line Reference Dictionary of Nomenclature of Celestial Objects. As of 2010, almost 1000 objects are contained in the MHO catalog. Ultraviolet Herbig-Haro objects HH objects have been observed in the ultraviolet spectrum. See also Bipolar outflow Protostar Protoplanetary disk References External links Catalogue of HH Objects at VizieR Animations of HH object jets from HST observations A Catalogue of Molecular Hydrogen Emission-Line Objects in Outflows from Young Stars: MHO Catalogue Nebulae Star formation Articles containing video clips
Herbig–Haro object
[ "Astronomy" ]
3,207
[ "Nebulae", "Astronomical objects" ]
1,123,962
https://en.wikipedia.org/wiki/Shakedown%20%28continuum%20mechanics%29
In continuum mechanics, elastic shakedown behavior is one in which plastic deformation takes place during running-in, while, due to residual stresses or strain hardening, the steady state is perfectly elastic. Plastic shakedown behavior is one in which the steady state is a closed elastic-plastic loop, with no net accumulation of plastic deformation. Ratcheting behavior is one in which the steady state is an open elastic-plastic loop, with the material accumulating a net strain during each cycle. The shakedown concept can be applied to solid metallic materials under repeated cyclic loading or to granular materials under cyclic loading (such a case can occur in road pavements under traffic loading). Ratcheting Check A ratcheting check is not needed when only primary loading that meets static loading requirements is present. It is needed for cyclic thermal loading combined with primary loading that has a mean component. Shakedown of granular materials If repeated loading on the granular material induces stress beyond the yield surface, three different cases may be observed. In case 1 the residual strain in the material increases almost without limit. This so-called “ratcheting” state is close to what can be predicted by applying a simple Mohr–Coulomb criterion to cyclic loading. In responses like case 2, residual strain in the material grows to some extent, but at some stage the growth stops and further cyclic loading produces closed stress–strain hysteresis loops. Finally, in case 3, the growth of residual strain practically ceases when sufficient loading cycles are applied. Case 2 and case 3 are cases of plastic and elastic shakedown respectively. References Shakedown of Elastic-Plastic Structures, Jan A. Konig, Elsevier, 1987. Limit Analysis of Structures at Thermal Cycling, D. A. Gokhfeld and O. F. Cherniavsky, 1980. ASME Boiler and Pressure Vessel Code, American Society of Mechanical Engineers, New York, 2001. "Basic Conditions for Material and Structural Ratcheting", H. Hübel, Nuclear Engineering and Design, Vol. 162, pp 55–65 (1996) "Simplified Theory of Plastic Zones" (Chapter 2), H. Hübel, Springer International Publishing Switzerland, Cham (2016) Continuum mechanics
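A minimal sketch of the three responses described above, classifying a steady-state load cycle by its net plastic strain increment and its plastic strain range; the tolerance and the example numbers are invented for illustration and are not material data.

def classify_cycle(net_plastic_increment, plastic_strain_range, tol=1e-6):
    # Case 3: plastic straining has effectively stopped -> elastic shakedown
    if plastic_strain_range < tol:
        return "elastic shakedown"
    # Case 2: a closed elastic-plastic loop with no net accumulation -> plastic shakedown
    if abs(net_plastic_increment) < tol:
        return "plastic shakedown"
    # Case 1: strain keeps accumulating every cycle -> ratcheting
    return "ratcheting"

print(classify_cycle(0.0, 0.0))       # elastic shakedown
print(classify_cycle(0.0, 2e-3))      # plastic shakedown
print(classify_cycle(5e-4, 2e-3))     # ratcheting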
Shakedown (continuum mechanics)
[ "Physics" ]
440
[ "Classical mechanics", "Continuum mechanics" ]
1,124,025
https://en.wikipedia.org/wiki/Limit%20load%20%28physics%29
Limit load is the maximum load that a structure can safely carry. It is the load at which the structure is in a state of incipient plastic collapse. As the load on the structure increases, the displacements increase linearly in the elastic range until the load attains the yield value. Beyond this, the load-displacement response becomes non-linear and the plastic or irreversible part of the displacement increases steadily with the applied load. Plasticity spreads throughout the solid and, at the limit load, the plastic zone becomes very large, the displacements become unbounded, and the component is said to have collapsed. Any load above the limit load will lead to the formation of a plastic hinge in the structure. Engineers use limit states to define and check a structure's performance. Bounding Theorems of Plastic-Limit Load Analysis: Plastic limit theorems provide a way to calculate limit loads without having to solve the boundary value problem in continuum mechanics; finite element analysis provides an alternative way to estimate limit loads. The bounding theorems are: The Upper Bound Plastic Collapse Theorem The Lower Bound Plastic Collapse Theorem The Lower Bound Shakedown Theorem The Upper Bound Shakedown Theorem The Upper Bound Plastic Collapse Theorem states that an upper bound to the collapse loads can be obtained by postulating a collapse mechanism and computing the ratio of its plastic dissipation to the work done by the applied loads. References Notes Sources Brown University Engineering Notes Structural engineering Continuum mechanics
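As an illustration of the upper-bound theorem stated above, the sketch below estimates the collapse load of a simply supported rectangular beam under a central point load using a single midspan plastic hinge mechanism; the yield stress, section dimensions and span are assumed example values, not data from the sources listed.

def plastic_moment(yield_stress, width, depth):
    # Full plastic moment of a rectangular cross-section: Mp = sigma_y * b * h^2 / 4
    return yield_stress * width * depth**2 / 4.0

def limit_point_load(yield_stress, width, depth, span):
    # Mechanism: one hinge at midspan. The work balance P*(theta*L/2) = Mp*(2*theta)
    # gives the upper-bound collapse load P = 4*Mp/L.
    return 4.0 * plastic_moment(yield_stress, width, depth) / span

# Example: 250 MPa yield stress, 50 mm x 100 mm section, 2 m span
print(limit_point_load(250e6, 0.05, 0.10, 2.0))   # about 62,500 N, i.e. roughly 62.5 kN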
Limit load (physics)
[ "Physics", "Engineering" ]
289
[ "Structural engineering", "Continuum mechanics", "Classical mechanics", "Construction", "Civil engineering" ]
12,363,573
https://en.wikipedia.org/wiki/Diffusionless%20transformation
A diffusionless transformation, commonly known as displacive transformation, denotes solid-state alterations in crystal structures that do not hinge on the diffusion of atoms across extensive distances. Rather, these transformations manifest as a result of synchronized shifts in atomic positions, wherein atoms undergo displacements of distances smaller than the spacing between adjacent atoms, all while preserving their relative arrangement. An example of such a phenomenon is the martensitic transformation, a notable occurrence observed in the context of steel materials. The term "martensite" was originally coined to describe the rigid and finely dispersed constituent that emerges in steels subjected to rapid cooling. Subsequent investigations revealed that materials beyond ferrous alloys, such as non-ferrous alloys and ceramics, can also undergo diffusionless transformations. Consequently, the term "martensite" has evolved to encompass the resultant product arising from such transformations in a more inclusive manner. In the context of diffusionless transformations, a cooperative and homogeneous movement occurs, leading to a modification in the crystal structure during a phase change. These movements are small, usually less than their interatomic distances, and the neighbors of an atom remain close. The systematic movement of large numbers of atoms led some to refer to them as military transformations, in contrast to civilian diffusion-based phase changes, initially by Frederick Charles Frank and John Wyrill Christian. The most commonly encountered transformation of this type is the martensitic transformation, which is probably the most studied but is only one subset of non-diffusional transformations. The martensitic transformation in steel represents the most economically significant example of this category of phase transformations. However, an increasing number of alternatives, such as shape memory alloys, are becoming more important as well. Classification and definitions The phenomenon in which atoms or groups of atoms coordinate to displace their neighboring counterparts resulting in structural modification is known as a displacive transformation. The scope of displacive transformations is extensive, encompassing a diverse array of structural changes. As a result, additional classifications have been devised to provide a more nuanced understanding of these transformations. The first distinction can be drawn between transformations dominated by lattice-distortive strains and those where shuffles are of greater importance. Homogeneous lattice-distortive strains, also known as Bain strains, transform one Bravais lattice into a different one. This can be represented by a strain matrix S which transforms one vector, y, into a new vector, x: This is homogeneous, as straight lines are transformed into new straight lines. Examples of such transformations include a cubic lattice increasing in size on all three axes (dilation) or shearing into a monoclinic structure. Shuffles, aptly named, refer to the minute displacement of atoms within the unit cell. Notably, pure shuffles typically do not induce a modification in the shape of the unit cell; instead, they predominantly impact its symmetry and overall structural configuration. Phase transformations typically give rise to the formation of an interface delineating the transformed and parent materials. 
The energy requisite for establishing this new interface is contingent upon its characteristics, specifically how well the two structures interlock. An additional energy consideration arises when the transformation involves a change in shape. In such instances, if the new phase is constrained by the surrounding material, elastic or plastic deformation may occur, introducing a strain energy term. The interplay between these interfacial and strain energy terms significantly influences the kinetics of the transformation and the morphology of the resulting phase. Notably, in shuffle transformations characterized by minimal distortions, interfacial energies tend to predominate, distinguishing them from lattice-distortive transformations where the impact of strain energy is more pronounced. A subclassification of lattice-distortive displacements can be made by considering the dilatational and shear components of the distortion. In transformations dominated by the shear component, it is possible to find a line in the new phase that is undistorted from the parent phase, while all lines are distorted when the dilatation is predominant. Shear-dominated transformations can be further classified according to the magnitude of the strain energies involved compared to the innate vibrations of the atoms in the lattice, and hence whether the strain energies have a notable influence on the kinetics of the transformation and the morphology of the resulting phase. If the strain energy is a significant factor, the transformations are dubbed martensitic; if not, the transformation is referred to as quasi-martensitic. Iron-carbon martensitic transformation The distinction between austenitic and martensitic steels is subtle in nature. Austenite exhibits a face-centered cubic (FCC) unit cell, whereas the transformation to martensite entails a distortion of this cube into a body-centered tetragonal shape (BCT). This transformation occurs due to a displacive process, where interstitial carbon atoms lack the time to diffuse out. Consequently, the unit cell undergoes a slight elongation in one dimension and contraction in the other two. Despite differences in the symmetry of the crystal structures, the chemical bonding between them remains similar. The iron-carbon martensitic transformation generates an increase in hardness. The martensitic phase of the steel is supersaturated in carbon and thus undergoes solid solution strengthening. Similar to work-hardened steels, defects prevent atoms from sliding past one another in an organized fashion, causing the material to become harder. Pseudo martensitic transformation In addition to displacive transformation and diffusive transformation, a new type of phase transformation that involves a displacive sublattice transition and atomic diffusion was discovered using a high-pressure X-ray diffraction system. The new transformation mechanism has been christened pseudo martensitic transformation. References Notes Bibliography Christian, J.W., Theory of Transformations in Metals and Alloys, Pergamon Press (1975) Khachaturyan, A.G., Theory of Structural Transformations in Solids, Dover Publications, NY (1983) Green, D.J.; Hannick, R.; Swain, M.V. (1989). Transformation Toughening of Ceramics. Boca Raton: CRC Press. External links Extensive resources from Cambridge University The cubic-to-tetragonal transition European Symposium on Martensitic Transformations (ESOMAT) PTC Lab for martensite crystallography Phase transitions
Diffusionless transformation
[ "Physics", "Chemistry" ]
1,295
[ "Physical phenomena", "Phase transitions", "Phases of matter", "Critical phenomena", "Statistical mechanics", "Matter" ]
12,365,585
https://en.wikipedia.org/wiki/Signal%20recognition%20particle%20RNA
The signal recognition particle RNA (also known as 7SL, 6S, ffs, or 4.5S RNA) is part of the signal recognition particle (SRP) ribonucleoprotein complex. SRP recognizes the signal peptide and binds to the ribosome, halting protein synthesis. The translocon is a protein that is embedded in a membrane and which contains a transmembrane pore. When the complex binds to the translocon, SRP releases the ribosome and drifts away. The ribosome resumes protein synthesis, but now the protein is moving through the transmembrane pore. In this way SRP directs the movement of proteins within the cell to a transmembrane pore which allows the protein to cross the membrane to where it is needed. The RNA and protein components of this complex are highly conserved but do vary between the different kingdoms of life. The common SINE family Alu probably originated from a 7SL RNA gene after deletion of a central sequence. The eukaryotic SRP consists of a 300-nucleotide 7S RNA and six proteins: SRPs 72, 68, 54, 19, 14, and 9. Archaeal SRP consists of a 7S RNA and homologues of the eukaryotic SRP19 and SRP54 proteins. Eukaryotic and archaeal 7S RNAs have very similar secondary structures. In most bacteria, the SRP consists of an RNA molecule (4.5S) and the Ffh protein (a homologue of the eukaryotic SRP54 protein). Some Gram-positive bacteria (e.g. Bacillus subtilis) have a longer eukaryote-like SRP RNA that includes an Alu domain. In eukaryotes and archaea, eight helical elements fold into the Alu and S domains, separated by a long linker region. The Alu domain is thought to mediate the peptide chain elongation retardation function of the SRP. The universally conserved helix which interacts with the SRP54 M domain mediates signal sequence recognition. The SRP19-helix 6 complex is thought to be involved in SRP assembly and stabilises helix 8 for SRP54 binding. Humans have three functional SRP RNA genes, named RN7SL1, RN7SL2, and RN7SL3. The human genome in particular is known to contain a large amount of SRP RNA related sequence, including Alu repeats. Discovery SRP RNA was first detected in avian and murine oncogenic RNA (oncorna) virus particles. Subsequently, SRP RNA was found to be a stable component of uninfected HeLa cells, where it associated with membrane and polysome fractions. In 1980, cell biologists purified from canine pancreas an 11S "signal recognition protein" (fortuitously also abbreviated "SRP") which promoted the translocation of secretory proteins across the membrane of the endoplasmic reticulum. It was then discovered that SRP contained an RNA component. Comparing the SRP RNA genes from different species revealed helix 8 of the SRP RNA to be highly conserved in all domains of life. The regions near the 5′- and 3′-ends of the mammalian SRP RNA are similar to the dominant Alu family of middle repetitive sequences of the human genome. It is now understood that Alu DNA originated from SRP RNA by excision of the central SRP RNA-specific (S) fragment, followed by reverse transcription and integration into multiple sites of the human chromosomes. SRP RNAs have also been identified in some organelles, for example in the plastid SRPs of many photosynthetic organisms, and in the nuclear ribosomal internal transcribed spacer region of several ectomycorrhizal fungi. Transcription and processing Eukaryotic SRP RNAs are transcribed from DNA by RNA polymerase III (Pol III). RNA polymerase III also transcribes the genes for 5S ribosomal RNA, tRNA, 7SK RNA, and U6 spliceosomal RNA. 
The promoters of the human SRP RNA genes include elements located downstream of the transcriptional start site. Plant SRP RNA promoters contain an upstream stimulatory element (USE) and a TATA box. Yeast SRP RNA genes have a TATA box and additional intragenic promoter sequences (referred to as A- and B-blocks) which play a role in regulating transcription of the SRP gene by Pol III. In the bacteria, the genes are organized in operons and transcribed by RNA polymerase. The 5′-end of the small (4.5S) SRP RNA of many bacteria is cleaved by RNase P. The ends of the Bacillus subtilis SRP RNA are processed by RNase III. So far, no SRP RNA introns have been observed. Function Co-translational translocation The SRP RNA is an integral part of the small and the large domain of the SRP. The function of the small domain is to delay protein translation until the ribosome-bound SRP has an opportunity to associate with the membrane-resident SRP receptor (SR). Within the large domain, the SRP RNA of the signal peptide-charged SRP promotes the hydrolysis of two guanosine triphosphate (GTP) molecules. This reaction releases the SRP from the SRP receptor and the ribosome, allowing translation to continue and the protein to enter the translocon. The protein traverses the membrane co-translationally (during translation) and enters into another cellular compartment or the extracellular space. In eukaryotes, the target is the membrane of the endoplasmic reticulum (ER). In Archaea, SRP delivers proteins to the plasma membrane. In the bacteria, SRP primarily incorporates proteins into the inner membrane. Post-translational transport SRP also participates in the sorting of proteins after their synthesis has been completed (post-translational protein sorting). In eukaryotes, tail-anchored proteins possessing a hydrophobic insertion sequence at their C-terminus are delivered to the endoplasmic reticulum (ER) by the SRP. Similarly, the SRP assists post-translationally in the import of nuclear-encoded proteins to the thylakoid membrane of chloroplasts. Structure In 2005, a nomenclature for all SRP RNAs proposed a numbering system of 12 helices. Helix sections are named with a lower case letter suffix (e.g. 5a). Insertions, or helix "branches", are given dotted numbers (e.g. 9.1 and 12.1). The SRP RNA spans a wide phylogenetic spectrum with respect to size and the number of its structural features (see the SRP RNA Secondary Structure Examples, below). The smallest functional SRP RNAs have been found in mycoplasma and related species. Escherichia coli SRP RNA (also called 4.5S RNA) is composed of 114 nucleotide residues and forms an RNA stem-loop. The Gram-positive bacterium Bacillus subtilis encodes a larger 6S SRP RNA which resembles the Archaeal homologs but lacks SRP RNA helix 6. Archaeal SRP RNAs possess helices 1 to 8, lack helix 7, and are characterized by a tertiary structure which involves the apical loops of helix 3 and helix 4. The eukaryotic SRP RNAs lack helix 1 and contain a helix 7 of variable size. Some protozoan SRP RNAs have reduced helices 3 and 4. The ascomycota SRP RNAs have an altogether reduced small domain and lack helices 3 and 4. The largest SRP RNAs known to date are found in the yeasts (Saccharomycetes) which acquired helices 9 to 12 as insertions into helix 5, as well as an extended helix 7. Seed plants express numerous highly divergent SRP RNAs. 
Motifs Four conserved features (motifs) have been identified (shown in the Figure in dark gray): the (1) SRP54 binding motif, (2) Helix 6 GNAR tetraloop motif, (3) 5e motif, and (4) UGU(NR) motif. SRP54 binding The asymmetric loop between helical sections 8a and 8b and the adjacent base paired 8b section are a prominent property of every SRP RNA. Helical section 8b contains non-Watson–Crick base pairings which contribute to the formation of a flattened minor groove in the RNA suitable for the binding of protein SRP54 (called Ffh in the bacteria). The apical loop of helix 8 contains four, five, or six residues, depending on the species. It has a highly conserved guanosine as the first and an adenosine as the last loop residue. This feature is required for the interaction with the third adenosine residue of the helix 6 GNAR tetraloop motif. Helix 6 GNAR tetraloop The SRP RNAs of eukaryotes and Archaea have a GNAR tetraloop (N is for any nucleotide, R is for a purine) in helix 6. Its conserved adenosine residue is important for the binding of protein SRP19. This adenosine makes a tertiary interaction with another adenosine residue located in the apical loop of helix 8. 5e The 11 nucleotides of the 5e motif form four base pairs which are interrupted by a loop of three nucleotides. In the eukaryotes, the first nucleotide of the loop is an adenosine which is needed for the binding of protein SRP72. UGU(NR) The UGU(NR) motif connects helices 3 and 4 in the small (Alu) SRP domain. Fungal SRP RNAs lacking helices 3 and 4 contain the motif within the loop of helix 2. It is important in the binding of the SRP9/14 protein heterodimer as part of an RNA U-turn. Secondary Tertiary X-ray crystallography, nuclear magnetic resonance (NMR), and cryo-electron microscopy (cryo-EM) have been used to determine the molecular structure of portions of the SRP RNAs from various species. The available PDB structures show the RNA molecule either free or bound to one or more SRP proteins. Binding proteins One or more SRP proteins bind to the SRP RNA to assemble the functional SRP. The SRP proteins are named according to their approximate molecular mass measured in kilodaltons. Most bacterial SRPs are composed of SRP RNA and SRP54 (also named Ffh for "Fifty-four homolog"). The Archaeal SRP contains proteins SRP54 and SRP19. In eukaryotes, the SRP RNA combines with the imported SRP proteins SRP9/14, SRP19, and SRP68/72 in a region of the nucleolus. This pre-SRP is transported to the cytosol where it binds to protein SRP54. The molecular structures of the free or SRP RNA-bound proteins SRP9/14, SRP19, or SRP54 are known at high resolution. SRP9 and SRP14 SRP9 and SRP14 are structurally related and form the SRP9/14 heterodimer which binds to the SRP RNA of the small (Alu) domain. Yeast SRP lacks SRP9 and contains the structurally related binding protein SRP21. Yeast SRP14 forms homodimers in crystal and does not bind Alu. SRP9/14 is absent in the SRP of trypanosomes, which instead possess a tRNA-like molecule. SRP19 SRP19 is found in the SRP of eukaryotes and Archaea. Its primary role is in preparing the SRP RNA for the binding of SRP54, SRP68, and SRP72 by properly arranging SRP RNA helices 6 and 8. Yeast SRP contains Sec65p, a larger homolog of SRP19. SRP54 Protein SRP54 (named Ffh in the bacteria) is an essential component of every SRP. It is composed of three functional domains: the N-terminal (N) domain, the GTPase (G) domain, and the methionine-rich (M) domain. 
SRP68 and SRP72 Proteins SRP68 and SRP72 are structurally unrelated constituents of the large domain of the eukaryotic SRP. They form a stable SRP68/72 heterodimer. About one third of the human SRP68 protein was shown to bind to the SRP RNA. A relatively small region located near the C-terminus of SRP72 binds to the 5e SRP RNA motif. References Further reading External links The SRP Database (SRPDB): Alignments of SRP RNAs and associated proteins, SRP RNA secondary structures and 3-D models Rfam entry for Metazoan type signal recognition particle RNA Rfam entry for Bacterial small signal recognition particle RNA Rfam entry for Bacterial large signal recognition particle RNA Rfam entry for Fungal signal recognition particle RNA Rfam entry for Plant signal recognition particle RNA Rfam entry for Protozoan signal recognition particle RNA Rfam entry for Archaeal signal recognition particle RNA Dnatube Signal Recognition Particle Movie RNA Protein biosynthesis Non-coding RNA
Signal recognition particle RNA
[ "Chemistry" ]
2,844
[ "Protein biosynthesis", "Gene expression", "Biosynthesis" ]
12,366,559
https://en.wikipedia.org/wiki/Crevice%20corrosion
Crevice corrosion refers to corrosion occurring in occluded spaces such as interstices in which a stagnant solution is trapped and not renewed. These spaces are generally called crevices. Examples of crevices are gaps and contact areas between parts, under gaskets or seals, inside cracks and seams, spaces filled with deposits and under sludge piles. Mechanism The corrosion resistance of a stainless steel is dependent on the presence of an ultra-thin protective oxide film (passive film) on its surface, but it is possible under certain conditions for this oxide film to break down, for example in halide solutions or reducing acids. Areas where the oxide film can break down can also sometimes be the result of the way components are designed, for example under gaskets, in sharp re-entrant corners or associated with incomplete weld penetration or overlapping surfaces. These can all form crevices which can promote corrosion. To function as a corrosion site, a crevice has to be of sufficient width to permit entry of the corrodent, but narrow enough to ensure that the corrodent remains stagnant. Accordingly crevice corrosion usually occurs in gaps a few micrometres wide, and is not found in grooves or slots in which circulation of the corrodent is possible. This problem can often be overcome by paying attention to the design of the component, in particular to avoiding formation of crevices or at least keeping them as open as possible. Crevice corrosion is a very similar mechanism to pitting corrosion; alloys resistant to one are generally resistant to both. Crevice corrosion can be viewed as a less severe form of localized corrosion when compared with pitting. The depth of penetration and the rate of propagation in pitting corrosion are significantly greater than in crevice corrosion. Crevices can develop a local chemistry which is very different from that of the bulk fluid. For example, in boilers, concentration of non-volatile impurities may occur in crevices near heat-transfer surfaces because of the continuous water vaporization. "Concentration factors" of many millions are not uncommon for common water impurities like sodium, sulfate or chloride ions. The concentration process is often referred to as "hideout" (HO), whereas the opposite process, whereby the concentrations tend to even out (e.g., during shutdown) is called "hideout return" (HOR). In a neutral pH solution, the pH inside the crevice can drop to 2, a highly acidic condition that accelerates the corrosion of most metals and alloys. For a given crevice type, two factors are important in the initiation of crevice corrosion: the chemical composition of the electrolyte in the crevice and the electrical potential drop into the crevice. Researchers had previously claimed that either one or the other of the two factors was responsible for initiating crevice corrosion, but recently it has been shown that it is a combination of the two that causes active crevice corrosion. Both the drop of potential and the change in composition of the crevice electrolyte are produced by the oxygen depletion of the solution inside the crevice (oxygen consumption caused by the metal oxidation at the inner surface of the occluded cavity) and the separation of electroactive areas, with net anodic reactions (oxidation) occurring within the crevice and net cathodic reactions (reduction) occurring at the exterior of the crevice (on the bold surface). The ratio of the surface areas between the cathodic and anodic region is significant. 
Some of the phenomena occurring within the crevice may be somewhat reminiscent of galvanic corrosion: galvanic corrosion involves two connected metals in a single environment, whereas crevice corrosion involves one metal part exposed to two connected environments. The mechanism of crevice corrosion can be (but is not always) similar to that of pitting corrosion. However, there are sufficient differences to warrant a separate treatment. For example, in crevice corrosion, one has to consider the geometry of the crevice and the nature of the concentration process leading to the development of the differential local chemistry. The extreme and often unexpected local chemistry conditions inside the crevice need to be considered. Galvanic effects can play a role in crevice degradation. Mode of attack Depending on the environment developed in the crevice and the nature of the metal, crevice corrosion can take the form of pitting (i.e., formation of pits; note, however, that pitting and crevice corrosion are not the same phenomenon), filiform corrosion (a type of crevice corrosion that may occur on a metallic surface underneath an organic coating), intergranular attack, or stress corrosion cracking. Stress corrosion cracking A common form of crevice failure occurs due to stress corrosion cracking, where a crack or cracks develop from the base of the crevice where the stress concentration is greatest. This was the root cause of the 1967 collapse of the Silver Bridge over the Ohio River in West Virginia, where a single critical crack only about 3 mm long suddenly grew and fractured a tie bar joint. The rest of the bridge fell in less than a minute. The disaster was caused by one single point of failure (SPOF). The eyebars in the Silver Bridge were not redundant, as links were composed of only two bars each, of high strength steel (more than twice as strong as common mild steel), rather than a thick stack of thinner bars of modest material strength "combed" together as is usual for redundancy. With only two bars, the failure of one could impose excessive loading on the second, causing total failure, which is unlikely if more bars are used. While a low-redundancy chain can be engineered to the design requirements, the safety is completely dependent upon correct, high quality manufacturing and assembly. Significance The susceptibility to crevice corrosion varies widely from one material-environment system to another. In general, crevice corrosion is of greatest concern for materials which are normally passive metals, like stainless steel or aluminum. Crevice corrosion tends to be of greatest significance to components built of highly corrosion-resistant superalloys and operating with the purest-available water chemistry. For example, steam generators in nuclear power plants degrade largely by crevice corrosion. Crevice corrosion is extremely dangerous because it is localized and can lead to component failure while the overall material loss is minimal. The initiation and progress of crevice corrosion can be difficult to detect. See also Corrosion engineering References External links Crevice Corrosion of Stainless Steels Corrosion Fouling Engineering failures Materials degradation
Crevice corrosion
[ "Chemistry", "Materials_science", "Technology", "Engineering" ]
1,366
[ "Systems engineering", "Reliability engineering", "Metallurgy", "Technological failures", "Materials science", "Corrosion", "Engineering failures", "Electrochemistry", "Civil engineering", "Materials degradation", "Fouling" ]
12,367,441
https://en.wikipedia.org/wiki/PROLITH
PROLITH (abbreviated from Positive Resist Optical LITHography) is a computer simulator modeling the optical and chemical aspects of photolithography. Chris Mack started developing PROLITH after he began working in the field of photolithography at the NSA in 1983. PROLITH was first developed on an IBM PC. The models implemented by the software were based on the work done by Rick Dill at IBM and Andy Neureuther at UC Berkeley, together with Chris Mack's own contributions such as the Mack model. Originally, PROLITH was given away for free while the NSA was paying Chris Mack's salary. In 1990 he founded FINLE Technologies to commercialize PROLITH. The first commercial version of the software, named PROLITH/2, was released in June of that year. PROLITH was made easier to use, and it grew to include many more aspects of lithography simulation. FINLE Technologies was purchased in February 2000 by KLA-Tencor, which now markets PROLITH. References External links Semiconductor device fabrication Scientific simulation software
PROLITH
[ "Materials_science" ]
219
[ "Semiconductor device fabrication", "Microtechnology" ]
12,367,811
https://en.wikipedia.org/wiki/GATA%20transcription%20factor
The GATA transcription factor family consists of six DNA-binding proteins (GATA1-6) that regulate transcription of DNA through their ability to bind the DNA sequence "GATA"; they can therefore affect a range of diseases. These six proteins are divided into two subfamilies, GATA1/2/3 and GATA4/5/6, based on differences in the stem cell tissues they help differentiate. All six proteins are required for differentiating mesoderm-derived tissues. The difference is that GATA1/2/3 is required in the development and differentiation of ectoderm-derived tissues (such as the hematopoietic and central nervous systems), while GATA4/5/6 is required for differentiation of endoderm-derived tissues (such as embryonic stem cells of the heart and skin). Mutations in the GATA genes lead to problems in the thyroid, ears, kidney, and heart, and can cause cancer. GATA proteins can be used as biomarkers in predicting different diseases such as acute megakaryoblastic leukemia (AMKL) in Down syndrome, colorectal cancer, and breast cancer. GATA transcription factors have been correlated with a broad influence on stem cell development. Findings, however, have pointed to a more direct influence by GATA transcription factors, as they are salient components in the more targeted regulation of gene expression. Data also point to roles the GATA transcription factors play in stages past early development in endocrine organs. Molecular structure of the GATA transcription factor family In non-vertebrates, the GATA genes are located close together on the chromosomes. Due to evolution, these genes in humans moved apart and are separated into 6 distinct chromosomal regions. To regulate transcription of DNA, GATA transcription factors containing the class IV zinc finger motif look for GATA sites in DNA, with two conserved zinc fingers involved in long-range DNA interactions. In non-vertebrates, GATA transcription factors contain one zinc finger DNA binding domain (ZNI). In humans, GATA transcription factors contain two zinc finger DNA binding domains (ZNI and ZNII), which look for adenine or thymine before the GATA sequence and adenine or guanine after, as shown by the schematic (A/T)GATA(A/G). Generally, ZNI and ZNII follow the sequence CX2CX17–18CX2C. 70% of the regions in the zinc finger domains are the same, while the terminal amino and carboxyl domains can change. Genes In humans: GATA1, GATA2, GATA3, GATA4, GATA5, GATA6 In yeast: GLN3, GAT1, DAL80, GZF3 Role in breast cancer Despite their influence on endocrine organs and cell development, GATA transcription factors have a complex relation to the development and growth of breast cancer. Their immediate influence is not yet known; their high risk of mutation, however, makes determining that influence of paramount importance in battling breast cancer. Some research on the role of GATA transcription factors in the development of breast cancer suggests that one specific GATA transcription factor, GATA3, can actually inhibit further growth of breast cancer cells. The complete mechanism by which this happens is still not clear. However, research has suggested that the GATA transcription factor creates an unfavorable chemical environment for the breast cancer tumor cells which inhibits the progression of these cells. One way that has been suggested is that the GATA transcription factor lowers the level of adenosine triphosphate (ATP) in the cell. 
This creates an unfavorable chemical environment for the breast cancer cells because they usually require high levels of ATP to survive. In addition, research has suggested that there is a specific gene called TRP1 that is expressed in breast cancer cells, and that the GATA3 transcription factor plays a role in regulating this gene. References External links Transcription factors Articles containing video clips
GATA transcription factor
[ "Chemistry", "Biology" ]
863
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
12,368,154
https://en.wikipedia.org/wiki/Molecular%20Biology%20and%20Evolution
Molecular Biology and Evolution (MBE) is a monthly peer-reviewed scientific journal published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. It publishes work at the intersection of molecular biology and evolutionary biology. The founding editors were Walter Fitch and Masatoshi Nei; the present editors-in-chief are Brandon Gaut and Claudia Russo. Subject matter Evolution is the most fundamental of biological processes. MBE publishes research on the patterns and processes that impact the evolution of life at molecular levels, across a full breadth of taxonomy, genomic organization, and functions, forms, and phenotypes. MBE's Methods, Resource, and Protocol sections include research tools that enable discoveries, while the Reviews and Perspectives synthesize different aspects of evolutionary thought. Editorial process All MBE manuscripts are peer-reviewed. Decisions to publish are made by the Board of Editors, led by the Editors-in-Chief (EiCs) who oversee processing and set the direction of the journal. The board also includes Associate Editors (AEs), who solicit peer reviews and make an initial recommendation, and the Senior Editors (SEs), who make the final recommendation that leads to the decision for each manuscript. Management MBE is entirely owned by The Society for Molecular Biology and Evolution (SMBE). The SMBE Council appoints the Editor-in-Chief. The Journal is published by Oxford University Press. MBE has transitioned to full Open Access, with all papers free to access from 1 January 2021. All articles accepted for publication from 1 August 2020 have been published as Open Access. Impact According to the Journal Citation Reports, the journal has a 2017 Impact Factor of 10.217, a 2018 Impact Factor of 14.797, a 2019 Impact Factor of 11.062, a 2022 Impact Factor of 10.700 and a 2023 Impact Factor of 11.00. In 2023, MBE was ranked 8th out of the 172 journals in the Genetics & Heredity category, 16th out of 285 journals within the Biochemistry & Molecular Biology category and 4th out of 54 in the Evolutionary Biology category. References External links Molecular and cellular biology journals Evolutionary biology journals English-language journals Monthly journals Oxford University Press academic journals Academic journals associated with learned and professional societies
Molecular Biology and Evolution
[ "Chemistry" ]
450
[ "Molecular and cellular biology journals", "Molecular biology" ]
12,368,347
https://en.wikipedia.org/wiki/Radio-frequency%20microelectromechanical%20system
A radio-frequency microelectromechanical system (RF MEMS) is a microelectromechanical system with electronic components comprising moving sub-millimeter-sized parts that provide radio-frequency (RF) functionality. RF functionality can be implemented using a variety of RF technologies. Besides RF MEMS technology, III-V compound semiconductor (GaAs, GaN, InP, InSb), ferrite, ferroelectric, silicon-based semiconductor (RF CMOS, SiC and SiGe), and vacuum tube technology are available to the RF designer. Each of the RF technologies offers a distinct trade-off between cost, frequency, gain, large-scale integration, lifetime, linearity, noise figure, packaging, power handling, power consumption, reliability, ruggedness, size, supply voltage, switching time and weight. Components There are various types of RF MEMS components, such as CMOS integrable RF MEMS resonators and self-sustained oscillators with small form factor and low phase noise, RF MEMS tunable inductors, and RF MEMS switches, switched capacitors and varactors. Switches, switched capacitors and varactors The components discussed in this article are based on RF MEMS switches, switched capacitors and varactors. These components can be used instead of FET and HEMT switches (FET and HEMT transistors in common gate configuration), and PIN diodes. RF MEMS switches, switched capacitors and varactors are classified by actuation method (electrostatic, electrothermal, magnetostatic, piezoelectric), by axis of deflection (lateral, vertical), by circuit configuration (series, shunt), by clamp configuration (cantilever, fixed-fixed beam), or by contact interface (capacitive, ohmic). Electrostatically actuated RF MEMS components offer low insertion loss and high isolation, linearity, power handling and Q factor, do not consume power, but require a high control voltage and hermetic single-chip packaging (thin film capping, LCP or LTCC packaging) or wafer-level packaging (anodic or glass frit wafer bonding). RF MEMS switches were pioneered by IBM Research Laboratory, San Jose, CA, Hughes Research Laboratories, Malibu, CA, Northeastern University in cooperation with Analog Devices, Boston, MA, Raytheon, Dallas, TX, and Rockwell Science, Thousand Oaks, CA. A capacitive fixed-fixed beam RF MEMS switch, as shown in Fig. 1(a), is in essence a micro-machined capacitor with a moving top electrode, which is the beam. It is generally connected in shunt with the transmission line and used in X- to W-band (77 GHz and 94 GHz) RF MEMS components. An ohmic cantilever RF MEMS switch, as shown in Fig. 1(b), is capacitive in the up-state, but makes an ohmic contact in the down-state. It is generally connected in series with the transmission line and is used in DC to the Ka-band components. From an electromechanical perspective, the components behave like a damped mass-spring system, actuated by an electrostatic force. The spring constant is a function of the dimensions of the beam, as well as the Young's modulus, the residual stress and the Poisson ratio of the beam material. The electrostatic force is a function of the capacitance and the bias voltage. Knowledge of the spring constant allows for hand calculation of the pull-in voltage, which is the bias voltage necessary to pull-in the beam, whereas knowledge of the spring constant and the mass allows for hand calculation of the switching time. From an RF perspective, the components behave like a series RLC circuit with negligible resistance and inductance. 
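The hand calculations mentioned above can be sketched in a few lines of code. The following Python snippet uses the standard parallel-plate approximation for an electrostatically actuated beam; the spring constant, gap, electrode area and mass are hypothetical order-of-magnitude values for an RF MEMS switch, not figures taken from this article.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pull_in_voltage(k, gap, area):
    """Parallel-plate approximation: V_pi = sqrt(8*k*g0^3 / (27*eps0*A))."""
    return math.sqrt(8.0 * k * gap**3 / (27.0 * EPS0 * area))

def resonant_frequency(k, mass):
    """Mechanical resonance f0 = (1/2pi)*sqrt(k/m) of the mass-spring model."""
    return math.sqrt(k / mass) / (2.0 * math.pi)

# Hypothetical, order-of-magnitude values for a fixed-fixed beam switch.
k = 10.0                 # spring constant, N/m
gap = 3.0e-6             # beam-to-electrode gap, m
area = 100e-6 * 100e-6   # electrode area, m^2
mass = 1.0e-10           # beam mass, kg

print(f"pull-in voltage ~ {pull_in_voltage(k, gap, area):.1f} V")
print(f"mechanical resonance ~ {resonant_frequency(k, mass) / 1e3:.0f} kHz")
```

With these assumed values the pull-in voltage comes out at a few tens of volts and the mechanical resonance in the tens of kilohertz, which is consistent with the high control voltages and microsecond-range switching times mentioned for electrostatically actuated RF MEMS components.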
The up- and down-state capacitance are in the order of 50 fF and 1.2 pF, which are functional values for millimeter-wave circuit design. Switches typically have a capacitance ratio of 30 or higher, while switched capacitors and varactors have a capacitance ratio of about 1.2 to 10. The loaded Q factor is between 20 and 50 in the X-, Ku- and Ka-band. RF MEMS switched capacitors are capacitive fixed-fixed beam switches with a low capacitance ratio. RF MEMS varactors are capacitive fixed-fixed beam switches which are biased below pull-in voltage. Other examples of RF MEMS switches are ohmic cantilever switches, and capacitive single pole N throw (SPNT) switches based on the axial gap wobble motor. Biasing RF MEMS components are biased electrostatically using a bipolar NRZ drive voltage, as shown in Fig. 2, in order to avoid dielectric charging and to increase the lifetime of the device. Dielectric charges exert a permanent electrostatic force on the beam. The use of a bipolar NRZ drive voltage instead of a DC drive voltage avoids dielectric charging whereas the electrostatic force exerted on the beam is maintained, because the electrostatic force varies quadratically with the DC drive voltage. Electrostatic biasing implies no current flow, allowing high-resistivity bias lines to be used instead of RF chokes. Packaging RF MEMS components are fragile and require wafer level packaging or single chip packaging which allow for hermetic cavity sealing. A cavity is required to allow movement, whereas hermeticity is required to prevent cancellation of the spring force by the Van der Waals force exerted by water droplets and other contaminants on the beam. RF MEMS switches, switched capacitors and varactors can be packaged using wafer level packaging. Large monolithic RF MEMS filters, phase shifters, and tunable matching networks require single chip packaging. Wafer-level packaging is implemented before wafer dicing, as shown in Fig. 3(a), and is based on anodic, metal diffusion, metal eutectic, glass frit, polymer adhesive, and silicon fusion wafer bonding. The selection of a wafer-level packaging technique is based on balancing the thermal expansion coefficients of the material layers of the RF MEMS component and those of the substrates to minimize the wafer bow and the residual stress, as well as on alignment and hermeticity requirements. Figures of merit for wafer-level packaging techniques are chip size, hermeticity, processing temperature, (in)tolerance to alignment errors and surface roughness. Anodic and silicon fusion bonding do not require an intermediate layer, but do not tolerate surface roughness. Wafer-level packaging techniques based on a bonding technique with a conductive intermediate layer (conductive split ring) restrict the bandwidth and isolation of the RF MEMS component. The most common wafer-level packaging techniques are based on anodic and glass frit wafer bonding. Wafer-level packaging techniques, enhanced with vertical interconnects, offer the opportunity of three-dimensional integration. Single-chip packaging, as shown in Fig. 3(b), is implemented after wafer dicing, using pre-fabricated ceramic or organic packages, such as LCP injection molded packages or LTCC packages. Pre-fabricated packages require hermetic cavity sealing through clogging, shedding, soldering or welding. Figures of merit for single-chip packaging techniques are chip size, hermeticity, and processing temperature. 
Microfabrication An RF MEMS fabrication process is based on surface micromachining techniques, and allows for integration of SiCr or TaN thin film resistors (TFR), metal-air-metal (MAM) capacitors, metal-insulator-metal (MIM) capacitors, and RF MEMS components. An RF MEMS fabrication process can be realized on a variety of wafers: III-V compound semi-insulating, borosilicate glass, fused silica (quartz), LCP, sapphire, and passivated silicon wafers. As shown in Fig. 4, RF MEMS components can be fabricated in class 100 clean rooms using 6 to 8 optical lithography steps with a 5 μm contact alignment error, whereas state-of-the-art MMIC and RFIC fabrication processes require 13 to 25 lithography steps. As outlined in Fig. 4, the essential microfabrication steps are: Deposition of the bias lines (Fig. 4, step 1) Deposition of the electrode layer (Fig. 4, step 2) Deposition of the dielectric layer (Fig. 4, step 3) Deposition of the sacrificial spacer (Fig. 4, step 4) Deposition of seed layer and subsequent electroplating (Fig. 4, step 5) Beam patterning, release and critical point drying (Fig. 4, step 6) With the exception of the removal of the sacrificial spacer, which requires critical point drying, the fabrication steps are similar to CMOS fabrication process steps. RF MEMS fabrication processes, unlike BST or PZT ferroelectric and MMIC fabrication processes, do not require electron beam lithography, MBE, or MOCVD. Reliability Contact interface degradation poses a reliability issue for ohmic cantilever RF MEMS switches, whereas dielectric charging beam stiction, as shown in Fig. 5(a), and humidity induced beam stiction, as shown in Fig. 5(b), pose a reliability issue for capacitive fixed-fixed beam RF MEMS switches. Stiction is the inability of the beam to release after removal of the drive voltage. A high contact pressure assures a low-ohmic contact or alleviates dielectric charging induced beam stiction. Commercially available ohmic cantilever RF MEMS switches and capacitive fixed-fixed beam RF MEMS switches have demonstrated lifetimes in excess of 100 billion cycles at 100 mW of RF input power. Reliability issues pertaining to high-power operation are discussed in the limiter section. Applications RF MEMS resonators are applied in filters and reference oscillators. RF MEMS switches, switched capacitors and varactors are applied in electronically scanned (sub)arrays (phase shifters) and software-defined radios (reconfigurable antennas, tunable band-pass filters). Antennas Polarization and radiation pattern reconfigurability, and frequency tunability, are usually achieved by incorporation of III-V semiconductor components, such as SPST switches or varactor diodes. However, these components can be readily replaced by RF MEMS switches and varactors in order to take advantage of the low insertion loss and high Q factor offered by RF MEMS technology. In addition, RF MEMS components can be integrated monolithically on low-loss dielectric substrates, such as borosilicate glass, fused silica or LCP, whereas III-V compound semi-insulating and passivated silicon substrates are generally lossier and have a higher dielectric constant. A low loss tangent and low dielectric constant are of importance for the efficiency and the bandwidth of the antenna. 
The prior art includes an RF MEMS frequency tunable fractal antenna for the 0.1–6 GHz frequency range, and the actual integration of RF MEMS switches on a self-similar Sierpinski gasket antenna to increase its number of resonant frequencies, extending its range to 8 GHz, 14 GHz and 25 GHz, an RF MEMS radiation pattern reconfigurable spiral antenna for 6 and 10 GHz, an RF MEMS radiation pattern reconfigurable spiral antenna for the 6–7 GHz frequency band based on packaged Radant MEMS SPST-RMSW100 switches, an RF MEMS multiband Sierpinski fractal antenna, again with integrated RF MEMS switches, functioning at different bands from 2.4 to 18 GHz, and a 2-bit Ka-band RF MEMS frequency tunable slot antenna. The Samsung Omnia W was the first smart phone to include an RF MEMS antenna. Filters RF bandpass filters can be used to increase out-of-band rejection, in case the antenna fails to provide sufficient selectivity. Out-of-band rejection eases the dynamic range requirement on the LNA and the mixer in the light of interference. Off-chip RF bandpass filters based on lumped bulk acoustic wave (BAW), ceramic, SAW, quartz crystal, and FBAR resonators have superseded distributed RF bandpass filters based on transmission line resonators, printed on substrates with low loss tangent, or based on waveguide cavities. Tunable RF bandpass filters offer a significant size reduction over switched RF bandpass filter banks. They can be implemented using III-V semiconducting varactors, BST or PZT ferroelectric and RF MEMS resonators and switches, switched capacitors and varactors, and YIG ferrites. RF MEMS resonators offer the potential of on-chip integration of high-Q resonators and low-loss bandpass filters. The Q factor of RF MEMS resonators is in the order of 100–1000. RF MEMS switch, switched capacitor and varactor technology offers the tunable filter designer a compelling trade-off between insertion loss, linearity, power consumption, power handling, size, and switching time. Phase shifters Passive subarrays based on RF MEMS phase shifters may be used to lower the number of T/R modules in an active electronically scanned array. The statement is illustrated with examples in Fig. 6: assume a one-by-eight passive subarray is used for transmit as well as receive, with the following characteristics: f = 38 GHz, Gr = Gt = 10 dBi, BW = 2 GHz, Pt = 4 W. The low loss (6.75 ps/dB) and good power handling (500 mW) of the RF MEMS phase shifters allow an EIRP of 40 W and a Gr/T of 0.036 1/K. EIRP, also referred to as the power-aperture product, is the product of the transmit gain, Gt, and the transmit power, Pt. Gr/T is the quotient of the receive gain and the antenna noise temperature. A high EIRP and Gr/T are a prerequisite for long-range detection. The EIRP and Gr/T are a function of the number of antenna elements per subarray and of the maximum scanning angle. The number of antenna elements per subarray should be chosen in order to optimize the EIRP or the EIRP x Gr/T product, as shown in Fig. 7 and Fig. 8. The radar range equation can be used to calculate the maximum range for which targets can be detected with 10 dB of SNR at the input of the receiver. In this equation, kB is the Boltzmann constant, λ is the free-space wavelength, and σ is the RCS of the target. Range values are tabulated in Table 1 for the following targets: a sphere with a radius, a, of 10 cm (σ = πa²), a dihedral corner reflector with facet size, a, of 10 cm (σ = 12a⁴/λ²), the rear of a car (σ = 20 m²) and a non-evasive fighter jet (σ = 400 m²). 
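The subarray figures quoted above can be reproduced with a short calculation. The sketch below uses a standard form of the radar range equation written in terms of EIRP and Gr/T; the antenna noise temperature is a hypothetical value chosen so that Gr/T comes out near the quoted 0.036 1/K, since it is not stated in the text.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
C0 = 299792458.0     # speed of light, m/s

# Subarray parameters from the example above.
f = 38e9             # Hz
gain_dbi = 10.0      # Gt = Gr = 10 dBi
bw = 2e9             # Hz
pt = 4.0             # W
snr_min_db = 10.0    # required SNR at the receiver input

# Assumed antenna noise temperature (hypothetical, not given in the text).
t_ant = 280.0        # K

g = 10 ** (gain_dbi / 10.0)
wavelength = C0 / f

eirp = pt * g               # product of transmit power and transmit gain, ~40 W
g_over_t = g / t_ant        # ~0.036 1/K with the assumed noise temperature
snr_min = 10 ** (snr_min_db / 10.0)

def max_range(rcs):
    """Standard radar range equation solved for range at the minimum SNR."""
    num = eirp * g_over_t * wavelength**2 * rcs
    den = (4.0 * math.pi) ** 3 * K_B * snr_min * bw
    return (num / den) ** 0.25

targets = {
    "10 cm sphere": math.pi * 0.1**2,
    "10 cm dihedral corner reflector": 12 * 0.1**4 / wavelength**2,
    "rear of a car": 20.0,
    "fighter jet": 400.0,
}
print(f"EIRP = {eirp:.1f} W, Gr/T = {g_over_t:.3f} 1/K")
for name, rcs in targets.items():
    print(f"{name}: sigma = {rcs:.3g} m^2, max range ~ {max_range(rcs):.0f} m")
```

The fourth-root dependence on EIRP and Gr/T is why both quantities are emphasized as prerequisites for long-range detection: doubling either improves the maximum range by only about 19 percent.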
RF MEMS phase shifters enable wide-angle passive electronically scanned arrays, such as lens arrays, reflect arrays, subarrays and switched beamforming networks, with high EIRP and high Gr/T. The prior art in passive electronically scanned arrays includes an X-band continuous transverse stub (CTS) array fed by a line source synthesized by sixteen 5-bit reflect-type RF MEMS phase shifters based on ohmic cantilever RF MEMS switches, an X-band 2-D lens array consisting of parallel-plate waveguides and featuring 25,000 ohmic cantilever RF MEMS switches, and a W-band switched beamforming network based on an RF MEMS SP4T switch and a Rotman lens focal plane scanner. The use of true-time-delay (TTD) phase shifters instead of RF MEMS phase shifters allows UWB radar waveforms with associated high range resolution, and avoids beam squinting or frequency scanning. TTD phase shifters are designed using the switched-line principle or the distributed loaded-line principle. Switched-line TTD phase shifters outperform distributed loaded-line TTD phase shifters in terms of time delay per decibel of noise figure (NF), especially at frequencies up to X-band, but are inherently digital and require low-loss and high-isolation SPNT switches. Distributed loaded-line TTD phase shifters, however, can be realized in analog or digital form, and in smaller form factors, which is important at the subarray level. Analog phase shifters are biased through a single bias line, whereas multibit digital phase shifters require a parallel bus along with complex routing schemes at the subarray level. References Further reading S. Lucyszyn (Ed), "Advanced RF MEMS", Cambridge University Press, Aug. 2010. Microelectronic and microelectromechanical systems
Radio-frequency microelectromechanical system
[ "Materials_science", "Engineering" ]
3,628
[ "Microelectronic and microelectromechanical systems", "Materials science", "Microtechnology" ]
12,368,774
https://en.wikipedia.org/wiki/Nanosubmarine
Nanosubmarines, or nanosubs, are synthetic microscopic devices that can navigate and perform specific tasks within the human body. Most of the self-propelled devices will be used to detect substances, decontaminate the environment, perform targeted drug delivery, conduct microsurgery and destroy malicious cells. Nanosubmarines use a variety of methods to navigate through the body; currently the preferred method uses the electrochemical properties of molecules. There have been multiple successful tests using this technology to heal mice with inflammatory bowel diseases. The general goal of nanosubmarines is to produce a machine which can sense and respond autonomously, all while being fueled by its environment. Uses The main purpose of a nanosubmarine is to navigate the body and perform a specific task. The most commonly proposed task is the treatment and diagnosis of diseases from within the body. This is supported by the task of detecting substances, as most diseases cause a specific type of protein or other molecule to be made in abundance within the bloodstream. Another proposed task is microsurgery. With this technology, doctors will be able to perform surgery on specific locations from within the body. One example of this could be a treatment for cancer. A nanosubmarine could be built to detect specific cancer cells within the body; after locating the cells, the nanosub would be able to kill only the mutated cells and ignore healthy cells. Navigation Navigation is one of the most difficult aspects to develop in nanosubmarines. The goal is to be able to travel throughout the bloodstream without getting stuck in even the smallest of capillaries. However, this is difficult because the smallest capillaries are 2 μm (2.0 × 10⁻⁶ m) across; blood cells are about 7 μm, but they are easily pliable and can squeeze through the capillaries. Another challenge with navigation is the fact that physics restricts the amount of propulsion such a small device can output. The blood flow is simply too strong for any device to compete with, so the nanosubmarine would have to be carried by the blood. One form of propulsion nanosubmarines could use is electrochemical. One example of a motor is a nanorod which is platinum on one side and gold on the other. When submerged in hydrogen peroxide, the platinum oxidizes H₂O₂ into 2H⁺ and O₂. This process occurs because platinum takes two electrons from the molecule. On the other side of the rod, the gold reduces hydrogen peroxide into water; in doing so, electrons are pulled from the gold. This causes a steady electron flow from the platinum side of the rod towards the gold side. Since the rod is so small, Newton's third law of motion applies: for every action there is a reaction, and when the electrons are pulled across the surface of the rod, the rod is pulled in the opposite direction. Scientific achievements The first recorded success of a nanosubmarine was achieved by a team of students led by Dan Peer from Tel Aviv University in Israel. This was a continuation of Peer's work at Harvard on nanosubmarines and targeted drug delivery. Tests have proven successful in delivering drugs to heal mice with ulcerative colitis. Tests will continue, and the team plans to experiment on the human body soon. See also Fantastic Voyage, novel and movie based on the nanosubmarine theme References Nanotechnology
Nanosubmarine
[ "Materials_science", "Engineering" ]
710
[ "Nanotechnology", "Materials science" ]
2,383,387
https://en.wikipedia.org/wiki/Neopterin
Neopterin is an organic compound belonging to the pteridine class of heterocyclic compounds. It is synthesised by human macrophages upon stimulation with the cytokine interferon-gamma and is indicative of a pro-inflammatory immune status. Neopterin serves as a marker of cellular immune system activation. In humans neopterin follows a circadian (daily) and circaseptan (weekly) rhythm. Biosynthesis The biosynthesis of neopterin occurs in two steps from guanosine triphosphate (GTP). The first step is catalyzed by GTP cyclohydrolase, which opens the ribose group. Phosphatases next catalyze the hydrolysis of the phosphate ester group. Neopterin as disease marker Measurement of neopterin concentrations in body fluids like blood serum, cerebrospinal fluid or urine provides information about the activation of cellular immunity in humans under the control of type 1 T helper cells. High neopterin production is associated with increased production of reactive oxygen species, so neopterin concentrations also allow the extent of oxidative stress elicited by the immune system to be estimated. Increased neopterin production is found in, but not limited to, the following diseases: Viral infections including human immunodeficiency virus (HIV), hepatitis B and hepatitis C, SARS-CoV-1, SARS-CoV-2. Bacterial infections by intracellular living bacteria such as Borrelia (Lyme disease), Mycobacterium tuberculosis, and Helicobacter pylori. Infections by parasites such as Plasmodium (malaria). Autoimmune diseases such as rheumatoid arthritis (RA) and systemic lupus erythematosus (SLE). Malignant tumor diseases. Allograft rejection episodes. A leukodystrophy called Aicardi-Goutieres syndrome. Depression and somatization. Neopterin concentrations usually correlate with the extent and activity of the disease, and are also useful to monitor during therapy in these patients. Elevated neopterin concentrations are among the best predictors of adverse outcome in patients with HIV infection, in cardiovascular disease and in various types of cancer. In the laboratory it is measured by radioimmunoassay (RIA), ELISA, or high-performance liquid chromatography (HPLC). It has a native fluorescence with excitation at 353 nm and emission at 438 nm, rendering it readily detectable. References Cavaleri et al. Blood concentrations of neopterin and biopterin in subjects with depression: A systematic review and meta-analysis. Progr Neuropsychopharmacol Biol Psychiatry 2023;120:110633. Murr C, et al. Neopterin as a marker for immune system activation. Curr Drug Metabol 2002;3:175-187. Fuchs D, et al. The role of neopterin in atherogenesis and cardiovascular risk stratification. Curr Med Chem 2009;16:4644-4653. Sucher R, et al. Neopterin, a prognostic marker in human malignancies. Cancer Lett 2010;287:13-22. Zeng B, et al. Serum neopterin for early assessment of severity of severe acute respiratory syndrome. Clin Immunol 2005;116(1):18-26. Roberston J, et al. Serum neopterin levels in relation to mild and severe COVID-19. BMC Infect Dis 2020;20(1):942. External links NeoPterin.net Determination of neopterin and biopterin by liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) in rat and human plasma, cell extracts and tissue homogenates (a protocol) Metabolism Immunology Pteridines Triols
Neopterin
[ "Chemistry", "Biology" ]
842
[ "Biotechnology stubs", "Biochemistry stubs", "Immunology", "Cellular processes", "Biochemistry", "Metabolism" ]
2,383,470
https://en.wikipedia.org/wiki/Neo-Hookean%20solid
A neo-Hookean solid is a hyperelastic material model, similar to Hooke's law, that can be used for predicting the nonlinear stress–strain behavior of materials undergoing large deformations. The model was proposed by Ronald Rivlin in 1948 using invariants, though Mooney had already described a version in stretch form in 1940, and Wall had noted the equivalence in shear with the Hooke model in 1942. In contrast to linear elastic materials, the stress–strain curve of a neo-Hookean material is not linear. Instead, the relationship between applied stress and strain is initially linear, but at a certain point the stress–strain curve will plateau. The neo-Hookean model does not account for the dissipative release of energy as heat while straining the material, and perfect elasticity is assumed at all stages of deformation. In addition to being used to model physical materials, the stability and highly non-linear behaviour under compression has made neo-Hookean materials a popular choice for fictitious media approaches such as the third medium contact method. The neo-Hookean model is based on the statistical thermodynamics of cross-linked polymer chains and is usable for plastics and rubber-like substances. Cross-linked polymers will act in a neo-Hookean manner because initially the polymer chains can move relative to each other when a stress is applied. However, at a certain point the polymer chains will be stretched to the maximum point that the covalent cross links will allow, and this will cause a dramatic increase in the elastic modulus of the material. The neo-Hookean material model does not predict that increase in modulus at large strains and is typically accurate only for strains less than 20%. The model is also inadequate for biaxial states of stress and has been superseded by the Mooney-Rivlin model. The strain energy density function for an incompressible neo-Hookean material in a three-dimensional description is where is a material constant, and is the first invariant (trace), of the right Cauchy-Green deformation tensor, i.e., where are the principal stretches. For a compressible neo-Hookean material the strain energy density function is given by where is a material constant and is the deformation gradient. It can be shown that in 2D, the strain energy density function is Several alternative formulations exist for compressible neo-Hookean materials, for example where is the first invariant of the isochoric part of the right Cauchy–Green deformation tensor. For consistency with linear elasticity, where is the first Lamé parameter and is the shear modulus or the second Lamé parameter. Alternative definitions of and are sometimes used, notably in commercial finite element analysis software such as Abaqus. Cauchy stress in terms of deformation tensors Compressible neo-Hookean material For a compressible Ogden neo-Hookean material the Cauchy stress is given by where is the first Piola–Kirchhoff stress. By simplifying the right hand side we arrive at which for infinitesimal strains is equal to Comparison with Hooke's law shows that and . For a compressible Rivlin neo-Hookean material the Cauchy stress is given by where is the left Cauchy–Green deformation tensor, and For infinitesimal strains () and the Cauchy stress can be expressed as Comparison with Hooke's law shows that and . 
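The displayed formulas referred to in the passage above did not survive extraction. As a hedged reconstruction, the LaTeX block below restates the standard textbook forms of the neo-Hookean strain energy; the compressible expression is given in one common Lamé-parameter convention, and the article may have used a different but equivalent parameterisation.

```latex
% Incompressible neo-Hookean strain energy density (standard form)
W = C_1\,(I_1 - 3), \qquad
I_1 = \operatorname{tr}(\mathbf{C}) = \lambda_1^2 + \lambda_2^2 + \lambda_3^2 .

% One common compressible form, written with the Lame parameters
% (assumed convention; several equivalent variants are in use)
W = \frac{\mu}{2}\,(I_1 - 3) - \mu \ln J + \frac{\lambda}{2}\,(\ln J)^2 ,
\qquad J = \det \mathbf{F} .

% Consistency with linear elasticity in the small-strain limit requires
C_1 = \frac{\mu}{2}, \quad \text{with } \mu \text{ the shear modulus
(second Lame parameter) and } \lambda \text{ the first Lame parameter.}
```

Commercial finite element codes often replace the volumetric term with an expression in (J - 1), which is one reason the alternative definitions mentioned in the text exist.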
{| class="toccolours collapsible collapsed" width="80%" style="text-align:left" !Proof: |- | The Cauchy stress in a compressible hyperelastic material is given by For a compressible Rivlin neo-Hookean material, while, for a compressible Ogden neo-Hookean material, Therefore, the Cauchy stress in a compressible Rivlin neo-Hookean material is given by while that for the corresponding Ogden material is If the isochoric part of the left Cauchy-Green deformation tensor is defined as , then we can write the Rivlin neo-Heooken stress as and the Ogden neo-Hookean stress as The quantities have the form of pressures and are usually treated as such. The Rivlin neo-Hookean stress can then be expressed in the form while the Ogden neo-Hookean stress has the form |} Incompressible neo-Hookean material For an incompressible neo-Hookean material with where is an undetermined pressure. Cauchy stress in terms of principal stretches Compressible neo-Hookean material For a compressible neo-Hookean hyperelastic material, the principal components of the Cauchy stress are given by Therefore, the differences between the principal stresses are {| class="toccolours collapsible collapsed" width="80%" style="text-align:left" !Proof: |- | For a compressible hyperelastic material, the principal components of the Cauchy stress are given by The strain energy density function for a compressible neo Hookean material is Therefore, Since we have Hence, The principal Cauchy stresses are therefore given by |} Incompressible neo-Hookean material In terms of the principal stretches, the Cauchy stress differences for an incompressible hyperelastic material are given by For an incompressible neo-Hookean material, Therefore, which gives Uniaxial extension Compressible neo-Hookean material For a compressible material undergoing uniaxial extension, the principal stretches are Hence, the true (Cauchy) stresses for a compressible neo-Hookean material are given by The stress differences are given by If the material is unconstrained we have . Then Equating the two expressions for gives a relation for as a function of , i.e., or The above equation can be solved numerically using a Newton–Raphson iterative root-finding procedure. Incompressible neo-Hookean material Under uniaxial extension, and . Therefore, Assuming no traction on the sides, , so we can write where is the engineering strain. This equation is often written in alternative notation as The equation above is for the true stress (ratio of the elongation force to deformed cross-section). For the engineering stress the equation is: For small deformations we will have: Thus, the equivalent Young's modulus of a neo-Hookean solid in uniaxial extension is , which is in concordance with linear elasticity ( with for incompressibility). Equibiaxial extension Compressible neo-Hookean material In the case of equibiaxial extension Therefore, The stress differences are If the material is in a state of plane stress then and we have We also have a relation between and : or, This equation can be solved for using Newton's method. Incompressible neo-Hookean material For an incompressible material and the differences between the principal Cauchy stresses take the form Under plane stress conditions we have Pure dilation For the case of pure dilation Therefore, the principal Cauchy stresses for a compressible neo-Hookean material are given by If the material is incompressible then and the principal stresses can be arbitrary. 
The figures below show that extremely high stresses are needed to achieve large triaxial extensions or compressions. Equivalently, relatively small triaxial stretch states can cause very high stresses to develop in a rubber-like material. The magnitude of the stress is quite sensitive to the bulk modulus but not to the shear modulus. Simple shear For the case of simple shear the deformation gradient in terms of components with respect to a reference basis is of the form where is the shear deformation. Therefore, the left Cauchy-Green deformation tensor is Compressible neo-Hookean material In this case . Hence, . Now, Hence the Cauchy stress is given by Incompressible neo-Hookean material Using the relation for the Cauchy stress for an incompressible neo-Hookean material we get Thus neo-Hookean solid shows linear dependence of shear stresses upon shear deformation and quadratic dependence of the normal stress difference on the shear deformation. The expressions for the Cauchy stress for a compressible and an incompressible neo-Hookean material in simple shear represent the same quantity and provide a means of determining the unknown pressure . References See also Hyperelastic material Strain energy density function Mooney-Rivlin solid Finite strain theory Stress measures Continuum mechanics Elasticity (physics) Non-Newtonian fluids Rubber properties Solid mechanics
Neo-Hookean solid
[ "Physics", "Materials_science" ]
1,798
[ "Solid mechanics", "Physical phenomena", "Continuum mechanics", "Elasticity (physics)", "Deformation (mechanics)", "Classical mechanics", "Mechanics", "Physical properties" ]
2,383,535
https://en.wikipedia.org/wiki/Corey%E2%80%93Itsuno%20reduction
The Corey–Itsuno reduction, also known as the Corey–Bakshi–Shibata (CBS) reduction, is a chemical reaction in which a prochiral ketone is enantioselectively reduced to produce the corresponding chiral, non-racemic alcohol. The oxazaborolidine reagent which mediates the enantioselective reduction of ketones was previously developed by the laboratory of Itsuno and thus this transformation may more properly be called the Itsuno-Corey oxazaborolidine reduction. History In 1981, Itsuno and coworkers first reported the use of chiral alkoxy-amine-borane complexes in reducing achiral ketones to chiral alcohols enantioselectively and in high yield. Several years later in 1987, E. J. Corey and coworkers developed the reaction between chiral amino alcohols and borane (BH3), generating oxazaborolidine products which were shown to rapidly catalyze the enantioselective reduction of achiral ketones in the presence of BH3•THF. The CBS reduction has since been utilized by organic chemists as a reliable method for the asymmetric reduction of achiral ketones. Notably, it has found prominent use not only in a number of natural product syntheses, but has been utilized on large scale in industry (See Scope Below). Several reviews have been published. Mechanism Corey and coworkers originally proposed the following reaction mechanism to explain the selectivity obtained in the catalytic reduction. The first step of the mechanism involves the coordination of BH3 to the nitrogen atom of the oxazaborolidine CBS catalyst 1. This coordination serves to activate the BH3 as a hydride donor and to enhance the Lewis acidity of the catalyst's endocyclic boron. X-ray crystal structures and 11B NMR spectroscopic analyses of the coordinated catalyst-borane complex 2 have provided support for this initial step. Subsequently, the endocyclic boron of the catalyst coordinates to the ketone at the sterically more accessible electron lone pair (i.e. the lone pair closer to the smaller substituent, Rs). This preferential binding in 3 acts to minimize 1,3-allylic strain between the ketone (the large RL substituent directed away) and the R’ group of the catalyst, and aligns the carbonyl and the coordinated borane for a favorable, face-selective hydride transfer through a six-membered transition state 4. Hydride transfer yields the chiral 5, which upon acidic workup yields the chiral alcohol 6. The last step to regenerate the catalyst may take place by two different pathways (Path 1 or 2). The predominant driving force for this face-selective, intramolecular hydride transfer is the simultaneous activation of the borane reagent by coordination to the Lewis basic nitrogen and the enhancement of the Lewis acidity of the endocyclic boron atom for coordination to the ketone. Scope and limitations Stereo and chemoselectivity The CBS reduction has proven to be an effective and powerful method to reduce a wide range of different types of ketones in both a stereoselective and chemoselective manner. Substrates include a large variety of aryl-aliphatic, di-aliphatic, di-aryl, α,β unsaturated enone and ynone systems, as well as ketones containing heteroatoms. Combinations of different derivatives of the CBS catalyst and borane reducing agents have been employed to optimize enantioselectivity. Several interesting cases are worth noting in this selection of substrates. 
First, in the case of the diaryl system 9, relatively high stereoselectively is achieved despite the isosteric nature of the ketone substituents, suggesting that electronics in addition to sterics may play a role in the stereoselectivity of the CBS reduction. Differences in the substitution of the alkyne moieties in ynones 11 and 12 result in a change of selectivity for the alkyne to function as the more sterically bulky substituent rather than the smaller one. For the α,β unsaturated systems 10-12, efficient reduction of the ketone occurs despite the possible side reaction of hydroboration of the C-C unsaturated bond. The CBS reduction has also been shown to tolerate the presence of heteroatoms as in ketone 13, which is capable of coordinating to the borane. Experimental considerations and limitations The presence of water in the reaction mixture has been shown to have a significant effect on enantiomeric excesses, and thus the CBS reduction must be conducted under anhydrous conditions. Temperature also plays a critical role in the observed stereoselectivity. In general, at lower temperatures enantiomeric excesses (ee's) are obtained. However, when the temperature is increased, the ee values reach a maximum value that is dependent on the catalyst structure and borane reducing agent used. The use of the borane reagent catecholborane, which has been shown to participate in CBS reductions carried out at temperatures as low as -126 °C with marked enantioselectivity, offers a potential solution to improving the diminished ee values obtained at lower temperatures. Enantioselectivity issues associated with the use of BH3 as the reducing agent for the CBS reduction have been reported. Commercially available solutions of BH3•THF evaluated by Nettles et al. were shown to contain trace amounts of borohydride species, which participate in nonselective reductions that led to the diminished enantioselectivity. Though the borohydride catalyzed reduction pathway is much slower than the CBS catalyzed reduction, the side reaction still presents a potential challenge to optimize stereoselectivity. In 2012, Mahale et al. developed a safe and inexpensive procedure for asymmetric reduction of ketones using in situ prepared N,N-diethylaniline-borane and oxazaborolidine catalyst from sodium borohydride, N,N-diethylaniline hydrochloride and (S)-α,α-diphenylprolinol Variations Although CBS catalyst 1 developed by Corey has become commonly employed in the CBS reduction reaction, other derivatives of the catalyst have been developed and utilized successfully. The R’ group of the CBS catalyst plays an important role in the enantioselectivity of the reduction, and as illustrated in above in the Scope section, several variations of the CBS R’ group have been employed to optimize selectivity. Applications Over the past couple of decades, the CBS reduction has gained significant synthetic utility in the synthesis of a significant number of natural products, including lactones, terpenoids, alkaloids, steroids, and biotins. The enantioselective reduction has also been employed on large scale in industry. Jones et al. utilized the CBS reduction in the total synthesis of MK-0417, a water-soluble carbonic anhydrase inhibitor which has been used therapeutically to reduce intraocular pressure. Asymmetric reduction of a key bicyclic sulfone intermediate was accomplished with the CBS oxazaborolidine catalyst containing Me as the R’ group. 
Asymmetric reduction of a 1,1,1-trichloro-2-keto compound is the first stage of the Corey–Link reaction for synthesis of amino acids and related structures with a choice of either natural or un-natural stereochemistry and various side-chains. Asymmetric reduction of 7-(Benzyloxy)hept-1-en-3-one leads to (S)-7-(Benzyloxy)hept-1-en-3-ol, a chiral alcohol that leads directly to synthesis of kanamienamides, that are currently researched as enamide containing enol ethers that show potent inhibition of cancer cells. The selective formation of the chiral product is achieved by (R)-CBS catalyst with 89% yield and with 91% enantiomeric excess. See also Midland Alpine borane reduction Noyori asymmetric hydrogenation References Name reactions Organic reduction reactions
Corey–Itsuno reduction
[ "Chemistry" ]
1,748
[ "Name reactions", "Organic redox reactions", "Organic reactions" ]
2,385,242
https://en.wikipedia.org/wiki/Taylor%20number
In fluid dynamics, the Taylor number (Ta) is a dimensionless quantity that characterizes the importance of centrifugal "forces" or so-called inertial forces due to rotation of a fluid about an axis, relative to viscous forces. In 1923 Geoffrey Ingram Taylor introduced this quantity in his article on the stability of flow. The typical context of the Taylor number is in characterization of the Couette flow between rotating colinear cylinders or rotating concentric spheres. In the case of a system which is not rotating uniformly, such as the case of cylindrical Couette flow, where the outer cylinder is stationary and the inner cylinder is rotating, inertial forces will often tend to destabilize a system, whereas viscous forces tend to stabilize a system and damp out perturbations and turbulence. On the other hand, in other cases the effect of rotation can be stabilizing. For example, in the case of cylindrical Couette flow with positive Rayleigh discriminant, there are no axisymmetric instabilities. Another example is a bucket of water that is rotating uniformly (i.e. undergoing solid body rotation). Here the fluid is subject to the Taylor-Proudman theorem which says that small motions will tend to produce purely two-dimensional perturbations to the overall rotational flow. However, in this case the effects of rotation and viscosity are usually characterized by the Ekman number and the Rossby number rather than by the Taylor number. There are various definitions of the Taylor number which are not all equivalent, but most commonly it is given by where is a characteristic angular velocity, R is a characteristic linear dimension perpendicular to the rotation axis, and is the kinematic viscosity. In the case of inertial instability such as Taylor–Couette flow, the Taylor number is mathematically analogous to the Grashof number which characterizes the strength of buoyant forces relative to viscous forces in convection. When the former exceeds the latter by a critical ratio, convective instability sets in. Likewise, in various systems and geometries, when the Taylor number exceeds a critical value, inertial instabilities set in, sometimes known as Taylor instabilities, which may lead to Taylor vortices or cells. A Taylor–Couette flow describes the fluid behavior between 2 concentric cylinders in rotation. A textbook definition of the Taylor number is where R1 is the external radius of the internal cylinder, and R2 is the internal radius of the external cylinder. The critical Ta is about 1700. References Fluid dynamics Dimensionless numbers of fluid mechanics
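The defining formulas for the Taylor number are absent from the text above. Two commonly quoted forms are sketched below; the symbols follow the descriptions in the text (Ω a characteristic angular velocity, R a characteristic length perpendicular to the rotation axis, ν the kinematic viscosity, R1 and R2 the inner and outer radii), and the numerical prefactors are convention-dependent assumptions rather than values recovered from the original.

```latex
% A frequently used general definition (the prefactor varies between authors):
\mathrm{Ta} = \frac{4\,\Omega^{2} R^{4}}{\nu^{2}}
% A common textbook form for cylindrical Couette flow with gap d = R_2 - R_1:
\mathrm{Ta} = \frac{\Omega^{2} R_{1} \left(R_{2}-R_{1}\right)^{3}}{\nu^{2}},
\qquad \mathrm{Ta}_{c} \approx 1.7\times 10^{3}
```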
Taylor number
[ "Chemistry", "Engineering" ]
537
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
2,385,900
https://en.wikipedia.org/wiki/Taylor%E2%80%93Couette%20flow
In fluid dynamics, the Taylor–Couette flow consists of a viscous fluid confined in the gap between two rotating cylinders. For low angular velocities, measured by the Reynolds number Re, the flow is steady and purely azimuthal. This basic state is known as circular Couette flow, after Maurice Marie Alfred Couette, who used this experimental device as a means to measure viscosity. Sir Geoffrey Ingram Taylor investigated the stability of Couette flow in a ground-breaking paper. Taylor's paper became a cornerstone in the development of hydrodynamic stability theory and demonstrated that the no-slip condition, which was in dispute by the scientific community at the time, was the correct boundary condition for viscous flows at a solid boundary. Taylor showed that when the angular velocity of the inner cylinder is increased above a certain threshold, Couette flow becomes unstable and a secondary steady state characterized by axisymmetric toroidal vortices, known as Taylor vortex flow, emerges. Subsequently, upon increasing the angular speed of the cylinder the system undergoes a progression of instabilities which lead to states with greater spatio-temporal complexity, with the next state being called wavy vortex flow. If the two cylinders rotate in opposite sense then spiral vortex flow arises. Beyond a certain Reynolds number there is the onset of turbulence. Circular Couette flow has wide applications ranging from desalination to magnetohydrodynamics and also in viscosimetric analysis. Different flow regimes have been categorized over the years including twisted Taylor vortices and wavy outflow boundaries. It has been a well researched and documented flow in fluid dynamics. Flow description A simple Taylor–Couette flow is a steady flow created between two rotating infinitely long coaxial cylinders. Since the cylinder lengths are infinitely long, the flow is essentially unidirectional in steady state. If the inner cylinder with radius is rotating at constant angular velocity and the outer cylinder with radius is rotating at constant angular velocity as shown in figure, then the azimuthal velocity component is given by where Rayleigh's criterion Lord Rayleigh studied the stability of the problem with inviscid assumption i.e., perturbing Euler equations. The criterion states that in the absence of viscosity the necessary and sufficient condition for distribution of azimuthal velocity to be stable is everywhere in the interval; and, further, that the distribution is unstable if should decrease anywhere in the interval. Since represents angular momentum per unit mass, of a fluid element about the axis of rotation, an alternative way of stating the criterion is: a stratification of angular momentum about an axis is stable if and if only it increases monotonically outward. Applying this criterion to the Taylor-Couette flow indicates that the flow is stable if , i.e., for stability, the outer cylinder must rotate (in the same sense) with an angular speed greater than -times that of the inner cylinder. The Rayleigh's criterion is violated () throughout the whole fluid when . On the other hand, when the cylinders rotate in opposite directions, i.e., when , Rayleigh's criterion is violated only in the inner region, i.e., for where . Taylor's criterion In a seminal work, G. I. Taylor found the criterion for instability in the presence of viscous forces both experimentally and theoretically. In general, viscous forces are found to postpone the onset of instability, predicted by Rayleigh's criterion. 
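The expressions for the azimuthal velocity profile and for Rayleigh's criterion referred to above are not reproduced in this text. The standard forms are restated below as a hedged sketch; Ω1, Ω2, R1, R2 are the angular velocities and radii of the inner and outer cylinders as described in the text.

```latex
% Circular Couette flow between coaxial cylinders (standard solution):
u_{\theta}(r) = A\,r + \frac{B}{r}, \qquad
A = \frac{\Omega_{2}R_{2}^{2}-\Omega_{1}R_{1}^{2}}{R_{2}^{2}-R_{1}^{2}}, \qquad
B = \frac{\left(\Omega_{1}-\Omega_{2}\right)R_{1}^{2}R_{2}^{2}}{R_{2}^{2}-R_{1}^{2}}
% Rayleigh's inviscid criterion: the square of the specific angular momentum must
% increase outward,
\frac{\mathrm{d}}{\mathrm{d}r}\left(r^{2}\Omega(r)\right)^{2} \geq 0
% For co-rotating cylinders this reduces to
\Omega_{2}R_{2}^{2} \geq \Omega_{1}R_{1}^{2}
\quad\Longleftrightarrow\quad
\frac{\Omega_{2}}{\Omega_{1}} \geq \left(\frac{R_{1}}{R_{2}}\right)^{2}
```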
The stability is characterized by three parameters, namely, , and a Taylor number The first result pertains to the fact that the flow is stable for , consistent with Rayleigh's criterion. However, there are also stable cases in certain parametric range for . Taylor obtained explicit criterion for the narrow gap in which the annular gap is small compared with the mean radius , or in other words, . A better definition of Taylor number in the thin-gap approximation is In terms of this Taylor number, the critical condition for same-sense rotation was found to be As , the critical Taylor number is given by Taylor vortex Taylor vortices (also named after Sir Geoffrey Ingram Taylor) are vortices formed in rotating Taylor–Couette flow when the Taylor number () of the flow exceeds a critical value . For flow in which instabilities in the flow are not present, i.e. perturbations to the flow are damped out by viscous forces, and the flow is steady. But, as the exceeds , axisymmetric instabilities appear. The nature of these instabilities is that of an exchange of stabilities (rather than an overstability), and the result is not turbulence but rather a stable secondary flow pattern that emerges in which large toroidal vortices form in flow, stacked one on top of the other. These are the Taylor vortices. While the fluid mechanics of the original flow are unsteady when , the new flow, called Taylor–Couette flow, with the Taylor vortices present, is actually steady until the flow reaches a large Reynolds number, at which point the flow transitions to unsteady "wavy vortex" flow, presumably indicating the presence of non-axisymmetric instabilities. The idealized mathematical problem is posed by choosing a particular value of , , and . As and from below, the critical Taylor number is ⁠⁠ Gollub–Swinney circular Couette experiment In 1975, J. P. Gollub and H. L. Swinney published a paper on the onset of turbulence in rotating fluid. In a Taylor–Couette flow system, they observed that, as the rotation rate increases, the fluid stratifies into a pile of "fluid donuts". With further increases in the rotation rate, the donuts oscillate and twist and finally become turbulent. Their study helped establish the Ruelle–Takens scenario in turbulence, which is an important contribution by Floris Takens and David Ruelle towards understanding how hydrodynamic systems transition from stable flow patterns into turbulent. While the principal, governing factor for this transition is the Reynolds number, there are other important influencing factors: whether the flow is open (meaning there is a lateral up- and downstream) or closed (flow is laterally bound; e.g. rotating), and bounded (influenced by wall effects) or unbounded (not influenced by wall effects). According to this classification the Taylor–Couette flow is an example of a flow pattern forming in a closed, bounded flow system. References Further reading Fluid dynamics Fluid dynamic instabilities
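The thin-gap Taylor number and its critical value mentioned in the Taylor's-criterion passage above are also missing from this text. One commonly used convention, stated here as an assumption rather than as the article's own definition, takes d = R2 − R1 with the outer cylinder at rest:

```latex
% One common thin-gap definition (outer cylinder at rest, d = R_2 - R_1 \ll R_1):
\mathrm{Ta} = \frac{\Omega_{1}^{2} R_{1}\, d^{3}}{\nu^{2}}
% Linear stability analysis then gives, for the onset of Taylor vortices,
\mathrm{Ta}_{c} \approx 1708
```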
Taylor–Couette flow
[ "Chemistry", "Engineering" ]
1,363
[ "Piping", "Chemical engineering", "Fluid dynamic instabilities", "Fluid dynamics" ]
2,386,081
https://en.wikipedia.org/wiki/Dentine%20bonding%20agents
Also known as a "bonderizer" bonding agents (spelled dentin bonding agents in American English) are resin materials used to make a dental composite filling material adhere to both dentin and enamel. Bonding agents are often methacrylates with some volatile carrier and solvent like acetone. They may also contain diluent monomers. For proper bonding of resin composite restorations, dentin should be conditioned with polyacrylic acids to remove the smear layer, created during mechanical treatment with dental bore, and expose some of the collagen network or organic matrix of dentin. Adhesive resin should create the so-called hybrid layer (consisting of a collagen network exposed by etching and embedded in adhesive resin). This layer is an interface between dentin and adhesive resin and the final quality of dental restoration depends greatly on its properties. Modern dental bonding systems come as a “three-step system”, where the etchant, primer, and adhesive are applied sequentially; as a “two-step system”, where the etchant and the primer are combined for simultaneous application; and as a “one-step system”, where all the components should be premixed and applied in a single application (so-called sixth generation of bonding agents). Chemical processes involved in bonding to dentine Removal of the smear layer and etching of dentine Priming of the dentine surface Bonding of the primed dentine surface to the restorative material Removal of the smear layer and dentine etching A dentine conditioning agent is used initially, to remove the smear layer resulting from the preparation of a cavity and, to alter the dentine surface by partially demineralising the intertubulary dentine. This partially demineralised dentine acts as a hollow scaffolding which can be perfused with the primer. Over-etching (as well as over-drying) of the dentine can lead to collapse of the collagen network, making infiltration of the primer more challenging. However, sclerosed dentine requires a longer time of exposure to the dentine conditioner compared to healthy dentine. Some dentine conditioners contain a chemical called glutaraldehyde, which reinforces the collagen matrix, preventing its collapse. Some common dentine conditioners include: phosphoric acid nitric acid maleic acid citric acid ethylene diamine tetra-acetic acid (EDTA) Priming of the dentine surface Bonding of the primed dentine surface to the restorative material Dentin bonding How does dentinal bonding occur? Dentin bonding refers to process of bonding a resin to conditioned dentin, where mineral component is replaced with resin monomers to form a biocomposite comprising dentin collagen and cured resin. The adhesive-dentin interface forms a tight and permanent bond between dentin and composite resins. It can be accomplished by either etch-and-rinse (total etch) or self-etch adhesives. In etch-and rinse, acid will dissolve the minerals to a certain depth and leaves the highly porous dentinal collagen network suspended in water. Then, the collagen network is infiltrated with resin monomers. After chemical polymerization of these monomers happen, activated by light cure, it will result in a polymer-collagen biocomposite, commonly known as the hybrid layer: The mechanism of action is explained below: a) Application of acid to dentin will result in partial/total removal of smear layer and demineralization of the dentin. 
b) Acid will demineralize the intertubular and peritubular dentin, and then open the dentinal tubules while exposing the collagen fibres, hence increasing the microporosity of intertubular dentin. c) Dentin will be demineralized by up to approximately 7.5 μmeter, depending on the type of acid used, time of application and concentration. d) Primer system is designed to increase critical surface tension of dentin, which gets decreased after etching of acid. e) Bonding mechanism is when: Primer and bonding resin are applied to etched dentin, they penetrate the intertubular dentin, forming hybrid layer. They also penetrate and polymerize in open dentinal tubules, forming resin tags. Moist bonding technique has been shown repeatedly to enhance bond strengths of etch-and-rinse adhesives because water preserves the porosity of collagen network for monomer interdiffusion. Hybrid layer Its presence was identified by Nakabayashi and co-workers where the hybrid layer consists of demineralized intertubular dentin and infiltrated and polymerized adhesive resin. The hybrid layer is hydrophobic, acid resistant and tough. The quality of hybrid layer formed decides the strength of resin dentin interface. When the hybrid layer becomes thicker and more uniform, the bond strength is better. Smear layer and its role in bonding Smear layer refers to a layer of debris on the inorganic surface of substrate which comprises residual inorganic and organic components. This layer is produced whenever the tooth structure undergoes a preparation with a bur. Smear layer will fill the orifices of the dentinal tubules, hence forming smear plugs. These smear plugs decrease dentin permeability by 90% and the smear plug alone can prevent adhesive resin penetration into dentinal tubules. The thickness of smear layer can range from 0.5-2 μmeter and for the smear plug, 1 to 10 μmeter. Smear layer poses some threat for optimal bonding to occur. That is why it needs to be removed. For example, smear layer needs to be removed prior to bonding by etch-and-rinse (total etch) adhesives. This will lead to thicker hybrid layer and long, denser resin tags which results in better bond strength. Carious versus sound dentin for dentinal bonding Some caries excavation methods lead to leaving caries-affected dentin behind to serve as the bonding substrate, mostly in indirect pulp capping. It is reported that the immediate bond strengths to caries-affected dentin are 20-50% lower than to sound dentin, and even lower with caries-infected dentin. How does caries progression correlates with this? First, it reduces mineral content, increases porosity and changes the dentinal collagen structure and its distribution too. These changes can cause a significant reduction in the mechanical properties in dentin e.g. hardness, stiffness, tensile strength, modulus of elasticity, and shrinkage during drying, which makes dentin in and under hybrid layer more prone to cohesive failures under occlusal forces. Lower mineral content of the caries-affected dentin will allow phosphoric acid or acidic monomers to demineralize matrix more deeply than in normal dentin, which results in even more residual water in exposed collagen matrix. How to improve resin-dentin bonding? Moist dentine One of the important factors in determining the dentinal bonding is collagen. When dentin is etched, smear layer and minerals from dentinal structure will be removed, hence exposing the collagen fibres. 
The areas where the minerals are removed are filled with water which functions as plasticizer for collagen and keeps it at expanded soft state. This means that the spaces for resin-dentin bonding are preserved. However, these collagen fibres can collapse in dry condition and if the organic layer of matrix is denatured, this will obstruct the resin to bond with dentin and form a hybrid layer. Because of this, the presence of moist or wet dentin is required to achieve successful dentin bonding. This is due to presence of water miscible organic solvents like ethanol or acetone in the primers. The acetone trails water and hence improves the penetration of the monomers into the dentin for better micromechanical bonding. Also, water will prevent collagen fibres from collapsing, thus making better penetration and bonding between resin and dentin. In order to get a moist dentin, it is advisable to not dry dentin with compressed air after rinsing away the etchant. Instead, high volume evacuation suction can be used to remove excess water and then blot the remaining water present on dentin using gauze or cotton. The dentin surface should appear glistening. If the dentin surface is too wet, water will dilute the resin primer and compete for the sites in collagen network, which will prevent hybrid layer formation. If the dentin surface is too dry, collapse of collagen fibres and demineralized dentin can occur, leading to low bond strength. Agitation of hydrophilic primer or adhesive during application Besides having adequate dentinal moisture, agitation of the primers during application of two-step etch-and-rinse adhesives may be critical for optimal penetration into the demineralized collagen fibres. It also may aid the evaporation of residual water in the adhesive and hybrid layers, thus preventing nano leakage. In a clinical trial comparing the performance of Prime & Bond NT using no rubbing action, slight rubbing action and vigorous rubbing action in the restoration of NCCLs, 92.5% of restorations in vigorous rubbing action group were found to retain after 24 months of clinical service. For the other two groups, the retention rates of the restoration were slightly lower, at 82.5%. See also Dental cement References See also Dentine bonding agents: an overview Dental materials Adhesives Polymer chemistry
Dentine bonding agents
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,027
[ "Dental materials", "Materials science", "Materials", "Polymer chemistry", "Matter" ]
2,388,144
https://en.wikipedia.org/wiki/Timing
Timing is the tracking or planning of the spacing of events in time. It may refer to: Timekeeping, the process of measuring the passage of time Synchronization, controlling the timing of a process relative to another process Time metrology, the measurement of time Timing in different fields Timing (comedy), use of rhythm, tempo and pausing to enhance comedy and humour Timing (linguistics), rhythmic division of time into equal portions by a language Timing (music), ability to "keep time" accurately and to synchronise to an ensemble Color timing, photochemical process of altering and enhancing the color of an image Ignition timing, timing of piston and crankshaft so that a spark will occur near the end of the compression stroke Market timing, by attempting to predict future market price movements Memory timings (or RAM timings), measure of the performance of DRAM memory Valve timing, the precise timing of the opening and closing of the valves in a piston engine Time
Timing
[ "Physics", "Mathematics" ]
198
[ "Physical quantities", "Time", "Time stubs", "Quantity", "Spacetime", "Wikipedia categories named after physical quantities" ]
7,702,975
https://en.wikipedia.org/wiki/Frank%E2%80%93Tamm%20formula
The Frank–Tamm formula yields the amount of Cherenkov radiation emitted on a given frequency as a charged particle moves through a medium at superluminal velocity. It is named for Russian physicists Ilya Frank and Igor Tamm who developed the theory of the Cherenkov effect in 1937, for which they were awarded a Nobel Prize in Physics in 1958. When a charged particle moves faster than the phase speed of light in a medium, electrons interacting with the particle can emit coherent photons while conserving energy and momentum. This process can be viewed as a decay. See Cherenkov radiation and nonradiation condition for an explanation of this effect. Equation The energy emitted per unit length travelled by the particle per unit of frequency is: provided that . Here and are the frequency-dependent permeability and index of refraction of the medium respectively, is the electric charge of the particle, is the speed of the particle, and is the speed of light in vacuum. Cherenkov radiation does not have characteristic spectral peaks, as typical for fluorescence or emission spectra. The relative intensity of one frequency is approximately proportional to the frequency. That is, higher frequencies (shorter wavelengths) are more intense in Cherenkov radiation. This is why visible Cherenkov radiation is observed to be brilliant blue. In fact, most Cherenkov radiation is in the ultraviolet spectrum; the sensitivity of the human eye peaks at green, and is very low in the violet portion of the spectrum. The total amount of energy radiated per unit length is: This integral is done over the frequencies for which the particle's speed is greater than speed of light of the media . The integral is convergent (finite) because at high frequencies the refractive index becomes less than unity and for extremely high frequencies it becomes unity. Derivation of Frank–Tamm formula Consider a charged particle moving relativistically along -axis in a medium with refraction index with a constant velocity . Start with Maxwell's equations (in Gaussian units) in the wave forms (also known as the Lorenz gauge condition) and take the Fourier transform: For a charge of magnitude (where is the elementary charge) moving with velocity , the density and charge density can be expressed as and , taking the Fourier transform gives: Substituting this density and charge current into the wave equation, we can solve for the Fourier-form potentials: and Using the definition of the electromagnetic fields in terms of potentials, we then have the Fourier-form of the electric and magnetic field: and To find the radiated energy, we consider electric field as a function of frequency at some perpendicular distance from the particle trajectory, say, at , where is the impact parameter. It is given by the inverse Fourier transform: First we compute -component of the electric field (parallel to ): For brevity we define . Breaking the integral apart into , the integral can immediately be integrated by the definition of the Dirac Delta: The integral over has the value , giving: The last integral over is in the form of a modified (Macdonald) Bessel function, giving the evaluated parallel component in the form: One can follow a similar pattern of calculation for the other fields components arriving at: and We can now consider the radiated energy per particle traversed distance . 
It can be expressed through the electromagnetic energy flow through the surface of an infinite cylinder of radius around the path of the moving particle, which is given by the integral of the Poynting vector over the cylinder surface: The integral over at one instant of time is equal to the integral at one point over all time. Using : Converting this to the frequency domain: To go into the domain of Cherenkov radiation, we now consider perpendicular distance much greater than atomic distances in a medium, that is, . With this assumption we can expand the Bessel functions into their asymptotic form: and Thus: If has a positive real part (usually true), the exponential will cause the expression to vanish rapidly at large distances, meaning all the energy is deposited near the path. However, this isn't true when is purely imaginary – this instead causes the exponential to become 1 and then is independent of , meaning some of the energy escapes to infinity as radiation – this is Cherenkov radiation. is purely imaginary if is real and . That is, when is real, Cherenkov radiation has the condition that . This is the statement that the speed of the particle must be larger than the phase velocity of electromagnetic fields in the medium at frequency in order to have Cherenkov radiation. With this purely imaginary condition, and the integral can be simplified to: This is the Frank–Tamm equation in Gaussian units. Notes References External links Cherenkov radiation (Tagged ‘Frank-Tamm formula’) Particle physics Eponymous equations of physics Experimental particle physics
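The displayed equations in this entry, including the formula itself, are not reproduced in the text above. As a hedged restatement of the standard Gaussian-units form found in textbooks (not recovered from the original), the Frank–Tamm result reads:

```latex
% Frank-Tamm formula (Gaussian units): energy radiated per unit path length and per unit
% frequency by a charge q moving at speed v through a medium with \mu(\omega) and n(\omega):
\frac{\mathrm{d}^{2}E}{\mathrm{d}x\,\mathrm{d}\omega}
  = \frac{q^{2}}{c^{2}}\,\mu(\omega)\,\omega
    \left(1-\frac{c^{2}}{v^{2}\,n^{2}(\omega)}\right),
  \qquad v > \frac{c}{n(\omega)}
% Total energy per unit length, integrating over the frequencies for which the condition holds:
\frac{\mathrm{d}E}{\mathrm{d}x}
  = \frac{q^{2}}{c^{2}} \int_{v\,>\,c/n(\omega)} \mu(\omega)\,\omega
    \left(1-\frac{c^{2}}{v^{2}\,n^{2}(\omega)}\right)\mathrm{d}\omega
```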
Frank–Tamm formula
[ "Physics" ]
978
[ "Equations of physics", "Eponymous equations of physics", "Experimental physics", "Particle physics", "Experimental particle physics" ]
7,705,107
https://en.wikipedia.org/wiki/DNA%20Data%20Bank%20of%20Japan
The DNA Data Bank of Japan (DDBJ) is a biological database that collects DNA sequences. It is located at the National Institute of Genetics (NIG) in the Shizuoka prefecture of Japan. It is also a member of the International Nucleotide Sequence Database Collaboration (INSDC). It exchanges its data daily with the European Molecular Biology Laboratory at the European Bioinformatics Institute and with GenBank at the National Center for Biotechnology Information; as a result, the three databanks contain the same data at any given time. History DDBJ began data bank activities in 1987 at NIG and remains the only nucleotide sequence data bank in Asia. Organisation Although DDBJ mainly receives its data from Japanese researchers, it can accept data from contributors in any other country. DDBJ is primarily funded by the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT). DDBJ has an international advisory committee consisting of nine members, three each from Europe, the US, and Japan. This committee advises DDBJ on its maintenance, management, and future plans once a year. DDBJ also has an international collaborative committee, made up of working-level participants, which advises on technical issues related to international collaboration. See also National Center for Biotechnology Information (NCBI) European Bioinformatics Institute (EBI) References External links Official site DDBJ entry in MetaBase. Genomics Genome databases Bioinformatics organizations Databases in Japan
DNA Data Bank of Japan
[ "Biology" ]
305
[ "Bioinformatics", "Bioinformatics organizations" ]
7,705,906
https://en.wikipedia.org/wiki/Meta-Chlorophenylpiperazine
meta-Chlorophenylpiperazine (mCPP) is a psychoactive drug of the phenylpiperazine class. It was initially developed in the late-1970s and used in scientific research before being sold as a designer drug in the mid-2000s. It has been detected in pills touted as legal alternatives to illicit stimulants in New Zealand and pills sold as "ecstasy" in Europe and the United States. Despite its advertisement as a recreational substance, mCPP is actually generally considered to be an unpleasant experience and is not desired by drug users. It lacks any reinforcing effects, but has "psychostimulant, anxiety-provoking, and hallucinogenic effects." It is also known to produce dysphoric, depressive, and anxiogenic effects in rodents and humans, and can induce panic attacks in individuals susceptible to them. It also worsens obsessive–compulsive symptoms in people with the disorder. mCPP is known to induce headaches in humans and has been used for testing potential antimigraine medications. It has potent anorectic effects and has encouraged the development of selective 5-HT2C receptor agonists for the treatment of obesity as well. Pharmacology Pharmacodynamics mCPP possesses significant affinity for the 5-HT1A, 5-HT1B, 5-HT1D, 5-HT2A, 5-HT2B, 5-HT2C, 5-HT3, and 5-HT7 receptors, as well as the SERT. It also has some affinity for α1-adrenergic, α2-adrenergic, H1, I1, and NET. It behaves as an agonist at most serotonin receptors. mCPP has been shown to act not only as a serotonin reuptake inhibitor but as a serotonin releasing agent as well. mCPP's strongest actions are at the 5-HT2B and 5-HT2C receptors and its discriminative cue is mediated primarily by 5-HT2C. Its negative effects such as anxiety, headaches, and appetite loss are likely mediated by its actions on the 5-HT2C receptor. Other effects of mCPP include nausea, hypoactivity, and penile erection, the latter two the result of increased 5-HT2C activity and the former likely via 5-HT3 stimulation. In comparison studies, mCPP has approximately 10-fold selectivity for the human 5-HT2C receptor over the human 5-HT2A and 5-HT2B receptors (Ki = 3.4 nM vs. 32.1 and 28.8 nM). It acts as a partial agonist of the human 5-HT2A and 5-HT2C receptors but as an antagonist of the human 5-HT2B receptors. Despite acting as a serotonin 5-HT2A receptor agonist, mCPP has been described as non-hallucinogenic. However, hallucinations have occasionally been reported when large doses of mCPP are taken. Pharmacokinetics mCPP is metabolized via the CYP2D6 isoenzyme by hydroxylation to para-hydroxy-mCPP (p-OH-mCPP) and this plays a major role in its metabolism. The elimination half-life of mCPP is 4 to 14 hours. mCPP is a metabolite of a variety of other piperazine drugs including trazodone, nefazodone, etoperidone, enpiprazole, mepiprazole, cloperidone, peraclopone, and BRL-15,572. It is formed by dealkylation via CYP3A4. Caution should be exercised in concomitant administration of CYP2D6 inhibitors such as bupropion, fluoxetine, paroxetine, and thioridazine with drugs that produce mCPP as a metabolite as these drugs are known to increase concentrations of the parent molecule (e.g., trazodone) and of mCPP. 
Chemistry Analogues Analogues of mCPP include: 1-Benzylpiperazine (BZP) 1-Methyl-4-benzylpiperazine (MBZP) 1,4-Dibenzylpiperazine (DBZP) 3-Trifluoromethylphenylpiperazine (TFMPP) 3,4-Methylenedioxy-1-benzylpiperazine (MDBZP) 4-Bromo-2,5-dimethoxy-1-benzylpiperazine (2C-B-BZP) 4-Fluorophenylpiperazine (pFPP) 4-Methoxyphenylpiperazine (MeOPP) Some additional analogues include quipazine, ORG-12962, and 3C-PEP. Society and culture Legal status Belgium mCPP is illegal in Belgium. Brazil mCPP is illegal in Brazil. Canada mCPP is not a controlled drug in Canada. China As of October 2015 mCPP is a controlled substance in China. Czech Republic mCPP is legal in the Czech Republic. Denmark mCPP is illegal in Denmark. Finland mCPP is illegal in Finland. Germany mCPP is illegal in Germany. Hungary mCPP is illegal in Hungary since 2012. Japan mCPP is illegal in Japan since 2006. Netherlands mCPP is legal in the Netherlands. New Zealand Based on the recommendation of the EACD, the New Zealand government has passed legislation which placed BZP, along with the other piperazine derivatives TFMPP, mCPP, pFPP, MeOPP and MBZP, into Class C of the New Zealand Misuse of Drugs Act 1975. A ban was intended to come into effect in New Zealand on December 18, 2007, but the law change did not go through until the following year, and the sale of BZP and the other listed piperazines became illegal in New Zealand as of 1 April 2008. An amnesty for possession and usage of these drugs remained until October 2008, at which point they became completely illegal. However, mCPP is legally used for scientific research. Norway mCPP is illegal in Norway. Russia mCPP is illegal in Russia. Sweden mCPP is illegal in Sweden. Poland mCPP is illegal in Poland. United States mCPP is not scheduled at the federal level in the United States, but it is possible that it could be considered a controlled substance analog of BZP, in which case purchase, sale, or possession could be prosecuted under the Federal Analog Act. However, "chlorophenylpiperazine" is a Schedule I controlled substance in the state of Florida making it illegal to buy, sell, or possess in this state. Turkey mCPP is illegal in Turkey since 20/05/2009. See also Substituted piperazine References External links 3-Chlorophenyl compounds 5-HT2A agonists 5-HT2B antagonists 5-HT2C agonists Alpha-1 blockers Alpha-2 blockers Anxiogenics Designer drugs Human drug metabolites meta-Chlorophenylpiperazines Non-hallucinogenic 5-HT2A receptor agonists Serotonin receptor agonists Serotonin releasing agents Stimulants
Meta-Chlorophenylpiperazine
[ "Chemistry" ]
1,578
[ "Chemicals in medicine", "Human drug metabolites" ]
7,705,960
https://en.wikipedia.org/wiki/Modal%20dispersion
Modal dispersion is a distortion mechanism occurring in multimode fibers and other waveguides, in which the signal is spread in time because the propagation velocity of the optical signal is not the same for all modes. Other names for this phenomenon include multimode distortion, multimode dispersion, modal distortion, intermodal distortion, intermodal dispersion, and intermodal delay distortion. In the ray optics analogy, modal dispersion in a step-index optical fiber may be compared to multipath propagation of a radio signal. Rays of light enter the fiber with different angles to the fiber axis, up to the fiber's acceptance angle. Rays that enter with a shallower angle travel by a more direct path, and arrive sooner than rays that enter at a steeper angle (which reflect many more times off the boundaries of the core as they travel the length of the fiber). The arrival of different components of the signal at different times distorts the shape. Modal dispersion limits the bandwidth of multimode fibers. For example, a typical step-index fiber with a 50 μm core would be limited to approximately 20 MHz for a one kilometer length, in other words, a bandwidth of 20 MHz·km. Modal dispersion may be considerably reduced, but never completely eliminated, by the use of a core having a graded refractive index profile. However, multimode graded-index fibers having bandwidths exceeding 3.5 GHz·km at 850 nm are now commonly manufactured for use in 10 Gbit/s data links. Modal dispersion should not be confused with chromatic dispersion, a distortion that results due to the differences in propagation velocity of different wavelengths of light. Modal dispersion occurs even with an ideal, monochromatic light source. A special case of modal dispersion is polarization mode dispersion (PMD), a fiber dispersion phenomenon usually associated with single-mode fibers. PMD results when two modes that normally travel at the same speed due to fiber core geometric and stress symmetry (for example, two orthogonal polarizations in a waveguide of circular or square cross-section), travel at different speeds due to random imperfections that break the symmetry. Troubleshooting In multimode optical fiber with many wavelengths propagating, it is sometimes hard to identify the dispersed wavelength out of all the wavelengths that are present, if there is not yet a service degradation issue. One can compare the present optical power of each wavelength to the designed values and look for differences. After that, the optical fiber is tested end to end. If no loss is found, then most probably there is dispersion with that particular wavelength. Normally engineers start testing the fiber section by section until they reach the affected section; all wavelengths are tested and the affected wavelength produces a loss at the far end of the fiber. One can easily calculate how much of the fiber is affected and replace that part of fiber with a new one. Replacement of optical fiber is only required when there is an intense dispersion and service is being affected; otherwise various methods can be used to compensate for the dispersion. References External links Interactive webdemo for modal dispersion Institute of Telecommunications, University of Stuttgart Fiber optics Telecommunications engineering
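A rough, first-order ray-optics estimate makes the quoted bandwidth–distance figures concrete. The sketch below is not taken from the article: the refractive-index values and the 0.5/Δt rule of thumb are illustrative assumptions, and real graded-index fibers perform far better, as noted above.

```python
# Rough ray-optics estimate of modal dispersion in a step-index multimode fiber.
# The index values below are illustrative assumptions, not taken from the article.

C = 299_792_458.0  # speed of light in vacuum, m/s

def intermodal_delay_spread(n_core: float, n_clad: float, length_m: float) -> float:
    """First-order delay spread between the axial ray and the steepest guided ray.

    Delta_t ~ (n_core * L / c) * (n_core - n_clad) / n_clad
    """
    return (n_core * length_m / C) * (n_core - n_clad) / n_clad

def bandwidth_estimate_hz(delay_spread_s: float) -> float:
    """Very rough bandwidth limit; a common rule of thumb is BW ~ 0.5 / Delta_t."""
    return 0.5 / delay_spread_s

if __name__ == "__main__":
    dt = intermodal_delay_spread(n_core=1.48, n_clad=1.46, length_m=1_000.0)
    print(f"delay spread over 1 km: {dt * 1e9:.1f} ns")
    print(f"approximate bandwidth:  {bandwidth_estimate_hz(dt) / 1e6:.1f} MHz")
```

With these assumed values the estimate gives a delay spread of a few tens of nanoseconds over one kilometre and a bandwidth of order 10 MHz, the same order of magnitude as the 20 MHz·km figure quoted above for step-index fiber.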
Modal dispersion
[ "Engineering" ]
678
[ "Electrical engineering", "Telecommunications engineering" ]
7,707,347
https://en.wikipedia.org/wiki/Trap%20%28plumbing%29
In plumbing, a trap is a U-shaped portion of pipe designed to trap liquid or gas to prevent unwanted flow; most notably sewer gases from entering buildings while allowing waste materials to pass through. In oil refineries, traps are used to prevent hydrocarbons and other dangerous gases and chemical fumes from escaping through drains. In heating systems, the same feature is used to prevent thermo-siphoning which would allow heat to escape to locations where it is not wanted. Similarly, some pressure gauges are connected to systems using U bends to maintain a local gas while the system uses liquid. For decorative effect, they can be disguised as complete loops of pipe, creating more than one U for added efficacy. General description In domestic applications, traps are typically U, S, Q, or J-shaped pipe located below or within a plumbing fixture. An S-shaped trap is also known as an S-bend. It was invented by Alexander Cumming in 1775 but became known as the U-bend following the introduction of the U-shaped trap by Thomas Crapper in 1880. The U-bend could not jam, so, unlike the S-bend, it did not need an overflow. In the United States, traps are commonly referred to as P-traps. It is the addition of a 90 degree fitting on the outlet side of a U-bend, thereby creating a P-like shape (oriented horizontally). It is also referred to as a sink trap because it is installed under most sinks. Because of its shape, the trap retains some water after the fixture's use. This water creates an air seal that prevents sewer gas from passing from the drain pipes back into the building. Essentially all plumbing fixtures including sinks, bathtubs, and showers must be equipped with either an internal or external trap. Toilets almost always have an internal trap. Because it is a localized low-point in the plumbing, sink traps also tend to capture small and heavy objects (such as jewellery or coins) accidentally dropped down the sink. Traps also tend to collect hair, sand, food waste and other debris and limit the size of objects that enter the plumbing system, thereby catching oversized objects. For all of these reasons, most traps may be disassembled for cleaning or provide a cleanout feature. Where a volume of water may be rapidly discharged through the trap, a vertical vented pipe called a standpipe may be attached to the trap to prevent the disruption of the seal in other nearby traps. The most common use of standpipes in houses is for clothes washing machines, which rapidly dispense a large volume of wastewater while draining the wash and rinse cycles. In chemical engineering applications, a trap may be known as a lute. History An S-shaped trap is also known as an S-bend. It was invented by Alexander Cumming in 1775 but became known as the U-bend following the introduction of the U-shaped trap by Thomas Crapper in 1880. The new U-bend could not jam, so, unlike the S-bend, it did not need an overflow. Once invented, despite being simple and reasonably reliable, widespread use was slow coming. In Britain, the requirement to use traps was introduced only after the Great Stink in London, in the summer of 1858, when the objectionable smell of the River Thames, which was effectively an open sewer, affected the nearby Houses of Parliament. That motivated the legislators to authorise the construction of a modern sewerage system in the city, of which the S-bend was an essential component. 
, only about two-thirds of the world population have access to traps, in spite of the evidence that good sewage systems significantly improve economic productivity in places that employ them. Venting and auxiliary devices Maintaining the water seal is critical to trap operation; traps might dry out, and poor venting can suction or blow water out of the traps. This is usually avoided by venting the drain pipes downstream of the trap; by being vented to the atmosphere outside the building, the drain lines never operate at a pressure much higher or lower than atmospheric pressure. In the United States, plumbing codes usually provide strict limitations on how far a trap may be located from the nearest vent stack. When a vent cannot be provided, an air admittance valve may be used instead. These devices avoid negative pressure in the drain pipe by venting room air into the drain pipe (behind the trap). A "Chicago Loop" is another alternative. When a trap is installed on a fixture that is not routinely used—such as a floor drain—the eventual evaporation of the water in the trap must be considered. In these cases, a trap primer may be installed; these are devices that automatically recharge traps with water to maintain their water seals. Accepted traps in the United States In some regions of the US, "S" traps are no longer accepted by the building codes as unvented S-traps tend to siphon dry. It may be possible to determine whether a household uses an S- or U-bend by the presence of an overflow pipe outlet. What is required instead is a P-trap with proper venting. Certain drum-styled traps are also discouraged or banned. See also Buchan trap, an older type of trap Drainage Drain-waste-vent system Garbage disposal unit Lock (water navigation) Sanitation Septic system Septic tank Tap water Water pipe References External links Bathrooms Piping
Trap (plumbing)
[ "Chemistry", "Engineering" ]
1,108
[ "Piping", "Chemical engineering", "Mechanical engineering", "Building engineering" ]
7,709,624
https://en.wikipedia.org/wiki/Education%20for%20Chemical%20Engineers
Education for Chemical Engineers is a peer-reviewed academic journal published quarterly by Elsevier on behalf of the Institution of Chemical Engineers. The journal's scope covers all aspects of chemical engineering education. It was established in 2006 and publishes educational research papers, teaching and learning notes, and resource reviews. It is an official journal of the European Federation of Chemical Engineering. Abstracting and indexing The journal is abstracted and indexed in EBSCOhost, the Gale Database of Publications & Broadcast Media, and Scopus. External links Chemical industry in the United Kingdom Chemical engineering journals Chemical education journals Engineering education in the United Kingdom Academic journals established in 2006 Elsevier academic journals Quarterly journals English-language journals Institution of Chemical Engineers Academic journals associated with learned and professional societies of the United Kingdom 2006 establishments in England
Education for Chemical Engineers
[ "Chemistry", "Engineering" ]
160
[ "Chemical engineering journals", "Chemical engineering", "Chemical engineering organizations", "Institution of Chemical Engineers" ]
7,710,180
https://en.wikipedia.org/wiki/Quantum%20tic-tac-toe
Quantum tic-tac-toe is a "quantum generalization" of tic-tac-toe in which the players' moves are "superpositions" of plays in the classical game. The game was invented by Allan Goff of Novatia Labs, who describes it as "a way of introducing quantum physics without mathematics", and offering "a conceptual foundation for understanding the meaning of quantum mechanics". Background The motivation to invent quantum tic-tac-toe was to explore what it means to be in two places at once. In classical physics, a single object cannot be in two places at once. In quantum physics, however, the mathematics used to describe quantum systems seems to imply that before being subjected to quantum measurement (or "observed") certain quantum particles can be in multiple places at once. (The textbook example of this is the double-slit experiment.) How the universe can be like this is rather counterintuitive. There is a disconnect between the mathematics and our mental images of reality, a disconnect that is absent in classical physics. This is why quantum mechanics supports multiple "interpretations". The researchers who invented quantum tic-tac-toe were studying abstract quantum systems, formal systems whose axiomatic foundation included only a few of the axioms of quantum mechanics. Quantum tic-tac-toe became the most thoroughly studied abstract quantum system and offered insights that spawned new research. It also turned out to be a fun and engaging game, a game which also provides good pedagogy in the classroom. The rules of quantum tic-tac-toe attempt to capture three phenomena of quantum systems: superposition the ability of quantum objects to be in two places at once. entanglement the phenomenon where distant parts of a quantum system display correlations that cannot be explained by either timelike causality or common cause. collapsethe phenomenon where the quantum states of a system are reduced to classical states. Collapses occur when a measurement happens, but the mathematics of the current formulation of quantum mechanics is silent on the measurement process. Many of the interpretations of quantum mechanics derive from different efforts to deal with the measurement problem. Gameplay Quantum tic-tac-toe captures the three quantum phenomena discussed above by modifying one basic rule of classical tic-tac-toe: the number of marks allowed in each square. Additional rules specify when and how a set of marks "collapses" into classical moves. On each move, the current player marks two squares with their letter (X or O), instead of one, and each letter (X or O) is subscripted with the number of the move (beginning counting with 1). The pair of marks are called spooky marks. (Because X always moves first, the subscripts on X are always odd and the subscripts on O are always even.) For example, player 1's first move might be to place "X1" in both the upper left and lower right squares. The two squares thus marked are called entangled. During the game, there may be as many as eight spooky marks in a single square (if the square is entangled with all eight other squares). The phenomenon of collapse is captured by specifying that a "cyclic entanglement" causes a "measurement". A cyclic entanglement is a cycle in the entanglement graph; for example, if square 1 is entangled via move X1 with square 4, and square 4 is entangled via move X3 with square 8, and square 8 is in turn entangled via move O4 with square 1, then these three squares form a cyclic entanglement. 
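The collapse rule described above hinges on detecting when a pair of spooky marks closes a cycle in the entanglement graph. The sketch below is an illustrative implementation, not an official one: squares are numbered 0–8 rather than 1–9, the class and method names are invented, and a union-find structure is used simply because cycle detection is its textbook application.

```python
# Illustrative sketch (not an official implementation): tracking quantum tic-tac-toe
# entanglements as a graph and detecting when a move closes a cycle, which is the
# trigger for a "measurement" under the rules described above.

class EntanglementBoard:
    def __init__(self) -> None:
        # union-find parent pointers over the nine squares (0..8)
        self.parent = list(range(9))
        self.moves: list[tuple[str, int, int]] = []  # (label, square_a, square_b)

    def _find(self, i: int) -> int:
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path compression
            i = self.parent[i]
        return i

    def place(self, label: str, a: int, b: int) -> bool:
        """Place spooky mark `label` in squares a and b.

        Returns True if this move creates a cyclic entanglement, i.e. the two
        squares were already connected through earlier spooky marks.
        """
        self.moves.append((label, a, b))
        ra, rb = self._find(a), self._find(b)
        if ra == rb:
            return True  # cycle closed -> a measurement/collapse must follow
        self.parent[ra] = rb
        return False

if __name__ == "__main__":
    board = EntanglementBoard()
    print(board.place("X1", 0, 3))  # False: squares 1 and 4 in the text's numbering
    print(board.place("X3", 3, 7))  # False: squares 4 and 8
    print(board.place("O4", 7, 0))  # True: squares 1-4-8 now form a cycle
```

Replaying the article's example (squares 1, 4 and 8, here 0, 3 and 7) shows the third spooky mark closing the cycle and therefore forcing a measurement.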
At the end of the turn on which the cyclic entanglement was created, the player whose turn it is not — that is, the player who did not create the cycle — chooses one of two ways to "measure" the cycle and thus cause all the entangled squares to "collapse" into classical tic-tac-toe moves. In the preceding example, since player 2 created the cycle, player 1 decides how to "measure" it. Player 1's two options are: X1 collapses into square 1. This forces O4 to collapse into square 8 and X3 to collapse into square 4. X1 collapses into square 4. This forces X3 to collapse into square 8 and O4 to collapse into square 1. Any other chains of entanglements hanging off the cycle would also collapse at this time; for example, if square 1 were also entangled via O2 with square 5, then either measurement above would force O2 to collapse into square 5. (Note that it is impossible for two or more cyclic entanglements to be created in a single turn.) When a move collapses into a single square, that square is permanently marked (in larger print) with the letter and subscript of the collapsed move — a classical mark. A square containing a classical mark is fixed for the rest of the game; no more spooky marks may be placed in it. The first player to achieve a tic-tac-toe (three in a row horizontally, vertically, or diagonally) consisting entirely of classical marks is declared the winner. Since it is possible for a single measurement to collapse the entire board and give classical tic-tac-toes to both players simultaneously, the rules declare that the player whose tic-tac-toe has the lower maximum subscript (representing the first completed line in the collapsed timeline) earns one point, and the player whose tic-tac-toe has the higher maximum subscript earns only one-half point. See also Quantum game theory References External links Quantum Tic‐Tac‐Toe: A Game of Entanglement Tic-tac-toe Abstract strategy games Thought experiments in quantum mechanics Tic-tac-toe variants
Quantum tic-tac-toe
[ "Physics", "Mathematics" ]
1,209
[ "Quantum game theory", "Game theory", "Quantum mechanics", "Thought experiments in quantum mechanics" ]
7,710,370
https://en.wikipedia.org/wiki/Suppressor%20mutation
A suppressor mutation is a second mutation that alleviates or reverts the phenotypic effects of an already existing mutation, in a process termed synthetic rescue. Genetic suppression therefore restores the phenotype seen prior to the original background mutation. Suppressor mutations are useful for identifying new genetic sites which affect a biological process of interest. They also provide evidence of functional interactions between molecules and of intersecting biological pathways. Intragenic vs. intergenic suppression Intragenic suppression Intragenic suppression results from suppressor mutations that occur in the same gene as the original mutation. In a classic study, Francis Crick and co-workers used intragenic suppression to study the fundamental nature of the genetic code. From this study it was shown that genes are expressed as non-overlapping triplets (codons). Researchers showed that mutations caused by either a single base insertion (+) or a single base deletion (-) could be "suppressed" or restored by a second mutation of the opposite sign, as long as the two mutations occurred close together within the gene. This led to the conclusion that genes needed to be read in a specific "reading frame" and a single base insertion or deletion would shift the reading frame (frameshift mutation) in such a way that the remaining DNA would code for a different polypeptide than the one intended. Therefore, researchers concluded that the second mutation of opposite sign suppresses the original mutation by restoring the reading frame, as long as the portion between the two mutations is not critical for protein function. In addition to the reading frame, Crick also used suppressor mutations to determine codon size. It was found that while one and two base insertions/deletions of the same sign resulted in a mutant phenotype, deleting or inserting three bases could give a wild type phenotype. From these results it was concluded that an inserted or deleted triplet does not disturb the reading frame and the genetic code is in fact a triplet. Intergenic suppression Intergenic (also known as extragenic) suppression relieves the effects of a mutation in one gene by a mutation somewhere else within the genome. The second mutation is not in the same gene as the original mutation. Intergenic suppression is useful for identifying and studying interactions between molecules, such as proteins. For example, a mutation which disrupts the complementary interaction between protein molecules may be compensated for by a second mutation elsewhere in the genome that restores or provides a suitable alternative interaction between those molecules. Several proteins of biochemical, signal transduction, and gene expression pathways have been identified using this approach. Examples of such pathways include receptor-ligand interactions as well as the interaction of components involved in DNA replication, transcription, and translation. Such intergenic suppressor (compensatory) mutations also tend to persist in a population. When compensatory mutations become established in a drug-resistant organism such as E. coli, and drug use is then halted, the resistant strains do not readily evolve back into drug-sensitive strains: losing the resistance and compensatory mutations would require passing through intermediate strains of greatly reduced fitness, so the compensatory mutations are unlikely to be lost.
These low-fitness intermediate strains are also subject to population bottlenecks, which makes it even less likely that the alleles will revert to their state prior to the intergenic suppression. Consequently, even after drug use is halted, these mutations are likely to persist in the population. Suppressor mutations also occur in genes that code for virus structural proteins. To create a viable phage T4 virus, a balance of structural components is required. An amber mutant of phage T4 contains a mutation that changes a codon for an amino acid in a protein to the nonsense stop codon TAG (see stop codon and nonsense mutation). If, upon infection, an amber mutant defective in a gene encoding a needed structural component of phage T4 is weakly suppressed (in an E. coli host containing a specific altered tRNA – see nonsense suppressor), it will produce a reduced number of the needed structural component. As a consequence, few if any viable phage are formed. However, it was found that viable phage could sometimes be produced in the host with the weak nonsense suppressor if a second amber mutation in a gene that encodes another structural protein is also present in the phage genome. It was found that the reason the second amber mutation could suppress the first one is that the two numerically reduced structural proteins would now be in balance. For instance, if the first amber mutation caused a reduction of tail fibers to one tenth the normal level, most phage particles produced would have insufficient tail fibers to be infective. However, if a second amber mutation, in a gene encoding a base plate component, causes only one tenth the normal number of base plates to be made, this may restore the balance of tail fibers and base plates, and thus allow infective phage to be produced. Revertant In microbial genetics, a revertant is a mutant that has reverted to its former genotype or to the original phenotype by means of a suppressor mutation, or else by compensatory mutation somewhere in the gene (second site reversion). See also Synthetic viability References External links The mutations chapter of the WikiBooks General Biology textbook Examples of Beneficial Mutations Evolutionary biology Molecular genetics Radiation health effects
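The intragenic frameshift suppression described above (a + insertion compensated by a nearby − deletion) can be illustrated with a toy translation into non-overlapping triplets, in the spirit of Crick's experiment. This is an illustrative sketch only; the repeating sequence and the specific insertion/deletion positions are invented for the demonstration and are not from the cited work.

```python
def codons(seq):
    """Read a DNA-like string as non-overlapping triplets, dropping any incomplete tail."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

wild_type = "CATCATCATCATCAT"   # hypothetical repeating message: CAT CAT CAT ...
insertion = "CATGCATCATCATCAT"  # (+) a single G inserted after the first codon
double    = "CATGCATATCATCAT"   # same insertion plus a nearby (-) deletion of one base

print(codons(wild_type))  # ['CAT', 'CAT', 'CAT', 'CAT', 'CAT']
print(codons(insertion))  # frame shifted downstream: ['CAT', 'GCA', 'TCA', 'TCA', 'TCA']
print(codons(double))     # frame restored downstream: ['CAT', 'GCA', 'TAT', 'CAT', 'CAT']
```

Only the short stretch between the two mutations is mistranslated; the downstream codons return to the wild-type reading, which is why the second mutation of opposite sign acts as a suppressor.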
Suppressor mutation
[ "Chemistry", "Materials_science", "Biology" ]
1,109
[ "Evolutionary biology", "Radiation health effects", "Molecular genetics", "Molecular biology", "Radiation effects", "Radioactivity" ]
12,369,411
https://en.wikipedia.org/wiki/Surge%20tank
A surge tank (or surge drum or surge pool) is a standpipe or storage reservoir at the downstream end of a closed aqueduct, feeder pipe, or dam to absorb sudden rises of pressure, as well as to quickly provide extra water during a brief drop in pressure. In mining technology, ore pulp pumps use a relatively small surge tank to maintain a steady loading on the pump. For hydroelectric power uses, a surge tank is an additional storage space or reservoir fitted between the main storage reservoir and the powerhouse (as close to the powerhouse as possible). Surge tanks are usually provided in high or medium-head plants when there is a considerable distance between the water source and the power unit, necessitating a long penstock. The main functions of the surge tank are: when the load on the plant decreases, the rejected water flows backward and is temporarily stored in the tank; when the load increases, the tank provides the additional supply of water needed. Operation Consider a pipe containing a flowing fluid. When a valve is either fully or partially closed at some point downstream, the fluid will continue to flow at the original velocity. In order to counteract the momentum of the fluid, the pressure will rise significantly (pressure surge) just upstream of the control valve and may result in damage to the pipe system. If a surge chamber is connected to the pipeline just upstream of the valve, on valve closure, the fluid, instead of being stopped suddenly by the valve, will flow upwards into the chamber, hence reducing the surge pressures experienced in the pipeline. Upon closure of the valve, the fluid continues to flow, passing into the surge tank and causing the water level in the tank to rise. The level in the tank will continue to rise until the additional head due to the height of fluid in the tank balances the surge pressure in the pipeline. At this point the flow in the tank and pipeline will reverse, causing the level in the tank to drop. This oscillation in tank level and flow will continue for some time, but its magnitude will decay due to the effects of friction. Automotive surge tanks The surge tank is utilized in automotive applications to ensure that the inlet to the fuel pump is never starved for fuel. It is typically used in vehicles with electronic fuel injection or that will be sustaining high lateral acceleration loads for extended periods. Aircraft surge tanks Aircraft surge tanks are used on a select few aircraft to ensure that fuel does not spill onto the ground when the fuel expands. These tanks must be emptied periodically to prevent the fuel they collect from spilling, which would otherwise happen fairly often. See also References Hydraulics Plumbing
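The damped oscillation described above (the tank level rising until it balances the surge, then reversing, with friction dissipating the motion) can be sketched with the classical rigid-water-column model. The following is a minimal illustrative simulation, not a design tool; the geometry, friction coefficient, and time step are made-up values chosen only to show the behaviour after a sudden valve closure.

```python
# Illustrative parameters (not from any real plant)
L, A_t, A_s = 500.0, 2.0, 10.0   # tunnel length (m), tunnel area (m^2), surge tank area (m^2)
g, k = 9.81, 2.0                 # gravity (m/s^2), lumped friction-loss coefficient (s^2/m)
v, z = 2.0, 0.0                  # tunnel velocity (m/s) and tank level (m) at the instant of closure
dt = 0.1                         # time step (s)

for step in range(int(600 / dt)):
    dv = -(g / L) * (z + k * v * abs(v)) * dt   # momentum: retarding head = tank level + friction loss
    dz = (A_t / A_s) * v * dt                   # continuity: all tunnel flow enters the surge tank
    v, z = v + dv, z + dz
    if step % 500 == 0:
        print(f"t = {step * dt:6.1f} s   tank level z = {z:6.2f} m   tunnel velocity v = {v:5.2f} m/s")
```

Running the loop shows the tank level overshooting, the flow reversing, and the amplitude of successive swings shrinking because of the friction term, which is the qualitative behaviour described in the Operation section.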
Surge tank
[ "Physics", "Chemistry", "Engineering" ]
516
[ "Plumbing", "Physical systems", "Construction", "Hydraulics", "Fluid dynamics" ]
12,370,450
https://en.wikipedia.org/wiki/Minichromosome%20maintenance
The minichromosome maintenance protein complex (MCM) is a DNA helicase essential for genomic DNA replication. Eukaryotic MCM consists of six gene products, Mcm2–7, which form a heterohexamer. As a critical protein for cell division, MCM is also the target of various checkpoint pathways, such as the S-phase entry and S-phase arrest checkpoints. Both the loading and activation of MCM helicase are strictly regulated and are coupled to cell growth cycles. Deregulation of MCM function has been linked to genomic instability and a variety of carcinomas. History and structure The minichromosome maintenance proteins were named after a yeast genetics screen for mutants defective in the regulation of DNA replication initiation. The rationale behind this screen was that if replication origins were regulated in a manner analogous to transcription promoters, where transcriptional regulators showed promoter specificity, then replication regulators should also show origin specificity. Since eukaryotic chromosomes contain multiple replication origins and the plasmids contain only one, a slight defect in these regulators would have a dramatic effect on the replication of plasmids but little effect on chromosomes. In this screen, mutants conditional for plasmid loss were identified. In a secondary screen, these conditional mutants were selected for defects in plasmid maintenance against a collection of plasmids each carrying a different origin sequence. Two classes of mcm mutants were identified: Those that affected the stability of all minichromosomes and others that affected the stability of only a subset of the minichromosomes. The former were mutants defective in chromosome segregation such as mcm16, mcm20 and mcm21. Among the latter class of origin-specific mutants were mcm1, mcm2, mcm3, mcm5 and mcm10. Later on, others identified Mcm4, Mcm6 and Mcm7 in yeasts and other eukaryotes based on homology to Mcm2p, Mcm3p and Mcm5p expanding the MCM family to six, subsequently known as the Mcm2-7 family. In archaea, the heterohexamer ring is replaced by a homohexamer made up of a single type mcm protein, pointing at a history of gene duplication and diversification. Mcm1 and Mcm10 are also involved in DNA replication, directly or indirectly, but have no sequence homology to the Mcm2-7 family. Function in DNA replication initiation and elongation MCM2-7 is required for both DNA replication initiation and elongation; its regulation at each stage is a central feature of eukaryotic DNA replication. During G1 phase, the two head-to-head Mcm2-7 rings serve as the scaffold for the assembly of the bidirectional replication initiation complexes at the replication origin. During S phase, the Mcm2-7 complex forms the catalytic core of the Cdc45-MCM-GINS helicase - the DNA unwinding engine of the replisome. G1/pre-replicative complex assembly Site selection for replication origins is carried out by the Origin Recognition Complex (ORC), a six subunit complex (Orc1-6). During the G1 phase of the cell cycle, Cdc6 is recruited by ORC to form a launching pad for the loading of two head-to-head Mcm2-7 hexamers, also known as the pre-replication complex (pre-RC). There is genetic and biochemical evidence that the recruitment of the double hexamer may involve either one or two ORCs. Soluble Mcm2-7 hexamer forms a flexible left-handed open-ringed structure stabilised by Cdt1 prior to its loading onto chromatin, one at a time. 
The structure of the ORC-Cdc6-Cdt1-MCM (OCCM) intermediate formed after the loading of the first Cdt1-Mcm2-7 heptamer indicates that the winged helix domains at the C-terminal extensions (CTE) of the Mcm2-7 complex firmly anchor onto the surfaces created by the ORC-Cdc6 ring structure around origin DNA. The fusion of the two head-to-head Mcm2-7 hexamers is believed to be facilitated by the removal of Cdt1, leaving the NTDs of the two MCM hexamers flexible for inter-ring interactions. The loading of MCM2-7 onto DNA is an active process that requires ATP hydrolysis by both Orc1-6 and Cdc6. This process is termed "replication licensing" as it is a prerequisite for DNA replication initiation in every cell division cycle. Late G1/early S - initiation In late G1/early S phase, the pre-RC is activated for DNA unwinding by the cyclin-dependent kinases (CDKs) and DDK. This facilitates the loading of additional replication factors (e.g., Cdc45, MCM10, GINS, and DNA polymerases) and unwinding of the DNA at the origin. Once pre-RC formation is complete, Orc1-6 and Cdc6 are no longer required for MCM2-7 retention at the origin, and they are dispensable for subsequent DNA replication. S-phase/elongation Upon entry into S phase, the activity of the CDKs and the Dbf4-dependent kinase (DDK) Cdc7 promotes the assembly of replication forks, likely in part by activating MCM2-7 to unwind DNA. Following DNA polymerase loading, bidirectional DNA replication commences. During S phase, Cdc6 and Cdt1 are degraded or inactivated to block additional pre-RC formation, and bidirectional DNA replication ensues. When the replication fork encounters lesions in the DNA, the S-phase checkpoint response slows or stops fork progression and stabilizes the association of MCM2-7 with the replication fork during DNA repair. Role in replication licensing The replication licensing system acts to ensure that no section of the genome is replicated more than once in a single cell cycle. The inactivation of any of at least five of the six MCM subunits during S phase quickly blocks ongoing elongation. As a critical mechanism to ensure only a single round of DNA replication, the loading of additional MCM2-7 complexes into pre-RCs is inactivated by redundant means after passage into S phase. MCM2-7 activity can also be regulated during elongation. The loss of replication fork integrity, an event precipitated by DNA damage, unusual DNA sequence, or insufficient deoxyribonucleotide precursors, can lead to the formation of DNA double-strand breaks and chromosome rearrangements. Normally, these replication problems trigger an S-phase checkpoint that minimizes genomic damage by blocking further elongation and physically stabilizing protein-DNA associations at the replication fork until the problem is fixed. This stabilization of the replication fork requires the physical interaction of MCM2-7 with Mrc1, Tof1, and Csm3 (M/T/C complex). In the absence of these proteins, dsDNA unwinding and replisome movement powered by MCM2-7 continue, but DNA synthesis stops. At least part of this arrest is due to the dissociation of polymerase ε from the replication fork. Biochemical structure Each subunit in the MCM structure contains two large N- and C-terminal domains. The N-terminal domain consists of three small sub-domains and appears to be used mainly for structural organization. The N-domain can coordinate with a neighboring subunit's C-terminal AAA+ helicase domain through a long and conserved loop.
This conserved loop, named the allosteric control loop, has been shown to play a role in regulating interactions between N- and C-terminal regions by facilitating communication between the domains in response to ATP hydrolysis. The N-domain also establishes the in vitro 3′→5′ directionality of MCM. Models of DNA unwinding Regarding the physical mechanism of how a hexameric helicase unwinds DNA, two models have been proposed based on in vivo and in vitro data. In the "steric" model, the helicase tightly translocates along one strand of DNA while physically displacing the complementary strand. In the "pump" model, pairs of hexameric helicases unwind duplex DNA by either twisting it apart or extruding it through channels in the complex. Steric model The steric model hypothesizes that the helicase encircles dsDNA and, after local melting of the duplex DNA at the origin, translocates away from the origin, dragging a rigid proteinaceous "wedge" (either part of the helicase itself or another associated protein) that separates the DNA strands. Pump model The pump model postulates that multiple helicases load at replication origins, translocate away from one another, and in some manner eventually become anchored in place. They then rotate dsDNA in opposite directions, resulting in the unwinding of the double helix in the intervening region. The pump model has also been proposed to be restricted to the melting of origin DNA while the Mcm2-7 complexes are still anchored at the origin just before replication initiation. Role in cancer Various MCMs have been shown to promote cell proliferation in vitro and in vivo, especially in certain types of cancer cell lines. The association between MCMs and proliferation in cancer cell lines is mostly attributed to their ability to enhance DNA replication. The roles of MCM2 and MCM7 in cell proliferation have been demonstrated in various cellular contexts and even in human specimens. MCM2 has been shown to be frequently expressed in proliferating premalignant lung cells. Its expression was associated with cells having a higher proliferation potential in non-dysplastic squamous epithelium, malignant fibrous histiocytomas, and endometrial carcinoma, while MCM2 expression was also correlated with a higher mitotic index in breast cancer specimens. Similarly, many research studies have shown the link between MCM7 expression and cell proliferation. Expression of MCM7 was significantly correlated with the expression of Ki67 in choriocarcinomas, lung cancer, papillary urothelial neoplasia, esophageal cancer, and endometrial cancer. Its expression was also associated with a higher proliferative index in prostatic intraepithelial neoplasia and cancer. See also Human genes encoding MCM proteins include: MCM2 MCM3 MCM4 MCM5 MCM6 MCM7 MCM8 MCM9 MCM10 References External links macromolecular structures of MCM at the EM Data Bank (EMDB) DNA replication
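The licensing logic summarized above (origins are licensed only in G1, fire at most once during S phase, and cannot be re-licensed until the next cycle) amounts to a small state machine. The sketch below is an abstraction invented for illustration, not a biochemical model; the state and phase names are chosen only for readability.

```python
class Origin:
    """Abstract replication origin: UNLICENSED -> LICENSED (G1 only) -> FIRED (S only)."""

    def __init__(self):
        self.state = "UNLICENSED"

    def load_mcm(self, phase):
        # Pre-RC assembly (MCM2-7 loading) is permitted only in G1 on an unlicensed origin.
        if phase == "G1" and self.state == "UNLICENSED":
            self.state = "LICENSED"
            return True
        return False  # re-licensing outside G1, or of an already fired origin, is blocked

    def fire(self, phase):
        # CDK/DDK activation converts a licensed origin into an active fork exactly once.
        if phase == "S" and self.state == "LICENSED":
            self.state = "FIRED"
            return True
        return False

origin = Origin()
print(origin.load_mcm("G1"))  # True: licensed during G1
print(origin.fire("S"))       # True: fires once during S
print(origin.load_mcm("S"))   # False: cannot be re-licensed during S
print(origin.fire("S"))       # False: cannot fire a second time in the same cycle
```

The two guards together encode the "once and only once per cell cycle" property that the redundant inhibition of Cdc6 and Cdt1 enforces in vivo.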
Minichromosome maintenance
[ "Biology" ]
2,288
[ "Genetics techniques", "DNA replication", "Molecular genetics" ]
12,374,236
https://en.wikipedia.org/wiki/Absoluteness%20%28logic%29
In mathematical logic, a formula is said to be absolute to some class of structures (also called models), if it has the same truth value in each of the members of that class. One can also speak of absoluteness of a formula between two structures, if it is absolute to some class which contains both of them. Theorems about absoluteness typically establish relationships between the absoluteness of formulas and their syntactic form. There are two weaker forms of partial absoluteness. If the truth of a formula in each substructure N of a structure M follows from its truth in M, the formula is downward absolute. If the truth of a formula in a structure N implies its truth in each structure M extending N, the formula is upward absolute. Issues of absoluteness are particularly important in set theory and model theory, fields where multiple structures are considered simultaneously. In model theory, several basic results and definitions are motivated by absoluteness. In set theory, the issue of which properties of sets are absolute is well studied. The Shoenfield absoluteness theorem, due to Joseph Shoenfield (1961), establishes the absoluteness of a large class of formulas between a model of set theory and its constructible universe, with important methodological consequences. The absoluteness of large cardinal axioms is also studied, with positive and negative results known. In model theory In model theory, there are several general results and definitions related to absoluteness. A fundamental example of downward absoluteness is that universal sentences (those with only universal quantifiers) that are true in a structure are also true in every substructure of the original structure. Conversely, existential sentences are upward absolute from a structure to any structure containing it. Two structures are defined to be elementarily equivalent if they agree about the truth value of all sentences in their shared language, that is, if all sentences in their language are absolute between the two structures. A theory is defined to be model complete if whenever M and N are models of the theory and M is a substructure of N, then M is an elementary substructure of N. In set theory A major part of modern set theory involves the study of different models of ZF and ZFC. It is crucial for the study of such models to know which properties of a set are absolute to different models. It is common to begin with a fixed model of set theory and only consider other transitive models containing the same ordinals as the fixed model. Certain properties are absolute to all transitive models of set theory, including the following (see Jech (2003 sec. I.12) and Kunen (1980 sec. IV.3)). x is the empty set. x is an ordinal. x is a finite ordinal. x is a successor ordinal. x is a limit ordinal. x = ω. x is finite. x is (the graph of) a function. Other properties are not absolute: being countable being a cardinal being a regular cardinal being a limit cardinal being an inaccessible cardinal Failure of absoluteness for countability Skolem's paradox is the seeming contradiction that on the one hand, the set of real numbers is uncountable (and this is provable from ZFC, or even from a small finite subsystem ZFC' of ZFC), while on the other hand there are countable transitive models of ZFC' (this is provable in ZFC), and the set of real numbers in such a model will be a countable set. The paradox can be resolved by noting that countability is not absolute to submodels of a particular model of ZFC. 
It is possible that a set X is countable in a model of set theory but uncountable in a submodel containing X, because the submodel may contain no bijection between X and ω, while the definition of countability is the existence of such a bijection. The Löwenheim–Skolem theorem, when applied to ZFC, shows that this situation does occur. Shoenfield's absoluteness theorem Shoenfield's absoluteness theorem shows that Σ¹₂ and Π¹₂ sentences in the analytical hierarchy are absolute between a model V of ZF and the constructible universe L of the model, when interpreted as statements about the natural numbers in each model. The theorem can be relativized to allow the sentence to use sets of natural numbers from V as parameters, in which case L must be replaced by the smallest submodel containing those parameters and all the ordinals. The theorem has corollaries that Σ¹₃ sentences are upward absolute (if such a sentence holds in L then it holds in V) and Π¹₃ sentences are downward absolute (if they hold in V then they hold in L). Because any two transitive models of set theory with the same ordinals have the same constructible universe, Shoenfield's theorem shows that two such models must agree about the truth of all Σ¹₂ and Π¹₂ sentences. One consequence of Shoenfield's theorem relates to the axiom of choice. Gödel proved that the constructible universe L always satisfies ZFC, including the axiom of choice, even when V is only assumed to satisfy ZF. Shoenfield's theorem shows that if there is a model of ZF in which a given Σ¹₃ statement φ is false, then φ is also false in the constructible universe of that model. In contrapositive, this means that if ZFC proves a Σ¹₃ sentence then that sentence is also provable in ZF. The same argument can be applied to any other principle that always holds in the constructible universe, such as the combinatorial principle ◊. Even if these principles are independent of ZF, each of their Σ¹₃ consequences is already provable in ZF. In particular, this includes any of their consequences that can be expressed in the (first-order) language of Peano arithmetic. Shoenfield's theorem also shows that there are limits to the independence results that can be obtained by forcing. In particular, any sentence of Peano arithmetic is absolute to transitive models of set theory with the same ordinals. Thus it is not possible to use forcing to change the truth value of arithmetical sentences, as forcing does not change the ordinals of the model to which it is applied. Many famous open problems, such as the Riemann hypothesis and the P = NP problem, can be expressed as Π⁰₂ sentences (or sentences of lower complexity), and thus cannot be proven independent of ZFC by forcing. Large cardinals There are certain large cardinals that cannot exist in the constructible universe (L) of any model of set theory. Nevertheless, the constructible universe contains all the ordinal numbers that the original model of set theory contains. This "paradox" can be resolved by noting that the defining properties of some large cardinals are not absolute to submodels. One example of such a nonabsolute large cardinal axiom is for measurable cardinals; for an ordinal to be a measurable cardinal there must exist another set (the measure) satisfying certain properties. It can be shown that no such measure is constructible. See also Conservative extension Lévy hierarchy References Jech, Thomas, 2003. Set Theory: The Third Millennium Edition, Revised and Expanded. Springer. Kunen, Kenneth, 1980. Set Theory: An Introduction to Independence Proofs. Elsevier. 
. Shoenfield, Joseph, 1961. "The problem of predicativity", Essays on the foundations of mathematics, Y. Bar-Hillel et al., eds., pp. 132–142. Inline citations Mathematical logic Concepts in logic
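The model-theoretic facts stated in this article (universal sentences are downward absolute to substructures, existential sentences are upward absolute) can be checked mechanically on finite structures. The sketch below is purely illustrative and uses an invented finite structure with a single unary predicate P; it is not drawn from the references.

```python
# A "structure" here is a finite domain plus one unary predicate P.
M_domain = {0, 1, 2, 3, 4, 5}
N_domain = {0, 1, 2}            # N is a substructure of M (same P, smaller domain)
P = {0, 1, 2, 4}                # shared interpretation of P

def holds_universal(domain):    # "for all x, P(x)"
    return all(x in P for x in domain)

def holds_existential(domain):  # "there exists x with P(x)"
    return any(x in P for x in domain)

# Downward absoluteness of the universal sentence: true in M implies true in N.
assert (not holds_universal(M_domain)) or holds_universal(N_domain)
# Upward absoluteness of the existential sentence: true in N implies true in M.
assert (not holds_existential(N_domain)) or holds_existential(M_domain)

# Note that the converse directions fail in general: here the universal sentence
# holds in the substructure N but not in the larger structure M.
print(holds_universal(M_domain), holds_universal(N_domain))      # False True
print(holds_existential(N_domain), holds_existential(M_domain))  # True True
```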
Absoluteness (logic)
[ "Mathematics" ]
1,575
[ "Mathematical logic" ]
12,374,512
https://en.wikipedia.org/wiki/Aeroprediction
The Aeroprediction Code is a semi-empirical computer program that estimates the aerodynamics of weapons over the Mach number range 0 to 20, angle of attack range 0 to 90 degrees, and for configurations that have various cross sectional body shapes. Weapons considered include projectiles, missiles, bombs, rockets and mortars. Both static and dynamic aerodynamics are predicted with good accuracy. The code may be used to compute the center of pressure and static margin of missiles. The Defense Acquisition Workforce Improvement Act provides insights into how to use aerodynamic prediction codes such as Aeroprediction in the design of missile for US acquisition. See also Missile Datcom References 1. "Body Alone Aerodynamics of Guided and Unguided Projectiles at Subsonic, Transonic, and Supersonic Mach Numbers", NWL TR-3796, Nov 1972. 2. "Aerodynamic Drag and Lift of General Body Shapes at Subsonic, Transonic, and Supersonic Mach Numbers", AGARD CP-124, AGARD Conference on Aerodynamic Drag, Izmir, Turkey, April 1973. 3. "Aerodynamics of Guided and Unguided Weapons: Part I Theory and Application", NWL TR-3018, Dec 1973 (written with W. McKerley) . 4. "Aerodynamics of Guided and Unguided Weapons: Part II Computer Program and Usage", NWL TR-3036, Jan 1974 (written with W. McKerley) . 5/6 "Static Aerodynamics of Missile Configurations for Mach Number Zero to Three", AIAA Paper No.74-538, Jun 1974 and Journal of Aircraft, Vol. 12, No.10, Oct 1975. 7. "Static and Dynamic Aeroballistics of Projectiles and Missiles", Paper No.9 Presented at the 10th Navy Symposium on Aeroballistics, NSWCDL, Jul 1975 (written with C. Swanson). 8. "The Effect of Boattail Shape on Magnus ", NSWCDL TR-3581, Dec 1976 (co-authored with G. Graff) . 9. "Empirical Method for Predicting the Magnus Characteristics of Spinning Shells ", AIAA Journal, Vol. 15 No.10, Oct 1977, (written with G. Graff) . 10. "Aerodynamics of Tactical Weapons to Mach Number 3 and Angle of Attack 15 Degrees: Part I - Theory and Application", NSWCDL TR-3584, Feb 1977 (written with C. Swanson) . 11. "Aerodynamics of Tactical Weapons to Mach Number 3 and Angle of Attack 15 Degrees: Part II - Computer Program and Usage", NSWCDL TR-3600, Mar 1977 (written with C. Swanson) . 12. "Optimal Projectile Shapes for Minimum Total Drag", NSWCDL TR-3597, May 1977 (written with Hager and F. DeJarnette) . 13. "Dynamic Derivatives for Missile Configurations to Mach Number Three", Journal of Spacecraft and Rockets, Vol. 15, No.4, 1978 (written with C. Swanson) . 14/15. "Aerodynamic Prediction Code for Tactical Weapons", Paper presented at 11th Navy Aeroballistics Symposium, Warminster, PA, 1978 and presented at 17th AIAA Aerospace Sciences Meeting, New Orleans, 1979 (written with L. Devan and J. Sun) . 16. "Aerodynamics Design Manual for Tactical Weapons", NSWC TR 81–156, July 1981 (written with L. Mason, L. Devan and D. McMillan). 17. "Aerodynamics of Tactical Weapons to Mach Number 8 and Angle of Attack of 180 degrees", Paper No.82-0250, presented at AIAA 20th Aerospace Sciences meeting in Orlando, FL, Jan 1982 (written with L. Devan and L. Mason). 18. "Second-Order Shock Expansion Theory Extended to Include Real Gas Effects", NAVSWC TR 90-683, Feb 1992 (written with M. Armistead, S. Rowles and F. DeJarnette) . 19/20. "A New Approximate Method for Calculating Real Gas Effects on Missile Configurations", AIAA Paper No. 92-4637, Atmospheric Flight Mechanics Conference, Aug 1992 (written with Armistead, Rowles, DeJarnette) . Also Journal of Spacecraft and Rockets, Vol. 
30, No.1, Jan-Feb 1993. 21. "New Methods for Predicting Nonlinear Lift, Center of Pressure, and Pitching Moment on Missile Configurations", NSWCDD/TR-92/217, Jul1992 (written with T. Hymer and L. Devan) . 22. "A New Semiempirical Method for Computing Nonlinear Angle-of- Attack Aerodynamics on Wing-Body-Tail Configurations", AIAA Paper No.93-0034 presented at 31st Aerospace Sciences Meeting, Jan 1993 (written with L. Devan and T. Hymer) . 23. "Incorporation of Boundary Layer Heating Predictive Methodology into the NAVSWC Aeroprediction Code", NSWCDD TR-93/29, Apr 1993 (written with R. McInville) . 24. "Improved Empirical Model for Base Drag Prediction on Missile Configurations Based on New Wind Tunnel Data", NSWCDD TR-92-509, Oct 1992, (written with F. Wilcox of NASA/LRC and T. Hymer of NSWC) . 25/26. "Base Drag Predictions of Missile Configurations", AIAA Paper No.93-3629 presented at AIAA Atmospheric Flight Mechanics Conference in Monterey, CA, Aug 1993. (Also Journal of Spacecraft and Rockets, Sept-Oct 1994, Vol. 31, No.5) (written with F. Wilcox of NASA/LRC and T. Hymer of NSWC). 27. "Improved Aeroprediction Code: Part I- Summary of New Methods and Comparison with Experiment", NSWCDD TR-93/91, May 1993 (written with T. Hymer and R. McInville) . 28. "Improved Aeroprediction Code: Part II - Computer Program User's Guide and Listing", NSWCDD TR-93/241, Aug 1993, (written with T. Hymer and R. McInville) . 29. "Application of 1993 Version of the Aeroprediction Code (AP93) to Several Missile Configurations", NSWCDD/TR-93/349, Sep 1993 (written with T. Hymer) 30. "Planar Nonlinear Missile Aeroprediction Code for All Mach Numbers", AIAA- Paper No.94-0026, 32nd Aerospace Sciences Meeting, Reno, NV, Jan 1994 (written with R. McInville and T. Hymer) . 31. "A New Semiempirical Method for Computing Nonlinear Missile Aerodynamics", AIAA Journal of Spacecraft and Rockets, Nov-Dec 1993 (written with L. Devan and T. Hymer) . 32. "State-of-the-Art Engineering Aeroprediction Methods with Emphasis on New Semiempirical Techniques for Predicting Nonlinear Aerodynamics on Complete Missile Configurations", NSWCDD/TR-93/551, Nov 1993. 33. "Engineering Codes: State-of-the-Art and New Methods", AGARD Paper No.2 on Missile Aerodynamics Given at Brussels, Belgium and Ankara, Turkey, June 1994. 34. "Incorporation of Boundary Layer Heating Predictive Methodology into the NAVSWC Aeroprediction Code", AIAA Paper No.2001, presented at 6th AIAA/ASME Joint Thermo physics and Heat Transfer Conference, Colorado Springs, CO, June 1994 (written with R. McInville) . 35. "Users guide for an Interactive, Personal Computer Interface for the Aeroprediction Code", NSWCDD/TR-94/107, June 1994 (written with T. Hymer and C. Downs of Vitro) . 36. "A New Method for Calculating Wing Alone Aerodynamics to Angle of Attack 180 Degrees", NSWCDD/TR-94/3, March 1994 (written with R. McInville) . 37. "An Improved Version of the Naval Surface Warfare Center Aeroprediction Code (AP93)", Journal of Spacecraft and Rockets, Sept-Oct 1994, Vol. 31, No.5, pp. 783–791 (written with R. McInville and T. Hymer) . 38. "The 1995 Version of the NSWC Aeroprediction Code: Part I - Summary of New Theoretical Methodology", NSWCDD/TR-94/379, Feb 1995 (written with R. McInville and T. Hymer) . 39. "The 1995 Version of the NSWC Aeroprediction Code: Part II - Computer Program Users Guide and Listing", NSWCDD/TR-94, March 1995 (written with T. Hymer and R. McInville) . 40. 
"Calculation of Wing-Alone Aerodynamics to High Angles of Attack", Journal of Spacecraft and Rockets, Jan-Feb 1995, Vol. 32, No.1, pp. 187–189 (written with R. McInville) . 41. "A New Method for Calculating Wing Alone Aerodynamics to Angle of Attack 180 Degrees", AIAA Paper 95–0757, presented at 33 Aerospace Sciences Meeting, Jan 9–12, 1995 at Reno, NV, (written with R. McInville) . 42. "Extension of the NSWCDD Aeroprediction Code to the Roll Position of 45 Degrees", NSWCDD/TR-95/160, Dec. 1995 (written with R. McInville) . 43. "Extension of the NSWCDD Aeroprediction Code Above Angle of Attack Thirty Degrees", Paper No.96-0065 34th Aerospace Sciences Meeting in Reno, NV, Jan 15–18, 1996 (written with R. McInville and T. Hymer) . 44. "Aeroprediction Code for Angle of Attack Above 30 Degrees", JSR, Vol. 33, No.3, May - June 1996, pp. 366–373, (written with R. McInville and T. Hymer) . 45. "A New Semiempirical Model for Wing-Tail Interference", AIAA Paper No.96-3393, AFM, San Diego, CA, July 29–31, 1996 (written with R. McInville) . 46. "Nonlinear Structural Load Distribution Methodology for the Aeroprediction Code", NSWC TR 96/133, Sept. 1966 (written with R. McInville and C. Housh of NAWCCL) . 47. "Calculation of Wing-Alone Aerodynamics to High Angles of Attack", Journal of Spacecraft and Rockets, Jan-Feb 1995, Vol. 32, No.1, pp. 187–189 (written with R. McInville) . 48. "Aeroprediction Methodology for Roll Positions of 0 and 45 Degrees", paper presented at Session 6B of AIAA Missile Sciences Conference, Monterey, CA, 3–5 December 1996 (Papers archived at DTIC/OCP, 8725 John Ray Kingman Road, Suite 0944, Fort Belvoir, VA 22060–6218) . (written by R. McInville) 49. "New Semiempirical Model for Wing Tail Interference", JSR, Vol. 34, No.1, Jan-Feb 1997, pp. 48–53. (written by R. McInville) 50. "Nonlinear Aeroprediction Methodology for roll Position of 45 Degrees", JSR, Vol. 34 No.1 Jan-Feb 1997, pp. 54–61. (written by R. McInville) 51. "An Improved Method for Predicting Axial Force at High Angle of Attack", NSWCDD/TR96/240, Feb 1997. (written by T. Hymer) 52. "Current Status and Future Plans of the Aeroprediction Code", invited AIAA Paper No.97-2279, 1997 AIAA Applied Aerodynamics conference, 24 June 1997, Atlanta, GA. 53/54. "Methods for distributing Semiempirical, Nonlinear, Aerodynamic Loads on Missile Components", AIAA Paper No. 97-1969, 29th AIAA Fluid Dynamics conference, 30 June-2 July 1997, Snowmass Village, CO (written by R. McInville and C. Housh of NAWCCL and JSR), Vol. 34 No 6 Nov-Dec 1997, pp 744–752. 55. "An Improved Semiempirical Method for Calculating Aerodynamics of Missiles With Noncircular Bodies", NSWCDD/TR-97/20, Sep 97 (written with R. McInville and T. Hymer) . 56/57. "Improved Methodology for Axial Force Prediction at Angle of Attack, " AIAA Paper 98–0579, 36th Aerospace Sciences Meeting, Jan 1998 and JSR Vol. 35, No.2, March–April 1998, pp 132–139 (written with T. Hymer) . 58. "The 1998 Version of the NSWC Aeroprediction Code : Part I - Summary of New Theoretical methodology", NSWC/TR98/1, Apr 98 (written with R. McInville and T. Hymer) . 59. "A Review of Some Recent New and Improved Semiempirical Aeroprediction methods", paper presented to the Applied Vehicle Technology panel of NATO in Sorrento, Italy, 11–15 May 1998 (written with R. McInville and T. Hymer) . 60. "User's guide for an Interactive Personal Computer Interface for the 1998 Aeroprediction Code (AP98) ", NSWCDD/TR-98/7, Jun 98 (written with T. Hymer and C.Downs) . 61. 
"The 1998 Version of the NSWCDD Aeroprediction Code: Part II - Program User's Guide and Source Code Listing", NSWCDD/TR-97/73, August 1998 (written with R. McInvi11e and T. Hymer) . 62. "A Robust Method for Calculating Aerodynamics of Noncircular-Cross section Weapons", AIAA Paper 98–4270, pp. 323–340, presented at AIAA AFM Conference, Boston, Massachusetts, Aug 98 (written with R. McInville and T.Hymer) . 63. "Review and Extension of Computational Methods for Noncircular-Cross Section Weapons", JSR, Vol. 35, No.5, Sept-Oct 1998, pp. 584–600 (written with R. McInville and T. Hymer) . 64. "The 1998 Version of the Aeroprediction Code", AIAA Paper 99–0762, AIAA 37th Aerospace -Sciences Meeting, Reno, Nevada, Jan. 1999 (written with R. McInville and T.Hymer) . 65. "A Simplified Method for Predicting Aerodynamics of Multi-Fin Weapons", NSWCDD/TR-99/19, March 1999 (written with R. McInville and D. Robinson) . 66. "Application of the 1998 Version of the Aeroprediction Code", JSR Vol. 36, No.5, Sept-Oct, 1999, pp. 633–645. 67. "Refinements in the Aeroprediction Code Based on Recent Wind Tunnel Data", NSWCDD/TR-99/116, December 1999 (written with R. McInville) . 68. "A Semiempirical method for Predicting Multifin Weapon Aerodynamics", AIAA Paper 2000–0766, 38th Aerospace Sciences Meeting, Reno, NV, Jan 10–13, 2000 (written with R. McInville and D. Robinson) . 69. "Improvements in Pitch Damping for the Aeroprediction Code with Particular Emphasis on Flare Configurations", NSWCDD TR-00/009, April 2000 (written with T. Hymer) . 70. "Modifications to the Aeroprediction Code Based on Recent Test Data", AIAA Paper presented at the AIAA Atmospheric Flight Mechanics Conference, Denver, CO, 14–17 August 2000 (written with R. McInville) . 71. Approximate Methods for Weapon Aerodynamics, Book published by AIAA progress in Astronautics and Aeronautics, Vol 186, August 2000 72. Improved Power-on, Base Drag Methodology for the Aeroprediction Code," NSWCDD/TR-00/67, Oct 2000. (written with T. Hymer) 73."Semiempirical Prediction of Pitch Damping Moments for Configurations with - Flares," AIAA paper 2001–0101, Reno, NV, Jan 2001 (written with T. Hymer) 74. "Evaluation and Improvements to the Aeroprediction Code Based on Recent Test Data," JSR Vol. 37, No.6, Nov.-Dec. 2000, pp. 720–730. (written with R. McInville and T. Hymer) 75. "Semiempirical Prediction of Pitch Damping Moments for Configuration with Flares", JSR Vo138, No. 2, March–April 2001, pps. 150–158. (written with T. Hymer) 76. "Improved Power-On, Base Drag Methodology for the Aeroprediction Code," NSWCDD/TR-00/67, May, 2001. (written with T. Hymer) 77/78. "An Improved Semiempirical Method for Power-On Base Drag Prediction," AIAA Paper No.2001-4328, Aug. 2001, and JSR, Vol, No. 1, Jan. – Feb 2002, pp. (written with T. Hymer) 79. "A SemiempiricalMethod for Predicting Aerodynamics of Trailing Edge Flaps," NSWCDD/TR 01/30, June 2001 (written with T. Hymer) 80/81. "A Semiempirical Method for Predicting Aerodynamics of Trailing Edge Flaps", AIAA Paper NO. 2002-4510 Aug. 2002 and JSR, Vol. 40, NO.1, Jan-Feb 2003 (written with T. Hymer). 82. "The 2002 Version of the Aeroprediction Code: Part I- Summary of New Theoretical Methodology", NSWCDD/TR-01/108, March 2002 (written with T. Hymer) 83."The 2002 version of the Aeroprediction Code: Part II Users Guide", NSWCDD Technical Report in publication (written with T. Hymer) 84. 
"Integration of the Aeroprediction Code with a Point Mass Ballistic Model (TRAMOD) and a Trim Three Degree-of-Freedom Model (MEM)," NSWCDD/ TR-00/77, March 2002 (written with T. Hymer) 85/86. "The 2002 Version of the Aeroprediction Code", AIAA Paper NO. 2003-26, Jan 6–9, 2003 and JSR, Vol. 41, NO.2, March – April 2004, pps. 232-247 (written with T. Hymer). 87/88. "An Approximate Method to Estimate Wing Trailing-Edge Bluntness Effects on Normal Force", AIAA Paper NO. 2004-16, Jan. 2004 and JSR Vol. 41, NO. 6, Nov. –Dec. 2004, pps. 932-941 (written with T. Hymer). 89. "Application of the 2002 Version of the Aeroprediction Code", RAES Aerospace Aerodynamics Research Conference, 10–12 June 2003, London, United Kingdom (written with T. Hymer). 90. "The 2005 Version of the Aeroprediction Code Part I- Summary of the New Theoretical Methodology", API Report NO. 1, Jan. 2004 (written with T. Hymer). 91. "The 2005 Version of the Aeroprediction Code Part II- Users Guide", API Report NO. 2, June 2004 (written with T. Hymer, Cornell Downs, and L Moore). 92/93. "The 2005 Version of the Aeroprediction Code" AIAA Paper NO. 2004-4715, Aug . 2004 and JSR Vol. 42, No.2, March–April, 2005 (written with T. Hymer). 94/95. "Improved Aerodynamics for Configurations with Boattails, AIAA Paper Presented at Atmospheric Flight Mechanics Conference Hilton Head Island, SC, August 2007, and JSR, Vol. 45, No. 2, March–April 2008, pps 270-281,(written with L. Moore). 96/97. "New Methods To Predict Nonlinear Pitch Damping," AIAA Paper presented at 46th AIAA Aerospace Sciences Meeting, Reno, NV, Jan. 2008 and JSR, Vol. 45, No. 3, May–June 2008, pp. 495 – 503 (written with L. Moore). 98/99. "2009 Version of the Aeroprediction Code: AP09," AIAA paper presented at 47th AIAA Aerospace Sciences Meeting, Orlando, Fl, Jan. 2009 and JSR, Vol. 45, No.4, July- Aug 2008, pp 677–690 (written with L. Moore). 100/101. "New Method To Predict Nonlinear Roll Damping Moments," AIAA paper presented at the 38th Fluid Dynamics Conference, Seattle, WA., June 2008, and JSR Vol 45, No. 5, Sept. – Oct. 2008, pp 955 – 964 (written with L. Moore). 102. "The 2009 Version of the Aeroprediction Code: The AP09", API Report No. 3, Jan. 2008 (written with L. Moore). References Aerospace engineering software Aerodynamics
Aeroprediction
[ "Chemistry", "Engineering" ]
4,649
[ "Aerospace engineering", "Aerodynamics", "Aerospace engineering software", "Fluid dynamics" ]
23,204
https://en.wikipedia.org/wiki/Physical%20quantity
A physical quantity (or simply quantity) is a property of a material or system that can be quantified by measurement. A physical quantity can be expressed as a value, which is the algebraic multiplication of a numerical value and a unit of measurement. For example, the physical quantity mass, symbol m, can be quantified as m = n kg, where n is the numerical value and kg is the unit symbol (for kilogram). Quantities that are vectors have, besides numerical value and unit, direction or orientation in space. Components Following ISO 80000-1, any value or magnitude of a physical quantity is expressed as a comparison to a unit of that quantity. The value of a physical quantity Z is expressed as the product of a numerical value {Z} (a pure number) and a unit [Z]: Z = {Z} × [Z]. For example, let Z be "2 metres"; then {Z} = 2 is the numerical value and [Z] = metre is the unit. Conversely, the numerical value expressed in an arbitrary unit can be obtained as {Z} = Z / [Z]. The multiplication sign is usually left out, just as it is left out between variables in the scientific notation of formulas. The convention used to express quantities is referred to as quantity calculus. In formulas, the unit [Z] can be treated as if it were a specific magnitude of a kind of physical dimension: see Dimensional analysis for more on this treatment. Symbols and nomenclature International recommendations for the use of symbols for quantities are set out in ISO/IEC 80000, the IUPAP red book and the IUPAC green book. For example, the recommended symbol for the physical quantity "mass" is m, and the recommended symbol for the quantity "electric charge" is Q. Typography Physical quantities are normally typeset in italics. Purely numerical quantities, even those denoted by letters, are usually printed in roman (upright) type, though sometimes in italics. Symbols for elementary functions (circular trigonometric, hyperbolic, logarithmic etc.), changes in a quantity like Δ in Δy or operators like d in dx, are also recommended to be printed in roman type. Examples: real numbers, such as 1 or √2; e, the base of natural logarithms; i, the imaginary unit; π for the ratio of a circle's circumference to its diameter, 3.14159265...; δx, Δy, dz, representing differences (finite or otherwise) in the quantities x, y and z; sin α, sinh γ, log x. Support Scalars A scalar is a physical quantity that has magnitude but no direction. Symbols for physical quantities are usually chosen to be a single letter of the Latin or Greek alphabet, and are printed in italic type. Vectors Vectors are physical quantities that possess both magnitude and direction and whose operations obey the axioms of a vector space. Symbols for physical quantities that are vectors are in bold type, underlined or with an arrow above. For example, if u is the speed of a particle, then the straightforward notations for its velocity are u in bold type, u underlined, or u with an arrow above. Tensors Scalar and vector quantities are the simplest tensor quantities; more general tensors can be used to describe further physical properties. For example, the Cauchy stress tensor possesses magnitude, direction, and orientation qualities. Dimensions, units, and kind Dimensions The notion of dimension of a physical quantity was introduced by Joseph Fourier in 1822. By convention, physical quantities are organized in a dimensional system built upon base quantities, each of which is regarded as having its own dimension. Unit There is often a choice of unit, though SI units are usually used in scientific contexts due to their ease of use, international familiarity and prescription. 
For example, a quantity of mass might be represented by the symbol m, and could be expressed in the units kilograms (kg), pounds (lb), or daltons (Da). Kind Dimensional homogeneity is not necessarily sufficient for quantities to be comparable; for example, both kinematic viscosity and thermal diffusivity have dimension of square length per time (in units of m2/s). Quantities of the same kind share extra commonalities beyond their dimension and units allowing their comparison; for example, not all dimensionless quantities are of the same kind. Base and derived quantities Base quantities A system of quantities relates physical quantities, and due to this dependence, a limited number of quantities can serve as a basis in terms of which the dimensions of all the remaining quantities of the system can be defined. A set of mutually independent quantities may be chosen by convention to act as such a set, and are called base quantities. The seven base quantities of the International System of Quantities (ISQ), with their corresponding SI units and dimension symbols, are: length (metre, dimension L), mass (kilogram, M), time (second, T), electric current (ampere, I), thermodynamic temperature (kelvin, Θ), amount of substance (mole, N), and luminous intensity (candela, J). Other conventions may have a different number of base units (e.g. the CGS and MKS systems of units). The angular quantities, plane angle and solid angle, are defined as derived dimensionless quantities in the SI. For some relations, their units radian and steradian can be written explicitly to emphasize the fact that the quantity involves plane or solid angles. General derived quantities Derived quantities are those whose definitions are based on other physical quantities (base quantities). Space Important applied base units for space and time are below. Area and volume are thus, of course, derived from the length, but included for completeness as they occur frequently in many derived quantities, in particular densities. Densities, flows, gradients, and moments Important and convenient derived quantities such as densities, fluxes, flows, currents are associated with many quantities. Sometimes different terms such as current density and flux density, rate, frequency and current, are used interchangeably in the same context; sometimes they are used uniquely. To clarify these effective template-derived quantities, we use q to stand for any quantity within some scope of context (not necessarily base quantities) and present in the table below some of the most commonly used symbols where applicable, their definitions, usage, SI units and SI dimensions – where [q] denotes the dimension of q. For time derivatives, specific, molar, and flux densities of quantities, there is no one symbol; nomenclature depends on the subject, though time derivatives can be generally written using overdot notation. For generality we use qm, qn, and F respectively. No symbol is necessarily required for the gradient of a scalar field, since only the nabla/del operator ∇ or grad needs to be written. For spatial density, current, current density and flux, the notations are common from one context to another, differing only by a change in subscripts. For current density, the direction of flow is indicated by a unit vector tangent to a flowline. Notice the dot product with the unit normal for a surface, since the amount of current passing through the surface is reduced when the current is not normal to the area. Only the current passing perpendicular to the surface contributes to the current passing through the surface; no current passes in the (tangential) plane of the surface. The calculus notations below can be used synonymously. 
If X is an n-variable function X(x_1, x_2, …, x_n), then: Differential: the differential n-space volume element is d^n x = dx_1 dx_2 … dx_n. Integral: the multiple integral of X over the n-space volume is ∫ X d^n x = ∫ … ∫ X(x_1, x_2, …, x_n) dx_1 dx_2 … dx_n. See also List of physical quantities List of photometric quantities List of radiometric quantities Philosophy of science Quantity Observable quantity Specific quantity Notes References Further reading Cook, Alan H. The observational foundations of physics, Cambridge, 1994. Essential Principles of Physics, P.M. Whelan, M.J. Hodgson, 2nd Edition, 1978, John Murray, Encyclopedia of Physics, R.G. Lerner, G.L. Trigg, 2nd Edition, VHC Publishers, Hans Warlimont, Springer, 2005, pp 12–13 Physics for Scientists and Engineers: With Modern Physics (6th Edition), P.A. Tipler, G. Mosca, W.H. Freeman and Co, 2008, 9-781429-202657 External links Computer implementations DEVLIB project in C# Language and Delphi Language Physical Quantities project in C# Language at Code Plex Physical Measure C# library project in C# Language at Code Plex Ethical Measures project in C# Language at Code Plex Engineer JS online calculation and scripting tool supporting physical quantities. physical-quantity a web component (custom HTML element) for expressing physical quantities on the web/Internet, featuring self-contained unit conversion, a compact and clean UI, no redundant dual units, and seamless integration across all websites and platforms. Demo
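The quantity-calculus convention described above, in which a value is a numerical value multiplied by a unit and the numerical value in another unit is obtained by dividing the value by that unit, can be mirrored in a tiny program. This is a minimal sketch, not a substitute for a real units library; the Quantity class and the small table of length units are invented for illustration.

```python
# Each unit is stored as its size expressed in the SI base unit (the metre).
LENGTH_UNITS = {"m": 1.0, "cm": 0.01, "mm": 0.001, "in": 0.0254}

class Quantity:
    """Z = {Z} * [Z]: a numerical value paired with a unit."""

    def __init__(self, number, unit):
        self.number, self.unit = number, unit

    def numerical_value_in(self, unit):
        # {Z} in the new unit = Z / [new unit] = number * [old unit] / [new unit]
        return self.number * LENGTH_UNITS[self.unit] / LENGTH_UNITS[unit]

Z = Quantity(2, "m")                  # let Z be "2 metres"
print(Z.numerical_value_in("cm"))     # 200.0
print(Z.numerical_value_in("in"))     # about 78.74
```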
Physical quantity
[ "Physics", "Mathematics" ]
1,762
[ "Physical phenomena", "Quantity", "Physical quantities", "Physical properties" ]
23,205
https://en.wikipedia.org/wiki/Physical%20constant
A physical constant, sometimes fundamental physical constant or universal constant, is a physical quantity that cannot be explained by a theory and therefore must be measured experimentally. It is distinct from a mathematical constant, which has a fixed numerical value, but does not directly involve any physical measurement. There are many physical constants in science, some of the most widely recognized being the speed of light in vacuum c, the gravitational constant G, the Planck constant h, the electric constant ε0, and the elementary charge e. Physical constants can take many dimensional forms: the speed of light signifies a maximum speed for any object and its dimension is length divided by time, while the proton-to-electron mass ratio is dimensionless. The term "fundamental physical constant" is sometimes used to refer to universal-but-dimensioned physical constants such as those mentioned above. Increasingly, however, physicists reserve the expression for the narrower case of dimensionless universal physical constants, such as the fine-structure constant α, which characterizes the strength of the electromagnetic interaction. Physical constants, as discussed here, should not be confused with empirical constants, which are coefficients or parameters assumed to be constant in a given context without being fundamental. Examples include the characteristic time, characteristic length, or characteristic number (dimensionless) of a given system, or material constants (e.g., Madelung constant, electrical resistivity, and heat capacity) of a particular material or substance. Characteristics Physical constants are parameters in a physical theory that cannot be explained by that theory. This may be due to the apparent fundamental nature of the constant or due to limitations in the theory. Consequently, physical constants must be measured experimentally. The set of parameters considered to be physical constants changes as physical models change, and how fundamental a given constant appears can also change. For example, c, the speed of light, was originally considered a property of light, a specific system. The discovery and verification of Maxwell's equations connected the same quantity with an entire system, electromagnetism. When the theory of special relativity emerged, the quantity came to be understood as the basis of causality. The speed of light is so fundamental it now defines the international unit of length. Relationship to units Numerical values Whereas the physical quantity indicated by a physical constant does not depend on the unit system used to express the quantity, the numerical values of dimensional physical constants do depend on choice of unit system. The term "physical constant" refers to the physical quantity, and not to the numerical value within any given system of units. For example, the speed of light is defined as having the numerical value of 299,792,458 when expressed in the SI unit metres per second, and as having the numerical value of 1 when expressed in the natural units Planck length per Planck time. While its numerical value can be defined at will by the choice of units, the speed of light itself is a single physical constant. International System of Units Since the 2019 revision, all of the units in the International System of Units have been defined in terms of fixed natural phenomena, including three fundamental constants: the speed of light in vacuum, c; the Planck constant, h; and the elementary charge, e. 
As a result of the new definitions, an SI unit like the kilogram can be written in terms of fundamental constants and one experimentally measured constant, ΔνCs: 1 kg = (299 792 458)² / ((6.626 070 15 × 10⁻³⁴)(9 192 631 770)) · h ΔνCs / c². Natural units It is possible to combine dimensional universal physical constants to define fixed quantities of any desired dimension, and this property has been used to construct various systems of natural units of measurement. Depending on the choice and arrangement of constants used, the resulting natural units may be convenient to an area of study. For example, Planck units, constructed from c, G, ħ, and kB give conveniently sized measurement units for use in studies of quantum gravity, and atomic units, constructed from ħ, me, e and 4πε0 give convenient units in atomic physics. The choice of constants used leads to widely varying quantities. Number of fundamental constants The number of fundamental physical constants depends on the physical theory accepted as "fundamental". Currently, this is the theory of general relativity for gravitation and the Standard Model for electromagnetic, weak and strong nuclear interactions and the matter fields. Between them, these theories account for a total of 19 independent fundamental constants. There is, however, no single "correct" way of enumerating them, as it is a matter of arbitrary choice which quantities are considered "fundamental" and which are "derived". Uzan lists 22 "fundamental constants of our standard model" as follows: the gravitational constant G, the speed of light c, the Planck constant h, the 9 Yukawa couplings for the quarks and leptons (equivalent to specifying the rest mass of these elementary particles), 2 parameters of the Higgs field potential, 4 parameters for the quark mixing matrix, 3 coupling constants for the gauge groups SU(3) × SU(2) × U(1) (or equivalently, two coupling constants and the Weinberg angle), a phase for the quantum chromodynamics vacuum. The number of 19 independent fundamental physical constants is subject to change under possible extensions of the Standard Model, notably by the introduction of neutrino mass (equivalent to seven additional constants, i.e. 3 Yukawa couplings and 4 lepton mixing parameters). The discovery of variability in any of these constants would be equivalent to the discovery of "new physics". The question as to which constants are "fundamental" is neither straightforward nor meaningless, but a question of interpretation of the physical theory regarded as fundamental; as has been pointed out, not all physical constants are of the same importance, with some having a deeper role than others. One proposed classification scheme groups constants into three types: A: physical properties of particular objects; B: characteristics of a class of physical phenomena; C: universal constants. The same physical constant may move from one category to another as the understanding of its role deepens; this has notably happened to the speed of light, which was a class A constant (characteristic of light) when it was first measured, but became a class B constant (characteristic of electromagnetic phenomena) with the development of classical electromagnetism, and finally a class C constant with the discovery of special relativity. Tests on time-independence By definition, fundamental physical constants are subject to measurement, so that their being constant (independent of both the time and position of the performance of the measurement) is necessarily an experimental result and subject to verification. 
Paul Dirac in 1937 speculated that physical constants such as the gravitational constant or the fine-structure constant might be subject to change over time in proportion to the age of the universe. Experiments can in principle only put an upper bound on the relative change per year. For the fine-structure constant, this upper bound is comparatively low, at roughly 10⁻¹⁷ per year (as of 2008). The gravitational constant is much more difficult to measure with precision, and conflicting measurements in the 2000s inspired the controversial suggestion, in a 2015 paper, of a periodic variation of its value. However, while its value is not known to great precision, the possibility of observing type Ia supernovae which happened in the universe's remote past, paired with the assumption that the physics involved in these events is universal, allows for an upper bound of less than 10⁻¹⁰ per year for the gravitational constant over the last nine billion years. Similarly, an upper bound of the change in the proton-to-electron mass ratio has been placed at 10⁻⁷ over a period of 7 billion years (or 10⁻¹⁶ per year) in a 2012 study based on the observation of methanol in a distant galaxy. It is problematic to discuss the proposed rate of change (or lack thereof) of a single dimensional physical constant in isolation. The reason for this is that the choice of units is arbitrary, making the question of whether a constant is undergoing change an artefact of the choice (and definition) of the units. For example, in SI units, the speed of light was given a defined value in 1983. Thus, it was meaningful to experimentally measure the speed of light in SI units prior to 1983, but it is not so now. Similarly, with effect from May 2019, the Planck constant has a defined value, such that all SI base units are now defined in terms of fundamental physical constants. With this change, the international prototype of the kilogram was retired as the last physical object used in the definition of any SI unit. Tests on the immutability of physical constants look at dimensionless quantities, i.e. ratios between quantities of like dimensions, in order to escape this problem. Changes in physical constants are not meaningful if they result in an observationally indistinguishable universe. For example, a "change" in the speed of light c would be meaningless if accompanied by a corresponding change in the elementary charge e so that the expression e²/(4πε0ħc) (the fine-structure constant) remained unchanged. Dimensionless physical constants Any ratio between physical constants of the same dimensions results in a dimensionless physical constant, for example, the proton-to-electron mass ratio. The fine-structure constant α is the best known dimensionless fundamental physical constant. It is the value of the elementary charge squared expressed in Planck units. This value has become a standard example when discussing the derivability or non-derivability of physical constants. Introduced by Arnold Sommerfeld, its value and uncertainty as determined at the time were consistent with 1/137. This motivated Arthur Eddington (1929) to construct an argument why its value might be 1/137 precisely, which related to the Eddington number, his estimate of the number of protons in the Universe. By the 1940s, it became clear that the value of the fine-structure constant deviates significantly from the precise value of 1/137, refuting Eddington's argument. 
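The dimensionless constants discussed above are easy to evaluate from their dimensional ingredients; the sketch below (again with CODATA-style values, for illustration only) computes the fine-structure constant and the proton-to-electron mass ratio and shows that the results carry no units.

```python
import math

e    = 1.602_176_634e-19    # elementary charge, C (exact)
eps0 = 8.854_187_81e-12     # electric constant, F/m
hbar = 1.054_571_817e-34    # reduced Planck constant, J*s
c    = 299_792_458.0        # speed of light, m/s (exact)
m_p  = 1.672_621_92e-27     # proton mass, kg
m_e  = 9.109_383_70e-31     # electron mass, kg

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)   # fine-structure constant
mu    = m_p / m_e                                # proton-to-electron mass ratio

print(alpha, 1 / alpha)   # ~0.0072974 and ~137.036 -- close to, but not exactly, 1/137
print(mu)                 # ~1836.15
```

A drift in c or e alone could always be absorbed into a redefinition of the units, but a drift in α or in the mass ratio could not, which is why the observational bounds quoted above are stated for dimensionless combinations.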
Fine-tuned universe Some physicists have explored the notion that if the dimensionless physical constants had sufficiently different values, our Universe would be so radically different that intelligent life would probably not have emerged, and that our Universe therefore seems to be fine-tuned for intelligent life. The anthropic principle states a logical truism: the fact of our existence as intelligent beings who can measure physical constants requires those constants to be such that beings like us can exist. There are a variety of interpretations of the constants' values, including that of a divine creator (the apparent fine-tuning is actual and intentional), or that the universe is one universe of many in a multiverse (e.g. the many-worlds interpretation of quantum mechanics), or even that, if information is an innate property of the universe and logically inseparable from consciousness, a universe without the capacity for conscious beings cannot exist. Table of physical constants The table below lists some frequently used constants and their CODATA recommended values. For a more extended list, refer to List of physical constants. See also List of common physics notations List of mathematical constants List of physical constants Mathematical constant References External links Sixty Symbols, University of Nottingham IUPAC – Gold Book
Physical constant
[ "Physics", "Mathematics" ]
2,294
[ "Physical constants", "Quantity", "Physical quantities" ]
23,311
https://en.wikipedia.org/wiki/Pasteurization
In food processing, pasteurization (also pasteurisation) is a process of food preservation in which packaged foods (e.g., milk and fruit juices) are treated with mild heat, usually to less than 100 °C, to eliminate pathogens and extend shelf life. Pasteurization either destroys or deactivates microorganisms and enzymes that contribute to food spoilage or the risk of disease, including vegetative bacteria, but most bacterial spores survive the process. Pasteurization is named after the French microbiologist Louis Pasteur, whose research in the 1860s demonstrated that thermal processing would deactivate unwanted microorganisms in wine. Spoilage enzymes are also inactivated during pasteurization. Today, pasteurization is used widely in the dairy industry and other food processing industries for food preservation and food safety. By the year 1999, most liquid products were heat treated in a continuous system where heat is applied using a heat exchanger or the direct or indirect use of hot water and steam. Due to the mild heat, there are minor changes to the nutritional quality and sensory characteristics of the treated foods. Pascalization or high pressure processing (HPP) and pulsed electric field (PEF) are non-thermal processes that are also used to pasteurize foods. History Heating wine for preservation has been known in China since AD 1117, and was documented in Japan in the diary Tamonin-nikki written by a series of monks between 1478 and 1618. In 1768, research performed by the Italian priest and scientist Lazzaro Spallanzani proved that a product could be made "sterile" after thermal processing. Spallanzani boiled meat broth for one hour, sealed the container immediately after boiling, and noticed that the broth did not spoil and was free from microorganisms. In 1795, a Parisian chef and confectioner named Nicolas Appert began experimenting with ways to preserve foodstuffs, succeeding with soups, vegetables, juices, dairy products, jellies, jams, and syrups. He placed the food in glass jars, sealed them with cork and sealing wax and placed them in boiling water. In that same year, the French military offered a cash prize of 12,000 francs for a new method to preserve food. After some 14 or 15 years of experimenting, Appert submitted his invention and won the prize in January 1810. Later that year, Appert published L'Art de conserver les substances animales et végétales ("The Art of Preserving Animal and Vegetable Substances"). This was the first cookbook on modern food preservation methods. La Maison Appert, in the town of Massy, near Paris, became the first food-bottling factory in the world, preserving a variety of foods in sealed bottles. Appert filled thick, large-mouthed glass bottles with produce of every description, ranging from beef and fowl to eggs, milk and prepared dishes. He left air space at the top of the bottle, and the cork would then be sealed firmly in the jar by using a vise. The bottle was then wrapped in canvas to protect it while it was dunked into boiling water and then boiled for as much time as Appert deemed appropriate for cooking the contents thoroughly. Appert patented his method, sometimes called appertisation in his honor. Appert's method was so simple and workable that it quickly became widespread. In 1810, the British inventor and merchant Peter Durand, also of French origin, patented his own method, but this time in a tin can, so creating the modern-day process of canning foods. 
In 1812, the Englishmen Bryan Donkin and John Hall purchased both patents and began producing preserves. Just a decade later, Appert's method of canning had made its way to America. Tin can production was not common until the beginning of the 20th century, partly because a hammer and chisel were needed to open cans until the invention of a can opener by Robert Yeates in 1855. A less aggressive method was developed by French chemist Louis Pasteur during an 1864 summer holiday in Arbois. To remedy the frequent acidity of the local aged wines, he found out experimentally that it is sufficient to heat a young wine to only about for a short time to kill the microbes, and that the wine could subsequently be aged without sacrificing the final quality. In honor of Pasteur, this process is known as pasteurization. Pasteurization was originally used as a way of preventing wine and beer from souring, and it would be many years before milk was pasteurized. In the United States in the 1870s, before milk was regulated, it was common for milk to contain substances intended to mask spoilage. Milk Milk is an excellent medium for microbial growth, and when it is stored at ambient temperature, bacteria and other pathogens soon proliferate. The US Centers for Disease Control (CDC) says improperly handled raw milk is responsible for nearly three times more hospitalizations than any other food-borne disease source, making it one of the world's most dangerous food products. Diseases prevented by pasteurization can include tuberculosis, brucellosis, diphtheria, scarlet fever, and Q-fever; it also kills the harmful bacteria Salmonella, Listeria, Yersinia, Campylobacter, Staphylococcus aureus, and Escherichia coli O157:H7, among others. Prior to industrialization, dairy cows were kept in urban areas to limit the time between milk production and consumption, hence the risk of disease transmission via raw milk was reduced. As urban densities increased and supply chains lengthened to the distance from country to city, raw milk (often days old) became recognized as a source of disease. For example, between 1912 and 1937, some 65,000 people died of tuberculosis contracted from consuming milk in England and Wales alone. Because tuberculosis has a long incubation period in humans, it was difficult to link unpasteurized milk consumption with the disease. In 1892, chemist Ernst Lederle experimentally inoculated milk from tuberculosis-diseased cows into guinea pigs, which caused them to develop the disease. In 1910, Lederle, then in the role of Commissioner of Health, introduced mandatory pasteurization of milk in New York City. Developed countries adopted milk pasteurization to prevent such disease and loss of life, and as a result milk is now considered a safer food. A traditional form of pasteurization by scalding and straining of cream to increase the keeping qualities of butter was practiced in Great Britain in the 18th century and was introduced to Boston in the British Colonies by 1773, although it was not widely practiced in the United States for the next 20 years. Pasteurization of milk was suggested by Franz von Soxhlet in 1886. In the early 20th century, Milton Joseph Rosenau established the standards – i.e. low-temperature, slow heating at for 20 minutes – for the pasteurization of milk while at the United States Marine Hospital Service, notably in his publication of The Milk Question (1912). States in the U.S. soon began enacting mandatory dairy pasteurization laws, with the first in 1947, and in 1973 the U.S. 
federal government required pasteurization of milk used in any interstate commerce. The shelf life of refrigerated pasteurized milk is greater than that of raw milk. For example, high-temperature, short-time (HTST) pasteurized milk typically has a refrigerated shelf life of two to three weeks, whereas ultra-pasteurized milk can last much longer, sometimes two to three months. When ultra-heat treatment (UHT) is combined with sterile handling and container technology (such as aseptic packaging), it can even be stored non-refrigerated for up to 9 months. According to the Centers for Disease Control, between 1998 and 2011, 79% of dairy-related disease outbreaks in the United States were due to raw milk or cheese products. They report 148 outbreaks and 2,384 illnesses (with 284 requiring hospitalization), as well as two deaths due to raw milk or cheese products during the same time period. Medical equipment Medical equipment, notably respiratory and anesthesia equipment, is often disinfected using hot water, as an alternative to chemical disinfection. The temperature is raised to 70 °C (158 °F) for 30 minutes. Pasteurization process Pasteurization is a mild heat treatment of liquid foods (both packaged and unpackaged) where products are typically heated to below . The heat treatment and cooling process are designed to inhibit a phase change of the product. The acidity of the food determines the parameters (time and temperature) of the heat treatment as well as the duration of shelf life. Parameters also take into account nutritional and sensory qualities that are sensitive to heat. In acidic foods (with pH of 4.6 or less), such as fruit juice and beer, the heat treatments are designed to inactivate enzymes (pectin methylesterase and polygalacturonase in fruit juices) and destroy spoilage microbes (yeast and lactobacillus). Due to the low pH of acidic foods, pathogens are unable to grow. The shelf-life is thereby extended several weeks. In less acidic foods (with pH greater than 4.6), such as milk and liquid eggs, the heat treatments are designed to destroy pathogens and spoilage organisms (yeast and molds). Not all spoilage organisms are destroyed under pasteurization parameters, so subsequent refrigeration is necessary. High-temperature short-time (HTST) pasteurization, such as that used for milk ( for 15 seconds) ensures safety of milk and provides a refrigerated shelf life of approximately two weeks. In ultra-high-temperature (UHT) pasteurization, milk is pasteurized at for 1–2 seconds, which provides the same level of safety, but along with the packaging, extends shelf life to three months under refrigeration. Equipment Food can be pasteurized either before or after being packaged into containers. Pasteurization of food in containers generally uses either steam or hot water. When food is packaged in glass, hot water is used to avoid cracking the glass from thermal shock. When plastic or metal packaging is used, the risk of thermal shock is low, so steam or hot water is used. Most liquid foods are pasteurized by using a continuous process that passes the food through a heating zone, a hold tube to keep it at the pasteurization temperature for the desired time, and a cooling zone, after which the product is filled into the package. Plate heat exchangers are often used for low-viscosity products such as animal milks, nut milks and juices. A plate heat exchanger is composed of many thin vertical stainless steel plates that separate the liquid from the heating or cooling medium. 
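As a rough illustration of how the hold-tube stage of the continuous heating–holding–cooling arrangement described above can be sized, the sketch below estimates the tube length needed so that even the fastest-moving element of product spends the full holding time at temperature. The flow rate, tube bore and the 1.2 peak-to-mean velocity factor are illustrative assumptions, not values taken from this article.

```python
import math

def hold_tube_length(flow_m3_per_h, tube_id_m, hold_time_s, peak_to_mean=1.2):
    """Illustrative hold-tube sizing: length such that the fastest particle still
    spends hold_time_s in the tube (peak velocity taken as 1.2x the mean, a common
    rule of thumb for turbulent pipe flow)."""
    area = math.pi * (tube_id_m / 2) ** 2        # tube cross-section, m^2
    v_mean = (flow_m3_per_h / 3600.0) / area     # mean velocity, m/s
    return peak_to_mean * v_mean * hold_time_s   # required length, m

# Hypothetical HTST line: 5 m^3/h through a 48 mm bore tube with a 15 s hold
print(round(hold_tube_length(5.0, 0.048, 15.0), 1))   # ~13.8 m of tube
```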
Shell and tube heat exchangers are often used for the pasteurization of foods that are non-Newtonian fluids, such as dairy products, tomato ketchup and baby foods. A tube heat exchanger is made up of concentric stainless steel tubes. Food passes through the inner tube or tubes, while the heating/cooling medium is circulated through the outer tube. Scraped-surface heat exchangers are a type of shell and tube which contain an inner rotating shaft having spring-loaded blades that serve to scrape away any highly viscous material that accumulates on the wall of the tube. The benefits of using a heat exchanger to pasteurize foods before packaging, versus pasteurizing foods in containers are: Higher uniformity of treatment Greater flexibility with regard to the products that can be pasteurized Higher heat transfer-efficiency Greater throughput After being heated in a heat exchanger, the product flows through a hold tube for a set period of time to achieve the required treatment. If pasteurization temperature or time is not achieved, a flow diversion valve is used to divert under-processed product back to the raw product tank. If the product is adequately processed, it is cooled in a heat exchanger, then filled. Verification Direct microbiological techniques are the ultimate measurement of pathogen contamination, but these are costly and time-consuming, which means that products have a reduced shelf-life by the time pasteurization is verified. As a result of the unsuitability of microbiological techniques, milk pasteurization efficacy is typically monitored by checking for the presence of alkaline phosphatase, which is denatured by pasteurization. Destruction of alkaline phosphatase ensures the destruction of common milk pathogens. Therefore, the presence of alkaline phosphatase is an ideal indicator of pasteurization efficacy. For liquid eggs, the effectiveness of the heat treatment is measured by the residual activity of α-amylase. Efficacy against pathogenic bacteria During the early 20th century, there was no robust knowledge of what time and temperature combinations would inactivate pathogenic bacteria in milk, and so a number of different pasteurization standards were in use. By 1943, both HTST pasteurization conditions of for 15 seconds, as well as batch pasteurization conditions of for 30 minutes, were confirmed by studies of the complete thermal death (as best as could be measured at that time) for a range of pathogenic bacteria in milk. Complete inactivation of Coxiella burnetii (which was thought at the time to cause Q fever by oral ingestion of infected milk) as well as of Mycobacterium tuberculosis (which causes tuberculosis) were later demonstrated. For all practical purposes, these conditions were adequate for destroying almost all yeasts, molds, and common spoilage bacteria and also for ensuring adequate destruction of common pathogenic, heat-resistant organisms. However, the microbiological techniques used until the 1960s did not allow for the actual reduction of bacteria to be enumerated. Demonstration of the extent of inactivation of pathogenic bacteria by milk pasteurization came from a study of surviving bacteria in milk that was heat-treated after being deliberately spiked with high levels of the most heat-resistant strains of the most significant milk-borne pathogens. 
The mean log10 reductions and temperatures of inactivation of the major milk-borne pathogens during a 15-second treatment are: Staphylococcus aureus > 6.7 at Yersinia enterocolitica > 6.8 at Pathogenic Escherichia coli > 6.8 at Cronobacter sakazakii > 6.7 at Listeria monocytogenes > 6.9 at Salmonella ser. Typhimurium > 6.9 at (A log10 reduction between 6 and 7 means that 1 bacterium out of 1 million (10⁶) to 10 million (10⁷) bacteria survives the treatment.) The Codex Alimentarius Code of Hygienic Practice for Milk notes that milk pasteurization is designed to achieve at least a 5 log10 reduction of Coxiella burnetii. The Code also notes that: "The minimum pasteurization conditions are those having bactericidal effects equivalent to heating every particle of the milk to for 15 seconds (continuous flow pasteurization) or for 30 minutes (batch pasteurization)" and that "To ensure that each particle is sufficiently heated, the milk flow in heat exchangers should be turbulent, i.e. the Reynolds number should be sufficiently high". The point about turbulent flow is important because simplistic laboratory studies of heat inactivation that use test tubes, without flow, will have less bacterial inactivation than larger-scale experiments that seek to replicate conditions of commercial pasteurization. As a precaution, modern HTST pasteurization processes must be designed with flow-rate restriction as well as divert valves which ensure that the milk is heated evenly and that no part of the milk is subject to a shorter time or a lower temperature. It is common for the temperatures to exceed by . Double pasteurization Pasteurization is not sterilization and does not kill spores. "Double" pasteurization, which involves a secondary heating process, can extend shelf life by killing spores that have germinated. The acceptance of double pasteurization varies by jurisdiction. In places where it is allowed, milk is initially pasteurized when it is collected from the farm so it does not spoil before processing. Many countries prohibit the labelling of such milk as "pasteurized" but allow it to be marked "thermized", which refers to a lower-temperature process. Effects on nutritional and sensory characteristics of foods Because of its mild heat treatment, pasteurization increases the shelf-life by a few days or weeks. However, this mild heat also means there are only minor changes to heat-labile vitamins in the foods. Milk A systematic review and meta-analysis found that pasteurization appeared to reduce concentrations of vitamins B12 and E, but that it also increased concentrations of vitamin A. However, in the review, there was only limited research regarding how much pasteurization affects A, B12, and E levels. Milk is not considered an important source of vitamins B12 or E in the North American diet, so the effects of pasteurization on the adult daily intake of these vitamins are negligible. However, milk is considered an important source of vitamin A, and because pasteurization appears to increase vitamin A concentrations in milk, the effect of milk heat treatment on this vitamin is not a major public health concern. Results of meta-analyses reveal that pasteurization of milk leads to a significant decrease in vitamin C and folate, but milk is also not an important source of these vitamins. A significant decrease in vitamin B2 concentrations was found after pasteurization. Vitamin B2 is typically found in bovine milk at concentrations of 1.83 mg/liter. 
Because the recommended daily intake for adults is 1.1 mg/day, milk consumption greatly contributes to the recommended daily intake of this vitamin. With the exception of B2, pasteurization does not appear to be a concern in diminishing the nutritive value of milk because milk is often not a primary source of these studied vitamins in the North American diet. Sensory effects Pasteurization also has a small but measurable effect on the sensory attributes of the foods that are processed. In fruit juices, pasteurization may result in loss of volatile aroma compounds. Fruit juice products undergo a deaeration process prior to pasteurization that may be responsible for this loss. Deaeration also minimizes the loss of nutrients like vitamin C and carotene. To prevent the decrease in quality resulting from the loss in volatile compounds, volatile recovery, though costly, can be utilized to produce higher-quality juice products. In regard to color, the pasteurization process does not have much effect on pigments such as chlorophylls, anthocyanins, and carotenoids in plants and animal tissues. In fruit juices, polyphenol oxidase (PPO) is the main enzyme responsible for causing browning and color changes. However, this enzyme is deactivated in the deaeration step prior to pasteurization with the removal of oxygen. In milk, the color difference between pasteurized and raw milk is related to the homogenization step that takes place prior to pasteurization. Before pasteurization, milk is homogenized to emulsify its fat and water-soluble components, which results in the pasteurized milk having a whiter appearance compared to raw milk. For vegetable products, color degradation is dependent on the temperature conditions and the duration of heating. Pasteurization may result in some textural loss as a result of enzymatic and non-enzymatic transformations in the structure of pectin if the processing temperatures are too high. However, with a mild pasteurization heat treatment, tissue softening in the vegetables that causes textural loss is not of concern as long as the temperature does not get above . Novel pasteurization methods "Pasteurizing" in the broad sense refers to any method that reduces microbes by an amount (log reduction) equivalent to Pasteur's process. Novel processes, thermal and non-thermal, have been developed to pasteurize foods as a way of reducing the effects on nutritional and sensory characteristics of foods and preventing degradation of heat-labile nutrients. Pascalization or high pressure processing (HPP), pulsed electric field (PEF), ionising radiation, high pressure homogenisation, UV decontamination, pulsed high intensity light, high intensity laser, pulsed white light, high power ultrasound, oscillating magnetic fields, high voltage arc discharge, and streamer plasma are examples of these non-thermal pasteurization methods that are currently commercially utilized. Microwave volumetric heating (MVH) is the newest available pasteurization technology. It uses microwaves to heat liquids, suspensions, or semi-solids in a continuous flow. Because MVH delivers energy evenly and deeply into the whole body of a flowing product, it allows for gentler and shorter heating, so that almost all heat-sensitive substances in the milk are preserved. 
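Since several of the figures quoted earlier are expressed as log10 reductions, a small helper (a minimal sketch, not part of any cited source) makes their meaning concrete by converting them into surviving fractions; the example inputs are the Codex 5-log target and two of the pathogen reductions listed above.

```python
def surviving_fraction(log10_reduction):
    """Fraction of the initial microbial population left after a given log10 reduction."""
    return 10.0 ** (-log10_reduction)

for logs in (5.0, 6.7, 6.9):
    print(logs, surviving_fraction(logs))
# 5.0 -> 1e-05 (one survivor per 100,000), 6.7 -> ~2.0e-07, 6.9 -> ~1.3e-07
```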
Products that are commonly pasteurized Beer Canned food Dairy products Eggs Milk Juices Low alcoholic beverages Syrups Vinegar Water Wines See also Food irradiation Flash pasteurization Pascalization Homogenization Pasteurized eggs Solar water disinfection Thermoduric bacteria Food preservation Food storage Food microbiology Sterilization Thermization Tyndallization Ultra-high-temperature processing References Further reading Raw milk expert testimony dated: April 25, 2008 Case: Organic Dairy Company, LLC, and Claravale Farm, Inc., Plaintiffs, vs. No. CU-07-00204 State of California and A.G. Kawamura, Secretary of California Department of Food and Agriculture, – Expert Witnesses: Dr. Theodore Beals & Dr. Ronald Hull An alternate view on the alleged safety of pasteurized vs. natural milk from Johns Hopkins University: Unraveling the mysteries of extended shelf life Food processing Unit operations Food preservation Louis Pasteur Industrial processes
Pasteurization
[ "Chemistry" ]
4,564
[ "Chemical process engineering", "Unit operations" ]
23,535
https://en.wikipedia.org/wiki/Photon
A photon () is an elementary particle that is a quantum of the electromagnetic field, including electromagnetic radiation such as light and radio waves, and the force carrier for the electromagnetic force. Photons are massless particles that can move no faster than the speed of light measured in vacuum. The photon belongs to the class of boson particles. As with other elementary particles, photons are best explained by quantum mechanics and exhibit wave–particle duality, their behavior featuring properties of both waves and particles. The modern photon concept originated during the first two decades of the 20th century with the work of Albert Einstein, who built upon the research of Max Planck. While Planck was trying to explain how matter and electromagnetic radiation could be in thermal equilibrium with one another, he proposed that the energy stored within a material object should be regarded as composed of an integer number of discrete, equal-sized parts. To explain the photoelectric effect, Einstein introduced the idea that light itself is made of discrete units of energy. In 1926, Gilbert N. Lewis popularized the term photon for these energy units. Subsequently, many other experiments validated Einstein's approach. In the Standard Model of particle physics, photons and other elementary particles are described as a necessary consequence of physical laws having a certain symmetry at every point in spacetime. The intrinsic properties of particles, such as charge, mass, and spin, are determined by gauge symmetry. The photon concept has led to momentous advances in experimental and theoretical physics, including lasers, Bose–Einstein condensation, quantum field theory, and the probabilistic interpretation of quantum mechanics. It has been applied to photochemistry, high-resolution microscopy, and measurements of molecular distances. Moreover, photons have been studied as elements of quantum computers, and for applications in optical imaging and optical communication such as quantum cryptography. Nomenclature The word quanta (singular quantum, Latin for how much) was used before 1900 to mean particles or amounts of different quantities, including electricity. In 1900, the German physicist Max Planck was studying black-body radiation, and he suggested that the experimental observations, specifically at shorter wavelengths, would be explained if the energy was "made up of a completely determinate number of finite equal parts", which he called "energy elements". In 1905, Albert Einstein published a paper in which he proposed that many light-related phenomena—including black-body radiation and the photoelectric effect—would be better explained by modelling electromagnetic waves as consisting of spatially localized, discrete energy quanta. He called these a light quantum (German: ein Lichtquant). The name photon derives from the Greek word for light, (transliterated phôs). Arthur Compton used photon in 1928, referring to Gilbert N. Lewis, who coined the term in a letter to Nature on 18 December 1926. The same name was used earlier but was never widely adopted before Lewis: in 1916 by the American physicist and psychologist Leonard T. Troland, in 1921 by the Irish physicist John Joly, in 1924 by the French physiologist René Wurmser (1890–1993), and in 1926 by the French physicist Frithiof Wolfers (1891–1971). The name was suggested initially as a unit related to the illumination of the eye and the resulting sensation of light and was used later in a physiological context. 
Although Wolfers's and Lewis's theories were contradicted by many experiments and never accepted, the new name was adopted by most physicists very soon after Compton used it. In physics, a photon is usually denoted by the symbol γ (the Greek letter gamma). This symbol for the photon probably derives from gamma rays, which were discovered in 1900 by Paul Villard, named by Ernest Rutherford in 1903, and shown to be a form of electromagnetic radiation in 1914 by Rutherford and Edward Andrade. In chemistry and optical engineering, photons are usually symbolized by hν, which is the photon energy, where h is the Planck constant and the Greek letter ν (nu) is the photon's frequency. Physical properties The photon has no electric charge, is generally considered to have zero rest mass and is a stable particle. The experimental upper limit on the photon mass is very small, on the order of 10⁻⁵⁰ kg; its lifetime would be more than 10¹⁸ years. For comparison, the age of the universe is about 1.4 × 10¹⁰ years. In a vacuum, a photon has two possible polarization states. The photon is the gauge boson for electromagnetism, and therefore all other quantum numbers of the photon (such as lepton number, baryon number, and flavour quantum numbers) are zero. Also, the photon obeys Bose–Einstein statistics, and not Fermi–Dirac statistics. That is, photons do not obey the Pauli exclusion principle and more than one can occupy the same bound quantum state. Photons are emitted in many natural processes. For example, when a charge is accelerated it emits synchrotron radiation. During a molecular, atomic or nuclear transition to a lower energy level, photons of various energy will be emitted, ranging from radio waves to gamma rays. Photons can also be emitted when a particle and its corresponding antiparticle are annihilated (for example, electron–positron annihilation). Relativistic energy and momentum In empty space, the photon moves at c (the speed of light) and its energy E and momentum p are related by E = pc, where p is the magnitude of the momentum vector p. This derives from the relativistic relation E² = p²c² + m²c⁴, with rest mass m = 0. The energy and momentum of a photon depend only on its frequency (ν) or, inversely, its wavelength (λ): E = ħω = hν = hc/λ and p = ħk, where k is the wave vector, k = |k| = 2π/λ is the wave number, ω = 2πν is the angular frequency, and ħ = h/2π is the reduced Planck constant. Since k points in the direction of the photon's propagation, the magnitude of its momentum is p = ħk = hν/c = h/λ. Polarization and spin angular momentum The photon also carries spin angular momentum, which is related to photon polarization. (Beams of light also exhibit properties described as orbital angular momentum of light). The angular momentum of the photon has two possible values, either +ħ or −ħ. These two possible values correspond to the two possible pure states of circular polarization. Collections of photons in a light beam may have mixtures of these two values; a linearly polarized light beam will act as if it were composed of equal numbers of the two possible angular momenta. The spin angular momentum of light does not depend on its frequency, and was experimentally verified by C. V. Raman and S. Bhagavantam in 1931. Antiparticle annihilation The collision of a particle with its antiparticle can create photons. In free space at least two photons must be created since, in the center of momentum frame, the colliding antiparticles have no net momentum, whereas a single photon always has momentum (determined by the photon's frequency or wavelength, which cannot be zero). 
Hence, conservation of momentum (or equivalently, translational invariance) requires that at least two photons are created, with zero net momentum. The energy of the two photons, or, equivalently, their frequency, may be determined from conservation of four-momentum. Seen another way, the photon can be considered as its own antiparticle (thus an "antiphoton" is simply a normal photon with opposite momentum, equal polarization, and 180° out of phase). The reverse process, pair production, is the dominant mechanism by which high-energy photons such as gamma rays lose energy while passing through matter. That process is the reverse of "annihilation to one photon" allowed in the electric field of an atomic nucleus. The classical formulae for the energy and momentum of electromagnetic radiation can be re-expressed in terms of photon events. For example, the pressure of electromagnetic radiation on an object derives from the transfer of photon momentum per unit time and unit area to that object, since pressure is force per unit area and force is the change in momentum per unit time. Experimental checks on photon mass Current commonly accepted physical theories imply or assume the photon to be strictly massless. If photons were not purely massless, their speeds would vary with frequency, with lower-energy (redder) photons moving slightly slower than higher-energy photons. Relativity would be unaffected by this; the so-called speed of light, c, would then not be the actual speed at which light moves, but a constant of nature which is the upper bound on speed that any object could theoretically attain in spacetime. Thus, it would still be the speed of spacetime ripples (gravitational waves and gravitons), but it would not be the speed of photons. If a photon did have non-zero mass, there would be other effects as well. Coulomb's law would be modified and the electromagnetic field would have an extra physical degree of freedom. These effects yield more sensitive experimental probes of the photon mass than the frequency dependence of the speed of light. If Coulomb's law is not exactly valid, then that would allow the presence of an electric field to exist within a hollow conductor when it is subjected to an external electric field. This provides a means for precision tests of Coulomb's law. A null result of such an experiment has set a limit of . Sharper upper limits on the mass of light have been obtained in experiments designed to detect effects caused by the galactic vector potential. Although the galactic vector potential is large because the galactic magnetic field exists on great length scales, only the magnetic field would be observable if the photon is massless. In the case that the photon has mass, the mass term mAA would affect the galactic plasma. The fact that no such effects are seen implies an upper bound on the photon mass of . The galactic vector potential can also be probed directly by measuring the torque exerted on a magnetized ring. Such methods were used to obtain the sharper upper limit of (the equivalent of ) given by the Particle Data Group. These sharp limits from the non-observation of the effects caused by the galactic vector potential have been shown to be model-dependent. If the photon mass is generated via the Higgs mechanism then the upper limit of from the test of Coulomb's law is valid. Historical development In most theories up to the eighteenth century, light was pictured as being made of particles. 
Since particle models cannot easily account for the refraction, diffraction and birefringence of light, wave theories of light were proposed by René Descartes (1637), Robert Hooke (1665), and Christiaan Huygens (1678); however, particle models remained dominant, chiefly due to the influence of Isaac Newton. In the early 19th century, Thomas Young and Augustin-Jean Fresnel clearly demonstrated the interference and diffraction of light, and by 1850 wave models were generally accepted. James Clerk Maxwell's 1865 prediction that light was an electromagnetic wave – which was confirmed experimentally in 1888 by Heinrich Hertz's detection of radio waves – seemed to be the final blow to particle models of light. The Maxwell wave theory, however, does not account for all properties of light. The Maxwell theory predicts that the energy of a light wave depends only on its intensity, not on its frequency; nevertheless, several independent types of experiments show that the energy imparted by light to atoms depends only on the light's frequency, not on its intensity. For example, some chemical reactions are provoked only by light of frequency higher than a certain threshold; light of frequency lower than the threshold, no matter how intense, does not initiate the reaction. Similarly, electrons can be ejected from a metal plate by shining light of sufficiently high frequency on it (the photoelectric effect); the energy of the ejected electron is related only to the light's frequency, not to its intensity. At the same time, investigations of black-body radiation carried out over four decades (1860–1900) by various researchers culminated in Max Planck's hypothesis that the energy of any system that absorbs or emits electromagnetic radiation of frequency ν is an integer multiple of an energy quantum E = hν. As shown by Albert Einstein, some form of energy quantization must be assumed to account for the thermal equilibrium observed between matter and electromagnetic radiation; for this explanation of the photoelectric effect, Einstein received the 1921 Nobel Prize in physics. Since the Maxwell theory of light allows for all possible energies of electromagnetic radiation, most physicists assumed initially that the energy quantization resulted from some unknown constraint on the matter that absorbs or emits the radiation. In 1905, Einstein was the first to propose that energy quantization was a property of electromagnetic radiation itself. Although he accepted the validity of Maxwell's theory, Einstein pointed out that many anomalous experiments could be explained if the energy of a Maxwellian light wave were localized into point-like quanta that move independently of one another, even if the wave itself is spread continuously over space. In 1909 and 1916, Einstein showed that, if Planck's law regarding black-body radiation is accepted, the energy quanta must also carry momentum p = h/λ, making them full-fledged particles. This photon momentum was observed experimentally by Arthur Compton, for which he received the Nobel Prize in 1927. The pivotal question, then, was how to unify Maxwell's wave theory of light with its experimentally observed particle nature. The answer to this question occupied Albert Einstein for the rest of his life, and was solved in quantum electrodynamics and its successor, the Standard Model (see the quantum field theory discussion below). Einstein's 1905 predictions were verified experimentally in several ways in the first two decades of the 20th century, as recounted in Robert Millikan's Nobel lecture. 
However, before Compton's experiment showed that photons carried momentum proportional to their wave number (1922), most physicists were reluctant to believe that electromagnetic radiation itself might be particulate. (See, for example, the Nobel lectures of Wien, Planck and Millikan.) Instead, there was a widespread belief that energy quantization resulted from some unknown constraint on the matter that absorbed or emitted radiation. Attitudes changed over time. In part, the change can be traced to experiments such as those revealing Compton scattering, where it was much more difficult not to ascribe quantization to light itself to explain the observed results. Even after Compton's experiment, Niels Bohr, Hendrik Kramers and John Slater made one last attempt to preserve the Maxwellian continuous electromagnetic field model of light, the so-called BKS theory. An important feature of the BKS theory is how it treated the conservation of energy and the conservation of momentum. In the BKS theory, energy and momentum are only conserved on the average across many interactions between matter and radiation. However, refined Compton experiments showed that the conservation laws hold for individual interactions. Accordingly, Bohr and his co-workers gave their model "as honorable a funeral as possible". Nevertheless, the failures of the BKS model inspired Werner Heisenberg in his development of matrix mechanics. A few physicists persisted in developing semiclassical models in which electromagnetic radiation is not quantized, but matter appears to obey the laws of quantum mechanics. Although the evidence from chemical and physical experiments for the existence of photons was overwhelming by the 1970s, this evidence could not be considered as absolutely definitive; since it relied on the interaction of light with matter, and a sufficiently complete theory of matter could in principle account for the evidence. Nevertheless, all semiclassical theories were refuted definitively in the 1970s and 1980s by photon-correlation experiments. Hence, Einstein's hypothesis that quantization is a property of light itself is considered to be proven. Wave–particle duality and uncertainty principles Photons obey the laws of quantum mechanics, and so their behavior has both wave-like and particle-like aspects. When a photon is detected by a measuring instrument, it is registered as a single, particulate unit. However, the probability of detecting a photon is calculated by equations that describe waves. This combination of aspects is known as wave–particle duality. For example, the probability distribution for the location at which a photon might be detected displays clearly wave-like phenomena such as diffraction and interference. A single photon passing through a double slit has its energy received at a point on the screen with a probability distribution given by its interference pattern determined by Maxwell's wave equations. However, experiments confirm that the photon is not a short pulse of electromagnetic radiation; a photon's Maxwell waves will diffract, but photon energy does not spread out as it propagates, nor does this energy divide when it encounters a beam splitter. Rather, the received photon acts like a point-like particle since it is absorbed or emitted as a whole by arbitrarily small systems, including systems much smaller than its wavelength, such as an atomic nucleus (≈10−15 m across) or even the point-like electron. 
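The single-photon double-slit behaviour described above can be sketched numerically: the probability of registering a photon at a given angle is proportional to the classical two-slit intensity pattern, yet each registration is a single point-like event. The wavelength and slit geometry below are arbitrary illustrative numbers, and the snippet simply samples detection angles from the normalised pattern.

```python
import numpy as np

wavelength = 500e-9    # m, illustrative visible-light photon
slit_sep   = 20e-6     # m, centre-to-centre slit separation (illustrative)
slit_width = 4e-6      # m, width of each slit (illustrative)

theta = np.linspace(-0.05, 0.05, 2001)                       # detection angle, rad
beta  = np.pi * slit_width * np.sin(theta) / wavelength
delta = np.pi * slit_sep   * np.sin(theta) / wavelength
pattern = np.sinc(beta / np.pi) ** 2 * np.cos(delta) ** 2    # far-field two-slit intensity

prob = pattern / pattern.sum()                               # detection probabilities
arrivals = np.random.choice(theta, size=10_000, p=prob)      # simulated single-photon hits
```

A histogram of the simulated arrivals reproduces the interference fringes even though every individual detection is particle-like, which is the duality the preceding paragraph describes.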
While many introductory texts treat photons using the mathematical techniques of non-relativistic quantum mechanics, this is in some ways an awkward oversimplification, as photons are by nature intrinsically relativistic. Because photons have zero rest mass, no wave function defined for a photon can have all the properties familiar from wave functions in non-relativistic quantum mechanics. In order to avoid these difficulties, physicists employ the second-quantized theory of photons described below, quantum electrodynamics, in which photons are quantized excitations of electromagnetic modes. Another difficulty is finding the proper analogue for the uncertainty principle, an idea frequently attributed to Heisenberg, who introduced the concept in analyzing a thought experiment involving an electron and a high-energy photon. However, Heisenberg did not give precise mathematical definitions of what the "uncertainty" in these measurements meant. The precise mathematical statement of the position–momentum uncertainty principle is due to Kennard, Pauli, and Weyl. The uncertainty principle applies to situations where an experimenter has a choice of measuring either one of two "canonically conjugate" quantities, like the position and the momentum of a particle. According to the uncertainty principle, no matter how the particle is prepared, it is not possible to make a precise prediction for both of the two alternative measurements: if the outcome of the position measurement is made more certain, the outcome of the momentum measurement becomes less so, and vice versa. A coherent state minimizes the overall uncertainty as far as quantum mechanics allows. Quantum optics makes use of coherent states for modes of the electromagnetic field. There is a tradeoff, reminiscent of the position–momentum uncertainty relation, between measurements of an electromagnetic wave's amplitude and its phase. This is sometimes informally expressed in terms of the uncertainty in the number of photons present in the electromagnetic wave, , and the uncertainty in the phase of the wave, . However, this cannot be an uncertainty relation of the Kennard–Pauli–Weyl type, since unlike position and momentum, the phase cannot be represented by a Hermitian operator. Bose–Einstein model of a photon gas In 1924, Satyendra Nath Bose derived Planck's law of black-body radiation without using any electromagnetism, but rather by using a modification of coarse-grained counting of phase space. Einstein showed that this modification is equivalent to assuming that photons are rigorously identical and that it implied a "mysterious non-local interaction", now understood as the requirement for a symmetric quantum mechanical state. This work led to the concept of coherent states and the development of the laser. In the same papers, Einstein extended Bose's formalism to material particles (bosons) and predicted that they would condense into their lowest quantum state at low enough temperatures; this Bose–Einstein condensation was observed experimentally in 1995. It was later used by Lene Hau to slow, and then completely stop, light in 1999 and 2001. The modern view on this is that photons are, by virtue of their integer spin, bosons (as opposed to fermions with half-integer spin). By the spin-statistics theorem, all bosons obey Bose–Einstein statistics (whereas all fermions obey Fermi–Dirac statistics). 
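A minimal numerical sketch of the photon-gas picture discussed above: the mean number of photons occupying one electromagnetic mode in thermal equilibrium follows the Bose–Einstein distribution, and weighting it by the photon energy and the density of modes gives Planck's spectral energy density. The temperature and frequency used below are illustrative.

```python
import math

h   = 6.626_070_15e-34   # Planck constant, J*s
c   = 299_792_458.0      # speed of light, m/s
k_B = 1.380_649e-23      # Boltzmann constant, J/K

def mean_photon_number(nu, T):
    """Bose-Einstein mean occupancy of a single mode of frequency nu at temperature T."""
    return 1.0 / math.expm1(h * nu / (k_B * T))

def planck_energy_density(nu, T):
    """Planck spectral energy density u(nu, T), in J*s/m^3."""
    modes_per_volume = 8 * math.pi * nu**2 / c**3    # mode density per unit frequency
    return modes_per_volume * h * nu * mean_photon_number(nu, T)

print(mean_photon_number(5.0e14, 5800.0))       # ~0.016 photons per mode (green light, ~solar T)
print(planck_energy_density(5.0e14, 5800.0))    # ~1.3e-15 J*s/m^3
```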
Stimulated and spontaneous emission In 1916, Albert Einstein showed that Planck's radiation law could be derived from a semi-classical, statistical treatment of photons and atoms, which implies a link between the rates at which atoms emit and absorb photons. The condition follows from the assumption that functions of the emission and absorption of radiation by the atoms are independent of each other, and that thermal equilibrium is made by way of the radiation's interaction with the atoms. Consider a cavity in thermal equilibrium with all parts of itself and filled with electromagnetic radiation and that the atoms can emit and absorb that radiation. Thermal equilibrium requires that the energy density of photons with frequency (which is proportional to their number density) is, on average, constant in time; hence, the rate at which photons of any particular frequency are emitted must equal the rate at which they are absorbed. Einstein began by postulating simple proportionality relations for the different reaction rates involved. In his model, the rate for a system to absorb a photon of frequency and transition from a lower energy to a higher energy is proportional to the number of atoms with energy and to the energy density of ambient photons of that frequency, where is the rate constant for absorption. For the reverse process, there are two possibilities: spontaneous emission of a photon, or the emission of a photon initiated by the interaction of the atom with a passing photon and the return of the atom to the lower-energy state. Following Einstein's approach, the corresponding rate for the emission of photons of frequency and transition from a higher energy to a lower energy is where is the rate constant for emitting a photon spontaneously, and is the rate constant for emissions in response to ambient photons (induced or stimulated emission). In thermodynamic equilibrium, the number of atoms in state and those in state must, on average, be constant; hence, the rates and must be equal. Also, by arguments analogous to the derivation of Boltzmann statistics, the ratio of and is where and are the degeneracy of the state and that of , respectively, and their energies, the Boltzmann constant and the system's temperature. From this, it is readily derived that and The and are collectively known as the Einstein coefficients. Einstein could not fully justify his rate equations, but claimed that it should be possible to calculate the coefficients , and once physicists had obtained "mechanics and electrodynamics modified to accommodate the quantum hypothesis". Not long thereafter, in 1926, Paul Dirac derived the rate constants by using a semiclassical approach, and, in 1927, succeeded in deriving all the rate constants from first principles within the framework of quantum theory. Dirac's work was the foundation of quantum electrodynamics, i.e., the quantization of the electromagnetic field itself. Dirac's approach is also called second quantization or quantum field theory; earlier quantum mechanical treatments only treat material particles as quantum mechanical, not the electromagnetic field. Einstein was troubled by the fact that his theory seemed incomplete, since it did not determine the direction of a spontaneously emitted photon. A probabilistic nature of light-particle motion was first considered by Newton in his treatment of birefringence and, more generally, of the splitting of light beams at interfaces into a transmitted beam and a reflected beam. 
Newton hypothesized that hidden variables in the light particle determined which of the two paths a single photon would take. Similarly, Einstein hoped for a more complete theory that would leave nothing to chance, beginning his separation from quantum mechanics. Ironically, Max Born's probabilistic interpretation of the wave function was inspired by Einstein's later work searching for a more complete theory. Quantum field theory Quantization of the electromagnetic field In 1910, Peter Debye derived Planck's law of black-body radiation from a relatively simple assumption. He decomposed the electromagnetic field in a cavity into its Fourier modes, and assumed that the energy in any mode was an integer multiple of , where is the frequency of the electromagnetic mode. Planck's law of black-body radiation follows immediately as a geometric sum. However, Debye's approach failed to give the correct formula for the energy fluctuations of black-body radiation, which were derived by Einstein in 1909. In 1925, Born, Heisenberg and Jordan reinterpreted Debye's concept in a key way. As may be shown classically, the Fourier modes of the electromagnetic field—a complete set of electromagnetic plane waves indexed by their wave vector k and polarization state—are equivalent to a set of uncoupled simple harmonic oscillators. Treated quantum mechanically, the energy levels of such oscillators are known to be , where is the oscillator frequency. The key new step was to identify an electromagnetic mode with energy as a state with photons, each of energy . This approach gives the correct energy fluctuation formula. Dirac took this one step further. He treated the interaction between a charge and an electromagnetic field as a small perturbation that induces transitions in the photon states, changing the numbers of photons in the modes, while conserving energy and momentum overall. Dirac was able to derive Einstein's and coefficients from first principles, and showed that the Bose–Einstein statistics of photons is a natural consequence of quantizing the electromagnetic field correctly (Bose's reasoning went in the opposite direction; he derived Planck's law of black-body radiation by assuming B–E statistics). In Dirac's time, it was not yet known that all bosons, including photons, must obey Bose–Einstein statistics. Dirac's second-order perturbation theory can involve virtual photons, transient intermediate states of the electromagnetic field; the static electric and magnetic interactions are mediated by such virtual photons. In such quantum field theories, the probability amplitude of observable events is calculated by summing over all possible intermediate steps, even ones that are unphysical; hence, virtual photons are not constrained to satisfy , and may have extra polarization states; depending on the gauge used, virtual photons may have three or four polarization states, instead of the two states of real photons. Although these transient virtual photons can never be observed, they contribute measurably to the probabilities of observable events. Indeed, such second-order and higher-order perturbation calculations can give apparently infinite contributions to the sum. Such unphysical results are corrected for using the technique of renormalization. Other virtual particles may contribute to the summation as well; for example, two photons may interact indirectly through virtual electron–positron pairs. 
Such photon–photon scattering (see two-photon physics), as well as electron–photon scattering, is meant to be one of the modes of operations of the planned particle accelerator, the International Linear Collider. In modern physics notation, the quantum state of the electromagnetic field is written as a Fock state, a tensor product of the states for each electromagnetic mode where represents the state in which photons are in the mode . In this notation, the creation of a new photon in mode (e.g., emitted from an atomic transition) is written as . This notation merely expresses the concept of Born, Heisenberg and Jordan described above, and does not add any physics. As a gauge boson The electromagnetic field can be understood as a gauge field, i.e., as a field that results from requiring that a gauge symmetry holds independently at every position in spacetime. For the electromagnetic field, this gauge symmetry is the Abelian U(1) symmetry of complex numbers of absolute value 1, which reflects the ability to vary the phase of a complex field without affecting observables or real valued functions made from it, such as the energy or the Lagrangian. The quanta of an Abelian gauge field must be massless, uncharged bosons, as long as the symmetry is not broken; hence, the photon is predicted to be massless, and to have zero electric charge and integer spin. The particular form of the electromagnetic interaction specifies that the photon must have spin ±1; thus, its helicity must be . These two spin components correspond to the classical concepts of right-handed and left-handed circularly polarized light. However, the transient virtual photons of quantum electrodynamics may also adopt unphysical polarization states. In the prevailing Standard Model of physics, the photon is one of four gauge bosons in the electroweak interaction; the other three are denoted W+, W− and Z0 and are responsible for the weak interaction. Unlike the photon, these gauge bosons have mass, owing to a mechanism that breaks their SU(2) gauge symmetry. The unification of the photon with W and Z gauge bosons in the electroweak interaction was accomplished by Sheldon Glashow, Abdus Salam and Steven Weinberg, for which they were awarded the 1979 Nobel Prize in physics. Physicists continue to hypothesize grand unified theories that connect these four gauge bosons with the eight gluon gauge bosons of quantum chromodynamics; however, key predictions of these theories, such as proton decay, have not been observed experimentally. Hadronic properties Measurements of the interaction between energetic photons and hadrons show that the interaction is much more intense than expected by the interaction of merely photons with the hadron's electric charge. Furthermore, the interaction of energetic photons with protons is similar to the interaction of photons with neutrons in spite of the fact that the electrical charge structures of protons and neutrons are substantially different. A theory called Vector Meson Dominance (VMD) was developed to explain this effect. According to VMD, the photon is a superposition of the pure electromagnetic photon, which interacts only with electric charges, and vector mesons, which mediate the residual nuclear force. However, if experimentally probed at very short distances, the intrinsic structure of the photon appears to have as components a charge-neutral flux of quarks and gluons, quasi-free according to asymptotic freedom in QCD. That flux is described by the photon structure function. 
A review presented a comprehensive comparison of data with theoretical predictions. Contributions to the mass of a system The energy of a system that emits a photon is decreased by the energy E of the photon as measured in the rest frame of the emitting system, which may result in a reduction in mass in the amount E/c². Similarly, the mass of a system that absorbs a photon is increased by a corresponding amount. As an application, the energy balance of nuclear reactions involving photons is commonly written in terms of the masses of the nuclei involved, and terms of the form E/c² for the gamma photons (and for other relevant energies, such as the recoil energy of nuclei). This concept is applied in key predictions of quantum electrodynamics (QED, see above). In that theory, the mass of electrons (or, more generally, leptons) is modified by including the mass contributions of virtual photons, in a technique known as renormalization. Such "radiative corrections" contribute to a number of predictions of QED, such as the magnetic dipole moment of leptons, the Lamb shift, and the hyperfine structure of bound lepton pairs, such as muonium and positronium. Since photons contribute to the stress–energy tensor, they exert a gravitational attraction on other objects, according to the theory of general relativity. Conversely, photons are themselves affected by gravity; their normally straight trajectories may be bent by warped spacetime, as in gravitational lensing, and their frequencies may be lowered by moving to a higher gravitational potential, as in the Pound–Rebka experiment. However, these effects are not specific to photons; exactly the same effects would be predicted for classical electromagnetic waves. In matter Light that travels through transparent matter does so at a lower speed than c, the speed of light in vacuum. The factor by which the speed is decreased is called the refractive index of the material. In a classical wave picture, the slowing can be explained by the light inducing electric polarization in the matter, the polarized matter radiating new light, and that new light interfering with the original light wave to form a delayed wave. In a particle picture, the slowing can instead be described as a blending of the photon with quantum excitations of the matter to produce quasi-particles known as polaritons. Polaritons have a nonzero effective mass, which means that they cannot travel at c. Light of different frequencies may travel through matter at different speeds; this is called dispersion (not to be confused with scattering). In some cases, it can result in extremely slow speeds of light in matter. The effects of photon interactions with other quasi-particles may be observed directly in Raman scattering and Brillouin scattering. Photons can be scattered by matter. For example, photons scatter so many times in the solar radiative zone after leaving the core of the Sun that radiant energy takes about a million years to reach the convection zone. However, photons emitted from the Sun's photosphere take only 8.3 minutes to reach Earth. Photons can also be absorbed by nuclei, atoms or molecules, provoking transitions between their energy levels. A classic example is the molecular transition of retinal (C20H28O), which is responsible for vision, as discovered in 1958 by Nobel laureate biochemist George Wald and co-workers. The absorption provokes a cis–trans isomerization that, in combination with other such transitions, is transduced into nerve impulses.
The absorption of photons can even break chemical bonds, as in the photodissociation of chlorine; this is the subject of photochemistry. Technological applications Photons have many applications in technology. These examples are chosen to illustrate applications of photons per se, rather than general optical devices such as lenses, etc. that could operate under a classical theory of light. The laser is an important application and is discussed above under stimulated emission. Individual photons can be detected by several methods. The classic photomultiplier tube exploits the photoelectric effect: a photon of sufficient energy strikes a metal plate and knocks free an electron, initiating an ever-amplifying avalanche of electrons. Semiconductor charge-coupled device chips use a similar effect: an incident photon generates a charge on a microscopic capacitor that can be detected. Other detectors such as Geiger counters use the ability of photons to ionize gas molecules contained in the device, causing a detectable change of conductivity of the gas. Planck's energy formula is often used by engineers and chemists in design, both to compute the change in energy resulting from a photon absorption and to determine the frequency of the light emitted from a given photon emission. For example, the emission spectrum of a gas-discharge lamp can be altered by filling it with (mixtures of) gases with different electronic energy level configurations. Under some conditions, an energy transition can be excited by "two" photons that individually would be insufficient. This allows for higher resolution microscopy, because the sample absorbs energy only in the spectrum where two beams of different colors overlap significantly, which can be made much smaller than the excitation volume of a single beam (see two-photon excitation microscopy). Moreover, these photons cause less damage to the sample, since they are of lower energy. In some cases, two energy transitions can be coupled so that, as one system absorbs a photon, another nearby system "steals" its energy and re-emits a photon of a different frequency. This is the basis of fluorescence resonance energy transfer, a technique that is used in molecular biology to study the interaction of suitable proteins. Several different kinds of hardware random number generators involve the detection of single photons. In one example, for each bit in the random sequence that is to be produced, a photon is sent to a beam-splitter. In such a situation, there are two possible outcomes of equal probability. The actual outcome is used to determine whether the next bit in the sequence is "0" or "1". Quantum optics and computation Much research has been devoted to applications of photons in the field of quantum optics. Photons seem well-suited to be elements of an extremely fast quantum computer, and the quantum entanglement of photons is a focus of research. Nonlinear optical processes are another active research area, with topics such as two-photon absorption, self-phase modulation, modulational instability and optical parametric oscillators. However, such processes generally do not require the assumption of photons per se; they may often be modeled by treating atoms as nonlinear oscillators. The nonlinear process of spontaneous parametric down conversion is often used to produce single-photon states. Finally, photons are essential in some aspects of optical communication, especially for quantum cryptography. 
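The single-photon random number generator described a few paragraphs above maps each detector outcome at the beam splitter to one output bit. The toy sketch below imitates that mapping in software; it is purely illustrative, with a cryptographic pseudo-random source standing in for the genuinely random quantum measurement.

```python
import secrets

def beam_splitter_bits(n_bits):
    """Toy model of the beam-splitter random number generator: each 'photon'
    reaches one of two detectors with equal probability, and the detector that
    fires determines the next bit in the sequence."""
    return [secrets.randbelow(2) for _ in range(n_bits)]

bits = beam_splitter_bits(32)
print("".join(map(str, bits)))
```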
Two-photon physics studies interactions between photons, which are rare. In 2018, Massachusetts Institute of Technology researchers announced the discovery of bound photon triplets, which may involve polaritons. See also Notes References Further reading By date of publication Education with single photons External links Bosons Gauge bosons Elementary particles Electromagnetism Optics Quantum electrodynamics Photons Force carriers Subatomic particles with spin 1
Photon
[ "Physics", "Chemistry" ]
7,760
[ "Electromagnetism", "Physical phenomena", "Applied and interdisciplinary physics", "Optics", "Elementary particles", "Force carriers", "Bosons", "Subatomic particles", "Fundamental interactions", " molecular", "Atomic", "Matter", " and optical physics" ]
23,637
https://en.wikipedia.org/wiki/Phase%20%28matter%29
In the physical sciences, a phase is a region of material that is chemically uniform, physically distinct, and (often) mechanically separable. In a system consisting of ice and water in a glass jar, the ice cubes are one phase, the water is a second phase, and the humid air is a third phase over the ice and water. The glass of the jar is a different material, in its own separate phase. (See .) More precisely, a phase is a region of space (a thermodynamic system), throughout which all physical properties of a material are essentially uniform. Examples of physical properties include density, index of refraction, magnetization and chemical composition. The term phase is sometimes used as a synonym for state of matter, but there can be several immiscible phases of the same state of matter (as where oil and water separate into distinct phases, both in the liquid state). It is also sometimes used to refer to the equilibrium states shown on a phase diagram, described in terms of state variables such as pressure and temperature and demarcated by phase boundaries. (Phase boundaries relate to changes in the organization of matter, including for example a subtle change within the solid state from one crystal structure to another, as well as state-changes such as between solid and liquid.) These two usages are not commensurate with the formal definition given above and the intended meaning must be determined in part from the context in which the term is used. Types of phases Distinct phases may be described as different states of matter such as gas, liquid, solid, plasma or Bose–Einstein condensate. Useful mesophases between solid and liquid form other states of matter. Distinct phases may also exist within a given state of matter. As shown in the diagram for iron alloys, several phases exist for both the solid and liquid states. Phases may also be differentiated based on solubility as in polar (hydrophilic) or non-polar (hydrophobic). A mixture of water (a polar liquid) and oil (a non-polar liquid) will spontaneously separate into two phases. Water has a very low solubility (is insoluble) in oil, and oil has a low solubility in water. Solubility is the maximum amount of a solute that can dissolve in a solvent before the solute ceases to dissolve and remains in a separate phase. A mixture can separate into more than two liquid phases and the concept of phase separation extends to solids, i.e., solids can form solid solutions or crystallize into distinct crystal phases. Metal pairs that are mutually soluble can form alloys, whereas metal pairs that are mutually insoluble cannot. As many as eight immiscible liquid phases have been observed. Mutually immiscible liquid phases are formed from water (aqueous phase), hydrophobic organic solvents, perfluorocarbons (fluorous phase), silicones, several different metals, and also from molten phosphorus. Not all organic solvents are completely miscible, e.g. a mixture of ethylene glycol and toluene may separate into two distinct organic phases. Phases do not need to macroscopically separate spontaneously. Emulsions and colloids are examples of immiscible phase pair combinations that do not physically separate. Phase equilibrium Left to equilibration, many compositions will form a uniform single phase, but depending on the temperature and pressure even a single substance may separate into two or more distinct phases. Within each phase, the properties are uniform but between the two phases properties differ. 
Water in a closed jar with an air space over it forms a two-phase system. Most of the water is in the liquid phase, where it is held by the mutual attraction of water molecules. Even at equilibrium molecules are constantly in motion and, once in a while, a molecule in the liquid phase gains enough kinetic energy to break away from the liquid phase and enter the gas phase. Likewise, every once in a while a vapor molecule collides with the liquid surface and condenses into the liquid. At equilibrium, evaporation and condensation processes exactly balance and there is no net change in the volume of either phase. At room temperature and pressure, the water jar reaches equilibrium when the air over the water has a humidity of about 3%. This percentage increases as the temperature goes up. At 100 °C and atmospheric pressure, equilibrium is not reached until the air is 100% water. If the liquid is heated a little over 100 °C, the transition from liquid to gas will occur not only at the surface but throughout the liquid volume: the water boils. Number of phases For a given composition, only certain phases are possible at a given temperature and pressure. The number and type of phases that will form is hard to predict and is usually determined by experiment. The results of such experiments can be plotted in phase diagrams. The phase diagram shown here is for a single component system. In this simple system, phases that are possible, depend only on pressure and temperature. The markings show points where two or more phases can co-exist in equilibrium. At temperatures and pressures away from the markings, there will be only one phase at equilibrium. In the diagram, the blue line marking the boundary between liquid and gas does not continue indefinitely, but terminates at a point called the critical point. As the temperature and pressure approach the critical point, the properties of the liquid and gas become progressively more similar. At the critical point, the liquid and gas become indistinguishable. Above the critical point, there are no longer separate liquid and gas phases: there is only a generic fluid phase referred to as a supercritical fluid. In water, the critical point occurs at around 647 K (374 °C or 705 °F) and 22.064 MPa. An unusual feature of the water phase diagram is that the solid–liquid phase line (illustrated by the dotted green line) has a negative slope. For most substances, the slope is positive as exemplified by the dark green line. This unusual feature of water is related to ice having a lower density than liquid water. Increasing the pressure drives the water into the higher density phase, which causes melting. Another interesting though not unusual feature of the phase diagram is the point where the solid–liquid phase line meets the liquid–gas phase line. The intersection is referred to as the triple point. At the triple point, all three phases can coexist. Experimentally, phase lines are relatively easy to map due to the interdependence of temperature and pressure that develops when multiple phases form. Gibbs' phase rule suggests that different phases are completely determined by these variables. Consider a test apparatus consisting of a closed and well-insulated cylinder equipped with a piston. By controlling the temperature and the pressure, the system can be brought to any point on the phase diagram. 
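Gibbs' phase rule, mentioned just above, is commonly written F = C − P + 2, where C is the number of components, P the number of coexisting phases, and F the remaining degrees of freedom such as temperature and pressure. The explicit formula is not spelled out in the text, so this small sketch should be read as an illustration of that standard form rather than as part of the article.

```python
def gibbs_degrees_of_freedom(components, phases):
    """Gibbs' phase rule F = C - P + 2 for a non-reactive system whose state
    is set by temperature and pressure."""
    f = components - phases + 2
    if f < 0:
        raise ValueError("more phases than the phase rule allows at equilibrium")
    return f

# Single-component system (e.g. pure water):
print(gibbs_degrees_of_freedom(1, 1))  # 2 -> one phase: T and p can vary freely
print(gibbs_degrees_of_freedom(1, 2))  # 1 -> two phases coexist only along a phase line
print(gibbs_degrees_of_freedom(1, 3))  # 0 -> three phases coexist only at the triple point
```

For the single-component water diagram discussed above, F = 0 with three phases, which is why solid, liquid and gas coexist only at the single temperature and pressure of the triple point.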
From a point in the solid stability region (left side of the diagram), increasing the temperature of the system would bring it into the region where a liquid or a gas is the equilibrium phase (depending on the pressure). If the piston is slowly lowered, the system will trace a curve of increasing temperature and pressure within the gas region of the phase diagram. At the point where gas begins to condense to liquid, the direction of the temperature and pressure curve will abruptly change to trace along the phase line until all of the water has condensed. Interfacial phenomena Between two phases in equilibrium there is a narrow region where the properties are not that of either phase. Although this region may be very thin, it can have significant and easily observable effects, such as causing a liquid to exhibit surface tension. In mixtures, some components may preferentially move toward the interface. In terms of modeling, describing, or understanding the behavior of a particular system, it may be efficacious to treat the interfacial region as a separate phase. Crystal phases A single material may have several distinct solid states capable of forming separate phases. Water is a well-known example of such a material. For example, water ice is ordinarily found in the hexagonal form ice Ih, but can also exist as the cubic ice Ic, the rhombohedral ice II, and many other forms. Polymorphism is the ability of a solid to exist in more than one crystal form. For pure chemical elements, polymorphism is known as allotropy. For example, diamond, graphite, and fullerenes are different allotropes of carbon. Phase transitions When a substance undergoes a phase transition (changes from one state of matter to another) it usually either takes up or releases energy. For example, when water evaporates, the increase in kinetic energy as the evaporating molecules escape the attractive forces of the liquid is reflected in a decrease in temperature. The energy required to induce the phase transition is taken from the internal thermal energy of the water, which cools the liquid to a lower temperature; hence evaporation is useful for cooling. See Enthalpy of vaporization. The reverse process, condensation, releases heat. The heat energy, or enthalpy, associated with a solid to liquid transition is the enthalpy of fusion and that associated with a solid to gas transition is the enthalpy of sublimation. Phases out of equilibrium While phases of matter are traditionally defined for systems in thermal equilibrium, work on quantum many-body localized (MBL) systems has provided a framework for defining phases out of equilibrium. MBL phases never reach thermal equilibrium, and can allow for new forms of order disallowed in equilibrium via a phenomenon known as localization protected quantum order. The transitions between different MBL phases and between MBL and thermalizing phases are novel dynamical phase transitions whose properties are active areas of research. Notes References External links French physicists find a solution that reversibly solidifies with a rise in temperature – α-cyclodextrin, water, and 4-methylpyridine Engineering thermodynamics Condensed matter physics Concepts in physics
Phase (matter)
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,067
[ "Engineering thermodynamics", "Phases of matter", "Materials science", "Condensed matter physics", "Thermodynamics", "nan", "Mechanical engineering", "Matter" ]
23,665
https://en.wikipedia.org/wiki/Pixel
In digital imaging, a pixel (abbreviated px), pel, or picture element is the smallest addressable element in a raster image, or the smallest addressable element in a dot matrix display device. In most digital display devices, pixels are the smallest element that can be manipulated through software. Each pixel is a sample of an original image; more samples typically provide more accurate representations of the original. The intensity of each pixel is variable. In color imaging systems, a color is typically represented by three or four component intensities such as red, green, and blue, or cyan, magenta, yellow, and black. In some contexts (such as descriptions of camera sensors), pixel refers to a single scalar element of a multi-component representation (called a photosite in the camera sensor context, although sensel is sometimes used), while in yet other contexts (like MRI) it may refer to a set of component intensities for a spatial position. Software on early consumer computers was necessarily rendered at a low resolution, with large pixels visible to the naked eye; graphics made under these limitations may be called pixel art, especially in reference to video games. Modern computers and displays, however, can easily render orders of magnitude more pixels than was previously possible, necessitating the use of large measurements like the megapixel (one million pixels). Etymology The word pixel is a combination of pix (from "pictures", shortened to "pics") and el (for "element"); similar formations with 'el' include the words voxel and texel. The word pix appeared in Variety magazine headlines in 1932, as an abbreviation for the word pictures, in reference to movies. By 1938, "pix" was being used in reference to still pictures by photojournalists. The word "pixel" was first published in 1965 by Frederic C. Billingsley of JPL, to describe the picture elements of scanned images from space probes to the Moon and Mars. Billingsley had learned the word from Keith E. McFarland, at the Link Division of General Precision in Palo Alto, who in turn said he did not know where it originated. McFarland said simply it was "in use at the time". The concept of a "picture element" dates to the earliest days of television, for example as "Bildpunkt" (the German word for pixel, literally 'picture point') in the 1888 German patent of Paul Nipkow. According to various etymologies, the earliest publication of the term picture element itself was in Wireless World magazine in 1927, though it had been used earlier in various U.S. patents filed as early as 1911. Some authors explain pixel as picture cell, as early as 1972. In graphics and in image and video processing, pel is often used instead of pixel. For example, IBM used it in their Technical Reference for the original PC. Pixilation, spelled with a second i, is an unrelated filmmaking technique that dates to the beginnings of cinema, in which live actors are posed frame by frame and photographed to create stop-motion animation. An archaic British word meaning "possession by spirits (pixies)", the term has been used to describe the animation process since the early 1950s; various animators, including Norman McLaren and Grant Munro, are credited with popularizing it. Technical A pixel is generally thought of as the smallest single component of a digital image. However, the definition is highly context-sensitive.
For example, there can be "printed pixels" in a page, or pixels carried by electronic signals, or represented by digital values, or pixels on a display device, or pixels in a digital camera (photosensor elements). This list is not exhaustive and, depending on context, synonyms include pel, sample, byte, bit, dot, and spot. Pixels can be used as a unit of measure such as: 2400 pixels per inch, 640 pixels per line, or spaced 10 pixels apart. The measures "dots per inch" (dpi) and "pixels per inch" (ppi) are sometimes used interchangeably, but have distinct meanings, especially for printer devices, where dpi is a measure of the printer's density of dot (e.g. ink droplet) placement. For example, a high-quality photographic image may be printed with 600 ppi on a 1200 dpi inkjet printer. Even higher dpi numbers, such as the 4800 dpi quoted by printer manufacturers since 2002, do not mean much in terms of achievable resolution. The more pixels used to represent an image, the closer the result can resemble the original. The number of pixels in an image is sometimes called the resolution, though resolution has a more specific definition. Pixel counts can be expressed as a single number, as in a "three-megapixel" digital camera, which has a nominal three million pixels, or as a pair of numbers, as in a "640 by 480 display", which has 640 pixels from side to side and 480 from top to bottom (as in a VGA display) and therefore has a total number of 640 × 480 = 307,200 pixels, or 0.3 megapixels. The pixels, or color samples, that form a digitized image (such as a JPEG file used on a web page) may or may not be in one-to-one correspondence with screen pixels, depending on how a computer displays an image. In computing, an image composed of pixels is known as a bitmapped image or a raster image. The word raster originates from television scanning patterns, and has been widely used to describe similar halftone printing and storage techniques. Sampling patterns For convenience, pixels are normally arranged in a regular two-dimensional grid. By using this arrangement, many common operations can be implemented by uniformly applying the same operation to each pixel independently. Other arrangements of pixels are possible, with some sampling patterns even changing the shape (or kernel) of each pixel across the image. For this reason, care must be taken when acquiring an image on one device and displaying it on another, or when converting image data from one pixel format to another. For example: LCD screens typically use a staggered grid, where the red, green, and blue components are sampled at slightly different locations. Subpixel rendering is a technology which takes advantage of these differences to improve the rendering of text on LCD screens. The vast majority of color digital cameras use a Bayer filter, resulting in a regular grid of pixels where the color of each pixel depends on its position on the grid. A clipmap uses a hierarchical sampling pattern, where the size of the support of each pixel depends on its location within the hierarchy. Warped grids are used when the underlying geometry is non-planar, such as images of the earth from space. The use of non-uniform grids is an active research area, attempting to bypass the traditional Nyquist limit. 
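As a quick arithmetic sketch of the pixel counts quoted above (the function names here are my own, not a standard API):

```python
def pixel_count(width, height):
    """Total number of pixels in a width x height raster."""
    return width * height

def megapixels(width, height):
    """Pixel count expressed in megapixels (millions of pixels)."""
    return pixel_count(width, height) / 1_000_000

print(pixel_count(640, 480))    # 307200, the VGA example above
print(megapixels(640, 480))     # ~0.3 MP
print(megapixels(2048, 1536))   # ~3.1 MP, nominally a "three-megapixel" camera
```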
Pixels on computer monitors are normally "square" (that is, have equal horizontal and vertical sampling pitch); pixels in other systems are often "rectangular" (that is, have unequal horizontal and vertical sampling pitch – oblong in shape), as are digital video formats with diverse aspect ratios, such as the anamorphic widescreen formats of the Rec. 601 digital video standard. Resolution of computer monitors Computer monitors (and TV sets) generally have a fixed native resolution. What it is depends on the monitor and its size. See below for historical exceptions. Computers can use pixels to display an image, often an abstract image that represents a GUI. The resolution of this image is called the display resolution and is determined by the video card of the computer. Flat-panel monitors (and TV sets), e.g. OLED or LCD monitors, or E-ink, also use pixels to display an image, and have a native resolution, and it should (ideally) be matched to the video card resolution. Each pixel is made up of triads, with the number of these triads determining the native resolution. On older, historically available, CRT monitors the resolution was possibly adjustable (still lower than what modern monitors achieve), while on some such monitors (or TV sets) the beam sweep rate was fixed, resulting in a fixed native resolution. Most CRT monitors do not have a fixed beam sweep rate, meaning they do not have a native resolution at all – instead they have a set of resolutions that are equally well supported. To produce the sharpest images possible on a flat-panel, e.g. OLED or LCD, the user must ensure the display resolution of the computer matches the native resolution of the monitor. Resolution of telescopes The pixel scale used in astronomy is the angular distance between two objects on the sky that fall one pixel apart on the detector (CCD or infrared chip). The scale s measured in radians is the ratio of the pixel spacing p and the focal length f of the preceding optics, s = p/f. (The focal length is the product of the focal ratio by the diameter of the associated lens or mirror.) Because s is usually expressed in units of arcseconds per pixel, because 1 radian equals (180/π) × 3600 ≈ 206,265 arcseconds, and because focal lengths are often given in millimeters and pixel sizes in micrometers, which yields another factor of 1,000, the formula is often quoted as s = 206 p/f. Bits per pixel The number of distinct colors that can be represented by a pixel depends on the number of bits per pixel (bpp). A 1 bpp image uses 1 bit for each pixel, so each pixel can be either on or off. Each additional bit doubles the number of colors available, so a 2 bpp image can have 4 colors, and a 3 bpp image can have 8 colors: 1 bpp, 2^1 = 2 colors (monochrome); 2 bpp, 2^2 = 4 colors; 3 bpp, 2^3 = 8 colors; 4 bpp, 2^4 = 16 colors; 8 bpp, 2^8 = 256 colors; 16 bpp, 2^16 = 65,536 colors ("Highcolor"); 24 bpp, 2^24 = 16,777,216 colors ("Truecolor"). For color depths of 15 or more bits per pixel, the depth is normally the sum of the bits allocated to each of the red, green, and blue components. Highcolor, usually meaning 16 bpp, normally has five bits for red and blue each, and six bits for green, as the human eye is more sensitive to errors in green than in the other two primary colors. For applications involving transparency, the 16 bits may be divided into five bits each of red, green, and blue, with one bit left for transparency. A 24-bit depth allows 8 bits per component.
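The doubling rule just described is simply 2 raised to the number of bits per pixel; a small illustrative sketch:

```python
def color_count(bits_per_pixel):
    """Number of distinct colors representable with the given bits per pixel."""
    return 2 ** bits_per_pixel

for bpp in (1, 2, 3, 4, 8, 16, 24):
    print(f"{bpp:>2} bpp -> {color_count(bpp):,} colors")

# Typical "highcolor" 16 bpp split: 5 bits red, 6 bits green, 5 bits blue.
r_bits, g_bits, b_bits = 5, 6, 5
print(2**r_bits * 2**g_bits * 2**b_bits)   # 65,536, the same total
```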
On some systems, 32-bit depth is available: this means that each 24-bit pixel has an extra 8 bits to describe its opacity (for purposes of combining with another image). Subpixels Many display and image-acquisition systems are not capable of displaying or sensing the different color channels at the same site. Therefore, the pixel grid is divided into single-color regions that contribute to the displayed or sensed color when viewed at a distance. In some displays, such as LCD, LED, and plasma displays, these single-color regions are separately addressable elements, which have come to be known as subpixels, mostly RGB colors. For example, LCDs typically divide each pixel vertically into three subpixels. When the square pixel is divided into three subpixels, each subpixel is necessarily rectangular. In display industry terminology, subpixels are often referred to as pixels, as they are the basic addressable elements in a viewpoint of hardware, and hence pixel circuits rather than subpixel circuits is used. Most digital camera image sensors use single-color sensor regions, for example using the Bayer filter pattern, and in the camera industry these are known as pixels just like in the display industry, not subpixels. For systems with subpixels, two different approaches can be taken: The subpixels can be ignored, with full-color pixels being treated as the smallest addressable imaging element; or The subpixels can be included in rendering calculations, which requires more analysis and processing time, but can produce apparently superior images in some cases. This latter approach, referred to as subpixel rendering, uses knowledge of pixel geometry to manipulate the three colored subpixels separately, producing an increase in the apparent resolution of color displays. While CRT displays use red-green-blue-masked phosphor areas, dictated by a mesh grid called the shadow mask, it would require a difficult calibration step to be aligned with the displayed pixel raster, and so CRTs do not use subpixel rendering. The concept of subpixels is related to samples. Logical pixel In graphic, web design, and user interfaces, a "pixel" may refer to a fixed length rather than a true pixel on the screen to accommodate different pixel densities. A typical definition, such as in CSS, is that a "physical" pixel is . Doing so makes sure a given element will display as the same size no matter what screen resolution views it. There may, however, be some further adjustments between a "physical" pixel and an on-screen logical pixel. As screens are viewed at difference distances (consider a phone, a computer display, and a TV), the desired length (a "reference pixel") is scaled relative to a reference viewing distance ( in CSS). In addition, as true screen pixel densities are rarely multiples of 96 dpi, some rounding is often applied so that a logical pixel is an integer amount of actual pixels. Doing so avoids render artifacts. The final "pixel" obtained after these two steps becomes the "anchor" to which all other absolute measurements (e.g. the "centimeter") are based on. Worked example, with a 2160p TV placed away from the viewer: Calculate the scaled pixel size as . Calculate the DPI of the TV as . Calculate the real-pixel count per logical-pixel as . A browser will then choose to use the 1.721× pixel size, or round to a 2× ratio. 
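The three-step procedure above can be sketched as follows. The article's own worked example ends at a 1.721× ratio; the concrete numbers below (a 55-inch 3840 × 2160 panel viewed from 98 inches, and a 28-inch reference viewing distance) are my own assumptions for illustration and therefore give a different ratio.

```python
import math

# Assumed constants for illustration; they are not taken from the article's example.
CSS_REFERENCE_DPI = 96.0              # "physical" CSS pixel: 1/96 inch
REFERENCE_VIEWING_DISTANCE_IN = 28.0  # assumed reference viewing distance

def logical_pixel_ratio(width_px, height_px, diagonal_in, viewing_distance_in):
    """Follow the three steps described above: scale the reference pixel to the
    viewing distance, compute the display's true DPI, then the number of real
    pixels per logical pixel."""
    # 1. Scaled pixel size (inches) subtending the same angle as 1/96 in at the reference distance.
    scaled_pixel_in = (1.0 / CSS_REFERENCE_DPI) * (viewing_distance_in / REFERENCE_VIEWING_DISTANCE_IN)
    # 2. True pixel density of the panel.
    dpi = math.hypot(width_px, height_px) / diagonal_in
    # 3. Real pixels per logical pixel (a browser may round this, e.g. to 2x or 3x).
    return scaled_pixel_in * dpi

print(logical_pixel_ratio(3840, 2160, 55.0, 98.0))   # ~2.9 with these assumed numbers
```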
Megapixel A megapixel (MP) is a million pixels; the term is used not only for the number of pixels in an image but also to express the number of image sensor elements of digital cameras or the number of display elements of digital displays. For example, a camera that makes a 2048 × 1536 pixel image (3,145,728 finished image pixels) typically uses a few extra rows and columns of sensor elements and is commonly said to have "3.2 megapixels" or "3.4 megapixels", depending on whether the number reported is the "effective" or the "total" pixel count. The number of pixels is sometimes quoted as the "resolution" of a photo. This measure of resolution can be calculated by multiplying the width and height of a sensor in pixels. Digital cameras use photosensitive electronics, either charge-coupled device (CCD) or complementary metal–oxide–semiconductor (CMOS) image sensors, consisting of a large number of single sensor elements, each of which records a measured intensity level. In most digital cameras, the sensor array is covered with a patterned color filter mosaic having red, green, and blue regions in the Bayer filter arrangement so that each sensor element can record the intensity of a single primary color of light. The camera interpolates the color information of neighboring sensor elements, through a process called demosaicing, to create the final image. These sensor elements are often called "pixels", even though they only record one channel (only red or green or blue) of the final color image. Thus, two of the three color channels for each sensor must be interpolated and a so-called N-megapixel camera that produces an N-megapixel image provides only one-third of the information that an image of the same size could get from a scanner. Thus, certain color contrasts may look fuzzier than others, depending on the allocation of the primary colors (green has twice as many elements as red or blue in the Bayer arrangement). DxO Labs invented the Perceptual MegaPixel (P-MPix) to measure the sharpness that a camera produces when paired to a particular lens – as opposed to the MP a manufacturer states for a camera product, which is based only on the camera's sensor. The new P-MPix claims to be a more accurate and relevant value for photographers to consider when weighing up camera sharpness. As of mid-2013, the Sigma 35 mm f/1.4 DG HSM lens mounted on a Nikon D800 has the highest measured P-MPix. However, with a value of 23 MP, it still falls short of the D800's 36.3 MP sensor by more than one-third. In August 2019, Xiaomi released the Redmi Note 8 Pro as the world's first smartphone with a 64 MP camera. On December 12, 2019, Samsung released the Samsung A71, which also has a 64 MP camera. In late 2019, Xiaomi announced the first camera phone with a 108 MP sensor measuring 1/1.33 inch across. The sensor is larger than those of most bridge cameras, which measure about 1/2.3 inch across. One new method to add megapixels has been introduced in a Micro Four Thirds System camera, which only uses a 16 MP sensor but can produce a 64 MP RAW (40 MP JPEG) image by making two exposures, shifting the sensor by a half pixel between them. Using a tripod to keep the multiple shots level, the several 16 MP images are then combined into a unified 64 MP image.
See also Computer display standard Dexel Gigapixel image Image resolution Intrapixel and Interpixel processing LCD crosstalk PenTile matrix family Pixel advertising Pixel art Pixel art scaling algorithms Pixel aspect ratio Pixelation Pixelization Point (typography) Glossary of video terms Voxel Vector graphics References External links A Pixel Is Not A Little Square: Microsoft Memo by computer graphics pioneer Alvy Ray Smith. "Pixels and Me", 2016 lecture by Richard F. Lyon at the Computer History Museum Square and non-Square Pixels: Technical info on pixel aspect ratios of modern video standards (480i, 576i, 1080i, 720p), plus software implications. Computer graphics data structures Digital geometry Digital imaging Digital photography Display technology Image processing Television technology
Pixel
[ "Technology", "Engineering" ]
3,856
[ "Information and communications technology", "Electronic engineering", "Television technology", "Display technology" ]
23,666
https://en.wikipedia.org/wiki/Prime%20number
A prime number (or a prime) is a natural number greater than 1 that is not a product of two smaller natural numbers. A natural number greater than 1 that is not prime is called a composite number. For example, 5 is prime because the only ways of writing it as a product, 1 × 5 or 5 × 1, involve 5 itself. However, 4 is composite because it is a product (2 × 2) in which both numbers are smaller than 4. Primes are central in number theory because of the fundamental theorem of arithmetic: every natural number greater than 1 is either a prime itself or can be factorized as a product of primes that is unique up to their order. The property of being prime is called primality. A simple but slow method of checking the primality of a given number n, called trial division, tests whether n is a multiple of any integer between 2 and √n. Faster algorithms include the Miller–Rabin primality test, which is fast but has a small chance of error, and the AKS primality test, which always produces the correct answer in polynomial time but is too slow to be practical. Particularly fast methods are available for numbers of special forms, such as Mersenne numbers. As of October 2024, the largest known prime number is a Mersenne prime with 41,024,320 decimal digits. There are infinitely many primes, as demonstrated by Euclid around 300 BC. No known simple formula separates prime numbers from composite numbers. However, the distribution of primes within the natural numbers in the large can be statistically modelled. The first result in that direction is the prime number theorem, proven at the end of the 19th century, which says roughly that the probability of a randomly chosen large number being prime is inversely proportional to its number of digits, that is, to its logarithm. Several historical questions regarding prime numbers are still unsolved. These include Goldbach's conjecture, that every even integer greater than 2 can be expressed as the sum of two primes, and the twin prime conjecture, that there are infinitely many pairs of primes that differ by two. Such questions spurred the development of various branches of number theory, focusing on analytic or algebraic aspects of numbers. Primes are used in several routines in information technology, such as public-key cryptography, which relies on the difficulty of factoring large numbers into their prime factors. In abstract algebra, objects that behave in a generalized way like prime numbers include prime elements and prime ideals. Definition and examples A natural number (1, 2, 3, 4, 5, 6, etc.) is called a prime number (or a prime) if it is greater than 1 and cannot be written as the product of two smaller natural numbers. The numbers greater than 1 that are not prime are called composite numbers. In other words, n is prime if n items cannot be divided up into smaller equal-size groups of more than one item, or if it is not possible to arrange n dots into a rectangular grid that is more than one dot wide and more than one dot high. For example, among the numbers 1 through 6, the numbers 2, 3, and 5 are the prime numbers, as there are no other numbers that divide them evenly (without a remainder). 1 is not prime, as it is specifically excluded in the definition. 4 = 2 × 2 and 6 = 2 × 3 are both composite. The divisors of a natural number n are the natural numbers that divide n evenly. Every natural number has both 1 and itself as a divisor. If it has any other divisor, it cannot be prime. This leads to an equivalent definition of prime numbers: they are the numbers with exactly two positive divisors.
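The trial-division test described earlier in this section translates directly into code; a minimal sketch, adequate for small numbers (the text continues below):

```python
import math

def is_prime(n):
    """Trial division: test whether n is a multiple of any integer
    between 2 and sqrt(n), as described above."""
    if n < 2:
        return False            # 0 and 1 are not prime by definition
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False        # found a divisor, so n is composite
    return True

print([n for n in range(2, 30) if is_prime(n)])   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```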
Those two are 1 and the number itself. As 1 has only one divisor, itself, it is not prime by this definition. Yet another way to express the same thing is that a number is prime if it is greater than one and if none of the numbers divides evenly. The first 25 prime numbers (all the prime numbers less than 100) are: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97 . No even number greater than 2 is prime because any such number can be expressed as the product . Therefore, every prime number other than 2 is an odd number, and is called an odd prime. Similarly, when written in the usual decimal system, all prime numbers larger than 5 end in 1, 3, 7, or 9. The numbers that end with other digits are all composite: decimal numbers that end in 0, 2, 4, 6, or 8 are even, and decimal numbers that end in 0 or 5 are divisible by 5. The set of all primes is sometimes denoted by (a boldface capital P) or by (a blackboard bold capital P). History The Rhind Mathematical Papyrus, from around 1550 BC, has Egyptian fraction expansions of different forms for prime and composite numbers. However, the earliest surviving records of the study of prime numbers come from the ancient Greek mathematicians, who called them (). Euclid's Elements (c. 300 BC) proves the infinitude of primes and the fundamental theorem of arithmetic, and shows how to construct a perfect number from a Mersenne prime. Another Greek invention, the Sieve of Eratosthenes, is still used to construct lists of Around 1000 AD, the Islamic mathematician Ibn al-Haytham (Alhazen) found Wilson's theorem, characterizing the prime numbers as the numbers that evenly divide . He also conjectured that all even perfect numbers come from Euclid's construction using Mersenne primes, but was unable to prove it. Another Islamic mathematician, Ibn al-Banna' al-Marrakushi, observed that the sieve of Eratosthenes can be sped up by considering only the prime divisors up to the square root of the upper limit. Fibonacci took the innovations from Islamic mathematics to Europe. His book Liber Abaci (1202) was the first to describe trial division for testing primality, again using divisors only up to the square root. In 1640 Pierre de Fermat stated (without proof) Fermat's little theorem (later proved by Leibniz and Euler). Fermat also investigated the primality of the Fermat numbers , and Marin Mersenne studied the Mersenne primes, prime numbers of the form with itself a prime. Christian Goldbach formulated Goldbach's conjecture, that every even number is the sum of two primes, in a 1742 letter to Euler. Euler proved Alhazen's conjecture (now the Euclid–Euler theorem) that all even perfect numbers can be constructed from Mersenne primes. He introduced methods from mathematical analysis to this area in his proofs of the infinitude of the primes and the divergence of the sum of the reciprocals of the primes . At the start of the 19th century, Legendre and Gauss conjectured that as tends to infinity, the number of primes up to is asymptotic to , where is the natural logarithm of . A weaker consequence of this high density of primes was Bertrand's postulate, that for every there is a prime between and , proved in 1852 by Pafnuty Chebyshev. Ideas of Bernhard Riemann in his 1859 paper on the zeta-function sketched an outline for proving the conjecture of Legendre and Gauss. 
Although the closely related Riemann hypothesis remains unproven, Riemann's outline was completed in 1896 by Hadamard and de la Vallée Poussin, and the result is now known as the prime number theorem. Another important 19th century result was Dirichlet's theorem on arithmetic progressions, that certain arithmetic progressions contain infinitely many primes. Many mathematicians have worked on primality tests for numbers larger than those where trial division is practicably applicable. Methods that are restricted to specific number forms include Pépin's test for Fermat numbers (1877), Proth's theorem (c. 1878), the Lucas–Lehmer primality test (originated 1856), and the generalized Lucas primality test. Since 1951 all the largest known primes have been found using these tests on computers. The search for ever larger primes has generated interest outside mathematical circles, through the Great Internet Mersenne Prime Search and other distributed computing projects. The idea that prime numbers had few applications outside of pure mathematics was shattered in the 1970s when public-key cryptography and the RSA cryptosystem were invented, using prime numbers as their basis. The increased practical importance of computerized primality testing and factorization led to the development of improved methods capable of handling large numbers of unrestricted form. The mathematical theory of prime numbers also moved forward with the Green–Tao theorem (2004) that there are arbitrarily long arithmetic progressions of prime numbers, and Yitang Zhang's 2013 proof that there exist infinitely many prime gaps of bounded size. Primality of one Most early Greeks did not even consider to be a number, so they could not consider its primality. A few scholars in the Greek and later Roman tradition, including Nicomachus, Iamblichus, Boethius, and Cassiodorus, also considered the prime numbers to be a subdivision of the odd numbers, so they did not consider to be prime either. However, Euclid and a majority of the other Greek mathematicians considered as prime. The medieval Islamic mathematicians largely followed the Greeks in viewing as not being a number. By the Middle Ages and Renaissance, mathematicians began treating as a number, and by the 17th century some of them included it as the first prime number. In the mid-18th century, Christian Goldbach listed as prime in his correspondence with Leonhard Euler; however, Euler himself did not consider 1 to be prime. Many 19th century mathematicians still considered to be prime, and Derrick Norman Lehmer included in his list of primes less than ten million published in 1914. Lists of primes that included 1 continued to be published as recently However, around this time, by the early 20th century, mathematicians started to agree that 1 should not be classified as a prime number. If were to be considered a prime, many statements involving primes would need to be awkwardly reworded. For example, the fundamental theorem of arithmetic would need to be rephrased in terms of factorizations into primes greater than , because every number would have multiple factorizations with any number of copies of . Similarly, the sieve of Eratosthenes would not work correctly if it handled as a prime, because it would eliminate all multiples of (that is, all other numbers) and output only the single number . 
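A minimal sketch of the sieve of Eratosthenes discussed above, starting at 2 (so the problem with treating 1 as a prime never arises) and marking with each prime only up to the square root of the limit:

```python
def sieve_of_eratosthenes(limit):
    """Return all primes up to `limit` by crossing out multiples of each prime,
    sieving only with primes whose square does not exceed the limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    p = 2
    while p * p <= limit:
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
        p += 1
    return [n for n, flag in enumerate(is_prime) if flag]

print(sieve_of_eratosthenes(100))   # the 25 primes below 100 listed earlier
```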
Some other more technical properties of prime numbers also do not hold for the number : for instance, the formulas for Euler's totient function or for the sum of divisors function are different for prime numbers than they are for . By the early 20th century, mathematicians began to agree that 1 should not be listed as prime, but rather in its own special category as a "unit". Elementary properties Unique factorization Writing a number as a product of prime numbers is called a prime factorization of the number. For example: The terms in the product are called prime factors. The same prime factor may occur more than once; this example has two copies of the prime factor When a prime occurs multiple times, exponentiation can be used to group together multiple copies of the same prime number: for example, in the second way of writing the product above, denotes the square or second power of . The central importance of prime numbers to number theory and mathematics in general stems from the fundamental theorem of arithmetic. This theorem states that every integer larger than can be written as a product of one or more primes. More strongly, this product is unique in the sense that any two prime factorizations of the same number will have the same numbers of copies of the same primes, although their ordering may differ. So, although there are many different ways of finding a factorization using an integer factorization algorithm, they all must produce the same result. Primes can thus be considered the "basic building blocks" of the natural numbers. Some proofs of the uniqueness of prime factorizations are based on Euclid's lemma: If is a prime number and divides a product of integers and then divides or divides (or both). Conversely, if a number has the property that when it divides a product it always divides at least one factor of the product, then must be prime. Infinitude There are infinitely many prime numbers. Another way of saying this is that the sequence of prime numbers never ends. This statement is referred to as Euclid's theorem in honor of the ancient Greek mathematician Euclid, since the first known proof for this statement is attributed to him. Many more proofs of the infinitude of primes are known, including an analytical proof by Euler, Goldbach's proof based on Fermat numbers, Furstenberg's proof using general topology, and Kummer's elegant proof. Euclid's proof shows that every finite list of primes is incomplete. The key idea is to multiply together the primes in any given list and add If the list consists of the primes this gives the number By the fundamental theorem, has a prime factorization with one or more prime factors. is evenly divisible by each of these factors, but has a remainder of one when divided by any of the prime numbers in the given list, so none of the prime factors of can be in the given list. Because there is no finite list of all the primes, there must be infinitely many primes. The numbers formed by adding one to the products of the smallest primes are called Euclid numbers. The first five of them are prime, but the sixth, is a composite number. Formulas for primes There is no known efficient formula for primes. For example, there is no non-constant polynomial, even in several variables, that takes only prime values. However, there are numerous expressions that do encode all primes, or only primes. One possible formula is based on Wilson's theorem and generates the number 2 many times and all other primes exactly once. 
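The Euclid numbers mentioned above can be checked directly with a small factorization routine based on trial division; this is an illustrative sketch, not part of the article:

```python
def prime_factors(n):
    """Prime factorization by repeated trial division; the multiset of factors
    returned is unique up to order (fundamental theorem of arithmetic)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# Euclid numbers: one plus the product of the smallest primes.
primes = [2, 3, 5, 7, 11, 13]
product = 1
for p in primes:
    product *= p
    euclid_number = product + 1
    print(euclid_number, prime_factors(euclid_number))
```

The output shows that the first five Euclid numbers (3, 7, 31, 211, 2311) are prime, while the sixth, 30031, factors as 59 × 509.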
There is also a set of Diophantine equations in nine variables and one parameter with the following property: the parameter is prime if and only if the resulting system of equations has a solution over the natural numbers. This can be used to obtain a single formula with the property that all its positive values are prime. Other examples of prime-generating formulas come from Mills' theorem and a theorem of Wright. These assert that there are real constants and such that are prime for any natural number in the first formula, and any number of exponents in the second formula. Here represents the floor function, the largest integer less than or equal to the number in question. However, these are not useful for generating primes, as the primes must be generated first in order to compute the values of or Open questions Many conjectures revolving about primes have been posed. Often having an elementary formulation, many of these conjectures have withstood proof for decades: all four of Landau's problems from 1912 are still unsolved. One of them is Goldbach's conjecture, which asserts that every even integer greater than can be written as a sum of two primes. , this conjecture has been verified for all numbers up to Weaker statements than this have been proven; for example, Vinogradov's theorem says that every sufficiently large odd integer can be written as a sum of three primes. Chen's theorem says that every sufficiently large even number can be expressed as the sum of a prime and a semiprime (the product of two primes). Also, any even integer greater than can be written as the sum of six primes. The branch of number theory studying such questions is called additive number theory. Another type of problem concerns prime gaps, the differences between consecutive primes. The existence of arbitrarily large prime gaps can be seen by noting that the sequence consists of composite numbers, for any natural number However, large prime gaps occur much earlier than this argument shows. For example, the first prime gap of length 8 is between the primes 89 and 97, much smaller than It is conjectured that there are infinitely many twin primes, pairs of primes with difference 2; this is the twin prime conjecture. Polignac's conjecture states more generally that for every positive integer there are infinitely many pairs of consecutive primes that differ by Andrica's conjecture, Brocard's conjecture, Legendre's conjecture, and Oppermann's conjecture all suggest that the largest gaps between primes from to should be at most approximately a result that is known to follow from the Riemann hypothesis, while the much stronger Cramér conjecture sets the largest gap size at . Prime gaps can be generalized to prime -tuples, patterns in the differences among more than two prime numbers. Their infinitude and density are the subject of the first Hardy–Littlewood conjecture, which can be motivated by the heuristic that the prime numbers behave similarly to a random sequence of numbers with density given by the prime number theorem. Analytic properties Analytic number theory studies number theory through the lens of continuous functions, limits, infinite series, and the related mathematics of the infinite and infinitesimal. This area of study began with Leonhard Euler and his first major result, the solution to the Basel problem. The problem asked for the value of the infinite sum which today can be recognized as the value of the Riemann zeta function. 
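The Basel problem just mentioned concerns the infinite sum of the reciprocals of the squares, 1 + 1/4 + 1/9 + 1/16 + ⋯. A quick numerical sketch of its partial sums, together with a Monte Carlo estimate of the probability that two random integers are relatively prime (a quantity discussed just below); the sample size and range here are arbitrary choices:

```python
import math
import random

# Partial sums of the Basel series 1 + 1/4 + 1/9 + ... approach pi^2 / 6.
for terms in (10, 100, 10_000):
    partial = sum(1.0 / n**2 for n in range(1, terms + 1))
    print(terms, partial)
print(math.pi**2 / 6)   # the limit, ~1.6449

# Monte Carlo estimate of the chance that two random integers are coprime;
# the limiting value, discussed below, is 6 / pi^2 ~ 0.6079.
random.seed(0)
trials = 100_000
coprime = sum(math.gcd(random.randrange(1, 10**6), random.randrange(1, 10**6)) == 1
              for _ in range(trials))
print(coprime / trials, 6 / math.pi**2)
```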
The Riemann zeta function is closely connected to the prime numbers and to one of the most significant unsolved problems in mathematics, the Riemann hypothesis. Euler showed that ζ(2) = π²/6. The reciprocal of this number, 6/π², is the limiting probability that two random numbers selected uniformly from a large range are relatively prime (have no factors in common). The distribution of primes in the large, such as the question how many primes are smaller than a given, large threshold, is described by the prime number theorem, but no efficient formula for the nth prime is known. Dirichlet's theorem on arithmetic progressions, in its basic form, asserts that linear polynomials a + bn with relatively prime integers a and b take infinitely many prime values. Stronger forms of the theorem state that the sum of the reciprocals of these prime values diverges, and that different linear polynomials with the same b have approximately the same proportions of primes. Although conjectures have been formulated about the proportions of primes in higher-degree polynomials, they remain unproven, and it is unknown whether there exists a quadratic polynomial that (for integer arguments) is prime infinitely often. Analytical proof of Euclid's theorem Euler's proof that there are infinitely many primes considers the sums of reciprocals of primes, 1/2 + 1/3 + 1/5 + 1/7 + ⋯ + 1/p. Euler showed that, for any arbitrary real number x, there exists a prime p for which this sum is bigger than x. This shows that there are infinitely many primes, because if there were finitely many primes the sum would reach its maximum value at the biggest prime rather than growing past every x. The growth rate of this sum is described more precisely by Mertens' second theorem. For comparison, the sum of reciprocals of squares, 1/1² + 1/2² + 1/3² + ⋯, does not grow to infinity as the number of terms goes to infinity (see the Basel problem). In this sense, prime numbers occur more often than squares of natural numbers, although both sets are infinite. Brun's theorem states that the sum of the reciprocals of twin primes, (1/3 + 1/5) + (1/5 + 1/7) + (1/11 + 1/13) + ⋯, is finite. Because of Brun's theorem, it is not possible to use Euler's method to solve the twin prime conjecture, that there exist infinitely many twin primes. Number of primes below a given bound The prime-counting function π(n) is defined as the number of primes not greater than n. For example, π(11) = 5, since there are five primes less than or equal to 11. Methods such as the Meissel–Lehmer algorithm can compute exact values of π(n) faster than it would be possible to list each prime up to n. The prime number theorem states that π(n) is asymptotic to n/log n, which is denoted as π(n) ~ n/log n, and means that the ratio of π(n) to the right-hand fraction approaches 1 as n grows to infinity. This implies that the likelihood that a randomly chosen number less than n is prime is (approximately) inversely proportional to the number of digits in n. It also implies that the nth prime number is proportional to n log n and therefore that the average size of a prime gap is proportional to log n. A more accurate estimate for π(n) is given by the offset logarithmic integral Li(n). Arithmetic progressions An arithmetic progression is a finite or infinite sequence of numbers such that consecutive numbers in the sequence all have the same difference. This difference is called the modulus of the progression. For example, 3, 12, 21, 30, 39, ... is an infinite arithmetic progression with modulus 9. In an arithmetic progression, all the numbers have the same remainder when divided by the modulus; in this example, the remainder is 3. Because both the modulus 9 and the remainder 3 are multiples of 3, so is every element in the sequence.
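A small check of the progression just described, and of the contrast with a progression whose remainder is relatively prime to the modulus; the trial-division helper here is only illustrative:

```python
import math

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

modulus = 9
# Remainder 3 shares the factor 3 with the modulus: every term is a multiple of 3.
terms_3 = [3 + modulus * k for k in range(12)]
print(terms_3)                                 # 3, 12, 21, 30, ...
print([t for t in terms_3 if is_prime(t)])     # only 3 itself is prime

# Remainder 2 is relatively prime to 9: Dirichlet's theorem says the progression
# 2, 11, 20, 29, ... contains infinitely many primes.
terms_2 = [2 + modulus * k for k in range(12)]
print([t for t in terms_2 if is_prime(t)])     # 2, 11, 29, 47, 83, 101
```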
Therefore, this progression contains only one prime number, 3 itself. In general, the infinite progression can have more than one prime only when its remainder and modulus are relatively prime. If they are relatively prime, Dirichlet's theorem on arithmetic progressions asserts that the progression contains infinitely many primes. The Green–Tao theorem shows that there are arbitrarily long finite arithmetic progressions consisting only of primes. Prime values of quadratic polynomials Euler noted that the function yields prime numbers for , although composite numbers appear among its later values. The search for an explanation for this phenomenon led to the deep algebraic number theory of Heegner numbers and the class number problem. The Hardy–Littlewood conjecture F predicts the density of primes among the values of quadratic polynomials with integer coefficients in terms of the logarithmic integral and the polynomial coefficients. No quadratic polynomial has been proven to take infinitely many prime values. The Ulam spiral arranges the natural numbers in a two-dimensional grid, spiraling in concentric squares surrounding the origin with the prime numbers highlighted. Visually, the primes appear to cluster on certain diagonals and not others, suggesting that some quadratic polynomials take prime values more often than others. Zeta function and the Riemann hypothesis One of the most famous unsolved questions in mathematics, dating from 1859, and one of the Millennium Prize Problems, is the Riemann hypothesis, which asks where the zeros of the Riemann zeta function are located. This function is an analytic function on the complex numbers. For complex numbers with real part greater than one it equals both an infinite sum over all integers, and an infinite product over the prime numbers, This equality between a sum and a product, discovered by Euler, is called an Euler product. The Euler product can be derived from the fundamental theorem of arithmetic, and shows the close connection between the zeta function and the prime numbers. It leads to another proof that there are infinitely many primes: if there were only finitely many, then the sum-product equality would also be valid at , but the sum would diverge (it is the harmonic series ) while the product would be finite, a contradiction. The Riemann hypothesis states that the zeros of the zeta-function are all either negative even numbers, or complex numbers with real part equal to 1/2. The original proof of the prime number theorem was based on a weak form of this hypothesis, that there are no zeros with real part equal to , although other more elementary proofs have been found. The prime-counting function can be expressed by Riemann's explicit formula as a sum in which each term comes from one of the zeros of the zeta function; the main term of this sum is the logarithmic integral, and the remaining terms cause the sum to fluctuate above and below the main term. In this sense, the zeros control how regularly the prime numbers are distributed. If the Riemann hypothesis is true, these fluctuations will be small, and the asymptotic distribution of primes given by the prime number theorem will also hold over much shorter intervals (of length about the square root of for intervals near a number ). Abstract algebra Modular arithmetic and finite fields Modular arithmetic modifies usual arithmetic by only using the numbers , for a natural number called the modulus. 
Any other natural number can be mapped into this system by replacing it by its remainder after division by . Modular sums, differences and products are calculated by performing the same replacement by the remainder on the result of the usual sum, difference, or product of integers. Equality of integers corresponds to congruence in modular arithmetic: and are congruent (written mod ) when they have the same remainder after division by . However, in this system of numbers, division by all nonzero numbers is possible if and only if the modulus is prime. For instance, with the prime number as modulus, division by is possible: , because clearing denominators by multiplying both sides by gives the valid formula . However, with the composite modulus , division by is impossible. There is no valid solution to : clearing denominators by multiplying by causes the left-hand side to become while the right-hand side becomes either or . In the terminology of abstract algebra, the ability to perform division means that modular arithmetic modulo a prime number forms a field or, more specifically, a finite field, while other moduli only give a ring but not a field. Several theorems about primes can be formulated using modular arithmetic. For instance, Fermat's little theorem states that if (mod ), then (mod ). Summing this over all choices of gives the equation valid whenever is prime. Giuga's conjecture says that this equation is also a sufficient condition for to be prime. Wilson's theorem says that an integer is prime if and only if the factorial is congruent to mod . For a composite number  this cannot hold, since one of its factors divides both and , and so is impossible. p-adic numbers The -adic order of an integer is the number of copies of in the prime factorization of . The same concept can be extended from integers to rational numbers by defining the -adic order of a fraction to be . The -adic absolute value of any rational number is then defined as . Multiplying an integer by its -adic absolute value cancels out the factors of in its factorization, leaving only the other primes. Just as the distance between two real numbers can be measured by the absolute value of their distance, the distance between two rational numbers can be measured by their -adic distance, the -adic absolute value of their difference. For this definition of distance, two numbers are close together (they have a small distance) when their difference is divisible by a high power of . In the same way that the real numbers can be formed from the rational numbers and their distances, by adding extra limiting values to form a complete field, the rational numbers with the -adic distance can be extended to a different complete field, the -adic numbers. This picture of an order, absolute value, and complete field derived from them can be generalized to algebraic number fields and their valuations (certain mappings from the multiplicative group of the field to a totally ordered additive group, also called orders), absolute values (certain multiplicative mappings from the field to the real numbers, also called norms), and places (extensions to complete fields in which the given field is a dense set, also called completions). The extension from the rational numbers to the real numbers, for instance, is a place in which the distance between numbers is the usual absolute value of their difference. 
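The p-adic order and p-adic absolute value defined above are straightforward to compute. The following is a minimal Python sketch (not part of the original article; the function names are illustrative), assuming exact rational arithmetic via the standard library:

```python
from fractions import Fraction

def padic_order(x: Fraction, p: int) -> int:
    """nu_p(x): net number of copies of p in the factorization of x
    (positive contributions from the numerator, negative from the denominator)."""
    if x == 0:
        raise ValueError("the p-adic order of 0 is conventionally infinite")
    num, den = x.numerator, x.denominator
    order = 0
    while num % p == 0:
        num //= p
        order += 1
    while den % p == 0:
        den //= p
        order -= 1
    return order

def padic_abs(x: Fraction, p: int) -> Fraction:
    """|x|_p = p**(-nu_p(x)); numbers divisible by a high power of p are 'small'."""
    if x == 0:
        return Fraction(0)
    return Fraction(1, p) ** padic_order(x, p)

# 48 = 2^4 * 3 is 2-adically small but 5-adically a unit:
print(padic_abs(Fraction(48), 2), padic_abs(Fraction(48), 5))   # 1/16 and 1
print(padic_abs(Fraction(1, 8), 2))                             # 8: denominators make numbers 'large'
```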
The corresponding mapping to an additive group would be the logarithm of the absolute value, although this does not meet all the requirements of a valuation. According to Ostrowski's theorem, up to a natural notion of equivalence, the real numbers and -adic numbers, with their orders and absolute values, are the only valuations, absolute values, and places on the rational numbers. The local–global principle allows certain problems over the rational numbers to be solved by piecing together solutions from each of their places, again underlining the importance of primes to number theory. Prime elements of a ring A commutative ring is an algebraic structure where addition, subtraction and multiplication are defined. The integers are a ring, and the prime numbers in the integers have been generalized to rings in two different ways, prime elements and irreducible elements. An element of a ring is called prime if it is nonzero, has no multiplicative inverse (that is, it is not a unit), and satisfies the following requirement: whenever divides the product of two elements of , it also divides at least one of or . An element is irreducible if it is neither a unit nor the product of two other non-unit elements. In the ring of integers, the prime and irreducible elements form the same set, In an arbitrary ring, all prime elements are irreducible. The converse does not hold in general, but does hold for unique factorization domains. The fundamental theorem of arithmetic continues to hold (by definition) in unique factorization domains. An example of such a domain is the Gaussian integers , the ring of complex numbers of the form where denotes the imaginary unit and and are arbitrary integers. Its prime elements are known as Gaussian primes. Not every number that is prime among the integers remains prime in the Gaussian integers; for instance, the number 2 can be written as a product of the two Gaussian primes and . Rational primes (the prime elements in the integers) congruent to 3 mod 4 are Gaussian primes, but rational primes congruent to 1 mod 4 are not. This is a consequence of Fermat's theorem on sums of two squares, which states that an odd prime is expressible as the sum of two squares, , and therefore factorable as , exactly when is 1 mod 4. Prime ideals Not every ring is a unique factorization domain. For instance, in the ring of numbers (for integers and ) the number has two factorizations , where neither of the four factors can be reduced any further, so it does not have a unique factorization. In order to extend unique factorization to a larger class of rings, the notion of a number can be replaced with that of an ideal, a subset of the elements of a ring that contains all sums of pairs of its elements, and all products of its elements with ring elements. Prime ideals, which generalize prime elements in the sense that the principal ideal generated by a prime element is a prime ideal, are an important tool and object of study in commutative algebra, algebraic number theory and algebraic geometry. The prime ideals of the ring of integers are the ideals , , , , , , ... The fundamental theorem of arithmetic generalizes to the Lasker–Noether theorem, which expresses every ideal in a Noetherian commutative ring as an intersection of primary ideals, which are the appropriate generalizations of prime powers. The spectrum of a ring is a geometric space whose points are the prime ideals of the ring. 
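Returning to the Gaussian primes discussed above, the splitting behaviour of a rational prime can be checked by brute force, since by Fermat's theorem on sums of two squares a prime p splits in the Gaussian integers exactly when it can be written as a² + b². This is a small illustrative sketch, not the method used in the sources, and the function name is ours:

```python
def two_square_decomposition(p: int):
    """Return (a, b) with a*a + b*b == p if such a decomposition exists, else None.

    For an odd prime p this happens exactly when p % 4 == 1, in which case
    p = (a + bi)(a - bi) factors in the Gaussian integers; 2 = (1 + i)(1 - i)
    is the special ramified case mentioned in the text."""
    a = 0
    while a * a <= p:
        b_squared = p - a * a
        b = int(b_squared ** 0.5)
        if b * b == b_squared:
            return (a, b)
        a += 1
    return None

for p in [2, 5, 13, 29, 3, 7, 11]:
    print(p, p % 4, two_square_decomposition(p))
# 5 = 1^2 + 2^2 and 13 = 2^2 + 3^2 split; 3, 7 and 11 (all 3 mod 4) stay prime.
```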
Arithmetic geometry also benefits from this notion, and many concepts exist in both geometry and number theory. For example, factorization or ramification of prime ideals when lifted to an extension field, a basic problem of algebraic number theory, bears some resemblance with ramification in geometry. These concepts can even assist with number-theoretic questions solely concerned with integers. For example, prime ideals in the ring of integers of quadratic number fields can be used in proving quadratic reciprocity, a statement that concerns the existence of square roots modulo integer prime numbers. Early attempts to prove Fermat's Last Theorem led to Kummer's introduction of regular primes, integer prime numbers connected with the failure of unique factorization in the cyclotomic integers. The question of how many integer prime numbers factor into a product of multiple prime ideals in an algebraic number field is addressed by Chebotarev's density theorem, which (when applied to the cyclotomic integers) has Dirichlet's theorem on primes in arithmetic progressions as a special case. Group theory In the theory of finite groups the Sylow theorems imply that, if a power of a prime number p^n divides the order of a group, then the group has a subgroup of order p^n. By Lagrange's theorem, any group of prime order is a cyclic group, and by Burnside's theorem any group whose order is divisible by only two primes is solvable. Computational methods For a long time, number theory in general, and the study of prime numbers in particular, was seen as the canonical example of pure mathematics, with no applications outside of mathematics other than the use of prime-numbered gear teeth to distribute wear evenly. In particular, number theorists such as British mathematician G. H. Hardy prided themselves on doing work that had absolutely no military significance. This vision of the purity of number theory was shattered in the 1970s, when it was publicly announced that prime numbers could be used as the basis for the creation of public-key cryptography algorithms. These applications have led to significant study of algorithms for computing with prime numbers, and in particular of primality testing, methods for determining whether a given number is prime. The most basic primality testing routine, trial division, is too slow to be useful for large numbers. One group of modern primality tests is applicable to arbitrary numbers, while more efficient tests are available for numbers of special types. Most primality tests only tell whether their argument is prime or not. Routines that also provide a prime factor of composite arguments (or all of its prime factors) are called factorization algorithms. Prime numbers are also used in computing for checksums, hash tables, and pseudorandom number generators. Trial division The most basic method of checking the primality of a given integer n is called trial division. This method divides n by each integer from 2 up to the square root of n. Any such integer dividing n evenly establishes n as composite; otherwise it is prime. Integers larger than the square root do not need to be checked because, whenever n = a·b, one of the two factors a and b is less than or equal to the square root of n. Another optimization is to check only primes as factors in this range. For instance, to check whether 37 is prime, this method divides it by the primes in the range from 2 to √37, which are 2, 3, and 5. Each division produces a nonzero remainder, so 37 is indeed prime. 
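The trial-division test just described translates almost directly into code. A minimal Python sketch (the function name is ours):

```python
from math import isqrt

def is_prime_trial_division(n: int) -> bool:
    """Primality by trial division: test every candidate divisor d with 2 <= d <= sqrt(n)."""
    if n < 2:
        return False
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:        # d divides n evenly, so n is composite
            return False
    return True

# 37 has no divisor between 2 and 6 (in particular none of the primes 2, 3, 5), so it is prime.
print(is_prime_trial_division(37))
print([n for n in range(2, 40) if is_prime_trial_division(n)])
```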
Although this method is simple to describe, it is impractical for testing the primality of large integers, because the number of tests that it performs grows exponentially as a function of the number of digits of these integers. However, trial division is still used, with a smaller limit than the square root on the divisor size, to quickly discover composite numbers with small factors, before using more complicated methods on the numbers that pass this filter. Sieves Before computers, mathematical tables listing all of the primes or prime factorizations up to a given limit were commonly printed. The oldest known method for generating a list of primes is called the sieve of Eratosthenes. The animation shows an optimized variant of this method. Another more asymptotically efficient sieving method for the same problem is the sieve of Atkin. In advanced mathematics, sieve theory applies similar methods to other problems. Primality testing versus primality proving Some of the fastest modern tests for whether an arbitrary given number is prime are probabilistic (or Monte Carlo) algorithms, meaning that they have a small random chance of producing an incorrect answer. For instance the Solovay–Strassen primality test on a given number chooses a number randomly from through and uses modular exponentiation to check whether is divisible by . If so, it answers yes and otherwise it answers no. If really is prime, it will always answer yes, but if is composite then it answers yes with probability at most 1/2 and no with probability at least 1/2. If this test is repeated times on the same number, the probability that a composite number could pass the test every time is at most . Because this decreases exponentially with the number of tests, it provides high confidence (although not certainty) that a number that passes the repeated test is prime. On the other hand, if the test ever fails, then the number is certainly composite. A composite number that passes such a test is called a pseudoprime. In contrast, some other algorithms guarantee that their answer will always be correct: primes will always be determined to be prime and composites will always be determined to be composite. For instance, this is true of trial division. The algorithms with guaranteed-correct output include both deterministic (non-random) algorithms, such as the AKS primality test, and randomized Las Vegas algorithms where the random choices made by the algorithm do not affect its final answer, such as some variations of elliptic curve primality proving. When the elliptic curve method concludes that a number is prime, it provides primality certificate that can be verified quickly. The elliptic curve primality test is the fastest in practice of the guaranteed-correct primality tests, but its runtime analysis is based on heuristic arguments rather than rigorous proofs. The AKS primality test has mathematically proven time complexity, but is slower than elliptic curve primality proving in practice. These methods can be used to generate large random prime numbers, by generating and testing random numbers until finding one that is prime; when doing this, a faster probabilistic test can quickly eliminate most composite numbers before a guaranteed-correct algorithm is used to verify that the remaining numbers are prime. The following table lists some of these tests. Their running time is given in terms of , the number to be tested and, for probabilistic algorithms, the number of tests performed. 
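The repeated probabilistic testing described above can be sketched with the Miller–Rabin test (mentioned elsewhere in this article); this is an illustrative sketch rather than the Solovay–Strassen procedure itself, and the parameter names are ours. A composite number passes a single round with probability at most 1/4, so k independent rounds leave a failure probability of at most 4^(−k):

```python
import random

def miller_rabin(n: int, k: int = 20) -> bool:
    """Probabilistic primality test: never rejects a prime, and accepts a
    composite with probability at most 4**(-k)."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7):
        if n % small == 0:
            return n == small
    # Write n - 1 = d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False      # a is a witness that n is composite
    return True

# Two Mersenne numbers: 2^61 - 1 is prime, 2^67 - 1 is composite.
print(miller_rabin(2**61 - 1), miller_rabin(2**67 - 1))
```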
Moreover, is an arbitrarily small positive number, and log is the logarithm to an unspecified base. The big O notation means that each time bound should be multiplied by a constant factor to convert it from dimensionless units to units of time; this factor depends on implementation details such as the type of computer used to run the algorithm, but not on the input parameters and . Special-purpose algorithms and the largest known prime In addition to the aforementioned tests that apply to any natural number, some numbers of a special form can be tested for primality more quickly. For example, the Lucas–Lehmer primality test can determine whether a Mersenne number (one less than a power of two) is prime, deterministically, in the same time as a single iteration of the Miller–Rabin test. This is why since 1992 () the largest known prime has always been a Mersenne prime. It is conjectured that there are infinitely many Mersenne primes. The following table gives the largest known primes of various types. Some of these primes have been found using distributed computing. In 2009, the Great Internet Mersenne Prime Search project was awarded a US$100,000 prize for first discovering a prime with at least 10 million digits. The Electronic Frontier Foundation also offers $150,000 and $250,000 for primes with at least 100 million digits and 1 billion digits, respectively. Integer factorization Given a composite integer , the task of providing one (or all) prime factors is referred to as factorization of . It is significantly more difficult than primality testing, and although many factorization algorithms are known, they are slower than the fastest primality testing methods. Trial division and Pollard's rho algorithm can be used to find very small factors of , and elliptic curve factorization can be effective when has factors of moderate size. Methods suitable for arbitrary large numbers that do not depend on the size of its factors include the quadratic sieve and general number field sieve. As with primality testing, there are also factorization algorithms that require their input to have a special form, including the special number field sieve. the largest number known to have been factored by a general-purpose algorithm is RSA-240, which has 240 decimal digits (795 bits) and is the product of two large primes. Shor's algorithm can factor any integer in a polynomial number of steps on a quantum computer. However, current technology can only run this algorithm for very small numbers. , the largest number that has been factored by a quantum computer running Shor's algorithm is 21. Other computational applications Several public-key cryptography algorithms, such as RSA and the Diffie–Hellman key exchange, are based on large prime numbers (2048-bit primes are common). RSA relies on the assumption that it is much easier (that is, more efficient) to perform the multiplication of two (large) numbers and than to calculate and (assumed coprime) if only the product is known. The Diffie–Hellman key exchange relies on the fact that there are efficient algorithms for modular exponentiation (computing ), while the reverse operation (the discrete logarithm) is thought to be a hard problem. Prime numbers are frequently used for hash tables. For instance the original method of Carter and Wegman for universal hashing was based on computing hash functions by choosing random linear functions modulo large prime numbers. 
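The random-linear-function construction just mentioned can be sketched as follows (a minimal illustration, with parameter names of our choosing): pick a prime p larger than any key, choose a and b at random, and hash a key x to ((a·x + b) mod p) mod m, where m is the table size.

```python
import random

def make_universal_hash(m: int, p: int = (1 << 61) - 1):
    """Carter–Wegman style universal hash into m buckets.

    p must be a prime larger than every key; for distinct keys x != y the
    collision probability over the random choice of (a, b) is roughly 1/m."""
    a = random.randrange(1, p)
    b = random.randrange(0, p)
    return lambda x: ((a * x + b) % p) % m

h = make_universal_hash(m=16)
print([h(x) for x in (3, 14, 159, 2653)])
```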
Carter and Wegman generalized this method to -independent hashing by using higher-degree polynomials, again modulo large primes. As well as in the hash function, prime numbers are used for the hash table size in quadratic probing based hash tables to ensure that the probe sequence covers the whole table. Some checksum methods are based on the mathematics of prime numbers. For instance the checksums used in International Standard Book Numbers are defined by taking the rest of the number modulo , a prime number. Because is prime this method can detect both single-digit errors and transpositions of adjacent digits. Another checksum method, Adler-32, uses arithmetic modulo , the largest prime number less than . Prime numbers are also used in pseudorandom number generators including linear congruential generators and the Mersenne Twister. Other applications Prime numbers are of central importance to number theory but also have many applications to other areas within mathematics, including abstract algebra and elementary geometry. For example, it is possible to place prime numbers of points in a two-dimensional grid so that no three are in a line, or so that every triangle formed by three of the points has large area. Another example is Eisenstein's criterion, a test for whether a polynomial is irreducible based on divisibility of its coefficients by a prime number and its square. The concept of a prime number is so important that it has been generalized in different ways in various branches of mathematics. Generally, "prime" indicates minimality or indecomposability, in an appropriate sense. For example, the prime field of a given field is its smallest subfield that contains both 0 and 1. It is either the field of rational numbers or a finite field with a prime number of elements, whence the name. Often a second, additional meaning is intended by using the word prime, namely that any object can be, essentially uniquely, decomposed into its prime components. For example, in knot theory, a prime knot is a knot that is indecomposable in the sense that it cannot be written as the connected sum of two nontrivial knots. Any knot can be uniquely expressed as a connected sum of prime knots. The prime decomposition of 3-manifolds is another example of this type. Beyond mathematics and computing, prime numbers have potential connections to quantum mechanics, and have been used metaphorically in the arts and literature. They have also been used in evolutionary biology to explain the life cycles of cicadas. Constructible polygons and polygon partitions Fermat primes are primes of the form with a nonnegative integer. They are named after Pierre de Fermat, who conjectured that all such numbers are prime. The first five of these numbers – 3, 5, 17, 257, and 65,537 – are prime, but is composite and so are all other Fermat numbers that have been verified as of 2017. A regular -gon is constructible using straightedge and compass if and only if the odd prime factors of (if any) are distinct Fermat primes. Likewise, a regular -gon may be constructed using straightedge, compass, and an angle trisector if and only if the prime factors of are any number of copies of 2 or 3 together with a (possibly empty) set of distinct Pierpont primes, primes of the form . It is possible to partition any convex polygon into smaller convex polygons of equal area and equal perimeter, when is a power of a prime number, but this is not known for other values of . 
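The straightedge-and-compass criterion stated above is easy to check mechanically: a regular n-gon is constructible exactly when the odd part of n is a product of distinct Fermat primes. The sketch below (function name ours) uses the five known Fermat primes listed in the text, so it reports "not constructible" for any n whose odd part involves a prime outside that list:

```python
KNOWN_FERMAT_PRIMES = [3, 5, 17, 257, 65537]   # the only Fermat primes known

def gon_constructible(n: int) -> bool:
    """Gauss–Wantzel criterion: n-gon constructible iff
    n = 2^k times a product of distinct (known) Fermat primes."""
    if n < 3:
        return False
    while n % 2 == 0:          # strip the power of two
        n //= 2
    for f in KNOWN_FERMAT_PRIMES:
        if n % f == 0:
            n //= f
            if n % f == 0:     # a repeated Fermat prime factor is not allowed
                return False
    return n == 1

print([n for n in range(3, 30) if gon_constructible(n)])
# [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24]
```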
Quantum mechanics Beginning with the work of Hugh Montgomery and Freeman Dyson in the 1970s, mathematicians and physicists have speculated that the zeros of the Riemann zeta function are connected to the energy levels of quantum systems. Prime numbers are also significant in quantum information science, thanks to mathematical structures such as mutually unbiased bases and symmetric informationally complete positive-operator-valued measures. Biology The evolutionary strategy used by cicadas of the genus Magicicada makes use of prime numbers. These insects spend most of their lives as grubs underground. They only pupate and then emerge from their burrows after 7, 13 or 17 years, at which point they fly about, breed, and then die after a few weeks at most. Biologists theorize that these prime-numbered breeding cycle lengths have evolved in order to prevent predators from synchronizing with these cycles. In contrast, the multi-year periods between flowering in bamboo plants are hypothesized to be smooth numbers, having only small prime numbers in their factorizations. Arts and literature Prime numbers have influenced many artists and writers. The French composer Olivier Messiaen used prime numbers to create ametrical music through "natural phenomena". In works such as La Nativité du Seigneur (1935) and Quatre études de rythme (1949–1950), he simultaneously employs motifs with lengths given by different prime numbers to create unpredictable rhythms: the primes 41, 43, 47 and 53 appear in the third étude, "Neumes rythmiques". According to Messiaen this way of composing was "inspired by the movements of nature, movements of free and unequal durations". In his science fiction novel Contact, scientist Carl Sagan suggested that prime factorization could be used as a means of establishing two-dimensional image planes in communications with aliens, an idea that he had first developed informally with American astronomer Frank Drake in 1975. In the novel The Curious Incident of the Dog in the Night-Time by Mark Haddon, the narrator arranges the sections of the story by consecutive prime numbers as a way to convey the mental state of its main character, a mathematically gifted teen with Asperger syndrome. Prime numbers are used as a metaphor for loneliness and isolation in the Paolo Giordano novel The Solitude of Prime Numbers, in which they are portrayed as "outsiders" among integers. Notes References External links Caldwell, Chris, The Prime Pages at primes.utm.edu. . "Teacher package: Prime numbers" from Plus, December 1, 2008, produced by the Millennium Mathematics Project at the University of Cambridge. Generators and calculators Prime factors calculator can factorize any positive integer up to 20 digits. Fast Online primality test with factorization makes use of the Elliptic Curve Method (up to thousand-digits numbers, requires Java). Huge database of prime numbers. Prime Numbers up to 1 trillion. . Articles containing proofs Integer sequences
Prime number
[ "Mathematics" ]
9,953
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Prime numbers", "Mathematical objects", "Combinatorics", "Articles containing proofs", "Numbers", "Number theory" ]
23,670
https://en.wikipedia.org/wiki/Perfect%20number
In number theory, a perfect number is a positive integer that is equal to the sum of its positive proper divisors, that is, divisors excluding the number itself. For instance, 6 has proper divisors 1, 2 and 3, and 1 + 2 + 3 = 6, so 6 is a perfect number. The next perfect number is 28, since 1 + 2 + 4 + 7 + 14 = 28. The first four perfect numbers are 6, 28, 496 and 8128. The sum of proper divisors of a number is called its aliquot sum, so a perfect number is one that is equal to its aliquot sum. Equivalently, a perfect number is a number that is half the sum of all of its positive divisors; in symbols, σ(n) = 2n, where σ is the sum-of-divisors function. This definition is ancient, appearing as early as Euclid's Elements (VII.22) where it is called τέλειος ἀριθμός (perfect, ideal, or complete number). Euclid also proved a formation rule (IX.36) whereby q(q + 1)/2 is an even perfect number whenever q is a prime of the form 2^p − 1 for positive integer p, what is now called a Mersenne prime. Two millennia later, Leonhard Euler proved that all even perfect numbers are of this form. This is known as the Euclid–Euler theorem. It is not known whether there are any odd perfect numbers, nor whether infinitely many perfect numbers exist. History In about 300 BC Euclid showed that if 2^p − 1 is prime then 2^(p−1)(2^p − 1) is perfect. The first four perfect numbers were the only ones known to early Greek mathematics, and the mathematician Nicomachus noted 8128 as early as around AD 100. In modern language, Nicomachus states without proof that every perfect number is of the form 2^(n−1)(2^n − 1) where 2^n − 1 is prime. He seems to be unaware that n itself has to be prime. He also says (wrongly) that the perfect numbers end in 6 or 8 alternately. (The first 5 perfect numbers end with digits 6, 8, 6, 8, 6; but the sixth also ends in 6.) Philo of Alexandria in his first-century book "On the creation" mentions perfect numbers, claiming that the world was created in 6 days and the moon orbits in 28 days because 6 and 28 are perfect. Philo is followed by Origen, and by Didymus the Blind, who adds the observation that there are only four perfect numbers that are less than 10,000. (Commentary on Genesis 1. 14–19). St Augustine defines perfect numbers in City of God (Book XI, Chapter 30) in the early 5th century AD, repeating the claim that God created the world in 6 days because 6 is the smallest perfect number. The Egyptian mathematician Ismail ibn Fallūs (1194–1252) mentioned the next three perfect numbers (33,550,336; 8,589,869,056; and 137,438,691,328) and listed a few more which are now known to be incorrect. The first known European mention of the fifth perfect number is a manuscript written between 1456 and 1461 by an unknown mathematician. In 1588, the Italian mathematician Pietro Cataldi identified the sixth (8,589,869,056) and the seventh (137,438,691,328) perfect numbers, and also proved that every perfect number obtained from Euclid's rule ends with a 6 or an 8. Even perfect numbers Euclid proved that 2^(p−1)(2^p − 1) is an even perfect number whenever 2^p − 1 is prime (Elements, Prop. IX.36). For example, the first four perfect numbers are generated by the formula 2^(p−1)(2^p − 1), with p a prime number, as follows: for p = 2, 2^1(2^2 − 1) = 6; for p = 3, 2^2(2^3 − 1) = 28; for p = 5, 2^4(2^5 − 1) = 496; for p = 7, 2^6(2^7 − 1) = 8128. Prime numbers of the form 2^p − 1 are known as Mersenne primes, after the seventeenth-century monk Marin Mersenne, who studied number theory and perfect numbers. For 2^p − 1 to be prime, it is necessary that p itself be prime. However, not all numbers of the form 2^p − 1 with p a prime are prime; for example, 2^11 − 1 = 2047 = 23 × 89 is not a prime number. In fact, Mersenne primes are very rare: of the primes p up to 68,874,199, 2^p − 1 is prime for only 48 of them. 
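Euclid's rule is easy to demonstrate in code. The sketch below (helper names ours) recognizes Mersenne-prime exponents with the Lucas–Lehmer test mentioned earlier in this document and then applies the formula 2^(p−1)(2^p − 1); it is an illustration under those assumptions, not a method taken from the article:

```python
def lucas_lehmer(p: int) -> bool:
    """True exactly when the Mersenne number 2**p - 1 is prime (p an odd prime)."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

def even_perfect_numbers(limit_p: int):
    """Yield the even perfect numbers 2**(p-1) * (2**p - 1) for prime p up to limit_p."""
    yield 6                     # p = 2 handled separately (Lucas-Lehmer needs odd p)
    for p in range(3, limit_p + 1, 2):
        p_is_prime = all(p % d for d in range(3, int(p ** 0.5) + 1, 2))
        if p_is_prime and lucas_lehmer(p):
            yield (1 << (p - 1)) * ((1 << p) - 1)

print(list(even_perfect_numbers(13)))   # [6, 28, 496, 8128, 33550336]
```

Note that p = 11 is skipped automatically, because 2^11 − 1 = 2047 is composite and therefore fails the Lucas–Lehmer test.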
While Nicomachus had stated (without proof) that perfect numbers were of the form where is prime (though he stated this somewhat differently), Ibn al-Haytham (Alhazen) circa AD 1000 was unwilling to go that far, declaring instead (also without proof) that the formula yielded only every even perfect number. It was not until the 18th century that Leonhard Euler proved that the formula will yield all the even perfect numbers. Thus, there is a one-to-one correspondence between even perfect numbers and Mersenne primes; each Mersenne prime generates one even perfect number, and vice versa. This result is often referred to as the Euclid–Euler theorem. An exhaustive search by the GIMPS distributed computing project has shown that the first 48 even perfect numbers are for = 2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607, 1279, 2203, 2281, 3217, 4253, 4423, 9689, 9941, 11213, 19937, 21701, 23209, 44497, 86243, 110503, 132049, 216091, 756839, 859433, 1257787, 1398269, 2976221, 3021377, 6972593, 13466917, 20996011, 24036583, 25964951, 30402457, 32582657, 37156667, 42643801, 43112609 and 57885161 . Four higher perfect numbers have also been discovered, namely those for which = 74207281, 77232917, 82589933 and 136279841. Although it is still possible there may be others within this range, initial but exhaustive tests by GIMPS have revealed no other perfect numbers for below 109332539. , 52 Mersenne primes are known, and therefore 52 even perfect numbers (the largest of which is with 82,048,640 digits). It is not known whether there are infinitely many perfect numbers, nor whether there are infinitely many Mersenne primes. As well as having the form , each even perfect number is the -th triangular number (and hence equal to the sum of the integers from 1 to ) and the -th hexagonal number. Furthermore, each even perfect number except for 6 is the -th centered nonagonal number and is equal to the sum of the first odd cubes (odd cubes up to the cube of ): Even perfect numbers (except 6) are of the form with each resulting triangular number , , (after subtracting 1 from the perfect number and dividing the result by 9) ending in 3 or 5, the sequence starting with , , , It follows that by adding the digits of any even perfect number (except 6), then adding the digits of the resulting number, and repeating this process until a single digit (called the digital root) is obtained, always produces the number 1. For example, the digital root of 8128 is 1, because , , and . This works with all perfect numbers with odd prime and, in fact, with numbers of the form for odd integer (not necessarily prime) . Owing to their form, every even perfect number is represented in binary form as ones followed by zeros; for example: Thus every even perfect number is a pernicious number. Every even perfect number is also a practical number (cf. Related concepts). Odd perfect numbers It is unknown whether any odd perfect numbers exist, though various results have been obtained. In 1496, Jacques Lefèvre stated that Euclid's rule gives all perfect numbers, thus implying that no odd perfect number exists, but Euler himself stated: "Whether ... there are any odd perfect numbers is a most difficult question". More recently, Carl Pomerance has presented a heuristic argument suggesting that indeed no odd perfect number should exist. All perfect numbers are also harmonic divisor numbers, and it has been conjectured as well that there are no odd harmonic divisor numbers other than 1. 
Many of the properties proved about odd perfect numbers also apply to Descartes numbers, and Pace Nielsen has suggested that sufficient study of those numbers may lead to a proof that no odd perfect numbers exist. Any odd perfect number N must satisfy the following conditions: N > 101500. N is not divisible by 105. N is of the form N ≡ 1 (mod 12) or N ≡ 117 (mod 468) or N ≡ 81 (mod 324). The largest prime factor of N is greater than 108, and less than The second largest prime factor is greater than 104, and is less than . The third largest prime factor is greater than 100, and less than N has at least 101 prime factors and at least 10 distinct prime factors. If 3 does not divide N, then N has at least 12 distinct prime factors. N is of the form where: q, p1, ..., pk are distinct odd primes (Euler). q ≡ α ≡ 1 (mod 4) (Euler). The smallest prime factor of N is at most At least one of the prime powers dividing N exceeds 1062. . . . Furthermore, several minor results are known about the exponents e1, ..., ek. Not all ei ≡ 1 (mod 3). Not all ei ≡ 2 (mod 5). If all ei ≡ 1 (mod 3) or 2 (mod 5), then the smallest prime factor of N must lie between 108 and 101000. More generally, if all 2ei+1 have a prime factor in a given finite set S, then the smallest prime factor of N must be smaller than an effectively computable constant depending only on S. If (e1, ..., ek) =  (1, ..., 1, 2, ..., 2) with t ones and u twos, then . (e1, ..., ek) ≠ (1, ..., 1, 3), (1, ..., 1, 5), (1, ..., 1, 6). If , then e cannot be 3, 5, 24, 6, 8, 11, 14 or 18. and . In 1888, Sylvester stated: Minor results All even perfect numbers have a very precise form; odd perfect numbers either do not exist or are rare. There are a number of results on perfect numbers that are actually quite easy to prove but nevertheless superficially impressive; some of them also come under Richard Guy's strong law of small numbers: The only even perfect number of the form n3 + 1 is 28 . 28 is also the only even perfect number that is a sum of two positive cubes of integers . The reciprocals of the divisors of a perfect number N must add up to 2 (to get this, take the definition of a perfect number, , and divide both sides by n): For 6, we have ; For 28, we have , etc. The number of divisors of a perfect number (whether even or odd) must be even, because N cannot be a perfect square. From these two results it follows that every perfect number is an Ore's harmonic number. The even perfect numbers are not trapezoidal numbers; that is, they cannot be represented as the difference of two positive non-consecutive triangular numbers. There are only three types of non-trapezoidal numbers: even perfect numbers, powers of two, and the numbers of the form formed as the product of a Fermat prime with a power of two in a similar way to the construction of even perfect numbers from Mersenne primes. The number of perfect numbers less than n is less than , where c > 0 is a constant. In fact it is , using little-o notation. Every even perfect number ends in 6 or 28, base ten; and, with the only exception of 6, ends in 1 in base 9. Therefore, in particular the digital root of every even perfect number other than 6 is 1. The only square-free perfect number is 6. Related concepts The sum of proper divisors gives various other kinds of numbers. Numbers where the sum is less than the number itself are called deficient, and where it is greater than the number, abundant. These terms, together with perfect itself, come from Greek numerology. 
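The deficient/perfect/abundant classification just described is simple to compute directly from the aliquot sum. A short Python sketch (function names ours):

```python
def aliquot_sum(n: int) -> int:
    """Sum of the proper divisors of n (every divisor of n except n itself is at most n // 2)."""
    return sum(d for d in range(1, n // 2 + 1) if n % d == 0)

def classify(n: int) -> str:
    s = aliquot_sum(n)
    if s < n:
        return "deficient"
    if s > n:
        return "abundant"
    return "perfect"

print({n: classify(n) for n in (6, 10, 12, 28, 496)})
# {6: 'perfect', 10: 'deficient', 12: 'abundant', 28: 'perfect', 496: 'perfect'}
```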
A pair of numbers which are the sum of each other's proper divisors are called amicable, and larger cycles of numbers are called sociable. A positive integer such that every smaller positive integer is a sum of distinct divisors of it is a practical number. By definition, a perfect number is a fixed point of the restricted divisor function , and the aliquot sequence associated with a perfect number is a constant sequence. All perfect numbers are also -perfect numbers, or Granville numbers. A semiperfect number is a natural number that is equal to the sum of all or some of its proper divisors. A semiperfect number that is equal to the sum of all its proper divisors is a perfect number. Most abundant numbers are also semiperfect; abundant numbers which are not semiperfect are called weird numbers. See also Hyperperfect number Leinster group List of Mersenne primes and perfect numbers Multiply perfect number Superperfect numbers Unitary perfect number Harmonic divisor number Notes References Sources Euclid, Elements, Book IX, Proposition 36. See D.E. Joyce's website for a translation and discussion of this proposition and its proof. Further reading Nankar, M.L.: "History of perfect numbers," Ganita Bharati 1, no. 1–2 (1979), 7–8. Riele, H.J.J. "Perfect Numbers and Aliquot Sequences" in H.W. Lenstra and R. Tijdeman (eds.): Computational Methods in Number Theory, Vol. 154, Amsterdam, 1982, pp. 141–157. Riesel, H. Prime Numbers and Computer Methods for Factorisation, Birkhauser, 1985. External links David Moews: Perfect, amicable and sociable numbers Perfect numbers – History and Theory OddPerfect.org A projected distributed computing project to search for odd perfect numbers. Great Internet Mersenne Prime Search (GIMPS) Perfect Numbers, math forum at Drexel. Divisor function Integer sequences Unsolved problems in number theory Mersenne primes
Perfect number
[ "Mathematics" ]
3,089
[ "Sequences and series", "Unsolved problems in mathematics", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Perfect numbers", "Mathematical objects", "Unsolved problems in number theory", "Combinatorics", "Mathematical problems", "Numbers", "Number theory" ]
23,678
https://en.wikipedia.org/wiki/Panspermia
Panspermia () is the hypothesis that life exists throughout the Universe, distributed by space dust, meteoroids, asteroids, comets, and planetoids, as well as by spacecraft carrying unintended contamination by microorganisms, known as directed panspermia. The theory argues that life did not originate on Earth, but instead evolved somewhere else and seeded life as we know it. Panspermia comes in many forms, such as radiopanspermia, lithopanspermia, and directed panspermia. Regardless of its form, the theories generally propose that microbes able to survive in outer space (such as certain types of bacteria or plant spores) can become trapped in debris ejected into space after collisions between planets and small solar system bodies that harbor life. This debris containing the lifeforms is then transported by meteors between bodies in a solar system, or even across solar systems within a galaxy. In this way, panspermia studies concentrate not on how life began but on methods that may distribute it within the Universe. This point is often used as a criticism of the theory. Panspermia is a fringe theory with little support amongst mainstream scientists. Critics argue that it does not answer the question of the origin of life but merely places it on another celestial body. It is further criticized because it cannot be tested experimentally. Historically, disputes over the merit of this theory centered on whether life is ubiquitous or emergent throughout the Universe. The theory maintains support today, with some work being done to develop mathematical treatments of how life might migrate naturally throughout the Universe. Its long history lends itself to extensive speculation and hoaxes that have arisen from meteoritic events. In contrast, pseudo-panspermia is the well-supported hypothesis that many of the small organic molecules used for life originated in space, and were distributed to planetary surfaces. History Panspermia has a long history, dating back to the 5th century BCE and the natural philosopher Anaxagoras. Classicists came to agree that Anaxagoras maintained the Universe (or Cosmos) was full of life, and that life on Earth started from the fall of these extra-terrestrial seeds. Panspermia as it is known today, however, is not identical to this original theory. The name, as applied to this theory, was only first coined in 1908 by Svante Arrhenius, a Swedish scientist. Prior to this, since around the 1860s, many prominent scientists were becoming interested in the theory, for example Sir Fred Hoyle, and Chandra Wickramasinghe. In the 1860s, there were three scientific developments that began to bring the focus of the scientific community to the problem of the origin of life. Firstly, the Kant-Laplace Nebular theory of solar system and planetary formation was gaining favor, and implied that when the Earth first formed, the surface conditions would have been inhospitable to life as we know it. This meant that life could not have evolved parallel with the Earth, and must have evolved at a later date, without biological precursors. Secondly, Charles Darwin's famous theory of evolution implied some elusive origin, because in order for something to evolve, it must start somewhere. In his Origin of Species, Darwin was unable or unwilling to touch on this issue. 
Third and finally, Louis Pasteur and John Tyndall experimentally disproved the (now superseded) theory of spontaneous generation, which suggested that life was constantly evolving from non-living matter and did not have a common ancestor, as suggested by Darwin's theory of evolution. Altogether, these three developments in science presented the wider scientific community with a seemingly paradoxical situation regarding the origin of life: life must have evolved from non-biological precursors after the Earth was formed, and yet spontaneous generation as a theory had been experimentally disproved. From here, is where the study of the origin of life branched. Those who accepted Pasteur's rejection of spontaneous generation began to develop the theory that under (unknown) conditions on a primitive Earth, life must have gradually evolved from organic material. This theory became known as abiogenesis, and is the currently accepted one. On the other side of this are those scientists of the time who rejected Pasteur's results and instead supported the idea that life on Earth came from existing life. This necessarily requires that life has always existed somewhere on some planet, and that it has a mechanism of transferring between planets. Thus, the modern treatment of panspermia began in earnest. Lord Kelvin, in a presentation to The British Association for the Advancement of Science in 1871, proposed the idea that similarly to how seeds can be transferred through the air by winds, so can life be brought to Earth by the infall of a life-bearing meteorite. He further proposed the idea that life can only come from life, and that this principle is invariant under philosophical uniformitarianism, similar to how matter can neither be created nor destroyed. This argument was heavily criticized because of its boldness, and additionally due to technical objections from the wider community. In particular, Johann Zollner from Germany argued against Kelvin by saying that organisms carried in meteorites to Earth would not survive the descent through the atmosphere due to friction heating. The arguments went back and forth until Svante Arrhenius gave the theory its modern treatment and designation. Arrhenius argued against abiogenesis on the basis that it had no experimental foundation at the time, and believed that life had always existed somewhere in the Universe. He focused his efforts of developing the mechanism(s) by which this pervasive life may be transferred through the Universe. At this time, it was recently discovered that solar radiation can exert pressure, and thus force, on matter. Arrhenius thus concluded that it is possible that very small organisms such as bacterial spores could be moved around due to this radiation pressure. At this point, panspermia as a theory now had a potentially viable transport mechanism, as well as a vehicle for carrying life from planet to planet. The theory still faced criticism mostly due to doubts about how long spores would actually survive under the conditions of their transport from one planet, through space, to another. Despite all the emphasis placed on trying to establish the scientific legitimacy of this theory, it still lacked testability; that was and still is a serious problem the theory has yet to overcome. Support for the theory persisted, however, with Fred Hoyle and Chandra Wickramasinghe using two reasons for why an extra-terrestrial origin of life might be preferred. 
First is that required conditions for the origin of life may have been more favorable somewhere other than Earth, and second that life on Earth exhibits properties that are not accounted for by assuming an endogenic origin. Hoyle studied spectra of interstellar dust, and came to the conclusion that space contained large amounts of organics, which he suggested were the building blocks of the more complex chemical structures. Critically, Hoyle argued that this chemical evolution was unlikely to have taken place on a prebiotic Earth, and instead the most likely candidate is a comet. Furthermore, Hoyle and Wickramasinghe concluded that the evolution of life requires a large increase in genetic information and diversity, which might have resulted from the influx of viral material from space via comets. Hoyle reported (in a lecture at Oxford on January 16, 1978) a pattern of coincidence between the arrival of major epidemics and the occasions of close encounters with comets, which lead Hoyle to suggest that the epidemics were a direct result of material raining down from these comets. This claim in particular garnered criticism from biologists. Since the 1970s, a new era of planetary exploration meant that data could be used to test panspermia and potentially transform it from conjecture to a testable theory. Though it has yet to be tested, panspermia is still explored today in some mathematical treatments, and as its long history suggests, the appeal of the theory has stood the test of time. Overview Core requirements Panspermia requires: that life has always existed in the Universe somewhere that organic molecules originated in space (perhaps to be distributed to Earth) that life originated from these molecules, extraterrestrially that this extraterrestrial life was transported to Earth. The creation and distribution of organic molecules from space is now uncontroversial; it is known as pseudo-panspermia. The jump from organic materials to life originating from space, however, is hypothetical and currently untestable. Transport vessels Bacterial spores and plant seeds are two common proposed vessels for panspermia. According to the theory, they could be encased in a meteorite and transported to another planet from their origin, subsequently descend through the atmosphere and populate the surface with life (see lithopanspermia below). This naturally requires that these spores and seeds have formed somewhere else, maybe even in space in the case of how panspermia deals with bacteria. Understanding of planetary formation theory and meteorites has led to the idea that some rocky bodies originating from undifferentiated parent bodies could be able to generate local conditions conducive to life. Hypothetically, internal heating from radiogenic isotopes could melt ice to provide water as well as energy. In fact, some meteorites have been found to show signs of aqueous alteration which may indicate that this process has taken place. Given that there are such large numbers of these bodies found within the Solar System, an argument can be made that they each provide a potential site for life to develop. A collision occurring in the asteroid belt could alter the orbit of one such site, and eventually deliver it to Earth. Plant seeds can be an alternative transport vessel. Some plants produce seeds that are resistant to the conditions of space, which have been shown to lie dormant in extreme cold, vacuum, and resist short wavelength UV radiation. 
They are not typically proposed to have originated on space, but on another planet. Theoretically, even if a plant is partially damaged during its travel in space, the pieces could still seed life in a sterile environment. Sterility of the environment is relevant because it is unclear if the novel plant could out-compete existing life forms. This idea is based on previous evidence showing that cellular reconstruction can occur from cytoplasms released from damaged algae. Furthermore, plant cells contain obligate endosymbionts, which could be released into a new environment. Though both plant seeds and bacterial spores have been proposed as potentially viable vehicles, their ability to not only survive in space for the required time, but also survive atmospheric entry is debated. Space probes may be a viable transport mechanism for interplanetary cross-pollination within the Solar System. Space agencies have implemented planetary protection procedures to reduce the risk of planetary contamination, but microorganisms such as Tersicoccus phoenicis may be resistant to spacecraft assembly cleaning. Variations of panspermia theory Panspermia is generally subdivided into two classes: either transfer occurs between planets of the same system (interplanetary) or between stellar systems (interstellar). Further classifications are based on different proposed transport mechanisms, as follows. Radiopanspermia In 1903, Svante Arrhenius proposed radiopanspermia, the theory that singular microscopic forms of life can be propagated in space, driven by the radiation pressure from stars. This is the mechanism by which light can exert a force on matter. Arrhenius argued that particles at a critical size below 1.5 μm would be propelled at high speed by radiation pressure of a star. However, because its effectiveness decreases with increasing size of the particle, this mechanism holds for very tiny particles only, such as single bacterial spores. Counterarguments The main criticism of radiopanspermia came from Iosif Shklovsky and Carl Sagan, who cited evidence for the lethal action of space radiation (UV and X-rays) in the cosmos. If enough of these microorganisms are ejected into space, some may rain down on a planet in a new star system after 106 years wandering interstellar space. There would be enormous death rates of the organisms due to radiation and the generally hostile conditions of space, but nonetheless this theory is considered potentially viable by some. Data gathered by the orbital experiments ERA, BIOPAN, EXOSTACK and EXPOSE showed that isolated spores, including those of B. subtilis, were rapidly killed if exposed to the full space environment for merely a few seconds, but if shielded against solar UV, the spores were capable of surviving in space for up to six years while embedded in clay or meteorite powder (artificial meteorites). Spores would therefore need to be heavily protected against UV radiation: exposure of unprotected DNA to solar UV and cosmic ionizing radiation would break it up into its constituent bases. Rocks at least 1 meter in diameter are required to effectively shield resistant microorganisms, such as bacterial spores against galactic cosmic radiation. Additionally, exposing DNA to the ultrahigh vacuum of space alone is sufficient to cause DNA damage, so the transport of unprotected DNA or RNA during interplanetary flights powered solely by light pressure is extremely unlikely. 
The feasibility of other means of transport for the more massive shielded spores into the outer Solar System—for example, through gravitational capture by comets—is unknown. There is little evidence in full support of the radiopanspermia hypothesis. Lithopanspermia This transport mechanism generally arose following the discovery of exoplanets, and the sudden availability of data following the growth of planetary science. Lithopanspermia is the proposed transfer of organisms in rocks from one planet to another through planetary objects such as in comets or asteroids, and remains speculative. A variant would be for organisms to travel between star systems on nomadic exoplanets or exomoons. Although there is no concrete evidence that lithopanspermia has occurred in the Solar System, the various stages have become amenable to experimental testing. Planetary ejection – For lithopanspermia to occur, microorganisms must first survive ejection from a planetary surface (assuming they do not form on meteorites, as suggested in), which involves extreme forces of acceleration and shock with associated temperature rises. Hypothetical values of shock pressures experienced by ejected rocks are obtained from Martian meteorites, which suggest pressures of approximately 5 to 55 GPa, acceleration of 3 Mm/s2, jerk of 6 Gm/s3 and post-shock temperature increases of about 1 K to 1000 K. Though these conditions are extreme, some organisms appear able to survive them. Survival in transit – Now in space, the microorganisms have to make it to their next destination for lithopanspermia to be successful. The survival of microorganisms has been studied extensively using both simulated facilities and in low Earth orbit. A large number of microorganisms have been selected for exposure experiments, both human-borne microbes (significant for future crewed missions) and extremophiles (significant for determining the physiological requirements of survival in space). Bacteria in particular can exhibit a survival mechanism whereby a colony generates a biofilm that enhances its protection against UV radiation. Atmospheric entry – The final stage of lithopanspermia, is re-entry onto a viable planet via its atmosphere. This requires that the organisms are able to further survive potential atmospheric ablation. Tests of this stage could use sounding rockets and orbital vehicles. B. subtilis spores inoculated onto granite domes were twice subjected to hypervelocity atmospheric transit by launch to a ~120 km altitude on an Orion two-stage rocket. The spores survived on the sides of the rock, but not on the forward-facing surface that reached 145 °C. As photosynthetic organisms must be close to the surface of a rock to obtain sufficient light energy, atmospheric transit might act as a filter against them by ablating the surface layers of the rock. Although cyanobacteria can survive the desiccating, freezing conditions of space, the STONE experiment showed that they cannot survive atmospheric entry. Small non-photosynthetic organisms deep within rocks might survive the exit and entry process, including impact survival. Lithopanspermia, described by the mechanism above can exist as either interplanetary or interstellar. It is possible to quantify panspermia models and treat them as viable mathematical theories. For example, a recent study of planets of the Trappist-1 planetary system, presents a model for estimating the probability of interplanetary panspermia, similar to studies in the past done about Earth-Mars panspermia. 
This study found that lithopanspermia is 'orders of magnitude more likely to occur' in the Trappist-1 system as opposed to the Earth-to-Mars scenario. According to their analysis, the increase in probability of lithopanspermia is linked to an increased probability of abiogenesis amongst the Trappist-1 planets. In a way, these modern treatments attempt to keep panspermia as a contributing factor to abiogenesis, as opposed to a theory that directly opposes it. In line with this, it is suggested that if biosignatures could be detected on two (or more) adjacent planets, that would provide evidence that panspermia is a potentially required mechanism for abiogenesis. As of yet, no such discovery has been made. Lithopanspermia has also been hypothesized to operate between stellar systems. One mathematical analysis, estimating the total number of rocky or icy objects that could potentially be captured by planetary systems within the Milky Way, has concluded that lithopanspermia is not necessarily bound to a single stellar system. This not only requires these objects have life in the first place, but also that it survives the journey. Thus intragalactic lithopanspermia is heavily dependent on the survival lifetime of organisms, as well as the velocity of the transporter. Again, there is no evidence that such a process has, or can occur. Counterarguments The complex nature of the requirements for lithopanspermia, as well as evidence against the longevity of bacteria being able to survive under these conditions, makes lithopanspermia a difficult theory to get behind. That being said, impact events did happen a lot in the early stages of the solar system formation, and still happen to a certain degree today within the asteroid belt. Directed panspermia First proposed in 1972 by Nobel prize winner Francis Crick, along with Leslie Orgel, directed panspermia is the theory that life was deliberately brought to Earth by a higher intelligent being from another planet. In light of the evidence at the time that it seems unlikely for an organism to have been delivered to Earth via radiopanspermia or lithopanspermia, Crick and Orgel proposed this as an alternative theory, though it is worth noting that Orgel was less serious about the claim. They do acknowledge that the scientific evidence is lacking, but discuss what kinds of evidence would be needed to support the theory. In a similar vein, Thomas Gold suggested that life on Earth might have originated accidentally from a pile of 'Cosmic Garbage' dumped on Earth long ago by extraterrestrial beings. These theories are often considered more science fiction, however, Crick and Orgel use the principle of cosmic reversibility to argue for it. This principle is based on the fact that if our species is capable of infecting a sterile planet, then what is preventing another technological society from having done that to Earth in the past? They concluded that it would be possible to deliberately infect another planet in the foreseeable future. As far as evidence goes, Crick and Orgel argued that given the universality of the genetic code, it follows that an infective theory for life is viable. Directed panspermia could, in theory, be demonstrated by finding a distinctive 'signature' message had been deliberately implanted into either the genome or the genetic code of the first microorganisms by our hypothetical progenitor, some 4 billion years ago. 
However, there is no known mechanism that could prevent mutation and natural selection from removing such a message over long periods of time. Counterarguments In 1972, both abiogenesis and panspermia were seen as viable theories by different experts. Given this, Crick and Orgel argued that experimental evidence required to validate one theory over the other was lacking. That being said, evidence strongly in favor of abiogenesis over panspermia exists today, whereas evidence for panspermia, particularly directed panspermia, is decidedly lacking. Origination and distribution of organic molecules: Pseudo-panspermia Pseudo-panspermia is the well-supported hypothesis that many of the small organic molecules used for life originated in space, and were distributed to planetary surfaces. Life then emerged on Earth, and perhaps on other planets, by the processes of abiogenesis. Evidence for pseudo-panspermia includes the discovery of organic compounds such as sugars, amino acids, and nucleobases in meteorites and other extraterrestrial bodies, and the formation of similar compounds in the laboratory under outer space conditions. A prebiotic polyester system has been explored as an example. Hoaxes & speculations Orgueil meteorite On May 14, 1864, twenty fragments from a meteorite crashed into the French city of Orgueil. A separate fragment of the Orgueil meteorite (kept in a sealed glass jar since its discovery) was found in 1965 to have a seed capsule embedded in it, while the original glassy layer on the outside remained undisturbed. Despite great initial excitement, the seed was found to be that of a European Juncaceae or rush plant that had been glued into the fragment and camouflaged using coal dust. The outer "fusion layer" was in fact glue. While the perpetrator of this hoax is unknown, it is thought that they sought to influence the 19th-century debate on spontaneous generation—rather than panspermia—by demonstrating the transformation of inorganic to biological matter. Oumuamua In 2017, the Pan-STARRS telescope in Hawaii detected a reddish object up to 400 meters in length. Analysis of its orbit provided evidence that it was an interstellar object, originating from outside our Solar System. From this Avi Loeb speculated that the object was instead an artifact from an alien civilization and could potentially be evidence for directed panspermia. This claim has been considered unlikely by other authors. See also References Further reading External links Cox, Brian. "Are we thinking about alien life all wrong?". BBC Ideas, video made by Pomona Pictures, 29 November 2021. Loeb, Abraham. "Did Life from Earth Escape the Solar System Eons Ago?". Scientific American, 4 November 2019 Loeb, Abraham. "Noah's Spaceship" Scientific American, 29 November 2020 Astrobiology Origin of life Biological hypotheses Prebiotic chemistry Fringe science 1900s neologisms
Panspermia
[ "Chemistry", "Astronomy", "Biology" ]
4,676
[ "Origin of life", "Panspermia", "Speculative evolution", "Prebiotic chemistry", "Astrobiology", "Biological hypotheses", "Astronomical sub-disciplines" ]
23,703
https://en.wikipedia.org/wiki/Potential%20energy
In physics, potential energy is the energy held by an object because of its position relative to other objects, stresses within itself, its electric charge, or other factors. The term potential energy was introduced by the 19th-century Scottish engineer and physicist William Rankine, although it has links to the ancient Greek philosopher Aristotle's concept of potentiality. Common types of potential energy include the gravitational potential energy of an object, the elastic potential energy of a deformed spring, and the electric potential energy of an electric charge in an electric field. The unit for energy in the International System of Units (SI) is the joule (symbol J). Potential energy is associated with forces that act on a body in a way that the total work done by these forces on the body depends only on the initial and final positions of the body in space. These forces, whose total work is path independent, are called conservative forces. If the force acting on a body varies over space, then one has a force field; such a field is described by vectors at every point in space, which is in-turn called a vector field. A conservative vector field can be simply expressed as the gradient of a certain scalar function, called a scalar potential. The potential energy is related to, and can be obtained from, this potential function. Overview There are various types of potential energy, each associated with a particular type of force. For example, the work of an elastic force is called elastic potential energy; work of the gravitational force is called gravitational potential energy; work of the Coulomb force is called electric potential energy; work of the strong nuclear force or weak nuclear force acting on the baryon charge is called nuclear potential energy; work of intermolecular forces is called intermolecular potential energy. Chemical potential energy, such as the energy stored in fossil fuels, is the work of the Coulomb force during rearrangement of configurations of electrons and nuclei in atoms and molecules. Thermal energy usually has two components: the kinetic energy of random motions of particles and the potential energy of their configuration. Forces derivable from a potential are also called conservative forces. The work done by a conservative force is where is the change in the potential energy associated with the force. The negative sign provides the convention that work done against a force field increases potential energy, while work done by the force field decreases potential energy. Common notations for potential energy are PE, U, V, and Ep. Potential energy is the energy by virtue of an object's position relative to other objects. Potential energy is often associated with restoring forces such as a spring or the force of gravity. The action of stretching a spring or lifting a mass is performed by an external force that works against the force field of the potential. This work is stored in the force field, which is said to be stored as potential energy. If the external force is removed the force field acts on the body to perform the work as it moves the body back to the initial position, reducing the stretch of the spring or causing a body to fall. Consider a ball whose mass is dropped from height . The acceleration of free fall is approximately constant, so the weight force of the ball is constant. 
The product of force and displacement gives the work done, which is equal to the gravitational potential energy, thus The more formal definition is that potential energy is the energy difference between the energy of an object in a given position and its energy at a reference position. History From around 1840 scientists sought to define and understand energy and work. The term "potential energy" was coined by William Rankine a Scottish engineer and physicist in 1853 as part of a specific effort to develop terminology. He chose the term as part of the pair "actual" vs "potential" going back to work by Aristotle. In his 1867 discussion of the same topic Rankine describes potential energy as 'energy of configuration' in contrast to actual energy as 'energy of activity'. Also in 1867, William Thomson introduced "kinetic energy" as the opposite of "potential energy", asserting that all actual energy took the form of mv2. Once this hypothesis became widely accepted, the term "actual energy" gradually faded. Work and potential energy Potential energy is closely linked with forces. If the work done by a force on a body that moves from A to B does not depend on the path between these points (if the work is done by a conservative force), then the work of this force measured from A assigns a scalar value to every other point in space and defines a scalar potential field. In this case, the force can be defined as the negative of the vector gradient of the potential field. If the work for an applied force is independent of the path, then the work done by the force is evaluated from the start to the end of the trajectory of the point of application. This means that there is a function U(x), called a "potential", that can be evaluated at the two points xA and xB to obtain the work over any trajectory between these two points. It is tradition to define this function with a negative sign so that positive work is a reduction in the potential, that is where C is the trajectory taken from A to B. Because the work done is independent of the path taken, then this expression is true for any trajectory, C, from A to B. The function U(x) is called the potential energy associated with the applied force. Examples of forces that have potential energies are gravity and spring forces. Derivable from a potential In this section the relationship between work and potential energy is presented in more detail. The line integral that defines work along curve C takes a special form if the force F is related to a scalar field U′(x) so that This means that the units of U′ must be this case, work along the curve is given by which can be evaluated using the gradient theorem to obtain This shows that when forces are derivable from a scalar field, the work of those forces along a curve C is computed by evaluating the scalar field at the start point A and the end point B of the curve. This means the work integral does not depend on the path between A and B and is said to be independent of the path. Potential energy is traditionally defined as the negative of this scalar field so that work by the force field decreases potential energy, that is In this case, the application of the del operator to the work function yields, and the force F is said to be "derivable from a potential". This also necessarily implies that F must be a conservative vector field. The potential U defines a force F at every point x in space, so the set of forces is called a force field. 
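As a quick illustration of this section, the following Python sketch is not from the article: the specific potential (a uniform gravitational term plus a linear spring term) and all constants are assumed for the example. It estimates the force as the negative gradient of a potential by finite differences and then integrates F·dr along two different curves with the same endpoints; both line integrals come out equal to U(A) - U(B), showing the path independence described above.

```python
import numpy as np

def U(pos):
    """Example potential: uniform gravity (z up) plus a linear spring along x.
    The mass, field strength, and spring constant are assumed values."""
    m, g, k = 2.0, 9.8, 50.0          # kg, m/s^2, N/m (illustrative)
    x, y, z = pos
    return m * g * z + 0.5 * k * x**2

def force(pos, h=1e-6):
    """F = -grad U, estimated by central finite differences."""
    pos = np.asarray(pos, dtype=float)
    F = np.zeros(3)
    for i in range(3):
        dp = np.zeros(3)
        dp[i] = h
        F[i] = -(U(pos + dp) - U(pos - dp)) / (2 * h)
    return F

def work_along(path):
    """Approximate line integral of F . dr along a sampled path."""
    W = 0.0
    for a, b in zip(path[:-1], path[1:]):
        mid = 0.5 * (a + b)           # evaluate the force at the segment midpoint
        W += force(mid) @ (b - a)
    return W

A = np.array([0.0, 0.0, 0.0])
B = np.array([1.0, 2.0, 3.0])
t = np.linspace(0.0, 1.0, 2001)[:, None]
straight = A + t * (B - A)                                            # direct segment A -> B
curved = straight + np.sin(np.pi * t) * np.array([0.5, -1.0, 0.25])   # a detour with the same endpoints

for path in (straight, curved):
    # Both prints should show essentially the same two numbers: W = U(A) - U(B).
    print(work_along(path), U(A) - U(B))
```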
Computing potential energy Given a force field F(x), evaluation of the work integral using the gradient theorem can be used to find the scalar function associated with potential energy. This is done by introducing a parameterized curve from to , and computing, For the force field F, let , then the gradient theorem yields, The power applied to a body by a force field is obtained from the gradient of the work, or potential, in the direction of the velocity v of the point of application, that is Examples of work that can be computed from potential functions are gravity and spring forces. Potential energy for near-Earth gravity For small height changes, gravitational potential energy can be computed using where m is the mass in kilograms, g is the local gravitational field (9.8 metres per second squared on Earth), h is the height above a reference level in metres, and U is the energy in joules. In classical physics, gravity exerts a constant downward force on the center of mass of a body moving near the surface of the Earth. The work of gravity on a body moving along a trajectory , such as the track of a roller coaster is calculated using its velocity, , to obtain where the integral of the vertical component of velocity is the vertical distance. The work of gravity depends only on the vertical movement of the curve . Potential energy for a linear spring A horizontal spring exerts a force that is proportional to its deformation in the axial or x direction. The work of this spring on a body moving along the space curve , is calculated using its velocity, , to obtain For convenience, consider contact with the spring occurs at , then the integral of the product of the distance x and the x-velocity, xvx, is x2/2. The function is called the potential energy of a linear spring. Elastic potential energy is the potential energy of an elastic object (for example a bow or a catapult) that is deformed under tension or compression (or stressed in formal terminology). It arises as a consequence of a force that tries to restore the object to its original shape, which is most often the electromagnetic force between the atoms and molecules that constitute the object. If the stretch is released, the energy is transformed into kinetic energy. Potential energy for gravitational forces between two bodies The gravitational potential function, also known as gravitational potential energy, is: The negative sign follows the convention that work is gained from a loss of potential energy. Derivation The gravitational force between two bodies of mass M and m separated by a distance r is given by Newton's law of universal gravitation where is a vector of length 1 pointing from M to m and G is the gravitational constant. Let the mass m move at the velocity then the work of gravity on this mass as it moves from position to is given by The position and velocity of the mass m are given by where er and et are the radial and tangential unit vectors directed relative to the vector from M to m. Use this to simplify the formula for work of gravity to, This calculation uses the fact that Potential energy for electrostatic forces between two bodies The electrostatic force exerted by a charge Q on another charge q separated by a distance r is given by Coulomb's Law where is a vector of length 1 pointing from Q to q and ε0 is the vacuum permittivity. 
The work W required to move q from A to any point B in the electrostatic force field is given by the potential function Reference level The potential energy is a function of the state a system is in, and is defined relative to that for a particular state. This reference state is not always a real state; it may also be a limit, such as with the distances between all bodies tending to infinity, provided that the energy involved in tending to that limit is finite, such as in the case of inverse-square law forces. Any arbitrary reference state could be used; therefore it can be chosen based on convenience. Typically the potential energy of a system depends on the relative positions of its components only, so the reference state can also be expressed in terms of relative positions. Gravitational potential energy Gravitational energy is the potential energy associated with gravitational force, as work is required to elevate objects against Earth's gravity. The potential energy due to elevated positions is called gravitational potential energy, and is evidenced by water in an elevated reservoir or kept behind a dam. If an object falls from one point to another point inside a gravitational field, the force of gravity will do positive work on the object, and the gravitational potential energy will decrease by the same amount. Consider a book placed on top of a table. As the book is raised from the floor to the table, some external force works against the gravitational force. If the book falls back to the floor, the "falling" energy the book receives is provided by the gravitational force. Thus, if the book falls off the table, this potential energy goes to accelerate the mass of the book and is converted into kinetic energy. When the book hits the floor this kinetic energy is converted into heat, deformation, and sound by the impact. The factors that affect an object's gravitational potential energy are its height relative to some reference point, its mass, and the strength of the gravitational field it is in. Thus, a book lying on a table has less gravitational potential energy than the same book on top of a taller cupboard and less gravitational potential energy than a heavier book lying on the same table. An object at a certain height above the Moon's surface has less gravitational potential energy than at the same height above the Earth's surface because the Moon's gravity is weaker. "Height" in the common sense of the term cannot be used for gravitational potential energy calculations when gravity is not assumed to be a constant. The following sections provide more detail. Local approximation The strength of a gravitational field varies with location. However, when the change of distance is small in relation to the distances from the center of the source of the gravitational field, this variation in field strength is negligible and we can assume that the force of gravity on a particular object is constant. Near the surface of the Earth, for example, we assume that the acceleration due to gravity is a constant (standard gravity). In this case, a simple expression for gravitational potential energy can be derived using the equation for work, and the equation The amount of gravitational potential energy held by an elevated object is equal to the work done against gravity in lifting it. The work done equals the force required to move it upward multiplied with the vertical distance it is moved (remember ). 
The upward force required while moving at a constant velocity is equal to the weight, , of an object, so the work done in lifting it through a height is the product . Thus, when accounting only for mass, gravity, and altitude, the equation is: where is the potential energy of the object relative to its being on the Earth's surface, is the mass of the object, is the acceleration due to gravity, and h is the altitude of the object. Hence, the potential difference is General formula However, over large variations in distance, the approximation that is constant is no longer valid, and we have to use calculus and the general mathematical definition of work to determine gravitational potential energy. For the computation of the potential energy, we can integrate the gravitational force, whose magnitude is given by Newton's law of gravitation, with respect to the distance between the two bodies. Using that definition, the gravitational potential energy of a system of masses and at a distance using the Newtonian constant of gravitation is where is an arbitrary constant dependent on the choice of datum from which potential is measured. Choosing the convention that (i.e. in relation to a point at infinity) makes calculations simpler, albeit at the cost of making negative; for why this is physically reasonable, see below. Given this formula for , the total potential energy of a system of bodies is found by summing, for all pairs of two bodies, the potential energy of the system of those two bodies. Considering the system of bodies as the combined set of small particles the bodies consist of, and applying the previous on the particle level we get the negative gravitational binding energy. This potential energy is more strongly negative than the total potential energy of the system of bodies as such since it also includes the negative gravitational binding energy of each body. The potential energy of the system of bodies as such is the negative of the energy needed to separate the bodies from each other to infinity, while the gravitational binding energy is the energy needed to separate all particles from each other to infinity. therefore, Negative gravitational energy As with all potential energies, only differences in gravitational potential energy matter for most physical purposes, and the choice of zero point is arbitrary. Given that there is no reasonable criterion for preferring one particular finite r over another, there seem to be only two reasonable choices for the distance at which becomes zero: and . The choice of at infinity may seem peculiar, and the consequence that gravitational energy is always negative may seem counterintuitive, but this choice allows gravitational potential energy values to be finite, albeit negative. The singularity at in the formula for gravitational potential energy means that the only other apparently reasonable alternative choice of convention, with for , would result in potential energy being positive, but infinitely large for all nonzero values of , and would make calculations involving sums or differences of potential energies beyond what is possible with the real number system. Since physicists abhor infinities in their calculations, and is always non-zero in practice, the choice of at infinity is by far the more preferable choice, even if the idea of negative energy in a gravity well appears to be peculiar at first. 
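The following short Python sketch is illustrative only: the Earth mass, radius, and test heights are assumed round values rather than figures from the article. It compares the change in the general gravitational potential energy -GMm/r with the near-surface approximation mgh, showing close agreement when the height change is small relative to the distance from the centre of the Earth and a growing discrepancy otherwise.

```python
# Compare U(r) = -G*M*m/r (with U -> 0 at infinity) against delta_U = m*g*h.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.97e24            # mass of Earth, kg (approximate, assumed here)
R = 6.371e6            # radius of Earth, m (approximate, assumed here)
m = 1.0                # test mass, kg
g = G * M / R**2       # local gravitational field at the surface, about 9.8 m/s^2

def U_general(r):
    """General gravitational potential energy of the two-body system."""
    return -G * M * m / r

for h in (1.0, 1e3, 1e5, 1e7):                    # heights above the surface, m
    exact = U_general(R + h) - U_general(R)       # exact change in potential energy
    approx = m * g * h                            # constant-g approximation
    print(f"h = {h:>10.0f} m   exact = {exact:.4e} J   mgh = {approx:.4e} J")
```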
The negative value for gravitational energy also has deeper implications that make it seem more reasonable in cosmological calculations where the total energy of the universe can meaningfully be considered; see inflation theory for more on this. Uses Gravitational potential energy has a number of practical uses, notably the generation of pumped-storage hydroelectricity. For example, in Dinorwig, Wales, there are two lakes, one at a higher elevation than the other. At times when surplus electricity is not required (and so is comparatively cheap), water is pumped up to the higher lake, thus converting the electrical energy (running the pump) to gravitational potential energy. At times of peak demand for electricity, the water flows back down through electrical generator turbines, converting the potential energy into kinetic energy and then back into electricity. The process is not completely efficient and some of the original energy from the surplus electricity is in fact lost to friction. Gravitational potential energy is also used to power clocks in which falling weights operate the mechanism. It is also used by counterweights for lifting up an elevator, crane, or sash window. Roller coasters are an entertaining way to utilize potential energy – chains are used to move a car up an incline (building up gravitational potential energy), to then have that energy converted into kinetic energy as it falls. Another practical use is utilizing gravitational potential energy to descend (perhaps coast) downhill in transportation such as the descent of an automobile, truck, railroad train, bicycle, airplane, or fluid in a pipeline. In some cases the kinetic energy obtained from the potential energy of descent may be used to start ascending the next grade such as what happens when a road is undulating and has frequent dips. The commercialization of stored energy (in the form of rail cars raised to higher elevations) that is then converted to electrical energy when needed by an electrical grid, is being undertaken in the United States in a system called Advanced Rail Energy Storage (ARES). Chemical potential energy Chemical potential energy is a form of potential energy related to the structural arrangement of atoms or molecules. This arrangement may be the result of chemical bonds within a molecule or otherwise. Chemical energy of a chemical substance can be transformed to other forms of energy by a chemical reaction. As an example, when a fuel is burned the chemical energy is converted to heat, same is the case with digestion of food metabolized in a biological organism. Green plants transform solar energy to chemical energy through the process known as photosynthesis, and electrical energy can be converted to chemical energy through electrochemical reactions. The similar term chemical potential is used to indicate the potential of a substance to undergo a change of configuration, be it in the form of a chemical reaction, spatial transport, particle exchange with a reservoir, etc. Electric potential energy An object can have potential energy by virtue of its electric charge and several forces related to their presence. There are two main types of this kind of potential energy: electrostatic potential energy, electrodynamic potential energy (also sometimes called magnetic potential energy). 
Electrostatic potential energy Electrostatic potential energy between two bodies in space is obtained from the force exerted by a charge Q on another charge q which is given by where is a vector of length 1 pointing from Q to q and ε0 is the vacuum permittivity. If the electric charge of an object can be assumed to be at rest, then it has potential energy due to its position relative to other charged objects. The electrostatic potential energy is the energy of an electrically charged particle (at rest) in an electric field. It is defined as the work that must be done to move it from an infinite distance away to its present location, adjusted for non-electrical forces on the object. This energy will generally be non-zero if there is another electrically charged object nearby. The work W required to move q from A to any point B in the electrostatic force field is given by typically given in J for Joules. A related quantity called electric potential (commonly denoted with a V for voltage) is equal to the electric potential energy per unit charge. Magnetic potential energy The energy of a magnetic moment in an externally produced magnetic B-field has potential energy The magnetization in a field is where the integral can be over all space or, equivalently, where is nonzero. Magnetic potential energy is the form of energy related not only to the distance between magnetic materials, but also to the orientation, or alignment, of those materials within the field. For example, the needle of a compass has the lowest magnetic potential energy when it is aligned with the north and south poles of the Earth's magnetic field. If the needle is moved by an outside force, torque is exerted on the magnetic dipole of the needle by the Earth's magnetic field, causing it to move back into alignment. The magnetic potential energy of the needle is highest when its field is in the same direction as the Earth's magnetic field. Two magnets will have potential energy in relation to each other and the distance between them, but this also depends on their orientation. If the opposite poles are held apart, the potential energy will be higher the further they are apart and lower the closer they are. Conversely, like poles will have the highest potential energy when forced together, and the lowest when they spring apart. Nuclear potential energy Nuclear potential energy is the potential energy of the particles inside an atomic nucleus. The nuclear particles are bound together by the strong nuclear force. Weak nuclear forces provide the potential energy for certain kinds of radioactive decay, such as beta decay. Nuclear particles like protons and neutrons are not destroyed in fission and fusion processes, but collections of them can have less mass than if they were individually free, in which case this mass difference can be liberated as heat and radiation in nuclear reactions (the heat and radiation have the missing mass, but it often escapes from the system, where it is not measured). The energy from the Sun is an example of this form of energy conversion. In the Sun, the process of hydrogen fusion converts about 4 million tonnes of solar matter per second into electromagnetic energy, which is radiated into space. Forces and potential energy Potential energy is closely linked with forces. 
If the work done by a force on a body that moves from A to B does not depend on the path between these points, then the work of this force measured from A assigns a scalar value to every other point in space and defines a scalar potential field. In this case, the force can be defined as the negative of the vector gradient of the potential field. For example, gravity is a conservative force. The associated potential is the gravitational potential, often denoted by or , corresponding to the energy per unit mass as a function of position. The gravitational potential energy of two particles of mass M and m separated by a distance r is The gravitational potential (specific energy) of the two bodies is where is the reduced mass. The work done against gravity by moving an infinitesimal mass from point A with to point B with is and the work done going back the other way is so that the total work done in moving from A to B and returning to A is If the potential is redefined at A to be and the potential at B to be , where is a constant (i.e. can be any number, positive or negative, but it must be the same at A as it is at B) then the work done going from A to B is as before. In practical terms, this means that one can set the zero of and anywhere one likes. One may set it to be zero at the surface of the Earth, or may find it more convenient to set zero at infinity (as in the expressions given earlier in this section). A conservative force can be expressed in the language of differential geometry as a closed form. As Euclidean space is contractible, its de Rham cohomology vanishes, so every closed form is also an exact form, and can be expressed as the gradient of a scalar field. This gives a mathematical justification of the fact that all conservative forces are gradients of a potential field. Notes References External links What is potential energy? Energy (physics) Forms of energy Mechanical quantities
Potential energy
[ "Physics", "Mathematics" ]
4,992
[ "Mechanical quantities", "Physical quantities", "Quantity", "Forms of energy", "Energy (physics)", "Mechanics", "Wikipedia categories named after physical quantities" ]
23,731
https://en.wikipedia.org/wiki/Plasma%20ashing
In semiconductor manufacturing, plasma ashing is the process of removing the photoresist (light-sensitive coating) from an etched wafer. Using a plasma source, a monatomic (single-atom) substance known as a reactive species is generated. Oxygen and fluorine are the most common reactive species. Other gas mixtures are also used, such as N2/H2 in which the H2 portion is 2%. The reactive species combines with the photoresist to form ash, which is removed with a vacuum pump. Typically, monatomic oxygen plasma is created by exposing oxygen gas (O2) at a low pressure to high-power radio waves, which ionise it. This process is done under vacuum in order to create a plasma. As the plasma is formed, many free radicals and also oxygen ions are created. These ions could damage the wafer due to the electric field that builds up between the plasma and the wafer surface. Newer, smaller circuitry is increasingly susceptible to these charged particles, which can become implanted into the surface. Originally, plasma was generated in the process chamber, but as the need to eliminate these ions has increased, many machines now use a downstream plasma configuration, in which the plasma is formed remotely and the desired particles are channeled to the wafer. This allows electrically charged particles time to recombine before they reach the wafer surface, preventing damage to the wafer. Types Two forms of plasma ashing are typically performed on wafers. High-temperature ashing, or stripping, is performed to remove as much photoresist as possible, while the "descum" process is used to remove residual photoresist in trenches. The main difference between the two processes is the temperature the wafer is exposed to while in the ashing chamber. Typical issues arise when the photoresist has previously undergone an implant step: heavy metals become embedded in the photoresist, and the high temperatures it has experienced make it resistant to oxidation. Monatomic oxygen is electrically neutral and, although it does recombine during the channeling, it does so at a slower rate than the positively or negatively charged free radicals, which attract one another. This means that when all of the free radicals have recombined, a portion of the active species is still available for the process. Because a large portion of the active species is lost to recombination, process times may be longer. To some extent, these longer process times can be mitigated by increasing the temperature of the reaction area. This also affects the interpretation of the optical emission traces used to monitor the process: normally the process is judged to be over when the emission declines, but certain spectral lines, representing the available ionic species, can instead rise in intensity as the reactants are consumed. See also Plasma etching Semiconductor device fabrication Plasma processing References
Plasma ashing
[ "Materials_science", "Engineering" ]
579
[ "Semiconductor device fabrication", "Materials science stubs", "Materials science", "Microtechnology" ]
23,740
https://en.wikipedia.org/wiki/Toxin
A toxin is a naturally occurring poison produced by metabolic activities of living cells or organisms. They occur especially as proteins, often conjugated. The term was first used by organic chemist Ludwig Brieger (1849–1919), derived from toxic. Toxins can be small molecules, peptides, or proteins that are capable of causing disease on contact with or absorption by body tissues interacting with biological macromolecules such as enzymes or cellular receptors. They vary greatly in their toxicity, ranging from usually minor (such as a bee sting) to potentially fatal even at extremely low doses (such as botulinum toxin). Terminology Toxins are often distinguished from other chemical agents strictly based on their biological origin. Less strict understandings embrace naturally occurring inorganic toxins, such as arsenic. Other understandings embrace synthetic analogs of naturally occurring organic poisons as toxins, and may or may not embrace naturally occurring inorganic poisons. It is important to confirm usage if a common understanding is critical. Toxins are a subset of toxicants. The term toxicant is preferred when the poison is man-made and therefore artificial. The human and scientific genetic assembly of a natural-based toxin should be considered a toxin as it is identical to its natural counterpart. The debate is one of linguistic semantics. The word toxin does not specify method of delivery (as opposed to venom, a toxin delivered via a bite, sting, etc.). Poison is a related but broader term that encompasses both toxins and toxicants; poisons may enter the body through any means - typically inhalation, ingestion, or skin absorption. Toxin, toxicant, and poison are often used interchangeably despite these subtle differences in definition. The term toxungen has also been proposed to refer to toxins that are delivered onto the body surface of another organism without an accompanying wound. A rather informal terminology of individual toxins relates them to the anatomical location where their effects are most notable: Genitotoxin, damages the urinary organs or the reproductive organs Hemotoxin, causes destruction of red blood cells (hemolysis) Phototoxin, causes dangerous photosensitivity Hepatotoxins affect the liver Neurotoxins affect the nervous system On a broader scale, toxins may be classified as either exotoxins, excreted by an organism, or endotoxins, which are released mainly when bacteria are lysed. Biological The term "biotoxin" is sometimes used to explicitly confirm the biological origin as opposed to environmental or anthropogenic origins. Biotoxins can be classified by their mechanism of delivery as poisons (passively transferred via ingestion, inhalation, or absorption across the skin), toxungens (actively transferred to the target's surface by spitting, spraying, or smearing), or venoms (delivered through a wound generated by a bite, sting, or other such action). They can also be classified by their source, such as fungal biotoxins, microbial toxins, plant biotoxins, or animal biotoxins. Toxins produced by microorganisms are important virulence determinants responsible for microbial pathogenicity and/or evasion of the host immune response. Biotoxins vary greatly in purpose and mechanism, and can be highly complex (the venom of the cone snail can contain over 100 unique peptides, which target specific nerve channels or receptors). 
Biotoxins in nature have two primary functions: Predation, such as in the spider, snake, scorpion, jellyfish, and wasp Defense as in the bee, ant, termite, honey bee, wasp, poison dart frog and plants producing toxins The toxins used as defense in species among the poison dart frog can also be used for medicinal purposes Some of the more well known types of biotoxins include: Cyanotoxins, produced by cyanobacteria Dinotoxins, produced by dinoflagellates Necrotoxins cause necrosis (i.e., death) in the cells they encounter. Necrotoxins spread through the bloodstream. In humans, skin and muscle tissues are most sensitive to necrotoxins. Organisms that possess necrotoxins include: The brown recluse or "fiddle back" spider Most rattlesnakes and vipers produce phospholipase and various trypsin-like serine proteases Puff adder Necrotizing fasciitis (caused by the "flesh eating" bacterium Streptococcus pyogenes) – produces a pore forming toxin Neurotoxins primarily affect the nervous systems of animals. The group neurotoxins generally consists of ion channel toxins that disrupt ion channel conductance. Organisms that possess neurotoxins include: The black widow spider. Most scorpions The box jellyfish Elapid snakes The cone snail The Blue-ringed octopus Venomous fish Frogs Palythoa coral Various different types of algae, cyanobacteria and dinoflagellates Myotoxins are small, basic peptides found in snake and lizard venoms, They cause muscle tissue damage by a non-enzymatic receptor based mechanism. Organisms that possess myotoxins include: rattlesnakes Mexican beaded lizard Cytotoxins are toxic at the level of individual cells, either in a non-specific fashion or only in certain types of living cells: Ricin, from castor beans Apitoxin, from honey bees T-2 mycotoxin, from certain toxic mushrooms Cardiotoxin III, from Chinese cobra Hemotoxin, from vipers Weaponry Many living organisms employ toxins offensively or defensively. A relatively small number of toxins are known to have the potential to cause widespread sickness or casualties. They are often inexpensive and easily available, and in some cases it is possible to refine them outside the laboratory. As biotoxins act quickly, and are highly toxic even at low doses, they can be more efficient than chemical agents. Due to these factors, it is vital to raise awareness of the clinical symptoms of biotoxin poisoning, and to develop effective countermeasures including rapid investigation, response, and treatment. Environmental The term "environmental toxin" can sometimes explicitly include synthetic contaminants such as industrial pollutants and other artificially made toxic substances. As this contradicts most formal definitions of the term "toxin", it is important to confirm what the researcher means when encountering the term outside of microbiological contexts. Environmental toxins from food chains that may be dangerous to human health include: Paralytic shellfish poisoning (PSP) Amnesic shellfish poisoning (ASP) Diarrheal shellfish poisoning (DSP) Neurotoxic shellfish poisoning (NSP) Research In general, when scientists determine the amount of a substance that may be hazardous for humans, animals and/or the environment they determine the amount of the substance likely to trigger effects and if possible establish a safe level. 
In Europe, the European Food Safety Authority produced risk assessments for more than 4,000 substances in over 1,600 scientific opinions and they provide open access summaries of human health, animal health and ecological hazard assessments in their OpenFoodTox database. The OpenFoodTox database can be used to screen potential new foods for toxicity. The Toxicology and Environmental Health Information Program (TEHIP) at the United States National Library of Medicine (NLM) maintains a comprehensive toxicology and environmental health web site that includes access to toxins-related resources produced by TEHIP and by other government agencies and organizations. This web site includes links to databases, bibliographies, tutorials, and other scientific and consumer-oriented resources. TEHIP also is responsible for the Toxicology Data Network (TOXNET), an integrated system of toxicology and environmental health databases that are available free of charge on the web. TOXMAP is a Geographic Information System (GIS) that is part of TOXNET. TOXMAP uses maps of the United States to help users visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund Basic Research Programs. See also ArachnoServer Brevetoxin Cangitoxin Detoxification (alternative medicine) Dose–response relationship Excitotoxicity Environment and health Exposome Insect toxin List of highly toxic gases List of poisonous plants Pollution Secondary metabolite Toxalbumin Toxicophore, feature or group within a molecule that is thought to be responsible for its toxic properties. Toxin-antitoxin system References External links T3DB: Toxin-target database ATDB: Animal toxin database Society of Toxicology The Journal of Venomous Animals and Toxins including Tropical Diseases ToxSeek: Meta-search engine in toxicology and environmental health Website on Models & Ecotoxicology Biology terminology Chemical ecology
Toxin
[ "Physics", "Chemistry", "Biology", "Environmental_science" ]
1,854
[ "Toxicology", "Chemical ecology", "Harmful chemical substances", "Materials", "Toxicants", "nan", "Biochemistry", "Toxins", "Matter" ]
23,750
https://en.wikipedia.org/wiki/Paramagnetism
Paramagnetism is a form of magnetism whereby some materials are weakly attracted by an externally applied magnetic field, and form internal, induced magnetic fields in the direction of the applied magnetic field. In contrast with this behavior, diamagnetic materials are repelled by magnetic fields and form induced magnetic fields in the direction opposite to that of the applied magnetic field. Paramagnetic materials include most chemical elements and some compounds; they have a relative magnetic permeability slightly greater than 1 (i.e., a small positive magnetic susceptibility) and hence are attracted to magnetic fields. The magnetic moment induced by the applied field is linear in the field strength and rather weak. It typically requires a sensitive analytical balance to detect the effect and modern measurements on paramagnetic materials are often conducted with a SQUID magnetometer. Paramagnetism is due to the presence of unpaired electrons in the material, so most atoms with incompletely filled atomic orbitals are paramagnetic, although exceptions such as copper exist. Due to their spin, unpaired electrons have a magnetic dipole moment and act like tiny magnets. An external magnetic field causes the electrons' spins to align parallel to the field, causing a net attraction. Paramagnetic materials include aluminium, oxygen, titanium, and iron oxide (FeO). Therefore, a simple rule of thumb is used in chemistry to determine whether a particle (atom, ion, or molecule) is paramagnetic or diamagnetic: if all electrons in the particle are paired, then the substance made of this particle is diamagnetic; if it has unpaired electrons, then the substance is paramagnetic. Unlike ferromagnets, paramagnets do not retain any magnetization in the absence of an externally applied magnetic field because thermal motion randomizes the spin orientations. (Some paramagnetic materials retain spin disorder even at absolute zero, meaning they are paramagnetic in the ground state, i.e. in the absence of thermal motion.) Thus the total magnetization drops to zero when the applied field is removed. Even in the presence of the field there is only a small induced magnetization because only a small fraction of the spins will be oriented by the field. This fraction is proportional to the field strength and this explains the linear dependency. The attraction experienced by ferromagnetic materials is non-linear and much stronger, so that it is easily observed, for instance, in the attraction between a refrigerator magnet and the iron of the refrigerator itself. Relation to electron spins Constituent atoms or molecules of paramagnetic materials have permanent magnetic moments (dipoles), even in the absence of an applied field. The permanent moment generally is due to the spin of unpaired electrons in atomic or molecular electron orbitals (see Magnetic moment). In pure paramagnetism, the dipoles do not interact with one another and are randomly oriented in the absence of an external field due to thermal agitation, resulting in zero net magnetic moment. When a magnetic field is applied, the dipoles will tend to align with the applied field, resulting in a net magnetic moment in the direction of the applied field. In the classical description, this alignment can be understood to occur due to a torque being provided on the magnetic moments by an applied field, which tries to align the dipoles parallel to the applied field. 
However, the true origins of the alignment can only be understood via the quantum-mechanical properties of spin and angular momentum. If there is sufficient energy exchange between neighbouring dipoles, they will interact, and may spontaneously align or anti-align and form magnetic domains, resulting in ferromagnetism (permanent magnets) or antiferromagnetism, respectively. Paramagnetic behavior can also be observed in ferromagnetic materials that are above their Curie temperature, and in antiferromagnets above their Néel temperature. At these temperatures, the available thermal energy simply overcomes the interaction energy between the spins. In general, paramagnetic effects are quite small: the magnetic susceptibility is of the order of 10−3 to 10−5 for most paramagnets, but may be as high as 10−1 for synthetic paramagnets such as ferrofluids. Delocalization In conductive materials, the electrons are delocalized, that is, they travel through the solid more or less as free electrons. Conductivity can be understood in a band structure picture as arising from the incomplete filling of energy bands. In an ordinary nonmagnetic conductor the conduction band is identical for both spin-up and spin-down electrons. When a magnetic field is applied, the conduction band splits apart into a spin-up and a spin-down band due to the difference in magnetic potential energy for spin-up and spin-down electrons. Since the Fermi level must be identical for both bands, this means that there will be a small surplus of the type of spin in the band that moved downwards. This effect is a weak form of paramagnetism known as Pauli paramagnetism. The effect always competes with a diamagnetic response of opposite sign due to all the core electrons of the atoms. Stronger forms of magnetism usually require localized rather than itinerant electrons. However, in some cases a band structure can result in which there are two delocalized sub-bands with states of opposite spins that have different energies. If one subband is preferentially filled over the other, one can have itinerant ferromagnetic order. This situation usually only occurs in relatively narrow (d-)bands, which are poorly delocalized. s and p electrons Generally, strong delocalization in a solid due to large overlap with neighboring wave functions means that there will be a large Fermi velocity; this means that the number of electrons in a band is less sensitive to shifts in that band's energy, implying a weak magnetism. This is why s- and p-type metals are typically either Pauli-paramagnetic or as in the case of gold even diamagnetic. In the latter case the diamagnetic contribution from the closed shell inner electrons simply wins over the weak paramagnetic term of the almost free electrons. d and f electrons Stronger magnetic effects are typically only observed when d or f electrons are involved. Particularly the latter are usually strongly localized. Moreover, the size of the magnetic moment on a lanthanide atom can be quite large as it can carry up to 7 unpaired electrons in the case of gadolinium(III) (hence its use in MRI). The high magnetic moments associated with lanthanides is one reason why superstrong magnets are typically based on elements like neodymium or samarium. Molecular localization The above picture is a generalization as it pertains to materials with an extended lattice rather than a molecular structure. Molecular structure can also lead to localization of electrons. 
Although there are usually energetic reasons why a molecular structure results such that it does not exhibit partly filled orbitals (i.e. unpaired spins), some non-closed shell moieties do occur in nature. Molecular oxygen is a good example. Even in the frozen solid it contains di-radical molecules resulting in paramagnetic behavior. The unpaired spins reside in orbitals derived from oxygen p wave functions, but the overlap is limited to the one neighbor in the O2 molecules. The distances to other oxygen atoms in the lattice remain too large to lead to delocalization and the magnetic moments remain unpaired. Theory The Bohr–Van Leeuwen theorem proves that there cannot be any diamagnetism or paramagnetism in a purely classical system. The paramagnetic response has then two possible quantum origins, either coming from permanent magnetic moments of the ions or from the spatial motion of the conduction electrons inside the material. Both descriptions are given below. Curie's law For low levels of magnetization, the magnetization of paramagnets follows what is known as Curie's law, at least approximately. This law indicates that the susceptibility, , of paramagnetic materials is inversely proportional to their temperature, i.e. that materials become more magnetic at lower temperatures. The mathematical expression is: where: is the resulting magnetization, measured in amperes/meter (A/m), is the volume magnetic susceptibility (dimensionless), is the auxiliary magnetic field (A/m), is absolute temperature, measured in kelvins (K), is a material-specific Curie constant (K). Curie's law is valid under the commonly encountered conditions of low magnetization (μBH ≲ kBT), but does not apply in the high-field/low-temperature regime where saturation of magnetization occurs (μBH ≳ kBT) and magnetic dipoles are all aligned with the applied field. When the dipoles are aligned, increasing the external field will not increase the total magnetization since there can be no further alignment. For a paramagnetic ion with noninteracting magnetic moments with angular momentum J, the Curie constant is related to the individual ions' magnetic moments, where n is the number of atoms per unit volume. The parameter μeff is interpreted as the effective magnetic moment per paramagnetic ion. If one uses a classical treatment with molecular magnetic moments represented as discrete magnetic dipoles, μ, a Curie Law expression of the same form will emerge with μ appearing in place of μeff. When orbital angular momentum contributions to the magnetic moment are small, as occurs for most organic radicals or for octahedral transition metal complexes with d3 or high-spin d5 configurations, the effective magnetic moment takes the form ( with g-factor ge = 2.0023... ≈ 2), where Nu is the number of unpaired electrons. In other transition metal complexes this yields a useful, if somewhat cruder, estimate. When Curie constant is null, second order effects that couple the ground state with the excited states can also lead to a paramagnetic susceptibility independent of the temperature, known as Van Vleck susceptibility. Pauli paramagnetism For some alkali metals and noble metals, conduction electrons are weakly interacting and delocalized in space forming a Fermi gas. For these materials one contribution to the magnetic response comes from the interaction between the electron spins and the magnetic field known as Pauli paramagnetism. 
For a small magnetic field , the additional energy per electron from the interaction between an electron spin and the magnetic field is given by: where is the vacuum permeability, is the electron magnetic moment, is the Bohr magneton, is the reduced Planck constant, and the g-factor cancels with the spin . The indicates that the sign is positive (negative) when the electron spin component in the direction of is parallel (antiparallel) to the magnetic field. For low temperatures with respect to the Fermi temperature (around 104 kelvins for metals), the number density of electrons () pointing parallel (antiparallel) to the magnetic field can be written as: with the total free-electrons density and the electronic density of states (number of states per energy per volume) at the Fermi energy . In this approximation the magnetization is given as the magnetic moment of one electron times the difference in densities: which yields a positive paramagnetic susceptibility independent of temperature: The Pauli paramagnetic susceptibility is a macroscopic effect and has to be contrasted with Landau diamagnetic susceptibility which is equal to minus one third of Pauli's and also comes from delocalized electrons. The Pauli susceptibility comes from the spin interaction with the magnetic field while the Landau susceptibility comes from the spatial motion of the electrons and it is independent of the spin. In doped semiconductors the ratio between Landau's and Pauli's susceptibilities changes as the effective mass of the charge carriers can differ from the electron mass . The magnetic response calculated for a gas of electrons is not the full picture as the magnetic susceptibility coming from the ions has to be included. Additionally, these formulas may break down for confined systems that differ from the bulk, like quantum dots, or for high fields, as demonstrated in the De Haas-Van Alphen effect. Pauli paramagnetism is named after the physicist Wolfgang Pauli. Before Pauli's theory, the lack of a strong Curie paramagnetism in metals was an open problem as the leading Drude model could not account for this contribution without the use of quantum statistics. Pauli paramagnetism and Landau diamagnetism are essentially applications of the spin and the free electron model, the first is due to intrinsic spin of electrons; the second is due to their orbital motion. Examples of paramagnets Materials that are called "paramagnets" are most often those that exhibit, at least over an appreciable temperature range, magnetic susceptibilities that adhere to the Curie or Curie–Weiss laws. In principle any system that contains atoms, ions, or molecules with unpaired spins can be called a paramagnet, but the interactions between them need to be carefully considered. Systems with minimal interactions The narrowest definition would be: a system with unpaired spins that do not interact with each other. In this narrowest sense, the only pure paramagnet is a dilute gas of monatomic hydrogen atoms. Each atom has one non-interacting unpaired electron. A gas of lithium atoms already possess two paired core electrons that produce a diamagnetic response of opposite sign. Strictly speaking Li is a mixed system therefore, although admittedly the diamagnetic component is weak and often neglected. In the case of heavier elements the diamagnetic contribution becomes more important and in the case of metallic gold it dominates the properties. 
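A minimal numerical sketch of the Curie-law behaviour discussed above is given below. It is illustrative only: the ion density, the number of unpaired electrons, and the temperatures are assumed values, not data from the article. It computes the spin-only effective moment and the resulting low-field volume susceptibility, which for these inputs falls in the weak range quoted earlier in the article.

```python
import numpy as np

MU_B = 9.274e-24       # Bohr magneton, J/T
K_B = 1.381e-23        # Boltzmann constant, J/K
MU_0 = 4e-7 * np.pi    # vacuum permeability, T m/A

def effective_moment(n_unpaired, g=2.0023):
    """Spin-only effective moment mu_eff = g * sqrt(S(S+1)) * mu_B, with S = n/2."""
    S = n_unpaired / 2.0
    return g * np.sqrt(S * (S + 1)) * MU_B

def curie_susceptibility(n_density, mu_eff, T):
    """Curie-law volume susceptibility chi = mu_0 * n * mu_eff^2 / (3 * k_B * T).
    Only valid in the low-field / high-temperature regime (mu_B * H << k_B * T);
    at low temperature or high field the magnetization saturates instead."""
    return MU_0 * n_density * mu_eff**2 / (3 * K_B * T)

# Example: a hypothetical salt with 5 unpaired electrons per ion (high-spin d5)
# and an assumed ion density of 1e28 per cubic metre.
mu = effective_moment(5)
print("mu_eff =", mu / MU_B, "Bohr magnetons")        # about 5.92 for n = 5
for T in (300.0, 77.0):
    print(f"T = {T:6.1f} K   chi = {curie_susceptibility(1e28, mu, T):.2e}")
```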
The element hydrogen is virtually never called 'paramagnetic' because the monatomic gas is stable only at extremely high temperature; H atoms combine to form molecular H2 and in so doing, the magnetic moments are lost (quenched), because of the spins pair. Hydrogen is therefore diamagnetic and the same holds true for many other elements. Although the electronic configuration of the individual atoms (and ions) of most elements contain unpaired spins, they are not necessarily paramagnetic, because at ambient temperature quenching is very much the rule rather than the exception. The quenching tendency is weakest for f-electrons because f (especially 4f) orbitals are radially contracted and they overlap only weakly with orbitals on adjacent atoms. Consequently, the lanthanide elements with incompletely filled 4f-orbitals are paramagnetic or magnetically ordered. Thus, condensed phase paramagnets are only possible if the interactions of the spins that lead either to quenching or to ordering are kept at bay by structural isolation of the magnetic centers. There are two classes of materials for which this holds: Molecular materials with a (isolated) paramagnetic center. Good examples are coordination complexes of d- or f-metals or proteins with such centers, e.g. myoglobin. In such materials the organic part of the molecule acts as an envelope shielding the spins from their neighbors. Small molecules can be stable in radical form, oxygen O2 is a good example. Such systems are quite rare because they tend to be rather reactive. Dilute systems. Dissolving a paramagnetic species in a diamagnetic lattice at small concentrations, e.g. Nd3+ in CaCl2 will separate the neodymium ions at large enough distances that they do not interact. Such systems are of prime importance for what can be considered the most sensitive method to study paramagnetic systems: EPR. Systems with interactions As stated above, many materials that contain d- or f-elements do retain unquenched spins. Salts of such elements often show paramagnetic behavior but at low enough temperatures the magnetic moments may order. It is not uncommon to call such materials 'paramagnets', when referring to their paramagnetic behavior above their Curie or Néel-points, particularly if such temperatures are very low or have never been properly measured. Even for iron it is not uncommon to say that iron becomes a paramagnet above its relatively high Curie-point. In that case the Curie-point is seen as a phase transition between a ferromagnet and a 'paramagnet'. The word paramagnet now merely refers to the linear response of the system to an applied field, the temperature dependence of which requires an amended version of Curie's law, known as the Curie–Weiss law: This amended law includes a term θ that describes the exchange interaction that is present albeit overcome by thermal motion. The sign of θ depends on whether ferro- or antiferromagnetic interactions dominate and it is seldom exactly zero, except in the dilute, isolated cases mentioned above. Obviously, the paramagnetic Curie–Weiss description above TN or TC is a rather different interpretation of the word "paramagnet" as it does not imply the absence of interactions, but rather that the magnetic structure is random in the absence of an external field at these sufficiently high temperatures. Even if θ is close to zero this does not mean that there are no interactions, just that the aligning ferro- and the anti-aligning antiferromagnetic ones cancel. 
An additional complication is that the interactions are often different in different directions of the crystalline lattice (anisotropy), leading to complicated magnetic structures once ordered. Randomness of the structure also applies to the many metals that show a net paramagnetic response over a broad temperature range. They do not follow a Curie-type law as a function of temperature, however; often they are more or less temperature independent. This type of behavior is of an itinerant nature and is better called Pauli paramagnetism, but it is not unusual to see, for example, the metal aluminium called a "paramagnet", even though interactions are strong enough to give this element very good electrical conductivity. Superparamagnets Some materials show induced magnetic behavior that follows a Curie-type law but with exceptionally large values for the Curie constants. These materials are known as superparamagnets. They are characterized by a strong ferromagnetic or ferrimagnetic type of coupling into domains of a limited size that behave independently from one another. The bulk properties of such a system resemble those of a paramagnet, but on a microscopic level they are ordered. The materials do show an ordering temperature above which the behavior reverts to ordinary paramagnetism (with interaction). Ferrofluids are a good example, but the phenomenon can also occur inside solids, e.g., when dilute paramagnetic centers are introduced in a strong itinerant medium of ferromagnetic coupling, such as when Fe is substituted in TlCu2Se2 or the alloy AuFe. Such systems contain ferromagnetically coupled clusters that freeze out at lower temperatures. They are also called mictomagnets. See also Magnetochemistry References Further reading The Feynman Lectures on Physics Vol. II, Ch. 35: "Paramagnetism and Magnetic Resonance" (https://feynmanlectures.caltech.edu/II_35.html) Charles Kittel, Introduction to Solid State Physics (Wiley: New York, 1996). John David Jackson, Classical Electrodynamics (Wiley: New York, 1999). External links "Magnetism: Models and Mechanisms" in E. Pavarini, E. Koch, and U. Schollwöck: Emergent Phenomena in Correlated Matter, Jülich, 2013 Electric and magnetic fields in matter Magnetism Physical phenomena Quantum phases
Paramagnetism
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
4,178
[ "Quantum phases", "Physical phenomena", "Phases of matter", "Electric and magnetic fields in matter", "Quantum mechanics", "Materials science", "Condensed matter physics", "Matter" ]
23,781
https://en.wikipedia.org/wiki/Photoresist
A photoresist (also known simply as a resist) is a light-sensitive material used in several processes, such as photolithography and photoengraving, to form a patterned coating on a surface. This process is crucial in the electronics industry. The process begins by coating a substrate with a light-sensitive organic material. A patterned mask is then applied to the surface to block light, so that only unmasked regions of the material will be exposed to light. A solvent, called a developer, is then applied to the surface. In the case of a positive photoresist, the photo-sensitive material is degraded by light and the developer will dissolve away the regions that were exposed to light, leaving behind a coating where the mask was placed. In the case of a negative photoresist, the photosensitive material is strengthened (either polymerized or cross-linked) by light, and the developer will dissolve away only the regions that were not exposed to light, leaving behind a coating in areas where the mask was not placed. A BARC coating (Bottom Anti-Reflectant Coating) may be applied before the photoresist is applied, to avoid reflections from occurring under the photoresist and to improve the photoresist's performance at smaller semiconductor nodes. Conventional photoresists typically consist of 3 components: resin (a binder that provides physical properties such as adhesion, chemical resistance, etc.), sensitizer (which has a photoactive compound), and solvent (which keeps the resist liquid). Definitions Simple resist polarity Positive: light will weaken the resist and create a hole. Negative: light will toughen the resist and create an etch-resistant mask. This can be shown graphically by plotting the fraction of resist thickness remaining against the logarithm of the exposure energy: the positive resist is completely removed at the final exposure energy, while the negative resist is completely hardened and insoluble by that energy. The slope of this curve is the contrast ratio. Intensity (I) is related to exposure energy by E = I*t, where t is the exposure time. Positive photoresist A positive photoresist is a type of photoresist in which a portion is exposed to light and becomes soluble to the photoresist developer. The unexposed portion of the photoresist remains insoluble in the photoresist developer. Some examples of positive photoresists are: PMMA (polymethylmethacrylate), a single-component resist for deep-UV, e-beam and x-ray exposure; the resin itself is DUV-sensitive (slow) and works by a chain-scission mechanism. Two-component DQN resists, the common resists for mercury lamps, frequently used for near-UV exposures: the diazoquinone ester (DQ), 20-50% by weight, is the photosensitive component and is hydrophobic (not water soluble); the phenolic novolak resin (N) is water soluble; UV exposure destroys the inhibitory effect of DQ. Issues: adhesion, etch resistance. Negative photoresist A negative photoresist is a type of photoresist in which the portion of the photoresist that is exposed to light becomes insoluble in the photoresist developer. The unexposed portion of the photoresist is dissolved by the photoresist developer. 
Based on cyclized polyisoprene (rubber) with a variety of sensitizers (only a few % by weight); free-radical-initiated photo cross-linking of polymers. Issues: potential oxygen inhibition; swelling during development (long narrow lines can become wavy), which is a problem for high-resolution patterning. Examples: SU-8 (an epoxy-based polymer with good adhesion), Kodak Photoresist (KPR). Modulation transfer function MTF (modulation transfer function) is the ratio of the image intensity modulation to the object intensity modulation, and it is a parameter that indicates the capability of an optical system. Differences between positive and negative resist The following table is based on generalizations which are generally accepted in the microelectromechanical systems (MEMS) fabrication industry. Classification Based on the chemical structure of photoresists, they can be classified into three types: photopolymeric, photodecomposing, and photocrosslinking photoresists. Photopolymeric photoresist is a type of photoresist, usually an allyl monomer, which generates free radicals when exposed to light; these then initiate the photopolymerization of the monomer to produce a polymer. Photopolymeric photoresists are usually used as negative photoresists, e.g. methyl methacrylate and poly(phthalaldehyde)/PAG blends. Photocrosslinking photoresist is a type of photoresist which crosslinks chain by chain when exposed to light, generating an insoluble network. Photocrosslinking photoresists are usually used as negative photoresists. Photodecomposing photoresist is a type of photoresist that generates hydrophilic products under light. Photodecomposing photoresists are usually used as positive photoresists. A typical example is azide quinone, e.g. diazonaphthaquinone (DQ). For self-assembled monolayer (SAM) photoresists, a SAM is first formed on the substrate by self-assembly. This SAM-covered surface is then irradiated through a mask, as with other photoresists, which generates a photo-patterned sample in the irradiated areas. Finally, a developer is used to remove the designed part (SAMs can be used as either positive or negative photoresists). Light sources Absorption at UV and shorter wavelengths In lithography, decreasing the wavelength of the light source is the most efficient way to achieve higher resolution. Photoresists are most commonly used at wavelengths in the ultraviolet spectrum or shorter (<400 nm). For example, diazonaphthoquinone (DNQ) absorbs strongly from approximately 300 nm to 450 nm. The absorption bands can be assigned to n-π* (S0–S1) and π-π* (S1–S2) transitions in the DNQ molecule. In the deep ultraviolet (DUV) spectrum, the π-π* electronic transition in benzene or carbon double-bond chromophores appears at around 200 nm. Due to the appearance of more possible absorption transitions involving larger energy differences, the absorption tends to increase with shorter wavelength, or larger photon energy. Photons with energies exceeding the ionization potential of the photoresist (which can be as low as 5 eV in condensed solutions) can also release electrons which are capable of additional exposure of the photoresist. From about 5 eV to about 20 eV, photoionization of outer "valence band" electrons is the main absorption mechanism. Above 20 eV, inner electron ionization and Auger transitions become more important. Photon absorption begins to decrease as the X-ray region is approached, as fewer Auger transitions between deep atomic levels are allowed for the higher photon energy. 
The absorbed energy can drive further reactions and ultimately dissipates as heat. This is associated with the outgassing and contamination from the photoresist. Electron-beam exposure Photoresists can also be exposed by electron beams, producing the same results as exposure by light. The main difference is that while photons are absorbed, depositing all their energy at once, electrons deposit their energy gradually, and scatter within the photoresist during this process. As with high-energy wavelengths, many transitions are excited by electron beams, and heating and outgassing are still a concern. The dissociation energy for a C-C bond is 3.6 eV. Secondary electrons generated by primary ionizing radiation have energies sufficient to dissociate this bond, causing scission. In addition, the low-energy electrons have a longer photoresist interaction time due to their lower speed; essentially the electron has to be at rest with respect to the molecule in order to react most strongly via dissociative electron attachment, where the electron comes to rest at the molecule, depositing all its kinetic energy. The resulting scission breaks the original polymer into segments of lower molecular weight, which are more readily dissolved in a solvent, or else releases other chemical species (acids) which catalyze further scission reactions (see the discussion on chemically amplified resists below). It is not common to select photoresists for electron-beam exposure. Electron beam lithography usually relies on resists dedicated specifically to electron-beam exposure. Parameters Physical, chemical, and optical properties of photoresists influence their selection for different processes. The primary properties of the photoresist are resolution capability, process dose and focus latitudes required for curing, and resistance to reactive ion etching. Other key properties are sensitivity, compatibility with tetramethylammonium hydroxide (TMAH), adhesion, environmental stability, and shelf life. Resolution Resolution is the ability to distinguish neighboring features on the substrate. Critical dimension (CD) is the main measure of resolution: the smaller the CD, the higher the resolution. Contrast Contrast is the difference between the exposed and unexposed portions. The higher the contrast, the more pronounced the difference between exposed and unexposed regions after development. Sensitivity Sensitivity is the minimum energy that is required to generate a well-defined feature in the photoresist on the substrate, measured in mJ/cm2. The sensitivity of a photoresist is important when using deep ultraviolet (DUV) or extreme-ultraviolet (EUV) light. Viscosity Viscosity is a measure of the internal friction of a fluid, affecting how easily it will flow. When a thicker layer is needed, a photoresist with higher viscosity is preferred. Adherence Adherence is the adhesive strength between the photoresist and the substrate. If the resist comes off the substrate, some features will be missing or damaged. Etching resistance Etching resistance is the ability of a photoresist to withstand the high temperatures, different pH environments, or ion bombardment encountered during post-modification processing. Surface tension Surface tension is the tension induced by a liquid's tendency to minimize its surface area, caused by the attraction between the particles in the surface layer. In order to wet the surface of the substrate well, photoresists are required to possess relatively low surface tension. 
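Tying together the resolution, contrast and sensitivity parameters above with the E = I*t dose relation and the MTF mentioned earlier, the following short Python sketch shows one commonly used way to quantify resist contrast from a characteristic curve; the specific dose numbers are made-up illustrative values, and the definition γ = 1/(log10(Dc) − log10(D0)) is a standard lithography convention rather than something stated in the text above.

import math

# Hedged sketch: contrast (gamma) of a positive resist from its characteristic curve,
# where D0 is the largest dose with no thickness loss and Dc is the dose-to-clear.
def contrast(D0, Dc):
    return 1.0 / (math.log10(Dc) - math.log10(D0))

D0, Dc = 40.0, 100.0                 # illustrative doses, mJ/cm^2
print(f"gamma ~ {contrast(D0, Dc):.2f}")

# Exposure dose E = I * t (as noted above): time to reach the dose-to-clear
# at a given intensity I in mW/cm^2 (mJ/cm^2 divided by mW/cm^2 gives seconds).
I = 10.0
print(f"exposure time ~ {Dc / I:.1f} s at {I} mW/cm^2")

# Modulation, the quantity compared in the MTF, from the aerial-image extremes:
def modulation(i_max, i_min):
    return (i_max - i_min) / (i_max + i_min)
print(f"image modulation ~ {modulation(1.0, 0.2):.2f}")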
Chemical amplification Photoresists used in production for DUV and shorter wavelengths require the use of chemical amplification to increase the sensitivity to the exposure energy. This is done in order to combat the larger absorption at shorter wavelengths. Chemical amplification is also often used in electron-beam exposures to increase the sensitivity to the exposure dose. In the process, acids released by the exposure radiation diffuse during the post-exposure bake step. These acids render surrounding polymer soluble in developer. A single acid molecule can catalyze many such 'deprotection' reactions; hence, fewer photons or electrons are needed. Acid diffusion is important not only to increase photoresist sensitivity and throughput, but also to limit line edge roughness due to shot noise statistics. However, the acid diffusion length is itself a potential resolution limiter. In addition, too much diffusion reduces chemical contrast, leading again to more roughness. The following reactions are an example of commercial chemically amplified photoresists in use today: photoacid generator + hν (193 nm) → acid cation + sulfonate anion sulfonate anion + hν (193 nm) → e− + sulfonate e− + photoacid generator → e− + acid cation + sulfonate anion The e− represents a solvated electron, or a freed electron that may react with other constituents of the solution. It typically travels a distance on the order of many nanometers before being contained; such a large travel distance is consistent with the release of electrons through thick oxide in UV EPROM in response to ultraviolet light. This parasitic exposure would degrade the resolution of the photoresist; for 193 nm the optical resolution is the limiting factor anyway, but for electron beam lithography or EUVL it is the electron range that determines the resolution rather than the optics. Types DNQ-Novolac photoresist One very common positive photoresist used with the I, G and H-lines from a mercury-vapor lamp is based on a mixture of diazonaphthoquinone (DNQ) and novolac resin (a phenol formaldehyde resin). DNQ inhibits the dissolution of the novolac resin, but upon exposure to light, the dissolution rate increases even beyond that of pure novolac. The mechanism by which unexposed DNQ inhibits novolac dissolution is not well understood, but is believed to be related to hydrogen bonding (or more exactly diazocoupling in the unexposed region). DNQ-novolac resists are developed by dissolution in a basic solution (usually 0.26N tetramethylammonium hydroxide (TMAH) in water). Epoxy-based resists One very common negative photoresist is based on epoxy-based oligomer. The common product name is SU-8 photoresist, and it was originally invented by IBM, but is now sold by Microchem and Gersteltec. One unique property of SU-8 is that it is very difficult to strip. As such, it is often used in applications where a permanent resist pattern (one that is not strippable, and can even be used in harsh temperature and pressure environments) is needed for a device. Mechanism of epoxy-based polymer is shown in 1.2.3 SU-8. SU-8 is prone to swelling at smaller feature sizes, which has led to the development of small-molecule alternatives that are capable of obtaining higher resolutions than SU-8. Off-stoichiometry thiol-enes(OSTE) polymer In 2016, OSTE Polymers were shown to possess a unique photolithography mechanism, based on diffusion-induced monomer depletion, which enables high photostructuring accuracy. 
The OSTE polymer material was originally invented at the KTH Royal Institute of Technology, but is now sold by Mercene Labs. Whereas the material has properties similar to those of SU8, OSTE has the specific advantage that it contains reactive surface molecules, which make this material attractive for microfluidic or biomedical applications. Hydrogen silsesquioxane (HSQ) HSQ is a common negative resist for e-beam, but also useful for photolithography. It was originally invented by Dow Corning (1970) and is now produced (as of 2017) by Applied Quantum Materials Inc. (AQM). Unlike other negative resists, HSQ is inorganic and metal-free. Therefore, exposed HSQ provides a low dielectric constant (low-k) Si-rich oxide. A comparative study against other photoresists was reported in 2015 (Dow Corning HSQ). Applications Microcontact printing Microcontact printing was described by the Whitesides group in 1993. Generally, in this technique, an elastomeric stamp is used to generate two-dimensional patterns, through printing the "ink" molecules onto the surface of a solid substrate. The technique proceeds in two steps: first, a polydimethylsiloxane (PDMS) master stamp is created; second, the stamp is inked and brought into contact with the substrate to transfer the pattern. Printed circuit boards The manufacture of printed circuit boards is one of the most important uses of photoresist. Photolithography allows the complex wiring of an electronic system to be rapidly, economically, and accurately reproduced as if run off a printing press. The general process is applying photoresist, exposing the image to ultraviolet rays, and then etching to remove the unwanted copper from the copper-clad substrate. Patterning and etching of substrates This includes specialty photonics materials, MicroElectro-Mechanical Systems (MEMS), glass printed circuit boards, and other micropatterning tasks. Photoresist tends not to be etched by solutions with a pH greater than 3. Microelectronics This application, mainly applied to silicon wafers and silicon integrated circuits, is the most developed of the technologies and the most specialized in the field. See also Photopolymer Hardmask References Light-sensitive chemicals Lithography (microfabrication) Materials science Polymers
Photoresist
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,417
[ "Light-sensitive chemicals", "Applied and interdisciplinary physics", "Microtechnology", "Materials science", "Light reactions", "Polymer chemistry", "nan", "Polymers", "Nanotechnology", "Lithography (microfabrication)" ]
23,863
https://en.wikipedia.org/wiki/Pyridine
Pyridine is a basic heterocyclic organic compound with the chemical formula C5H5N. It is structurally related to benzene, with one methine group (=CH−) replaced by a nitrogen atom. It is a highly flammable, weakly alkaline, water-miscible liquid with a distinctive, unpleasant fish-like smell. Pyridine is colorless, but older or impure samples can appear yellow, due to the formation of extended, unsaturated polymeric chains, which show significant electrical conductivity. The pyridine ring occurs in many important compounds, including agrochemicals, pharmaceuticals, and vitamins. Historically, pyridine was produced from coal tar. As of 2016, it is synthesized on the scale of about 20,000 tons per year worldwide. Properties Physical properties Pyridine is diamagnetic. Its critical parameters are: pressure 5.63 MPa, temperature 619 K and volume 248 cm3/mol. In the temperature range 340–426 K its vapor pressure p can be described with the Antoine equation log10(p/bar) = A − B/(C + T), where T is the temperature in kelvins, A = 4.16272, B = 1371.358 K and C = −58.496 K. Structure The pyridine ring forms a hexagon. Slight variations of the C–C and C–N distances as well as the bond angles are observed. Crystallography Pyridine crystallizes in an orthorhombic crystal system with space group Pna21 and lattice parameters a = 1752 pm, b = 897 pm, c = 1135 pm, and 16 formula units per unit cell (measured at 153 K). For comparison, crystalline benzene is also orthorhombic, with space group Pbca, a = 729.2 pm, b = 947.1 pm, c = 674.2 pm (at 78 K), but the number of molecules per cell is only 4. This difference is partly related to the lower symmetry of the individual pyridine molecule (C2v vs D6h for benzene). A trihydrate (pyridine·3H2O) is known; it also crystallizes in an orthorhombic system in the space group Pbca, lattice parameters a = 1244 pm, b = 1783 pm, c = 679 pm and eight formula units per unit cell (measured at 223 K). Spectroscopy The optical absorption spectrum of pyridine in hexane consists of bands at the wavelengths of 195, 251, and 270 nm. With respective extinction coefficients (ε) of 7500, 2000, and 450 L·mol−1·cm−1, these bands are assigned to π → π*, π → π*, and n → π* transitions. The compound displays very low fluorescence. The 1H nuclear magnetic resonance (NMR) spectrum shows signals for the α- (δ 8.5), γ- (δ 7.5) and β-protons (δ 7). By contrast, the proton signal for benzene is found at δ 7.27. The larger chemical shifts of the α- and γ-protons in comparison to benzene result from the lower electron density in the α- and γ-positions, which can be derived from the resonance structures. The situation is rather similar for the 13C NMR spectra of pyridine and benzene: pyridine shows three signals, at δ(α-C) = 150 ppm, δ(β-C) = 124 ppm and δ(γ-C) = 136 ppm, whereas benzene has a single line at 129 ppm. All shifts are quoted for the solvent-free substances. Pyridine is conventionally detected by gas chromatography and mass spectrometry methods. Bonding Pyridine has a conjugated system of six π electrons that are delocalized over the ring. The molecule is planar and, thus, follows the Hückel criteria for aromatic systems. In contrast to benzene, the electron density is not evenly distributed over the ring, reflecting the negative inductive effect of the nitrogen atom. For this reason, pyridine has a dipole moment and a weaker resonant stabilization than benzene (resonance energy 117 kJ/mol in pyridine vs. 150 kJ/mol in benzene). The ring atoms in the pyridine molecule are sp2-hybridized. 
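As a quick numerical check of the Antoine parameters quoted under Physical properties above, the following Python sketch evaluates pyridine's vapor pressure over the fitted range; the convention assumed here (log10 of pressure in bar with T in kelvins, typical of NIST-style fits) is an assumption on my part rather than something stated in the text.

# Sketch of the Antoine equation for pyridine's vapor pressure, using the
# constants quoted above (A = 4.16272, B = 1371.358 K, C = -58.496 K).
# Assumed convention: log10(p / bar) = A - B / (T + C), with T in kelvins.
A, B, C = 4.16272, 1371.358, -58.496

def vapor_pressure_bar(T_kelvin):
    return 10 ** (A - B / (T_kelvin + C))

for T in (350.0, 388.0, 420.0):        # temperatures within the fitted range
    print(f"T = {T:.0f} K  ->  p ~ {vapor_pressure_bar(T):.3f} bar")
# Near pyridine's normal boiling point (~388 K) the result comes out close to 1 bar,
# which is a useful sanity check on the constants and on the kelvin temperature range.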
The nitrogen is involved in the π-bonding aromatic system using its unhybridized p orbital. The lone pair is in an sp2 orbital, projecting outward from the ring in the same plane as the σ bonds. As a result, the lone pair does not contribute to the aromatic system but importantly influences the chemical properties of pyridine, as it easily supports bond formation via an electrophilic attack. However, because of the separation of the lone pair from the aromatic ring system, the nitrogen atom cannot exhibit a positive mesomeric effect. Many analogues of pyridine are known where N is replaced by other heteroatoms from the same column of the Periodic Table of Elements. Substitution of one C–H in pyridine with a second N gives rise to the diazine heterocycles (C4H4N2), with the names pyridazine, pyrimidine, and pyrazine. History Impure pyridine was undoubtedly prepared by early alchemists by heating animal bones and other organic matter, but the earliest documented reference is attributed to the Scottish scientist Thomas Anderson. In 1849, Anderson examined the contents of the oil obtained through high-temperature heating of animal bones. Among other substances, he separated from the oil a colorless liquid with an unpleasant odor, from which he isolated pure pyridine two years later. He described it as highly soluble in water, readily soluble in concentrated acids and salts upon heating, and only slightly soluble in oils. Owing to its flammability, Anderson named the new substance pyridine, after the Greek πῦρ (pyr) meaning fire. The suffix idine was added in compliance with the chemical nomenclature, as in toluidine, to indicate a cyclic compound containing a nitrogen atom. The chemical structure of pyridine was determined decades after its discovery. Wilhelm Körner (1869) and James Dewar (1871) suggested that, by analogy with the relationship between quinoline and naphthalene, the structure of pyridine is derived from benzene by substituting one C–H unit with a nitrogen atom. The suggestion by Körner and Dewar was later confirmed in an experiment where pyridine was reduced to piperidine with sodium in ethanol. In 1876, William Ramsay combined acetylene and hydrogen cyanide into pyridine in a red-hot iron-tube furnace. This was the first synthesis of a heteroaromatic compound. The first major synthesis of pyridine derivatives was described in 1881 by Arthur Rudolf Hantzsch. The Hantzsch pyridine synthesis typically uses a 2:1:1 mixture of a β-keto acid (often acetoacetate), an aldehyde (often formaldehyde), and ammonia or its salt as the nitrogen donor. First, a doubly hydrogenated pyridine is obtained, which is then oxidized to the corresponding pyridine derivative. Emil Knoevenagel showed that asymmetrically substituted pyridine derivatives can be produced with this process. The contemporary methods of pyridine production had a low yield, and the increasing demand for the new compound prompted a search for more efficient routes. A breakthrough came in 1924 when the Russian chemist Aleksei Chichibabin invented a pyridine synthesis reaction, which was based on inexpensive reagents. This method is still used for the industrial production of pyridine. Occurrence Pyridine is not abundant in nature, except in the leaves and roots of belladonna (Atropa belladonna) and in marshmallow (Althaea officinalis). Pyridine derivatives, however, are often part of biomolecules such as alkaloids. 
In daily life, trace amounts of pyridine are components of the volatile organic compounds that are produced in roasting and canning processes, e.g. in fried chicken, sukiyaki, roasted coffee, potato chips, and fried bacon. Traces of pyridine can be found in Beaufort cheese, vaginal secretions, black tea, saliva of those suffering from gingivitis, and sunflower honey. Trace amounts of up to 16 μg/m3 have been detected in tobacco smoke. Minor amounts of pyridine are released into the environment from some industrial processes such as steel manufacture, processing of oil shale, coal gasification, coking plants and incinerators. The atmosphere at oil shale processing plants can contain pyridine concentrations of up to 13 μg/m3, and 53 μg/m3 levels were measured in the groundwater in the vicinity of a coal gasification plant. According to a study by the US National Institute for Occupational Safety and Health, about 43,000 Americans work in contact with pyridine. In foods Pyridine has historically been added to foods to give them a bitter flavour, although this practice is now banned in the U.S. It may still be added to ethanol to make it unsuitable for drinking. Production Historically, pyridine was extracted from coal tar or obtained as a byproduct of coal gasification. The process is labor-intensive and inefficient: coal tar contains only about 0.1% pyridine, and therefore a multi-stage purification was required, which further reduced the output. Nowadays, most pyridines are synthesized from ammonia, aldehydes, and nitriles, a few combinations of which are suited for pyridine itself. Various name reactions are also known, but they are not practiced on scale. In 1989, 26,000 tonnes of pyridine was produced worldwide. Other major derivatives are 2-, 3-, 4-methylpyridines and 5-ethyl-2-methylpyridine. The combined scale of these alkylpyridines matches that of pyridine itself. Among the largest 25 production sites for pyridine, eleven are located in Europe (as of 1999). The major producers of pyridine include Evonik Industries, Rütgers Chemicals, Jubilant Life Sciences, Imperial Chemical Industries, and Koei Chemical. Pyridine production significantly increased in the early 2000s, with an annual production capacity of 30,000 tonnes in mainland China alone. The US–Chinese joint venture Vertellus is currently the world leader in pyridine production. Chichibabin synthesis The Chichibabin pyridine synthesis was reported in 1924 and the basic approach underpins several industrial routes. In its general form, the reaction involves the condensation reaction of aldehydes, ketones, α,β-unsaturated carbonyl compounds, or any combination of the above, in ammonia or ammonia derivatives. Applications of the Chichibabin pyridine synthesis suffer from low yields, often about 30%; however, the precursors are inexpensive. In particular, unsubstituted pyridine is produced from formaldehyde and acetaldehyde. First, acrolein is formed in a Knoevenagel condensation from the acetaldehyde and formaldehyde. The acrolein then condenses with acetaldehyde and ammonia to give dihydropyridine, which is oxidized to pyridine. This process is carried out in the gas phase at 400–450 °C. Typical catalysts are modified forms of alumina and silica. The reaction has been tailored to produce various methylpyridines. Dealkylation and decarboxylation of substituted pyridines Pyridine can be prepared by dealkylation of alkylated pyridines, which are obtained as byproducts in the syntheses of other pyridines. 
The oxidative dealkylation is carried out either using air over a vanadium(V) oxide catalyst, by vapor-dealkylation on a nickel-based catalyst, or by hydrodealkylation with a silver- or platinum-based catalyst. Yields of pyridine of up to 93% can be achieved with the nickel-based catalyst. Pyridine can also be produced by the decarboxylation of nicotinic acid with copper chromite. Bönnemann cyclization The trimerization of one part of a nitrile molecule and two parts of acetylene into pyridine is called Bönnemann cyclization. This modification of the Reppe synthesis can be activated either by heat or by light. While the thermal activation requires high pressures and temperatures, the photoinduced cycloaddition proceeds at ambient conditions with CoCp2(cod) (Cp = cyclopentadienyl, cod = 1,5-cyclooctadiene) as a catalyst, and can be performed even in water. A series of pyridine derivatives can be produced in this way. When using acetonitrile as the nitrile, 2-methylpyridine is obtained, which can be dealkylated to pyridine. Other methods The Kröhnke pyridine synthesis provides a fairly general method for generating substituted pyridines using pyridine itself as a reagent which does not become incorporated into the final product. The reaction of pyridine with bromomethyl ketones gives the related pyridinium salt, wherein the methylene group is highly acidic. This species undergoes a Michael-like addition to α,β-unsaturated carbonyls in the presence of ammonium acetate, followed by ring closure and formation of the targeted substituted pyridine as well as pyridinium bromide. The Ciamician–Dennstedt rearrangement entails the ring-expansion of pyrrole with dichlorocarbene to 3-chloropyridine. In the Gattermann–Skita synthesis, a malonate ester salt reacts with dichloromethylamine. Other methods include the Boger pyridine synthesis and the Diels–Alder reaction of an alkene and an oxazole. Biosynthesis Several pyridine derivatives play important roles in biological systems. While its biosynthesis is not fully understood, nicotinic acid (vitamin B3) occurs in some bacteria, fungi, and mammals. Mammals synthesize nicotinic acid through oxidation of the amino acid tryptophan, where an intermediate product, the aniline derivative kynurenine, creates a pyridine derivative, quinolinate, and then nicotinic acid. By contrast, the bacteria Mycobacterium tuberculosis and Escherichia coli produce nicotinic acid by condensation of glyceraldehyde 3-phosphate and aspartic acid. Reactions Because of the electronegative nitrogen in the pyridine ring, pyridine enters less readily into electrophilic aromatic substitution reactions than benzene derivatives. Instead, in terms of its reactivity, pyridine resembles nitrobenzene. Correspondingly, pyridine is more prone to nucleophilic substitution, as evidenced by the ease of metalation by strong organometallic bases. The reactivity of pyridine can be distinguished for three groups of reagents. With electrophiles, electrophilic substitution takes place where pyridine expresses aromatic properties. With nucleophiles, pyridine reacts at positions 2 and 4 and thus behaves similarly to imines and carbonyls. The reaction with many Lewis acids results in the addition to the nitrogen atom of pyridine, which is similar to the reactivity of tertiary amines. The ability of pyridine and its derivatives to oxidize, forming amine oxides (N-oxides), is also a feature of tertiary amines. The nitrogen center of pyridine features a basic lone pair of electrons. 
This lone pair does not overlap with the aromatic π-system ring, consequently pyridine is basic, having chemical properties similar to those of tertiary amines. Protonation gives pyridinium, C5H5NH+.The pKa of the conjugate acid (the pyridinium cation) is 5.25. The structures of pyridine and pyridinium are almost identical. The pyridinium cation is isoelectronic with benzene. Pyridinium p-toluenesulfonate (PPTS) is an illustrative pyridinium salt; it is produced by treating pyridine with p-toluenesulfonic acid. In addition to protonation, pyridine undergoes N-centred alkylation, acylation, and N-oxidation. Pyridine and poly(4-vinyl) pyridine have been shown to form conducting molecular wires with remarkable polyenimine structure on UV irradiation, a process which accounts for at least some of the visible light absorption by aged pyridine samples. These wires have been theoretically predicted to be both highly efficient electron donors and acceptors, and yet are resistant to air oxidation. Electrophilic substitutions Owing to the decreased electron density in the aromatic system, electrophilic substitutions are suppressed in pyridine and its derivatives. Friedel–Crafts alkylation or acylation, usually fail for pyridine because they lead only to the addition at the nitrogen atom. Substitutions usually occur at the 3-position, which is the most electron-rich carbon atom in the ring and is, therefore, more susceptible to an electrophilic addition. Direct nitration of pyridine is sluggish. Pyridine derivatives wherein the nitrogen atom is screened sterically and/or electronically can be obtained by nitration with nitronium tetrafluoroborate (NO2BF4). In this way, 3-nitropyridine can be obtained via the synthesis of 2,6-dibromopyridine followed by nitration and debromination. Sulfonation of pyridine is even more difficult than nitration. However, pyridine-3-sulfonic acid can be obtained. Reaction with the SO3 group also facilitates addition of sulfur to the nitrogen atom, especially in the presence of a mercury(II) sulfate catalyst. In contrast to the sluggish nitrations and sulfonations, the bromination and chlorination of pyridine proceed well. Pyridine N-oxide Oxidation of pyridine occurs at nitrogen to give pyridine N-oxide. The oxidation can be achieved with peracids: C5H5N + RCO3H → C5H5NO + RCO2H Some electrophilic substitutions on the pyridine are usefully effected using pyridine N-oxide followed by deoxygenation. Addition of oxygen suppresses further reactions at nitrogen atom and promotes substitution at the 2- and 4-carbons. The oxygen atom can then be removed, e.g., using zinc dust. Nucleophilic substitutions In contrast to benzene ring, pyridine efficiently supports several nucleophilic substitutions. The reason for this is relatively lower electron density of the carbon atoms of the ring. These reactions include substitutions with elimination of a hydride ion and elimination-additions with formation of an intermediate aryne configuration, and usually proceed at the 2- or 4-position. Many nucleophilic substitutions occur more easily not with bare pyridine but with pyridine modified with bromine, chlorine, fluorine, or sulfonic acid fragments that then become a leaving group. So fluorine is the best leaving group for the substitution with organolithium compounds. The nucleophilic attack compounds may be alkoxides, thiolates, amines, and ammonia (at elevated pressures). In general, the hydride ion is a poor leaving group and occurs only in a few heterocyclic reactions. 
They include the Chichibabin reaction, which yields pyridine derivatives aminated at the 2-position. Here, sodium amide is used as the nucleophile yielding 2-aminopyridine. The hydride ion released in this reaction combines with a proton of an available amino group, forming a hydrogen molecule. Analogous to benzene, nucleophilic substitutions to pyridine can result in the formation of pyridyne intermediates as heteroaryne. For this purpose, pyridine derivatives can be eliminated with good leaving groups using strong bases such as sodium and potassium tert-butoxide. The subsequent addition of a nucleophile to the triple bond has low selectivity, and the result is a mixture of the two possible adducts. Radical reactions Pyridine supports a series of radical reactions, which is used in its dimerization to bipyridines. Radical dimerization of pyridine with elemental sodium or Raney nickel selectively yields 4,4'-bipyridine, or 2,2'-bipyridine, which are important precursor reagents in the chemical industry. One of the name reactions involving free radicals is the Minisci reaction. It can produce 2-tert-butylpyridine upon reacting pyridine with pivalic acid, silver nitrate and ammonium in sulfuric acid with a yield of 97%. Reactions on the nitrogen atom Lewis acids easily add to the nitrogen atom of pyridine, forming pyridinium salts. The reaction with alkyl halides leads to alkylation of the nitrogen atom. This creates a positive charge in the ring that increases the reactivity of pyridine to both oxidation and reduction. The Zincke reaction is used for the selective introduction of radicals in pyridinium compounds (it has no relation to the chemical element zinc). Hydrogenation and reduction Piperidine is produced by hydrogenation of pyridine with a nickel-, cobalt-, or ruthenium-based catalyst at elevated temperatures. The hydrogenation of pyridine to piperidine releases 193.8 kJ/mol, which is slightly less than the energy of the hydrogenation of benzene (205.3 kJ/mol). Partially hydrogenated derivatives are obtained under milder conditions. For example, reduction with lithium aluminium hydride yields a mixture of 1,4-dihydropyridine, 1,2-dihydropyridine, and 2,5-dihydropyridine. Selective synthesis of 1,4-dihydropyridine is achieved in the presence of organometallic complexes of magnesium and zinc, and (Δ3,4)-tetrahydropyridine is obtained by electrochemical reduction of pyridine. Birch reduction converts pyridine to dihydropyridines. Lewis basicity and coordination compounds Pyridine is a Lewis base, donating its pair of electrons to a Lewis acid. Its Lewis base properties are discussed in the ECW model. Its relative donor strength toward a series of acids, versus other Lewis bases, can be illustrated by C-B plots. One example is the sulfur trioxide pyridine complex (melting point 175 °C), which is a sulfation agent used to convert alcohols to sulfate esters. Pyridine-borane (, melting point 10–11 °C) is a mild reducing agent. Transition metal pyridine complexes are numerous. Typical octahedral complexes have the stoichiometry and . Octahedral homoleptic complexes of the type are rare or tend to dissociate pyridine. Numerous square planar complexes are known, such as Crabtree's catalyst. The pyridine ligand replaced during the reaction is restored after its completion. The η6 coordination mode, as occurs in η6 benzene complexes, is observed only in sterically encumbered derivatives that block the nitrogen center. 
Applications Pesticides and pharmaceuticals The main use of pyridine is as a precursor to the herbicides paraquat and diquat. The first synthesis step of insecticide chlorpyrifos consists of the chlorination of pyridine. Pyridine is also the starting compound for the preparation of pyrithione-based fungicides. Cetylpyridinium and laurylpyridinium, which can be produced from pyridine with a Zincke reaction, are used as antiseptic in oral and dental care products. Pyridine is easily attacked by alkylating agents to give N-alkylpyridinium salts. One example is cetylpyridinium chloride. It is also used in the textile industry to improve network capacity of cotton. Laboratory use Pyridine is used as a polar, basic, low-reactive solvent, for example in Knoevenagel condensations. It is especially suitable for the dehalogenation, where it acts as the base for the elimination reaction. In esterifications and acylations, pyridine activates the carboxylic acid chlorides and anhydrides. Even more active in these reactions are the derivatives 4-dimethylaminopyridine (DMAP) and 4-(1-pyrrolidinyl) pyridine. Pyridine is also used as a base in some condensation reactions. Reagents As a base, pyridine can be used as the Karl Fischer reagent, but it is usually replaced by alternatives with a more pleasant odor, such as imidazole. Pyridinium chlorochromate, pyridinium dichromate, and the Collins reagent (the complex of chromium(VI) oxide) are used for the oxidation of alcohols. Hazards Pyridine is a toxic, flammable liquid with a strong and unpleasant fishy odour. Its odour threshold of 0.04 to 20 ppm is close to its threshold limit of 5 ppm for adverse effects, thus most (but not all) adults will be able to tell when it is present at harmful levels. Pyridine easily dissolves in water and harms both animals and plants in aquatic systems. Fire Pyridine has a flash point of 20 °C and is therefore highly flammable. Combustion produces toxic fumes which can include bipyridines, nitrogen oxides, and carbon monoxide. Short-term exposure Pyridine can cause chemical burns on contact with the skin and its fumes may be irritating to the eyes or upon inhalation. Pyridine depresses the nervous system giving symptoms similar to intoxication with vapor concentrations of above 3600 ppm posing a greater health risk. The effects may have a delayed onset of several hours and include dizziness, headache, lack of coordination, nausea, salivation, and loss of appetite. They may progress into abdominal pain, pulmonary congestion and unconsciousness. The lowest known lethal dose (LDLo) for the ingestion of pyridine in humans is 500 mg/kg. Long-term exposure Prolonged exposure to pyridine may result in liver, heart and kidney damage. Evaluations as a possible carcinogenic agent showed that there is inadequate evidence in humans for the carcinogenicity of pyridine, although there is sufficient evidence in experimental animals. Therefore, IARC considers pyridine as possibly carcinogenic to humans (Group 2B). Metabolism Exposure to pyridine would normally lead to its inhalation and absorption in the lungs and gastrointestinal tract, where it either remains unchanged or is metabolized. The major products of pyridine metabolism are N-methylpyridiniumhydroxide, which are formed by N-methyltransferases (e.g., pyridine N-methyltransferase), as well as pyridine N-oxide, and 2-, 3-, and 4-hydroxypyridine, which are generated by the action of monooxygenase. In humans, pyridine is metabolized only into N-methylpyridiniumhydroxide. 
Environmental fate Pyridine is readily degraded by bacteria to ammonia and carbon dioxide. The unsubstituted pyridine ring degrades more rapidly than picoline, lutidine, chloropyridine, or aminopyridines, and a number of pyridine degraders have been shown to overproduce riboflavin in the presence of pyridine. Ionizable N-heterocyclic compounds, including pyridine, interact with environmental surfaces (such as soils and sediments) via multiple pH-dependent mechanisms, including partitioning to soil organic matter, cation exchange, and surface complexation. Such adsorption to surfaces reduces bioavailability of pyridines for microbial degraders and other organisms, thus slowing degradation rates and reducing ecotoxicity. Nomenclature The systematic name of pyridine, within the Hantzsch–Widman nomenclature recommended by the IUPAC, is azine. However, systematic names for simple compounds are used very rarely; instead, heterocyclic nomenclature follows historically established common names. IUPAC discourages the use of azine in favor of pyridine. The numbering of the ring atoms in pyridine starts at the nitrogen. An allocation of positions by letter of the Greek alphabet (α-γ) and the substitution pattern nomenclature common for homoaromatic systems (ortho, meta, para) are used sometimes. Here α (ortho), β (meta), and γ (para) refer to the 2, 3, and 4 position, respectively. The systematic name for the pyridine derivatives is pyridinyl, wherein the position of the substituted atom is preceded by a number. However, the historical name pyridyl is encouraged by the IUPAC and used instead of the systematic name. The cationic derivative formed by the addition of an electrophile to the nitrogen atom is called pyridinium. See also 6-membered aromatic rings with one carbon replaced by another group: borabenzene, silabenzene, germabenzene, stannabenzene, pyridine, phosphorine, arsabenzene, stibabenzene, bismabenzene, pyrylium, thiopyrylium, selenopyrylium, telluropyrylium 6-membered rings with two nitrogen atoms: diazines 6-membered rings with three nitrogen atoms: triazines 6-membered rings with four nitrogen atoms: tetrazines 6-membered rings with five nitrogen atoms: pentazine 6-membered rings with six nitrogen atoms: hexazine References Bibliography External links Synthesis and properties of pyridines at chemsynthesis.com International Chemical Safety Card 0323 NIOSH Pocket Guide to Chemical Hazards Synthesis of pyridines (overview of recent methods) Amine solvents Foul-smelling chemicals Aromatic bases Simple aromatic rings Functional groups Aromatic solvents
Pyridine
[ "Chemistry" ]
6,687
[ "Bases (chemistry)", "Functional groups", "Aromatic bases" ]
23,872
https://en.wikipedia.org/wiki/Polymerization
In polymer chemistry, polymerization (American English), or polymerisation (British English), is a process of reacting monomer molecules together in a chemical reaction to form polymer chains or three-dimensional networks. There are many forms of polymerization and different systems exist to categorize them. In chemical compounds, polymerization can occur via a variety of reaction mechanisms that vary in complexity due to the functional groups present in the reactants and their inherent steric effects. In more straightforward polymerizations, alkenes form polymers through relatively simple radical reactions; in contrast, reactions involving substitution at a carbonyl group require more complex synthesis due to the way in which reactants polymerize. As alkenes can polymerize in somewhat straightforward radical reactions, they form useful compounds such as polyethylene and polyvinyl chloride (PVC), which are produced in high tonnages each year due to their usefulness in manufacturing processes of commercial products, such as piping, insulation and packaging. In general, polymers such as PVC are referred to as "homopolymers", as they consist of repeated long chains or structures of the same monomer unit, whereas polymers that consist of more than one monomer unit are referred to as copolymers (or co-polymers). Other monomer units, such as formaldehyde hydrates or simple aldehydes, are able to polymerize themselves at quite low temperatures (ca. −80 °C) to form trimers: molecules consisting of 3 monomer units, which can cyclize to form ring structures, or undergo further reactions to form tetramers, or 4 monomer-unit compounds. Such small polymers are referred to as oligomers. Generally, because formaldehyde is an exceptionally reactive electrophile, it allows nucleophilic addition of hemiacetal intermediates, which are in general short-lived and relatively unstable "mid-stage" compounds that react with other non-polar molecules present to form more stable polymeric compounds. Polymerization that is not sufficiently moderated and proceeds at a fast rate can be very hazardous. This phenomenon is known as autoacceleration, and can cause fires and explosions. Step-growth vs. chain-growth polymerization Step-growth and chain-growth are the main classes of polymerization reaction mechanisms. The former is often easier to implement but requires precise control of stoichiometry. The latter more reliably affords high molecular-weight polymers, but only applies to certain monomers. Step-growth In step-growth (or step) polymerization, pairs of reactants, of any lengths, combine at each step to form a longer polymer molecule. The average molar mass increases slowly. Long chains form only late in the reaction. Step-growth polymers are formed by independent reaction steps between functional groups of monomer units, usually containing heteroatoms such as nitrogen or oxygen. Most step-growth polymers are also classified as condensation polymers, since a small molecule such as water is lost when the polymer chain is lengthened. For example, polyester chains grow by reaction of alcohol and carboxylic acid groups to form ester links with loss of water. However, there are exceptions; for example polyurethanes are step-growth polymers formed from isocyanate and alcohol bifunctional monomers without loss of water or other volatile molecules, and are classified as addition polymers rather than condensation polymers. 
Step-growth polymers increase in molecular weight at a very slow rate at lower conversions and reach moderately high molecular weights only at very high conversion (i.e., >95%). Solid state polymerization to afford polyamides (e.g., nylons) is an example of step-growth polymerization. Chain-growth In chain-growth (or chain) polymerization, the only chain-extension reaction step is the addition of a monomer to a growing chain with an active center such as a free radical, cation, or anion. Once the growth of a chain is initiated by formation of an active center, chain propagation is usually rapid by addition of a sequence of monomers. Long chains are formed from the beginning of the reaction. Chain-growth polymerization (or addition polymerization) involves the linking together of unsaturated monomers, especially containing carbon-carbon double bonds. The pi-bond is lost by formation of a new sigma bond. Chain-growth polymerization is involved in the manufacture of polymers such as polyethylene, polypropylene, polyvinyl chloride (PVC), and acrylate. In these cases, the alkenes RCH=CH2 are converted to high molecular weight alkanes (-RCHCH2-)n (R = H, CH3, Cl, CO2CH3). Other forms of chain growth polymerization include cationic addition polymerization and anionic addition polymerization. A special case of chain-growth polymerization leads to living polymerization. Ziegler–Natta polymerization allows considerable control of polymer branching. Diverse methods are employed to manipulate the initiation, propagation, and termination rates during chain polymerization. A related issue is temperature control, also called heat management, during these reactions, which are often highly exothermic. For example, for the polymerization of ethylene, 93.6 kJ of energy are released per mole of monomer. The manner in which polymerization is conducted is a highly evolved technology. Methods include emulsion polymerization, solution polymerization, suspension polymerization, and precipitation polymerization. Although the polymer dispersity and molecular weight may be improved, these methods may introduce additional processing requirements to isolate the product from a solvent. Photopolymerization Most photopolymerization reactions are chain-growth polymerizations which are initiated by the absorption of visible or ultraviolet light. Photopolymerization can also be a step-growth polymerization. The light may be absorbed either directly by the reactant monomer (direct photopolymerization), or else by a photosensitizer which absorbs the light and then transfers energy to the monomer. In general, only the initiation step differs from that of the ordinary thermal polymerization of the same monomer; subsequent propagation, termination, and chain-transfer steps are unchanged. In step-growth photopolymerization, absorption of light triggers an addition (or condensation) reaction between two comonomers that do not react without light. A propagation cycle is not initiated because each growth step requires the assistance of light. Photopolymerization can be used as a photographic or printing process because polymerization only occurs in regions which have been exposed to light. Unreacted monomer can be removed from unexposed regions, leaving a relief polymeric image. Several forms of 3D printing—including layer-by-layer stereolithography and two-photon absorption 3D photopolymerization—use photopolymerization. 
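Returning to the step-growth kinetics described at the start of this section, the statement that high molecular weights are reached only at very high conversion can be made quantitative with the Carothers equation, Xn = 1/(1 − p), a standard relation for stoichiometrically balanced bifunctional monomers that is not given in the text above but is widely used; the short Python sketch below evaluates it.

# Sketch of the Carothers equation for step-growth polymerization of
# stoichiometrically balanced bifunctional monomers: Xn = 1 / (1 - p),
# where p is the extent of reaction (conversion) and Xn the number-average
# degree of polymerization.
def degree_of_polymerization(p):
    return 1.0 / (1.0 - p)

for p in (0.50, 0.90, 0.95, 0.99, 0.999):
    print(f"conversion {p:.3f}  ->  Xn ~ {degree_of_polymerization(p):7.1f}")
# 95% conversion gives chains of only ~20 repeat units on average, while
# 99.9% conversion is needed to reach ~1000 - hence the need for precise
# stoichiometry and very high conversion in step-growth systems.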
Multiphoton polymerization using single pulses has also been demonstrated for fabrication of complex structures using a digital micromirror device. See also Cross-link Enzymatic polymerization In situ polymerization Metallocene Plasma polymerization Polymer characterization Polymer physics Reversible addition−fragmentation chain-transfer polymerization Ring-opening polymerization Sequence-controlled polymers Sol-gel References
Polymerization
[ "Chemistry", "Materials_science" ]
1,507
[ "Polymerization reactions", "Polymer chemistry" ]
24,032
https://en.wikipedia.org/wiki/Positron%20emission%20tomography
Positron emission tomography (PET) is a functional imaging technique that uses radioactive substances known as radiotracers to visualize and measure changes in metabolic processes, and in other physiological activities including blood flow, regional chemical composition, and absorption. Different tracers are used for various imaging purposes, depending on the target process within the body. For example: Fluorodeoxyglucose ([18F]FDG or FDG) is commonly used to detect cancer; [18F]Sodium fluoride (Na18F) is widely used for detecting bone formation; Oxygen-15 (15O) is sometimes used to measure blood flow. PET is a common imaging technique, a medical scintillography technique used in nuclear medicine. A radiopharmaceutical – a radioisotope attached to a drug – is injected into the body as a tracer. When the radiopharmaceutical undergoes beta plus decay, a positron is emitted, and when the positron interacts with an ordinary electron, the two particles annihilate and two gamma rays are emitted in opposite directions. These gamma rays are detected by two gamma cameras to form a three-dimensional image. PET scanners can incorporate a computed tomography scanner (CT) and are known as PET-CT scanners. PET scan images can be reconstructed using a CT scan performed using one scanner during the same session. One of the disadvantages of a PET scanner is its high initial cost and ongoing operating costs. Uses PET is both a medical and research tool used in pre-clinical and clinical settings. It is used heavily in the imaging of tumors and the search for metastases within the field of clinical oncology, and for the clinical diagnosis of certain diffuse brain diseases such as those causing various types of dementias. PET is a valuable research tool to learn and enhance our knowledge of the normal human brain, heart function, and support drug development. PET is also used in pre-clinical studies using animals. It allows repeated investigations into the same subjects over time, where subjects can act as their own control and substantially reduces the numbers of animals required for a given study. This approach allows research studies to reduce the sample size needed while increasing the statistical quality of its results. Physiological processes lead to anatomical changes in the body. Since PET is capable of detecting biochemical processes as well as expression of some proteins, PET can provide molecular-level information much before any anatomic changes are visible. PET scanning does this by using radiolabelled molecular probes that have different rates of uptake depending on the type and function of tissue involved. Regional tracer uptake in various anatomic structures can be visualized and relatively quantified in terms of injected positron emitter within a PET scan. PET imaging is best performed using a dedicated PET scanner. It is also possible to acquire PET images using a conventional dual-head gamma camera fitted with a coincidence detector. The quality of gamma-camera PET imaging is lower, and the scans take longer to acquire. However, this method allows a low-cost on-site solution to institutions with low PET scanning demand. An alternative would be to refer these patients to another center or relying on a visit by a mobile scanner. Alternative methods of medical imaging include single-photon emission computed tomography (SPECT), computed tomography (CT), magnetic resonance imaging (MRI) and functional magnetic resonance imaging (fMRI), and ultrasound. 
SPECT is an imaging technique similar to PET that uses radioligands to detect molecules in the body. SPECT is less expensive than PET but provides inferior image quality. Oncology PET scanning with the radiotracer [18F]fluorodeoxyglucose (FDG) is widely used in clinical oncology. FDG is a glucose analog that is taken up by glucose-using cells and phosphorylated by hexokinase (whose mitochondrial form is significantly elevated in rapidly growing malignant tumors). Metabolic trapping of the radioactive glucose molecule allows the PET scan to be utilized. The concentration of imaged FDG tracer indicates tissue metabolic activity, as it corresponds to regional glucose uptake. FDG is used to explore the possibility of cancer spreading to other body sites (cancer metastasis). These FDG PET scans for detecting cancer metastasis are the most common in standard medical care (representing 90% of current scans). The same tracer may also be used for the diagnosis of types of dementia. Less often, other radioactive tracers, usually but not always labelled with fluorine-18 (18F), are used to image the tissue concentration of different kinds of molecules of interest inside the body. A typical dose of FDG used in an oncological scan has an effective radiation dose of 7.6 mSv. Because the hydroxy group that is replaced by fluorine-18 to generate FDG is required for the next step in glucose metabolism in all cells, no further reactions occur in FDG. Furthermore, most tissues (with the notable exception of liver and kidneys) cannot remove the phosphate added by hexokinase. This means that FDG is trapped in any cell that takes it up until it decays, since phosphorylated sugars, due to their ionic charge, cannot exit from the cell. This results in intense radiolabeling of tissues with high glucose uptake, such as the normal brain, liver, kidneys, and most cancers, which have a higher glucose uptake than most normal tissue due to the Warburg effect. As a result, FDG-PET can be used for diagnosis, staging, and monitoring treatment of cancers, particularly in Hodgkin lymphoma, non-Hodgkin lymphoma, and lung cancer. A 2020 review of research on the use of PET for Hodgkin lymphoma found evidence that negative findings in interim PET scans are linked to higher overall survival and progression-free survival; however, the certainty of the available evidence was moderate for survival, and very low for progression-free survival. A few other isotopes and radiotracers are slowly being introduced into oncology for specific purposes. For example, 11C-labelled metomidate (11C-metomidate) has been used to detect tumors of adrenocortical origin. Also, fluorodopa (FDOPA) PET/CT (also called F-18-DOPA PET/CT) has proven to be a more sensitive alternative to the iobenguane (MIBG) scan for finding and localizing pheochromocytoma. Neuroimaging Neurology PET imaging with oxygen-15 indirectly measures blood flow to the brain. In this method, an increased radioactivity signal indicates increased blood flow, which is assumed to correlate with increased brain activity. Because of its 2-minute half-life, oxygen-15 must be piped directly from a medical cyclotron for such uses, which is difficult. PET imaging with FDG takes advantage of the fact that the brain is normally a rapid user of glucose. Standard FDG PET of the brain measures regional glucose use and can be used in neuropathological diagnosis. Brain pathologies such as Alzheimer's disease (AD) greatly decrease brain metabolism of both glucose and oxygen in tandem. 
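As a brief aside on how regional FDG uptake of this kind is commonly quantified, the sketch below computes a standardized uptake value (SUV). It is a minimal illustration with assumed, not measured, numbers.

```python
# Minimal sketch: standardized uptake value (SUV), a common way to
# quantify regional FDG uptake. All numbers here are illustrative assumptions.

def suv(tissue_activity_kbq_per_ml: float,
        injected_activity_mbq: float,
        body_weight_kg: float) -> float:
    """SUV = tissue activity concentration / (injected activity / body weight).

    With tissue activity in kBq/mL, injected dose in MBq and weight in kg,
    the units cancel (1 MBq/kg == 1 kBq/g, roughly 1 kBq/mL for unit-density
    tissue), giving the usual dimensionless SUV.
    """
    dose_per_weight = injected_activity_mbq / body_weight_kg  # MBq/kg == kBq/g
    return tissue_activity_kbq_per_ml / dose_per_weight

# Example: a lesion measuring 25 kBq/mL after injecting 370 MBq into a 70 kg patient
print(round(suv(25.0, 370.0, 70.0), 2))   # ~4.73, well above typical background uptake
```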
Because brain glucose metabolism is markedly reduced in AD, FDG PET of the brain may also be used to successfully differentiate Alzheimer's disease from other dementing processes, and also to make early diagnoses of Alzheimer's disease. The advantage of FDG PET for these uses is its much wider availability. Some fluorine-18 based radioactive tracers used for Alzheimer's include florbetapir, flutemetamol, Pittsburgh compound B (PiB) and florbetaben, which are all used to detect amyloid-beta plaques, a potential biomarker for Alzheimer's in the brain. PET imaging with FDG can also be used for localization of "seizure focus". A seizure focus will appear as hypometabolic during an interictal scan. Several radiotracers (i.e. radioligands) have been developed for PET that are ligands for specific neuroreceptor subtypes such as [11C]raclopride, [18F]fallypride and [18F]desmethoxyfallypride for dopamine D2/D3 receptors; [11C]McN5652 and [11C]DASB for serotonin transporters; [18F]mefway for serotonin 5HT1A receptors; and [18F]nifene for nicotinic acetylcholine receptors or enzyme substrates (e.g. 6-FDOPA for the AADC enzyme). These agents permit the visualization of neuroreceptor pools in the context of a plurality of neuropsychiatric and neurologic illnesses. PET may also be used for the diagnosis of hippocampal sclerosis, which causes epilepsy. FDG and the less common tracers flumazenil and MPPF have been explored for this purpose. If the sclerosis is unilateral (right hippocampus or left hippocampus), FDG uptake can be compared with that of the healthy side. Even if the diagnosis is difficult with MRI, it may be diagnosed with PET. The development of a number of novel probes for non-invasive, in-vivo PET imaging of neuroaggregates in the human brain has brought amyloid imaging close to clinical use. The earliest amyloid imaging probes included [18F]FDDNP, developed at the University of California, Los Angeles, and Pittsburgh compound B (PiB), developed at the University of Pittsburgh. These probes permit the visualization of amyloid plaques in the brains of Alzheimer's patients and could assist clinicians in making a positive clinical diagnosis of AD pre-mortem and aid in the development of novel anti-amyloid therapies. [11C]PMP (N-[11C]methylpiperidin-4-yl propionate) is a novel radiopharmaceutical used in PET imaging to determine the activity of the acetylcholinergic neurotransmitter system by acting as a substrate for acetylcholinesterase. Post-mortem examination of AD patients has shown decreased levels of acetylcholinesterase. [11C]PMP is used to map the acetylcholinesterase activity in the brain, which could allow for premortem diagnoses of AD and help to monitor AD treatments. Avid Radiopharmaceuticals has developed and commercialized a compound called florbetapir that uses the longer-lasting radionuclide fluorine-18 to detect amyloid plaques using PET scans. Neuropsychology or cognitive neuroscience To examine links between specific psychological processes or disorders and brain activity. Psychiatry Numerous compounds that bind selectively to neuroreceptors of interest in biological psychiatry have been radiolabeled with C-11 or F-18. Radioligands that bind to dopamine receptors (D1, D2, reuptake transporter), serotonin receptors (5HT1A, 5HT2A, reuptake transporter), opioid receptors (mu and kappa), cholinergic receptors (nicotinic and muscarinic) and other sites have been used successfully in studies with human subjects. 
Studies have been performed examining the state of these receptors in patients compared to healthy controls in schizophrenia, substance abuse, mood disorders and other psychiatric conditions. Stereotactic surgery and radiosurgery PET can also be used in image-guided surgery for the treatment of intracranial tumors, arteriovenous malformations and other surgically treatable conditions. Cardiology Cardiology, atherosclerosis and vascular disease study: FDG PET can help in identifying hibernating myocardium. However, the cost-effectiveness of PET for this role versus SPECT is unclear. FDG PET imaging of atherosclerosis to detect patients at risk of stroke is also feasible. Also, it can help test the efficacy of novel anti-atherosclerosis therapies. Infectious diseases Imaging infections with molecular imaging technologies can improve diagnosis and treatment follow-up. Clinically, PET has been widely used to image bacterial infections using FDG to identify the infection-associated inflammatory response. Three different PET contrast agents that have been developed to image bacterial infections in vivo are [18F]maltose, [18F]maltohexaose, and [18F]2-fluorodeoxysorbitol (FDS). FDS has the added benefit of being able to target only Enterobacteriaceae. Bio-distribution studies In pre-clinical trials, a new drug can be radiolabeled and injected into animals. Such scans are referred to as biodistribution studies. The information regarding drug uptake, retention and elimination over time can be obtained quickly and cost-effectively compared with the older technique of killing and dissecting the animals. Commonly, drug occupancy at a purported site of action can be inferred indirectly by competition studies between the unlabeled drug and radiolabeled compounds that bind with specificity to the site. A single radioligand can be used this way to test many potential drug candidates for the same target. A related technique involves scanning with radioligands that compete with an endogenous (naturally occurring) substance at a given receptor to demonstrate that a drug causes the release of the natural substance. Small animal imaging A miniature animal PET has been constructed that is small enough for a fully conscious rat to be scanned. This RatCAP (rat conscious animal PET) allows animals to be scanned without the confounding effects of anesthesia. PET scanners designed specifically for imaging rodents, often referred to as microPET, as well as scanners for small primates, are marketed for academic and pharmaceutical research. The scanners are based on microminiature scintillators and amplified avalanche photodiodes (APDs) through a system that uses single-chip silicon photomultipliers. In 2018, the UC Davis School of Veterinary Medicine became the first veterinary center to employ a small clinical PET scanner for clinical (rather than research) animal diagnosis. Because of cost as well as the marginal utility of detecting cancer metastases in companion animals (the primary use of this modality), veterinary PET scanning is expected to be rarely available in the immediate future. Musculo-skeletal imaging PET imaging has been used for imaging muscles and bones. FDG is the most commonly used tracer for imaging muscles, and NaF-F18 is the most widely used tracer for imaging bones. Muscles PET is a feasible technique for studying skeletal muscles during exercise. 
Also, PET can provide muscle activation data about deep-lying muscles (such as the vastus intermedius and the gluteus minimus) compared to techniques like electromyography, which can be used only on superficial muscles directly under the skin. However, a disadvantage is that PET provides no timing information about muscle activation because it has to be measured after the exercise is completed. This is due to the time it takes for FDG to accumulate in the activated muscles. Bones Together with [18F]sodium fluoride, PET for bone imaging has been in use for 60 years for measuring regional bone metabolism and blood flow using static and dynamic scans. Researchers have recently started using [18F]sodium fluoride to study bone metastasis as well. Safety PET scanning is non-invasive, but it does involve exposure to ionizing radiation. FDG, which is now the standard radiotracer used for PET neuroimaging and cancer patient management, has an effective radiation dose of 14 mSv. The amount of radiation in FDG is similar to the effective dose of spending one year in the American city of Denver, Colorado (12.4 mSv/year). For comparison, radiation dosage for other medical procedures ranges from 0.02 mSv for a chest X-ray to 6.5–8 mSv for a CT scan of the chest. Average civil aircrews are exposed to 3 mSv/year, and the whole-body occupational dose limit for nuclear energy workers in the US is 50 mSv/year. For scale, see Orders of magnitude (radiation). For PET-CT scanning, the radiation exposure may be substantial—around 23–26 mSv (for a 70 kg person—dose is likely to be higher for higher body weights). Operation Radionuclides and radiotracers Radionuclides are incorporated either into compounds normally used by the body such as glucose (or glucose analogues), water, or ammonia, or into molecules that bind to receptors or other sites of drug action. Such labelled compounds are known as radiotracers. PET technology can be used to trace the biologic pathway of any compound in living humans (and many other species as well), provided it can be radiolabeled with a PET isotope. Thus, the specific processes that can be probed with PET are virtually limitless, and radiotracers for new target molecules and processes are continuing to be synthesized. As of this writing there are already dozens in clinical use and hundreds applied in research. As of 2020, by far the most commonly used radiotracer in clinical PET scanning is the carbohydrate derivative FDG. This radiotracer is used in essentially all scans for oncology and most scans in neurology, and thus makes up the large majority of radiotracer (>95%) used in PET and PET-CT scanning. Due to the short half-lives of most positron-emitting radioisotopes, the radiotracers have traditionally been produced using a cyclotron in close proximity to the PET imaging facility. The half-life of fluorine-18 is long enough that radiotracers labeled with fluorine-18 can be manufactured commercially at offsite locations and shipped to imaging centers. Recently, rubidium-82 generators have become commercially available. These contain strontium-82, which decays by electron capture to produce positron-emitting rubidium-82. The use of positron-emitting isotopes of metals in PET scans has been reviewed, including elements not listed above, such as lanthanides. Immuno-PET The isotope 89Zr has been applied to the tracking and quantification of molecular antibodies with PET cameras (a method called "immuno-PET"). 
The biological half-life of antibodies is typically on the order of days, see daclizumab and erenumab by way of example. To visualize and quantify the distribution of such antibodies in the body, the PET isotope 89Zr is well suited because its physical half-life matches the typical biological half-life of antibodies, see table above. Emission To conduct the scan, a short-lived radioactive tracer isotope is injected into the living subject (usually into blood circulation). Each tracer atom has been chemically incorporated into a biologically active molecule. There is a waiting period while the active molecule becomes concentrated in tissues of interest. Then the subject is placed in the imaging scanner. The molecule most commonly used for this purpose is FDG, a sugar, for which the waiting period is typically an hour. During the scan, a record of tissue concentration is made as the tracer decays. As the radioisotope undergoes positron emission decay (also known as positive beta decay), it emits a positron, an antiparticle of the electron with opposite charge. The emitted positron travels in tissue for a short distance (typically less than 1 mm, but dependent on the isotope), during which time it loses kinetic energy, until it decelerates to a point where it can interact with an electron. The encounter annihilates both electron and positron, producing a pair of annihilation (gamma) photons moving in approximately opposite directions. These are detected when they reach a scintillator in the scanning device, creating a burst of light which is detected by photomultiplier tubes or silicon avalanche photodiodes (Si APD). The technique depends on simultaneous or coincident detection of the pair of photons moving in approximately opposite directions (they would be exactly opposite in their center of mass frame, but the scanner has no way to know this, and so has a built-in slight direction-error tolerance). Photons that do not arrive in temporal "pairs" (i.e. within a timing-window of a few nanoseconds) are ignored. Localization of the positron annihilation event The most significant fraction of electron–positron annihilations results in two 511 keV gamma photons being emitted at almost 180 degrees to each other. Hence, it is possible to localize their source along a straight line of coincidence (also called the line of response, or LOR). In practice, the LOR has a non-zero width as the emitted photons are not exactly 180 degrees apart. If the resolving time of the detectors is less than 500 picoseconds rather than about 10 nanoseconds, it is possible to localize the event to a segment of a chord, whose length is determined by the detector timing resolution. As the timing resolution improves, the signal-to-noise ratio (SNR) of the image will improve, requiring fewer events to achieve the same image quality. This technology is not yet common, but it is available on some new systems. Image reconstruction The raw data collected by a PET scanner are a list of 'coincidence events' representing near-simultaneous detection (typically, within a window of 6 to 12 nanoseconds of each other) of annihilation photons by a pair of detectors. Each coincidence event represents a line in space connecting the two detectors along which the positron emission occurred (i.e., the line of response (LOR)). 
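A minimal sketch of the coincidence grouping just described is given below; the detector IDs, timestamps, and the 10 ns window are illustrative assumptions rather than values from any particular scanner.

```python
# Sketch: grouping single photon detections into coincidence events.
# Detector IDs, timestamps and the 10 ns window are illustrative assumptions.

COINCIDENCE_WINDOW_NS = 10.0

def find_coincidences(singles):
    """singles: list of (timestamp_ns, detector_id), assumed time-sorted.
    Returns (detector_a, detector_b) pairs whose photons arrived within the
    timing window; unpaired singles are discarded, as described above."""
    events = []
    i = 0
    while i < len(singles) - 1:
        t1, d1 = singles[i]
        t2, d2 = singles[i + 1]
        if t2 - t1 <= COINCIDENCE_WINDOW_NS and d1 != d2:
            events.append((d1, d2))   # one line of response (LOR)
            i += 2                    # both photons consumed
        else:
            i += 1                    # unpaired single, ignored
    return events

singles = [(0.0, 3), (4.1, 41), (250.0, 7), (900.0, 12), (903.5, 55)]
print(find_coincidences(singles))     # [(3, 41), (12, 55)]
```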
Analytical techniques, much like the reconstruction of computed tomography (CT) and single-photon emission computed tomography (SPECT) data, are commonly used, although the data set collected in PET is much poorer than CT, so reconstruction techniques are more difficult. Coincidence events can be grouped into projection images, called sinograms. The sinograms are sorted by the angle of each view and tilt (for 3D images). The sinogram images are analogous to the projections captured by CT scanners, and can be reconstructed in a similar way. The statistics of data thereby obtained are much worse than those obtained through transmission tomography. A normal PET data set has millions of counts for the whole acquisition, while the CT can reach a few billion counts. This contributes to PET images appearing "noisier" than CT. Two major sources of noise in PET are scatter (a detected pair of photons, at least one of which was deflected from its original path by interaction with matter in the field of view, leading to the pair being assigned to an incorrect LOR) and random events (photons originating from two different annihilation events but incorrectly recorded as a coincidence pair because their arrival at their respective detectors occurred within a coincidence timing window). In practice, considerable pre-processing of the data is required – correction for random coincidences, estimation and subtraction of scattered photons, detector dead-time correction (after the detection of a photon, the detector must "cool down" again) and detector-sensitivity correction (for both inherent detector sensitivity and changes in sensitivity due to angle of incidence). Filtered back projection (FBP) has been frequently used to reconstruct images from the projections. This algorithm has the advantage of being simple while having a low requirement for computing resources. Disadvantages are that shot noise in the raw data is prominent in the reconstructed images, and areas of high tracer uptake tend to form streaks across the image. Also, FBP treats the data deterministically – it does not account for the inherent randomness associated with PET data, thus requiring all the pre-reconstruction corrections described above. Statistical, likelihood-based approaches: Statistical, likelihood-based iterative expectation-maximization algorithms such as the Shepp–Vardi algorithm are now the preferred method of reconstruction. These algorithms compute an estimate of the likely distribution of annihilation events that led to the measured data, based on statistical principles. The advantage is a better noise profile and resistance to the streak artifacts common with FBP, but the disadvantage is greater computer resource requirements. A further advantage of statistical image reconstruction techniques is that the physical effects that would need to be pre-corrected for when using an analytical reconstruction algorithm, such as scattered photons, random coincidences, attenuation and detector dead-time, can be incorporated into the likelihood model being used in the reconstruction, allowing for additional noise reduction. Iterative reconstruction has also been shown to result in improvements in the resolution of the reconstructed images, since more sophisticated models of the scanner physics can be incorporated into the likelihood model than those used by analytical reconstruction methods, allowing for improved quantification of the radioactivity distribution. 
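The following is a minimal sketch of the multiplicative expectation-maximization (MLEM) update of the Shepp–Vardi type described above, run on a toy problem; the random system matrix and problem sizes are assumptions purely for illustration.

```python
# Toy MLEM reconstruction. A[i, j] plays the role of the probability that an
# annihilation in voxel j is detected along LOR i; here it is random, purely
# for illustration of the multiplicative EM update.
import numpy as np

rng = np.random.default_rng(0)
n_lors, n_voxels = 200, 25
A = rng.random((n_lors, n_voxels)) * 0.1           # toy system matrix
true_image = rng.random(n_voxels) * 10
counts = rng.poisson(A @ true_image)               # simulated sinogram counts

x = np.ones(n_voxels)                              # non-negative initial estimate
sensitivity = A.sum(axis=0)                        # A^T 1
for _ in range(50):
    expected = A @ x                               # forward projection
    ratio = counts / np.maximum(expected, 1e-12)   # compare with measured counts
    x *= (A.T @ ratio) / sensitivity               # multiplicative EM update

print(float(np.corrcoef(x, true_image)[0, 1]))     # typically close to 1 on this toy problem
```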
Research has shown that Bayesian methods that involve a Poisson likelihood function and an appropriate prior probability (e.g., a smoothing prior leading to total variation regularization or a Laplacian distribution leading to ℓ1-based regularization in a wavelet or other domain), such as via Ulf Grenander's Sieve estimator or via Bayes penalty methods or via I.J. Good's roughness method, may yield superior performance to expectation-maximization-based methods which involve a Poisson likelihood function but do not involve such a prior. Attenuation correction: Quantitative PET imaging requires attenuation correction. In these systems attenuation correction is based on a transmission scan using a 68Ge rotating rod source. Transmission scans directly measure attenuation values at 511 keV. Attenuation occurs when photons emitted by the radiotracer inside the body are absorbed by intervening tissue between the detector and the emission of the photon. As different LORs must traverse different thicknesses of tissue, the photons are attenuated differentially. The result is that structures deep in the body are reconstructed as having falsely low tracer uptake. Contemporary scanners can estimate attenuation using integrated x-ray CT equipment, in place of earlier equipment that offered a crude form of CT using a gamma ray (positron emitting) source and the PET detectors. While attenuation-corrected images are generally more faithful representations, the correction process is itself susceptible to significant artifacts. As a result, both corrected and uncorrected images are always reconstructed and read together. 2D/3D reconstruction: Early PET scanners had only a single ring of detectors, hence the acquisition of data and subsequent reconstruction was restricted to a single transverse plane. More modern scanners now include multiple rings, essentially forming a cylinder of detectors. There are two approaches to reconstructing data from such a scanner: Treat each ring as a separate entity, so that only coincidences within a ring are detected, the image from each ring can then be reconstructed individually (2D reconstruction), or Allow coincidences to be detected between rings as well as within rings, then reconstruct the entire volume together (3D). 3D techniques have better sensitivity (because more coincidences are detected and used) and hence less noise, but are more sensitive to the effects of scatter and random coincidences, as well as requiring greater computer resources. The advent of sub-nanosecond timing resolution detectors affords better random coincidence rejection, thus favoring 3D image reconstruction. Time-of-flight (TOF) PET: For modern systems with a higher time resolution (roughly 3 nanoseconds) a technique called "time-of-flight" is used to improve the overall performance. Time-of-flight PET makes use of very fast gamma-ray detectors and a data processing system which can more precisely decide the difference in time between the detection of the two photons. It is still impossible to localize the point of origin of the annihilation event exactly (currently only to within about 10 cm), so image reconstruction is still needed. The TOF technique gives a remarkable improvement in image quality, especially in signal-to-noise ratio. Combination of PET with CT or MRI PET scans are increasingly read alongside CT or MRI scans, with the combination (co-registration) giving both anatomic and metabolic information (i.e., what the structure is, and what it is doing biochemically). 
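Before continuing with combined PET-CT, a brief numerical aside on the time-of-flight localization described above; the timing values are illustrative assumptions.

```python
# Sketch: time-of-flight (TOF) localization along a line of response (LOR).
# delta_x = c * delta_t / 2 gives the offset of the annihilation point from the
# LOR midpoint; the timing resolution sets how precisely that point is known.

C_MM_PER_NS = 299.792458          # speed of light in mm/ns

def tof_offset_mm(delta_t_ns: float) -> float:
    """Offset from the LOR midpoint given the photon arrival-time difference."""
    return 0.5 * C_MM_PER_NS * delta_t_ns

def tof_uncertainty_mm(timing_resolution_ns: float) -> float:
    """Positional uncertainty implied by the coincidence timing resolution."""
    return 0.5 * C_MM_PER_NS * timing_resolution_ns

print(round(tof_offset_mm(0.30), 1))        # 0.30 ns difference -> ~45 mm from midpoint
print(round(tof_uncertainty_mm(0.60), 1))   # ~600 ps resolution -> ~9 cm segment, as quoted above
```

Returning to combined imaging: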
Because PET imaging is most useful in combination with anatomical imaging, such as CT, modern PET scanners are now available with integrated high-end multi-detector-row CT scanners (PET-CT). Because the two scans can be performed in immediate sequence during the same session, with the patient not changing position between the two types of scans, the two sets of images are more precisely registered, so that areas of abnormality on the PET imaging can be more perfectly correlated with anatomy on the CT images. This is very useful in showing detailed views of moving organs or structures with higher anatomical variation, which is more common outside the brain. At the Jülich Institute of Neurosciences and Biophysics, the world's largest PET-MRI device began operation in April 2009. A 9.4-tesla magnetic resonance tomograph (MRT) combined with a PET. Presently, only the head and brain can be imaged at these high magnetic field strengths. For brain imaging, registration of CT, MRI and PET scans may be accomplished without the need for an integrated PET-CT or PET-MRI scanner by using a device known as the N-localizer. Limitations The minimization of radiation dose to the subject is an attractive feature of the use of short-lived radionuclides. Besides its established role as a diagnostic technique, PET has an expanding role as a method to assess the response to therapy, in particular, cancer therapy, where the risk to the patient from lack of knowledge about disease progress is much greater than the risk from the test radiation. Since the tracers are radioactive, the elderly and pregnant are unable to use it due to risks posed by radiation. Limitations to the widespread use of PET arise from the high costs of cyclotrons needed to produce the short-lived radionuclides for PET scanning and the need for specially adapted on-site chemical synthesis apparatus to produce the radiopharmaceuticals after radioisotope preparation. Organic radiotracer molecules that will contain a positron-emitting radioisotope cannot be synthesized first and then the radioisotope prepared within them, because bombardment with a cyclotron to prepare the radioisotope destroys any organic carrier for it. Instead, the isotope must be prepared first, then the chemistry to prepare any organic radiotracer (such as FDG) accomplished very quickly, in the short time before the isotope decays. Few hospitals and universities are capable of maintaining such systems, and most clinical PET is supported by third-party suppliers of radiotracers that can supply many sites simultaneously. This limitation restricts clinical PET primarily to the use of tracers labelled with fluorine-18, which has a half-life of 110 minutes and can be transported a reasonable distance before use, or to rubidium-82 (used as rubidium-82 chloride) with a half-life of 1.27 minutes, which is created in a portable generator and is used for myocardial perfusion studies. In recent years a few on-site cyclotrons with integrated shielding and "hot labs" (automated chemistry labs that are able to work with radioisotopes) have begun to accompany PET units to remote hospitals. The presence of the small on-site cyclotron promises to expand in the future as the cyclotrons shrink in response to the high cost of isotope transportation to remote PET machines. In recent years the shortage of PET scans has been alleviated in the US, as rollout of radiopharmacies to supply radioisotopes has grown 30%/year. 
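A quick sketch of what the 110-minute fluorine-18 half-life mentioned above means for a prepared dose over a working day; the initial activity and the times are illustrative assumptions.

```python
# Sketch: decay of a prepared fluorine-18 dose over the working day.
# A(t) = A0 * 2 ** (-t / T_half), with T_half ~ 110 minutes as stated above.

F18_HALF_LIFE_MIN = 110.0

def remaining_activity_mbq(initial_mbq: float, elapsed_min: float) -> float:
    return initial_mbq * 2.0 ** (-elapsed_min / F18_HALF_LIFE_MIN)

# A 1000 MBq batch calibrated at the start of the day has decayed to roughly:
for hours in (0, 2, 4, 6, 8):
    print(hours, "h:", round(remaining_activity_mbq(1000.0, hours * 60.0)), "MBq")
# 0 h: 1000, 2 h: ~469, 4 h: ~220, 6 h: ~103, 8 h: ~49 MBq
```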
Because the half-life of fluorine-18 is about two hours, the prepared dose of a radiopharmaceutical bearing this radionuclide will undergo multiple half-lives of decay during the working day. This necessitates frequent recalibration of the remaining dose (determination of activity per unit volume) and careful planning with respect to patient scheduling. History The concept of emission and transmission tomography was introduced by David E. Kuhl, Luke Chapman and Roy Edwards in the late 1950s. Their work would lead to the design and construction of several tomographic instruments at Washington University School of Medicine and later at the University of Pennsylvania. In the 1960s and 70s tomographic imaging instruments and techniques were further developed by Michel Ter-Pogossian, Michael E. Phelps, Edward J. Hoffman and others at Washington University School of Medicine. Work by Gordon Brownell, Charles Burnham and their associates at the Massachusetts General Hospital beginning in the 1950s contributed significantly to the development of PET technology and included the first demonstration of annihilation radiation for medical imaging. Their innovations, including the use of light pipes and volumetric analysis, have been important in the deployment of PET imaging. In 1961, James Robertson and his associates at Brookhaven National Laboratory built the first single-plane PET scanner, nicknamed the "head-shrinker." One of the factors most responsible for the acceptance of positron imaging was the development of radiopharmaceuticals. In particular, the development of labeled 2-fluorodeoxy-D-glucose (FDG, first synthesized and described by two Czech scientists from Charles University in Prague in 1968) by the Brookhaven group under the direction of Al Wolf and Joanna Fowler was a major factor in expanding the scope of PET imaging. The compound was first administered to two normal human volunteers by Abass Alavi in August 1976 at the University of Pennsylvania. Brain images obtained with an ordinary (non-PET) nuclear scanner demonstrated the concentration of FDG in that organ. Later, the substance was used in dedicated positron tomographic scanners, to yield the modern procedure. The logical extension of positron instrumentation was a design using two 2-dimensional arrays. PC-I was the first instrument using this concept and was designed in 1968, completed in 1969 and reported in 1972. The first applications of PC-I in tomographic mode, as distinguished from the computed tomographic mode, were reported in 1970. It soon became clear to many of those involved in PET development that a circular or cylindrical array of detectors was the logical next step in PET instrumentation. Although many investigators took this approach, James Robertson and Zang-Hee Cho were the first to propose a ring system that has become the prototype of the current shape of PET. The first multislice cylindrical array PET scanner was completed in 1974 at the Mallinckrodt Institute of Radiology by the group led by Ter-Pogossian. The PET-CT scanner, attributed to David Townsend and Ronald Nutt, was named by Time as the medical invention of the year in 2000. Cost As of August 2008, Cancer Care Ontario reports that the current average incremental cost to perform a PET scan in the province is CA$1,000–1,200 per scan. This includes the cost of the radiopharmaceutical and a stipend for the physician reading the scan. In the United States, a PET scan is estimated to cost US$1,500–5,000. 
In England, the National Health Service reference cost (2015–2016) for an adult outpatient PET scan is £798. In Australia, as of July 2018, the Medicare Benefits Schedule Fee for whole body FDG PET ranges from A$953 to A$999, depending on the indication for the scan. Quality control The overall performance of PET systems can be evaluated by quality control tools such as the Jaszczak phantom. See also Diffuse optical imaging Hot cell (equipment used to produce the radiopharmaceuticals used in PET) Molecular imaging Neurotherapy References External links PET-CT atlas Harvard Medical School National Isotope Development Center—U.S. government source of radionuclides including those for PET—production, research, development, distribution, and information 3D nuclear medical imaging American inventions Medical physics Neuroimaging Radiation therapy Medicinal radiochemistry Armenian inventions Positron
Positron emission tomography
[ "Physics", "Chemistry" ]
7,593
[ "Electron", "Antimatter", "Applied and interdisciplinary physics", "Medicinal radiochemistry", "Positron emission tomography", "Medical physics", "Medicinal chemistry", "Positron", "Matter" ]
24,065
https://en.wikipedia.org/wiki/Population%20inversion
In physics, specifically statistical mechanics, a population inversion occurs when a system (such as a group of atoms or molecules) exists in a state in which more members of the system are in higher, excited states than in lower, unexcited energy states. It is called an "inversion" because in many familiar and commonly encountered physical systems, this is not possible. This concept is of fundamental importance in laser science because the production of a population inversion is a necessary step in the workings of a standard laser. Boltzmann distributions and thermal equilibrium To understand the concept of a population inversion, it is necessary to understand some thermodynamics and the way that light interacts with matter. To do so, it is useful to consider a very simple assembly of atoms forming a laser medium. Assume there is a group of N atoms, each of which is capable of being in one of two energy states: either The ground state, with energy E1; or The excited state, with energy E2, with E2 > E1. The number of these atoms which are in the ground state is given by N1, and the number in the excited state N2. Since there are N atoms in total, N1 + N2 = N. The energy difference between the two states, ΔE = E2 − E1, determines the characteristic frequency ν12 of light which will interact with the atoms; this is given by the relation E2 − E1 = ΔE = hν12, h being the Planck constant. If the group of atoms is in thermal equilibrium, it can be shown from Maxwell–Boltzmann statistics that the ratio of the number of atoms in each state is given by the ratio of two Boltzmann distributions, the Boltzmann factor: N2/N1 = (g2/g1) exp(−ΔE/(kT)), where T is the thermodynamic temperature of the group of atoms, k is the Boltzmann constant and g1 and g2 are the degeneracies of each state. It is instructive to calculate the ratio of the populations of the two states at room temperature (T ≈ 300 K) for an energy difference ΔE that corresponds to a frequency of visible light (ν ≈ 5 × 10^14 Hz). In this case ΔE = hν ≈ 2.07 eV, and kT ≈ 0.026 eV. Since ΔE ≫ kT, it follows that the argument of the exponential in the equation above is a large negative number, and as such N2/N1 is vanishingly small; i.e., there are almost no atoms in the excited state. When in thermal equilibrium, then, it is seen that the lower energy state is more populated than the higher energy state, and this is the normal state of the system. As T increases, the number of electrons in the high-energy state (N2) increases, but N2 never exceeds N1 for a system at thermal equilibrium; rather, at infinite temperature, the populations N2 and N1 become equal. In other words, a population inversion (N2/N1 > 1) can never exist for a system at thermal equilibrium. To achieve population inversion therefore requires pushing the system into a non-equilibrated state. Interaction of light with matter There are three types of possible interactions between a system of atoms and light that are of interest: Absorption If light (photons) of frequency ν12 passes through the group of atoms, there is a possibility of the light being absorbed by electrons which are in the ground state, which will cause them to be excited to the higher energy state. The rate of absorption is proportional to the radiation density of the light, and also to the number of atoms currently in the ground state, N1. Spontaneous emission If atoms are in the excited state, spontaneous decay events to the ground state will occur at a rate proportional to N2, the number of atoms in the excited state. 
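A quick numerical check of the thermal-equilibrium ratio computed above, using rounded constants:

```python
# Worked check of the Boltzmann ratio quoted above: N2/N1 = (g2/g1) exp(-dE / kT)
# for dE ~ 2.07 eV (visible light) at T ~ 300 K, taking g2 = g1.
import math

k_eV_per_K = 8.617e-5          # Boltzmann constant in eV/K
delta_E_eV = 2.07
T = 300.0

ratio = math.exp(-delta_E_eV / (k_eV_per_K * T))
print(ratio)                   # ~1.7e-35: essentially no atoms in the excited state
```

Returning to spontaneous emission: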
The energy difference between the two states ΔE21 is emitted from the atom as a photon of frequency ν21 as given by the frequency-energy relation above. The photons are emitted stochastically, and there is no fixed phase relationship between photons emitted from a group of excited atoms; in other words, spontaneous emission is incoherent. In the absence of other processes, the number of atoms in the excited state at time t is given by N2(t) = N2(0) exp(−t/τ21), where N2(0) is the number of excited atoms at time t = 0, and τ21 is the mean lifetime of the transition between the two states. Stimulated emission If an atom is already in the excited state, it may be perturbed by the passage of a photon that has a frequency ν21 corresponding to the energy gap ΔE of the excited state to ground state transition. In this case, the excited atom relaxes to the ground state, and it produces a second photon of frequency ν21. The original photon is not absorbed by the atom, and so the result is two photons of the same frequency. This process is known as stimulated emission. Specifically, an excited atom will act like a small electric dipole which will oscillate with the external field provided. One of the consequences of this oscillation is that it encourages electrons to decay to the lowest energy state. When this happens due to the presence of the electromagnetic field from a photon, a photon is released in the same phase and direction as the "stimulating" photon, and is called stimulated emission. The rate at which stimulated emission occurs is proportional to the number of atoms N2 in the excited state, and the radiation density of the light. The base probability of a photon causing stimulated emission in a single excited atom was shown by Albert Einstein to be exactly equal to the probability of a photon being absorbed by an atom in the ground state. Therefore, when the numbers of atoms in the ground and excited states are equal, the rate of stimulated emission is equal to the rate of absorption for a given radiation density. The critical detail of stimulated emission is that the induced photon has the same frequency and phase as the incident photon. In other words, the two photons are coherent. It is this property that allows optical amplification, and the production of a laser system. During the operation of a laser, all three light-matter interactions described above are taking place. Initially, atoms are energized from the ground state to the excited state by a process called pumping, described below. Some of these atoms decay via spontaneous emission, releasing incoherent light as photons of frequency ν. These photons are fed back into the laser medium, usually by an optical resonator. Some of these photons are absorbed by the atoms in the ground state, and those photons are lost to the laser process. However, some photons cause stimulated emission in excited-state atoms, releasing another coherent photon. In effect, this results in optical amplification. If the number of photons being amplified per unit time is greater than the number of photons being absorbed, then the net result is a continuously increasing number of photons being produced; the laser medium is said to have a gain of greater than unity. Recall from the descriptions of absorption and stimulated emission above that the rates of these two processes are proportional to the number of atoms in the ground and excited states, N1 and N2, respectively. 
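As an aside, the equality of the absorption and stimulated-emission probabilities noted above already limits how far pumping alone can shift the populations of a two-level system; the toy rate-equation sketch below (with arbitrary, illustrative parameter values) makes this concrete.

```python
# Toy rate equations for a two-level system driven by a radiation field,
# using equal absorption and stimulated-emission rates as described above.
# Parameter values are arbitrary illustrations.

N = 1.0e6            # total number of atoms
W = 5.0              # pumping rate (absorption = stimulated emission), 1/s
A = 1.0              # spontaneous emission rate, 1/tau21, in 1/s
dt, t_end = 1e-3, 10.0

N1, N2 = N, 0.0
t = 0.0
while t < t_end:
    absorption = W * N1
    stimulated = W * N2
    spontaneous = A * N2
    dN2 = (absorption - stimulated - spontaneous) * dt
    N1, N2 = N1 - dN2, N2 + dN2
    t += dt

# Steady state approaches W/(2W + A) ~ 0.45 of the atoms in the excited state;
# even as W grows very large this never exceeds 1/2, so pumping a two-level
# system on its own transition cannot produce an inversion.
print(N2 / N)
```

With that in mind, consider the possible relative populations: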
If the ground state has a higher population than the excited state (N1 > N2), then the absorption process dominates, and there is a net attenuation of photons. If the populations of the two states are the same (N1 = N2), the rate of absorption of light exactly balances the rate of emission; the medium is then said to be optically transparent. If the higher energy state has a greater population than the lower energy state (N1 < N2), then the emission process dominates, and light in the system undergoes a net increase in intensity. It is thus clear that to produce a faster rate of stimulated emissions than absorptions, it is required that the ratio of the populations of the two states is such that N2/N1 > 1; In other words, a population inversion is required for laser operation. Selection rules Many transitions involving electromagnetic radiation are strictly forbidden under quantum mechanics. The allowed transitions are described by so-called selection rules, which describe the conditions under which a radiative transition is allowed. For instance, transitions are only allowed if ΔS = 0, S being the total spin angular momentum of the system. In real materials, other effects, such as interactions with the crystal lattice, intervene to circumvent the formal rules by providing alternate mechanisms. In these systems, the forbidden transitions can occur, but usually at slower rates than allowed transitions. A classic example is phosphorescence where a material has a ground state with S = 0, an excited state with S = 0, and an intermediate state with S = 1. The transition from the intermediate state to the ground state by emission of light is slow because of the selection rules. Thus emission may continue after the external illumination is removed. In contrast fluorescence in materials is characterized by emission which ceases when the external illumination is removed. Transitions that do not involve the absorption or emission of radiation are not affected by selection rules. The radiationless transition between levels, such as between the excited S = 0 and S = 1 states, may proceed quickly enough to siphon off a portion of the S = 0 population before it spontaneously returns to the ground state. The existence of intermediate states in materials is essential to the technique of optical pumping of lasers (see below). Creating a population inversion A population inversion is required for laser operation, but cannot be achieved in the above theoretical group of atoms with two energy-levels when they are in thermal equilibrium. In fact, any method by which the atoms are directly and continuously excited from the ground state to the excited state (such as optical absorption) will eventually reach equilibrium with the de-exciting processes of spontaneous and stimulated emission. At best, an equal population of the two states, N1 = N2 = N/2, can be achieved, resulting in optical transparency but no net optical gain. Three-level lasers To achieve lasting non-equilibrium conditions, an indirect method of populating the excited state must be used. To understand how this is done, consider a slightly more realistic model, that of a three-level laser. Again consider a group of N atoms, this time with each atom able to exist in any of three energy states, levels 1, 2 and 3, with energies E1, E2, and E3, and populations N1, N2, and N3, respectively. Assume E1 < E2 < E3; that is, the energy of level 2 lies between that of the ground state and level 3. 
Initially, the system of atoms is at thermal equilibrium, and the majority of the atoms will be in the ground state, i.e., N1 ≈ N, N2 ≈ N3 ≈ 0. If the atoms are subjected to light of a frequency ν13 (where E3 − E1 = hν13), the process of optical absorption will excite electrons from the ground state to level 3. This process is called pumping, and does not necessarily always directly involve light absorption; other methods of exciting the laser medium, such as electrical discharge or chemical reactions, may be used. The level 3 is sometimes referred to as the pump level or pump band, and the energy transition as the pump transition, which is shown as the arrow marked P in the diagram on the right. Upon pumping the medium, an appreciable number of atoms will transition to level 3, such that N3 > 0. To have a medium suitable for laser operation, it is necessary that these excited atoms quickly decay to level 2. The energy released in this transition may be emitted as a photon (spontaneous emission); however, in practice the transition from level 3 to level 2 (labeled R in the diagram) is usually radiationless, with the energy being transferred to vibrational motion (heat) of the host material surrounding the atoms, without the generation of a photon. An electron in level 2 may decay by spontaneous emission to the ground state, releasing a photon of frequency ν12 (given by E2 − E1 = hν12), which is shown as the transition L, called the laser transition in the diagram. If the lifetime of this transition, τ21, is much longer than the lifetime of the radiationless transition τ32 (τ21 ≫ τ32, known as a favourable lifetime ratio), the population of level 3 will be essentially zero (N3 ≈ 0) and a population of excited-state atoms will accumulate in level 2 (N2 > 0). If over half the N atoms can be accumulated in this state, this will exceed the population of the ground state N1. A population inversion (N2 > N1) has thus been achieved between levels 1 and 2, and optical amplification at the frequency ν21 can be obtained. Because at least half the population of atoms must be excited from the ground state to obtain a population inversion, the laser medium must be very strongly pumped. This makes three-level lasers rather inefficient, despite being the first type of laser to be discovered (based on a ruby laser medium, by Theodore Maiman in 1960). A three-level system could also have a radiative transition between levels 3 and 2, and a non-radiative transition between levels 2 and 1. In this case, the pumping requirements are weaker. In practice, most lasers are four-level lasers, described below. Four-level laser Here, there are four energy levels, energies E1, E2, E3, E4, and populations N1, N2, N3, N4, respectively. The energies of each level are such that E1 < E2 < E3 < E4. In this system, the pumping transition P excites the atoms in the ground state (level 1) into the pump band (level 4). From level 4, the atoms again decay by a fast, non-radiative transition Ra into level 3. Since the lifetime of the laser transition L is long compared to that of Ra (τ32 ≫ τ43), a population accumulates in level 3 (the upper laser level), which may relax by spontaneous or stimulated emission into level 2 (the lower laser level). This level likewise has a fast, non-radiative decay Rb into the ground state. As before, the presence of a fast, radiationless decay transition results in the population of the pump band being quickly depleted (N4 ≈ 0). In a four-level system, any atom in the lower laser level E2 is also quickly de-excited, leading to a negligible population in that state (N2 ≈ 0). 
This is important, since any appreciable population accumulating in level 3, the upper laser level, will form a population inversion with respect to level 2. That is, as long as N3 > 0, then N3 > N2, and a population inversion is achieved. Thus optical amplification, and laser operation, can take place at a frequency of ν32 (E3 − E2 = hν32). Since only a few atoms must be excited into the upper laser level to form a population inversion, a four-level laser is much more efficient than a three-level one, and most practical lasers are of this type. In reality, many more than four energy levels may be involved in the laser process, with complex excitation and relaxation processes involved between these levels. In particular, the pump band may consist of several distinct energy levels, or a continuum of levels, which allow optical pumping of the medium over a wide range of wavelengths. Note that in both three- and four-level lasers, the energy of the pumping transition is greater than that of the laser transition. This means that, if the laser is optically pumped, the frequency of the pumping light must be greater than that of the resulting laser light. In other words, the pump wavelength is shorter than the laser wavelength. It is possible in some media to use multiple photon absorptions between multiple lower-energy transitions to reach the pump level; such lasers are called up-conversion lasers. While in many lasers the laser process involves the transition of atoms between different electronic energy states, as described in the model above, this is not the only mechanism that can result in laser action. For example, there are many common lasers (e.g., dye lasers, carbon dioxide lasers) where the laser medium consists of complete molecules, and energy states correspond to vibrational and rotational modes of oscillation of the molecules. This is the case with water masers, that occur in nature. In some media it is possible, by imposing an additional optical or microwave field, to use quantum coherence effects to reduce the likelihood of a ground-state to excited-state transition. This technique, known as lasing without inversion, allows optical amplification to take place without producing a population inversion between the two states. Other methods of creating a population inversion Stimulated emission was first observed in the microwave region of the electromagnetic spectrum, giving rise to the acronym MASER for Microwave Amplification by Stimulated Emission of Radiation. In the microwave region, the Boltzmann distribution of molecules among energy states is such that, at room temperature, all states are populated almost equally. To create a population inversion under these conditions, it is necessary to selectively remove some atoms or molecules from the system based on differences in properties. For instance, in a hydrogen maser, the well-known 21cm wave transition in atomic hydrogen, where the lone electron flips its spin state from parallel to the nuclear spin to antiparallel, can be used to create a population inversion because the parallel state has a magnetic moment and the antiparallel state does not. A strong inhomogeneous magnetic field will separate atoms in the higher energy state from a beam of mixed-state atoms. The separated population represents a population inversion that can exhibit stimulated emissions. See also Laser construction Negative temperature Quantum electronics References Svelto, Orazio (1998). Principles of Lasers, 4th ed. (trans. David Hanna), Springer. 
Laser science Statistical mechanics
Population inversion
[ "Physics" ]
3,654
[ "Statistical mechanics" ]
24,138
https://en.wikipedia.org/wiki/Proton%20decay
In particle physics, proton decay is a hypothetical form of particle decay in which the proton decays into lighter subatomic particles, such as a neutral pion and a positron. The proton decay hypothesis was first formulated by Andrei Sakharov in 1967. Despite significant experimental effort, proton decay has never been observed. If it does decay via a positron, the proton's half-life is constrained to be at least 1.67 × 10^34 years. According to the Standard Model, the proton, a type of baryon, is stable because baryon number (quark number) is conserved (under normal circumstances; see Chiral anomaly for an exception). Therefore, protons will not decay into other particles on their own, because they are the lightest (and therefore least energetic) baryon. Positron emission and electron capture—forms of radioactive decay in which a proton becomes a neutron—are not proton decay, since the proton interacts with other particles within the atom. Some beyond-the-Standard-Model grand unified theories (GUTs) explicitly break the baryon number symmetry, allowing protons to decay via the Higgs particle, magnetic monopoles, or new X bosons, with a half-life of 10^31 to 10^36 years. For comparison, the universe is roughly 1.4 × 10^10 years old. To date, all attempts to observe new phenomena predicted by GUTs (like proton decay or the existence of magnetic monopoles) have failed. Quantum tunnelling may be one of the mechanisms of proton decay. Quantum gravity (via virtual black holes and Hawking radiation) may also provide a venue for proton decay at magnitudes or lifetimes well beyond the GUT-scale decay range above, as well as extra dimensions in supersymmetry. There are theoretical methods of baryon violation other than proton decay, including interactions with changes of baryon and/or lepton number other than 1 (as required in proton decay). These include B and/or L violations of 2, 3, or other numbers, or B − L violation. Such examples include neutron oscillations and the electroweak sphaleron anomaly at high energies and temperatures, which can convert protons into antileptons or vice versa (a key factor in leptogenesis and non-GUT baryogenesis). Baryogenesis One of the outstanding problems in modern physics is the predominance of matter over antimatter in the universe. The universe, as a whole, seems to have a nonzero positive baryon number density – that is, there is more matter than antimatter. Since it is assumed in cosmology that the particles we see were created using the same physics we measure today, it would normally be expected that the overall baryon number should be zero, as matter and antimatter should have been created in equal amounts. This has led to a number of proposed mechanisms for symmetry breaking that favour the creation of normal matter (as opposed to antimatter) under certain conditions. This imbalance would have been exceptionally small, on the order of 1 in every 10^10 particles a small fraction of a second after the Big Bang, but after most of the matter and antimatter annihilated, what was left over was all the baryonic matter in the current universe, along with a much greater number of photons. Most grand unified theories explicitly break the baryon number symmetry, which would account for this discrepancy, typically invoking reactions mediated by very massive X bosons or massive Higgs bosons. 
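As an explicit check of the bookkeeping described above, the sketch below tallies electric charge, baryon number, and lepton number for the canonical hypothetical channel p → e+ + π0; the entries are standard quantum-number assignments.

```python
# Quantum-number bookkeeping for the hypothetical decay p -> e+ + pi0.
# Q = electric charge, B = baryon number, L = lepton number.

particles = {            # (Q, B, L)
    "p":   (+1, +1,  0),
    "e+":  (+1,  0, -1),  # the positron is an antilepton
    "pi0": ( 0,  0,  0),
}

def totals(names):
    return tuple(sum(particles[n][i] for n in names) for i in range(3))

initial, final = totals(["p"]), totals(["e+", "pi0"])
print("Q, B, L before:", initial)   # (1, 1, 0)
print("Q, B, L after: ", final)     # (1, 0, -1): B and L each change by one unit
print("B - L before/after:", initial[1] - initial[2], final[1] - final[2])  # 1 and 1: conserved
```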
The rate at which these baryon-number-violating reactions occur is governed largely by the mass of the intermediate X or Higgs particles, so by assuming these reactions are responsible for the majority of the baryon number seen today, a maximum mass can be calculated above which the rate would be too slow to explain the presence of matter today. These estimates predict that a large volume of material will occasionally exhibit a spontaneous proton decay. Experimental evidence Proton decay is one of the key predictions of the various grand unified theories (GUTs) proposed in the 1970s, another major one being the existence of magnetic monopoles. Both concepts have been the focus of major experimental physics efforts since the early 1980s. To date, all attempts to observe these events have failed; however, these experiments have been able to establish lower bounds on the half-life of the proton. Currently, the most precise results come from the Super-Kamiokande water Cherenkov radiation detector in Japan: a lower bound on the proton's half-life of 1.67 × 10^34 years via positron decay, and a similar bound via antimuon decay, close to a supersymmetry (SUSY) prediction of 10^34–10^36 years. An upgraded version, Hyper-Kamiokande, will probably have sensitivity 5–10 times better than Super-Kamiokande. Theoretical motivation Despite the lack of observational evidence for proton decay, some grand unification theories, such as the SU(5) Georgi–Glashow model and SO(10), along with their supersymmetric variants, require it. According to such theories, the proton has a half-life of about 10^31 to 10^36 years and decays into a positron and a neutral pion that itself immediately decays into two gamma ray photons: p+ → e+ + π0, followed by π0 → 2γ. Since a positron is an antilepton, this decay preserves B − L, which is conserved in most GUTs. Additional decay modes are available (e.g.: p+ → μ+ + π0), both directly and when catalyzed via interaction with GUT-predicted magnetic monopoles. Though this process has not been observed experimentally, it is within the realm of experimental testability for future planned very large-scale detectors on the megaton scale. Such detectors include the Hyper-Kamiokande. Early grand unification theories (GUTs) such as the Georgi–Glashow model, which were the first consistent theories to suggest proton decay, postulated that the proton's half-life would be at least 10^31 years. As further experiments and calculations were performed in the 1990s, it became clear that the proton half-life could not lie below 10^32 years. Many books from that period refer to this figure for the possible decay time for baryonic matter. More recent findings have pushed the minimum proton half-life to at least 10^34–10^35 years, ruling out the simpler GUTs (including minimal SU(5) / Georgi–Glashow) and most non-SUSY models. The maximum upper limit on proton lifetime (if unstable) is calculated at 6 × 10^39 years, a bound applicable to SUSY models, with a maximum for (minimal) non-SUSY GUTs at about 1.4 × 10^36 years. Although the phenomenon is referred to as "proton decay", the effect would also be seen in neutrons bound inside atomic nuclei. Free neutrons—those not inside an atomic nucleus—are already known to decay into protons (and an electron and an antineutrino) in a process called beta decay. Free neutrons have a half-life of about 10 minutes (610 s) due to the weak interaction. Neutrons bound inside a nucleus have an immensely longer half-life – apparently as great as that of the proton. Projected proton lifetimes The lifetime of the proton in vanilla SU(5) can be naively estimated as τp ∼ MX^4 / (αGUT^2 mp^5). 
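Evaluating this naive estimate numerically, in natural units with ħ = c = 1; the unification scale and coupling below are illustrative assumptions, not values taken from this article.

```python
# Numerical evaluation of the naive dimension-6 estimate quoted above:
# tau_p ~ M_X**4 / (alpha_GUT**2 * m_p**5), in natural units (hbar = c = 1).
# The chosen scales and couplings are illustrative assumptions.

HBAR_GEV_S = 6.582e-25           # 1 GeV^-1 expressed in seconds
SECONDS_PER_YEAR = 3.156e7

def proton_lifetime_years(M_X_GeV: float, alpha_GUT: float, m_p_GeV: float = 0.938) -> float:
    tau_inverse_gev = M_X_GeV**4 / (alpha_GUT**2 * m_p_GeV**5)   # lifetime in GeV^-1
    return tau_inverse_gev * HBAR_GEV_S / SECONDS_PER_YEAR

# Assumed unification scale ~1e15 GeV with alpha_GUT ~ 1/40:
print(f"{proton_lifetime_years(1e15, 1/40):.1e} years")   # ~5e31 years
# Raising the scale to ~2e16 GeV (typical of SUSY GUTs) lengthens the estimate by ~20**4:
print(f"{proton_lifetime_years(2e16, 1/25):.1e} years")   # ~3e36 years
```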
Supersymmetric GUTs with reunification scales around   yield a lifetime of around , roughly the current experimental lower bound. Decay operators Dimension-6 proton decay operators The dimension-6 proton decay operators are and where is the cutoff scale for the Standard Model. All of these operators violate both baryon number () and lepton number () conservation but not the combination  − . In GUT models, the exchange of an X or Y boson with the mass can lead to the last two operators suppressed by . The exchange of a triplet Higgs with mass can lead to all of the operators suppressed by . See Doublet–triplet splitting problem. Dimension-5 proton decay operators In supersymmetric extensions (such as the MSSM), we can also have dimension-5 operators involving two fermions and two sfermions caused by the exchange of a tripletino of mass . The sfermions will then exchange a gaugino or Higgsino or gravitino leaving two fermions. The overall Feynman diagram has a loop (and other complications due to strong interaction physics). This decay rate is suppressed by where is the mass scale of the superpartners. Dimension-4 proton decay operators In the absence of matter parity, supersymmetric extensions of the Standard Model can give rise to the last operator suppressed by the inverse square of sdown quark mass. This is due to the dimension-4 operators and . The proton decay rate is only suppressed by which is far too fast unless the couplings are very small. See also Age of the universe B − L Virtual black hole Weak hypercharge X and Y bosons Iron star References Further reading External links Proton decay at Super-Kamiokande Pictorial history of the IMB experiment Proton Nuclear physics Physics beyond the Standard Model Grand Unified Theory Supersymmetric quantum field theory Hypothetical processes Ultimate fate of the universe 1967 in science ja:陽子#陽子の崩壊
Proton decay
[ "Physics" ]
1,866
[ "Supersymmetric quantum field theory", "Hypotheses in physics", "Theoretical physics", "Unsolved problems in physics", "Particle physics", "Grand Unified Theory", "Nuclear physics", "Supersymmetry", "Physics beyond the Standard Model", "Symmetry" ]
24,446
https://en.wikipedia.org/wiki/Peptide%20bond
In organic chemistry, a peptide bond is an amide type of covalent chemical bond linking two consecutive alpha-amino acids from C1 (carbon number one) of one alpha-amino acid and N2 (nitrogen number two) of another, along a peptide or protein chain. It can also be called a eupeptide bond to distinguish it from an isopeptide bond, which is another type of amide bond between two amino acids. Synthesis When two amino acids form a dipeptide through a peptide bond, it is a type of condensation reaction. In this kind of condensation, two amino acids approach each other, with the non-side chain (C1) carboxylic acid moiety of one coming near the non-side chain (N2) amino moiety of the other. One loses a hydrogen and oxygen from its carboxyl group (COOH) and the other loses a hydrogen from its amino group (NH2). This reaction produces a molecule of water (H2O) and two amino acids joined by a peptide bond (−CO−NH−). The two joined amino acids are called a dipeptide. The amide bond is synthesized when the carboxyl group of one amino acid molecule reacts with the amino group of the other amino acid molecule, causing the release of a molecule of water (H2O), hence the process is a dehydration synthesis reaction. The formation of the peptide bond consumes energy, which, in organisms, is derived from ATP. Peptides and proteins are chains of amino acids held together by peptide bonds (and sometimes by a few isopeptide bonds). Organisms use enzymes to produce nonribosomal peptides, and ribosomes to produce proteins via reactions that differ in details from dehydration synthesis. Some peptides, like alpha-amanitin, are called ribosomal peptides as they are made by ribosomes, but many are nonribosomal peptides as they are synthesized by specialized enzymes rather than ribosomes. For example, the tripeptide glutathione is synthesized in two steps from free amino acids, by two enzymes: glutamate–cysteine ligase (forms an isopeptide bond, which is not a peptide bond) and glutathione synthetase (forms a peptide bond). Degradation A peptide bond can be broken by hydrolysis (the addition of water). The hydrolysis of peptide bonds in water releases 8–16 kJ/mol (2–4 kcal/mol) of Gibbs energy. This process is extremely slow, with the half life at 25 °C of between 350 and 600 years per bond. In living organisms, the process is normally catalyzed by enzymes known as peptidases or proteases, although there are reports of peptide bond hydrolysis caused by conformational strain as the peptide/protein folds into the native structure. This non-enzymatic process is thus not accelerated by transition state stabilization, but rather by ground-state destabilization. Spectra The wavelength of absorption for a peptide bond is 190–230 nm, which makes it particularly susceptible to UV radiation. Cis/trans isomers of the peptide group Significant delocalisation of the lone pair of electrons on the nitrogen atom gives the group a partial double-bond character. The partial double bond renders the amide group planar, occurring in either the cis or trans isomers. In the unfolded state of proteins, the peptide groups are free to isomerize and adopt both isomers; however, in the folded state, only a single isomer is adopted at each position (with rare exceptions). The trans form is preferred overwhelmingly in most peptide bonds (roughly 1000:1 ratio in trans:cis populations). 
However, X-Pro peptide groups tend to have a roughly 30:1 ratio, presumably because the symmetry between the Cα and Cδ atoms of proline makes the cis and trans isomers nearly equal in energy. The dihedral angle associated with the peptide group (defined by the four atoms Cα–C'–N–Cα) is denoted ω; ω ≈ 0° for the cis isomer (synperiplanar conformation) and ω ≈ 180° for the trans isomer (antiperiplanar conformation). Amide groups can isomerize about the C'–N bond between the cis and trans forms, albeit slowly (on the order of seconds at room temperature). The transition state requires that the partial double bond be broken, so that the activation energy is roughly 80 kJ/mol (20 kcal/mol). However, the activation energy can be lowered (and the isomerization catalyzed) by changes that favor the single-bonded form, such as placing the peptide group in a hydrophobic environment or donating a hydrogen bond to the nitrogen atom of an X-Pro peptide group. Both of these mechanisms for lowering the activation energy have been observed in peptidyl prolyl isomerases (PPIases), which are naturally occurring enzymes that catalyze the cis-trans isomerization of X-Pro peptide bonds. Conformational protein folding is usually much faster (typically 10–100 ms) than cis-trans isomerization (10–100 s). A nonnative isomer of some peptide groups can disrupt the conformational folding significantly, either slowing it or preventing it from even occurring until the native isomer is reached. However, not all peptide groups have the same effect on folding; nonnative isomers of other peptide groups may not affect folding at all. Chemical reactions Due to its resonance stabilization, the peptide bond is relatively unreactive under physiological conditions, even less so than similar compounds such as esters. Nevertheless, peptide bonds can undergo chemical reactions, usually through an attack by an electronegative atom on the carbonyl carbon, breaking the carbonyl double bond and forming a tetrahedral intermediate. This is the pathway followed in proteolysis and, more generally, in N–O acyl exchange reactions such as those of inteins. When the functional group attacking the peptide bond is a thiol, hydroxyl or amine, the resulting molecule may be called a cyclol or, more specifically, a thiacyclol, an oxacyclol or an azacyclol, respectively. See also The Proteolysis Map References Protein structure Chemical bonding
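Two of the figures quoted above translate directly into rates. A first-order half-life of 350–600 years for uncatalysed hydrolysis corresponds to a rate constant k = ln 2 / t½, and the ~80 kJ/mol barrier for cis–trans isomerization can be turned into a rate with the Eyring equation, k = (k_B·T/h)·exp(−ΔG‡/RT). The sketch below (Python) does both; treat it as an order-of-magnitude check under the stated assumptions, not a source of precise values.
<syntaxhighlight lang="python">
from math import log, exp

# Physical constants
R = 8.314          # J/(mol*K)
K_B = 1.381e-23    # J/K
H = 6.626e-34      # J*s
T = 298.15         # K (25 degrees Celsius)
SECONDS_PER_YEAR = 3.156e7

# 1) Uncatalysed peptide-bond hydrolysis: half-life -> first-order rate constant
for half_life_years in (350.0, 600.0):
    k_hyd = log(2) / (half_life_years * SECONDS_PER_YEAR)
    print(f"t1/2 = {half_life_years:>5.0f} yr  ->  k = {k_hyd:.1e} per second")

# 2) Cis-trans isomerization via the Eyring equation (assumed barrier ~80 kJ/mol)
delta_g = 80e3     # J/mol
k_iso = (K_B * T / H) * exp(-delta_g / (R * T))
print(f"isomerization: k = {k_iso:.2f} per second, timescale = {1 / k_iso:.0f} s")
</syntaxhighlight>
The hydrolysis constant of roughly 10⁻¹¹–10⁻¹⁰ s⁻¹ underscores how much rate acceleration peptidases provide, and the ~10-second isomerization timescale is consistent with the statement above that amide groups interconvert slowly at room temperature.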
Peptide bond
[ "Physics", "Chemistry", "Materials_science" ]
1,337
[ "Condensed matter physics", "nan", "Structural biology", "Chemical bonding", "Protein structure" ]
24,530
https://en.wikipedia.org/wiki/PH
In chemistry, pH ( ), also referred to as acidity or basicity, historically denotes "potential of hydrogen" (or "power of hydrogen"). It is a logarithmic scale used to specify the acidity or basicity of aqueous solutions. Acidic solutions (solutions with higher concentrations of hydrogen () ions) are measured to have lower pH values than basic or alkaline solutions. The pH scale is logarithmic and inversely indicates the activity of hydrogen ions in the solution where [H+] is the equilibrium molar concentration of H+ (in M = mol/L) in the solution. At 25 °C (77 °F), solutions of which the pH is less than 7 are acidic, and solutions of which the pH is greater than 7 are basic. Solutions with a pH of 7 at 25 °C are neutral (i.e. have the same concentration of H+ ions as OH− ions, i.e. the same as pure water). The neutral value of the pH depends on the temperature and is lower than 7 if the temperature increases above 25 °C. The pH range is commonly given as zero to 14, but a pH value can be less than 0 for very concentrated strong acids or greater than 14 for very concentrated strong bases. The pH scale is traceable to a set of standard solutions whose pH is established by international agreement. Primary pH standard values are determined using a concentration cell with transference by measuring the potential difference between a hydrogen electrode and a standard electrode such as the silver chloride electrode. The pH of aqueous solutions can be measured with a glass electrode and a pH meter or a color-changing indicator. Measurements of pH are important in chemistry, agronomy, medicine, water treatment, and many other applications. History In 1909, the Danish chemist Søren Peter Lauritz Sørensen introduced the concept of pH at the Carlsberg Laboratory, originally using the notation "pH•", with H• as a subscript to the lowercase p. The concept was later revised in 1924 to the modern pH to accommodate definitions and measurements in terms of electrochemical cells.For the sign p, I propose the name 'hydrogen ion exponent' and the symbol pH•. Then, for the hydrogen ion exponent (pH•) of a solution, the negative value of the Briggsian logarithm of the related hydrogen ion normality factor is to be understood.Sørensen did not explain why he used the letter p, and the exact meaning of the letter is still disputed. Sørensen described a way of measuring pH using potential differences, and it represents the negative power of 10 in the concentration of hydrogen ions. The letter p could stand for the French puissance, German Potenz, or Danish potens, all meaning "power", or it could mean "potential". All of these words start with the letter p in French, German, and Danish, which were the languages in which Sørensen published: Carlsberg Laboratory was French-speaking; German was the dominant language of scientific publishing; Sørensen was Danish. He also used the letter q in much the same way elsewhere in the paper, and he might have arbitrarily labelled the test solution "p" and the reference solution "q"; these letters are often paired with e4 then e5. Some literature sources suggest that "pH" stands for the Latin term pondus hydrogenii (quantity of hydrogen) or potentia hydrogenii (power of hydrogen), although this is not supported by Sørensen's writings. 
In modern chemistry, the p stands for "the negative decimal logarithm of", and is used in the term pKa for acid dissociation constants, so pH is "the negative decimal logarithm of H+ ion concentration", while pOH is "the negative decimal logarithm of OH− ion concentration". Bacteriologist Alice Catherine Evans, who influenced dairying and food safety, credited William Mansfield Clark and colleagues, including herself, with developing pH measuring methods in the 1910s, which had a wide influence on laboratory and industrial use thereafter. In her memoir, she does not mention how much, or how little, Clark and colleagues knew about Sørensen's work a few years prior. She said:In these studies [of bacterial metabolism] Dr. Clark's attention was directed to the effect of acid on the growth of bacteria. He found that it is the intensity of the acid in terms of hydrogen-ion concentration that affects their growth. But existing methods of measuring acidity determined the quantity, not the intensity, of the acid. Next, with his collaborators, Dr. Clark developed accurate methods for measuring hydrogen-ion concentration. These methods replaced the inaccurate titration method of determining the acid content in use in biologic laboratories throughout the world. Also they were found to be applicable in many industrial and other processes in which they came into wide usage.The first electronic method for measuring pH was invented by Arnold Orville Beckman, a professor at the California Institute of Technology in 1934. It was in response to a request from the local citrus grower Sunkist, which wanted a better method for quickly testing the pH of lemons they were picking from their nearby orchards. Definition pH The pH of a solution is defined as the decimal logarithm of the reciprocal of the hydrogen ion activity, aH+. Mathematically, pH is expressed as: For example, for a solution with a hydrogen ion activity of (i.e., the concentration of hydrogen ions), the pH of the solution can be calculated as follows: The concept of pH was developed because ion-selective electrodes, which are used to measure pH, respond to activity. The electrode potential, E, follows the Nernst equation for the hydrogen ion, which can be expressed as: where E is a measured potential, E0 is the standard electrode potential, R is the molar gas constant, T is the thermodynamic temperature, F is the Faraday constant. For , the number of electrons transferred is one. The electrode potential is proportional to pH when pH is defined in terms of activity. The precise measurement of pH is presented in International Standard ISO 31-8 as follows: A galvanic cell is set up to measure the electromotive force (e.m.f.) between a reference electrode and an electrode sensitive to the hydrogen ion activity when they are both immersed in the same aqueous solution. The reference electrode may be a silver chloride electrode or a calomel electrode, and the hydrogen-ion selective electrode is a standard hydrogen electrode. Firstly, the cell is filled with a solution of known hydrogen ion activity and the electromotive force, ES, is measured. Then the electromotive force, EX, of the same cell containing the solution of unknown pH is measured. The difference between the two measured electromotive force values is proportional to pH. This method of calibration avoids the need to know the standard electrode potential. The proportionality constant, 1/z, is ideally equal to , the "Nernstian slope". 
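A minimal numerical sketch of the relationships just described: pH as the negative decimal logarithm of the hydrogen ion activity, the ideal "Nernstian slope" RT ln 10 / F, and the operational comparison of a standard and an unknown cell EMF. The buffer pH and the EMF readings used below are made-up illustrative numbers, not measured values.
<syntaxhighlight lang="python">
from math import log, log10

R = 8.314        # J/(mol*K)
F = 96485.0      # C/mol
T = 298.15       # K (25 degrees Celsius)

def ph_from_activity(a_h):
    """pH = -log10(hydrogen ion activity)."""
    return -log10(a_h)

def nernst_slope(temperature=T):
    """Ideal 'Nernstian slope' R*T*ln(10)/F, in volts per pH unit."""
    return R * temperature * log(10) / F

def ph_operational(ph_standard, emf_standard, emf_unknown, temperature=T):
    """Operational pH in the IUPAC style:
    pH(X) = pH(S) + (E_S - E_X) * F / (R * T * ln 10)."""
    return ph_standard + (emf_standard - emf_unknown) / nernst_slope(temperature)

print(f"pH for a_H+ = 5e-6:      {ph_from_activity(5e-6):.2f}")       # about 5.30
print(f"Nernst slope at 25 C:    {nernst_slope():.5f} V per pH unit")  # about 0.05916
# Illustrative EMF readings (volts) for a pH 7.00 standard buffer and an unknown:
print(f"operational pH estimate: {ph_operational(7.00, 0.000, 0.0592):.2f}")
</syntaxhighlight>
At 25 °C the slope evaluates to about 59.2 mV per pH unit, which is the factor that the "slope" control on a pH meter adjusts during the calibration procedure described below.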
In practice, a glass electrode is used instead of the cumbersome hydrogen electrode. A combined glass electrode has an in-built reference electrode. It is calibrated against Buffer solutions of known hydrogen ion () activity proposed by the International Union of Pure and Applied Chemistry (IUPAC). Two or more buffer solutions are used in order to accommodate the fact that the "slope" may differ slightly from ideal. To calibrate the electrode, it is first immersed in a standard solution, and the reading on a pH meter is adjusted to be equal to the standard buffer's value. The reading from a second standard buffer solution is then adjusted using the "slope" control to be equal to the pH for that solution. Further details, are given in the IUPAC recommendations. When more than two buffer solutions are used the electrode is calibrated by fitting observed pH values to a straight line with respect to standard buffer values. Commercial standard buffer solutions usually come with information on the value at 25 °C and a correction factor to be applied for other temperatures. The pH scale is logarithmic and therefore pH is a dimensionless quantity. p[H] This was the original definition of Sørensen in 1909, which was superseded in favor of pH in 1924. [H] is the concentration of hydrogen ions, denoted [] in modern chemistry. More correctly, the thermodynamic activity of in dilute solution should be replaced by []/c0, where the standard state concentration c0 = 1 mol/L. This ratio is a pure number whose logarithm can be defined. It is possible to measure the concentration of hydrogen ions directly using an electrode calibrated in terms of hydrogen ion concentrations. One common method is to titrate a solution of known concentration of a strong acid with a solution of known concentration of strong base in the presence of a relatively high concentration of background electrolyte. By knowing the concentrations of the acid and base, the concentration of hydrogen ions can be calculated and the measured potential can be correlated with concentrations. The calibration is usually carried out using a Gran plot. This procedure makes the activity of hydrogen ions equal to the numerical value of concentration. The glass electrode (and other Ion selective electrodes) should be calibrated in a medium similar to the one being investigated. For instance, if one wishes to measure the pH of a seawater sample, the electrode should be calibrated in a solution resembling seawater in its chemical composition. The difference between p[H] and pH is quite small, and it has been stated that pH = p[H] + 0.04. However, it is common practice to use the term "pH" for both types of measurement. pOH pOH is sometimes used as a measure of the concentration of hydroxide ions, . By definition, pOH is the negative logarithm (to the base 10) of the hydroxide ion concentration (mol/L). pOH values can be derived from pH measurements and vice-versa. The concentration of hydroxide ions in water is related to the concentration of hydrogen ions by where KW is the self-ionization constant of water. Taking Logarithms, So, at room temperature, pOH ≈ 14 − pH. However this relationship is not strictly valid in other circumstances, such as in measurements of soil alkalinity. Measurement pH Indicators pH can be measured using indicators, which change color depending on the pH of the solution they are in. By comparing the color of a test solution to a standard color chart, the pH can be estimated to the nearest whole number. 
For more precise measurements, the color can be measured using a colorimeter or spectrophotometer. A Universal indicator is a mixture of several indicators that can provide a continuous color change over a range of pH values, typically from about pH 2 to pH 10. Universal indicator paper is made from absorbent paper that has been impregnated with a universal indicator. An alternative method of measuring pH is using an electronic pH meter, which directly measures the voltage difference between a pH-sensitive electrode and a reference electrode. Non-aqueous solutions pH values can be measured in non-aqueous solutions, but they are based on a different scale from aqueous pH values because the standard states used for calculating hydrogen ion concentrations (activities) are different. The hydrogen ion activity, aH+, is defined as: where μH+ is the chemical potential of the hydrogen ion, is its chemical potential in the chosen standard state, R is the molar gas constant and T is the thermodynamic temperature. Therefore, pH values on the different scales cannot be compared directly because of differences in the solvated proton ions, such as lyonium ions, which require an insolvent scale that involves the transfer activity coefficient of hydronium/lyonium ion. pH is an example of an acidity function, but others can be defined. For example, the Hammett acidity function, H0, has been developed in connection with Superacids. Unified absolute pH scale In 2010, a new approach to measuring pH was proposed, called the unified absolute pH scale. This approach allows for a common reference standard to be used across different solutions, regardless of their pH range. The unified absolute pH scale is based on the absolute chemical potential of the proton, as defined by the Lewis acid–base theory. This scale applies to liquids, gases, and even solids. The advantages of the unified absolute pH scale include consistency, accuracy, and applicability to a wide range of sample types. It is precise and versatile because it serves as a common reference standard for pH measurements. However, implementation efforts, compatibility with existing data, complexity, and potential costs are some challenges. Extremes of pH measurements The measurement of pH can become difficult at extremely acidic or alkaline conditions, such as below pH 2.5 (ca. 0.003 mol/dm3 acid) or above pH 10.5 (above ca. 0.0003  mol/dm3 alkaline). This is due to the breakdown of the Nernst equation in such conditions when using a glass electrode. Several factors contribute to this problem. First, liquid junction potentials may not be independent of pH. Second, the high ionic strength of concentrated solutions can affect the electrode potentials. At high pH the glass electrode may be affected by "alkaline error", because the electrode becomes sensitive to the concentration of cations such as and in the solution. To overcome these problems, specially constructed electrodes are available. Runoff from mines or mine tailings can produce some extremely low pH values, down to −3.6. Applications Pure water has a pH of 7 at 25 °C, meaning it is neutral. When an acid is dissolved in water, the pH will be less than 7, while a base, or alkali, will have a pH greater than 7. A strong acid, such as hydrochloric acid, at concentration 1 mol dm−3 has a pH of 0, while a strong alkali like sodium hydroxide, at the same concentration, has a pH of 14. 
Since pH is a logarithmic scale, a difference of one in pH is equivalent to a tenfold difference in hydrogen ion concentration. Neutrality is not exactly 7 at 25 °C, but 7 serves as a good approximation in most cases. Neutrality occurs when the concentration of hydrogen ions ([]) equals the concentration of hydroxide ions ([]), or when their activities are equal. Since self-ionization of water holds the product of these concentration [] × [] = Kw, it can be seen that at neutrality [] = [] = , or pH = pKw/2. pKw is approximately 14 but depends on ionic strength and temperature, and so the pH of neutrality does also. Pure water and a solution of NaCl in pure water are both neutral, since dissociation of water produces equal numbers of both ions. However the pH of the neutral NaCl solution will be slightly different from that of neutral pure water because the hydrogen and hydroxide ions' activity is dependent on ionic strength, so Kw varies with ionic strength. When pure water is exposed to air, it becomes mildly acidic. This is because water absorbs carbon dioxide from the air, which is then slowly converted into bicarbonate and hydrogen ions (essentially creating carbonic acid). pH in soil The United States Department of Agriculture Natural Resources Conservation Service, formerly Soil Conservation Service classifies soil pH ranges as follows: Topsoil pH is influenced by soil parent material, erosional effects, climate and vegetation. A recent map of topsoil pH in Europe shows the alkaline soils in Mediterranean, Hungary, East Romania, North France. Scandinavian countries, Portugal, Poland and North Germany have more acid soils. pH in plants Plants contain pH-dependent pigments that can be used as pH indicators, such as those found in hibiscus, red cabbage (anthocyanin), and grapes (red wine). Citrus fruits have acidic juice primarily due to the presence of citric acid, while other carboxylic acids can be found in various living systems. The protonation state of phosphate derivatives, including ATP, is pH-dependent. Hemoglobin, an oxygen-transport enzyme, is also affected by pH in a phenomenon known as the Root effect. pH in the ocean The pH of seawater plays an important role in the ocean's carbon cycle. There is evidence of ongoing ocean acidification (meaning a drop in pH value): Between 1950 and 2020, the average pH of the ocean surface fell from approximately 8.15 to 8.05. Carbon dioxide emissions from human activities are the primary cause of ocean acidification, with atmospheric carbon dioxide (CO2) levels exceeding 410 ppm (in 2020). CO2 from the atmosphere is absorbed by the oceans. This produces carbonic acid (H2CO3) which dissociates into a bicarbonate ion () and a hydrogen ion (H+). The presence of free hydrogen ions (H+) lowers the pH of the ocean. Three pH scales in oceanography The measurement of pH in seawater is complicated by the chemical properties of seawater, and three distinct pH scales exist in chemical oceanography. In practical terms, the three seawater pH scales differ in their pH values up to 0.10, differences that are much larger than the accuracy of pH measurements typically required, in particular, in relation to the ocean's carbonate system. Since it omits consideration of sulfate and fluoride ions, the free scale is significantly different from both the total and seawater scales. Because of the relative unimportance of the fluoride ion, the total and seawater scales differ only very slightly. 
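Because the scale is logarithmic, the surface-ocean change quoted above (roughly pH 8.15 to 8.05 between 1950 and 2020) implies a disproportionately large change in hydrogen ion activity. A small sketch of that arithmetic (Python); the pH values are the approximate figures from the text, not new data.
<syntaxhighlight lang="python">
def relative_h_increase(ph_before, ph_after):
    """Fractional increase in hydrogen ion activity implied by a drop in pH."""
    return 10 ** (ph_before - ph_after) - 1.0

change = relative_h_increase(8.15, 8.05)
print(f"[H+] increase for 8.15 -> 8.05: {change:.1%}")   # about +26%

# For comparison, a full pH unit is a tenfold change in hydrogen ion activity:
print(f"one full pH unit:               {relative_h_increase(8.0, 7.0):.0%}")
</syntaxhighlight>
A drop of 0.1 pH unit therefore corresponds to roughly a 26% increase in hydrogen ion activity, which is why seemingly small shifts in ocean pH matter for the carbonate system.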
As part of its operational definition of the pH scale, the IUPAC defines a series of buffer solutions across a range of pH values (often denoted with National Bureau of Standards (NBS) or National Institute of Standards and Technology (NIST) designation). These solutions have a relatively low ionic strength (≈ 0.1) compared to that of seawater (≈ 0.7), and, as a consequence, are not recommended for use in characterizing the pH of seawater, since the ionic strength differences cause changes in electrode potential. To resolve this problem, an alternative series of buffers based on artificial seawater was developed. This new series resolves the problem of ionic strength differences between samples and the buffers, and the new pH scale is referred to as the total scale, often denoted as pHT. The total scale was defined using a medium containing sulfate ions. These ions experience protonation, + , such that the total scale includes the effect of both protons (free hydrogen ions) and hydrogen sulfate ions: []T = []F + [] An alternative scale, the free scale, often denoted pHF, omits this consideration and focuses solely on []F, in principle making it a simpler representation of hydrogen ion concentration. Only []T can be determined, therefore []F must be estimated using the [] and the stability constant of , : []F = []T − [] = []T ( 1 + [] / K )^−1 However, it is difficult to estimate K in seawater, limiting the utility of the otherwise more straightforward free scale. Another scale, known as the seawater scale, often denoted pHSWS, takes account of a further protonation relationship between hydrogen ions and fluoride ions, + ⇌ HF, resulting in the following expression for []SWS: []SWS = []F + [] + [HF] However, the advantage of considering this additional complexity is dependent upon the abundance of fluoride in the medium. In seawater, for instance, sulfate ions occur at much greater concentrations (> 400 times) than those of fluoride. As a consequence, for most practical purposes, the difference between the total and seawater scales is very small. The following three equations summarize the three scales of pH: pHF = −log10[]F pHT = −log10([]F + []) = −log10[]T pHSWS = −log10([]F + [] + [HF]) = −log10[]SWS pH in food The pH level of food influences its flavor, texture, and shelf life. Acidic foods, such as citrus fruits, tomatoes, and vinegar, typically have a pH below 4.6 with a sharp and tangy taste, while basic foods taste bitter or soapy. Maintaining the appropriate pH in foods is essential for preventing the growth of harmful microorganisms. The alkalinity of vegetables such as spinach and kale can also influence their texture and color during cooking. The pH also influences the Maillard reaction, which is responsible for the browning of food during cooking, impacting both flavor and appearance. pH of various body fluids
{| class="wikitable"
|+ pH of various body fluids
|-
! Compartment !! pH
|-
| Gastric acid || 1.5–3.5
|-
| Lysosomes || 4.5
|-
| Human skin || 4.7
|-
| Granules of chromaffin cells || 5.5
|-
| Urine || 6.0
|-
| Breast milk || 7.0–7.45
|-
| Cytosol || 7.2
|-
| Blood (natural pH) || 7.34–7.45
|-
| Cerebrospinal fluid (CSF) || 7.5
|-
| Mitochondrial matrix || 7.5
|-
| Pancreas secretions || 8.1
|}
In living organisms, the pH of various body fluids, cellular compartments, and organs is tightly regulated to maintain a state of acid-base balance known as acid–base homeostasis.
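Returning briefly to the three oceanographic scales summarized above: given the free hydrogen ion concentration and the total sulfate and fluoride content of a sample, pHF, pHT, and pHSWS follow directly from the definitions. The sketch below (Python) uses rough, illustrative values for seawater composition and for the dissociation constants of hydrogen sulfate and hydrogen fluoride; real carbonate-system software uses carefully determined constants.
<syntaxhighlight lang="python">
from math import log10

def seawater_ph_scales(h_free, s_total, k_hso4, f_total, k_hf):
    """Free, total and seawater pH from [H+]F and the speciation of sulfate
    and fluoride (all concentrations in mol per kg of solution).

    k_hso4 and k_hf are dissociation constants, so for example
    [HSO4-] = s_total * h_free / (k_hso4 + h_free).
    """
    hso4 = s_total * h_free / (k_hso4 + h_free)
    hf = f_total * h_free / (k_hf + h_free)
    ph_free = -log10(h_free)
    ph_total = -log10(h_free + hso4)
    ph_sws = -log10(h_free + hso4 + hf)
    return ph_free, ph_total, ph_sws

# Illustrative inputs (order of magnitude only): [H+]F ~ 1e-8 mol/kg,
# total sulfate ~ 0.028 mol/kg, total fluoride ~ 7e-5 mol/kg.
ph_f, ph_t, ph_sws = seawater_ph_scales(1e-8, 0.028, 0.10, 7e-5, 2.3e-3)
print(f"pH_F = {ph_f:.3f}, pH_T = {ph_t:.3f}, pH_SWS = {ph_sws:.3f}")
</syntaxhighlight>
With these rough numbers the total and seawater scales sit about 0.1 unit below the free scale and within roughly 0.01 of each other, matching the qualitative statements in the text about the relative importance of sulfate and fluoride.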
Acidosis, defined by blood pH below 7.35, is the most common disorder of acid–base homeostasis and occurs when there is an excess of acid in the body. In contrast, alkalosis is characterized by excessively high blood pH. Blood pH is usually slightly basic, with a pH of 7.365, referred to as physiological pH in biology and medicine. Plaque formation in teeth can create a local acidic environment that results in tooth decay through demineralization. Enzymes and other proteins have an optimal pH range for function and can become inactivated or denatured outside this range. pH calculations When calculating the pH of a solution containing acids and/or bases, a chemical speciation calculation is used to determine the concentration of all chemical species present in the solution. The complexity of the procedure depends on the nature of the solution. Strong acids and bases are compounds that are almost completely dissociated in water, which simplifies the calculation. However, for weak acids, a quadratic equation must be solved, and for weak bases, a cubic equation is required. In general, a set of non-linear simultaneous equations must be solved. Water itself is a weak acid and a weak base, so its dissociation must be taken into account at high pH and low solute concentration (see Amphoterism). It dissociates according to the equilibrium H2O ⇌ H+ + OH−, with a dissociation constant defined as Kw = [H+][OH−], where [H+] stands for the concentration of the aqueous hydronium ion and [OH−] represents the concentration of the hydroxide ion. This equilibrium needs to be taken into account at high pH and when the solute concentration is extremely low. Strong acids and bases Strong acids and bases are compounds that are essentially fully dissociated in water. This means that in an acidic solution, the concentration of hydrogen ions (H+) can be considered equal to the concentration of the acid. Similarly, in a basic solution, the concentration of hydroxide ions (OH−) can be considered equal to the concentration of the base. The pH of a solution is defined as the negative logarithm of the concentration of H+, and the pOH is defined as the negative logarithm of the concentration of OH−. For example, the pH of a 0.01 M (moles per litre) solution of hydrochloric acid (HCl) is equal to 2 (pH = −log10(0.01)), while the pOH of a 0.01 M solution of sodium hydroxide (NaOH) is equal to 2 (pOH = −log10(0.01)), which corresponds to a pH of about 12. However, the self-ionization of water must also be considered when the concentration of a strong acid or base is very low or very high. For instance, a solution of HCl would be expected to have a pH of 7.3 based on the above procedure, which is incorrect as it is acidic and should have a pH of less than 7. In such cases, the system can be treated as a mixture of the acid or base and water, which is an amphoteric substance. By accounting for the self-ionization of water, the true pH of the solution can be calculated. For example, a solution of HCl would have a pH of 6.89 when treated as a mixture of HCl and water. The self-ionization equilibrium of solutions of sodium hydroxide at higher concentrations must also be considered. Weak acids and bases A weak acid or the conjugate acid of a weak base can be treated using the same formalism. Acid HA: HA ⇌ H+ + A− Base A: A + H2O ⇌ HA + OH− First, an acid dissociation constant is defined as follows: Ka = [H][A]/[HA]. Electrical charges are omitted from subsequent equations for the sake of generality, and its value is assumed to have been determined by experiment.
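Before completing the weak-acid treatment below, here is a short numerical sketch of the strong-acid case just described. For a fully dissociated acid of analytical concentration C, charge balance plus the water autoionization product Kw gives [H+]² − C·[H+] − Kw = 0, whose positive root is the true hydrogen ion concentration. The very dilute concentration used here is an assumption chosen to reproduce the 7.3 / 6.89 figures quoted above.
<syntaxhighlight lang="python">
from math import log10, sqrt

KW = 1.0e-14   # self-ionization product of water at 25 degrees Celsius

def ph_strong_acid(c_acid):
    """pH of a fully dissociated monoprotic acid, including water's own ions.

    Charge balance: [H+] = C + [OH-]  with  [OH-] = Kw/[H+]
    =>  [H+]**2 - C*[H+] - Kw = 0
    """
    h = (c_acid + sqrt(c_acid**2 + 4 * KW)) / 2
    return -log10(h)

c = 5e-8   # assumed HCl concentration (mol/L), chosen for illustration
print(f"naive pH (ignoring water): {-log10(c):.2f}")          # about 7.30
print(f"pH with self-ionization:   {ph_strong_acid(c):.2f}")   # about 6.89
</syntaxhighlight>
The corrected value stays below 7, as it must for an acid, while the naive logarithm of the analytical concentration does not.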
This being so, there are three unknown concentrations, [HA], [H+] and [A−] to determine by calculation. Two additional equations are needed. One way to provide them is to apply the law of mass conservation in terms of the two "reagents" H and A. C stands for analytical concentration. In some texts, one mass balance equation is replaced by an equation of charge balance. This is satisfactory for simple cases like this one, but is more difficult to apply to more complicated cases as those below. Together with the equation defining Ka, there are now three equations in three unknowns. When an acid is dissolved in water CA = CH = Ca, the concentration of the acid, so [A] = [H]. After some further algebraic manipulation an equation in the hydrogen ion concentration may be obtained. Solution of this quadratic equation gives the hydrogen ion concentration and hence p[H] or, more loosely, pH. This procedure is illustrated in an ICE table which can also be used to calculate the pH when some additional (strong) acid or alkaline has been added to the system, that is, when CA ≠ CH. For example, what is the pH of a 0.01 M solution of benzoic acid, pKa = 4.19? Step 1: Step 2: Set up the quadratic equation. Step 3: Solve the quadratic equation. For alkaline solutions, an additional term is added to the mass-balance equation for hydrogen. Since the addition of hydroxide reduces the hydrogen ion concentration, and the hydroxide ion concentration is constrained by the self-ionization equilibrium to be equal to , the resulting equation is: General method Some systems, such as with polyprotic acids, are amenable to spreadsheet calculations. With three or more reagents or when many complexes are formed with general formulae such as ApBqHr, the following general method can be used to calculate the pH of a solution. For example, with three reagents, each equilibrium is characterized by an equilibrium constant, β. Next, write down the mass-balance equations for each reagent: There are no approximations involved in these equations, except that each stability constant is defined as a quotient of concentrations, not activities. Much more complicated expressions are required if activities are to be used. There are three simultaneous equations in the three unknowns, [A], [B] and [H]. Because the equations are non-linear and their concentrations may range over many powers of 10, the solution of these equations is not straightforward. However, many computer programs are available which can be used to perform these calculations. There may be more than three reagents. The calculation of hydrogen ion concentrations, using this approach, is a key element in the determination of equilibrium constants by potentiometric titration. See also pH indicator Arterial blood gas Chemical equilibrium pKa References External links Acid–base chemistry Equilibrium chemistry Units of measurement Water quality indicators Logarithmic scales of measurement General chemistry
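The benzoic acid walkthrough above (0.01 M, pKa = 4.19) can be reproduced in a few lines: substituting [A] = [H] = x into Ka = x²/(Ca − x) gives the quadratic x² + Ka·x − Ka·Ca = 0. A minimal sketch (Python), ignoring activity corrections and water's self-ionization:
<syntaxhighlight lang="python">
from math import log10, sqrt

def ph_weak_acid(c_acid, pka):
    """p[H] of a weak monoprotic acid from the quadratic
    x**2 + Ka*x - Ka*C = 0, where x = [H+]."""
    ka = 10 ** (-pka)
    x = (-ka + sqrt(ka**2 + 4 * ka * c_acid)) / 2
    return -log10(x)

# 0.01 M benzoic acid, pKa = 4.19 (values from the worked example above)
print(f"pH = {ph_weak_acid(0.01, 4.19):.2f}")
</syntaxhighlight>
The result, pH ≈ 3.1, is what the ICE-table procedure described above yields; a much weaker or much more dilute acid would also need the hydroxide term added to the mass balance, as noted at the end of the section.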
PH
[ "Physics", "Chemistry", "Mathematics", "Environmental_science" ]
5,973
[ "Acid–base chemistry", "Physical quantities", "Quantity", "Water pollution", "Equilibrium chemistry", "Logarithmic scales of measurement", "Water quality indicators", "nan", "Units of measurement" ]
24,544
https://en.wikipedia.org/wiki/Photosynthesis
Photosynthesis ( ) is a system of biological processes by which photosynthetic organisms, such as most plants, algae, and cyanobacteria, convert light energy, typically from sunlight, into the chemical energy necessary to fuel their metabolism. Photosynthesis usually refers to oxygenic photosynthesis, a process that produces oxygen. Photosynthetic organisms store the chemical energy so produced within intracellular organic compounds (compounds containing carbon) like sugars, glycogen, cellulose and starches. To use this stored chemical energy, an organism's cells metabolize the organic compounds through cellular respiration. Photosynthesis plays a critical role in producing and maintaining the oxygen content of the Earth's atmosphere, and it supplies most of the biological energy necessary for complex life on Earth. Some bacteria also perform anoxygenic photosynthesis, which uses bacteriochlorophyll to split hydrogen sulfide as a reductant instead of water, producing sulfur instead of oxygen. Archaea such as Halobacterium also perform a type of non-carbon-fixing anoxygenic photosynthesis, where the simpler photopigment retinal and its microbial rhodopsin derivatives are used to absorb green light and power proton pumps to directly synthesize adenosine triphosphate (ATP), the "energy currency" of cells. Such archaeal photosynthesis might have been the earliest form of photosynthesis that evolved on Earth, as far back as the Paleoarchean, preceding that of cyanobacteria (see Purple Earth hypothesis). While the details may differ between species, the process always begins when light energy is absorbed by the reaction centers, proteins that contain photosynthetic pigments or chromophores. In plants, these pigments are chlorophylls (a porphyrin derivative that absorbs the red and blue spectrums of light, thus reflecting green) held inside chloroplasts, abundant in leaf cells. In bacteria, they are embedded in the plasma membrane. In these light-dependent reactions, some energy is used to strip electrons from suitable substances, such as water, producing oxygen gas. The hydrogen freed by the splitting of water is used in the creation of two important molecules that participate in energetic processes: reduced nicotinamide adenine dinucleotide phosphate (NADPH) and ATP. In plants, algae, and cyanobacteria, sugars are synthesized by a subsequent sequence of reactions called the Calvin cycle. In this process, atmospheric carbon dioxide is incorporated into already existing organic compounds, such as ribulose bisphosphate (RuBP). Using the ATP and NADPH produced by the light-dependent reactions, the resulting compounds are then reduced and removed to form further carbohydrates, such as glucose. In other bacteria, different mechanisms like the reverse Krebs cycle are used to achieve the same end. The first photosynthetic organisms probably evolved early in the evolutionary history of life using reducing agents such as hydrogen or hydrogen sulfide, rather than water, as sources of electrons. Cyanobacteria appeared later; the excess oxygen they produced contributed directly to the oxygenation of the Earth, which rendered the evolution of complex life possible. The average rate of energy captured by global photosynthesis is approximately 130 terawatts, which is about eight times the total power consumption of human civilization. Photosynthetic organisms also convert around 100–115 billion tons (91–104 Pg petagrams, or billions of metric tons), of carbon into biomass per year. 
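The two global figures just quoted — roughly 130 TW of captured energy and on the order of 100 billion tonnes of carbon fixed per year — can be cross-checked against each other with a simple energy-content estimate. The ~39 kJ per gram of fixed carbon used below is an assumed round number based on the heat of combustion of glucose, so this is an order-of-magnitude consistency check only.
<syntaxhighlight lang="python">
# Order-of-magnitude consistency check between global photosynthetic power
# and annual carbon fixation.  All inputs are rough, illustrative values.

power_watts = 130e12                 # ~130 TW captured by photosynthesis
seconds_per_year = 3.156e7
carbon_fixed_g = 100e9 * 1e6         # ~100 billion tonnes of carbon, in grams
energy_per_g_carbon = 39e3           # J per gram of carbon in carbohydrate (assumed)

energy_captured = power_watts * seconds_per_year            # J per year
energy_in_biomass = carbon_fixed_g * energy_per_g_carbon    # J per year

print(f"energy captured per year:     {energy_captured:.1e} J")
print(f"energy stored in new biomass: {energy_in_biomass:.1e} J")
print(f"ratio (biomass / captured):   {energy_in_biomass / energy_captured:.2f}")
</syntaxhighlight>
Under these assumptions the two independent figures agree to within a few percent, which is reassuring but should not be over-interpreted given how rough the inputs are.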
Photosynthesis was discovered in 1779 by Jan Ingenhousz who showed that plants need light, not just soil and water. Overview Most photosynthetic organisms are photoautotrophs, which means that they are able to synthesize food directly from carbon dioxide and water using energy from light. However, not all organisms use carbon dioxide as a source of carbon atoms to carry out photosynthesis; photoheterotrophs use organic compounds, rather than carbon dioxide, as a source of carbon. In plants, algae, and cyanobacteria, photosynthesis releases oxygen. This oxygenic photosynthesis is by far the most common type of photosynthesis used by living organisms. Some shade-loving plants (sciophytes) produce such low levels of oxygen during photosynthesis that they use all of it themselves instead of releasing it to the atmosphere. Although there are some differences between oxygenic photosynthesis in plants, algae, and cyanobacteria, the overall process is quite similar in these organisms. There are also many varieties of anoxygenic photosynthesis, used mostly by bacteria, which consume carbon dioxide but do not release oxygen. Carbon dioxide is converted into sugars in a process called carbon fixation; photosynthesis captures energy from sunlight to convert carbon dioxide into carbohydrates. Carbon fixation is an endothermic redox reaction. In general outline, photosynthesis is the opposite of cellular respiration: while photosynthesis is a process of reduction of carbon dioxide to carbohydrates, cellular respiration is the oxidation of carbohydrates or other nutrients to carbon dioxide. Nutrients used in cellular respiration include carbohydrates, amino acids and fatty acids. These nutrients are oxidized to produce carbon dioxide and water, and to release chemical energy to drive the organism's metabolism. Photosynthesis and cellular respiration are distinct processes, as they take place through different sequences of chemical reactions and in different cellular compartments (cellular respiration in mitochondria). The general equation for photosynthesis as first proposed by Cornelis van Niel is: + + → + + Since water is used as the electron donor in oxygenic photosynthesis, the equation for this process is: + + → + + This equation emphasizes that water is both a reactant in the light-dependent reaction and a product of the light-independent reaction, but canceling n water molecules from each side gives the net equation: + + → + Other processes substitute other compounds (such as arsenite) for water in the electron-supply role; for example some microbes use sunlight to oxidize arsenite to arsenate: The equation for this reaction is: + + → + (used to build other compounds in subsequent reactions) Photosynthesis occurs in two stages. In the first stage, light-dependent reactions or light reactions capture the energy of light and use it to make the hydrogen carrier NADPH and the energy-storage molecule ATP. During the second stage, the light-independent reactions use these products to capture and reduce carbon dioxide. Most organisms that use oxygenic photosynthesis use visible light for the light-dependent reactions, although at least three use shortwave infrared or, more specifically, far-red radiation. Some organisms employ even more radical variants of photosynthesis. Some archaea use a simpler method that employs a pigment similar to those used for vision in animals. The bacteriorhodopsin changes its configuration in response to sunlight, acting as a proton pump. 
This produces a proton gradient more directly, which is then converted to chemical energy. The process does not involve carbon dioxide fixation and does not release oxygen, and seems to have evolved separately from the more common types of photosynthesis. Photosynthetic membranes and organelles In photosynthetic bacteria, the proteins that gather light for photosynthesis are embedded in cell membranes. In its simplest form, this involves the membrane surrounding the cell itself. However, the membrane may be tightly folded into cylindrical sheets called thylakoids, or bunched up into round vesicles called intracytoplasmic membranes. These structures can fill most of the interior of a cell, giving the membrane a very large surface area and therefore increasing the amount of light that the bacteria can absorb. In plants and algae, photosynthesis takes place in organelles called chloroplasts. A typical plant cell contains about 10 to 100 chloroplasts. The chloroplast is enclosed by a membrane. This membrane is composed of a phospholipid inner membrane, a phospholipid outer membrane, and an intermembrane space. Enclosed by the membrane is an aqueous fluid called the stroma. Embedded within the stroma are stacks of thylakoids (grana), which are the site of photosynthesis. The thylakoids appear as flattened disks. The thylakoid itself is enclosed by the thylakoid membrane, and within the enclosed volume is a lumen or thylakoid space. Embedded in the thylakoid membrane are integral and peripheral membrane protein complexes of the photosynthetic system. Plants absorb light primarily using the pigment chlorophyll. The green part of the light spectrum is not absorbed but is reflected, which is the reason that most plants have a green color. Besides chlorophyll, plants also use pigments such as carotenes and xanthophylls. Algae also use chlorophyll, but various other pigments are present, such as phycocyanin, carotenes, and xanthophylls in green algae, phycoerythrin in red algae (rhodophytes) and fucoxanthin in brown algae and diatoms resulting in a wide variety of colors. These pigments are embedded in plants and algae in complexes called antenna proteins. In such proteins, the pigments are arranged to work together. Such a combination of proteins is also called a light-harvesting complex. Although all cells in the green parts of a plant have chloroplasts, the majority of those are found in specially adapted structures called leaves. Certain species adapted to conditions of strong sunlight and aridity, such as many Euphorbia and cactus species, have their main photosynthetic organs in their stems. The cells in the interior tissues of a leaf, called the mesophyll, can contain between 450,000 and 800,000 chloroplasts for every square millimeter of leaf. The surface of the leaf is coated with a water-resistant waxy cuticle that protects the leaf from excessive evaporation of water and decreases the absorption of ultraviolet or blue light to minimize heating. The transparent epidermis layer allows light to pass through to the palisade mesophyll cells where most of the photosynthesis takes place. Light-dependent reactions In the light-dependent reactions, one molecule of the pigment chlorophyll absorbs one photon and loses one electron. This electron is taken up by a modified form of chlorophyll called pheophytin, which passes the electron to a quinone molecule, starting the flow of electrons down an electron transport chain that leads to the ultimate reduction of NADP to NADPH. 
In addition, this creates a proton gradient (energy gradient) across the chloroplast membrane, which is used by ATP synthase in the synthesis of ATP. The chlorophyll molecule ultimately regains the electron it lost when a water molecule is split in a process called photolysis, which releases oxygen. The overall equation for the light-dependent reactions under the conditions of non-cyclic electron flow in green plants is: Not all wavelengths of light can support photosynthesis. The photosynthetic action spectrum depends on the type of accessory pigments present. For example, in green plants, the action spectrum resembles the absorption spectrum for chlorophylls and carotenoids with absorption peaks in violet-blue and red light. In red algae, the action spectrum is blue-green light, which allows these algae to use the blue end of the spectrum to grow in the deeper waters that filter out the longer wavelengths (red light) used by above-ground green plants. The non-absorbed part of the light spectrum is what gives photosynthetic organisms their color (e.g., green plants, red algae, purple bacteria) and is the least effective for photosynthesis in the respective organisms. Z scheme In plants, light-dependent reactions occur in the thylakoid membranes of the chloroplasts where they drive the synthesis of ATP and NADPH. The light-dependent reactions are of two forms: cyclic and non-cyclic. In the non-cyclic reaction, the photons are captured in the light-harvesting antenna complexes of photosystem II by chlorophyll and other accessory pigments (see diagram "Z-scheme"). The absorption of a photon by the antenna complex loosens an electron by a process called photoinduced charge separation. The antenna system is at the core of the chlorophyll molecule of the photosystem II reaction center. That loosened electron is taken up by the primary electron-acceptor molecule, pheophytin. As the electrons are shuttled through an electron transport chain (the so-called Z-scheme shown in the diagram), a chemiosmotic potential is generated by pumping proton cations (H+) across the membrane and into the thylakoid space. An ATP synthase enzyme uses that chemiosmotic potential to make ATP during photophosphorylation, whereas NADPH is a product of the terminal redox reaction in the Z-scheme. The electron enters a chlorophyll molecule in Photosystem I. There it is further excited by the light absorbed by that photosystem. The electron is then passed along a chain of electron acceptors to which it transfers some of its energy. The energy delivered to the electron acceptors is used to move hydrogen ions across the thylakoid membrane into the lumen. The electron is eventually used to reduce the coenzyme NADP with an H+ to NADPH (which has functions in the light-independent reaction); at that point, the path of that electron ends. The cyclic reaction is similar to that of the non-cyclic but differs in that it generates only ATP, and no reduced NADP (NADPH) is created. The cyclic reaction takes place only at photosystem I. Once the electron is displaced from the photosystem, the electron is passed down the electron acceptor molecules and returns to photosystem I, from where it was emitted, hence the name cyclic reaction. Water photolysis Linear electron transport through a photosystem will leave the reaction center of that photosystem oxidized. Elevating another electron will first require re-reduction of the reaction center. 
The excited electrons lost from the reaction center (P700) of photosystem I are replaced by transfer from plastocyanin, whose electrons come from electron transport through photosystem II. Photosystem II, as the first step of the Z-scheme, requires an external source of electrons to reduce its oxidized chlorophyll a reaction center. The source of electrons for photosynthesis in green plants and cyanobacteria is water. Two water molecules are oxidized by the energy of four successive charge-separation reactions of photosystem II to yield a molecule of diatomic oxygen and four hydrogen ions. The electrons yielded are transferred to a redox-active tyrosine residue that is oxidized by the energy of P680. This resets the ability of P680 to absorb another photon and release another photo-dissociated electron. The oxidation of water is catalyzed in photosystem II by a redox-active structure that contains four manganese ions and a calcium ion; this oxygen-evolving complex binds two water molecules and contains the four oxidizing equivalents that are used to drive the water-oxidizing reaction (Kok's S-state diagrams). The hydrogen ions are released in the thylakoid lumen and therefore contribute to the transmembrane chemiosmotic potential that leads to ATP synthesis. Oxygen is a waste product of light-dependent reactions, but the majority of organisms on Earth use oxygen and its energy for cellular respiration, including photosynthetic organisms. Light-independent reactions Calvin cycle In the light-independent (or "dark") reactions, the enzyme RuBisCO captures CO2 from the atmosphere and, in a process called the Calvin cycle, uses the newly formed NADPH and releases three-carbon sugars, which are later combined to form sucrose and starch. The overall equation for the light-independent reactions in green plants is Carbon fixation produces the three-carbon sugar intermediate, which is then converted into the final carbohydrate products. The simple carbon sugars photosynthesis produces are then used to form other organic compounds, such as the building material cellulose, the precursors for lipid and amino acid biosynthesis, or as a fuel in cellular respiration. The latter occurs not only in plants but also in animals when the carbon and energy from plants is passed through a food chain. The fixation or reduction of carbon dioxide is a process in which carbon dioxide combines with a five-carbon sugar, ribulose 1,5-bisphosphate, to yield two molecules of a three-carbon compound, glycerate 3-phosphate, also known as 3-phosphoglycerate. Glycerate 3-phosphate, in the presence of ATP and NADPH produced during the light-dependent stages, is reduced to glyceraldehyde 3-phosphate. This product is also referred to as 3-phosphoglyceraldehyde (PGAL) or, more generically, as triose phosphate. Most (five out of six molecules) of the glyceraldehyde 3-phosphate produced are used to regenerate ribulose 1,5-bisphosphate so the process can continue. The triose phosphates not thus "recycled" often condense to form hexose phosphates, which ultimately yield sucrose, starch, and cellulose, as well as glucose and fructose. The sugars produced during carbon metabolism yield carbon skeletons that can be used for other metabolic reactions like the production of amino acids and lipids. Carbon concentrating mechanisms On land In hot and dry conditions, plants close their stomata to prevent water loss. 
Under these conditions, will decrease and oxygen gas, produced by the light reactions of photosynthesis, will increase, causing an increase of photorespiration by the oxygenase activity of ribulose-1,5-bisphosphate carboxylase/oxygenase (RuBisCO) and decrease in carbon fixation. Some plants have evolved mechanisms to increase the concentration in the leaves under these conditions. Plants that use the C4 carbon fixation process chemically fix carbon dioxide in the cells of the mesophyll by adding it to the three-carbon molecule phosphoenolpyruvate (PEP), a reaction catalyzed by an enzyme called PEP carboxylase, creating the four-carbon organic acid oxaloacetic acid. Oxaloacetic acid or malate synthesized by this process is then translocated to specialized bundle sheath cells where the enzyme RuBisCO and other Calvin cycle enzymes are located, and where released by decarboxylation of the four-carbon acids is then fixed by RuBisCO activity to the three-carbon 3-phosphoglyceric acids. The physical separation of RuBisCO from the oxygen-generating light reactions reduces photorespiration and increases fixation and, thus, the photosynthetic capacity of the leaf. plants can produce more sugar than plants in conditions of high light and temperature. Many important crop plants are plants, including maize, sorghum, sugarcane, and millet. Plants that do not use PEP-carboxylase in carbon fixation are called C3 plants because the primary carboxylation reaction, catalyzed by RuBisCO, produces the three-carbon 3-phosphoglyceric acids directly in the Calvin-Benson cycle. Over 90% of plants use carbon fixation, compared to 3% that use carbon fixation; however, the evolution of in over sixty plant lineages makes it a striking example of convergent evolution. C2 photosynthesis, which involves carbon-concentration by selective breakdown of photorespiratory glycine, is both an evolutionary precursor to and a useful carbon-concentrating mechanism in its own right. Xerophytes, such as cacti and most succulents, also use PEP carboxylase to capture carbon dioxide in a process called Crassulacean acid metabolism (CAM). In contrast to metabolism, which spatially separates the fixation to PEP from the Calvin cycle, CAM temporally separates these two processes. CAM plants have a different leaf anatomy from plants, and fix the at night, when their stomata are open. CAM plants store the mostly in the form of malic acid via carboxylation of phosphoenolpyruvate to oxaloacetate, which is then reduced to malate. Decarboxylation of malate during the day releases inside the leaves, thus allowing carbon fixation to 3-phosphoglycerate by RuBisCO. CAM is used by 16,000 species of plants. Calcium-oxalate-accumulating plants, such as Amaranthus hybridus and Colobanthus quitensis, show a variation of photosynthesis where calcium oxalate crystals function as dynamic carbon pools, supplying carbon dioxide (CO2) to photosynthetic cells when stomata are partially or totally closed. This process was named alarm photosynthesis. Under stress conditions (e.g., water deficit), oxalate released from calcium oxalate crystals is converted to CO2 by an oxalate oxidase enzyme, and the produced CO2 can support the Calvin cycle reactions. Reactive hydrogen peroxide (H2O2), the byproduct of oxalate oxidase reaction, can be neutralized by catalase. Alarm photosynthesis represents a photosynthetic variant to be added to the well-known C4 and CAM pathways. 
However, alarm photosynthesis, in contrast to these pathways, operates as a biochemical pump that collects carbon from the organ interior (or from the soil) and not from the atmosphere. In water Cyanobacteria possess carboxysomes, which increase the concentration of around RuBisCO to increase the rate of photosynthesis. An enzyme, carbonic anhydrase, located within the carboxysome, releases CO2 from dissolved hydrocarbonate ions (HCO). Before the CO2 can diffuse out, RuBisCO concentrated within the carboxysome quickly sponges it up. HCO ions are made from CO2 outside the cell by another carbonic anhydrase and are actively pumped into the cell by a membrane protein. They cannot cross the membrane as they are charged, and within the cytosol they turn back into CO2 very slowly without the help of carbonic anhydrase. This causes the HCO ions to accumulate within the cell from where they diffuse into the carboxysomes. Pyrenoids in algae and hornworts also act to concentrate around RuBisCO. Order and kinetics The overall process of photosynthesis takes place in four stages: Efficiency Plants usually convert light into chemical energy with a photosynthetic efficiency of 3–6%. Absorbed light that is unconverted is dissipated primarily as heat, with a small fraction (1–2%) reemitted as chlorophyll fluorescence at longer (redder) wavelengths. This fact allows measurement of the light reaction of photosynthesis by using chlorophyll fluorometers. Actual plants' photosynthetic efficiency varies with the frequency of the light being converted, light intensity, temperature, and proportion of carbon dioxide in the atmosphere, and can vary from 0.1% to 8%. By comparison, solar panels convert light into electric energy at an efficiency of approximately 6–20% for mass-produced panels, and above 40% in laboratory devices. Scientists are studying photosynthesis in hopes of developing plants with increased yield. The efficiency of both light and dark reactions can be measured, but the relationship between the two can be complex. For example, the light reaction creates ATP and NADPH energy molecules, which C3 plants can use for carbon fixation or photorespiration. Electrons may also flow to other electron sinks. For this reason, it is not uncommon for authors to differentiate between work done under non-photorespiratory conditions and under photorespiratory conditions. Chlorophyll fluorescence of photosystem II can measure the light reaction, and infrared gas analyzers can measure the dark reaction. An integrated chlorophyll fluorometer and gas exchange system can investigate both light and dark reactions when researchers use the two separate systems together. Infrared gas analyzers and some moisture sensors are sensitive enough to measure the photosynthetic assimilation of CO2 and of ΔH2O using reliable methods. CO2 is commonly measured in /(m2/s), parts per million, or volume per million; and H2O is commonly measured in /(m2/s) or in . By measuring CO2 assimilation, ΔH2O, leaf temperature, barometric pressure, leaf area, and photosynthetically active radiation (PAR), it becomes possible to estimate, "A" or carbon assimilation, "E" or transpiration, "gs" or stomatal conductance, and "Ci" or intracellular CO2. However, it is more common to use chlorophyll fluorescence for plant stress measurement, where appropriate, because the most commonly used parameters FV/FM and Y(II) or F/FM' can be measured in a few seconds, allowing the investigation of larger plant populations. 
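The 3–6% efficiency figure quoted above can be put in context with a simple photon-budget estimate for the photochemistry alone. Assuming roughly 8–10 photons absorbed per CO2 fixed and a representative red photon at 680 nm, the energy stored per mole of CO2 (about one sixth of the free energy of glucose) can be compared with the light energy absorbed. The quantum requirement and energy values below are textbook-style assumptions, so the result is only indicative.
<syntaxhighlight lang="python">
# Rough upper-bound estimate of photochemical efficiency per CO2 fixed.
# Inputs are assumed textbook-style values, not measurements.

PLANCK = 6.626e-34        # J*s
LIGHT_SPEED = 2.998e8     # m/s
AVOGADRO = 6.022e23

wavelength = 680e-9                     # m, representative red photon
photons_per_co2 = 9.5                   # assumed quantum requirement (8-10 is typical)
energy_stored_per_co2 = 480e3           # J/mol, ~1/6 of glucose free energy (assumed)

energy_per_mole_photons = PLANCK * LIGHT_SPEED / wavelength * AVOGADRO
light_absorbed = photons_per_co2 * energy_per_mole_photons

print(f"photon energy at 680 nm:  {energy_per_mole_photons / 1e3:.0f} kJ/mol")
print(f"photochemical efficiency: {energy_stored_per_co2 / light_absorbed:.1%}")
</syntaxhighlight>
This gives an idealized ceiling near 30% for the photochemistry alone; once reflection, respiration, photorespiration, and light outside the absorbed bands are accounted for, realized field efficiencies of a few percent are consistent with that ceiling.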
Gas exchange systems that offer control of CO2 levels, above and below ambient, allow the common practice of measurement of A/Ci curves, at different CO2 levels, to characterize a plant's photosynthetic response. Integrated chlorophyll fluorometer – gas exchange systems allow a more precise measure of photosynthetic response and mechanisms. While standard gas exchange photosynthesis systems can measure Ci, or substomatal CO2 levels, the addition of integrated chlorophyll fluorescence measurements allows a more precise measurement of CC, the estimation of CO2 concentration at the site of carboxylation in the chloroplast, to replace Ci. CO2 concentration in the chloroplast becomes possible to estimate with the measurement of mesophyll conductance or gm using an integrated system. Photosynthesis measurement systems are not designed to directly measure the amount of light the leaf absorbs, but analysis of chlorophyll fluorescence, P700- and P515-absorbance, and gas exchange measurements reveal detailed information about, e.g., the photosystems, quantum efficiency and the CO2 assimilation rates. With some instruments, even wavelength dependency of the photosynthetic efficiency can be analyzed. A phenomenon known as quantum walk increases the efficiency of the energy transport of light significantly. In the photosynthetic cell of an alga, bacterium, or plant, there are light-sensitive molecules called chromophores arranged in an antenna-shaped structure called a photocomplex. When a photon is absorbed by a chromophore, it is converted into a quasiparticle referred to as an exciton, which jumps from chromophore to chromophore towards the reaction center of the photocomplex, a collection of molecules that traps its energy in a chemical form accessible to the cell's metabolism. The exciton's wave properties enable it to cover a wider area and try out several possible paths simultaneously, allowing it to instantaneously "choose" the most efficient route, where it will have the highest probability of arriving at its destination in the minimum possible time. Because that quantum walking takes place at temperatures far higher than quantum phenomena usually occur, it is only possible over very short distances. Obstacles in the form of destructive interference cause the particle to lose its wave properties for an instant before it regains them once again after it is freed from its locked position through a classic "hop". The movement of the electron towards the photo center is therefore covered in a series of conventional hops and quantum walks. Evolution Fossils of what are thought to be filamentous photosynthetic organisms have been dated at 3.4 billion years old. More recent studies also suggest that photosynthesis may have begun about 3.4 billion years ago, though the first direct evidence of photosynthesis comes from thylakoid membranes preserved in 1.75-billion-year-old cherts. Oxygenic photosynthesis is the main source of oxygen in the Earth's atmosphere, and its earliest appearance is sometimes referred to as the oxygen catastrophe. Geological evidence suggests that oxygenic photosynthesis, such as that in cyanobacteria, became important during the Paleoproterozoic era around two billion years ago. Modern photosynthesis in plants and most photosynthetic prokaryotes is oxygenic, using water as an electron donor, which is oxidized to molecular oxygen in the photosynthetic reaction center. 
Symbiosis and the origin of chloroplasts Several groups of animals have formed symbiotic relationships with photosynthetic algae. These are most common in corals, sponges, and sea anemones. Scientists presume that this is due to the particularly simple body plans and large surface areas of these animals compared to their volumes. In addition, a few marine mollusks, such as Elysia viridis and Elysia chlorotica, also maintain a symbiotic relationship with chloroplasts they capture from the algae in their diet and then store in their bodies (see Kleptoplasty). This allows the mollusks to survive solely by photosynthesis for several months at a time. Some of the genes from the plant cell nucleus have even been transferred to the slugs, so that the chloroplasts can be supplied with proteins they need to survive. An even closer form of symbiosis may explain the origin of chloroplasts. Chloroplasts have many similarities with photosynthetic bacteria, including a circular chromosome, prokaryotic-type ribosome, and similar proteins in the photosynthetic reaction center. The endosymbiotic theory suggests that photosynthetic bacteria were acquired (by endocytosis) by early eukaryotic cells to form the first plant cells. Therefore, chloroplasts may be photosynthetic bacteria that adapted to life inside plant cells. Like mitochondria, chloroplasts possess their own DNA, separate from the nuclear DNA of their plant host cells and the genes in this chloroplast DNA resemble those found in cyanobacteria. DNA in chloroplasts codes for redox proteins such as those found in the photosynthetic reaction centers. The CoRR Hypothesis proposes that this co-location of genes with their gene products is required for redox regulation of gene expression, and accounts for the persistence of DNA in bioenergetic organelles. Photosynthetic eukaryotic lineages Symbiotic and kleptoplastic organisms excluded:
The glaucophytes and the red and green algae—clade Archaeplastida (uni- and multicellular)
The cryptophytes—clade Cryptista (unicellular)
The haptophytes—clade Haptista (unicellular)
The dinoflagellates and chromerids in the superphylum Myzozoa, and Pseudoblepharisma in the phylum Ciliophora—clade Alveolata (unicellular)
The ochrophytes—clade Stramenopila (uni- and multicellular)
The chlorarachniophytes and three species of Paulinella in the phylum Cercozoa—clade Rhizaria (unicellular)
The euglenids—clade Excavata (unicellular)
Except for the euglenids, which are found within the Excavata, all of these belong to the Diaphoretickes. Archaeplastida and the photosynthetic Paulinella got their plastids, which are surrounded by two membranes, through primary endosymbiosis in two separate events, by engulfing a cyanobacterium. The plastids in all the other groups have either a red or green algal origin, and are referred to as the "red lineages" and the "green lineages". The only known exception is the ciliate Pseudoblepharisma tenue, which in addition to its plastids that originated from green algae also has a purple sulfur bacterium as symbiont. In dinoflagellates and euglenids the plastids are surrounded by three membranes, and in the remaining lines by four. A nucleomorph, remnants of the original algal nucleus located between the inner and outer membranes of the plastid, is present in the cryptophytes (from a red alga) and chlorarachniophytes (from a green alga). Some dinoflagellates that lost their photosynthetic ability later regained it again through new endosymbiotic events with different algae. 
While able to perform photosynthesis, many of these eukaryotic groups are mixotrophs and practice heterotrophy to various degrees. Photosynthetic prokaryotic lineages Early photosynthetic systems, such as those in green and purple sulfur and green and purple nonsulfur bacteria, are thought to have been anoxygenic, and used various other molecules than water as electron donors. Green and purple sulfur bacteria are thought to have used hydrogen and sulfur as electron donors. Green nonsulfur bacteria used various amino and other organic acids as electron donors. Purple nonsulfur bacteria used a variety of nonspecific organic molecules. The use of these molecules is consistent with the geological evidence that Earth's early atmosphere was highly reducing at that time. With a possible exception of Heimdallarchaeota, photosynthesis is not found in archaea. Haloarchaea are photoheterotrophic; they can absorb energy from the sun, but do not harvest carbon from the atmosphere and are therefore not photosynthetic. Instead of chlorophyll they use rhodopsins, which convert light-energy to ion gradients but cannot mediate electron transfer reactions. In bacteria eight photosynthetic lineages are currently known:
Cyanobacteria, the only prokaryotes performing oxygenic photosynthesis and the only prokaryotes that contain two types of photosystems (type I (RCI), also known as Fe-S type, and type II (RCII), also known as quinone type). The seven remaining prokaryotes have anoxygenic photosynthesis and use versions of either type I or type II.
Chlorobi (green sulfur bacteria) Type I
Heliobacteria Type I
Chloracidobacterium Type I
Proteobacteria (purple sulfur bacteria and purple non-sulfur bacteria) Type II (see: Purple bacteria)
Chloroflexota (green non-sulfur bacteria) Type II
Gemmatimonadota Type II
Eremiobacterota Type II
Cyanobacteria and the evolution of photosynthesis The biochemical capacity to use water as the source for electrons in photosynthesis evolved once, in a common ancestor of extant cyanobacteria (formerly called blue-green algae). The geological record indicates that this transforming event took place early in Earth's history, at least 2450–2320 million years ago (Ma), and, it is speculated, much earlier. Because the Earth's atmosphere contained almost no oxygen during the estimated development of photosynthesis, it is believed that the first photosynthetic cyanobacteria did not generate oxygen. Available evidence from geobiological studies of Archean (>2500 Ma) sedimentary rocks indicates that life existed 3500 Ma, but the question of when oxygenic photosynthesis evolved is still unanswered. A clear paleontological window on cyanobacterial evolution opened about 2000 Ma, revealing an already-diverse biota of cyanobacteria. Cyanobacteria remained the principal primary producers of oxygen throughout the Proterozoic Eon (2500–543 Ma), in part because the redox structure of the oceans favored photoautotrophs capable of nitrogen fixation. Green algae joined cyanobacteria as the major primary producers of oxygen on continental shelves near the end of the Proterozoic, but only with the Mesozoic (251–66 Ma) radiations of dinoflagellates, coccolithophorids, and diatoms did the primary production of oxygen in marine shelf waters take modern form. Cyanobacteria remain critical to marine ecosystems as primary producers of oxygen in oceanic gyres, as agents of biological nitrogen fixation, and, in modified form, as the plastids of marine algae. 
Experimental history Discovery Although some of the steps in photosynthesis are still not completely understood, the overall photosynthetic equation has been known since the 19th century. Jan van Helmont began the research of the process in the mid-17th century when he carefully measured the mass of the soil a plant was using and the mass of the plant as it grew. After noticing that the soil mass changed very little, he hypothesized that the mass of the growing plant must come from the water, the only substance he added to the potted plant. His hypothesis was partially accurate – much of the gained mass comes from carbon dioxide as well as water. However, this was a signaling point to the idea that the bulk of a plant's biomass comes from the inputs of photosynthesis, not the soil itself. Joseph Priestley, a chemist and minister, discovered that when he isolated a volume of air under an inverted jar and burned a candle in it (which gave off CO2), the candle would burn out very quickly, much before it ran out of wax. He further discovered that a mouse could similarly "injure" air. He then showed that a plant could restore the air the candle and the mouse had "injured." In 1779, Jan Ingenhousz repeated Priestley's experiments. He discovered that it was the influence of sunlight on the plant that could cause it to revive a mouse in a matter of hours. In 1796, Jean Senebier, a Swiss pastor, botanist, and naturalist, demonstrated that green plants consume carbon dioxide and release oxygen under the influence of light. Soon afterward, Nicolas-Théodore de Saussure showed that the increase in mass of the plant as it grows could not be due only to uptake of CO2 but also to the incorporation of water. Thus, the basic reaction by which organisms use photosynthesis to produce food (such as glucose) was outlined. Refinements Cornelis Van Niel made key discoveries explaining the chemistry of photosynthesis. By studying purple sulfur bacteria and green bacteria, he was the first to demonstrate that photosynthesis is a light-dependent redox reaction in which hydrogen reduces (donates its atoms as electrons and protons to) carbon dioxide. Robert Emerson discovered two light reactions by testing plant productivity using different wavelengths of light. With the red alone, the light reactions were suppressed. When blue and red were combined, the output was much more substantial. Thus, there were two photosystems, one absorbing up to 600 nm wavelengths, the other up to 700 nm. The former is known as PSII, the latter is PSI. PSI contains only chlorophyll "a", PSII contains primarily chlorophyll "a" with most of the available chlorophyll "b", among other pigments. These include phycobilins, which are the red and blue pigments of red and blue algae, respectively, and fucoxanthol for brown algae and diatoms. The process is most productive when the absorption of quanta is equal in both PSII and PSI, assuring that input energy from the antenna complex is divided between the PSI and PSII systems, which in turn powers the photochemistry. Robert Hill thought that a complex of reactions consisted of an intermediate to cytochrome b6 (now a plastoquinone), and that another was from cytochrome f to a step in the carbohydrate-generating mechanisms. These are linked by plastoquinone, which does require energy to reduce cytochrome f. Further experiments to prove that the oxygen developed during the photosynthesis of green plants came from water were performed by Hill in 1937 and 1939. 
He showed that isolated chloroplasts give off oxygen in the presence of unnatural reducing agents like iron oxalate, ferricyanide or benzoquinone after exposure to light. In the Hill reaction: 2 H2O + 2 A + (light, chloroplasts) → 2 AH2 + O2 A is the electron acceptor. Therefore, in light, the electron acceptor is reduced and oxygen is evolved. Samuel Ruben and Martin Kamen used radioactive isotopes to determine that the oxygen liberated in photosynthesis came from the water. Melvin Calvin and Andrew Benson, along with James Bassham, elucidated the path of carbon assimilation (the photosynthetic carbon reduction cycle) in plants. The carbon reduction cycle is known as the Calvin cycle, but many scientists refer to it as the Calvin-Benson, Benson-Calvin, or even Calvin-Benson-Bassham (or CBB) Cycle. Nobel Prize–winning scientist Rudolph A. Marcus was later able to discover the function and significance of the electron transport chain. Otto Heinrich Warburg and Dean Burk discovered the I-quantum photosynthesis reaction that splits CO2, activated by the respiration. In 1950, first experimental evidence for the existence of photophosphorylation in vivo was presented by Otto Kandler using intact Chlorella cells and interpreting his findings as light-dependent ATP formation. In 1954, Daniel I. Arnon et al. discovered photophosphorylation in vitro in isolated chloroplasts with the help of P32. Louis N. M. Duysens and Jan Amesz discovered that chlorophyll "a" will absorb one light, oxidize cytochrome f, while chlorophyll "a" (and other pigments) will absorb another light but will reduce this same oxidized cytochrome, stating the two light reactions are in series. Development of the concept In 1893, the American botanist Charles Reid Barnes proposed two terms, photosyntax and photosynthesis, for the biological process of synthesis of complex carbon compounds out of carbonic acid, in the presence of chlorophyll, under the influence of light. The term photosynthesis is derived from the Greek phōs (φῶς, gleam) and sýnthesis (σύνθεσις, arranging together), while another word that he designated was photosyntax, from sýntaxis (σύνταξις, configuration). Over time, the term photosynthesis came into common usage. Later discovery of anoxygenic photosynthetic bacteria and photophosphorylation necessitated redefinition of the term. C3 : C4 photosynthesis research In the late 1940s at the University of California, Berkeley, the details of photosynthetic carbon metabolism were sorted out by the chemists Melvin Calvin, Andrew Benson, James Bassham and a score of students and researchers utilizing the carbon-14 isotope and paper chromatography techniques. The pathway of CO2 fixation by the algae Chlorella in a fraction of a second in light resulted in a three carbon molecule called phosphoglyceric acid (PGA). For that original and ground-breaking work, a Nobel Prize in Chemistry was awarded to Melvin Calvin in 1961. In parallel, plant physiologists studied leaf gas exchanges using the new method of infrared gas analysis and a leaf chamber where the net photosynthetic rates ranged from 10 to 13 μmol CO2·m−2·s−1, with the conclusion that all terrestrial plants have the same photosynthetic capacities, that are light saturated at less than 50% of sunlight. Later in 1958–1963 at Cornell University, field grown maize was reported to have much greater leaf photosynthetic rates of 40 μmol CO2·m−2·s−1 and not be saturated at near full sunlight. 
This higher rate in maize was almost double that observed in other species such as wheat and soybean, indicating that large differences in photosynthesis exist among higher plants. At the University of Arizona, detailed gas exchange research on more than 15 species of monocots and dicots uncovered for the first time that differences in leaf anatomy are crucial factors in differentiating photosynthetic capacities among species. In tropical grasses, including maize, sorghum, sugarcane, Bermuda grass and in the dicot amaranthus, leaf photosynthetic rates were around 38−40 μmol CO2·m−2·s−1, and the leaves have two types of green cells, i.e. an outer layer of mesophyll cells surrounding tightly packed chlorophyllous vascular bundle sheath cells. This type of anatomy was termed Kranz anatomy in the 19th century by the botanist Gottlieb Haberlandt while studying leaf anatomy of sugarcane. Plant species with the greatest photosynthetic rates and Kranz anatomy showed no apparent photorespiration, very low CO2 compensation point, high optimum temperature, high stomatal resistances and lower mesophyll resistances for gas diffusion, and rates that never saturated at full sunlight. The research at Arizona was designated a Citation Classic in 1986. These species were later termed C4 plants as the first stable compounds of CO2 fixation in light, malate and aspartate, have four carbons. Other species that lack Kranz anatomy were termed C3 type, such as cotton and sunflower, as the first stable carbon compound is the three-carbon PGA. At 1000 ppm CO2 in the measurement air, both C3 and C4 plants had similar leaf photosynthetic rates of around 60 μmol CO2·m−2·s−1, indicating the suppression of photorespiration in C3 plants. Factors There are four main factors influencing photosynthesis and several corollary factors. The four main factors are light irradiance and wavelength, water absorption, carbon dioxide concentration, and temperature. Total photosynthesis is limited by a range of environmental factors. These include the amount of light available, the amount of leaf area a plant has to capture light (shading by other plants is a major limitation of photosynthesis), the rate at which carbon dioxide can be supplied to the chloroplasts to support photosynthesis, the availability of water, and the availability of suitable temperatures for carrying out photosynthesis. Light intensity (irradiance), wavelength and temperature The process of photosynthesis provides the main input of free energy into the biosphere, and is one of four main ways in which radiation is important for plant life. The radiation climate within plant communities is extremely variable, in both time and space. In the early 20th century, Frederick Blackman and Gabrielle Matthaei investigated the effects of light intensity (irradiance) and temperature on the rate of carbon assimilation. At constant temperature, the rate of carbon assimilation varies with irradiance, increasing as the irradiance increases, but reaching a plateau at higher irradiance. At low irradiance, increasing the temperature has little influence on the rate of carbon assimilation. At constant high irradiance, the rate of carbon assimilation increases as the temperature is increased. These two experiments illustrate several important points: First, it is known that, in general, photochemical reactions are not affected by temperature. 
However, these experiments clearly show that temperature affects the rate of carbon assimilation, so there must be two sets of reactions in the full process of carbon assimilation. These are the light-dependent, 'photochemical', temperature-independent stage, and the light-independent, temperature-dependent stage. Second, Blackman's experiments illustrate the concept of limiting factors. Another limiting factor is the wavelength of light. Cyanobacteria, which reside several meters underwater, cannot receive the correct wavelengths required to cause photoinduced charge separation in conventional photosynthetic pigments. To combat this problem, Cyanobacteria have a light-harvesting complex called a phycobilisome. This complex is made up of a series of proteins with different pigments which surround the reaction center. Carbon dioxide levels and photorespiration As carbon dioxide concentrations rise, the rate at which sugars are made by the light-independent reactions increases until limited by other factors. RuBisCO, the enzyme that captures carbon dioxide in the light-independent reactions, has a binding affinity for both carbon dioxide and oxygen. When the concentration of carbon dioxide is high, RuBisCO will fix carbon dioxide. However, if the carbon dioxide concentration is low, RuBisCO will bind oxygen instead of carbon dioxide. This process, called photorespiration, uses energy, but does not produce sugars. RuBisCO oxygenase activity is disadvantageous to plants for several reasons: One product of oxygenase activity is phosphoglycolate (2 carbon) instead of 3-phosphoglycerate (3 carbon). Phosphoglycolate cannot be metabolized by the Calvin-Benson cycle and represents carbon lost from the cycle. A high oxygenase activity, therefore, drains the sugars that are required to recycle ribulose 1,5-bisphosphate and for the continuation of the Calvin-Benson cycle. Phosphoglycolate is quickly metabolized to glycolate, which is toxic to a plant at a high concentration; it inhibits photosynthesis. Salvaging glycolate is an energetically expensive process that uses the glycolate pathway, and only 75% of the carbon is returned to the Calvin-Benson cycle as 3-phosphoglycerate. The reactions also produce ammonia (NH3), which is able to diffuse out of the plant, leading to a loss of nitrogen. A highly simplified summary is: 2 glycolate + ATP → 3-phosphoglycerate + carbon dioxide + ADP + NH3. The salvaging pathway for the products of RuBisCO oxygenase activity is more commonly known as photorespiration, since it is characterized by light-dependent oxygen consumption and the release of carbon dioxide. 
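The simplified summary above also fixes the carbon arithmetic of photorespiration: two two-carbon glycolate molecules yield one three-carbon 3-phosphoglycerate and one CO2, so 75% of the diverted carbon returns to the Calvin-Benson cycle and roughly half a CO2 is lost per oxygenation event. The short Python sketch below works through this bookkeeping; the example rates are arbitrary, and treating each oxygenation as producing exactly one glycolate is a simplifying assumption.

```python
# Carbon bookkeeping implied by the simplified photorespiration summary above:
# 2 glycolate (2 x 2 C) -> one 3-phosphoglycerate (3 C) + one CO2 (1 C).

def glycolate_salvage(oxygenations):
    """Fraction of diverted carbon returned, and CO2 released, assuming one
    two-carbon glycolate per oxygenation event (a simplification)."""
    diverted_carbons = 2 * oxygenations
    co2_released = 0.5 * oxygenations           # one CO2 per two glycolate salvaged
    returned = diverted_carbons - co2_released
    return returned / diverted_carbons, co2_released

def net_co2_fixation(carboxylations, oxygenations, day_respiration=0.0):
    """Illustrative net CO2 uptake: carboxylation minus photorespiratory CO2
    release minus any (assumed) daytime respiration; all in the same rate units."""
    return carboxylations - 0.5 * oxygenations - day_respiration

fraction_returned, co2_lost = glycolate_salvage(10.0)
print(fraction_returned)                # 0.75 -> the 75% figure quoted above
print(net_co2_fixation(30.0, 10.0))     # 25.0 with arbitrary example rates
```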
See also Jan Anderson (scientist) Artificial photosynthesis Calvin-Benson cycle Carbon fixation Cellular respiration Chemosynthesis Daily light integral Hill reaction Integrated fluorometer Light-dependent reaction Organic reaction Photobiology Photoinhibition Photosynthetic reaction center Photosynthetically active radiation Photosystem Photosystem I Photosystem II Quantasome Quantum biology Radiosynthesis Red edge Vitamin D References Further reading Books Papers External links A collection of photosynthesis pages for all levels from a renowned expert (Govindjee) In depth, advanced treatment of photosynthesis, also from Govindjee Science Aid: Photosynthesis Article appropriate for high school science Metabolism, Cellular Respiration and Photosynthesis – The Virtual Library of Biochemistry and Cell Biology Overall examination of Photosynthesis at an intermediate level Overall Energetics of Photosynthesis The source of oxygen produced by photosynthesis Interactive animation, a textbook tutorial Photosynthesis – Light Dependent & Light Independent Stages Khan Academy, video introduction Agronomy Biological processes Botany Cellular respiration Ecosystems Metabolism Plant nutrition Plant physiology Quantum biology
Photosynthesis
[ "Physics", "Chemistry", "Biology" ]
10,895
[ "Plant physiology", "Cellular respiration", "Symbiosis", "Plants", "Photosynthesis", "Quantum mechanics", "Ecosystems", "nan", "Botany", "Biochemistry", "Cellular processes", "Metabolism", "Quantum biology" ]
24,553
https://en.wikipedia.org/wiki/Protein%20biosynthesis
Protein biosynthesis (or protein synthesis) is a core biological process, occurring inside cells, balancing the loss of cellular proteins (via degradation or export) through the production of new proteins. Proteins perform a number of critical functions as enzymes, structural proteins or hormones. Protein synthesis is a very similar process for both prokaryotes and eukaryotes but there are some distinct differences. Protein synthesis can be divided broadly into two phases: transcription and translation. During transcription, a section of DNA encoding a protein, known as a gene, is converted into a molecule called messenger RNA (mRNA). This conversion is carried out by enzymes, known as RNA polymerases, in the nucleus of the cell. In eukaryotes, this mRNA is initially produced in a premature form (pre-mRNA) which undergoes post-transcriptional modifications to produce mature mRNA. The mature mRNA is exported from the cell nucleus via nuclear pores to the cytoplasm of the cell for translation to occur. During translation, the mRNA is read by ribosomes which use the nucleotide sequence of the mRNA to determine the sequence of amino acids. The ribosomes catalyze the formation of covalent peptide bonds between the encoded amino acids to form a polypeptide chain. Following translation the polypeptide chain must fold to form a functional protein; for example, to function as an enzyme the polypeptide chain must fold correctly to produce a functional active site. To adopt a functional three-dimensional shape, the polypeptide chain must first form a series of smaller underlying structures called secondary structures. The polypeptide chain in these secondary structures then folds to produce the overall 3D tertiary structure. Once correctly folded, the protein can undergo further maturation through different post-translational modifications, which can alter the protein's ability to function, its location within the cell (e.g. cytoplasm or nucleus) and its ability to interact with other proteins. Protein biosynthesis has a key role in disease as changes and errors in this process, through underlying DNA mutations or protein misfolding, are often the underlying causes of a disease. DNA mutations change the subsequent mRNA sequence, which then alters the mRNA encoded amino acid sequence. Mutations can cause the polypeptide chain to be shorter by generating a stop sequence which causes early termination of translation. Alternatively, a mutation in the mRNA sequence changes the specific amino acid encoded at that position in the polypeptide chain. This amino acid change can impact the protein's ability to function or to fold correctly. Misfolded proteins have a tendency to form dense protein clumps, which are often implicated in diseases, particularly neurological disorders including Alzheimer's and Parkinson's disease. Transcription Transcription occurs in the nucleus using DNA as a template to produce mRNA. In eukaryotes, this mRNA molecule is known as pre-mRNA as it undergoes post-transcriptional modifications in the nucleus to produce a mature mRNA molecule. However, in prokaryotes post-transcriptional modifications are not required so the mature mRNA molecule is immediately produced by transcription. Initially, an enzyme known as a helicase acts on the molecule of DNA. DNA has an antiparallel, double helix structure composed of two, complementary polynucleotide strands, held together by hydrogen bonds between the base pairs. 
The helicase disrupts the hydrogen bonds, causing a region of DNA, corresponding to a gene, to unwind, separating the two DNA strands and exposing a series of bases. Despite DNA being a double-stranded molecule, only one of the strands acts as a template for pre-mRNA synthesis; this strand is known as the template strand. The other DNA strand (which is complementary to the template strand) is known as the coding strand. Both DNA and RNA have intrinsic directionality, meaning there are two distinct ends of the molecule. This property of directionality is due to the asymmetrical underlying nucleotide subunits, with a phosphate group on one side of the pentose sugar and a base on the other. The five carbons in the pentose sugar are numbered from 1' (where ' means prime) to 5'. Therefore, the phosphodiester bonds connecting the nucleotides are formed by joining the hydroxyl group on the 3' carbon of one nucleotide to the phosphate group on the 5' carbon of another nucleotide. Hence, the coding strand of DNA runs in a 5' to 3' direction and the complementary, template DNA strand runs in the opposite direction from 3' to 5'. The enzyme RNA polymerase binds to the exposed template strand and reads from the gene in the 3' to 5' direction. Simultaneously, the RNA polymerase synthesizes a single strand of pre-mRNA in the 5'-to-3' direction by catalysing the formation of phosphodiester bonds between activated nucleotides (free in the nucleus) that are capable of complementary base pairing with the template strand. Behind the moving RNA polymerase, the two strands of DNA rejoin, so only 12 base pairs of DNA are exposed at one time. RNA polymerase builds the pre-mRNA molecule at a rate of 20 nucleotides per second, enabling the production of thousands of pre-mRNA molecules from the same gene in an hour. Despite the fast rate of synthesis, the RNA polymerase enzyme contains its own proofreading mechanism. This proofreading mechanism allows the RNA polymerase to remove incorrect nucleotides (which are not complementary to the template strand of DNA) from the growing pre-mRNA molecule through an excision reaction. When RNA polymerase reaches a specific DNA sequence which terminates transcription, it detaches and pre-mRNA synthesis is complete. The pre-mRNA molecule synthesized is complementary to the template DNA strand and shares the same nucleotide sequence as the coding DNA strand. However, there is one crucial difference in the nucleotide composition of DNA and mRNA molecules. DNA is composed of the bases: guanine, cytosine, adenine and thymine (G, C, A and T). RNA is also composed of four bases: guanine, cytosine, adenine and uracil. In RNA molecules, the DNA base thymine is replaced by uracil, which is able to base pair with adenine. Therefore, in the pre-mRNA molecule, all complementary bases which would be thymine in the coding DNA strand are replaced by uracil. Post-transcriptional modifications Once transcription is complete, the pre-mRNA molecule undergoes post-transcriptional modifications to produce a mature mRNA molecule. There are three key steps within post-transcriptional modifications: addition of a 5' cap to the 5' end of the pre-mRNA molecule, addition of a 3' poly(A) tail to the 3' end of the pre-mRNA molecule, and removal of introns via RNA splicing. The 5' cap is added to the 5' end of the pre-mRNA molecule and is composed of a guanine nucleotide modified through methylation. 
The purpose of the 5' cap is to prevent breakdown of mature mRNA molecules before translation; the cap also aids binding of the ribosome to the mRNA to start translation and enables mRNA to be differentiated from other RNAs in the cell. In contrast, the 3' poly(A) tail is added to the 3' end of the mRNA molecule and is composed of 100-200 adenine bases. These distinct mRNA modifications enable the cell to detect that the full mRNA message is intact if both the 5' cap and 3' tail are present. This modified pre-mRNA molecule then undergoes the process of RNA splicing. Genes are composed of a series of introns and exons: introns are nucleotide sequences which do not encode a protein, while exons are nucleotide sequences that directly encode a protein. Introns and exons are present in both the underlying DNA sequence and the pre-mRNA molecule; therefore, to produce a mature mRNA molecule encoding a protein, splicing must occur. During splicing, the intervening introns are removed from the pre-mRNA molecule by a multi-protein complex known as a spliceosome (composed of over 150 proteins and RNA). This mature mRNA molecule is then exported into the cytoplasm through nuclear pores in the envelope of the nucleus. Translation During translation, ribosomes synthesize polypeptide chains from mRNA template molecules. In eukaryotes, translation occurs in the cytoplasm of the cell, where the ribosomes are located either free floating or attached to the endoplasmic reticulum. In prokaryotes, which lack a nucleus, the processes of both transcription and translation occur in the cytoplasm. Ribosomes are complex molecular machines, made of a mixture of protein and ribosomal RNA, arranged into two subunits (a large and a small subunit), which surround the mRNA molecule. The ribosome reads the mRNA molecule in a 5'-3' direction and uses it as a template to determine the order of amino acids in the polypeptide chain. To translate the mRNA molecule, the ribosome uses small molecules, known as transfer RNAs (tRNA), to deliver the correct amino acids to the ribosome. Each tRNA is composed of 70-80 nucleotides and adopts a characteristic cloverleaf structure due to the formation of hydrogen bonds between the nucleotides within the molecule. There are around 60 different types of tRNAs; each tRNA binds to a specific sequence of three nucleotides (known as a codon) within the mRNA molecule and delivers a specific amino acid. The ribosome initially attaches to the mRNA at the start codon (AUG) and begins to translate the molecule. The mRNA nucleotide sequence is read in triplets; three adjacent nucleotides in the mRNA molecule correspond to a single codon. Each tRNA has an exposed sequence of three nucleotides, known as the anticodon, which are complementary in sequence to a specific codon that may be present in mRNA. For example, the first codon encountered is the start codon composed of the nucleotides AUG. The correct tRNA with the anticodon (complementary 3 nucleotide sequence UAC) binds to the mRNA using the ribosome. This tRNA delivers the correct amino acid corresponding to the mRNA codon; in the case of the start codon, this is the amino acid methionine. The next codon (adjacent to the start codon) is then bound by the correct tRNA with complementary anticodon, delivering the next amino acid to the ribosome. The ribosome then uses its peptidyl transferase enzymatic activity to catalyze the formation of the covalent peptide bond between the two adjacent amino acids. 
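The decoding steps described so far, complementary base pairing on the template strand during transcription and codon-by-codon reading from the AUG start codon during translation, can be illustrated with a minimal Python sketch. It is a simplified illustration only: the sequence is invented, the codon table is a small subset of the standard genetic code, and termination at a stop codon (described in the next paragraph) is included so the example runs end to end.

```python
# Minimal sketch: template DNA (read 3'->5') -> mRNA (5'->3') -> peptide.

DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}   # template base -> RNA base

# Small illustrative subset of the standard genetic code (None marks stop codons).
CODON_TABLE = {
    "AUG": "Met", "GCU": "Ala", "GAA": "Glu", "AAA": "Lys", "UGG": "Trp",
    "UAA": None, "UAG": None, "UGA": None,
}

def transcribe(template_3_to_5):
    """Build the mRNA 5'->3' by complementary pairing against the template strand."""
    return "".join(DNA_TO_RNA[base] for base in template_3_to_5)

def translate(mrna):
    """Read codons from the first AUG until a stop codon, collecting amino acids."""
    start = mrna.find("AUG")
    if start == -1:
        return []
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3])
        if amino_acid is None:        # stop codon (or codon absent from this toy table)
            break
        peptide.append(amino_acid)
    return peptide

template = "TACCGACTTTTTACCATT"        # invented template strand, written 3'->5'
mrna = transcribe(template)            # "AUGGCUGAAAAAUGGUAA"
print(mrna, translate(mrna))           # ['Met', 'Ala', 'Glu', 'Lys', 'Trp']
```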
The ribosome then moves along the mRNA molecule to the third codon. The ribosome then releases the first tRNA molecule, as only two tRNA molecules can be brought together by a single ribosome at one time. The next complementary tRNA with the correct anticodon complementary to the third codon is selected, delivering the next amino acid to the ribosome which is covalently joined to the growing polypeptide chain. This process continues with the ribosome moving along the mRNA molecule adding up to 15 amino acids per second to the polypeptide chain. Behind the first ribosome, up to 50 additional ribosomes can bind to the mRNA molecule forming a polysome, this enables simultaneous synthesis of multiple identical polypeptide chains. Termination of the growing polypeptide chain occurs when the ribosome encounters a stop codon (UAA, UAG, or UGA) in the mRNA molecule. When this occurs, no tRNA can recognise it and a release factor induces the release of the complete polypeptide chain from the ribosome. Dr. Har Gobind Khorana, a scientist originating from India, decoded the RNA sequences for about 20 amino acids. He was awarded the Nobel prize in 1968, along with two other scientists, for his work. Protein folding Once synthesis of the polypeptide chain is complete, the polypeptide chain folds to adopt a specific structure which enables the protein to carry out its functions. The basic form of protein structure is known as the primary structure, which is simply the polypeptide chain i.e. a sequence of covalently bonded amino acids. The primary structure of a protein is encoded by a gene. Therefore, any changes to the sequence of the gene can alter the primary structure of the protein and all subsequent levels of protein structure, ultimately changing the overall structure and function. The primary structure of a protein (the polypeptide chain) can then fold or coil to form the secondary structure of the protein. The most common types of secondary structure are known as an alpha helix or beta sheet, these are small structures produced by hydrogen bonds forming within the polypeptide chain. This secondary structure then folds to produce the tertiary structure of the protein. The tertiary structure is the proteins overall 3D structure which is made of different secondary structures folding together. In the tertiary structure, key protein features e.g. the active site, are folded and formed enabling the protein to function. Finally, some proteins may adopt a complex quaternary structure. Most proteins are made of a single polypeptide chain, however, some proteins are composed of multiple polypeptide chains (known as subunits) which fold and interact to form the quaternary structure. Hence, the overall protein is a multi-subunit complex composed of multiple folded, polypeptide chain subunits e.g. haemoglobin. Post-translation events There are events that follow protein biosynthesis such as proteolysis and protein-folding. Proteolysis refers to the cleavage of proteins by proteases and the breakdown of proteins into amino acids by the action of enzymes. Post-translational modifications When protein folding into the mature, functional 3D state is complete, it is not necessarily the end of the protein maturation pathway. A folded protein can still undergo further processing through post-translational modifications. 
There are over 200 known types of post-translational modification, these modifications can alter protein activity, the ability of the protein to interact with other proteins and where the protein is found within the cell e.g. in the cell nucleus or cytoplasm. Through post-translational modifications, the diversity of proteins encoded by the genome is expanded by 2 to 3 orders of magnitude. There are four key classes of post-translational modification: Cleavage Addition of chemical groups Addition of complex molecules Formation of intramolecular bonds Cleavage Cleavage of proteins is an irreversible post-translational modification carried out by enzymes known as proteases. These proteases are often highly specific and cause hydrolysis of a limited number of peptide bonds within the target protein. The resulting shortened protein has an altered polypeptide chain with different amino acids at the start and end of the chain. This post-translational modification often alters the proteins function, the protein can be inactivated or activated by the cleavage and can display new biological activities. Addition of chemical groups Following translation, small chemical groups can be added onto amino acids within the mature protein structure. Examples of processes which add chemical groups to the target protein include methylation, acetylation and phosphorylation. Methylation is the reversible addition of a methyl group onto an amino acid catalyzed by methyltransferase enzymes. Methylation occurs on at least 9 of the 20 common amino acids, however, it mainly occurs on the amino acids lysine and arginine. One example of a protein which is commonly methylated is a histone. Histones are proteins found in the nucleus of the cell. DNA is tightly wrapped round histones and held in place by other proteins and interactions between negative charges in the DNA and positive charges on the histone. A highly specific pattern of amino acid methylation on the histone proteins is used to determine which regions of DNA are tightly wound and unable to be transcribed and which regions are loosely wound and able to be transcribed. Histone-based regulation of DNA transcription is also modified by acetylation. Acetylation is the reversible covalent addition of an acetyl group onto a lysine amino acid by the enzyme acetyltransferase. The acetyl group is removed from a donor molecule known as acetyl coenzyme A and transferred onto the target protein. Histones undergo acetylation on their lysine residues by enzymes known as histone acetyltransferase. The effect of acetylation is to weaken the charge interactions between the histone and DNA, thereby making more genes in the DNA accessible for transcription. The final, prevalent post-translational chemical group modification is phosphorylation. Phosphorylation is the reversible, covalent addition of a phosphate group to specific amino acids (serine, threonine and tyrosine) within the protein. The phosphate group is removed from the donor molecule ATP by a protein kinase and transferred onto the hydroxyl group of the target amino acid, this produces adenosine diphosphate as a byproduct. This process can be reversed and the phosphate group removed by the enzyme protein phosphatase. Phosphorylation can create a binding site on the phosphorylated protein which enables it to interact with other proteins and generate large, multi-protein complexes. Alternatively, phosphorylation can change the level of protein activity by altering the ability of the protein to bind its substrate. 
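One concrete way to compare these chemical-group additions is by the mass each adds to the target protein, which is also how such modifications are routinely detected by mass spectrometry. The sketch below uses standard approximate monoisotopic mass shifts; those values, and the 1000 Da example peptide, are reference figures and an invented example respectively, not data from this article.

```python
# Approximate monoisotopic mass shifts (daltons) for the chemical-group additions
# described above; the numbers are standard reference values, not from this article.
MASS_SHIFTS_DA = {
    "methylation":     14.016,   # +CH2
    "acetylation":     42.011,   # +C2H2O
    "phosphorylation": 79.966,   # +HPO3
}

def modified_mass(base_mass_da, modifications):
    """Mass of a peptide or protein after the listed post-translational modifications."""
    return base_mass_da + sum(MASS_SHIFTS_DA[m] for m in modifications)

# A hypothetical 1000 Da peptide carrying one acetyl group and one phosphate group:
print(modified_mass(1000.0, ["acetylation", "phosphorylation"]))   # 1121.977
```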
Addition of complex molecules Post-translational modifications can incorporate more complex, large molecules into the folded protein structure. One common example of this is glycosylation, the addition of a polysaccharide molecule, which is widely considered to be most common post-translational modification. In glycosylation, a polysaccharide molecule (known as a glycan) is covalently added to the target protein by glycosyltransferases enzymes and modified by glycosidases in the endoplasmic reticulum and Golgi apparatus. Glycosylation can have a critical role in determining the final, folded 3D structure of the target protein. In some cases glycosylation is necessary for correct folding. N-linked glycosylation promotes protein folding by increasing solubility and mediates the protein binding to protein chaperones. Chaperones are proteins responsible for folding and maintaining the structure of other proteins. There are broadly two types of glycosylation, N-linked glycosylation and O-linked glycosylation. N-linked glycosylation starts in the endoplasmic reticulum with the addition of a precursor glycan. The precursor glycan is modified in the Golgi apparatus to produce complex glycan bound covalently to the nitrogen in an asparagine amino acid. In contrast, O-linked glycosylation is the sequential covalent addition of individual sugars onto the oxygen in the amino acids serine and threonine within the mature protein structure. Formation of covalent bonds Many proteins produced within the cell are secreted outside the cell to function as extracellular proteins. Extracellular proteins are exposed to a wide variety of conditions. To stabilize the 3D protein structure, covalent bonds are formed either within the protein or between the different polypeptide chains in the quaternary structure. The most prevalent type is a disulfide bond (also known as a disulfide bridge). A disulfide bond is formed between two cysteine amino acids using their side chain chemical groups containing a Sulphur atom, these chemical groups are known as thiol functional groups. Disulfide bonds act to stabilize the pre-existing structure of the protein. Disulfide bonds are formed in an oxidation reaction between two thiol groups and therefore, need an oxidizing environment to react. As a result, disulfide bonds are typically formed in the oxidizing environment of the endoplasmic reticulum catalyzed by enzymes called protein disulfide isomerases. Disulfide bonds are rarely formed in the cytoplasm as it is a reducing environment. Role of protein synthesis in disease Many diseases are caused by mutations in genes, due to the direct connection between the DNA nucleotide sequence and the amino acid sequence of the encoded protein. Changes to the primary structure of the protein can result in the protein mis-folding or malfunctioning. Mutations within a single gene have been identified as a cause of multiple diseases, including sickle cell disease, known as single gene disorders. Sickle cell disease Sickle cell disease is a group of diseases caused by a mutation in a subunit of hemoglobin, a protein found in red blood cells responsible for transporting oxygen. The most dangerous of the sickle cell diseases is known as sickle cell anemia. Sickle cell anemia is the most common homozygous recessive single gene disorder, meaning the affected individual must carry a mutation in both copies of the affected gene (one inherited from each parent) to experience the disease. 
Hemoglobin has a complex quaternary structure and is composed of four polypeptide subunitstwo A subunits and two B subunits. Patients with sickle cell anemia have a missense or substitution mutation in the gene encoding the hemoglobin B subunit polypeptide chain. A missense mutation means the nucleotide mutation alters the overall codon triplet such that a different amino acid is paired with the new codon. In the case of sickle cell anemia, the most common missense mutation is a single nucleotide mutation from thymine to adenine in the hemoglobin B subunit gene. This changes codon 6 from encoding the amino acid glutamic acid to encoding valine. This change in the primary structure of the hemoglobin B subunit polypeptide chain alters the functionality of the hemoglobin multi-subunit complex in low oxygen conditions. When red blood cells unload oxygen into the tissues of the body, the mutated haemoglobin protein starts to stick together to form a semi-solid structure within the red blood cell. This distorts the shape of the red blood cell, resulting in the characteristic "sickle" shape, and reduces cell flexibility. This rigid, distorted red blood cell can accumulate in blood vessels creating a blockage. The blockage prevents blood flow to tissues and can lead to tissue death which causes great pain to the individual. Cancer Cancers form as a result of gene mutations as well as improper protein translation. In addition to cancer cells proliferating abnormally, they suppress the expression of anti-apoptotic or pro-apoptotic genes or proteins. Most cancer cells see a mutation in the signaling protein Ras, which functions as an on/off signal transductor in cells. In cancer cells, the RAS protein becomes persistently active, thus promoting the proliferation of the cell due to the absence of any regulation. Additionally, most cancer cells carry two mutant copies of the regulator gene p53, which acts as a gatekeeper for damaged genes and initiates apoptosis in malignant cells. In its absence, the cell cannot initiate apoptosis or signal for other cells to destroy it. As the tumor cells proliferate, they either remain confined to one area and are called benign, or become malignant cells that migrate to other areas of the body. Oftentimes, these malignant cells secrete proteases that break apart the extracellular matrix of tissues. This then allows the cancer to enter its terminal stage called Metastasis, in which the cells enter the bloodstream or the lymphatic system to travel to a new part of the body. See also Central dogma of molecular biology Genetic code References External links A more advanced video detailing the different types of post-translational modifications and their chemical structures A useful video visualising the process of converting DNA to protein via transcription and translation Video visualising the process of protein folding from the non-functional primary structure to a mature, folded 3D protein structure with reference to the role of mutations and protein mis-folding in disease Gene expression Proteins Biosynthesis Metabolism
Protein biosynthesis
[ "Chemistry", "Biology" ]
5,089
[ "Biomolecules by chemical classification", "Protein biosynthesis", "Gene expression", "Molecular genetics", "Biosynthesis", "Cellular processes", "Chemical synthesis", "Molecular biology", "Biochemistry", "Proteins", "Metabolism" ]
24,603
https://en.wikipedia.org/wiki/Proteasome
Proteasomes are protein complexes which degrade ubiquitin-tagged proteins by proteolysis, a chemical reaction that breaks peptide bonds. Enzymes that help such reactions are called proteases. Proteasomes are part of a major mechanism by which cells regulate the concentration of particular proteins and degrade misfolded proteins. Proteins are tagged for degradation with a small protein called ubiquitin. The tagging reaction is catalyzed by enzymes called ubiquitin ligases. Once a protein is tagged with a single ubiquitin molecule, this is a signal to other ligases to attach additional ubiquitin molecules. The result is a polyubiquitin chain that is bound by the proteasome, allowing it to degrade the tagged protein. The degradation process yields peptides of about seven to eight amino acids long, which can then be further degraded into shorter amino acid sequences and used in synthesizing new proteins. Proteasomes are found inside all eukaryotes and archaea, and in some bacteria. In eukaryotes, proteasomes are located both in the nucleus and in the cytoplasm. In structure, the proteasome is a cylindrical complex containing a "core" of four stacked rings forming a central pore. Each ring is composed of seven individual proteins. The inner two rings are made of seven β subunits that contain three to seven protease active sites. These sites are located on the interior surface of the rings, so that the target protein must enter the central pore before it is degraded. The outer two rings each contain seven α subunits whose function is to maintain a "gate" through which proteins enter the barrel. These α subunits are controlled by binding to "cap" structures or regulatory particles that recognize polyubiquitin tags attached to protein substrates and initiate the degradation process. The overall system of ubiquitination and proteasomal degradation is known as the ubiquitin–proteasome system. The proteasomal degradation pathway is essential for many cellular processes, including the cell cycle, the regulation of gene expression, and responses to oxidative stress. The importance of proteolytic degradation inside cells and the role of ubiquitin in proteolytic pathways was acknowledged in the award of the 2004 Nobel Prize in Chemistry to Aaron Ciechanover, Avram Hershko and Irwin Rose. Discovery Before the discovery of the ubiquitin–proteasome system, protein degradation in cells was thought to rely mainly on lysosomes, membrane-bound organelles with acidic and protease-filled interiors that can degrade and then recycle exogenous proteins and aged or damaged organelles. However, work by Joseph Etlinger and Alfred L. Goldberg in 1977 on ATP-dependent protein degradation in reticulocytes, which lack lysosomes, suggested the presence of a second intracellular degradation mechanism. This was shown in 1978 to be composed of several distinct protein chains, a novelty among proteases at the time. Later work on modification of histones led to the identification of an unexpected covalent modification of the histone protein by a bond between a lysine side chain of the histone and the C-terminal glycine residue of ubiquitin, a protein that had no known function. It was then discovered that a previously identified protein associated with proteolytic degradation, known as ATP-dependent proteolysis factor 1 (APF-1), was the same protein as ubiquitin. 
The proteolytic activities of this system were isolated as a multi-protein complex originally called the multi-catalytic proteinase complex by Sherwin Wilk and Marion Orlowski. Later, the ATP-dependent proteolytic complex that was responsible for ubiquitin-dependent protein degradation was discovered and was called the 26S proteasome. Much of the early work leading up to the discovery of the ubiquitin proteasome system occurred in the late 1970s and early 1980s at the Technion in the laboratory of Avram Hershko, where Aaron Ciechanover worked as a graduate student. Hershko's year-long sabbatical in the laboratory of Irwin Rose at the Fox Chase Cancer Center provided key conceptual insights, though Rose later downplayed his role in the discovery. The three shared the 2004 Nobel Prize in Chemistry for their work in discovering this system. Although electron microscopy data revealing the stacked-ring structure of the proteasome became available in the mid-1980s, the first structure of the proteasome core particle was not solved by X-ray crystallography until 1994. In 2018, the first atomic structures of the human 26S proteasome holoenzyme in complex with a polyubiquitylated protein substrate were solved by cryogenic electron microscopy, revealing mechanisms by which the substrate is recognized, deubiquitylated, unfolded and degraded by the human 26S proteasome. Structure and organization The proteasome subcomponents are often referred to by their Svedberg sedimentation coefficient (denoted S). The proteasome most exclusively used in mammals is the cytosolic 26S proteasome, which is about 2000 kilodaltons (kDa) in molecular mass containing one 20S protein subunit and two 19S regulatory cap subunits. The core is hollow and provides an enclosed cavity in which proteins are degraded; openings at the two ends of the core allow the target protein to enter. Each end of the core particle associates with a 19S regulatory subunit that contains multiple ATPase active sites and ubiquitin binding sites; it is this structure that recognizes polyubiquitinated proteins and transfers them to the catalytic core. An alternative form of regulatory subunit called the 11S particle can associate with the core in essentially the same manner as the 19S particle; the 11S may play a role in degradation of foreign peptides such as those produced after infection by a virus. 20S core particle The number and diversity of subunits contained in the 20S core particle depends on the organism; the number of distinct and specialized subunits is larger in multicellular than unicellular organisms and larger in eukaryotes than in prokaryotes. All 20S particles consist of four stacked heptameric ring structures that are themselves composed of two different types of subunits; α subunits are structural in nature, whereas β subunits are predominantly catalytic. The α subunits are pseudoenzymes homologous to β subunits. They are assembled with their N-termini adjacent to that of the β subunits. The outer two rings in the stack consist of seven α subunits each, which serve as docking domains for the regulatory particles and the alpha subunits N-termini () form a gate that blocks unregulated access of substrates to the interior cavity. The inner two rings each consist of seven β subunits and in their N-termini contain the protease active sites that perform the proteolysis reactions. Three distinct catalytic activities were identified in the purified complex: chymotrypsin-like, trypsin-like and peptidylglutamyl-peptide hydrolyzing. 
The size of the proteasome is relatively conserved and is about 150 angstroms (Å) by 115 Å. The interior chamber is at most 53 Å wide, though the entrance can be as narrow as 13 Å, suggesting that substrate proteins must be at least partially unfolded to enter. In archaea such as Thermoplasma acidophilum, all the α and all the β subunits are identical, whereas eukaryotic proteasomes such as those in yeast contain seven distinct types of each subunit. In mammals, the β1, β2, and β5 subunits are catalytic; although they share a common mechanism, they have three distinct substrate specificities considered chymotrypsin-like, trypsin-like, and peptidyl-glutamyl peptide-hydrolyzing (PHGH). Alternative β forms denoted β1i, β2i, and β5i can be expressed in hematopoietic cells in response to exposure to pro-inflammatory signals such as cytokines, in particular, interferon gamma. The proteasome assembled with these alternative subunits is known as the immunoproteasome, whose substrate specificity is altered relative to the normal proteasome. Recently an alternative proteasome was identified in human cells that lack the α3 core subunit. These proteasomes (known as the α4-α4 proteasomes) instead form 20S core particles containing an additional α4 subunit in place of the missing α3 subunit. These alternative 'α4-α4' proteasomes have been known previously to exist in yeast. Although the precise function of these proteasome isoforms is still largely unknown, cells expressing these proteasomes show enhanced resistance to toxicity induced by metallic ions such as cadmium. 19S regulatory particle The 19S particle in eukaryotes consists of 19 individual proteins and is divisible into two subassemblies, a 9-subunit base that binds directly to the α ring of the 20S core particle, and a 10-subunit lid. Six of the nine base proteins are ATPase subunits from the AAA Family, and an evolutionary homolog of these ATPases exists in archaea, called PAN (proteasome-activating nucleotidase). The association of the 19S and 20S particles requires the binding of ATP to the 19S ATPase subunits, and ATP hydrolysis is required for the assembled complex to degrade folded and ubiquitinated proteins. Note that only the step of substrate unfolding requires energy from ATP hydrolysis, while ATP-binding alone can support all the other steps required for protein degradation (e.g., complex assembly, gate opening, translocation, and proteolysis). In fact, ATP binding to the ATPases by itself supports the rapid degradation of unfolded proteins. However, while ATP hydrolysis is required for unfolding only, it is not yet clear whether this energy may be used in the coupling of some of these steps. In 2012, two independent efforts have elucidated the molecular architecture of the 26S proteasome by single particle electron microscopy. In 2016, three independent efforts have determined the first near-atomic resolution structure of the human 26S proteasome in the absence of substrates by cryo-EM. In 2018, a major effort has elucidated the detailed mechanisms of deubiquitylation, initiation of translocation and processive unfolding of substrates by determining seven atomic structures of substrate-engaged 26S proteasome simultaneously. In the heart of the 19S, directly adjacent to the 20S, are the AAA-ATPases (AAA proteins) that assemble to a heterohexameric ring of the order Rpt1/Rpt2/Rpt6/Rpt3/Rpt4/Rpt5. This ring is a trimer of dimers: Rpt1/Rpt2, Rpt6/Rpt3, and Rpt4/Rpt5 dimerize via their N-terminal coiled-coils. 
These coiled-coils protrude from the hexameric ring. The largest regulatory particle non-ATPases Rpn1 and Rpn2 bind to the tips of Rpt1/2 and Rpt6/3, respectively. The ubiquitin receptor Rpn13 binds to Rpn2 and completes the base sub-complex. The lid covers one half of the AAA-ATPase hexamer (Rpt6/Rpt3/Rpt4) and, unexpectedly, directly contacts the 20S via Rpn6 and to lesser extent Rpn5. The subunits Rpn9, Rpn5, Rpn6, Rpn7, Rpn3, and Rpn12, which are structurally related among themselves and to subunits of the COP9 complex and eIF3 (hence called PCI subunits) assemble to a horseshoe-like structure enclosing the Rpn8/Rpn11 heterodimer. Rpn11, the deubiquitinating enzyme, is placed at the mouth of the AAA-ATPase hexamer, ideally positioned to remove ubiquitin moieties immediately before translocation of substrates into the 20S. The second ubiquitin receptor identified to date, Rpn10, is positioned at the periphery of the lid, near subunits Rpn8 and Rpn9. Conformational changes of 19S The 19S regulatory particle within the 26S proteasome holoenzyme has been observed in six strongly differing conformational states in the absence of substrates to date. A hallmark of the AAA-ATPase configuration in this predominant low-energy state is a staircase- or lockwasher-like arrangement of the AAA-domains. In the presence of ATP but absence of substrate three alternative, less abundant conformations of the 19S are adopted primarily differing in the positioning of the lid with respect to the AAA-ATPase module. In the presence of ATP-γS or a substrate, considerably more conformations have been observed displaying dramatic structural changes of the AAA-ATPase module. Some of the substrate-bound conformations bear high similarity to the substrate-free ones, but they are not entirely identical, particularly in the AAA-ATPase module. Prior to the 26S assembly, the 19S regulatory particle in a free form has also been observed in seven conformational states. Notably, all these conformers are somewhat different and present distinct features. Thus, the 19S regulatory particle can sample at least 20 conformational states under different physiological conditions. Regulation of the 20S by the 19S The 19S regulatory particle is responsible for stimulating the 20S to degrade proteins. A primary function of the 19S regulatory ATPases is to open the gate in the 20S that blocks the entry of substrates into the degradation chamber. The mechanism by which the proteasomal ATPase open this gate has been recently elucidated. 20S gate opening, and thus substrate degradation, requires the C-termini of the proteasomal ATPases, which contains a specific motif (i.e., HbYX motif). The ATPases C-termini bind into pockets in the top of the 20S, and tether the ATPase complex to the 20S proteolytic complex, thus joining the substrate unfolding equipment with the 20S degradation machinery. Binding of these C-termini into these 20S pockets by themselves stimulates opening of the gate in the 20S in much the same way that a "key-in-a-lock" opens a door. The precise mechanism by which this "key-in-a-lock" mechanism functions has been structurally elucidated in the context of human 26S proteasome at near-atomic resolution, suggesting that the insertion of five C-termini of ATPase subunits Rpt1/2/3/5/6 into the 20S surface pockets are required to fully open the 20S gate. 
Other regulatory particles 20S proteasomes can also associate with a second type of regulatory particle, the 11S regulatory particle, a heptameric structure that does not contain any ATPases and can promote the degradation of short peptides but not of complete proteins. It is presumed that this is because the complex cannot unfold larger substrates. This structure is also known as PA28, REG, or PA26. The mechanisms by which it binds to the core particle through the C-terminal tails of its subunits and induces α-ring conformational changes to open the 20S gate suggest a similar mechanism for the 19S particle. The expression of the 11S particle is induced by interferon gamma and is responsible, in conjunction with the immunoproteasome β subunits, for the generation of peptides that bind to the major histocompatibility complex. Yet another type of non-ATPase regulatory particle is the Blm10 (yeast) or PA200/PSME4 (human). It opens only one α subunit in the 20S gate and itself folds into a dome with a very small pore over it. Assembly The assembly of the proteasome is a complex process due to the number of subunits that must associate to form an active complex. The β subunits are synthesized with N-terminal "propeptides" that are post-translationally modified during the assembly of the 20S particle to expose the proteolytic active site. The 20S particle is assembled from two half-proteasomes, each of which consists of a seven-membered pro-β ring attached to a seven-membered α ring. The association of the β rings of the two half-proteasomes triggers threonine-dependent autolysis of the propeptides to expose the active site. These β interactions are mediated mainly by salt bridges and hydrophobic interactions between conserved alpha helices whose disruption by mutation damages the proteasome's ability to assemble. The assembly of the half-proteasomes, in turn, is initiated by the assembly of the α subunits into their heptameric ring, forming a template for the association of the corresponding pro-β ring. The assembly of α subunits has not been characterized. Only recently, the assembly process of the 19S regulatory particle has been elucidated to considerable extent. The 19S regulatory particle assembles as two distinct subcomponents, the base and the lid. Assembly of the base complex is facilitated by four assembly chaperones, Hsm3/S5b, Nas2/p27, Rpn14/PAAF1, and Nas6/gankyrin (names for yeast/mammals). These assembly chaperones bind to the AAA-ATPase subunits and their main function seems to be to ensure proper assembly of the heterohexameric AAA-ATPase ring. To date it is still under debate whether the base complex assembles separately, whether the assembly is templated by the 20S core particle, or whether alternative assembly pathways exist. In addition to the four assembly chaperones, the deubiquitinating enzyme Ubp6/Usp14 also promotes base assembly, but it is not essential. The lid assembles separately in a specific order and does not require assembly chaperones. Protein degradation process Ubiquitination and targeting Proteins are targeted for degradation by the proteasome with covalent modification of a lysine residue that requires the coordinated reactions of three enzymes. In the first step, a ubiquitin-activating enzyme (known as E1) hydrolyzes ATP and adenylylates a ubiquitin molecule. This is then transferred to E1's active-site cysteine residue in concert with the adenylylation of a second ubiquitin. 
This adenylylated ubiquitin is then transferred to a cysteine of a second enzyme, ubiquitin-conjugating enzyme (E2). In the last step, a member of a highly diverse class of enzymes known as ubiquitin ligases (E3) recognizes the specific protein to be ubiquitinated and catalyzes the transfer of ubiquitin from E2 to this target protein. A target protein must be labeled with at least four ubiquitin monomers (in the form of a polyubiquitin chain) before it is recognized by the proteasome lid. It is therefore the E3 that confers substrate specificity to this system. The number of E1, E2, and E3 proteins expressed depends on the organism and cell type, but there are many different E3 enzymes present in humans, indicating that there is a huge number of targets for the ubiquitin proteasome system. The mechanism by which a polyubiquitinated protein is targeted to the proteasome is not fully understood. A few high-resolution snapshots of the proteasome bound to a polyubiquitinated protein suggest that ubiquitin receptors might be coordinated with deubiquitinase Rpn11 for initial substrate targeting and engagement. Ubiquitin-receptor proteins have an N-terminal ubiquitin-like (UBL) domain and one or more ubiquitin-associated (UBA) domains. The UBL domains are recognized by the 19S proteasome caps and the UBA domains bind ubiquitin via three-helix bundles. These receptor proteins may escort polyubiquitinated proteins to the proteasome, though the specifics of this interaction and its regulation are unclear. The ubiquitin protein itself is 76 amino acids long and was named due to its ubiquitous nature, as it has a highly conserved sequence and is found in all known eukaryotic organisms. The genes encoding ubiquitin in eukaryotes are arranged in tandem repeats, possibly due to the heavy transcription demands on these genes to produce enough ubiquitin for the cell. It has been proposed that ubiquitin is the slowest-evolving protein identified to date. Ubiquitin contains seven lysine residues to which another ubiquitin can be ligated, resulting in different types of polyubiquitin chains. Chains in which each additional ubiquitin is linked to lysine 48 of the previous ubiquitin have a role in proteasome targeting, while other types of chains may be involved in other processes. Deubiquitylation Ubiquitin chains conjugated to a protein targeted for proteasomal degradation are normally removed by any one of the three proteasome-associated deubiquitylating enzymes (DUBs), which are Rpn11, Ubp6/USP14 and UCH37. This process recycles ubiquitin and is essential to maintain the ubiquitin reservoir in cells. Rpn11 is an intrinsic, stoichiometric subunit of the 19S regulatory particle and is essential for the function of 26S proteasome. The DUB activity of Rpn11 is enhanced in the proteasome as compared to its monomeric form. How Rpn11 removes a ubiquitin chain en bloc from a protein substrate was captured by an atomic structure of the substrate-engaged human proteasome in a conformation named EB. Interestingly, this structure also shows how the DUB activity is coupled to the substrate recognition by the proteasomal AAA-ATPase. In contrast to Rpn11, USP14 and UCH37 are the DUBs that do not always associated with the proteasome. In cells, about 10-40% of the proteasomes were found to have USP14 associated. Both Ubp6/USP14 and UCH37 are largely activated by the proteasome and exhibit a very low DUB activity alone. 
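A minimal bookkeeping sketch of the targeting logic described above is given below: each E1–E2–E3 cycle adds one ubiquitin to a lysine 48-linked chain, and a substrate is treated as recognizable once the chain reaches the four-ubiquitin threshold mentioned earlier. The class and substrate names and the strictly linear chain growth are illustrative assumptions; real chain assembly and editing are far more complex.

# Toy model of ubiquitin-dependent targeting: each round stands for one
# E1 (activation) -> E2 (conjugation) -> E3 (ligation) cycle that adds a single
# ubiquitin to a Lys48-linked chain on the substrate. The 4-ubiquitin threshold
# follows the rule of thumb stated in the text; everything else is illustrative.
from dataclasses import dataclass

MIN_CHAIN_FOR_RECOGNITION = 4  # ubiquitin monomers

@dataclass
class Substrate:
    name: str
    k48_chain_length: int = 0  # number of Lys48-linked ubiquitins attached

def run_ubiquitination_cycles(substrate, cycles):
    """Model successive E1/E2/E3 rounds, each adding one ubiquitin."""
    for _ in range(cycles):
        substrate.k48_chain_length += 1
    return substrate

def recognized_by_proteasome(substrate):
    return substrate.k48_chain_length >= MIN_CHAIN_FOR_RECOGNITION

target = run_ubiquitination_cycles(Substrate("example_substrate"), cycles=5)
print(target.name, recognized_by_proteasome(target))  # chain of 5 >= 4 -> True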
Once activated, USP14 was found to suppress proteasome function by its DUB activity and by inducing parallel pathways of proteasome conformational transitions, one of which turned out to directly prohibit substrate insertion into the AAA-ATPase, as intuitively observed by time-resolved cryogenic electron microscopy. It appears that USP14 regulates proteasome function at multiple checkpoints by both catalytically competing with Rpn11 and allosterically reprogramming the AAA-ATPase states, which is rather unexpected for a DUB. These observations imply that the proteasome regulation may depend on its dynamic transitions of conformational states. Unfolding and translocation After a protein has been ubiquitinated, it is recognized by the 19S regulatory particle in an ATP-dependent binding step. The substrate protein must then enter the interior of the 20S subunit to come in contact with the proteolytic active sites. Because the 20S particle's central channel is narrow and gated by the N-terminal tails of the α ring subunits, the substrates must be at least partially unfolded before they enter the core. The passage of the unfolded substrate into the core is called translocation and necessarily occurs after deubiquitination. However, the order in which substrates are deubiquitinated and unfolded is not yet clear. Which of these processes is the rate-limiting step in the overall proteolysis reaction depends on the specific substrate; for some proteins, the unfolding process is rate-limiting, while deubiquitination is the slowest step for other proteins. The extent to which substrates must be unfolded before translocation is suggested to be around 20 amino acid residues by the atomic structure of the substrate-engaged 26S proteasome in the deubiquitylation-compatible state, but substantial tertiary structure, and in particular nonlocal interactions such as disulfide bonds, are sufficient to inhibit degradation. The presence of intrinsically disordered protein segments of sufficient size, either at the protein terminus or internally, has also been proposed to facilitate efficient initiation of degradation. The gate formed by the α subunits prevents peptides longer than about four residues from entering the interior of the 20S particle. The ATP molecules bound before the initial recognition step are hydrolyzed before translocation. While energy is needed for substrate unfolding, it is not required for translocation. The assembled 26S proteasome can degrade unfolded proteins in the presence of a non-hydrolyzable ATP analog, but cannot degrade folded proteins, indicating that energy from ATP hydrolysis is used for substrate unfolding. Passage of the unfolded substrate through the opened gate occurs via facilitated diffusion if the 19S cap is in the ATP-bound state. The mechanism for unfolding of globular proteins is necessarily general, but somewhat dependent on the amino acid sequence. Long sequences of alternating glycine and alanine have been shown to inhibit substrate unfolding, decreasing the efficiency of proteasomal degradation; this results in the release of partially degraded byproducts, possibly due to the decoupling of the ATP hydrolysis and unfolding steps. Such glycine-alanine repeats are also found in nature, for example in silk fibroin; in particular, certain Epstein–Barr virus gene products bearing this sequence can stall the proteasome, helping the virus propagate by preventing antigen presentation on the major histocompatibility complex. 
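Because the glycine–alanine repeats mentioned above can stall degradation, a simple sequence scan for long uninterrupted Gly/Ala runs is one way to flag candidate stalling regions. The sketch below is only a toy heuristic; the 20-residue threshold and the example sequence are arbitrary assumptions, not experimentally derived values.

# Toy scan for glycine-alanine repeat runs of the kind discussed above
# (e.g., the Gly-Ala repeat of an Epstein-Barr virus gene product).
def longest_gly_ala_run(sequence):
    longest = current = 0
    for residue in sequence:
        current = current + 1 if residue in ("G", "A") else 0
        longest = max(longest, current)
    return longest

def may_stall_proteasome(sequence, min_run=20):
    """Flag sequences with an uninterrupted Gly/Ala run of at least min_run residues."""
    return longest_gly_ala_run(sequence) >= min_run

print(may_stall_proteasome("MSDEG" + "GA" * 15 + "KLREQ"))  # True for this made-up sequence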
Proteolysis The proteasome functions as an endoprotease. The mechanism of proteolysis by the β subunits of the 20S core particle is through a threonine-dependent nucleophilic attack. This mechanism may depend on an associated water molecule for deprotonation of the reactive threonine hydroxyl. Degradation occurs within the central chamber formed by the association of the two β rings and normally does not release partially degraded products, instead reducing the substrate to short polypeptides typically 7–9 residues long, though they can range from 4 to 25 residues, depending on the organism and substrate. The biochemical mechanism that determines product length is not fully characterized. Although the three catalytic β subunits have a common mechanism, they have slightly different substrate specificities, which are considered chymotrypsin-like, trypsin-like, and peptidyl-glutamyl peptide-hydrolyzing (PHGH)-like. These variations in specificity are the result of interatomic contacts with local residues near the active sites of each subunit. Each catalytic β subunit also possesses a conserved lysine residue required for proteolysis. Although the proteasome normally produces very short peptide fragments, in some cases these products are themselves biologically active and functional molecules. Certain transcription factors regulating the expression of specific genes, including one component of the mammalian complex NF-κB, are synthesized as inactive precursors whose ubiquitination and subsequent proteasomal degradation converts them to an active form. Such activity requires the proteasome to cleave the substrate protein internally, rather than processively degrading it from one terminus. It has been suggested that long loops on these proteins' surfaces serve as the proteasomal substrates and enter the central cavity, while the majority of the protein remains outside. Similar effects have been observed in yeast proteins; this mechanism of selective degradation is known as regulated ubiquitin/proteasome dependent processing (RUP). Ubiquitin-independent degradation Although most proteasomal substrates must be ubiquitinated before being degraded, there are some exceptions to this general rule, especially when the proteasome plays a normal role in the post-translational processing of the protein. The proteasomal activation of NF-κB by processing p105 into p50 via internal proteolysis is one major example. Some proteins that are hypothesized to be unstable due to intrinsically unstructured regions, are degraded in a ubiquitin-independent manner. The most well-known example of a ubiquitin-independent proteasome substrate is the enzyme ornithine decarboxylase. Ubiquitin-independent mechanisms targeting key cell cycle regulators such as p53 have also been reported, although p53 is also subject to ubiquitin-dependent degradation. Finally, structurally abnormal, misfolded, or highly oxidized proteins are also subject to ubiquitin-independent and 19S-independent degradation under conditions of cellular stress. Evolution The 20S proteasome is both ubiquitous and essential in eukaryotes and archaea. The bacterial order Actinomycetales, also share homologs of the 20S proteasome, whereas most bacteria possess heat shock genes hslV and hslU, whose gene products are a multimeric protease arranged in a two-layered ring and an ATPase. The hslV protein has been hypothesized to resemble the likely ancestor of the 20S proteasome. 
In general, HslV is not essential in bacteria, and not all bacteria possess it, whereas some protists possess both the 20S and the hslV systems. Many bacteria also possess other homologs of the proteasome and an associated ATPase, most notably ClpP and ClpX. This redundancy explains why the HslUV system is not essential. Sequence analysis suggests that the catalytic β subunits diverged earlier in evolution than the predominantly structural α subunits. In bacteria that express a 20S proteasome, the β subunits have high sequence identity to archaeal and eukaryotic β subunits, whereas the α sequence identity is much lower. The presence of 20S proteasomes in bacteria may result from lateral gene transfer, while the diversification of subunits among eukaryotes is ascribed to multiple gene duplication events. Cell cycle control Cell cycle progression is controlled by ordered action of cyclin-dependent kinases (CDKs), activated by specific cyclins that demarcate phases of the cell cycle. Mitotic cyclins, which persist in the cell for only a few minutes, have one of the shortest life spans of all intracellular proteins. After a CDK-cyclin complex has performed its function, the associated cyclin is polyubiquitinated and destroyed by the proteasome, which provides directionality for the cell cycle. In particular, exit from mitosis requires the proteasome-dependent dissociation of the regulatory component cyclin B from the mitosis promoting factor complex. In vertebrate cells, "slippage" through the mitotic checkpoint leading to premature M phase exit can occur despite the delay of this exit by the spindle checkpoint. Earlier cell cycle checkpoints such as post-restriction point check between G1 phase and S phase similarly involve proteasomal degradation of cyclin A, whose ubiquitination is promoted by the anaphase promoting complex (APC), an E3 ubiquitin ligase. The APC and the Skp1/Cul1/F-box protein complex (SCF complex) are the two key regulators of cyclin degradation and checkpoint control; the SCF itself is regulated by the APC via ubiquitination of the adaptor protein, Skp2, which prevents SCF activity before the G1-S transition. Individual components of the 19S particle have their own regulatory roles. Gankyrin, a recently identified oncoprotein, is one of the 19S subcomponents that also tightly binds the cyclin-dependent kinase CDK4 and plays a key role in recognizing ubiquitinated p53, via its affinity for the ubiquitin ligase MDM2. Gankyrin is anti-apoptotic and has been shown to be overexpressed in some tumor cell types such as hepatocellular carcinoma. Like eukaryotes, some archaea also use the proteasome to control cell cycle, specifically by controlling ESCRT-III-mediated cell division. Regulation of plant growth In plants, signaling by auxins, or phytohormones that order the direction and tropism of plant growth, induces the targeting of a class of transcription factor repressors known as Aux/IAA proteins for proteasomal degradation. These proteins are ubiquitinated by SCFTIR1, or SCF in complex with the auxin receptor TIR1. Degradation of Aux/IAA proteins derepresses transcription factors in the auxin-response factor (ARF) family and induces ARF-directed gene expression. The cellular consequences of ARF activation depend on the plant type and developmental stage, but are involved in directing growth in roots and leaf veins. The specific response to ARF derepression is thought to be mediated by specificity in the pairing of individual ARF and Aux/IAA proteins. 
Apoptosis Both internal and external signals can lead to the induction of apoptosis, or programmed cell death. The resulting deconstruction of cellular components is primarily carried out by specialized proteases known as caspases, but the proteasome also plays important and diverse roles in the apoptotic process. The involvement of the proteasome in this process is indicated by both the increase in protein ubiquitination, and of E1, E2, and E3 enzymes that is observed well in advance of apoptosis. During apoptosis, proteasomes localized to the nucleus have also been observed to translocate to outer membrane blebs characteristic of apoptosis. Proteasome inhibition has different effects on apoptosis induction in different cell types. In general, the proteasome is not required for apoptosis, although inhibiting it is pro-apoptotic in most cell types that have been studied. Apoptosis is mediated through disrupting the regulated degradation of pro-growth cell cycle proteins. However, some cell lines — in particular, primary cultures of quiescent and differentiated cells such as thymocytes and neurons — are prevented from undergoing apoptosis on exposure to proteasome inhibitors. The mechanism for this effect is not clear, but is hypothesized to be specific to cells in quiescent states, or to result from the differential activity of the pro-apoptotic kinase JNK. The ability of proteasome inhibitors to induce apoptosis in rapidly dividing cells has been exploited in several recently developed chemotherapy agents such as bortezomib and . Response to cellular stress In response to cellular stresses – such as infection, heat shock, or oxidative damage – heat shock proteins that identify misfolded or unfolded proteins and target them for proteasomal degradation are expressed. Both Hsp27 and Hsp90—chaperone proteins have been implicated in increasing the activity of the ubiquitin-proteasome system, though they are not direct participants in the process. Hsp70, on the other hand, binds exposed hydrophobic patches on the surface of misfolded proteins and recruits E3 ubiquitin ligases such as CHIP to tag the proteins for proteasomal degradation. The CHIP protein (carboxyl terminus of Hsp70-interacting protein) is itself regulated via inhibition of interactions between the E3 enzyme CHIP and its E2 binding partner. Similar mechanisms exist to promote the degradation of oxidatively damaged proteins via the proteasome system. In particular, proteasomes localized to the nucleus are regulated by PARP and actively degrade inappropriately oxidized histones. Oxidized proteins, which often form large amorphous aggregates in the cell, can be degraded directly by the 20S core particle without the 19S regulatory cap and do not require ATP hydrolysis or tagging with ubiquitin. However, high levels of oxidative damage increases the degree of cross-linking between protein fragments, rendering the aggregates resistant to proteolysis. Larger numbers and sizes of such highly oxidized aggregates are associated with aging. Dysregulation of the ubiquitin proteasome system may contribute to several neural diseases. It may lead to brain tumors such as astrocytomas. In some of the late-onset neurodegenerative diseases that share aggregation of misfolded proteins as a common feature, such as Parkinson's disease and Alzheimer's disease, large insoluble aggregates of misfolded proteins can form and then result in neurotoxicity, through mechanisms that are not yet well understood. 
Decreased proteasome activity has been suggested as a cause of aggregation and Lewy body formation in Parkinson's. This hypothesis is supported by the observation that yeast models of Parkinson's are more susceptible to toxicity from α-synuclein, the major protein component of Lewy bodies, under conditions of low proteasome activity. Impaired proteasomal activity may underlie cognitive disorders such as the autism spectrum disorders, and muscle and nerve diseases such as inclusion body myopathy. Role in the immune system The proteasome plays a straightforward but critical role in the function of the adaptive immune system. Peptide antigens are displayed by the major histocompatibility complex class I (MHC) proteins on the surface of antigen-presenting cells. These peptides are products of proteasomal degradation of proteins originated by the invading pathogen. Although constitutively expressed proteasomes can participate in this process, a specialized complex composed of proteins, whose expression is induced by interferon gamma, are the primary producers of peptides which are optimal in size and composition for MHC binding. These proteins whose expression increases during the immune response include the 11S regulatory particle, whose main known biological role is regulating the production of MHC ligands, and specialized β subunits called β1i, β2i, and β5i with altered substrate specificity. The complex formed with the specialized β subunits is known as the immunoproteasome. Another β5i variant subunit, β5t, is expressed in the thymus, leading to a thymus-specific "thymoproteasome" whose function is as yet unclear. The strength of MHC class I ligand binding is dependent on the composition of the ligand C-terminus, as peptides bind by hydrogen bonding and by close contacts with a region called the "B pocket" on the MHC surface. Many MHC class I alleles prefer hydrophobic C-terminal residues, and the immunoproteasome complex is more likely to generate hydrophobic C-termini. Due to its role in generating the activated form of NF-κB, an anti-apoptotic and pro-inflammatory regulator of cytokine expression, proteasomal activity has been linked to inflammatory and autoimmune diseases. Increased levels of proteasome activity correlate with disease activity and have been implicated in autoimmune diseases including systemic lupus erythematosus and rheumatoid arthritis. The proteasome is also involved in Intracellular antibody-mediated proteolysis of antibody-bound virions. In this neutralisation pathway, TRIM21 (a protein of the tripartite motif family) binds with immunoglobulin G to direct the virion to the proteasome where it is degraded. Proteasome inhibitors Proteasome inhibitors have effective anti-tumor activity in cell culture, inducing apoptosis by disrupting the regulated degradation of pro-growth cell cycle proteins. This approach of selectively inducing apoptosis in tumor cells has proven effective in animal models and human trials. Lactacystin, a natural product synthesized by Streptomyces bacteria, was the first non-peptidic proteasome inhibitor discovered and is widely used as a research tool in biochemistry and cell biology. Lactacystin was licensed to Myogenics/Proscript, which was acquired by Millennium Pharmaceuticals, now part of Takeda Pharmaceuticals. Lactacystin covalently modifies the amino-terminal threonine of catalytic β subunits of the proteasome, particularly the β5 subunit responsible for the proteasome's chymotrypsin-like activity. 
This discovery helped to establish the proteasome as a mechanistically novel class of protease: an amino-terminal threonine protease. Bortezomib (boronated MG132), a molecule developed by Millennium Pharmaceuticals and marketed as Velcade, is the first proteasome inhibitor to reach clinical use as a chemotherapy agent. Bortezomib is used in the treatment of multiple myeloma. Notably, multiple myeloma has been observed to result in increased proteasome-derived peptide levels in blood serum that decrease to normal levels in response to successful chemotherapy. Studies in animals have indicated that bortezomib may also have clinically significant effects in pancreatic cancer. Preclinical and early clinical studies have been started to examine bortezomib's effectiveness in treating other B-cell-related cancers, particularly some types of non-Hodgkin's lymphoma. Clinical results also seem to justify the use of proteasome inhibitors combined with chemotherapy for B-cell acute lymphoblastic leukemia. Proteasome inhibitors can kill some types of cultured leukemia cells that are resistant to glucocorticoids. The molecule ritonavir, marketed as Norvir, was developed as a protease inhibitor and used to target HIV infection. However, it has been shown to inhibit proteasomes as well as free proteases; to be specific, the chymotrypsin-like activity of the proteasome is inhibited by ritonavir, while the trypsin-like activity is somewhat enhanced. Studies in animal models suggest that ritonavir may have inhibitory effects on the growth of glioma cells. Proteasome inhibitors have also shown promise in treating autoimmune diseases in animal models. For example, studies in mice bearing human skin grafts found a reduction in the size of lesions from psoriasis after treatment with a proteasome inhibitor. Inhibitors also show positive effects in rodent models of asthma. Labeling and inhibition of the proteasome is also of interest in laboratory settings for both in vitro and in vivo study of proteasomal activity in cells. The most commonly used laboratory inhibitors are lactacystin and the peptide aldehyde MG132, initially developed by the Goldberg lab. Fluorescent inhibitors have also been developed to specifically label the active sites of the assembled proteasome. Clinical significance The proteasome and its subunits are of clinical significance for at least two reasons: (1) a compromised complex assembly or a dysfunctional proteasome can be associated with the underlying pathophysiology of specific diseases, and (2) they can be exploited as drug targets for therapeutic interventions. More recently, effort has been made to consider the proteasome for the development of novel diagnostic markers and strategies. An improved and comprehensive understanding of the pathophysiology of the proteasome should lead to clinical applications in the future. The proteasomes form a pivotal component of the ubiquitin–proteasome system (UPS) and the corresponding cellular Protein Quality Control (PQC). Protein ubiquitination and subsequent proteolysis and degradation by the proteasome are important mechanisms in the regulation of the cell cycle, cell growth and differentiation, gene transcription, signal transduction and apoptosis. Proteasome defects lead to reduced proteolytic activity and the accumulation of damaged or misfolded proteins, which may contribute to neurodegenerative disease, cardiovascular diseases, inflammatory responses and autoimmune diseases, and systemic DNA damage responses leading to malignancies. 
Research has implicated UPS defects in the pathogenesis of neurodegenerative and myodegenerative disorders, including Alzheimer's disease, Parkinson's disease and Pick's disease, amyotrophic lateral sclerosis (ALS), Huntington's disease, Creutzfeldt–Jakob disease, and motor neuron diseases, polyglutamine (PolyQ) diseases, muscular dystrophies and several rare forms of neurodegenerative diseases associated with dementia. As part of the ubiquitin–proteasome system (UPS), the proteasome maintains cardiac protein homeostasis and thus plays a significant role in cardiac ischemic injury, ventricular hypertrophy and heart failure. Additionally, evidence is accumulating that the UPS plays an essential role in malignant transformation. UPS proteolysis plays a major role in responses of cancer cells to stimulatory signals that are critical for the development of cancer. Accordingly, gene expression by degradation of transcription factors, such as p53, c-jun, c-Fos, NF-κB, c-Myc, HIF-1α, MATα2, STAT3, sterol-regulated element-binding proteins and androgen receptors are all controlled by the UPS and thus involved in the development of various malignancies. Moreover, the UPS regulates the degradation of tumor suppressor gene products such as adenomatous polyposis coli (APC) in colorectal cancer, retinoblastoma (Rb). and von Hippel–Lindau tumor suppressor (VHL), as well as a number of proto-oncogenes (Raf, Myc, Myb, Rel, Src, Mos, ABL). The UPS is also involved in the regulation of inflammatory responses. This activity is usually attributed to the role of proteasomes in the activation of NF-κB which further regulates the expression of pro inflammatory cytokines such as TNF-α, IL-β, IL-8, adhesion molecules (ICAM-1, VCAM-1, P-selectin) and prostaglandins and nitric oxide (NO). Additionally, the UPS also plays a role in inflammatory responses as regulators of leukocyte proliferation, mainly through proteolysis of cyclines and the degradation of CDK inhibitors. Lastly, autoimmune disease patients with SLE, Sjögren syndrome and rheumatoid arthritis (RA) predominantly exhibit circulating proteasomes which can be applied as clinical biomarkers. See also The Proteolysis Map DSS1/SEM1 protein family Exosome complex Endoplasmic-reticulum-associated protein degradation JUNQ and IPOD References Further reading The Yeast 26S Proteasome with list of subunits and pictures External links Proteasome subunit nomenclature guide 3D proteasome structures in the EM Data Bank(EMDB) Key points of proteasome function Proteins Protein complexes Organelles Apoptosis
Proteasome
[ "Chemistry" ]
9,993
[ "Biomolecules by chemical classification", "Signal transduction", "Apoptosis", "Molecular biology", "Proteins" ]
24,669
https://en.wikipedia.org/wiki/Pauli%20exclusion%20principle
In quantum mechanics, the Pauli exclusion principle (German: Pauli-Ausschlussprinzip) states that two or more identical particles with half-integer spins (i.e. fermions) cannot simultaneously occupy the same quantum state within a system that obeys the laws of quantum mechanics. This principle was formulated by Austrian physicist Wolfgang Pauli in 1925 for electrons, and later extended to all fermions with his spin–statistics theorem of 1940. In the case of electrons in atoms, the exclusion principle can be stated as follows: in a poly-electron atom it is impossible for any two electrons to have the same values of all four of their quantum numbers, which are: n, the principal quantum number; ℓ, the azimuthal quantum number; mℓ, the magnetic quantum number; and ms, the spin quantum number. For example, if two electrons reside in the same orbital, then their values of n, ℓ, and mℓ are equal. In that case, the two values of ms (spin) must be different. Since the only two possible values for the spin projection ms are +1/2 and −1/2, it follows that one electron must have ms = +1/2 and one ms = −1/2. Particles with an integer spin (bosons) are not subject to the Pauli exclusion principle. Any number of identical bosons can occupy the same quantum state, such as photons produced by a laser, or atoms found in a Bose–Einstein condensate. A more rigorous statement is: under the exchange of two identical particles, the total (many-particle) wave function is antisymmetric for fermions and symmetric for bosons. This means that if the space and spin coordinates of two identical particles are interchanged, then the total wave function changes sign for fermions, but does not change sign for bosons. So, if hypothetically two fermions were in the same state (for example, in the same atom in the same orbital with the same spin), then interchanging them would change nothing and the total wave function would be unchanged. However, the only way a total wave function can both change sign (required for fermions) and also remain unchanged is that such a function must be zero everywhere, which means such a state cannot exist. This reasoning does not apply to bosons because the sign does not change. Overview The Pauli exclusion principle describes the behavior of all fermions (particles with half-integer spin), while bosons (particles with integer spin) are subject to other principles. Fermions include elementary particles such as quarks, electrons and neutrinos. Additionally, baryons such as protons and neutrons (subatomic particles composed from three quarks) and some atoms (such as helium-3) are fermions, and are therefore described by the Pauli exclusion principle as well. Atoms can have different overall spin, which determines whether they are fermions or bosons: for example, helium-3 has spin 1/2 and is therefore a fermion, whereas helium-4 has spin 0 and is a boson. The Pauli exclusion principle underpins many properties of everyday matter, from its large-scale stability to the chemical behavior of atoms. Half-integer spin means that the intrinsic angular momentum value of fermions is ħ (the reduced Planck constant) times a half-integer (1/2, 3/2, 5/2, etc.). In the theory of quantum mechanics, fermions are described by antisymmetric states. In contrast, particles with integer spin (bosons) have symmetric wave functions and may share the same quantum states. Bosons include the photon, the Cooper pairs which are responsible for superconductivity, and the W and Z bosons. 
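The sign rule can be checked numerically. In the sketch below, two single-particle states phi1 and phi2 on a small discrete basis are combined into an antisymmetrized (fermionic) and a symmetrized (bosonic) two-particle amplitude; the antisymmetric amplitude vanishes whenever both particles are assigned the same state, while the symmetric one does not. The four-dimensional basis and the random states are arbitrary illustrative choices.

# Numerical check of the exchange-symmetry statement above.
import numpy as np

rng = np.random.default_rng(0)
phi1 = rng.normal(size=4) + 1j * rng.normal(size=4)   # two single-particle states
phi2 = rng.normal(size=4) + 1j * rng.normal(size=4)   # on a 4-state basis

# Fermions: antisymmetrized amplitude A(x, y); bosons: symmetrized amplitude S(x, y)
A = (np.outer(phi1, phi2) - np.outer(phi2, phi1)) / np.sqrt(2)
S = (np.outer(phi1, phi2) + np.outer(phi2, phi1)) / np.sqrt(2)

print(np.allclose(A, -A.T))        # True: the amplitude flips sign under exchange of the particles
print(np.allclose(np.diag(A), 0))  # True: zero amplitude for both fermions in the same basis state

# Putting both particles into the *same* single-particle state:
A_same = (np.outer(phi1, phi1) - np.outer(phi1, phi1)) / np.sqrt(2)
S_same = (np.outer(phi1, phi1) + np.outer(phi1, phi1)) / np.sqrt(2)
print(np.allclose(A_same, 0), np.allclose(S_same, 0))  # True False: only the fermionic state vanishes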
Fermions take their name from the Fermi–Dirac statistical distribution, which they obey, and bosons take theirs from the Bose–Einstein distribution. History In the early 20th century it became evident that atoms and molecules with even numbers of electrons are more chemically stable than those with odd numbers of electrons. In the 1916 article "The Atom and the Molecule" by Gilbert N. Lewis, for example, the third of his six postulates of chemical behavior states that the atom tends to hold an even number of electrons in any given shell, and especially to hold eight electrons, which he assumed to be typically arranged symmetrically at the eight corners of a cube. In 1919 chemist Irving Langmuir suggested that the periodic table could be explained if the electrons in an atom were connected or clustered in some manner. Groups of electrons were thought to occupy a set of electron shells around the nucleus. In 1922, Niels Bohr updated his model of the atom by assuming that certain numbers of electrons (for example 2, 8 and 18) corresponded to stable "closed shells". Pauli looked for an explanation for these numbers, which were at first only empirical. At the same time he was trying to explain experimental results of the Zeeman effect in atomic spectroscopy and in ferromagnetism. He found an essential clue in a 1924 paper by Edmund C. Stoner, which pointed out that, for a given value of the principal quantum number (n), the number of energy levels of a single electron in the alkali metal spectra in an external magnetic field, where all degenerate energy levels are separated, is equal to the number of electrons in the closed shell of the noble gases for the same value of n. This led Pauli to realize that the complicated numbers of electrons in closed shells can be reduced to the simple rule of one electron per state if the electron states are defined using four quantum numbers. For this purpose he introduced a new two-valued quantum number, identified by Samuel Goudsmit and George Uhlenbeck as electron spin. Connection to quantum state symmetry In his Nobel lecture, Pauli clarified the importance of quantum state symmetry to the exclusion principle: Among the different classes of symmetry, the most important ones (which moreover for two particles are the only ones) are the symmetrical class, in which the wave function does not change its value when the space and spin coordinates of two particles are permuted, and the antisymmetrical class, in which for such a permutation the wave function changes its sign...[The antisymmetrical class is] the correct and general wave mechanical formulation of the exclusion principle. The Pauli exclusion principle with a single-valued many-particle wavefunction is equivalent to requiring the wavefunction to be antisymmetric with respect to exchange. If |x⟩ and |y⟩ range over the basis vectors of the Hilbert space describing a one-particle system, then the tensor product produces the basis vectors |x, y⟩ = |x⟩ ⊗ |y⟩ of the Hilbert space describing a system of two such particles. Any two-particle state can be represented as a superposition (i.e. sum) of these basis vectors: |ψ⟩ = Σ_{x,y} A(x, y) |x, y⟩, where each A(x, y) is a (complex) scalar coefficient. Antisymmetry under exchange means that A(x, y) = −A(y, x). This implies A(x, y) = 0 when x = y, which is Pauli exclusion. It is true in any basis since local changes of basis keep antisymmetric matrices antisymmetric. Conversely, if the diagonal quantities A(x, x) are zero in every basis, then the wavefunction component A(x, y) is necessarily antisymmetric. 
To prove it, consider the matrix element ⟨ψ| ((|x⟩ + |y⟩) ⊗ (|x⟩ + |y⟩)). This is zero, because the two particles have zero probability to both be in the superposition state |x⟩ + |y⟩. But this is equal to ⟨ψ|x, x⟩ + ⟨ψ|x, y⟩ + ⟨ψ|y, x⟩ + ⟨ψ|y, y⟩. The first and last terms are diagonal elements and are zero, and the whole sum is equal to zero. So the wavefunction matrix elements obey: ⟨ψ|x, y⟩ + ⟨ψ|y, x⟩ = 0, or A(x, y) = −A(y, x). For a system with n particles, the multi-particle basis states become n-fold tensor products of one-particle basis states, and the coefficients of the wavefunction A(x1, x2, …, xn) are identified by n one-particle states. The condition of antisymmetry states that the coefficients must flip sign whenever any two states are exchanged: A(…, xi, …, xj, …) = −A(…, xj, …, xi, …) for any i ≠ j. The exclusion principle is the consequence that, if xi = xj for any i ≠ j, then A(…, xi, …, xj, …) = 0. This shows that none of the n particles may be in the same state. Advanced quantum theory According to the spin–statistics theorem, particles with integer spin occupy symmetric quantum states, and particles with half-integer spin occupy antisymmetric states; furthermore, only integer or half-integer values of spin are allowed by the principles of quantum mechanics. In relativistic quantum field theory, the Pauli principle follows from applying a rotation operator in imaginary time to particles of half-integer spin. In one dimension, bosons, as well as fermions, can obey the exclusion principle. A one-dimensional Bose gas with delta-function repulsive interactions of infinite strength is equivalent to a gas of free fermions. The reason for this is that, in one dimension, the exchange of particles requires that they pass through each other; for infinitely strong repulsion this cannot happen. This model is described by a quantum nonlinear Schrödinger equation. In momentum space, the exclusion principle is valid also for finite repulsion in a Bose gas with delta-function interactions, as well as for interacting spins and Hubbard model in one dimension, and for other models solvable by Bethe ansatz. The ground state in models solvable by Bethe ansatz is a Fermi sphere. Applications Atoms The Pauli exclusion principle helps explain a wide variety of physical phenomena. One particularly important consequence of the principle is the elaborate electron shell structure of atoms and the way atoms share electrons, explaining the variety of chemical elements and their chemical combinations. An electrically neutral atom contains bound electrons equal in number to the protons in the nucleus. Electrons, being fermions, cannot occupy the same quantum state as other electrons, so electrons have to "stack" within an atom, i.e. have different spins while at the same electron orbital as described below. An example is the neutral helium atom (He), which has two bound electrons, both of which can occupy the lowest-energy (1s) states by acquiring opposite spin; as spin is part of the quantum state of the electron, the two electrons are in different quantum states and do not violate the Pauli principle. However, the spin can take only two different values (eigenvalues). In a lithium atom (Li), with three bound electrons, the third electron cannot reside in a 1s state and must occupy a higher-energy state instead. The lowest available state is 2s, so that the ground state of Li is 1s²2s. Similarly, successively larger elements must have shells of successively higher energy. 
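A small enumeration makes the shell-filling argument concrete: listing the one-electron states (n, ℓ, mℓ, ms), filling them one electron per state, and grouping by subshell recovers 1s² for helium and 1s²2s¹ for lithium. The (n + ℓ, n) ordering used below is the approximate Madelung rule, assumed here for simplicity; it is not part of the exclusion principle itself.

# Fill one-electron quantum states (n, l, ml, ms) one electron per state,
# as the exclusion principle requires, using the approximate (n + l, n) ordering.
def one_electron_states(max_n):
    for n in range(1, max_n + 1):
        for l in range(0, n):
            for ml in range(-l, l + 1):
                for ms in (+0.5, -0.5):
                    yield (n, l, ml, ms)

def ground_configuration(n_electrons, max_n=4):
    states = sorted(one_electron_states(max_n), key=lambda s: (s[0] + s[1], s[0]))
    config = {}
    subshell_letters = "spdf"
    for n, l, ml, ms in states[:n_electrons]:     # one electron per distinct quantum state
        label = f"{n}{subshell_letters[l]}"
        config[label] = config.get(label, 0) + 1
    return config

print(ground_configuration(2))   # helium:  {'1s': 2}
print(ground_configuration(3))   # lithium: {'1s': 2, '2s': 1}

Counting the states generated this way, each shell n contains 2n² of them (2 for n = 1, 8 for n = 2, 18 for n = 3), matching the closed-shell numbers quoted earlier in the History section.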
The chemical properties of an element largely depend on the number of electrons in the outermost shell; atoms with different numbers of occupied electron shells but the same number of electrons in the outermost shell have similar properties, which gives rise to the periodic table of the elements. To test the Pauli exclusion principle for the helium atom, Gordon Drake carried out very precise calculations for hypothetical states of the He atom that violate it, which are called paronic states. Later, K. Deilamian et al. used an atomic beam spectrometer to search for the paronic state 1s2s 1S0 calculated by Drake. The search was unsuccessful and showed that the statistical weight of this paronic state has an upper limit of . (The exclusion principle implies a weight of zero.) Solid state properties In conductors and semiconductors, there are very large numbers of molecular orbitals which effectively form a continuous band structure of energy levels. In strong conductors (metals) electrons are so degenerate that they cannot even contribute much to the thermal capacity of a metal. Many mechanical, electrical, magnetic, optical and chemical properties of solids are the direct consequence of Pauli exclusion. Stability of matter The stability of each electron state in an atom is described by the quantum theory of the atom, which shows that close approach of an electron to the nucleus necessarily increases the electron's kinetic energy, an application of the uncertainty principle of Heisenberg. However, stability of large systems with many electrons and many nucleons is a different question, and requires the Pauli exclusion principle. It has been shown that the Pauli exclusion principle is responsible for the fact that ordinary bulk matter is stable and occupies volume. This suggestion was first made in 1931 by Paul Ehrenfest, who pointed out that the electrons of each atom cannot all fall into the lowest-energy orbital and must occupy successively larger shells. Atoms, therefore, occupy a volume and cannot be squeezed too closely together. The first rigorous proof was provided in 1967 by Freeman Dyson and Andrew Lenard (de), who considered the balance of attractive (electron–nuclear) and repulsive (electron–electron and nuclear–nuclear) forces and showed that ordinary matter would collapse and occupy a much smaller volume without the Pauli principle. A much simpler proof was found later by Elliott H. Lieb and Walter Thirring in 1975. They provided a lower bound on the quantum energy in terms of the Thomas-Fermi model, which is stable due to a theorem of Teller. The proof used a lower bound on the kinetic energy which is now called the Lieb–Thirring inequality. The consequence of the Pauli principle here is that electrons of the same spin are kept apart by a repulsive exchange interaction, which is a short-range effect, acting simultaneously with the long-range electrostatic or Coulombic force. This effect is partly responsible for the everyday observation in the macroscopic world that two solid objects cannot be in the same place at the same time. Astrophysics Dyson and Lenard did not consider the extreme magnetic or gravitational forces that occur in some astronomical objects. In 1995 Elliott Lieb and coworkers showed that the Pauli principle still leads to stability in intense magnetic fields such as in neutron stars, although at a much higher density than in ordinary matter. 
It is a consequence of general relativity that, in sufficiently intense gravitational fields, matter collapses to form a black hole. Astronomy provides a spectacular demonstration of the effect of the Pauli principle, in the form of white dwarf and neutron stars. In both bodies, the atomic structure is disrupted by extreme pressure, but the stars are held in hydrostatic equilibrium by degeneracy pressure, also known as Fermi pressure. This exotic form of matter is known as degenerate matter. The immense gravitational force of a star's mass is normally held in equilibrium by thermal pressure caused by heat produced in thermonuclear fusion in the star's core. In white dwarfs, which do not undergo nuclear fusion, an opposing force to gravity is provided by electron degeneracy pressure. In neutron stars, subject to even stronger gravitational forces, electrons have merged with protons to form neutrons. Neutrons are capable of producing an even higher degeneracy pressure, neutron degeneracy pressure, albeit over a shorter range. This can stabilize neutron stars from further collapse, but at a smaller size and higher density than a white dwarf. Neutron stars are the most "rigid" objects known; their Young modulus (or more accurately, bulk modulus) is 20 orders of magnitude larger than that of diamond. However, even this enormous rigidity can be overcome by the gravitational field of a neutron star mass exceeding the Tolman–Oppenheimer–Volkoff limit, leading to the formation of a black hole. See also Spin-statistics theorem Exchange interaction Exchange symmetry Fermi–Dirac statistics Fermi hole Hund's rule Pauli effect References General External links Nobel Lecture: Exclusion Principle and Quantum Mechanics Pauli's account of the development of the Exclusion Principle. "What is the Pauli Exclusion Principle?" 49 minute audiovisual lecture. Concepts in physics Spintronics Chemical bonding Lorentz Medal winners
Pauli exclusion principle
[ "Physics", "Chemistry", "Materials_science" ]
3,224
[ "Spintronics", "Quantum mechanics", "Condensed matter physics", "nan", "Pauli exclusion principle", "Chemical bonding" ]
24,714
https://en.wikipedia.org/wiki/Precession
Precession is a change in the orientation of the rotational axis of a rotating body. In an appropriate reference frame it can be defined as a change in the first Euler angle, whereas the third Euler angle defines the rotation itself. In other words, if the axis of rotation of a body is itself rotating about a second axis, that body is said to be precessing about the second axis. A motion in which the second Euler angle changes is called nutation. In physics, there are two types of precession: torque-free and torque-induced. In astronomy, precession refers to any of several slow changes in an astronomical body's rotational or orbital parameters. An important example is the steady change in the orientation of the axis of rotation of the Earth, known as the precession of the equinoxes. Torque-free or torque neglected Torque-free precession implies that no external moment (torque) is applied to the body. In torque-free precession, the angular momentum is a constant, but the angular velocity vector changes orientation with time. What makes this possible is a time-varying moment of inertia, or more precisely, a time-varying inertia matrix. The inertia matrix is composed of the moments of inertia of a body calculated with respect to separate coordinate axes (e.g. Ix, Iy, Iz). If an object is asymmetric about its principal axis of rotation, the moment of inertia with respect to each coordinate direction will change with time, while preserving angular momentum. The result is that the component of the angular velocities of the body about each axis will vary inversely with each axis' moment of inertia. The torque-free precession rate of an object with an axis of symmetry, such as a disk, spinning about an axis not aligned with that axis of symmetry can be calculated as follows: ω_p = (I_s ω_s) / (I_p cos(α)), where ω_p is the precession rate, ω_s is the spin rate about the axis of symmetry, I_s is the moment of inertia about the axis of symmetry, I_p is the moment of inertia about either of the other two equal perpendicular principal axes, and α is the angle between the moment of inertia direction and the symmetry axis. When an object is not perfectly rigid, inelastic dissipation will tend to damp torque-free precession, and the rotation axis will align itself with one of the inertia axes of the body. For a generic solid object without any axis of symmetry, the evolution of the object's orientation, represented (for example) by a rotation matrix Q that transforms internal to external coordinates, may be numerically simulated. Given the object's fixed internal moment of inertia tensor I0 and fixed external angular momentum L, the instantaneous angular velocity is ω = Q I0⁻¹ Qᵀ L. Precession occurs by repeatedly recalculating ω and applying a small rotation vector ω dt for the short time dt; e.g.: Q ← exp([ω dt]×) Q for the skew-symmetric matrix [ω dt]×. The errors induced by finite time steps tend to increase the rotational kinetic energy, E = ω·L/2; this unphysical tendency can be counteracted by repeatedly applying a small correcting rotation vector perpendicular to both ω and L, chosen to hold E constant. Torque-induced Torque-induced precession (gyroscopic precession) is the phenomenon in which the axis of a spinning object (e.g., a gyroscope) describes a cone in space when an external torque is applied to it. The phenomenon is commonly seen in a spinning toy top, but all rotating objects can undergo precession. If the speed of the rotation and the magnitude of the external torque are constant, the spin axis will move at right angles to the direction that would intuitively result from the external torque. 
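Returning to the torque-free rate formula above, the short sketch below evaluates it for a uniform thin disk (moment of inertia m r²/2 about the symmetry axis and m r²/4 about a diameter) tilted slightly away from its spin axis; the mass, radius, spin rate and tilt angle are arbitrary illustrative values.

# Torque-free precession rate omega_p = I_s * omega_s / (I_p * cos(alpha))
# evaluated for a uniform thin disk with assumed, illustrative parameters.
import math

def torque_free_precession_rate(I_s, I_p, omega_s, alpha):
    return I_s * omega_s / (I_p * math.cos(alpha))

m, r = 0.5, 0.1                          # kg, m (assumed)
I_s, I_p = 0.5 * m * r**2, 0.25 * m * r**2
omega_s = 2 * math.pi * 5.0              # spin: 5 revolutions per second
alpha = math.radians(5.0)                # small tilt of the spin axis

omega_p = torque_free_precession_rate(I_s, I_p, omega_s, alpha)
print(omega_p / omega_s)                 # ~2.0: a thin disk wobbles at about twice its spin rate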
In the case of a toy top, its weight is acting downwards from its center of mass and the normal force (reaction) of the ground is pushing up on it at the point of contact with the support. These two opposite forces produce a torque which causes the top to precess. The device depicted on the right is gimbal mounted. From inside to outside there are three axes of rotation: the hub of the wheel, the gimbal axis, and the vertical pivot. To distinguish between the two horizontal axes, rotation around the wheel hub will be called spinning, and rotation around the gimbal axis will be called pitching. Rotation around the vertical pivot axis is called rotation. First, imagine that the entire device is rotating around the (vertical) pivot axis. Then, spinning of the wheel (around the wheelhub) is added. Imagine the gimbal axis to be locked, so that the wheel cannot pitch. The gimbal axis has sensors, that measure whether there is a torque around the gimbal axis. In the picture, a section of the wheel has been named . At the depicted moment in time, section is at the perimeter of the rotating motion around the (vertical) pivot axis. Section , therefore, has a lot of angular rotating velocity with respect to the rotation around the pivot axis, and as is forced closer to the pivot axis of the rotation (by the wheel spinning further), because of the Coriolis effect, with respect to the vertical pivot axis, tends to move in the direction of the top-left arrow in the diagram (shown at 45°) in the direction of rotation around the pivot axis. Section of the wheel is moving away from the pivot axis, and so a force (again, a Coriolis force) acts in the same direction as in the case of . Note that both arrows point in the same direction. The same reasoning applies for the bottom half of the wheel, but there the arrows point in the opposite direction to that of the top arrows. Combined over the entire wheel, there is a torque around the gimbal axis when some spinning is added to rotation around a vertical axis. It is important to note that the torque around the gimbal axis arises without any delay; the response is instantaneous. In the discussion above, the setup was kept unchanging by preventing pitching around the gimbal axis. In the case of a spinning toy top, when the spinning top starts tilting, gravity exerts a torque. However, instead of rolling over, the spinning top just pitches a little. This pitching motion reorients the spinning top with respect to the torque that is being exerted. The result is that the torque exerted by gravity – via the pitching motion – elicits gyroscopic precession (which in turn yields a counter torque against the gravity torque) rather than causing the spinning top to fall to its side. Precession or gyroscopic considerations have an effect on bicycle performance at high speed. Precession is also the mechanism behind gyrocompasses. Classical (Newtonian) Precession is the change of angular velocity and angular momentum produced by a torque. The general equation that relates the torque to the rate of change of angular momentum is: where and are the torque and angular momentum vectors respectively. Due to the way the torque vectors are defined, it is a vector that is perpendicular to the plane of the forces that create it. Thus it may be seen that the angular momentum vector will change perpendicular to those forces. Depending on how the forces are created, they will often rotate with the angular momentum vector, and then circular precession is created. 
Under these circumstances the angular velocity of precession is given by: ω_p = (m g r sin θ) / (I_s ω_s sin θ) = m g r / (I_s ω_s), where I_s is the moment of inertia, ω_s is the angular velocity of spin about the spin axis, m is the mass, g is the acceleration due to gravity, θ is the angle between the spin axis and the axis of precession and r is the distance between the center of mass and the pivot. The torque vector originates at the center of mass. Using ω = 2π/T, we find that the period of precession is given by: T_p = 4π² I_s / (m g r T_s) = 4π² I_s sin θ / (τ T_s), where I_s is the moment of inertia, T_s is the period of spin about the spin axis, and τ is the torque. In general, the problem is more complicated than this, however. Relativistic (Einsteinian) The special and general theories of relativity give three types of corrections to the Newtonian precession, of a gyroscope near a large mass such as Earth, described above. They are: Thomas precession, a special-relativistic correction accounting for an object (such as a gyroscope) being accelerated along a curved path. de Sitter precession, a general-relativistic correction accounting for the Schwarzschild metric of curved space near a large non-rotating mass. Lense–Thirring precession, a general-relativistic correction accounting for the frame dragging by the Kerr metric of curved space near a large rotating mass. The Schwarzschild geodesics (sometimes called Schwarzschild precession) are used in the prediction of the anomalous perihelion precession of the planets, most notably for the accurate prediction of the apsidal precession of Mercury. Astronomy In astronomy, precession refers to any of several gravity-induced, slow and continuous changes in an astronomical body's rotational axis or orbital path. Precession of the equinoxes, perihelion precession, changes in the tilt of Earth's axis to its orbit, and the eccentricity of its orbit over tens of thousands of years are all important parts of the astronomical theory of ice ages. (See Milankovitch cycles.) Axial precession (precession of the equinoxes) Axial precession is the movement of the rotational axis of an astronomical body, whereby the axis slowly traces out a cone. In the case of Earth, this type of precession is also known as the precession of the equinoxes, lunisolar precession, or precession of the equator. Earth goes through one such complete precessional cycle in a period of approximately 26,000 years or 1° every 72 years, during which the positions of stars will slowly change in both equatorial coordinates and ecliptic longitude. Over this cycle, Earth's north axial pole moves from where it is now, within 1° of Polaris, in a circle around the ecliptic pole, with an angular radius of about 23.5°. The ancient Greek astronomer Hipparchus (c. 190–120 BC) is generally accepted to be the earliest known astronomer to recognize and assess the precession of the equinoxes at about 1° per century (which is not far from the actual value for antiquity, 1.38°), although there is some minor dispute about whether he was the first. In ancient China, the Jin-dynasty scholar-official Yu Xi (307–345 AD) made a similar discovery centuries later, noting that the position of the Sun during the winter solstice had drifted roughly one degree over the course of fifty years relative to the position of the stars. The precession of Earth's axis was later explained by Newtonian physics. Being an oblate spheroid, Earth has a non-spherical shape, bulging outward at the equator.
The gravitational tidal forces of the Moon and Sun apply torque to the equator, attempting to pull the equatorial bulge into the plane of the ecliptic, but instead causing it to precess. The torque exerted by the planets, particularly Jupiter, also plays a role. Apsidal precession The orbits of planets around the Sun do not really follow an identical ellipse each time, but actually trace out a flower-petal shape because the major axis of each planet's elliptical orbit also precesses within its orbital plane, partly in response to perturbations in the form of the changing gravitational forces exerted by other planets. This is called perihelion precession or apsidal precession. In the adjacent image, Earth's apsidal precession is illustrated. As the Earth travels around the Sun, its elliptical orbit rotates gradually over time. The eccentricity of its ellipse and the precession rate of its orbit are exaggerated for visualization. Most orbits in the Solar System have a much smaller eccentricity and precess at a much slower rate, making them nearly circular and nearly stationary. Discrepancies between the observed perihelion precession rate of the planet Mercury and that predicted by classical mechanics were prominent among the forms of experimental evidence leading to the acceptance of Einstein's Theory of Relativity (in particular, his General Theory of Relativity), which accurately predicted the anomalies. Deviating from Newton's law, Einstein's theory of gravitation predicts an extra attractive term in the force law, proportional to 1/r⁴, which accurately gives the observed excess turning rate of 43 arcseconds every 100 years. Nodal precession Orbital nodes also precess over time. See also Larmor precession Nutation Polar motion Precession (mechanical) Precession as a form of parallel transport References External links Explanation and derivation of formula for precession of a top Precession and the Milankovich theory From Stargazers to Starships Earth Dynamics (mechanics)
Precession
[ "Physics" ]
2,713
[ "Physical phenomena", "Physical quantities", "Classical mechanics", "Precession", "Motion (physics)", "Dynamics (mechanics)", "Wikipedia categories named after physical quantities" ]
24,731
https://en.wikipedia.org/wiki/Positron
The positron or antielectron is the particle with an electric charge of +1e, a spin of 1/2 (the same as the electron), and the same mass as an electron. It is the antiparticle (antimatter counterpart) of the electron. When a positron collides with an electron, annihilation occurs. If this collision occurs at low energies, it results in the production of two or more photons. Positrons can be created by positron emission radioactive decay (through weak interactions), or by pair production from a sufficiently energetic photon which is interacting with an atom in a material. History Theory In 1928, Paul Dirac published a paper proposing that electrons can have both a positive and negative charge. This paper introduced the Dirac equation, a unification of quantum mechanics, special relativity, and the then-new concept of electron spin to explain the Zeeman effect. The paper did not explicitly predict a new particle but did allow for electrons having either positive or negative energy as solutions. Hermann Weyl then published a paper discussing the mathematical implications of the negative energy solution. The positive-energy solution explained experimental results, but Dirac was puzzled by the equally valid negative-energy solution that the mathematical model allowed. Quantum mechanics did not allow the negative energy solution to simply be ignored, as classical mechanics often did in such equations; the dual solution implied the possibility of an electron spontaneously jumping between positive and negative energy states. However, no such transition had yet been observed experimentally. Dirac wrote a follow-up paper in December 1929 that attempted to explain the unavoidable negative-energy solution for the relativistic electron. He argued that "... an electron with negative energy moves in an external [electromagnetic] field as though it carries a positive charge." He further asserted that all of space could be regarded as a "sea" of negative energy states that were filled, so as to prevent electrons jumping between positive energy states (negative electric charge) and negative energy states (positive charge). The paper also explored the possibility of the proton being an island in this sea, and that it might actually be a negative-energy electron. Dirac acknowledged that the proton having a much greater mass than the electron was a problem, but expressed "hope" that a future theory would resolve the issue. Robert Oppenheimer argued strongly against the proton being the negative-energy electron solution to Dirac's equation. He asserted that if it were, the hydrogen atom would rapidly self-destruct. Weyl in 1931 showed that the negative-energy electron must have the same mass as that of the positive-energy electron. Persuaded by Oppenheimer's and Weyl's argument, Dirac published a paper in 1931 that predicted the existence of an as-yet-unobserved particle that he called an "anti-electron" that would have the same mass and the opposite charge as an electron and that would mutually annihilate upon contact with an electron. Richard Feynman, and earlier Ernst Stueckelberg, proposed an interpretation of the positron as an electron moving backward in time, reinterpreting the negative-energy solutions of the Dirac equation. Electrons moving backward in time would have a positive electric charge. John Archibald Wheeler invoked this concept to explain the identical properties shared by all electrons, suggesting that "they are all the same electron" with a complex, self-intersecting worldline. 
Yoichiro Nambu later applied it to all production and annihilation of particle-antiparticle pairs, stating that "the eventual creation and annihilation of pairs that may occur now and then is no creation or annihilation, but only a change of direction of moving particles, from the past to the future, or from the future to the past." The backwards-in-time point of view is nowadays accepted as completely equivalent to other pictures, but it does not have anything to do with the macroscopic terms "cause" and "effect", which do not appear in a microscopic physical description. Experimental clues and discovery Several sources have claimed that Dmitri Skobeltsyn first observed the positron long before 1930, or even as early as 1923. They state that while using a Wilson cloud chamber in order to study the Compton effect, Skobeltsyn detected particles that acted like electrons but curved in the opposite direction in an applied magnetic field, and that he presented photographs of this phenomenon at a conference at the University of Cambridge on 23–27 July 1928. In his 1963 book on the history of the positron discovery, Norwood Russell Hanson gave a detailed account of the reasons for this assertion, and this may have been the origin of the myth. But he also presented Skobeltsyn's objection to it in an appendix. Later, Skobeltsyn rejected this claim even more strongly, calling it "nothing but sheer nonsense". Skobeltsyn did pave the way for the eventual discovery of the positron by two important contributions: adding a magnetic field to his cloud chamber (in 1925), and by discovering charged-particle cosmic rays, for which he is credited in Carl David Anderson's Nobel lecture. Skobeltsyn did observe likely positron tracks on images taken in 1931, but did not identify them as such at the time. Likewise, in 1929 Chung-Yao Chao, a Chinese graduate student at Caltech, noticed some anomalous results that indicated particles behaving like electrons, but with a positive charge, though the results were inconclusive and the phenomenon was not pursued. Fifty years later, Anderson acknowledged that his discovery was inspired by the work of his Caltech classmate Chung-Yao Chao, whose research formed the foundation from which much of Anderson's work developed, although Chao was not credited for it at the time. Anderson discovered the positron on 2 August 1932, for which he won the Nobel Prize for Physics in 1936. Anderson did not coin the term positron, but allowed it at the suggestion of the Physical Review journal editor to whom he submitted his discovery paper in late 1932. The positron was the first evidence of antimatter and was discovered when Anderson allowed cosmic rays to pass through a cloud chamber and a lead plate. A magnet surrounded this apparatus, causing particles to bend in different directions based on their electric charge. The ion trail left by each positron appeared on the photographic plate with a curvature matching the mass-to-charge ratio of an electron, but in a direction that showed its charge was positive. Anderson wrote in retrospect that the positron could have been discovered earlier based on Chung-Yao Chao's work, if only it had been followed up on. Frédéric and Irène Joliot-Curie in Paris had evidence of positrons in old photographs when Anderson's results came out, but they had dismissed them as protons. The positron had also been contemporaneously discovered by Patrick Blackett and Giuseppe Occhialini at the Cavendish Laboratory in 1932.
Blackett and Occhialini had delayed publication to obtain more solid evidence, so Anderson was able to publish the discovery first. Natural production Positrons are produced naturally, together with neutrinos, in β+ decays of naturally occurring radioactive isotopes (for example, potassium-40) and in interactions of gamma quanta (emitted by radioactive nuclei) with matter. Antineutrinos are another kind of antiparticle produced by natural radioactivity (β− decay). Many different kinds of antiparticles are also produced by (and contained in) cosmic rays. In research published in 2011 by the American Astronomical Society, positrons were discovered originating above thunderstorm clouds; positrons are produced in gamma-ray flashes created by electrons accelerated by strong electric fields in the clouds. Antiprotons have also been found to exist in the Van Allen Belts around the Earth by the PAMELA module. Antiparticles, of which the most common are antineutrinos and positrons due to their low mass, are also produced in any environment with a sufficiently high temperature (mean particle energy greater than the pair production threshold). During the period of baryogenesis, when the universe was extremely hot and dense, matter and antimatter were continually produced and annihilated. The presence of remaining matter, and absence of detectable remaining antimatter, also called baryon asymmetry, is attributed to CP-violation: a violation of the CP-symmetry relating matter to antimatter. The exact mechanism of this violation during baryogenesis remains a mystery. Positron production from radioactive decay can be considered both artificial and natural production, as the generation of the radioisotope can be natural or artificial. Perhaps the best known naturally occurring radioisotope which produces positrons is potassium-40, a long-lived isotope of potassium which occurs as a primordial isotope of potassium. Even though it is a small percentage of potassium (0.0117%), it is the single most abundant radioisotope in the human body. In a human body of 70 kg mass, about 4,400 nuclei of 40K decay per second. The activity of natural potassium is 31 Bq/g. About 0.001% of these 40K decays produce about 4,000 natural positrons per day in the human body. These positrons soon find an electron, undergo annihilation, and produce pairs of 511 keV photons, in a process similar (but of much lower intensity) to that which happens during a PET scan nuclear medicine procedure. Recent observations indicate that black holes and neutron stars produce vast amounts of positron-electron plasma in astrophysical jets. Large clouds of positron-electron plasma have also been associated with neutron stars. Observation in cosmic rays Satellite experiments have found evidence of positrons (as well as a few antiprotons) in primary cosmic rays, amounting to less than 1% of the particles in primary cosmic rays. However, the fraction of positrons in cosmic rays has been measured more recently with improved accuracy, especially at much higher energy levels, and the fraction of positrons has been seen to be greater in these higher-energy cosmic rays. These do not appear to be the products of large amounts of antimatter from the Big Bang, or indeed complex antimatter in the universe (evidence for which is lacking, see below). Rather, the antimatter in cosmic rays appears to consist of only these two elementary particles.
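As a rough arithmetic cross-check of the potassium-40 figures quoted above, the short sketch below assumes about 140 g of potassium in a 70 kg adult (a typical round number, not stated in the article), combines it with the 31 Bq/g activity and 0.001% positron-emitting branch from the text, and recovers the 511 keV annihilation photon energy from the electron rest mass.

```python
# Consistency check of the potassium-40 numbers quoted above.
potassium_mass_g = 140.0                  # assumed potassium content of a 70 kg adult
specific_activity_bq_per_g = 31.0         # activity of natural potassium (from the text)
decays_per_second = potassium_mass_g * specific_activity_bq_per_g
print(f"40K decays per second: {decays_per_second:.0f}")        # ~4,340, close to the ~4,400 quoted

positron_fraction = 1e-5                  # "about 0.001%" of 40K decays emit a positron
positrons_per_day = decays_per_second * 86_400 * positron_fraction
print(f"positrons per day: {positrons_per_day:.0f}")            # ~3,750, close to the ~4,000 quoted

# Each annihilation yields two photons, each carrying the electron rest energy m_e * c^2.
m_e = 9.109e-31     # electron mass, kg
c = 2.998e8         # speed of light, m/s
eV = 1.602e-19      # joules per electronvolt
print(f"annihilation photon energy: {m_e * c**2 / eV / 1e3:.0f} keV")   # ~511 keV
```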
Recent theories suggest the source of such positrons may come from annihilation of dark matter particles, acceleration of positrons to high energies in astrophysical objects, and production of high-energy positrons in the interactions of cosmic ray nuclei with interstellar gas. Preliminary results from the presently operating Alpha Magnetic Spectrometer (AMS-02) on board the International Space Station show that positrons in the cosmic rays arrive with no directionality, and with energies that range from 0.5 GeV to 500 GeV. The positron fraction peaks at a maximum of about 16% of total electron+positron events, around an energy of 275 ± 32 GeV. At higher energies, up to 500 GeV, the ratio of positrons to electrons begins to fall again. The absolute flux of positrons also begins to fall before 500 GeV, but peaks at energies far higher than electron energies, which peak at about 10 GeV. These results have been suggested to be due to positron production in annihilation events of massive dark matter particles. Positrons, like antiprotons, do not appear to originate from any hypothetical "antimatter" regions of the universe. On the contrary, there is no evidence of complex antimatter atomic nuclei, such as antihelium nuclei (i.e., anti-alpha particles), in cosmic rays. These are actively being searched for. A prototype of the AMS-02, designated AMS-01, was flown into space aboard the Space Shuttle Discovery on STS-91 in June 1998. By not detecting any antihelium at all, the AMS-01 established an upper limit of 1.1×10⁻⁶ for the antihelium to helium flux ratio. Artificial production Physicists at the Lawrence Livermore National Laboratory in California have used a short, ultra-intense laser to irradiate a millimeter-thick gold target and produce more than 100 billion positrons. Significant laboratory production of 5 MeV positron-electron beams now allows investigation of multiple characteristics such as how different elements react to 5 MeV positron interactions or impacts, how energy is transferred to particles, and the shock effect of gamma-ray bursts. In 2023, a collaboration between CERN and the University of Oxford performed an experiment at the HiRadMat facility in which nanosecond-duration beams of electron-positron pairs were produced containing more than 10 trillion electron-positron pairs, thereby creating the first 'pair plasma' in the laboratory with sufficient density to support collective plasma behavior. Future experiments offer the possibility of studying physics relevant to extreme astrophysical environments where copious electron-positron pairs are generated, such as gamma-ray bursts, fast radio bursts and blazar jets. Applications Certain kinds of particle accelerator experiments involve colliding positrons and electrons at relativistic speeds. The high impact energy and the mutual annihilation of these matter/antimatter opposites create a fountain of diverse subatomic particles. Physicists study the results of these collisions to test theoretical predictions and to search for new kinds of particles. The ALPHA experiment combines positrons with antiprotons to study properties of antihydrogen. Gamma rays, emitted indirectly by a positron-emitting radionuclide (tracer), are detected in positron emission tomography (PET) scanners used in hospitals. PET scanners create detailed three-dimensional images of metabolic activity within the human body.
An experimental tool called positron annihilation spectroscopy (PAS) is used in materials research to detect variations in density, defects, displacements, or even voids, within a solid material. See also Beta particle Buffer-gas trap List of particles Positronium Positronic brain References External links What is a Positron? (from the Frequently Asked Questions :: Center for Antimatter-Matter Studies) Website about positrons and antimatter Positron information search at SLAC Positron Annihilation as a method of experimental physics used in materials research. New production method to produce large quantities of positrons Website about antimatter (positrons, positronium and antihydrogen). Positron Laboratory, Como, Italy Website of the AEgIS: Antimatter Experiment: Gravity, Interferometry, Spectroscopy, CERN Synopsis: Tabletop Particle Accelerator ... new tabletop method for generating electron–positron streams. Antimatter Electron Positron Elementary particles Leptons Quantum electrodynamics
Positron
[ "Physics", "Chemistry" ]
3,103
[ "Electron", "Antimatter", "Elementary particles", "Matter", "Molecular physics", "Positron", "Subatomic particles" ]
24,762
https://en.wikipedia.org/wiki/P53
p53, also known as Tumor protein P53, cellular tumor antigen p53 (UniProt name), or transformation-related protein 53 (TRP53), is a regulatory protein that is often mutated in human cancers. The p53 proteins (originally thought to be, and often spoken of as, a single protein) are crucial in vertebrates, where they prevent cancer formation. As such, p53 has been described as "the guardian of the genome" because of its role in conserving stability by preventing genome mutation. Hence TP53 is classified as a tumor suppressor gene. The TP53 gene is the most frequently mutated gene (>50%) in human cancer, indicating that the TP53 gene plays a crucial role in preventing cancer formation. The TP53 gene encodes proteins that bind to DNA and regulate gene expression to prevent mutations of the genome. In addition to the full-length protein, the human TP53 gene encodes at least 12 protein isoforms. Gene In humans, the TP53 gene is located on the short arm of chromosome 17 (17p13.1). The gene spans 20 kb, with a non-coding exon 1 and a very long first intron of 10 kb, overlapping the Hp53int1 gene. The coding sequence contains five regions showing a high degree of conservation in vertebrates, predominantly in exons 2, 5, 6, 7 and 8, but the sequences found in invertebrates show only distant resemblance to mammalian TP53. TP53 orthologs have been identified in most mammals for which complete genome data are available. Human TP53 gene In humans, a common polymorphism involves the substitution of an arginine for a proline at codon position 72 of exon 4. Many studies have investigated a genetic link between this variation and cancer susceptibility; however, the results have been controversial. For instance, a meta-analysis from 2009 failed to show a link for cervical cancer. A 2011 study found that the TP53 proline mutation did have a profound effect on pancreatic cancer risk among males. A study of Arab women found that proline homozygosity at TP53 codon 72 is associated with a decreased risk for breast cancer. One study suggested that TP53 codon 72 polymorphisms, MDM2 SNP309, and A2164G may collectively be associated with non-oropharyngeal cancer susceptibility and that MDM2 SNP309 in combination with TP53 codon 72 may accelerate the development of non-oropharyngeal cancer in women. A 2011 study found that TP53 codon 72 polymorphism was associated with an increased risk of lung cancer. Meta-analyses from 2011 found no significant associations between TP53 codon 72 polymorphisms and either colorectal cancer risk or endometrial cancer risk. A 2011 study of a Brazilian birth cohort found an association between the non-mutant arginine TP53 and individuals without a family history of cancer. Another 2011 study found that the p53 homozygous (Pro/Pro) genotype was associated with a significantly increased risk for renal cell carcinoma. Function DNA damage and repair p53 plays a role in regulating progression through the cell cycle, apoptosis, and genomic stability by means of several mechanisms: It can activate DNA repair proteins when DNA has sustained damage; thus, it may be an important factor in aging. It can arrest growth by holding the cell cycle at the G1/S regulation point on DNA damage recognition; if it holds the cell here for long enough, the DNA repair proteins will have time to fix the damage and the cell will be allowed to continue the cell cycle. It can initiate apoptosis (i.e., programmed cell death) if DNA damage proves to be irreparable.
It is essential for the senescence response to short telomeres. Activated p53 also induces expression of WAF1/CIP1, which encodes p21, and of hundreds of other downstream genes. p21 (WAF1) binds to the G1-S/CDK (CDK4/CDK6, CDK2, and CDK1) complexes (molecules important for the G1/S transition in the cell cycle), inhibiting their activity. When p21 (WAF1) is complexed with CDK2, the cell cannot continue to the next stage of cell division. A mutant p53 will no longer bind DNA in an effective way, and, as a consequence, the p21 protein will not be available to act as the "stop signal" for cell division. Studies of human embryonic stem cells (hESCs) commonly describe the nonfunctional p53-p21 axis of the G1/S checkpoint pathway with subsequent relevance for cell cycle regulation and the DNA damage response (DDR). Importantly, p21 mRNA is clearly present and upregulated after the DDR in hESCs, but p21 protein is not detectable. In this cell type, p53 activates numerous microRNAs (like miR-302a, miR-302b, miR-302c, and miR-302d) that directly inhibit p21 expression in hESCs. The p21 protein binds directly to cyclin-CDK complexes that drive forward the cell cycle and inhibits their kinase activity, thereby causing cell cycle arrest to allow repair to take place. p21 can also mediate growth arrest associated with differentiation and a more permanent growth arrest associated with cellular senescence. The p21 gene contains several p53 response elements that mediate direct binding of the p53 protein, resulting in transcriptional activation of the gene encoding the p21 protein. The p53 and RB1 pathways are linked via p14ARF, raising the possibility that the pathways may regulate each other. p53 expression can be stimulated by UV light, which also causes DNA damage. In this case, p53 can initiate events leading to tanning. Stem cells Levels of p53 play an important role in the maintenance of stem cells throughout development and the rest of human life. In human embryonic stem cells (hESCs), p53 is maintained at low, inactive levels. This is because activation of p53 leads to rapid differentiation of hESCs. Studies have shown that knocking out p53 delays differentiation and that adding p53 causes spontaneous differentiation, showing how p53 promotes differentiation of hESCs and plays a key role in the cell cycle as a differentiation regulator. When p53 becomes stabilized and activated in hESCs, it increases p21 to establish a longer G1. This typically leads to abolition of S-phase entry, which stops the cell cycle in G1, leading to differentiation. Work in mouse embryonic stem cells has recently shown, however, that the expression of p53 does not necessarily lead to differentiation. p53 also activates miR-34a and miR-145, which then repress the hESC pluripotency factors, further instigating differentiation. In adult stem cells, p53 regulation is important for maintenance of stemness in adult stem cell niches. Signals such as hypoxia affect levels of p53 in these niche cells through the hypoxia-inducible factors, HIF-1α and HIF-2α. While HIF-1α stabilizes p53, HIF-2α suppresses it. Suppression of p53 plays important roles in the cancer stem cell phenotype, in induced pluripotent stem cells, and in other stem cell roles and behaviors, such as blastema formation. Cells with decreased levels of p53 have been shown to reprogram into stem cells with a much greater efficiency than normal cells. Papers suggest that the lack of cell cycle arrest and apoptosis gives more cells the chance to be reprogrammed.
Decreased levels of p53 were also shown to be a crucial aspect of blastema formation in the legs of salamanders. p53 regulation is very important in acting as a barrier between stem cells and a differentiated stem cell state, as well as a barrier between stem cells being functional and being cancerous. Other Apart from the cellular and molecular effects above, p53 has a tissue-level anticancer effect that works by inhibiting angiogenesis. As tumors grow they need to recruit new blood vessels to supply them, and p53 inhibits that by (i) interfering with regulators of tumor hypoxia that also affect angiogenesis, such as HIF1 and HIF2, (ii) inhibiting the production of angiogenic promoting factors, and (iii) directly increasing the production of angiogenesis inhibitors, such as arresten. p53 by regulating Leukemia Inhibitory Factor has been shown to facilitate implantation in the mouse and possibly human reproduction. The immune response to infection also involves p53 and NF-κB. Checkpoint control of the cell cycle and of apoptosis by p53 is inhibited by some infections such as Mycoplasma bacteria, raising the specter of oncogenic infection. Regulation p53 acts as a cellular stress sensor. It is normally kept at low levels by being constantly marked for degradation by the E3 ubiquitin ligase protein MDM2. p53 is activated in response to myriad stressors – including DNA damage (induced by either UV, IR, or chemical agents such as hydrogen peroxide), oxidative stress, osmotic shock, ribonucleotide depletion, viral lung infections and deregulated oncogene expression. This activation is marked by two major events. First, the half-life of the p53 protein is increased drastically, leading to a quick accumulation of p53 in stressed cells. Second, a conformational change forces p53 to be activated as a transcription regulator in these cells. The critical event leading to the activation of p53 is the phosphorylation of its N-terminal domain. The N-terminal transcriptional activation domain contains a large number of phosphorylation sites and can be considered as the primary target for protein kinases transducing stress signals. The protein kinases that are known to target this transcriptional activation domain of p53 can be roughly divided into two groups. A first group of protein kinases belongs to the MAPK family (JNK1-3, ERK1-2, p38 MAPK), which is known to respond to several types of stress, such as membrane damage, oxidative stress, osmotic shock, heat shock, etc. A second group of protein kinases (ATR, ATM, CHK1 and CHK2, DNA-PK, CAK, TP53RK) is implicated in the genome integrity checkpoint, a molecular cascade that detects and responds to several forms of DNA damage caused by genotoxic stress. Oncogenes also stimulate p53 activation, mediated by the protein p14ARF. In unstressed cells, p53 levels are kept low through a continuous degradation of p53. A protein called Mdm2 (also called HDM2 in humans), binds to p53, preventing its action and transports it from the nucleus to the cytosol. Mdm2 also acts as an ubiquitin ligase and covalently attaches ubiquitin to p53 and thus marks p53 for degradation by the proteasome. However, ubiquitylation of p53 is reversible. On activation of p53, Mdm2 is also activated, setting up a feedback loop. p53 levels can show oscillations (or repeated pulses) in response to certain stresses, and these pulses can be important in determining whether the cells survive the stress, or die. 
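The pulses mentioned above are commonly studied with small mathematical models of the p53-Mdm2 negative feedback loop. The following is a minimal sketch of that idea, not any published or fitted model: p53 is produced at a constant rate and degraded in proportion to the current Mdm2 level, while Mdm2 is induced by p53 after a fixed delay; every rate constant, the delay, and the initial conditions are arbitrary illustrative values.

```python
import numpy as np

# Minimal delayed negative-feedback sketch of the p53 (P) / Mdm2 (M) loop.
beta_p = 1.0     # basal p53 production
alpha  = 2.0     # Mdm2-dependent p53 degradation
beta_m = 1.5     # p53-driven Mdm2 production
gamma  = 1.0     # Mdm2 turnover
tau    = 2.0     # delay for MDM2 transcription/translation (arbitrary time units)

dt, t_end = 0.01, 60.0
n = int(t_end / dt)
lag = int(tau / dt)
P = np.zeros(n); M = np.zeros(n)
P[0], M[0] = 1.0, 0.5

for i in range(n - 1):
    p_delayed = P[max(i - lag, 0)]            # p53 level tau time units ago
    dP = beta_p - alpha * M[i] * P[i]         # production minus Mdm2-mediated degradation
    dM = beta_m * p_delayed - gamma * M[i]    # delayed induction minus turnover
    P[i + 1] = max(P[i] + dt * dP, 0.0)
    M[i + 1] = max(M[i] + dt * dM, 0.0)
# P now holds a pulsatile p53 trajectory (damped or sustained, depending on the
# chosen constants), qualitatively mirroring the oscillations described above.
```

Depending on the constants, this kind of delayed loop yields either damped ringing or repeated pulses, which is the qualitative behaviour the text attributes to p53 under stress.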
MI-63 binds to MDM2, reactivating p53 in situations where p53's function has become inhibited. A ubiquitin specific protease, USP7 (or HAUSP), can cleave ubiquitin off p53, thereby protecting it from proteasome-dependent degradation via the ubiquitin ligase pathway. This is one means by which p53 is stabilized in response to oncogenic insults. USP42 has also been shown to deubiquitinate p53 and may be required for the ability of p53 to respond to stress. Recent research has shown that HAUSP is mainly localized in the nucleus, though a fraction of it can be found in the cytoplasm and mitochondria. Overexpression of HAUSP results in p53 stabilization. However, depletion of HAUSP does not result in a decrease in p53 levels but rather increases p53 levels due to the fact that HAUSP binds and deubiquitinates Mdm2. It has been shown that HAUSP is a better binding partner to Mdm2 than p53 in unstressed cells. USP10, however, has been shown to be located in the cytoplasm in unstressed cells and deubiquitinates cytoplasmic p53, reversing Mdm2 ubiquitination. Following DNA damage, USP10 translocates to the nucleus and contributes to p53 stability. Also USP10 does not interact with Mdm2. Phosphorylation of the N-terminal end of p53 by the above-mentioned protein kinases disrupts Mdm2-binding. Other proteins, such as Pin1, are then recruited to p53 and induce a conformational change in p53, which prevents Mdm2-binding even more. Phosphorylation also allows for binding of transcriptional coactivators, like p300 and PCAF, which then acetylate the C-terminal end of p53, exposing the DNA binding domain of p53, allowing it to activate or repress specific genes. Deacetylase enzymes, such as Sirt1 and Sirt7, can deacetylate p53, leading to an inhibition of apoptosis. Some oncogenes can also stimulate the transcription of proteins that bind to MDM2 and inhibit its activity. Epigenetic marks like histone methylation can also regulate p53, for example, p53 interacts directly with a repressive Trim24 cofactor that binds histones in regions of the genome that are epigenetically repressed. Trim24 prevents p53 from activating its targets, but only in these regions, effectively giving p53 the ability to 'read out' the histone profile at key target genes and act in a gene-specific manner. Role in disease If the TP53 gene is damaged, tumor suppression is severely compromised. People who inherit only one functional copy of the TP53 gene will most likely develop tumors in early adulthood, a disorder known as Li–Fraumeni syndrome. The TP53 gene can also be modified by mutagens (chemicals, radiation, or viruses), increasing the likelihood for uncontrolled cell division. More than 50 percent of human tumors contain a mutation or deletion of the TP53 gene. Loss of p53 creates genomic instability that most often results in an aneuploidy phenotype. Increasing the amount of p53 may seem a solution for treatment of tumors or prevention of their spreading. This, however, is not a usable method of treatment, since it can cause premature aging. Restoring endogenous normal p53 function holds some promise. Research has shown that this restoration can lead to regression of certain cancer cells without damaging other cells in the process. The ways by which tumor regression occurs depends mainly on the tumor type. For example, restoration of endogenous p53 function in lymphomas may induce apoptosis, while cell growth may be reduced to normal levels. 
Thus, pharmacological reactivation of p53 presents itself as a viable cancer treatment option. The first commercial gene therapy, Gendicine, was approved in China in 2003 for the treatment of head and neck squamous cell carcinoma. It delivers a functional copy of the p53 gene using an engineered adenovirus. Certain pathogens can also affect the p53 protein that the TP53 gene expresses. One such example, human papillomavirus (HPV), encodes a protein, E6, which binds to the p53 protein and inactivates it. This mechanism, in synergy with the inactivation of the cell cycle regulator pRb by the HPV protein E7, allows for repeated cell division manifested clinically as warts. Certain HPV types, in particular types 16 and 18, can also lead to progression from a benign wart to low- or high-grade cervical dysplasia, which are reversible forms of precancerous lesions. Persistent infection of the cervix over the years can cause irreversible changes leading to carcinoma in situ and eventually invasive cervical cancer. This results from the effects of HPV genes, particularly those encoding E6 and E7, which are the two viral oncoproteins that are preferentially retained and expressed in cervical cancers by integration of the viral DNA into the host genome. The p53 protein is continually produced and degraded in cells of healthy people, resulting in damped oscillation (stochastic models of this process have been described). The degradation of the p53 protein is associated with binding of MDM2. In a negative feedback loop, MDM2 itself is induced by the p53 protein. Mutant p53 proteins often fail to induce MDM2, causing p53 to accumulate at very high levels. Moreover, the mutant p53 protein itself can inhibit normal p53 protein levels. In some cases, single missense mutations in p53 have been shown to disrupt p53 stability and function. Suppression of p53 in human breast cancer cells is shown to lead to increased CXCR5 chemokine receptor gene expression and activated cell migration in response to chemokine CXCL13. One study found that p53 and Myc proteins were key to the survival of chronic myeloid leukaemia (CML) cells. Targeting p53 and Myc proteins with drugs gave positive results in mice with CML. Experimental analysis of p53 mutations Most p53 mutations are detected by DNA sequencing. However, it is known that single missense mutations can have a large spectrum of functional effects, from rather mild to very severe. The large spectrum of cancer phenotypes due to mutations in the TP53 gene is also supported by the fact that different isoforms of p53 proteins have different cellular mechanisms for prevention against cancer. Mutations in TP53 can give rise to different isoforms, preventing their overall functionality in different cellular mechanisms and thereby extending the cancer phenotype from mild to severe. Recent studies show that p53 isoforms are differentially expressed in different human tissues, and loss-of-function or gain-of-function mutations within the isoforms can cause tissue-specific cancer or provide cancer stem cell potential in different tissues. TP53 mutation also affects energy metabolism and increases glycolysis in breast cancer cells. The dynamics of the p53 protein, along with its antagonist Mdm2, indicate that the levels of p53, in units of concentration, oscillate as a function of time. This "damped" oscillation is both clinically documented and mathematically modelled.
Mathematical models also indicate that the p53 concentration oscillates much faster once genotoxic stresses, such as double-strand breaks (DSBs) or UV radiation, are introduced to the system. This supports and models the current understanding of p53 dynamics, where DNA damage induces p53 activation (see p53 regulation for more information). Current models can also be useful for modelling the mutations in p53 isoforms and their effects on p53 oscillation, thereby promoting de novo tissue-specific pharmacological drug discovery. Discovery p53 was identified in 1979 by Lionel Crawford, David P. Lane, Arnold Levine, and Lloyd Old, working at the Imperial Cancer Research Fund (UK), Princeton University/UMDNJ (Cancer Institute of New Jersey), and Memorial Sloan Kettering Cancer Center, respectively. It had been hypothesized to exist before as the target of the SV40 virus, a strain that induced development of tumors. The name p53 was given in 1979, describing its apparent molecular mass. The TP53 gene from the mouse was first cloned by Peter Chumakov of The Academy of Sciences of the USSR in 1982, and independently in 1983 by Moshe Oren in collaboration with David Givol (Weizmann Institute of Science). The human TP53 gene was cloned in 1984 and the full-length clone in 1985. It was initially presumed to be an oncogene due to the use of mutated cDNA following purification of tumor cell mRNA. Its role as a tumor suppressor gene was revealed in 1989 by Bert Vogelstein at the Johns Hopkins School of Medicine and Arnold Levine at Princeton University. p53 went on to be identified as a transcription factor by Guillermina Lozano, working at MD Anderson Cancer Center. Warren Maltzman, of the Waksman Institute of Rutgers University, first demonstrated that TP53 was responsive to DNA damage in the form of ultraviolet radiation. In a series of publications in 1991–92, Michael Kastan of Johns Hopkins University reported that TP53 was a critical part of a signal transduction pathway that helped cells respond to DNA damage. In 1993, p53 was voted molecule of the year by Science magazine. Structure p53 has seven domains: an acidic N-terminal transcription-activation domain (TAD), also known as activation domain 1 (AD1), which activates transcription factors. The N-terminus contains two complementary transcriptional activation domains, with a major one at residues 1–42 and a minor one at residues 55–75, specifically involved in the regulation of several pro-apoptotic genes. activation domain 2 (AD2), important for apoptotic activity: residues 43–63. proline-rich domain, important for the apoptotic activity of p53 by nuclear exportation via MAPK: residues 64–92. central DNA-binding core domain (DBD), which contains one zinc atom and several arginine amino acids: residues 102–292. This region is responsible for binding the p53 co-repressor LMO3. nuclear localization signal (NLS) domain: residues 316–325. homo-oligomerisation domain (OD): residues 307–355. Tetramerization is essential for the activity of p53 in vivo. C-terminal domain, involved in downregulation of DNA binding of the central domain: residues 356–393. Mutations that deactivate p53 in cancer usually occur in the DBD. Most of these mutations destroy the ability of the protein to bind to its target DNA sequences, and thus prevent transcriptional activation of these genes. As such, mutations in the DBD are recessive loss-of-function mutations. Molecules of p53 with mutations in the OD dimerise with wild-type p53, and prevent it from activating transcription.
Therefore, OD mutations have a dominant negative effect on the function of p53. Wild-type p53 is a labile protein, comprising folded and unstructured regions that function in a synergistic manner. SDS-PAGE analysis indicates that p53 is a 53-kilodalton (kDa) protein. However, the actual mass of the full-length p53 protein (p53α), based on the sum of the masses of its amino acid residues, is only 43.7 kDa. This difference is due to the high number of proline residues in the protein, which slow its migration on SDS-PAGE, thus making it appear heavier than it actually is. Isoforms As with 95% of human genes, TP53 encodes more than one protein. All these p53 proteins are called the p53 isoforms. These proteins range in size from 3.5 to 43.7 kDa. Several isoforms were discovered in 2005, and so far 12 human p53 isoforms have been identified (p53α, p53β, p53γ, ∆40p53α, ∆40p53β, ∆40p53γ, ∆133p53α, ∆133p53β, ∆133p53γ, ∆160p53α, ∆160p53β, ∆160p53γ). Furthermore, p53 isoforms are expressed in a tissue-dependent manner, and p53α is never expressed alone. The full-length p53 isoform proteins can be subdivided into different protein domains. Starting from the N-terminus, there are first the amino-terminal transcription-activation domains (TAD 1, TAD 2), which are needed to induce a subset of p53 target genes. This domain is followed by the proline-rich domain (PXXP), whereby the motif PXXP is repeated (P is a proline and X can be any amino acid). It is required, among other functions, for p53-mediated apoptosis. Some isoforms lack the proline-rich domain, such as Δ133p53β,γ and Δ160p53α,β,γ; hence some isoforms of p53 do not mediate apoptosis, emphasizing the diverse roles of the TP53 gene. Afterwards there is the DNA-binding domain (DBD), which enables sequence-specific binding. The C-terminal domain completes the protein. It includes the nuclear localization signal (NLS), the nuclear export signal (NES) and the oligomerisation domain (OD). The NLS and NES are responsible for the subcellular regulation of p53. Through the OD, p53 can form a tetramer and then bind to DNA. Among the isoforms, some domains can be missing, but all of them share most of the highly conserved DNA-binding domain. The isoforms are formed by different mechanisms. The beta and the gamma isoforms are generated by multiple splicing of intron 9, which leads to a different C-terminus. Furthermore, the usage of an internal promoter in intron 4 causes the ∆133 and ∆160 isoforms, which lack the TAD domain and a part of the DBD. Moreover, alternative initiation of translation at codon 40 or 160 yields the ∆40p53 and ∆160p53 isoforms. Due to the isoformic nature of p53 proteins, several lines of evidence show that mutations within the TP53 gene that give rise to mutated isoforms are causative agents of various cancer phenotypes, from mild to severe (refer to the section Experimental analysis of p53 mutations for more details).
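The 43.7 kDa figure quoted above comes from summing residue masses along the polypeptide chain. The sketch below shows that calculation for an arbitrary example peptide (deliberately not the p53 sequence); the residue masses are standard average values rounded to two decimals, so the result is approximate.

```python
# Average residue masses in daltons (amino acid minus one water); rounded standard values.
RESIDUE_MASS = {
    'G': 57.05, 'A': 71.08, 'S': 87.08, 'P': 97.12, 'V': 99.13,
    'T': 101.10, 'C': 103.14, 'L': 113.16, 'I': 113.16, 'N': 114.10,
    'D': 115.09, 'Q': 128.13, 'K': 128.17, 'E': 129.12, 'M': 131.19,
    'H': 137.14, 'F': 147.18, 'R': 156.19, 'Y': 163.18, 'W': 186.21,
}
WATER = 18.02   # one water molecule added back for the free N- and C-termini

def protein_mass_kda(sequence: str) -> float:
    """Approximate mass of a polypeptide from the sum of its residue masses plus one water."""
    return (sum(RESIDUE_MASS[aa] for aa in sequence) + WATER) / 1000.0

# Arbitrary example peptide (not the p53 sequence). Applied to the 393-residue p53alpha
# chain, the same calculation gives roughly 43.7 kDa, well below the ~53 kDa seen on SDS-PAGE.
print(f"{protein_mass_kda('ACDEFGHIKL'):.2f} kDa")
```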
Interactions p53 has been shown to interact with: AIMP2, ANKRD2, APTX, ATM, ATR, ATF3, AURKA, BAK1, BARD1, BLM, BRCA1, BRCA2, BRCC3, BRE, CEBPZ, CDC14A, Cdk1, CFLAR, CHEK1, CCNG1, CREBBP, CREB1, Cyclin H, CDK7, DNA-PKcs, E4F1, EFEMP2, EIF2AK2, ELL, EP300, ERCC6, GNL3, GPS2, GSK3B, HSP90AA1, HIF1A, HIPK1, HIPK2, HMGB1, HSPA9, Huntingtin, ING1, ING4, ING5, IκBα, KPNB1, LMO3, Mdm2, MDM4, MED1, MAPK9, MNAT1, NDN, NCL, NUMB, NF-κB, P16, PARC, PARP1, PIAS1, CDC14B, PIN1, PLAGL1, PLK3, PRKRA, PHB, PML, PSME3, PTEN, PTK2, PTTG1, RAD51, RCHY1, RELA, Reprimo RPA1, RPL11, S100B, SUMO1, SMARCA4, SMARCB1, SMN1, STAT3, TBP, TFAP2A, TFDP1, TIGAR, TOP1, TOP2A, TP53BP1, TP53BP2, TOP2B, TP53INP1, TSG101, UBE2A, UBE2I, UBC, USP7, USP10, WRN, WWOX, XPB, YBX1, YPEL3, YWHAZ, Zif268, ZNF148, SIRT1, circRNA_014511. See also Pifithrin, an inhibitor of P53 Notes References External links GeneReviews/NCBI/NIH/UW entry on Li-Fraumeni Syndrome TUMOR PROTEIN p53 @ OMIM p53 restoration of function p53 @ The Atlas of Genetics and Cytogenetics in Oncology and Haematology TP53 Gene @ GeneCards p53 News provided by insciences organisation Living LFS A non-profit Li-Fraumeni Syndrome patient support organization The George Pantziarka TP53 Trust A support group from the UK for people with Li-Fraumeni Syndrome or other TP53-related disorders IARC TP53 Somatic Mutations database maintained at IARC, Lyon, by Magali Olivier PDBe-KB provides an overview of all the structure information available in the PDB for Human P53. scientific animation conformational changes of p53 upon binding to DNA Programmed cell death Proteins Transcription factors Tumor suppressor genes Apoptosis Genes mutated in mice Aging-related proteins
P53
[ "Chemistry", "Biology" ]
6,256
[ "Biomolecules by chemical classification", "Gene expression", "Aging-related proteins", "Signal transduction", "Senescence", "Induced stem cells", "Apoptosis", "Molecular biology", "Proteins", "Programmed cell death", "Transcription factors" ]
24,834
https://en.wikipedia.org/wiki/Protein%20targeting
Protein targeting or protein sorting is the biological mechanism by which proteins are transported to their appropriate destinations within or outside the cell. Proteins can be targeted to the inner space of an organelle, different intracellular membranes, the plasma membrane, or to the exterior of the cell via secretion. Information contained in the protein itself directs this delivery process. Correct sorting is crucial for the cell; errors or dysfunction in sorting have been linked to multiple diseases. History In 1970, Günter Blobel conducted experiments on protein translocation across membranes. Blobel, then an assistant professor at Rockefeller University, built upon the work of his colleague George Palade. Palade had previously demonstrated that non-secreted proteins were translated by free ribosomes in the cytosol, while secreted proteins (and target proteins, in general) were translated by ribosomes bound to the endoplasmic reticulum. Candidate explanations at the time postulated a processing difference between free and ER-bound ribosomes, but Blobel hypothesized that protein targeting relied on characteristics inherent to the proteins, rather than a difference in ribosomes. Supporting his hypothesis, Blobel discovered that many proteins have a short amino acid sequence at one end that functions like a postal code specifying an intracellular or extracellular destination. He described these short sequences (generally 13 to 36 amino acid residues) as signal peptides or signal sequences, and was awarded the 1999 Nobel Prize in Physiology or Medicine for this discovery. Signal peptides Signal peptides serve as targeting signals, enabling cellular transport machinery to direct proteins to specific intracellular or extracellular locations. While no consensus sequence has been identified for signal peptides, many nonetheless possess a characteristic tripartite structure: A positively charged, hydrophilic region near the N-terminus. A span of 10 to 15 hydrophobic amino acids near the middle of the signal peptide. A slightly polar region near the C-terminus, typically favoring amino acids with smaller side chains at positions approaching the cleavage site. After a protein has reached its destination, the signal peptide is generally cleaved by a signal peptidase. Consequently, most mature proteins do not contain signal peptides. While most signal peptides are found at the N-terminus, in peroxisomes the targeting sequence is located on a C-terminal extension. Unlike signal peptides, signal patches are composed of amino acid residues that are discontinuous in the primary sequence but become functional when folding brings them together on the protein surface. Unlike most signal sequences, signal patches are not cleaved after sorting is complete. In addition to intrinsic signaling sequences, protein modifications like glycosylation can also induce targeting to specific intracellular or extracellular regions. Protein translocation Since the translation of mRNA into protein by a ribosome takes place within the cytosol, proteins destined for secretion or a specific organelle must be translocated. This process can occur during translation, known as co-translational translocation, or after translation is complete, known as post-translational translocation. Co-translational translocation Most secretory and membrane-bound proteins are co-translationally translocated. Proteins that reside in the endoplasmic reticulum (ER), Golgi or endosomes also use the co-translational translocation pathway.
This process begins while the protein is being synthesized on the ribosome, when a signal recognition particle (SRP) recognizes an N-terminal signal peptide of the nascent protein. Binding of the SRP temporarily pauses synthesis while the ribosome-protein complex is transferred to an SRP receptor on the ER in eukaryotes, and the plasma membrane in prokaryotes. There, the nascent protein is inserted into the translocon, a membrane-bound protein-conducting channel composed of the Sec61 translocation complex in eukaryotes, and the homologous SecYEG complex in prokaryotes. In secretory proteins and type I transmembrane proteins, the signal sequence is immediately cleaved from the nascent polypeptide once it has been translocated into the membrane of the ER (eukaryotes) or plasma membrane (prokaryotes) by signal peptidase. The signal sequences of type II membrane proteins and some polytopic membrane proteins are not cleaved off and are therefore referred to as signal anchor sequences. Within the ER, the protein is first covered by a chaperone protein to protect it from the high concentration of other proteins in the ER, giving it time to fold correctly. Once folded, the protein is modified as needed (for example, by glycosylation), then transported to the Golgi for further processing and goes to its target organelles or is retained in the ER by various ER retention mechanisms. The amino acid chain of transmembrane proteins, which often are transmembrane receptors, passes through a membrane one or several times. These proteins are inserted into the membrane by translocation, until the process is interrupted by a stop-transfer sequence, also called a membrane anchor or signal-anchor sequence. These complex membrane proteins are currently characterized using the same model of targeting that has been developed for secretory proteins. However, many complex multi-transmembrane proteins contain structural aspects that do not fit this model. Seven-transmembrane G-protein coupled receptors (which represent about 5% of the genes in humans) mostly do not have an amino-terminal signal sequence. In contrast to secretory proteins, the first transmembrane domain acts as the first signal sequence, which targets them to the ER membrane. This also results in the translocation of the amino terminus of the protein into the ER lumen. This translocation, which has been demonstrated with opsin in in vitro experiments, breaks the usual pattern of "co-translational" translocation which has always held for mammalian proteins targeted to the ER. A great deal of the mechanics of transmembrane topology and folding remains to be elucidated. Post-translational translocation Even though most secretory proteins are co-translationally translocated, some are translated in the cytosol and later transported to the ER/plasma membrane by a post-translational system. In prokaryotes this process requires certain cofactors such as SecA and SecB, while in eukaryotes it is facilitated by Sec62 and Sec63, two membrane-bound proteins. The Sec63 complex, which is embedded in the ER membrane, causes hydrolysis of ATP, allowing chaperone proteins to bind to an exposed peptide chain and slide the polypeptide into the ER lumen. Once in the lumen the polypeptide chain can be folded properly. This process only occurs in unfolded proteins located in the cytosol. In addition, proteins targeted to other cellular destinations, such as mitochondria, chloroplasts, or peroxisomes, use specialized post-translational pathways.
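The hydrophobic h-regions, signal-anchor and stop-transfer segments discussed above are often flagged in practice with a simple sliding-window hydropathy scan. The following is a rough sketch of that idea only; the Kyte-Doolittle scale is standard, but the window length, the threshold, and the example sequence are illustrative assumptions, not a validated predictor.

```python
# Kyte-Doolittle hydropathy values (standard scale).
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def hydrophobic_segments(seq: str, window: int = 12, threshold: float = 1.6):
    """Return (start_index, mean_hydropathy) for windows hydrophobic enough to be
    candidate h-regions, signal-anchor, or stop-transfer segments."""
    hits = []
    for i in range(len(seq) - window + 1):
        score = sum(KD[aa] for aa in seq[i:i + window]) / window
        if score >= threshold:
            hits.append((i, round(score, 2)))
    return hits

# Arbitrary example: a lysine-containing start followed by a strongly hydrophobic run.
example = "MKKTLLALALLLVVAVLAAPSAQAQDTKE"
print(hydrophobic_segments(example))
```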
Proteins targeted for the nucleus are also translocated post-translationally, through the addition of a nuclear localization signal (NLS) that promotes passage through the nuclear envelope via nuclear pores. Sorting of proteins Mitochondria While some proteins in the mitochondria originate from mitochondrial DNA within the organelle, most mitochondrial proteins are synthesized as cytosolic precursors containing uptake peptide signals. Unfolded proteins bound by the cytosolic chaperone hsp70 that are targeted to the mitochondria may be localized to four different areas depending on their sequences. They may be targeted to the mitochondrial matrix, the outer membrane, the intermembrane space, or the inner membrane. Defects in any one or more of these processes have been linked to health and disease. Mitochondrial matrix Proteins destined for the mitochondrial matrix have specific signal sequences at their beginning (N-terminus) that consist of a string of 20 to 50 amino acids. These sequences are designed to interact with receptors that guide the proteins to their correct location inside the mitochondria. The sequences have a unique structure with clusters of water-loving (hydrophilic) and water-avoiding (hydrophobic) amino acids, giving them a dual nature known as amphipathic. These amphipathic sequences typically form a spiral shape (alpha-helix) with the charged amino acids on one side and the hydrophobic ones on the opposite side. This structural feature is essential for the sequence to function correctly in directing proteins to the matrix. If mutations occur that disrupt this dual nature, the protein often fails to reach its intended destination, although not all changes to the sequence have this effect. This indicates the importance of the amphipathic property for the protein to be correctly targeted to the mitochondrial matrix. Targeting of proteins to the mitochondrial matrix first involves interactions between the matrix-targeting sequence located at the N-terminus and the outer membrane import receptor complex TOM20/22, in addition to the docking of internal sequences and cytosolic chaperones to TOM70 (TOM is an abbreviation for translocase of the outer membrane). Binding of the matrix-targeting sequence to the import receptor triggers a handoff of the polypeptide to the general import core (GIP) known as TOM40. The general import core (TOM40) then feeds the polypeptide chain through the intermembrane space and into another translocase complex, TIM17/23/44, located on the inner mitochondrial membrane. This is accompanied by the necessary release of the cytosolic chaperones that maintain an unfolded state prior to entering the mitochondria. As the polypeptide enters the matrix, the signal sequence is cleaved by a processing peptidase and the remaining sequences are bound by mitochondrial chaperones to await proper folding and activity. The push and pull of the polypeptide from the cytosol to the intermembrane space and then the matrix is achieved by an electrochemical gradient that is established by the mitochondrion during oxidative phosphorylation: a mitochondrion active in metabolism has generated a negative potential inside the matrix and a positive potential in the intermembrane space. It is this negative potential inside the matrix that directs the positively charged regions of the targeting sequence into its desired location.
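The two properties emphasized above for matrix-targeting presequences, a net positive charge and an amphipathic helical face, can each be given a crude numeric score. The sketch below computes the net charge and an Eisenberg-style hydrophobic moment (assuming an ideal alpha-helix with 100° per residue); the hydropathy scale is the standard Kyte-Doolittle one, while the example peptide is purely hypothetical and the scores carry no calibrated threshold.

```python
import math

# Kyte-Doolittle hydropathy (standard scale) and residue charges at neutral pH.
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}
CHARGE = {'R': +1, 'K': +1, 'D': -1, 'E': -1}

def net_charge(seq: str) -> int:
    """Sum of side-chain charges; positive values fit the presequence picture above."""
    return sum(CHARGE.get(aa, 0) for aa in seq)

def hydrophobic_moment(seq: str, delta_deg: float = 100.0) -> float:
    """Eisenberg-style hydrophobic moment per residue, assuming an ideal alpha-helix
    (100 degrees of rotation per residue); larger values mean a more amphipathic helix."""
    delta = math.radians(delta_deg)
    s = sum(KD[aa] * math.sin(i * delta) for i, aa in enumerate(seq))
    c = sum(KD[aa] * math.cos(i * delta) for i, aa in enumerate(seq))
    return math.hypot(s, c) / len(seq)

# Hypothetical presequence-like peptide (for illustration only).
presequence = "MLRTSSLFTRRVQPSLFRNILRLQST"
print("net charge:", net_charge(presequence))
print("hydrophobic moment:", round(hydrophobic_moment(presequence), 2))
```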
Mitochondrial inner membrane Targeting of mitochondrial proteins to the inner membrane may follow three different pathways depending upon their overall sequences; however, entry from the outer membrane remains the same, using the import receptor complex TOM20/22 and the TOM40 general import core. The first pathway for proteins targeted to the inner membrane follows the same steps as for those destined for the matrix: the protein contains a matrix targeting sequence that channels the polypeptide to the inner membrane complex containing the previously mentioned translocase complex TIM17/23/44. The difference is that peptides destined for the inner membrane, rather than the matrix, contain an additional sequence called the stop-transfer-anchor sequence. This stop-transfer-anchor sequence is a hydrophobic region that embeds itself into the phospholipid bilayer of the inner membrane and prevents translocation further into the mitochondrion. The second pathway for proteins targeted to the inner membrane follows the matrix localization pathway in its entirety. However, instead of a stop-transfer-anchor sequence, it contains another sequence that, once inside the matrix, interacts with an inner membrane protein called Oxa-1, which embeds it into the inner membrane. The third pathway for mitochondrial proteins targeted to the inner membrane follows the same entry as the others into the outer membrane; however, this pathway utilizes the translocase complex TIM22/54, assisted by the complex TIM9/10 in the intermembrane space, to anchor the incoming peptide into the membrane. The peptides for this last pathway do not contain a matrix targeting sequence, but instead contain several internal targeting sequences. Mitochondrial intermembrane space If instead the precursor protein is destined for the intermembrane space of the mitochondrion, there are two pathways by which this may occur, depending on the sequences being recognized. The first pathway to the intermembrane space follows the same steps as for an inner membrane targeted protein. However, once bound to the inner membrane, the C-terminus of the anchored protein is cleaved by a peptidase that liberates the preprotein into the intermembrane space so it can fold into its active state. A well-known example of a protein that follows this pathway is cytochrome b2, which upon being cleaved interacts with a heme cofactor and becomes active. The second intermembrane space pathway does not utilize any inner membrane complexes, and the protein therefore does not contain a matrix targeting signal. Instead, it enters through the general import core TOM40 and is further modified in the intermembrane space to achieve its active conformation. TIM9/10 is an example of a protein that follows this pathway in order to reach the location where it assists in inner membrane targeting. Mitochondrial outer membrane Outer membrane targeting involves the interaction of precursor proteins with outer membrane translocase complexes that embed them into the membrane via internal targeting sequences that form hydrophobic alpha helices or beta barrels spanning the phospholipid bilayer. This may occur by two different routes depending on the preprotein's internal sequences. If the preprotein contains internal hydrophobic regions capable of forming alpha helices, then the preprotein will utilize the mitochondrial import complex (MIM) and be transferred laterally to the membrane.
For preproteins containing hydrophobic internal sequences that correspond to beta-barrel-forming proteins, import proceeds from the aforementioned outer membrane complex TOM20/22 to the intermembrane space. There they interact with the TIM9/10 intermembrane-space protein complex, which transfers them to the sorting and assembly machinery (SAM) present in the outer membrane, which in turn laterally displaces the targeted protein into the membrane as a beta-barrel. Chloroplasts Chloroplasts are similar to mitochondria in that they contain their own DNA for production of some of their components. However, the majority of their proteins are obtained via post-translational translocation and arise from nuclear genes. Proteins may be targeted to several sites of the chloroplast depending on their sequences, such as the outer envelope, inner envelope, stroma, thylakoid lumen, or the thylakoid membrane. Proteins are targeted to thylakoids by mechanisms related to bacterial protein translocation. Proteins targeted to the envelope of chloroplasts usually lack a cleavable sorting sequence and are laterally displaced via membrane sorting complexes. General import for the majority of preproteins requires translocation from the cytosol through the Toc and Tic complexes located within the chloroplast envelope, where Toc is an abbreviation for the translocase of the outer chloroplast envelope and Tic for the translocase of the inner chloroplast envelope. A minimum of three proteins make up the functional Toc complex. Two of these, referred to as Toc159 and Toc34, are responsible for the docking of stromal import sequences, and both contain GTPase activity. The third, known as Toc75, is the actual translocation channel that feeds the preprotein recognized by Toc159/34 into the chloroplast. Stroma Targeting to the stroma requires the preprotein to have a stromal import sequence that is recognized by the Tic complex of the inner envelope upon being translocated from the outer envelope by the Toc complex. The Tic complex is composed of at least five different Tic proteins that are required to form the translocation channel across the inner envelope. Upon delivery to the stroma, the stromal import sequence is cleaved off by a signal peptidase. This delivery process to the stroma is currently known to be driven by ATP hydrolysis via stromal HSP chaperones, instead of the transmembrane electrochemical gradient that is established in mitochondria to drive protein import. Further intra-chloroplast sorting depends on additional target sequences, such as those designated for the thylakoid membrane or the thylakoid lumen. Thylakoid lumen If a protein is to be targeted to the thylakoid lumen, this may occur via four known routes that closely resemble bacterial protein transport mechanisms. The route that is taken depends upon whether the protein delivered to the stroma is in an unfolded or a metal-bound, folded state. Both still contain a thylakoid targeting sequence that is also cleaved upon entry into the lumen. While protein import into the stroma is ATP-driven, the pathway for metal-bound proteins in a folded state to the thylakoid lumen has been shown to be driven by a pH gradient. Thylakoid membrane Proteins bound for the thylakoid membrane follow up to four known routes.
They may follow a co-translational insertion route that utilizes stromal ribosomes and the SecY/E transmembrane complex, the SRP-dependent pathway, the spontaneous insertion pathway, or the GET pathway. The last three are post-translational pathways originating from nuclear genes and therefore account for the majority of proteins targeted to the thylakoid membrane. According to recent review articles in biochemistry and molecular biology, the exact mechanisms are not yet fully understood. Both chloroplasts and mitochondria Many proteins are needed in both mitochondria and chloroplasts. In general, the dual-targeting peptide is intermediate in character between the two organelle-specific ones. The targeting peptides of these proteins have a high content of basic and hydrophobic amino acids and a low content of negatively charged amino acids. They have a lower content of alanine and a higher content of leucine and phenylalanine. The dual-targeted proteins have a more hydrophobic targeting peptide than both mitochondrial and chloroplastic ones. However, it is difficult to predict whether a peptide is dual-targeted or not based on its physico-chemical characteristics. Nucleus The nucleus of a cell is surrounded by a nuclear envelope consisting of two layers, with the inner layer providing structural support and anchorage for chromosomes and the nuclear lamina. The outer layer is similar to the endoplasmic reticulum (ER) membrane. This envelope contains nuclear pores, which are complex structures made from around 30 different proteins. These pores act as selective gates that control the flow of molecules into and out of the nucleus. While small molecules can pass through these pores without issue, larger molecules, like RNA and proteins destined for the nucleus, must have specific signals to be allowed through. These signals are known as nuclear localization signals, usually comprising short sequences rich in positively charged amino acids like lysine or arginine. Proteins called nuclear import receptors recognize these signals and guide the large molecules through the nuclear pores by interacting with the disordered, mesh-like proteins that fill the pore. The process is dynamic, with the receptor moving the molecule through the meshwork until it reaches the nucleus. Once inside, a GTPase enzyme called Ran, which can exist in two different forms (one bound to GTP and the other to GDP), facilitates the release of the cargo inside the nucleus and recycles the receptor back to the cytosol. The energy for this transport comes from the hydrolysis of GTP by Ran. Similarly, nuclear export receptors help move proteins and RNA out of the nucleus using a different signal and also harnessing Ran's energy conversion. Overall, the nuclear pore complex works efficiently to transport macromolecules at high speed, allowing proteins to move in their folded state and ribosomal components as complete particles, which is distinct from how proteins are transported into most other organelles. Endoplasmic reticulum The endoplasmic reticulum (ER) plays a key role in protein synthesis and distribution in eukaryotic cells. It is a vast network of membranes where proteins are processed and sorted to various destinations, including the ER itself, the cell surface, and other organelles such as the Golgi apparatus, endosomes, and lysosomes. Unlike other organelle-targeted proteins, those headed for the ER start to be transferred across its membrane while they are still being made.
Protein synthesis and sorting There are two types of proteins that move to the ER: water-soluble proteins, which completely cross into the ER lumen, and transmembrane proteins, which partly cross and embed themselves within the ER membrane. These proteins find their way to the ER with the help of an ER signal sequence, a short stretch of hydrophobic amino acids. Proteins entering the ER are synthesized by ribosomes. There are two sets of ribosomes in the cell: those bound to the ER (making it look 'rough') and those floating freely in the cytosol. Both sets are identical but differ in the proteins they synthesize at a given moment. Ribosomes that are making proteins with an ER signal sequence attach to the ER membrane and start the translocation process. This process is energy-efficient because the growing protein chain itself pushes through the ER membrane as it elongates. As the mRNA is translated into a protein, multiple ribosomes may attach to it, creating a structure called a polyribosome. If the mRNA is coding for a protein with an ER signal sequence, the polyribosome attaches to the ER membrane, and the protein begins to enter the ER while it is still being synthesized. Guided entry of soluble proteins In the process of protein synthesis within eukaryotic cells, soluble proteins that are destined for the endoplasmic reticulum (ER) or for secretion out of the cell are guided to the ER by a two-part system. Firstly, a signal-recognition particle (SRP) in the cytosol attaches to the emerging protein's ER signal sequence and to the ribosome itself. Secondly, an SRP receptor located in the ER membrane recognizes and binds to the SRP. This interaction temporarily slows down protein synthesis until the SRP–ribosome complex binds to the SRP receptor on the ER. Once this binding occurs, the SRP is released, and the ribosome is transferred to a protein translocator in the ER membrane, allowing protein synthesis to continue. The polypeptide chain of the protein is then threaded through a channel in the translocator into the ER lumen. The signal sequence of the protein, typically at the beginning (N-terminus) of the polypeptide chain, plays a dual role. It not only targets the ribosome to the ER but also triggers the opening of the translocator. As the protein is fed through the translocator, the signal sequence stays attached, allowing the rest of the protein to move through as a loop. A signal peptidase inside the ER then cuts off the signal sequence, which is subsequently released into the lipid bilayer of the ER membrane and broken down. Finally, once the last part of the protein (the C-terminus) passes through the translocator, the entire soluble protein is released into the ER lumen, where it can then fold and undergo further modifications or be transported to its final destination. Mechanisms of transmembrane protein integration Transmembrane proteins, which are partly integrated into the ER membrane rather than released into the ER lumen, have a complex assembly process. The initial stages are similar to those for soluble proteins: a signal sequence starts the insertion into the ER membrane. However, this process is interrupted by a stop-transfer sequence (a string of hydrophobic amino acids), which causes the translocator to halt and release the protein laterally into the membrane. This results in a single-pass transmembrane protein with one end inside the ER lumen and the other in the cytosol, and this orientation is permanent.
Some transmembrane proteins use an internal signal (start-transfer sequence) instead of one at the N-terminus, and unlike the initial signal sequence, this start-transfer sequence is not removed. It begins the transfer process, which continues until a stop-transfer sequence is encountered, at which point both sequences become anchored in the membrane as alpha-helical segments. In more complex proteins that span the membrane multiple times, additional pairs of start- and stop-transfer sequences are used to weave the protein into the membrane in a fashion akin to a sewing machine. Each pair allows a new segment to cross the membrane and adds to the protein's structure, ensuring it is properly embedded with the correct arrangement of segments inside and outside the ER membrane. Peroxisomes Peroxisomes contain a single phospholipid bilayer that surrounds the peroxisomal matrix, which contains a wide variety of proteins and enzymes that participate in anabolism and catabolism. Peroxisomes are specialized cell organelles that carry out specific oxidative reactions using molecular oxygen. Their primary function is to remove hydrogen atoms from organic molecules, a process that results in the production of hydrogen peroxide (H2O2). Within peroxisomes, an enzyme called catalase plays a critical role. It uses the hydrogen peroxide generated in the earlier reaction to oxidize various other substances, including phenols, formic acid, formaldehyde, and alcohol. This is known as the "peroxidative" reaction. Peroxisomes are particularly important in liver and kidney cells for detoxifying harmful substances that enter the bloodstream. For example, they are responsible for oxidizing about 25% of the ethanol we consume into acetaldehyde. Additionally, catalase within peroxisomes can break down excess hydrogen peroxide into water and oxygen, thus preventing potential damage from the build-up of H2O2. Since the peroxisome contains no internal DNA, unlike the mitochondrion or chloroplast, all peroxisomal proteins are encoded by nuclear genes. To date there are two known types of peroxisome targeting signals (PTS): Peroxisome targeting signal 1 (PTS1): a C-terminal tripeptide with a consensus sequence (S/A/C)-(K/R/H)-(L/A). The most common PTS1 is serine-lysine-leucine (SKL). The initial research that led to the discovery of this consensus observed that when firefly luciferase was expressed in cultured insect cells it was targeted to the peroxisome. By testing a variety of mutations in the gene encoding the expressed luciferase, the consensus sequence was then determined. It has also been found that adding this C-terminal SKL sequence to a cytosolic protein targets it for transport to the peroxisome. The majority of peroxisomal matrix proteins possess this PTS1-type signal. Peroxisome targeting signal 2 (PTS2): a nonapeptide located near the N-terminus with a consensus sequence (R/K)-(L/V/I)-XXXXX-(H/Q)-(L/A/F) (where X can be any amino acid). There are also proteins that possess neither of these signals. Their transport may be based on a so-called "piggy-back" mechanism: such proteins associate with PTS1-possessing matrix proteins and are translocated into the peroxisomal matrix together with them. In the case of cytosolic proteins that are produced with the PTS1 C-terminal sequence, their path to the peroxisomal matrix depends on binding to another cytosolic protein called pex5 (peroxin 5). Once bound, pex5 interacts with the peroxisomal membrane protein pex14 to form a complex.
When the pex5 protein with bound cargo interacts with the pex14 membrane protein, the complex induces the release of the targeted protein into the matrix. Upon release of the cargo protein into the matrix, pex5 dissociates from pex14 via ubiquitination by a membrane complex comprising pex2, pex12, and pex10, followed by ATP-dependent removal involving the cytosolic protein complex pex1 and pex6. After the ATP-dependent removal of ubiquitin, pex5 is free to bind another protein containing a PTS1 sequence, and the cycle of pex5-mediated import into the peroxisomal matrix begins again. Proteins containing a PTS2 targeting sequence are handled by a different cytosolic protein but are believed to follow a mechanism similar to that of proteins containing the PTS1 sequence. Diseases Protein transport is defective in the following genetic diseases: Mohr–Tranebjaerg syndrome Zellweger syndrome Adrenoleukodystrophy (ALD) Refsum disease Parkinson's disease Hypercholesterolemia, atherosclerosis, obesity, and diabetes In bacteria and archaea As discussed above (see protein translocation), most prokaryotic membrane-bound and secretory proteins are targeted to the plasma membrane by either a co-translational pathway that uses bacterial SRP or a post-translational pathway that requires SecA and SecB. At the plasma membrane, these two pathways deliver proteins to the SecYEG translocon for translocation. Bacteria may have a single plasma membrane (Gram-positive bacteria), or an inner membrane plus an outer membrane separated by the periplasm (Gram-negative bacteria). Besides the plasma membrane, the majority of prokaryotes lack the membrane-bound organelles found in eukaryotes, but they may assemble proteins onto various types of inclusions such as gas vesicles and storage granules. Gram-negative bacteria In gram-negative bacteria proteins may be incorporated into the plasma membrane, the outer membrane, the periplasm, or secreted into the environment. Systems for secreting proteins across the bacterial outer membrane may be quite complex and play key roles in pathogenesis. These systems may be described as type I secretion, type II secretion, etc. Gram-positive bacteria In most gram-positive bacteria, certain proteins are targeted for export across the plasma membrane and subsequent covalent attachment to the bacterial cell wall. A specialized enzyme, sortase, cleaves the target protein at a characteristic recognition site near the protein C-terminus, such as an LPXTG motif (where X can be any amino acid), then transfers the protein onto the cell wall. Several analogous systems are found that likewise feature a signature motif on the extra-cytoplasmic face, a C-terminal transmembrane domain, and a cluster of basic residues on the cytosolic face at the protein's extreme C-terminus. The PEP-CTERM/exosortase system, found in many Gram-negative bacteria, seems to be related to extracellular polymeric substance production. The PGF-CTERM/archaeosortase A system in archaea is related to S-layer production. The GlyGly-CTERM/rhombosortase system, found in the Shewanella, Vibrio, and a few other genera, seems to be involved in the release of proteases, nucleases, and other enzymes. Bioinformatic tools Minimotif Miner is a bioinformatics tool that searches protein sequence queries for known protein targeting sequence motifs. Phobius predicts signal peptides based on a supplied primary sequence. SignalP predicts signal peptide cleavage sites. LOCtree predicts the subcellular localization of proteins.
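In the spirit of the motif-searching tools just listed, the consensus motifs quoted in this article can be expressed as simple patterns. The sketch below scans a sequence for the PTS1 C-terminal tripeptide and the sortase LPXTG motif using regular expressions; the example sequences are hypothetical, and real predictors such as Minimotif Miner, Phobius, or SignalP use much richer models than a bare pattern match.

```python
import re

# Minimal motif-scan sketch. The patterns restate consensus motifs quoted in
# this article: PTS1 = (S/A/C)(K/R/H)(L/A) at the C-terminus, and the sortase
# recognition motif LPXTG (X = any amino acid). Illustrative only.

MOTIFS = {
    "PTS1 (peroxisomal, C-terminal)": re.compile(r"[SAC][KRH][LA]$"),
    "LPXTG (sortase recognition)":    re.compile(r"LP.TG"),
}

def scan_targeting_motifs(seq: str) -> dict:
    """Return which of the listed motifs occur in the given sequence."""
    seq = seq.upper()
    return {name: bool(pat.search(seq)) for name, pat in MOTIFS.items()}

# Hypothetical example sequences (not real proteins):
print(scan_targeting_motifs("MSTAVLENPGLGRKLSDFGQETSYIEDSKL"))  # PTS1 hit (ends in SKL)
print(scan_targeting_motifs("MKKTAIAIAVALAGFATVAQALPETGGEE"))   # LPXTG hit (LPETG)
```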
Notes See also Bulk flow COPI COPII Clathrin LocDB PSORTdb Signal peptide References External links Post-translational modification Membrane proteins
Protein targeting
[ "Chemistry", "Biology" ]
6,648
[ "Gene expression", "Protein targeting", "Protein classification", "Biochemical reactions", "Post-translational modification", "Cellular processes", "Membrane proteins" ]
24,838
https://en.wikipedia.org/wiki/Peptidoglycan
Peptidoglycan or murein is a unique large macromolecule, a polysaccharide, consisting of sugars and amino acids that forms a mesh-like layer (sacculus) that surrounds the bacterial cytoplasmic membrane. The sugar component consists of alternating residues of β-(1,4) linked N-acetylglucosamine (NAG) and N-acetylmuramic acid (NAM). Attached to the N-acetylmuramic acid is an oligopeptide chain made of three to five amino acids. The peptide chain can be cross-linked to the peptide chain of another strand forming the 3D mesh-like layer. Peptidoglycan serves a structural role in the bacterial cell wall, giving structural strength, as well as counteracting the osmotic pressure of the cytoplasm. This repetitive linking results in a dense peptidoglycan layer which is critical for maintaining cell form and withstanding high osmotic pressures, and it is regularly replaced by peptidoglycan production. Peptidoglycan hydrolysis and synthesis are two processes that must occur in order for cells to grow and multiply, a technique carried out in three stages: clipping of current material, insertion of new material, and re-crosslinking of existing material to new material. The peptidoglycan layer is substantially thicker in gram-positive bacteria (20 to 80 nanometers) than in gram-negative bacteria (7 to 8 nanometers). Depending on pH growth conditions, the peptidoglycan forms around 40 to 90% of the cell wall's dry weight of gram-positive bacteria but only around 10% of gram-negative strains. Thus, presence of high levels of peptidoglycan is the primary determinant of the characterisation of bacteria as gram-positive. In gram-positive strains, it is important in attachment roles and serotyping purposes. For both gram-positive and gram-negative bacteria, particles of approximately 2 nm can pass through the peptidoglycan. It is difficult to tell whether an organism is gram-positive or gram-negative using a microscope; Gram staining, created by Hans Christian Gram in 1884, is required. The bacteria are stained with the dyes crystal violet and safranin. Gram positive cells are purple after staining, while Gram negative cells stain pink. Structure The peptidoglycan layer within the bacterial cell wall is a crystal lattice structure formed from linear chains of two alternating amino sugars, namely N-acetylglucosamine (GlcNAc or NAG) and N-acetylmuramic acid (MurNAc or NAM). The alternating sugars are connected by a β-(1,4)-glycosidic bond. Each MurNAc is attached to a short (4- to 5-residue) amino acid chain, containing L-alanine, D-glutamic acid, meso-diaminopimelic acid, and D-alanine in the case of Escherichia coli (a gram-negative bacterium); or L-alanine, D-glutamine, L-lysine, and D-alanine with a 5-glycine interbridge between tetrapeptides in the case of Staphylococcus aureus (a gram-positive bacterium). Peptidoglycan is one of the most important sources of D-amino acids in nature. By enclosing the inner membrane, the peptidoglycan layer protects the cell from lysis caused by the turgor pressure of the cell. When the cell wall grows, it retains its shape throughout its life, so a rod shape will remain a rod shape, and a spherical shape will remain a spherical shape for life. This happens because the freshly added septal material of synthesis transforms into a hemispherical wall for the offspring cells. Cross-linking between amino acids in different linear amino sugar chains occurs with the help of the enzyme DD-transpeptidase and results in a 3-dimensional structure that is strong and rigid. 
The specific amino acid sequence and molecular structure vary with the bacterial species. The different peptidoglycan types of bacterial cell walls and their taxonomic implications have been described. Archaea (domain Archaea) do not contain peptidoglycan (murein). Some Archaea contain pseudopeptidoglycan (pseudomurein, see below). Peptidoglycan is involved in binary fission during bacterial cell reproduction. L-form bacteria and mycoplasmas, both lacking peptidoglycan cell walls, do not proliferate by binary fission, but by a budding mechanism. In the course of early evolution, the successive development of boundaries (membranes, walls) protecting first structures of life against their environment must have been essential for the formation of the first cells (cellularisation). The invention of rigid peptidoglycan (murein) cell walls in bacteria (domain Bacteria) was probably the prerequisite for their survival, extensive radiation and colonisation of virtually all habitats of the geosphere and hydrosphere. Biosynthesis The peptidoglycan monomers are synthesized in the cytosol and are then attached to a membrane carrier bactoprenol. Bactoprenol transports peptidoglycan monomers across the cell membrane where they are inserted into the existing peptidoglycan. In the first step of peptidoglycan synthesis, glutamine, which is an amino acid, donates an amino group to a sugar, fructose 6-phosphate. This reaction, catalyzed by EC 2.6.1.16 (GlmS), turns fructose 6-phosphate into glucosamine-6-phosphate. In step two, an acetyl group is transferred from acetyl CoA to the amino group on the glucosamine-6-phosphate creating N-acetyl-glucosamine-6-phosphate. This reaction is EC 5.4.2.10, catalyzed by GlmM. In step three of the synthesis process, the N-acetyl-glucosamine-6-phosphate is isomerized, which will change N-acetyl-glucosamine-6-phosphate to N-acetyl-glucosamine-1-phosphate. This is EC 2.3.1.157, catalyzed by GlmU. In step 4, the N-acetyl-glucosamine-1-phosphate, which is now a monophosphate, attacks UTP. Uridine triphosphate, which is a pyrimidine nucleotide, has the ability to act as an energy source. In this particular reaction, after the monophosphate has attacked the UTP, an inorganic pyrophosphate is given off and is replaced by the monophosphate, creating UDP-N-acetylglucosamine (2,4). (When UDP is used as an energy source, it gives off an inorganic phosphate.) This initial stage, is used to create the precursor for the NAG in peptidoglycan. This is EC 2.7.7.23, also catalyzed by GlmU, which is a bifunctional enzyme. In step 5, some of the UDP-N-acetylglucosamine (UDP-GlcNAc) is converted to UDP-MurNAc (UDP-N-acetylmuramic acid) by the addition of a lactyl group to the glucosamine. Also in this reaction, the C3 hydroxyl group will remove a phosphate from the alpha carbon of phosphoenolpyruvate. This creates what is called an enol derivative. EC 2.5.1.7, catalyzed by MurA. In step 6, the enol is reduced to a "lactyl moiety" by NADPH in step six. EC 1.3.1.98, catalyzed by MurB. In step 7, the UDP–MurNAc is converted to UDP-MurNAc pentapeptide by the addition of five amino acids, usually including the dipeptide D-alanyl-D-alanine. This is a string of three reactions: EC 6.3.2.8 by MurC, EC 6.3.2.9 by MurD, and EC 6.3.2.13 by MurE. Each of these reactions requires the energy source ATP. This is all referred to as Stage one. Stage two occurs in the cytoplasmic membrane. It is in the membrane where a lipid carrier called bactoprenol carries peptidoglycan precursors through the cell membrane. 
Undecaprenyl phosphate will attack the UDP-MurNAc penta, creating a PP-MurNAc penta, which is now a lipid (lipid I). EC 2.7.8.13 by MraY. UDP-GlcNAc is then transferred to the MurNAc, creating Lipid-PP-MurNAc penta-GlcNAc (lipid II), a disaccharide, also a precursor to peptidoglycan. EC 2.4.1.227 by MurG. Lipid II is transported across the membrane by flippase (MurJ), a discovery made in 2014 after decades of searching. Once it is there, it is added to the growing glycan chain by the enzyme peptidoglycan glycosyltransferase (GTase, EC 2.4.1.129). This reaction is known as transglycosylation. In the reaction, the hydroxyl group of the GlcNAc will attach to the MurNAc in the glycan, which will displace the lipid-PP from the glycan chain. In a final step, the DD-transpeptidase (TPase, EC 3.4.16.4) crosslinks individual glycan chains. This protein is also known as the penicillin-binding protein. Some versions of the enzyme also perform the glycosyltransferase function, while others leave the job to a separate enzyme. Pseudopeptidoglycan In some archaea, i.e. members of the Methanobacteriales and the genus Methanopyrus, pseudopeptidoglycan (pseudomurein) has been found. In pseudopeptidoglycan the sugar residues are β-(1,3) linked N-acetylglucosamine and N-acetyltalosaminuronic acid. This makes the cell walls of such archaea insensitive to lysozyme. The biosynthesis of pseudopeptidoglycan has been described. Recognition by immune system Peptidoglycan recognition is an evolutionarily conserved process. The overall structure is similar between bacterial species, but various modifications can increase the diversity. These include modifications of the length of sugar polymers, modifications in the sugar structures, variations in cross-linking or substitutions of amino acids (primarily at the third position). The aim of these modifications is to alter the properties of the cell wall, which plays a vital role in pathogenesis. Peptidoglycans can be degraded by several enzymes (lysozyme, glucosaminidase, endopeptidase...), producing immunostimulatory fragments (sometimes called muropeptides) that are critical for mediating host-pathogen interactions. These include MDP (muramyl dipeptide), NAG (N-acetylglucosamine) or iE-DAP (γ-d-glutamyl-meso-diaminopimelic acid). Peptidoglycan from intestinal bacteria (both pathogens and commensals) crosses the intestinal barrier even under physiological conditions. Mechanisms through which peptidoglycan or its fragments enter the host cells can be direct (carrier-independent) or indirect (carrier-dependent), and they are either bacteria-mediated (secretion systems, membrane vesicles) or host cell-mediated (receptor-mediated, peptide transporters). Bacterial secretion systems are protein complexes used for the delivery of virulence factors across the bacterial cell envelope to the exterior environment. Intracellular bacterial pathogens invade eukaryotic cells (which may lead to the formation of phagolysosomes and/or autophagy activation), or bacteria may be engulfed by phagocytes (macrophages, monocytes, neutrophils...). The bacteria-containing phagosome may then fuse with endosomes and lysosomes, leading to degradation of bacteria and generation of polymeric peptidoglycan fragments and muropeptides. Receptors The innate immune system senses intact peptidoglycan and peptidoglycan fragments using numerous PRRs (pattern recognition receptors) that are secreted, expressed intracellularly or expressed on the cell surface.
Peptidoglycan recognition proteins PGLYRPs are conserved from insects to mammals. Mammals produce four secreted soluble peptidoglycan recognition proteins (PGLYRP-1, PGLYRP-2, PGLYRP-3 and PGLYRP-4) that recognize muramyl pentapeptide or tetrapeptide. They can also bind to LPS and other molecules by using binding sites outside of the peptidoglycan-binding groove. After recognition of peptidoglycan, PGLYRPs activate polyphenol oxidase (PPO) molecules, Toll, or immune deficiency (IMD) signalling pathways. That leads to production of antimicrobial peptides (AMPs). Each of the mammalian PGLYRPs displays a unique tissue expression pattern. PGLYRP-1 is mainly expressed in the granules of neutrophils and eosinophils. PGLYRP-3 and 4 are expressed by several tissues such as skin, sweat glands, eyes or the intestinal tract. PGLYRP-1, 3 and 4 form disulphide-linked homodimers and heterodimers essential for their bactericidal activity. Their binding to bacterial cell wall peptidoglycans can induce bacterial cell death by interaction with various bacterial transcriptional regulatory proteins. PGLYRPs are likely to assist in bacterial killing by cooperating with other PRRs to enhance recognition of bacteria by phagocytes. PGLYRP-2 is primarily expressed by the liver and secreted into the circulation. Also, its expression can be induced in skin keratinocytes and in oral and intestinal epithelial cells. In contrast with the other PGLYRPs, PGLYRP-2 has no direct bactericidal activity. It possesses peptidoglycan amidase activity: it hydrolyses the lactyl-amide bond between the MurNAc and the first amino acid of the stem peptide of peptidoglycan. It is proposed that the function of PGLYRP-2 is to prevent over-activation of the immune system and inflammation-induced tissue damage in response to NOD2 ligands (see below), as these muropeptides can no longer be recognized by NOD2 upon separation of the peptide component from MurNAc. Growing evidence suggests that peptidoglycan recognition protein family members play a dominant role in the tolerance of intestinal epithelial cells toward the commensal microbiota. It has been demonstrated that expression of PGLYRP-2 and 4 can influence the composition of the intestinal microbiota. Recently, it has been discovered that PGLYRPs (and also NOD-like receptors and peptidoglycan transporters) are highly expressed in the developing mouse brain. PGLYRP-2 is highly expressed in neurons of several brain regions including the prefrontal cortex, hippocampus, and cerebellum, thus indicating potential direct effects of peptidoglycan on neurons. PGLYRP-2 is also highly expressed in the cerebral cortex of young children, but not in most adult cortical tissues. PGLYRP-1 is also expressed in the brain and continues to be expressed into adulthood. NOD-like receptors Probably the best-known receptors of peptidoglycan are the NOD-like receptors (NLRs), mainly NOD1 and NOD2. The NOD1 receptor is activated after iE-DAP (γ-d-glutamyl-meso-diaminopimelic acid) binding, while NOD2 recognizes MDP (muramyl dipeptide); both recognize their ligands through their LRR domains. Activation leads to self-oligomerization, resulting in activation of two signalling cascades. One triggers activation of NF-κB (through RIP2, TAK1 and IKK); the second leads to the MAPK signalling cascade. Activation of these pathways induces production of inflammatory cytokines and chemokines. NOD1 is expressed by diverse cell types, including myeloid phagocytes, epithelial cells and neurons.
NOD2 is expressed in monocytes and macrophages, intestinal epithelial cells, Paneth cells, dendritic cells, osteoblasts, keratinocytes and other epithelial cell types. As cytosolic sensors, NOD1 and NOD2 must either detect bacteria that enter the cytosol, or peptidoglycan must be degraded to generate fragments that are then transported into the cytosol for these sensors to function. Recently, it was demonstrated that NLRP3 is activated by peptidoglycan through a mechanism that is independent of NOD1 and NOD2. In macrophages, N-acetylglucosamine generated by peptidoglycan degradation was found to inhibit hexokinase activity and induce its release from the mitochondrial membrane. It promotes NLRP3 inflammasome activation through a mechanism triggered by increased mitochondrial membrane permeability. NLRP1 is also considered a cytoplasmic sensor of peptidoglycan. It can sense MDP and promote IL-1 secretion through binding NOD2. C-type lectin receptors (CLRs) C-type lectins are a diverse superfamily of mainly Ca2+-dependent proteins that bind a variety of carbohydrates (including the glycan skeleton of peptidoglycan) and function as innate immune receptors. CLR proteins that bind to peptidoglycan include MBL (mannose binding lectin), ficolins, Reg3A (regeneration gene family protein 3A) and PTCLec1. In mammals, they initiate the lectin pathway of the complement cascade. Toll-like receptors The role of TLRs in direct recognition of peptidoglycan is controversial. In some studies, it has been reported that peptidoglycan is sensed by TLR2. But this TLR2-inducing activity could be due to cell wall lipoproteins and lipoteichoic acids that commonly co-purify with peptidoglycan. Also, variation in peptidoglycan structure from species to species may contribute to the differing results on this topic. As vaccine or adjuvant Peptidoglycan is immunologically active: it can stimulate immune cells to increase the expression of cytokines and enhance the antibody-dependent specific response when combined with a vaccine or used alone as an adjuvant. MDP, which is the basic unit of peptidoglycan, was initially used as the active component of Freund's adjuvant. Peptidoglycan from Staphylococcus aureus was used as a vaccine to protect mice, showing that, 40 weeks after vaccine injection, the mice survived S. aureus challenge at an increased lethal dose. Inhibition and degradation Some antibacterial drugs such as penicillin interfere with the production of peptidoglycan by binding to bacterial enzymes known as penicillin-binding proteins or DD-transpeptidases. Penicillin-binding proteins form the bonds between oligopeptide crosslinks in peptidoglycan. For a bacterial cell to reproduce through binary fission, more than a million peptidoglycan subunits (NAM-NAG+oligopeptide) must be attached to existing subunits. Mutations in genes coding for transpeptidases that lead to reduced interactions with an antibiotic are a significant source of emerging antibiotic resistance. Since peptidoglycan is also lacking in L-form bacteria and in mycoplasmas, both are resistant to penicillin. Other steps of peptidoglycan synthesis can also be targeted. The topical antibiotic bacitracin targets the utilization of C55-isoprenyl pyrophosphate. Lantibiotics, which include the food preservative nisin, attack lipid II. Lysozyme, which is found in tears and constitutes part of the body's innate immune system, exerts its antibacterial effect by breaking the β-(1,4)-glycosidic bonds in peptidoglycan (see above).
Lysozyme is more effective in acting against gram-positive bacteria, in which the peptidoglycan cell wall is exposed, than against gram-negative bacteria, which have an outer layer of LPS covering the peptidoglycan layer. Several bacterial peptidoglycan modifications can result in resistance to degradation by lysozyme. Susceptibility of bacteria to degradation is also considerably affected by exposure to antibiotics. Exposed bacteria synthesize peptidoglycan that contains shorter sugar chains that are poorly crosslinked and this peptidoglycan is then more easily degraded by lysozyme. See also Undecaprenyl-diphosphatase References External links Diagrammatic representation of peptidoglycan structures. Structure of MurNAc 6-Phosphate Hydrolase (MurQ) from Haemophilus influenzae with a Bound Inhibitor. Membrane biology Glycobiology
Peptidoglycan
[ "Chemistry", "Biology" ]
4,698
[ "Biochemistry", "Glycobiology", "Membrane biology", "Molecular biology" ]
24,910
https://en.wikipedia.org/wiki/Product%20topology
In topology and related areas of mathematics, a product space is the Cartesian product of a family of topological spaces equipped with a natural topology called the product topology. This topology differs from another, perhaps more natural-seeming, topology called the box topology, which can also be given to a product space and which agrees with the product topology when the product is over only finitely many spaces. However, the product topology is "correct" in that it makes the product space a categorical product of its factors, whereas the box topology is too fine; in that sense the product topology is the natural topology on the Cartesian product. Definition Throughout, $I$ will be some non-empty index set and for every index $i \in I$ let $X_i$ be a topological space. Denote the Cartesian product of the sets $X_i$ by $X := \prod_{i \in I} X_i$ and for every index $i \in I$ denote the $i$-th canonical projection by $p_i : X \to X_i$. The product topology, sometimes called the Tychonoff topology, on $X$ is defined to be the coarsest topology (that is, the topology with the fewest open sets) for which all the projections $p_i$ are continuous. The Cartesian product $X := \prod_{i \in I} X_i$ endowed with the product topology is called the product space. The open sets in the product topology are arbitrary unions (finite or infinite) of sets of the form $\prod_{i \in I} U_i$, where each $U_i$ is open in $X_i$ and $U_i \neq X_i$ for only finitely many $i$. In particular, for a finite product (in particular, for the product of two topological spaces), the set of all Cartesian products between one basis element from each $X_i$ gives a basis for the product topology of $\prod_{i \in I} X_i$. That is, for a finite product, the set of all $\prod_{i \in I} B_i$, where $B_i$ is an element of the (chosen) basis of $X_i$, is a basis for the product topology of $\prod_{i \in I} X_i$. The product topology on $X$ is the topology generated by sets of the form $p_i^{-1}(U)$, where $i \in I$ and $U$ is an open subset of $X_i$. In other words, the sets $\{p_i^{-1}(U)\}$ form a subbase for the topology on $X$. A subset of $X$ is open if and only if it is a (possibly infinite) union of intersections of finitely many sets of the form $p_i^{-1}(U)$. The $p_i^{-1}(U)$ are sometimes called open cylinders, and their intersections are cylinder sets. The product topology is also called the topology of pointwise convergence because a sequence (or more generally, a net) in $X$ converges if and only if all its projections to the spaces $X_i$ converge. Explicitly, a sequence $x_\bullet = (x_n)_{n=1}^{\infty}$ (respectively, a net $x_\bullet = (x_a)_{a \in A}$) converges to a given point $x \in X$ if and only if $p_i(x_\bullet) \to p_i(x)$ in $X_i$ for every index $i \in I$, where $p_i(x_\bullet)$ denotes $(p_i(x_n))_{n=1}^{\infty}$ (respectively, denotes $(p_i(x_a))_{a \in A}$). In particular, if $X_i = \mathbb{R}$ is used for all $i$, then the Cartesian product is the space $\mathbb{R}^I$ of all real-valued functions on $I$, and convergence in the product topology is the same as pointwise convergence of functions. Examples If the real line $\mathbb{R}$ is endowed with its standard topology then the product topology on the product of $n$ copies of $\mathbb{R}$ is equal to the ordinary Euclidean topology on $\mathbb{R}^n$. (Because $n$ is finite, this is also equivalent to the box topology on $\mathbb{R}^n$.) The Cantor set is homeomorphic to the product of countably many copies of the discrete space $\{0, 1\}$ and the space of irrational numbers is homeomorphic to the product of countably many copies of the natural numbers, where again each copy carries the discrete topology. Several additional examples are given in the article on the initial topology. Properties The set of Cartesian products between the open sets of the topologies of each $X_i$ forms a basis for what is called the box topology on $X$. In general, the box topology is finer than the product topology, but for finite products they coincide.
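The difference between the two topologies in the infinite case can be made concrete with a standard example; the short derivation below is included as an illustration and follows directly from the definitions above.

```latex
% Standard example: the box topology is strictly finer than the product
% topology on an infinite product.
In the countable product $X = \prod_{n \in \mathbb{N}} \mathbb{R}$, consider
\[
  B = \prod_{n \in \mathbb{N}} (-1, 1).
\]
Each factor $(-1,1)$ is open in $\mathbb{R}$, so $B$ is open in the box topology.
However, $B$ is not open in the product topology: any basic product-open set
containing the origin has the form
\[
  U = \prod_{n \in \mathbb{N}} U_n
  \quad\text{with}\quad
  U_n = \mathbb{R} \ \text{for all but finitely many } n,
\]
and such a set contains points with arbitrarily large coordinates in the
unrestricted factors, so $U \not\subseteq B$.
```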
The product space $X$, together with the canonical projections, can be characterized by the following universal property: if $Y$ is a topological space, and for every $i \in I$, $f_i : Y \to X_i$ is a continuous map, then there exists exactly one continuous map $f : Y \to X$ such that for each $i \in I$ the following diagram commutes, that is, $p_i \circ f = f_i$. This shows that the product space is a product in the category of topological spaces. It follows from the above universal property that a map $f : Y \to X$ is continuous if and only if $f_i = p_i \circ f$ is continuous for all $i \in I$. In many cases it is easier to check that the component functions $f_i$ are continuous. Checking whether a map $f : Y \to X$ is continuous is usually more difficult; one tries to use the fact that the $p_i$ are continuous in some way. In addition to being continuous, the canonical projections $p_i : X \to X_i$ are open maps. This means that any open subset of the product space remains open when projected down to the $X_i$. The converse is not true: if $W$ is a subspace of the product space whose projections down to all the $X_i$ are open, then $W$ need not be open in $X$ (consider for instance $W = \mathbb{R}^2 \setminus (0, 1)^2$). The canonical projections are not generally closed maps (consider for example the closed set $\{(x, y) \in \mathbb{R}^2 : xy = 1\}$, whose projections onto both axes are $\mathbb{R} \setminus \{0\}$). Suppose $\prod_{i \in I} S_i$ is a product of arbitrary subsets, where $S_i \subseteq X_i$ for every $i \in I$. If all $S_i$ are non-empty, then $\prod_{i \in I} S_i$ is a closed subset of the product space $X$ if and only if every $S_i$ is a closed subset of $X_i$. More generally, the closure of the product of arbitrary subsets in the product space is equal to the product of the closures: $\overline{\prod_{i \in I} S_i} = \prod_{i \in I} \overline{S_i}$. Any product of Hausdorff spaces is again a Hausdorff space. Tychonoff's theorem, which is equivalent to the axiom of choice, states that any product of compact spaces is a compact space. A specialization of Tychonoff's theorem that requires only the ultrafilter lemma (and not the full strength of the axiom of choice) states that any product of compact Hausdorff spaces is a compact space. If $z \in X$ is fixed then the set $\{x \in X : x_i = z_i \text{ for all but finitely many } i\}$ is a dense subset of the product space $X$. Relation to other topological notions Separation Every product of T0 spaces is T0. Every product of T1 spaces is T1. Every product of Hausdorff spaces is Hausdorff. Every product of regular spaces is regular. Every product of Tychonoff spaces is Tychonoff. A product of normal spaces need not be normal. Compactness Every product of compact spaces is compact (Tychonoff's theorem). A product of locally compact spaces need not be locally compact. However, an arbitrary product of locally compact spaces where all but finitely many are compact is locally compact (this condition is sufficient and necessary). Connectedness Every product of connected (resp. path-connected) spaces is connected (resp. path-connected). Every product of hereditarily disconnected spaces is hereditarily disconnected. Metric spaces Countable products of metric spaces are metrizable spaces. Axiom of choice One of many ways to express the axiom of choice is to say that it is equivalent to the statement that the Cartesian product of a collection of non-empty sets is non-empty. The proof that this is equivalent to the statement of the axiom in terms of choice functions is immediate: one needs only to pick an element from each set to find a representative in the product. Conversely, a representative of the product is a set which contains exactly one element from each component.
The axiom of choice occurs again in the study of (topological) product spaces; for example, Tychonoff's theorem on compact sets is a more complex and subtle example of a statement that requires the axiom of choice and is equivalent to it in its most general formulation, and shows why the product topology may be considered the more useful topology to put on a Cartesian product. See also - Sometimes called the projective limit topology Notes References General topology Operations on structures
Product topology
[ "Mathematics" ]
1,401
[ "General topology", "Topology" ]
11,404,605
https://en.wikipedia.org/wiki/Society%20of%20Professional%20Audio%20Recording%20Services
The Society of Professional Audio Recording Services (SPARS) is an organization that holds conferences and publishes papers about the professional audio community. Its members include many of the top audio engineers working in the industry today. SPARS was founded in 1979 as the Society of Professional Audio Recording Studios by the heads of eleven leading U.S. recording facilities. Among the co-founders were Mack Emerman of Criteria Studios, Chris Stone of Record Plant Studios, Joe Tarsia of Sigma Sound Studios, Howard Schwartz and Bob Liftin of Regent Sound, and Murray Allen of Universal Recording Corporation. SPARS developed the SPARS Code, which was common on the back of CD covers from the late eighties to the mid nineties. It specified whether analogue or digital recording media were used in each stage of the production (recording, mixing, mastering). References External links SPARS Website Audio engineering Film and video technology Music industry associations
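The SPARS Code described above is compact enough to decode mechanically. The sketch below is a minimal illustration, assuming the common convention that the three letters correspond to the recording, mixing and mastering stages and that 'A' stands for analogue and 'D' for digital; the function and example codes are illustrative and not taken from SPARS documentation.

```python
# Minimal sketch of decoding a three-letter SPARS code (e.g. "AAD", "DDD"),
# one letter per stage: recording, mixing, mastering; A = analogue, D = digital.

STAGES = ("recording", "mixing", "mastering")
MEDIUM = {"A": "analogue", "D": "digital"}

def decode_spars(code: str) -> dict:
    """Map a SPARS code to the medium used at each stage."""
    code = code.strip().upper()
    if len(code) != 3 or any(c not in MEDIUM for c in code):
        raise ValueError(f"not a valid SPARS code: {code!r}")
    return {stage: MEDIUM[c] for stage, c in zip(STAGES, code)}

print(decode_spars("AAD"))
# {'recording': 'analogue', 'mixing': 'analogue', 'mastering': 'digital'}
```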
Society of Professional Audio Recording Services
[ "Engineering" ]
183
[ "Electrical engineering", "Audio engineering" ]
11,406,702
https://en.wikipedia.org/wiki/Marker-assisted%20selection
Marker assisted selection or marker aided selection (MAS) is an indirect selection process where a trait of interest is selected based on a marker (morphological, biochemical or DNA/RNA variation) linked to a trait of interest (e.g. productivity, disease resistance, abiotic stress tolerance, and quality), rather than on the trait itself. This process has been extensively researched and proposed for plant and animal breeding. For example, using MAS to select individuals with disease resistance involves identifying a marker allele that is linked with disease resistance rather than the level of disease resistance. The assumption is that the marker associates at high frequency with the gene or quantitative trait locus (QTL) of interest, due to genetic linkage (close proximity, on the chromosome, of the marker locus and the disease resistance-determining locus). MAS can be useful to select for traits that are difficult or expensive to measure, exhibit low heritability and/or are expressed late in development. At certain points in the breeding process the specimens are examined to ensure that they express the desired trait. Marker types The majority of MAS work in the present era uses DNA-based markers. However, the first markers that allowed indirect selection of a trait of interest were morphological markers. In 1923, Karl Sax first reported association of a simply inherited genetic marker with a quantitative trait in plants when he observed segregation of seed size associated with segregation for a seed coat color marker in beans (Phaseolus vulgaris L.). In 1935, J. Rasmusson demonstrated linkage of flowering time (a quantitative trait) in peas with a simply inherited gene for flower color. Markers may be: Morphological These were the first marker loci available that have an obvious impact on the morphology of plants. These markers are often detectable by eye, by simple visual inspection. Examples of this type of marker include the presence or absence of an awn, leaf sheath coloration, height, grain color, aroma of rice etc. In well-characterized crops like maize, tomato, pea, barley or wheat, tens or hundreds of genes that determine morphological traits have been mapped to specific chromosome locations. Biochemical A protein that can be extracted and observed; for example, isozymes and storage proteins. Cytological Cytological markers are chromosomal features that can be identified through microscopy. These generally take the form of chromosome bands, regions of chromatin that become impregnated with specific dyes used in cytology. The presence or absence of a chromosome band can be correlated with a particular trait, indicating that the locus responsible for the trait is located within or near (tightly linked to) the banded region. Morphological and cytological markers formed the backbone of early genetic studies in crops such as wheat and maize. DNA-based Including microsatellites (also known as short tandem repeats, STRs, or simple sequence repeats, SSRs), restriction fragment length polymorphism (RFLP), random amplification of polymorphic DNA (RAPD), amplified fragment length polymorphism (AFLP), and single nucleotide polymorphisms (SNPs). Positive and negative selectable markers The following terms are generally less relevant to discussions of MAS in plant and animal breeding, but are highly relevant in molecular biology research: Positive selectable markers are selectable markers that confer selective advantage to the host organism.
An example would be antibiotic resistance, which allows the host organism to survive antibiotic selection. Negative selectable markers are selectable markers that eliminate or inhibit growth of the host organism upon selection. An example would be thymidine kinase, which makes the host sensitive to ganciclovir selection. A distinction can be made between selectable markers (which eliminate certain genotypes from the population) and screenable markers (which cause certain genotypes to be readily identifiable, at which point the experimenter must "score" or evaluate the population and act to retain the preferred genotypes). Most MAS uses screenable markers rather than selectable markers. Gene vs marker The gene of interest directly causes production of protein(s) or RNA that produce a desired trait or phenotype, whereas markers (a DNA sequence or the morphological or biochemical markers produced due to that DNA) are genetically linked to the gene of interest. The gene of interest and the marker tend to move together during segregation of gametes due to their proximity on the same chromosome and concomitant reduction in recombination (chromosome crossover events) between the marker and gene of interest. For some traits, the gene of interest has been discovered and the presence of desirable alleles can be directly assayed with a high level of confidence. However, if the gene of interest is not known, markers linked to the gene of interest can still be used to select for individuals with desirable alleles of the gene of interest. When markers are used there may be some inaccurate results due to inaccurate tests for the marker. There also can be false positive results when markers are used, due to recombination between the marker of interest and gene (or QTL). A perfect marker would elicit no false positive results. The term 'perfect marker' is sometimes used when tests are performed to detect a SNP or other DNA polymorphism in the gene of interest, if that SNP or other polymorphism is the direct cause of the trait of interest. The term 'marker' is still appropriate to use when directly assaying the gene of interest, because the test of genotype is an indirect test of the trait or phenotype of interest. Important properties of ideal markers for MAS An ideal marker: Has easy recognition of phenotypes - ideally all possible phenotypes (homo- and heterozygotes) from all possible alleles Demonstrates measurable differences in expression between trait types or gene of interest alleles, early in the development of the organism Testing for the marker does not have variable success depending on the allele at the marker locus or the allele at the target locus (the gene of interest that determines the trait of interest). 
Low or null interaction among the markers allowing the use of many at the same time in a segregating population Abundant in number Polymorphic Drawbacks of morphological markers Morphological markers are associated with several general deficits that reduce their usefulness including: the delay of marker expression until late into the development of the organism allowing dominance to mask the underlying genetics pleiotropy, which does not allow easy and parsimonious inferences to be drawn from one gene to one trait confounding effects of genes unrelated to the gene or trait of interest but which also affect the morphological marker (epistasis) frequent confounding effects of environmental factors which affect the morphological characteristics of the organism To avoid problems specific to morphological markers, DNA-based markers have been developed. They are highly polymorphic, exhibit simple inheritance (often codominant), are abundant throughout the genome, are easy and fast to detect, exhibit minimum pleiotropic effects, and detection is not dependent on the developmental stage of the organism. Numerous markers have been mapped to different chromosomes in several crops including rice, wheat, maize, soybean and several others, and in livestock such as cattle, pigs and chickens. Those markers have been used in diversity analysis, parentage detection, DNA fingerprinting, and prediction of hybrid performance. Molecular markers are useful in indirect selection processes, enabling manual selection of individuals for further propagation. Selection for major genes linked to markers 'Major genes' that are responsible for economically important characteristics are frequent in the plant kingdom. Such characteristics include disease resistance, male sterility, self-incompatibility, and others related to shape, color, and architecture of whole plants and are often of mono- or oligogenic in nature. The marker loci that are tightly linked to major genes can be used for selection and are sometimes more efficient than direct selection for the target gene. Such advantages in efficiency may be due for example, to higher expression of the marker mRNA in such cases that the marker is itself a gene. Alternatively, in such cases that the target gene of interest differs between two alleles by a difficult-to-detect single nucleotide polymorphism, an external marker (be it another gene or a polymorphism that is easier to detect, such as a short tandem repeat) may present as the most realistic option. Situations that are favorable for molecular marker selection There are several indications for the use of molecular markers in the selection of a genetic trait. Situations such as: The selected character is expressed late in plant development, like fruit and flower features or adult characters with a juvenile period (so that it is not necessary to wait for the organism to become fully developed before arrangements can be made for propagation) The expression of the target gene is recessive (so that individuals which are heterozygous positive for the recessive allele can be crossed to produce some homozygous offspring with the desired trait) There are special conditions for expression of the target gene(s), as in the case of breeding for disease and pest resistance (where inoculation with the disease or subjection to pests would otherwise be required). Sometimes inoculation methods are unreliable and sometimes field inoculation with the pathogen is not even allowed for safety reasons. 
Moreover, sometimes expression is dependent on environmental conditions. The phenotype is affected by two or more unlinked genes (epistasis), for example in selection for multiple genes which provide resistance against diseases or insect pests for gene pyramiding. The cost of genotyping (for example, the molecular marker assays needed here) is decreasing as the technology develops, which increases the attractiveness of MAS. (Additionally, phenotyping performed by a human carries a labor cost, which is higher in developed countries and rising in developing countries.) Steps for MAS Generally, the first step is to map the gene or quantitative trait locus (QTL) of interest using different techniques, and then to use this information for marker-assisted selection. Generally, the markers to be used should be close to the gene of interest (less than 5 recombination units, or centimorgans (cM)) in order to ensure that only a minor fraction of the selected individuals will be recombinants. Often, not just a single marker but two markers are used, in order to reduce the chance of error due to homologous recombination. For example, if two flanking markers are used at the same time, with an interval between them of approximately 20 cM, there is a high probability (about 99%) of recovering the target gene. QTL mapping techniques In plants, QTL mapping is generally achieved using bi-parental cross populations: a cross between two parents which have contrasting phenotypes for the trait of interest is developed. Commonly used populations are near isogenic lines (NILs), recombinant inbred lines (RILs), doubled haploids (DH), backcross and F2 populations. Linkage between the phenotype and markers which have already been mapped is tested in these populations in order to determine the position of the QTL. Such techniques are based on linkage and are therefore referred to as "linkage mapping". Single-step MAS and QTL mapping In contrast to two-step QTL mapping and MAS, a single-step method for breeding typical plant populations has been developed. In such an approach, in the first few breeding cycles, markers linked to the trait of interest are identified by QTL mapping and the same information is later used in the same population. In this approach, a pedigree structure is created from families that are produced by crossing a number of parents (in three-way or four-way crosses). Both phenotyping and genotyping are carried out, using molecular markers mapped to the possible locations of the QTL of interest. This identifies markers and their favorable alleles. Once these favorable marker alleles are identified, the frequency of such alleles is increased and the response to marker-assisted selection is estimated. Marker allele(s) with desirable effects are then used in the next selection cycle or in other experiments. High-throughput genotyping techniques Recently, high-throughput genotyping techniques have been developed which allow marker-aided screening of many genotypes. This helps breeders shift from traditional breeding to marker-aided selection. One example of such automation is the use of DNA isolation robots, capillary electrophoresis and pipetting robots. One example of a capillary system is the Applied Biosystems 3130 Genetic Analyzer, a 4-capillary electrophoresis instrument aimed at low- to medium-throughput laboratories. High-throughput MAS is needed for crop breeding because current techniques are not cost-effective. 
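The roughly 99% figure quoted above for two flanking markers spanning about 20 cM can be illustrated with a short calculation. The sketch below is not from the article and rests on stated assumptions: the gene sits roughly midway between the markers (about 10 cM to each side), there is no crossover interference, and Haldane's mapping function is used to convert map distance into a recombination fraction; the function names are hypothetical.

import math

def recombination_fraction(distance_cm):
    # Haldane mapping function: map distance in centimorgans -> recombination fraction
    d_morgans = distance_cm / 100.0
    return 0.5 * (1.0 - math.exp(-2.0 * d_morgans))

def recovery_probability(left_cm, right_cm):
    # With both flanking markers selected, the target gene is lost only through
    # a double recombinant (one crossover on each side of the gene).
    r1 = recombination_fraction(left_cm)
    r2 = recombination_fraction(right_cm)
    return 1.0 - r1 * r2

print(round(recovery_probability(10, 10), 3))  # about 0.992, consistent with the ~99% quoted above

Under the same assumptions, a single marker 10 cM from the gene would give only about 91% confidence, which is why flanking markers are preferred when they are available.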
Arrays have been developed for rice by Masouleh et al 2009; wheat by Berard et al 2009, Bernardo et al 2015, and Rasheed et al 2016; legumes by Varshney et al 2016; and various other crops, but all of these also have problems with customization, flexibility, and cost, including the cost of equipment. Use of MAS for backcross breeding A minimum of five or six backcross generations is required to transfer a gene of interest from a donor (which may not be adapted) to a recipient (the recurrent, adapted cultivar). The recovery of the recurrent genotype can be accelerated with the use of molecular markers. If the F1 is heterozygous for the marker locus, individuals with the recurrent parent allele(s) at the marker locus in the first or subsequent backcross generations will also carry a chromosome tagged by the marker. Marker-assisted gene pyramiding Gene pyramiding has been proposed and applied to enhance resistance to diseases and insects by selecting for two or more genes at a time. For example, in rice such pyramids have been developed against bacterial blight and blast. The advantage of using markers in this case is that it allows selection for QTL-allele-linked markers that have the same phenotypic effect. MAS has also proved useful for livestock improvement. A coordinated effort to implement marker-assisted selection in U.S. wheat (durum wheat (Triticum turgidum) and common wheat (Triticum aestivum)), as well as a resource for marker-assisted selection, exists at the Wheat CAP (Coordinated Agricultural Project) website. See also Association mapping Family based QTL mapping Genomics of domestication History of plant breeding Molecular breeding Nested association mapping QTL mapping Selection methods in plant breeding based on mode of reproduction Smart breeding References Further reading review application of MAS in crop improvement Plant Breeding and Genomics Genetics Plant breeding
Marker-assisted selection
[ "Chemistry", "Biology" ]
3,002
[ "Plant breeding", "Genetics", "Molecular biology" ]
11,408,570
https://en.wikipedia.org/wiki/Latent%20extinction%20risk
In conservation biology, latent extinction risk is a measure of the potential for a species to become threatened. Latent risk can most easily be described as the difference, or discrepancy, between the current observed extinction risk of a species (typically as quantified by the IUCN Red List) and the theoretical extinction risk of the species predicted from its biological or life history characteristics. Calculation Because latent risk is the discrepancy between current and predicted risks, estimates of both of these values are required (see population modeling and population dynamics). Once these values are known, the latent extinction risk can be calculated as Predicted Risk - Current Risk = Latent Extinction Risk. When the latent extinction risk is a positive value, it indicates that a species is currently less threatened than its biology would suggest it ought to be. For example, a species may have several of the characteristics often found in threatened species, such as large body size, small geographic distribution, or low reproductive rate, but still be rated as "least concern" in the IUCN Red List. This may be because it has not yet been exposed to serious threatening processes such as habitat degradation. Conversely, negative values of latent risk indicate that a species is already more threatened than its biology would indicate, probably because it inhabits a part of the world where it has been exposed to extreme endangering processes. Species with strongly negative values are usually listed as endangered species and have associated recovery and conservation plans. Limits One of the issues associated with latent extinction risk is that it is difficult to calculate, because data for predicting extinction risk across large numbers of species are of limited availability. Hence, the only study of latent risk to date has focused on mammals, which are one of the best-studied groups of organisms. Effects on conservation A study of latent extinction risk in mammals identified a number of "hotspots" where the average value of latent risk for mammal species was unusually high. This study suggested that these areas represented an opportunity for proactive conservation efforts, because they could become the "future battlegrounds of mammal conservation" if levels of human impact increase. Unexpectedly, the hotspots of mammal latent risk include large areas of Arctic America, where overall mammal diversity is not high, but where many species have the kind of biological traits (such as large body size and slow reproductive rate) that could render them extinction-prone. Another notable region of high latent risk for mammals is the island chain of Indonesia and Melanesia, where there are large numbers of restricted-range endemic species. Because it is much more cost-effective to prevent species declines before they happen than to attempt to rescue species from the brink of extinction, latent risk hotspots could form part of a global scheme to prioritize areas for conservation effort, together with other kinds of priority areas such as biodiversity hotspots. References Ecological metrics Extinction Environmental conservation
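As an illustration of the subtraction above, consider risks expressed on a common numeric scale. The fragment below is not from the cited study; the ordinal coding of Red List categories and the example numbers are assumptions chosen only to show how the sign of the result is interpreted.

# Illustrative only: latent risk = predicted risk - current risk.
# The 0-4 ordinal coding of IUCN categories below is an assumption for this sketch.
RED_LIST_SCALE = {"LC": 0, "NT": 1, "VU": 2, "EN": 3, "CR": 4}

def latent_risk(predicted_risk, current_category):
    # Positive: less threatened today than its biology predicts (high latent risk).
    # Negative: already more threatened than its biology alone would suggest.
    return predicted_risk - RED_LIST_SCALE[current_category]

print(latent_risk(2.5, "LC"))  # 2.5 -> high latent risk despite a "least concern" listing
print(latent_risk(1.5, "EN"))  # -1.5 -> already more threatened than predicted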
Latent extinction risk
[ "Mathematics" ]
590
[ "Ecological metrics", "Quantity", "Metrics" ]
16,793,276
https://en.wikipedia.org/wiki/Lift-off%20%28microtechnology%29
The lift-off process in microstructuring technology is a method of creating structures (patterning) of a target material on the surface of a substrate (e.g. wafer) using a sacrificial material (e.g. photoresist). It is an additive technique, as opposed to more traditional subtractive techniques such as etching. The scale of the structures can vary from the nanoscale up to the centimeter scale or beyond, but the structures are typically of micrometric dimensions. Process An inverse pattern is first created in the sacrificial stencil layer (e.g. photoresist), which is deposited on the surface of the substrate. This is done by etching openings through the layer so that the target material can reach the surface of the substrate in those regions where the final pattern is to be created. The target material is deposited over the whole area of the wafer, reaching the surface of the substrate in the etched regions and staying on top of the sacrificial layer in the regions where it was not previously etched. When the sacrificial layer is washed away (photoresist in a solvent), the material on top is lifted off and washed away together with the sacrificial layer below. After the lift-off, the target material remains only in the regions where it had direct contact with the substrate. Substrate is prepared Sacrificial layer is deposited and an inverse pattern is created (e.g. photoresist is exposed and developed; depending on the resist, various methods can be used, such as extreme ultraviolet lithography (EUVL) or electron beam lithography (EBL). The photoresist is removed in the areas where the target material is to be located, creating an inverse pattern.) Target material (usually a thin metal layer) is deposited on the whole surface of the wafer. This layer covers the remaining resist as well as the parts of the wafer that were cleared of resist in the previous developing step. The rest of the sacrificial material (e.g. photoresist) is washed out together with the parts of the target material covering it; only the material that was in the "holes", in direct contact with the underlying layer (substrate/wafer), stays. Advantages Lift-off is applied in cases where a direct etching of the structural material would have undesirable effects on the layer below. Lift-off is a cheap alternative to etching in a research context, which permits a slower turn-around time. Finally, lifting off a material is an option if there is no access to an etching tool with the appropriate gases. Disadvantages There are three major problems with lift-off: Retention This is the worst problem for lift-off processes. If it occurs, unwanted parts of the metal layer remain on the wafer. This can be caused by different situations. The resist below the parts that should have been lifted off may not have dissolved properly. It is also possible that the metal has adhered so well to the parts that should remain that it prevents lift-off. Ears When the deposited metal covers the sidewalls of the resist, "ears" can be formed. These are made of the metal along the sidewall and stand upwards from the surface. It is also possible that these ears fall over onto the surface, causing an unwanted shape on the substrate. If the ears remain on the surface, there is a risk that they will protrude through subsequent layers put on top of the wafer and cause unwanted connections. Redeposition During the lift-off process, it is possible that particles of metal become reattached to the surface at a random location. 
It is very difficult to remove these particles after the wafer has dried. Use The lift-off process is used mostly to create metallic interconnections. There are several types of lift-off processes, and what can be achieved depends highly on the actual process being used. Very fine structures have been achieved using EBL, for instance. The lift-off process can also involve multiple layers of different types of resist. This can, for instance, be used to create shapes that prevent the sidewalls of the resist from being covered during the metal deposition stage. External links https://www.mems-exchange.org/catalog/lift_off/ Microtechnology Lift Off Process
Lift-off (microtechnology)
[ "Materials_science", "Engineering" ]
902
[ "Semiconductor device fabrication", "Materials science", "Microtechnology" ]
16,794,181
https://en.wikipedia.org/wiki/Chloride%20Group
Chloride is a global company that specializes in the design, production, and maintenance of industrial uninterruptible power supply (UPS) systems to ensure a reliable power supply for critical equipment across multiple industries. Formerly listed on the London Stock Exchange and a constituent of the FTSE 250 index, the company has been privately owned since 2021. History Chloride Group was founded in 1891 as The Chloride Electrical Syndicate Limited to manufacture batteries. Brand names used included Ajax, Exide, Dagenite, Kathanode, Shednought and Tudor. In the 1970s, under its then managing director Sir Michael Edwardes, it showcased the UK's first battery-powered buses. In 1999, it diversified into secure power systems, acquiring Oneac in the US, BOAR SA in Spain and Hytek in Australia. In 2000, it acquired the power protection division of Siemens in Germany, and in 2001 it acquired Continuous Power International, followed in 2005 by Harath Engineering Services in the UK. In 2007, it acquired AST Electronique Services, a similar business in France. In July 2009, the company announced the acquisition of a 90% stake in India's leading uninterruptible power supply company, DB Power Electronics. In September 2010, Chloride Group was fully acquired by Emerson Electric (joining the Emerson Network Power platform) of the United States for US$1.5 billion. In 2016, Emerson Network Power was acquired by Platinum Equity for US$4 billion. The business was rebranded under the name Vertiv, launching as a stand-alone business. In 2021, Chloride became an independent, privately held company as a result of the buy-out of the business division of Vertiv by its management team, supported by the private investment fund Innovafonds and the sovereign bank Bpifrance. The scope of the transaction comprised all industrial business activity globally, including the manufacturing site in France, all patents and intellectual property, and the registered trademarks Chloride and AEES, as well as several regional assets. The product portfolio of industrial AC and DC UPS systems, as well as the safety lighting portfolio, was transferred in its entirety. On completion of this transaction, the new group, with pro-forma sales of $90 million in 2021, restored its historical name, Chloride. References External links Company website Technology companies established in 1891 1891 establishments in England Electrical equipment manufacturers Privately held companies of France Companies formerly listed on the London Stock Exchange 2010 mergers and acquisitions 2021 mergers and acquisitions
Chloride Group
[ "Engineering" ]
487
[ "Electrical engineering organizations", "Electrical equipment manufacturers" ]
16,794,275
https://en.wikipedia.org/wiki/Hawking%20energy
The Hawking energy or Hawking mass is one of the possible definitions of mass in general relativity. It is a measure of the bending of ingoing and outgoing rays of light that are orthogonal to a 2-sphere surrounding the region of space whose mass is to be defined. Definition Let $(\mathcal{M}, g)$ be a 3-dimensional sub-manifold of a relativistic spacetime, and let $\Sigma \subset \mathcal{M}$ be a closed 2-surface. Then the Hawking mass of $\Sigma$ is defined to be $m_H(\Sigma) := \sqrt{\dfrac{\operatorname{Area}(\Sigma)}{16\pi}}\left(1 - \dfrac{1}{16\pi}\int_\Sigma H^2 \, d\sigma\right)$, where $H$ is the mean curvature of $\Sigma$. Properties In the Schwarzschild metric, the Hawking mass of any sphere about the central mass is equal to the value of the central mass. A result of Geroch implies that the Hawking mass satisfies an important monotonicity condition. Namely, if $\mathcal{M}$ has nonnegative scalar curvature, then the Hawking mass of $\Sigma$ is non-decreasing as the surface flows outward at a speed equal to the inverse of the mean curvature. In particular, if $\Sigma_t$ is a family of connected surfaces evolving according to $\dfrac{dx}{dt} = \dfrac{1}{H}\,\nu(x)$, where $H$ is the mean curvature of $\Sigma_t$ and $\nu$ is the unit vector opposite of the mean curvature direction, then $\dfrac{d}{dt}\, m_H(\Sigma_t) \geq 0$. Said otherwise, the Hawking mass is increasing for the inverse mean curvature flow. The Hawking mass is not necessarily positive. However, it is asymptotic to the ADM or the Bondi mass, depending on whether the surface is asymptotic to spatial infinity or null infinity. See also Mass in general relativity Inverse mean curvature flow References Further reading Section 6.1 in General relativity
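The Schwarzschild property stated above can be checked with a short calculation. The following worked example is a sketch added for illustration, not part of the original article; it assumes the standard constant-time Schwarzschild slice with spatial metric $ds^2 = (1 - 2M/r)^{-1}\,dr^2 + r^2\,d\Omega^2$. For the coordinate sphere $\Sigma_r = \{r = \text{const}\}$ one has $\operatorname{Area}(\Sigma_r) = 4\pi r^2$ and mean curvature $H = \tfrac{2}{r}\sqrt{1 - 2M/r}$, so $\int_{\Sigma_r} H^2 \, d\sigma = \tfrac{4}{r^2}\left(1 - \tfrac{2M}{r}\right)\cdot 4\pi r^2 = 16\pi\left(1 - \tfrac{2M}{r}\right)$. Substituting into the definition gives $m_H(\Sigma_r) = \sqrt{\tfrac{4\pi r^2}{16\pi}}\left(1 - \left(1 - \tfrac{2M}{r}\right)\right) = \tfrac{r}{2}\cdot\tfrac{2M}{r} = M$, so every centered coordinate sphere recovers the central mass, as claimed.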
Hawking energy
[ "Physics" ]
302
[ "General relativity", "Theory of relativity" ]
16,795,124
https://en.wikipedia.org/wiki/Classical%20Mechanics%20%28Goldstein%29
Classical Mechanics is a textbook written by Herbert Goldstein, a professor at Columbia University. Intended for advanced undergraduate and beginning graduate students, it has been one of the standard references on its subject around the world since its first publication in 1950. Overview In the second edition, Goldstein corrected all the errors that had been pointed out, added a new chapter on perturbation theory, a new section on Bertrand's theorem, and another on Noether's theorem. Other arguments and proofs were simplified and supplemented. Before the death of its primary author in 2005, a new (third) edition of the book was released, with the collaboration of Charles P. Poole and John L. Safko from the University of South Carolina. In the third edition, the book discusses at length various mathematically sophisticated reformulations of Newtonian mechanics, namely analytical mechanics, as applied to particles, rigid bodies and continua. In addition, it covers in some detail classical electromagnetism, special relativity, and field theory, both classical and relativistic. There is an appendix on group theory. New to the third edition are a chapter on nonlinear dynamics and chaos, a section on the exact solutions to the three-body problem obtained by Euler and Lagrange, and a discussion of the damped driven pendulum that explains Josephson junctions. This is counterbalanced by the reduction of several existing chapters, motivated by the desire to prevent this edition from exceeding the previous one in length. For example, the discussions of Hermitian and unitary matrices were omitted because they are more relevant to quantum mechanics than to classical mechanics, while those of Routh's procedure and time-independent perturbation theory were reduced. Table of Contents (3rd Edition) Preface Chapter 1: Survey of Elementary Principles Chapter 2: Variational Principles and Lagrange's Equations Chapter 3: The Central Force Problem Chapter 4: The Kinematics of Rigid Body Motion Chapter 5: The Rigid Body Equations of Motion Chapter 6: Oscillations Chapter 7: The Classical Mechanics of the Special Theory of Relativity Chapter 8: The Hamilton Equations of Motion Chapter 9: Canonical Transformations Chapter 10: Hamilton–Jacobi Theory and Action-Angle Coordinates Chapter 11: Classical Chaos Chapter 12: Canonical Perturbation Theory Chapter 13: Introduction to the Lagrangian and Hamiltonian Formulations for Continuous Systems and Fields Appendix A: Euler Angles in Alternate Conventions and Cayley–Klein Parameters Appendix B: Groups and Algebras Appendix C: Solutions to Select Exercises Select Bibliography Author Index Subject Index Editions Reception First edition S.L. Quimby of Columbia University noted that the first half of the first edition of the book is dedicated to the development of Lagrangian mechanics with the treatment of velocity-dependent potentials, which are important in electromagnetism, and the use of the Cayley-Klein parameters and matrix algebra for rigid-body dynamics. This is followed by a comprehensive and clear discussion of Hamiltonian mechanics. End-of-chapter references improve the value of the book. Quimby pointed out that although this book is suitable for students preparing for quantum mechanics, it is not helpful for those interested in analytical mechanics because its treatment omits too much. Quimby praised the quality of printing and binding which make the book attractive. 
In the Journal of the Franklin Institute, Rupen Eskergian noted that the first edition of Classical Mechanics offers a mature take on the subject using vector and tensor notations and with a welcome emphasis on variational methods. This book begins with a review of elementary concepts, then introduces the principle of virtual work, constraints, generalized coordinates, and Lagrangian mechanics. Scattering is treated in the same chapter as central forces and the two-body problem. Unlike most other books on mechanics, this one elaborates upon the virial theorem. The discussion of canonical and contact transformations, the Hamilton-Jacobi theory, and action-angle coordinates is followed by a presentation of geometric optics and wave mechanics. Eskergian believed this book serves as a bridge to modern physics. Writing for The Mathematical Gazette on the first edition, L. Rosenhead congratulated Goldstein for a lucid account of classical mechanics leading to modern theoretical physics, which he believed would stand the test of time alongside acknowledged classics such as E.T. Whittaker's Analytical Dynamics and Arnold Sommerfeld's Lectures on Theoretical Physics. This book is self-contained and is suitable for students who have completed courses in mathematics and physics of the first two years of university. End-of-chapter references with comments and some example problems enhance the book. Rosenhead also liked the diagrams, index, and printing. Concerning the second printing of the first edition, Vic Twersky of the Mathematical Research Group at New York University considered the book to be of pedagogical merit because it explains things in a clear and simple manner, and its humor is not forced. Published in the 1950s, this book replaced the outdated and fragmented treatises and supplements typically assigned to beginning graduate students as a modern text on classical mechanics with exercises and examples demonstrating the link between this and other branches of physics, including acoustics, electrodynamics, thermodynamics, geometric optics, and quantum mechanics. It also has a chapter on the mechanics of fields and continua. At the end of each chapter, there is a list of references with the author's candid reviews of each. Twersky said that Goldstein's Classical Mechanics is more suitable for physicists compared to the much older treatise Analytical Dynamics by E.T. Whittaker, which he deemed more appropriate for mathematicians. E. W. Banhagel, an instructor from Detroit, Michigan, observed that despite requiring no more than multivariable and vector calculus, the first edition of Classical Mechanics successfully introduces some sophisticated new ideas in physics to students. Mathematical tools are introduced as needed. He believed that the annotated references at the end of each chapter are of great value. Third edition Stephen R. Addison from the University of Central Arkansas commented that while the first edition of Classical Mechanics was essentially a treatise with exercises, the third has become less scholarly and more of a textbook. This book is most useful for students who are interested in learning the necessary material in preparation for quantum mechanics. The presentation of most materials in the third edition remain unchanged compared to that of the second, though many of the old references and footnotes were removed. 
Sections on the relation of the action-angle coordinates and the Hamilton-Jacobi equation to the old quantum theory, wave mechanics, and geometric optics were removed. Chapter 7, which deals with special relativity, has been heavily revised and could prove to be more useful to students who want to study general relativity than its equivalent in previous editions. Chapter 11 provides a clear, if somewhat dated, survey of classical chaos. Appendix B could help advanced students refresh their memories but may be too short to learn from. In all, Addison believed that this book remains a classic text on the eighteenth- and nineteenth-century approaches to theoretical mechanics; those interested in a more modern approach – expressed in the language of differential geometry and Lie groups – should refer to Mathematical Methods of Classical Mechanics by Vladimir Arnold. Martin Tiersten from the City University of New York pointed out a serious error in the book that persisted in all three editions and was even promoted to the front cover of the book: a closed orbit, depicted in a diagram on page 80 (as Figure 3.7), that is impossible for an attractive central force because the path cannot be concave away from the center of force. A similarly erroneous diagram appears on page 91 (as Figure 3.13). Tiersten suggested that the reason why this error remained unnoticed for so long is that advanced mechanics texts typically do not use vectors in their treatment of central-force problems, in particular the tangential and normal components of the acceleration vector. He wrote, "Because an attractive force is always directed in toward the center of force, the direction toward the center of curvature at the turning points must be toward the center of force." In response, Poole and Safko acknowledged the error and stated they were working on a list of errata. See also Newtonian mechanics Classical Mechanics (Kibble and Berkshire) Course of Theoretical Physics (Landau and Lifshitz) List of textbooks on classical and quantum mechanics Introduction to Electrodynamics (Griffiths) Classical Electrodynamics (Jackson) References External links Errata, corrections, and comments on the third edition. John L. Safko and Charles P. Poole. University of South Carolina. Classical mechanics Physics textbooks 1951 non-fiction books
Classical Mechanics (Goldstein)
[ "Physics" ]
1,761
[ "Mechanics", "Classical mechanics" ]
16,796,173
https://en.wikipedia.org/wiki/Nix%20%28package%20manager%29
Nix is a cross-platform package manager for Unix-like systems, and a tool to instantiate and manage those systems, invented in 2003 by Eelco Dolstra. Approach The Nix package manager employs a model in which software packages are each installed into unique directories with immutable contents. These directory names correspond to cryptographic hashes that take into account all dependencies of a package, including other packages managed by Nix. As a result, Nix package names are content-identifying, since packages with the same name will have had the same inputs and the same build platform, and therefore the same build result. Implementation Package recipes for Nix are written in the purpose-built "Nix language", a declarative, purely functional, lazily evaluated, dynamically typed programming language. Distinguishing features of the Nix language are strings with "context", string interpolation, first-class file system paths, and "indented strings", which in combination allow concisely expressing dependencies between file system data when specifying the contents of new files. Dependencies between files, as declared in the Nix language, are automatically tracked and persisted in the "Nix store". New files in the Nix store are created through "derivations". A derivation is a persistent data structure that specifies an executable, arguments and environment variables for its invocation (see execve), and other files to be read from the Nix store. The executable is then run in a sandbox that prohibits access to anything but the explicitly specified input files and only allows writing to the designated output path. Nix preserves dependency information in output files by scanning for the distinctive hashes used for package directory names. Automatic reference tracking ensures the integrity of packages, even when they are transferred across machines. It also enables garbage collection of unused packages when no other package depends on them. At the cost of greater storage requirements, all upgrades in Nix are guaranteed to be both atomic and capable of efficient rollback. Unique directory names allow installing many packages with differing versions of shared libraries, and this is claimed to eliminate so-called dependency hell. This also lets multiple users safely install software on the same system without administrator privileges. As a result, the Nix package management and deployment model advertises more reliable, reproducible, and portable packages. Nix has full support for Linux, macOS, and WSL, and can safely be installed side-by-side with another package manager. Nixpkgs Nixpkgs is the package repository built upon the Nix package manager. According to Repology, as of January 2025 it contains more than 122,000 packages and has a higher number of up-to-date packages than any other package repository. Operating systems supported by Nixpkgs are primarily Linux and Darwin, with some support for Windows and BSD variants. Supported CPU architectures include 64-bit x86 and ARM. Packages for these architectures are built regularly, using a continuous integration service called Hydra, and the results of these builds are uploaded to a public binary cache. When Nix installs a package, it checks this cache and downloads the binary package to avoid building it locally. Nixpkgs is developed in a single Git repository on GitHub. Besides packages, it also contains the source code for NixOS. 
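The hash-based, content-identifying store directories described in the Approach section can be sketched with a small example. The fragment below is illustrative only and is not Nix's actual scheme (Nix hashes a serialized derivation and uses its own base-32 encoding); it merely demonstrates the idea that the directory name is derived from every input, so changing any dependency yields a different, coexisting path. All names and values in it are hypothetical.

import hashlib
import json

def store_path(name, version, inputs):
    # Illustrative only: derive a directory name from all declared build inputs.
    # In a real system "inputs" would include source hashes, dependency store paths,
    # build flags and the target platform, so changing any of them changes the path.
    payload = json.dumps({"name": name, "version": version, "inputs": inputs}, sort_keys=True).encode()
    digest = hashlib.sha256(payload).hexdigest()[:32]
    return "/nix/store/{}-{}-{}".format(digest, name, version)

path_a = store_path("hello", "2.12", {"src": "sha256-aaaa", "libc": "/nix/store/xxxx-glibc-2.38"})
path_b = store_path("hello", "2.12", {"src": "sha256-aaaa", "libc": "/nix/store/yyyy-glibc-2.39"})
print(path_a)
print(path_b)  # different dependency -> different path; both builds can coexist on disk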
Projects using Nix NixOS is a Linux distribution that uses Nix for managing the entire system configuration, including the Linux kernel. Nix is used for software packaging and distribution in CERN's LHCb experiment. Nix underlies the distributed software development platforms Replit and Google IDX. Forks and alternative implementations In 2021, a reimplementation by the name Tvix was announced, with the goals of modularity, full compatibility with Nixpkgs, and improved evaluator performance. As of 2024, Tvix has an evaluator and a store implementation, though the authors do not yet consider the project stable or ready for use in production. Tvix is written primarily in Rust. In 2024, a team of volunteers released the first version of Lix, a fork of Nix focused on correctness and compatibility that uses the Meson build automation system. The project intends to gradually rewrite parts of the code in Rust. See also GNU Guix: another declarative package manager, and an early clone of Nix, using GNU Guile for configuration and customization Maak: a build automation utility similar to make, and an early precursor to Nix Runbook automation References External links Discussion among developers on the Debian mailing list (2008) 2012 software Data management software Free computer programming tools Free package management systems Functional programming Linux package management-related software Unix software Configuration management Software using the GNU Lesser General Public License Free software programmed in C++
Nix (package manager)
[ "Engineering" ]
965
[ "Systems engineering", "Configuration management" ]