Dataset columns: id (int64, 39 to 79M); url (string, 32 to 168 characters); text (string, 7 to 145k characters); source (string, 2 to 105 characters); categories (list, 1 to 6 items); token_count (int64, 3 to 32.2k); subcategories (list, 0 to 27 items).
13,967,547
https://en.wikipedia.org/wiki/Dry%20lab
A dry lab is a laboratory where the nature of the experiments does not involve significant risk. This is in contrast to a wet lab, where it is necessary to handle various types of chemicals and biological hazards. An example of a dry lab is one where computational or applied mathematical analyses are done on a computer-generated model to simulate a phenomenon in the physical realm. Examples of such phenomena include a molecule changing quantum states, the event horizon of a black hole, or anything that otherwise might be impossible or too dangerous to observe under normal laboratory conditions. This term may also refer to a lab that uses primarily electronic equipment, for example, a robotics lab. A dry lab can also refer to a laboratory space for the storage of dry materials. Dry labbing can also refer to supplying fictional (yet plausible) results in lieu of performing an assigned experiment, or carrying out a systematic review. In silico chemistry As computing power has grown exponentially, this approach to research, often referred to as in silico (as opposed to in vitro and in vivo), has attracted more attention, especially in the area of bioinformatics. More specifically, within bioinformatics there is the study of proteins, or proteomics, which includes the elucidation of their unknown structures and folding patterns. The general approach to the elucidation of protein structure has been to first purify a protein, crystallize it, and then send X-rays through such a purified protein crystal to observe how these X-rays diffract into a specific pattern—a process referred to as X-ray crystallography. However, many proteins, especially those embedded in cellular membranes, are nearly impossible to crystallize due to their hydrophobic nature. Although other techniques exist, such as Ramachandran plotting and mass spectrometry, these alone generally do not lead to the full elucidation of protein structure or folding mechanisms. Distributed computing As a means of surpassing the limitations of these techniques, projects such as Folding@home and Rosetta@home are aimed at resolving this problem using computational analysis; this means of resolving protein structure is referred to as protein structure prediction. Although many labs have a slightly different approach, the main concept is to find, from a myriad of protein conformations, which conformation has the lowest energy or, in the case of Folding@home, to find relatively low energies of proteins that could cause the protein to misfold and aggregate other proteins to itself, as in the case of sickle cell anemia. The general scheme in these projects is that small batches of computations are sent to a computer, generally a home computer, which then analyzes the likelihood that a specific protein will take a certain shape or conformation based on the amount of energy required for that protein to stay in that shape; this way of processing data is what is generally referred to as distributed computing. This analysis is done on an extraordinarily large number of different conformations, owing to the support of hundreds of thousands of home-based computers, with the goal of finding the conformation of lowest possible energy, or the set of conformations of lowest possible energy relative to any conformations that are just slightly different.
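To make the energy-minimization idea concrete, the following is a minimal illustrative sketch, not code from Folding@home or Rosetta@home: the conformation identifiers and the energy function are invented placeholders, standing in for the expensive force-field calculations a real work unit would perform, and the server-side step is reduced to keeping the lowest-energy candidates reported back.

import random

# Toy stand-in for the expensive physics: in real projects the energy of a
# conformation comes from molecular-mechanics force fields, not a random draw.
def toy_energy(conformation_id):
    random.seed(conformation_id)          # deterministic per conformation
    return random.uniform(-120.0, -20.0)  # pretend kcal/mol

def lowest_energy_conformations(candidate_ids, keep=5):
    """Score each sampled conformation and keep the lowest-energy ones."""
    scored = [(toy_energy(cid), cid) for cid in candidate_ids]
    return sorted(scored)[:keep]

if __name__ == "__main__":
    # Each distributed "work unit" would score a small batch like this one.
    batch = range(1000)
    for energy, cid in lowest_energy_conformations(batch):
        print(f"conformation {cid}: {energy:.1f}")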
Although doing so is quite difficult, and despite the almost infinite number of different protein conformations possible for any given protein (see Levinthal paradox), one can, by observing the energy distribution of a reasonably large number of sampled conformations, use methods of statistical inference to predict relatively closely which conformation, within a range of conformations, has the lowest expected energy. There are other factors, such as salt concentration, pH, ambient temperature or chaperonins (proteins that assist in the folding process of other proteins), that can greatly affect how a protein folds. However, if the given protein is shown to fold on its own, especially in vitro, these findings can be further supported. Once it is known how a protein folds, it becomes possible to see how it works as a catalyst, or in intracellular communication, e.g. neuroreceptor–neurotransmitter interaction. How certain compounds may be used to enhance or prevent the function of these proteins, and how an elucidated protein overall plays a role in disease, can also be much better understood. There are many other avenues of research in which the dry lab approach has been implemented. Other physical phenomena, such as sound, the properties of newly discovered or hypothetical compounds, and quantum mechanical models have recently received more attention in this area. See also Computational chemistry Computational science Computer simulation Computational physics In silico Protein structure prediction Wet lab References Bioinformatics Laboratory types
Dry lab
[ "Chemistry", "Engineering", "Biology" ]
960
[ "Bioinformatics", "Biological engineering", "Laboratory types" ]
13,968,939
https://en.wikipedia.org/wiki/OpenFOAM
OpenFOAM (Open Field Operation And Manipulation) is a C++ toolbox for the development of customized numerical solvers, and pre-/post-processing utilities for the solution of continuum mechanics problems, most prominently including computational fluid dynamics (CFD). The OpenFOAM software is used in research organisations, academic institutes and across many types of industries, for example, automotive, manufacturing, process engineering, environmental engineering and marine energy. OpenFOAM is open-source software which is freely available and licensed under the GNU General Public License Version 3, with the following variants: OpenFOAM, released by OpenCFD Ltd. (with the name trademarked since 2007), first released as open source in 2004. (Note: since 2012, OpenCFD Ltd has been a wholly owned subsidiary of ESI Group.) FOAM-Extend, released by Wikki Ltd. (since 2009). OpenFOAM, released by the OpenFOAM Foundation (since 2011). History The name FOAM has been claimed to have appeared for the first time as a post-processing tool written by Charlie Hill in the early 1990s in Prof. David Gosman's group at Imperial College London. As a counter-argument, it has been claimed that Henry Weller created the FOAM library for field operation and manipulation, which interfaced to GUISE (Graphical User Interface Software Environment), created by Charlie Hill for interfacing to AVS. As a continuum mechanics / computational fluid dynamics tool, the first development of FOAM (which later became OpenFOAM) has generally been attributed to Henry Weller at the same institute, who used the C++ programming language rather than FORTRAN, the de facto standard programming language of the time, to develop a powerful and flexible general simulation platform. From this initiation to the founding of a company called Nabla Ltd, the basic development of the software was carried out for almost a decade, predominantly by Henry Weller and Hrvoje Jasak. For a few years, FOAM was sold as a commercial code by Nabla Ltd; on 10 December 2004, it was released under the GPL and renamed OpenFOAM. In 2004, Nabla Ltd folded. Immediately afterwards, Henry Weller, Chris Greenshields and Mattijs Janssens founded OpenCFD Ltd to develop and release OpenFOAM. At the same time, Hrvoje Jasak founded the consulting company Wikki Ltd and maintained a fork of OpenFOAM called openfoam-extend, later renamed foam-extend. In April 2008, the OpenFOAM development moved to using git for its source code repository. On 5 August 2011, OpenCFD transferred the OpenFOAM software (source code) and documentation to the newly incorporated OpenFOAM Foundation, registered in the state of Delaware, USA. On 8 August 2011, OpenCFD was acquired by Silicon Graphics International (SGI). On 12 September 2012, ESI Group announced the acquisition of OpenCFD Ltd, which became a wholly owned subsidiary of ESI Group, with OpenCFD retaining its ownership of the OpenFOAM trademark. On 25 April 2014, The OpenFOAM Foundation Ltd was incorporated in England as a company limited by guarantee, with all assets transferred to the UK and the US entity dissolved, together with changes to the governance of the Foundation. Weller and Greenshields left OpenCFD and formed CFD Direct Ltd in March 2015. On 3 September 2024, Cristel de Rouvray, CEO of ESI Group, officially resigned as Founder Member and director of The OpenFOAM Foundation Limited. The OpenFOAM Foundation Ltd directors are Henry Weller, Chris Greenshields, and Brendan Bouffler.
The following are the three main variants of OpenFOAM: OpenFOAM (Foundation version), developed and maintained primarily by CFD Direct Ltd, with a sequence-based identifier (e.g. 6.0) (from 2011). OpenFOAM (OpenCFD version), developed and maintained mainly by OpenCFD Ltd (an ESI Group company since 2012), with a date-of-release identifier (e.g. v1606) (from 2016). The FOAM-Extend Project, mainly maintained by Wikki Ltd (from 2009). OpenFOAM Governance In 2018, OpenCFD Ltd. and some of its industrial, academic, and community partners established an administrative body, OpenFOAM Governance, to allow the OpenFOAM user community to decide on and contribute to the future development and direction of their variant of the software. The structure of OpenFOAM Governance consisted of a Steering Committee and various Technical Committees. The Steering Committee comprised representatives from the main sponsors of OpenFOAM in industry, academia, release authorities and consultant organisations. The initial committee included members from OpenCFD Ltd., ESI Group, Volkswagen, General Motors, FM Global, TotalSim Ltd., TU Darmstadt, and Wikki Ltd. In addition, nine technical committees were established in the following areas: documentation, high performance computing, meshing, multiphase, numerics, optimisation, turbulence, marine applications, and nuclear applications, with members from OpenCFD Ltd., CINECA, the University of Zagreb, TU Darmstadt, the National Technical University of Athens, Upstream CFD GmbH, the University of Michigan, and EPFL. Structure Software structure The OpenFOAM directory structure consists of two main directories: OpenFOAM-<version> (the OpenFOAM libraries, whose directory layout is shown in the side figure) and ThirdParty (a set of third-party libraries). Simulation structure OpenFOAM computer simulations are configured by several plain text input files located across three directories: system/, containing controlDict, fvSchemes, fvSolution, optionally fvOptions, and other dictionaries (configuration files in OpenFOAM); constant/, containing polyMesh/ and other dictionaries; and 0/ (or another initial time directory), containing the field files. Additional directories can be generated, depending on user selections. These may include result time directories (field predictions as a function of iteration count or time) and postProcessing/ (data typically generated by function objects, e.g. data conversion to VTK). See also Computer-aided design Computer-aided engineering Finite volume method ParaView, an open-source multiple-platform application for interactive scientific visualization VTK (file format) References External links OpenFOAM Foundation website DLR: Future Aircraft Engineering – The Numerical Simulation 2004 software C++ libraries Computational fluid dynamics Computer-aided engineering software for Linux Continuum mechanics Fluid dynamics Free science software Software using the GNU General Public License Free software programmed in C++ Open Source computer aided engineering applications Scientific simulation software Software that uses VTK
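The simulation directory layout described above lends itself to a quick programmatic check. The following is a minimal sketch, not an official OpenFOAM utility; the case name "cavity" and the helper function are assumptions for illustration only.

from pathlib import Path

# Entries an OpenFOAM case is expected to contain, per the layout above.
REQUIRED = [
    "system/controlDict",
    "system/fvSchemes",
    "system/fvSolution",
    "constant",
    "0",  # or another initial time directory
]

def missing_case_entries(case_dir):
    """Return the required entries that are absent from an OpenFOAM case directory."""
    root = Path(case_dir)
    return [entry for entry in REQUIRED if not (root / entry).exists()]

if __name__ == "__main__":
    missing = missing_case_entries("cavity")  # hypothetical case name
    print("missing entries:", missing if missing else "none")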
OpenFOAM
[ "Physics", "Chemistry", "Engineering" ]
1,400
[ "Continuum mechanics", "Computational fluid dynamics", "Chemical engineering", "Classical mechanics", "Computational physics", "Piping", "Fluid dynamics" ]
13,969,132
https://en.wikipedia.org/wiki/Neher%E2%80%93McGrath%20method
In electrical engineering, Neher–McGrath is a method of estimating the steady-state temperature of electrical power cables for some commonly encountered configurations. By estimating the temperature of the cables, the safe long-term current-carrying capacity of the cables can be calculated. J. H. Neher and M. H. McGrath were two electrical engineers who wrote a paper in 1957 about how to calculate the current-carrying capacity (ampacity) of cables. The paper described two-dimensional, highly symmetric, simplified calculations which have formed the basis for many cable application guidelines and regulations. Complex geometries, or configurations that require three-dimensional analysis of heat flow, require more complex tools such as finite element analysis. Their article became the reference for the ampacity values in most of the standard tables. Overview The Neher–McGrath paper summarized years of research into analytical treatment of the practical problem of heat transfer from power cables. The methods described included all the heat generation mechanisms from a power cable (conductor loss, dielectric loss and shield loss). From the basic principles that electric current leads to resistive heating and that thermal power transfer to the ambient environment requires some temperature difference, it follows that the current leads to a temperature rise in the conductors. The ampacity, or maximum allowable current, of an electric power cable depends on the allowable temperatures of the cable and any adjacent materials such as insulation or termination equipment. For insulated cables, the insulation maximum temperature is normally the limiting material property that constrains ampacity. For uninsulated cables (typically used in outdoor overhead installations), the tensile strength of the cable (as affected by temperature) is normally the limiting material property. The Neher–McGrath method is the electrical industry standard for calculating cable ampacity, most often employed via lookup in tables of precomputed results for common configurations. US National Electrical Code use The equation in section 310-15(C) of the National Electrical Code, called the Neher–McGrath equation (NM), may be used to estimate the effective ampacity of a cable: I = sqrt((TC − (TA + ΔTD)) / (RDC × (1 + YC) × RCA)). In the equation, TC is normally the limiting conductor temperature derived from the insulation or tensile strength limitations. ΔTD is a term added to the ambient temperature TA to compensate for heat generated in the jacket and insulation at higher voltages; it is called the dielectric loss temperature rise and is generally regarded as insignificant for voltages below 2000 V. The term (1 + YC) is a multiplier used to convert the direct current resistance (RDC) to the effective alternating current resistance (which typically includes conductor skin effects and eddy current losses). For wire sizes smaller than AWG No. 2 (), this term is also generally regarded as insignificant. RCA is the effective thermal resistance between the conductor and the ambient conditions, which can require significant empirical or theoretical effort to estimate. With respect to the AC-sensitive terms, the tabular presentation of the NM equation results in the National Electrical Code was developed assuming the standard North American power frequency of 60 hertz and sinusoidal wave forms for current and voltage.
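As a numerical illustration of the relationship reconstructed above, here is a minimal sketch; the function name and all input values are hypothetical placeholders rather than data from the NEC or the 1957 paper, and real studies use tabulated resistances and thermal resistances.

import math

def neher_mcgrath_ampacity(t_c, t_a, delta_t_d, r_dc, y_c, r_ca):
    """Estimate ampacity from the Neher-McGrath relationship sketched above.

    t_c       -- limiting conductor temperature (deg C)
    t_a       -- ambient temperature (deg C)
    delta_t_d -- dielectric loss temperature rise (deg C)
    r_dc      -- DC resistance of the conductor run (ohm)
    y_c       -- AC resistance increment for skin/eddy effects (dimensionless)
    r_ca      -- effective thermal resistance, conductor to ambient (thermal ohm)
    """
    return math.sqrt((t_c - (t_a + delta_t_d)) / (r_dc * (1.0 + y_c) * r_ca))

# Illustrative placeholder values only.
print(round(neher_mcgrath_ampacity(t_c=90, t_a=20, delta_t_d=0.0,
                                   r_dc=1e-4, y_c=0.02, r_ca=1.2), 1))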
The challenges posed by the complexity of estimating RCA, and of estimating the local increase in ambient temperature caused by co-locating many cables (in a duct bank), create a market niche in the electric power industry for software dedicated to ampacity estimation. References Power engineering Power cables
Neher–McGrath method
[ "Engineering" ]
670
[ "Power engineering", "Electrical engineering", "Energy engineering" ]
13,971,587
https://en.wikipedia.org/wiki/Ultrasound%20attenuation%20spectroscopy
Ultrasound attenuation spectroscopy is a method for characterizing properties of fluids and dispersed particles. It is also known as acoustic spectroscopy. There is an international standard for this method. Measurement of the attenuation coefficient versus ultrasound frequency yields raw data for further calculation of various system properties. Such raw data are often used in the calculation of the particle size distribution in heterogeneous systems such as emulsions and colloids. In the case of acoustic rheometers, the raw data are converted into extensional viscosity or volume viscosity. Instruments that employ ultrasound attenuation spectroscopy are referred to as acoustic spectrometers. References External links Ultrasonic Spectrometer Acoustics Colloidal chemistry Spectroscopy Ultrasound
Ultrasound attenuation spectroscopy
[ "Physics", "Chemistry", "Astronomy" ]
148
[ "Spectroscopy stubs", "Colloidal chemistry", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Classical mechanics", "Acoustics", "Colloids", "Surface science", "Astronomy stubs", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs" ]
13,973,033
https://en.wikipedia.org/wiki/Kickoff%20meeting
A kickoff meeting is the first meeting of the project team, held with or without the client of the project. This meeting follows the definition of the base elements of the project and other project planning activities. It introduces the members of the project team and the client and provides the opportunity to discuss the role of each team member. Other base elements of the project that involve the client may also be discussed at this meeting (schedule, status reporting, etc.). If there are any new team members, the process to be followed is explained so as to maintain the quality standards of the organization. The project lead clarifies any ambiguity in the process implementation. There is a special discussion on the legalities involved in the project. For example, the design team interacting with the testing team may want a car to be tested on city roads. If the legal permissions are not mentioned by the relevant stakeholder during kickoff, the test may have to be modified later to comply with local traffic laws, causing unplanned delay in project implementation. It is therefore best to discuss this during the kickoff meeting and to follow it up separately, rather than to proceed on assumptions and later be forced to replan test procedures. The kickoff meeting generates enthusiasm in the customer and presents a full summary of the project so far. By displaying a thorough knowledge of the goal and the steps to reach it, the team gives the customer confidence in its ability to deliver the work. Kickoff means that the work starts. See also Project management References Schedule (project management)
Kickoff meeting
[ "Physics" ]
320
[ "Spacetime", "Physical quantities", "Time", "Schedule (project management)" ]
13,973,133
https://en.wikipedia.org/wiki/Kolbe%20nitrile%20synthesis
The Kolbe nitrile synthesis is a method for the preparation of alkyl nitriles by reaction of the corresponding alkyl halide with a metal cyanide. A side product of this reaction is an isonitrile, formed because the cyanide ion is an ambident nucleophile. The reaction is named after Hermann Kolbe. \underset{\text{alkyl halide}}{R{-}X} + \underset{\text{cyanide ion}}{CN^\ominus} \longrightarrow \underset{\text{alkyl nitrile}}{R{-}C{\equiv}N} + \underset{\text{alkyl isonitrile}}{R{-}\overset{\oplus}{N}{\equiv}C^\ominus} The ratio of product isomers depends on the solvent and the reaction mechanism, and can be predicted by Kornblum's rule. Using alkali cyanides such as sodium cyanide and polar solvents, the reaction occurs by an SN2 mechanism via the more nucleophilic carbon atom of the cyanide ion. This type of reaction, together with dimethyl sulfoxide (DMSO) as a solvent, is a convenient method for the synthesis of nitriles. The use of DMSO was a major advancement in the development of this reaction, as it works for more sterically hindered electrophiles (secondary and neopentyl halides) without rearrangement side-reactions. See also Rosenmund–von Braun reaction, a similar reaction for the synthesis of aromatic nitriles; a similar reaction with enones References Substitution reactions Name reactions
Kolbe nitrile synthesis
[ "Chemistry" ]
359
[ "Name reactions" ]
675,130
https://en.wikipedia.org/wiki/Molecular%20physics
Molecular physics is the study of the physical properties of molecules and molecular dynamics. The field overlaps significantly with physical chemistry, chemical physics, and quantum chemistry. It is often considered as a sub-field of atomic, molecular, and optical physics. Research groups studying molecular physics are typically designated as one of these other fields. Molecular physics addresses phenomena due to both molecular structure and individual atomic processes within molecules. Like atomic physics, it relies on a combination of classical and quantum mechanics to describe interactions between electromagnetic radiation and matter. Experiments in the field often rely heavily on techniques borrowed from atomic physics, such as spectroscopy and scattering. Molecular structure In a molecule, both the electrons and nuclei experience similar-scale forces from the Coulomb interaction. However, the nuclei remain at nearly fixed locations in the molecule while the electrons move significantly. This picture of a molecule is based on the idea that nucleons are much heavier than electrons, so will move much less in response to the same force. Neutron scattering experiments on molecules have been used to verify this description. Molecular energy levels and spectra When atoms join into molecules, their inner electrons remain bound to their original nucleus while the outer valence electrons are distributed around the molecule. The charge distribution of these valence electrons determines the electronic energy level of a molecule, and can be described by molecular orbital theory, which closely follows the atomic orbital theory used for single atoms. Assuming that the momenta of the electrons are on the order of ħ/a (where ħ is the reduced Planck constant and a is the average internuclear distance within a molecule, ~ 1 Å), the magnitude of the energy spacing for electronic states can be estimated at a few electron volts. This is the case for most low-lying molecular energy states, and corresponds to transitions in the visible and ultraviolet regions of the electromagnetic spectrum. In addition to the electronic energy levels shared with atoms, molecules have additional quantized energy levels corresponding to vibrational and rotational states. Vibrational energy levels refer to motion of the nuclei about their equilibrium positions in the molecule. The approximate energy spacing of these levels can be estimated by treating each nucleus as a quantum harmonic oscillator in the potential produced by the molecule, and comparing its associated frequency to that of an electron experiencing the same potential. The result is an energy spacing about 100× smaller than that for electronic levels. In agreement with this estimate, vibrational spectra show transitions in the near infrared (about ). Finally, rotational energy states describe semi-rigid rotation of the entire molecule and produce transition wavelengths in the far infrared and microwave regions (about 100-10,000 μm in wavelength). These are the smallest energy spacings, and their size can be understood by comparing the energy of a diatomic molecule with internuclear spacing ~ 1 Å to the energy of a valence electron (estimated above as ~ ħ/a). Actual molecular spectra also show transitions which simultaneously couple electronic, vibrational, and rotational states. For example, transitions involving both rotational and vibrational states are often referred to as rotational-vibrational or rovibrational transitions. 
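As a quick sanity check of the "few electron volts" estimate quoted above for electronic levels, using only the stated assumptions (electron momentum of order ħ/a with a ≈ 1 Å):

E_el ~ p^2 / (2 m_e) ~ \hbar^2 / (2 m_e a^2) = (1.05\times 10^{-34}\ \mathrm{J\,s})^2 / [2\,(9.11\times 10^{-31}\ \mathrm{kg})\,(10^{-10}\ \mathrm{m})^2] \approx 6\times 10^{-19}\ \mathrm{J} \approx 4\ \mathrm{eV},

which is indeed a few electron volts and corresponds to transitions in the visible and ultraviolet regions of the spectrum.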
Vibronic transitions combine electronic and vibrational transitions, and rovibronic transitions combine electronic, rotational, and vibrational transitions. Due to the very different frequencies associated with each type of transition, the wavelengths associated with these mixed transitions vary across the electromagnetic spectrum. Experiments In general, the goals of molecular physics experiments are to characterize shape and size, electric and magnetic properties, internal energy levels, and ionization and dissociation energies for molecules. In terms of shape and size, rotational spectra and vibrational spectra allow for the determination of molecular moments of inertia, which allows for calculations of internuclear distances in molecules. X-ray diffraction allows determination of internuclear spacing directly, especially for molecules containing heavy elements. All branches of spectroscopy contribute to determination of molecular energy levels due to the wide range of applicable energies (ultraviolet to microwave regimes). Current research Within atomic, molecular, and optical physics, there are numerous studies using molecules to verify fundamental constants and probe for physics beyond the Standard Model. Certain molecular structures are predicted to be sensitive to new physics phenomena, such as parity and time-reversal violation. Molecules are also considered a potential future platform for trapped ion quantum computing, as their more complex energy level structure could facilitate higher efficiency encoding of quantum information than individual atoms. From a chemical physics perspective, intramolecular vibrational energy redistribution experiments use vibrational spectra to determine how energy is redistributed between different quantum states of a vibrationally excited molecule. See also Born–Oppenheimer approximation Electrostatic deflection (molecular physics/nanotechnology) Molecular energy state Molecular modeling Rigid rotor Spectroscopy Physical chemistry Chemical Physics Quantum Chemistry Sources ATOMIC, MOLECULAR AND OPTICAL PHYSICS: NEW RESEARCH by L.T. Chen; Nova Science Publishers, Inc. New York References Atomic, molecular, and optical physics
Molecular physics
[ "Physics", "Chemistry" ]
1,009
[ "Molecular physics", " molecular", "nan", "Atomic", " and optical physics" ]
675,364
https://en.wikipedia.org/wiki/Voltage%20regulation
In electrical engineering, particularly power engineering, voltage regulation is a measure of the change in voltage magnitude between the sending and receiving end of a component, such as a transmission or distribution line. Voltage regulation describes the ability of a system to provide near-constant voltage over a wide range of load conditions. The term may refer to a passive property that results in more or less voltage drop under various load conditions, or to active intervention with devices for the specific purpose of adjusting voltage. Electrical power systems In electrical power systems, voltage regulation is a dimensionless quantity defined at the receiving end of a transmission line as VR = (|Vnl| − |Vfl|) / |Vfl| × 100%, where Vnl is the voltage at no load and Vfl is the voltage at full load. The percent voltage regulation of an ideal transmission line, as defined by a transmission line with zero resistance and reactance, would equal zero, since Vnl equals Vfl as a result of there being no voltage drop along the line. This is why a smaller value of voltage regulation is usually beneficial, indicating that the line is closer to ideal. The voltage regulation formula can be visualized as follows: consider power being delivered to a load such that the voltage at the load is the load's rated voltage VRated; if the load then disappears, the voltage at the point of the load will rise to Vnl. Voltage regulation in transmission lines occurs due to the impedance of the line between its sending and receiving ends. Transmission lines intrinsically have some amount of resistance, inductance, and capacitance that all change the voltage continuously along the line. Both the magnitude and phase angle of voltage change along a real transmission line. The effects of line impedance can be modeled with simplified circuits such as the short line approximation (least accurate), the medium line approximation (more accurate), and the long line approximation (most accurate). The short line approximation ignores the capacitance of the transmission line and models the resistance and reactance of the transmission line as a simple series resistor and inductor. This combination has impedance R + jωL or R + jX. There is a single line current I = IS = IR in the short line approximation, unlike in the medium and long line approximations. The medium length line approximation takes into account the shunt admittance, usually pure capacitance, by distributing half the admittance at the sending and receiving end of the line. This configuration is often referred to as a nominal π. The long line approximation takes these lumped impedance and admittance values and distributes them uniformly along the length of the line. The long line approximation therefore requires the solving of differential equations and results in the highest degree of accuracy. In the voltage regulation formula, Vno load is the voltage measured at the receiving end terminals when the receiving end is an open circuit. The entire short line model is an open circuit in this condition, and no current flows in an open circuit, so I = 0 A and the voltage drop across the line given by Ohm's law, Vline drop = IZline, is 0 V. The sending and receiving end voltages are thus the same. This value is what the voltage at the receiving end would be if the transmission line had no impedance. The voltage would not be changed at all by the line, which is an ideal scenario in power transmission.
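A minimal numerical sketch of these definitions for the short line approximation follows; all impedance and voltage values are invented for illustration, and the load is modeled as a simple impedance.

# Short-line approximation: the line is a series impedance Z_line = R + jX.
V_s = 245.0 + 0j        # sending-end voltage (V), illustrative
Z_line = 0.5 + 2.0j     # series line impedance (ohm), illustrative
Z_load = 20.0 + 5.0j    # load impedance (ohm), illustrative

# Full load: current flows through the line and the load (Ohm's law).
I = V_s / (Z_line + Z_load)
V_fl = I * Z_load                  # receiving-end voltage at full load

# No load: open circuit, no current, hence no voltage drop along the line.
V_nl = V_s                         # receiving-end voltage at no load

VR = (abs(V_nl) - abs(V_fl)) / abs(V_fl) * 100.0
print(f"Voltage regulation = {VR:.2f} %")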
Vfull load is the voltage across the load at the receiving end when the load is connected and current flows in the transmission line. Now Vline drop = IZline is nonzero, so the voltages at the sending and receiving ends of the transmission line are not equal. The current I can be found by solving Ohm's law using a combined line and load impedance: I = VS / (Zline + Zload). Then the receiving end voltage at full load is given by VR,fl = I × Zload = VS × Zload / (Zline + Zload). The effect of this modulation on voltage magnitude and phase angle is illustrated using phasor diagrams that map VR, VS, and the resistive and inductive components of Vline drop. Three power factor scenarios are shown, where (a) the line serves an inductive load so the current lags the receiving end voltage, (b) the line serves a completely real load so the current and receiving end voltage are in phase, and (c) the line serves a capacitive load so the current leads the receiving end voltage. In all cases the line resistance R causes a voltage drop that is in phase with the current, and the reactance of the line X causes a voltage drop that leads the current by 90 degrees. These successive voltage drops are summed to the receiving end voltage, tracing backward from VR to VS in the short line approximation circuit. The vector sum of VR and the voltage drops equals VS, and it is apparent in the diagrams that VS does not equal VR in magnitude or phase angle. The diagrams show that the phase angle of the current in the line affects voltage regulation significantly. Lagging current in (a) makes the required magnitude of the sending end voltage quite large relative to the receiving end. The phase angle difference between the sending and receiving ends is minimized, however. Leading current in (c) actually allows the sending end voltage magnitude to be smaller than the receiving end magnitude, so the voltage counterintuitively increases along the line. In-phase current in (b) does little to affect the magnitude of voltage between the sending and receiving ends, but the phase angle shifts considerably. Real transmission lines typically serve inductive loads, which are the motors that exist everywhere in modern electronics and machines. Transferring a large amount of reactive power Q to inductive loads makes the line current lag the voltage, and the voltage regulation is characterized by a decrease in voltage magnitude. In transferring a large amount of real power P to real loads, the current is mostly in phase with the voltage. The voltage regulation in this scenario is characterized by a decrease in phase angle rather than magnitude. Sometimes, the term voltage regulation is used to describe processes by which the quantity VR is reduced, especially concerning special circuits and devices for this purpose (see below). Electronic power supply parameters The quality of a system's voltage regulation is described by three main parameters: Distribution feeder regulation Electric utilities aim to provide service to customers at a specific voltage level, for example, 220 V or 240 V. However, due to Kirchhoff's laws, the voltage magnitude and thus the service voltage to customers will in fact vary along the length of a conductor such as a distribution feeder (see Electric power distribution). Depending on law and local practice, actual service voltage within a tolerance band such as ±5% or ±10% may be considered acceptable.
In order to maintain voltage within tolerance under changing load conditions, various types of devices are traditionally employed: a load tap changer (LTC) at the substation transformer, which changes the turns ratio in response to load current and thereby adjusts the voltage supplied at the sending end of the feeder; voltage regulators, which are essentially transformers with tap changers to adjust the voltage along the feeder, so as to compensate for the voltage drop over distance; and capacitors, which reduce the voltage drop along the feeder by reducing current flow to loads consuming reactive power. A new generation of devices for voltage regulation based on solid-state technology are in the early commercialization stages. Distribution regulation involves a "regulation point": the point at which the equipment tries to maintain constant voltage. Customers further than this point observe an expected effect: higher voltage at light load, and lower voltage at high load. Customers closer than this point experience the opposite effect: higher voltage at high load, and lower voltage at light load. Complications due to distributed generation Distributed generation, in particular photovoltaics connected at the distribution level, presents a number of significant challenges for voltage regulation. Conventional voltage regulation equipment works under the assumption that line voltage changes predictably with distance along the feeder. Specifically, feeder voltage drops with increasing distance from the substation due to line impedance and the rate of voltage drop decreases farther away from the substation. However, this assumption may not hold when DG is present. For example, a long feeder with a high concentration of DG at the end will experience significant current injection at points where the voltage is normally lowest. If the load is sufficiently low, current will flow in the reverse direction (i.e. towards the substation), resulting in a voltage profile that increases with distance from the substation. This inverted voltage profile may confuse conventional controls. In one such scenario, load tap changers expecting voltage to decrease with distance from the substation may choose an operating point that in fact causes voltage down the line to exceed operating limits. The voltage regulation issues caused by DG at the distribution level are complicated by lack of utility monitoring equipment along distribution feeders. The relative scarcity of information on distribution voltages and loads makes it difficult for utilities to make adjustments necessary to keep voltage levels within operating limits. Although DG poses a number of significant challenges for distribution level voltage regulation, if combined with intelligent power electronics DG can actually serve to enhance voltage regulation efforts. One such example is PV connected to the grid through inverters with volt-VAR control. In a study conducted jointly by the National Renewable Energy Laboratory (NREL) and Electric Power Research Institute (EPRI), when volt-VAR control was added to a distribution feeder with 20% PV penetration, the diurnal voltage swings on the feeder were significantly reduced. Transformers One case of voltage regulation is in a transformer. The unideal components of the transformer cause a change in voltage when current flows. Under no load, when no current flows through the secondary coils, Vnl is given by the ideal model, where VS = VP*NS/NP. 
Looking at the equivalent circuit and neglecting the shunt components, as is a reasonable approximation, one can refer all resistance and reactance to the secondary side and clearly see that the secondary voltage at no load will indeed be given by the ideal model. In contrast, when the transformer delivers full load, a voltage drop occurs over the winding resistance, causing the terminal voltage across the load to be lower than anticipated. By the definition above, this leads to a nonzero voltage regulation which must be considered in use of the transformer. See also Voltage regulator Electric power distribution Shunt regulator References
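Written out under the approximations just described (shunt branch neglected, all series resistance and leakage reactance referred to the secondary side as Req and Xeq, labels introduced here only for illustration), the transformer case reduces to:

V_{nl} = (N_S / N_P)\,V_P, \qquad V_{fl} = V_{nl} - I_S\,(R_{eq} + jX_{eq}), \qquad \mathrm{VR} = \frac{|V_{nl}| - |V_{fl}|}{|V_{fl}|} \times 100\%,

where I_S is the secondary (load) current; the larger the referred series impedance or the load current, the poorer the regulation.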
Voltage regulation
[ "Physics" ]
2,084
[ "Voltage", "Physical quantities", "Voltage regulation" ]
676,502
https://en.wikipedia.org/wiki/Rogue%20wave
Rogue waves (also known as freak waves or killer waves) are large and unpredictable surface waves that can be extremely dangerous to ships and isolated structures such as lighthouses. They are distinct from tsunamis, which are long wavelength waves, often almost unnoticeable in deep waters and are caused by the displacement of water due to other phenomena (such as earthquakes). A rogue wave at the shore is sometimes called a sneaker wave. In oceanography, rogue waves are more precisely defined as waves whose height is more than twice the significant wave height (H or SWH), which is itself defined as the mean of the largest third of waves in a wave record. Rogue waves do not appear to have a single distinct cause but occur where physical factors such as high winds and strong currents cause waves to merge to create a single large wave. Recent research suggests sea state crest-trough correlation leading to linear superposition may be a dominant factor in predicting the frequency of rogue waves. Among other causes, studies of nonlinear waves such as the Peregrine soliton, and waves modeled by the nonlinear Schrödinger equation (NLS), suggest that modulational instability can create an unusual sea state where a "normal" wave begins to draw energy from other nearby waves, and briefly becomes very large. Such phenomena are not limited to water and are also studied in liquid helium, nonlinear optics, and microwave cavities. A 2012 study reported that in addition to the Peregrine soliton reaching up to about three times the height of the surrounding sea, a hierarchy of higher order wave solutions could also exist having progressively larger sizes and demonstrated the creation of a "super rogue wave" (a breather around five times higher than surrounding waves) in a water-wave tank. A 2012 study supported the existence of oceanic rogue holes, the inverse of rogue waves, where the depth of the hole can reach more than twice the significant wave height. Although it is often claimed that rogue holes have never been observed in nature despite replication in wave tank experiments, there is a rogue hole recording from an oil platform in the North Sea, revealed in Kharif et al. The same source also reveals a recording of what is known as the 'Three Sisters'. Background Rogue waves are waves in open water that are much larger than surrounding waves. More precisely, rogue waves have a height which is more than twice the significant wave height (H or SWH). They can be caused when currents or winds cause waves to travel at different speeds, and the waves merge to create a single large wave; or when nonlinear effects cause energy to move between waves to create a single extremely large wave. Once considered mythical and lacking hard evidence, rogue waves are now proven to exist and are known to be natural ocean phenomena. Eyewitness accounts from mariners and damage inflicted on ships have long suggested they occur. Still, the first scientific evidence of their existence came with the recording of a rogue wave by the Gorm platform in the central North Sea in 1984. A stand-out wave was detected with a wave height of in a relatively low sea state. However, what caught the attention of the scientific community was the digital measurement of a rogue wave at the Draupner platform in the North Sea on January 1, 1995; called the "Draupner wave", it had a recorded maximum wave height of and peak elevation of . 
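The defining criterion above is simple enough to state directly; the following is a small illustrative sketch (the wave-height list is invented data, and real analyses work on zero-crossing wave heights extracted from a surface-elevation record).

def significant_wave_height(heights):
    """SWH: the mean of the largest third of the individual wave heights."""
    ordered = sorted(heights, reverse=True)
    top_third = ordered[:max(1, len(ordered) // 3)]
    return sum(top_third) / len(top_third)

def rogue_waves(heights):
    """Waves whose height exceeds twice the significant wave height."""
    swh = significant_wave_height(heights)
    return [h for h in heights if h > 2.0 * swh]

# Invented wave record (metres): a roughly 3 m sea with one stand-out wave.
record = [2.1, 2.8, 3.0, 2.5, 1.9, 3.3, 2.7, 2.2, 3.1, 2.6, 2.9] * 3 + [8.1]
print("SWH  =", round(significant_wave_height(record), 2), "m")
print("rogue:", rogue_waves(record))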
During that event, minor damage was inflicted on the platform far above sea level, confirming the accuracy of the wave-height reading made by a downwards pointing laser sensor. The existence of rogue waves has since been confirmed by video and photographs, satellite imagery, radar of the ocean surface, stereo wave imaging systems, pressure transducers on the sea-floor, and oceanographic research vessels. In February 2000, a British oceanographic research vessel, the RRS Discovery, sailing in the Rockall Trough west of Scotland, encountered the largest waves ever recorded by any scientific instruments in the open ocean, with an SWH of and individual waves up to . In 2004, scientists using three weeks of radar images from European Space Agency satellites found ten rogue waves, each or higher. A rogue wave is a natural ocean phenomenon that is not caused by land movement, only lasts briefly, occurs in a limited location, and most often happens far out at sea. Rogue waves are considered rare, but potentially very dangerous, since they can involve the spontaneous formation of massive waves far beyond the usual expectations of ship designers, and can overwhelm the usual capabilities of ocean-going vessels which are not designed for such encounters. Rogue waves are, therefore, distinct from tsunamis. Tsunamis are caused by a massive displacement of water, often resulting from sudden movements of the ocean floor, after which they propagate at high speed over a wide area. They are nearly unnoticeable in deep water and only become dangerous as they approach the shoreline and the ocean floor becomes shallower; therefore, tsunamis do not present a threat to shipping at sea (e.g., the only ships lost in the 2004 Asian tsunami were in port.). These are also different from the wave known as a "hundred-year wave", which is a purely statistical description of a particularly high wave with a 1% chance to occur in any given year in a particular body of water. Rogue waves have now been proven to cause the sudden loss of some ocean-going vessels. Well-documented instances include the freighter MS München, lost in 1978. Rogue waves have been implicated in the loss of other vessels, including the Ocean Ranger, a semisubmersible mobile offshore drilling unit that sank in Canadian waters on 15 February 1982. In 2007, the United States' National Oceanic and Atmospheric Administration (NOAA) compiled a catalogue of more than 50 historical incidents probably associated with rogue waves. History of rogue wave knowledge Early reports In 1826, French scientist and naval officer Jules Dumont d'Urville reported waves as high as in the Indian Ocean with three colleagues as witnesses, yet he was publicly ridiculed by fellow scientist François Arago. In that era, the thought was widely held that no wave could exceed . Author Susan Casey wrote that much of that disbelief came because there were very few people who had seen a rogue wave and survived; until the advent of steel double-hulled ships of the 20th century, "people who encountered rogue waves generally weren't coming back to tell people about it." Pre-1995 research Unusual waves have been studied scientifically for many years (for example, John Scott Russell's wave of translation, an 1834 study of a soliton wave). Still, these were not linked conceptually to sailors' stories of encounters with giant rogue ocean waves, as the latter were believed to be scientifically implausible. 
Since the 19th century, oceanographers, meteorologists, engineers, and ship designers have used a statistical model known as the Gaussian function (or Gaussian Sea or standard linear model) to predict wave height, on the assumption that wave heights in any given sea are tightly grouped around a central value equal to the average of the largest third, known as the significant wave height (SWH). In a storm sea with an SWH of , the model suggests hardly ever would a wave higher than occur. It suggests one of could indeed happen, but only once in 10,000 years. This basic assumption was well accepted, though acknowledged to be an approximation. Using a Gaussian form to model waves has been the sole basis of virtually every text on that topic for the past 100 years. The first known scientific article on "freak waves" was written by Professor Laurence Draper in 1964. In that paper, he documented the efforts of the National Institute of Oceanography in the early 1960s to record wave height, and the highest wave recorded at that time, which was about . Draper also described freak wave holes. Research on cross-swell waves and their contribution to rogue wave studies Before the Draupner wave was recorded in 1995, early research had already made significant strides in understanding extreme wave interactions. In 1979, Dik Ludikhuize and Henk Jan Verhagen at TU Delft successfully generated cross-swell waves in a wave basin. Although only monochromatic waves could be produced at the time, their findings, reported in 1981, showed that individual wave heights could be added together even when exceeding breaker criteria. This phenomenon provided early evidence that waves could grow significantly larger than anticipated by conventional theories of wave breaking. This work highlighted that in cases of crossing waves, wave steepness could increase beyond usual limits. Although the waves studied were not as extreme as rogue waves, the research provided an understanding of how multidirectional wave interactions could lead to extreme wave heights - a key concept in the formation of rogue waves. The crossing wave phenomenon studied in the Delft Laboratory therefore had direct relevance to the unpredictable rogue waves encountered at sea. Research published in 2024 by TU Delft and other institutions has subsequently demonstrated that waves coming from multiple directions can grow up to four times steeper than previously imagined. The 1995 Draupner wave The Draupner wave was the first rogue wave to be detected by a measuring instrument. The wave was recorded in 1995 at Unit E of the Draupner platform, a gas pipeline support complex located in the North Sea about southwest from the southern tip of Norway. At 15:24 UTC on 1 January 1995, the device recorded a rogue wave with a maximum wave height of . Peak elevation above still water level was . The reading was confirmed by the other sensors. In the area, the SWH at the time was about , so the Draupner wave was more than twice as tall and steep as its neighbors, with characteristics that fell outside any known wave model. The wave caused enormous interest in the scientific community. Subsequent research Following the evidence of the Draupner wave, research in the area became widespread. The first scientific study to comprehensively prove that freak waves exist, which are clearly outside the range of Gaussian waves, was published in 1997. Some research confirms that observed wave height distribution, in general, follows well the Rayleigh distribution. 
Still, in shallow waters during high energy events, extremely high waves are rarer than this particular model predicts. From about 1997, most leading authors acknowledged the existence of rogue waves with the caveat that wave models could not replicate rogue waves. Statoil researchers presented a paper in 2000, collating evidence that freak waves were not the rare realizations of a typical or slightly non-Gaussian sea surface population (classical extreme waves) but were the typical realizations of a rare and strongly non-Gaussian sea surface population of waves (freak extreme waves). Leading researchers from around the world attended the first Rogue Waves 2000 workshop, held in Brest in November 2000. In 2000, the British oceanographic vessel RRS Discovery recorded a wave off the coast of Scotland near Rockall. This was a scientific research vessel fitted with high-quality instruments. Subsequent analysis determined that under severe gale-force conditions with wind speeds averaging , a ship-borne wave recorder measured individual waves up to from crest to trough, and a maximum SWH of . These were some of the largest waves recorded by scientific instruments up to that time. The authors noted that modern wave prediction models are known to significantly under-predict extreme sea states for waves with a significant height (Hs) above . The analysis of this event took a number of years and noted that "none of the state-of-the-art weather forecasts and wave models – the information upon which all ships, oil rigs, fisheries, and passenger boats rely – had predicted these behemoths." In simple terms, a scientific model (and also ship design method) to describe the waves encountered did not exist. This finding was widely reported in the press, which reported that "according to all of the theoretical models at the time under this particular set of weather conditions, waves of this size should not have existed". In 2004, the ESA MaxWave project identified more than 10 individual giant waves above in height during a short survey period of three weeks in a limited area of the South Atlantic. By 2007, it was further proven via satellite radar studies that waves with crest-to-trough heights of occur far more frequently than previously thought. Rogue waves are now known to occur in all of the world's oceans many times each day. Rogue waves are now accepted as a common phenomenon. Professor Akhmediev of the Australian National University has stated that 10 rogue waves exist in the world's oceans at any moment. Some researchers have speculated that roughly three of every 10,000 waves on the oceans achieve rogue status, yet in certain spots, such as coastal inlets and river mouths, these extreme waves can make up three of every 1,000 waves, because wave energy can be focused. Rogue waves may also occur in lakes. A phenomenon known as the "Three Sisters" is said to occur in Lake Superior when a series of three large waves forms. The second wave hits the ship's deck before the first wave clears. The third incoming wave adds to the two accumulated backwashes and suddenly overloads the ship deck with large amounts of water. The phenomenon is one of various theorized causes of the sinking of the SS Edmund Fitzgerald on Lake Superior in November 1975.
A 2012 study reported that in addition to the Peregrine soliton reaching up to about 3 times the height of the surrounding sea, a hierarchy of higher order wave solutions could also exist having progressively larger sizes, and demonstrated the creation of a "super rogue wave" (a breather around 5 times higher than the surrounding waves) in a water tank. Also in 2012, researchers at the Australian National University proved the existence of "rogue wave holes", an inverted profile of a rogue wave. Their research created rogue wave holes on the water surface in a water-wave tank. In maritime folklore, stories of rogue holes are as common as stories of rogue waves. They had followed from theoretical analysis but had never been proven experimentally. "Rogue wave" has become a near-universal term used by scientists to describe isolated, large-amplitude waves that occur more frequently than expected for normal, Gaussian-distributed, statistical events. Rogue waves appear ubiquitous and are not limited to the oceans. They appear in other contexts and have recently been reported in liquid helium, nonlinear optics, and microwave cavities. Marine researchers universally now accept that these waves belong to a specific kind of sea wave, not considered by conventional models for sea wind waves. A 2015 paper studied the wave behavior around a rogue wave, including optical rogue waves and the Draupner wave, and concluded, "rogue events do not necessarily appear without warning but are often preceded by a short phase of relative order". In 2019, researchers succeeded in producing a wave with similar characteristics to the Draupner wave (steepness and breaking), and proportionately greater height, using multiple wavetrains meeting at an angle of 120°. Previous research had strongly suggested that the wave resulted from an interaction between waves from different directions ("crossing seas"). Their research also highlighted that wave-breaking behavior was not necessarily as expected. If waves met at an angle less than about 60°, then the top of the wave "broke" sideways and downwards (a "plunging breaker"). Still, from about 60° and greater, the wave began to break vertically upwards, creating a peak that did not reduce the wave height as usual but instead increased it (a "vertical jet"). They also showed that the steepness of rogue waves could be reproduced in this manner. Lastly, they observed that optical instruments such as the laser used for the Draupner wave might be somewhat confused by the spray at the top of the wave if it broke, and this could lead to uncertainties of around in the wave height. They concluded, "... the onset and type of wave breaking play a significant role and differ significantly for crossing and noncrossing waves. Crucially, breaking becomes less crest-amplitude limiting for sufficiently large crossing angles and involves the formation of near-vertical jets". Extreme rogue wave events On 17 November 2020, a buoy moored in of water on Amphitrite Bank in the Pacific Ocean off Ucluelet, Vancouver Island, British Columbia, Canada, at recorded a lone tall wave among surrounding waves about in height. The wave exceeded the surrounding significant wave heights by a factor of 2.93.
When the wave's detection was revealed to the public in February 2022, one scientific paper and many news outlets christened the event as "the most extreme rogue wave event ever recorded" and a "once-in-a-millennium" event, claiming that at about three times the height of the waves around it, the Ucluelet wave set a record as the most extreme rogue wave ever recorded at the time in terms of its height in proportion to surrounding waves, and that a wave three times the height of those around it was estimated to occur on average only once every 1,300 years worldwide. The Ucluelet event generated controversy. Analysis of scientific papers dealing with rogue wave events since 2005 revealed the claims for the record-setting nature and rarity of the wave to be incorrect. The paper Oceanic rogue waves by Dysthe, Krogstad and Muller reports on an event in the Black Sea in 2004 which was far more extreme than the Ucluelet wave, where the Datawell Waverider buoy reported a wave whose height was higher and 3.91 times the significant wave height, as detailed in the paper. Thorough inspection of the buoy after the recording revealed no malfunction. The authors of the paper that reported the Black Sea event assessed the wave as "anomalous" and suggested several theories on how such an extreme wave may have arisen. The Black Sea event differs in the fact that it, unlike the Ucluelet wave, was recorded with a high-precision instrument. The Oceanic rogue waves paper also reports even more extreme waves from a different source, but these were possibly overestimated, as assessed by the data's own authors. The Black Sea wave occurred in relatively calm weather. Furthermore, a paper by I. Nikolkina and I. Didenkulova also reveals waves more extreme than the Ucluelet wave. In the paper, they infer that in 2006 a wave appeared in the Pacific Ocean off the Port of Coos Bay, Oregon, with a significant wave height of . The ratio is 5.38, almost twice that of the Ucluelet wave. The paper also reveals the incident as marginally more extreme than the Ucluelet event. The paper also assesses a report of an wave in a significant wave height of , but the authors cast doubt on that claim. A paper written by Craig B. Smith in 2007 reported on an incident in the North Atlantic, in which the submarine Grouper was hit by a 30-meter wave in calm seas. Causes Because the phenomenon of rogue waves is still a matter of active research, clearly stating what the most common causes are or whether they vary from place to place is premature. The areas of highest predictable risk appear to be where a strong current runs counter to the primary direction of travel of the waves; the area near Cape Agulhas off the southern tip of Africa is one such area. The warm Agulhas Current runs to the southwest, while the dominant winds are westerlies, but since this thesis does not explain the existence of all waves that have been detected, several different mechanisms are likely, with localized variation. Suggested mechanisms for freak waves include: Diffractive focusing According to this hypothesis, coast shape or seabed shape directs several small waves to meet in phase. Their crest heights combine to create a freak wave. Focusing by currents Waves from one current are driven into an opposing current. This results in shortening of wavelength, causing shoaling (i.e., increase in wave height), and oncoming wave trains to compress together into a rogue wave. 
This happens off the South African coast, where the Agulhas Current is countered by westerlies. Nonlinear effects (modulational instability) A rogue wave may occur by natural, nonlinear processes from a random background of smaller waves. In such a case, it is hypothesized, an unusual, unstable wave type may form, which "sucks" energy from other waves, growing to a near-vertical monster itself, before becoming too unstable and collapsing shortly thereafter. One simple model for this is a wave equation known as the nonlinear Schrödinger equation (NLS), in which a normal and perfectly accountable (by the standard linear model) wave begins to "soak" energy from the waves immediately fore and aft, reducing them to minor ripples compared to other waves. The NLS can be used in deep-water conditions. In shallow water, waves are described by the Korteweg–de Vries equation or the Boussinesq equation. These equations also have nonlinear contributions and show solitary-wave solutions. The terms soliton (a type of self-reinforcing wave) and breather (a wave where energy concentrates in a localized and oscillatory fashion) are used for some of these waves, including the well-studied Peregrine soliton. Studies show that nonlinear effects could arise in bodies of water. A small-scale rogue wave consistent with the NLS equation (the Peregrine soliton) was produced in a laboratory water-wave tank in 2011. Normal part of the wave spectrum Some studies argue that many waves classified as rogue waves (with the sole condition that they exceed twice the SWH) are not freaks but just rare, random samples of the wave height distribution, and are, as such, statistically expected to occur at a rate of about one rogue wave every 28 hours. This is commonly discussed as the question "Freak Waves: Rare Realizations of a Typical Population Or Typical Realizations of a Rare Population?" According to this hypothesis, most real-world encounters with huge waves can be explained by linear wave theory (or weakly nonlinear modifications thereof), without the need for special mechanisms like the modulational instability. Recent studies analyzing billions of wave measurements by wave buoys demonstrate that rogue wave occurrence rates in the ocean can be explained with linear theory when the finite spectral bandwidth of the wave spectrum is taken into account. However, whether weakly nonlinear dynamics can explain even the largest rogue waves (such as those exceeding three times the significant wave height, which would be exceedingly rare in linear theory) is not yet known. This has also led to criticism questioning whether defining rogue waves using only their relative height is meaningful in practice. Constructive interference of elementary waves Rogue waves can result from the constructive interference (dispersive and directional focusing) of elementary three-dimensional waves enhanced by nonlinear effects. Wind wave interactions While wind alone is unlikely to generate a rogue wave, its effect combined with other mechanisms may provide a fuller explanation of freak wave phenomena. As the wind blows over the ocean, energy is transferred to the sea surface. When strong winds from a storm blow in the ocean current's opposing direction, the forces might be strong enough to generate rogue waves randomly. Theories of instability mechanisms for the generation and growth of wind waves – although not on the causes of rogue waves – are provided by Phillips and Miles.
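For the "normal part of the wave spectrum" argument above, the textbook starting point is the narrow-band (Rayleigh) wave-height distribution of linear theory, under which the probability that an individual wave exceeds r times the significant wave height is exp(-2r^2). The sketch below turns that probability into a rough waiting time for an assumed mean wave period; it is the idealised narrow-band estimate only, and, as the passage notes, observed occurrence rates depend on the finite bandwidth of the real spectrum, which is one reason quoted return intervals vary.

```python
import math

def rayleigh_exceedance(ratio):
    """Probability that an individual wave height exceeds `ratio` times the
    significant wave height under the narrow-band Rayleigh distribution:
    P(H > r*Hs) = exp(-2 r^2)."""
    return math.exp(-2.0 * ratio ** 2)

mean_period_s = 10.0          # assumed mean zero-crossing period (illustrative)
for r in (2.0, 2.5, 3.0):
    p = rayleigh_exceedance(r)
    waves_per_event = 1.0 / p
    hours = waves_per_event * mean_period_s / 3600.0
    print(f"H > {r:.1f}*Hs: P = {p:.2e}, about one wave in {waves_per_event:,.0f} "
          f"(roughly every {hours:,.0f} h at T = {mean_period_s:.0f} s)")
```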
The spatiotemporal focusing seen in the NLS equation can also occur when the non-linearity is removed. In this case, focusing is primarily due to different waves coming into phase rather than any energy-transfer processes. Further analysis of rogue waves using a fully nonlinear model by R. H. Gibbs (2005) brings this mode into question, as it is shown that a typical wave group focuses in such a way as to produce a significant wall of water at the cost of a reduced height. A rogue wave, and the deep trough commonly seen before and after it, may last only for some minutes before either breaking or reducing in size again. Rather than occurring as a single wave, a rogue wave may also form part of a wave packet consisting of a few rogue waves. Such rogue wave groups have been observed in nature. Research efforts A number of research programmes focused on rogue waves are currently underway or have concluded, including: In the course of Project MaxWave, researchers from the GKSS Research Centre, using data collected by ESA satellites, identified a large number of radar signatures that have been portrayed as evidence for rogue waves. Further research is underway to develop better methods of translating the radar echoes into sea surface elevation, but at present this technique is not proven. The Australian National University, working in collaboration with Hamburg University of Technology and the University of Turin, have been conducting experiments in nonlinear dynamics to try to explain rogue or killer waves. The "Lego Pirate" video has been widely used and quoted to describe what they call "super rogue waves", which their research suggests can be up to five times bigger than the other waves around them. The European Space Agency continues to do research into rogue waves by radar satellite. United States Naval Research Laboratory, the science arm of the Navy and Marine Corps, published results of their modelling work in 2015. Massachusetts Institute of Technology (MIT)'s research in this field is ongoing. Two researchers there, partially supported by the Naval Engineering Education Consortium (NEEC), considered the problem of short-term prediction of rare, extreme water waves and developed and published their research on a predictive tool with a horizon of about 25 wave periods. This tool can give ships and their crews a two to three-minute warning of a potentially catastrophic impact, allowing the crew some time to shut down essential operations on a ship (or offshore platform). The authors cite landing on an aircraft carrier as a prime example. The University of Colorado and the University of Stellenbosch; Kyoto University; Swinburne University of Technology in Australia, which recently published work on the probabilities of rogue waves. The University of Oxford Department of Engineering Science published a comprehensive review of the science of rogue waves in 2014. In 2019, a team from the Universities of Oxford and Edinburgh recreated the Draupner wave in a lab. University of Western Australia; Tallinn University of Technology in Estonia; the Extreme Seas Project funded by the EU. At Umeå University in Sweden, a research group in August 2006 showed that normal stochastic wind-driven waves can suddenly give rise to monster waves. The nonlinear evolution of the instabilities was investigated by means of direct simulations of the time-dependent system of nonlinear equations. The Great Lakes Environmental Research Laboratory did research in 2002, which dispelled the long-held contention that rogue waves were of rare occurrence.
The University of Oslo has conducted research into crossing sea state and rogue wave probability during the Prestige accident; nonlinear wind-waves, their modification by tidal currents, and application to Norwegian coastal waters; general analysis of realistic ocean waves; modelling of currents and waves for sea structures and extreme wave events; rapid computations of steep surface waves in three dimensions, and comparison with experiments; and very large internal waves in the ocean. The National Oceanography Centre in the United Kingdom; Scripps Institute of Oceanography in the United States; the Ritmare project in Italy; and the University of Copenhagen and University of Victoria. Other media Researchers at UCLA observed rogue-wave phenomena in microstructured optical fibers near the threshold of soliton supercontinuum generation and characterized the initial conditions for generating rogue waves in any medium. Research in optics has pointed out the role played by a Peregrine soliton that may explain those waves that appear and disappear without leaving a trace. Rogue waves in other media appear to be ubiquitous and have also been reported in liquid helium, in quantum mechanics, in nonlinear optics, in microwave cavities, in Bose–Einstein condensates, in heat and diffusion, and in finance. Reported encounters Many of these encounters are reported only in the media, and are not examples of open-ocean rogue waves. Often, in popular culture, a huge, threatening wave is loosely denoted a "rogue wave", while the case has not been established that the reported event is a rogue wave in the scientific sense – i.e. of a very different nature and characteristics than the surrounding waves in that sea state, and with a very low probability of occurrence. This section lists a limited selection of notable incidents. 19th century Eagle Island lighthouse (1861) – Water broke the glass of the structure's east tower and flooded it, implying a wave that surmounted the cliff and overwhelmed the tower. Flannan Isles Lighthouse (1900) – Three lighthouse keepers vanished after a storm that resulted in wave-damaged equipment being found above sea level. 20th century SS Kronprinz Wilhelm, September 18, 1901 – The most modern German ocean liner of its time (winner of the Blue Riband) was damaged on its maiden voyage from Cherbourg to New York by a huge wave. The wave struck the ship head-on. RMS Lusitania (1910) – On the night of 10 January 1910, a wave struck the ship over the bow, damaging the forecastle deck and smashing the bridge windows. Voyage of the James Caird (1916) – Sir Ernest Shackleton encountered a wave he termed "gigantic" while piloting a lifeboat from Elephant Island to South Georgia. USS Memphis, August 29, 1916 – An armored cruiser, formerly known as the USS Tennessee, wrecked while stationed in the harbor of Santo Domingo, with 43 men killed or lost, by a succession of three waves, the largest estimated at 70 feet. RMS Homeric (1924) – Hit by a wave while sailing through a hurricane off the East Coast of the United States, injuring seven people, smashing numerous windows and portholes, carrying away one of the lifeboats, and snapping chairs and other fittings from their fastenings. USS Ramapo (1933) – Triangulated at . (1942) – Broadsided by a wave and listed briefly about 52° before slowly righting. SS Michelangelo (1966) – A hole was torn in the superstructure, heavy glass above the waterline was smashed by the wave, and there were three deaths.
(1975) – Lost on Lake Superior; a Coast Guard report blamed water entry through the hatches, which gradually filled the hold, or errors in navigation or charting that caused damage from running onto shoals. However, another nearby ship, the , was hit at a similar time by two rogue waves and possibly a third, and this appeared to coincide with the sinking around 10 minutes later. (1978) – Lost at sea, leaving only scattered wreckage and signs of sudden damage including extreme forces above the water line. Although more than one wave was probably involved, this remains the most likely sinking due to a freak wave. Esso Languedoc (1980) – A wave washed across the deck from the stern of the French supertanker near Durban, South Africa. Fastnet Lighthouse – Struck by a wave in 1985. Draupner wave (North Sea, 1995) – The first rogue wave confirmed with scientific evidence, it had a maximum height of . Queen Elizabeth 2 (1995) – Encountered a wave in the North Atlantic, during Hurricane Luis. The master said it "came out of the darkness" and "looked like the White Cliffs of Dover." Newspaper reports at the time described the cruise liner as attempting to "surf" the near-vertical wave in order not to be sunk. 21st century U.S. Naval Research Laboratory ocean-floor pressure sensors detected a freak wave caused by Hurricane Ivan in the Gulf of Mexico, 2004. The wave was around high from peak to trough, and around long. Their computer models also indicated that waves may have exceeded in the eyewall. Aleutian Ballad (Bering Sea, 2005) – footage of what is identified as an wave appears in an episode of Deadliest Catch. The wave strikes the ship at night and cripples the vessel, causing the boat to tip for a short period onto its side. This is one of the few video recordings of what might be a rogue wave. In 2006, researchers from the U.S. Naval Institute theorized that rogue waves may be responsible for the unexplained loss of low-flying aircraft, such as U.S. Coast Guard helicopters during search-and-rescue missions. MS Louis Majesty (Mediterranean Sea, March 2010) was struck by three successive waves while crossing the Gulf of Lion on a Mediterranean cruise between Cartagena and Marseille. Two passengers were killed by flying glass when the second and third waves shattered a lounge window. The waves, which struck without warning, were all abnormally high with respect to the sea swell at the time of the incident. The Sea Shepherd vessel MV Brigitte Bardot was damaged by a rogue wave of 11 m (36.1 ft) while pursuing the Japanese whaling fleet off the western coast of Australia on 28 December 2011. The MV Brigitte Bardot was escorted back to Fremantle by the SSCS flagship, MV Steve Irwin. The main hull was cracked, and the port side pontoon was being held together by straps. The vessel arrived at Fremantle Harbor on 5 January 2012. Both ships were followed by the ICR security vessel MV Shōnan Maru 2 at a distance of 5 nautical miles (9 km). In 2019, Hurricane Dorian's extratropical remnant generated a rogue wave off the coast of Newfoundland. In 2022, the Viking cruise ship Viking Polaris was hit by a rogue wave on its way to Ushuaia, Argentina. One person died, four more were injured, and the ship's scheduled route to Antarctica was canceled. Quantifying the impact of rogue waves on ships The loss of the in 1978 provided some of the first physical evidence of the existence of rogue waves. München was a state-of-the-art cargo ship with multiple water-tight compartments and an expert crew.
She was lost with all crew, and the wreck has never been found. The only evidence found was the starboard lifeboat recovered from floating wreckage sometime later. The lifeboats hung from forward and aft blocks above the waterline. The pins had been bent back from forward to aft, indicating the lifeboat hanging below it had been struck by a wave that had run from fore to aft of the ship and had torn the lifeboat from the ship. To exert such force, the wave must have been considerably higher than . At the time of the inquiry, the existence of rogue waves was considered so statistically unlikely as to be near impossible. Consequently, the Maritime Court investigation concluded that the severe weather had somehow created an "unusual event" that had led to the sinking of the München. In 1980, the MV Derbyshire was lost during Typhoon Orchid south of Japan, along with all of her crew. The Derbyshire was an ore-bulk oil combination carrier built in 1976. At 91,655 gross register tons, she remains the largest British ship ever lost at sea. The wreck was found in June 1994. The survey team deployed a remotely operated vehicle to photograph the wreck. A private report published in 1998 prompted the British government to reopen a formal investigation into the sinking. The investigation included a comprehensive survey by the Woods Hole Oceanographic Institution, which took 135,774 pictures of the wreck during two surveys. The formal forensic investigation concluded that the ship sank because of structural failure and absolved the crew of any responsibility. Most notably, the report determined the detailed sequence of events that led to the structural failure of the vessel. A third comprehensive analysis was subsequently done by Douglas Faulkner, professor of marine architecture and ocean engineering at the University of Glasgow. His 2001 report linked the loss of the Derbyshire with the emerging science on freak waves, concluding that the Derbyshire was almost certainly destroyed by a rogue wave. Work by sailor and author Craig B. Smith in 2007 confirmed prior forensic work by Faulkner in 1998 and determined that the Derbyshire was exposed to a hydrostatic pressure of a "static head" of water of about with a resultant static pressure of . This is in effect of seawater (possibly a super rogue wave) flowing over the vessel. The deck cargo hatches on the Derbyshire were determined to be the key point of failure when the rogue wave washed over the ship. The design of the hatches only allowed for a static pressure less than of water or , meaning that the typhoon load on the hatches was more than 10 times the design load. The forensic structural analysis of the wreck of the Derbyshire is now widely regarded as irrefutable. In addition, fast-moving waves are now known to also exert extremely high dynamic pressure. Plunging or breaking waves are known to cause short-lived impulse pressure spikes called Gifle peaks. These can reach pressures of (or more) for milliseconds, which is sufficient pressure to lead to brittle fracture of mild steel. Evidence of failure by this mechanism was also found on the Derbyshire. Smith documented scenarios where hydrodynamic pressure up to or over 500 metric tonnes/m2 could occur. In 2004, an extreme wave was recorded impacting the Alderney Breakwater, Alderney, in the Channel Islands. This breakwater is exposed to the Atlantic Ocean. The peak pressure recorded by a shore-mounted transducer was . 
This pressure far exceeds almost any design criteria for modern ships, and this wave would have destroyed almost any merchant vessel. Design standards In November 1997, the International Maritime Organization (IMO) adopted new rules covering survivability and structural requirements for bulk carriers of and upwards. The bulkhead and double bottom must be strong enough to allow the ship to survive flooding in hold one unless loading is restricted. Rogue waves present considerable danger for several reasons: they are rare, unpredictable, may appear suddenly or without warning, and can impact with tremendous force. A wave in the usual "linear" model would have a breaking force of . Although modern ships are typically designed to tolerate a breaking wave of 15 t/m2, a rogue wave can dwarf both of these figures with a breaking force far exceeding 100 t/m2. Smith presented calculations using the International Association of Classification Societies (IACS) Common Structural Rules for a typical bulk carrier. Peter Challenor, a scientist from the National Oceanography Centre in the United Kingdom, was quoted in Casey's book in 2010 as saying: "We don't have that random messy theory for nonlinear waves. At all." He added, "People have been working actively on this for the past 50 years at least. We don't even have the start of a theory." In 2006, Smith proposed that the IACS recommendation 34 pertaining to standard wave data be modified so that the minimum design wave height be increased to . He presented analysis that sufficient evidence exists to conclude that high waves can be experienced in the 25-year lifetime of oceangoing vessels, and that high waves are less likely, but not out of the question. Therefore, a design criterion based on high waves seems inadequate when the risk of losing crew and cargo is considered. Smith also proposed that the dynamic force of wave impacts should be included in the structural analysis. The Norwegian offshore standards now consider extreme severe wave conditions and require that a 10,000-year wave does not endanger the ships' integrity. W. Rosenthal noted that as of 2005, rogue waves were not explicitly accounted for in Classification Society's rules for ships' design. As an example, DNV GL, one of the world's largest international certification bodies and classification society with main expertise in technical assessment, advisory, and risk management publishes their Structure Design Load Principles which remain largely based on the Significant Wave Height, and as of January 2016, still have not included any allowance for rogue waves. The U.S. Navy historically took the design position that the largest wave likely to be encountered was . Smith observed in 2007 that the navy now believes that larger waves can occur and the possibility of extreme waves that are steeper (i.e. do not have longer wavelengths) is now recognized. The navy has not had to make any fundamental changes in ship design due to new knowledge of waves greater than 21.4 m because the ships are built to higher standards than required. The more than 50 classification societies worldwide each has different rules. However, most new ships are built to the standards of the 12 members of the International Association of Classification Societies, which implemented two sets of common structural rules - one for oil tankers and one for bulk carriers, in 2006. These were later harmonised into a single set of rules. 
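The static loads discussed in this section follow from the elementary hydrostatic relation p = ρgh, which makes the comparison between a head of green water on deck and a hatch design load easy to reproduce. In the sketch below both the assumed head and the assumed design head are illustrative placeholders, not figures taken from the forensic reports cited above.

```python
RHO_SEAWATER = 1025.0   # kg/m^3
G = 9.81                # m/s^2

def static_pressure_kpa(head_m):
    """Hydrostatic pressure of a column of seawater of height `head_m`, in kPa."""
    return RHO_SEAWATER * G * head_m / 1000.0

def pressure_tonnes_per_m2(head_m):
    """Same load expressed as tonnes-force per square metre (1 t/m^2 ≈ 9.81 kPa)."""
    return RHO_SEAWATER * head_m / 1000.0

head = 20.0              # assumed metres of water over the deck (illustrative)
design = 1.75            # assumed hatch design head in metres of water (illustrative)
print(f"{head:.0f} m head -> {static_pressure_kpa(head):.0f} kPa "
      f"({pressure_tonnes_per_m2(head):.1f} t/m^2)")
print(f"{design:.2f} m design head -> {static_pressure_kpa(design):.1f} kPa")
print(f"ratio of load to design: {head / design:.1f}x")
```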
See also Oceanography, currents and regions Waves (and especially, ) Notes References Further reading External links Extreme seas project Design for Ship Safety in Extreme Seas MaxWave report and WaveAtlas Freak waves spotted from space, BBC News Online Ship-sinking monster waves revealed by ESA satellites MaxWave project Rogue Wave Workshop (2005) Rogue Waves 2004 Other – documentary on rogue wave sizes, impacts, and causes by Facts in Motion (YouTuber). BBC News Report on Wave Research, 21 August 2004 The BBC's Horizon "Freak waves" first aired in November 2002 'Giant Waves on the Open Sea', lecture by Professor Paul H Taylor at Gresham College, 13 May 2008 (available for video, audio or text download) TV program description Non-technical description of some of the causes of rogue waves New Scientist article 06/2001 Freak Wave Research in Japan Optical Science Group, Research School of Physics and Engineering at the Australian National University "Rogue Giants at Sea", The New York Times, July 11, 2006 Illustrations of the ways rogue waves can form – with descriptions for layman, photos and animations. "The Wave" – photograph of a solitary and isolated rogue wave appearing in otherwise calm ocean waters (photographer: G Foulds) Katherine Noyes (25 February 2016), "A new algorithm from MIT could protect ships from 'rogue waves' at sea ", CIO magazine. Wood, Charles, The Grand Unified Theory of Rogue Waves. Rogue waves – enigmatic giants of the sea – were thought to be caused by two different mechanisms. But a new idea that borrows from the hinterlands of probability theory has the potential to predict them all. February 5, 2020. Quanta Magazine. Experimental physics Oceanography Ocean currents Water waves Weather hazards Articles containing video clips Fluid dynamics Surface waves
Rogue wave
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
8,607
[ "Ocean currents", "Physical phenomena", "Hydrology", "Applied and interdisciplinary physics", "Weather hazards", "Weather", "Oceanography", "Water waves", "Surface waves", "Chemical engineering", "Waves", "Experimental physics", "Piping", "Fluid dynamics" ]
678,365
https://en.wikipedia.org/wiki/Enthalpy%20change%20of%20solution
In thermochemistry, the enthalpy of solution (heat of solution or enthalpy of solvation) is the enthalpy change associated with the dissolution of a substance in a solvent at constant pressure resulting in infinite dilution. The enthalpy of solution is most often expressed in kJ/mol at constant temperature. The energy change can be regarded as being made up of three parts: the endothermic breaking of bonds within the solute and within the solvent, and the formation of attractions between the solute and the solvent. An ideal solution has a null enthalpy of mixing. For a non-ideal solution, it is an excess molar quantity. Energetics Dissolution of most gases is exothermic. That is, when a gas dissolves in a liquid solvent, energy is released as heat, warming both the system (i.e. the solution) and the surroundings. The temperature of the solution eventually decreases to match that of the surroundings. The equilibrium, between the gas as a separate phase and the gas in solution, will by Le Châtelier's principle shift to favour the gas going into solution as the temperature is decreased (decreasing the temperature increases the solubility of a gas). When a saturated solution of a gas is heated, gas comes out of the solution. Steps in dissolution Dissolution can be viewed as occurring in three steps: Breaking solute-solute attractions (endothermic), for instance, lattice energy in salts. Breaking solvent-solvent attractions (endothermic), for instance, that of hydrogen bonding. Forming solvent-solute attractions (exothermic), in solvation. The value of the enthalpy of solvation is the sum of these individual steps. Dissolving ammonium nitrate in water is endothermic. The energy released by the solvation of the ammonium ions and nitrate ions is less than the energy absorbed in breaking up the ammonium nitrate ionic lattice and the attractions between water molecules. Dissolving potassium hydroxide is exothermic, as more energy is released during solvation than is used in breaking up the solute and solvent. Expressions in differential or integral form The expressions of the enthalpy change of dissolution can be differential or integral, as a function of the ratio of amounts of solute-solvent. The molar differential enthalpy change of dissolution is: where is the infinitesimal variation or differential of the mole number of the solute during dissolution. The integral heat of dissolution is defined as a process of obtaining a certain amount of solution with a final concentration. The enthalpy change in this process, normalized by the mole number of solute, is evaluated as the molar integral heat of dissolution. Mathematically, the molar integral heat of dissolution is denoted as: The prime heat of dissolution is the differential heat of dissolution for obtaining an infinitely diluted solution. Dependence on the nature of the solution The enthalpy of mixing of an ideal solution is zero by definition but the enthalpy of dissolution of nonelectrolytes has the value of the enthalpy of fusion or vaporisation. For non-ideal solutions of electrolytes it is connected to the activity coefficient of the solute(s) and the temperature derivative of the relative permittivity through the following formula: See also Apparent molar property Enthalpy of mixing Heat of dilution Heat of melting Hydration energy Lattice energy Law of dilution Solvation Thermodynamic activity Solubility equilibrium References External links phase diagram Solutions Enthalpy
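The three-step picture described above amounts to a Hess's-law sum: the endothermic cost of breaking solute and solvent attractions plus the exothermic gain from solvation. A minimal sketch follows; the numbers are rounded, commonly quoted textbook-style values for sodium chloride, used purely for illustration and not taken from this article.

```python
def enthalpy_of_solution(lattice_dissociation_kj, hydration_kj):
    """Hess's-law estimate: energy absorbed breaking the solute lattice (positive)
    plus energy released when the separated ions are solvated (negative)."""
    return lattice_dissociation_kj + hydration_kj

# Rounded literature-style values for NaCl, in kJ/mol (illustrative only):
#   breaking the ionic lattice into gaseous ions  ~ +787
#   hydrating Na+(g) and Cl-(g)                   ~ -784
dh = enthalpy_of_solution(787.0, -784.0)
print(f"Estimated ΔH_soln(NaCl) ≈ {dh:+.0f} kJ/mol "
      f"({'endothermic' if dh > 0 else 'exothermic'})")
```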
Enthalpy change of solution
[ "Physics", "Chemistry", "Mathematics" ]
733
[ "Thermodynamic properties", "Physical quantities", "Quantity", "Homogeneous chemical mixtures", "Enthalpy", "Solutions" ]
17,962,061
https://en.wikipedia.org/wiki/Werthamer%E2%80%93Helfand%E2%80%93Hohenberg%20theory
In physics, the Werthamer–Helfand–Hohenberg (WHH) theory was proposed in 1966 by N. Richard Werthamer, Eugene Helfand and Pierre Hohenberg to go beyond the BCS theory of superconductivity; it provides predictions of the upper critical field in type-II superconductors. The theory predicts the upper critical field at 0 K from the critical temperature and the slope of the upper critical field at the critical temperature. References Superconductivity
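A frequently quoted one-line simplification of the WHH result, valid for a dirty single-band type-II superconductor with Pauli limiting and spin-orbit scattering neglected, is Hc2(0) ≈ 0.693 · Tc · |dHc2/dT| evaluated at Tc. The sketch below applies that shorthand to assumed, illustrative inputs; the full theory replaces the constant by the solution of a digamma-function equation, so treat this as the customary estimate only.

```python
def whh_hc2_0(tc_kelvin, slope_tesla_per_kelvin):
    """Dirty-limit WHH estimate of the upper critical field at T = 0:
    Hc2(0) ≈ 0.693 * Tc * |dHc2/dT| at Tc (no Pauli or spin-orbit corrections)."""
    return 0.693 * tc_kelvin * abs(slope_tesla_per_kelvin)

# Assumed, illustrative inputs: Tc = 10 K and a slope of -2 T/K near Tc.
tc = 10.0
slope = -2.0
print(f"Hc2(0) ≈ {whh_hc2_0(tc, slope):.1f} T")   # ≈ 13.9 T for these inputs
```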
Werthamer–Helfand–Hohenberg theory
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
102
[ "Materials science stubs", "Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Electromagnetism stubs", "Physical chemistry stubs", "Electrical resistance and conductance" ]
17,962,735
https://en.wikipedia.org/wiki/Clarifier
Clarifiers are settling tanks built with mechanical means for continuous removal of solids being deposited by sedimentation. A clarifier is generally used to remove solid particulates or suspended solids from liquid for clarification and/or thickening. Inside the clarifier, solid contaminants will settle down to the bottom of the tank, where they are collected by a scraper mechanism. Concentrated impurities, discharged from the bottom of the tank, are known as sludge, while the particles that float to the surface of the liquid are called scum. Applications Pretreatment Before the water enters the clarifier, coagulation and flocculation reagents, such as polyelectrolytes and ferric sulfate, can be added. These reagents cause finely suspended particles to clump together and form larger and denser particles, called flocs, that settle more quickly and stably. This allows the separation of the solids in the clarifier to occur more efficiently and easily, aiding in the conservation of energy. Isolating the particle components first using these processes may reduce the volume of downstream water treatment processes like filtration. Potable water treatment Drinking water, water being purified for human consumption, is treated with flocculation reagents, then sent to the clarifier where removal of the flocculated coagulate occurs producing clarified water. The clarifier works by permitting the heavier and larger particles to settle to the bottom of the clarifier. The particles then form a bottom layer of sludge requiring regular removal and disposal. Clarified water then proceeds through several more steps before being sent for storage and use. Wastewater treatment Sedimentation tanks have been used to treat wastewater for millennia. Primary treatment of sewage is removal of floating and settleable solids through sedimentation. Primary clarifiers reduce the content of suspended solids and pollutants embedded in those suspended solids. Because of the large amount of reagent necessary to treat domestic wastewater, preliminary chemical coagulation and flocculation are generally not used, remaining suspended solids being reduced by following stages of the system. However, coagulation and flocculation can be used for building a compact treatment plant (also called a "package treatment plant"), or for further polishing of the treated water. Sedimentation tanks called 'secondary clarifiers' remove flocs of biological growth created in some methods of secondary treatment including activated sludge, trickling filters and rotating biological contactors. Mining Methods used to treat suspended solids in mining wastewater include sedimentation and floc blanket clarification and filtration. Sedimentation is used by Rio Tinto Minerals to refine raw ore into refined borates. After dissolving the ore, the saturated borate solution is pumped into a large settling tank. Borates float on top of the liquor while rock and clay settle to the bottom. Technology Although sedimentation might occur in tanks of other shapes, removal of accumulated solids is easiest with conveyor belts in rectangular tanks or with scrapers rotating around the central axis of circular tanks. Mechanical solids removal devices move as slowly as practical to minimize resuspension of settled solids. Tanks are sized to give water an optimal residence time within the tank. Economy favors using small tanks; but if flow rate through the tank is too high, most particles will not have sufficient time to settle, and will be carried with the treated water.
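The sizing trade-off described above is often expressed through the surface overflow rate: in an idealised horizontal-flow tank a particle is captured if its settling velocity is at least the flow rate divided by the tank's plan area, while the hydraulic residence time is the tank volume divided by the flow. A minimal sketch with assumed, illustrative dimensions follows.

```python
def residence_time_h(volume_m3, flow_m3_per_h):
    """Hydraulic residence time = tank volume / volumetric flow."""
    return volume_m3 / flow_m3_per_h

def surface_overflow_rate(flow_m3_per_h, plan_area_m2):
    """Critical settling velocity (m/h): particles settling faster than this are
    captured in an ideal horizontal-flow clarifier."""
    return flow_m3_per_h / plan_area_m2

# Assumed, illustrative clarifier: 20 m x 5 m x 3 m deep, treating 250 m^3/h.
length, width, depth, flow = 20.0, 5.0, 3.0, 250.0
area = length * width
volume = area * depth
print(f"residence time ≈ {residence_time_h(volume, flow):.2f} h")
print(f"overflow rate  ≈ {surface_overflow_rate(flow, area):.2f} m/h "
      "(particles slower than this tend to be carried over the weir)")
```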
Considerable attention is focused on reducing water inlet and outlet velocities to minimize turbulence and promote effective settling throughout available tank volume. Baffles are used to prevent fluid velocities at the tank entrance from extending into the tank; and overflow weirs are used to uniformly distribute flow from liquid leaving the tank over a wide area of the surface to minimize resuspension of settling particles. Tube settlers Tube or plate settlers are commonly used in rectangular clarifiers to increase the settling capacity by reducing the vertical distance a suspended particle must travel. Tube settlers are available in many different designs such as parallel plates, chevron shaped, diamond, octagon or triangle shape, and circular shape. High efficiency tube settlers use a stack of parallel tubes, rectangles or flat corrugated plates separated by a few inches (several centimeters) and sloping upwards in the direction of flow. This structure creates a large number of narrow parallel flow pathways encouraging uniform laminar flow as modeled by Stokes' law. These structures work in two ways: They provide a very large surface area onto which particles may fall and become stabilized. Because flow is temporarily accelerated between the plates and then immediately slows down, this helps to aggregate very fine particles that can settle as the flow exits the plates. Structures inclined between 45°  and 60°  may allow gravity drainage of accumulated solids, but shallower angles of inclination typically require periodic draining and cleaning. Tube settlers may allow the use of a smaller clarifier and may enable finer particles to be separated with residence times less than 10 minutes. Typically such structures are used for difficult-to-treat waters, especially those containing colloidal materials. Tube settlers capture the fine particles allowing the larger particles to travel to the bottom of the clarifier in a more uniform way. The fine particles then build up into a larger mass which then slides down the tube channels. The reduction in solids present in the outflow allows a reduction in the clarifier footprint when designing. Tubes made of PVC plastic are a minor cost in clarifier design improvements and may lead to an increase of operating rate of 2 to 4 times. Operation In order to maintain and promote the proper processing of a clarifier, it is important to remove any corrosive, reactive and polymerisable components first, or any material that may foul the outlet stream of water to avoid any unwanted side reactions, changes in the product or damage to any of the water treatment equipment. This is done through routine inspections in order to ascertain the extent of sediment build up, as well as frequent cleaning of the quiescent zones, the inlet and outlet areas of the clarifier to remove any scouring, litter, weeds or debris that may have accumulated over time. Water being introduced into the clarifier should be controlled to reduce the velocity of the inlet flow. Reducing the velocity maximizes the hydraulic retention time inside the clarifier for sedimentation and helps to avoid excessive turbulence and mixing; thereby promoting the effective settling of the suspended particles. 
To further limit mixing within the clarifier and increase the retention time allowed for the particles to settle, the inlet flow should also be distributed evenly across the entire cross section of the settling zone inside the clarifier, where the volume is maintained at 37.7 percent capacity. The sludge formed from the settled particles at the bottom of each clarifier, if left for an extended period of time, may become gluey and viscous, causing difficulties in its removal. This accumulation of sludge promotes anaerobic conditions and a favourable environment for the growth of bacteria. This can cause the resuspension of particles by gases and the release of dissolved nutrients throughout the water, reducing the effectiveness of the clarifier. Problems can also occur further along the water purification system, and the health of fish found downstream of the clarifier may be harmed. New development Improvements and modifications have been made to enhance clarifier performance depending on the characteristics of the substance undergoing the separation. Addition of flocculants is common to aid separation in clarifiers, but density difference of flocculant concentrate may cause treated water to have an excessive flocculant concentration. Uniform flocculant concentration can be improved and flocculant dosage reduced by installation of an intermediate diffused wall perpendicular to the flow in the clarifier. The two dominant forces acting upon the solid particles in clarifiers are gravity and particle interactions. Disproportionate flow can lead to turbulence, hydraulic instability and potential flow short-circuiting. Installation of perforated baffle walls in modern clarifiers promotes uniform flow across the basin. Rectangular clarifiers are commonly used for high efficiency and low running cost. Improvements of these clarifiers were made to stabilize flow by elongation and narrowing of the tank. See also API oil-water separator Dissolved air flotation List of waste-water treatment technologies Total suspended solids References Bibliography Weber, Walter J., Jr. Physicochemical Processes for Water Quality Control. John Wiley & Sons (1972). Sewerage Water treatment Industrial water treatment
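Returning to the tube and plate settlers described in the Technology section above, their benefit can be sketched with Stokes' law for the settling velocity of a small floc together with the extra horizontal projected area contributed by a stack of inclined plates. All particle properties and plate dimensions below are assumptions chosen only for illustration.

```python
import math

def stokes_velocity(d_m, rho_p, rho_w=998.0, mu=1.0e-3, g=9.81):
    """Terminal settling velocity of a small sphere in laminar flow (Stokes' law):
    v = g d^2 (rho_p - rho_w) / (18 mu), in m/s."""
    return g * d_m ** 2 * (rho_p - rho_w) / (18.0 * mu)

def projected_plate_area(n_plates, plate_len_m, plate_width_m, incline_deg):
    """Extra horizontal projected settling area contributed by inclined plates."""
    return n_plates * plate_len_m * plate_width_m * math.cos(math.radians(incline_deg))

v = stokes_velocity(d_m=50e-6, rho_p=1100.0)          # ~50 µm floc, assumed density
extra = projected_plate_area(n_plates=80, plate_len_m=2.0,
                             plate_width_m=1.0, incline_deg=55.0)
print(f"Stokes settling velocity ≈ {v * 3600:.2f} m/h")
print(f"added projected area from the plate stack ≈ {extra:.0f} m^2")
```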
Clarifier
[ "Chemistry", "Engineering", "Environmental_science" ]
1,786
[ "Water treatment", "Industrial water treatment", "Water pollution", "Sewerage", "Environmental engineering" ]
17,967,841
https://en.wikipedia.org/wiki/5-HTTLPR
5-HTTLPR (serotonin-transporter-linked promoter region) is a degenerate repeat (redundancy in the genetic code) polymorphic region in SLC6A4, the gene that codes for the serotonin transporter. Since the polymorphism was identified in the middle of the 1990s, it has been extensively investigated, e.g., in connection with neuropsychiatric disorders. A 2006 scientific article stated that "over 300 behavioral, psychiatric, pharmacogenetic and other medical genetics papers" had analyzed the polymorphism. While often discussed as an example of gene-environment interaction, this contention is contested. Alleles The polymorphism occurs in the promoter region of the gene. Researchers commonly report it with two variations in humans: a short ("s") and a long ("l") allele, but it can be subdivided further. The short (s)- and long (l)- alleles have been thought to be related to stress and psychiatric disorders. In connection with the region are two single nucleotide polymorphisms (SNP): rs25531 and rs25532. One study published in 2000 found 14 allelic variants (14-A, 14-B, 14-C, 14-D, 15, 16-A, 16-B, 16-C, 16-D, 16-E, 16-F, 19, 20 and 22) in a group of around 200 Japanese and Europeans. The difference between 16-A and 16-D is the rs25531 SNP. It is also the difference between 14-A and 14-D. Some studies have found that the long allele results in higher serotonin transporter mRNA transcription in human cell lines. The higher level may be due to the A-allele of rs25531, such that subjects with the long-rs25531(A) allelic combination (sometimes written LA) have higher levels while long-rs25531(G) carriers have levels more similar to short-allele carriers. Newer studies examining the effects of genotype may compare the LA/LA genotype against all other genotypes. The allele frequency of this polymorphism seems to vary considerably across populations, with a higher frequency of the long allele in Europe and lower frequency in Asia. It is argued that the population variation in the allele frequency is more likely due to neutral evolutionary processes than natural selection. Neuropsychiatric disorders In the 1990s it was speculated that the polymorphism might be related to affective disorders, and an initial study found such a link. However, another large European study found no such link. A decade later two studies found that the 5-HTT polymorphism influences depressive responses to life stress; an example of gene-environment interaction (GxE) not considered in the previous studies. However, a 2017 meta-analysis found no such association. Earlier, two 2009 meta-analyses found no overall GxE effect, while a 2011 meta-analysis demonstrated a positive result. In turn, the 2011 meta-analysis has been criticized as being overly inclusive (e.g. including hip fractures as outcomes), for deeming a study supportive of the GxE interaction which is actually in the opposite direction, and because of substantial evidence of publication bias and data mining in the literature. This criticism points out that if the original finding were real, and not the result of publication bias, we would expect that those replication studies which are closest in design to the original are the most likely to replicate; instead we find the opposite. This suggests that authors may be data dredging for measures and analytic strategies which yield the results they want. Treatment response On the basis of one study, the polymorphism was thought to be related to treatment response, with long-allele patients responding better to antidepressants.
Another antidepressant treatment response study, however, pointed instead to the rs25531 SNP, and a large study by the group of investigators found a "lack of association between response to an SSRI and variation at the SLC6A4 locus". One study found a treatment response effect for repetitive transcranial magnetic stimulation in drug-resistant depression, with long/long homozygotes benefitting more than short-allele carriers. The researchers found a similar effect for the Val66Met polymorphism in the BDNF gene. Amygdala The 5-HTTLPR has been thought to predispose individuals to affective disorders such as anxiety and depression. There have been some studies that test whether this association is due to the effects of variation in 5-HTTLPR on the reactivity of the human amygdala. In order to test this, researchers gathered a group of subjects and administered a harm avoidance (HA) subset of the Tridimensional Personality Questionnaire as an initial mood and personality assessment. Subjects also had their DNA isolated and analyzed in order to be genotyped. The amygdala was then engaged by having the subjects match fearful facial expressions during an fMRI scan (on a 3-T GE Signa scanner). The results of the study showed that there was bilateral activity in the amygdala for every subject when processing the fearful images, as expected. However, the activity in the right amygdala was much higher for subjects with the s-allele, which shows that the 5-HTTLPR has an effect on amygdala activity. There did not seem to be the same effect on the left amygdala. Insomnia There has been speculation that the 5-HTTLPR gene is associated with insomnia and sleep quality. Primary insomnia is one of the most common sleep disorders and is defined as having trouble falling or staying asleep, enough to cause distress in one's life. Serotonin (5-HT) has long been associated with the regulation of sleep. The 5-HT transporter (5-HTT) is the main regulator of serotonin availability and serotonergic signalling and is therefore targeted by many antidepressants. There also have been several family and twin studies that suggest that insomnia is heavily genetically influenced. Many of these studies have found that genetic and environmental factors jointly influence insomnia. It has been hypothesized that the short 5-HTTLPR genotype is related to poor sleep quality and, therefore, also primary insomnia. It is important to note that research studies have found that this variation does not cause insomnia, but rather may predispose an individual to experience worse quality of sleep when faced with a stressful life event. Brummett The effect that the 5-HTTLPR gene had on sleep quality was tested by Brummett in a study conducted at Duke University Medical Center from 2001 to 2004. The sleep quality of 344 participants was measured using the Pittsburgh Sleep Quality Index. The study found that caregivers with the homozygous s-allele had poorer sleep quality, which suggests that the stress of caregiving combined with the allele led to worse sleep quality. Although the study found that the 5-HTTLPR genotype did not directly affect sleep quality, the 5-HTTLPR polymorphism's effect on sleep quality was magnified by one's environmental stress. It supports the notion that the 5-HTTLPR s-allele is what leads to hyperarousal when exposed to stress; hyperarousability is commonly associated with insomnia.
Deuschle However, in a 2007 study conducted by a sleep laboratory in Germany, it was found that the 5-HTTLPR gene did have a strong association with both insomnia and depression both in participants with and without lifetime affective disorders. This study included 157 insomnia patients and a control group of 836 individuals that had no psychiatric disorders. The subjects were then genotyped through polymerase chain reaction (PCR) techniques. The researchers found that the s-allele was greater represented in the vast majority of patients with insomnia compared to those who had no disorder. This shows that there is an association between the 5-HTTPLR genotype and primary insomnia. However, it is important to consider the fact that there was a very limited number of subjects with insomnia tested in this study. Personality traits 5-HTTLPR may be related to personality traits: Two 2004 meta-analyses found 26 research studies investigating the polymorphism in relation to anxiety-related traits. The initial and classic 1996 study found s-allele carriers to on average have slightly higher neuroticism score with the NEO PI-R personality questionnaire, and this result was replicated by the group with new data. Some other studies have, however, failed to find this association, nor with peer-rated neuroticism, and a 2006 review noted the "erratic success in replication" of the first finding. A meta-analysis published in 2004 stated that the lack of replicability was "largely due to small sample size and the use of different inventories". They found that neuroticism as measured with the NEO-family of personality inventories had quite significant association with 5-HTTLPR while the trait harm avoidance from the Temperament and Character Inventory family did not have any significant association. A similar conclusion was reached in an updated 2008 meta-analysis. However, based on over 4000 subjects, the largest study that used the NEO PI-R found no association between variants of the serotonin transporter gene (including 5-HTTLPR) and neuroticism, or its facets (Anxiety, Angry-Hostility, Depression, Self-Consciousness, Impulsiveness, and Vulnerability). In a study published in 2009, authors found that individuals homozygous for the long allele of 5-HTTLPR paid more attention on average to positive affective pictures while selectively avoiding negative affective pictures presented alongside the positive pictures compared to their heterozygous and short-allele-homozygous peers. This biased attention of positive emotional stimuli suggests they may tend to be more optimistic. Other research indicates carriers of the short 5-HTTLPR allele have difficulty disengaging attention from emotional stimuli compared to long allele homozygotes. Another study published in 2009 using an eye tracking assessment of information processing found that short 5-HTTLPR allele carriers displayed an eye gaze bias to view positive scenes and avoid negative scenes, while long allele homozygotes viewed the emotion scenes in a more even-handed fashion. This research suggests that short 5-HTTLPR allele carriers may be more sensitive to emotional information in the environment than long allele homozygotes. Another research group have given evidence for a modest association between shyness and the long form in grade school children. This is, however, just a single report and the link is not investigated as intensively as for the anxiety-related traits. 
Neuroimaging Molecular neuroimaging studies have examined the association between genotype and serotonin transporter binding with positron emission tomography (PET) and SPECT brain scanners. Such studies use a radioligand that binds (preferably selectively) to the serotonin transporter so an image can be formed that quantifies the distribution of the serotonin transporter in the brain. One study found no difference in serotonin transporter availability between long/long and short/short homozygous subjects among 96 subjects scanned with SPECT using the iodine-123 β-CIT radioligand. Using the PET radioligand carbon-11-labeled McN 5652, another research team likewise found no difference in serotonin transporter binding between genotype groups. Newer studies have used the radioligand carbon-11-labeled DASB, with one study finding higher serotonin transporter binding in the putamen of LA homozygotes compared to other genotypes. Another study with a similar radioligand and genotype comparison found higher binding in the midbrain. Associations between the polymorphism and the grey matter in parts of the anterior cingulate brain region have also been reported based on magnetic resonance imaging brain scans and voxel-based morphometry analysis. 5-HTTLPR short allele–driven amygdala hyperreactivity was confirmed in a large (by MRI study standards) cohort of healthy subjects with no history of psychiatric illness or treatment. Brain blood flow measurements with positron emission tomography brain scanners can show genotype-related changes. The glucose metabolism in the brain has also been investigated with respect to the polymorphism, and functional magnetic resonance imaging (fMRI) brain scans have also been correlated with the polymorphism. The amygdala in particular has been the focus of the functional neuroimaging studies. Electrophysiology The relationship between the event-related potentials P3a and P3b and the genetic variants of 5-HTTLPR was investigated using an auditory oddball paradigm, which revealed that short-allele homozygotes mimicked COMT met/met homozygotes, with an enhancement of the frontal, but not parietal, P3a and P3b. This suggests a frontal-cortical dopaminergic and serotoninergic mechanism in bottom-up attentional capture. Other organisms In rats (Rattus rattus), berberine increases 5-HTTLPR activity. References Further reading External links 5-HTTLPR: A Pointed Review at Slate Star Codex
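As a footnote to the allele-frequency discussion in the Alleles section above: given a short-allele frequency, the expected s/s, s/l and l/l genotype proportions under Hardy–Weinberg equilibrium follow directly, which is one way study samples are commonly sanity-checked before genotype-group comparisons. The allele frequencies below are round, illustrative numbers, not estimates from any study cited in this article.

```python
def hardy_weinberg(p_short):
    """Expected genotype proportions for a biallelic locus (s/l) at
    Hardy-Weinberg equilibrium, given the short-allele frequency."""
    q_long = 1.0 - p_short
    return {"s/s": p_short ** 2,
            "s/l": 2.0 * p_short * q_long,
            "l/l": q_long ** 2}

# Round, illustrative allele frequencies (not taken from the studies above):
for label, p_s in (("higher short-allele frequency", 0.6),
                   ("lower short-allele frequency", 0.4)):
    props = hardy_weinberg(p_s)
    pretty = ", ".join(f"{g}: {v:.2f}" for g, v in props.items())
    print(f"{label} (p_s = {p_s}): {pretty}")
```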
5-HTTLPR
[ "Biology" ]
2,845
[ "SNPs on chromosome 17", "Single-nucleotide polymorphisms" ]
17,971,241
https://en.wikipedia.org/wiki/Barwise%20compactness%20theorem
In mathematical logic, the Barwise compactness theorem, named after Jon Barwise, is a generalization of the usual compactness theorem for first-order logic to a certain class of infinitary languages. It was stated and proved by Barwise in 1967. Statement Let A be a countable admissible set. Let L be an A-finite relational language. Suppose Γ is a set of L_A-sentences, where Γ is a Σ₁ set with parameters from A, and every A-finite subset of Γ is satisfiable. Then Γ is satisfiable. References External links Stanford Encyclopedia of Philosophy: "Infinitary Logic", Section 5, "Sublanguages of L(ω1,ω) and the Barwise Compactness Theorem" Theorems in the foundations of mathematics Mathematical logic Metatheorems
Barwise compactness theorem
[ "Mathematics" ]
162
[ "Foundations of mathematics", "Mathematical logic", "Mathematical problems", "Mathematical logic stubs", "Mathematical theorems", "Theorems in the foundations of mathematics" ]
17,972,479
https://en.wikipedia.org/wiki/Luxemburg%E2%80%93Gorky%20effect
In radiophysics, the Luxemburg–Gorky effect (named after Radio Luxemburg and the city of Gorky (Nizhny Novgorod)) is a phenomenon of cross modulation between two radio waves, one of which is strong, passing through the same part of a medium, especially a conductive region of atmosphere or a plasma. Current theory seems to be that the conductivity of the ionosphere is affected by the presence of strong radio waves. The strength of a radio wave returning from the ionosphere to a distant point is dependent on this conductivity level. Therefore, if station "A" is radiating a strong amplitude modulated radio signal all around, some of it will modulate the conductivity of the ionosphere above the station. Then if station "B" is also sending an amplitude modulated signal from another location, the part of station "B's" signal that passes through the ionosphere disturbed by station "A" to a receiver in line with both stations may have its strength modulated by the station "A" signal, even though the two are widely separated in frequency. In other words, the ionosphere passes the station "B" signal with a strength that varies in step with the modulation (voice, etc.) of station "A." This re-modulation level of the station "B" signal is usually only a few percent, but is enough to make both stations audible. The interference (both stations simultaneously received) goes away as the receiver is tuned slightly away from the frequency of "B." See also Distortion Radio propagation Plasma physics Notes References In the paper "An hereditary theory of the Luxemburg effect" (English translation of the title), written only a few years after the discovery of the effect itself, Dario Graffi proposes a theory of the Luxemburg effect based on Volterra's theory of hereditary phenomena. Radio spectrum
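The transfer of modulation described above can be imitated numerically: let station A's audio weakly modulate the attenuation of the ionospheric path, pass station B's otherwise unmodulated carrier through that time-varying attenuation, and envelope-detect the result. The few-percent cross-modulation depth and all other signal parameters below are assumptions chosen to illustrate the effect, not measured ionospheric values.

```python
import numpy as np

fs = 200_000.0                          # sample rate, Hz (illustrative)
t = np.arange(0, 0.05, 1 / fs)

# Station A's programme (a 1 kHz tone) and station B's plain carrier at 30 kHz.
audio_a = np.sin(2 * np.pi * 1_000 * t)
carrier_b = np.cos(2 * np.pi * 30_000 * t)

# The disturbed ionosphere: its attenuation varies slightly with A's modulation.
cross_mod_depth = 0.03                  # assumed few-percent effect
path_gain = 1.0 + cross_mod_depth * audio_a
received_b = path_gain * carrier_b

# Crude envelope detector: rectify, then low-pass with a short moving average
# (~0.1 ms window: passes the 1 kHz programme, rejects the carrier ripple).
kernel = np.ones(20) / 20.0
envelope = np.convolve(np.abs(received_b), kernel, mode="valid")
ripple = envelope - envelope.mean()
depth = ripple.std() * np.sqrt(2) / envelope.mean()
print(f"recovered modulation depth on station B ≈ {depth:.3f}")  # ≈ the assumed 0.03
```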
Luxemburg–Gorky effect
[ "Physics" ]
385
[ "Radio spectrum", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
2,379,305
https://en.wikipedia.org/wiki/Brihanmumbai%20Storm%20Water%20Disposal%20System
The Brihanmumbai Stormwater Disposal System is a project planned to overhaul Mumbai's water drainage system. The estimated budget for implementing the project is Rs. 12 billion (approx. 300 million US dollars) as of August 2005. Such a high-budget project would require funds from the Central Government. Mumbai has a drainage system which, in many places, is more than 100 years old, consisting of 2,000 km of open drains, 440 km of closed drains, 186 outfalls and more than 30,000 water entrances. The capacity of most of the drains is around 25 mm of rain per hour during low tide, which is exceeded routinely during the monsoon season in Mumbai, which witnesses more than 1400 mm during June and July. The drain system works with the aid of gravity, with no pumping stations to speed up the drainage. Most of the storm water drains are also choked due to the dumping of garbage by citizens. Portions of Mumbai like Bombay Central and Tardeo remain below sea level. Reclamation of ponds and obstructions in drains due to cables and gas pipes exacerbate the problem. History of the failed drainage system in Mumbai The inadequacy of the system was exposed by the floods of 26 July 2005. The project was conceived after major floods in Mumbai in 1985. Watson Hawksley was appointed as consultants to design the drainage system from Sandhurst Road to Milan subway in 1989. A proposal was submitted in 1993 for a project which involved replacement of drains, setting up of pumping stations at Worli, Haji Ali and Cleaveland Bandar, construction of a five-metre wide road alongside major drains for desilting, removal of obstructions from the drains and rehabilitation of slum-dwellers. The project was not acted upon due to lack of funds till the catastrophic floods in 2005. The initial estimated cost of the project was around Rs 6 billion. Around Rs 1.43 billion was spent on the project till 1998. By 2005, the project cost had gone up to Rs 12 billion. References External links Executive Summary of Environment Status Report of the Municipal Corporation of Greater Mumbai, 2002-03 (PDF format) which states: "...The present Storm Water Drainage system in the city is 70 years old and about 480 km in length. It is capable of handling rain intensity of 25 mm per hour at low tide.... One of the heavily polluted storm water drains which is known as Mithi River is responsible for polluting Mahim creek. BRIMSTOWAD is a project for the rehabilitation of city's SWD system.... The project... will cost Rs. 6.16 billion at 1992 price." https://web.archive.org/web/20090310212824/http://mdmu.maharashtra.gov.in/pages/Mumbai/mumbaiplanShow.php - Government of Maharashtra's Department of Relief and Rehabilitation's "Mumbai plan" which says - "The present capacity of the storm-water drains needs to be augmented to a higher capacity which is under serious consideration with the Government of Maharashtra/BMC." "BMC project cost mounts to Rs 1,200 crore" - Mid-Day article dated 10 July 2005 Geography of Mumbai Water in India Stormwater management Flood control in India
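The mismatch described above between drain capacity and monsoon rainfall is easy to quantify: any rainfall intensity beyond the roughly 25 mm per hour the drains can pass accumulates as standing water until the storm eases. The storm intensities and durations in the sketch below are assumed, illustrative scenarios, not measurements from any particular Mumbai event.

```python
def ponding_depth_mm(rain_mm_per_h, drain_capacity_mm_per_h, hours):
    """Water depth that accumulates when rainfall exceeds drainage capacity."""
    excess = max(rain_mm_per_h - drain_capacity_mm_per_h, 0.0)
    return excess * hours

capacity = 25.0            # mm/h, as quoted for most of the existing drains
for intensity, duration in ((40.0, 3.0), (80.0, 6.0)):   # assumed storm scenarios
    depth = ponding_depth_mm(intensity, capacity, duration)
    print(f"{intensity:.0f} mm/h for {duration:.0f} h -> ~{depth:.0f} mm of standing water")
```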
Brihanmumbai Storm Water Disposal System
[ "Chemistry", "Environmental_science" ]
663
[ "Water treatment", "Stormwater management", "Water pollution" ]
2,379,569
https://en.wikipedia.org/wiki/Submersible%20bridge
A submersible bridge is a type of movable bridge that lowers the bridge deck below the water level to permit waterborne traffic to use the waterway. This differs from a lift bridge or table bridge, which operate by raising the roadway. Two submersible bridges exist across the Corinth Canal in Greece, one at each end, in Isthmia and Corinth. They lower the centre span to 8 metres below water level when they give way to ships crossing the channel. The submersible bridge's primary advantage over the similar lift bridge is that there is no structure above the shipping channel and thus no height limitation on ship traffic. This is particularly important for sailing vessels. Additionally, the lack of an above-deck structure is considered aesthetically pleasing, a similarity shared with the Chicago-style bascule bridge and the table bridge. However, the presence of the submerged bridge structure limits the draft of vessels in the waterway. The term submersible bridge is also sometimes applied to a non-movable bridge that is designed to withstand submersion and high currents when the water level rises. Such a bridge is more properly called a low water bridge. See also Low water crossing, a non-moving bridge that is sometimes submerged Moveable bridges for a list of other moveable bridge types Table bridge, a similar bridge that moves upward Underwater bridge, a non-moving military bridge that is always submerged References External links Popular Science, November 1943, "Ducking Bridge" Lowers Span To Allow Ships To Pass built in Iraq in 1943 (bottom-right hand side of page) Video of the operation of a submersible bridge at the entrance of the Corinth Canal Bridges Moveable bridges Bridges by structural type
Submersible bridge
[ "Engineering" ]
348
[ "Structural engineering", "Bridges" ]
2,379,716
https://en.wikipedia.org/wiki/Devolution%20%28biology%29
Devolution, de-evolution, or backward evolution (not to be confused with dysgenics) is the notion that species can revert to supposedly more primitive forms over time. The concept relates to the idea that evolution has a divine purpose (teleology) and is thus progressive (orthogenesis), for example that feet might be better than hooves, or lungs than gills. However, evolutionary biology makes no such assumptions, and natural selection shapes adaptations with no foreknowledge or foresight of any kind regarding the outcome. It is possible for small changes (such as in the frequency of a single gene) to be reversed by chance or selection, but this is no different from the normal course of evolution and as such de-evolution is not compatible with a proper understanding of evolution through natural selection. In the 19th century, when belief in orthogenesis was widespread, zoologists such as Ray Lankester and Anton Dohrn and palaeontologists Alpheus Hyatt and Carl H. Eigenmann advocated the idea of devolution. The concept appears in Kurt Vonnegut's 1985 novel Galápagos, which portrays a society that has evolved backwards to have small brains. Dollo's law of irreversibility, first stated in 1893 by the palaeontologist Louis Dollo, denies the possibility of devolution. The evolutionary biologist Richard Dawkins explains Dollo's law as being simply a statement about the improbability of evolution's following precisely the same path twice. Context The idea of devolution is based on the presumption of orthogenesis, the view that evolution has a purposeful direction towards increasing complexity. Modern evolutionary theory, beginning with Darwin at least, poses no such presumption, and the concept of evolutionary change is independent of either any increase in complexity of organisms sharing a gene pool, or any decrease, such as in vestigiality or in loss of genes. Earlier views that species are subject to "cultural decay", "drives to perfection", or "devolution" are practically meaningless in terms of current (neo-)Darwinian theory. Early scientific theories of transmutation of species such as Lamarckism perceived species diversity as a result of a purposeful internal drive or tendency to form improved adaptations to the environment. In contrast, Darwinian evolution and its elaboration in the light of subsequent advances in biological research, have shown that adaptation through natural selection comes about when particular heritable attributes in a population happen to give a better chance of successful reproduction in the reigning environment than rival attributes do. By the same process less advantageous attributes are less "successful"; they decrease in frequency or are lost completely. Since Darwin's time it has been shown how these changes in the frequencies of attributes occur according to the mechanisms of genetics and the laws of inheritance originally investigated by Gregor Mendel. Combined with Darwin's original insights, genetic advances led to what has variously been called the modern evolutionary synthesis or the neo-Darwinism of the 20th century. In these terms evolutionary adaptation may occur most obviously through the natural selection of particular alleles. Such alleles may be long established, or they may be new mutations. Selection also might arise from more complex epigenetic or other chromosomal changes, but the fundamental requirement is that any adaptive effect must be heritable. 
The concept of devolution on the other hand, requires that there be a preferred hierarchy of structure and function, and that evolution must mean "progress" to "more advanced" organisms. For example, it could be said that "feet are better than hooves" or "lungs are better than gills", so their development is "evolutionary" whereas change to an inferior or "less advanced" structure would be called "devolution". In reality an evolutionary biologist defines all heritable changes to relative frequencies of the genes or indeed to epigenetic states in the gene pool as evolution. All gene pool changes that lead to increased fitness in terms of appropriate aspects of reproduction are seen as (neo-)Darwinian adaptation because, for the organisms possessing the changed structures, each is a useful adaptation to their circumstances. For example, hooves have advantages for running quickly on plains, which benefits horses, and feet offer advantages in climbing trees, which some ancestors of humans did. The concept of devolution as regress from progress relates to the ancient ideas that either life came into being through special creation or that humans are the ultimate product or goal of evolution. The latter belief is related to anthropocentrism, the idea that human existence is the point of all universal existence. Such thinking can lead on to the idea that species evolve because they "need to" in order to adapt to environmental changes. Biologists refer to this misconception as teleology, the idea of intrinsic finality that things are "supposed" to be and behave a certain way, and naturally tend to act that way to pursue their own good. From a biological viewpoint, in contrast, if species evolve it is not a reaction to necessity, but rather that the population contains variations with traits that favour their natural selection. This view is supported by the fossil record which demonstrates that roughly ninety-nine percent of all species that ever lived are now extinct. People thinking in terms of devolution commonly assume that progress is shown by increasing complexity, but biologists studying the evolution of complexity find evidence of many examples of decreasing complexity in the record of evolution. The lower jaw in fish, reptiles and mammals has seen a decrease in complexity, if measured by the number of bones. Ancestors of modern horses had several toes on each foot; modern horses have a single hooved toe. Modern humans may be evolving towards never having wisdom teeth, and already have lost most of the tail found in many other mammals - not to mention other vestigial structures, such as the vermiform appendix or the nictitating membrane. In some cases, the level of organization of living creatures can also “shift” downwards (e.g., the loss of multicellularity in some groups of protists and fungi). A more rational version of the concept of devolution, a version that does not involve concepts of "primitive" or "advanced" organisms, is based on the observation that if certain genetic changes in a particular combination (sometimes in a particular sequence as well) are precisely reversed, one should get precise reversal of the evolutionary process, yielding an atavism or "throwback", whether more or less complex than the ancestors where the process began. At a trivial level, where just one or a few mutations are involved, selection pressure in one direction can have one effect, which can be reversed by new patterns of selection when conditions change. 
That could be seen as reversed evolution, though the concept is not of much interest because it does not differ in any functional or effective way from any other adaptation to selection pressures. History The concept of degenerative evolution was used by scientists in the 19th century; at this time most biologists believed that evolution had some kind of direction. In 1857 the physician Bénédict Morel, influenced by Lamarckism, claimed that environmental factors such as taking drugs or alcohol would produce social degeneration in the offspring of those individuals, and would revert those offspring to a primitive state. Morel, a devout Catholic, believed that mankind had started in perfection, contrasting modern humanity to the past. Morel claimed there had been "Morbid deviation from an original type". His theory of devolution was later advocated by some biologists. According to Roger Luckhurst: Darwin soothed readers that evolution was progressive, and directed towards human perfectibility. The next generation of biologists were less confident or consoling. Using Darwin's theory, and many rival biological accounts of development then in circulation, scientists suspected that it was just as possible to devolve, to slip back down the evolutionary scale to prior states of development. One of the first biologists to suggest devolution was Ray Lankester, who explored the possibility that evolution by natural selection may in some cases lead to devolution; an example he studied was the regression in the life cycle of sea squirts. Lankester discussed the idea of devolution in his book Degeneration: A Chapter in Darwinism (1880). He was a critic of progressive evolution, pointing out that higher forms existed in the past which have since degenerated into simpler forms. Lankester argued that "if it was possible to evolve, it was also possible to devolve, and that complex organisms could devolve into simpler forms or animals". Anton Dohrn also developed a theory of degenerative evolution based on his studies of vertebrates. According to Dohrn many chordates are degenerated because of their environmental conditions. Dohrn claimed cyclostomes such as lampreys are degenerate fish, as there is no evidence that their jawless state is an ancestral feature; rather, it is the product of environmental adaptation due to parasitism. According to Dohrn, if cyclostomes were to devolve further they would resemble something like an Amphioxus. The historian of biology Peter J. Bowler has written that devolution was taken seriously by proponents of orthogenesis and others in the late 19th century who at this period of time firmly believed that there was a direction in evolution. Orthogenesis was the belief that evolution travels in internally directed trends and levels. The paleontologist Alpheus Hyatt discussed devolution in his work, using the concept of racial senility as the mechanism of devolution. Bowler defines racial senility as "an evolutionary retreat back to a state resembling that from which it began." Hyatt, who studied the fossils of invertebrates, believed that ammonoids developed by regular stages up to a specific level but would later, due to unfavourable conditions, descend back to a previous level; this, according to Hyatt, was a form of Lamarckism, as the degeneration was a direct response to external factors. 
To Hyatt, after this degeneration the species would then become extinct; according to Hyatt there was a "phase of youth, a phase of maturity, a phase of senility or degeneration foreshadowing the extinction of a type". To Hyatt the devolution was predetermined by internal factors which organisms can neither control nor reverse. This idea of all evolutionary branches eventually running out of energy and degenerating into extinction was a pessimistic view of evolution and was unpopular amongst many scientists of the time. Carl H. Eigenmann, an ichthyologist, wrote Cave vertebrates of America: a study in degenerative evolution (1909), in which he concluded that cave evolution was essentially degenerative. The entomologist William Morton Wheeler and the Lamarckian Ernest MacBride (1866–1940) also advocated degenerative evolution. According to MacBride, invertebrates were actually degenerate vertebrates; his argument was based on the idea that "crawling on the seabed was inherently less stimulating than swimming in open waters." Degeneration theory Johann Friedrich Blumenbach and other monogenists such as Georges-Louis Leclerc, Comte de Buffon were believers in the "Degeneration theory" of racial origins. The theory claims that races can degenerate into "primitive" forms. Blumenbach claimed that Adam and Eve were white and that other races came about by degeneration from environmental factors such as the sun and poor diet. Buffon believed that the degeneration could be reversed if proper environmental control was taken and that all contemporary forms of man could revert to the original Caucasian race. Blumenbach claimed Negroid pigmentation arose as a result of the heat of the tropical sun, that cold wind caused the tawny colour of the Eskimos, and that the Chinese were fair-skinned compared to the other Asian stocks because they kept mostly in towns, protected from environmental factors. According to Blumenbach there are five races all belonging to a single species: Caucasian, Mongolian, Ethiopian, American and Malay. Blumenbach however stated: I have allotted the first place to the Caucasian because this stock displays the most beautiful race of men. According to Blumenbach the other races are supposed to have degenerated from the Caucasian ideal stock. Blumenbach denied that his "Degeneration theory" was racist; he also wrote three essays claiming non-white peoples are capable of excelling in arts and sciences, in reaction against racialists of his time who believed they could not. In literature and popular culture Cyril M. Kornbluth's 1951 short story "The Marching Morons" is an example of dysgenic pressure in fiction, describing a man who accidentally ends up in the distant future and discovers that dysgenics has resulted in mass stupidity. Similarly, Mike Judge's 2006 film Idiocracy has the same premise, with the main character the subject of a military hibernation experiment that goes awry, taking him 500 years into the future. While in "The Marching Morons", civilization is kept afloat by a small group of dedicated geniuses, in Idiocracy, voluntary childlessness among high-IQ couples leaves only automated systems to fill that role. The 1998 song "Flagpole Sitta" by Harvey Danger finds lighthearted humor in dysgenics with the lines "Been around the world and found/That only stupid people are breeding/The cretins cloning and feeding/And I don't even own a tv". H. G. 
Wells' 1895 novel, The Time Machine, describes a future world where humanity has degenerated into two distinct branches who have their roots in the class distinctions of Wells' day. Both have sub-human intelligence and other putative dysgenic traits. T. J. Bass's novels Half Past Human and The Godwhale describe humanity becoming cooperative and "low-maintenance" to the detriment of all other traits. The American new wave band Devo derived both their name and overarching philosophy from the concept of "de-evolution" and used social satire and humor to espouse the idea that humanity had actually regressed over time. According to music critic Steve Huey, the band "adapted the theory to fit their view of American society as a rigid, dichotomized instrument of repression ensuring that its members behaved like clones, marching through life with mechanical, assembly-line precision and no tolerance for ambiguity." DC Comics' Aquaman has one of the seven races of Atlantis called The Trench, similar to the Grindylows of British folklore, Cthulhu Mythos' Deep One, Universal Classic Monsters' Gill-man, and Fallout's Mirelurk. They were regressed to survive in the deepest, darkest places at the bottom of ocean trenches where they hide—hence their name—and are photophobic. LEGO's 2009 Bionicle sets include Glatorian and Agori. One of the six tribes is the Sand Tribe, whose Glatorian and Agori are turned into scorpion-like beasts—the Vorox and the Zesk—by their creators, The Great Beings, who are also of the same species as the Glatorian and Agori. Kurt Vonnegut's 1985 novel Galápagos is set a million years in the future, where humans have "devolved" to have much smaller brains. Robert E. Howard, in The Hyborian Age, an essay on his Conan the Barbarian universe, stated that the Atlanteans devolved into "ape-men", and had once been the Picts (distinct from the actual people; his are closely modeled on Algonquian Native Americans). Similarly, Helena Blavatsky, founder of Theosophy, believed, contrary to standard evolutionary theory, that apes had devolved from humans rather than the opposite, through affected people "putting themselves on the animal level". Jonathan Swift's 1726 novel Gulliver's Travels contains a story about the Yahoos, a kind of human-like creature living in a savage, animal-like state in a society in which the Houyhnhnms—descendants of horses—are the dominant species. H.P. Lovecraft's 1924 short story The Rats in the Walls also describes devolved humans. References Creationism Evolutionary biology
Devolution (biology)
[ "Biology" ]
3,404
[ "Evolutionary biology", "Creationism", "Biology theories", "Obsolete biology theories" ]
2,379,726
https://en.wikipedia.org/wiki/Calcification
Calcification is the accumulation of calcium salts in a body tissue. It normally occurs in the formation of bone, but calcium can be deposited abnormally in soft tissue, causing it to harden. Calcifications may be classified according to whether there is a mineral balance or not, and by the location of the calcification. Calcification may also refer to the processes of normal mineral deposition in biological systems, such as the formation of stromatolites or mollusc shells (see Biomineralization). Signs and symptoms Calcification can manifest itself in many ways in the body depending on the location. In the pulpal structure of a tooth, calcification often presents asymptomatically, and is diagnosed as an incidental finding during radiographic interpretation. Individual teeth with calcified pulp will typically respond negatively to vitality testing; teeth with calcified pulp often lack sensation of pain, pressure, and temperature. Causes of soft tissue calcification Calcification of soft tissue (arteries, cartilage, heart valves, etc.) can be caused by vitamin K2 deficiency or by poor calcium absorption due to a high calcium/vitamin D ratio. This can occur with or without a mineral imbalance. A common misconception is that calcification is caused by an excess amount of calcium in the diet. Dietary calcium intake is not associated with accumulation of calcium in soft tissue, and calcification occurs irrespective of the amount of calcium intake. Intake of excessive vitamin D can cause vitamin D poisoning and excessive absorption of calcium from the intestine which, when accompanied by a deficiency of vitamin K (perhaps induced by an anticoagulant), can result in calcification of arteries and other soft tissue. Such metastatic soft tissue calcification is mainly in tissues containing "calcium catchers" such as elastic fibres or mucopolysaccharides. These tissues especially include the lungs (pumice lung) and the aorta. Mineral balance Dystrophic calcification, without a systemic mineral imbalance. Metastatic calcification, a systemic elevation of calcium levels in the blood and all tissues. Forms Calcification can be pathological or a standard part of the aging process. Nearly all adults show calcification of the pineal gland. Location Extraskeletal calcification, e.g. calciphylaxis Brain, e.g. primary familial brain calcification (Fahr's syndrome) Choroid plexus usually in the lateral ventricles Tumor calcification Arthritic bone spurs Kidney stones Gall stones Heterotopic bone Tonsil stones Pulp stone Breast disease In a number of breast pathologies, calcium is often deposited at sites of cell death or in association with secretions or hyalinized stroma, resulting in pathologic calcification. For example, small, irregular, linear calcifications may be seen, via mammography, in a ductal carcinoma-in-situ, producing visible radio-opacities. Arteriosclerotic calcification One of the principal causes of arterial stiffening with age is vascular calcification. Vascular calcification is the deposition of mineral in the form of calcium phosphate salts in the smooth muscle-rich medial layer of large arteries including the aorta. DNA damage, especially oxidative DNA damage, causes accelerated vascular calcification. Vascular calcification could also be linked to the chronic leakage of blood lysates into the vessel wall, since red blood cells have been shown to contain a high concentration of calcium. Diagnosis For diagnosis of vascular calcification, ultrasound and radiography of the affected area are sufficient. 
Treatment Treatment of a high calcium/vitamin D ratio may most easily be accomplished by intake of more vitamin D if vitamin K is normal. Intake of too much vitamin D would be evident from anorexia (loss of appetite) or soft tissue calcification. See also Calcinosis Marine biogenic calcification Monckeberg's arteriosclerosis Pineal gland References Calcium Histopathology Biomineralization
Calcification
[ "Chemistry" ]
853
[ "Histopathology", "Bioinorganic chemistry", "Biomineralization", "Microscopy" ]
2,379,771
https://en.wikipedia.org/wiki/Cadmium%20zinc%20telluride
Cadmium zinc telluride, (CdZnTe) or CZT, is a compound of cadmium, zinc and tellurium or, more strictly speaking, an alloy of cadmium telluride and zinc telluride. A direct bandgap semiconductor, it is used in a variety of applications, including semiconductor radiation detectors, photorefractive gratings, electro-optic modulators, solar cells, and terahertz generation and detection. The band gap varies from approximately 1.4 to 2.2 eV, depending on composition. Characteristics Radiation detectors using CZT can operate in direct-conversion (or photoconductive) mode at room temperature, unlike some other materials (particularly germanium) which require cooling. Their relative advantages include high sensitivity for X-rays and gamma rays, due to the high atomic numbers of Cd and Te, and better energy resolution than scintillator detectors. CZT can be formed into different shapes for different radiation-detecting applications, and a variety of electrode geometries, such as coplanar grids and small pixel detectors, have been developed to provide unipolar (electron-only) operation, thereby improving energy resolution. A 1 cm3 CZT crystal has a sensitivity range of 30 keV to 3 MeV with a 2.5% FWHM energy resolution at 662 keV. Pixelated CZT with a volume of 6 cm3 can achieve 0.71% FWHM energy resolution at 662 keV and perform Compton imaging. See also Scintillator cadmium telluride zinc telluride Telluride (chemistry) References External links National Pollutant Inventory - Cadmium and compounds Cadmium compounds Zinc compounds Tellurides II-VI semiconductors Nonlinear optical materials Terahertz technology Electro-optical materials
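As a hedged illustration of how the composition-dependent band gap mentioned above is often estimated (the endpoint values and the linear, Vegard-like interpolation are assumptions for illustration, not taken from the article; real alloys show some bowing), a short Python sketch:

```python
# Hedged sketch (not from the article): estimate the band gap of Cd(1-x)Zn(x)Te by
# linear (Vegard-like) interpolation between assumed endpoint gaps for CdTe (~1.44 eV)
# and ZnTe (~2.26 eV). Real alloys deviate slightly from linearity (bowing).
def czt_band_gap(x, eg_cdte=1.44, eg_znte=2.26):
    """Approximate room-temperature band gap (eV) for zinc fraction x in [0, 1]."""
    if not 0.0 <= x <= 1.0:
        raise ValueError("zinc fraction x must lie between 0 and 1")
    return (1.0 - x) * eg_cdte + x * eg_znte

# Detector-grade material commonly uses roughly 10% zinc:
print(f"x = 0.10 -> Eg ~ {czt_band_gap(0.10):.2f} eV")
```

The interpolated values stay within the roughly 1.4 to 2.2 eV range quoted in the article.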
Cadmium zinc telluride
[ "Physics", "Chemistry", "Technology", "Engineering" ]
377
[ "Inorganic compounds", "Spectrum (physical sciences)", "Radioactive contamination", "Semiconductor materials", "Electromagnetic spectrum", "Measuring instruments", "Ionising radiation detectors", "II-VI semiconductors", "Terahertz technology" ]
2,379,782
https://en.wikipedia.org/wiki/Integrated%20logistics%20support
Integrated logistics support (ILS) is an approach within systems engineering intended to lower a product's life cycle cost and reduce the demand for logistics by optimizing the maintenance system and easing product support. Although originally developed for military purposes, it is also widely used in commercial customer service organisations. ILS defined In general, ILS plans and directs the identification and development of logistics support and system requirements for military systems, with the goal of creating systems that last longer and require less support, thereby reducing costs and increasing return on investments. ILS therefore addresses these aspects of supportability not only during acquisition, but also throughout the operational life cycle of the system. The impact of ILS is often measured in terms of metrics such as reliability, availability, maintainability and testability (RAMT), and sometimes System Safety (RAMS). ILS is the integrated planning and action of a number of disciplines in concert with one another to assure system availability. The planning of each element of ILS is ideally developed in coordination with the system engineering effort and with each other. Tradeoffs may be required between elements in order to acquire a system that is: affordable (lowest life cycle cost), operable, supportable, sustainable, transportable, and environmentally sound. In some cases, a deliberate process of Logistics Support Analysis will be used to identify tasks within each logistics support element. The most widely accepted list of ILS activities includes: Reliability engineering, maintainability engineering and maintenance (preventive, predictive and corrective) planning Supply (spare part) support and resource acquisition Support and test equipment/equipment support Manpower and personnel Training and training support Technical data/publications Computer resources support Facilities Packaging, handling, storage and transportation Design interface Decisions are documented in a life cycle sustainment plan (LCSP), a Supportability Strategy, or (most commonly) an Integrated Logistics Support Plan (ILSP). ILS planning activities coincide with development of the system acquisition strategy, and the program will be tailored accordingly. A properly executed ILS strategy will ensure that the requirements for each of the elements of ILS are properly planned, resourced, and implemented. These actions will enable the system to achieve the operational readiness levels required by the warfighter at the time of fielding and throughout the life cycle. ILS can also be used for civilian projects, as highlighted by the ASD/AIA ILS Guide. It is considered common practice within some industries - primarily Defence - for ILS practitioners to take a leave of absence to undertake an ILS Sabbatical, furthering their knowledge of the logistics engineering disciplines. ILS Sabbaticals are normally taken in developing nations, allowing the practitioner an insight into sustainment practices in an environment of limited materiel resources. Adoption ILS is a technique introduced by the US Army to ensure that the supportability of an equipment item is considered during its design and development. The technique was adopted by the UK MoD in 1993 and made compulsory for the procurement of the majority of MOD equipment. Influence on Design. 
Integrated Logistic Support will provide important means to identify (as early as possible) reliability issues/problems and can initiate system or part design improvements based on reliability, maintainability, testability or system availability analysis. Design of the Support Solution for minimum cost. Ensuring that the Support Solution considers and integrates the elements considered by ILS. This is discussed fully below. Initial Support Package. These tasks include calculation of requirements for spare parts, special tools, and documentation. Quantities required for a specified initial period are calculated, procured, and delivered to support delivery, installation in some of the cases, and operation of the equipment. The ILS management process facilitates specification, design, development, acquisition, test, fielding, and support of systems. Maintenance planning Maintenance planning begins early in the acquisition process with development of the maintenance concept. It is conducted to evolve and establish requirements and tasks to be accomplished for achieving, restoring, and maintaining the operational capability for the life of the system. Maintenance planning also involves Level Of Repair Analysis (LORA) as a function of the system acquisition process. Maintenance planning will: Define the actions and support necessary to ensure that the system attains the specified system readiness objectives with minimum Life Cycle Cost (LCC). Set up specific criteria for repair, including Built-In Test Equipment (BITE) requirements, testability, reliability, and maintainability; support equipment requirements; automatic test equipment; and manpower skills and facility requirements. State specific maintenance tasks to be performed on the system. Define actions and support required for fielding and marketing the system. Address warranty considerations. The maintenance concept must ensure prudent use of manpower and resources. When formulating the maintenance concept, the impact of the proposed work environment on the health and safety of maintenance personnel must be considered. Conduct a LORA to optimize the support system, in terms of LCC, readiness objectives, design for discard, maintenance task distribution, support equipment and ATE, and manpower and personnel requirements. Minimize the use of hazardous materials and the generation of waste. Supply support Supply support encompasses all management actions, procedures, and techniques used to determine requirements to: Acquire support items and spare parts. Catalog the items. Receive the items. Store and warehouse the items. Transfer the items to where they are needed. Issue the items. Dispose of secondary items. Provide for initial support of the system. Acquire, distribute, and replenish inventory. Support and test equipment Support and test equipment includes all equipment, mobile and fixed, that is required to perform the support functions, except that equipment which is an integral part of the system. Support equipment categories include: Handling and Maintenance Equipment. Tools (hand tools as well as power tools). Metrology and measurement devices. Calibration equipment. Test equipment. Automatic test equipment. Support equipment for on- and off-equipment maintenance. Special inspection equipment and depot maintenance plant equipment, which includes all equipment and tools required to assemble, disassemble, test, maintain, and support the production and/or depot repair of end items or components. 
This also encompasses planning and acquisition of logistic support for this equipment. Manpower and personnel Manpower and personnel involves identification and acquisition of personnel with skills and grades required to operate and maintain a system over its lifetime. Manpower requirements are developed and personnel assignments are made to meet support demands throughout the life cycle of the system. Manpower requirements are based on related ILS elements and other considerations. Human factors engineering (HFE) or behavioral research is frequently applied to ensure a good man-machine interface. Manpower requirements are predicated on accomplishing the logistics support mission in the most efficient and economical way. This element includes requirements during the planning and decision process to optimize numbers, skills, and positions. This area considers: Man-machine and environmental interface Special skills Human factors considerations during the planning and decision process Training and training devices Training and training devices support encompasses the processes, procedures, techniques, training devices, and equipment used to train personnel to operate and support a system. This element defines qualitative and quantitative requirements for the training of operating and support personnel throughout the life cycle of the system. It includes requirements for: Competencies management Factory training Instructor and key personnel training New equipment training team Resident training Sustainment training User training HAZMAT disposal and safe procedures training Embedded training devices, features, and components are designed and built into a specific system to provide training or assistance in the use of the system. (One example of this is the HELP files of many software programs.) The design, development, delivery, installation, and logistic support of required embedded training features, mockups, simulators, and training aids are also included. Technical data Technical Data and Technical Publications consists of scientific or technical information necessary to translate system requirements into discrete engineering and logistic support documentation. Technical data is used in the development of repair manuals, maintenance manuals, user manuals, and other documents that are used to operate or support the system. Technical data includes, but may not be limited to: Technical manuals Technical and supply bulletins Transportability guidance technical manuals Maintenance expenditure limits and calibration procedures Repair parts and tools lists Maintenance allocation charts Corrective maintenance instructions Preventive maintenance and Predictive maintenance instructions Drawings/specifications/technical data packages Software documentation Provisioning documentation Depot maintenance work requirements Identification lists Component lists Product support data Flight safety critical parts list for aircraft Lifting and tie down pamphlet/references Hazardous Material documentation Computer resources support Computer Resources Support includes the facilities, hardware, software, documentation, manpower, and personnel needed to operate and support computer systems and the software within those systems. Computer resources include both stand-alone and embedded systems. 
This element is usually planned, developed, implemented, and monitored by a Computer Resources Working Group (CRWG) or Computer Resources Integrated Product Team (CR-IPT) that documents the approach and tracks progress via a Computer Resources Life-Cycle Management Plan (CRLCMP). Developers will need to ensure that planning actions and strategies contained in the ILSP and CRLCMP are complementary and that computer resources support for the operational software, ATE software, and support software is available where and when needed. Packaging, handling, storage, and transportation (PHS&T) This element includes resources and procedures to ensure that all equipment and support items are preserved, packaged, packed, marked, handled, transported, and stored properly for short- and long-term requirements. It includes material-handling equipment and packaging, handling and storage requirements, and pre-positioning of material and parts. It also includes preservation and packaging level requirements and storage requirements (for example, sensitive, proprietary, and controlled items). This element includes planning and programming the details associated with movement of the system in its shipping configuration to the ultimate destination via transportation modes and networks available and authorized for use. It further encompasses establishment of critical engineering design parameters and constraints (e.g., width, length, height, component and system rating, and weight) that must be considered during system development. Customs requirements, air shipping requirements, rail shipping requirements, container considerations, special movement precautions, mobility, and transportation asset impact of the shipping mode or the contract shipper must be carefully assessed. PHS&T planning must consider: System constraints (such as design specifications, item configuration, and safety precautions for hazardous material) Special security requirements Geographic and environmental restrictions Special handling equipment and procedures Impact on spare or repair parts storage requirements Emerging PHS&T technologies, methods, or procedures and resource-intensive PHS&T procedures Environmental impacts and constraints Facilities The Facilities logistics element is composed of a variety of planning activities, all of which are directed toward ensuring that all required permanent or semi-permanent operating and support facilities (for instance, training, field and depot maintenance, storage, operational, and testing) are available concurrently with system fielding. Planning must be comprehensive and include the need for new construction as well as modifications to existing facilities. It also includes studies to define and establish impacts on life cycle cost, funding requirements, facility locations and improvements, space requirements, environmental impacts, duration or frequency of use, safety and health standards requirements, and security restrictions. Also included are any utility requirements, for both fixed and mobile facilities, with emphasis on limiting requirements of scarce or unique resources. Design interface Design interface is the relationship of logistics-related design parameters of the system to its projected or actual support resource requirements. These design parameters are expressed in operational terms rather than as inherent values and specifically relate to system requirements and support costs of the system. 
Programs such as "design for testability" and "design for discard" must be considered during system design. The basic requirements that need to be considered as part of design interface include: Reliability Maintainability Standardization Interoperability Safety Security Usability Environmental and HAZMAT Privacy, particularly for computer systems Legal See also Reliability, availability and serviceability (computer hardware) References The references below cover many relevant standards and handbooks related to Integrated logistics support. Standards Army Regulation 700-127 Integrated Logistics Support, 27 September 2007 British Defence Standard 00-600 Integrated Logistics Support for MOD Projects Federal Standard 1037C in support of MIL-STD-188 IEEE 1332, IEEE Standard Reliability Program for the Development and Production of Electronic Systems and Equipment, Institute of Electrical and Electronics Engineers. MIL-STD-785, Reliability Program for Systems and Equipment Development and Production, U.S. Department of Defense. MIL-STD 1388-1A Logistic Support Analysis (LSA) MIL-STD 1388-2B Requirements for a Logistic Support Analysis Record MIL-STD-1629A, Procedures for Performing a Failure Mode, Effects and Criticality Analysis (FMECA) MIL-STD-2173, Reliability Centered Maintenance Requirements, U.S. Department of Defense (superseded by NAVAIR 00-25-403) OPNAVINST 4130.2A DEF(AUST)5691 Logistic Support Analysis DEF(AUST)5692 Logistic Support Analysis Record Requirements for the Australian Defence Organisation Specifications - not standards The ASD/AIA Suite of S-Series ILS specifications SX000i - International guide for integrated logistic support (under development) S1000D - International specification for technical publications using a common source database S2000M - International specification for materiel management - Integrated data processing S3000L - International specification for Logistics Support Analysis - LSA S4000P - International specification for developing and continuously improving preventive maintenance S5000F - International specification for operational and maintenance data feedback (under development) S6000T - International specification for training needs analysis - TNA (definition on-going) SX001G - Glossary for the Suite of S-specifications SX002D - Common Data Model AECMA 1000D (Technical Publications) - Refer to S1000D above AECMA 2000M (initial provisioning) - Refer to S2000M above DI-ILSS-80095, Data Item Description: Integrated Logistics Support Plan (ILSP) (17 Dec 1985) Handbooks Integrated Logistics Support Handbook, third edition - James V. Jones MIL-HDBK-217F, Reliability Prediction of Electronic Equipment, U.S. Department of Defense . MIL-HDBK-338B, Electronic Reliability Design Handbook, U.S. Department of Defense. MIL-HDBK-781A, Reliability Test Methods, Plans, and Environments for Engineering Development, Qualification, and Production, U.S. Department of Defense. NASA Probabilistic Risk Assessment Handbook NASA Fault Tree Assessment handbook MIL-HDBK-2155, Failure Reporting, Analysis and Corrective Action Taken, U.S. Department of Defense MIL-HDBK-502A, Product Support Analysis, U.S. Department of Defense Resources Systems Assessments, Integrated Logistics and COOP Support Services, 26 August 2008 AeroSpace and Defence (ASD) Industries Association of Europe Integrated Logistics Support, The Design Engineering Link by Walter Finkelstein, J.A. Richard Guertin, 1989, Article References Military logistics Systems engineering
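The supply support and maintenance planning elements described above both involve sizing initial spare-part quantities. The article does not prescribe a particular method; the following Python sketch shows one common textbook approach, a Poisson demand model, purely as an illustrative assumption (the function name and example numbers are hypothetical).

```python
# Hedged sketch of one common initial-sparing calculation (a Poisson demand model).
# The method, function name and example numbers are assumptions for illustration;
# the article does not prescribe a particular algorithm.
import math

def spares_required(failure_rate_per_hour, turnaround_hours, units_in_service,
                    target_confidence=0.95):
    """Smallest stock level s with P(demand <= s) >= target_confidence, where demand
    during the repair turnaround is Poisson with mean lambda = rate * time * units."""
    lam = failure_rate_per_hour * turnaround_hours * units_in_service
    s, cumulative = 0, math.exp(-lam)                 # P(demand = 0)
    while cumulative < target_confidence:
        s += 1
        cumulative += math.exp(-lam) * lam ** s / math.factorial(s)
    return s

# Example: 200 fielded units, one failure per 10,000 operating hours,
# a 500-hour repair pipeline, 95% probability of a spare being on hand.
print(spares_required(1e-4, 500, 200))                # lambda = 10 -> about 15 spares
```

In practice such calculations are refined through the Logistics Support Analysis and LORA activities discussed above rather than applied in isolation.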
Integrated logistics support
[ "Engineering" ]
3,059
[ "Systems engineering" ]
2,379,792
https://en.wikipedia.org/wiki/Giant%20component
In network theory, a giant component is a connected component of a given random graph that contains a significant fraction of the entire graph's vertices. More precisely, in graphs drawn randomly from a probability distribution over arbitrarily large graphs, a giant component is a connected component whose fraction of the overall number of vertices is bounded away from zero. In sufficiently dense graphs distributed according to the Erdős–Rényi model, a giant component exists with high probability. Giant component in Erdős–Rényi model Giant components are a prominent feature of the Erdős–Rényi model (ER) of random graphs, in which each possible edge connecting pairs of a given set of n vertices is present, independently of the other edges, with probability p. In this model, if p ≤ (1 − ε)/n for any constant ε > 0, then with high probability (in the limit as n goes to infinity) all connected components of the graph have size O(log n), and there is no giant component. However, for p ≥ (1 + ε)/n there is with high probability a single giant component, with all other components having size O(log n). For p = 1/n, intermediate between these two possibilities, the number of vertices in the largest component of the graph is with high probability proportional to n^(2/3). The giant component is also important in percolation theory. When a fraction 1 − p of nodes is removed randomly from an ER network of average degree ⟨k⟩, there exists a critical threshold, p_c = 1/⟨k⟩. Above p_c there exists a giant component (largest cluster) of relative size P∞. P∞ fulfills P∞ = p[1 − exp(−⟨k⟩P∞)]. For p < p_c the solution of this equation is P∞ = 0, i.e., there is no giant component. At p = p_c, the distribution of cluster sizes behaves as a power law, n(s) ~ s^(−5/2), which is a feature of a phase transition. Alternatively, if one adds randomly selected edges one at a time, starting with an empty graph, then it is not until approximately n/2 edges have been added that the graph contains a large component, and soon after that the component becomes giant. More precisely, when t edges have been added, for values of t close to but larger than n/2, the size of the giant component is approximately 4t − 2n. However, according to the coupon collector's problem, about (n/2) log n edges are needed in order to have high probability that the whole random graph is connected. Graphs with arbitrary degree distributions A similar sharp threshold between parameters that lead to graphs with all components small and parameters that lead to a giant component also occurs in tree-like random graphs with non-uniform degree distributions. The degree distribution does not define a graph uniquely. However, under the assumption that in all respects other than their degree distribution, the graphs are treated as entirely random, many results on finite/infinite-component sizes are known. In this model, the existence of the giant component depends only on the first two (mixed) moments of the degree distribution. Let a randomly chosen vertex have degree k; then the giant component exists if and only if ⟨k²⟩ − 2⟨k⟩ > 0 (equivalently, E[k(k − 2)] > 0). This is known as the Molloy and Reed condition. The first moment, ⟨k⟩, is the mean degree of the network. In general, the n-th moment is defined as ⟨k^n⟩ = Σ_k k^n P(k). When there is no giant component, the expected size of the small component can also be determined by the first and second moments and it is ⟨s⟩ = 1 + ⟨k⟩² / (2⟨k⟩ − ⟨k²⟩). However, when there is a giant component, the size of the giant component is trickier to evaluate. Criteria for giant component existence in directed and undirected configuration graphs Similar expressions are also valid for directed graphs, in which case the degree distribution is two-dimensional. There are three types of connected components in directed graphs. 
For a randomly chosen vertex: out-component is a set of vertices that can be reached by recursively following all out-edges forward; in-component is a set of vertices that can be reached by recursively following all in-edges backward; weak component is a set of vertices that can be reached by recursively following all edges regardless of their direction. Let a randomly chosen vertex have k_in in-edges and k_out out-edges. By definition, the average number of in- and out-edges coincides, so that ⟨k_in⟩ = ⟨k_out⟩ = ⟨k⟩. If G_0(x) = Σ_k P(k) x^k is the generating function of the degree distribution P(k) for an undirected network, then G_1(x) can be defined as G_1(x) = G_0′(x)/G_0′(1). For directed networks, the generating function assigned to the joint probability distribution P(k_in, k_out) can be written with two variables x and y as 𝒢(x, y) = Σ_{k_in, k_out} P(k_in, k_out) x^{k_in} y^{k_out}; from it the analogous functions for the in- and out-degree distributions are obtained by setting one variable to 1 and, for the excess-degree versions, differentiating with respect to the other variable and normalizing by the mean degree, in analogy with the undirected case. The criteria for giant component existence in directed and undirected random graphs are given in the table below: See also References Graph connectivity Random graphs
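To make the thresholds above concrete, here is a minimal, hedged Python sketch (not part of the article; it assumes the networkx library and the variable names are illustrative). It samples Erdős–Rényi graphs around the critical mean degree c = np = 1 and checks the Molloy–Reed condition on a degree sequence.

```python
# A minimal sketch (not from the article), assuming the networkx library:
# it samples Erdos-Renyi graphs around the critical mean degree c = n*p = 1
# and checks the Molloy-Reed criterion <k^2> - 2<k> > 0 on a degree sequence.
import networkx as nx

n = 10_000
for c in (0.5, 1.0, 1.5, 3.0):                      # mean degree c = n * p
    G = nx.gnp_random_graph(n, c / n, seed=1)
    giant = max(nx.connected_components(G), key=len)
    print(f"c = {c}: largest component holds {len(giant) / n:.3f} of the vertices")

def molloy_reed(degrees):
    """Return True when a giant component is expected: <k^2> - 2<k> > 0."""
    k1 = sum(degrees) / len(degrees)
    k2 = sum(d * d for d in degrees) / len(degrees)
    return k2 - 2 * k1 > 0

print(molloy_reed([d for _, d in G.degree()]))        # degrees of the last graph sampled
```

Below the critical mean degree the largest component fraction stays near zero, and above it a single component containing a constant fraction of the vertices appears, matching the threshold behaviour described above.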
Giant component
[ "Mathematics" ]
873
[ "Mathematical relations", "Graph connectivity", "Graph theory", "Random graphs" ]
2,380,748
https://en.wikipedia.org/wiki/Suction%20pressure
Suction pressure is also called Diffusion Pressure Deficit. If some solute is dissolved in a solvent, its diffusion pressure decreases. The difference between the diffusion pressure of the pure solvent and the solution is called diffusion pressure deficit (DPD). It is a reduction in the diffusion pressure of solvent in the solution over its pure state due to the presence of solutes in it and forces opposing diffusion. When a plant cell is placed in a hypotonic solution, water enters the cell by endosmosis and as a result turgor pressure (TP) develops in the cell. The cell membrane becomes stretched and the osmotic pressure (OP) of the cell decreases. As the cell absorbs more and more water its turgor pressure increases and osmotic pressure decreases. When a cell is fully turgid, its OP is equal to TP and DPD is zero. Turgid cells cannot absorb any more water. Thus, with reference to plant cells, the DPD can be described as the actual thirst of a cell for water and can be expressed as: DPD = OP − TP. Thus it is DPD that represents the water-absorbing ability of a cell; it is also called suction force (SF) or suction pressure (SP). The actual pressure with which a cell absorbs water is called "suction pressure". Factors affecting DPD DPD is directly proportional to the height of the plant, tree or organism. DPD is governed by two factors, i.e. turgor pressure and osmotic pressure. Turgor pressure can be denoted as wall pressure in some cases. DPD is directly proportional to the concentration of the solution. DPD decreases with dilution of the solution. History The term diffusion pressure deficit (DPD) was coined by B.S. Meyer in 1938. Originally DPD was described as suction pressure by German botanist Otto Renner in 1915. Refrigeration In refrigeration and air conditioning systems, the suction pressure (also called the low-side pressure) is the intake pressure generated by the system compressor while operating. The suction pressure, along with the suction temperature and the wet bulb temperature of the discharge air, is used to determine the correct refrigerant charge in a system. Further reading The measurement of Diffusion Pressure Deficit in plants by the method of Vapour Equilibrium (By R. O. SLATYER, 1958) References Diffusion
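As a hedged worked example of the relation above (the numerical values are hypothetical, chosen only for illustration):

```latex
% Hypothetical values, chosen only to illustrate the relation DPD = OP - TP.
\[
  \mathrm{DPD} = \mathrm{OP} - \mathrm{TP} = 10\ \mathrm{atm} - 4\ \mathrm{atm} = 6\ \mathrm{atm}
\]
% At full turgidity TP = OP, so DPD = 0 and the cell can absorb no more water.
```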
Suction pressure
[ "Physics", "Chemistry" ]
500
[ "Transport phenomena", "Physical phenomena", "Diffusion" ]
2,380,765
https://en.wikipedia.org/wiki/Discharge%20pressure
Discharge pressure (also called high side pressure or head pressure) is the pressure generated on the output side of a gas compressor in a refrigeration or air conditioning system. The discharge pressure is affected by several factors: size and speed of the condenser fan, condition and cleanliness of the condenser coil, and the size of the discharge line. An extremely high discharge pressure coupled with an extremely low suction pressure is an indicator of a refrigerant restriction. References Cooling technology Hydraulics Hydrostatics Pressure
Discharge pressure
[ "Physics", "Chemistry" ]
108
[ "Scalar physical quantities", "Mechanical quantities", "Physical quantities", "Pressure", "Physical systems", "Hydraulics", "Wikipedia categories named after physical quantities", "Fluid dynamics" ]
2,380,869
https://en.wikipedia.org/wiki/Ferrite%20%28magnet%29
A ferrite is one of a family of iron oxide-containing magnetic ceramic materials. They are ferrimagnetic, meaning they are attracted by magnetic fields and can be magnetized to become permanent magnets. Unlike many ferromagnetic materials, most ferrites are not electrically conductive, making them useful in applications like magnetic cores for transformers to suppress eddy currents. Ferrites can be divided into two groups based on their magnetic coercivity, their resistance to being demagnetized: "Hard" ferrites have high coercivity, so are difficult to demagnetize. They are used to make permanent magnets for applications such as refrigerator magnets, loudspeakers, and small electric motors. "Soft" ferrites have low coercivity, so they easily change their magnetization and act as conductors of magnetic fields. They are used in the electronics industry to make efficient magnetic cores called ferrite cores for high-frequency inductors, transformers and antennas, and in various microwave components. Ferrite compounds are extremely low cost, being made mostly of iron oxide, and have excellent corrosion resistance. Yogoro Kato and Takeshi Takei of the Tokyo Institute of Technology synthesized the first ferrite compounds in 1930. Composition, structure, and properties Ferrites are usually ferrimagnetic ceramic compounds derived from iron oxides, with either a cubic or hexagonal crystal structure. Like most of the other ceramics, ferrites are hard, brittle, and poor conductors of electricity. They are typically composed of α-iron(III) oxide (e.g. hematite) with one or more additional metallic element oxides, usually with an approximately stoichiometric formula of MO·Fe2O3, where M is a divalent metal such as Fe(II), as in the common mineral magnetite, Fe(II)Fe(III)2O4. Above 585 °C Fe(II)Fe(III)2O4 transforms into the non-magnetic gamma phase. Fe(II)Fe(III)2O4 is commonly seen as the black iron oxide coating the surface of cast-iron cookware. The other pattern is M·Fe(III)2O3, where M is another metallic element. Common, naturally occurring ferrites (typically members of the spinel group) include those with nickel (NiFe2O4) which occurs as the mineral trevorite, magnesium-containing magnesioferrite (MgFe2O4), cobalt (cobalt ferrite), or manganese (MnFe2O4) which occurs naturally as the mineral jacobsite. Less often, bismuth, strontium, zinc (as found in franklinite), aluminum, yttrium, or barium ferrites are used. In addition, more complex synthetic alloys are often used for specific applications. Many ferrites adopt the spinel chemical structure with the formula AB2O4, where A and B represent various metal cations, one of which is usually iron (Fe). Spinel ferrites usually adopt a crystal motif consisting of cubic close-packed (fcc) oxides (O) with A cations occupying one eighth of the tetrahedral holes, and B cations occupying half of the octahedral holes, i.e., the normal spinel arrangement (A)[B2]O4. An exception exists for ɣ-Fe2O3, which has a spinel crystalline form and is widely used as a magnetic recording substrate. However the structure is not an ordinary spinel structure, but rather the inverse spinel structure: one eighth of the tetrahedral holes are occupied by B cations, one fourth of the octahedral sites are occupied by A cations, and the other one fourth by B cations. It is also possible to have mixed structure spinel ferrites with formula [M(1−δ)Fe(δ)][M(δ)Fe(2−δ)]O4, where δ is the degree of inversion. 
The magnetic material known as "Zn Fe" has the formula ZnFe2O4, with Fe3+ occupying the octahedral sites and Zn2+ occupying the tetrahedral sites; it is an example of a normal structure spinel ferrite. Some ferrites adopt a hexagonal crystal structure, like the barium and strontium ferrites BaFe12O19 and SrFe12O19. In terms of their magnetic properties, the different ferrites are often classified as "soft", "semi-hard" or "hard", which refers to their low or high magnetic coercivity, as follows. Soft ferrites Ferrites that are used in transformer or electromagnetic cores contain nickel, zinc, and/or manganese compounds. Soft ferrites are not suitable for making permanent magnets. They have high magnetic permeability so they conduct magnetic fields and are attracted to magnets, but when the external magnetic field is removed, the remanent magnetization does not tend to persist. This is due to their low coercivity. The low coercivity also means the material's magnetization can easily reverse direction without dissipating much energy (hysteresis losses), while the material's high resistivity prevents eddy currents in the core, another source of energy loss. Because of their comparatively low core losses at high frequencies, they are extensively used in the cores of RF transformers and inductors in applications such as switched-mode power supplies and loopstick antennas used in AM radios. The most common soft ferrites are: Manganese-zinc ferrite "Mn Zn", with the formula MnaZn(1−a)Fe2O4. Mn Zn ferrites have higher permeability and saturation induction than Ni Zn. Nickel-zinc ferrite "Ni Zn", with the formula NiaZn(1−a)Fe2O4. Ni Zn ferrites exhibit higher resistivity than Mn Zn, and are therefore more suitable for frequencies above 1 MHz. For use with frequencies above 0.5 MHz but below 5 MHz, Mn Zn ferrites are used; above that, Ni Zn is the usual choice. The exception is with common mode inductors, where the threshold of choice is at 70 MHz. Semi-hard ferrites Cobalt ferrite is in between soft and hard magnetic materials and is usually classified as a semi-hard material. It is mainly used in magnetostrictive applications like sensors and actuators thanks to its high saturation magnetostriction (~200 ppm). Cobalt ferrite also has the benefit of being rare-earth free, which makes it a good substitute for Terfenol-D. Moreover, cobalt ferrite's magnetostrictive properties can be tuned by inducing a magnetic uniaxial anisotropy. This can be done by magnetic annealing, magnetic field assisted compaction, or reaction under uniaxial pressure. This last solution has the advantage of being very fast (about 20 minutes) thanks to the use of spark plasma sintering. The induced magnetic anisotropy in cobalt ferrite is also beneficial for enhancing the magnetoelectric effect in composites. Hard ferrites In contrast, permanent ferrite magnets are made of hard ferrites, which have a high coercivity and high remanence after magnetization. Iron oxide and barium carbonate or strontium carbonate are used in the manufacturing of hard ferrite magnets. The high coercivity means the materials are very resistant to becoming demagnetized, an essential characteristic for a permanent magnet. They also have high magnetic permeability. These so-called ceramic magnets are cheap, and are widely used in household products such as refrigerator magnets. The maximum magnetic field is about 0.35 tesla and the magnetic field strength is about 30–160 kiloampere turns per meter (400–2000 oersteds). The density of ferrite magnets is about 5 g/cm3. 
The most common hard ferrites are: Strontium ferrite (SrFe12O19), used in small electric motors, microwave devices, recording media, magneto-optic media, telecommunications, and the electronics industry. Strontium hexaferrite (SrFe12O19) is well known for its high coercivity due to its magnetocrystalline anisotropy. It has been widely used in industrial applications as a permanent magnet and, because it can be powdered and formed easily, it is finding applications in micro- and nano-type systems such as biomarkers, biodiagnostics and biosensors. Barium ferrite (BaFe12O19), a common material for permanent magnet applications. Barium ferrites are robust ceramics that are generally stable to moisture and corrosion-resistant. They are used in e.g. loudspeaker magnets and as a medium for magnetic recording, e.g. on magnetic stripe cards. Production Ferrites are produced by heating a mixture of the oxides of the constituent metals at high temperatures, as shown in this idealized equation: Fe2O3 + ZnO → ZnFe2O4 In some cases, the mixture of finely-powdered precursors is pressed into a mold. For barium and strontium ferrites, these metals are typically supplied as their carbonates, BaCO3 or SrCO3. During the heating process, these carbonates undergo calcination: MCO3 → MO + CO2 After this step, the two oxides combine to give the ferrite. The resulting mixture of oxides undergoes sintering. Processing Having obtained the ferrite, the cooled product is milled to particles smaller than 2 μm, sufficiently small that each particle consists of a single magnetic domain. Next the powder is pressed into a shape, dried, and re-sintered. The shaping may be performed in an external magnetic field, in order to achieve a preferred orientation of the particles (anisotropy). Small and geometrically easy shapes may be produced with dry pressing. However, in such a process small particles may agglomerate and lead to poorer magnetic properties compared to the wet pressing process. Direct calcination and sintering without re-milling is possible as well but leads to poor magnetic properties. Electromagnets are pre-sintered as well (pre-reaction), milled and pressed. However, the sintering takes place in a specific atmosphere, for instance one with an oxygen shortage. The chemical composition and especially the structure vary strongly between the precursor and the sintered product. To allow efficient stacking of product in the furnace during sintering and prevent parts sticking together, many manufacturers separate ware using ceramic powder separator sheets. These sheets are available in various materials such as alumina, zirconia and magnesia. They are also available in fine, medium and coarse particle sizes. By matching the material and particle size to the ware being sintered, surface damage and contamination can be reduced while maximizing furnace loading. Uses Ferrite cores are used in electronic inductors, transformers, and electromagnets where the high electrical resistance of the ferrite leads to very low eddy current losses. Ferrites are also found as a lump in a computer cable, called a ferrite bead, which helps to prevent high frequency electrical noise (radio frequency interference) from exiting or entering the equipment; these types of ferrites are made with lossy materials that not only block (reflect) but also absorb the unwanted higher-frequency energy and dissipate it as heat. Early computer memories stored data in the residual magnetic fields of hard ferrite cores, which were assembled into arrays of core memory. 
Ferrite powders are used in the coatings of magnetic recording tapes. Ferrite particles are also used as a component of radar-absorbing materials or coatings used in stealth aircraft and in the absorption tiles lining the rooms used for electromagnetic compatibility measurements. Most common audio magnets, including those used in loudspeakers and electromagnetic instrument pickups, are ferrite magnets. Except for certain "vintage" products, ferrite magnets have largely displaced the more expensive Alnico magnets in these applications. In particular, for hard hexaferrites today the most common uses are still as permanent magnets in refrigerator seal gaskets, microphones and loudspeakers, small motors for cordless appliances and in automobile applications. Ferrite magnets find applications in electric power steering systems and automotive sensors due to their cost-effectiveness and corrosion resistance. Ferrite magnets are known for their high magnetic permeability and low electrical conductivity, making them suitable for high-frequency applications. In electric power steering systems, they provide the necessary magnetic field for efficient motor operation, contributing to the system's overall performance and reliability. Automotive sensors utilize ferrite magnets for accurate detection and measurement of various parameters, such as position, speed, and fluid levels. Because ceramic ferrite magnets produce weaker magnetic fields than superconducting magnets, they are sometimes used in low-field or open MRI systems. These magnets are favored in certain cases due to their lower cost, stable magnetic field, and ability to function without the need for complex cooling systems. Ferrite nanoparticles exhibit superparamagnetic properties. History Yogoro Kato and Takeshi Takei of the Tokyo Institute of Technology synthesized the first ferrite compounds in 1930. This led to the founding of TDK Corporation in 1935, to manufacture the material. Barium hexaferrite (BaO•6Fe2O3) was discovered in 1950 at the Philips Natuurkundig Laboratorium (Philips Physics Laboratory). The discovery was somewhat accidental, due to a mistake by an assistant who was supposed to be preparing a sample of hexagonal lanthanum ferrite for a team investigating its use as a semiconductor material. On discovering that it was actually a magnetic material, and confirming its structure by X-ray crystallography, they passed it on to the magnetic research group. Barium hexaferrite has both high coercivity (170 kA/m) and low raw material costs. It was developed as a product by Philips Industries (Netherlands) and from 1952 was marketed under the trade name Ferroxdure. The low price and good performance led to a rapid increase in the use of permanent magnets. In the 1960s Philips developed strontium hexaferrite (SrO•6Fe2O3), with better properties than barium hexaferrite. Barium and strontium hexaferrite dominate the market due to their low costs. Other materials have been found with improved properties. BaO•2(FeO)•8(Fe2O3) came in 1980, and Ba2ZnFe18O23 came in 1991. See also Ferromagnetic material properties Cobalt Ferrite References External links International Magnetics Association What are the bumps at the end of computer cables? 
Sources MMPA 0100-00, Standard Specifications for Permanent Magnet Materials Meeldijk, Victor Electronic Components: Selection and Application Guidelines, 1997 Wiley Ott, Henry Noise Reduction Techniques in Electronic Systems 1988 Wiley Luecke, Gerald and others General Radiotelephone Operator License Plus Radar Endorsement 2004, Master Pub. Bartlett, Bruce and others Practical Recording Techniques 2005 Focal Press Schaller, George E. Ferrite Processing & Effects on Material Performance Ceramic materials Ferromagnetic materials Types of magnets Loudspeakers Ferrites
Ferrite (magnet)
[ "Physics", "Engineering" ]
3,171
[ "Ferromagnetic materials", "Materials", "Ceramic materials", "Ceramic engineering", "Matter" ]
2,381,942
https://en.wikipedia.org/wiki/Larock%20indole%20synthesis
The Larock indole synthesis is a heteroannulation reaction that uses palladium as a catalyst to synthesize indoles from an ortho-iodoaniline and a disubstituted alkyne. It is also known as Larock heteroannulation. The reaction is extremely versatile and can be used to produce varying types of indoles. Larock indole synthesis was first proposed by Richard C. Larock in 1991 at Iowa State University. Overall reaction The reaction usually occurs with an o-iodoaniline or its derivatives, 2–5 equivalents of an alkyne, palladium(II) (PdII), an excess of sodium or potassium carbonate base, PPh3, and 1 equivalent of LiCl or n-Bu4NCl. N-methyl, N-acetyl, and N-tosyl derivatives of ortho-iodoanilines have been shown to be the most successful anilines that can be used to produce good to excellent yields. Reagents and optimal conditions Chlorides Either LiCl or n-Bu4NCl is used depending on the reaction conditions, but LiCl appears to be the more effective of the two in the Larock indole annulation. The stoichiometry of LiCl is also considerably important, as more than 1 equivalent of LiCl will slow the rate of reaction and lower the overall yield. Bases Bases other than sodium or potassium carbonate have been used to produce a good overall yield of the annulation reaction. For example, KOAc can be used with 1 equivalent of LiCl. However, the reaction using KOAc must be run at 120 °C to reach completion in a reasonable time. In contrast, K2CO3 can be used at 100 °C. Alkynes The Larock indole synthesis is a flexible reaction partly due to the variety of substituted alkynes that can be used in the annulation reaction. In particular, alkynes with substituents including alkyls, aryls, alkenyls, hydroxyls, and silyls have been successfully used. However, bulkier tertiary alkyl or trimethylsilyl groups have been shown to provide a higher yield. The annulation reaction will also proceed more efficiently when 2–5 equivalents of an alkyne are used. Less than two equivalents appear to create suboptimal conditions for the reaction. PPh3 as a catalyst Initially, 5 mol% of PPh3 was used in the reaction as a catalyst. However, later experiments have shown that PPh3 does not significantly improve the overall yield and is not necessary. Reaction mechanism The Larock indole synthesis proceeds via the following intermediate steps: Pd(OAc)2 is reduced to Pd(0). A coordination of the chloride occurs to form a chloride-ligated zerovalent palladium. The o-iodoaniline undergoes oxidative addition to the palladium, giving a Pd(II) species. The alkyne coordinates to the Pd(II) by ligand exchange. A migratory insertion causes the alkyne to undergo regioselective syn-insertion into the aryl-palladium bond. Regioselectivity is determined during this step. The nitrogen displaces the halide in the resulting vinylic palladium intermediate to form a six-membered, palladium-containing heterocycle. The Pd(II) center undergoes a reductive elimination to form the indole and regenerate Pd(0), which can then be recycled into the catalytic indole process. The carbopalladation step is regioselective when unsymmetrical alkynes are used. Although it was previously believed that the alkyne is inserted with the less sterically hindering R-group adjacent to the arylpalladium, Larock et al. observed that the larger, more sterically hindering R-group is inserted next to the arylpalladium. 
They suggest that the driving force of the alkyne insertion may be the steric hindrance present in the developing carbon-carbon bond and the orientation of the alkyne prior to syn-insertion of the alkyne into the aryl palladium bond. Alkyne insertion occurs so that the large substituent on the alkyne avoids steric strain from the short developing carbon-carbon bond by interacting with the longer carbon-palladium bond. Modifications and variations o-Bromoanilines or o-chloroanilines do not undergo Larock indole synthesis. However, researchers from Boehringer-Ingelheim were able to successfully use both o-bromoanilines and o-chloroanilines to form indoles by using N-methyl-2-pyrrolidone (NMP) as the solvent with 1,1'-bis(di-tert-butylphosphino)ferrocene as the palladium ligand. o-Bromoanilines and o-chloroanilines are more readily available and more cost-effective than o-iodoaniline in the Larock indole synthesis. Monguchi et al. also derived 2- and 2,3-substituted indoles without using LiCl. The optimized indole reaction uses 10% Pd/C (3.0 mol%) with 1.1 equivalents of NaOAc, and NMP at 110–130 °C. Monguchi et al. state that their optimized condition of the Larock indole synthesis without LiCl is a milder, more environmentally benign, and more efficient strategy for producing indoles. Applications Indoles are one of the most prevalent heterocyclic structures found in biological processes, so the production of indole derivatives is important in a variety of fields. Nishikawa et al. derived iso-tryptophan by using Larock indole synthesis with pre-synthesized α-C-glucosylpropargyl glycine and o-iodo-tosylanilide. This reaction produced a product with the reverse regioselectivity of the normal Larock indole synthesis. The larger substituent was placed adjacent to the forming carbon-carbon bond, rather than the carbon-palladium bond. The explanation for the reverse regioselectivity which produced the iso-tryptophan is unknown. Optically active tryptophan which adheres to the regioselectivity of the Larock indole synthesis can also be synthesized using o-iodoaniline with a propargyl-substituted bislactim ethyl ether. The propargyl-substituted bislactim ethyl ether is generated by treating the Schöllkopf chiral auxiliary bislactim ether with n-BuLi, THF, and 3-halo-1-(trimethylsilyl)-1-propyne and extracting the trans-isomer of the propargyl-substituted bislactim. Other relevant applications include the synthesis of the 5-HT1D receptor agonist MK-0462, an anti-migraine drug. References Indole forming reactions Name reactions
Larock indole synthesis
[ "Chemistry" ]
1,485
[ "Name reactions", "Ring forming reactions", "Organic reactions" ]
2,382,632
https://en.wikipedia.org/wiki/Tanner%20graph
In coding theory, a Tanner graph is a bipartite graph that can be used to express constraints (typically equations) that specify an error correcting code. Tanner graphs play a central role in the design and decoding of LDPC codes. They have also been applied to the construction of longer codes from smaller ones. Both encoders and decoders employ these graphs extensively. Origins Tanner graphs were proposed by Michael Tanner as a means to create larger error correcting codes from smaller ones using recursive techniques. He generalized the techniques of Elias for product codes. Tanner discussed lower bounds on the codes obtained from these graphs irrespective of the specific characteristics of the codes which were being used to construct larger codes. Tanner graphs for linear block codes Tanner graphs are partitioned into subcode nodes and digit nodes. For linear block codes, the subcode nodes denote rows of the parity-check matrix H. The digit nodes represent the columns of the matrix H. An edge connects a subcode node to a digit node if a nonzero entry exists in the intersection of the corresponding row and column. Bounds proven by Tanner Tanner proved the following bound. Let R be the rate of the resulting linear code, let the degree of the digit nodes be m and the degree of the subcode nodes be n. If each subcode node is associated with a linear code (n, k) with rate r = k/n, then the rate of the code is bounded by R ≥ 1 − m(1 − r); this follows because a graph with N digit nodes has Nm/n subcode nodes, each of which imposes at most n − k = n(1 − r) parity constraints. Computational complexity of Tanner graph based methods The advantage of these recursive techniques is that they are computationally tractable. The coding algorithm for Tanner graphs is extremely efficient in practice, although it is not guaranteed to converge except for cycle-free graphs, which are known not to admit asymptotically good codes. Applications of Tanner graph Zemor's decoding algorithm, which is a recursive low-complexity approach to code construction, is based on Tanner graphs. Notes Michael Tanner's Original paper Michael Tanner's page Coding theory Application-specific graphs
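To make the graph construction concrete, here is a small illustrative sketch (not taken from Tanner's paper; the toy parity-check matrix, variable names, and helper function are our own choices). It derives the edges of a Tanner graph from a parity-check matrix H and evaluates the degree-based rate bound discussed above for the regular case:

```python
# Minimal sketch: build the Tanner graph of a linear block code from its
# parity-check matrix H, and evaluate the degree-based rate bound.
# (Illustrative only; the toy H and all names below are our own.)
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: rows = subcode (check) nodes,
# columns = digit (variable) nodes.
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

# An edge joins check node i and digit node j whenever H[i, j] != 0.
edges = [(i, j) for i in range(H.shape[0]) for j in range(H.shape[1]) if H[i, j]]

digit_degree = {j: sum(1 for _, jj in edges if jj == j) for j in range(H.shape[1])}
check_degree = {i: sum(1 for ii, _ in edges if ii == i) for i in range(H.shape[0])}
print("edges:", edges)
print("digit-node degrees:", digit_degree)
print("check-node degrees:", check_degree)

# Rate bound for a regular Tanner graph in which every digit node has degree m
# and every subcode node is an (n, k) code of rate r = k/n:  R >= 1 - m*(1 - r).
def tanner_rate_bound(m: int, r: float) -> float:
    return 1.0 - m * (1.0 - r)

# Simple parity checks are (n, n-1) codes; e.g. m = 2, n = 6 gives r = 5/6.
print("rate bound for m = 2, r = 5/6:", tanner_rate_bound(2, 5 / 6))
```

Note that the (7,4) Hamming example above has irregular digit-node degrees; the rate-bound helper applies to the regular case described in the text.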
Tanner graph
[ "Mathematics" ]
409
[ "Discrete mathematics", "Coding theory" ]
2,383,266
https://en.wikipedia.org/wiki/Apo2.7
Apo2.7 is a protein confined to the mitochondrial membrane. It can be detected during early stages of apoptosis. It can be used to detect apoptosis via flow cytometry. References Apoptosis Proteins
Apo2.7
[ "Chemistry" ]
49
[ "Biomolecules by chemical classification", "Signal transduction", "Apoptosis", "Molecular biology", "Proteins" ]
15,065,280
https://en.wikipedia.org/wiki/ZNF74
Zinc finger protein 74 is a protein that in humans is encoded by the ZNF74 gene. Schizophrenia susceptibility has been associated with a mutation in this protein. Interactions ZNF74 has been shown to interact with POLR2A. References Further reading External links Transcription factors
ZNF74
[ "Chemistry", "Biology" ]
62
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
15,067,464
https://en.wikipedia.org/wiki/ZNF238
Zinc finger protein 238 (also known as RP58 or ZBTB18) is a zinc finger-containing transcription factor that in humans is encoded by the ZNF238 gene. Function ZNF238 is a gene that plays a major role in the "promotion of ordered and correctly timed neurogenesis leading to proper layer formation and cortical growth." The loss of ZNF238 has been observed to cause microcephaly, agenesis of the corpus callosum, malformation of layers in the cerebral cortex, and cerebellar hypoplasia. Additionally, its absence can cause a decrease in Ngn2 and Neurod1 in progenitor cells (and an increase thereof in mutant neurons), resulting in fewer progenitor cells and an increase in neuronal differentiation and glial cell growth. ZNF238 also regulates repressed genes that, if left unchecked, can lead to glioma progression. Furthermore, an absence of ZNF238 results in upregulation of the epithelial-mesenchymal transition process. In tumors such as medulloblastomas, the loss of ZNF238 can disorganize the tumor's cellular divisional processes, resulting in a cellularly diverse neoplasm. This new diversity has been observed to increase the invasiveness of the tumor, yielding proliferation into more areas of the brain than before the loss of ZNF238. C2H2-type zinc finger proteins, such as ZNF238, act on the molecular level as transcriptional activators or repressors and are involved in chromatin assembly. Interactions ZNF238 has been shown to interact with DNMT3A. References Further reading External links Transcription factors
ZNF238
[ "Chemistry", "Biology" ]
382
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
15,068,567
https://en.wikipedia.org/wiki/Einstein%E2%80%93Hopf%20drag
In physics, the Einstein–Hopf drag (named after Albert Einstein and Ludwig Hopf) is a velocity-dependent drag force upon charged particles that are being bathed in thermal radiation. References Further reading Drag (physics) Electrical phenomena
Einstein–Hopf drag
[ "Physics", "Chemistry" ]
48
[ "Drag (physics)", "Physical phenomena", "Electrical phenomena", "Fluid dynamics" ]
15,071,240
https://en.wikipedia.org/wiki/IFNA7
Interferon alpha-7 is a protein that in humans is encoded by the IFNA7 gene. References Further reading
IFNA7
[ "Chemistry" ]
26
[ "Biochemistry stubs", "Protein stubs" ]
15,071,250
https://en.wikipedia.org/wiki/IFNA14
Interferon alpha-14 is a protein that in humans is encoded by the IFNA14 gene. References Further reading
IFNA14
[ "Chemistry" ]
26
[ "Biochemistry stubs", "Protein stubs" ]
15,071,267
https://en.wikipedia.org/wiki/IGHV%40
Ig heavy chain V-III region VH26 is a protein that in humans is encoded by the IGHV@ gene. IGHV is the immunoglobulin heavy chain variable region genes; in B-cell neoplasms like chronic lymphocytic leukemia, mutations of IGHV are associated with better responses to some treatments and with prolonged survival. See also IGH@ References Further reading Proteins
IGHV@
[ "Chemistry" ]
90
[ "Biomolecules by chemical classification", "Protein stubs", "Biochemistry stubs", "Molecular biology", "Proteins" ]
15,071,362
https://en.wikipedia.org/wiki/KCNC4
Potassium voltage-gated channel, Shaw-related subfamily, member 4 (KCNC4), also known as Kv3.4, is a human gene. The Shaker gene family of Drosophila encodes components of voltage-gated potassium channels and comprises four subfamilies. Based on sequence similarity, this gene is similar to the Shaw subfamily. The protein encoded by this gene belongs to the delayed rectifier class of channel proteins and is an integral membrane protein that mediates the voltage-dependent potassium ion permeability of excitable membranes. It generates atypical voltage-dependent transient current that may be important for neuronal excitability. Several transcript variants encoding different isoforms have been found for this gene. See also Voltage-gated potassium channel References Further reading Ion channels
KCNC4
[ "Chemistry" ]
167
[ "Neurochemistry", "Ion channels" ]
15,071,397
https://en.wikipedia.org/wiki/REPIN1
Replication initiator 1 is a protein that in humans is encoded by the REPIN1 gene. The protein enables RNA binding activity and acts as a replication initiation-region protein. REPIN1 is made up of three zinc finger "hand" clusters; it is a polydactyl zinc finger protein containing 15 zinc finger DNA-binding motifs. It has also been predicted to help regulate transcription by RNA polymerase II and is located in the nucleoplasm. Expression of this protein has been observed in the colon, spleen, kidney, and 23 other tissues within the human body. History REPIN1 was first identified in a study of the replication of the dihydrofolate reductase gene (dhfr) in Chinese hamsters, where replication initiates near stably bent DNA that binds multiple factors. In that study, protein-DNA cross-linking experiments revealed a 60-kDa polypeptide, which was given the alternative name RIP60. Because an ATP-dependent DNA helicase cofractionated with the origin-specific DNA-binding activity, the study suggested that RIP60 is involved in chromosomal DNA synthesis in mammalian cells. Genetics In humans, REPIN1 is located on chromosome 7q36.1 according to the National Center for Biotechnology Information. REPIN1 acts as a sequence-specific DNA-binding protein required for the initiation of chromosomal replication. Located in the nucleoplasm as part of the nuclear origin-of-replication recognition complex, it binds the sequence 5'-ATT-3', both at reiterated sequences downstream of the origin of bidirectional replication (OBR) and at a second, homologous 5'-ATT-3' sequence in the opposite orientation within the OBR zone. The gene encodes a protein containing fifteen C2H2 zinc finger DNA-binding motifs organized into three clusters, referred to as hands Z1 (ZFs 1-5), Z2 (ZFs 6-8), and Z3 (ZFs 9-15), with proline-rich regions between them. Function The function of REPIN1 is to act as a replication initiator and sequence-binding protein for chromosomal replication. Like other zinc finger proteins, its physiological functions, molecular mechanisms, and regulation are not fully understood. However, due to its high expression in the adipose tissue and liver of subcongenic and congenic rat strains, some scientists regard it as a participant in the regulation of genes, more specifically those involved in lipid droplet formation and fusion, adipogenesis, and glucose and fatty acid transport in adipocytes. Human in vitro data also suggest a role for REPIN1 in adipocyte function and point to it as a possible therapeutic target for treating obesity. References Further reading External links Transcription factors
REPIN1
[ "Chemistry", "Biology" ]
591
[ "Protein stubs", "Gene expression", "Signal transduction", "Biochemistry stubs", "Induced stem cells", "Transcription factors" ]
15,071,485
https://en.wikipedia.org/wiki/SOX8
Transcription factor SOX-8 is a protein that in humans is encoded by the SOX8 gene. This gene encodes a member of the SOX (SRY-related HMG-box) family of transcription factors involved in the regulation of embryonic development and in the determination of the cell fate. The encoded protein may act as a transcriptional activator after forming a protein complex with other proteins. This protein may be involved in brain development and function. Haploinsufficiency for this protein may contribute to the mental retardation found in haemoglobin H-related mental retardation (ATR-16 syndrome). See also SOX genes References Further reading Transcription factors
SOX8
[ "Chemistry", "Biology" ]
141
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
15,071,821
https://en.wikipedia.org/wiki/MEIS2
Homeobox protein Meis2 is a protein that in humans is encoded by the MEIS2 gene. This gene encodes a homeobox protein belonging to the TALE ('three amino acid loop extension') family of homeodomain-containing proteins. TALE homeobox proteins are highly conserved transcription regulators, and several members have been shown to be essential contributors to developmental programs. Multiple transcript variants encoding distinct isoforms have been described for this gene. References Further reading External links Transcription factors
MEIS2
[ "Chemistry", "Biology" ]
102
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
15,071,920
https://en.wikipedia.org/wiki/MLLT1
Protein ENL is a protein that in humans is encoded by the MLLT1 gene. Interactions MLLT1 has been shown to interact with CBX8. References Further reading External links Transcription factors
MLLT1
[ "Chemistry", "Biology" ]
41
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
15,072,456
https://en.wikipedia.org/wiki/PBX3
Pre-B-cell leukemia transcription factor 3 is a protein that in humans is encoded by the PBX3 gene. References Further reading External links Transcription factors
PBX3
[ "Chemistry", "Biology" ]
34
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
15,074,246
https://en.wikipedia.org/wiki/DPF2
Zinc finger protein ubi-d4 is a protein that in humans is encoded by the DPF2 gene. The protein encoded by this gene is a member of the d4 domain family, characterized by a zinc finger-like structural motif. This protein functions as a transcription factor which is necessary for the apoptotic response following deprivation of survival factors. It likely serves a regulatory role in rapid hematopoietic cell growth and turnover. This gene is considered a candidate gene for multiple endocrine neoplasia type I, an inherited cancer syndrome involving multiple parathyroid, enteropancreatic, and pituitary tumors. References Further reading External links Transcription factors
DPF2
[ "Chemistry", "Biology" ]
139
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
15,074,255
https://en.wikipedia.org/wiki/RFX4
Transcription factor RFX4 is a protein that in humans is encoded by the RFX4 gene. This gene is a member of the regulatory factor X gene family, which encodes transcription factors that contain a highly conserved winged helix DNA binding domain. The protein encoded by this gene is structurally related to regulatory factors X1, X2, X3, and X5. It has been shown to interact with itself as well as with regulatory factors X2 and X3, but it does not interact with regulatory factor X1. This protein may be a transcriptional repressor rather than a transcriptional activator. Three transcript variants encoding different isoforms have been described for this gene. References Further reading External links Transcription factors
RFX4
[ "Chemistry", "Biology" ]
147
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
15,074,561
https://en.wikipedia.org/wiki/RNR1
RNR1 (RNA, ribosomal 45S cluster 1) is a human ribosomal DNA gene located on chromosome 13. Tandem copies of this gene form one of five nucleolus organizer regions in the human genome; these regions are located on chromosomes 13 (RNR1), 14 (RNR2), 15 (RNR3), 21 (RNR4), and 22 (RNR5). References Further reading Proteins Non-coding RNA RNA Ribosomal RNA Ribozymes
RNR1
[ "Chemistry" ]
104
[ "Catalysis", "Biomolecules by chemical classification", "Protein stubs", "Biochemistry stubs", "Molecular biology", "Proteins", "Ribozymes" ]
19,015,982
https://en.wikipedia.org/wiki/Spectral%20purity
Spectral purity is a term used in both optics and signal processing. In optics, it refers to the quantification of the monochromaticity of a given light sample. This is a particularly important parameter in areas like laser operation and time measurement. Spectral purity is easier to achieve in devices that generate visible and ultraviolet light, since higher frequency light results in greater spectral purity. In signal processing, spectral purity is defined as the inherent stability of a signal, or how clean a spectrum is compared to what it should be. See also Frequency drift Frequency deviation Jitter Automatic frequency control Allan variance References Spectroscopy
Spectral purity
[ "Physics", "Chemistry" ]
121
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
9,947,492
https://en.wikipedia.org/wiki/Digifant%20engine%20management%20system
Digifant is an Engine Management System operated by an Engine Control Unit that actuates outputs, such as fuel injection and ignition systems, using information derived from sensor inputs, such as engine speed, exhaust oxygen and intake air flow. Digifant was designed by Volkswagen Group, in cooperation with Robert Bosch GmbH. Digifant is the outgrowth of the Digijet fuel injection system first used on water-cooled Volkswagen A2 platform-based models. History Digifant was introduced in 1986 on the 2.1 litre Volkswagen Type 2 (T3) (Vanagon in the US) engine. This system combined digital fuel control as used in the earlier Digijet systems with a new map controlled digital ignition system. Subsequently, Digifant II was introduced, adding ECU integrated knock control and idle stabilization, while externalizing an Ignition Control Unit (ICU). Digifant as used in Volkswagen Golf and Volkswagen Jetta models simplified several functions, and added knock sensor control to the ignition system. Other versions of Digifant appeared on the Volkswagen Fox, Corrado, Volkswagen Transporter (T4) (known as the Eurovan in North America), as well as 1993 and later production versions of the rear-engined Volkswagen Beetle, sold only in Mexico. Lower-power versions (without a knock sensor), supercharged, and 16-valve variants were produced. Nearly exclusive to the European market, Volkswagen AG subsidiary Audi AG also used the Digifant system, namely in its 2.0 E variants of the Audi 80 and Audi 100. Digifant is an engine management system designed originally to take advantage of the first generation of newly developed digital signal processing circuits. Production changes and updates were made to keep the system current with the changing California and federal emissions requirements. Updates were also made to allow integration of other vehicle systems into the scope of engine operation. Changes in circuit technology, design and processing speed along with evolving emissions standards, resulted in the development of new engine management systems. These new system incorporated adaptive learning fuzzy logic, enhanced and expanded diagnostics, and the ability to meet total vehicle emissions standards. Features Fuel injection control is digitally electronic. It is based on the measurement engine load (this signal is provided by the Air Flow Sensor), and on engine speed (signal provided by the Hall sender in the distributor). These primary signals are compared to a 'map', or table of values, stored in the ECU memory. The amount of fuel delivered is controlled by the duration of actuation of the fuel injector(s). This value is taken from a programme in the ECU that has 16 points for load and 16 points for speed. These 256 primary values are then modified by coolant temperature, intake air temperature, oxygen content of the exhaust, car battery voltage and throttle position - to provide 65,000 possible injector duration points. Digifant is unlike the earlier CIS and CIS-E fuel injection systems that it replaced, in that fuel injectors are mounted on a common fuel rail. CIS fuel injection systems used mechanical fuel injectors. The fuel injectors are wired in parallel, and are supplied with Constant System Voltage. The ECU switches the earth/ground on and off to control duration. All injectors operate at the same time (simultaneously, rather than sequentially) each crankshaft revolution; two complete revolutions being needed for each cylinder to receive the correct amount of fuel for each combustion cycle. 
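The map-plus-corrections scheme described above can be sketched in a few lines of code. This is purely illustrative and not Volkswagen firmware: the breakpoints, base map values, and correction factors below are invented, and only the overall structure (a 16×16 load/speed lookup refined by temperature, oxygen-sensor, and battery-voltage corrections) reflects the description in the text:

```python
# Illustrative sketch (not VW firmware) of a 16x16 load/speed map of base
# injector durations, looked up by interpolation and then corrected.
# All breakpoints, map values and correction factors are invented examples.
import numpy as np

RPM_POINTS = np.linspace(800, 6400, 16)        # engine-speed breakpoints
LOAD_POINTS = np.linspace(0.0, 1.0, 16)        # normalized air-flow (load) breakpoints
BASE_MS = 1.5 + 8.0 * np.outer(0.5 + 0.5 * np.arange(16) / 15.0, LOAD_POINTS)

def interp_2d(rpm, load):
    """Bilinear interpolation of the base duration between map breakpoints."""
    i = int(np.clip(np.searchsorted(RPM_POINTS, rpm) - 1, 0, 14))
    j = int(np.clip(np.searchsorted(LOAD_POINTS, load) - 1, 0, 14))
    ti = (rpm - RPM_POINTS[i]) / (RPM_POINTS[i + 1] - RPM_POINTS[i])
    tj = (load - LOAD_POINTS[j]) / (LOAD_POINTS[j + 1] - LOAD_POINTS[j])
    top = BASE_MS[i, j] * (1 - tj) + BASE_MS[i, j + 1] * tj
    bottom = BASE_MS[i + 1, j] * (1 - tj) + BASE_MS[i + 1, j + 1] * tj
    return top * (1 - ti) + bottom * ti

def injector_duration(rpm, load, coolant_c, intake_c, lambda_trim, battery_v):
    """Base duration from the map, scaled by example correction factors."""
    ms = interp_2d(rpm, load)
    ms *= 1.0 + max(0.0, 80.0 - coolant_c) * 0.004   # cold-engine enrichment
    ms *= 1.0 - (intake_c - 20.0) * 0.001            # air-density correction
    ms *= lambda_trim                                # closed-loop O2 trim (~1.0)
    ms += max(0.0, 14.0 - battery_v) * 0.1           # injector dead time vs. voltage
    return ms

print(round(injector_duration(3000, 0.6, 90, 25, 0.98, 13.8), 2), "ms")
```

The point of the sketch is only the data flow: a small base table indexed by speed and load, refined by a handful of sensor-driven corrections before the injector is actually driven.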
Ignition system control is also digital electronic. The sensors that supply the engine load and engine speed signals for injector duration provide information about the basic ignition timing point. The signal sent to the Ignition Control Unit (if not integrated into the ECU) is derived from a programme in the ECU that is similar to the injector duration programme. Engine knock control is used to allow the ignition timing to continually approach the point of detonation. This is the point where the engine will produce the most motive power, as well as the highest efficiency. Additional functions of the ECU include operation of the fuel pump by closing the Ground for the fuel pump relay, and control of idle speed by a throttle plate bypass valve. The Idle Air Control Valve (IACV) (previously known as an Idle Air Stabiliser Valve - IASV), receives a changing milliamp signal that varies the strength of an electromagnet pulling open the bypass valve. Idle speed stabilisation is enhanced by a process known as Idle Speed Control (ISC). This function (previously known as Digital Idle Stabilization), allows the ECU to modify ignition timing at idle to further improve idle quality. Digifant II inputs/outputs The 25 pin electronic control unit used in the Golf and Jetta receives inputs from the following sources: Hall sender unit (provides engine speed signal) Air Flow Sensor (provides engine load information) Coolant temperature sensor Intake Air Temperature sensor Knock sensor Additional signals used as inputs are: Air conditioner (compressor on) Car battery voltage Starter motor signal power steering pump (supplying pressure) The Anti-lock Braking System (ABS), three-speed automatic transmission, and vehicle speed sensor are not linked to this system. Outputs controlling engine operation include: Fuel injectors Idle Air Control Valve Ignition Control Unit Fuel pump relay Oxygen sensor heater Additional systems The evaporative emission system is controlled by a vacuum-operated mechanical carbon canister control valve. Fuel pressure is maintained by a vacuum operated mechanical fuel pressure regulator on the fuel injector rail assembly. Inputs and outputs are shown in the following illustration. Digifant II as used on Golf and Jetta vehicles provides the basis for this chart. North America variants In North America, Volkswagen released two other versions of the Digifant fuel injection system (in addition to standard Digifant II described above). A limited number of 1987-1990 California Golf and Jetta models are equipped with Digifant II that features an on-board diagnostics system (OBD). These vehicles have 'blink code' capacity to store up to five Diagnostic Trouble Codes (DTCs). Diagnostic troubleshooting is done by pressing the Check Engine switch on the dashboard. This system can also have carbon monoxide (CO), ignition timing and idle speed adjusted to baseline values. In 1991, California Golf, Jetta, Fox, Cabriolet and Corrado vehicles were equipped with expanded OBD capabilities. This version was renamed "Digifant I". These later Digifant versions have 38-pin ECUs with Rapid Data Transfer and permanent DTC memory. All Eurovans with Digifant also have rapid data transfer and permanent DTC memory. These systems use a throttle plate potentiometer to track throttle plate position in place of the idle and full throttle switches used on earlier systems. Another characteristic of Digifant II equipped vehicles in California is a switch mount on the dashboard which has a "Check Engine" symbol. 
Digifant I models in California feature a Check Engine light, with the display of codes done by a special Volkswagen tool under the shift boot, or by a jumper and an LED by the home mechanic. Reliability Vehicles using engines equipped with Digifant fuel injection typically operated both efficiently and smoothly when new, but suffered from a number of issues as the car aged, sometimes within only a few months. Digifant sensors and other components, especially the ECU itself, were remarkably sensitive to poor electrical grounding. Without a reliable ground signal to reference, the system was vulnerable to errors and malfunction. In the 1980s and 1990s many Volkswagen and Audi designs did not incorporate sufficient engine compartment grounding, and when the car's engine compartment became soiled with dirt, oil and road salts after months or years of use, the fuel injection system would fail in often inexplicable ways. Backfiring, stalling and other running problems were all too common. Later versions were improved by providing multiple grounding paths, but the early Digifant systems suffered a poor service reputation. Most driveability issues can be traced back to a few causes: Bad ECU earth/ground Bad oxygen/lambda sensor earth/ground Faulty Engine Coolant Temperature sensor (ECT) The engine coolant temperature sensor is located in the coolant flange (under the distributor on the Polo Fox Coupe), on the front of the cylinder head (on transverse-engine vehicles). The bad earth/ground can be traced to an essential ground strap on the front upper transmission bolt. Without this strap, the ECU tends to earth/ground elsewhere, causing a specific trace on the circuit board to burn out and killing the ECU. This causes the injectors to stay open constantly, flooding the engine. See also Jetronic Motronic References Robert Bentley. Volkswagen GTI, Golf, Jetta Service Manual 1985 through 1992. Cambridge: Bentley Publishers, 1992. Volkswagen of America. "Digifant Engine Management". Engine Management Systems. Auburn Hills: Volkswagen Service Publications, 2006. 31–36. External links Volkswagen Group corporate website Fuel Injection theory Volkswagen Group engines Fuel injection systems Embedded systems Power control Engine technology Automotive technology tradenames
Digifant engine management system
[ "Physics", "Technology", "Engineering" ]
1,864
[ "Physical quantities", "Engines", "Computer engineering", "Embedded systems", "Computer systems", "Engine technology", "Power (physics)", "Computer science", "Power control" ]
9,947,735
https://en.wikipedia.org/wiki/Zero-fuel%20weight
The zero-fuel weight (ZFW) of an aircraft is the total weight of the airplane and all its contents, minus the total weight of the usable fuel on board. Unusable fuel is included in the ZFW. Recall the contributions of the components of the takeoff weight: TOW = OEW + PL + FOB, where OEW is the operating empty weight (a characteristic of the plane), PL is the payload actually carried, FOB is the fuel actually on board, and TOW is the actual take-off weight. The ZFW is also defined as OEW + PL, so the previous formula becomes TOW = ZFW + FOB. For many types of airplane, the airworthiness limitations include a maximum zero-fuel weight. This limitation is specified to ensure bending moments on the wing roots are not excessive during flight. When the aircraft is loaded before flight, the zero-fuel weight must not exceed the maximum zero-fuel weight. Maximum zero fuel weight The maximum zero fuel weight (MZFW) is the maximum weight allowed before usable fuel and other specified usable agents (engine injection fluid, and other consumable propulsion agents) are loaded in defined sections of the aircraft as limited by strength and airworthiness requirements. It may include usable fuel in specified tanks when carried in lieu of payload. The addition of usable and consumable items to the zero fuel weight must be in accordance with the applicable government regulations so that airplane structure and airworthiness requirements are not exceeded. If the limitations applicable to a transport category airplane type include a maximum zero-fuel weight, it must be specified in the Airplane Flight Manual and the type certificate data sheet for the airplane type. Maximum zero fuel weight in aircraft operations When an aircraft is being loaded with crew, passengers, baggage and freight it is most important to ensure that the ZFW does not exceed the MZFW. When an aircraft is being loaded with fuel it is most important to ensure that the takeoff weight will not exceed the maximum permissible takeoff weight. MZFW: the maximum permissible weight of an aircraft with no disposable fuel or oil. The zero-fuel weight can be obtained as ZFW = TOW − FOB, where FOB is the fuel on board. For any aircraft with a defined MZFW, the maximum payload (PLmax) can be calculated as the MZFW minus the OEW (operational empty weight): PLmax = MZFW − OEW. Maximum zero fuel weight in type certification The maximum zero fuel weight is an important parameter in demonstrating compliance with gust design criteria for transport category airplanes. Wing bending relief In fixed-wing aircraft, fuel is usually carried in the wings. While the aircraft is in the air, weight in the wings does not contribute as significantly to the bending moment in the wing as does weight in the fuselage. This is because the lift on the wings and the weight of the fuselage bend the wing tips upwards and the wing roots downwards; but the weight of the wings, including the weight of fuel in the wings, bends the wing tips downwards, providing relief to the bending effect on the wing. Considering the bending moment at the wing root, the capacity for extra weight in the wings is greater than the capacity for extra weight in the fuselage. Designers of airplanes can optimise the maximum takeoff weight and prevent overloading in the fuselage by specifying a MZFW. This is usually done for large airplanes with cantilever wings. (Airplanes with strut-braced wings achieve substantial wing bending relief by having the load of the fuselage applied by the strut mid-way along the wing semi-span. Extra wing bending relief cannot be achieved by particular placement of the fuel. 
There is usually no MZFW specified for an airplane with a strut-braced wing.) Most small airplanes do not have an MZFW specified among their limitations. For these airplanes with cantilever wings, the loading case that must be considered when determining the maximum takeoff weight is the airplane with zero fuel and all disposable load in the fuselage. With zero fuel in the wing the only wing bending relief is due to the weight of the wing. See also Index of aviation articles Aircraft gross weight Dry weight - The equivalent term for automobiles References Aircraft weight measurements
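As a worked illustration of the loading checks described in the operations section above (ZFW against MZFW, and takeoff weight against the maximum takeoff weight), the following short sketch uses invented figures rather than data for any real aircraft:

```python
# Illustrative weight build-up (all figures invented, not from any flight manual):
# checks the two loading limits discussed above, ZFW <= MZFW and TOW <= MTOW.
OEW = 41_500            # operating empty weight, kg
MZFW = 62_500           # maximum zero-fuel weight, kg
MTOW = 79_000           # maximum take-off weight, kg

payload = 18_000        # passengers, baggage and freight, kg
fuel_on_board = 15_000  # usable fuel, kg

ZFW = OEW + payload              # ZFW = OEW + PL
TOW = ZFW + fuel_on_board        # TOW = ZFW + FOB
max_payload = MZFW - OEW         # structural payload limit

assert ZFW <= MZFW, "zero-fuel weight limit exceeded"
assert TOW <= MTOW, "take-off weight limit exceeded"
print(f"ZFW={ZFW} kg, TOW={TOW} kg, max payload={max_payload} kg")
```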
Zero-fuel weight
[ "Physics", "Engineering" ]
831
[ "Aircraft weight measurements", "Mass", "Matter", "Aerospace engineering" ]
9,950,498
https://en.wikipedia.org/wiki/Least%20squares%20inference%20in%20phylogeny
Least squares inference in phylogeny generates a phylogenetic tree based on an observed matrix of pairwise genetic distances and optionally a weight matrix. The goal is to find a tree which satisfies the distance constraints as well as possible. Ordinary and weighted least squares The discrepancy between the observed pairwise distances D_ij and the distances d_ij over a phylogenetic tree (i.e. the sum of the branch lengths in the path from leaf i to leaf j) is measured by S = Σ_ij w_ij (D_ij − d_ij)^2, where the weights w_ij depend on the least squares method used. Least squares distance tree construction aims to find the tree (topology and branch lengths) with minimal S. This is a non-trivial problem. It involves searching the discrete space of unrooted binary tree topologies whose size is exponential in the number of leaves. For n leaves there are 1 • 3 • 5 • ... • (2n−3) different topologies. Enumerating them becomes infeasible even for a moderate number of leaves. Heuristic search methods are used to find a reasonably good topology. The evaluation of S for a given topology (which includes the computation of the branch lengths) is a linear least squares problem. There are several ways to weight the squared errors (D_ij − d_ij)^2, depending on the knowledge and assumptions about the variances of the observed distances. When nothing is known about the errors, or if they are assumed to be independently distributed and equal for all observed distances, then all the weights w_ij are set to one. This leads to an ordinary least squares estimate. In the weighted least squares case the errors are assumed to be independent (or their correlations are not known). Given independent errors, a particular weight w_ij should ideally be set to the inverse of the variance of the corresponding distance estimate. Sometimes the variances may not be known, but they can be modeled as a function of the distance estimates. In the Fitch and Margoliash method for instance it is assumed that the variances are proportional to the squared distances. Generalized least squares The ordinary and weighted least squares methods described above assume independent distance estimates. If the distances are derived from genomic data their estimates covary, because evolutionary events on internal branches (of the true tree) can push several distances up or down at the same time. The resulting covariances can be taken into account using the method of generalized least squares, i.e. minimizing the quantity S = Σ_ij Σ_kl (D_ij − d_ij) V^(−1)_ij,kl (D_kl − d_kl), where the V^(−1)_ij,kl are the entries of the inverse of the covariance matrix of the distance estimates. Computational Complexity Finding the tree and branch lengths minimizing the least squares residual is an NP-complete problem. However, for a given tree, the optimal branch lengths can be determined in O(n^2) time for ordinary least squares, O(n^3) time for weighted least squares, and O(n^4) time for generalised least squares (given the inverse of the covariance matrix). External links PHYLIP, a freely distributed phylogenetic analysis package containing an implementation of the weighted least squares method PAUP, a similar package available for purchase Darwin, a programming environment with a library of functions for statistics, numerics, sequence and phylogenetic analysis References Computational phylogenetics
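As an illustration of the linear least squares step for one fixed topology, the following sketch fits ordinary least squares branch lengths for a four-leaf tree. The tree, the path matrix, and the distance values are invented for the example, and solving the linear system d = X·l in the least squares sense is just one straightforward way to carry out this step:

```python
# Minimal sketch: ordinary least squares branch lengths for one fixed topology.
# The 4-leaf unrooted tree ((A,B),(C,D)) has five branches: a, b, c, d and the
# internal branch e. Each leaf-to-leaf distance is the sum of the branches on
# the connecting path, so d = X @ lengths with a 0/1 "path" matrix X.
# (Toy tree and distance values invented for illustration.)
import numpy as np

# Row order of pairs: AB, AC, AD, BC, BD, CD; column order of branches: a b c d e.
X = np.array([
    [1, 1, 0, 0, 0],   # A-B: a + b
    [1, 0, 1, 0, 1],   # A-C: a + c + e
    [1, 0, 0, 1, 1],   # A-D: a + d + e
    [0, 1, 1, 0, 1],   # B-C: b + c + e
    [0, 1, 0, 1, 1],   # B-D: b + d + e
    [0, 0, 1, 1, 0],   # C-D: c + d
])

# Observed pairwise distances D_ij, in the same pair order as the rows of X.
D = np.array([0.30, 0.45, 0.50, 0.49, 0.54, 0.35])

lengths, _, _, _ = np.linalg.lstsq(X, D, rcond=None)
fitted = X @ lengths
S = np.sum((D - fitted) ** 2)   # ordinary least squares score (all w_ij = 1)

for name, value in zip("abcde", lengths):
    print(f"branch {name}: {value:.3f}")
print(f"S = {S:.6f}")
```

Note that an unconstrained least squares fit like this one can return negative branch lengths; practical implementations typically constrain or post-process the solution.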
Least squares inference in phylogeny
[ "Biology" ]
622
[ "Bioinformatics", "Phylogenetics", "Computational phylogenetics", "Genetics techniques" ]
9,955,143
https://en.wikipedia.org/wiki/Bacterial%20transcription
Bacterial transcription is the process in which a segment of bacterial DNA is copied into a newly synthesized strand of messenger RNA (mRNA) with use of the enzyme RNA polymerase. The process occurs in three main steps: initiation, elongation, and termination; and the result is a strand of mRNA that is complementary to a single strand of DNA. Generally, the transcribed region accounts for more than one gene. In fact, many prokaryotic genes occur in operons, which are a series of genes that work together to code for the same protein or gene product and are controlled by a single promoter. Bacterial RNA polymerase is made up of four subunits and when a fifth subunit attaches, called the sigma factor (σ-factor), the polymerase can recognize specific binding sequences in the DNA, called promoters. The binding of the σ-factor to the promoter is the first step in initiation. Once the σ-factor releases from the polymerase, elongation proceeds. The polymerase continues down the double stranded DNA, unwinding it and synthesizing the new mRNA strand until it reaches a termination site. There are two termination mechanisms that are discussed in further detail below. Termination is required at specific sites for proper gene expression to occur. Gene expression determines how much gene product, such as protein, is made by the gene. Transcription is carried out by RNA polymerase but its specificity is controlled by sequence-specific DNA binding proteins called transcription factors. Transcription factors work to recognize specific DNA sequences and based on the cells needs, promote or inhibit additional transcription. Similar to other taxa, bacteria experience bursts of transcription. The work of the Jones team in Jones et al. 2014 explains some of the underlying causes of bursts and other variability, including stability of the resulting mRNA, the strength of promotion encoded in the relevant promoter and the duration of transcription due to strength of the TF binding site. They also found that bacterial TFs linger too briefly for TFs binding characteristics to explain the sustained transcription of bursts. Bacterial transcription differs from eukaryotic transcription in several ways. In bacteria, transcription and translation can occur simultaneously in the cytoplasm of the cell, whereas in eukaryotes transcription occurs in the nucleus and translation occurs in the cytoplasm. There is only one type of bacterial RNA polymerase whereas eukaryotes have 3 types. Bacteria have a σ-factor that detects and binds to promoter sites but eukaryotes do not need a σ-factor. Instead, eukaryotes have transcription factors that allow the recognition and binding of promoter sites. Overall, transcription within bacteria is a highly regulated process that is controlled by the integration of many signals at a given time. Bacteria heavily rely on transcription and translation to generate proteins that help them respond specifically to their environment. RNA polymerase RNA polymerase is composed of a core and a holoenzyme structure. The core enzymes contains the catalytic properties of RNA polymerase and is made up of ββ′α2ω subunits. This sequence is conserved across all bacterial species. The holoenzyme is composed of a specific component known as the sigma factor (σ-factor). The sigma factor functions in aiding in promoter recognition, correct placement of RNA polymerase, and beginning unwinding at the start site. 
After the sigma factor performs its required function, it dissociates, while the catalytic portion remains on the DNA and continues transcription. Additionally, RNA polymerase contains a core Mg+ ion that assists the enzyme with its catalytic properties. RNA polymerase works by catalyzing the nucleophilic attack of 3’ OH of RNA to the alpha phosphate of a complementary NTP molecule to create a growing strand of RNA from the template strand of DNA. Furthermore, RNA polymerase also displays exonuclease activities, meaning that if improper base pairing is detected, it can cut out the incorrect bases and replace them with the proper, correct one. Initiation Initiation of transcription requires promoter regions, which are specific nucleotide consensus sequences that tell the σ-factor on RNA polymerase where to bind to the DNA. The promoters are usually located 15 to 19 bases apart and are most commonly found upstream of the genes they control. RNA polymerase is made up of 4 subunits, which include two alphas, a beta, and a beta prime (α, α, β, and β'). A fifth subunit, sigma (called the σ-factor), is only present during initiation and detaches prior to elongation. Each subunit plays a role in the initiation of transcription, and the σ-factor must be present for initiation to occur. When all σ-factor is present, RNA polymerase is in its active form and is referred to as the holoenzyme. When the σ-factor detaches, it is in core polymerase form. The σ-factor recognizes promoter sequences at -35 and -10 regions and transcription begins at the start site (+1). The sequence of the -10 region is TATAAT and the sequence of the -35 region is TTGACA. The σ-factor binds to the -35 promoter region. At this point, the holoenzyme is referred to as the closed complex because the DNA is still double stranded (connected by hydrogen bonds). Once the σ-factor binds, the remaining subunits of the polymerase attach to the site. The high concentration of adenine-thymine bonds at the -10 region facilitates the unwinding of the DNA. At this point, the holoenzyme is called the open complex. This open complex is also called the transcription bubble. Only one strand of DNA, called the template strand (also called the noncoding strand or nonsense/antisense strand), gets transcribed. Transcription begins and short "abortive" nucleotide sequences approximately 10 base pairs long are produced. These short sequences are nonfunctional pieces of RNA that are produced and then released. Generally, this nucleotide sequence consists of about twelve base pairs and aids in contributing to the stability of RNA polymerase so it is able to continue along the strand of DNA. The σ-factor is needed to initiate transcription but is not needed to continue transcribing the DNA. The σ-factor dissociates from the core enzyme and elongation proceeds. This signals the end of the initiation phase and the holoenzyme is now in core polymerase form. The promoter region is a prime regulator of transcription. Promoter regions regulate transcription of all genes within bacteria. As a result of their involvement, the sequence of base pairs within the promoter region is significant; the more similar the promoter region is to the consensus sequence, the tighter RNA polymerase will be able to bind. This binding contributes to the stability of elongation stage of transcription and overall results in more efficient functioning. Additionally, RNA polymerase and σ-factors are in limited supply within any given bacterial cell. 
Consequently, σ-factor binding to the promoter is affected by these limitations. All promoter regions contain sequences that are considered non-consensus and this helps to distribute σ-factors across the entirety of the genome. Elongation During elongation, RNA polymerase slides down the double stranded DNA, unwinding it and transcribing (copying) its nucleotide sequence into newly synthesized RNA. The movement of the RNA-DNA complex is essential for the catalytic mechanism of RNA polymerase. Additionally, RNA polymerase increases the overall stability of this process by acting as a link between the RNA and DNA strands. New nucleotides that are complementary to the DNA template strand are added to the 3' end of the RNA strand. The newly formed RNA strand is practically identical to the DNA coding strand (sense strand or non-template strand), except it has uracil substituting thymine, and a ribose sugar backbone instead of a deoxyribose sugar backbone. Because nucleoside triphosphates (NTPs) need to attach to the OH- molecule on the 3' end of the RNA, transcription always occurs in the 5' to 3' direction. The four NTPs are adenosine-5'-triphosphate (ATP), guanoside-5'-triphosphate (GTP), uridine-5'-triphosphate (UTP), and cytidine-5'-triphosphate (CTP). The attachment of NTPs onto the 3' end of the RNA transcript provides the energy required for this synthesis. NTPs are also energy producing molecules that provide the fuel that drives chemical reactions in the cell. Multiple RNA polymerases can be active at once, meaning many strands of mRNA can be produced very quickly. RNA polymerase moves down the DNA rapidly at approximately 40 bases per second. Due to the quick nature of this process, DNA is continually unwound ahead of RNA polymerase and then rewound once RNA polymerase moves along further. The polymerase has a proofreading mechanism that limits mistakes to about 1 in 10,000 nucleotides transcribed. RNA polymerase has lower fidelity (accuracy) and speed than DNA polymerase. DNA polymerase has a very different proofreading mechanism that includes exonuclease activity, which contributes to the higher fidelity. The consequence of an error during RNA synthesis is usually harmless, where as an error in DNA synthesis could be detrimental. The promoter sequence determines the frequency of transcription of its corresponding gene. Termination In order for proper gene expression to occur, transcription must stop at specific sites. Two termination mechanisms are well known: Intrinsic termination (also called Rho-independent termination): Specific DNA nucleotide sequences signal the RNA polymerase to stop. The sequence is commonly a palindromic sequence that causes the strand to loop which stalls the RNA polymerase. Generally, this type of termination follows the same standard procedure. A pause will occur due to a polyuridine sequence that allows the formation of a hairpin loop. This hairpin loop will aid in forming a trapped complex, which will ultimately cause the dissociation of RNA polymerase from the template DNA strand and halt transcription. Rho-dependent termination: ρ factor (rho factor) is a terminator protein that attaches to the RNA strand and follows behind the polymerase during elongation. Once the polymerase nears the end of the gene it is transcribing, it encounters a series of G nucleotides which causes it to stall. This stalling allows the rho factor to catch up to the RNA polymerase. 
The rho protein then pulls the RNA transcript from the DNA template and the newly synthesized mRNA is released, ending transcription. Rho factor is a protein complex that also displays helicase activities (is able to unwind the nucleic acid strands). It will bind to the DNA in cytosine rich regions and when RNA polymerase encounters it, a trapped complex will form causing the dissociation of all molecules involved and end transcription. The termination of DNA transcription in bacteria may be stopped by certain mechanisms wherein the RNA polymerase will ignore the terminator sequence until the next one is reached. This phenomenon is known as antitermination and is utilized by certain bacteriophages. References External links Bacterial Transcription – animation Video animation summarizing the process Gene expression Bacteria
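The base-pairing rule at the heart of elongation (the mRNA is synthesized as the complement of the template strand, with uracil in place of thymine, so it ends up matching the coding strand) can be illustrated with a toy sketch; the example sequence and function names below are invented:

```python
# Toy sketch of the base-pairing rule described above: the mRNA is built 5'->3'
# as the complement of the template (antisense) strand, with uracil replacing
# thymine, so it mirrors the coding strand. Example sequence is invented.
COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3to5: str) -> str:
    """Return the mRNA (written 5'->3') for a template strand given 3'->5'."""
    return "".join(COMPLEMENT[base] for base in template_3to5.upper())

template = "TACGGCATTAGCACT"                                   # template strand, 3'->5'
coding = template.translate(str.maketrans("ATGC", "TACG"))     # coding strand, 5'->3'
mrna = transcribe(template)

print("template 3'->5':", template)
print("coding   5'->3':", coding)
print("mRNA     5'->3':", mrna)
assert mrna == coding.replace("T", "U")   # the mRNA mirrors the coding strand
```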
Bacterial transcription
[ "Chemistry", "Biology" ]
2,325
[ "Gene expression", "Prokaryotes", "Molecular genetics", "Cellular processes", "Bacteria", "Molecular biology", "Biochemistry", "Microorganisms" ]
9,956,291
https://en.wikipedia.org/wiki/AP-1%20transcription%20factor
Activator protein 1 (AP-1) is a transcription factor that regulates gene expression in response to a variety of stimuli, including cytokines, growth factors, stress, and bacterial and viral infections. AP-1 controls a number of cellular processes including differentiation, proliferation, and apoptosis. The structure of AP-1 is a heterodimer composed of proteins belonging to the c-Fos, c-Jun, ATF and JDP families. History AP-1 was first discovered as a TPA-activated transcription factor that bound to a cis-regulatory element of the human metallothionein IIa (hMTIIa) promoter and SV40. The AP-1 binding site was identified as the 12-O-Tetradecanoylphorbol-13-acetate (TPA) response element (TRE) with the consensus sequence 5’-TGA G/C TCA-3’. The AP-1 subunit Jun was identified as a novel oncoprotein of avian sarcoma virus, and Fos-associated p39 protein was identified as the transcript of the cellular Jun gene. Fos was first isolated as the cellular homologue of two viral v-fos oncogenes, both of which induce osteosarcoma in mice and rats. Since its discovery, AP-1 has been found to be associated with numerous regulatory and physiological processes, and new relationships are still being investigated. Structure AP-1 transcription factor is assembled through the dimerization of a characteristic bZIP domain (basic region leucine zipper) in the Fos and Jun subunits. A typical bZIP domain consists of a “leucine zipper” region, and a “basic region”. The leucine zipper is responsible for dimerization of the Jun and Fos protein subunits. This structural motif twists two alpha helical protein domains into a “coiled coil,” characterized by a periodicity of 3.5 residues per turn and repetitive leucines appearing at every seventh position of the polypeptide chain. Due to the amino acid sequence and the periodicity of the helices, the leucine side chains are arranged along one face of the α helix and form a hydrophobic surface that modulates dimerization. Hydrophobic residues additional to leucine also form the characteristic 3-4 repeat of α helices involved in “coiled-coil” interactions, and help contribute to the hydrophobic packing that drives dimerization. Together, this hydrophobic surface holds the two subunits together. The basic region of the bZIP domain is just upstream to the leucine zipper, and contains positively charged residues. This region interacts with DNA target sites. Apart from the “leucine zipper” and the “basic region” which are important for dimerization and DNA-binding, the c-jun protein contains three short regions, which consist of clusters of negatively charged amino acids in its N-terminal half that are important for transcriptional activation in vivo. Dimerization happens between the products of the c-jun and c-fos protooncogenes, and is required for DNA-binding. Jun proteins can form both homo and heterodimers and therefore are capable of binding to DNA by themselves. However, Fos proteins do not dimerize with each other and therefore can only bind to DNA when bound with Jun. The Jun-Fos heterodimer is more stable and has higher DNA-binding activity than Jun homodimers. Function AP-1 transcription factor has been shown to have a hand in a wide range of cellular processes, including cell growth, differentiation, and apoptosis. AP-1 activity is often regulated via post-translational modifications, DNA binding dimer composition, and interaction with various binding partners. 
AP-1 transcription factors are also associated with numerous physiological functions, especially in the determination of organisms’ life span and in tissue regeneration. Below are some of the other important functions and biological roles AP-1 transcription factors have been shown to be involved in. Cell growth, proliferation and senescence The AP-1 transcription factor has been shown to play numerous roles in cell growth and proliferation. In particular, c-Fos and c-Jun seem to be major players in these processes. c-Jun has been shown to be essential for fibroblast proliferation, and both AP-1 subunits have been shown to be expressed above basal levels during cell division. c-Fos has also been shown to increase in expression in response to the introduction of growth factors into the cell, further supporting its suggested involvement in the cell cycle. The growth factors TGF alpha, TGF beta, and IL2 have all been shown to stimulate c-Fos, and thereby stimulate cellular proliferation via AP-1 activation. Cellular senescence has been identified as "a dynamic and reversible process regulated by (in)activation of a predetermined enhancer landscape controlled by the pioneer transcription factor AP-1", which "defines the organizational principles of the transcription factor network that drives the transcriptional programme of senescent cells". Cellular differentiation The AP-1 transcription factor is deeply involved in the modulation of gene expression. Changes in cellular gene expression in the initiation of DNA synthesis and the formation of differentiated derivatives can lead to cellular differentiation. AP-1 has been shown to be involved in cell differentiation in several systems. For example, by forming stable heterodimers with c-Jun, the bZIP region of c-Fos increases the binding of c-Jun to target genes whose activation is involved in the differentiation of chicken embryo fibroblasts (CEF). It has also been shown to participate in endoderm specification. Apoptosis The AP-1 transcription factor is associated with a broad range of apoptosis-related interactions. AP-1 activity is induced by numerous extracellular matrix and genotoxic agents, suggesting involvement in programmed cell death. Many of these stimuli activate the c-Jun N-terminal kinases (JNKs), leading to the phosphorylation of Jun proteins and enhanced transcriptional activity of AP-1 dependent genes. Increases in the levels of Jun and Fos proteins and in JNK activity have been reported in scenarios in which cells undergo apoptosis. For example, inactivated c-Jun-ER cells show a normal morphology, while activated c-Jun-ER cells have been shown to be apoptotic. Tissue-specific regulation The AP-1 motif has been shown to regulate tissue-specific genes through an enhancer selection mechanism in fibroblasts. The AP-1 motif has also been linked to epigenetic regulation of kidney function, and it is now suspected that the AP-1 motif is regulated in the developing RPE, specifically through OTX2. Regulation of AP-1 Increased AP-1 levels lead to increased transactivation of target gene expression. Regulation of AP-1 activity is therefore critical for cell function and occurs through specific interactions controlled by dimer composition, transcriptional and post-translational events, and interaction with accessory proteins. AP-1 functions are heavily dependent on the specific Fos and Jun subunits contributing to AP-1 dimers. The outcome of AP-1 activation is dependent on the complex combinatorial patterns of AP-1 component dimers. 
The AP-1 complex binds to a palindromic DNA motif (5’-TGA G/C TCA-3’) to regulate gene expression, but specificity is dependent on the dimer composition of the bZIP subunit. Physiological relevance AP-1 transcription factor has been shown to be involved in skin physiology, specifically in tissue regeneration. The process of skin metabolism is initiated by signals that trigger undifferentiated proliferative cells to undergo cell differentiation. Therefore, activity of AP-1 subunits in response to extracellular signals may be modified under conditions where the balance of keratinocyte proliferation and differentiation has to be rapidly and temporally altered. The AP-1 transcription factor also has been shown to be involved in breast cancer cell growth through multiple mechanisms, including regulation of cyclin D1, E2F factors and their target genes. c-Jun, which is one of the AP-1 subunits, regulates the growth of breast cancer cells. Activated c-Jun is predominantly expressed at the invasive front in breast cancer and is associated with proliferation of breast cells. Due to the AP-1 regulatory functions in cancer cells, AP-1 modulation is studied as a potential strategy for cancer prevention and therapy. Regulome See also Activator protein Immediate early genes – Genes that are rapidly expressed in response to varied stimuli, without needing new proteins to be synthesized, including c-fos and c-jun Transcription factor References External links NLM Genecards Atlas of Genetics Transcription factors
AP-1 transcription factor
[ "Chemistry", "Biology" ]
1,803
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
9,957,063
https://en.wikipedia.org/wiki/Histone%20acetylation%20and%20deacetylation
Histone acetylation and deacetylation are the processes by which the lysine residues within the N-terminal tail protruding from the histone core of the nucleosome are acetylated and deacetylated as part of gene regulation. Histone acetylation and deacetylation are essential parts of gene regulation. These reactions are typically catalysed by enzymes with "histone acetyltransferase" (HAT) or "histone deacetylase" (HDAC) activity. Acetylation is the process where an acetyl functional group is transferred from one molecule (in this case, acetyl coenzyme A) to another. Deacetylation is simply the reverse reaction where an acetyl group is removed from a molecule. Acetylated histones, octameric proteins that organize chromatin into nucleosomes, the basic structural unit of the chromosomes and ultimately higher order structures, represent a type of epigenetic marker within chromatin. Acetylation removes the positive charge on the histones, thereby decreasing the interaction of the N termini of histones with the negatively charged phosphate groups of DNA. As a consequence, the condensed chromatin is transformed into a more relaxed structure that is associated with greater levels of gene transcription. This relaxation can be reversed by deacetylation catalyzed by HDAC activity. Relaxed, transcriptionally active DNA is referred to as euchromatin. More condensed (tightly packed) DNA is referred to as heterochromatin. Condensation can be brought about by processes including deacetylation and methylation. Mechanism of action Nucleosomes are portions of double-stranded DNA (dsDNA) that are wrapped around protein complexes called histone cores. These histone cores are composed of 8 subunits, two each of H2A, H2B, H3 and H4 histones. This protein complex forms a cylindrical shape that dsDNA wraps around with approximately 147 base pairs. Nucleosomes are formed as a beginning step for DNA compaction that also contributes to structural support as well as serves functional roles. These functional roles are contributed by the tails of the histone subunits. The histone tails insert themselves in the minor grooves of the DNA and extend through the double helix, which leaves them open for modifications involved in transcriptional activation. Acetylation has been closely associated with increases in transcriptional activation while deacetylation has been linked with transcriptional deactivation. These reactions occur post-translation and are reversible. The mechanism for acetylation and deacetylation takes place on the NH3+ groups of lysine amino acid residues. These residues are located on the tails of histones that make up the nucleosome of packaged dsDNA. The process is aided by factors known as histone acetyltransferases (HATs). HAT molecules facilitate the transfer of an acetyl group from a molecule of acetyl-coenzyme A (Acetyl-CoA) to the NH3+ group on lysine. When a lysine is to be deacetylated, factors known as histone deacetylases (HDACs) catalyze the removal of the acetyl group with a molecule of H2O. Acetylation has the effect of changing the overall charge of the histone tail from positive to neutral. Nucleosome formation is dependent on the positive charges of the H4 histones and the negative charge on the surface of H2A histone fold domains. Acetylation of the histone tails disrupts this association, leading to weaker binding of the nucleosomal components. By doing this, the DNA is more accessible and leads to more transcription factors being able to reach the DNA. 
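As a simplified summary of the chemistry just described (a schematic form, not taken from the original article; the lysine is shown only as its side-chain amine and the stoichiometry is the standard textbook one):

histone–Lys–NH3+ + acetyl-CoA →(HAT)→ histone–Lys–NH–C(=O)CH3 + CoA–SH + H+
histone–Lys–NH–C(=O)CH3 + H2O →(HDAC)→ histone–Lys–NH3+ + acetate

The forward reaction removes the positive charge that helps hold the histone tail against the negatively charged DNA backbone; the reverse reaction restores it.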
Thus, acetylation of histones is known to increase the expression of genes through transcription activation. Deacetylation performed by HDAC molecules has the opposite effect: when the histone tails are deacetylated, the DNA becomes more tightly wrapped around the histone cores, making it harder for transcription factors to bind to the DNA. This leads to decreased levels of gene expression and is known as gene silencing. Acetylated histones, the octameric protein cores of nucleosomes, represent a type of epigenetic marker within chromatin. Studies have shown that one modification has the tendency to influence whether another modification will take place. Modifications of histones can not only cause secondary structural changes at their specific points, but can also cause many structural changes in distant locations, which inevitably affects function. As the chromosome is replicated, the modifications that exist on the parental chromosomes are handed down to daughter chromosomes. The modifications, as part of their function, can recruit enzymes for their particular function and can contribute to the continuation of modifications and their effects after replication has taken place. It has been shown that, even past one replication, expression of genes may still be affected many cell generations later. A study showed that, upon inhibition of HDAC enzymes by Trichostatin A, genes inserted next to centric heterochromatin showed increased expression. Many cell generations later, in the absence of the inhibitor, the increased gene expression persisted, showing that modifications can be carried through many replication processes such as mitosis and meiosis. Histone acetylation/deacetylation enzymes Histone acetyltransferases (HATs) Histone acetyltransferases, also known as HATs, are a family of enzymes that acetylate the histone tails of the nucleosome. This and other modifications are expressed depending on the varying states of the cellular environment. Many proteins with acetylating abilities have been documented and, after a time, were categorized based on sequence similarities between them. These similarities are high among members of a family, but members from different families show very little resemblance. Some of the major families identified so far are as follows. GNAT family General Control Non-Derepressible 5 (Gcn5)-related N-acetyltransferases (GNATs) is one of the many studied families with acetylation abilities. This superfamily includes the factor Gcn5, which is found in the SAGA, SLIK, STAGA, ADA, and A2 complexes, as well as Gcn5L, p300/CREB-binding protein associated factor (PCAF), Elp3, HPA2 and HAT1. Major features of the GNAT family include HAT domains approximately 160 residues in length and a conserved bromodomain that has been found to be an acetyl-lysine targeting motif. Gcn5 has been shown to acetylate substrates when it is part of a complex. Recombinant Gcn5 has been found to be involved in the acetylation of the H3 histones of the nucleosome. To a lesser extent, it has been found to also acetylate H2B and H4 histones when involved with other complexes. PCAF has the ability to act as a HAT protein and acetylate histones; it can also acetylate non-histone proteins related to transcription, as well as act as a coactivator in many processes, including myogenesis, nuclear-receptor-mediated activation and growth-factor-signaled activation. Elp3 has the ability to acetylate all histone subunits and also shows involvement in the RNA polymerase II holoenzyme. 
MYST family MOZ (Monocytic Leukemia Zinc Finger Protein), Ybf2/Sas3, Sas2 and Tip60 (Tat Interacting Protein) all make up MYST, another well-known family that exhibits acetylating capabilities. This family includes Sas3, essential SAS-related acetyltransferase (Esa1), Sas2, Tip60, MOF, MOZ, MORF, and HBO1. The members of this family have multiple functions: they not only activate and silence genes, but also affect development and have implications in human diseases. Sas2 and Sas3 are involved in transcription silencing, MOZ and TIF2 are involved in the formation of leukemic translocation products, while MOF is involved in dosage compensation in Drosophila. MOF also influences spermatogenesis in mice, as it is involved in the expansion of H2AX phosphorylation during the leptotene to pachytene stages of meiosis. HAT domains for this family are approximately 250 residues long and include cysteine-rich, zinc-binding domains as well as N-terminal chromodomains. The MYST proteins Esa1, Sas2 and Sas3 are found in yeast, MOF is found in Drosophila and mice, while Tip60, MOZ, MORF, and HBO1 are found in humans. Tip60 has roles in the regulation of gene transcription, HBO1 has been found to impact the DNA replication process, and MORF is able to acetylate free histones (especially H3 and H4) as well as nucleosomal histones. p300/CBP family Adenoviral E1A-associated protein of 300 kDa (p300) and the CREB-binding protein (CBP) make up the next family of HATs. This family of HATs contains HAT domains that are approximately 500 residues long and contain bromodomains as well as three cysteine-histidine rich domains that help with protein interactions. These HATs are known to acetylate all of the histone subunits in the nucleosome. They also have the ability to acetylate and mediate non-histone proteins involved in transcription and are also involved in the cell cycle, differentiation and apoptosis. Other HATs There are other proteins that have acetylating abilities but differ in structure from the previously mentioned families. One HAT is called steroid receptor coactivator 1 (SRC1), which has a HAT domain located at the C-terminal end of the protein along with a basic helix-loop-helix domain and PAS A and PAS B domains, with an LXXLL receptor-interacting motif in the middle. Another is ATF-2, which contains a transcriptional activation (ACT) domain and a basic zipper DNA-binding (bZip) domain with a HAT domain in between. The last is TAFII250, which has a kinase domain in the N-terminal region, two bromodomains located in the C-terminal region and a HAT domain located in between. Histone deacetylases (HDACs) There are a total of four classes that categorize histone deacetylases (HDACs). Class I includes HDACs 1, 2, 3, and 8. Class II is divided into two subgroups, Class IIA and Class IIB. Class IIA includes HDACs 4, 5, 7, and 9 while Class IIB includes HDACs 6 and 10. Class III contains the sirtuins and Class IV contains only HDAC11. Classes of HDAC proteins are divided and grouped together based on comparison to the sequence homologies of Rpd3, Hos1 and Hos2 for Class I HDACs, HDA1 and Hos3 for the Class II HDACs, and the sirtuins for Class III HDACs. Class I HDACs HDAC1 & HDAC2 HDAC1 and HDAC2, in the first class of HDACs, are most closely related to one another. By analyzing the overall sequences of both HDACs, the two were found to be approximately 82% homologous. 
These enzymes have been found to be inactive when isolated, which led to the conclusion that they must be incorporated with cofactors in order to activate their deacetylase abilities. There are three major protein complexes that HDACs 1 and 2 may incorporate themselves into. These complexes include Sin3 (named after its characteristic protein mSin3A), the Nucleosome Remodelling and Deacetylating complex (NuRD), and Co-REST. The Sin3 complex and the NuRD complex both contain HDACs 1 and 2, the Rb-associated protein 48 (RbAp48) and RbAp46, which make up the core of each complex. Other complexes may nevertheless be needed in order to achieve the maximum amount of available activity. HDACs 1 and 2 can also bind directly to DNA-binding proteins such as Yin and Yang 1 (YY1), Rb binding protein 1 and Sp1. HDACs 1 and 2 have been found to play regulatory roles in key cell cycle genes including p21. Activity of these HDACs can be affected by phosphorylation. An increased amount of phosphorylation (hyperphosphorylation) leads to increased deacetylase activity, but impairs complex formation between HDACs 1 and 2 and between HDAC1 and mSin3A/YY1. A lower than normal amount of phosphorylation (hypophosphorylation) leads to a decrease in the amount of deacetylase activity, but increases the amount of complex formation. Mutation studies found that major phosphorylation happens at residues Ser421 and Ser423. Indeed, when these residues were mutated, a drastic reduction was seen in the amount of deacetylation activity. This balance in the state of phosphorylation is a way of keeping an optimal level of phosphorylation to ensure that deacetylation is neither over- nor under-expressed. HDACs 1 and 2 have been found exclusively in the nucleus. HDAC1 knockout (KO) mice were found to die during embryogenesis and showed a drastic reduction in proliferation but increased expression of the cyclin-dependent kinase inhibitors (CDKIs) p21 and p27. Not even upregulation of the other Class I HDACs could compensate for the loss of HDAC1. This inability to recover from HDAC1 KO leads researchers to believe that there is both functional uniqueness to each HDAC and regulatory cross-talk between factors. HDAC3 HDAC3 has been found to be most closely related to HDAC8. HDAC3 contains a non-conserved region in the C-terminal region that was found to be required for transcriptional repression as well as its deacetylase activity. It also contains two signal regions: a nuclear localization signal (NLS) and a nuclear export signal (NES). The NLS functions as a signal for nuclear action, while an NES functions with HDACs that perform work outside of the nucleus. The presence of both signals suggests that HDAC3 travels between the nucleus and the cytoplasm. HDAC3 has even been found to interact with the plasma membrane. Silencing Mediator for Retinoic Acid and Thyroid Hormone (SMRT) receptors and Nuclear Receptor Co-Repressor (N-CoR) factors must be utilized by HDAC3 in order to activate it. Upon doing so, it gains the ability to co-precipitate with HDACs 4, 5, and 7. HDAC3 can also be found complexed together with HDAC-related protein (HDRP). HDACs 1 and 3 have been found to mediate Rb-RbAp48 interactions, which suggests that they function in cell cycle progression. HDAC3 also shows involvement in stem cell self-renewal and a transcription-independent role in mitosis. HDAC8 HDAC8 has been found to be most similar to HDAC3. 
Its major feature is its catalytic domain, which contains an NLS region in the center. Two transcripts of this HDAC have been found: a 2.0 kb transcript and a 2.4 kb transcript. Unlike the other HDAC molecules, this HDAC was shown to be enzymatically active when purified. At this point, due to its recent discovery, it is not yet known if it is regulated by co-repressor protein complexes. Northern blots have revealed that different tissue types show varying degrees of HDAC8 expression; it has been observed in smooth muscle and is thought to contribute to contractility. Class II HDACs Class IIA The Class IIA HDACs include HDAC4, HDAC5, HDAC7 and HDAC9. HDACs 4 and 5 have been found to most closely resemble each other, while HDAC7 maintains a resemblance to both of them. Three variants of HDAC9 have been discovered, including HDAC9a, HDAC9b and HDAC9c/HDRP, and more are suspected. The variants of HDAC9 have been found to have similarities to the rest of the Class IIA HDACs. For HDAC9, the splicing variants can be seen as a way of creating a "fine-tuned mechanism" for differential expression levels in the cell. Different cell types may utilize different isoforms of the HDAC9 enzyme, allowing for different forms of regulation. HDACs 4, 5 and 7 have their catalytic domains located in the C-terminus along with an NLS region, while HDAC9 has its catalytic domain located in the N-terminus. However, the HDAC9 variant HDAC9c/HDRP lacks a catalytic domain but has a 50% similarity to the N-terminus of HDACs 4 and 5. For HDACs 4, 5 and 7, conserved binding domains have been discovered that bind C-terminal binding protein (CtBP), myocyte enhancer factor 2 (MEF2) and 14-3-3. All three HDACs work to repress the myogenic transcription factor MEF2, which plays an essential role in muscle differentiation as a DNA-binding transcription factor. Binding of HDACs to MEF2 inhibits muscle differentiation, which can be reversed by the action of Ca2+/calmodulin-dependent kinase (CaMK), which dissociates the HDAC/MEF2 complex by phosphorylating the HDAC portion. They have been seen to be involved in the control of muscle differentiation as well as cellular hypertrophy in muscle and cartilage tissues. HDACs 5 and 7 have been shown to work in opposition to HDAC4 during muscle differentiation regulation so as to keep a proper level of expression. There is evidence that these HDACs also interact with HDAC3 as a co-recruitment factor to the SMRT/N-CoR factors in the nucleus. Absence of the HDAC3 enzyme has been shown to lead to inactivity, which makes researchers believe that HDACs 4, 5 and 7 help the incorporation of DNA-binding recruiters for the HDAC3-containing HDAC complexes located in the nucleus. When HDAC4 is knocked out in mice, they suffer from pronounced chondrocyte hypertrophy and die due to extreme ossification. HDAC7 has been shown to suppress Nur77-dependent apoptosis. This interaction leads to a role in the clonal expansion of T cells. HDAC9 KO mice suffer from cardiac hypertrophy, which is exacerbated in mice that are double KO for HDACs 9 and 5. Class IIB The Class IIB HDACs include HDAC6 and HDAC10. These two HDACs are most closely related to each other in overall sequence. However, HDAC6's catalytic domain is most similar to that of HDAC9. A unique feature of HDAC6 is that it contains two catalytic domains arranged in tandem. 
Another unique feature of HDAC6 is the HDAC6-, SP3, and Brap2-related zinc finger motif (HUB) domain in the C-terminus, which shows some functions related to ubiquitination, meaning this HDAC is prone to degradation. HDAC10 has two catalytic domains as well. One active domain is located in the N-terminus and a putative catalytic domain is located in the C-terminus, along with an NES domain. Two putative Rb-binding domains have also been found on HDAC10, which suggests it may have roles in the regulation of the cell cycle. Two variants of HDAC10 have been found, both having slight differences in length. HDAC6 is the only HDAC shown to act on tubulin, acting as a tubulin deacetylase that helps in the regulation of microtubule-dependent cell motility. It is mostly found in the cytoplasm but has been known to be found in the nucleus, complexed together with HDAC11. HDAC10 has been seen to act on HDACs 1, 2, 3 (or SMRT), 4, 5 and 7. There is some evidence that it may have minor interactions with HDAC6 as well. This leads researchers to believe that HDAC10 may function more as a recruiter than as a factor for deacetylation. However, experiments conducted with HDAC10 did indeed show deacetylation activity. Class IV HDACs HDAC11 HDAC11 has been shown to be related to HDACs 3 and 8, but its overall sequence is quite different from the other HDACs, leading it to be placed in its own category. HDAC11 has a catalytic domain located in its N-terminus. It has not been found incorporated in any HDAC complexes such as NuRD or SMRT, which means it may have a special function unique to itself. It has been found that HDAC11 remains mainly in the nucleus. Biological functions Transcription regulation The discovery that histone acetylation causes changes in transcription activity can be traced back to the work of Vincent Allfrey and colleagues in 1964. The group hypothesized that acetyl groups added to histone proteins neutralize the positive charge of lysines, and thus reduce the interaction between DNA and histones. Histone modification is now considered a major regulatory mechanism that is involved in many different stages of genetic functions. Our current understanding is that acetylated lysine residues on histone tails are associated with transcriptional activation. In turn, deacetylated histones are associated with transcriptional repression. In addition, negative correlations have been found between several histone acetylation marks. The regulatory mechanism is thought to be twofold. Lysine is an amino acid with a positive charge when unmodified. Lysines on the amino-terminal tails of histones have a tendency to weaken the chromatin's overall structure. Addition of an acetyl group effectively neutralizes the positive charge and hence reduces the interaction between the histone tail and the nucleosome. This opens up the usually tightly packed nucleosome and allows transcription machinery to come into contact with the DNA template, leading to gene transcription. Repression of gene transcription is achieved by the reverse of this mechanism. The acetyl group is removed by one of the HDAC enzymes during deacetylation, allowing histones to interact with DNA more tightly to form a compacted nucleosome assembly. This increase in structural rigidity prevents the incorporation of transcriptional machinery, effectively silencing gene transcription. Another implication of histone acetylation is to provide a platform for protein binding. 
As a posttranslational modification, the acetylation of histones can attract proteins to elongated chromatin that has been marked by acetyl groups. It has been hypothesized that the histone tails offer recognition sites that attract proteins responsible for transcriptional activation. Unlike histone core proteins, histone tails are not part of the nucleosome core and are exposed to protein interaction. A model proposed that the acetylation of H3 histones activates gene transcription by attracting other transcription-related complexes. Therefore, the acetyl mark provides a site for protein recognition where transcription factors interact with the acetylated histone tails via their bromodomain. Histone code hypothesis The histone code hypothesis suggests that patterns of post-translational modifications on histones, collectively, can direct specific cellular functions. Chemical modifications of histone proteins often occur on particular amino acids. This specific addition of single or multiple modifications on histone cores can be interpreted by transcription factors and complexes, which leads to functional implications. This process is facilitated by enzymes such as HATs and HDACs that add or remove modifications on histones, and by transcription factors that process and "read" the modification codes. The outcome can be activation of transcription or repression of a gene. For example, the combination of acetylation and phosphorylation has synergistic effects on the chromosome's overall level of structural condensation and hence induces transcriptional activation of immediate early genes. Experiments investigating acetylation patterns of H4 histones suggested that these modification patterns are collectively maintained in mitosis and meiosis in order to modify long-term gene expression. The acetylation pattern is regulated by HAT and HDAC enzymes and, in turn, sets the local chromatin structure. In this way, acetylation patterns are transmitted and interconnected with protein binding ability and functions in subsequent cell generations. Bromodomain The bromodomain is a motif that is responsible for acetylated lysine recognition on histones by nucleosome remodelling proteins. Posttranslational modifications of N- and C-terminal histone tails attract various transcription initiation factors that contain bromodomains, including the human transcriptional coactivator PCAF, TAF1, GCN5 and CREB-binding protein (CBP), to the promoter and are significant in regulating gene expression. Structural analysis of transcription factors has shown that highly conserved bromodomains are essential for proteins to bind to acetylated lysine. This suggests that specific histone site acetylation has a regulatory role in gene transcriptional activation. Human diseases Inflammatory diseases Gene expression is regulated by histone acetylation and deacetylation, and this regulation is also applicable to inflammatory genes. Inflammatory lung diseases are characterized by expression of specific inflammatory genes such as NF-κB and AP-1 transcription factor. Treatments with corticosteroids and theophylline for inflammatory lung diseases interfere with HAT/HDAC activity to turn off inflammatory genes. Specifically, gene expression data demonstrated increased HAT activity and a decreased level of HDAC activity in patients with asthma. Patients with chronic obstructive pulmonary disease showed an overall decrease in HDAC activity with unchanged levels of HAT activity. 
Results have shown that there is an important role for the HAT/HDAC activity balance in inflammatory lung diseases and have provided insights on possible therapeutic targets. Cancer Due to the regulatory role of epigenetic modifications in gene transcription, it is not surprising that changes in epigenetic markers, such as acetylation, can contribute to cancer development. HDAC expression and activity in tumor cells are very different from those in normal cells. The overexpression and increased activity of HDACs have been shown to be characteristic of tumorigenesis and metastasis, suggesting an important regulatory role of histone deacetylation in the expression of tumor suppressor genes. One example is the regulatory role of histone acetylation/deacetylation in p300 and CBP, both of which contribute to oncogenesis. Approved in 2006 by the U.S. Food and Drug Administration (FDA), Vorinostat represents a new category of anticancer drugs in development. Vorinostat targets histone acetylation mechanisms and can effectively inhibit abnormal chromatin remodeling in cancerous cells. Targets of Vorinostat include HDAC1, HDAC2, HDAC3 and HDAC6. Carbon source availability is reflected in histone acetylation in cancer. Glucose and glutamine are the major carbon sources of most mammalian cells, and glucose metabolism is closely related to histone acetylation and deacetylation. Glucose availability affects the intracellular pool of acetyl-CoA, a central metabolic intermediate that is also the acetyl donor in histone acetylation. Glucose is converted to acetyl-CoA by the pyruvate dehydrogenase complex (PDC), which produces acetyl-CoA from glucose-derived pyruvate, and by adenosine triphosphate-citrate lyase (ACLY), which generates acetyl-CoA from glucose-derived citrate. PDC and ACLY activity depend on glucose availability, which thereby influences histone acetylation and consequently modulates gene expression and cell cycle progression. Dysregulation of ACLY and PDC contributes to metabolic reprogramming and promotes the development of multiple cancers. At the same time, glucose metabolism maintains the NAD+/NADH ratio, and NAD+ participates in SIRT-mediated histone deacetylation. SIRT enzyme activity is altered in various malignancies, and inhibiting SIRT6, a histone deacetylase that acts on acetylated H3K9 and H3K56, promotes tumorigenesis. SIRT7, which deacetylates H3K18 and thereby represses transcription of target genes, is activated in cancer to stabilize cells in the transformed state. Nutrients appear to modulate SIRT activity. For example, long-chain fatty acids activate the deacetylase function of SIRT6, and this may affect histone acetylation. Addiction Epigenetic modifications of histone tails in specific regions of the brain are of central importance in addictions, and much of the work on addiction has focused on histone acetylation. Once particular epigenetic alterations occur, they appear to be long-lasting "molecular scars" that may account for the persistence of addictions. Cigarette smokers (about 21% of the US population) are usually addicted to nicotine. After 7 days of nicotine treatment of mice, acetylation of both histone H3 and histone H4 was increased at the FosB promoter in the nucleus accumbens of the brain, causing a 61% increase in FosB expression. This would also increase expression of the splice variant Delta FosB. 
In the nucleus accumbens of the brain, Delta FosB functions as a "sustained molecular switch" and "master control protein" in the development of an addiction. About 7% of the US population is addicted to alcohol. In rats exposed to alcohol for up to 5 days, there was an increase in histone 3 lysine 9 acetylation in the pronociceptin promoter in the brain amygdala complex. This acetylation is an activating mark for pronociceptin. The nociceptin/nociceptin opioid receptor system is involved in the reinforcing or conditioning effects of alcohol. Cocaine addiction occurs in about 0.5% of the US population. Repeated cocaine administration in mice induces hyperacetylation of histone 3 (H3) or histone 4 (H4) at 1,696 genes in one brain "reward" region [the nucleus accumbens (NAc)] and deacetylation at 206 genes. At least 45 genes, shown in previous studies to be upregulated in the NAc of mice after chronic cocaine exposure, were found to be associated with hyperacetylation of H3 or H4. Many of these individual genes are directly related to aspects of addiction associated with cocaine exposure. In rodent models, many agents causing addiction, including tobacco smoke products, alcohol, cocaine, heroin and methamphetamine, cause DNA damage in the brain. During repair of DNA damage, some individual repair events may alter the acetylation of histones at the sites of damage, or cause other epigenetic alterations, and thus leave an epigenetic scar on chromatin. Such epigenetic scars likely contribute to the persistent epigenetic changes found in addictions. In 2013, 22.7 million persons aged 12 or older needed treatment for an illicit drug or alcohol use problem (8.6 percent of persons aged 12 or older). Other disorders Given that the structure of chromatin can be modified to allow or deny access to transcription activators, the regulatory functions of histone acetylation and deacetylation can have implications for genes that cause other diseases. Studies on histone modifications may reveal many novel therapeutic targets. Based on different cardiac hypertrophy models, it has been demonstrated that cardiac stress can result in gene expression changes and alter cardiac function. These changes are mediated through HAT/HDAC posttranslational modification signaling. The HDAC inhibitor trichostatin A was reported to reduce stress-induced cardiomyocyte autophagy. Studies on p300 and CREB-binding protein linked cardiac hypertrophy with cellular HAT activity, suggesting an essential role of histone acetylation status for hypertrophy-responsive genes such as GATA4, SRF, and MEF2. Epigenetic modifications also play a role in neurological disorders. Deregulation of histone modification has been found to be responsible for deregulated gene expression and is hence associated with neurological and psychological disorders, such as schizophrenia and Huntington's disease. Current studies indicate that inhibitors of the HDAC family have therapeutic benefits in a wide range of neurological and psychiatric disorders. Many neurological disorders only affect specific brain regions; therefore, a better understanding of the specificity of HDACs is still required for further investigations into improved treatments. See also Histone acetyltransferase Histone deacetylase Histone methylation Acetylation Phosphorylation Nucleosome References External links Animation of histone tail acetylation and deacetylation: Organic reactions Proteins Post-translational modification
Histone acetylation and deacetylation
[ "Chemistry" ]
7,050
[ "Biomolecules by chemical classification", "Gene expression", "Organic reactions", "Biochemical reactions", "Post-translational modification", "Molecular biology", "Proteins" ]
12,356,622
https://en.wikipedia.org/wiki/Photoionisation%20cross%20section
Photoionisation cross section in the context of condensed matter physics refers to the probability of a particle (usually an electron) being emitted from its electronic state following the absorption of a photon. Cross section in photoemission Photoemission is a useful experimental method for determining and studying electronic states. A small amount of material deposited on a surface sometimes makes only a weak contribution to the photoemission spectra, which makes its identification very difficult. Knowledge of the cross section of a material can help to detect thin layers or 1D nanowires on a substrate: an appropriate choice of photon energy can enhance the photoemission signal from the small amount of deposited material; otherwise the different contributions to the spectra cannot be distinguished. See also Gamma ray cross section ARPES Synchrotron radiation Cross section (physics) Absorption cross section Nuclear cross section References External links Elettra's photoemission cross sections calculations Electromagnetism Condensed matter physics
Photoionisation cross section
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
189
[ "Physical phenomena", "Electromagnetism", "Materials science stubs", "Phases of matter", "Materials science", "Condensed matter physics", "Fundamental interactions", "Condensed matter stubs", "Electromagnetism stubs", "Matter" ]
12,357,222
https://en.wikipedia.org/wiki/Transition%20metal%20dinitrogen%20complex
Transition metal dinitrogen complexes are coordination compounds that contain transition metals as ion centers and dinitrogen molecules (N2) as ligands. Historical background Transition metal complexes of N2 have been studied since 1965, when the first complex was reported by Allen and Senoff. This diamagnetic complex, [Ru(NH3)5(N2)]2+, was synthesized from hydrazine hydrate and ruthenium trichloride and consists of a [Ru(NH3)5]2+ centre attached to one end of N2. The existence of N2 as a ligand in this compound was identified by its IR spectrum, which shows a strong band around 2170–2100 cm−1. In 1966, the molecular structure of [Ru(NH3)5(N2)]Cl2 was determined by Bottomley and Nyburg by X-ray crystallography. The dinitrogen complex trans-[IrCl(N2)(PPh3)2] is made by treating Vaska's complex with aromatic acyl azides. It has a planar geometry. The first preparation of a metal-dinitrogen complex using dinitrogen itself was reported in 1967 by Yamamoto and coworkers. They obtained [Co(H)(N2)(PPh3)3] by reduction of Co(acac)3 with AlEt2OEt under an atmosphere of N2. Containing both hydrido and N2 ligands, the complex was of potential relevance to nitrogen fixation. From the late 1960s, a variety of transition metal-dinitrogen complexes were made, including those with iron, molybdenum and vanadium as metal centers. Interest in such complexes arises because N2 comprises the majority of the atmosphere and because many useful compounds contain nitrogen. Biological nitrogen fixation probably occurs via the binding of N2 to those metal centers in the enzyme nitrogenase, followed by a series of steps that involve electron transfer and protonation. Bonding modes In terms of its bonding to transition metals, N2 is related to CO and acetylene, as all three species have triple bonds. A variety of bonding modes have been characterized. Based on whether the N2 molecules are shared by two or more metal centers, the complexes can be classified as mononuclear or bridging. Based on the geometric relationship between the N2 molecule and the metal center, the complexes can be classified as end-on or side-on. In the end-on bonding modes of transition metal-dinitrogen complexes, the N-N vector can be considered to lie in line with the metal ion center, whereas in the side-on modes, the metal-ligand bond is perpendicular to the N-N vector. Mononuclear, end-on As a ligand, N2 usually binds to metals as an "end-on" ligand, as illustrated by [Ru(NH3)5N2]2+. Such complexes are usually analogous to related CO derivatives. This relationship is illustrated by the pair of complexes IrCl(CO)(PPh3)2 and IrCl(N2)(PPh3)2. In these mononuclear cases, N2 acts both as a σ-donor and a π-acceptor. The M-N-N bond angles are close to 180°. N2 is a weaker pi-acceptor than CO, reflecting the nature of the π* orbitals on CO vs N2. For this reason, few examples exist of complexes containing both CO and N2 ligands. Transition metal-dinitrogen complexes can contain more than one N2 as "end-on" ligands, such as mer-[Mo(N2)3(PPrn2Ph)3], which has octahedral geometry. In another example, the dinitrogen ligand in Mo(N2)2(Ph2PCH2CH2PPh2)2 can be reduced to produce ammonia. Because many nitrogenases contain Mo, there has been particular interest in Mo-N2 complexes. Bridging, end-on N2 also serves as a bridging ligand with "end-on" bonding to two metal centers, as illustrated by {[Ru(NH3)5]2(μ-N2)}4+. These complexes are also called multinuclear dinitrogen complexes. 
In contrast to their mononuclear counterparts, they can be prepared for both early and late transition metals. In 2006, a study of iron-dinitrogen complexes by Holland and coworkers showed that the N–N bond is significantly weakened upon complexation with iron atoms with a low coordination number. The complex involved bidentate chelating ligands attached to the iron atoms in the Fe–N–N–Fe core, in which N2 acts as a bridging ligand between two iron atoms. Increasing the coordination number of iron by modifying the chelating ligands and adding another ligand per iron atom showed an increase in the strength of the N–N bond in the resulting complex. It is thus suspected that Fe in a low-coordination environment is a key factor in the fixation of nitrogen by the nitrogenase enzyme, since its Fe–Mo cofactor also features Fe with low coordination numbers. The average N–N bond length in these bridging end-on dinitrogen complexes is about 1.2 Å. In some cases, the bond length can be as long as 1.4 Å, which is similar to that of an N-N single bond. Hasanayn and co-workers have shown that the Lewis structures of end-on bridging complexes can be assigned based on π-molecular-orbital occupancy, in analogy with simple tetratomic organic molecules. For example, the cores of N2-bridged complexes with 8, 10, or 12 π-electrons can generally be formulated, respectively, as M≡N-N≡M, M=N=N=M, and M-N≡N-M, in analogy with the 8-, 10-, and 12-π-electron organic molecules HC≡C-C≡CH, O=C=C=O, and F-C≡C-F. Mononuclear, side-on In comparison with their end-on counterparts, mononuclear side-on dinitrogen complexes are usually higher in energy, and examples of them are rare. Dinitrogen acts as a π-donor in this type of complex. Fomitchev and Coppens have reported the first crystallographic evidence for side-on coordination of N2 to a single metal center in a photoinduced metastable state. When treated with UV light, the transition metal-dinitrogen complex [Os(NH3)5(N2)]2+ in the solid state can be converted into the metastable state [Os(NH3)5(η2-N2)]2+, in which the dinitrogen vibration shifts from 2025 to 1831 cm−1. Some other examples are considered to exist in the transition states of intramolecular linkage isomerizations. Armor and Taube have reported such isomerizations using 15N-labelled dinitrogen ligands. Bridging, side-on In a second mode of bridging, bimetallic complexes are known wherein the N-N vector is perpendicular to the M-M vector, which can be considered a side-on binding fashion. One example is [(η5-C5Me4H)2Zr]2(μ2,η2,η2-N2). The dimetallic complex can react with H2 to achieve artificial nitrogen fixation by reducing N2. A related ditantalum tetrahydride complex could also reduce N2. Reactivity Cleavage to nitrides When metal nitrido complexes are produced from N2, the intermediacy of a dinitrogen complex is assumed. Some Mo(III) complexes also cleave N2: 2Mo(NR2)3 + N2 → (R2N)3Mo-N2-Mo(NR2)3; (R2N)3Mo-N2-Mo(NR2)3 → 2N≡Mo(NR2)3 Attack by electrophiles Some electron-rich metal dinitrogen complexes are susceptible to attack by electrophiles on nitrogen. When the electrophile is a proton, the reaction is of interest in the context of abiological nitrogen fixation. Some metal-dinitrogen complexes even catalyze the hydrogenation of N2 to ammonia in a cycle that involves N-protonation of a reduced M-N2 complex. See also Abiological nitrogen fixation Main-group element-mediated activation of dinitrogen Transition metal nitrido complex References Coordination complexes Nitrogen compounds
Transition metal dinitrogen complex
[ "Chemistry" ]
1,842
[ "Coordination chemistry", "Coordination complexes" ]
204,680
https://en.wikipedia.org/wiki/Four-momentum
In special relativity, four-momentum (also called momentum–energy or momenergy) is the generalization of the classical three-dimensional momentum to four-dimensional spacetime. Momentum is a vector in three dimensions; similarly four-momentum is a four-vector in spacetime. The contravariant four-momentum of a particle with relativistic energy and three-momentum , where is the particle's three-velocity and the Lorentz factor, is The quantity of above is the ordinary non-relativistic momentum of the particle and its rest mass. The four-momentum is useful in relativistic calculations because it is a Lorentz covariant vector. This means that it is easy to keep track of how it transforms under Lorentz transformations. Minkowski norm Calculating the Minkowski norm squared of the four-momentum gives a Lorentz invariant quantity equal (up to factors of the speed of light ) to the square of the particle's proper mass: where is the metric tensor of special relativity with metric signature for definiteness chosen to be . The negativity of the norm reflects that the momentum is a timelike four-vector for massive particles. The other choice of signature would flip signs in certain formulas (like for the norm here). This choice is not important, but once made it must for consistency be kept throughout. The Minkowski norm is Lorentz invariant, meaning its value is not changed by Lorentz transformations/boosting into different frames of reference. More generally, for any two four-momenta and , the quantity is invariant. Relation to four-velocity For a massive particle, the four-momentum is given by the particle's invariant mass multiplied by the particle's four-velocity, where the four-velocity is and is the Lorentz factor (associated with the speed ), is the speed of light. Derivation There are several ways to arrive at the correct expression for four-momentum. One way is to first define the four-velocity and simply define , being content that it is a four-vector with the correct units and correct behavior. Another, more satisfactory, approach is to begin with the principle of least action and use the Lagrangian framework to derive the four-momentum, including the expression for the energy. One may at once, using the observations detailed below, define four-momentum from the action . Given that in general for a closed system with generalized coordinates and canonical momenta , it is immediate (recalling , , , and , , , in the present metric convention) that is a covariant four-vector with the three-vector part being the (negative of) canonical momentum. Consider initially a system of one degree of freedom . In the derivation of the equations of motion from the action using Hamilton's principle, one finds (generally) in an intermediate stage for the variation of the action, The assumption is then that the varied paths satisfy , from which Lagrange's equations follow at once. When the equations of motion are known (or simply assumed to be satisfied), one may let go of the requirement . In this case the path is assumed to satisfy the equations of motion, and the action is a function of the upper integration limit , but is still fixed. The above equation becomes with , and defining , and letting in more degrees of freedom, Observing that one concludes In a similar fashion, keep endpoints fixed, but let vary. 
This time, the system is allowed to move through configuration space at "arbitrary speed" or with "more or less energy", the field equations still assumed to hold and variation can be carried out on the integral, but instead observe by the fundamental theorem of calculus. Compute using the above expression for canonical momenta, Now using where is the Hamiltonian, leads to, since in the present case, Incidentally, using with in the above equation yields the Hamilton–Jacobi equations. In this context, is called Hamilton's principal function. The action is given by where is the relativistic Lagrangian for a free particle. From this, The variation of the action is To calculate , observe first that and that So or and thus which is just where the second step employs the field equations , , and as in the observations above. Now compare the last three expressions to find with norm , and the famed result for the relativistic energy, where is the now unfashionable relativistic mass, follows. By comparing the expressions for momentum and energy directly, one has that holds for massless particles as well. Squaring the expressions for energy and three-momentum and relating them gives the energy–momentum relation, Substituting in the equation for the norm gives the relativistic Hamilton–Jacobi equation, It is also possible to derive the results from the Lagrangian directly. By definition, which constitute the standard formulae for canonical momentum and energy of a closed (time-independent Lagrangian) system. With this approach it is less clear that the energy and momentum are parts of a four-vector. The energy and the three-momentum are separately conserved quantities for isolated systems in the Lagrangian framework. Hence four-momentum is conserved as well. More on this below. More pedestrian approaches include expected behavior in electrodynamics. In this approach, the starting point is application of Lorentz force law and Newton's second law in the rest frame of the particle. The transformation properties of the electromagnetic field tensor, including invariance of electric charge, are then used to transform to the lab frame, and the resulting expression (again Lorentz force law) is interpreted in the spirit of Newton's second law, leading to the correct expression for the relativistic three- momentum. The disadvantage, of course, is that it isn't immediately clear that the result applies to all particles, whether charged or not, and that it doesn't yield the complete four-vector. It is also possible to avoid electromagnetism and use well tuned experiments of thought involving well-trained physicists throwing billiard balls, utilizing knowledge of the velocity addition formula and assuming conservation of momentum. This too gives only the three-vector part. Conservation of four-momentum As shown above, there are three conservation laws (not independent, the last two imply the first and vice versa): The four-momentum (either covariant or contravariant) is conserved. The total energy is conserved. The 3-space momentum is conserved (not to be confused with the classic non-relativistic momentum ). Note that the invariant mass of a system of particles may be more than the sum of the particles' rest masses, since kinetic energy in the system center-of-mass frame and potential energy from forces between the particles contribute to the invariant mass. As an example, two particles with four-momenta and each have (rest) mass 3GeV/c2 separately, but their total mass (the system mass) is 10GeV/c2. 
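The displayed formulas in this article were lost in extraction. For reference, the standard definition and its Minkowski norm, consistent with the conventions stated above (metric signature (−, +, +, +), so that the norm of a massive particle's four-momentum is negative), are

p^\mu = \left(\frac{E}{c},\ p_x,\ p_y,\ p_z\right) = \left(\gamma m c,\ \gamma m \mathbf{v}\right), \qquad p^\mu p_\mu = -\frac{E^2}{c^2} + |\mathbf{p}|^2 = -m^2 c^2 .

A worked version of the two-particle example just given (the specific four-momenta were elided in the text; the values below are an illustrative choice consistent with the stated masses, in units with c = 1):

p_1 = (5,\ 4,\ 0,\ 0)\ \text{GeV}, \qquad p_2 = (5,\ -4,\ 0,\ 0)\ \text{GeV}, \qquad m_1 = m_2 = \sqrt{5^2 - 4^2} = 3\ \text{GeV},

p_1 + p_2 = (10,\ 0,\ 0,\ 0)\ \text{GeV} \ \Rightarrow\ M = \sqrt{10^2 - 0^2} = 10\ \text{GeV}.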
If these particles were to collide and stick, the mass of the composite object would be 10GeV/c2. One practical application from particle physics of the conservation of the invariant mass involves combining the four-momenta and of two daughter particles produced in the decay of a heavier particle with four-momentum to find the mass of the heavier particle. Conservation of four-momentum gives , while the mass of the heavier particle is given by . By measuring the energies and three-momenta of the daughter particles, one can reconstruct the invariant mass of the two-particle system, which must be equal to . This technique is used, e.g., in experimental searches for Z′ bosons at high-energy particle colliders, where the Z′ boson would show up as a bump in the invariant mass spectrum of electron–positron or muon–antimuon pairs. If the mass of an object does not change, the Minkowski inner product of its four-momentum and corresponding four-acceleration is simply zero. The four-acceleration is proportional to the proper time derivative of the four-momentum divided by the particle's mass, so Canonical momentum in the presence of an electromagnetic potential For a charged particle of charge , moving in an electromagnetic field given by the electromagnetic four-potential: where is the scalar potential and the vector potential, the components of the (not gauge-invariant) canonical momentum four-vector is This, in turn, allows the potential energy from the charged particle in an electrostatic potential and the Lorentz force on the charged particle moving in a magnetic field to be incorporated in a compact way, in relativistic quantum mechanics. Four-momentum in curved spacetime In the case when there is a moving physical system with a continuous distribution of matter in curved spacetime, the primary expression for four-momentum is four-vector with covariant index: Four-momentum is expressed through the energy of physical system and relativistic momentum . At the same time, the four-momentum can be represented as the sum of two non-local four-vectors of integral type: Four-vector is the generalized four-momentum associated with the action of fields on particles; four-vector is the four-momentum of the fields arising from the action of particles on the fields. Energy and momentum , as well as components of four-vectors and can be calculated if the Lagrangian density of the system is given. The following formulas are obtained for the energy and momentum of the system: Here is that part of the Lagrangian density that contains terms with four-currents; is the velocity of matter particles; is the time component of four-velocity of particles; is determinant of metric tensor; is the part of the Lagrangian associated with the Lagrangian density ; is velocity of a particle of matter with number . See also Four-force Four-gradient Pauli–Lubanski pseudovector References Wikisource version Four-vectors Momentum
Four-momentum
[ "Physics", "Mathematics" ]
2,049
[ "Physical quantities", "Quantity", "Four-vectors", "Vector physical quantities", "Momentum", "Moment (physics)" ]
204,682
https://en.wikipedia.org/wiki/Translation%20%28geometry%29
In Euclidean geometry, a translation is a geometric transformation that moves every point of a figure, shape or space by the same distance in a given direction. A translation can also be interpreted as the addition of a constant vector to every point, or as shifting the origin of the coordinate system. In a Euclidean space, any translation is an isometry. As a function If is a fixed vector, known as the translation vector, and is the initial position of some object, then the translation function will work as . If is a translation, then the image of a subset under the function is the translate of by . The translate of by is often written as . Application in classical physics In classical physics, translational motion is movement that changes the position of an object, as opposed to rotation. For example, according to Whittaker: A translation is the operation changing the positions of all points of an object according to the formula where is the same vector for each point of the object. The translation vector common to all points of the object describes a particular type of displacement of the object, usually called a linear displacement to distinguish it from displacements involving rotation, called angular displacements. When considering spacetime, a change of time coordinate is considered to be a translation. As an operator The translation operator turns a function of the original position, , into a function of the final position, . In other words, is defined such that This operator is more abstract than a function, since defines a relationship between two functions, rather than the underlying vectors themselves. The translation operator can act on many kinds of functions, such as when the translation operator acts on a wavefunction, which is studied in the field of quantum mechanics. As a group The set of all translations forms the translation group , which is isomorphic to the space itself, and a normal subgroup of Euclidean group . The quotient group of by is isomorphic to the group of rigid motions which fix a particular origin point, the orthogonal group : Because translation is commutative, the translation group is abelian. There are an infinite number of possible translations, so the translation group is an infinite group. In the theory of relativity, due to the treatment of space and time as a single spacetime, translations can also refer to changes in the time coordinate. For example, the Galilean group and the Poincaré group include translations with respect to time. Lattice groups One kind of subgroup of the three-dimensional translation group are the lattice groups, which are infinite groups, but unlike the translation groups, are finitely generated. That is, a finite generating set generates the entire group. Matrix representation A translation is an affine transformation with no fixed points. Matrix multiplications always have the origin as a fixed point. Nevertheless, there is a common workaround using homogeneous coordinates to represent a translation of a vector space with matrix multiplication: Write the 3-dimensional vector using 4 homogeneous coordinates as . 
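The displayed matrices in this section were lost in extraction; a standard reconstruction of the homogeneous-coordinate construction (writing the translation vector as v = (v_x, v_y, v_z) and a point as p = (p_x, p_y, p_z)) is

\mathbf{p} = \begin{pmatrix} p_x \\ p_y \\ p_z \\ 1 \end{pmatrix}, \qquad T_{\mathbf{v}} = \begin{pmatrix} 1 & 0 & 0 & v_x \\ 0 & 1 & 0 & v_y \\ 0 & 0 & 1 & v_z \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad T_{\mathbf{v}}\,\mathbf{p} = \begin{pmatrix} p_x + v_x \\ p_y + v_y \\ p_z + v_z \\ 1 \end{pmatrix},

with T_{\mathbf{v}}^{-1} = T_{-\mathbf{v}} and T_{\mathbf{u}} T_{\mathbf{v}} = T_{\mathbf{u}+\mathbf{v}}, matching the properties stated below.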
To translate an object by a vector $\mathbf{v}$, each homogeneous vector $\mathbf{p}$ (written in homogeneous coordinates) can be multiplied by this translation matrix: $T_{\mathbf{v}} = \begin{bmatrix} 1 & 0 & 0 & v_x \\ 0 & 1 & 0 & v_y \\ 0 & 0 & 1 & v_z \\ 0 & 0 & 0 & 1 \end{bmatrix}$. As shown below, the multiplication will give the expected result: $T_{\mathbf{v}} \mathbf{p} = \begin{bmatrix} 1 & 0 & 0 & v_x \\ 0 & 1 & 0 & v_y \\ 0 & 0 & 1 & v_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} p_x \\ p_y \\ p_z \\ 1 \end{bmatrix} = \begin{bmatrix} p_x + v_x \\ p_y + v_y \\ p_z + v_z \\ 1 \end{bmatrix} = \mathbf{p} + \mathbf{v}$. The inverse of a translation matrix can be obtained by reversing the direction of the vector: $T^{-1}_{\mathbf{v}} = T_{-\mathbf{v}}$. Similarly, the product of translation matrices is given by adding the vectors: $T_{\mathbf{u}} T_{\mathbf{v}} = T_{\mathbf{u} + \mathbf{v}}$. Because addition of vectors is commutative, multiplication of translation matrices is therefore also commutative (unlike multiplication of arbitrary matrices). Translation of axes While geometric translation is often viewed as an active transformation that changes the position of a geometric object, a similar result can be achieved by a passive transformation that moves the coordinate system itself but leaves the object fixed. The passive version of an active geometric translation is known as a translation of axes. Translational symmetry An object that looks the same before and after translation is said to have translational symmetry. A common example is a periodic function, which is an eigenfunction of a translation operator. Translations of a graph The graph of a real function $f$, the set of points $(x, f(x))$, is often pictured in the real coordinate plane with $x$ as the horizontal coordinate and $y = f(x)$ as the vertical coordinate. Starting from the graph of $f$, a horizontal translation means composing $f$ with a function $x \mapsto x - a$, for some constant number $a$, resulting in a graph consisting of points $(x, f(x - a))$. Each point $(x, y)$ of the original graph corresponds to the point $(x + a, y)$ in the new graph, which pictorially results in a horizontal shift. A vertical translation means composing the function $y \mapsto y + b$ with $f$, for some constant $b$, resulting in a graph consisting of the points $(x, f(x) + b)$. Each point $(x, y)$ of the original graph corresponds to the point $(x, y + b)$ in the new graph, which pictorially results in a vertical shift. For example, taking the quadratic function $f(x) = x^2$, whose graph is a parabola with vertex at $(0, 0)$, a horizontal translation 5 units to the right would be the new function $g(x) = f(x - 5) = (x - 5)^2$, whose vertex has coordinates $(5, 0)$. A vertical translation 3 units upward would be the new function $h(x) = f(x) + 3 = x^2 + 3$, whose vertex has coordinates $(0, 3)$. The antiderivatives of a function all differ from each other by a constant of integration and are therefore vertical translates of each other. Applications For describing vehicle dynamics (or movement of any rigid body), including ship dynamics and aircraft dynamics, it is common to use a mechanical model consisting of six degrees of freedom, which includes translations along three reference axes (as well as rotations about those three axes). These translations are often called surge, sway, and heave. See also 2D computer graphics#Translation Advection Change of basis Parallel transport Rotation matrix Scaling (geometry) Transformation matrix Translational symmetry References Further reading Zazkis, R., Liljedahl, P., & Gadowsky, K. Conceptions of function translation: obstacles, intuitions, and rerouting. Journal of Mathematical Behavior, 22, 437-450. Retrieved April 29, 2014, from www.elsevier.com/locate/jmathb Transformations of Graphs: Horizontal Translations. (2006, January 1). BioMath: Transformation of Graphs. Retrieved April 29, 2014 External links Translation Transform at cut-the-knot Geometric Translation (Interactive Animation) at Math Is Fun Understanding 2D Translation and Understanding 3D Translation by Roger Germundsson, The Wolfram Demonstrations Project. Euclidean symmetries Elementary geometry Transformation (function) Functions and mappings
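To make the homogeneous-coordinate representation above concrete, here is a small NumPy sketch (the vectors and the point are arbitrary example values) that builds a 4×4 translation matrix, applies it to a point, and checks that composing two translations adds their vectors and that inverting one reverses its vector:

```python
import numpy as np

def translation_matrix(v):
    """4x4 homogeneous translation matrix for a 3D vector v."""
    T = np.eye(4)
    T[:3, 3] = v              # translation vector goes in the last column
    return T

Tu = translation_matrix([1.0, 2.0, 3.0])
Tv = translation_matrix([-4.0, 0.5, 2.0])

p = np.array([10.0, 20.0, 30.0, 1.0])    # a point written in homogeneous coordinates

print(Tu @ p)                             # [11. 22. 33.  1.] -- the point shifted by (1, 2, 3)

# Composing translations adds their vectors, and the inverse reverses the vector.
assert np.allclose(Tu @ Tv, translation_matrix([-3.0, 2.5, 5.0]))
assert np.allclose(np.linalg.inv(Tu), translation_matrix([-1.0, -2.0, -3.0]))
```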
Translation (geometry)
[ "Physics", "Mathematics" ]
1,253
[ "Functions and mappings", "Mathematical analysis", "Euclidean symmetries", "Transformation (function)", "Mathematical objects", "Elementary mathematics", "Elementary geometry", "Mathematical relations", "Geometry", "Symmetry" ]
204,762
https://en.wikipedia.org/wiki/Vaporization
Vaporization (or vapo(u)risation) of an element or compound is a phase transition from the liquid phase to vapor. There are two types of vaporization: evaporation and boiling. Evaporation is a surface phenomenon, whereas boiling is a bulk phenomenon (a phenomenon in which the whole object or substance is involved in the process). Evaporation Evaporation is a phase transition from the liquid phase to vapor (a state of a substance below its critical temperature) that occurs at temperatures below the boiling temperature at a given pressure. Evaporation occurs on the surface. Evaporation only occurs when the partial pressure of vapor of a substance is less than the equilibrium vapor pressure. For example, due to constantly decreasing pressures, vapor pumped out of a solution will eventually leave behind a cryogenic liquid. Boiling Boiling is also a phase transition from the liquid phase to the gas phase, but boiling is the formation of vapor as bubbles of vapor below the surface of the liquid. Boiling occurs when the equilibrium vapor pressure of the substance is greater than or equal to the atmospheric pressure. The temperature at which boiling occurs is the boiling temperature, or boiling point. The boiling point varies with the pressure of the environment. Sublimation Sublimation is a direct phase transition from the solid phase to the gas phase, skipping the intermediate liquid phase. Other uses of the term 'vaporization' The term vaporization has also been used in a colloquial or hyperbolic way to refer to the physical destruction of an object that is exposed to intense heat or explosive force, where the object is actually blasted into small pieces rather than literally converted to gaseous form. Examples of this usage include the "vaporization" of the uninhabited Marshall Island of Elugelab in the 1952 Ivy Mike thermonuclear test. Many other examples can be found throughout the various MythBusters episodes that have involved explosives, chief among them being Cement Mix-Up, where they "vaporized" a cement truck with ANFO. At the moment of a large enough meteor or comet impact, a bolide detonation, or a nuclear fission, thermonuclear fusion, or theoretical antimatter weapon detonation, a flux of so many gamma-ray, X-ray, ultraviolet, visible-light and heat photons strikes matter in such a brief amount of time (a great number of high-energy photons, many overlapping in the same physical space) that all molecules lose their atomic bonds and "fly apart". All atoms lose their electron shells and become positively charged ions, in turn emitting photons of a slightly lower energy than they had absorbed. All such matter becomes a gas of nuclei and electrons which rise into the air due to the extremely high temperature or bond to each other as they cool. The matter vaporized this way is immediately a plasma in a state of maximum entropy, and this state steadily decays over time due to natural processes in the biosphere and the effects of physics at normal temperatures and pressures. A similar process occurs during ultrashort pulse laser ablation, where the high flux of incoming electromagnetic radiation strips the target material's surface of electrons, leaving positively charged atoms which undergo a Coulomb explosion. References External links Physical chemistry Chemical processes
Vaporization
[ "Physics", "Chemistry" ]
664
[ "Physical phenomena", "Phase transitions", "Applied and interdisciplinary physics", "Phases of matter", "Critical phenomena", "Chemical processes", "nan", "Chemical process engineering", "Statistical mechanics", "Physical chemistry", "Matter" ]
204,912
https://en.wikipedia.org/wiki/Magnetic%20refrigeration
Magnetic refrigeration is a cooling technology based on the magnetocaloric effect. This technique can be used to attain extremely low temperatures, as well as the ranges used in common refrigerators. A magnetocaloric material warms up when a magnetic field is applied. The warming is due to changes in the internal state of the material releasing heat. When the magnetic field is removed, the material returns to its original state, reabsorbing the heat, and returning to its original temperature. To achieve refrigeration, the material is allowed to radiate away its heat while in the magnetized hot state. When the magnetism is then removed, the material cools to below its original temperature. The effect was first observed in 1881 by the German physicist Emil Warburg, followed by the French physicist P. Weiss and the Swiss physicist A. Piccard in 1917. The fundamental principle was suggested by P. Debye (1926) and W. Giauque (1927). The first working magnetic refrigerators were constructed by several groups beginning in 1933. Magnetic refrigeration was the first method developed for cooling below about 0.3 K (the lowest temperature previously attainable, by pumping on helium vapors). Magnetocaloric effect The magnetocaloric effect (MCE, from magnet and calorie) is a magneto-thermodynamic phenomenon in which a temperature change of a suitable material is caused by exposing the material to a changing magnetic field. This is also known by low temperature physicists as adiabatic demagnetization. In that part of the refrigeration process, a decrease in the strength of an externally applied magnetic field allows the magnetic domains of a magnetocaloric material to become disoriented from the magnetic field by the agitating action of the thermal energy (phonons) present in the material. If the material is isolated so that no energy is allowed to (re)migrate into the material during this time (i.e., an adiabatic process), the temperature drops as the domains absorb the thermal energy to perform their reorientation. The randomization of the domains occurs in a similar fashion to the randomization at the Curie temperature of a ferromagnetic material, except that magnetic dipoles overcome a decreasing external magnetic field while energy remains constant, instead of magnetic domains being disrupted from internal ferromagnetism as energy is added. One of the most notable examples of the magnetocaloric effect is in the chemical element gadolinium and some of its alloys. Gadolinium's temperature increases when it enters certain magnetic fields. When it leaves the magnetic field, the temperature drops. The effect is considerably stronger for the gadolinium alloy Gd5(Si2Ge2). Praseodymium alloyed with nickel (PrNi5) has such a strong magnetocaloric effect that it has allowed scientists to approach to within one millikelvin, one thousandth of a degree, of absolute zero. Equation The magnetocaloric effect can be quantified with the following equation: $\Delta T_{\mathrm{ad}} = -\int_{H_0}^{H_1} \left(\frac{T}{C(T,H)}\right) \left(\frac{\partial M(T,H)}{\partial T}\right)_H dH$, where $\Delta T_{\mathrm{ad}}$ is the adiabatic change in temperature of the magnetic system around temperature T, H is the applied external magnetic field, C is the heat capacity of the working magnet (refrigerant) and M is the magnetization of the refrigerant. From the equation we can see that the magnetocaloric effect can be enhanced by: a large field variation, a magnet material with a small heat capacity, and a magnet with large changes in net magnetization vs. temperature at constant magnetic field.
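As a rough numerical sketch of the equation above, the following Python snippet estimates the adiabatic temperature change by integrating −(T/C)(∂M/∂T) over the applied field for a simple, made-up magnetization model; the model, its parameter values, the constant heat capacity, and the schematic units are illustrative assumptions only, not data for any real refrigerant:

```python
import numpy as np

# Illustrative parameters only: an ordering temperature near room temperature,
# a magnetization scale, and a constant heat capacity.
Tc, M0, C = 294.0, 150.0, 250.0

def M(T, H):
    """Toy magnetization model: grows with field H and falls off as T passes Tc."""
    return M0 * H / (1.0 + np.exp((T - Tc) / 20.0))

def dM_dT(T, H, eps=1e-3):
    """Numerical derivative of M with respect to T at constant H."""
    return (M(T + eps, H) - M(T - eps, H)) / (2.0 * eps)

def delta_T_ad(T, H0=0.0, H1=2.0, steps=201):
    """Adiabatic temperature change for a field sweep H0 -> H1 at temperature T."""
    H = np.linspace(H0, H1, steps)
    integrand = -(T / C) * dM_dT(T, H)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(H))  # trapezoidal rule

print(f"estimated adiabatic temperature change for a 0 -> 2 T sweep at {Tc} K: {delta_T_ad(Tc):.2f} K")
```

The sketch simply makes the structure of the formula visible: the effect is largest where ∂M/∂T is steep (near the ordering temperature) and grows with the size of the field sweep, as the enhancement list above states.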
The adiabatic change in temperature, $\Delta T_{\mathrm{ad}}$, can be seen to be related to the magnet's change in magnetic entropy ($\Delta S_M$), since for an adiabatic field change $\Delta T_{\mathrm{ad}} \approx -\frac{T}{C(T,H)}\,\Delta S_M$. This implies that the absolute change in the magnet's entropy determines the possible magnitude of the adiabatic temperature change under a thermodynamic cycle of magnetic field variation. Thermodynamic cycle The cycle is performed as a refrigeration cycle that is analogous to the Carnot refrigeration cycle, but with increases and decreases in magnetic field strength instead of increases and decreases in pressure. It can be described from a starting point at which the chosen working substance is introduced into a magnetic field, i.e., the magnetic flux density is increased. The working material is the refrigerant, and starts in thermal equilibrium with the refrigerated environment. Adiabatic magnetization: A magnetocaloric substance is placed in an insulated environment. The increasing external magnetic field (+H) causes the magnetic dipoles of the atoms to align, thereby decreasing the material's magnetic entropy and heat capacity. Since overall energy is not lost (yet) and therefore total entropy is not reduced (according to thermodynamic laws), the net result is that the substance is heated (T + ΔT_ad). Isomagnetic enthalpic transfer: This added heat can then be removed (-Q) by a fluid or gas — gaseous or liquid helium, for example. The magnetic field is held constant to prevent the dipoles from reabsorbing the heat. Once sufficiently cooled, the magnetocaloric substance and the coolant are separated (H=0). Adiabatic demagnetization: The substance is returned to another adiabatic (insulated) condition so the total entropy remains constant. However, this time the magnetic field is decreased, the thermal energy causes the magnetic moments to overcome the field, and thus the sample cools, i.e., an adiabatic temperature change. Energy (and entropy) transfers from the thermal degrees of freedom to the magnetic degrees of freedom, which measure the disorder of the magnetic dipoles. Isomagnetic entropic transfer: The magnetic field is held constant to prevent the material from reheating. The material is placed in thermal contact with the environment to be refrigerated. Because the working material is cooler than the refrigerated environment (by design), heat energy migrates into the working material (+Q). Once the refrigerant and refrigerated environment are in thermal equilibrium, the cycle can restart. Applied technique The basic operating principle of an adiabatic demagnetization refrigerator (ADR) is the use of a strong magnetic field to control the entropy of a sample of material, often called the "refrigerant". The magnetic field constrains the orientation of magnetic dipoles in the refrigerant. The stronger the magnetic field, the more aligned the dipoles are, corresponding to lower entropy and heat capacity because the material has (effectively) lost some of its internal degrees of freedom. If the refrigerant is kept at a constant temperature through thermal contact with a heat sink (usually liquid helium) while the magnetic field is switched on, the refrigerant must lose some energy because it is equilibrated with the heat sink.
When the magnetic field is subsequently switched off, the heat capacity of the refrigerant rises again because the degrees of freedom associated with orientation of the dipoles are once again liberated, pulling their share of equipartitioned energy from the motion of the molecules, thereby lowering the overall temperature of a system with decreased energy. Since the system is now insulated when the magnetic field is switched off, the process is adiabatic, i.e., the system can no longer exchange energy with its surroundings (the heat sink), and its temperature decreases below its initial value, that of the heat sink. The operation of a standard ADR proceeds roughly as follows. First, a strong magnetic field is applied to the refrigerant, forcing its various magnetic dipoles to align and putting these degrees of freedom of the refrigerant into a state of lowered entropy. The heat sink then absorbs the heat released by the refrigerant due to its loss of entropy. Thermal contact with the heat sink is then broken so that the system is insulated, and the magnetic field is switched off, increasing the heat capacity of the refrigerant, thus decreasing its temperature below the temperature of the heat sink. In practice, the magnetic field is decreased slowly in order to provide continuous cooling and keep the sample at an approximately constant low temperature. Once the field falls to zero or to some low limiting value determined by the properties of the refrigerant, the cooling power of the ADR vanishes, and heat leaks will cause the refrigerant to warm up. Working materials The magnetocaloric effect (MCE) is an intrinsic property of a magnetic solid. This thermal response of a solid to the application or removal of magnetic fields is maximized when the solid is near its magnetic ordering temperature. Thus, the materials considered for magnetic refrigeration devices should be magnetic materials with a magnetic phase transition temperature near the temperature region of interest. For refrigerators that could be used in the home, this temperature is room temperature. The temperature change can be further increased when the order-parameter of the phase transition changes strongly within the temperature range of interest. The magnitudes of the magnetic entropy and the adiabatic temperature changes are strongly dependent upon the magnetic ordering process. The magnitude is generally small in antiferromagnets, ferrimagnets and spin glass systems but can be much larger for ferromagnets that undergo a magnetic phase transition. First-order phase transitions are characterized by a discontinuity in the magnetization as a function of temperature, resulting in a latent heat. Second-order phase transitions do not have this latent heat associated with the phase transition. In the late 1990s Pecharsky and Gschneidner reported a magnetic entropy change in Gd5(Si2Ge2) that was about 50% larger than that reported for Gd metal, which had the largest known magnetic entropy change at the time. This giant magnetocaloric effect (GMCE) occurred at 270 K, which is lower than that of Gd (294 K). Since the MCE occurs below room temperature, these materials would not be suitable for refrigerators operating at room temperature. Since then, other alloy families have also demonstrated the giant magnetocaloric effect. Gadolinium and its alloys undergo second-order phase transitions that have no magnetic or thermal hysteresis. However, the use of rare earth elements makes these materials very expensive.
(X = Ga, Co, In, Al, Sb) Heusler alloys are also promising candidates for magnetic cooling applications because they have Curie temperatures near room temperature and, depending on composition, can have martensitic phase transformations near room temperature. These materials exhibit the magnetic shape memory effect and can also be used as actuators, energy harvesting devices, and sensors. When the martensitic transformation temperature and the Curie temperature are the same (based on composition) the magnitude of the magnetic entropy change is the largest. In February 2014, GE announced the development of a functional Ni-Mn-based magnetic refrigerator. The development of this technology is very material-dependent and will likely not replace vapor-compression refrigeration without significantly improved materials that are cheap, abundant, and exhibit much larger magnetocaloric effects over a larger range of temperatures. Such materials need to show significant temperature changes under a field of two tesla or less, so that permanent magnets can be used for the production of the magnetic field. Paramagnetic salts The original proposed refrigerant was a paramagnetic salt, such as cerium magnesium nitrate. The active magnetic dipoles in this case are those of the electron shells of the paramagnetic atoms. In a paramagnetic salt ADR, the heat sink is usually provided by a pumped (about 1.2 K) or (about 0.3 K) cryostat. An easily attainable 1 T magnetic field is generally required for initial magnetization. The minimum temperature attainable is determined by the self-magnetization tendencies of the refrigerant salt, but temperatures from 1 to 100 mK are accessible. Dilution refrigerators had for many years supplanted paramagnetic salt ADRs, but interest in space-based and simple to use lab-ADRs has remained, due to the complexity and unreliability of the dilution refrigerator. At a low enough temperature, paramagnetic salts become either diamagnetic or ferromagnetic, limiting the lowest temperature that can be reached using this method. Nuclear demagnetization One variant of adiabatic demagnetization that continues to find substantial research application is nuclear demagnetization refrigeration (NDR). NDR follows the same principles, but in this case the cooling power arises from the magnetic dipoles of the nuclei of the refrigerant atoms, rather than their electron configurations. Since these dipoles are of much smaller magnitude, they are less prone to self-alignment and have lower intrinsic minimum fields. This allows NDR to cool the nuclear spin system to very low temperatures, often 1 μK or below. Unfortunately, the small magnitudes of nuclear magnetic dipoles also makes them less inclined to align to external fields. Magnetic fields of 3 teslas or greater are often needed for the initial magnetization step of NDR. In NDR systems, the initial heat sink must sit at very low temperatures (10–100 mK). This precooling is often provided by the mixing chamber of a dilution refrigerator or a paramagnetic salt. Commercial development Research and a demonstration proof of concept device in 2001 succeeded in applying commercial-grade materials and permanent magnets at room temperatures to construct a magnetocaloric refrigerator. On August 20, 2007, the Risø National Laboratory (Denmark) at the Technical University of Denmark, claimed to have reached a milestone in their magnetic cooling research when they reported a temperature span of 8.7 K. 
They hoped to introduce the first commercial applications of the technology by 2010. As of 2013 this technology had proven commercially viable only for the ultra-low-temperature cryogenic applications that had been available for decades. Magnetocaloric refrigeration systems are composed of pumps, motors, secondary fluids, heat exchangers of different types, magnets and magnetic materials. These processes are greatly affected by irreversibilities, which should be adequately considered. At year-end, Cooltech Applications announced that its first commercial refrigeration equipment would enter the market in 2014. Cooltech Applications launched their first commercially available magnetic refrigeration system on 20 June 2016. At the 2015 Consumer Electronics Show in Las Vegas, a consortium of Haier, Astronautics Corporation of America and BASF presented the first cooling appliance. BASF claimed a 35% improvement for their technology over compressor-based refrigeration. In November 2015, at the Medica 2015 fair, Cooltech Applications presented, in collaboration with Kirsch medical GmbH, the world's first magnetocaloric medical cabinet. One year later, in September 2016, at the 7th International Conference on Magnetic Refrigeration at Room Temperature (Thermag VII) held in Torino, Italy, Cooltech Applications presented the world's first magnetocaloric frozen heat exchanger. In 2017, Cooltech Applications presented a fully functional 500-litre magnetocalorically cooled cabinet, under load, with an air temperature inside the cabinet of +2 °C. That proved that magnetic refrigeration is a mature technology, capable of replacing classic refrigeration solutions. One year later, in September 2018, at the 8th International Conference on Magnetic Refrigeration at Room Temperature (Thermag VIII), Cooltech Applications presented a paper on a magnetocaloric prototype designed as a 15 kW proof-of-concept unit. This has been considered by the community as the largest magnetocaloric prototype ever created. At the same conference, Dr. Sergiu Lionte announced that, due to financial issues, Cooltech Applications had declared bankruptcy. Later, in 2019, the Ubiblue company, today named Magnoric, was formed by some of the former Cooltech Applications team members. The entire patent portfolio from Cooltech Applications has since been taken over by Magnoric, which has also published additional patents. In 2019, at the 5th Delft Days Conference on Magnetocalorics, Dr. Sergiu Lionte presented Ubiblue's (formerly Cooltech Applications) latest prototype. Later, the magnetocaloric community acknowledged that Ubiblue had the most developed magnetocaloric prototypes. Thermal and magnetic hysteresis problems remain to be solved for first-order phase transition materials that exhibit the GMCE. One potential application is in spacecraft. Vapor-compression refrigeration units typically achieve performance coefficients of 60% of that of a theoretical ideal Carnot cycle, much higher than current MR technology. Small domestic refrigerators are however much less efficient. In 2014, giant anisotropic behavior of the magnetocaloric effect was reported at 10 K. The anisotropy of the magnetic entropy change gives rise to a large rotating MCE, offering the possibility to build simplified, compact, and efficient magnetic cooling systems by rotating the material in a constant magnetic field. In 2015 Aprea et al. presented a new refrigeration concept, GeoThermag, which is a combination of magnetic refrigeration technology with that of low-temperature geothermal energy.
To demonstrate the applicability of the GeoThermag technology, they developed a pilot system that consists of a 100-m deep geothermal probe; inside the probe, water flows and is used directly as a regenerating fluid for a magnetic refrigerator operating with gadolinium. The GeoThermag system showed the ability to produce cold water even at 281.8 K in the presence of a heat load of 60 W. In addition, the system has shown the existence of an optimal AMR frequency, f_AMR = 0.26 Hz, for which it was possible to produce cold water at 287.9 K with a thermal load equal to 190 W with a COP of 2.20. Observing the temperature of the cold water that was obtained in the tests, the GeoThermag system showed a good ability to feed radiant-floor cooling systems and a reduced capacity for feeding fan-coil systems. History The effect was first observed by the German physicist Emil Warburg in 1881, and subsequently by the French physicist Pierre Weiss and the Swiss physicist Auguste Piccard in 1917. Major advances first appeared in the late 1920s when cooling via adiabatic demagnetization was independently proposed by chemistry Nobel Laureates Peter Debye in 1926 and William F. Giauque in 1927. It was first demonstrated experimentally by Giauque and his colleague D. P. MacDougall in 1933 for cryogenic purposes when they reached 0.25 K. Between 1933 and 1997, advances in MCE cooling occurred. In 1997, the first near room-temperature proof-of-concept magnetic refrigerator was demonstrated by Karl A. Gschneidner, Jr. of Iowa State University at the Ames Laboratory. This event attracted interest from scientists and companies worldwide who started developing new kinds of room temperature materials and magnetic refrigerator designs. A major breakthrough came in 2002 when a group at the University of Amsterdam demonstrated the giant magnetocaloric effect in MnFe(P,As) alloys that are based on abundant materials. Refrigerators based on the magnetocaloric effect have been demonstrated in laboratories, using magnetic fields starting at 0.6 T up to 10 T. Magnetic fields above 2 T are difficult to produce with permanent magnets and are produced by a superconducting magnet (1 T is about 20,000 times the Earth's magnetic field). Room temperature devices Recent research has focused on near room temperature. Constructed examples of room temperature magnetic refrigerators include: In one example, Prof. Karl A. Gschneidner, Jr. unveiled a proof-of-concept magnetic refrigerator near room temperature on February 20, 1997. He also announced the discovery of the GMCE in Gd5(Si2Ge2) on June 9, 1997. Since then, hundreds of peer-reviewed articles have been written describing materials exhibiting magnetocaloric effects. See also Coefficient of performance (COP) Cryostat Curie's law Dilution refrigerator Elastocaloric effect Electrocaloric effect Thermoacoustic refrigeration References Further reading Lounasmaa, Experimental Principles and Methods Below 1 K, Academic Press (1974). Richardson and Smith, Experimental Techniques in Condensed Matter Physics at Low Temperatures, Addison Wesley (1988). External links Cooling by adiabatic demagnetization - The Feynman Lectures on Physics What is magnetocaloric effect and what materials exhibit this effect the most? Ames Laboratory news release, May 25, 1999, Work begins on prototype magnetic-refrigeration unit.
Refrigeration Systems Terry Heppenstall's notes, University of Newcastle upon Tyne (November 2000) XRS Adiabatic Demagnetization Refrigerator Executive Summary: A Continuous Adiabatic Demagnetization Refrigerator (.doc format) (Google cache) Magnetic technology revolutionizes refrigeration] Thermodynamic cycles Cooling technology Statistical mechanics Condensed matter physics Magnetism
Magnetic refrigeration
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
4,335
[ "Phases of matter", "Materials science", "Condensed matter physics", "Statistical mechanics", "Matter" ]
205,126
https://en.wikipedia.org/wiki/Drill
A drill is a tool used for making round holes or driving fasteners. It is fitted with a bit, either a drill or driver chuck. Hand-operated types are dramatically decreasing in popularity and cordless battery-powered ones are proliferating due to increased efficiency and ease of use. Drills are commonly used in woodworking, metalworking, machine tool fabrication, construction and utility projects. Specially designed versions are made for miniature applications. History Around 35,000 BC, Homo sapiens discovered the benefits of the application of rotary tools. This would have rudimentarily consisted of a pointed rock being spun between the hands to bore a hole through another material. This led to the hand drill, a smooth stick that was sometimes attached to a flint point and was rubbed between the palms. This was used by many ancient civilizations around the world including the Mayans. The earliest perforated artifacts found, such as bone, ivory, shells, and antlers, are from the Upper Paleolithic era. Bow drills (strap-drills) were the first machine drills, as they convert a back-and-forth motion into a rotary motion, and they can be traced back to around 10,000 years ago. It was discovered that tying a cord around a stick, and then attaching the ends of the string to the ends of another stick (a bow), allowed a user to drill quicker and more efficiently. Mainly used to create fire, bow-drills were also used in ancient woodwork, stonework, and dentistry. Archaeologists discovered a Neolithic graveyard in Mehrgarh, Pakistan, dating from the time of the Harappans, around 7,500–9,000 years ago, containing nine adult bodies with a total of eleven teeth that had been drilled. There are hieroglyphs depicting Egyptian carpenters and bead makers in a tomb at Thebes using bow-drills. The earliest evidence of these tools being used in Egypt dates back to around 2500 BCE. The usage of bow-drills was widespread through Europe, Africa, Asia, and North America during ancient times, and they are still used today. Over the years many slight variations of bow and strap drills have developed for the various uses of either boring through materials or lighting fires. The core drill was developed in ancient Egypt by 3000 BC. The pump drill was invented during Roman times. It consists of a vertical spindle aligned by a piece of horizontal wood and a flywheel to maintain accuracy and momentum. The hollow-borer tip, first used around the 13th century, consisted of a stick with a tube-shaped piece of metal on the end, such as copper. This allowed a hole to be drilled while only actually grinding the outer section of it. This completely separates the inner stone or wood from the rest, allowing the drill to pulverize less material to create a similarly sized hole. While the pump-drill and the bow-drill were used in Western Civilization to bore smaller holes for the larger part of human history, the auger was used to drill larger holes starting sometime between the Roman and Medieval ages. The auger allowed for more torque for larger holes. It is uncertain when the brace and bit was invented; however, the earliest picture found so far dates from the 15th century. It is a type of hand-cranked drill that consists of two parts. The brace, on the upper half, is where the user holds and turns it, and on the lower part is the bit. The bit is interchangeable as bits wear down. The auger uses a rotating helical screw similar to the Archimedean screw-shaped bit that is common today.
The gimlet is also worth mentioning as it is a scaled-down version of an auger. Archimedes' screw, present in drills to remove perforation dirt from the hole, was invented in Hellenistic Egypt around 300 BCE. The screw pump is the oldest positive displacement pump. The first records of a water screw, or screw pump, date back to Hellenistic Egypt before the 3rd century BC. The Egyptian screw, used to lift water from the Nile, was composed of tubes wound round a cylinder; as the entire unit rotates, water is lifted within the spiral tube to the higher elevation. A later screw pump design from Egypt had a spiral groove cut on the outside of a solid wooden cylinder, and then the cylinder was covered by boards or sheets of metal closely covering the surfaces between the grooves. In the East, churn drills were invented as early as 221 BC during the Chinese Qin dynasty, capable of reaching a depth of 1500 m. Churn drills in ancient China were built of wood and were labor-intensive, but were able to go through solid rock. The churn drill appears in Europe during the 12th century. In 1835 Isaac Singer is reported to have built a steam-powered churn drill based on the Chinese method of a rod tipped with a bit. Also worth briefly discussing are the early drill presses; they were machine tools that derived from bow-drills but were powered by windmills or water wheels. Drill presses consisted of powered drills that could be raised or lowered into a material, allowing for less force by the user. In 1813 Richard Trevithick designed a steam-driven rotary drill, also the first drill to be powered by steam. In 1848 J.J. Couch invented the first pneumatic percussion drill. The next great advancement in drilling technology, the electric motor, led to the invention of the electric drill. It is credited to mining engineers Arthur James Arnot and William Blanch Brain of Melbourne, Australia, who patented the electric drill in 1889. The first portable handheld drill was created in 1895 by brothers Wilhelm and Carl Fein of Stuttgart, Germany. In 1917 the first trigger-switch, pistol-grip portable drill was patented by Black & Decker. This was the start of the modern drill era. Over the last century the electric drill has been created in a variety of types and multiple sizes for an assortment of specific uses. Types There are many types of drills: some are powered manually, others use electricity (electric drill) or compressed air (pneumatic drill) as the motive power, and a minority are driven by an internal combustion engine (for example, earth drilling augers). Drills with a percussive action (hammer drills) are mostly used in hard materials such as masonry (brick, concrete and stone) or rock. Drilling rigs are used to bore holes in the earth to obtain water or oil. Oil wells, water wells, or holes for geothermal heating are created with large drilling rigs. Some types of hand-held drills are also used to drive screws and other fasteners. Some small appliances that have no motor of their own may be drill-powered, such as small pumps, grinders, etc. Primitive Some forms of drills have been used since pre-history, both to make holes in hard objects and as fire drills. Awl – The shaft is twisted with one hand Hand drill – The shaft is spun by a rubbing motion of the hands Bow drill – The shaft is spun by the cord of a bow that is moved back and forth. Pump drill – The shaft is spun by pushing down on a hand bar and by a flywheel Hand-powered Hand-powered metal drills have been in use for centuries.
They include: Auger, a straight shaft with a wood-cutting blade at the bottom and a T-shaped handle Brace, a modified auger powered by means of a crankshaft Gimlet, a small tool for drilling holes Bradawl, similar to a screwdriver but with a drilling point Cranial drill, an instrument used throughout skull surgery Hand drill or eggbeater drill, so called as it is analogous in form to a hand-cranked eggbeater with bevel gears Breast drill, a heavy duty subtype of eggbeater drill that has a flat chest piece in addition to one or more handles Push, such as Yankee or Persian drills, which use spiral or ratcheting mechanisms Pin chuck, a small hand-held jeweler's drill Power drills Drills powered by electricity (or more rarely, compressed air) are the most common tools in woodworking and machining shops. Electric drills can be corded (fed from an electric outlet through a power cable) or cordless (fed by rechargeable electric batteries). The latter have removable battery packs that can be swapped to allow uninterrupted drilling while recharging. A popular use of hand-held power drills is to set screws into wood, through the use of screwdriver bits. Drills optimized for this purpose have a clutch to avoid damaging the slots on the screw head. Pistol-grip drill – the most common hand-held power drill type. Right-angle drill – used to drill or drive screws in tight spaces. Hammer drill – combines rotary motion with a hammer action for drilling masonry. The hammer action may be engaged or disengaged as required. Drill press – a larger power drill with a rigid holding frame, standalone or mounted on a bench Rotary hammer – combines a primary dedicated hammer mechanism with a separate rotation mechanism, and is used for more substantial material such as masonry or concrete. Most electric hammer drills are rated (input power) at between 600 and 1100 watts. The efficiency is usually 50–60%, i.e., 1000 watts of input are converted into 500–600 watts of output (rotation of the drill and hammering action). For much of the 20th century, attachments could commonly be purchased to convert corded electric hand drills into a range of other power tools, such as orbital sanders and power saws, more cheaply than purchasing dedicated versions of those tools. As the prices of power tools and suitable electric motors have fallen, such attachments have become much less common. Early cordless drills used interchangeable 7.2 V battery packs. Over the years battery voltages have increased, with 18 V drills being most common, but higher voltages are available, such as 24 V, 28 V, and 36 V. This allows these tools to produce as much torque as some corded drills. Common battery types are nickel-cadmium (NiCd) batteries and lithium-ion batteries, with each holding about half the market share. NiCd batteries have been around longer, so they are less expensive (their main advantage), but have more disadvantages compared to lithium-ion batteries. NiCd disadvantages are limited life, self-discharging, environmental problems upon disposal, and eventual internal short-circuiting due to dendrite growth. Lithium-ion batteries are becoming more common because of their short charging time, longer life, absence of memory effect, and low weight. Instead of charging a tool for an hour to get 20 minutes of use, 20 minutes of charge can run the tool for an hour on average. Lithium-ion batteries also hold a charge for a significantly longer time than nickel-cadmium batteries, about two years if not used, vs. 1 to 4 months for a nickel-cadmium battery.
Impact drills Also known as impact wrenches, an impact drill is a form of drill that incorporates a hammer motion along with the rotating motion of a conventional drill. The hammering aspect of the impact drill occurs when the motor alone cannot turn the bolt: the drill then begins exerting bursts of force to "hammer" the bolt in the desired direction. These drills are commonly used to secure long bolts or screws into wood, metal, and concrete, as well as for loosening seized or over-torqued bolts. Impact drills come in two major types, pneumatic and electric, and vary in size depending on application. Electric impact drills are most often found cordless and are widely used in construction, automobile repair, and fabrication. These electric drills are preferred over pneumatically driven ones because of their maneuverability and ease of use. Pneumatic impact drills rely on air and have to remain connected to an air source to maintain power. The chuck on impact drills is different from that of a conventional handheld power drill. The chuck acts more as a collet with a hexagonal shape into which the bits and drivers lock. Impact drivers can also be used to bore holes like a standard pistol-grip drill, but this requires a special bit that will lock into the hexagonal collet. The design of impact drills is almost identical to modern pistol-grip power drills with only one major difference. Impact drills have a shorter, skinnier, stubby receiver where the collet is located, compared to the larger tapered chuck on a conventional drill. This allows the user to fit into smaller places than a normal drill would. Impact drills offer limited torque and speed control. Most handheld drills have a variable speed option, whereas most impact drills have a fixed torque and speed. Impact drills are not designed for precision work due to this lack of adjustability. Hammer drill The hammer action of a hammer drill is provided by two cam plates that make the chuck rapidly pulse forward and backward as the drill spins on its axis. This pulsing (hammering) action is measured in Blows Per Minute (BPM), with 10,000 or more BPM being common. Because the combined mass of the chuck and bit is comparable to that of the body of the drill, the energy transfer is inefficient and can sometimes make it difficult for larger bits to penetrate harder materials such as poured concrete. A standard hammer drill accepts 6 mm (1/4 inch) and 13 mm (1/2 inch) drill bits. The operator experiences considerable vibration, and the cams are generally made from hardened steel to avoid them wearing out quickly. In practice, drills are restricted to standard masonry bits up to 13 mm (1/2 inch) in diameter. A typical application for a hammer drill is installing electrical boxes, conduit straps or shelves in concrete.
In contrast to the cam-type hammer drill, a rotary/pneumatic hammer drill accelerates only the bit. This is accomplished through a piston design, rather than a spinning cam. Rotary hammers have much less vibration and penetrate most building materials. They can also be used as "drill only" or as "hammer only" which extends their usefulness for tasks such as chipping brick or concrete. Hole drilling progress is greatly superior to cam-type hammer drills, and these drills are generally used for holes of 19 mm (3/4 inch) or greater in size. A typical application for a rotary hammer drill is boring large holes for lag bolts in foundations, or installing large lead anchors in concrete for handrails or benches. Drill press A drill press (also known as a pedestal drill, pillar drill, or bench drill) is a style of drill that may be mounted on a stand or bolted to the floor or workbench. Portable models are made, some including a magnetic base. Major components include a base, column (or pillar), adjustable table, spindle, chuck, and drill head, usually driven by an electric motor. The head typically has a set of three handles radiating from a central hub that are turned to move the spindle and chuck vertically. The distance from the center of the chuck to the closest edge of the column is the throat. The swing is simply twice the throat, and the swing is how drill presses are classified and sold. Thus, a tool with 4" throat has an 8" swing (it can drill a hole in the center of an 8" work piece), and is called an 8" drill press. A drill press has a number of advantages over a hand-held drill: Less effort is required to apply the drill to the workpiece. The movement of the chuck and spindle is by a lever working on a rack and pinion, which gives the operator considerable mechanical advantage The table allows a vise or clamp to be used to position and restrain the work, making the operation much more secure The angle of the spindle is fixed relative to the table, allowing holes to be drilled accurately and consistently Drill presses are almost always equipped with more powerful motors compared to hand-held drills. This enables larger drill bits to be used and also speeds up drilling with smaller bits. For most drill presses—especially those meant for woodworking or home use—speed change is achieved by manually moving a belt across a stepped pulley arrangement. Some drill presses add a third stepped pulley to increase the number of available speeds. Modern drill presses can, however, use a variable-speed motor in conjunction with the stepped-pulley system. Medium-duty drill presses such as those used in machine shop (tool room) applications are equipped with a continuously variable transmission. This mechanism is based on variable-diameter pulleys driving a wide, heavy-duty belt. This gives a wide speed range as well as the ability to change speed while the machine is running. Heavy-duty drill presses used for metalworking are usually of the gear-head type described below. Drill presses are often used for miscellaneous workshop tasks other than drilling holes. This includes sanding, honing, and polishing. These tasks can be performed by mounting sanding drums, honing wheels and various other rotating accessories in the chuck. This can be unsafe in some cases, as the chuck arbor, which may be retained in the spindle solely by the friction of a taper fit, may dislodge during operation if the side loads are too high. 
Geared head A geared head drill press transmits power from the motor to the spindle through spur gearing inside the machine's head, eliminating a flexible drive belt. This assures a positive drive at all times and minimizes maintenance. Gear head drills are intended for metalworking applications where the drilling forces are higher and the desired speed (RPM) is lower than that used for woodworking. Levers attached to one side of the head are used to select different gear ratios to change the spindle speed, usually in conjunction with a two- or three-speed motor (this varies with the material). Most machines of this type are designed to be operated on three-phase electric power and are generally of more rugged construction than equivalently sized belt-driven units. Virtually all examples have geared racks for adjusting the table and head position on the column. Geared head drill presses are commonly found in tool rooms and other commercial environments where a heavy duty machine capable of production drilling and quick setup changes is required. In most cases, the spindle is machined to accept Morse taper tooling for greater flexibility. Larger geared head drill presses are frequently fitted with power feed on the quill mechanism, with an arrangement to disengage the feed when a certain drill depth has been achieved or in the event of excessive travel. Some gear-head drill presses have the ability to perform tapping operations without the need for an external tapping attachment. This feature is commonplace on larger gear head drill presses. A clutch mechanism drives the tap into the part under power and then backs it out of the threaded hole once the proper depth is reached. Coolant systems are also common on these machines to prolong tool life under production conditions. Radial arm A radial arm drill press is a large geared-head drill press in which the head can be moved along an arm that radiates from the machine's column. As it is possible to swing the arm relative to the machine's base, a radial arm drill press is able to operate over a large area without having to reposition the workpiece. This feature saves considerable time because it is much faster to reposition the machine's head than it is to unclamp, move, and then re-clamp the workpiece to the table. The size of work that can be handled may be considerable, as the arm can swing out of the way of the table, allowing an overhead crane or derrick to place a bulky workpiece on the table or base. A vise may be used with a radial arm drill press, but more often the workpiece is secured directly to the table or base, or is held in a fixture. Power spindle feed is nearly universal with these machines and coolant systems are common. Larger-size machines often have power feed motors for elevating or moving the arm. The biggest radial arm drill presses are able to drill holes as large as four inches or 100mm diameter in solid steel or cast iron. Radial arm drill presses are specified by the diameter of the column and the length of the arm. The length of the arm is usually the same as the maximum throat distance. The radial arm drill press pictured to the right has a 9-inch diameter and a 3-foot-long arm. The maximum throat distance of this machine would be approximately 36 inches, giving a maximum swing of 72 inches (6 feet or 1.8m). Magnetic drill press A magnetic drill is a portable machine for drilling holes in large and heavy workpieces which are difficult to move or bring to a stationary conventional drilling machine. 
It has a magnetic base and drills holes with the help of cutting tools like annular cutters (broach cutters) or with twist drill bits. There are various types depending on their operations and specializations, like magnetic drilling / tapping machines, cordless, pneumatic, compact horizontal, automatic feed, cross table base etc. Mill Mill drills are a lighter alternative to a milling machine. They combine a drill press (belt driven) with the X/Y coordinate abilities of the milling machine's table and a locking collet that ensures that the cutting tool will not fall from the spindle when lateral forces are experienced against the bit. Although they are light in construction, they have the advantages of being space-saving and versatile as well as inexpensive, being suitable for light machining that may otherwise not be affordable. Surgical Drills are used in surgery to remove or create holes in bone; specialties that use them include dentistry, orthopedic surgery and neurosurgery. The development of surgical drill technology has followed that of industrial drilling, including transitions to the use of lasers, endoscopy, use of advanced imaging technologies to guide drilling, and robotic drills. Accessories Drills are often used simply as motors to drive a variety of applications, in much the same way that tractors with generic PTOs are used to power ploughs, mowers, trailers, etc. Accessories available for drills include: Screw-driving tips of various kinds – flathead, Philips, etc. for driving screws in or out Water pumps Nibblers for cutting metal sheet Rotary sanding discs Rotary polishing discs Rotary cleaning brushes Drill bits Some of the main drill bit types are twist drill bits – a general purpose drill bit for making holes in wood, plastic, metals, concrete and more Counterbore Drill Bits – a drill bit used to enlarge existing holes Countersink Drill Bits – a drill bit to create a wide opening for a screw High-Speed Drill Bits – these are drill bits made to be very strong and therefore are often used to cut metals Spade drill Bits – spade-shaped drill bits used primarily to bore holes in softwoods Hole Saw – a large drill bit with a jagged edge, ideal for cutting larger holes (mostly in wood). Capacity Drilling capacity indicates the maximum diameter a given power drill or drill press can produce in a certain material. It is essentially a proxy for the continuous torque the machine is capable of producing. Typically a given drill will have its capacity specified for different materials, i.e., 10 mm for steel, 25 mm for wood, etc. For example, the maximum recommended capacities for the DeWalt DCD790 cordless drill for specific drill bit types and materials are as follows: See also Boring Dental drill Drifter drill Drill bit Drill bit sizes References External links Nonfatal Occupational Injuries Involving the Eyes – From US Department of Labor (Accessed 29 April 2007) NIOSH Power Tools Sound and Vibrations Database Hole making Rotating machines Tools Construction
Drill
[ "Physics", "Technology", "Engineering" ]
4,998
[ "Physical systems", "Rotating machines", "Machines", "Construction" ]
205,224
https://en.wikipedia.org/wiki/Progeria
Progeria is a specific type of progeroid syndrome, also known as Hutchinson–Gilford syndrome or Hutchinson–Gilford progeroid syndrome (HGPS). A single gene mutation is responsible for causing progeria. The affected gene, known as lamin A (LMNA), makes a protein necessary for holding the cell nucleus together. When this gene mutates, an abnormal form of lamin A protein called progerin is produced. Progeroid syndromes are a group of diseases that cause individuals to age faster than usual, leading to them appearing older than they actually are. People born with progeria typically live until their mid- to late-teens or early twenties. Severe cardiovascular complications usually develop by puberty, later on resulting in death. Signs and symptoms Most children with progeria appear normal at birth and during early infancy. Children with progeria usually develop the first symptoms during their first few months of life. The earliest symptoms may include a failure to thrive and a localized scleroderma-like skin condition. As a child ages past infancy, additional conditions become apparent, usually around 18–24 months. Limited growth, full-body alopecia (hair loss), and a distinctive appearance (a small face with a shallow, recessed jaw and a pinched nose) are all characteristics of progeria. Signs and symptoms of this progressive disease tend to become more marked as the child ages. Later, the condition causes wrinkled skin, kidney failure, loss of eyesight, and atherosclerosis and other cardiovascular problems. Scleroderma, a hardening and tightening of the skin on trunk and extremities of the body, is prevalent. People diagnosed with this disorder usually have small, fragile bodies, like those of older adults. The head is usually large relative to the body, with a narrow, wrinkled face and a beak nose. Prominent scalp veins are noticeable (made more obvious by alopecia), as well as prominent eyes. Musculoskeletal degeneration causes loss of body fat and muscle, stiff joints, hip dislocations, and other symptoms generally absent in the non-elderly population. Individuals usually retain typical mental and motor function. Pathophysiology Hutchinson-Gilford progeroid syndrome (HGPS) is an extremely rare autosomal dominant genetic disorder in which symptoms resembling aspects of aging are manifested at an early age. Its occurrence is usually the result of a sporadic germline mutation; although HGPS is genetically dominant, people rarely live long enough to have children, preventing them from passing the disorder on in a hereditary manner. HGPS is caused by mutations that weaken the structure of the cell nucleus, making normal cell division difficult. The histone mark H4K20me3 is involved and caused by de novo mutations that occur in a gene that encodes lamin A. Lamin A is made but is not processed properly. This poor processing creates an abnormal nuclear morphology and disorganized heterochromatin. Patients also do not have appropriate DNA repair, and they also have increased genomic instability. In normal conditions, the LMNA gene codes for a structural protein called prelamin A, which undergoes a series of processing steps before attaining its final form, called lamin A. Prelamin A contains a "CAAX" where C is a cysteine, A an aliphatic amino acid, and X any amino acid. This motif at the carboxyl-termini of proteins triggers three sequential enzymatic modifications. First, protein farnesyltransferase catalyzes the addition of a farnesyl moiety to the cysteine. 
Second, an endoprotease that recognizes the farnesylated protein catalyzes the peptide bond's cleavage between the cysteine and -aaX. In the third step, isoprenylcysteine carboxyl methyltransferase catalyzes methylation of the carboxyl-terminal farnesyl cysteine. The farnesylated and methylated protein is transported through a nuclear pore to the interior of the nucleus. Once in the nucleus, the protein is cleaved by a protease called zinc metallopeptidase STE24 (ZMPSTE24), which removes the last 15 amino acids, which includes the farnesylated cysteine. After cleavage by the protease, prelamin A is referred to as lamin A. In most mammalian cells, lamin A, along with lamin B1, lamin B2, and lamin C, makes up the nuclear lamina, which provides shape and stability to the inner nuclear envelope. Before the late 20th century, research on progeria yielded very little information about the syndrome. In 2003, the cause of progeria was discovered to be a point mutation in position 1824 of the LMNA gene, which replaces a cytosine with thymine. This mutation creates a 5' cryptic splice site within exon 11, resulting in a shorter than normal mRNA transcript. When this shorter mRNA is translated into protein, it produces an abnormal variant of the prelamin A protein, referred to as progerin. Progerin's farnesyl group cannot be removed because the ZMPSTE24 cleavage site is lacking from progerin, so the abnormal protein is permanently attached to the nuclear rim. One result is that the nuclear lamina does not provide the nuclear envelope with enough structural support, causing it to take on an abnormal shape. Since the support that the nuclear lamina normally provides is necessary for the organizing of chromatin during mitosis, weakening of the nuclear lamina limits the ability of the cell to divide. However, defective cell division is unlikely to be the main defect leading to progeria, particularly because children develop normally without any signs of disease until about one year of age. Farnesylated prelamin A variants also lead to defective DNA repair, which may play a role in the development of progeria. Progerin expression also leads to defects in the establishment of fibroblast cell polarity, which is also seen in physiological aging. To date, over 1,400 SNPs in the LMNA gene are known. They can manifest as changes in mRNA, splicing, or protein amino acid sequence (e.g. Arg471Cys, Arg482Gln, Arg527Leu, Arg527Cys, and Ala529Val). Progerin may also play a role in normal human aging, since its production is activated in typical senescent cells. Unlike other "accelerated aging diseases", such as Werner syndrome, Cockayne syndrome, or xeroderma pigmentosum, progeria may not be directly caused by defective DNA repair. These diseases each cause changes in a few specific aspects of aging but never in every aspect at once, so they are often called "segmental progerias". A 2003 report in Nature said that progeria may be a de novo dominant trait. It develops during cell division in a newly conceived zygote or in the gametes of one of the parents. It is caused by mutations in the LMNA (lamin A protein) gene on chromosome 1; the mutated form of lamin A is commonly known as progerin. One of the authors, Leslie Gordon, was a physician who did not know anything about progeria until her own son, Sam, was diagnosed at 22 months. Gordon and her husband, pediatrician Scott Berns, founded the Progeria Research Foundation. 
A subset of progeria patients with heterozygous mutations of LMNA has presented with an atypical form of the condition, with initial symptoms not developing until late childhood or early adolescence. These patients have had longer lifespans than those with typical-onset progeria. This atypical form is extremely rare, with presentations of the condition varying between patients with even the same mutation. The general phenotype of atypical cases is consistent with typical progeria, but other factors (severity, onset, and lifespan) vary in presentation. Lamin A Lamin A is a major component of a protein scaffold on the inner edge of the nucleus, called the nuclear lamina, that helps organize nuclear processes such as RNA and DNA synthesis. Prelamin A contains a CAAX box at the C-terminus of the protein (where C is a cysteine and A is any aliphatic amino acid). This ensures that the cysteine is farnesylated and allows prelamin A to bind membranes, specifically the nuclear membrane. After prelamin A has been localized to the cell nuclear membrane, the C-terminal amino acids, including the farnesylated cysteine, are cleaved off by a specific protease. The resulting protein, now lamin A, is no longer membrane-bound and carries out functions inside the nucleus. In HGPS, the recognition site that the enzyme requires for cleavage of prelamin A to lamin A is mutated. Lamin A cannot be produced, and prelamin A builds up on the nuclear membrane, causing a characteristic nuclear blebbing. This results in the symptoms of progeria, although the relationship between the misshapen nucleus and the symptoms is not known. A study that compared HGPS patient cells with skin cells from young and elderly normal human subjects found similar defects in the HGPS and elderly cells, including down-regulation of certain nuclear proteins, increased DNA damage, and histone demethylation leading to reduced heterochromatin. Nematodes over their lifespan show progressive lamin changes comparable to HGPS in all cells but neurons and gametes. These studies suggest that lamin A defects are associated with normal aging. Mitochondria The presence of progerin also leads to the accumulation of dysfunctional mitochondria within the cell. These mitochondria are characterized by a swollen morphology, caused by condensation of mtDNA and TFAM within the mitochondria and driven by severe mitochondrial dysfunction (low mitochondrial membrane potential, low ATP production, low respiration capacity and high ROS production), and they contribute substantially to the senescence phenotype. Although the accumulation of defective mitochondria in progeria has yet to be fully explained, it has been proposed that low PGC1-α expression (important for mitochondrial biogenesis, maintenance and function), together with a low LAMP2 protein level and lysosome number (both important for mitophagy, the pathway that degrades defective mitochondria), could be implicated. Diagnosis Skin changes, abnormal growth, and loss of hair occur. These symptoms normally start appearing by one year of age. A genetic test for LMNA mutations can confirm the diagnosis of progeria. Prior to the advent of the genetic test, misdiagnosis was common. 
Differential diagnosis Other syndromes with similar symptoms (non-laminopathy progeroid syndromes) include: Acrogeria Berardinelli-Seip congenital lipodystrophy (congenital generalized lipodystrophy) Cockayne syndrome Ehlers-Danlos syndrome, progeroid form Gerodermia osteodysplastica Hallermann-Streiff syndrome Mandibuloacral dysplasia Neonatal progeroid syndrome (Wiedemann-Rautenstrauch syndrome) Nestor-Guillermo syndrome Penttinen syndrome Petty-Laxova-Weidemann progeroid syndrome POLR3A-related Wiedemann-Rautenstrauch syndrome PYCR1-related Wiedemann-Rautenstrauch-like syndrome Werner syndrome Treatment In November 2020, the U.S. Food and Drug Administration approved lonafarnib, which helps prevent buildup of defective progerin and similar proteins. A 2018 clinical trial reported significantly lower mortality with lonafarnib alone than with no treatment (3.7% vs. 33.3%) at a median post-trial follow-up of 2.2 years. The drug, granted orphan drug status and a rare pediatric disease priority review voucher, is taken twice daily in the form of capsules and may cost US$650,000 per year, making it prohibitively expensive for the vast majority of families. It is unclear how it will be covered by health insurance in the United States. Common side effects of the drug include "nausea, vomiting, diarrhea, infections, decreased appetite, and fatigue". Other treatment options have focused on reducing complications (such as cardiovascular disease) with coronary artery bypass surgery and low-dose acetylsalicylic acid. Growth hormone treatment has been attempted. The use of Morpholinos has also been attempted in mice and cell cultures in order to reduce progerin production. Antisense Morpholino oligonucleotides specifically directed against the mutated exon 11–exon 12 junction in the mutated pre-mRNAs were used. A class of anticancer drugs, the farnesyltransferase inhibitors (FTIs), has been proposed, but their use has been mostly limited to animal models. A Phase II clinical trial using the FTI lonafarnib began in May 2007. In cell studies, another anticancer drug, rapamycin, caused removal of progerin from the nuclear membrane through autophagy. Pravastatin and zoledronate have also been shown to block the production of farnesyl groups. Farnesyltransferase inhibitors (FTIs) are drugs that inhibit the activity of an enzyme needed to make the link between progerin proteins and farnesyl groups. This link anchors progerin permanently to the nuclear rim; in progeria, this attachment distorts the nucleus and contributes to cellular damage. Lonafarnib is an FTI: by preventing the link, it keeps progerin from remaining attached to the nuclear rim and allows the nucleus to adopt a more normal shape. Studies of sirolimus, an mTOR inhibitor, demonstrate that it can reduce the phenotypic effects seen in progeria fibroblasts. Other observed effects of its use include the abolition of nuclear blebbing, degradation of progerin in affected cells, and reduced formation of insoluble progerin aggregates. These results have been observed only in vitro and not in any clinical trial, although it is believed that the treatment might benefit HGPS patients. Recently, it has been demonstrated that the CRM1 protein (a key component of the nuclear export machinery in mammalian cells) is upregulated in HGPS cells, which drives the abnormal relocalization of NES-containing proteins from the nucleus to the cytoplasm. 
Moreover, inhibition of CRM1 in HGPS cells alleviates the associated senescence phenotype and improves mitochondrial function (an important determinant of senescence) and lysosome content. These results are being validated in vivo with selinexor (a CRM1 inhibitor more suitable for human use). Prognosis There is no known cure; as of 2024, the life expectancy of people with progeria is about 15 years. At least 90 percent of patients die from complications of atherosclerosis, such as heart attack or stroke. Mental development is not adversely affected; in fact, intelligence tends to be average to above average. With respect to the features of aging that progeria appears to manifest, the development of symptoms is comparable to aging at a rate eight to ten times faster than normal. With respect to those that progeria does not exhibit, patients show no neurodegeneration or cancer predisposition. They also do not develop conditions that are commonly associated with accumulation of damage, such as cataracts (caused by UV exposure) and osteoarthritis. Although there may not be any successful treatments for progeria itself, there are treatments for the problems it causes, such as arthritic, respiratory, and cardiovascular problems, and recent medicinal breakthroughs enabled one patient, Sammy Basso, to live until age 28 (see his obituary in The Economist, 19 October 2024: https://www.economist.com/obituary/2024/10/17/sammy-basso-led-research-into-his-own-rare-disease). People with progeria have normal reproductive development, and there are known cases of women with progeria who delivered healthy offspring. Epidemiology A study from the Netherlands has shown an incidence of 1 in 20 million births. According to the Progeria Research Foundation, as of September 2020, there are 179 known cases in the world, in 53 countries; 18 of the cases were identified in the United States. Hundreds of cases have been reported in medical history since 1886. However, the Progeria Research Foundation believes there may be as many as 150 undiagnosed cases worldwide. There have been only two cases in which a healthy person was known to carry the LMNA mutation that causes progeria. One family from India had four of six children with progeria. Research Mouse model A mouse model of progeria exists, though in the mouse, the LMNA prelamin A is not mutated. Instead, ZMPSTE24, the specific protease that is required to remove the C-terminus of prelamin A, is missing. Both cases result in the buildup of farnesylated prelamin A on the nuclear membrane and in the characteristic nuclear blebbing. In 2020, base editing was used in a mouse model to target the LMNA mutation that causes production of progerin instead of healthy lamin A, while in 2023 a study designed a peptide that prevented progerin from binding to BubR1, a protein known to regulate aging in mice. DNA repair Repair of DNA double-strand breaks can occur by either of two processes, non-homologous end joining (NHEJ) or homologous recombination (HR). A-type lamins promote genetic stability by maintaining levels of proteins that have key roles in NHEJ and HR. Mouse cells deficient for maturation of prelamin A show increased DNA damage and chromosome aberrations and have increased sensitivity to DNA damaging agents. In progeria, the inability to adequately repair DNA damage due to defective A-type lamin may cause aspects of premature aging (also see DNA damage theory of aging). 
Epigenetic clock analysis of human HGPS Fibroblast samples from children with progeria syndrome exhibit accelerated epigenetic aging effects according to the epigenetic clock for skin and blood samples. History Progeria was first described in 1886 by Jonathan Hutchinson. It was also described independently in 1897 by Hastings Gilford. The condition was later named Hutchinson–Gilford progeria syndrome. Scientists are interested in progeria partly because it might reveal clues about the normal process of aging. Etymology The word progeria comes from the Greek words pro (πρό), 'before, premature', and gēras (γῆρας), 'old age'. Society and culture Notable cases Yan Hui, a student of Confucius, aged rapidly and died at a young age, appearing as an old man by his late 20s. He may be one of the earliest potential examples of progeria in history. In 1987, fifteen-year-old Mickey Hays, who had progeria, appeared along with Jack Elam in the documentary I Am Not a Freak. Elam and Hays first met during the filming of the 1986 film The Aurora Encounter, in which Hays was cast as an alien. The friendship that developed lasted until Hays died in 1992, on his 20th birthday. Elam said, "You know I've met a lot of people, but I've never met anybody that got next to me like Mickey." Harold Kushner, who among other things wrote the book When Bad Things Happen to Good People, had a son, Aaron, who died at the age of 14 in 1977 of progeria. Margaret Casey, a 29-year-old woman with progeria who was then believed to be the oldest survivor of the premature aging disease, died on Sunday, May 26, 1985. Casey, a freelance artist, was admitted to Yale-New Haven Hospital on the night of May 25 with respiratory problems, which caused her death. Sam Berns was an American activist with the disease. He was the subject of the HBO documentary Life According to Sam. Berns also gave a TEDx talk titled "My Philosophy for a Happy Life" on December 13, 2013. Hayley Okines was an English progeria patient who spread awareness of the condition. Leon Botha, the South African painter and DJ who was known, among other things, for his work with the hip-hop duo Die Antwoord, lived with progeria. He died in 2011, aged 26. Tiffany Wedekind of Columbus, Ohio, is believed to be the oldest survivor of progeria at 44 years old as of September 2022. Alexandra Peraut is a Catalan girl with progeria; she has inspired the book Una nena entre vint milions ('A girl in 20 million'), a children's book to explain progeria to youngsters. Adalia Rose Williams, born December 10, 2006, was an American girl with progeria and a notable YouTuber and vlogger who shared her everyday life on social media. She died on January 12, 2022, at the age of 15. Amy Foose, born September 12, 1969, was an American girl with progeria who died at the age of 16 on December 19, 1985. She was the sister of American automobile designer, artist, and TV star Chip Foose, who started a foundation in her name called Amy's Depot. The Progeria Research Foundation gives out The Amy Award every few years, in honor of Amy. Sammy Basso, born December 1, 1995, was an Italian biologist, activist and writer who studied progeria and campaigned to raise awareness of the disease. He died at the age of 28 on October 5, 2024. At the time of his death he was the longest-living survivor of the condition. References Sources Genodermatoses Progeroid syndromes Rare diseases Senescence Diseases named after discoverers
Progeria
[ "Chemistry", "Biology" ]
4,725
[ "Senescence", "Cellular processes", "Metabolism", "Progeroid syndromes" ]
205,393
https://en.wikipedia.org/wiki/Binary%20classification
Binary classification is the task of classifying the elements of a set into one of two groups (each called a class). Typical binary classification problems include: Medical testing to determine if a patient has a certain disease or not; Quality control in industry, deciding whether a specification has been met; In information retrieval, deciding whether a page should be in the result set of a search or not; In administration, deciding whether someone should be issued with a driving licence or not; In cognition, deciding whether an object is food or not food. When measuring the accuracy of a binary classifier, the simplest way is to count the errors. But in the real world, often one of the two classes is more important, so that the numbers of the two different types of errors are both of interest. For example, in medical testing, detecting a disease when it is not present (a false positive) is considered differently from not detecting a disease when it is present (a false negative). Four outcomes Given a classification of a specific data set, there are four basic combinations of actual data category and assigned category: true positives TP (correct positive assignments), true negatives TN (correct negative assignments), false positives FP (incorrect positive assignments), and false negatives FN (incorrect negative assignments). These can be arranged into a 2×2 contingency table, with rows corresponding to actual value – condition positive or condition negative – and columns corresponding to classification value – test outcome positive or test outcome negative. Evaluation From tallies of the four basic outcomes, there are many approaches that can be used to measure the accuracy of a classifier or predictor. Different fields have different preferences. The eight basic ratios A common approach to evaluation is to begin by computing two ratios of a standard pattern. There are eight basic ratios of this form that one can compute from the contingency table, which come in four complementary pairs (each pair summing to 1). These are obtained by dividing each of the four numbers by the sum of its row or column, yielding eight numbers, which can be referred to generically in the form "true positive row ratio" or "false negative column ratio". There are thus two pairs of column ratios and two pairs of row ratios, and one can summarize these with four numbers by choosing one ratio from each pair – the other four numbers are the complements. The row ratios are: the true positive rate (TPR) = (TP/(TP+FN)), aka sensitivity or recall, which is the proportion of the population with the condition for which the test is correct, with complement the false negative rate (FNR) = (FN/(TP+FN)); and the true negative rate (TNR) = (TN/(TN+FP)), aka specificity (SPC), with complement the false positive rate (FPR) = (FP/(TN+FP)). The row ratios are independent of prevalence. The column ratios are: the positive predictive value (PPV, aka precision) = (TP/(TP+FP)), which is the proportion of the population with a given test result for which the test is correct, with complement the false discovery rate (FDR) = (FP/(TP+FP)); and the negative predictive value (NPV) = (TN/(TN+FN)), with complement the false omission rate (FOR) = (FN/(TN+FN)). The column ratios depend on prevalence. In diagnostic testing, the main ratios used are the true row ratios – true positive rate and true negative rate – where they are known as sensitivity and specificity. 
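To make the eight basic ratios concrete, the following is a minimal sketch in Python that computes the four row ratios and four column ratios from the four counts of the contingency table; the function name and the example counts are illustrative only and are not taken from any particular library.

```python
def basic_ratios(tp, fn, fp, tn):
    """Compute the eight basic ratios from the four outcome counts.

    Rows of the contingency table are the actual condition
    (positive: TP + FN, negative: FP + TN); columns are the test
    outcome (positive: TP + FP, negative: FN + TN).
    """
    return {
        # Row (condition) ratios - independent of prevalence
        "TPR (sensitivity/recall)": tp / (tp + fn),
        "FNR": fn / (tp + fn),
        "TNR (specificity)": tn / (tn + fp),
        "FPR": fp / (tn + fp),
        # Column (test outcome) ratios - depend on prevalence
        "PPV (precision)": tp / (tp + fp),
        "FDR": fp / (tp + fp),
        "NPV": tn / (tn + fn),
        "FOR": fn / (tn + fn),
    }

# Hypothetical counts for a diagnostic test evaluated on 1,000 people.
ratios = basic_ratios(tp=90, fn=10, fp=45, tn=855)
for name, value in ratios.items():
    print(f"{name}: {value:.3f}")
```

Each complementary pair (TPR/FNR, TNR/FPR, PPV/FDR, NPV/FOR) sums to 1, which is a useful sanity check on any implementation.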
In information retrieval, the main ratios are the true positive ratios (row and column) – positive predictive value and true positive rate – where they are known as precision and recall. Cullerne Bown has suggested a flow chart for determining which pair of indicators should be used when. Otherwise, there is no general rule for deciding. There is also no general agreement on how the pair of indicators should be used to decide on concrete questions, such as when to prefer one classifier over another. One can take ratios of a complementary pair of ratios, yielding four likelihood ratios (two column ratios of ratios, two row ratios of ratios). This is primarily done for the row (condition) ratios, yielding likelihood ratios in diagnostic testing. Taking the ratio of one of these groups of ratios yields a final ratio, the diagnostic odds ratio (DOR). This can also be defined directly as (TP×TN)/(FP×FN) = (TP/FN)/(FP/TN); this has a useful interpretation – as an odds ratio – and is prevalence-independent. Other metrics There are a number of other metrics, most simply the accuracy or Fraction Correct (FC), which measures the fraction of all instances that are correctly categorized; the complement is the Fraction Incorrect (FiC). The F-score combines precision and recall into one number via a choice of weighting, most simply equal weighting, as the balanced F-score (F1 score). Some metrics come from regression coefficients: the markedness and the informedness, and their geometric mean, the Matthews correlation coefficient. Other metrics include Youden's J statistic, the uncertainty coefficient, the phi coefficient, and Cohen's kappa. Statistical binary classification Statistical classification is a problem studied in machine learning in which the classification is performed on the basis of a classification rule. It is a type of supervised learning, a method of machine learning where the categories are predefined, and is used to categorize new probabilistic observations into said categories. When there are only two categories the problem is known as statistical binary classification. Some of the methods commonly used for binary classification are: Decision trees Random forests Bayesian networks Support vector machines Neural networks Logistic regression Probit model Genetic Programming Multi expression programming Linear genetic programming Each classifier is best in only a select domain based upon the number of observations, the dimensionality of the feature vector, the noise in the data and many other factors. For example, random forests perform better than SVM classifiers for 3D point clouds. Converting continuous values to binary Binary classification may be a form of dichotomization in which a continuous function is transformed into a binary variable. Tests whose results are of continuous values, such as most blood values, can artificially be made binary by defining a cutoff value, with test results being designated as positive or negative depending on whether the resultant value is higher or lower than the cutoff. However, such conversion causes a loss of information, as the resultant binary classification does not tell how much above or below the cutoff a value is. As a result, when converting a continuous value that is close to the cutoff to a binary one, the resultant positive or negative predictive value is generally higher than the predictive value given directly from the continuous value. 
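As a rough illustration of the information that cutoff-based dichotomization discards, the sketch below (hypothetical cutoff and measurement values, not drawn from any specific assay) maps continuous test results to binary labels and shows that a value just above the cutoff and one far above it become indistinguishable.

```python
def dichotomize(value, cutoff):
    """Convert a continuous test result to a binary label."""
    return "positive" if value >= cutoff else "negative"

CUTOFF = 50.0  # hypothetical assay cutoff, in the same units as the measurements

for measured in (3.0, 48.0, 52.0, 200_000.0):
    print(f"measured = {measured:>9}: {dichotomize(measured, CUTOFF)}")
# A value barely above the cutoff and one far above it both print
# "positive", so the distance from the cutoff is lost.
```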
In such cases, the designation of the test of being either positive or negative gives the appearance of an inappropriately high certainty, while the value is in fact in an interval of uncertainty. For example, with the urine concentration of hCG as a continuous value, a urine pregnancy test that measured 52 mIU/ml of hCG may show as "positive" with 50 mIU/ml as cutoff, but is in fact in an interval of uncertainty, which may be apparent only by knowing the original continuous value. On the other hand, a test result very far from the cutoff generally has a resultant positive or negative predictive value that is lower than the predictive value given from the continuous value. For example, a urine hCG value of 200,000 mIU/ml confers a very high probability of pregnancy, but conversion to binary values results in that it shows just as "positive" as the one of 52 mIU/ml. See also Approximate membership query filter Examples of Bayesian inference Classification rule Confusion matrix Detection theory Kernel methods Multiclass classification Multi-label classification One-class classification Prosecutor's fallacy Receiver operating characteristic Thresholding (image processing) Uncertainty coefficient, aka proficiency Qualitative property Precision and recall (equivalent classification schema) References Bibliography Nello Cristianini and John Shawe-Taylor. An Introduction to Support Vector Machines and other kernel-based learning methods. Cambridge University Press, 2000. ( SVM Book) John Shawe-Taylor and Nello Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004. (Website for the book) Bernhard Schölkopf and A. J. Smola: Learning with Kernels. MIT Press, Cambridge, Massachusetts, 2002. Statistical classification Machine learning
Binary classification
[ "Engineering" ]
1,793
[ "Artificial intelligence engineering", "Machine learning" ]
205,406
https://en.wikipedia.org/wiki/Microgram
In the metric system, a microgram or microgramme is a unit of mass equal to one millionth () of a gram. The unit symbol is μg according to the International System of Units (SI); the recommended symbol in the United States and United Kingdom when communicating medical information is mcg. In μg, the prefix symbol for micro- is the Greek letter μ (mu). Abbreviation and symbol confusion When the Greek lowercase "μ" (mu) is typographically unavailable, it is occasionally – although not properly – replaced by the Latin lowercase "u". The United States–based Institute for Safe Medication Practices (ISMP) and the U.S. Food and Drug Administration (FDA) recommend that the symbol μg should not be used when communicating medical information due to the risk that the prefix μ (micro-) might be misread as the prefix m (milli-), resulting in a thousandfold overdose. The ISMP recommends the non-SI symbol mcg instead. However, the abbreviation mcg is also the symbol for an obsolete centimetre–gram–second system of units unit of measure known as millicentigram, which is equal to 10 μg. Gamma (symbol: γ) is a deprecated non-SI unit of mass equal to 1 μg. A fullwidth version of the "microgram" symbol is encoded by Unicode at code point for use in CJK contexts. In other contexts, a sequence of the Greek letter mu (U+03BC) and Latin letter g (U+0067) should be used. See also List of SI prefixes Orders of magnitude (mass), listing a few items that have a mass of around 1 μg. References SI derived units Units of mass
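Because the thousandfold gap between milligrams and micrograms is exactly the error the ISMP warning above is meant to prevent, a small sketch (hypothetical function and dose values, shown only to make the conversion factors explicit) can illustrate the arithmetic:

```python
# Conversion factors expressed relative to one gram.
MICROGRAM = 1e-6   # 1 ug (mcg) = one millionth of a gram
MILLIGRAM = 1e-3   # 1 mg = one thousandth of a gram

def micrograms_to_milligrams(value_ug):
    """Convert a quantity expressed in micrograms to milligrams."""
    return value_ug * MICROGRAM / MILLIGRAM

dose_ug = 500.0                            # hypothetical dose: 500 ug
print(micrograms_to_milligrams(dose_ug))   # 0.5 mg
# Misreading "500 ug" as "500 mg" would be a thousandfold overdose.
```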
Microgram
[ "Physics", "Mathematics" ]
370
[ "Matter", "Quantity", "Units of mass", "Mass", "Units of measurement" ]
205,464
https://en.wikipedia.org/wiki/Human%20microbiome
The human microbiome is the aggregate of all microbiota that reside on or within human tissues and biofluids along with the corresponding anatomical sites in which they reside, including the gastrointestinal tract, skin, mammary glands, seminal fluid, uterus, ovarian follicles, lung, saliva, oral mucosa, conjunctiva, and the biliary tract. Types of human microbiota include bacteria, archaea, fungi, protists, and viruses. Though micro-animals can also live on the human body, they are typically excluded from this definition. In the context of genomics, the term human microbiome is sometimes used to refer to the collective genomes of resident microorganisms; however, the term human metagenome has the same meaning. The human body hosts many microorganisms, with approximately the same order of magnitude of non-human cells as human cells. Some microorganisms that humans host are commensal, meaning they co-exist without harming humans; others have a mutualistic relationship with their human hosts. Conversely, some non-pathogenic microorganisms can harm human hosts via the metabolites they produce, like trimethylamine, which the human body converts to trimethylamine N-oxide via FMO3-mediated oxidation. Certain microorganisms perform tasks that are known to be useful to the human host, but the role of most of them is not well understood. Those that are expected to be present, and that under normal circumstances do not cause disease, are sometimes deemed normal flora or normal microbiota. During early life, the establishment of a diverse and balanced human microbiota plays a critical role in shaping an individual's long-term health. Studies have shown that the composition of the gut microbiota during infancy is influenced by various factors, including mode of delivery, breastfeeding, and exposure to environmental factors. There are several beneficial species of bacteria and potential probiotics present in breast milk. Research has highlighted the beneficial effects of a healthy microbiota in early life, such as the promotion of immune system development, regulation of metabolism, and protection against pathogenic microorganisms. Understanding the complex interplay between the human microbiota and early life health is crucial for developing interventions and strategies to support optimal microbiota development and improve overall health outcomes in individuals. The Human Microbiome Project (HMP) took on the project of sequencing the genome of the human microbiota, focusing particularly on the microbiota that normally inhabit the skin, mouth, nose, digestive tract, and vagina. It reached a milestone in 2012 when it published its initial results. Terminology Though widely known as flora or microflora, this is a misnomer in technical terms, since the word root flora pertains to plants, and biota refers to the total collection of organisms in a particular ecosystem. Recently, the more appropriate term microbiota is applied, though its use has not eclipsed the entrenched use and recognition of flora with regard to bacteria and other microorganisms. Both terms are being used in different literature. Relative numbers The number of bacterial cells in the human body is estimated to be around 38 trillion, while the estimate for human cells is around 30 trillion. The number of bacterial genes is estimated to be 2 million, 100 times the number of approximately 20,000 human genes. 
Study The problem of elucidating the human microbiome is essentially identifying the members of a microbial community, which includes bacteria, eukaryotes, and viruses. This is done primarily using deoxyribonucleic acid (DNA)-based studies, though ribonucleic acid (RNA), protein and metabolite based studies are also performed. DNA-based microbiome studies typically can be categorized as either targeted amplicon studies or, more recently, shotgun metagenomic studies. The former focuses on specific known marker genes and is primarily informative taxonomically, while the latter is an entire metagenomic approach which can also be used to study the functional potential of the community. One of the challenges that is present in human microbiome studies, but not in other metagenomic studies, is to avoid including the host DNA in the study. Aside from simply elucidating the composition of the human microbiome, one of the major questions involving the human microbiome is whether there is a "core", that is, whether there is a subset of the community that is shared among most humans. If there is a core, then it would be possible to associate certain community compositions with disease states, which is one of the goals of the HMP. It is known that the human microbiome (such as the gut microbiota) is highly variable both within a single subject and among different individuals, a phenomenon which is also observed in mice. On 13 June 2012, a major milestone of the HMP was announced by the National Institutes of Health (NIH) director Francis Collins. The announcement was accompanied with a series of coordinated articles published in Nature and several journals in the Public Library of Science (PLoS) on the same day. By mapping the normal microbial make-up of healthy humans using genome sequencing techniques, the researchers of the HMP have created a reference database and the boundaries of normal microbial variation in humans. From 242 healthy U.S. volunteers, more than 5,000 samples were collected from tissues from 15 (men) to 18 (women) body sites such as mouth, nose, skin, lower intestine (stool), and vagina. All the DNA, human and microbial, were analyzed with DNA sequencing machines. The microbial genome data were extracted by identifying the bacterial specific ribosomal RNA, 16S rRNA. The researchers calculated that more than 10,000 microbial species occupy the human ecosystem, and they have identified 81–99% of the genera. Analysis after the processing The statistical analysis is essential to validate the obtained results (ANOVA can be used to size the differences between the groups); if it is paired with graphical tools, the outcome is easily visualized and understood. Once a metagenome is assembled, it is possible to infer the functional potential of the microbiome. The computational challenges for this type of analysis are greater than for single genomes, because usually metagenomes assemblers have poorer quality, and many recovered genes are non-complete or fragmented. After the gene identification step, the data can be used to carry out a functional annotation by means of multiple alignment of the target genes against orthologs databases. Marker gene analysis It is a technique that exploits primers to target a specific genetic region and enables to determine the microbial phylogenies. The genetic region is characterized by a highly variable region which can confer detailed identification; it is delimited by conserved regions, which function as binding sites for primers used in PCR. 
The main gene used to characterize bacteria and archaea is the 16S rRNA gene, while fungal identification is based on the internal transcribed spacer (ITS). The technique is fast and relatively inexpensive and enables a low-resolution classification of a microbial sample to be obtained; it is optimal for samples that may be contaminated by host DNA. Primer affinity varies among DNA sequences, which may result in biases during the amplification reaction; indeed, low-abundance samples are susceptible to overamplification errors, since contaminating microorganisms become over-represented as the number of PCR cycles increases. Therefore, the optimization of primer selection can help to decrease such errors, although it requires complete knowledge of the microorganisms present in the sample and their relative abundances. Marker gene analysis can be influenced by the primer choice; in this kind of analysis, it is desirable to use a well-validated protocol (such as the one used in the Earth Microbiome Project). The first thing to do in a marker gene amplicon analysis is to remove sequencing errors; many sequencing platforms are very reliable, but most of the apparent sequence diversity is still due to errors during the sequencing process. To reduce this phenomenon, a first approach is to cluster sequences into operational taxonomic units (OTUs): this process consolidates similar sequences (a 97% similarity threshold is usually adopted) into a single feature that can be used in further analysis steps; this method, however, discards SNPs because they get clustered into a single OTU. Another approach is oligotyping, which includes position-specific information from 16S rRNA sequencing to detect small nucleotide variations and to discriminate between closely related distinct taxa. These methods give as an output a table of DNA sequences and counts of the different sequences per sample rather than OTUs. Another important step in the analysis is to assign a taxonomic name to microbial sequences in the data. This can be done using machine learning approaches that can reach a genus-level accuracy of about 80%. Other popular analysis packages provide support for taxonomic classification using exact matches to reference databases and should provide greater specificity, but poorer sensitivity. Unclassified microorganisms should be further checked for organelle sequences. Phylogenetic analysis Many methods that exploit phylogenetic inference use the 16S rRNA gene for archaea and bacteria and the 18S rRNA gene for eukaryotes. Phylogenetic comparative methods (PCS) are based on the comparison of multiple traits among microorganisms; the principle is that the more closely related the organisms are, the more traits they share. Usually PCS are coupled with phylogenetic generalized least squares (PGLS) or other statistical analyses to get more significant results. Ancestral state reconstruction is used in microbiome studies to impute trait values for taxa whose traits are unknown. This is commonly performed with PICRUSt, which relies on available databases. Phylogenetic variables are chosen by researchers according to the type of study: through the selection of some variables with significant biological information, it is possible to reduce the dimension of the data to analyse. Phylogeny-aware distance calculations are usually performed with UniFrac or similar tools, such as Sørensen's index or Rao's D, to quantify the differences between the different communities. 
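As a non-phylogenetic baseline for the kind of between-community comparison mentioned above, the sketch below computes the Sørensen–Dice similarity between two communities described simply as sets of observed taxa; the sample taxon lists are invented for illustration, and real analyses would typically use abundance-weighted, phylogeny-aware measures such as UniFrac instead.

```python
def sorensen_dice(community_a, community_b):
    """Sorensen-Dice similarity between two sets of observed taxa.

    Returns 2|A & B| / (|A| + |B|): 1.0 means identical taxon sets,
    0.0 means no shared taxa. The corresponding dissimilarity is
    1 minus the similarity.
    """
    a, b = set(community_a), set(community_b)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

# Invented taxon lists for two hypothetical stool samples.
sample_1 = {"Bacteroides", "Faecalibacterium", "Prevotella", "Roseburia"}
sample_2 = {"Bacteroides", "Faecalibacterium", "Akkermansia"}
print(sorensen_dice(sample_1, sample_2))  # 0.571...
```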
All of these phylogeny-based methods are negatively affected by horizontal gene transfer (HGT), since it can generate errors and make distantly related species appear correlated. There are different ways to reduce the negative impact of HGT: the use of multiple genes, or computational tools to assess the probability of putative HGT events. Ecological network analysis Microbial communities develop with complex dynamics and can be viewed and analyzed as an ecosystem. The ecological interactions between microbes govern the community's change, equilibrium, and stability, and can be represented by a population dynamics model. The study of the ecological features of the microbiome is growing rapidly and is helping to clarify its fundamental properties. Understanding the underlying rules of microbial communities could help in treating diseases related to unstable microbial communities. A basic question is whether different humans, who harbor different microbial communities, share the same underlying microbial dynamics. Increasing evidence indicates that the dynamics are indeed universal. Answering this question is a basic step toward developing treatment strategies based on the complex dynamics of human microbial communities. Other properties must also be taken into account when developing intervention strategies for controlling human microbial dynamics; controlling microbial communities could ultimately help treat serious and harmful diseases. Types Bacteria Populations of microbes (such as bacteria and yeasts) inhabit the skin and mucosal surfaces in various parts of the body. Their role forms part of normal, healthy human physiology; however, if microbe numbers grow beyond their typical ranges (often due to a compromised immune system) or if microbes populate (such as through poor hygiene or injury) areas of the body normally not colonized or sterile (such as the blood, or the lower respiratory tract, or the abdominal cavity), disease can result (causing, respectively, bacteremia/sepsis, pneumonia, and peritonitis). The Human Microbiome Project found that individuals host thousands of bacterial types, different body sites having their own distinctive communities. Skin and vaginal sites showed less diversity than the mouth and gut, which showed the greatest richness. The bacterial makeup for a given site on a body varies from person to person, not only in type, but also in abundance. Bacteria of the same species found throughout the mouth are of multiple subtypes, preferring to inhabit distinctly different locations in the mouth. Even the enterotypes in the human gut, previously thought to be well understood, are from a broad spectrum of communities with blurred taxon boundaries. It is estimated that 500 to 1,000 species of bacteria live in the human gut but belong to just a few phyla: Bacillota and Bacteroidota dominate but there are also Pseudomonadota, Verrucomicrobiota, Actinobacteriota, Fusobacteriota, and "Cyanobacteria". A number of types of bacteria, such as Actinomyces viscosus and A. naeslundii, live in the mouth, where they are part of a sticky substance called plaque. If this is not removed by brushing, it hardens into calculus (also called tartar). The same bacteria also secrete acids that dissolve tooth enamel, causing tooth decay. The vaginal microflora consists mostly of various Lactobacillus species. 
It was long thought that the most common of these species was Lactobacillus acidophilus, but it has later been shown that L. iners is in fact most common, followed by L. crispatus. Other lactobacilli found in the vagina are L. jensenii, L. delbruekii and L. gasseri. Disturbance of the vaginal flora can lead to infections such as bacterial vaginosis and candidiasis. Archaea Archaea are present in the human gut, but, in contrast to the enormous variety of bacteria in this organ, the numbers of archaeal species are much more limited. The dominant group are the methanogens, particularly Methanobrevibacter smithii and Methanosphaera stadtmanae. However, colonization by methanogens is variable, and only about 50% of humans have easily detectable populations of these organisms. As of 2007, no clear examples of archaeal pathogens were known, although a relationship has been proposed between the presence of some methanogens and human periodontal disease. Methane-dominant small intestinal bacterial overgrowth (SIBO) is also predominantly caused by methanogens, and Methanobrevibacter smithii in particular. Fungi Fungi, in particular yeasts, are present in the human gut. The best-studied of these are Candida species due to their ability to become pathogenic in immunocompromised and even in healthy hosts. Yeasts are also present on the skin, such as Malassezia species, where they consume oils secreted from the sebaceous glands. Viruses Viruses, especially bacterial viruses (bacteriophages), colonize various body sites. These colonized sites include the skin, gut, lungs, and oral cavity. Virus communities have been associated with some diseases, and do not simply reflect the bacterial communities. In January 2024, biologists reported the discovery of "obelisks", a new class of viroid-like elements, and "oblins", their related group of proteins, in the human microbiome. Anatomical areas Skin A study of 20 skin sites on each of ten healthy humans found 205 identified genera in 19 bacterial phyla, with most sequences assigned to four phyla: Actinomycetota (51.8%), Bacillota (24.4%), Pseudomonadota (16.5%), and Bacteroidota (6.3%). A large number of fungal genera are present on healthy human skin, with some variability by region of the body; however, during pathological conditions, certain genera tend to dominate in the affected region. For example, Malassezia is dominant in atopic dermatitis and Acremonium is dominant on dandruff-affected scalps. The skin acts as a barrier to deter the invasion of pathogenic microbes. The human skin contains microbes that reside either in or on the skin and can be residential or transient. Resident microorganism types vary in relation to skin type on the human body. A majority of microbes reside on superficial cells on the skin or prefer to associate with glands. These glands such as oil or sweat glands provide the microbes with water, amino acids, and fatty acids. In addition, resident bacteria that associated with oil glands are often Gram-positive and can be pathogenic. Conjunctiva A small number of bacteria and fungi are normally present in the conjunctiva. Classes of bacteria include Gram-positive cocci (e.g., Staphylococcus and Streptococcus) and Gram-negative rods and cocci (e.g., Haemophilus and Neisseria) are present. Fungal genera include Candida, Aspergillus, and Penicillium. The lachrymal glands continuously secrete, keeping the conjunctiva moist, while intermittent blinking lubricates the conjunctiva and washes away foreign material. 
Tears contain bactericides such as lysozyme, so microorganisms have difficulty surviving the lysozyme and settling on the epithelial surfaces. Gastrointestinal tract In humans, the composition of the gastrointestinal microbiome is established during birth. The mode of birth, Cesarean section or vaginal delivery, also influences the gut's microbial composition. Babies born through the vaginal canal have non-pathogenic, beneficial gut microbiota similar to those found in the mother. However, the gut microbiota of babies delivered by C-section harbors more pathogenic bacteria such as Escherichia coli and Staphylococcus, and it takes longer for non-pathogenic, beneficial gut microbiota to develop. The relationship between some gut microbiota and humans is not merely commensal (a non-harmful coexistence), but rather a mutualistic relationship. Some human gut microorganisms benefit the host by fermenting dietary fiber into short-chain fatty acids (SCFAs), such as acetic acid and butyric acid, which are then absorbed by the host. Intestinal bacteria also play a role in synthesizing vitamin B and vitamin K as well as metabolizing bile acids, sterols, and xenobiotics. In their systemic importance, the SCFAs and other compounds they produce are like hormones, and the gut flora itself appears to function like an endocrine organ; dysregulation of the gut flora has been correlated with a host of inflammatory and autoimmune conditions. The composition of human gut microbiota changes over time, when the diet changes, and as overall health changes. A systematic review of 15 human randomized controlled trials from July 2016 found that certain commercially available strains of probiotic bacteria from the Bifidobacterium and Lactobacillus genera (B. longum, B. breve, B. infantis, L. helveticus, L. rhamnosus, L. plantarum, and L. casei), when taken by mouth in daily doses of 10⁹–10¹⁰ colony forming units (CFU) for 1–2 months, possess treatment efficacy (i.e., improve behavioral outcomes) in certain central nervous system disorders – including anxiety, depression, autism spectrum disorder, and obsessive–compulsive disorder – and improve certain aspects of memory. Urethra and bladder The genitourinary system appears to have a microbiota, which is an unexpected finding in light of the long-standing use of standard clinical microbiological culture methods to detect bacteria in urine when people show signs of a urinary tract infection; it is common for these tests to show no bacteria present. It appears that common culture methods do not detect many kinds of bacteria and other microorganisms that are normally present. As of 2017, sequencing methods were used to identify these microorganisms to determine if there are differences in microbiota between people with urinary tract problems and those who are healthy. To properly assess the microbiome of the bladder as opposed to the genitourinary system, the urine specimen should be collected directly from the bladder, which is often done with a catheter. Vagina Vaginal microbiota refers to those species and genera that colonize the vagina. These organisms play an important role in protecting against infections and maintaining vaginal health. The most abundant vaginal microorganisms found in premenopausal women are from the genus Lactobacillus, which suppress pathogens by producing hydrogen peroxide and lactic acid. Bacterial species composition and ratios vary depending on the stage of the menstrual cycle. Ethnicity also influences vaginal flora. 
The occurrence of hydrogen peroxide-producing lactobacilli is lower in African American women, and vaginal pH is higher. Other influential factors such as sexual intercourse and antibiotics have been linked to the loss of lactobacilli. Moreover, studies have found that sexual intercourse with a condom does appear to change lactobacilli levels, and does increase the level of Escherichia coli within the vaginal flora. Changes in the normal, healthy vaginal microbiota are an indication of infections, such as candidiasis or bacterial vaginosis. Candida albicans inhibits the growth of Lactobacillus species, while Lactobacillus species which produce hydrogen peroxide inhibit the growth and virulence of Candida albicans in both the vagina and the gut. Fungal genera that have been detected in the vagina include Candida, Pichia, Eurotium, Alternaria, Rhodotorula, and Cladosporium, among others. Placenta Until recently, the placenta was considered to be a sterile organ, but commensal, nonpathogenic bacterial species and genera have been identified that reside in the placental tissue. However, the existence of a placental microbiome is controversial and has been criticized in several studies. The so-called "placental microbiome" likely derives from contamination of reagents, because low-biomass samples are easily contaminated. Uterus Until recently, the upper reproductive tract of women was considered to be a sterile environment. A variety of microorganisms inhabit the uterus of healthy, asymptomatic women of reproductive age. The microbiome of the uterus differs significantly from that of the vagina and gastrointestinal tract. Oral cavity The environment present in the human mouth allows the growth of characteristic microorganisms found there. It provides a source of water and nutrients, as well as a moderate temperature. Resident microbes of the mouth adhere to the teeth and gums to resist mechanical flushing from the mouth to the stomach, where acid-sensitive microbes are destroyed by hydrochloric acid. Anaerobic bacteria in the oral cavity include: Actinomyces, Arachnia, Bacteroides, Bifidobacterium, Eubacterium, Fusobacterium, Lactobacillus, Leptotrichia, Peptococcus, Peptostreptococcus, Propionibacterium, Selenomonas, Treponema, and Veillonella. Genera of fungi that are frequently found in the mouth include Candida, Cladosporium, Aspergillus, Fusarium, Glomus, Alternaria, Penicillium, and Cryptococcus, among others. Bacteria accumulate on both the hard and soft oral tissues in biofilms, allowing them to adhere and thrive in the oral environment while protected from environmental factors and antimicrobial agents. Saliva plays a key role in biofilm homeostasis, allowing recolonization by bacteria and controlling growth by detaching biofilm buildup. It also provides nutrients and temperature regulation. The location of the biofilm determines the type of nutrients it is exposed to. Oral bacteria have evolved mechanisms to sense their environment and evade or modify the host. However, a highly efficient innate host defense system constantly monitors the bacterial colonization and prevents bacterial invasion of local tissues. A dynamic equilibrium exists between dental plaque bacteria and the innate host defense system. This dynamic between the host oral cavity and oral microbes plays a key role in health and disease, as it provides a point of entry into the body. 
A healthy equilibrium presents a symbiotic relationship where oral microbes limit growth and adherence of pathogens while the host provides an environment for them to flourish. Ecological changes such as change of immune status, shift of resident microbes and nutrient availability shift from a mutual to parasitic relationship resulting in the host being prone to oral and systemic disease. Systemic diseases such as diabetes and cardiovascular diseases has been correlated to poor oral health. Of particular interest is the role of oral microorganisms in the two major dental diseases: dental caries and periodontal disease. Pathogen colonization at the periodontium cause an excessive immune response resulting in a periodontal pocket- a deepened space between the tooth and gingiva. This acts as a protected blood-rich reservoir with nutrients for anaerobic pathogens. Systemic disease at various sites of the body can result from oral microbes entering the blood bypassing periodontal pockets and oral membranes. Persistent proper oral hygiene is the primary method for preventing oral and systemic disease. It reduces the density of biofilm and overgrowth of potential pathogenic bacteria resulting in disease. However, proper oral hygiene may not be enough as the oral microbiome, genetics, and changes to immune response play a factor in developing chronic infections. Use of antibiotics could treat already spreading infection but ineffective against bacteria within biofilms. Nasal cavity The healthy nasal microbiome is dominated by Corynebacterium and Staphylococcus species. The mucosal microbiome plays a critical role in modulating viral infection. Lung Much like the oral cavity, the upper and lower respiratory system possess mechanical deterrents to remove microbes. Goblet cells produce mucus which traps microbes and moves them out of the respiratory system via continuously moving ciliated epithelial cells. In addition, a bactericidal effect is generated by nasal mucus which contains the enzyme lysozyme. The upper and lower respiratory tract appears to have its own set of microbiota. Pulmonary bacterial microbiota belong to 9 major bacterial genera: Prevotella, Sphingomonas, Pseudomonas, Acinetobacter, Fusobacterium, Megasphaera, Veillonella, Staphylococcus, and Streptococcus. Some of the bacteria considered "normal biota" in the respiratory tract can cause serious disease especially in immunocompromised individuals; these include Streptococcus pyogenes, Haemophilus influenzae, Streptococcus pneumoniae, Neisseria meningitidis, and Staphylococcus aureus. Fungal genera that compose the pulmonary mycobiome include Candida, Malassezia, Neosartorya, Saccharomyces, and Aspergillus, among others. Unusual distributions of bacterial and fungal genera in the respiratory tract is observed in people with cystic fibrosis. Their bacterial flora often contains antibiotic-resistant and slow-growing bacteria, and the frequency of these pathogens changes in relation to age. Biliary tract Traditionally the biliary tract has been considered to be normally sterile, and the presence of microorganisms in bile is a marker of pathological process. This assumption was confirmed by failure in allocation of bacterial strains from the normal bile duct. Papers began emerging in 2013 showing that the normal biliary microbiota is a separate functional layer which protects a biliary tract from colonization by exogenous microorganisms. 
Disease and death Human bodies rely on innumerable bacterial genes as the source of essential nutrients. Both metagenomic and epidemiological studies indicate vital roles for the human microbiome in preventing a wide range of diseases, from type 2 diabetes and obesity to inflammatory bowel disease, Parkinson's disease, and even mental health conditions like depression. A symbiotic relationship between the gut microbiota and different bacteria may influence an individual's immune response. Metabolites generated by gut microbes appear to be causative factors in type 2 diabetes. Although in its infancy, microbiome-based treatment is also showing promise, most notably for treating drug-resistant C. difficile infection and for diabetes. Clostridioides difficile infection An overwhelming presence of the bacterium C. difficile leads to an infection of the gastrointestinal tract, normally associated with dysbiosis of the microbiota believed to have been caused by the administration of antibiotics. Use of antibiotics eradicates the beneficial gut flora within the gastrointestinal tract, which normally prevents pathogenic bacteria from establishing dominance. Traditional treatment for C. difficile infections includes an additional regimen of antibiotics; however, efficacy rates average between 20 and 30%. Recognizing the importance of healthy gut bacteria, researchers turned to a procedure known as fecal microbiota transplant (FMT), where patients experiencing gastrointestinal diseases, such as C. difficile infection (CDI), receive fecal content from a healthy individual in hopes of restoring a normally functioning intestinal microbiota. Fecal microbiota transplant is approximately 85–90% effective in people with CDI for whom antibiotics have not worked or in whom the disease recurs following antibiotics. Most people with CDI recover with one FMT treatment. Cancer Although cancer is generally a disease of host genetics and environmental factors, microorganisms are implicated in some 20% of human cancers. Of particular relevance to colon cancer, bacterial density in the colon is one million times higher than in the small intestine, and approximately 12-fold more cancers occur in the colon compared to the small intestine, possibly establishing a pathogenic role for microbiota in colon and rectal cancers. Microbial density may be used as a prognostic tool in assessment of colorectal cancers. The microbiota may affect carcinogenesis in three broad ways: (i) altering the balance of tumor cell proliferation and death, (ii) regulating immune system function, and (iii) influencing metabolism of host-produced factors, foods and pharmaceuticals. Tumors arising at boundary surfaces, such as the skin, oropharynx and respiratory, digestive and urogenital tracts, harbor a microbiota. Substantial microbe presence at a tumor site does not by itself establish association or causal links; instead, microbes may simply find the tumor's oxygen tension or nutrient profile supportive. Decreased populations of specific microbes or induced oxidative stress may also increase risks. Of the around 10³⁰ microbes on Earth, ten are designated by the International Agency for Research on Cancer as human carcinogens. Microbes may secrete proteins or other factors that directly drive cell proliferation in the host, or may up- or down-regulate the host immune system, including driving acute or chronic inflammation, in ways that contribute to carcinogenesis. 
Concerning the relationship of immune function and development of inflammation, mucosal surface barriers are subject to environmental risks and must rapidly repair to maintain homeostasis. Compromised host or microbiota resiliency also reduce resistance to malignancy, possibly inducing inflammation and cancer. Once barriers are breached, microbes can elicit proinflammatory or immunosuppressive programs through various pathways. For example, cancer-associated microbes appear to activate NF-κΒ signaling within the tumor microenvironment. Other pattern recognition receptors, such as nucleotide-binding oligomerization domain–like receptor (NLR) family members NOD-2, NLRP3, NLRP6 and NLRP12, may play a role in mediating colorectal cancer. Likewise Helicobacter pylori appears to increase the risk of gastric cancer, due to its driving a chronic inflammatory response in the stomach. Inflammatory bowel disease Inflammatory bowel disease consists of two different diseases: ulcerative colitis and Crohn's disease and both of these diseases present with disruptions in the gut microbiota (also known as dysbiosis). This dysbiosis presents itself in the form of decreased microbial diversity in the gut, and is correlated to defects in host genes that changes the innate immune response in individuals. Human immunodeficiency virus The HIV disease progression influences the composition and function of the gut microbiota, with notable differences between HIV-negative, HIV-positive, and post-ART HIV-positive populations. HIV decreases the integrity of the gut epithelial barrier function by affecting tight junctions. This breakdown allows for translocation across the gut epithelium, which is thought to contribute to increases in inflammation seen in people with HIV. Vaginal microbiota plays a role in the infectivity of HIV, with an increased risk of infection and transmission when the woman has bacterial vaginosis, a condition characterized by an abnormal balance of vaginal bacteria. The enhanced infectivity is seen with the increase in pro-inflammatory cytokines and CCR5 + CD4+ cells in the vagina. However, a decrease in infectivity is seen with increased levels of vaginal Lactobacillus, which promotes an anti-inflammatory condition. Gut microbiome of centenarians Humans who are 100 years old or older, called centenarians, have a distinct gut microbiome. This microbiome is characteristically enriched in microorganisms that are able to synthesize novel secondary bile acids. These secondary bile acids include various isoforms of lithocholic acid that may contribute to healthy aging. Death With death, the microbiome of the living body collapses and a different composition of microorganisms named necrobiome establishes itself as an important active constituent of the complex physical decomposition process. Its predictable changes over time are thought to be useful to help determine the time of death. Environmental health Studies in 2009 questioned whether the decline in biota (including microfauna) as a result of human intervention might impede human health, hospital safety procedures, food product design, and treatments of disease. Changes, modulation and transmission Hygiene, probiotics, prebiotics, synbiotics, light therapy, microbiota transplants (fecal or skin), antibiotics, exercise, diet, breastfeeding, aging can change the human microbiome across various anatomical systems or regions such as skin and gut. 
Person-to-person transmission The human microbiome is transmitted between a mother and her children, as well as between people living in the same household. Research Migration Primary research indicates that immediate changes in the microbiota may occur when a person migrates from one country to another, such as when Thai immigrants settled in the United States or when Latin Americans immigrated to the United States. Losses of microbiota diversity were greater in obese individuals and in the children of immigrants. Cellulose digestion A 2024 study suggests that gut microbiota capable of digesting cellulose can be found in the human microbiome, and that they are less abundant in people living in industrialized societies. See also Human Microbiome Project Human milk microbiome Human virome Hygiene hypothesis Initial acquisition of microbiota Microbiome Microbiome Immunity Project Microorganism Bibliography Ed Yong. I Contain Multitudes: The Microbes Within Us and a Grander View of Life. 368 pages, published 9 August 2016 by Ecco. References External links The Secret World Inside You Exhibit 2015–2016, American Museum of Natural History FAQ: Human Microbiome, January 2014, American Society For Microbiology Bacteriology Bacteria and humans Microbiology Microbiomes Environmental microbiology Human genome projects
Human microbiome
[ "Chemistry", "Biology", "Environmental_science" ]
7,531
[ "Microbiology", "Environmental microbiology", "Microscopy", "Genome projects", "Bacteria", "Microbiomes", "Human genome projects", "Bacteria and humans", "Gut flora" ]
205,592
https://en.wikipedia.org/wiki/Highway%20engineering
Highway engineering (also known as roadway engineering and street engineering) is a professional engineering discipline branching from the civil engineering subdiscipline of transportation engineering that involves the planning, design, construction, operation, and maintenance of roads, highways, streets, bridges, and tunnels to ensure safe and effective transportation of people and goods. Highway engineering became prominent towards the latter half of the 20th century after World War II. Standards of highway engineering are continuously being improved. Highway engineers must take into account future traffic flows, design of highway intersections/interchanges, geometric alignment and design, highway pavement materials and design, structural design of pavement thickness, and pavement maintenance. History The beginnings of road construction can be dated to the time of the Romans. With the advancement of technology from carriages pulled by two horses to vehicles with power equivalent to 100 horses, road development had to follow suit. The construction of modern highways did not begin until the late 19th to early 20th century. The first research dedicated to highway engineering was initiated in the United Kingdom with the introduction of the Transport Research Laboratory (TRL) in 1930. In the US, highway engineering became an important discipline with the passing of the Federal-Aid Highway Act of 1944, which aimed to connect 90% of cities with a population of 50,000 or more. With constant stress from vehicles that grew larger as time passed, improvements to pavements were needed. As existing pavement technology became outdated, the construction in 1958 of the first motorway in Great Britain (the Preston bypass) played a major role in the development of new pavement technology. Planning and development Highway planning involves the estimation of current and future traffic volumes on a road network. Highway planning is also a basic prerequisite for highway development. Highway engineers strive to predict and analyze all possible civil impacts of highway systems. Some considerations are the adverse effects on the environment, such as noise pollution, air pollution, water pollution, and other ecological impacts. Financing Developed countries are constantly faced with the high maintenance costs of aging highways. The growth of the motor vehicle industry and accompanying economic growth has generated a demand for safer, better performing, less congested highways. The growth of commerce, educational institutions, housing, and defense has largely drawn from government budgets in the past, making the financing of public highways a challenge. The multipurpose characteristics of highways, the economic environment, and the advances in highway pricing technology are constantly changing. Therefore, the approaches to highway financing, management, and maintenance are constantly changing as well. Environmental impact assessment The economic growth of a community is dependent upon highway development to enhance mobility. However, improperly planned, designed, constructed, and maintained highways can disrupt the social and economic characteristics of a community of any size. Common adverse impacts of highway development include damage to habitat and biodiversity, creation of air and water pollution, noise and vibration generation, damage to the natural landscape, and the destruction of a community's social and cultural structure. Highway infrastructure must be constructed and maintained to high quality and standards. 
There are three key steps for integrating environmental considerations into the planning, scheduling, construction, and maintenance of highways. This process is known as an Environmental Impact Assessment, or EIA, as it systematically deals with the following elements: (i) identification of the full range of possible impacts on the natural and socio-economic environment; (ii) evaluation and quantification of these impacts; and (iii) formulation of measures to avoid, mitigate, and compensate for the anticipated impacts. Highway safety Highway systems exact a high price in human injury and death: nearly 50 million persons are injured in traffic accidents every year, in addition to the roughly 1.2 million who are killed. Road traffic injury is the single leading cause of unintentional death in the first five decades of human life. Management of safety is a systematic process that strives to reduce the occurrence and severity of traffic accidents. The man/machine interaction with road traffic systems is unstable and poses a challenge to highway safety management. The key to increasing the safety of highway systems is to design, build, and maintain them to be far more tolerant of the average range of this man/machine interaction with highways. Technological advancements in highway engineering have improved the design, construction, and maintenance methods used over the years. These advancements have allowed for newer highway safety innovations. Ensuring that all situations and opportunities are identified, considered, and implemented as appropriate, and evaluated in every phase of highway planning, design, construction, maintenance, and operation, increases the safety of our highway systems. Design The most appropriate location, alignment, and shape of a highway are selected during the design stage. Highway design involves the consideration of three major factors (human, vehicular, and roadway) and how these factors interact to provide a safe highway. Human factors include reaction time for braking and steering, visual acuity for traffic signs and signals, and car-following behaviour. Vehicle considerations include vehicle size and dynamics, which are essential for determining lane width and maximum slopes, and for the selection of design vehicles. Highway engineers design road geometry to ensure stability of vehicles when negotiating curves and grades and to provide adequate sight distances for undertaking passing maneuvers along curves on two-lane, two-way roads. Geometric design Highway and transportation engineers must meet many safety, service, and performance standards when designing highways for a given site topography. Highway geometric design primarily refers to the visible elements of the highway. Highway engineers who design the geometry of highways must also consider the environmental and social effects of the design on the surrounding infrastructure. There are certain considerations that must be properly addressed in the design process to successfully fit a highway to a site's topography and maintain its safety. Some of these design considerations are: design speed; design traffic volume; number of lanes; level of service (LOS); sight distance; alignment, super-elevation, and grades; cross section; lane width; and structure gauge (horizontal and vertical clearance). The operational performance of a highway can be seen through drivers' reactions to the design considerations and their interaction. 
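The reaction-time and sight-distance factors above can be made concrete with a simple calculation. The following is a minimal Python sketch using a commonly cited AASHTO-style stopping sight distance relation; the 2.5 s perception–reaction time and 3.4 m/s² deceleration are illustrative assumptions, and actual design values must be taken from the governing standard.

```python
# Illustrative stopping sight distance (SSD) check on a level road.
# Commonly cited AASHTO-style relation (metric form):
#   SSD = 0.278 * V * t_r + 0.039 * V**2 / a
# with V in km/h, perception-reaction time t_r in s, deceleration a in m/s^2.
# The default t_r and a below are illustrative assumptions only.

def stopping_sight_distance(speed_kmh: float,
                            reaction_time_s: float = 2.5,
                            deceleration_ms2: float = 3.4) -> float:
    """Return an estimated stopping sight distance in metres."""
    reaction_distance = 0.278 * speed_kmh * reaction_time_s       # travelled before braking
    braking_distance = 0.039 * speed_kmh ** 2 / deceleration_ms2  # travelled while braking
    return reaction_distance + braking_distance

if __name__ == "__main__":
    for v in (50, 80, 100, 120):
        print(f"{v:>3} km/h -> SSD ~ {stopping_sight_distance(v):6.1f} m")
```

For 100 km/h this gives roughly 185 m, which indicates the order of sight distance that a geometric design must make available on crests and horizontal curves.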
Materials The materials used for roadway construction have progressed with time, dating back to the early days of the Roman Empire. Advancements in the methods with which these materials are characterized and applied to pavement structural design have accompanied this advancement in materials. There are three major types of pavement surfaces – pavement quality concrete (PQC), Portland cement concrete (PCC), and hot-mix asphalt (HMA). Underneath this wearing course are material layers that give structural support for the pavement system. These underlying surfaces may include either the aggregate base and subbase layers, or treated base and subbase layers, and additionally the underlying natural or treated subgrade. These treated layers may be cement-treated, asphalt-treated, or lime-treated for additional support. Flexible pavement design A flexible, or asphalt, or Tarmac pavement typically consists of three or four layers. For a four-layer flexible pavement, there is a surface course, base course, and subbase course constructed over a compacted, natural soil subgrade. When building a three-layer flexible pavement, the subbase layer is not used and the base course is placed directly on the natural subgrade. A flexible pavement's surface layer is constructed of hot-mix asphalt (HMA). Unstabilized aggregates are typically used for the base course; however, the base course could also be stabilized with asphalt, foamed bitumen, Portland cement, or another stabilizing agent. The subbase is generally constructed from local aggregate material, while the top of the subgrade is often stabilized with cement or lime. With flexible pavement, the highest stress occurs at the surface and the stress decreases as the depth of the pavement increases. Therefore, the highest quality material needs to be used for the surface, while lower quality materials can be used as the depth of the pavement increases. The term "flexible" is used because of the asphalt's ability to bend and deform slightly, then return to its original position as each traffic load is applied and removed. It is possible for these small deformations to become permanent, which can lead to rutting in the wheel path over an extended time. The service life of a flexible pavement is typically designed in the range of 20 to 30 years. The required thickness of each layer of a flexible pavement varies widely depending on the materials used, the magnitude and number of repetitions of traffic loads, environmental conditions, and the desired service life of the pavement. Factors such as these are taken into consideration during the design process so that the pavement will last for the designed life without excessive distress. Rigid pavement design Rigid pavements are generally used in constructing airports and major highways, such as those in the interstate highway system. In addition, they commonly serve as heavy-duty industrial floor slabs, port and harbor yard pavements, and heavy-vehicle park or terminal pavements. Like flexible pavements, rigid highway pavements are designed as all-weather, long-lasting structures to serve modern-day high-speed traffic. Offering high-quality riding surfaces for safe vehicular travel, they function as structural layers to distribute vehicular wheel loads in such a manner that the induced stresses transmitted to the subgrade soil are of acceptable magnitudes. Portland cement concrete (PCC) is the most common material used in the construction of rigid pavement slabs. 
Its popularity is due to its availability and economy. Rigid pavements must be designed to endure frequently repeated traffic loadings. The typical designed service life of a rigid pavement is between 30 and 40 years, lasting about twice as long as a flexible pavement. One major design consideration of rigid pavements is reducing fatigue failure due to the repeated stresses of traffic. Fatigue failure is common among major roads because a typical highway will experience millions of wheel passes throughout its service life. In addition to design criteria such as traffic loadings, tensile stresses due to thermal energy must also be taken into consideration. As pavement design has progressed, many highway engineers have noted that thermally induced stresses in rigid pavements can be just as intense as those imposed by wheel loadings. Due to the relatively low tensile strength of concrete, thermal stresses are extremely important to the design considerations of rigid pavements. Rigid pavements are generally constructed in three layers – a prepared subgrade, base or subbase, and a concrete slab. The concrete slab is constructed according to a designed choice of plan dimensions for the slab panels, which directly influences the intensity of thermal stresses occurring within the pavement. In addition to the slab panels, temperature reinforcement must be designed to control cracking behavior in the slab. Joint spacing is determined by the slab panel dimensions. The three main types of concrete pavement commonly used are jointed plain concrete pavement (JPCP), jointed reinforced concrete pavement (JRCP), and continuously reinforced concrete pavement (CRCP). JPCPs are constructed with contraction joints which direct the natural cracking of the pavement. These pavements do not use any reinforcing steel. JRCPs are constructed with both contraction joints and reinforcing steel to control the cracking of the pavement. High temperature and moisture stresses within the pavement create cracking, which the reinforcing steel holds tightly together. At transverse joints, dowel bars are typically placed to assist with transferring the load of the vehicle across the cracking. CRCPs rely solely on continuous reinforcing steel to hold the pavement's natural transverse cracks together. Prestressed concrete pavements have also been used in the construction of highways; however, they are not as common as the other three. Prestressed pavements allow for a thinner slab thickness by partly or wholly neutralizing thermally induced stresses or loadings. Flexible pavement overlay design Over the service life of a flexible pavement, accumulated traffic loads may cause excessive rutting or cracking, inadequate ride quality, or inadequate skid resistance. These problems can be avoided by adequately maintaining the pavement, but doing so may entail excessive maintenance costs, or the pavement may have inadequate structural capacity for the projected traffic loads. Throughout a highway's life, its level of serviceability is closely monitored and maintained. One common method used to maintain a highway's level of serviceability is to place an overlay on the pavement's surface. There are three general types of overlay used on flexible pavements: asphalt-concrete overlay, Portland cement concrete overlay, and ultra-thin Portland cement concrete overlay. The concrete layer in a conventional PCC overlay is placed unbonded on top of the flexible surface. 
The typical thickness of an ultra-thin PCC overlay is 4 inches (10 cm) or less. There are two main categories of flexible pavement overlay design procedures: component analysis design and deflection-based design. Rigid pavement overlay design Near the end of a rigid pavement's service life, a decision must be made to either fully reconstruct the worn pavement or construct an overlay layer. Because an overlay can be constructed on a rigid pavement that has not yet reached the end of its service life, it is often more economically attractive to apply overlay layers more frequently. The required overlay thickness for a structurally sound rigid pavement is much smaller than for one that has reached the end of its service life. Rigid and flexible overlays are both used for rehabilitation of rigid pavements such as JPCP, JRCP, and CRCP. There are three subcategories of rigid pavement overlays, organized according to the bonding condition at the interface between the overlay and the existing slab: bonded overlays, unbonded overlays, and partially bonded overlays. Drainage system design Designing for proper drainage of highway systems is crucial to their success. A highway should be graded and built to remain "high and dry". Regardless of how well other aspects of a road are designed and constructed, adequate drainage is mandatory for a road to survive its entire service life. Excess water in the highway structure almost inevitably leads to premature failure, even if the failure is not catastrophic. Each highway drainage system is site-specific and can be very complex. Depending on the geography of the region, many methods for proper drainage may not be applicable. The highway engineer must determine the situations in which a particular design process should be applied, usually using a combination of several appropriate methods and materials to direct water away from the structure. Pavement subsurface drainage and underdrains help provide extended life and reliable pavement performance. Excessive moisture under a concrete pavement can cause pumping, cracking, and joint failure. Erosion control is a crucial component in the design of highway drainage systems. Surface drainage must allow precipitation to drain away from the structure. Highways must be designed with a slope or crown so that runoff water will be directed to the shoulder of the road, into a ditch, and away from the site. Designing a drainage system requires the prediction of runoff and infiltration, open channel analysis, and culvert design for directing surface water to an appropriate location. Construction, maintenance, and management Highway construction Highway construction is generally preceded by detailed surveys and subgrade preparation. The methods and technology for constructing highways have evolved over time and become increasingly sophisticated. This advancement in technology has raised the level of the skill sets required to manage highway construction projects. The required skill varies from project to project, depending on factors such as the project's complexity and nature, the contrasts between new construction and reconstruction, and differences between urban-region and rural-region projects. There are a number of elements of highway construction which can be broken up into technical and commercial elements of the system. 
Some examples of each are as follows: technical elements include materials, material quality, installation techniques, and traffic; commercial elements include contract understanding, environmental aspects, political aspects, legal aspects, and public concerns. Typically, construction begins at the lowest elevation of the site, regardless of the project type, and moves upward. Reviewing the geotechnical specifications of the project gives information about: existing ground conditions; required equipment for excavation, grading, and material transportation to and from the site; properties of materials to be excavated; dewatering requirements necessary for below-grade work; shoring requirements for excavation protection; and water quantities for compaction and dust control. Subbase course construction A subbase course is a layer of carefully selected materials that is located between the subgrade and base course of the pavement. The subbase thickness is generally in the range of 4 to 16 inches, and it is designed to withstand the required structural capacity of the pavement section. Common materials used for a highway subbase include gravel, crushed stone, or subgrade soil that is stabilized with cement, fly ash, or lime. Permeable subbase courses are becoming more prevalent because of their ability to drain infiltrating water from the surface. They also prevent subsurface water from reaching the pavement surface. When local material costs are excessive or the materials required to increase the structural bearing of the subbase are not readily available, highway engineers can increase the bearing capacity of the underlying soil by mixing in Portland cement or foamed asphalt, or by using polymer soil stabilization, such as a cross-linking styrene acrylic polymer, which increases the California Bearing Ratio of in-situ materials by a factor of 4–6. Base course construction The base course is the region of the pavement section that is located directly under the surface course. If there is a subbase course, the base course is constructed directly above this layer. Otherwise, it is built directly on top of the subgrade. Typical base course thickness ranges from 4 to 6 inches and is governed by underlying layer properties. Heavy loads are continuously applied to pavement surfaces, and the base layer absorbs the majority of these stresses. Generally, the base course is constructed with an untreated crushed aggregate such as crushed stone, slag, or gravel. The base course material will have stability under the construction traffic and good drainage characteristics. The base course materials are often treated with cement, bitumen, calcium chloride, sodium chloride, fly ash, or lime. These treatments provide improved support for heavy loads, reduce frost susceptibility, and serve as a moisture barrier between the base and surface layers. Surface course construction The two most commonly used types of pavement surface in highway construction are hot-mix asphalt and Portland cement concrete. These pavement surface courses provide a smooth and safe riding surface, while simultaneously transferring the heavy traffic loads through the various base courses and into the underlying subgrade soils. Hot-mix asphalt layers Hot-mix asphalt surface courses are referred to as flexible pavements. The Superpave System was developed in the late 1980s and has offered changes to the design approach, mix design, specifications, and quality testing of materials. 
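As an illustration of the quality-testing side of mix design mentioned above, the following is a minimal Python sketch of the standard hot-mix asphalt volumetric checks (air voids, voids in the mineral aggregate, and voids filled with asphalt). The specific gravities used and any implied acceptance targets are illustrative assumptions, not values from any particular specification.

```python
# Illustrative Superpave-style volumetric checks for a compacted HMA sample.
# Standard relations (sample values below are illustrative assumptions):
#   air voids  Va  = 100 * (1 - Gmb / Gmm)
#   VMA            = 100 - (Gmb * Ps / Gsb)
#   VFA            = 100 * (VMA - Va) / VMA
# Gmb: bulk specific gravity of the compacted mix
# Gmm: theoretical maximum specific gravity of the loose mix
# Gsb: bulk specific gravity of the combined aggregate
# Ps : aggregate content, percent by total mass of mix

def hma_volumetrics(gmb: float, gmm: float, gsb: float, ps: float) -> dict:
    va = 100.0 * (1.0 - gmb / gmm)
    vma = 100.0 - (gmb * ps / gsb)
    vfa = 100.0 * (vma - va) / vma
    return {"air_voids_%": va, "VMA_%": vma, "VFA_%": vfa}

if __name__ == "__main__":
    result = hma_volumetrics(gmb=2.441, gmm=2.535, gsb=2.705, ps=94.7)
    for key, value in result.items():
        print(f"{key:>12}: {value:5.2f}")
```

With the sample values shown, the mix works out to roughly 3.7% air voids and about 14.5% VMA; whether such numbers are acceptable depends entirely on the governing specification.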
The construction of an effective, long-lasting asphalt pavement requires an experienced construction crew, committed to their work quality and equipment control. Construction issues: Asphalt mix segregation Laydown Compaction Joints A prime coat is a low viscosity asphalt that is applied to the base course prior to laying the HMA surface course. This coat bonds loose material, creating a cohesive layer between the base course and asphalt surface. A tack coat is a low viscosity asphalt emulsion that is used to create a bond between an existing pavement surface and new asphalt overlay. Tack coats are typically applied on adjacent pavements (curbs) to assist the bonding of the HMA and concrete. Portland cement concrete (PCC) Portland cement concrete surface courses are referred to as rigid pavements, or concrete pavements. There are three general classifications of concrete pavements - jointed plain, jointed reinforced, and continuously reinforced. Traffic loadings are transferred between sections when larger aggregates in the PCC mix inter-lock together, or through load transfer devices in the transverse joints of the surface. Dowel bars are used as load-transferring devices to efficiently transfer loads across transverse joints while maintaining the joint's horizontal and vertical alignment. Tie-bars are deformed steel bars that are placed along longitudinal joints to hold adjacent pavement sections in place. Highway maintenance The overall purpose of highway maintenance is to fix defects and preserve the pavement's structure and serviceability. Defects must be defined, understood, and recorded in order to create an appropriate maintenance plan. Maintenance planning is solving an optimisation problem and it can be predictive. In predictive maintenance planning empirical, data-driven methods give more accurate results than mechanical models. Defects differ between flexible and rigid pavements. There are four main objectives of highway maintenance: repair of functional pavement defects extend the functional and structural service life of the pavement maintain road safety and signage keep road reserve in acceptable condition Through routine maintenance practices, highway systems and all of their components can be maintained to their original, as-built condition. Project management Project management involves the organization and structuring of project activities from inception to completion. Activities could be the construction of infrastructure such as highways and bridges or major and minor maintenance activities related to constructing such infrastructure. The entire project and involved activities must be handled in a professional manner and completed within deadlines and budget. In addition, minimizing social and environmental impacts is essential to successful project management. See also Highway and parkway Controlled-access highway Interstate Highway System Limited-access highway Parkway Strategic Highway Network Design and consideration Breakover angle Degree of curvature Geometric design of roads Pavement engineering Road furniture Road traffic safety Traffic barrier Traffic light Traffic sign Transition curve References External links: highway design standards Australia United Kingdom AggreBind United States of America (AASHTO) Arizona (USA) California (USA) Connecticut (USA) Kentucky (USA) New York (USA) New Jersey (USA) Texas (USA) Wisconsin (USA) Further reading Human Factors for Highway Engineers at Googlebooks Transportation engineering Road infrastructure Highways
Highway engineering
[ "Engineering" ]
4,354
[ "Transportation engineering", "Civil engineering", "Industrial engineering" ]
206,064
https://en.wikipedia.org/wiki/Van%20der%20Waals%20equation
The van der Waals equation is a mathematical formula that describes the behavior of real gases. It is named after Dutch physicist Johannes Diderik van der Waals. It is an equation of state that relates the pressure, temperature, and molar volume in a fluid. However, it can be written in terms of other, equivalent, properties in place of the molar volume, such as specific volume or number density. The equation modifies the ideal gas law in two ways: first, it considers particles to have a finite diameter (whereas an ideal gas consists of point particles); second, its particles interact with each other (unlike an ideal gas, whose particles move as though alone in the volume). It was only in 1909 that the scientific debate about the nature of matter—discrete or continuous—was finally settled. Indeed, at the time van der Waals created his equation, which he based on the idea that fluids are composed of discrete particles, few scientists believed that such particles really existed. They were regarded as purely metaphysical constructs that added nothing useful to the knowledge obtained from the results of experimental observations. However, the theoretical explanation of the critical point, which had been discovered a few years earlier, and later its qualitative and quantitative agreement with experiments cemented its acceptance in the scientific community. Ultimately these accomplishments won van der Waals the 1910 Nobel Prize in Physics. Today the equation is recognized as an important model of phase change processes. Van der Waals also adapted his equation so that it applied to a binary mixture of fluids. He, and others, then used the modified equation to discover a host of important facts about the phase equilibria of such fluids. This application, expanded to treat multi-component mixtures, has extended the predictive ability of the equation to fluids of industrial and commercial importance. In this arena it has spawned many similar equations in a continuing attempt by engineers to improve their ability to understand and manage these fluids; it remains relevant to the present. Behavior of the equation One way to write the van der Waals equation is (p + a/v²)(v − b) = RT, where p is pressure, R is the universal gas constant, T is temperature, v is molar volume, and a and b are experimentally determinable, substance-specific constants. Molar volume is given by v = N_A V/N, where N_A is the Avogadro constant, V is the volume, and N is the number of molecules (the ratio N/N_A is the amount of substance, a physical quantity with the base unit mole). When van der Waals created his equation, few scientists believed that fluids were composed of rapidly moving particles. Moreover, those who thought so had no knowledge of the atomic/molecular structure. The simplest conception of a particle, and the easiest to model mathematically, was a hard sphere; this is what van der Waals used. In that case, two particles of diameter σ would come into contact when their centers were a distance σ apart; hence the center of the one was excluded from a spherical volume equal to (4π/3)σ³ about the other. That is 8 times (π/6)σ³, the volume of each particle of radius σ/2, but there are 2 particles, which gives 4 times the volume per particle. The total excluded volume per mole is then b = 4N_A(π/6)σ³; that is, 4 times the volume of all the particles. Van der Waals and his contemporaries used an alternative, but equivalent, analysis based on the mean free path between molecular collisions that gave this result. 
From the fact that the volume fraction of particles, must be positive, van der Waals noted that as becomes larger the factor 4 must decrease (for spheres there is a known minimum ), but he was never able to determine the nature of the decrease. The constant , and has dimension of molar volume, [v]. The constant expresses the strength of the hypothesized interparticle attraction. Van der Waals only had as a model Newton's law of gravitation, in which two particles are attracted in proportion to the product of their masses. Thus he argued that in his case the attractive pressure was proportional to the square of the density. The proportionality constant, , when written in the form used above, has the dimension [pv2] (pressure times molar volume squared), which is also molar energy times molar volume. The intermolecular force was later conveniently described by the negative derivative of a pair potential function. For spherically symmetric particles, this is most simply a function of separation distance with a single characteristic length, , and a minimum energy, (with ). Two of the many such functions that have been suggested are shown in the accompanying plot. A modern theory based on statistical mechanics produces the same result for obtained by van der Waals and his contemporaries. This result is valid for any pair potential for which the increase in is sufficiently rapid. This includes the hard sphere model for which the increase is infinitely rapid and the result is exact. Indeed, the Sutherland potential most accurately models van der Waals' conception of a molecule. It also includes potentials that do not represent hard sphere force interactions provided that the increase in for is fast enough, but then it is approximate; increasingly better the faster the increase. In that case is only an "effective diameter" of the molecule. This theory also produces where is a number that depends on the shape of the potential function, . However, this result is only valid when the potential is weak, namely, when the minimum potential energy is very much smaller than the thermal energy, . In his book (see references and ) Ludwig Boltzmann wrote equations using (specific volume) rather than (molar volume, used here); Josiah Willard Gibbs did as well, as do most engineers. Physicists use the property (the reciprocal of number density), but there is no essential difference between equations written with any of these properties. Equations of state written using molar volume contain , those using specific volume contain (the substance specific is the molar mass with the mass of a single particle), and those written with number density contain . Once the constants and are experimentally determined for a given substance, the van der Waals equation can be used to predict attributes like the boiling point at any given pressure, and the critical point (defined by pressure and temperature such that the substance cannot be liquefied either when no matter how low the temperature, or when no matter how high the pressure; uniquely define ). These predictions are accurate for only a few substances. For most simple fluids they are only a valuable approximation. The equation also explains why superheated liquids can exist above their boiling point and subcooled vapors can exist below their condensation point. Example The graph on the right plots the intersection of the surface shown in Figures A and C and four planes of constant pressure. 
Each intersection produces a curve in the plane corresponding to the value of the pressure chosen. These curves are isobars, since they represent all the points with the same pressure. On the red isobar, , the slope is positive over the entire range, (although the plot only shows a finite region). This describes a fluid as homogeneous for all —that is, it does not undergo a phase transition at any temperature—which is a characteristic of all supercritical isobars . The orange isobar, , is the critical one that marks the boundary between homogeneity and heterogeneity. The critical point lies on this isobar. The green isobar, , has a region of negative slope. This region consists of states that are unstable and therefore never observed (for this reason this region is shown dotted gray). The green curve thus consists of two disconnected branches, indicated two phase states: a vapor on the right, and a denser liquid on the left. For this pressure, at a temperature (specified by mechanical, thermal, and material equilibrium), the boiling (saturated) liquid and condensing (saturated) vapor coexist, shown on the curve as the left and right green circles, respectively. The locus of these coexistent saturation points across all subcritical isobars forms the saturation curve on the surface. In this situation, the denser liquid will separate and collect below the vapor due to gravity, and a meniscus will form between them. This heterogeneous combination of coexisting liquid and vapor is the phase transition. Heating the liquid in this state increases the fraction of vapor in the mixture—its , an average of and weighted by this fraction, increases while remains the same. This is shown as the horizontal dotted gray line, which represents not a solution of the equation but the observed behavior. The points above , superheated liquid, and those below it, subcooled vapor, are metastable; a sufficiently strong disturbance causes them to transform to the stable alternative. These metastable regions are shown with green dashed lines. In summary, this isobar describes a fluid as a stable vapor for , a stable liquid for , and a mixture of liquid and vapor at , that also supports metastable states of subcooled vapor and superheated liquid. This behavior is characteristic of all subcritical isobars , where is a function of . The black isobar, , is the limit of positive pressures. None of its points represent stable solutions: they are either metastable (positive or zero slope) or unstable (negative slope). Interestingly, states of negative pressure (tension) exist. Their isobars lie below the black isobar, and form those parts of the surfaces seen in Figures A and C that lie below the zero-pressure plane. In this plane they have a parabola-like shape, and, like the zero-pressure isobar, their states are all either metastable (positive or zero slope) or unstable (negative slope). Surface plots Figure B shows the surface calculated from the ideal gas equation of state. This surface is universal, meaning it represents all ideal gases. Here, the surface is normalized so that the coordinate is at in the 3-dimensional plot space (the black dot). This normalization makes it easier to compare this surface with the surface generated by the van der Waals equation in Figure C. Figures A and C show the surface calculated from the van der Waals equation. Note that whereas the ideal gas surface is relatively uniform, the van der Waals surface has a distinctive "fold". 
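To make the behavior described above concrete, the equation is easy to evaluate numerically. The following minimal Python sketch compares the van der Waals pressure with the ideal gas pressure at a few molar volumes; the constants used for CO2 are textbook-style illustrative values, not authoritative data.

```python
# Minimal numerical evaluation of the van der Waals equation,
#   p = R*T/(v - b) - a/v**2,
# compared with the ideal gas law. The CO2 constants below are
# illustrative textbook-style values, not authoritative data.

R = 8.314          # J/(mol K)
a = 0.364          # Pa m^6 / mol^2   (CO2, illustrative)
b = 4.27e-5        # m^3 / mol        (CO2, illustrative)

def p_vdw(T: float, v: float) -> float:
    """Pressure of a van der Waals fluid at temperature T (K) and molar volume v (m^3/mol)."""
    return R * T / (v - b) - a / v**2

def p_ideal(T: float, v: float) -> float:
    return R * T / v

if __name__ == "__main__":
    T = 300.0                              # K
    for v in (1.0e-3, 2.0e-4, 1.0e-4):     # molar volumes, m^3/mol
        print(f"v = {v:.1e} m^3/mol:  vdW {p_vdw(T, v)/1e5:8.2f} bar,"
              f"  ideal {p_ideal(T, v)/1e5:8.2f} bar")
```

As expected, the two pressures nearly coincide at large molar volume and diverge as the fluid is compressed, where the finite particle size and the attractive term both matter.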
This fold develops from a critical point defined by specific values of pressure, temperature, and molar volume. Because the surface is plotted using dimensionless variables (formed by the ratio of each property to its respective critical value), the critical point is located at the coordinates . When drawn using these dimensionless axes, this surface is, like that of the ideal gas, also universal. Moreover, it represents all real substances to a remarkably high degree of accuracy. This principle of corresponding states, developed by van der Waals from his equation, has become one of the fundamental ideas in the thermodynamics of fluids. The fold's boundary on the surface is marked, on each side of the critical point, by the spinodal curve (identified in Fig. A, and seen in Figs. A and C). This curve delimits an unstable region wherein no observable homogeneous states exist; elsewhere on the surface, states of liquid, vapor, and gas exist. The fold in the surface is what enables the equation to predict the phenomenon of liquid–vapor phase change. This phenomenon is described by the saturation curve (or coexistence curve): the locus of saturated liquid and vapor states which, being in equilibrium with each other, can coexist. The saturation curve is not specified by the properties of the surface alone—it is substance-dependent. The saturated liquid curve and saturated vapor curve (both identified in Fig. A) together comprise the saturation curve. The inset in Figure A shows the mixture states, which are a combination of the saturated liquid and vapor states that correspond to each end of the horizontal mixture line (that is, the points of intersection between the mixture line and its isotherm). However, these mixture states are not part of the surface generated by the van der Waals equation; they are not solutions of the equation. Relationship to the ideal gas law The ideal gas law follows from the van der Waals equation whenever the molar volume is sufficiently large (when , so ), or correspondingly whenever the molar density, , is sufficiently small (when , so ). When is large enough that both inequalities are satisfied, these two approximations reduce the van der Waals equation to ; rearranging in terms of and gives , which is the ideal gas law. This is not surprising since the van der Waals equation was constructed from the ideal gas equation in order to obtain an equation valid beyond the limit of ideal gas behavior. What is truly remarkable is the extent to which van der Waals succeeded. Indeed, Epstein in his classic thermodynamics textbook began his discussion of the van der Waals equation by writing, "In spite of its simplicity, it comprehends both the gaseous and the liquid state and brings out, in a most remarkable way, all the phenomena pertaining to the continuity of these two states". Also, in Volume 5 of his Lectures on Theoretical Physics, Sommerfeld, in addition to noting that "Boltzmann described van der Waals as the Newton of real gases", also wrote "It is very remarkable that the theory due to van der Waals is in a position to predict, at least qualitatively, the unstable [referring to superheated liquid, and subcooled vapor, now called metastable] states" that are associated with the phase change process. 
Utility of the equation The van der Waals equation has been, and remains, useful because: It yields simple analytic expressions for thermodynamic properties: internal energy, entropy, enthalpy, Helmholtz free energy, Gibbs free energy, and specific heat at constant pressure . It yields an analytic expression of its coefficient of thermal expansion and its isothermal compressibility. It yields an analytic analysis of the Joule–Thomson coefficient and associated inversion curve, which were instrumental in the development of the commercial liquefaction of gases. It shows that the specific heat at constant volume is a function of only. It explains the existence of the critical point and the liquid–vapor phase transition, including the observed metastable states. It establishes the theorem of corresponding states. It is an intermediate mathematical model, useful as a pedagogical tool when teaching physics, chemistry, and engineering. In addition, its saturation curve has an analytic solution, which can depict the liquid metals (mercury and cesium) quantitatively, and describes most real fluids qualitatively. As such, this solution can be regarded as one member of a family of equations of state (known as extended corresponding states). Consequently, the equation plays an important role in the modern theory of phase transitions. History In 1857 Rudolf Clausius published The Nature of the Motion which We Call Heat. In it he derived the relation for the pressure in a gas, composed of particles in motion, with number density , mass , and mean square speed . He then noted that using the classical laws of Boyle and Charles, one could write with a constant of proportionality . Hence temperature was proportional to the average kinetic energy of the particles. This article inspired further work based on the twin ideas that substances are composed of indivisible particles, and that heat is a consequence of the particle motion; movement that evolves in accordance with Newton's laws. The work, known as the kinetic theory of gases, was done principally by Clausius, James Clerk Maxwell, and Ludwig Boltzmann. At about the same time, Josiah Willard Gibbs advanced the work by converting it into statistical mechanics. This environment influenced Johannes Diderik van der Waals. After initially pursuing a teaching credential, he was accepted for doctoral studies at the University of Leiden under Pieter Rijke. This led, in 1873, to a dissertation that provided a simple, particle-based equation that described the gas–liquid change of state, the origin of a critical temperature, and the concept of corresponding states. The equation is based on two premises: first, that fluids are composed of particles with non-zero volumes, and second, that at a large enough distance each particle exerts an attractive force on all other particles in its vicinity. Boltzmann called these forces van der Waals cohesive forces. In 1869 Irish professor of chemistry Thomas Andrews at Queen's University Belfast, in a paper entitled On the Continuity of the Gaseous and Liquid States of Matter, displayed an experimentally obtained set of isotherms of carbonic acid, , that showed at low temperatures a jump in density at a certain pressure, while at higher temperatures there was no abrupt change (the figure can be seen here). Andrews called the isotherm at which the jump just disappears the critical point. 
Given the similarity of the titles of this paper and van der Waals' subsequent thesis, one might think that van der Waals set out to develop a theoretical explanation of Andrews' experiments; however, this is not what happened. Van der Waals began work by trying to determine a molecular attraction that appeared in Laplace's theory of capillarity, and only after establishing his equation did he test it using Andrews' results. By 1877 sprays of both liquid oxygen and liquid nitrogen had been produced, and a new field of research, low temperature physics, had been opened. The van der Waals equation played a part in all this, especially with respect to the liquefaction of hydrogen and helium, which was finally achieved in 1908. From measurements of the pressure p and temperature T in two states with the same density, the van der Waals equation produces values for the constants a and b. Thus from two such measurements of pressure and temperature one could determine a and b, and from these values calculate the expected critical pressure, temperature, and molar volume. Goodstein summarized this contribution of the van der Waals equation as follows: All this labor required considerable faith in the belief that gas–liquid systems were all basically the same, even if no one had ever seen the liquid phase. This faith arose out of the repeated success of the van der Waals theory, which is essentially a universal equation of state, independent of the details of any particular substance once it has been properly scaled. [...] As a result, not only was it possible to believe that hydrogen could be liquefied, but it was even possible to predict the necessary temperature and pressure. Van der Waals was awarded the Nobel Prize in 1910, in recognition of the contribution of his formulation of this "equation of state for gases and liquids". As noted previously, modern-day studies of first-order phase changes make use of the van der Waals equation together with the Gibbs criterion, equal chemical potential of each phase, as a model of the phenomenon. This model has an analytic coexistence (saturation) curve expressed parametrically (the parameter is related to the entropy difference between the two phases), that was first obtained by Plank, was known to Gibbs and others, and was later derived in a beautifully simple and elegant manner by Lekner. A summary of Lekner's solution is presented in a subsequent section, and a more complete discussion in the Maxwell construction. Critical point and corresponding states Figure 1 shows four isotherms of the van der Waals equation (abbreviated as vdW) on a (pressure, molar volume) plane. The essential character of these curves is that they come in three forms: At some critical temperature T = T_c (orange isotherm), the slope is negative everywhere except at a single inflection point: the critical point (p_c, v_c), where both the slope and curvature are zero, (∂p/∂v)_T = (∂²p/∂v²)_T = 0. At higher temperatures (red isotherm), the isotherm's slope is negative everywhere (this corresponds to values of T > T_c for which the vdW equation has one real root for v). At lower temperatures (green and blue isotherms), all isotherms have two points where the slope is zero (this corresponds to values of T < T_c, for which the vdW equation has three real roots for v). The critical point can be analytically determined by evaluating these two partial derivatives of the vdW equation and equating them to zero. This produces the critical values v_c = 3b and T_c = 8a/(27Rb); plugging these back into the vdW equation gives p_c = a/(27b²). 
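The derivative conditions can also be checked symbolically. A minimal sketch, assuming SymPy is available:

```python
# Symbolic check of the van der Waals critical point: solve
#   (dp/dv)_T = 0  and  (d2p/dv2)_T = 0
# simultaneously. This is a verification sketch, assuming SymPy is installed.
import sympy as sp

v, T, a, b, R = sp.symbols('v T a b R', positive=True)
p_vdw = R * T / (v - b) - a / v**2

sol = sp.solve([sp.diff(p_vdw, v), sp.diff(p_vdw, v, 2)], [v, T], dict=True)[0]
v_c, T_c = sol[v], sol[T]
p_c = sp.simplify(p_vdw.subs({v: v_c, T: T_c}))

print("v_c =", v_c)                  # expected: 3*b
print("T_c =", sp.simplify(T_c))     # expected: 8*a/(27*R*b)
print("p_c =", p_c)                  # expected: a/(27*b**2)
```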
The same calculation can also be done algebraically by noting that the vdW equation can be written as a cubic in terms of v, which at the critical point is p_c v³ − (p_c b + RT_c)v² + av − ab = 0, which, by dividing out p_c, can be refactored as v³ − (b + RT_c/p_c)v² + (a/p_c)v − ab/p_c = 0. Separately, since all three roots coalesce at the critical point, we can write (v − v_c)³ = v³ − 3v_c v² + 3v_c² v − v_c³ = 0. These two cubic equations are the same when all their coefficients are equal; matching like terms produces a system of three equations, 3v_c = b + RT_c/p_c, 3v_c² = a/p_c, and v_c³ = ab/p_c, whose solution produces the previous results for v_c, T_c, and p_c. Using these critical values to define reduced (dimensionless) variables p_r = p/p_c, v_r = v/v_c, and T_r = T/T_c renders the vdW equation in the dimensionless form (used to construct Fig. 1): (p_r + 3/v_r²)(3v_r − 1) = 8T_r. This dimensionless form is a similarity relation; it indicates that all vdW fluids at the same reduced temperature will plot on the same curve. It expresses the law of corresponding states which Boltzmann described as follows: All the constants characterizing the gas have dropped out of this equation. If one bases measurements on the van der Waals units [Boltzmann's name for the reduced quantities here], then he obtains the same equation of state for all gases. [...] Only the values of the critical volume, pressure, and temperature depend on the nature of the particular substance; the numbers that express the actual volume, pressure, and temperature as multiples of the critical values satisfy the same equation for all substances. In other words, the same equation relates the reduced volume, reduced pressure, and reduced temperature for all substances. Obviously such a broad general relation is unlikely to be correct; nevertheless, the fact that one can obtain from it an essentially correct description of actual phenomena is very remarkable. This "law" is just a special case of dimensional analysis in which an equation containing 6 dimensional quantities, p, v, T, a, b, and R, and 3 independent dimensions, [p], [v], [T] (independent means that "none of the dimensions of these quantities can be represented as a product of powers of the dimensions of the remaining quantities", and [R] = [pv/T]), must be expressible in terms of 6 − 3 = 3 dimensionless groups. Here b is a characteristic molar volume, a/b² a characteristic pressure, and a/(Rb) a characteristic temperature, and the 3 dimensionless groups are pb²/a, v/b, and RTb/a. According to dimensional analysis the equation must then have the form pb²/a = f(v/b, RTb/a), a general similarity relation. In his discussion of the vdW equation, Sommerfeld also mentioned this point. The reduced properties defined previously are p_r = 27pb²/a, v_r = v/(3b), and T_r = 27RbT/(8a). Recent research has suggested that there is a family of equations of state that depend on an additional dimensionless group, and this provides a more exact correlation of properties. Nevertheless, as Boltzmann observed, the van der Waals equation provides an essentially correct description. The vdW equation produces the critical compressibility factor Z_c = p_c v_c/(RT_c) = 3/8 = 0.375, while for most real fluids Z_c is roughly 0.2 to 0.3. Thus most real fluids do not satisfy this condition, and consequently their behavior is only described qualitatively by the vdW equation. However, the vdW equation of state is a member of a family of state equations based on the Pitzer (acentric) factor, ω, and the liquid metals (mercury and cesium) are well approximated by it. Thermodynamic properties The properties of molar internal energy u and entropy s—defined by the first and second laws of thermodynamics, hence all thermodynamic properties of a simple compressible substance—can be specified, up to a constant of integration, by two measurable functions: a mechanical equation of state p = p(v, T), and a constant volume specific heat c_v = c_v(v, T). 
Internal energy and specific heat at constant volume The internal energy is given by the energetic equation of state, where is an arbitrary constant of integration. Now in order for to be an exact differential—namely that be continuous with continuous partial derivatives—its second mixed partial derivatives must also be equal, . Then with this condition can be written as . Differentiating for the vdW equation gives , so . Consequently for a vdW fluid exactly as it is for an ideal gas. For simplicity, it is regarded as a constant here, for some constant number . Then both integrals can be evaluated, resulting in This is the energetic equation of state for a perfect vdW fluid. By making a dimensional analysis (what might be called extending the principle of corresponding states to other thermodynamic properties) it can be written in the reduced form where and is a dimensionless constant. Enthalpy The enthalpy of a system is given by . Substituting with and (the vdW equation multiplied by ) gives This is the enthalpic equation of state for a perfect vdW fluid, or in reduced form, Entropy The entropy is given by the entropic equation of state: Using as before, and integrating the second term using we obtain This is the entropic equation of state for a perfect vdW fluid, or in reduced form, Helmholtz free energy The Helmholtz free energy is , so combining the previous results gives This is the Helmholtz free energy for a perfect vdW fluid, or in reduced form, Gibbs free energy The Gibbs free energy is , so combining the previous results gives This is the Gibbs free energy for a perfect vdW fluid, or in reduced form, Thermodynamic derivatives: α, κT and cp The two first partial derivatives of the vdW equation are where is the isothermal compressibility (a measure of the relative increase of volume from an increase of pressure, at constant temperature), and is the coefficient of thermal expansion (a measure of the relative increase of volume from an increase of temperature, at constant pressure). Therefore, In the limit , the vdW equation becomes , and , and . Both these limits of and are the ideal gas values, which is consistent because, as noted earlier, a vdW fluid behaves like an ideal gas in this limit. The specific heat at constant pressure is defined as the partial derivative . However, it is not independent of , as they are related by the Mayer equation, . Then the two partials of the vdW equation can be used to express as Here in the limit , , which is also the ideal gas result as expected; however the limit gives the same result, which does not agree with experiments on liquids. In this liquid limit we also find , namely that the vdW liquid is incompressible. Moreover, since , it is also mechanically incompressible, that is, approaches 0 faster than does. Finally, , , and are all infinite on the curve . This curve, called the spinodal curve, is defined by . Stability According to the extremum principle of thermodynamics, and ; namely, that at equilibrium the entropy is a maximum. This leads to a requirement that . This mathematical criterion expresses a physical condition which Epstein described as follows: It is obvious that this middle part, dotted in our curves [the place where the requirement is violated, dashed gray in Fig. 1], can have no physical reality. In fact, let us imagine the fluid in a state corresponding to this part of the curve contained in a heat conducting vertical cylinder whose top is formed by a piston. 
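For the perfect vdW fluid with constant c_v described above, the standard closed-form results can be collected in a short sketch. The following Python functions restate them with the integration constants set to zero; the numerical constants (the CO2-like a and b used earlier, and the assumed constant c_v) are illustrative only.

```python
# Thermodynamic property sketches for a van der Waals fluid with constant c_v.
# Standard results (integration constants set to zero for illustration):
#   u(T, v)   = c_v*T - a/v
#   s(T, v)   = c_v*ln(T) + R*ln(v - b)
#   c_p - c_v = R / (1 - 2*a*(v - b)**2 / (R*T*v**3))
# The constants below reuse the illustrative CO2 values from the earlier sketch;
# the constant c_v is an assumed value, chosen only for the example.
import math

R = 8.314
a, b = 0.364, 4.27e-5
c_v = 2.5 * R

def u(T, v):
    return c_v * T - a / v

def s(T, v):
    return c_v * math.log(T) + R * math.log(v - b)

def cp_minus_cv(T, v):
    return R / (1.0 - 2.0 * a * (v - b)**2 / (R * T * v**3))

if __name__ == "__main__":
    T, v = 300.0, 1.0e-3
    print(f"u       = {u(T, v):9.1f} J/mol")
    print(f"s       = {s(T, v):9.2f} J/(mol K)   (additive constant omitted)")
    print(f"cp - cv = {cp_minus_cv(T, v):6.2f} J/(mol K)  (ideal-gas value would be {R:.2f})")
```

The last line illustrates the point made above: away from the dilute limit, cp − cv exceeds the ideal-gas value R and grows without bound as the spinodal curve is approached.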
The piston can slide up and down in the cylinder, and we put on it a load exactly balancing the pressure of the gas. If we take a little weight off the piston, there will no longer be equilibrium and it will begin to move upward. However, as it moves the volume of the gas increases and with it its pressure. The resultant force on the piston gets larger, retaining its upward direction. The piston will, therefore, continue to move and the gas to expand until it reaches the state represented by the maximum of the isotherm. Vice versa, if we add ever so little to the load of the balanced piston, the gas will collapse to the state corresponding to the minimum of the isotherm. For isotherms , this requirement is satisfied everywhere, thus all states are gas. For isotherms , the states that lie between the local minimum and local maximum , for which (shown dashed gray in Fig. 1), are unstable and thus not observed. This unstable region is the genesis of the phase change; there is a range , for which no observable states exist. The states for are liquid, and those for are vapor; the denser liquid separates and lies below the vapor due to gravity. The transition points, states with zero slope, are called spinodal points. Their locus is the spinodal curve, a boundary that separates the regions of the plane for which liquid, vapor, and gas exist from a region where no observable homogeneous states exist. This spinodal curve is obtained here from the vdW equation by differentiation (or equivalently from ) as A projection of the spinodal curve is plotted in Figure 1 as the black dash-dot curve. It passes through the critical point, which is also a spinodal point. Saturation Although the gap in delimited by the two spinodal points on an isotherm (e.g. in Fig. 1) is the origin of the phase change, the change occurs as some intermediate value. This can be seen by considering that both saturated liquid and saturated vapor can coexist in equilibrium, at which they have the same pressure and temperature. However, the minimum and maximum spinodal points are not at the same pressure. Therefore, at a temperature , the phase change is characterized by the pressure , which lies within the range of set by the spinodal points (), and by the molar volume of liquid and vapor , which lie outside the range of set by the spinodal points ( and ). Applying the vdW equation to the saturated liquid (fluid) and saturated vapor (gas) states gives: These two equations contain four variables (), so a third equation is required in order to uniquely specify three of these variables in terms of the fourth. The following is a derivation of this third equation (the result is ). Now, the energy required to vaporize a mole at constant pressure is (from the first law of thermodynamics) and at constant temperature is (from the second law). Thus, That is, in this case, the Gibbs free energy in the saturated liquid state equals that in the saturated vapor state. The Gibbs free energy is one of the four thermodynamic potentials whose partial derivatives produce all other thermodynamics state properties; its differential is . Integrating this over an isotherm from to , noting that the pressure is the same at each endpoint, and setting the result to zero yields Here because is a multivalued function, the integral must be divided into 3 parts corresponding to the 3 real roots of the vdW equation in the form, (this can be visualized most easily by imagining Fig. 1 rotated ); the result is a special case of material equilibrium. 
The last equality, which follows from integrating , is the Maxwell equal area rule, which requires that the upper area between the vdW curve and the horizontal through be equal to the lower area. This form means that the thermodynamic restriction that fixes is specified by the equation of state itself, . Using the equation for the Gibbs free energy for the vdW equation (), the difference can be evaluated as This is a third equation that along with the two vdW equations of can be solved numerically. This has been done given a value for either or , and tabular results presented; however, the equations also admit an analytic parametric solution obtained most simply and elegantly, by Lekner. Details of this solution may be found in the Maxwell construction; the results are: where and the parameter is given physically by . The values of all other property discontinuities across the saturation curve also follow from this solution. These functions define the coexistence curve (or saturation curve), which is the locus of the saturated liquid and saturated vapor states of the vdW fluid. Various projections of this saturation curve are plotted in Figures 1, 2a, and 2b. Referring back to Figure 1, the isotherms for are discontinuous. For example, the (green) isotherm consists of two separate segments. The solid green lines represent stable states, and terminate at dots that represent the saturated liquid and vapor states that comprise the phase change. The dashed green lines represent metastable states (superheated liquid and subcooled vapor) that are created in the process of phase transition, have a short lifetime, and then devolve into their lower energy stable alternative. At every point in the region between the two curves in Figure 2b, there are two states: one stable and one metastable. The coexistence of these states can be seen in Figure 1—for discontinuous isotherms, there are values of which correspond to two points on the isotherm: one on a solid line (the stable state) and one on a dashed region (the metastable state). In his treatise of 1898, in which he described the van der Waals equation in great detail, Boltzmann discussed these metastable states in a section titled "Undercooling, Delayed evaporation". (Today, these states are now denoted "subcooled vapor" and "superheated liquid".) Moreover, it has now become clear that these metastable states occur regularly in the phase transition process. In particular, processes that involve very high heat fluxes create large numbers of these states, and transition to their stable alternative with a corresponding release of energy that can be dangerous. Consequently, there is a pressing need to study their thermal properties. In the same section, Boltzmann also addressed and explained the negative pressures which some liquid metastable states exhibit (for example, the blue isotherm in Fig. 1). He concluded that such liquid states of tensile stresses were real, as did Tien and Lienhard many years later who wrote "The van der Waals equation predicts that at low temperatures liquids sustain enormous tension [...] In recent years measurements have been made that reveal this to be entirely correct." Even though the phase change produces a mathematical discontinuity in the homogeneous fluid properties (for example ), there is no physical discontinuity. 
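These coexistence conditions can also be solved by direct numerical root finding rather than through the parametric solution. The following is a minimal sketch in Python, assuming the reduced form of the vdW equation p_r = 8T_r/(3v_r − 1) − 3/v_r²; the function names, the spinodal-based bracketing, and the use of SciPy's brentq are illustrative choices made here, not part of the original analysis.

```python
import numpy as np
from scipy.optimize import brentq

def p_reduced(v, T):
    """Reduced vdW isotherm p_r(v_r, T_r) = 8 T_r/(3 v_r - 1) - 3/v_r^2."""
    return 8.0 * T / (3.0 * v - 1.0) - 3.0 / v**2

def real_roots(coeffs):
    """Real roots of a polynomial, tolerant of round-off imaginary parts."""
    r = np.roots(coeffs)
    return np.sort(r[np.abs(r.imag) < 1e-9].real)

def coexisting_volumes(p, T):
    """Smallest (liquid) and largest (vapor) roots of the reduced vdW cubic
    3 p v^3 - (p + 8 T) v^2 + 9 v - 3 = 0."""
    v = real_roots([3.0 * p, -(p + 8.0 * T), 9.0, -3.0])
    return v[0], v[-1]

def equal_area_residual(p, T):
    """Maxwell rule: area under the isotherm between v_f and v_g minus the
    rectangle p (v_g - v_f); this vanishes at the saturation pressure."""
    vf, vg = coexisting_volumes(p, T)
    area = (8.0 * T / 3.0) * np.log((3.0 * vg - 1.0) / (3.0 * vf - 1.0)) \
           + 3.0 / vg - 3.0 / vf
    return area - p * (vg - vf)

def saturation(T):
    """Saturation pressure and coexisting reduced volumes for T_r < 1."""
    # Bracket p_sat between the pressures at the two spinodal points,
    # where dp/dv = 0, i.e. 4 T v^3 - 9 v^2 + 6 v - 1 = 0 with v > 1/3.
    v_sp = real_roots([4.0 * T, -9.0, 6.0, -1.0])
    v_sp = v_sp[v_sp > 1.0 / 3.0]
    p_lo = max(p_reduced(v_sp[0], T), 1e-6)   # local minimum (clamped above 0)
    p_hi = p_reduced(v_sp[-1], T)             # local maximum
    p_sat = brentq(equal_area_residual, p_lo, p_hi, args=(T,))
    return (p_sat,) + coexisting_volumes(p_sat, T)

if __name__ == "__main__":
    for T in (0.85, 0.90, 0.95):
        p_sat, vf, vg = saturation(T)
        print(f"T_r={T:.2f}  p_r,sat={p_sat:.4f}  v_r,f={vf:.4f}  v_r,g={vg:.4f}")
```

For T_r = 0.90 this bisection gives p_r,sat ≈ 0.647 with the liquid and vapor volumes lying on either side of the critical volume, in line with tabulated vdW saturation values.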
As the liquid begins to vaporize, the fluid becomes a heterogeneous mixture of liquid and vapor whose molar volume varies continuously from to according to the equation of state where and is the mole fraction of the vapor. This equation is called the lever rule and applies to other properties as well. The states it represents form a horizontal line bridging the discontinuous region of an isotherm (not shown in Fig. 1 because it is a different equation from the vdW equation). Extended corresponding states The idea of corresponding states originated when van der Waals cast his equation in the dimensionless form, . However, as Boltzmann noted, such a simple representation could not correctly describe all substances. Indeed, the saturation analysis of this form produces ; namely, that all substances have the same dimensionless coexistence curve, which is not true. To avoid this paradox, an extended principle of corresponding states has been suggested in which where is a substance-dependent dimensionless parameter related to the only physical feature associated with an individual substance: its critical point. One candidate for is the critical compressibility factor ; however, because is difficult to measure accurately, the acentric factor developed by Kenneth Pitzer, , is more useful. The saturation pressure in this situation is represented by a one-parameter family of curves: . Several investigators have produced correlations of saturation data for a number of substances; Dong and Lienhard give which has an RMS error of over the range . Figure 3 is a plot of vs for various values of the Pitzer factor as given by this equation. The vertical axis is logarithmic in order to show the behavior at pressures closer to zero, where differences among the various substances (indicated by varying values of ) are more pronounced. Figure 4 is another plot of the same equation showing as a function of for various values of . It includes data from 51 substances, including the vdW fluid, over the range . This plot shows that the vdW fluid () is a member of the class of real fluids; indeed, the vdW fluid can quantitatively approximate the behavior of the liquid metals cesium () and mercury (), which share similar values of . However, in general it can describe the behavior of fluids of various only qualitatively. Joule–Thomson coefficient The Joule–Thomson coefficient, , is of practical importance because the two end states of a throttling process () lie on a constant enthalpy curve. Although ideal gases, for which , do not change temperature in such a process, real gases do, and it is important in applications to know whether they heat up or cool down. This coefficient can be found in terms of the previously derived and as When is positive, the gas temperature decreases as it passes through a throttling process, and when it is negative, the temperature increases. Therefore, the condition defines a curve that separates the region of the plane where from the region where . This curve is called the inversion curve, and its equation is . Evaluating this using the expression for derived in produces Note that for there will be cooling for (or, in terms of the critical temperature, ). As Sommerfeld noted, "This is the case with air and with most other gases. Air can be cooled at will by repeated expansion and can finally be liquified." In terms of , the equation has a simple positive solution , which for produces . 
Using this to eliminate from the vdW equation then gives the inversion curve as where, for simplicity, have been replaced by . The maximum of this quadratic curve occurs with , for which gives , or , and the corresponding . By the quadratic formula, the zeros of the curve are and ( and ). In terms of the dimensionless variables , the zeros are at and , while the maximum is , and occurs at . A plot of the curve is shown in green in Figure 5. Sommerfeld also displays this plot, together with a curve drawn using experimental data from H2. The two curves agree qualitatively, but not quantitatively. For example the maximum on these two curves differ by about 40% in both magnitude and location. Figure 5 shows an overlap between the saturation curve and the inversion curve plotted in the same region. This crossover means a van der Waals gas can be liquified by passing it through a throttling process under the proper conditions; real gases are liquified in this way. Compressibility factor Real gases are characterized by their difference from ideal gases by writing . Here , called the compressibility factor, is expressed either as or . In either case, the limit as or approaches zero is 1, and takes the ideal gas value. In the second case , so for a van der Waals fluid the compressibility factor is or in terms of reduced variables where . At the critical point, and . In the limit , ; the fluid behaves like an ideal gas, as mentioned before. The derivative is never negative when ; that is, when (which corresponds to ). Alternatively, the initial slope is negative when , is zero at , and is positive for larger (see Fig. 6). In this case, the value of passes through when . Here is called the Boyle temperature. It ranges between , and denotes a point in space where the equation of state reduces to the ideal gas law. However, the fluid does not behave like an ideal gas there, because neither its derivatives nor reduce to their ideal gas values, other than where the actual ideal gas region. Figure 6 plots various isotherms of vs . Also shown are the spinodal and coexistence curves described previously. The subcritical isotherm consists of stable, metastable, and unstable segments (identified in the same way as in Fig. 1). Also included are the zero initial slope isotherm and the one corresponding to infinite temperature. Figure 7 shows a generalized compressibility chart for a vdW gas. Like all other vdW properties, this is not quantitatively correct for most gases, but it has the correct qualitative features. Note the caustic generated by the crossing isotherms. Virial expansion Statistical mechanics suggests that the compressibility factor can be expressed by a power series, called a virial expansion: where the functions are the virial coefficients; the th term represents a -particle interaction. Expanding the term in the definition of () into an infinite series, convergent for , produces The corresponding expression for when is These are the virial expansions, one dimensional and one dimensionless, for the van der Waals fluid. The second virial coefficient is the slope of at . Notice that it can be positive when or negative when , which agrees with the result found previously by differentiation. For molecules modeled as non-attracting hard spheres, , and the vdW virial expansion becomes which illustrates the effect of the excluded volume alone. It was recognized early on that this was in error beginning with the term . 
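The reduced compressibility factor and the sign change of the second virial coefficient at the Boyle temperature can be checked numerically. A minimal sketch follows, using the reduced identities Z = 3v_r/(3v_r − 1) − 9/(8T_r v_r) and B/v_c = (1 − 27/(8T_r))/3, which follow from the critical constants of the vdW fluid (Z_c = 3/8); the helper names are illustrative only.

```python
import numpy as np

def z_vdw(v_r, T_r):
    """Compressibility factor Z = p v / (R T) of a vdW fluid in reduced
    variables: Z = 3 v_r / (3 v_r - 1) - 9 / (8 T_r v_r)."""
    return 3.0 * v_r / (3.0 * v_r - 1.0) - 9.0 / (8.0 * T_r * v_r)

def b2_reduced(T_r):
    """Second virial coefficient scaled by the critical volume,
    B / v_c = (1 - 27/(8 T_r)) / 3; it changes sign at the Boyle
    temperature T_r = 27/8."""
    return (1.0 - 27.0 / (8.0 * T_r)) / 3.0

if __name__ == "__main__":
    T_boyle = 27.0 / 8.0
    print(f"Reduced Boyle temperature: {T_boyle:.3f}")
    for T_r in (1.0, 2.0, T_boyle, 5.0):
        print(f"T_r={T_r:.3f}  B/v_c={b2_reduced(T_r):+.4f}")
    # Z along a supercritical isotherm approaches 1 as v_r grows (ideal gas)
    for v_r in (1.0, 2.0, 10.0, 100.0):
        print(f"T_r=2.0  v_r={v_r:7.1f}  Z={z_vdw(v_r, 2.0):.4f}")
```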
Boltzmann calculated its correct value as , and used the result to propose an enhanced version of the vdW equation: On expanding , this produced the correct coefficients through and also gave infinite pressure at , which is approximately the close-packing distance for hard spheres. This was one of the first of many equations of state proposed over the years that attempted to make quantitative improvements to the remarkably accurate explanations of real gas behavior produced by the vdW equation. Mixtures In 1890 van der Waals published an article that initiated the study of fluid mixtures. It was subsequently included as Part III of a later published version of his thesis. His essential idea was that in a binary mixture of vdW fluids described by the equations the mixture is also a vdW fluid given by where Here and , with (so that ), are the mole fractions of the two fluid substances. Adding the equations for the two fluids shows that , although for sufficiently large with equality holding in the ideal gas limit. The quadratic forms for and are a consequence of the forces between molecules. This was first shown by Lorentz, and was credited to him by van der Waals. The quantities and in these expressions characterize collisions between two molecules of the same fluid component, while and represent collisions between one molecule of each of the two different fluid components. This idea of van der Waals' was later called a one fluid model of mixture behavior. Assuming that is the arithmetic mean of and , , substituting into the quadratic form and noting that produces Van der Waals wrote this relation, but did not make use of it initially. However, it has been used frequently in subsequent studies, and its use is said to produce good agreement with experimental results at high pressure. Common tangent construction In this article, van der Waals used the Helmholtz potential minimum principle to establish the conditions of stability. This principle states that in a system in diathermal contact with a heat reservoir , , and , namely at equilibrium, the Helmholtz potential is a minimum. Since, like , the molar Helmholtz function is also a potential function whose differential is this minimum principle leads to the stability condition . This condition means that the function, , is convex at all stable states of the system. Moreover, for those states the previous stability condition for the pressure is necessarily satisfied as well. Single fluid For a single substance, the definition of the molar Gibbs free energy can be written in the form . Thus when and are constant along with temperature, the function represents a straight line with slope , and intercept . Since the curve has positive curvature everywhere when , the curve and the straight line will have a single tangent. However, for a subcritical is not everywhere convex. With and a suitable value of , the line will be tangent to at the molar volume of each coexisting phase: saturated liquid and saturated vapor ; there will be a double tangent. Furthermore, each of these points is characterized by the same values of , , and These are the same three specifications for coexistence that were used previously. Figure 8 depicts an evaluation of as a green curve, with and marked by the left and right green circles, respectively. The region on the green curve for corresponds to the liquid state. As increases past , the curvature of (proportional to ) continually decreases. 
The inflection point, characterized by zero curvature, is a spinodal point; between and this point is the metastable superheated liquid. For further increases in the curvature decreases to a minimum then increases to another (zero curvature) spinodal point; between these two spinodal points is the unstable region in which the fluid cannot exist in a homogeneous equilibrium state (represented by the dotted grey curve). With a further increase in the curvature increases to a maximum at , where the slope is ; the region between this point and the second spinodal point is the metastable subcooled vapor. Finally, the region is the vapor. In this region the curvature continually decreases until it is zero at infinitely large . The double tangent line (solid black) that runs between and represents states that are stable but heterogeneous, not homogeneous solutions of the vdW equation. The states above this line (with larger Helmholtz free energy) are either metastable or unstable. The combined solid green-black curve in Figure 8 is the convex envelope of , which is defined as the largest convex curve that is less than or equal to the function. For a vdW fluid, the molar Helmholtz potential is where . Its derivative is which is the vdW equation, as expected. A plot of this function , whose slope at each point is specified by the vdW equation, for the subcritical isotherm is shown in Figure 8 along with the line tangent to it at its two coexisting saturation points. The data illustrated in Figure 8 is exactly the same as that shown in Figure 1 for this isotherm. This double tangent construction thus provides a graphical alternative to the Maxwell construction to establish the saturated liquid and vapor points on an isotherm. Binary fluid Van der Waals used the Helmholtz function because its properties could be easily extended to the binary fluid situation. In a binary mixture of vdW fluids, the Helmholtz potential is a function of two variables, , where is a composition variable (for example so ). In this case, there are three stability conditions: and the Helmholtz potential is a surface (of physical interest in the region ). The first two stability conditions show that the curvature in each of the directions and are both non-negative for stable states, while the third condition indicates that stable states correspond to elliptic points on this surface. Moreover, its limit specifies the spinodal curves on the surface. For a binary mixture, the Euler equation can be written in the form where are the molar chemical potentials of each substance, . For constant values of , , and , this equation is a plane with slopes in the direction, in the direction, and intercept . As in the case of a single substance, here the plane and the surface can have a double tangent, and the locus of the coexisting phase points forms a curve on each surface. The coexistence conditions are that the two phases have the same , , , and ; the last two are equivalent to having the same and individually, which are just the Gibbs conditions for material equilibrium in this situation. The two methods of producing the coexistence surface are equivalent. Although this case is similar to that of a single fluid, here the geometry can be much more complex. The surface can develop a wave (called a plait or fold) in the direction as well as the one in the direction. Therefore, there can be two liquid phases that can be either miscible, or wholly or partially immiscible, as well as a vapor phase. 
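As a concrete illustration of the one-fluid idea, the quadratic mixing rules can be evaluated for a binary mixture in a few lines. The sketch below assumes the geometric-mean combining rule for a₁₂ (taken up again under Mixing rules below) and the arithmetic-mean rule for b₁₂ credited to Lorentz above; the material constants used are illustrative values only, roughly those of nitrogen and methane.

```python
import numpy as np

def vdw_one_fluid(x1, a1, b1, a2, b2, a12=None, b12=None):
    """One-fluid vdW constants for a binary mixture with mole fraction x1:
    a = sum_ij x_i x_j a_ij,  b = sum_ij x_i x_j b_ij (quadratic forms)."""
    x = np.array([x1, 1.0 - x1])
    if a12 is None:
        a12 = np.sqrt(a1 * a2)        # geometric-mean combining rule
    if b12 is None:
        b12 = 0.5 * (b1 + b2)         # arithmetic-mean (Lorentz) rule
    A = np.array([[a1, a12], [a12, a2]])
    B = np.array([[b1, b12], [b12, b2]])
    return x @ A @ x, x @ B @ x

def p_vdw(v, T, a, b, R=8.314462618):
    """Molar vdW pressure p = R T / (v - b) - a / v^2 (SI units)."""
    return R * T / (v - b) - a / v**2

if __name__ == "__main__":
    # Illustrative constants only: a in Pa m^6 mol^-2, b in m^3 mol^-1.
    a_N2, b_N2 = 0.1370, 3.87e-5
    a_CH4, b_CH4 = 0.2283, 4.28e-5
    for x in (0.0, 0.25, 0.5, 0.75, 1.0):
        a_mix, b_mix = vdw_one_fluid(x, a_N2, b_N2, a_CH4, b_CH4)
        p = p_vdw(1.0e-3, 300.0, a_mix, b_mix)   # 1 L/mol at 300 K
        print(f"x_N2={x:.2f}  a={a_mix:.4f}  b={b_mix:.2e}  p={p/1e5:.2f} bar")
```

With the arithmetic-mean rule for b₁₂ the quadratic form for b reduces to the linear combination noted earlier, so only the attractive constant carries a genuine cross term.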
Despite a great deal of both theoretical and experimental work on this problem by van der Waals and his successors—work which produced much useful knowledge about the various types of phase equilibria that are possible in fluid mixtures—complete solutions to the problem were only obtained after 1967, when the availability of modern computers made calculations of mathematical problems of this complexity feasible for the first time. The results obtained were, in Rowlinson's words, a spectacular vindication of the essential physical correctness of the ideas behind the van der Waals equation, for almost every kind of critical behavior found in practice can be reproduced by the calculations, and the range of parameters that correlate with the different kinds of behavior are intelligible in terms of the expected effects of size and energy. Mixing rules In order to obtain these numerical results, the values of the constants of the individual component fluids must be known. In addition, the effect of collisions between molecules of the different components, given by and , must also be specified. In the absence of experimental data, or computer modeling results to estimate their value the empirical combining rules, geometric and algebraic means can be used, respectively: These relations correspond to the empirical combining rules for the intermolecular force constants, the first of which follows from a simple interpretation of the dispersion forces in terms of polarizabilities of the individual molecules, while the second is exact for rigid molecules. Using these empirical combining rules to generalize for fluid components, the quadradic mixing rules for the material constants are: These expressions come into use when mixing gases in proportion, such as when producing tanks of air for diving and managing the behavior of fluid mixtures in engineering applications. However, more sophisticated mixing rules are often necessary, in order to obtain satisfactory agreement with reality over the wide variety of mixtures encountered in practice. Another method of specifying the vdW constants, pioneered by W.B. Kay and known as Kay's rule, specifies the effective critical temperature and pressure of the fluid mixture by In terms of these quantities, the vdW mixture constants are which Kay used as the basis for calculations of the thermodynamic properties of mixtures. Kay's idea was adopted by T. W. Leland, who applied it to the molecular parameters , which are related to through by and . Using these together with the quadratic mixing rules for produces which is the van der Waals approximation expressed in terms of the intermolecular constants. This approximation, when compared with computer simulations for mixtures, are in good agreement over the range , namely for molecules of similar diameters. In fact, Rowlinson said of this approximation, "It was, and indeed still is, hard to improve on the original van der Waals recipe when expressed in [this] form". Mathematical and empirical validity Since van der Waals presented his thesis, "[m]any derivations, pseudo-derivations, and plausibility arguments have been given" for it. However, no mathematically rigorous derivation of the equation over its entire range of molar volume that begins from a statistical mechanical principle exists. Indeed, such a proof is not possible, even for hard spheres. Goodstein put it this way, "Obviously the value of the van der Waals equation rests principally on its empirical behavior rather than its theoretical foundation." 
Nevertheless, a review of the work that has been done is useful in order to better understand where and when the equation is valid mathematically, and where and why it fails. Review The classical canonical partition function, , of statistical mechanics for a three dimensional particle macroscopic system is where , is the de Broglie wavelength (alternatively is the quantum concentration), is the particle configuration integral, and is the intermolecular potential energy, which is a function of the particle position vectors . Lastly is the volume element of , which is a -dimensional space. The connection of with thermodynamics is made through the Helmholtz free energy, , from which all other properties can be found; in particular . For point particles that have no force interactions (), all integrals of can be evaluated producing . In the thermodynamic limit, with finite, the Helmholtz free energy per particle (or per mole, or per unit mass) is finite; for example, per mole it is . The thermodynamic state equations in this case are those of a monatomic ideal gas, specifically Early derivations of the vdW equation were criticized mainly on two grounds. First, a rigorous derivation from the partition function should produce an equation that does not include unstable states for which, . Second, the constant in the vdW equation (here is the volume of a single molecule) gives the maximum possible number of molecules as , or a close packing density of 1/4=0.25, whereas the known close-packing density of spheres is . Thus a single value of cannot describe both gas and liquid states. The second criticism is an indication that the vdW equation cannot be valid over the entire range of molar volume. Van der Waals was well aware of this problem; he devoted about 30% of his Nobel lecture to it, and also said that it is ... the weak point in the study of the equation of state. I still wonder whether there is a better way. In fact this question continually obsesses me, I can never free myself from it, it is with me even in my dreams. In 1949 the first criticism was proved by van Hove when he showed that in the thermodynamic limit, hard spheres with finite-range attractive forces have a finite Helmholtz free energy per particle. Furthermore, this free energy is a continuously decreasing function of the volume per particle (see Fig. 8 where are molar quantities). In addition, its derivative exists and defines the pressure, which is a non-increasing function of the volume per particle. Since the vdW equation has states for which the pressure increases with increasing volume per particle, this proof means it cannot be derived from the partition function, without an additional constraint that precludes those states. In 1891 Korteweg used kinetic theory ideas to show that a system of hard rods of length , constrained to move along a straight line of length and exerting only direct contact forces on one another, satisfy a vdW equation with ; Rayleigh also knew this. Tonks, by evaluating the configuration integral, later showed that the force exerted on a wall by this system is given by with . This can be put in a more recognizable, molar, form by dividing by the rod cross sectional area , and defining . This produces ; there is no condensation, as for all . This result is obtained because in one dimension, particles cannot pass by one another as they can in higher dimensions; their mass center coordinates satisfy the relations . As a result, the configuration integral is . 
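The contrast between this purely repulsive one-dimensional hard-rod gas and the full vdW fluid is easy to see numerically: the hard-rod isotherm is strictly decreasing in the volume, so there is no condensation, while a subcritical vdW isotherm contains a loop. A minimal sketch follows; the constants are illustrative, roughly those of nitrogen.

```python
import numpy as np

R = 8.314462618  # J mol^-1 K^-1

def p_hard_rod(v, T, b):
    """Molar pressure of the 1-D hard-rod (Tonks) gas, p = R T / (v - b):
    strictly decreasing in v, so no condensation occurs."""
    return R * T / (v - b)

def p_vdw(v, T, a, b):
    """Molar vdW pressure, p = R T / (v - b) - a / v^2, which develops a
    van der Waals loop (dp/dv > 0 somewhere) below the critical temperature."""
    return R * T / (v - b) - a / v**2

if __name__ == "__main__":
    a, b = 0.1370, 3.87e-5          # illustrative constants, SI units
    Tc = 8.0 * a / (27.0 * R * b)   # vdW critical temperature
    T = 0.85 * Tc
    v = np.linspace(2.0 * b, 30.0 * b, 400)
    rod_increasing = bool((np.diff(p_hard_rod(v, T, b)) > 0).any())
    vdw_increasing = bool((np.diff(p_vdw(v, T, a, b)) > 0).any())
    print(f"T = {T:.1f} K (0.85 Tc, Tc = {Tc:.1f} K)")
    print("hard-rod isotherm ever increases with v?", rod_increasing)
    print("vdW isotherm ever increases with v?     ", vdw_increasing)
```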
In 1959 this one-dimensional gas model was extended by Kac to include particle pair interactions through an attractive potential, . This specific form allowed evaluation of the grand partition function, in the thermodynamic limit, in terms of the eigenfunctions and eigenvalues of a homogeneous integral equation. Although an explicit equation of state was not obtained, it was proved that the pressure was a strictly decreasing function of the volume per particle, hence condensation did not occur. Four years later, in 1963, Kac together with Uhlenbeck and Hemmer modified the pair potential of Kac's previous work as , so that was independent of . They found that a second limiting process they called the van der Waals limit, (in which the pair potential becomes both infinitely long range and infinitely weak) and performed after the thermodynamic limit, produced the one-dimensional vdW equation (here rendered in molar form) as well as the Gibbs criterion, (equivalently the Maxwell construction). As a result, all isotherms satisfy the condition as shown in Figure 9, and hence the first criticism of the vdW equation is not as serious as originally thought. Then, in 1966, Lebowitz and Penrose generalized what they called the Kac potential, , to apply to a nonspecific function of dimensions: For and this reduces to the specific one-dimensional function considered by Kac, et al., and for it is an arbitrary function (although subject to specific requirements) in physical three-dimensional space. In fact, the function must be bounded, non-negative, and one whose integral is finite, independent of . By obtaining upper and lower bounds on and hence on , taking the thermodynamic limit ( with finite) to obtain upper and lower bounds on the function , then subsequently taking the van der Waals limit, they found that the two bounds coalesced and thereby produced a unique limit (here written in terms of the free energy per mole and the molar volume): The abbreviation stands for "convex envelope"; this is a function which is the largest convex function that is less than or equal to the original function. The function is the limit function when ; also here . This result is illustrated in Figure 8 by the solid green curves and black line, which is the convex envelope of . The corresponding limit for the pressure is a generalized form of the vdW equation together with the Gibbs criterion, (equivalently the Maxwell construction). Here is the pressure when attractive molecular forces are absent. The conclusion from all this work is that a rigorous mathematical derivation from the partition function produces a generalization of the vdW equation together with the Gibbs criterion if the attractive force is infinitely weak with an infinitely long range. In that case, , the pressure that results from direct particle collisions (or more accurately the core repulsive forces), replaces . This is consistent with the second criticism, which can be stated as . Consequently, the vdW equation cannot be rigorously derived from the configuration integral over the entire range of . Nevertheless, it is possible to rigorously show that the vdW equation is equivalent to a two-term approximation of the virial equation. Hence it can be rigorously derived from the partition function as a two-term approximation in the additional limit . The virial equation of state This derivation is simplest when begun from the grand partition function, (). 
In this case, the connection with thermodynamics is through , together with the number of particles Substituting the expression for () in the series for produces Expanding in its convergent power series, using in each term, and equating powers of produces relations that can be solved for the in terms of the . For example, , , and . Then from , the number density is expressed as the series which can be inverted to give The coefficients are given in terms of by the Lagrange inversion theorem, or determined by substituting into the series for and equating powers of ; thus , and so on. Finally, using this series in the series for produces the virial expansion, or virial equation of state The second virial coefficient This conditionally convergent series is also an asymptotic power series for the limit , and a finite number of terms is an asymptotic approximation to . The dominant order approximation in this limit is , which is the ideal gas law. It can be written as an equality using order symbols, for example , which states that the remaining terms approach zero in the limit (or , which states, more accurately, that they approach zero in proportion to ). The two-term approximation is , and the expression for is where and is a dimensionless two-particle potential function. For spherically symmetric molecules, this function can be represented most simply with two parameters: a characteristic molecular diameter and a binding energy , as shown in the Figure 10 plot, in which . Also, for spherically symmetric molecules, five of the six integrals in the expression for can be done with the result From its definition, is positive for , and negative for with a minimum of at some . Furthermore, increases so rapidly that whenever then . In addition, in the limit ( is a dimensionless "coldness", and the quantity is a characteristic molecular temperature), the exponential can be approximated for by two terms of its power series expansion. In these circumstances, can be approximated as where has the minimum value of . On splitting the interval of integration into two parts, one less than and the other greater than , evaluating the first integral and making the second integration variable dimensionless using produces where and , where is a numerical factor whose value depends on the specific dimensionless intermolecular-pair potential Here and , where are the constants given in the introduction. The condition that be finite requires that be integrable over the range . This result indicates that a dimensionless (that is, a function of a dimensionless molecular temperature ) is a universal function for all real gases with an intermolecular pair potential of the form . This is an example of the principle of corresponding states on the molecular level. Moreover, this is true in general and has been developed extensively both theoretically and experimentally. The van der Waals approximation Substituting the (approximate in ) expression for into the two-term virial approximation produces Here the approximation is written in terms of molar quantities; its first two terms are the same as the first two terms of the vdW virial equation. The Taylor expansion of , uniformly convergent for , can be written as , so substituting for produces Alternatively this is which is the vdW equation. Summary According to this derivation, the vdW equation is an equivalent of the two-term approximation of the virial equation of statistical mechanics in the limits . 
Consequently the equation produces an accurate approximation in a region defined by (on a molecular basis ), which corresponds to a dilute gas. But as the density increases, the behavior of the vdW approximation and the two-term virial expansion differ markedly. Whereas the virial approximation in this instance either increases or decreases continuously, the vdW approximation together with the Maxwell construction expresses physical reality in the form of a phase change, while also indicating the existence of metastable states. This difference in behaviors was pointed out by Korteweg and Rayleigh (see Rowlinson) in the course of their dispute with Tait about the vdW equation. In this extended region, use of the vdW equation is not justified mathematically; however, it has empirical validity. Its various applications in this region that attest to this, both qualitative and quantitative, have been described previously in this article. This point was also made by Alder, et al. who, at a conference marking the 100th anniversary of van der Waals' thesis, noted that: It is doubtful whether we would celebrate the centennial of the Van der Waals equation if it were applicable only under circumstances where it has been proven to be rigorously valid. It is empirically well established that many systems whose molecules have attractive potentials that are neither long-range nor weak conform nearly quantatively to the Van der Waals model. An example is the theoretically much studied system of Argon, where the attractive potential has only a range half as large as the repulsive core. They continued by saying that this model has "validity down to temperatures below the critical temperature, where the attractive potential is not weak at all but, in fact, comparable to the thermal energy." They also described its application to mixtures "where the Van der Waals model has also been applied with great success. In fact, its success has been so great that not a single other model of the many proposed since, has equalled its quantitative predictions, let alone its simplicity." Engineers have made extensive use of this empirical validity, modifying the equation in numerous ways (by one account there have been some 400 cubic equations of state produced) in order to manage the liquids, and gases of pure substances and mixtures, that they encounter in practice. This situation has been aptly described by Boltzmann: ...van der Waals has given us such a valuable tool that it would cost us much trouble to obtain by the subtlest deliberations a formula that would really be more useful than the one that van der Waals found by inspiration, as it were. Notes References Barenblatt, G.I. (1979) [1978 in Russian]. Similarity, Self-Similarity, and Intermediate Asymptotics. Translated by Stein, Norman. Translation Editor VanDyke, Milton. NY and London. Consultants Bureau. See also Gas laws Ideal gas Inversion temperature Iteration Maxwell construction Real gas Theorem of corresponding states Van der Waals constants (data page) Redlich–Kwong equation of state Further reading . Eponymous equations of physics Equations of state Gas laws Engineering thermodynamics Equation
Van der Waals equation
[ "Physics", "Chemistry", "Engineering" ]
13,445
[ "Equations of physics", "Engineering thermodynamics", "Statistical mechanics", "Eponymous equations of physics", "Thermodynamics", "Gas laws", "Mechanical engineering", "Equations of state" ]
206,101
https://en.wikipedia.org/wiki/Stokes%27%20law
In fluid dynamics, Stokes' law gives the frictional force – also called drag force – exerted on spherical objects moving at very small Reynolds numbers in a viscous fluid. It was derived by George Gabriel Stokes in 1851 by solving the Stokes flow limit for small Reynolds numbers of the Navier–Stokes equations. Statement of the law The force of viscosity on a small sphere moving through a viscous fluid is given by: where (in SI units): is the frictional force – known as Stokes' drag – acting on the interface between the fluid and the particle (newtons, kg m s−2); (some authors use the symbol ) is the dynamic viscosity (Pascal-seconds, kg m−1 s−1); is the radius of the spherical object (meters); is the flow velocity relative to the object (meters per second). Note the minus sign in the equation, the drag force points in the opposite direction to the relative velocity: drag opposes the motion. Stokes' law makes the following assumptions for the behavior of a particle in a fluid: Laminar flow No inertial effects (zero Reynolds number) Spherical particles Homogeneous (uniform in composition) material Smooth surfaces Particles do not interfere with each other. Depending on desired accuracy, the failure to meet these assumptions may or may not require the use of a more complicated model. To 10% error, for instance, velocities need be limited to those giving Re < 1. For molecules Stokes' law is used to define their Stokes radius and diameter. The CGS unit of kinematic viscosity was named "stokes" after his work. Applications Stokes' law is the basis of the falling-sphere viscometer, in which the fluid is stationary in a vertical glass tube. A sphere of known size and density is allowed to descend through the liquid. If correctly selected, it reaches terminal velocity, which can be measured by the time it takes to pass two marks on the tube. Electronic sensing can be used for opaque fluids. Knowing the terminal velocity, the size and density of the sphere, and the density of the liquid, Stokes' law can be used to calculate the viscosity of the fluid. A series of steel ball bearings of different diameters are normally used in the classic experiment to improve the accuracy of the calculation. The school experiment uses glycerine or golden syrup as the fluid, and the technique is used industrially to check the viscosity of fluids used in processes. Several school experiments often involve varying the temperature and/or concentration of the substances used in order to demonstrate the effects this has on the viscosity. Industrial methods include many different oils, and polymer liquids such as solutions. The importance of Stokes' law is illustrated by the fact that it played a critical role in the research leading to at least three Nobel Prizes. Stokes' law is important for understanding the swimming of microorganisms and sperm; also, the sedimentation of small particles and organisms in water, under the force of gravity. In air, the same theory can be used to explain why small water droplets (or ice crystals) can remain suspended in air (as clouds) until they grow to a critical size and start falling as rain (or snow and hail). Similar use of the equation can be made in the settling of fine particles in water or other fluids. 
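As a concrete illustration of the falling-sphere viscometer described above, the drag law and the terminal-velocity balance can be combined in a few lines of Python. The numbers below are illustrative assumptions (a small steel ball falling through glycerine), not measured data.

```python
import math

def stokes_drag(mu, R, v):
    """Magnitude of the Stokes drag, F = 6 pi mu R v (SI units)."""
    return 6.0 * math.pi * mu * R * v

def viscosity_from_fall(R, v_t, rho_sphere, rho_fluid, g=9.81):
    """Falling-ball viscometer: dynamic viscosity inferred from a measured
    terminal velocity, mu = 2 R^2 g (rho_sphere - rho_fluid) / (9 v_t)."""
    return 2.0 * R**2 * g * (rho_sphere - rho_fluid) / (9.0 * v_t)

def reynolds_number(rho_fluid, v, R, mu):
    """Particle Reynolds number Re = rho v (2R) / mu; Stokes' law holds to
    about 10 % only for Re < 1."""
    return rho_fluid * v * (2.0 * R) / mu

if __name__ == "__main__":
    # Illustrative (assumed) numbers: a 1 mm diameter steel ball in glycerine.
    R, v_t = 0.5e-3, 3.0e-3                    # radius (m), fall speed (m/s)
    rho_steel, rho_glycerine = 7800.0, 1260.0  # kg/m^3
    mu = viscosity_from_fall(R, v_t, rho_steel, rho_glycerine)
    Re = reynolds_number(rho_glycerine, v_t, R, mu)
    print(f"inferred viscosity ~ {mu:.2f} Pa s,  Re ~ {Re:.4f}")
    print(f"drag at terminal speed ~ {stokes_drag(mu, R, v_t)*1e6:.1f} uN")
```

The last line can be checked against the weight minus buoyancy of the ball, (4/3)πR³(ρ_sphere − ρ_fluid)g, which balances the drag exactly at terminal velocity.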
Terminal velocity of sphere falling in a fluid At terminal (or settling) velocity, the excess force due to the difference between the weight and buoyancy of the sphere (both caused by gravity) is given by: where (in SI units): is the mass density of the sphere [kg/m3] is the mass density of the fluid [kg/m3] is the gravitational acceleration [m/s] Requiring the force balance and solving for the velocity gives the terminal velocity . Note that since the excess force increases as and Stokes' drag increases as , the terminal velocity increases as and thus varies greatly with particle size as shown below. If a particle only experiences its own weight while falling in a viscous fluid, then a terminal velocity is reached when the sum of the frictional and the buoyant forces on the particle due to the fluid exactly balances the gravitational force. This velocity [m/s] is given by: where (in SI units): is the gravitational field strength [m/s2] is the radius of the spherical particle [m] is the mass density of the particle [kg/m3] is the mass density of the fluid [kg/m3] is the dynamic viscosity [kg/(m•s)]. Derivation Steady Stokes flow In Stokes flow, at very low Reynolds number, the convective acceleration terms in the Navier–Stokes equations are neglected. Then the flow equations become, for an incompressible steady flow: where: is the fluid pressure (in Pa), is the flow velocity (in m/s), and is the vorticity (in s−1), defined as  By using some vector calculus identities, these equations can be shown to result in Laplace's equations for the pressure and each of the components of the vorticity vector:   and   Additional forces like those by gravity and buoyancy have not been taken into account, but can easily be added since the above equations are linear, so linear superposition of solutions and associated forces can be applied. Transversal flow around a sphere For the case of a sphere in a uniform far field flow, it is advantageous to use a cylindrical coordinate system . The –axis is through the centre of the sphere and aligned with the mean flow direction, while is the radius as measured perpendicular to the –axis. The origin is at the sphere centre. Because the flow is axisymmetric around the –axis, it is independent of the azimuth . In this cylindrical coordinate system, the incompressible flow can be described with a Stokes stream function , depending on and : with and the flow velocity components in the and direction, respectively. The azimuthal velocity component in the –direction is equal to zero, in this axisymmetric case. The volume flux, through a tube bounded by a surface of some constant value , is equal to and is constant. For this case of an axisymmetric flow, the only non-zero component of the vorticity vector is the azimuthal –component The Laplace operator, applied to the vorticity , becomes in this cylindrical coordinate system with axisymmetry: From the previous two equations, and with the appropriate boundary conditions, for a far-field uniform-flow velocity in the –direction and a sphere of radius , the solution is found to be The solution of velocity in cylindrical coordinates and components follows as: The solution of vorticity in cylindrical coordinates follows as: The solution of pressure in cylindrical coordinates follows as: The solution of pressure in spherical coordinates follows as: The formula of pressure is also called dipole potential analogous to the concept in electrostatics. 
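The solution just given can be evaluated directly. The sketch below uses the standard textbook form of the velocity and pressure fields in spherical coordinates (with the polar axis along the free-stream direction), which is equivalent to the stream-function solution above, and checks the no-slip and far-field limits; the parameter values are illustrative.

```python
import numpy as np

def stokes_sphere_field(r, theta, U, a, mu, p_inf=0.0):
    """Velocity (u_r, u_theta) and pressure for uniform flow U past a fixed
    sphere of radius a in the Stokes limit (standard textbook form):
      u_r     =  U cos(theta) (1 - 3a/(2r) + a^3/(2 r^3))
      u_theta = -U sin(theta) (1 - 3a/(4r) - a^3/(4 r^3))
      p       =  p_inf - 3 mu a U cos(theta) / (2 r^2)
    """
    ur = U * np.cos(theta) * (1.0 - 1.5 * a / r + 0.5 * (a / r) ** 3)
    ut = -U * np.sin(theta) * (1.0 - 0.75 * a / r - 0.25 * (a / r) ** 3)
    p = p_inf - 1.5 * mu * a * U * np.cos(theta) / r**2
    return ur, ut, p

if __name__ == "__main__":
    U, a, mu = 1.0e-3, 1.0e-3, 1.0   # 1 mm/s past a 1 mm sphere in a 1 Pa s fluid
    theta = np.pi / 4
    for r in (a, 2 * a, 10 * a, 100 * a):
        ur, ut, p = stokes_sphere_field(r, theta, U, a, mu)
        print(f"r/a={r/a:6.1f}  u_r={ur: .3e}  u_theta={ut: .3e}  p={p: .3e}")
    # On the sphere surface both velocity components vanish (no-slip);
    # far away they approach the uniform stream (U cos(theta), -U sin(theta)).
```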
A more general formulation, with arbitrary far-field velocity-vector , in cartesian coordinates follows with: In this formulation the non-conservative term represents a kind of so-called Stokeslet. The Stokeslet is the Green's function of the Stokes-Flow-Equations. The conservative term is equal to the dipole gradient field. The formula of vorticity is analogous to the Biot–Savart law in electromagnetism. Alternatively, in a more compact way, one can formulate the velocity field as follows: , where is the Hessian matrix differential operator and is a differential operator composed as the difference of the Laplacian and the Hessian. In this way it becomes explicitly clear, that the solution is composed from derivatives of a Coulomb-type potential () and a Biharmonic-type potential (). The differential operator applied to the vector norm generates the Stokeslet. The following formula describes the viscous stress tensor for the special case of Stokes flow. It is needed in the calculation of the force acting on the particle. In Cartesian coordinates the vector-gradient is identical to the Jacobian matrix. The matrix represents the identity-matrix. The force acting on the sphere can be calculated via the integral of the stress tensor over the surface of the sphere, where represents the radial unit-vector of spherical-coordinates: Rotational flow around a sphere Other types of Stokes flow Although the liquid is static and the sphere is moving with a certain velocity, with respect to the frame of sphere, the sphere is at rest and liquid is flowing in the opposite direction to the motion of the sphere. See also Einstein relation (kinetic theory) Scientific laws named after people Drag equation Viscometry Equivalent spherical diameter Deposition (geology) Stokes number – A determinant of the additional effect of turbulence on terminal fall velocity for particles in fluids Sources Originally published in 1879, the 6th extended edition appeared first in 1932. References Fluid dynamics
Stokes' law
[ "Chemistry", "Engineering" ]
1,820
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
206,115
https://en.wikipedia.org/wiki/Schwarzschild%20radius
The Schwarzschild radius or the gravitational radius is a physical parameter in the Schwarzschild solution to Einstein's field equations that corresponds to the radius defining the event horizon of a Schwarzschild black hole. It is a characteristic radius associated with any quantity of mass. The Schwarzschild radius was named after the German astronomer Karl Schwarzschild, who calculated this exact solution for the theory of general relativity in 1916. The Schwarzschild radius is given as where G is the gravitational constant, M is the object mass, and c is the speed of light. History In 1916, Karl Schwarzschild obtained the exact solution to Einstein's field equations for the gravitational field outside a non-rotating, spherically symmetric body with mass (see Schwarzschild metric). The solution contained terms of the form and , which becomes singular at and respectively. The has come to be known as the Schwarzschild radius. The physical significance of these singularities was debated for decades. It was found that the one at is a coordinate singularity, meaning that it is an artifact of the particular system of coordinates that was used; while the one at is a spacetime singularity and cannot be removed. The Schwarzschild radius is nonetheless a physically relevant quantity, as noted above and below. This expression had previously been calculated, using Newtonian mechanics, as the radius of a spherically symmetric body at which the escape velocity was equal to the speed of light. It had been identified in the 18th century by John Michell and Pierre-Simon Laplace. Parameters The Schwarzschild radius of an object is proportional to its mass. Accordingly, the Sun has a Schwarzschild radius of approximately , whereas Earth's is approximately and the Moon's is approximately . Derivation The simplest way of deriving the Schwarzschild radius comes from the equality of the modulus of a spherical solid mass' rest energy with its gravitational energy: So, the Schwarzschild radius reads as Black hole classification by Schwarzschild radius Any object whose radius is smaller than its Schwarzschild radius is called a black hole. The surface at the Schwarzschild radius acts as an event horizon in a non-rotating body (a rotating black hole operates slightly differently). Neither light nor particles can escape through this surface from the region inside, hence the name "black hole". Black holes can be classified based on their Schwarzschild radius, or equivalently, by their density, where density is defined as mass of a black hole divided by the volume of its Schwarzschild sphere. As the Schwarzschild radius is linearly related to mass, while the enclosed volume corresponds to the third power of the radius, small black holes are therefore much more dense than large ones. The volume enclosed in the event horizon of the most massive black holes has an average density lower than main sequence stars. Supermassive black hole A supermassive black hole (SMBH) is the largest type of black hole, though there are few official criteria on how such an object is considered so, on the order of hundreds of thousands to billions of solar masses. (Supermassive black holes up to 21 billion have been detected, such as NGC 4889.) Unlike stellar mass black holes, supermassive black holes have comparatively low average densities. (Note that a (non-rotating) black hole is a spherical region in space that surrounds the singularity at its center; it is not the singularity itself.) 
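The formula above is straightforward to evaluate; a minimal sketch follows, using standard values of the physical constants and approximate body masses (the specific numbers are illustrative inputs, not taken from this article).

```python
G = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.99792458e8     # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Schwarzschild radius r_s = 2 G M / c^2, in meters."""
    return 2.0 * G * mass_kg / C**2

if __name__ == "__main__":
    bodies = {
        "Sun":   1.989e30,   # approximate masses in kg
        "Earth": 5.972e24,
        "Moon":  7.342e22,
    }
    for name, m in bodies.items():
        print(f"{name:5s}  M = {m:.3e} kg   r_s = {schwarzschild_radius(m):.3e} m")
```

Run as written this gives roughly 3 km for the Sun, 9 mm for the Earth, and 0.1 mm for the Moon, illustrating the linear scaling of r_s with mass.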
With that in mind, the average density of a supermassive black hole can be less than the density of water. The Schwarzschild radius of a body is proportional to its mass and therefore to its volume, assuming that the body has a constant mass-density. In contrast, the physical radius of the body is proportional to the cube root of its volume. Therefore, as the body accumulates matter at a given fixed density (in this example, 997 kg/m3, the density of water), its Schwarzschild radius will increase more quickly than its physical radius. When a body of this density has grown to around 136 million solar masses (), its physical radius would be overtaken by its Schwarzschild radius, and thus it would form a supermassive black hole. It is thought that supermassive black holes like these do not form immediately from the singular collapse of a cluster of stars. Instead they may begin life as smaller, stellar-sized black holes and grow larger by the accretion of matter, or even of other black holes. The Schwarzschild radius of the supermassive black hole at the Galactic Center of the Milky Way is approximately 12 million kilometres. Its mass is about . Stellar black hole Stellar black holes have much greater average densities than supermassive black holes. If one accumulates matter at nuclear density (the density of the nucleus of an atom, about 1018 kg/m3; neutron stars also reach this density), such an accumulation would fall within its own Schwarzschild radius at about and thus would be a stellar black hole. Micro black hole A small mass has an extremely small Schwarzschild radius. A black hole of mass similar to that of Mount Everest would have a Schwarzschild radius much smaller than a nanometre. Its average density at that size would be so high that no known mechanism could form such extremely compact objects. Such black holes might possibly be formed in an early stage of the evolution of the universe, just after the Big Bang, when densities of matter were extremely high. Therefore, these hypothetical miniature black holes are called primordial black holes. When moving to the Planck scale , it is convenient to write the gravitational radius in the form , (see also virtual black hole). Other uses In gravitational time dilation Gravitational time dilation near a large, slowly rotating, nearly spherical body, such as the Earth or Sun can be reasonably approximated as follows: where: is the elapsed time for an observer at radial coordinate r within the gravitational field; is the elapsed time for an observer distant from the massive object (and therefore outside of the gravitational field); is the radial coordinate of the observer (which is analogous to the classical distance from the center of the object); is the Schwarzschild radius. Compton wavelength intersection The Schwarzschild radius () and the Compton wavelength () corresponding to a given mass are similar when the mass is around one Planck mass (), when both are of the same order as the Planck length (). Gravitational radius and the Heisenberg Uncertainty Principle Thus, or , which is another form of the Heisenberg uncertainty principle on the Planck scale. (See also Virtual black hole). Calculating the maximum volume and radius possible given a density before a black hole forms The Schwarzschild radius equation can be manipulated to yield an expression that gives the largest possible radius from an input density that doesn't form a black hole. Taking the input density as , For example, the density of water is . 
This means that the largest sphere of water that could exist without collapsing into a black hole would have a radius of 400 920 754 km (about 2.67 AU). See also Black hole, a general survey Chandrasekhar limit, a second requirement for black hole formation John Michell Classification of black holes by type: Static or Schwarzschild black hole Rotating or Kerr black hole Charged black hole or Newman black hole and Kerr–Newman black hole A classification of black holes by mass: Micro black hole and extra-dimensional black hole Planck length Primordial black hole, a hypothetical leftover of the Big Bang Stellar black hole, which could either be a static black hole or a rotating black hole Supermassive black hole, which could also either be a static black hole or a rotating black hole Visible universe, if its density is the critical density, as a hypothetical black hole Virtual black hole Notes References Black holes 1916 in science Radii
Schwarzschild radius
[ "Physics", "Astronomy" ]
1,638
[ "Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Astrophysics", "Density", "Stellar phenomena", "Astronomical objects" ]
206,122
https://en.wikipedia.org/wiki/Big%20Crunch
The Big Crunch is a hypothetical scenario for the ultimate fate of the universe, in which the expansion of the universe eventually reverses and the universe recollapses, ultimately causing the cosmic scale factor to reach zero, an event potentially followed by a reformation of the universe starting with another Big Bang. The vast majority of evidence, however, indicates that this hypothesis is not correct. Instead, astronomical observations show that the expansion of the universe is accelerating rather than being slowed by gravity, suggesting that a Big Freeze is much more likely to occur. Nonetheless, some physicists have proposed that a "Big Crunch-style" event could result from a dark energy fluctuation. The hypothesis dates back to 1922, when Russian physicist Alexander Friedmann derived a set of equations showing that the fate of the universe depends on its density: rather than remaining static, it could either expand or contract. With enough matter, gravity could halt the universe's expansion and eventually reverse it. This reversal would result in the universe collapsing in on itself, somewhat like the formation of a black hole. The final stages of a Big Crunch would be filled with radiation from stars and high-energy particles; as this radiation is compressed and blueshifted to higher energy, it would become intense enough to ignite the surfaces of stars before they collide. In the final moments, the universe would be one large fireball with a near-infinite temperature, and at the absolute end neither time nor space would remain. Overview The Big Crunch scenario hypothesizes that the density of matter throughout the universe is sufficiently high that gravitational attraction will overcome the expansion that began with the Big Bang. The FLRW cosmology can predict whether the expansion will eventually stop based on the average energy density, Hubble parameter, and cosmological constant. If the expansion stopped, contraction would inevitably follow, accelerating as time passes and finishing the universe in a kind of gravitational collapse, turning the universe into a black hole. Experimental evidence in the late 1990s and early 2000s (namely the observation of distant supernovas as standard candles, and the well-resolved mapping of the cosmic microwave background) led to the conclusion that the expansion of the universe is not being slowed by gravity but is instead accelerating. The 2011 Nobel Prize in Physics was awarded to researchers who contributed to this discovery. The Big Crunch hypothesis also leads to another hypothesis known as the Big Bounce, in which the collapsed universe rebounds, causing another Big Bang. This could potentially repeat forever in a phenomenon known as a cyclic universe. History Richard Bentley, a churchman and scholar, sent a letter to Isaac Newton in preparation for a lecture on Newton's theories and the rejection of atheism, asking: if the universe is finite and all stars attract one another, would they not all collapse to a single point, and if the universe is infinite with infinitely many stars, would infinite forces in every direction not act on every star? This question is known as Bentley's paradox, an early predecessor of the Big Crunch. It is now known, however, that stars move and are not static. Einstein's cosmological constant Albert Einstein favored an unchanging model of the universe. 
He collaborated in 1917 with Dutch astronomer Willem de Sitter to help demonstrate that the theory of general relativity could accommodate a static model; de Sitter showed that the equations could describe a very simple universe. Finding no problems initially, scientists adopted the model to describe the universe. They then ran into a different form of Bentley's paradox: the theory of general relativity also described a restless universe. Einstein realized that for a static universe to exist, as observations were then taken to indicate, some form of anti-gravity would be needed to counter the gravity pulling the universe together, requiring an extra term that was not part of the original equations. In the end this term, the cosmological constant, which represents the anti-gravity contribution, was added to the theory of relativity. Discovery of Hubble's law Edwin Hubble, working at the Mount Wilson Observatory, took measurements of the distances of galaxies and paired them with Vesto Slipher and Milton Humason's measurements of the redshifts associated with those galaxies. He discovered a rough proportionality between the redshift of an object and its distance. Hubble fitted a trend line to 46 galaxies and obtained the Hubble constant, which he deduced to be 500 km/s/Mpc, nearly seven times the value accepted today, but still giving evidence that the universe was expanding and was not static. Abandonment of the cosmological constant After Hubble's discovery was published, Einstein abandoned the cosmological constant. In their simplest form, the equations generate a model of the universe that expands or contracts; it was this contradiction with the apparently static universe that had motivated the cosmological constant in the first place. After the confirmation that the universe was expanding, Einstein called his assumption that the universe was static his "biggest mistake". In 1931, Einstein visited Hubble to thank him for "providing the basis of modern cosmology". After this discovery, Einstein's and Newton's models of a contracting yet static universe were dropped in favor of the expanding universe model. Cyclic universes A hypothesis called the "Big Bounce" proposes that the universe could collapse to the state where it began and then initiate another Big Bang, so that the universe would last forever but pass through alternating phases of expansion (Big Bang) and contraction (Big Crunch). This means that there may be a universe in a state of repeated Big Bangs and Big Crunches. Cyclic universes were briefly considered by Albert Einstein in 1931. He hypothesized that there was a universe before the Big Bang, which ended in a Big Crunch that could create a Big Bang as a reaction. Our universe could be in a cycle of expansion and contraction, a cycle possibly going on infinitely. Ekpyrotic model There are also more modern models of cyclic universes. The Ekpyrotic model, developed by Paul Steinhardt, states that the Big Bang could have been caused by two parallel orbifold planes, referred to as branes, colliding in a higher-dimensional space. The four-dimensional universe lies on one of the branes. The collision corresponds to a Big Crunch, then a Big Bang. The matter and radiation around us today are quantum fluctuations generated before the branes collided. After several billion years, the universe has reached its modern state, and it will start contracting in another several billion years. Dark energy corresponds to the force between the branes, and it allows problems of earlier models, such as the flatness and monopole problems, to be fixed. 
The cycles can also go infinitely into the past and the future, and an attractor allows for a complete history of the universe. This fixes the problem of the earlier model, in which the universe went into heat death from entropy buildup; the new model avoids this with a net expansion after every cycle, preventing entropy from accumulating. There are still some flaws in this model, however. The branes on which the model is based are not yet completely understood by string theorists, and there is the possibility that the scale-invariant spectrum could be destroyed by the big crunch. Also, while the general character of the forces required to produce the vacuum fluctuations is known (cosmic inflation in the standard picture, or the collision of the branes in the ekpyrotic model), a candidate from particle physics is still missing. Conformal Cyclic Cosmology (CCC) model Physicist Roger Penrose advanced a general relativity-based theory called conformal cyclic cosmology, in which the universe expands until all the matter decays and is turned to light. Since nothing in the universe would then have any time or distance scale associated with it, the universe becomes identical with the Big Bang (resulting in a type of Big Crunch that becomes the next Big Bang, thus starting the next cycle). Penrose and Gurzadyan suggested that signatures of conformal cyclic cosmology could potentially be found in the cosmic microwave background; as of 2020, these have not been detected. This model also has some flaws: skeptics have pointed out that, in order to match up an infinitely large universe with an infinitely small universe, all particles must lose their mass as the universe gets old. Penrose presented evidence for CCC in the form of rings of uniform temperature in the CMB, the idea being that these rings would be the signature, in our aeon (the current cycle of the universe we are in), of spherical gravitational waves generated by colliding black holes in the previous aeon. Loop quantum cosmology (LQC) Loop quantum cosmology is a model of the universe that proposes a "quantum bridge" between expanding and contracting universes. In this model quantum geometry creates a brand-new force that is negligible at low spacetime curvature but rises very rapidly in the Planck regime, overwhelming classical gravity and resolving the singularities of general relativity. Once the singularities are resolved, the conceptual paradigm of cosmology changes, forcing one to revisit the standard issues, such as the horizon problem, from a new perspective. Under this model, due to quantum geometry, the Big Bang is replaced by a Big Bounce with no assumptions or any fine tuning. The approach of effective dynamics has been used extensively in loop quantum cosmology to describe physics at the Planck scale and the beginning of the universe. Numerical simulations have confirmed the validity of effective dynamics, which provides a good approximation of the full loop quantum dynamics. It has been shown that, when states have very large quantum fluctuations at late times (meaning they do not lead to macroscopic universes as described by general relativity), the effective dynamics departs from the full quantum dynamics near the bounce and in the later universe. In this case, the effective dynamics will overestimate the density at the bounce, but it will still capture the qualitative aspects extremely well. 
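The bounce mechanism described above is often summarized by the effective Friedmann equation of loop quantum cosmology, H² = (8πG/3)ρ(1 − ρ/ρc), in which the correction factor (1 − ρ/ρc) switches off classical gravity as the density ρ approaches a critical, Planck-scale density ρc. The short sketch below only illustrates the behaviour of that formula, with all quantities in arbitrary units; it is not taken from the numerical simulations referred to in the text.

```python
# Illustrative sketch of the LQC effective Friedmann equation
# H^2 = (8*pi*G/3) * rho * (1 - rho/rho_c): the correction factor suppresses
# H^2 as the density approaches the critical density rho_c, so the contraction
# halts and a bounce replaces the classical singularity. Units are arbitrary.

import numpy as np

G = 1.0        # gravitational constant, arbitrary units
rho_c = 1.0    # critical (Planck-scale) density, arbitrary units

def hubble_squared(rho):
    """Effective H^2 with the LQC correction (classical limit when rho << rho_c)."""
    return (8.0 * np.pi * G / 3.0) * rho * (1.0 - rho / rho_c)

for rho in [1e-6, 0.1, 0.5, 0.9, 1.0]:
    h2_classical = (8.0 * np.pi * G / 3.0) * rho
    h2_effective = hubble_squared(rho)
    print(f"rho/rho_c = {rho:7.1e}  classical H^2 = {h2_classical:8.4f}  "
          f"effective H^2 = {h2_effective:8.4f}")
# At rho = rho_c the effective H^2 vanishes: the universe stops contracting and bounces.
```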
Empirical scenarios from physical theories If dark energy is (mainly) explained by a form of quintessence driven by a scalar field evolving down a monotonically decreasing potential that passes sufficiently below zero, and if current data (in particular observational constraints on dark energy) hold, the accelerating expansion of the Universe would reverse to contraction within the cosmic near-future of the next 100 million years. According to an Andrei-Ijjas-Steinhardt study, the scenario fits "naturally with cyclic cosmologies and recent conjectures about quantum gravity". The study suggests that the slow contraction phase would "endure for a period of order 1 billion y before the universe transitions to a new phase of expansion". Effects Paul Davies considered a scenario in which the Big Crunch happens about 100 billion years from the present. In his model, the contracting universe would evolve roughly like the expanding phase in reverse. First, galaxy clusters, and then galaxies, would merge, and the temperature of the cosmic microwave background (CMB) would begin to rise as CMB photons get blueshifted. Stars would eventually become so close together that they would begin to collide with each other. Once the CMB becomes hotter than M-type stars (about 500,000 years before the Big Crunch in Davies' model), they would no longer be able to radiate away their heat and would cook themselves until they evaporate; this continues for successively hotter stars until O-type stars boil away about 100,000 years before the Big Crunch. In the last minutes, the temperature of the universe would be so great that atoms and atomic nuclei would break up and get sucked into the already coalescing black holes. At the time of the Big Crunch, all the matter in the universe would be crushed into an infinitely hot, infinitely dense singularity similar to the Big Bang. The Big Crunch may be followed by another Big Bang, creating a new universe. In culture In The Restaurant at the End of the Universe, a novel by Douglas Adams, the concept is that a restaurant, Milliways, is set up to allow patrons to observe the end of the Universe, or "Gnab Gib", as it is referred to, as they dine. The term is sometimes used in the mainstream, for example (as "gnaB giB") in Physics I For Dummies and in a posting discussing the Big Crunch. See also References Physical cosmology Ultimate fate of the universe
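Davies' contraction scenario above rests on the fact that the CMB temperature scales inversely with the cosmic scale factor, T = T0/a. The sketch below uses that scaling with round, assumed stellar surface temperatures (these are not Davies' figures) to estimate how far the universe would have to contract before the blueshifted CMB outshines stellar photospheres.

```python
# Rough sketch of the blueshift argument in a contracting universe: the CMB
# temperature scales as T = T0 / a, with a the cosmic scale factor (a = 1 today).
# The stellar surface temperatures below are round, assumed values for
# illustration only, not figures taken from Davies' calculation.

T0 = 2.725  # present CMB temperature, kelvin

stars = {
    "M-type photosphere (~3,000 K)": 3_000.0,
    "Sun-like photosphere (~5,800 K)": 5_800.0,
    "O-type photosphere (~30,000 K)": 30_000.0,
}

for label, T_star in stars.items():
    a_needed = T0 / T_star  # scale factor at which the CMB matches the stellar surface
    print(f"{label}: CMB matches it once the universe has shrunk to "
          f"a = {a_needed:.1e} of its present size (factor ~{1/a_needed:,.0f})")
```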
Big Crunch
[ "Physics", "Astronomy" ]
2,536
[ "Astrophysics", "Theoretical physics", "Physical cosmology", "Astronomical sub-disciplines" ]
206,217
https://en.wikipedia.org/wiki/SMART-1
SMART-1 was a European Space Agency satellite that orbited the Moon. It was launched on 27 September 2003 at 23:14 UTC from the Guiana Space Centre in Kourou, French Guiana. "SMART-1" stands for Small Missions for Advanced Research in Technology-1. On 3 September 2006 (05:42 UTC), SMART-1 was deliberately crashed into the Moon's surface, ending its mission. Spacecraft design SMART-1 was about one metre (3.3 ft) across and lightweight in comparison to other probes. Its launch mass was 367 kg (809 lb), of which 287 kg (633 lb) was non-propellant. It was propelled by a solar-powered Hall-effect thruster (Snecma PPS-1350-G) using 82 kg of xenon gas contained in a 50-litre tank at a pressure of 150 bar at launch. The ion engine thruster used an electrostatic field to ionize the xenon and accelerate the ions, achieving a specific impulse of 16.1 kN·s/kg (1,640 seconds), more than three times the maximum for chemical rockets. One kg of propellant (1/350 to 1/300 of the total mass of the spacecraft) produced a delta-v of about 45 m/s. The electric propulsion subsystem weighed 29 kg with a peak power consumption of 1,200 watts. SMART-1 was the first in the program of ESA's Small Missions for Advanced Research and Technology. The solar arrays, capable of delivering 1,850 W at the beginning of the mission, were able to provide the maximum set power of 1,190 W to the thruster, giving a nominal thrust of 68 mN, hence an acceleration of 0.2 mm/s² or 0.7 m/s per hour (i.e., just under 0.00002 g of acceleration). As with all ion-engine powered craft, orbital maneuvers were not carried out in short bursts but very gradually. The particular trajectory taken by SMART-1 to the Moon required thrusting for about one third to one half of every orbit. When spiraling away from the Earth, thrusting was done on the perigee part of the orbit. At the end of the mission, the thruster had demonstrated the following capability: Thruster operating time: 5,000 h Xenon throughput: 82 kg Total impulse: 1.2 MN·s Total ΔV: 3.9 km/s As part of the European Space Agency's strategy to build very inexpensive and relatively small spacecraft, the total cost of SMART-1 was a relatively small 110 million euros (about 170 million U.S. dollars). SMART-1 was designed and developed by the Swedish Space Corporation on behalf of ESA. Assembly of the spacecraft was carried out by Saab Space in Linköping. Tests of the spacecraft were directed by the Swedish Space Corporation and executed by Saab Space. The project manager at ESA was Giuseppe Racca until the spacecraft achieved its operational orbit around the Moon; he was then replaced by Gerhard Schwehm for the science phase. The project manager at the Swedish Space Corporation was Peter Rathsman. The Principal Project Scientist was Bernard Foing. The Ground Segment Manager during the preparation phase was Mike McKay and the Spacecraft Operations Manager was Octavio Camino. Instruments AMIE The Advanced Moon micro-Imager Experiment was a miniature colour camera for lunar imaging. The CCD camera, with three filters of 750, 900 and 950 nm, was able to take images with an average pixel resolution of 80 m (about 260 ft). The camera weighed 2.1 kg (about 4.5 lb) and had a power consumption of 9 watts. D-CIXS The Demonstration of a Compact X-ray Spectrometer was an X-ray telescope for the identification of chemical elements on the lunar surface. 
It detected the X-ray fluorescence (XRF) of crystal compounds created through the interaction of the electron shell with the solar wind particles to measure the abundance of the three main components: magnesium, silicon and aluminium. The detection of iron, calcium and titanium depended on the solar activity. The detection range for X-rays was 0.5 to 10 keV. The spectrometer and XSM (described below) together weighed 5.2 kg and had a power consumption of 18 watts. XSM The X-ray solar monitor studied the solar variability to complement D-CIXS measurements. SIR The Smart-1 Infrared Spectrometer was an infrared spectrometer for the identification of mineral spectra of olivine and pyroxene. It detected wavelengths from 0.93 to 2.4 μm with 256 channels. The package weighed 2.3 kg and had a power consumption of 4.1 watts. EPDP The Electric Propulsion Diagnostic Package was to acquire data on the new propulsion system on SMART-1. The package weighed 0.8 kg and had a power consumption of 1.8 watts. SPEDE The Spacecraft Potential, Electron and Dust Experiment. The experiment weighed 0.8 kg and had a power consumption of 1.8 watts. Its function was to measure the properties and density of the plasma around the spacecraft, either as a Langmuir probe or as an electric field probe. SPEDE observed the emission of the spacecraft's ion engine and the "wake" the Moon leaves to the solar wind. Unlike most other instruments that have to be shut down to prevent damage, SPEDE could keep measuring inside radiation belts and in solar storms, such as the Halloween 2003 solar storms. It was built by Finnish Meteorological Institute and its name was intentionally chosen so that its acronym is the same as the nickname of Spede Pasanen, a famous Finnish movie actor, movie producer, and inventor. The algorithms developed for SPEDE were later used in the ESA lander Philae. KATE Ka band TT&C (telemetry, tracking and control) Experiment. The experiment weighed 6.2 kg and had a power consumption of 26 watts. The Ka-band transponder was designed as precursor for BepiColombo to perform radio science investigations and to monitor the dynamical performance of the electric propulsion system. Flight SMART-1 was launched 27 September 2003 together with Insat 3E and eBird 1, by an Ariane 5 rocket from the Guiana Space Centre in French Guiana. After 42 minutes it was released into a geostationary transfer orbit of 7,035 × 42,223 km. From there it used its Solar Electric Primary Propulsion (SEPP) to gradually spiral out during thirteen months. The orbit can be seen up to 26 October 2004 at spaceref.com, when the orbit was 179,718 × 305,214 km. On that date, after the 289th engine pulse, the SEPP had accumulated a total on-time of nearly 3,648 hours out of a total flight time of 8,000 hours, hence a little less than half of its total mission. It consumed about 58.8 kg of xenon and produced a delta-v of 2,737 m/s (46.5 m/s per kg xenon, 0.75 m/s per hour on-time). It was powered on again on 15 November for a planned burn of 4.5 days to enter fully into lunar orbit. It took until February 2005 using the electric thruster to decelerate into the final orbit 300–3,000 km above the Moon's surface. The end of mission performance demonstrated by the propulsion system is stated above. After its last perigee on 2 November, on 11 November 2004 it passed through the Earth-Moon L1 Lagrangian Point and into the area dominated by the Moon's gravitational influence, and at 1748 UT on 15 November passed the first periselene of its lunar orbit. 
The osculating orbit on that date was 6,704 × 53,208 km, with an orbital period of 129 hours, although the actual orbit was accomplished in only 89 hours. This illustrates the significant impact that the engine burns have on the orbit and marks the meaning of the osculating orbit, which is the orbit that would be travelled by the spacecraft if at that instant all perturbations, including thrust, would cease. ESA announced on 15 February 2005 an extension of the mission of SMART-1 by one year until August 2006. This date was later shifted to 3 September 2006 to enable further scientific observations from Earth. Lunar impact SMART-1 impacted the Moon's surface, as planned, on 3 September 2006 at 05:42:22 UTC, ending its mission. Moving at approximately 2,000 m/s (4,500 mph), SMART-1 created an impact visible with ground telescopes from Earth. It is hoped that not only will this provide some data simulating a meteor impact, but also that it might expose materials in the ground, like water ice, to spectroscopic analysis. ESA originally estimated that impact occurred at . In 2017, the impact site was identified from Lunar Reconnaissance Orbiter data at . At the time of impact, the Moon was visible in North and South America, and places in the Pacific Ocean, but not Europe, Africa, or western Asia. This project has generated data and know-how that will be used for other missions, such as the ESA's BepiColombo mission to Mercury. Important events and discoveries 27 September 2003: SMART-1 launched from the European Spaceport in Kourou by an Ariane 5 launcher. 17 June 2004: SMART-1 took a test image of Earth with the camera that would later be used for Moon closeup pictures. It shows parts of Europe and Africa. It was taken on 21 May with the AMIE camera. 2 November 2004: Last perigee of Earth orbit. 15 November 2004: First perilune of lunar orbit. 15 January 2005: Calcium detected in Mare Crisium. 26 January 2005: First close up pictures of the lunar surface sent back. 27 February 2005: Reached final orbit around the Moon with an orbital period of about five hours. 15 April 2005: The search for PELs begins. 3 September 2006: Mission ends with a planned crash into the Moon during orbit number 2,890. Smart-1 Ground Segment and Operations Smart-1 operations were conducted from the ESA European Space Operations Center ESOC in Darmstadt Germany led by the Spacecraft Operations Manager Octavio Camino. The ground segment of Smart-1 was a good example of infrastructure reuse at ESA: Flight Dynamics infrastructure and Data distribution System (DDS) from Rosetta, Mars Express and Venus Express. The generic mission control system software SCOS 2000, and a set of generic interface elements use at ESA for the operations of their missions. The use of CCSDS TLM and TC standards permitted a cost effective tailoring of seven different terminals of the ESA Tracking network (ESTRACK) plus Weilheim in Germany (DLR). The components that were developed specifically for Smart-1 were: the simulator; a mix of hardware and software derived from the Electrical Ground Support Equipment EGSE equipment, the Mission Planning System and the Automation System developed from MOIS (this last based on a prototype implemented for Envisat) and a suite of engineering tools called MUST. This last permitted the Smart-1 engineers to do anomaly investigation through internet, pioneering at ESA monitoring of spacecraft TLM using mobile phones and PDAs and receiving spacecraft alarms via SMS. 
The Mission Control Team was composed of seven engineers in the Flight Control Team (FCT), a variable group of 2–5 Flight Dynamics engineers, and 1–2 Data Systems engineers. Unlike most ESA missions, there were no Spacecraft Controllers (SPACONs), and all operations and mission-planning activities were done by the FCT. This concept required overtime and night shifts during the first months of the mission but worked well during the cruise and the Moon phases. The major concern during the first three months of the mission was to leave the radiation belts as soon as possible in order to minimize the degradation of the solar arrays and the star tracker CCDs. The first and most critical problem came after the first revolution, when a failure in the onboard Error Detection and Correction (EDAC) algorithm triggered an autonomous switch to the redundant computer in every orbit, causing several reboots and leaving the spacecraft in SAFE mode after every pericenter passage. The analysis of the spacecraft telemetry pointed directly to a radiation-triggered problem with the EDAC interrupt routine. Other anomalies during this period were a combination of environmental problems (high radiation doses, especially in the star trackers) and onboard software anomalies: the Reed-Solomon encoding became corrupt after switching data rates and had to be disabled. This was overcome by procedures and changes to the ground operations approach. The star trackers were also subject to frequent hiccups during the Earth escape and caused some of the Electric Propulsion (EP) interruptions; these were all resolved with several software patches. The EP showed sensitivity to radiation, which induced shutdowns. This phenomenon, identified as the Opto-coupler Single Event Transient (OSET) and initially seen in LEOP during the first firing using cathode B, was characterized by a rapid drop in anode current that triggered the 'Flame Out' alarm bit and caused the shutdown of the EP. The problem was identified as radiation-induced opto-coupler sensitivity. Recovery from such events consisted of restarting the thruster. This was done manually for several months until an onboard software (OBSW) patch was developed to detect the condition and initiate an autonomous thruster restart. Its impact was limited to the orbit prediction calculations used by the ground stations to track the spacecraft and to the subsequent orbit corrections. The various anomalies and the frequent interruptions in the thrust of the Electric Propulsion led to an increase in ground station support and overtime for the flight operations team, who had to react quickly. Recovery was sometimes time-consuming, especially when the spacecraft was found in SAFE mode. Overall, these events prevented operations from being run as originally planned, with one 8-hour pass every 4 days. The mission therefore negotiated the use of the ESTRACK network's spare capacity. This concept permitted about eight times additional network coverage at no extra cost, but introduced unexpected overheads and conflicts. It ultimately permitted additional contacts with the spacecraft during the early stage of the mission and an important increase in science during the Moon phase. This phase required a major reconfiguration of the on-board stores and their operation. This change, designed by the flight control team at ESOC and implemented by the Swedish Space Corporation in a short time, required rewriting part of the Flight Control Procedures (FOP) for the operations at the Moon. 
Operations during the Moon phase became highly automated: the flight dynamics pointing was "menu driven", allowing more than 98% of commanding to be generated by the Mission Planning System (MPS). The extension of the MPS with the so-called MOIS Executor became the Smart-1 automation system; it permitted 70% of the passes to be operated unmanned towards the end of the mission and allowed the validation of the first operational "spacecraft automation system" at ESA. The mission achieved all its objectives: getting out of the radiation belts' influence 3 months after launch, spiraling out during 11 months and being captured by the Moon using resonances, commissioning and operating all instruments during the cruise phase, and optimizing the navigation and operational procedures required for Electric Propulsion operation. The efficient operation of the Electric Propulsion at the Moon allowed the orbital radius to be reduced, benefiting the scientific operations and extending the mission by one extra year. A detailed chronology of the operations events is provided in the reference. Smart-1 Mission Phases Launch and Early Orbit Phase: Launch on 27 September 2003, initial orbit 7,029 × 42,263 km. Van Allen Belt Escape: Continuous thrust strategy to raise the perigee radius. Escape phase completed by 22 December 2003, orbit 20,000 × 63,427 km. Earth Escape Cruise: Thrust around perigee only to raise the apogee radius. Moon resonances and Capture: Trajectory assists by means of Moon resonances. Moon capture on 15 November 2004 at 310,000 km from the Earth and 90,000 km from the Moon. Lunar Descent: Thrust used to lower the orbit, operational orbit 2,200 × 4,600 km. Lunar Science: Until the end of lifetime in September 2006, interrupted only by a one-month re-boost phase in September 2005 to optimize the lunar orbit. Orbit re-boost: Phase in June/July 2006 using the attitude thrusters to adjust the impact date and time. Moon Impact: Operations from July 2006 until the impact on 3 September 2006. The full mission phases from the operations perspective, including the performance of the different subsystems, are documented in the reference. See also List of artificial objects on the Moon References General Kaydash V., Kreslavsky M., Shkuratov Yu., Gerasimenko S., Pinet P., Chevrel S., Josset J.-L., Beauvivre S., Almeida M., Foing B. (2007). "Photometric characterization of selected lunar sites by SMART-1 AMIE data". Lunar Planetary Science, XXXVIII, abstract 1535. External links ESA SMART-1 scientific website SMART-1 Mission Profile by NASA's Solar System Exploration Observation of the Impact of Smart-1 SMART 1 on Serbian science portal Viva fizika SMART-1, Europe at the Moon Missions to the Moon Hall effect European Space Agency space probes Space programme of Sweden Space probes launched in 2003 Spacecraft that impacted the Moon 2003 establishments in South America 2006 on the Moon
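The propulsion figures quoted in this article can be cross-checked against the Tsiolkovsky rocket equation. The sketch below takes the quoted numbers at face value (specific impulse 16.1 kN·s/kg, launch mass 367 kg, 82 kg of xenon, and the cruise statistics up to 26 October 2004) and is only a rough consistency check, not a reconstruction of ESA's flight-dynamics analysis.

```python
# Consistency check of the propulsion figures quoted in the article, taking the
# numbers at face value: exhaust velocity from the specific impulse of
# 16.1 kN·s/kg, launch mass 367 kg, 82 kg of xenon, and the Earth-escape cruise
# statistics (58.8 kg consumed over 3,648 h of thrusting for 2,737 m/s).
import math

v_e = 16_100.0        # effective exhaust velocity, m/s (16.1 kN·s/kg)
m0 = 367.0            # launch mass, kg
xenon_total = 82.0    # full propellant load, kg

# Tsiolkovsky rocket equation for the full xenon load.
dv_ideal = v_e * math.log(m0 / (m0 - xenon_total))
print(f"Ideal delta-v for the full xenon load: {dv_ideal:.0f} m/s "
      "(same order as the demonstrated total of ~3.9 km/s)")

# Delta-v per kilogram of propellant near launch mass (article quotes ~45 m/s).
print(f"Delta-v per kg near launch mass: {v_e / m0:.1f} m/s")

# Cruise statistics quoted for the spiral-out up to 26 October 2004.
dv_cruise, xenon_cruise, hours_cruise = 2_737.0, 58.8, 3_648.0
print(f"Cruise delta-v per kg xenon: {dv_cruise / xenon_cruise:.1f} m/s (article: 46.5)")
print(f"Cruise delta-v per hour on-time: {dv_cruise / hours_cruise:.2f} m/s (article: 0.75)")
```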
SMART-1
[ "Physics", "Chemistry", "Materials_science" ]
3,615
[ "Physical phenomena", "Hall effect", "Electric and magnetic fields in matter", "Electrical phenomena", "Solid state engineering" ]
206,242
https://en.wikipedia.org/wiki/Differential%20%28mechanical%20device%29
A differential is a gear train with three drive shafts that has the property that the rotational speed of one shaft is the average of the speeds of the others. A common use of differentials is in motor vehicles, to allow the wheels at each end of a drive axle to rotate at different speeds while cornering. Other uses include clocks and analogue computers. Differentials can also provide a gear ratio between the input and output shafts (called the "axle ratio" or "diff ratio"). For example, many differentials in motor vehicles provide a gearing reduction by having fewer teeth on the pinion than the ring gear. History Milestones in the design or use of differentials include: 100 BCE–70 BCE: The Antikythera mechanism has been dated to this period. It was discovered in 1902 on a shipwreck by sponge divers, and modern research suggests that it used a differential gear to determine the angle between the ecliptic positions of the Sun and Moon, and thus the phase of the Moon. : Chinese engineer Ma Jun creates the first well-documented south-pointing chariot, a precursor to the compass. Its mechanism of action is unclear, though some 20th century engineers put forward the argument that it used a differential gear. 1810: Rudolph Ackermann of Germany invents a four-wheel steering system for carriages, which some later writers mistakenly report as a differential. 1823: Aza Arnold develops a differential drive train for use in cotton-spinning. The design quickly spreads across the United States and into the United Kingdom. 1827: Modern automotive differential patented by watchmaker Onésiphore Pecqueur (1792–1852) of the Conservatoire National des Arts et Métiers in France for use on a steam wagon. 1874: Aveling and Porter of Rochester, Kent list a crane locomotive in their catalogue fitted with their patent differential gear on the rear axle. 1876: James Starley of Coventry invents chain-drive differential for use on bicycles; invention later used on automobiles by Karl Benz. 1897: While building his Australian steam car, David Shearer made the first use of a differential in a motor vehicle. 1958: Vernon Gleasman patents the Torsen limited-slip differential. Use in wheeled vehicles Purpose During cornering, the outer wheels of a vehicle must travel further than the inner wheels (since they are on a larger radius). This is easily accommodated when the wheels are not connected, however it becomes more difficult for the drive wheels, since both wheels are connected to the engine (usually via a transmission). Some vehicles (for example go-karts and trams) use axles without a differential, thus relying on wheel slip when cornering. However, for improved cornering abilities, many vehicles use a differential, which allows the two wheels to rotate at different speeds. The purpose of a differential is to transfer the engine's power to the wheels while still allowing the wheels to rotate at different speeds when required. An illustration of the operating principle for a ring-and-pinion differential is shown below. Ring-and-pinion design A relatively simple design of differential is used in rear-wheel drive vehicles, whereby a ring gear is driven by a pinion gear connected to the transmission. The functions of this design are to change the axis of rotation by 90 degrees (from the propshaft to the half-shafts) and provide a reduction in the gear ratio. The components of the ring-and-pinion differential shown in the schematic diagram on the right are: 1. Output shafts (axles) 2. Drive gear 3. Output gears 4. 
Planetary gears 5. Carrier 6. Input gear 7. Input shaft (driveshaft) Epicyclic design An epicyclic differential uses epicyclic gearing to send certain proportions of torque to the front axle and the rear axle in an all-wheel drive vehicle. An advantage of the epicyclic design is its relatively compact width (when viewed along the axis of its input shaft). Spur-gear design A spur-gear differential has equal-sized spur gears at each end, each of which is connected to an output shaft. The input torque (i.e. from the engine or transmission) is applied to the differential via the rotating carrier. Pinion pairs are located within the carrier and rotate freely on pins supported by the carrier. The pinion pairs only mesh for the part of their length between the two spur gears, and rotate in opposite directions. The remaining length of a given pinion meshes with the nearer spur gear on its axle. Each pinion connects the associated spur gear to the other spur gear (via the other pinion). As the carrier is rotated (by the input torque), the relationship between the speeds of the input (i.e. the carrier) and that of the output shafts is the same as other types of open differentials. Uses of spur-gear differentials include the Oldsmobile Toronado American front-wheel drive car. Locking differentials Locking differentials have the ability to overcome the chief limitation of a standard open differential by essentially "locking" both wheels on an axle together as if on a common shaft. This forces both wheels to turn in unison, regardless of the traction (or lack thereof) available to either wheel individually. When this function is not required, the differential can be "unlocked" to function as a regular open differential. Locking differentials are mostly used on off-road vehicles, to overcome low-grip and variable grip surfaces. Limited-slip differentials An undesirable side-effect of a regular ("open") differential is that it can send most of the power to the wheel with the lesser traction (grip). In situation when one wheel has reduced grip (e.g. due to cornering forces or a low-grip surface under one wheel), an open differential can cause wheelspin in the tyre with less grip, while the tyre with more grip receives very little power to propel the vehicle forward. In order to avoid this situation, various designs of limited-slip differentials are used to limit the difference in power sent to each of the wheels. Torque vectoring Torque vectoring is a technology employed in automobile differentials that has the ability to vary the torque to each half-shaft with an electronic system; or in rail vehicles which achieve the same using individually motored wheels. In the case of automobiles, it is used to augment the stability or cornering ability of the vehicle. Other uses Non-automotive uses of differentials include performing analogue arithmetic. Two of the differential's three shafts are made to rotate through angles that represent (are proportional to) two numbers, and the angle of the third shaft's rotation represents the sum or difference of the two input numbers. The earliest known use of a differential gear is in the Antikythera mechanism, c. 80 BCE, which used a differential gear to control a small sphere representing the Moon from the difference between the Sun and Moon position pointers. The ball was painted black and white in hemispheres, and graphically showed the phase of the Moon at a particular point in time. An equation clock that used a differential for addition was made in 1720. 
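The analogue-arithmetic use just described, like the automotive use, rests on a single kinematic relation: the carrier (input) speed of an open differential is the average of its two output shaft speeds. A minimal sketch of that relation follows; the cornering figures in it are invented purely for illustration.

```python
# Minimal sketch of the kinematic relation of an open differential: the carrier
# (input) speed is the average of the two output shaft speeds, so fixing any two
# of the three speeds determines the third. The numbers below are made up
# purely for illustration.

def carrier_speed(left_rpm: float, right_rpm: float) -> float:
    """Carrier speed is the average of the two side-shaft speeds."""
    return 0.5 * (left_rpm + right_rpm)

def other_wheel_speed(carrier_rpm: float, known_wheel_rpm: float) -> float:
    """Given the carrier and one wheel, the other wheel's speed follows."""
    return 2.0 * carrier_rpm - known_wheel_rpm

# Straight-line driving: both wheels turn at the carrier speed.
print(carrier_speed(500.0, 500.0))          # -> 500.0

# Cornering: the outer wheel speeds up by as much as the inner wheel slows down,
# while the carrier (and hence the engine side) keeps turning at the same rate.
print(carrier_speed(480.0, 520.0))          # -> 500.0
print(other_wheel_speed(500.0, 480.0))      # -> 520.0

# Used as an analogue adder: drive two shafts by amounts representing numbers
# and read twice the carrier rotation as their sum.
a, b = 3.0, 4.0
print(2.0 * carrier_speed(a, b))            # -> 7.0
```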
In the 20th century, large assemblies of many differentials were used as analogue computers, calculating, for example, the direction in which a gun should be aimed. Compass-like devices Chinese south-pointing chariots may also have been very early applications of differentials. The chariot had a pointer which constantly pointed to the south, no matter how the chariot turned as it travelled. It could therefore be used as a type of compass. It is widely thought that a differential mechanism responded to any difference between the speeds of rotation of the two wheels of the chariot, and turned the pointer appropriately. However, the mechanism was not precise enough, and, after a few miles of travel, the dial could be pointing in the wrong direction. Clocks The earliest verified use of a differential was in a clock made by Joseph Williamson in 1720. It employed a differential to add the equation of time to local mean time, as determined by the clock mechanism, to produce solar time, which would have been the same as the reading of a sundial. During the 18th century, sundials were considered to show the "correct" time, so an ordinary clock would frequently have to be readjusted, even if it worked perfectly, because of seasonal variations in the equation of time. Williamson's and other equation clocks showed sundial time without needing readjustment. Nowadays, we consider clocks to be "correct" and sundials usually incorrect, so many sundials carry instructions about how to use their readings to obtain clock time. Analogue computers Differential analysers, a type of mechanical analogue computer, were used from approximately 1900 to 1950. These devices used differential gear trains to perform addition and subtraction. Vehicle suspension The Mars rovers Spirit and Opportunity (both launched in 2004) used differential gears in their rocker-bogie suspensions to keep the rover body balanced as the wheels on the left and right move up and down over uneven terrain. The Curiosity and Perseverance rovers used a differential bar instead of gears to perform the same function. See also Anti-lock braking system Ball differential Drifting (motorsport) List of auto parts Traction control system Whippletree References Further reading Popular Science, May 1946, How Your Car Turns Corners, a large article with numerous illustrations on how differentials work External links A video of a 3D model of an open differential Articles containing video clips Auto parts Automotive transmission technologies Gears Mechanisms (engineering) Vehicle technology
Differential (mechanical device)
[ "Engineering" ]
1,953
[ "Vehicle technology", "Mechanical engineering by discipline", "Mechanical engineering", "Mechanisms (engineering)" ]
206,886
https://en.wikipedia.org/wiki/Program%20evaluation%20and%20review%20technique
The program evaluation and review technique (PERT) is a statistical tool used in project management, which was designed to analyze and represent the tasks involved in completing a given project. PERT was originally developed by Charles E. Clark for the United States Navy in 1958; it is commonly used in conjunction with the Critical Path Method (CPM), which was also introduced in 1958. Overview PERT is a method of analyzing the tasks involved in completing a project, especially the time needed to complete each task, and to identify the minimum time needed to complete the total project. It incorporates uncertainty by making it possible to schedule a project while not knowing precisely the details and durations of all the activities. It is more event-oriented than start- and completion-oriented, and is used more for projects where time is the major constraint rather than cost. It is applied to very large-scale, one-time, complex, non-routine infrastructure projects, as well as R&D projects. PERT offers a management tool, which relies "on arrow and node diagrams of activities and events: arrows represent the activities or work necessary to reach the events or nodes that indicate each completed phase of the total project." PERT and CPM are complementary tools, because "CPM employs one time estimation and one cost estimation for each activity; PERT may utilize three time estimates (optimistic, expected, and pessimistic) and no costs for each activity. Although these are distinct differences, the term PERT is applied increasingly to all critical path scheduling." History PERT was developed primarily to simplify the planning and scheduling of large and complex projects. It was developed by the United States Navy Special Projects Office, Lockheed Aircraft, and Booz Allen Hamilton to support the Navy's Polaris missile project. It found applications throughout industry. An early example is the 1968 Winter Olympics in Grenoble which used PERT from 1965 until the opening of the 1968 Games. This project model was the first of its kind, a revival for the scientific management of Frederick Taylor and later refined by Henry Ford (Fordism). DuPont's CPM was invented at roughly the same time as PERT. Initially PERT stood for Program Evaluation Research Task, but by 1959 was renamed. It had been made public in 1958 in two publications of the U.S. Department of the Navy, entitled Program Evaluation Research Task, Summary Report, Phase 1. and Phase 2. both primarily written by Charles F. Clark. In a 1959 article in The American Statistician, Willard Fazar, Head of the Program Evaluation Branch, Special Projects Office, U.S. Navy, gave a detailed description of the main concepts of PERT. He explained: Ten years after the introduction of PERT, the American librarian Maribeth Brennan compiled a selected bibliography with about 150 publications on PERT and CPM, all published between 1958 and 1968. For the subdivision of work units in PERT another tool was developed: the Work Breakdown Structure. The Work Breakdown Structure provides "a framework for complete networking, the Work Breakdown Structure was formally introduced as the first item of analysis in carrying out basic PERT/CPM." Terminology Events and activities In a PERT diagram, the main building block is the event, with connections to its known predecessor events and successor events. PERT event: a point that marks the start or completion of one or more activities. It consumes no time and uses no resources. 
When it marks the completion of one or more activities, it is not "reached" (does not occur) until all of the activities leading to that event have been completed. predecessor event: an event that immediately precedes some other event without any other events intervening. An event can have multiple predecessor events and can be the predecessor of multiple events. successor event: an event that immediately follows some other event without any other intervening events. An event can have multiple successor events and can be the successor of multiple events. Besides events, PERT also tracks activities and sub-activities: PERT activity: the actual performance of a task which consumes time and requires resources (such as labor, materials, space, machinery). It can be understood as representing the time, effort, and resources required to move from one event to another. A PERT activity cannot be performed until the predecessor event has occurred. PERT sub-activity: a PERT activity can be further decomposed into a set of sub-activities. For example, activity A1 can be decomposed into A1.1, A1.2 and A1.3. Sub-activities have all the properties of activities; in particular, a sub-activity has predecessor or successor events just like an activity. A sub-activity can be decomposed again into finer-grained sub-activities. Time PERT defines four types of time required to accomplish an activity: optimistic time: the minimum possible time required to accomplish an activity (o) or a path (O), assuming everything proceeds better than is normally expected pessimistic time: the maximum possible time required to accomplish an activity (p) or a path (P), assuming everything goes wrong (but excluding major catastrophes). most likely time: the best estimate of the time required to accomplish an activity (m) or a path (M), assuming everything proceeds as normal. expected time: the best estimate of the time required to accomplish an activity (te) or a path (TE), accounting for the fact that things don't always proceed as normal (the implication being that the expected time is the average time the task would require if the task were repeated on a number of occasions over an extended period of time). standard deviation of time : the variability of the time for accomplishing an activity (σte) or a path (σTE) Management tools PERT supplies a number of tools for management with determination of concepts, such as: float or slack is a measure of the excess time and resources available to complete a task. It is the amount of time that a project task can be delayed without causing a delay in any subsequent tasks (free float) or the whole project (total float). Positive slack would indicate ahead of schedule; negative slack would indicate behind schedule; and zero slack would indicate on schedule. critical path: the longest possible continuous pathway taken from the initial event to the terminal event. It determines the total calendar time required for the project; and, therefore, any time delays along the critical path will delay the reaching of the terminal event by at least the same amount. critical activity: An activity that has total float equal to zero. An activity with zero free float is not necessarily on the critical path since its path may not be the longest. lead time: the time by which a predecessor event must be completed in order to allow sufficient time for the activities that must elapse before a specific PERT event reaches completion. 
lag time: the earliest time by which a successor event can follow a specific PERT event. fast tracking: performing more critical activities in parallel crashing critical path: Shortening duration of critical activities Implementation The first step for scheduling the project is to determine the tasks that the project requires and the order in which they must be completed. The order may be easy to record for some tasks (e.g., when building a house, the land must be graded before the foundation can be laid) while difficult for others (there are two areas that need to be graded, but there are only enough bulldozers to do one). Additionally, the time estimates usually reflect the normal, non-rushed time. Many times, the time required to execute the task can be reduced for an additional cost or a reduction in the quality. Example In the following example there are seven tasks, labeled A through G. Some tasks can be done concurrently (A and B) while others cannot be done until their predecessor task is complete (C cannot begin until A is complete). Additionally, each task has three time estimates: the optimistic time estimate (o), the most likely or normal time estimate (m), and the pessimistic time estimate (p). The expected time (te) is computed using the formula (o + 4m + p) ÷ 6. Once this step is complete, one can draw a Gantt chart or a network diagram. Next step, creating network diagram by hand or by using diagram software A network diagram can be created by hand or by using diagram software. There are two types of network diagrams, activity on arrow (AOA) and activity on node (AON). Activity on node diagrams are generally easier to create and interpret. To create an AON diagram, it is recommended (but not required) to start with a node named start. This "activity" has a duration of zero (0). Then you draw each activity that does not have a predecessor activity (a and b in this example) and connect them with an arrow from start to each node. Next, since both c and d list a as a predecessor activity, their nodes are drawn with arrows coming from a. Activity e is listed with b and c as predecessor activities, so node e is drawn with arrows coming from both b and c, signifying that e cannot begin until both b and c have been completed. Activity f has d as a predecessor activity, so an arrow is drawn connecting the activities. Likewise, an arrow is drawn from e to g. Since there are no activities that come after f or g, it is recommended (but again not required) to connect them to a node labeled finish. By itself, the network diagram pictured above does not give much more information than a Gantt chart; however, it can be expanded to display more information. The most common information shown is: The activity name The expected duration time The early start time (ES) The early finish time (EF) The late start time (LS) The late finish time (LF) The slack In order to determine this information it is assumed that the activities and normal duration times are given. The first step is to determine the ES and EF. The ES is defined as the maximum EF of all predecessor activities, unless the activity in question is the first activity, for which the ES is zero (0). The EF is the ES plus the task duration (EF = ES + duration). The ES for start is zero since it is the first activity. Since the duration is zero, the EF is also zero. This EF is used as the ES for a and b. The ES for a is zero. The duration (4 work days) is added to the ES to get an EF of four. This EF is used as the ES for c and d. 
The ES for b is zero. The duration (5.33 work days) is added to the ES to get an EF of 5.33. The ES for c is four. The duration (5.17 work days) is added to the ES to get an EF of 9.17. The ES for d is four. The duration (6.33 work days) is added to the ES to get an EF of 10.33. This EF is used as the ES for f. The ES for e is the greatest EF of its predecessor activities (b and c). Since b has an EF of 5.33 and c has an EF of 9.17, the ES of e is 9.17. The duration (5.17 work days) is added to the ES to get an EF of 14.34. This EF is used as the ES for g. The ES for f is 10.33. The duration (4.5 work days) is added to the ES to get an EF of 14.83. The ES for g is 14.34. The duration (5.17 work days) is added to the ES to get an EF of 19.51. The ES for finish is the greatest EF of its predecessor activities (f and g). Since f has an EF of 14.83 and g has an EF of 19.51, the ES of finish is 19.51. Finish is a milestone (and therefore has a duration of zero), so the EF is also 19.51. Barring any unforeseen events, the project should take 19.51 work days to complete. The next step is to determine the late start (LS) and late finish (LF) of each activity. This will eventually show if there are activities that have slack. The LF is defined as the minimum LS of all successor activities, unless the activity is the last activity, for which the LF equals the EF. The LS is the LF minus the task duration (LS = LF − duration). The LF for finish is equal to the EF (19.51 work days) since it is the last activity in the project. Since the duration is zero, the LS is also 19.51 work days. This will be used as the LF for f and g. The LF for g is 19.51 work days. The duration (5.17 work days) is subtracted from the LF to get an LS of 14.34 work days. This will be used as the LF for e. The LF for f is 19.51 work days. The duration (4.5 work days) is subtracted from the LF to get an LS of 15.01 work days. This will be used as the LF for d. The LF for e is 14.34 work days. The duration (5.17 work days) is subtracted from the LF to get an LS of 9.17 work days. This will be used as the LF for b and c. The LF for d is 15.01 work days. The duration (6.33 work days) is subtracted from the LF to get an LS of 8.68 work days. The LF for c is 9.17 work days. The duration (5.17 work days) is subtracted from the LF to get an LS of 4 work days. The LF for b is 9.17 work days. The duration (5.33 work days) is subtracted from the LF to get an LS of 3.84 work days. The LF for a is the minimum LS of its successor activities. Since c has an LS of 4 work days and d has an LS of 8.68 work days, the LF for a is 4 work days. The duration (4 work days) is subtracted from the LF to get an LS of 0 work days. The LF for start is the minimum LS of its successor activities. Since a has an LS of 0 work days and b has an LS of 3.84 work days, the LS is 0 work days. Next step, determination of critical path and possible slack The next step is to determine the critical path and if any activities have slack. The critical path is the path that takes the longest to complete. To determine the path times, add the task durations for all available paths. Activities that have slack can be delayed without changing the overall time of the project. Slack is computed in one of two ways, slack = LF − EF or slack = LS − ES. Activities that are on the critical path have a slack of zero (0). The duration of path adf is 14.83 work days. The duration of path aceg is 19.51 work days. The duration of path beg is 15.67 work days. 
The critical path is aceg and the critical time is 19.51 work days. It is important to note that there can be more than one critical path (in a project more complex than this example) or that the critical path can change. For example, let's say that activities d and f take their pessimistic (b) times to complete instead of their expected (TE) times. The critical path is now adf and the critical time is 22 work days. On the other hand, if activity c can be reduced to one work day, the path time for aceg is reduced to 15.34 work days, which is slightly less than the time of the new critical path, beg (15.67 work days). Assuming these scenarios do not happen, the slack for each activity can now be determined. Start and finish are milestones and by definition have no duration, therefore they can have no slack (0 work days). The activities on the critical path by definition have a slack of zero; however, it is always a good idea to check the math anyway when drawing by hand. LFa – EFa = 4 − 4 = 0 LFc – EFc = 9.17 − 9.17 = 0 LFe – EFe = 14.34 − 14.34 = 0 LFg – EFg = 19.51 − 19.51 = 0 Activity b has an LF of 9.17 and an EF of 5.33, so the slack is 3.84 work days. Activity d has an LF of 15.01 and an EF of 10.33, so the slack is 4.68 work days. Activity f has an LF of 19.51 and an EF of 14.83, so the slack is 4.68 work days. Therefore, activity b can be delayed almost 4 work days without delaying the project. Likewise, activity d or activity f can be delayed 4.68 work days without delaying the project (alternatively, d and f can be delayed 2.34 work days each). Avoiding loops Depending upon the capabilities of the data input phase of the critical path algorithm, it may be possible to create a loop, such as A -> B -> C -> A. This can cause simple algorithms to loop indefinitely. Although it is possible to "mark" nodes that have been visited, then clear the "marks" upon completion of the process, a far simpler mechanism involves computing the total of all activity durations. If an EF of more than the total is found, the computation should be terminated. It is worth saving the identities of the most recently visited dozen or so nodes to help identify the problem link. As project scheduling tool Advantages PERT chart explicitly defines and makes visible dependencies (precedence relationships) between the work breakdown structure (commonly WBS) elements. PERT facilitates identification of the critical path and makes this visible. PERT facilitates identification of early start, late start, and slack for each activity. PERT provides for potentially reduced project duration due to better understanding of dependencies leading to improved overlapping of activities and tasks where feasible. The large amount of project data can be organized and presented in diagram for use in decision making. PERT can provide a probability of completing before a given time. Disadvantages There can be potentially hundreds or thousands of activities and individual dependency relationships. PERT is not easy to scale down for smaller projects. The network charts tend to be large and unwieldy, requiring several pages to print and requiring specially-sized paper. The lack of a timeframe on most PERT/CPM charts makes it harder to show status, although colours can help, e.g., specific colour for completed nodes. Uncertainty in project scheduling During project execution a real-life project will never execute exactly as it was planned due to uncertainty. 
This can be due to ambiguity resulting from subjective estimates that are prone to human errors or can be the result of variability arising from unexpected events or risks. The main reason that PERT may provide inaccurate information about the project completion time is due to this schedule uncertainty. This inaccuracy may be large enough to render such estimates as not helpful. One possible method to maximize solution robustness is to include safety in the baseline schedule in order to absorb disruptions. This is called proactive scheduling, however, allowing for every possible disruption would be very slow and couldn't be accommodated by the baseline schedule. A second approach, termed reactive scheduling, defines a procedure to react to disruptions that cannot be absorbed by the baseline schedule. See also Activity diagram Arrow diagramming method PERT distribution Critical chain project management Float (project management) Gantt chart GERT Precedence diagram method Project network Project management Project planning Triangular distribution PRINCE2 References Further reading External links Network theory Diagrams Evaluation methods Project management techniques Schedule (project management) Systems engineering Booz Allen Hamilton Operations research Management cybernetics Engineering management Management science
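The worked example above can be reproduced with a few lines of code. The sketch below uses the expected durations (te) and the dependency structure quoted in the example (a and b start the project; c and d follow a; e follows b and c; f follows d; g follows e); it is an illustrative implementation of the forward and backward passes, not a substitute for a full PERT tool.

```python
# Forward/backward pass reproducing the worked example: durations are the
# expected times (te) quoted in the text, and dependencies follow the text.

def expected_time(o: float, m: float, p: float) -> float:
    """PERT expected time from optimistic, most likely and pessimistic estimates,
    (o + 4m + p) / 6 as in the text. Shown for reference; the te values below
    are taken directly from the worked example. e.g. expected_time(2, 4, 9) == 4.5
    (made-up estimates)."""
    return (o + 4 * m + p) / 6

durations = {"a": 4.0, "b": 5.33, "c": 5.17, "d": 6.33, "e": 5.17, "f": 4.5, "g": 5.17}
predecessors = {"a": [], "b": [], "c": ["a"], "d": ["a"], "e": ["b", "c"], "f": ["d"], "g": ["e"]}
order = ["a", "b", "c", "d", "e", "f", "g"]  # already topologically sorted

# Forward pass: earliest start (ES) and earliest finish (EF).
es, ef = {}, {}
for t in order:
    es[t] = max((ef[p] for p in predecessors[t]), default=0.0)
    ef[t] = es[t] + durations[t]

project_duration = max(ef.values())  # 19.51 work days

# Backward pass: latest finish (LF) and latest start (LS).
lf, ls = {}, {}
for t in reversed(order):
    successors = [s for s in order if t in predecessors[s]]
    lf[t] = min((ls[s] for s in successors), default=project_duration)
    ls[t] = lf[t] - durations[t]

print(f"Project duration: {project_duration:.2f} work days")
for t in order:
    slack = lf[t] - ef[t]
    tag = "  <-- critical" if abs(slack) < 1e-9 else ""
    print(f"{t}: ES={es[t]:5.2f} EF={ef[t]:5.2f} LS={ls[t]:5.2f} "
          f"LF={lf[t]:5.2f} slack={slack:4.2f}{tag}")
# Critical path a-c-e-g with zero slack; b, d and f show slacks 3.84, 4.68, 4.68.
```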
Program evaluation and review technique
[ "Physics", "Mathematics", "Engineering", "Biology" ]
4,230
[ "Systems engineering", "Behavior", "Engineering economics", "Physical quantities", "Time", "Applied mathematics", "Graph theory", "Network theory", "Operations research", "Behavioural sciences", "Management science", "Mathematical relations", "Engineering management", "Spacetime", "Sched...
207,074
https://en.wikipedia.org/wiki/Beta%20distribution
In probability theory and statistics, the beta distribution is a family of continuous probability distributions defined on the interval [0, 1] or (0, 1) in terms of two positive parameters, denoted by alpha (α) and beta (β), that appear as exponents of the variable and its complement to 1, respectively, and control the shape of the distribution. The beta distribution has been applied to model the behavior of random variables limited to intervals of finite length in a wide variety of disciplines. The beta distribution is a suitable model for the random behavior of percentages and proportions. In Bayesian inference, the beta distribution is the conjugate prior probability distribution for the Bernoulli, binomial, negative binomial, and geometric distributions. The formulation of the beta distribution discussed here is also known as the beta distribution of the first kind, whereas beta distribution of the second kind is an alternative name for the beta prime distribution. The generalization to multiple variables is called a Dirichlet distribution. Definitions Probability density function The probability density function (PDF) of the beta distribution, for or , and shape parameters , , is a power function of the variable and of its reflection as follows: where is the gamma function. The beta function, , is a normalization constant to ensure that the total probability is 1. In the above equations is a realization—an observed value that actually occurred—of a random variable . Several authors, including N. L. Johnson and S. Kotz, use the symbols and (instead of and ) for the shape parameters of the beta distribution, reminiscent of the symbols traditionally used for the parameters of the Bernoulli distribution, because the beta distribution approaches the Bernoulli distribution in the limit when both shape parameters and approach the value of zero. In the following, a random variable beta-distributed with parameters and will be denoted by: Other notations for beta-distributed random variables used in the statistical literature are and . Cumulative distribution function The cumulative distribution function is where is the incomplete beta function and is the regularized incomplete beta function. For positive integer , , the cumulative distribution function of a beta distribution can be expressed in terms of the cumulative distribution function of a binomial distribution with . Alternative parameterizations Two parameters Mean and sample size The beta distribution may also be reparameterized in terms of its mean μ and the sum of the two shape parameters ( p. 83). Denoting by αPosterior and βPosterior the shape parameters of the posterior beta distribution resulting from applying Bayes theorem to a binomial likelihood function and a prior probability, the interpretation of the addition of both shape parameters to be sample size = ν = α·Posterior + β·Posterior is only correct for the Haldane prior probability Beta(0,0). Specifically, for the Bayes (uniform) prior Beta(1,1) the correct interpretation would be sample size = α·Posterior + β Posterior − 2, or ν = (sample size) + 2. For sample size much larger than 2, the difference between these two priors becomes negligible. (See section Bayesian inference for further details.) ν = α + β is referred to as the "sample size" of a beta distribution, but one should remember that it is, strictly speaking, the "sample size" of a binomial likelihood function only when using a Haldane Beta(0,0) prior in Bayes theorem. 
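The distinction drawn above between the Haldane Beta(0,0) prior and the uniform Beta(1,1) prior can be made concrete with the standard conjugate update for binomial data, in which a prior Beta(α0, β0) becomes a posterior Beta(α0 + s, β0 + n − s) after s successes in n trials. The sketch below uses made-up counts purely to show the bookkeeping.

```python
# Sketch of the conjugate Beta-binomial update discussed above: with prior
# Beta(alpha0, beta0) and s successes out of n Bernoulli trials, the posterior
# is Beta(alpha0 + s, beta0 + n - s). The sum alpha + beta of the posterior
# equals the number of trials n only for the (improper) Haldane prior Beta(0,0);
# with the uniform prior Beta(1,1) it equals n + 2. Counts are illustrative.

def posterior(alpha0: float, beta0: float, successes: int, trials: int):
    """Posterior shape parameters after observing a binomial sample."""
    return alpha0 + successes, beta0 + (trials - successes)

n, s = 20, 7  # made-up data: 7 successes in 20 trials

for name, (a0, b0) in {"Haldane Beta(0,0)": (0.0, 0.0),
                       "uniform Beta(1,1)": (1.0, 1.0)}.items():
    a, b = posterior(a0, b0, s, n)
    print(f"{name}: posterior Beta({a:g}, {b:g}), "
          f"alpha+beta = {a + b:g}, posterior mean = {a / (a + b):.3f}")
# Haldane: alpha+beta = 20 = n,     posterior mean = s/n = 0.350
# Uniform: alpha+beta = 22 = n + 2, posterior mean = (s+1)/(n+2) ~ 0.364
```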
This parametrization may be useful in Bayesian parameter estimation. For example, one may administer a test to a number of individuals. If it is assumed that each person's score (0 ≤ θ ≤ 1) is drawn from a population-level beta distribution, then an important statistic is the mean of this population-level distribution. The mean and sample size parameters are related to the shape parameters α and β via α = μν, β = (1 − μ)ν Under this parametrization, one may place an uninformative prior probability over the mean, and a vague prior probability (such as an exponential or gamma distribution) over the positive reals for the sample size, if they are independent, and prior data and/or beliefs justify it. Mode and concentration Concave beta distributions, which have , can be parametrized in terms of mode and "concentration". The mode, , and concentration, , can be used to define the usual shape parameters as follows: For the mode, , to be well-defined, we need , or equivalently . If instead we define the concentration as , the condition simplifies to and the beta density at and can be written as: where directly scales the sufficient statistics, and . Note also that in the limit, , the distribution becomes flat. Mean and variance Solving the system of (coupled) equations given in the above sections as the equations for the mean and the variance of the beta distribution in terms of the original parameters α and β, one can express the α and β parameters in terms of the mean (μ) and the variance (var): This parametrization of the beta distribution may lead to a more intuitive understanding than the one based on the original parameters α and β. For example, by expressing the mode, skewness, excess kurtosis and differential entropy in terms of the mean and the variance: Four parameters A beta distribution with the two shape parameters α and β is supported on the range [0,1] or (0,1). It is possible to alter the location and scale of the distribution by introducing two further parameters representing the minimum, a, and maximum c (c > a), values of the distribution, by a linear transformation substituting the non-dimensional variable x in terms of the new variable y (with support [a,c] or (a,c)) and the parameters a and c: The probability density function of the four parameter beta distribution is equal to the two parameter distribution, scaled by the range (c − a), (so that the total area under the density curve equals a probability of one), and with the "y" variable shifted and scaled as follows: That a random variable Y is beta-distributed with four parameters α, β, a, and c will be denoted by: Some measures of central location are scaled (by (c − a)) and shifted (by a), as follows: Note: the geometric mean and harmonic mean cannot be transformed by a linear transformation in the way that the mean, median and mode can. 
The shape parameters of Y can be written in terms of its mean and variance as The statistical dispersion measures are scaled (they do not need to be shifted because they are already centered on the mean) by the range (c − a), linearly for the mean deviation and nonlinearly for the variance: Since the skewness and excess kurtosis are non-dimensional quantities (as moments centered on the mean and normalized by the standard deviation), they are independent of the parameters a and c, and therefore equal to the expressions given above in terms of X (with support [0,1] or (0,1)): Properties Measures of central tendency Mode The mode of a beta distributed random variable X with α, β > 1 is the most likely value of the distribution (corresponding to the peak in the PDF), and is given by the following expression: (α − 1)/(α + β − 2). When both parameters are less than one (α, β < 1), this is the anti-mode: the lowest point of the probability density curve. Letting α = β, the expression for the mode simplifies to 1/2, showing that for α = β > 1 the mode (resp. anti-mode when α = β < 1) is at the center of the distribution: it is symmetric in those cases. See the Shapes section in this article for a full list of mode cases, for arbitrary values of α and β. For several of these cases, the maximum value of the density function occurs at one or both ends. In some cases the (maximum) value of the density function occurring at the end is finite. For example, in the case of α = 2, β = 1 (or α = 1, β = 2), the density function becomes a right-triangle distribution which is finite at both ends. In several other cases there is a singularity at one end, where the value of the density function approaches infinity. For example, in the case α = β = 1/2, the beta distribution simplifies to become the arcsine distribution. There is debate among mathematicians about some of these cases and whether the ends (x = 0 and x = 1) can be called modes or not: Whether the ends are part of the domain of the density function Whether a singularity can ever be called a mode Whether cases with two maxima should be called bimodal Median The median of the beta distribution is the unique real number x = m for which the regularized incomplete beta function satisfies I_m(α, β) = 1/2. There is no general closed-form expression for the median of the beta distribution for arbitrary values of α and β. Closed-form expressions for particular values of the parameters α and β follow: For symmetric cases α = β, median = 1/2. For α = 1 and β > 0, median = 1 − 2^(−1/β) (this case is the mirror-image of the power function [0,1] distribution) For α > 0 and β = 1, median = 2^(−1/α) (this case is the power function [0,1] distribution) For α = 3 and β = 2, median = 0.6142724318676105..., the real solution to the quartic equation 1 − 8x^3 + 6x^4 = 0, which lies in [0,1]. For α = 2 and β = 3, median = 0.38572756813238945... = 1 − median(Beta(3, 2)) The following are the limits with one parameter finite (non-zero) and the other approaching these limits: A reasonable approximation of the value of the median of the beta distribution, for both α and β greater than or equal to one, is given by the formula median ≈ (α − 1/3)/(α + β − 2/3). When α, β ≥ 1, the relative error (the absolute error divided by the median) in this approximation is less than 4% and for both α ≥ 2 and β ≥ 2 it is less than 1%.
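For a quick numerical check of the median approximation quoted above (a sketch assuming SciPy; median_approx is an illustrative helper), one can compare it against the exact median obtained by inverting the regularized incomplete beta function with scipy.stats.beta.ppf:

```python
from scipy.stats import beta

def median_approx(a, b):
    """Approximate median (alpha - 1/3) / (alpha + beta - 2/3), for alpha, beta >= 1."""
    return (a - 1.0 / 3.0) / (a + b - 2.0 / 3.0)

for a, b in [(2, 3), (3, 2), (2, 2), (5, 1.5)]:
    exact = beta.ppf(0.5, a, b)          # numerical inverse of I_x(a, b) = 1/2
    approx = median_approx(a, b)
    print(a, b, exact, approx, abs(exact - approx) / exact)   # relative error
```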
The absolute error divided by the difference between the mean and the mode is similarly small: Mean The expected value (mean) (μ) of a beta distribution random variable X with two parameters α and β is a function of only the ratio β/α of these parameters: Letting in the above expression one obtains , showing that for the mean is at the center of the distribution: it is symmetric. Also, the following limits can be obtained from the above expression: Therefore, for β/α → 0, or for α/β → ∞, the mean is located at the right end, . For these limit ratios, the beta distribution becomes a one-point degenerate distribution with a Dirac delta function spike at the right end, , with probability 1, and zero probability everywhere else. There is 100% probability (absolute certainty) concentrated at the right end, . Similarly, for β/α → ∞, or for α/β → 0, the mean is located at the left end, . The beta distribution becomes a 1-point Degenerate distribution with a Dirac delta function spike at the left end, x = 0, with probability 1, and zero probability everywhere else. There is 100% probability (absolute certainty) concentrated at the left end, x = 0. Following are the limits with one parameter finite (non-zero) and the other approaching these limits: While for typical unimodal distributions (with centrally located modes, inflexion points at both sides of the mode, and longer tails) (with Beta(α, β) such that ) it is known that the sample mean (as an estimate of location) is not as robust as the sample median, the opposite is the case for uniform or "U-shaped" bimodal distributions (with Beta(α, β) such that ), with the modes located at the ends of the distribution. As Mosteller and Tukey remark ( p. 207) "the average of the two extreme observations uses all the sample information. This illustrates how, for short-tailed distributions, the extreme observations should get more weight." By contrast, it follows that the median of "U-shaped" bimodal distributions with modes at the edge of the distribution (with Beta(α, β) such that ) is not robust, as the sample median drops the extreme sample observations from consideration. A practical application of this occurs for example for random walks, since the probability for the time of the last visit to the origin in a random walk is distributed as the arcsine distribution Beta(1/2, 1/2): the mean of a number of realizations of a random walk is a much more robust estimator than the median (which is an inappropriate sample measure estimate in this case). Geometric mean The logarithm of the geometric mean GX of a distribution with random variable X is the arithmetic mean of ln(X), or, equivalently, its expected value: For a beta distribution, the expected value integral gives: where ψ is the digamma function. Therefore, the geometric mean of a beta distribution with shape parameters α and β is the exponential of the digamma functions of α and β as follows: While for a beta distribution with equal shape parameters α = β, it follows that skewness = 0 and mode = mean = median = 1/2, the geometric mean is less than 1/2: . The reason for this is that the logarithmic transformation strongly weights the values of X close to zero, as ln(X) strongly tends towards negative infinity as X approaches zero, while ln(X) flattens towards zero as . 
Along a line α = β, the following limits apply: Following are the limits with one parameter finite (non-zero) and the other approaching these limits: The accompanying plot shows the difference between the mean and the geometric mean for shape parameters α and β from zero to 2. Besides the fact that the difference between them approaches zero as α and β approach infinity and that the difference becomes large for values of α and β approaching zero, one can observe an evident asymmetry of the geometric mean with respect to the shape parameters α and β. The difference between the geometric mean and the mean is larger for small values of α in relation to β than when exchanging the magnitudes of β and α. N. L. Johnson and S. Kotz suggest the logarithmic approximation to the digamma function ψ(α) ≈ ln(α − 1/2), which results in the following approximation to the geometric mean: GX ≈ (α − 1/2)/(α + β − 1/2). The relative error in this approximation is small and decreases as α and β increase. Similarly, one can calculate the value of shape parameters required for the geometric mean to equal 1/2. Given the value of the parameter β, what would be the value of the other parameter, α, required for the geometric mean to equal 1/2? The answer is that the value of α required tends towards β + 1/2 as β → ∞. The fundamental property of the geometric mean, which can be proven to be false for any other mean, is G(XY) = G(X) G(Y). This makes the geometric mean the only correct mean when averaging normalized results, that is results that are presented as ratios to reference values. This is relevant because the beta distribution is a suitable model for the random behavior of percentages and it is particularly suitable to the statistical modelling of proportions. The geometric mean plays a central role in maximum likelihood estimation, see section "Parameter estimation, maximum likelihood." Actually, when performing maximum likelihood estimation, besides the geometric mean GX based on the random variable X, also another geometric mean appears naturally: the geometric mean based on the linear transformation (1 − X), the mirror-image of X, denoted by G(1−X): Along a line α = β, the following limits apply: Following are the limits with one parameter finite (non-zero) and the other approaching these limits: It has the following approximate value: G(1−X) ≈ (β − 1/2)/(α + β − 1/2). Although both GX and G(1−X) are asymmetric, in the case that both shape parameters are equal (α = β), the geometric means are equal: GX = G(1−X). This equality follows from the following symmetry displayed between both geometric means: Harmonic mean The inverse of the harmonic mean (HX) of a distribution with random variable X is the arithmetic mean of 1/X, or, equivalently, its expected value. Therefore, the harmonic mean (HX) of a beta distribution with shape parameters α and β is HX = (α − 1)/(α + β − 1) for α > 1. The harmonic mean (HX) of a beta distribution with α < 1 is undefined, because its defining expression is not bounded in [0, 1] for shape parameter α less than unity. Letting α = β in the above expression one obtains HX = (α − 1)/(2α − 1), showing that for α = β the harmonic mean ranges from 0, for α = β = 1, to 1/2, for α = β → ∞. Following are the limits with one parameter finite (non-zero) and the other approaching these limits: The harmonic mean plays a role in maximum likelihood estimation for the four parameter case, in addition to the geometric mean.
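The closed forms above can be checked by simulation. The sketch below (assuming NumPy and SciPy; the helper names are illustrative, and the harmonic-mean expression (α − 1)/(α + β − 1) is the standard closed form referred to in the text) compares the digamma-based geometric mean and the harmonic mean against Monte Carlo estimates:

```python
import numpy as np
from scipy.special import digamma

def geometric_mean(a, b):
    """G_X = exp(psi(a) - psi(a + b))."""
    return np.exp(digamma(a) - digamma(a + b))

def harmonic_mean(a, b):
    """H_X = (a - 1) / (a + b - 1), defined only for a > 1."""
    if a <= 1:
        raise ValueError("harmonic mean undefined for alpha <= 1")
    return (a - 1.0) / (a + b - 1.0)

rng = np.random.default_rng(0)
a, b = 3.0, 2.0
x = rng.beta(a, b, size=200_000)
print(geometric_mean(a, b), np.exp(np.mean(np.log(x))))   # closed form vs Monte Carlo
print(harmonic_mean(a, b), 1.0 / np.mean(1.0 / x))
print(geometric_mean(a, b) < a / (a + b))                 # geometric mean < arithmetic mean
```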
Actually, when performing maximum likelihood estimation for the four parameter case, besides the harmonic mean HX based on the random variable X, also another harmonic mean appears naturally: the harmonic mean based on the linear transformation (1 − X), the mirror-image of X, denoted by H1 − X: The harmonic mean (H(1 − X)) of a beta distribution with β < 1 is undefined, because its defining expression is not bounded in [0, 1] for shape parameter β less than unity. Letting α = β in the above expression one obtains showing that for α = β the harmonic mean ranges from 0, for α = β = 1, to 1/2, for α = β → ∞. Following are the limits with one parameter finite (non-zero) and the other approaching these limits: Although both HX and H1−X are asymmetric, in the case that both shape parameters are equal α = β, the harmonic means are equal: HX = H1−X. This equality follows from the following symmetry displayed between both harmonic means: Measures of statistical dispersion Variance The variance (the second moment centered on the mean) of a beta distribution random variable X with parameters α and β is: Letting α = β in the above expression one obtains showing that for α = β the variance decreases monotonically as increases. Setting in this expression, one finds the maximum variance var(X) = 1/4 which only occurs approaching the limit, at . The beta distribution may also be parametrized in terms of its mean μ and sample size () (see subsection Mean and sample size): Using this parametrization, one can express the variance in terms of the mean μ and the sample size ν as follows: Since , it follows that . For a symmetric distribution, the mean is at the middle of the distribution, , and therefore: Also, the following limits (with only the noted variable approaching the limit) can be obtained from the above expressions: Geometric variance and covariance The logarithm of the geometric variance, ln(varGX), of a distribution with random variable X is the second moment of the logarithm of X centered on the geometric mean of X, ln(GX): and therefore, the geometric variance is: In the Fisher information matrix, and the curvature of the log likelihood function, the logarithm of the geometric variance of the reflected variable 1 − X and the logarithm of the geometric covariance between X and 1 − X appear: For a beta distribution, higher order logarithmic moments can be derived by using the representation of a beta distribution as a proportion of two gamma distributions and differentiating through the integral. They can be expressed in terms of higher order poly-gamma functions. See the section . The variance of the logarithmic variables and covariance of ln X and ln(1−X) are: where the trigamma function, denoted ψ1(α), is the second of the polygamma functions, and is defined as the derivative of the digamma function: Therefore, The accompanying plots show the log geometric variances and log geometric covariance versus the shape parameters α and β. The plots show that the log geometric variances and log geometric covariance are close to zero for shape parameters α and β greater than 2, and that the log geometric variances rapidly rise in value for shape parameter values α and β less than unity. The log geometric variances are positive for all values of the shape parameters. The log geometric covariance is negative for all values of the shape parameters, and it reaches large negative values for α and β less than unity. 
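As a numerical illustration of the variance and of the trigamma-based log geometric variances and covariance (a sketch assuming NumPy and SciPy; the closed form var(X) = αβ/((α+β)²(α+β+1)) is the standard expression referred to above):

```python
import numpy as np
from scipy.special import polygamma

def beta_variance(a, b):
    """var(X) = a*b / ((a + b)^2 (a + b + 1))."""
    return a * b / ((a + b) ** 2 * (a + b + 1))

def log_geometric_moments(a, b):
    """var(ln X), var(ln(1 - X)) and cov(ln X, ln(1 - X)) via the trigamma function."""
    trigamma = lambda z: polygamma(1, z)
    return (trigamma(a) - trigamma(a + b),
            trigamma(b) - trigamma(a + b),
            -trigamma(a + b))

rng = np.random.default_rng(1)
a, b = 2.0, 0.5
x = rng.beta(a, b, size=200_000)
print(beta_variance(a, b), np.var(x))
print(log_geometric_moments(a, b))
print(np.var(np.log(x)), np.var(np.log1p(-x)), np.cov(np.log(x), np.log1p(-x))[0, 1])
```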
Following are the limits with one parameter finite (non-zero) and the other approaching these limits: Limits with two parameters varying: Although both ln(varGX) and ln(varG(1 − X)) are asymmetric, when the shape parameters are equal, α = β, one has: ln(varGX) = ln(varG(1−X)). This equality follows from the following symmetry displayed between both log geometric variances: The log geometric covariance is symmetric: Mean absolute deviation around the mean The mean absolute deviation around the mean for the beta distribution with shape parameters α and β is: The mean absolute deviation around the mean is a more robust estimator of statistical dispersion than the standard deviation for beta distributions with tails and inflection points at each side of the mode, Beta(α, β) distributions with α,β > 2, as it depends on the linear (absolute) deviations rather than the square deviations from the mean. Therefore, the effect of very large deviations from the mean are not as overly weighted. Using Stirling's approximation to the Gamma function, N.L.Johnson and S.Kotz derived the following approximation for values of the shape parameters greater than unity (the relative error for this approximation is only −3.5% for α = β = 1, and it decreases to zero as α → ∞, β → ∞): At the limit α → ∞, β → ∞, the ratio of the mean absolute deviation to the standard deviation (for the beta distribution) becomes equal to the ratio of the same measures for the normal distribution: . For α = β = 1 this ratio equals , so that from α = β = 1 to α, β → ∞ the ratio decreases by 8.5%. For α = β = 0 the standard deviation is exactly equal to the mean absolute deviation around the mean. Therefore, this ratio decreases by 15% from α = β = 0 to α = β = 1, and by 25% from α = β = 0 to α, β → ∞ . However, for skewed beta distributions such that α → 0 or β → 0, the ratio of the standard deviation to the mean absolute deviation approaches infinity (although each of them, individually, approaches zero) because the mean absolute deviation approaches zero faster than the standard deviation. Using the parametrization in terms of mean μ and sample size ν = α + β > 0: α = μν, β = (1−μ)ν one can express the mean absolute deviation around the mean in terms of the mean μ and the sample size ν as follows: For a symmetric distribution, the mean is at the middle of the distribution, μ = 1/2, and therefore: Also, the following limits (with only the noted variable approaching the limit) can be obtained from the above expressions: Mean absolute difference The mean absolute difference for the beta distribution is: The Gini coefficient for the beta distribution is half of the relative mean absolute difference: Skewness The skewness (the third moment centered on the mean, normalized by the 3/2 power of the variance) of the beta distribution is Letting α = β in the above expression one obtains γ1 = 0, showing once again that for α = β the distribution is symmetric and hence the skewness is zero. Positive skew (right-tailed) for α < β, negative skew (left-tailed) for α > β. 
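The mean absolute deviation and the skewness can be evaluated directly from their closed forms (a sketch assuming NumPy and SciPy; the expressions used, 2 α^α β^β / (B(α, β) (α+β)^(α+β+1)) and 2(β − α)√(α+β+1) / ((α+β+2)√(αβ)), are the standard formulas the text refers to) and checked against sample statistics:

```python
import numpy as np
from scipy.special import betaln
from scipy.stats import beta

def mean_abs_deviation(a, b):
    """E|X - mu| = 2 a^a b^b / (B(a, b) (a+b)^(a+b+1)), computed in log space."""
    log_val = (np.log(2) + a * np.log(a) + b * np.log(b)
               - betaln(a, b) - (a + b + 1) * np.log(a + b))
    return np.exp(log_val)

def skewness(a, b):
    """2 (b - a) sqrt(a + b + 1) / ((a + b + 2) sqrt(a b))."""
    return 2 * (b - a) * np.sqrt(a + b + 1) / ((a + b + 2) * np.sqrt(a * b))

a, b = 2.0, 5.0
x = np.random.default_rng(2).beta(a, b, size=200_000)
print(mean_abs_deviation(a, b), np.mean(np.abs(x - x.mean())))
print(skewness(a, b), beta.stats(a, b, moments='s'))   # positive skew since a < b
```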
Using the parametrization in terms of mean μ and sample size ν = α + β: one can express the skewness in terms of the mean μ and the sample size ν as follows: The skewness can also be expressed just in terms of the variance var and the mean μ as follows: The accompanying plot of skewness as a function of variance and mean shows that maximum variance (1/4) is coupled with zero skewness and the symmetry condition (μ = 1/2), and that maximum skewness (positive or negative infinity) occurs when the mean is located at one end or the other, so that the "mass" of the probability distribution is concentrated at the ends (minimum variance). The following expression for the square of the skewness, in terms of the sample size ν = α + β and the variance var, is useful for the method of moments estimation of four parameters: This expression correctly gives a skewness of zero for α = β, since in that case (see ): . For the symmetric case (α = β), skewness = 0 over the whole range, and the following limits apply: For the asymmetric cases (α ≠ β) the following limits (with only the noted variable approaching the limit) can be obtained from the above expressions: Kurtosis The beta distribution has been applied in acoustic analysis to assess damage to gears, as the kurtosis of the beta distribution has been reported to be a good indicator of the condition of a gear. Kurtosis has also been used to distinguish the seismic signal generated by a person's footsteps from other signals. As persons or other targets moving on the ground generate continuous signals in the form of seismic waves, one can separate different targets based on the seismic waves they generate. Kurtosis is sensitive to impulsive signals, so it's much more sensitive to the signal generated by human footsteps than other signals generated by vehicles, winds, noise, etc. Unfortunately, the notation for kurtosis has not been standardized. Kenney and Keeping use the symbol γ2 for the excess kurtosis, but Abramowitz and Stegun use different terminology. To prevent confusion between kurtosis (the fourth moment centered on the mean, normalized by the square of the variance) and excess kurtosis, when using symbols, they will be spelled out as follows: Letting α = β in the above expression one obtains . Therefore, for symmetric beta distributions, the excess kurtosis is negative, increasing from a minimum value of −2 at the limit as {α = β} → 0, and approaching a maximum value of zero as {α = β} → ∞. The value of −2 is the minimum value of excess kurtosis that any distribution (not just beta distributions, but any distribution of any possible kind) can ever achieve. This minimum value is reached when all the probability density is entirely concentrated at each end x = 0 and x = 1, with nothing in between: a 2-point Bernoulli distribution with equal probability 1/2 at each end (a coin toss: see section below "Kurtosis bounded by the square of the skewness" for further discussion). The description of kurtosis as a measure of the "potential outliers" (or "potential rare, extreme values") of the probability distribution, is correct for all distributions including the beta distribution. When rare, extreme values can occur in the beta distribution, the higher its kurtosis; otherwise, the kurtosis is lower. For α ≠ β, skewed beta distributions, the excess kurtosis can reach unlimited positive values (particularly for α → 0 for finite β, or for β → 0 for finite α) because the side away from the mode will produce occasional extreme values. 
Minimum kurtosis takes place when the mass density is concentrated equally at each end (and therefore the mean is at the center), and there is no probability mass density in between the ends. Using the parametrization in terms of mean μ and sample size ν = α + β: one can express the excess kurtosis in terms of the mean μ and the sample size ν as follows: The excess kurtosis can also be expressed in terms of just the following two parameters: the variance var, and the sample size ν as follows: and, in terms of the variance var and the mean μ as follows: The plot of excess kurtosis as a function of the variance and the mean shows that the minimum value of the excess kurtosis (−2, which is the minimum possible value for excess kurtosis for any distribution) is intimately coupled with the maximum value of variance (1/4) and the symmetry condition: the mean occurring at the midpoint (μ = 1/2). This occurs for the symmetric case of α = β = 0, with zero skewness. At the limit, this is the 2 point Bernoulli distribution with equal probability 1/2 at each Dirac delta function end x = 0 and x = 1 and zero probability everywhere else. (A coin toss: one face of the coin being x = 0 and the other face being x = 1.) Variance is maximum because the distribution is bimodal with nothing in between the two modes (spikes) at each end. Excess kurtosis is minimum: the probability density "mass" is zero at the mean and it is concentrated at the two peaks at each end. Excess kurtosis reaches the minimum possible value (for any distribution) when the probability density function has two spikes at each end: it is bi-"peaky" with nothing in between them. On the other hand, the plot shows that for extreme skewed cases, where the mean is located near one or the other end (μ = 0 or μ = 1), the variance is close to zero, and the excess kurtosis rapidly approaches infinity when the mean of the distribution approaches either end. Alternatively, the excess kurtosis can also be expressed in terms of just the following two parameters: the square of the skewness, and the sample size ν as follows: From this last expression, one can obtain the same limits published over a century ago by Karl Pearson for the beta distribution (see section below titled "Kurtosis bounded by the square of the skewness"). Setting α + β = ν = 0 in the above expression, one obtains Pearson's lower boundary (values for the skewness and excess kurtosis below the boundary (excess kurtosis + 2 − skewness2 = 0) cannot occur for any distribution, and hence Karl Pearson appropriately called the region below this boundary the "impossible region"). The limit of α + β = ν → ∞ determines Pearson's upper boundary. therefore: Values of ν = α + β such that ν ranges from zero to infinity, 0 < ν < ∞, span the whole region of the beta distribution in the plane of excess kurtosis versus squared skewness. For the symmetric case (α = β), the following limits apply: For the unsymmetric cases (α ≠ β) the following limits (with only the noted variable approaching the limit) can be obtained from the above expressions: Characteristic function The characteristic function is the Fourier transform of the probability density function. The characteristic function of the beta distribution is Kummer's confluent hypergeometric function (of the first kind): where is the rising factorial, also called the "Pochhammer symbol". 
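The sketch below (assuming NumPy and SciPy; excess_kurtosis and skewness are illustrative helpers implementing the standard closed forms) evaluates the excess kurtosis and checks Pearson's bounds, skewness² − 2 ≤ excess kurtosis ≤ (3/2) skewness², for the parameter choices discussed further down in the text:

```python
import numpy as np
from scipy.stats import beta

def skewness(a, b):
    return 2 * (b - a) * np.sqrt(a + b + 1) / ((a + b + 2) * np.sqrt(a * b))

def excess_kurtosis(a, b):
    """6[(a-b)^2 (a+b+1) - a b (a+b+2)] / (a b (a+b+2)(a+b+3))."""
    num = 6 * ((a - b) ** 2 * (a + b + 1) - a * b * (a + b + 2))
    return num / (a * b * (a + b + 2) * (a + b + 3))

for a, b in [(0.1, 1000.0), (1e-4, 0.1), (2.0, 2.0)]:
    g1, g2 = skewness(a, b), excess_kurtosis(a, b)
    # Pearson bounds for the beta family: g1^2 - 2 <= g2 <= 1.5 * g1^2
    print(a, b, g2, g1 ** 2 - 2, 1.5 * g1 ** 2, beta.stats(a, b, moments='k'))
```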
The value of the characteristic function for t = 0, is one: Also, the real and imaginary parts of the characteristic function enjoy the following symmetries with respect to the origin of variable t: The symmetric case α = β simplifies the characteristic function of the beta distribution to a Bessel function, since in the special case α + β = 2α the confluent hypergeometric function (of the first kind) reduces to a Bessel function (the modified Bessel function of the first kind ) using Kummer's second transformation as follows: In the accompanying plots, the real part (Re) of the characteristic function of the beta distribution is displayed for symmetric (α = β) and skewed (α ≠ β) cases. Other moments Moment generating function It also follows that the moment generating function is In particular MX(α; β; 0) = 1. Higher moments Using the moment generating function, the k-th raw moment is given by the factor multiplying the (exponential series) term in the series of the moment generating function where (x)(k) is a Pochhammer symbol representing rising factorial. It can also be written in a recursive form as Since the moment generating function has a positive radius of convergence, the beta distribution is determined by its moments. Moments of transformed random variables Moments of linearly transformed, product and inverted random variables One can also show the following expectations for a transformed random variable, where the random variable X is Beta-distributed with parameters α and β: X ~ Beta(α, β). The expected value of the variable 1 − X is the mirror-symmetry of the expected value based on X: Due to the mirror-symmetry of the probability density function of the beta distribution, the variances based on variables X and 1 − X are identical, and the covariance on X(1 − X is the negative of the variance: These are the expected values for inverted variables, (these are related to the harmonic means, see ): The following transformation by dividing the variable X by its mirror-image X/(1 − X) results in the expected value of the "inverted beta distribution" or beta prime distribution (also known as beta distribution of the second kind or Pearson's Type VI): Variances of these transformed variables can be obtained by integration, as the expected values of the second moments centered on the corresponding variables: The following variance of the variable X divided by its mirror-image (X/(1−X) results in the variance of the "inverted beta distribution" or beta prime distribution (also known as beta distribution of the second kind or Pearson's Type VI): The covariances are:   These expectations and variances appear in the four-parameter Fisher information matrix (.) Moments of logarithmically transformed random variables Expected values for logarithmic transformations (useful for maximum likelihood estimates, see ) are discussed in this section. The following logarithmic linear transformations are related to the geometric means GX and G(1−X) (see ): Where the digamma function ψ(α) is defined as the logarithmic derivative of the gamma function: Logit transformations are interesting, as they usually transform various shapes (including J-shapes) into (usually skewed) bell-shaped densities over the logit variable, and they may remove the end singularities over the original variable: Johnson considered the distribution of the logit – transformed variable ln(X/1 − X), including its moment generating function and approximations for large values of the shape parameters. 
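The rising-factorial form of the raw moments lends itself to a one-line implementation (a sketch assuming NumPy and SciPy; raw_moment is an illustrative helper), which can be compared with SciPy's numerically computed moments:

```python
import numpy as np
from scipy.stats import beta

def raw_moment(a, b, k):
    """E[X^k] = prod_{r=0..k-1} (a + r) / (a + b + r)  (rising-factorial form)."""
    r = np.arange(k)
    return np.prod((a + r) / (a + b + r))

a, b = 2.0, 3.0
print([raw_moment(a, b, k) for k in range(1, 5)])
print([beta.moment(k, a, b) for k in range(1, 5)])   # SciPy's raw moments about the origin
```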
This transformation extends the finite support [0, 1] based on the original variable X to infinite support in both directions of the real line (−∞, +∞). The logit of a beta variate has the logistic-beta distribution. Higher order logarithmic moments can be derived by using the representation of a beta distribution as a proportion of two gamma distributions and differentiating through the integral. They can be expressed in terms of higher order poly-gamma functions as follows: therefore the variance of the logarithmic variables and covariance of ln(X) and ln(1−X) are: where the trigamma function, denoted ψ1(α), is the second of the polygamma functions, and is defined as the derivative of the digamma function: The variances and covariance of the logarithmically transformed variables X and (1 − X) are different, in general, because the logarithmic transformation destroys the mirror-symmetry of the original variables X and (1 − X), as the logarithm approaches negative infinity for the variable approaching zero. These logarithmic variances and covariance are the elements of the Fisher information matrix for the beta distribution. They are also a measure of the curvature of the log likelihood function (see section on Maximum likelihood estimation). The variances of the log inverse variables are identical to the variances of the log variables: It also follows that the variances of the logit-transformed variables are Quantities of information (entropy) Given a beta distributed random variable, X ~ Beta(α, β), the differential entropy of X is (measured in nats), the expected value of the negative of the logarithm of the probability density function: where f(x; α, β) is the probability density function of the beta distribution: The digamma function ψ appears in the formula for the differential entropy as a consequence of Euler's integral formula for the harmonic numbers which follows from the integral: The differential entropy of the beta distribution is negative for all values of α and β greater than zero, except at α = β = 1 (for which values the beta distribution is the same as the uniform distribution), where the differential entropy reaches its maximum value of zero. It is to be expected that the maximum entropy should take place when the beta distribution becomes equal to the uniform distribution, since uncertainty is maximal when all possible events are equiprobable. For α or β approaching zero, the differential entropy approaches its minimum value of negative infinity. For (either or both) α or β approaching zero, there is a maximum amount of order: all the probability density is concentrated at the ends, and there is zero probability density at points located between the ends. Similarly for (either or both) α or β approaching infinity, the differential entropy approaches its minimum value of negative infinity, and a maximum amount of order. If either α or β approaches infinity (and the other is finite) all the probability density is concentrated at an end, and the probability density is zero everywhere else. If both shape parameters are equal (the symmetric case), α = β, and they approach infinity simultaneously, the probability density becomes a spike (Dirac delta function) concentrated at the middle x = 1/2, and hence there is 100% probability at the middle x = 1/2 and zero probability everywhere else. 
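The differential entropy has the closed form h(X) = ln B(α, β) − (α − 1)ψ(α) − (β − 1)ψ(β) + (α + β − 2)ψ(α + β), in nats. The sketch below (assuming SciPy; beta_entropy is an illustrative helper) evaluates it and compares with scipy.stats.beta.entropy, reproducing the entropy values quoted in the following paragraph:

```python
from scipy.special import betaln, digamma
from scipy.stats import beta

def beta_entropy(a, b):
    """Differential entropy in nats:
    ln B(a,b) - (a-1) psi(a) - (b-1) psi(b) + (a+b-2) psi(a+b)."""
    return (betaln(a, b) - (a - 1) * digamma(a) - (b - 1) * digamma(b)
            + (a + b - 2) * digamma(a + b))

for a, b in [(1.0, 1.0), (3.0, 3.0), (3.0, 0.5)]:
    print(a, b, beta_entropy(a, b), beta.entropy(a, b))
# Expected: 0 for the uniform Beta(1, 1); the text quotes -0.267864 for Beta(3, 3)
# and -1.10805 for Beta(3, 0.5).
```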
The (continuous case) differential entropy was introduced by Shannon in his original paper (where he named it the "entropy of a continuous distribution"), as the concluding part of the same paper where he defined the discrete entropy. It is known since then that the differential entropy may differ from the infinitesimal limit of the discrete entropy by an infinite offset, therefore the differential entropy can be negative (as it is for the beta distribution). What really matters is the relative value of entropy. Given two beta distributed random variables, X1 ~ Beta(α, β) and X2 ~ Beta(, ), the cross-entropy is (measured in nats) The cross entropy has been used as an error metric to measure the distance between two hypotheses. Its absolute value is minimum when the two distributions are identical. It is the information measure most closely related to the log maximum likelihood (see section on "Parameter estimation. Maximum likelihood estimation")). The relative entropy, or Kullback–Leibler divergence DKL(X1 || X2), is a measure of the inefficiency of assuming that the distribution is X2 ~ Beta(, ) when the distribution is really X1 ~ Beta(α, β). It is defined as follows (measured in nats). The relative entropy, or Kullback–Leibler divergence, is always non-negative. A few numerical examples follow: X1 ~ Beta(1, 1) and X2 ~ Beta(3, 3); DKL(X1 || X2) = 0.598803; DKL(X2 || X1) = 0.267864; h(X1) = 0; h(X2) = −0.267864 X1 ~ Beta(3, 0.5) and X2 ~ Beta(0.5, 3); DKL(X1 || X2) = 7.21574; DKL(X2 || X1) = 7.21574; h(X1) = −1.10805; h(X2) = −1.10805. The Kullback–Leibler divergence is not symmetric DKL(X1 || X2) ≠ DKL(X2 || X1) for the case in which the individual beta distributions Beta(1, 1) and Beta(3, 3) are symmetric, but have different entropies h(X1) ≠ h(X2). The value of the Kullback divergence depends on the direction traveled: whether going from a higher (differential) entropy to a lower (differential) entropy or the other way around. In the numerical example above, the Kullback divergence measures the inefficiency of assuming that the distribution is (bell-shaped) Beta(3, 3), rather than (uniform) Beta(1, 1). The "h" entropy of Beta(1, 1) is higher than the "h" entropy of Beta(3, 3) because the uniform distribution Beta(1, 1) has a maximum amount of disorder. The Kullback divergence is more than two times higher (0.598803 instead of 0.267864) when measured in the direction of decreasing entropy: the direction that assumes that the (uniform) Beta(1, 1) distribution is (bell-shaped) Beta(3, 3) rather than the other way around. In this restricted sense, the Kullback divergence is consistent with the second law of thermodynamics. The Kullback–Leibler divergence is symmetric DKL(X1 || X2) = DKL(X2 || X1) for the skewed cases Beta(3, 0.5) and Beta(0.5, 3) that have equal differential entropy h(X1) = h(X2). The symmetry condition: follows from the above definitions and the mirror-symmetry f(x; α, β) = f(1 − x; α, β) enjoyed by the beta distribution. Relationships between statistical measures Mean, mode and median relationship If 1 < α < β then mode ≤ median ≤ mean. Expressing the mode (only for α, β > 1), and the mean in terms of α and β: If 1 < β < α then the order of the inequalities are reversed. For α, β > 1 the absolute distance between the mean and the median is less than 5% of the distance between the maximum and minimum values of x. 
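The Kullback–Leibler divergence between two beta distributions reduces to beta and digamma functions; the sketch below (assuming SciPy; kl_beta is an illustrative helper) reproduces the numerical examples above:

```python
from scipy.special import betaln, digamma

def kl_beta(a1, b1, a2, b2):
    """D_KL( Beta(a1, b1) || Beta(a2, b2) ), in nats."""
    return (betaln(a2, b2) - betaln(a1, b1)
            + (a1 - a2) * (digamma(a1) - digamma(a1 + b1))
            + (b1 - b2) * (digamma(b1) - digamma(a1 + b1)))

print(kl_beta(1, 1, 3, 3))    # ~0.598803, as quoted in the text
print(kl_beta(3, 3, 1, 1))    # ~0.267864: the divergence is not symmetric
print(kl_beta(3, 0.5, 0.5, 3), kl_beta(0.5, 3, 3, 0.5))   # equal by mirror symmetry (~7.21574)
```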
On the other hand, the absolute distance between the mean and the mode can reach 50% of the distance between the maximum and minimum values of x, for the (pathological) case of α = 1 and β = 1, for which values the beta distribution approaches the uniform distribution and the differential entropy approaches its maximum value, and hence maximum "disorder". For example, for α = 1.0001 and β = 1.00000001: mode = 0.9999; PDF(mode) = 1.00010 mean = 0.500025; PDF(mean) = 1.00003 median = 0.500035; PDF(median) = 1.00003 mean − mode = −0.499875 mean − median = −9.65538 × 10−6 where PDF stands for the value of the probability density function. Mean, geometric mean and harmonic mean relationship It is known from the inequality of arithmetic and geometric means that the geometric mean is lower than the mean. Similarly, the harmonic mean is lower than the geometric mean. The accompanying plot shows that for α = β, both the mean and the median are exactly equal to 1/2, regardless of the value of α = β, and the mode is also equal to 1/2 for α = β > 1, however the geometric and harmonic means are lower than 1/2 and they only approach this value asymptotically as α = β → ∞. Kurtosis bounded by the square of the skewness As remarked by Feller, in the Pearson system the beta probability density appears as type I (any difference between the beta distribution and Pearson's type I distribution is only superficial and it makes no difference for the following discussion regarding the relationship between kurtosis and skewness). Karl Pearson showed, in Plate 1 of his paper published in 1916, a graph with the kurtosis as the vertical axis (ordinate) and the square of the skewness as the horizontal axis (abscissa), in which a number of distributions were displayed. The region occupied by the beta distribution is bounded by the following two lines in the (skewness2,kurtosis) plane, or the (skewness2,excess kurtosis) plane: or, equivalently, At a time when there were no powerful digital computers, Karl Pearson accurately computed further boundaries, for example, separating the "U-shaped" from the "J-shaped" distributions. The lower boundary line (excess kurtosis + 2 − skewness2 = 0) is produced by skewed "U-shaped" beta distributions with both values of shape parameters α and β close to zero. The upper boundary line (excess kurtosis − (3/2) skewness2 = 0) is produced by extremely skewed distributions with very large values of one of the parameters and very small values of the other parameter. Karl Pearson showed that this upper boundary line (excess kurtosis − (3/2) skewness2 = 0) is also the intersection with Pearson's distribution III, which has unlimited support in one direction (towards positive infinity), and can be bell-shaped or J-shaped. His son, Egon Pearson, showed that the region (in the kurtosis/squared-skewness plane) occupied by the beta distribution (equivalently, Pearson's distribution I) as it approaches this boundary (excess kurtosis − (3/2) skewness2 = 0) is shared with the noncentral chi-squared distribution. Karl Pearson (Pearson 1895, pp. 357, 360, 373–376) also showed that the gamma distribution is a Pearson type III distribution. Hence this boundary line for Pearson's type III distribution is known as the gamma line. (This can be shown from the fact that the excess kurtosis of the gamma distribution is 6/k and the square of the skewness is 4/k, hence (excess kurtosis − (3/2) skewness2 = 0) is identically satisfied by the gamma distribution regardless of the value of the parameter "k"). 
Pearson later noted that the chi-squared distribution is a special case of Pearson's type III and also shares this boundary line (as it is apparent from the fact that for the chi-squared distribution the excess kurtosis is 12/k and the square of the skewness is 8/k, hence (excess kurtosis − (3/2) skewness2 = 0) is identically satisfied regardless of the value of the parameter "k"). This is to be expected, since the chi-squared distribution X ~ χ2(k) is a special case of the gamma distribution, with parametrization X ~ Γ(k/2, 1/2) where k is a positive integer that specifies the "number of degrees of freedom" of the chi-squared distribution. An example of a beta distribution near the upper boundary (excess kurtosis − (3/2) skewness2 = 0) is given by α = 0.1, β = 1000, for which the ratio (excess kurtosis)/(skewness2) = 1.49835 approaches the upper limit of 1.5 from below. An example of a beta distribution near the lower boundary (excess kurtosis + 2 − skewness2 = 0) is given by α= 0.0001, β = 0.1, for which values the expression (excess kurtosis + 2)/(skewness2) = 1.01621 approaches the lower limit of 1 from above. In the infinitesimal limit for both α and β approaching zero symmetrically, the excess kurtosis reaches its minimum value at −2. This minimum value occurs at the point at which the lower boundary line intersects the vertical axis (ordinate). (However, in Pearson's original chart, the ordinate is kurtosis, instead of excess kurtosis, and it increases downwards rather than upwards). Values for the skewness and excess kurtosis below the lower boundary (excess kurtosis + 2 − skewness2 = 0) cannot occur for any distribution, and hence Karl Pearson appropriately called the region below this boundary the "impossible region". The boundary for this "impossible region" is determined by (symmetric or skewed) bimodal U-shaped distributions for which the parameters α and β approach zero and hence all the probability density is concentrated at the ends: x = 0, 1 with practically nothing in between them. Since for α ≈ β ≈ 0 the probability density is concentrated at the two ends x = 0 and x = 1, this "impossible boundary" is determined by a Bernoulli distribution, where the two only possible outcomes occur with respective probabilities p and q = 1−p. For cases approaching this limit boundary with symmetry α = β, skewness ≈ 0, excess kurtosis ≈ −2 (this is the lowest excess kurtosis possible for any distribution), and the probabilities are p ≈ q ≈ 1/2. For cases approaching this limit boundary with skewness, excess kurtosis ≈ −2 + skewness2, and the probability density is concentrated more at one end than the other end (with practically nothing in between), with probabilities at the left end x = 0 and at the right end x = 1. Symmetry All statements are conditional on α, β > 0: Probability density function reflection symmetry Cumulative distribution function reflection symmetry plus unitary translation Mode reflection symmetry plus unitary translation Median reflection symmetry plus unitary translation Mean reflection symmetry plus unitary translation Geometric means each is individually asymmetric, the following symmetry applies between the geometric mean based on X and the geometric mean based on its reflection (1-X) Harmonic means each is individually asymmetric, the following symmetry applies between the harmonic mean based on X and the harmonic mean based on its reflection (1-X) . 
Variance symmetry Geometric variances each is individually asymmetric, the following symmetry applies between the log geometric variance based on X and the log geometric variance based on its reflection (1-X) Geometric covariance symmetry Mean absolute deviation around the mean symmetry Skewness skew-symmetry Excess kurtosis symmetry Characteristic function symmetry of Real part (with respect to the origin of variable "t") Characteristic function skew-symmetry of Imaginary part (with respect to the origin of variable "t") Characteristic function symmetry of Absolute value (with respect to the origin of variable "t") Differential entropy symmetry Relative entropy (also called Kullback–Leibler divergence) symmetry Fisher information matrix symmetry Geometry of the probability density function Inflection points For certain values of the shape parameters α and β, the probability density function has inflection points, at which the curvature changes sign. The position of these inflection points can be useful as a measure of the dispersion or spread of the distribution. Defining the following quantity: Points of inflection occur, depending on the value of the shape parameters α and β, as follows: (α > 2, β > 2) The distribution is bell-shaped (symmetric for α = β and skewed otherwise), with two inflection points, equidistant from the mode: (α = 2, β > 2) The distribution is unimodal, positively skewed, right-tailed, with one inflection point, located to the right of the mode: (α > 2, β = 2) The distribution is unimodal, negatively skewed, left-tailed, with one inflection point, located to the left of the mode: (1 < α < 2, β > 2, α+β>2) The distribution is unimodal, positively skewed, right-tailed, with one inflection point, located to the right of the mode: (0 < α < 1, 1 < β < 2) The distribution has a mode at the left end x = 0 and it is positively skewed, right-tailed. There is one inflection point, located to the right of the mode: (α > 2, 1 < β < 2) The distribution is unimodal negatively skewed, left-tailed, with one inflection point, located to the left of the mode: (1 < α < 2, 0 < β < 1) The distribution has a mode at the right end x=1 and it is negatively skewed, left-tailed. There is one inflection point, located to the left of the mode: There are no inflection points in the remaining (symmetric and skewed) regions: U-shaped: (α, β < 1) upside-down-U-shaped: (1 < α < 2, 1 < β < 2), reverse-J-shaped (α < 1, β > 2) or J-shaped: (α > 2, β < 1) The accompanying plots show the inflection point locations (shown vertically, ranging from 0 to 1) versus α and β (the horizontal axes ranging from 0 to 5). There are large cuts at surfaces intersecting the lines α = 1, β = 1, α = 2, and β = 2 because at these values the beta distribution change from 2 modes, to 1 mode to no mode. Shapes The beta density function can take a wide variety of different shapes depending on the values of the two parameters α and β. The ability of the beta distribution to take this great diversity of shapes (using only two parameters) is partly responsible for finding wide application for modeling actual measurements: Symmetric (α = β) the density function is symmetric about 1/2 (blue & teal plots). median = mean = 1/2. skewness = 0. variance = 1/(4(2α + 1)) α = β < 1 U-shaped (blue plot). 
bimodal: left mode = 0, right mode =1, anti-mode = 1/2 1/12 < var(X) < 1/4 −2 < excess kurtosis(X) < −6/5 α = β = 1/2 is the arcsine distribution var(X) = 1/8 excess kurtosis(X) = −3/2 CF = Rinc (t) α = β → 0 is a 2-point Bernoulli distribution with equal probability 1/2 at each Dirac delta function end x = 0 and x = 1 and zero probability everywhere else. A coin toss: one face of the coin being x = 0 and the other face being x = 1. a lower value than this is impossible for any distribution to reach. The differential entropy approaches a minimum value of −∞ α = β = 1 the uniform [0, 1] distribution no mode var(X) = 1/12 excess kurtosis(X) = −6/5 The (negative anywhere else) differential entropy reaches its maximum value of zero CF = Sinc (t) α = β > 1 symmetric unimodal mode = 1/2. 0 < var(X) < 1/12 −6/5 < excess kurtosis(X) < 0 α = β = 3/2 is a semi-elliptic [0, 1] distribution, see: Wigner semicircle distribution var(X) = 1/16. excess kurtosis(X) = −1 CF = 2 Jinc (t) α = β = 2 is the parabolic [0, 1] distribution var(X) = 1/20 excess kurtosis(X) = −6/7 CF = 3 Tinc (t) α = β > 2 is bell-shaped, with inflection points located to either side of the mode 0 < var(X) < 1/20 −6/7 < excess kurtosis(X) < 0 α = β → ∞ is a 1-point Degenerate distribution with a Dirac delta function spike at the midpoint x = 1/2 with probability 1, and zero probability everywhere else. There is 100% probability (absolute certainty) concentrated at the single point x = 1/2. The differential entropy approaches a minimum value of −∞ Skewed (α ≠ β) The density function is skewed. An interchange of parameter values yields the mirror image (the reverse) of the initial curve, some more specific cases: α < 1, β < 1 U-shaped Positive skew for α < β, negative skew for α > β. bimodal: left mode = 0, right mode = 1, anti-mode = 0 < median < 1. 0 < var(X) < 1/4 α > 1, β > 1 unimodal (magenta & cyan plots), Positive skew for α < β, negative skew for α > β. 0 < median < 1 0 < var(X) < 1/12 α < 1, β ≥ 1 reverse J-shaped with a right tail, positively skewed, strictly decreasing, convex mode = 0 0 < median < 1/2. (maximum variance occurs for , or α = Φ the golden ratio conjugate) α ≥ 1, β < 1 J-shaped with a left tail, negatively skewed, strictly increasing, convex mode = 1 1/2 < median < 1 (maximum variance occurs for , or β = Φ the golden ratio conjugate) α = 1, β > 1 positively skewed, strictly decreasing (red plot), a reversed (mirror-image) power function [0,1] distribution mean = 1 / (β + 1) median = 1 - 1/21/β mode = 0 α = 1, 1 < β < 2 concave 1/18 < var(X) < 1/12. α = 1, β = 2 a straight line with slope −2, the right-triangular distribution with right angle at the left end, at x = 0 var(X) = 1/18 α = 1, β > 2 reverse J-shaped with a right tail, convex 0 < var(X) < 1/18 α > 1, β = 1 negatively skewed, strictly increasing (green plot), the power function [0, 1] distribution mean = α / (α + 1) median = 1/21/α mode = 1 2 > α > 1, β = 1 concave 1/18 < var(X) < 1/12 α = 2, β = 1 a straight line with slope +2, the right-triangular distribution with right angle at the right end, at x = 1 var(X) = 1/18 α > 2, β = 1 J-shaped with a left tail, convex 0 < var(X) < 1/18 Related distributions Transformations If X ~ Beta(α, β) then 1 − X ~ Beta(β, α) mirror-image symmetry If X ~ Beta(α, β) then . The beta prime distribution, also called "beta distribution of the second kind". If , then has a generalized logistic distribution, with density , where is the logistic sigmoid. If X ~ Beta(α, β) then . 
If and then has density for and for , where is the Hypergeometric function. If X ~ Beta(n/2, m/2) then (mX)/(n(1 − X)) ~ F(n, m) (assuming n > 0 and m > 0), the Fisher–Snedecor F distribution. If X ~ Beta(α, β) with α = 1 + λ(m − min)/(max − min) and β = 1 + λ(max − m)/(max − min), then min + X(max − min) ~ PERT(min, max, m, λ), where PERT denotes a PERT distribution used in PERT analysis, and m = most likely value. Traditionally λ = 4 in PERT analysis. If X ~ Beta(1, β) then X ~ Kumaraswamy distribution with parameters (1, β) If X ~ Beta(α, 1) then X ~ Kumaraswamy distribution with parameters (α, 1) If X ~ Beta(α, 1) then −ln(X) ~ Exponential(α) Special and limiting cases Beta(1, 1) ~ U(0, 1) with density 1 on that interval. Beta(n, 1) ~ Maximum of n independent rvs. with U(0, 1), sometimes called a standard power function distribution with density n x^(n−1) on that interval. Beta(1, n) ~ Minimum of n independent rvs. with U(0, 1) with density n(1 − x)^(n−1) on that interval. If X ~ Beta(3/2, 3/2) and r > 0 then 2rX − r ~ Wigner semicircle distribution. Beta(1/2, 1/2) is equivalent to the arcsine distribution. This distribution is also Jeffreys prior probability for the Bernoulli and binomial distributions. In the limit β → ∞ with α fixed, the scaled variable βX converges in distribution to the gamma distribution with shape parameter α and, for α = 1, to the exponential distribution. For large α and β, the beta distribution approaches the normal distribution. More precisely, if α = β = n, then sqrt(n)(X − 1/2) converges in distribution to a normal distribution with mean 0 and variance 1/8 as n increases. Derived from other distributions The kth order statistic of a sample of size n from the uniform distribution is a beta random variable, U(k) ~ Beta(k, n+1−k). Gamma distribution: If X ~ Gamma(α, θ) and Y ~ Gamma(β, θ) are independent, then X/(X + Y) ~ Beta(α, β). Chi-squared distribution: If X ~ χ2(2α) and Y ~ χ2(2β) are independent, then X/(X + Y) ~ Beta(α, β). The power transformation for the uniform distribution: If X ~ U(0, 1) and α > 0 then X^(1/α) ~ Beta(α, 1). Cauchy distribution: If X ~ Cauchy(0, 1) then 1/(1 + X^2) ~ Beta(1/2, 1/2). Combination with other distributions X ~ Beta(α, β) and Y ~ F(2β,2α) then for all x > 0. Compounding with other distributions If p ~ Beta(α, β) and X ~ Bin(k, p) then X ~ beta-binomial distribution If p ~ Beta(α, β) and X ~ NB(r, p) then X ~ beta negative binomial distribution Generalisations The generalization to multiple variables, i.e. a multivariate Beta distribution, is called a Dirichlet distribution. Univariate marginals of the Dirichlet distribution have a beta distribution. The beta distribution is conjugate to the binomial and Bernoulli distributions in exactly the same way as the Dirichlet distribution is conjugate to the multinomial distribution and categorical distribution. The Pearson type I distribution is identical to the beta distribution (except for arbitrary shifting and re-scaling that can also be accomplished with the four parameter parametrization of the beta distribution). The beta distribution is the special case of the noncentral beta distribution where the noncentrality parameter is zero. The generalized beta distribution is a five-parameter distribution family which has the beta distribution as a special case. The matrix variate beta distribution is a distribution for positive-definite matrices. Statistical inference Parameter estimation Method of moments Two unknown parameters Two unknown parameters (α and β of a beta distribution supported in the [0,1] interval) can be estimated, using the method of moments, with the first two moments (sample mean and sample variance) as follows. Let x̄ be the sample mean estimate and v̄ be the sample variance estimate.
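Two of the relations above are easy to verify by simulation (a sketch assuming NumPy and SciPy): the ratio of two independent gamma variables with a common scale is beta distributed, and −ln(X) is exponential when X ~ Beta(α, 1).

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(3)
a, b = 2.5, 4.0
g1 = rng.gamma(a, 1.0, size=100_000)
g2 = rng.gamma(b, 1.0, size=100_000)
x = g1 / (g1 + g2)                        # should be Beta(a, b) distributed
print(kstest(x, 'beta', args=(a, b)))     # large p-value: consistent with Beta(a, b)

u = rng.beta(3.0, 1.0, size=100_000)
print(kstest(-np.log(u), 'expon', args=(0, 1 / 3.0)))   # -ln X ~ Exponential with rate alpha = 3
```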
The method-of-moments estimates of the parameters are α = x̄(x̄(1 − x̄)/v̄ − 1) and β = (1 − x̄)(x̄(1 − x̄)/v̄ − 1), both valid when v̄ < x̄(1 − x̄). When the distribution is required over a known interval other than [0, 1] with random variable X, say [a, c] with random variable Y, then replace x̄ with (x̄ − a)/(c − a) and v̄ with v̄/(c − a)^2 in the above couple of equations for the shape parameters (see the "Four unknown parameters" section below). Four unknown parameters All four parameters (α, β, a, c of a beta distribution supported in the [a, c] interval, see section "Alternative parametrizations, Four parameters") can be estimated, using the method of moments developed by Karl Pearson, by equating sample and population values of the first four central moments (mean, variance, skewness and excess kurtosis). The excess kurtosis was expressed in terms of the square of the skewness, and the sample size ν = α + β (see the previous section "Kurtosis") as follows: One can use this equation to solve for the sample size ν = α + β in terms of the square of the skewness and the excess kurtosis as follows: This is the ratio (multiplied by a factor of 3) between the previously derived limit boundaries for the beta distribution in a space (as originally done by Karl Pearson) defined with coordinates of the square of the skewness in one axis and the excess kurtosis in the other axis (see above): The case of zero skewness can be immediately solved because for zero skewness, α = β and hence ν = 2α = 2β, therefore α = β = ν/2 (Excess kurtosis is negative for the beta distribution with zero skewness, ranging from -2 to 0, so that the sample size estimate ν (and therefore the sample shape parameters) is positive, ranging from zero when the shape parameters approach zero and the excess kurtosis approaches -2, to infinity when the shape parameters approach infinity and the excess kurtosis approaches zero). For non-zero sample skewness one needs to solve a system of two coupled equations. Since the skewness and the excess kurtosis are independent of the parameters a and c, the shape parameters α and β can be uniquely determined from the sample skewness and the sample excess kurtosis, by solving the coupled equations with two known variables (sample skewness and sample excess kurtosis) and two unknowns (the shape parameters): resulting in the following solution, where one should take one solution for (negative) sample skewness < 0, and the other for (positive) sample skewness > 0. The accompanying plot shows these two solutions as surfaces in a space with horizontal axes of (sample excess kurtosis) and (sample squared skewness) and the shape parameters as the vertical axis. The surfaces are constrained by the condition that the sample excess kurtosis must be bounded by the sample squared skewness as stipulated in the above equation. The two surfaces meet at the right edge defined by zero skewness. Along this right edge, both parameters are equal and the distribution is symmetric U-shaped for α = β < 1, uniform for α = β = 1, upside-down-U-shaped for 1 < α = β < 2 and bell-shaped for α = β > 2. The surfaces also meet at the front (lower) edge defined by "the impossible boundary" line (excess kurtosis + 2 - skewness2 = 0). Along this front (lower) boundary both shape parameters approach zero, and the probability density is concentrated more at one end than the other end (with practically nothing in between), with probabilities p at the left end x = 0 and q = 1 − p at the right end x = 1. The two surfaces become further apart towards the rear edge. At this rear edge the surface parameters are quite different from each other.
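The two-parameter method-of-moments estimator is a few lines of code (a sketch assuming NumPy; beta_mom is an illustrative helper):

```python
import numpy as np

def beta_mom(x):
    """Method-of-moments estimates for Beta(alpha, beta) on [0, 1].
    Valid when the sample variance is below xbar*(1 - xbar)."""
    xbar, v = np.mean(x), np.var(x, ddof=1)
    if v >= xbar * (1 - xbar):
        raise ValueError("sample variance too large for a beta fit")
    common = xbar * (1 - xbar) / v - 1
    return xbar * common, (1 - xbar) * common

rng = np.random.default_rng(4)
sample = rng.beta(2.0, 5.0, size=10_000)
print(beta_mom(sample))     # should be close to (2, 5)
```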
As remarked, for example, by Bowman and Shenton, sampling in the neighborhood of the line (sample excess kurtosis - (3/2)(sample skewness)2 = 0) (the just-J-shaped portion of the rear edge where blue meets beige), "is dangerously near to chaos", because at that line the denominator of the expression above for the estimate ν = α + β becomes zero and hence ν approaches infinity as that line is approached. Bowman and Shenton write that "the higher moment parameters (kurtosis and skewness) are extremely fragile (near that line). However, the mean and standard deviation are fairly reliable." Therefore, the problem is for the case of four parameter estimation for very skewed distributions such that the excess kurtosis approaches (3/2) times the square of the skewness. This boundary line is produced by extremely skewed distributions with very large values of one of the parameters and very small values of the other parameter. See for a numerical example and further comments about this rear edge boundary line (sample excess kurtosis - (3/2)(sample skewness)2 = 0). As remarked by Karl Pearson himself this issue may not be of much practical importance as this trouble arises only for very skewed J-shaped (or mirror-image J-shaped) distributions with very different values of shape parameters that are unlikely to occur much in practice). The usual skewed-bell-shape distributions that occur in practice do not have this parameter estimation problem. The remaining two parameters can be determined using the sample mean and the sample variance using a variety of equations. One alternative is to calculate the support interval range based on the sample variance and the sample kurtosis. For this purpose one can solve, in terms of the range , the equation expressing the excess kurtosis in terms of the sample variance, and the sample size ν (see and ): to obtain: Another alternative is to calculate the support interval range based on the sample variance and the sample skewness. For this purpose one can solve, in terms of the range , the equation expressing the squared skewness in terms of the sample variance, and the sample size ν (see section titled "Skewness" and "Alternative parametrizations, four parameters"): to obtain: The remaining parameter can be determined from the sample mean and the previously obtained parameters: : and finally, . In the above formulas one may take, for example, as estimates of the sample moments: The estimators G1 for sample skewness and G2 for sample kurtosis are used by DAP/SAS, PSPP/SPSS, and Excel. However, they are not used by BMDP and (according to ) they were not used by MINITAB in 1998. Actually, Joanes and Gill in their 1998 study concluded that the skewness and kurtosis estimators used in BMDP and in MINITAB (at that time) had smaller variance and mean-squared error in normal samples, but the skewness and kurtosis estimators used in DAP/SAS, PSPP/SPSS, namely G1 and G2, had smaller mean-squared error in samples from a very skewed distribution. It is for this reason that we have spelled out "sample skewness", etc., in the above formulas, to make it explicit that the user should choose the best estimator according to the problem at hand, as the best estimator for skewness and kurtosis depends on the amount of skewness (as shown by Joanes and Gill). 
Maximum likelihood Two unknown parameters As is also the case for maximum likelihood estimates for the gamma distribution, the maximum likelihood estimates for the beta distribution do not have a general closed form solution for arbitrary values of the shape parameters. If X1, ..., XN are independent random variables each having a beta distribution, the joint log likelihood function for N iid observations is: Finding the maximum with respect to a shape parameter involves taking the partial derivative with respect to the shape parameter and setting the expression equal to zero yielding the maximum likelihood estimator of the shape parameters: where: since the digamma function denoted ψ(α) is defined as the logarithmic derivative of the gamma function: To ensure that the values with zero tangent slope are indeed a maximum (instead of a saddle-point or a minimum) one has to also satisfy the condition that the curvature is negative. This amounts to satisfying that the second partial derivative with respect to the shape parameters is negative using the previous equations, this is equivalent to: where the trigamma function, denoted ψ1(α), is the second of the polygamma functions, and is defined as the derivative of the digamma function: These conditions are equivalent to stating that the variances of the logarithmically transformed variables are positive, since: Therefore, the condition of negative curvature at a maximum is equivalent to the statements: Alternatively, the condition of negative curvature at a maximum is also equivalent to stating that the following logarithmic derivatives of the geometric means GX and G(1−X) are positive, since: While these slopes are indeed positive, the other slopes are negative: The slopes of the mean and the median with respect to α and β display similar sign behavior. From the condition that at a maximum, the partial derivative with respect to the shape parameter equals zero, we obtain the following system of coupled maximum likelihood estimate equations (for the average log-likelihoods) that needs to be inverted to obtain the (unknown) shape parameter estimates in terms of the (known) average of logarithms of the samples X1, ..., XN: where we recognize as the logarithm of the sample geometric mean and as the logarithm of the sample geometric mean based on (1 − X), the mirror-image of X. For , it follows that . These coupled equations containing digamma functions of the shape parameter estimates must be solved by numerical methods as done, for example, by Beckman et al. Gnanadesikan et al. give numerical solutions for a few cases. N.L.Johnson and S.Kotz suggest that for "not too small" shape parameter estimates , the logarithmic approximation to the digamma function may be used to obtain initial values for an iterative solution, since the equations resulting from this approximation can be solved exactly: which leads to the following solution for the initial values (of the estimate shape parameters in terms of the sample geometric means) for an iterative solution: Alternatively, the estimates provided by the method of moments can instead be used as initial values for an iterative solution of the maximum likelihood coupled equations in terms of the digamma functions. 
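A compact numerical sketch of the two-parameter maximum-likelihood procedure just described: the coupled conditions ψ(α̂) − ψ(α̂ + β̂) = mean ln Xᵢ and ψ(β̂) − ψ(α̂ + β̂) = mean ln(1 − Xᵢ) are handed to a general-purpose root finder, seeded with the method-of-moments estimates as suggested in the text. The function names are ours.

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import fsolve

def beta_mle(x):
    """Solve the coupled digamma equations for the two-parameter beta MLE."""
    x = np.asarray(x, dtype=float)
    mean_log_x = np.mean(np.log(x))          # log of the sample geometric mean of X
    mean_log_1mx = np.mean(np.log1p(-x))     # log of the geometric mean of 1 - X

    def equations(params):
        a, b = params
        return (digamma(a) - digamma(a + b) - mean_log_x,
                digamma(b) - digamma(a + b) - mean_log_1mx)

    # Method-of-moments starting point for the iteration.
    m, v = x.mean(), x.var(ddof=1)
    common = m * (1 - m) / v - 1
    return fsolve(equations, (m * common, (1 - m) * common))

rng = np.random.default_rng(2)
sample = rng.beta(3.0, 1.5, size=5_000)
print(beta_mle(sample))                      # should be close to (3.0, 1.5)
```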
When the distribution is required over a known interval other than [0, 1] with random variable X, say [a, c] with random variable Y, then replace ln(Xi) in the first equation with , and replace ln(1−Xi) in the second equation with (see "Alternative parametrizations, four parameters" section below). If one of the shape parameters is known, the problem is considerably simplified. The following logit transformation can be used to solve for the unknown shape parameter (for skewed cases such that , otherwise, if symmetric, both -equal- parameters are known when one is known): This logit transformation is the logarithm of the transformation that divides the variable X by its mirror-image (X/(1 - X) resulting in the "inverted beta distribution" or beta prime distribution (also known as beta distribution of the second kind or Pearson's Type VI) with support [0, +∞). As previously discussed in the section "Moments of logarithmically transformed random variables," the logit transformation , studied by Johnson, extends the finite support [0, 1] based on the original variable X to infinite support in both directions of the real line (−∞, +∞). If, for example, is known, the unknown parameter can be obtained in terms of the inverse digamma function of the right hand side of this equation: In particular, if one of the shape parameters has a value of unity, for example for (the power function distribution with bounded support [0,1]), using the identity ψ(x + 1) = ψ(x) + 1/x in the equation , the maximum likelihood estimator for the unknown parameter is, exactly: The beta has support [0, 1], therefore , and hence , and therefore In conclusion, the maximum likelihood estimates of the shape parameters of a beta distribution are (in general) a complicated function of the sample geometric mean, and of the sample geometric mean based on (1−X), the mirror-image of X. One may ask, if the variance (in addition to the mean) is necessary to estimate two shape parameters with the method of moments, why is the (logarithmic or geometric) variance not necessary to estimate two shape parameters with the maximum likelihood method, for which only the geometric means suffice? The answer is because the mean does not provide as much information as the geometric mean. For a beta distribution with equal shape parameters α = β, the mean is exactly 1/2, regardless of the value of the shape parameters, and therefore regardless of the value of the statistical dispersion (the variance). On the other hand, the geometric mean of a beta distribution with equal shape parameters α = β, depends on the value of the shape parameters, and therefore it contains more information. Also, the geometric mean of a beta distribution does not satisfy the symmetry conditions satisfied by the mean, therefore, by employing both the geometric mean based on X and geometric mean based on (1 − X), the maximum likelihood method is able to provide best estimates for both parameters α = β, without need of employing the variance. One can express the joint log likelihood per N iid observations in terms of the sufficient statistics (the sample geometric means) as follows: We can plot the joint log likelihood per N observations for fixed values of the sample geometric means to see the behavior of the likelihood function as a function of the shape parameters α and β. In such a plot, the shape parameter estimators correspond to the maxima of the likelihood function. 
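The surface just described can be examined numerically: since the joint log likelihood per observation depends on the data only through the two sample geometric means, it can be evaluated on a grid of (α, β) values and its maximum located. A minimal sketch, assuming the per-observation form (α − 1) ln ĜX + (β − 1) ln Ĝ(1−X) − ln B(α, β); the names are ours.

```python
import numpy as np
from scipy.special import betaln

rng = np.random.default_rng(4)
sample = rng.beta(2.0, 3.0, size=5_000)
log_gx = np.mean(np.log(sample))         # ln(geometric mean of X)
log_g1mx = np.mean(np.log1p(-sample))    # ln(geometric mean of 1 - X)

def loglik_per_obs(a, b):
    """Joint log likelihood per observation; depends only on the geometric means."""
    return (a - 1) * log_gx + (b - 1) * log_g1mx - betaln(a, b)

# Evaluate on a grid and locate the maximum (equivalently, the minimum cross-entropy).
alphas = np.linspace(0.1, 6, 300)
betas = np.linspace(0.1, 6, 300)
A, B = np.meshgrid(alphas, betas, indexing="ij")
surface = loglik_per_obs(A, B)
i, j = np.unravel_index(np.argmax(surface), surface.shape)
print("grid maximum near alpha =", alphas[i], " beta =", betas[j])
```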
See the accompanying graph that shows that all the likelihood functions intersect at α = β = 1, which corresponds to the values of the shape parameters that give the maximum entropy (the maximum entropy occurs for shape parameters equal to unity: the uniform distribution). It is evident from the plot that the likelihood function gives sharp peaks for values of the shape parameter estimators close to zero, but that for values of the shape parameters estimators greater than one, the likelihood function becomes quite flat, with less defined peaks. Obviously, the maximum likelihood parameter estimation method for the beta distribution becomes less acceptable for larger values of the shape parameter estimators, as the uncertainty in the peak definition increases with the value of the shape parameter estimators. One can arrive at the same conclusion by noticing that the expression for the curvature of the likelihood function is in terms of the geometric variances These variances (and therefore the curvatures) are much larger for small values of the shape parameter α and β. However, for shape parameter values α, β > 1, the variances (and therefore the curvatures) flatten out. Equivalently, this result follows from the Cramér–Rao bound, since the Fisher information matrix components for the beta distribution are these logarithmic variances. The Cramér–Rao bound states that the variance of any unbiased estimator of α is bounded by the reciprocal of the Fisher information: so the variance of the estimators increases with increasing α and β, as the logarithmic variances decrease. Also one can express the joint log likelihood per N iid observations in terms of the digamma function expressions for the logarithms of the sample geometric means as follows: this expression is identical to the negative of the cross-entropy (see section on "Quantities of information (entropy)"). Therefore, finding the maximum of the joint log likelihood of the shape parameters, per N iid observations, is identical to finding the minimum of the cross-entropy for the beta distribution, as a function of the shape parameters. with the cross-entropy defined as follows: Four unknown parameters The procedure is similar to the one followed in the two unknown parameter case. If Y1, ..., YN are independent random variables each having a beta distribution with four parameters, the joint log likelihood function for N iid observations is: Finding the maximum with respect to a shape parameter involves taking the partial derivative with respect to the shape parameter and setting the expression equal to zero yielding the maximum likelihood estimator of the shape parameters: these equations can be re-arranged as the following system of four coupled equations (the first two equations are geometric means and the second two equations are the harmonic means) in terms of the maximum likelihood estimates for the four parameters : with sample geometric means: The parameters are embedded inside the geometric mean expressions in a nonlinear way (to the power 1/N). This precludes, in general, a closed form solution, even for an initial value approximation for iteration purposes. One alternative is to use as initial values for iteration the values obtained from the method of moments solution for the four parameter case. Furthermore, the expressions for the harmonic means are well-defined only for , which precludes a maximum likelihood solution for shape parameters less than unity in the four-parameter case. 
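For the four-unknown-parameter case, a generic numerical optimizer is one practical route. The sketch below leans on SciPy's built-in maximum-likelihood fitter for the beta family, which estimates the two shape parameters together with a location and a scale; under the notation above these correspond to a = loc and c = loc + scale. This is offered only as a numerical illustration, not as the iteration scheme described in the text, and the estimates can be sensitive near the support boundaries; the shape values were chosen above 2 so that the four-parameter problem is regular.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Four-parameter beta sample: shapes (2.5, 4), support [a, c] = [10, 30].
a_true, c_true = 10.0, 30.0
sample = a_true + (c_true - a_true) * rng.beta(2.5, 4.0, size=20_000)

# SciPy's generic ML fit returns (alpha, beta, loc, scale), with loc = a and scale = c - a.
alpha_hat, beta_hat, loc_hat, scale_hat = stats.beta.fit(sample)
print("alpha, beta:", alpha_hat, beta_hat)
print("support [a, c]:", loc_hat, loc_hat + scale_hat)
```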
Fisher's information matrix for the four parameter case is positive-definite only for α, β > 2 (for further discussion, see section on Fisher information matrix, four parameter case), for bell-shaped (symmetric or unsymmetric) beta distributions, with inflection points located to either side of the mode. The following Fisher information components (that represent the expectations of the curvature of the log likelihood function) have singularities at the following values: (for further discussion see section on Fisher information matrix). Thus, it is not possible to strictly carry on the maximum likelihood estimation for some well known distributions belonging to the four-parameter beta distribution family, like the uniform distribution (Beta(1, 1, a, c)), and the arcsine distribution (Beta(1/2, 1/2, a, c)). N.L.Johnson and S.Kotz ignore the equations for the harmonic means and instead suggest "If a and c are unknown, and maximum likelihood estimators of a, c, α and β are required, the above procedure (for the two unknown parameter case, with X transformed as X = (Y − a)/(c − a)) can be repeated using a succession of trial values of a and c, until the pair (a, c) for which maximum likelihood (given a and c) is as great as possible, is attained" (where, for the purpose of clarity, their notation for the parameters has been translated into the present notation). Fisher information matrix Let a random variable X have a probability density f(x;α). The partial derivative with respect to the (unknown, and to be estimated) parameter α of the log likelihood function is called the score. The second moment of the score is called the Fisher information: The expectation of the score is zero, therefore the Fisher information is also the second moment centered on the mean of the score: the variance of the score. If the log likelihood function is twice differentiable with respect to the parameter α, and under certain regularity conditions, then the Fisher information may also be written as follows (which is often a more convenient form for calculation purposes): Thus, the Fisher information is the negative of the expectation of the second derivative with respect to the parameter α of the log likelihood function. Therefore, Fisher information is a measure of the curvature of the log likelihood function of α. A low curvature (and therefore high radius of curvature), flatter log likelihood function curve has low Fisher information; while a log likelihood function curve with large curvature (and therefore low radius of curvature) has high Fisher information. When the Fisher information matrix is computed at the evaluates of the parameters ("the observed Fisher information matrix") it is equivalent to the replacement of the true log likelihood surface by a Taylor's series approximation, taken as far as the quadratic terms. The word information, in the context of Fisher information, refers to information about the parameters. Information such as: estimation, sufficiency and properties of variances of estimators. The Cramér–Rao bound states that the inverse of the Fisher information is a lower bound on the variance of any estimator of a parameter α: The precision to which one can estimate the estimator of a parameter α is limited by the Fisher Information of the log likelihood function. 
The Fisher information is a measure of the minimum error involved in estimating a parameter of a distribution and it can be viewed as a measure of the resolving power of an experiment needed to discriminate between two alternative hypothesis of a parameter. When there are N parameters then the Fisher information takes the form of an N×N positive semidefinite symmetric matrix, the Fisher information matrix, with typical element: Under certain regularity conditions, the Fisher Information Matrix may also be written in the following form, which is often more convenient for computation: With X1, ..., XN iid random variables, an N-dimensional "box" can be constructed with sides X1, ..., XN. Costa and Cover show that the (Shannon) differential entropy h(X) is related to the volume of the typical set (having the sample entropy close to the true entropy), while the Fisher information is related to the surface of this typical set. Two parameters For X1, ..., XN independent random variables each having a beta distribution parametrized with shape parameters α and β, the joint log likelihood function for N iid observations is: therefore the joint log likelihood function per N iid observations is For the two parameter case, the Fisher information has 4 components: 2 diagonal and 2 off-diagonal. Since the Fisher information matrix is symmetric, one of these off diagonal components is independent. Therefore, the Fisher information matrix has 3 independent components (2 diagonal and 1 off diagonal). Aryal and Nadarajah calculated Fisher's information matrix for the four-parameter case, from which the two parameter case can be obtained as follows: Since the Fisher information matrix is symmetric The Fisher information components are equal to the log geometric variances and log geometric covariance. Therefore, they can be expressed as trigamma functions, denoted ψ1(α), the second of the polygamma functions, defined as the derivative of the digamma function: These derivatives are also derived in the and plots of the log likelihood function are also shown in that section. contains plots and further discussion of the Fisher information matrix components: the log geometric variances and log geometric covariance as a function of the shape parameters α and β. contains formulas for moments of logarithmically transformed random variables. Images for the Fisher information components and are shown in . The determinant of Fisher's information matrix is of interest (for example for the calculation of Jeffreys prior probability). From the expressions for the individual components of the Fisher information matrix, it follows that the determinant of Fisher's (symmetric) information matrix for the beta distribution is: From Sylvester's criterion (checking whether the diagonal elements are all positive), it follows that the Fisher information matrix for the two parameter case is positive-definite (under the standard condition that the shape parameters are positive α > 0 and β > 0). Four parameters If Y1, ..., YN are independent random variables each having a beta distribution with four parameters: the exponents α and β, and also a (the minimum of the distribution range), and c (the maximum of the distribution range) (section titled "Alternative parametrizations", "Four parameters"), with probability density function: the joint log likelihood function per N iid observations is: For the four parameter case, the Fisher information has 4*4=16 components. It has 12 off-diagonal components = (4×4 total − 4 diagonal). 
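The trigamma expressions above fully determine the two-parameter Fisher information matrix: the diagonal entries are ψ₁(α) − ψ₁(α + β) and ψ₁(β) − ψ₁(α + β), and the common off-diagonal entry is −ψ₁(α + β). A short sketch, assuming these standard expressions, evaluates the matrix, its determinant and the Cramér–Rao bound; the function names are ours.

```python
import numpy as np
from scipy.special import polygamma

def trigamma(x):
    return polygamma(1, x)

def beta_fisher_info(a, b):
    """Per-observation Fisher information matrix of Beta(a, b) via trigamma functions."""
    i_aa = trigamma(a) - trigamma(a + b)    # = var[ln X]
    i_bb = trigamma(b) - trigamma(a + b)    # = var[ln(1 - X)]
    i_ab = -trigamma(a + b)                 # = cov[ln X, ln(1 - X)]
    return np.array([[i_aa, i_ab], [i_ab, i_bb]])

I = beta_fisher_info(2.0, 3.0)
det = np.linalg.det(I)
# For a symmetric 2x2 matrix, positive-definiteness needs I[0,0] > 0 and det > 0.
print("Fisher information:\n", I)
print("determinant:", det, " positive-definite:", I[0, 0] > 0 and det > 0)
print("Cramer-Rao lower bound (per observation):\n", np.linalg.inv(I))
```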
Since the Fisher information matrix is symmetric, half of these components (12/2=6) are independent. Therefore, the Fisher information matrix has 6 independent off-diagonal + 4 diagonal = 10 independent components. Aryal and Nadarajah calculated Fisher's information matrix for the four parameter case as follows: In the above expressions, the use of X instead of Y in the expressions var[ln(X)] = ln(varGX) is not an error. The expressions in terms of the log geometric variances and log geometric covariance occur as functions of the two parameter X ~ Beta(α, β) parametrization because when taking the partial derivatives with respect to the exponents (α, β) in the four parameter case, one obtains the identical expressions as for the two parameter case: these terms of the four parameter Fisher information matrix are independent of the minimum a and maximum c of the distribution's range. The only non-zero term upon double differentiation of the log likelihood function with respect to the exponents α and β is the second derivative of the log of the beta function: ln(B(α, β)). This term is independent of the minimum a and maximum c of the distribution's range. Double differentiation of this term results in trigamma functions. The sections titled "Maximum likelihood", "Two unknown parameters" and "Four unknown parameters" also show this fact. The Fisher information for N i.i.d. samples is N times the individual Fisher information (eq. 11.279, page 394 of Cover and Thomas). (Aryal and Nadarajah take a single observation, N = 1, to calculate the following components of the Fisher information, which leads to the same result as considering the derivatives of the log likelihood per N observations. Moreover, below the erroneous expression for in Aryal and Nadarajah has been corrected.) The lower two diagonal entries of the Fisher information matrix, with respect to the parameter a (the minimum of the distribution's range): , and with respect to the parameter c (the maximum of the distribution's range): are only defined for exponents α > 2 and β > 2 respectively. The Fisher information matrix component for the minimum a approaches infinity for exponent α approaching 2 from above, and the Fisher information matrix component for the maximum c approaches infinity for exponent β approaching 2 from above. The Fisher information matrix for the four parameter case does not depend on the individual values of the minimum a and the maximum c, but only on the total range (c − a). Moreover, the components of the Fisher information matrix that depend on the range (c − a), depend only through its inverse (or the square of the inverse), such that the Fisher information decreases for increasing range (c − a). The accompanying images show the Fisher information components and . Images for the Fisher information components and are shown in . All these Fisher information components look like a basin, with the "walls" of the basin being located at low values of the parameters. The following four-parameter-beta-distribution Fisher information components can be expressed in terms of the two-parameter: X ~ Beta(α, β) expectations of the transformed ratio ((1 − X)/X) and of its mirror image (X/(1 − X)), scaled by the range (c − a), which may be helpful for interpretation: These are also the expected values of the "inverted beta distribution" or beta prime distribution (also known as beta distribution of the second kind or Pearson's Type VI) and its mirror image, scaled by the range (c − a). 
Also, the following Fisher information components can be expressed in terms of the harmonic (1/X) variances or of variances based on the ratio transformed variables ((1-X)/X) as follows: See section "Moments of linearly transformed, product and inverted random variables" for these expectations. The determinant of Fisher's information matrix is of interest (for example for the calculation of Jeffreys prior probability). From the expressions for the individual components, it follows that the determinant of Fisher's (symmetric) information matrix for the beta distribution with four parameters is: Using Sylvester's criterion (checking whether the diagonal elements are all positive), and since diagonal components and have singularities at α=2 and β=2 it follows that the Fisher information matrix for the four parameter case is positive-definite for α>2 and β>2. Since for α > 2 and β > 2 the beta distribution is (symmetric or unsymmetric) bell shaped, it follows that the Fisher information matrix is positive-definite only for bell-shaped (symmetric or unsymmetric) beta distributions, with inflection points located to either side of the mode. Thus, important well known distributions belonging to the four-parameter beta distribution family, like the parabolic distribution (Beta(2,2,a,c)) and the uniform distribution (Beta(1,1,a,c)) have Fisher information components () that blow up (approach infinity) in the four-parameter case (although their Fisher information components are all defined for the two parameter case). The four-parameter Wigner semicircle distribution (Beta(3/2,3/2,a,c)) and arcsine distribution (Beta(1/2,1/2,a,c)) have negative Fisher information determinants for the four-parameter case. Bayesian inference The use of Beta distributions in Bayesian inference is due to the fact that they provide a family of conjugate prior probability distributions for binomial (including Bernoulli) and geometric distributions. The domain of the beta distribution can be viewed as a probability, and in fact the beta distribution is often used to describe the distribution of a probability value p: Examples of beta distributions used as prior probabilities to represent ignorance of prior parameter values in Bayesian inference are Beta(1,1), Beta(0,0) and Beta(1/2,1/2). Rule of succession A classic application of the beta distribution is the rule of succession, introduced in the 18th century by Pierre-Simon Laplace in the course of treating the sunrise problem. It states that, given s successes in n conditionally independent Bernoulli trials with probability p, that the estimate of the expected value in the next trial is . This estimate is the expected value of the posterior distribution over p, namely Beta(s+1, n−s+1), which is given by Bayes' rule if one assumes a uniform prior probability over p (i.e., Beta(1, 1)) and then observes that p generated s successes in n trials. Laplace's rule of succession has been criticized by prominent scientists. R. T. Cox described Laplace's application of the rule of succession to the sunrise problem ( p. 89) as "a travesty of the proper use of the principle". Keynes remarks ( Ch.XXX, p. 382) "indeed this is so foolish a theorem that to entertain it is discreditable". 
Karl Pearson showed that the probability that the next (n + 1) trials will be successes, after n successes in n trials, is only 50%, which has been considered too low by scientists like Jeffreys and unacceptable as a representation of the scientific process of experimentation to test a proposed scientific law. As pointed out by Jeffreys ( p. 128) (crediting C. D. Broad ) Laplace's rule of succession establishes a high probability of success ((n+1)/(n+2)) in the next trial, but only a moderate probability (50%) that a further sample (n+1) comparable in size will be equally successful. As pointed out by Perks, "The rule of succession itself is hard to accept. It assigns a probability to the next trial which implies the assumption that the actual run observed is an average run and that we are always at the end of an average run. It would, one would think, be more reasonable to assume that we were in the middle of an average run. Clearly a higher value for both probabilities is necessary if they are to accord with reasonable belief." These problems with Laplace's rule of succession motivated Haldane, Perks, Jeffreys and others to search for other forms of prior probability (see the next ). According to Jaynes, the main problem with the rule of succession is that it is not valid when s=0 or s=n (see rule of succession, for an analysis of its validity). Bayes–Laplace prior probability (Beta(1,1)) The beta distribution achieves maximum differential entropy for Beta(1,1): the uniform probability density, for which all values in the domain of the distribution have equal density. This uniform distribution Beta(1,1) was suggested ("with a great deal of doubt") by Thomas Bayes as the prior probability distribution to express ignorance about the correct prior distribution. This prior distribution was adopted (apparently, from his writings, with little sign of doubt) by Pierre-Simon Laplace, and hence it was also known as the "Bayes–Laplace rule" or the "Laplace rule" of "inverse probability" in publications of the first half of the 20th century. In the later part of the 19th century and early part of the 20th century, scientists realized that the assumption of uniform "equal" probability density depended on the actual functions (for example whether a linear or a logarithmic scale was most appropriate) and parametrizations used. In particular, the behavior near the ends of distributions with finite support (for example near x = 0, for a distribution with initial support at x = 0) required particular attention. Keynes ( Ch.XXX, p. 381) criticized the use of Bayes's uniform prior probability (Beta(1,1)) that all values between zero and one are equiprobable, as follows: "Thus experience, if it shows anything, shows that there is a very marked clustering of statistical ratios in the neighborhoods of zero and unity, of those for positive theories and for correlations between positive qualities in the neighborhood of zero, and of those for negative theories and for correlations between negative qualities in the neighborhood of unity. " Haldane's prior probability (Beta(0,0)) The Beta(0,0) distribution was proposed by J.B.S. Haldane, who suggested that the prior probability representing complete uncertainty should be proportional to p−1(1−p)−1. The function p−1(1−p)−1 can be viewed as the limit of the numerator of the beta distribution as both shape parameters approach zero: α, β → 0. 
The Beta function (in the denominator of the beta distribution) approaches infinity, for both parameters approaching zero, α, β → 0. Therefore, p−1(1−p)−1 divided by the Beta function approaches a 2-point Bernoulli distribution with equal probability 1/2 at each end, at 0 and 1, and nothing in between, as α, β → 0. A coin-toss: one face of the coin being at 0 and the other face being at 1. The Haldane prior probability distribution Beta(0,0) is an "improper prior" because its integration (from 0 to 1) fails to strictly converge to 1 due to the singularities at each end. However, this is not an issue for computing posterior probabilities unless the sample size is very small. Furthermore, Zellner points out that on the log-odds scale, (the logit transformation ln(p/1 − p)), the Haldane prior is the uniformly flat prior. The fact that a uniform prior probability on the logit transformed variable ln(p/1 − p) (with domain (−∞, ∞)) is equivalent to the Haldane prior on the domain [0, 1] was pointed out by Harold Jeffreys in the first edition (1939) of his book Theory of Probability ( p. 123). Jeffreys writes "Certainly if we take the Bayes–Laplace rule right up to the extremes we are led to results that do not correspond to anybody's way of thinking. The (Haldane) rule dx/(x(1 − x)) goes too far the other way. It would lead to the conclusion that if a sample is of one type with respect to some property there is a probability 1 that the whole population is of that type." The fact that "uniform" depends on the parametrization, led Jeffreys to seek a form of prior that would be invariant under different parametrizations. Jeffreys' prior probability (Beta(1/2,1/2) for a Bernoulli or for a binomial distribution) Harold Jeffreys proposed to use an uninformative prior probability measure that should be invariant under reparameterization: proportional to the square root of the determinant of Fisher's information matrix. For the Bernoulli distribution, this can be shown as follows: for a coin that is "heads" with probability p ∈ [0, 1] and is "tails" with probability 1 − p, for a given (H,T) ∈ {(0,1), (1,0)} the probability is pH(1 − p)T. Since T = 1 − H, the Bernoulli distribution is pH(1 − p)1 − H. Considering p as the only parameter, it follows that the log likelihood for the Bernoulli distribution is The Fisher information matrix has only one component (it is a scalar, because there is only one parameter: p), therefore: Similarly, for the Binomial distribution with n Bernoulli trials, it can be shown that Thus, for the Bernoulli, and Binomial distributions, Jeffreys prior is proportional to , which happens to be proportional to a beta distribution with domain variable x = p, and shape parameters α = β = 1/2, the arcsine distribution: It will be shown in the next section that the normalizing constant for Jeffreys prior is immaterial to the final result because the normalizing constant cancels out in Bayes theorem for the posterior probability. Hence Beta(1/2,1/2) is used as the Jeffreys prior for both Bernoulli and binomial distributions. As shown in the next section, when using this expression as a prior probability times the likelihood in Bayes theorem, the posterior probability turns out to be a beta distribution. It is important to realize, however, that Jeffreys prior is proportional to for the Bernoulli and binomial distribution, but not for the beta distribution. 
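For a single Bernoulli trial the Fisher information worked out above is 1/(p(1 − p)), so Jeffreys prior is proportional to p^(−1/2)(1 − p)^(−1/2), that is, to the arcsine density Beta(1/2, 1/2). A brief numerical check of that proportionality (the normalizing constant turns out to be π):

```python
import numpy as np
from scipy import stats

p = np.linspace(0.01, 0.99, 99)

# Fisher information of a single Bernoulli trial: I(p) = 1 / (p (1 - p)).
fisher_info = 1.0 / (p * (1 - p))
jeffreys_unnormalized = np.sqrt(fisher_info)

# Jeffreys prior should be proportional to the Beta(1/2, 1/2) (arcsine) density.
arcsine_pdf = stats.beta.pdf(p, 0.5, 0.5)
ratio = jeffreys_unnormalized / arcsine_pdf

# The ratio is constant across p and equals the normalizing constant pi.
print(np.allclose(ratio, ratio[0]), ratio[0], np.pi)
```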
Jeffreys prior for the beta distribution is given by the determinant of Fisher's information for the beta distribution, which, as shown in the is a function of the trigamma function ψ1 of shape parameters α and β as follows: As previously discussed, Jeffreys prior for the Bernoulli and binomial distributions is proportional to the arcsine distribution Beta(1/2,1/2), a one-dimensional curve that looks like a basin as a function of the parameter p of the Bernoulli and binomial distributions. The walls of the basin are formed by p approaching the singularities at the ends p → 0 and p → 1, where Beta(1/2,1/2) approaches infinity. Jeffreys prior for the beta distribution is a 2-dimensional surface (embedded in a three-dimensional space) that looks like a basin with only two of its walls meeting at the corner α = β = 0 (and missing the other two walls) as a function of the shape parameters α and β of the beta distribution. The two adjoining walls of this 2-dimensional surface are formed by the shape parameters α and β approaching the singularities (of the trigamma function) at α, β → 0. It has no walls for α, β → ∞ because in this case the determinant of Fisher's information matrix for the beta distribution approaches zero. It will be shown in the next section that Jeffreys prior probability results in posterior probabilities (when multiplied by the binomial likelihood function) that are intermediate between the posterior probability results of the Haldane and Bayes prior probabilities. Jeffreys prior may be difficult to obtain analytically, and for some cases it just doesn't exist (even for simple distribution functions like the asymmetric triangular distribution). Berger, Bernardo and Sun, in a 2009 paper defined a reference prior probability distribution that (unlike Jeffreys prior) exists for the asymmetric triangular distribution. They cannot obtain a closed-form expression for their reference prior, but numerical calculations show it to be nearly perfectly fitted by the (proper) prior where θ is the vertex variable for the asymmetric triangular distribution with support [0, 1] (corresponding to the following parameter values in Wikipedia's article on the triangular distribution: vertex c = θ, left end a = 0,and right end b = 1). Berger et al. also give a heuristic argument that Beta(1/2,1/2) could indeed be the exact Berger–Bernardo–Sun reference prior for the asymmetric triangular distribution. Therefore, Beta(1/2,1/2) not only is Jeffreys prior for the Bernoulli and binomial distributions, but also seems to be the Berger–Bernardo–Sun reference prior for the asymmetric triangular distribution (for which the Jeffreys prior does not exist), a distribution used in project management and PERT analysis to describe the cost and duration of project tasks. Clarke and Barron prove that, among continuous positive priors, Jeffreys prior (when it exists) asymptotically maximizes Shannon's mutual information between a sample of size n and the parameter, and therefore Jeffreys prior is the most uninformative prior (measuring information as Shannon information). The proof rests on an examination of the Kullback–Leibler divergence between probability density functions for iid random variables. 
Effect of different prior probability choices on the posterior beta distribution If samples are drawn from the population of a random variable X that result in s successes and f failures in n Bernoulli trials n = s + f, then the likelihood function for parameters s and f given x = p (the notation x = p in the expressions below will emphasize that the domain x stands for the value of the parameter p in the binomial distribution), is the following binomial distribution: If beliefs about prior probability information are reasonably well approximated by a beta distribution with parameters α Prior and β Prior, then: According to Bayes' theorem for a continuous event space, the posterior probability density is given by the product of the prior probability and the likelihood function (given the evidence s and f = n − s), normalized so that the area under the curve equals one, as follows: The binomial coefficient appears both in the numerator and the denominator of the posterior probability, and it does not depend on the integration variable x, hence it cancels out, and it is irrelevant to the final result. Similarly the normalizing factor for the prior probability, the beta function B(αPrior,βPrior) cancels out and it is immaterial to the final result. The same posterior probability result can be obtained if one uses an un-normalized prior because the normalizing factors all cancel out. Several authors (including Jeffreys himself) thus use an un-normalized prior formula since the normalization constant cancels out. The numerator of the posterior probability ends up being just the (un-normalized) product of the prior probability and the likelihood function, and the denominator is its integral from zero to one. The beta function in the denominator, B(s + α Prior, n − s + β Prior), appears as a normalization constant to ensure that the total posterior probability integrates to unity. The ratio s/n of the number of successes to the total number of trials is a sufficient statistic in the binomial case, which is relevant for the following results. For the Bayes' prior probability (Beta(1,1)), the posterior probability is: For the Jeffreys' prior probability (Beta(1/2,1/2)), the posterior probability is: and for the Haldane prior probability (Beta(0,0)), the posterior probability is: From the above expressions it follows that for s/n = 1/2) all the above three prior probabilities result in the identical location for the posterior probability mean = mode = 1/2. For s/n < 1/2, the mean of the posterior probabilities, using the following priors, are such that: mean for Bayes prior > mean for Jeffreys prior > mean for Haldane prior. For s/n > 1/2 the order of these inequalities is reversed such that the Haldane prior probability results in the largest posterior mean. The Haldane prior probability Beta(0,0) results in a posterior probability density with mean (the expected value for the probability of success in the "next" trial) identical to the ratio s/n of the number of successes to the total number of trials. Therefore, the Haldane prior results in a posterior probability with expected value in the next trial equal to the maximum likelihood. The Bayes prior probability Beta(1,1) results in a posterior probability density with mode identical to the ratio s/n (the maximum likelihood). 
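Since the beta prior is conjugate to the binomial likelihood, the posterior after s successes in n trials is Beta(s + α_prior, n − s + β_prior), and the comparison of posterior means under the three priors above is a one-line computation. A minimal sketch with illustrative values of n and s:

```python
n, s = 10, 2   # illustrative: 2 successes in 10 Bernoulli trials

priors = {
    "Haldane  Beta(0, 0)":     (0.0, 0.0),
    "Jeffreys Beta(1/2, 1/2)": (0.5, 0.5),
    "Bayes    Beta(1, 1)":     (1.0, 1.0),
}

for name, (a0, b0) in priors.items():
    a_post, b_post = a0 + s, b0 + (n - s)          # conjugate update
    mean = a_post / (a_post + b_post)              # posterior mean
    print(f"{name}:  posterior Beta({a_post}, {b_post}),  mean = {mean:.4f}")

# For s/n < 1/2 the posterior means order as Bayes > Jeffreys > Haldane,
# and the Haldane posterior mean equals the maximum-likelihood estimate s/n.
print("s/n =", s / n)
```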
In the case that 100% of the trials have been successful s = n, the Bayes prior probability Beta(1,1) results in a posterior expected value equal to the rule of succession (n + 1)/(n + 2), while the Haldane prior Beta(0,0) results in a posterior expected value of 1 (absolute certainty of success in the next trial). Jeffreys prior probability results in a posterior expected value equal to (n + 1/2)/(n + 1). Perks (p. 303) points out: "This provides a new rule of succession and expresses a 'reasonable' position to take up, namely, that after an unbroken run of n successes we assume a probability for the next trial equivalent to the assumption that we are about half-way through an average run, i.e. that we expect a failure once in (2n + 2) trials. The Bayes–Laplace rule implies that we are about at the end of an average run or that we expect a failure once in (n + 2) trials. The comparison clearly favours the new result (what is now called Jeffreys prior) from the point of view of 'reasonableness'." Conversely, in the case that 100% of the trials have resulted in failure (s = 0), the Bayes prior probability Beta(1,1) results in a posterior expected value for success in the next trial equal to 1/(n + 2), while the Haldane prior Beta(0,0) results in a posterior expected value of success in the next trial of 0 (absolute certainty of failure in the next trial). Jeffreys prior probability results in a posterior expected value for success in the next trial equal to (1/2)/(n + 1), which Perks (p. 303) points out: "is a much more reasonably remote result than the Bayes–Laplace result 1/(n + 2)". Jaynes questions (for the uniform prior Beta(1,1)) the use of these formulas for the cases s = 0 or s = n because the integrals do not converge (Beta(1,1) is an improper prior for s = 0 or s = n). In practice, the conditions 0<s<n necessary for a mode to exist between both ends for the Bayes prior are usually met, and therefore the Bayes prior (as long as 0 < s < n) results in a posterior mode located between both ends of the domain. As remarked in the section on the rule of succession, K. Pearson showed that after n successes in n trials the posterior probability (based on the Bayes Beta(1,1) distribution as the prior probability) that the next (n + 1) trials will all be successes is exactly 1/2, whatever the value of n. Based on the Haldane Beta(0,0) distribution as the prior probability, this posterior probability is 1 (absolute certainty that after n successes in n trials the next (n + 1) trials will all be successes). Perks (p. 303) shows that, for what is now known as the Jeffreys prior, this probability is ((n + 1/2)/(n + 1))((n + 3/2)/(n + 2))...(2n + 1/2)/(2n + 1), which for n = 1, 2, 3 gives 15/24, 315/480, 9009/13440; rapidly approaching a limiting value of as n tends to infinity. Perks remarks that what is now known as the Jeffreys prior: "is clearly more 'reasonable' than either the Bayes–Laplace result or the result on the (Haldane) alternative rule rejected by Jeffreys which gives certainty as the probability. It clearly provides a very much better correspondence with the process of induction. Whether it is 'absolutely' reasonable for the purpose, i.e. whether it is yet large enough, without the absurdity of reaching unity, is a matter for others to decide. But it must be realized that the result depends on the assumption of complete indifference and absence of knowledge prior to the sampling experiment." 
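Perks' probability that a further n + 1 trials will all succeed, after n successes in n trials under what is now called the Jeffreys prior, is the product of the terms (n + k + 1/2)/(n + k + 1) for k = 0, ..., n. The sketch below reproduces the quoted values (in lowest terms) and shows the product settling numerically toward 1/√2 ≈ 0.707, which appears to be the limiting value left unstated above.

```python
from fractions import Fraction
from math import sqrt

def perks_probability(n):
    """prod_{k=0..n} (n + k + 1/2) / (n + k + 1), kept as an exact fraction."""
    prob = Fraction(1)
    for k in range(n + 1):
        prob *= Fraction(2 * (n + k) + 1, 2 * (n + k) + 2)
    return prob

for n in (1, 2, 3):
    p = perks_probability(n)
    print(n, p, float(p))   # 5/8, 21/32, 429/640 -- the quoted values in lowest terms

for n in (10, 100, 10_000):
    print(n, float(perks_probability(n)))
print("1/sqrt(2) =", 1 / sqrt(2))
```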
Following are the variances of the posterior distribution obtained with these three prior probability distributions: for the Bayes' prior probability (Beta(1,1)), the posterior variance is: for the Jeffreys' prior probability (Beta(1/2,1/2)), the posterior variance is: and for the Haldane prior probability (Beta(0,0)), the posterior variance is: So, as remarked by Silvey, for large n, the variance is small and hence the posterior distribution is highly concentrated, whereas the assumed prior distribution was very diffuse. This is in accord with what one would hope for, as vague prior knowledge is transformed (through Bayes theorem) into a more precise posterior knowledge by an informative experiment. For small n the Haldane Beta(0,0) prior results in the largest posterior variance while the Bayes Beta(1,1) prior results in the more concentrated posterior. Jeffreys prior Beta(1/2,1/2) results in a posterior variance in between the other two. As n increases, the variance rapidly decreases so that the posterior variance for all three priors converges to approximately the same value (approaching zero variance as n → ∞). Recalling the previous result that the Haldane prior probability Beta(0,0) results in a posterior probability density with mean (the expected value for the probability of success in the "next" trial) identical to the ratio s/n of the number of successes to the total number of trials, it follows from the above expression that also the Haldane prior Beta(0,0) results in a posterior with variance identical to the variance expressed in terms of the max. likelihood estimate s/n and sample size (in ): with the mean μ = s/n and the sample size ν = n. In Bayesian inference, using a prior distribution Beta(αPrior,βPrior) prior to a binomial distribution is equivalent to adding (αPrior − 1) pseudo-observations of "success" and (βPrior − 1) pseudo-observations of "failure" to the actual number of successes and failures observed, then estimating the parameter p of the binomial distribution by the proportion of successes over both real- and pseudo-observations. A uniform prior Beta(1,1) does not add (or subtract) any pseudo-observations since for Beta(1,1) it follows that (αPrior − 1) = 0 and (βPrior − 1) = 0. The Haldane prior Beta(0,0) subtracts one pseudo observation from each and Jeffreys prior Beta(1/2,1/2) subtracts 1/2 pseudo-observation of success and an equal number of failure. This subtraction has the effect of smoothing out the posterior distribution. If the proportion of successes is not 50% (s/n ≠ 1/2) values of αPrior and βPrior less than 1 (and therefore negative (αPrior − 1) and (βPrior − 1)) favor sparsity, i.e. distributions where the parameter p is closer to either 0 or 1. In effect, values of αPrior and βPrior between 0 and 1, when operating together, function as a concentration parameter. The accompanying plots show the posterior probability density functions for sample sizes n ∈ {3,10,50}, successes s ∈ {n/2,n/4} and Beta(αPrior,βPrior) ∈ {Beta(0,0),Beta(1/2,1/2),Beta(1,1)}. Also shown are the cases for n = {4,12,40}, success s = {n/4} and Beta(αPrior,βPrior) ∈ {Beta(0,0),Beta(1/2,1/2),Beta(1,1)}. The first plot shows the symmetric cases, for successes s ∈ {n/2}, with mean = mode = 1/2 and the second plot shows the skewed cases s ∈ {n/4}. The images show that there is little difference between the priors for the posterior with sample size of 50 (characterized by a more pronounced peak near p = 1/2). 
Significant differences appear for very small sample sizes (in particular for the flatter distribution for the degenerate case of sample size = 3). Therefore, the skewed cases, with successes s = {n/4}, show a larger effect from the choice of prior, at small sample size, than the symmetric cases. For symmetric distributions, the Bayes prior Beta(1,1) results in the most "peaky" and highest posterior distributions and the Haldane prior Beta(0,0) results in the flattest and lowest peak distribution. The Jeffreys prior Beta(1/2,1/2) lies in between them. For nearly symmetric, not too skewed distributions the effect of the priors is similar. For very small sample size (in this case for a sample size of 3) and skewed distribution (in this example for s ∈ {n/4}) the Haldane prior can result in a reverse-J-shaped distribution with a singularity at the left end. However, this happens only in degenerate cases (in this example n = 3 and hence s = 3/4 < 1, a degenerate value because s should be greater than unity in order for the posterior of the Haldane prior to have a mode located between the ends, and because s = 3/4 is not an integer number, hence it violates the initial assumption of a binomial distribution for the likelihood) and it is not an issue in generic cases of reasonable sample size (such that the condition 1 < s < n − 1, necessary for a mode to exist between both ends, is fulfilled). In Chapter 12 (p. 385) of his book, Jaynes asserts that the Haldane prior Beta(0,0) describes a prior state of knowledge of complete ignorance, where we are not even sure whether it is physically possible for an experiment to yield either a success or a failure, while the Bayes (uniform) prior Beta(1,1) applies if one knows that both binary outcomes are possible. Jaynes states: "interpret the Bayes–Laplace (Beta(1,1)) prior as describing not a state of complete ignorance, but the state of knowledge in which we have observed one success and one failure...once we have seen at least one success and one failure, then we know that the experiment is a true binary one, in the sense of physical possibility." Jaynes does not specifically discuss Jeffreys prior Beta(1/2,1/2) (Jaynes discussion of "Jeffreys prior" on pp. 181, 423 and on chapter 12 of Jaynes book refers instead to the improper, un-normalized, prior "1/p dp" introduced by Jeffreys in the 1939 edition of his book, seven years before he introduced what is now known as Jeffreys' invariant prior: the square root of the determinant of Fisher's information matrix. "1/p" is Jeffreys' (1946) invariant prior for the exponential distribution, not for the Bernoulli or binomial distributions). However, it follows from the above discussion that Jeffreys Beta(1/2,1/2) prior represents a state of knowledge in between the Haldane Beta(0,0) and Bayes Beta (1,1) prior. Similarly, Karl Pearson in his 1892 book The Grammar of Science (p. 144 of 1900 edition) maintained that the Bayes (Beta(1,1) uniform prior was not a complete ignorance prior, and that it should be used when prior information justified to "distribute our ignorance equally"". K. Pearson wrote: "Yet the only supposition that we appear to have made is this: that, knowing nothing of nature, routine and anomy (from the Greek ανομία, namely: a- "without", and nomos "law") are to be considered as equally likely to occur. Now we were not really justified in making even this assumption, for it involves a knowledge that we do not possess regarding nature. 
We use our experience of the constitution and action of coins in general to assert that heads and tails are equally probable, but we have no right to assert before experience that, as we know nothing of nature, routine and breach are equally probable. In our ignorance we ought to consider before experience that nature may consist of all routines, all anomies (normlessness), or a mixture of the two in any proportion whatever, and that all such are equally probable. Which of these constitutions after experience is the most probable must clearly depend on what that experience has been like." If there is sufficient sampling data, and the posterior probability mode is not located at one of the extremes of the domain (x = 0 or x = 1), the three priors of Bayes (Beta(1,1)), Jeffreys (Beta(1/2,1/2)) and Haldane (Beta(0,0)) should yield similar posterior probability densities. Otherwise, as Gelman et al. (p. 65) point out, "if so few data are available that the choice of noninformative prior distribution makes a difference, one should put relevant information into the prior distribution", or as Berger (p. 125) points out "when different reasonable priors yield substantially different answers, can it be right to state that there is a single answer? Would it not be better to admit that there is scientific uncertainty, with the conclusion depending on prior beliefs?." Occurrence and applications Order statistics The beta distribution has an important application in the theory of order statistics. A basic result is that the distribution of the kth smallest of a sample of size n from a continuous uniform distribution has a beta distribution. This result is summarized as: From this, and application of the theory related to the probability integral transform, the distribution of any individual order statistic from any continuous distribution can be derived. Subjective logic In standard logic, propositions are considered to be either true or false. In contradistinction, subjective logic assumes that humans cannot determine with absolute certainty whether a proposition about the real world is absolutely true or false. In subjective logic the posteriori probability estimates of binary events can be represented by beta distributions. Wavelet analysis A wavelet is a wave-like oscillation with an amplitude that starts out at zero, increases, and then decreases back to zero. It can typically be visualized as a "brief oscillation" that promptly decays. Wavelets can be used to extract information from many different kinds of data, including – but certainly not limited to – audio signals and images. Thus, wavelets are purposefully crafted to have specific properties that make them useful for signal processing. Wavelets are localized in both time and frequency whereas the standard Fourier transform is only localized in frequency. Therefore, standard Fourier Transforms are only applicable to stationary processes, while wavelets are applicable to non-stationary processes. Continuous wavelets can be constructed based on the beta distribution. Beta wavelets can be viewed as a soft variety of Haar wavelets whose shape is fine-tuned by two shape parameters α and β. Population genetics The Balding–Nichols model is a two-parameter parametrization of the beta distribution used in population genetics. It is a statistical description of the allele frequencies in the components of a sub-divided population: where and ; here F is (Wright's) genetic distance between two populations. 
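The order-statistic result summarized earlier in this section states that the kth smallest of n independent Uniform(0, 1) draws follows Beta(k, n + 1 − k). A quick simulation check (the particular n and k are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, k = 10, 3                      # k-th smallest of n uniforms (arbitrary choice)

# Simulate the k-th order statistic many times.
samples = np.sort(rng.uniform(size=(50_000, n)), axis=1)[:, k - 1]

# Compare with the Beta(k, n + 1 - k) distribution.
print("empirical mean:", samples.mean(), " theoretical mean:", k / (n + 1))
ks = stats.kstest(samples, stats.beta(k, n + 1 - k).cdf)
print("Kolmogorov-Smirnov statistic:", ks.statistic)
```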
Project management: task cost and schedule modeling The beta distribution can be used to model events which are constrained to take place within an interval defined by a minimum and maximum value. For this reason, the beta distribution — along with the triangular distribution — is used extensively in PERT, critical path method (CPM), Joint Cost Schedule Modeling (JCSM) and other project management/control systems to describe the time to completion and the cost of a task. In project management, shorthand computations are widely used to estimate the mean and standard deviation of the beta distribution: where a is the minimum, c is the maximum, and b is the most likely value (the mode for α > 1 and β > 1). The above estimate for the mean is known as the PERT three-point estimation and it is exact for either of the following values of β (for arbitrary α within these ranges): β = α > 1 (symmetric case) with standard deviation , skewness = 0, and excess kurtosis = or β = 6 − α for 5 > α > 1 (skewed case) with standard deviation skewness, and excess kurtosis The above estimate for the standard deviation σ(X) = (c − a)/6 is exact for either of the following values of α and β: α = β = 4 (symmetric) with skewness = 0, and excess kurtosis = −6/11. β = 6 − α and (right-tailed, positive skew) with skewness, and excess kurtosis = 0 β = 6 − α and (left-tailed, negative skew) with skewness, and excess kurtosis = 0 Otherwise, these can be poor approximations for beta distributions with other values of α and β, exhibiting average errors of 40% in the mean and 549% in the variance. Random variate generation If X and Y are independent, with and then So one algorithm for generating beta variates is to generate , where X is a gamma variate with parameters (α, 1) and Y is an independent gamma variate with parameters (β, 1). In fact, here and are independent, and . If and is independent of and , then and is independent of . This shows that the product of independent and random variables is a random variable. Also, the kth order statistic of n uniformly distributed variates is , so an alternative if α and β are small integers is to generate α + β − 1 uniform variates and choose the α-th smallest. Another way to generate the Beta distribution is by Pólya urn model. According to this method, one start with an "urn" with α "black" balls and β "white" balls and draw uniformly with replacement. Every trial an additional ball is added according to the color of the last ball which was drawn. Asymptotically, the proportion of black and white balls will be distributed according to the Beta distribution, where each repetition of the experiment will produce a different value. It is also possible to use the inverse transform sampling. Normal approximation to the Beta distribution A beta distribution with α ~ β and α and β >> 1 is approximately normal with mean 1/2 and variance 1/(4(2α + 1)). If α ≥ β the normal approximation can be improved by taking the cube-root of the logarithm of the reciprocal of History Thomas Bayes, in a posthumous paper published in 1763 by Richard Price, obtained a beta distribution as the density of the probability of success in Bernoulli trials (see ), but the paper does not analyze any of the moments of the beta distribution or discuss any of its properties. The first systematic modern discussion of the beta distribution is probably due to Karl Pearson. 
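The gamma-ratio construction described above is the usual way to generate beta variates: draw X ~ Gamma(α, 1) and Y ~ Gamma(β, 1) independently and return X/(X + Y). The sketch below also runs a Pólya-urn simulation, which requires integer starting counts; the function names are ours.

```python
import numpy as np

rng = np.random.default_rng(8)
alpha, beta_ = 2.0, 5.0
size = 100_000

# Gamma-ratio construction: X ~ Gamma(alpha, 1), Y ~ Gamma(beta, 1), X/(X+Y) ~ Beta(alpha, beta).
x = rng.gamma(alpha, 1.0, size)
y = rng.gamma(beta_, 1.0, size)
via_gamma = x / (x + y)
print("gamma-ratio mean:", via_gamma.mean(), " exact mean:", alpha / (alpha + beta_))

def polya_urn_fraction(n_black, n_white, draws, rng):
    """One Polya-urn run: add a ball of the drawn colour each step;
    return the final fraction of black balls."""
    black, white = n_black, n_white
    for _ in range(draws):
        if rng.random() < black / (black + white):
            black += 1
        else:
            white += 1
    return black / (black + white)

# With integer starting counts (2 black, 5 white) the long-run black fraction is Beta(2, 5).
urn = np.array([polya_urn_fraction(2, 5, 2_000, rng) for _ in range(2_000)])
print("Polya-urn mean:", urn.mean(), " variance:", urn.var(),
      " exact variance:", alpha * beta_ / ((alpha + beta_) ** 2 * (alpha + beta_ + 1)))
```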
In Pearson's papers the beta distribution is couched as a solution of a differential equation: Pearson's Type I distribution which it is essentially identical to except for arbitrary shifting and re-scaling (the beta and Pearson Type I distributions can always be equalized by proper choice of parameters). In fact, in several English books and journal articles in the few decades prior to World War II, it was common to refer to the beta distribution as Pearson's Type I distribution. William P. Elderton in his 1906 monograph "Frequency curves and correlation" further analyzes the beta distribution as Pearson's Type I distribution, including a full discussion of the method of moments for the four parameter case, and diagrams of (what Elderton describes as) U-shaped, J-shaped, twisted J-shaped, "cocked-hat" shapes, horizontal and angled straight-line cases. Elderton wrote "I am chiefly indebted to Professor Pearson, but the indebtedness is of a kind for which it is impossible to offer formal thanks." Elderton in his 1906 monograph provides an impressive amount of information on the beta distribution, including equations for the origin of the distribution chosen to be the mode, as well as for other Pearson distributions: types I through VII. Elderton also included a number of appendixes, including one appendix ("II") on the beta and gamma functions. In later editions, Elderton added equations for the origin of the distribution chosen to be the mean, and analysis of Pearson distributions VIII through XII. As remarked by Bowman and Shenton "Fisher and Pearson had a difference of opinion in the approach to (parameter) estimation, in particular relating to (Pearson's method of) moments and (Fisher's method of) maximum likelihood in the case of the Beta distribution." Also according to Bowman and Shenton, "the case of a Type I (beta distribution) model being the center of the controversy was pure serendipity. A more difficult model of 4 parameters would have been hard to find." The long running public conflict of Fisher with Karl Pearson can be followed in a number of articles in prestigious journals. For example, concerning the estimation of the four parameters for the beta distribution, and Fisher's criticism of Pearson's method of moments as being arbitrary, see Pearson's article "Method of moments and method of maximum likelihood" (published three years after his retirement from University College, London, where his position had been divided between Fisher and Pearson's son Egon) in which Pearson writes "I read (Koshai's paper in the Journal of the Royal Statistical Society, 1933) which as far as I am aware is the only case at present published of the application of Professor Fisher's method. To my astonishment that method depends on first working out the constants of the frequency curve by the (Pearson) Method of Moments and then superposing on it, by what Fisher terms "the Method of Maximum Likelihood" a further approximation to obtain, what he holds, he will thus get, 'more efficient values' of the curve constants". David and Edwards's treatise on the history of statistics cites the first modern treatment of the beta distribution, in 1911, using the beta designation that has become standard, due to Corrado Gini, an Italian statistician, demographer, and sociologist, who developed the Gini coefficient. 
N. L. Johnson and S. Kotz, in their comprehensive and very informative monograph on leading historical personalities in statistical sciences, credit Corrado Gini as "an early Bayesian...who dealt with the problem of eliciting the parameters of an initial Beta distribution, by singling out techniques which anticipated the advent of the so-called empirical Bayes approach." References External links "Beta Distribution" by Fiona Maclachlan, the Wolfram Demonstrations Project, 2007. Beta Distribution – Overview and Example, xycoon.com Beta Distribution, brighton-webs.co.uk Beta Distribution Video, exstrom.com Harvard University Statistics 110 Lecture 23 Beta Distribution, Prof. Joe Blitzstein Continuous distributions Factorial and binomial topics Conjugate prior distributions Exponential family distributions
Beta distribution
[ "Mathematics" ]
27,939
[ "Factorial and binomial topics", "Combinatorics" ]
207,079
https://en.wikipedia.org/wiki/Gamma%20distribution
In probability theory and statistics, the gamma distribution is a versatile two-parameter family of continuous probability distributions. The exponential distribution, Erlang distribution, and chi-squared distribution are special cases of the gamma distribution. There are two equivalent parameterizations in common use: With a shape parameter and a scale parameter With a shape parameter and a rate parameter In each of these forms, both parameters are positive real numbers. The distribution has important applications in various fields, including econometrics, Bayesian statistics, life testing. In econometrics, the (α, θ) parameterization is common for modeling waiting times, such as the time until death, where it often takes the form of an Erlang distribution for integer α values. Bayesian statisticians prefer the (α,λ) parameterization, utilizing the gamma distribution as a conjugate prior for several inverse scale parameters, facilitating analytical tractability in posterior distribution computations. The probability density and cumulative distribution functions of the gamma distribution vary based on the chosen parameterization, both offering insights into the behavior of gamma-distributed random variables. The gamma distribution is integral to modeling a range of phenomena due to its flexible shape, which can capture various statistical distributions, including the exponential and chi-squared distributions under specific conditions. Its mathematical properties, such as mean, variance, skewness, and higher moments, provide a toolset for statistical analysis and inference. Practical applications of the distribution span several disciplines, underscoring its importance in theoretical and applied statistics. The gamma distribution is the maximum entropy probability distribution (both with respect to a uniform base measure and a base measure) for a random variable for which is fixed and greater than zero, and is fixed ( is the digamma function). Definitions The parameterization with and appears to be more common in econometrics and other applied fields, where the gamma distribution is frequently used to model waiting times. For instance, in life testing, the waiting time until death is a random variable that is frequently modeled with a gamma distribution. See Hogg and Craig for an explicit motivation. The parameterization with and is more common in Bayesian statistics, where the gamma distribution is used as a conjugate prior distribution for various types of inverse scale (rate) parameters, such as the of an exponential distribution or a Poisson distribution – or for that matter, the of the gamma distribution itself. The closely related inverse-gamma distribution is used as a conjugate prior for scale parameters, such as the variance of a normal distribution. If is a positive integer, then the distribution represents an Erlang distribution; i.e., the sum of independent exponentially distributed random variables, each of which has a mean of . Characterization using shape α and rate λ The gamma distribution can be parameterized in terms of a shape parameter and an inverse scale parameter , called a rate parameter. A random variable that is gamma-distributed with shape and rate is denoted The corresponding probability density function in the shape-rate parameterization is where is the gamma function. For all positive integers, . The cumulative distribution function is the regularized gamma function: where is the lower incomplete gamma function. 
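Because the two parameterizations describe the same family (θ = 1/λ), their equivalence and the regularized-incomplete-gamma form of the CDF can be verified numerically. The following is a minimal sketch in Python/SciPy with hypothetical parameter values; note that SciPy's gamma takes a shape a and a scale, so the rate form is handled by passing scale = 1/λ.

```python
import numpy as np
from scipy.stats import gamma
from scipy.special import gammainc   # regularized lower incomplete gamma function P(a, x)

alpha, theta = 3.0, 2.0              # hypothetical shape and scale
lam = 1.0 / theta                    # equivalent rate parameter
x = 5.0

pdf_scale = gamma.pdf(x, a=alpha, scale=theta)        # shape-scale form
pdf_rate = gamma.pdf(x, a=alpha, scale=1.0 / lam)     # shape-rate form, passed as scale = 1/lambda
print(np.isclose(pdf_scale, pdf_rate))                # True: one distribution, two parameterizations

# The CDF equals the regularized lower incomplete gamma function evaluated at x/theta.
print(np.isclose(gamma.cdf(x, a=alpha, scale=theta), gammainc(alpha, x / theta)))   # True
```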
If is a positive integer (i.e., the distribution is an Erlang distribution), the cumulative distribution function has the following series expansion: Characterization using shape α and scale θ A random variable that is gamma-distributed with shape and scale is denoted by The probability density function using the shape-scale parametrization is Here is the gamma function evaluated at . The cumulative distribution function is the regularized gamma function: where is the lower incomplete gamma function. It can also be expressed as follows, if is a positive integer (i.e., the distribution is an Erlang distribution): Both parametrizations are common because either can be more convenient depending on the situation. Properties Mean and variance The mean of gamma distribution is given by the product of its shape and scale parameters: The variance is: The square root of the inverse shape parameter gives the coefficient of variation: Skewness The skewness of the gamma distribution only depends on its shape parameter, , and it is equal to Higher moments The -th raw moment is given by: Median approximations and bounds Unlike the mode and the mean, which have readily calculable formulas based on the parameters, the median does not have a closed-form equation. The median for this distribution is the value such that A rigorous treatment of the problem of determining an asymptotic expansion and bounds for the median of the gamma distribution was handled first by Chen and Rubin, who proved that (for ) where is the mean and is the median of the distribution. For other values of the scale parameter, the mean scales to , and the median bounds and approximations would be similarly scaled by . K. P. Choi found the first five terms in a Laurent series asymptotic approximation of the median by comparing the median to Ramanujan's function. Berg and Pedersen found more terms: Partial sums of these series are good approximations for high enough ; they are not plotted in the figure, which is focused on the low- region that is less well approximated. Berg and Pedersen also proved many properties of the median, showing that it is a convex function of , and that the asymptotic behavior near is (where is the Euler–Mascheroni constant), and that for all the median is bounded by . A closer linear upper bound, for only, was provided in 2021 by Gaunt and Merkle, relying on the Berg and Pedersen result that the slope of is everywhere less than 1: for (with equality at ) which can be extended to a bound for all by taking the max with the chord shown in the figure, since the median was proved convex. An approximation to the median that is asymptotically accurate at high and reasonable down to or a bit lower follows from the Wilson–Hilferty transformation: which goes negative for . In 2021, Lyon proposed several approximations of the form . He conjectured values of and for which this approximation is an asymptotically tight upper or lower bound for all . In particular, he proposed these closed-form bounds, which he proved in 2023: is a lower bound, asymptotically tight as is an upper bound, asymptotically tight as Lyon also showed (informally in 2021, rigorously in 2023) two other lower bounds that are not closed-form expressions, including this one involving the gamma function, based on solving the integral expression substituting 1 for : (approaching equality as ) and the tangent line at where the derivative was found to be : (with equality at ) where Ei is the exponential integral. 
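The Wilson–Hilferty-based approximation to the median mentioned above can be compared against a numerically computed median. The following is a minimal sketch in Python/SciPy; the explicit form ν(α) ≈ α(1 − 1/(9α))³ for unit scale is supplied here from the commonly quoted version of that transformation, since the formula itself does not survive in the text above, and it indeed goes negative for α < 1/9.

```python
import numpy as np
from scipy.stats import gamma

def wilson_hilferty_median(alpha, theta=1.0):
    """Approximate median of Gamma(alpha, theta) via the Wilson-Hilferty-style formula."""
    return theta * alpha * (1.0 - 1.0 / (9.0 * alpha)) ** 3

for alpha in (0.5, 1.0, 2.0, 5.0, 20.0):
    exact = gamma.ppf(0.5, a=alpha)            # numerically "exact" median for scale 1
    approx = wilson_hilferty_median(alpha)
    print(f"alpha={alpha:5.1f}  exact={exact:8.4f}  approx={approx:8.4f}")
```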
Additionally, he showed that interpolations between bounds could provide excellent approximations or tighter bounds to the median, including an approximation that is exact at (where ) and has a maximum relative error less than 0.6%. Interpolated approximations and bounds are all of the form where is an interpolating function running monotonially from 0 at low to 1 at high , approximating an ideal, or exact, interpolator : For the simplest interpolating function considered, a first-order rational function the tightest lower bound has and the tightest upper bound has The interpolated bounds are plotted (mostly inside the yellow region) in the log–log plot shown. Even tighter bounds are available using different interpolating functions, but not usually with closed-form parameters like these. Summation If has a distribution for (i.e., all distributions have the same scale parameter ), then provided all are independent. For the cases where the are independent but have different scale parameters, see Mathai or Moschopoulos. The gamma distribution exhibits infinite divisibility. Scaling If then, for any , by moment generating functions, or equivalently, if (shape-rate parameterization) Indeed, we know that if is an exponential r.v. with rate , then is an exponential r.v. with rate ; the same thing is valid with Gamma variates (and this can be checked using the moment-generating function, see, e.g.,these notes, 10.4-(ii)): multiplication by a positive constant divides the rate (or, equivalently, multiplies the scale). Exponential family The gamma distribution is a two-parameter exponential family with natural parameters and (equivalently, and ), and natural statistics and . If the shape parameter is held fixed, the resulting one-parameter family of distributions is a natural exponential family. Logarithmic expectation and variance One can show that or equivalently, where is the digamma function. Likewise, where is the trigamma function. This can be derived using the exponential family formula for the moment generating function of the sufficient statistic, because one of the sufficient statistics of the gamma distribution is . Information entropy The information entropy is In the , parameterization, the information entropy is given by Kullback–Leibler divergence The Kullback–Leibler divergence (KL-divergence), of ("true" distribution) from ("approximating" distribution) is given by Written using the , parameterization, the KL-divergence of from is given by Laplace transform The Laplace transform of the gamma PDF, which is the moment-generating function of the gamma distribution, is (where is a random variable with that distribution). Related distributions General Let be independent and identically distributed random variables following an exponential distribution with rate parameter λ, then where n is the shape parameter and is the rate, and . If (in the shape–rate parametrization), then has an exponential distribution with rate parameter . In the shape-scale parametrization, has an exponential distribution with rate parameter . If (in the shape–scale parametrization), then is identical to , the chi-squared distribution with degrees of freedom. Conversely, if and is a positive constant, then . If , one obtains the Schulz-Zimm distribution, which is most prominently used to model polymer chain lengths. If is an integer, the gamma distribution is an Erlang distribution and is the probability distribution of the waiting time until the -th "arrival" in a one-dimensional Poisson process with intensity . 
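The logarithmic expectation and variance quoted above can be verified by simulation. The following is a minimal sketch in Python/NumPy/SciPy; the explicit identities E[ln X] = ψ(α) + ln θ and Var[ln X] = ψ₁(α) (shape–scale convention) are supplied here because the formulas were lost from the extracted text, and the parameter values are hypothetical.

```python
import numpy as np
from scipy.special import digamma, polygamma

rng = np.random.default_rng(1)
alpha, theta = 2.5, 3.0                               # hypothetical shape and scale
logx = np.log(rng.gamma(alpha, theta, size=2_000_000))

print(logx.mean(), digamma(alpha) + np.log(theta))    # E[ln X] = digamma(alpha) + ln(theta)
print(logx.var(), polygamma(1, alpha))                # Var[ln X] = trigamma(alpha)
```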
If then If has a Maxwell–Boltzmann distribution with parameter , then If , then follows an exponential-gamma (abbreviated exp-gamma) distribution. It is sometimes referred to as the log-gamma distribution. Formulas for its mean and variance are in the section #Logarithmic expectation and variance. If , then follows a generalized gamma distribution with parameters , , and . More generally, if , then for follows a generalized gamma distribution with parameters , , and . If with shape and scale , then (see Inverse-gamma distribution for derivation). Parametrization 1: If are independent, then , or equivalently, Parametrization 2: If are independent, then , or equivalently, If and are independently distributed, then has a beta distribution with parameters and , and is independent of , which is -distributed. If and , then converges in distribution to defined under parametrization 2. If are independently distributed, then the vector (, where , follows a Dirichlet distribution with parameters . For large the gamma distribution converges to normal distribution with mean and variance . The gamma distribution is the conjugate prior for the precision of the normal distribution with known mean. The matrix gamma distribution and the Wishart distribution are multivariate generalizations of the gamma distribution (samples are positive-definite matrices rather than positive real numbers). The gamma distribution is a special case of the generalized gamma distribution, the generalized integer gamma distribution, and the generalized inverse Gaussian distribution. Among the discrete distributions, the negative binomial distribution is sometimes considered the discrete analog of the gamma distribution. Tweedie distributions – the gamma distribution is a member of the family of Tweedie exponential dispersion models. Modified Half-normal distribution – the Gamma distribution is a member of the family of Modified half-normal distribution. The corresponding density is , where denotes the Fox–Wright Psi function. For the shape-scale parameterization , if the scale parameter where denotes the Inverse-gamma distribution, then the marginal distribution where denotes the Beta prime distribution. Compound gamma If the shape parameter of the gamma distribution is known, but the inverse-scale parameter is unknown, then a gamma distribution for the inverse scale forms a conjugate prior. The compound distribution, which results from integrating out the inverse scale, has a closed-form solution known as the compound gamma distribution. If, instead, the shape parameter is known but the mean is unknown, with the prior of the mean being given by another gamma distribution, then it results in K-distribution. Weibull and stable count The gamma distribution can be expressed as the product distribution of a Weibull distribution and a variant form of the stable count distribution. Its shape parameter can be regarded as the inverse of Lévy's stability parameter in the stable count distribution: where is a standard stable count distribution of shape , and is a standard Weibull distribution of shape . 
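Two of the relationships listed above — the chi-squared special case and the construction of a Dirichlet vector by normalizing independent gamma draws — are easy to check directly. The following is a minimal sketch in Python/SciPy with hypothetical parameter values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Chi-squared with k degrees of freedom is Gamma(shape = k/2, scale = 2).
k, x = 6, 4.2
print(np.isclose(stats.chi2.pdf(x, df=k), stats.gamma.pdf(x, a=k / 2, scale=2.0)))   # True

# Normalizing independent Gamma(alpha_i, theta) draws yields a Dirichlet(alpha) vector.
alphas = np.array([2.0, 3.0, 5.0])
g = rng.gamma(alphas, 1.0, size=(1_000_000, 3))
w = g / g.sum(axis=1, keepdims=True)
print(w.mean(axis=0), alphas / alphas.sum())   # sample means approach alpha_i / sum(alpha)
```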
Statistical inference Parameter estimation Maximum likelihood estimation The likelihood function for iid observations is from which we calculate the log-likelihood function Finding the maximum with respect to by taking the derivative and setting it equal to zero yields the maximum likelihood estimator of the parameter, which equals the sample mean divided by the shape parameter : Substituting this into the log-likelihood function gives We need at least two samples: , because for , the function increases without bounds as . For , it can be verified that is strictly concave, by using inequality properties of the polygamma function. Finding the maximum with respect to by taking the derivative and setting it equal to zero yields where is the digamma function and is the sample mean of . There is no closed-form solution for . The function is numerically very well behaved, so if a numerical solution is desired, it can be found using, for example, Newton's method. An initial value of can be found either using the method of moments, or using the approximation If we let then is approximately which is within 1.5% of the correct value. An explicit form for the Newton–Raphson update of this initial guess is: At the maximum-likelihood estimate , the expected values for and agree with the empirical averages: Caveat for small shape parameter For data, , that is represented in a floating point format that underflows to 0 for values smaller than , the logarithms that are needed for the maximum-likelihood estimate will cause failure if there are any underflows. If we assume the data was generated by a gamma distribution with cdf , then the probability that there is at least one underflow is: This probability will approach 1 for small and large . For example, at , and , . A workaround is to instead have the data in logarithmic format. In order to test an implementation of a maximum-likelihood estimator that takes logarithmic data as input, it is useful to be able to generate non-underflowing logarithms of random gamma variates, when . Following the implementation in scipy.stats.loggamma, this can be done as follows: sample and independently. Then the required logarithmic sample is , so that . Closed-form estimators There exist consistent closed-form estimators of and that are derived from the likelihood of the generalized gamma distribution. The estimate for the shape is and the estimate for the scale is Using the sample mean of , the sample mean of , and the sample mean of the product simplifies the expressions to: If the rate parameterization is used, the estimate of . These estimators are not strictly maximum likelihood estimators, but are instead referred to as mixed type log-moment estimators. They have however similar efficiency as the maximum likelihood estimators. Although these estimators are consistent, they have a small bias. A bias-corrected variant of the estimator for the scale is A bias correction for the shape parameter is given as Bayesian minimum mean squared error With known and unknown , the posterior density function for theta (using the standard scale-invariant prior for ) is Denoting Integration with respect to can be carried out using a change of variables, revealing that is gamma-distributed with parameters , . 
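The maximum-likelihood recipe above (a closed-form starting value followed by Newton–Raphson on the shape equation) is short to implement. The following is a minimal sketch in Python/SciPy; the starting value uses the approximation commonly quoted for this purpose, α₀ ≈ (3 − s + √((s − 3)² + 24s)) / (12s) with s = ln(x̄) − mean(ln x), supplied here because the explicit expression did not survive in the extracted text, and the data are hypothetical.

```python
import numpy as np
from scipy.special import digamma, polygamma

def fit_gamma_mle(x, iters=10):
    """Maximum-likelihood shape and scale for positive data x (shape-scale parameterization)."""
    s = np.log(x.mean()) - np.log(x).mean()
    alpha = (3.0 - s + np.sqrt((s - 3.0) ** 2 + 24.0 * s)) / (12.0 * s)   # initial approximation
    for _ in range(iters):
        # Newton-Raphson on f(alpha) = ln(alpha) - digamma(alpha) - s = 0.
        f = np.log(alpha) - digamma(alpha) - s
        fprime = 1.0 / alpha - polygamma(1, alpha)
        alpha -= f / fprime
    theta = x.mean() / alpha      # the scale estimate is the sample mean divided by the shape
    return alpha, theta

rng = np.random.default_rng(3)
x = rng.gamma(2.5, 1.7, size=100_000)   # hypothetical data with shape 2.5 and scale 1.7
print(fit_gamma_mle(x))                 # approximately (2.5, 1.7)
```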
The moments can be computed by taking the ratio ( by ) which shows that the mean ± standard deviation estimate of the posterior distribution for is Bayesian inference Conjugate prior In Bayesian inference, the gamma distribution is the conjugate prior to many likelihood distributions: the Poisson, exponential, normal (with known mean), Pareto, gamma with known shape , inverse gamma with known shape parameter, and Gompertz with known scale parameter. The gamma distribution's conjugate prior is: where is the normalizing constant with no closed-form solution. The posterior distribution can be found by updating the parameters as follows: where is the number of observations, and is the -th observation. Occurrence and applications Consider a sequence of events, with the waiting time for each event being an exponential distribution with rate . Then the waiting time for the -th event to occur is the gamma distribution with integer shape . This construction of the gamma distribution allows it to model a wide variety of phenomena where several sub-events, each taking time with exponential distribution, must happen in sequence for a major event to occur. Examples include the waiting time of cell-division events, number of compensatory mutations for a given mutation, waiting time until a repair is necessary for a hydraulic system, and so on. In biophysics, the dwell time between steps of a molecular motor like ATP synthase is nearly exponential at constant ATP concentration, revealing that each step of the motor takes a single ATP hydrolysis. If there were n ATP hydrolysis events, then it would be a gamma distribution with degree n. The gamma distribution has been used to model the size of insurance claims and rainfalls. This means that aggregate insurance claims and the amount of rainfall accumulated in a reservoir are modelled by a gamma process – much like the exponential distribution generates a Poisson process. The gamma distribution is also used to model errors in multi-level Poisson regression models because a mixture of Poisson distributions with gamma-distributed rates has a known closed form distribution, called negative binomial. In wireless communication, the gamma distribution is used to model the multi-path fading of signal power; see also Rayleigh distribution and Rician distribution. In oncology, the age distribution of cancer incidence often follows the gamma distribution, wherein the shape and scale parameters predict, respectively, the number of driver events and the time interval between them. In neuroscience, the gamma distribution is often used to describe the distribution of inter-spike intervals. In bacterial gene expression where protein production can occur in bursts, the copy number of a given protein often follows the gamma distribution, where the shape and scale parameters are, respectively, the mean number of bursts per cell cycle and the mean number of protein molecules produced per burst. In genomics, the gamma distribution was applied in peak calling step (i.e., in recognition of signal) in ChIP-chip and ChIP-seq data analysis. In Bayesian statistics, the gamma distribution is widely used as a conjugate prior. It is the conjugate prior for the precision (i.e. inverse of the variance) of a normal distribution. It is also the conjugate prior for the exponential distribution. 
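As a concrete instance of the first statement above (the gamma as conjugate prior), the standard gamma–Poisson update can be written in a few lines. The following is a minimal sketch in Python/NumPy, assuming a Gamma(α₀, rate β₀) prior on a Poisson rate; this is the familiar special case α → α + Σxᵢ, β → β + n, not the more general update quoted above for the gamma distribution's own conjugate prior, and all numbers are hypothetical.

```python
import numpy as np

def gamma_poisson_update(alpha0, beta0, x):
    """Posterior Gamma(shape, rate) for a Poisson rate, given prior Gamma(alpha0, rate beta0) and counts x."""
    x = np.asarray(x)
    return alpha0 + x.sum(), beta0 + x.size   # shape gains the total count, rate gains n

alpha0, beta0 = 2.0, 1.0        # hypothetical prior
counts = [3, 5, 4, 6, 2]        # hypothetical Poisson observations
a, b = gamma_poisson_update(alpha0, beta0, counts)
print(a, b, a / b)              # posterior shape, posterior rate, and posterior mean of the rate
```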
In phylogenetics, the gamma distribution is the most commonly used approach to model among-sites rate variation when maximum likelihood, Bayesian, or distance matrix methods are used to estimate phylogenetic trees. Phylogenetic analyzes that use the gamma distribution to model rate variation estimate a single parameter from the data because they limit consideration to distributions where . This parameterization means that the mean of this distribution is 1 and the variance is . Maximum likelihood and Bayesian methods typically use a discrete approximation to the continuous gamma distribution. Random variate generation Given the scaling property above, it is enough to generate gamma variables with , as we can later convert to any value of with a simple division. Suppose we wish to generate random variables from , where n is a non-negative integer and . Using the fact that a distribution is the same as an distribution, and noting the method of generating exponential variables, we conclude that if is uniformly distributed on (0, 1], then is distributed (i.e. inverse transform sampling). Now, using the "-addition" property of gamma distribution, we expand this result: where are all uniformly distributed on (0, 1] and independent. All that is left now is to generate a variable distributed as for and apply the "-addition" property once more. This is the most difficult part. Random generation of gamma variates is discussed in detail by Devroye, noting that none are uniformly fast for all shape parameters. For small values of the shape parameter, the algorithms are often not valid. For arbitrary values of the shape parameter, one can apply the Ahrens and Dieter modified acceptance-rejection method Algorithm GD (shape ), or transformation method when . Also see Cheng and Feast Algorithm GKM 3 or Marsaglia's squeeze method. The following is a version of the Ahrens-Dieter acceptance–rejection method: Generate , and as iid uniform (0, 1] variates. If then and . Otherwise, and . If then go to step 1. is distributed as . A summary of this is where is the integer part of , is generated via the algorithm above with (the fractional part of ) and the are all independent. While the above approach is technically correct, Devroye notes that it is linear in the value of and generally is not a good choice. Instead, he recommends using either rejection-based or table-based methods, depending on context. For example, Marsaglia's simple transformation-rejection method relying on one normal variate and one uniform variate : Set and . Set . If and return , else go back to step 2. With generates a gamma distributed random number in time that is approximately constant with . The acceptance rate does depend on , with an acceptance rate of 0.95, 0.98, and 0.99 for α = 1, 2, and 4. For , one can use to boost to be usable with this method. In Matlab numbers can be generated using the function gamrnd(), which uses the α, θ representation. References External links ModelAssist (2017) Uses of the gamma distribution in risk modeling, including applied examples in Excel . Engineering Statistics Handbook Continuous distributions Factorial and binomial topics Conjugate prior distributions Exponential family distributions Infinitely divisible probability distributions Survival analysis Gamma and related functions
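The transformation–rejection method sketched above (one normal and one uniform variate per attempt, with a boost for shape below one) can be written compactly. The following is a minimal sketch in Python/NumPy of the commonly published Marsaglia–Tsang version; the squeeze constant 0.0331 and the exact acceptance test are taken from that published method rather than recovered from the text above, and the parameter values are hypothetical.

```python
import numpy as np

def marsaglia_tsang_gamma(alpha, rng):
    """One draw from Gamma(alpha, 1) by transformation-rejection (Marsaglia-Tsang style)."""
    if alpha < 1.0:
        # Boost: draw for alpha + 1, then multiply by U**(1/alpha).
        return marsaglia_tsang_gamma(alpha + 1.0, rng) * rng.random() ** (1.0 / alpha)
    d = alpha - 1.0 / 3.0
    c = 1.0 / np.sqrt(9.0 * d)
    while True:
        x = rng.standard_normal()
        v = (1.0 + c * x) ** 3
        if v <= 0.0:
            continue
        u = rng.random()
        # Cheap squeeze test first, then the full logarithmic acceptance test.
        if u < 1.0 - 0.0331 * x ** 4 or np.log(u) < 0.5 * x * x + d * (1.0 - v + np.log(v)):
            return d * v

rng = np.random.default_rng(4)
alpha, theta = 0.7, 2.0                 # hypothetical shape and scale
draws = theta * np.array([marsaglia_tsang_gamma(alpha, rng) for _ in range(200_000)])
print(draws.mean(), alpha * theta)      # sample mean vs shape * scale
print(draws.var(), alpha * theta ** 2)  # sample variance vs shape * scale**2
```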
Gamma distribution
[ "Mathematics" ]
4,681
[ "Factorial and binomial topics", "Combinatorics" ]
4,435,023
https://en.wikipedia.org/wiki/Mud%20logging
Mud logging is the creation of a detailed record (well log) of a borehole by examining the cuttings of rock brought to the surface by the circulating drilling medium (most commonly drilling mud). Mud logging is usually performed by a third-party mud logging company. This provides well owners and producers with information about the lithology and fluid content of the borehole while drilling. Historically, it is the earliest type of well log. Under some circumstances, compressed air is employed as the circulating fluid rather than mud. Although most commonly used in petroleum exploration, mud logging is also sometimes used when drilling water wells and in other mineral exploration, where drilling fluid is the circulating medium used to lift cuttings out of the hole. In hydrocarbon exploration, hydrocarbon surface gas detectors record the level of natural gas brought up in the mud. A mobile laboratory is situated near the drilling rig, on the deck of an offshore drilling rig, or on a drill ship. The services Mud logging technicians in an oil field drilling operation determine positions of hydrocarbons with respect to depth, identify downhole lithology, monitor natural gas entering the drilling mud stream, and draw well logs for use by oil company geologists. Rock cuttings circulated to the surface in drilling mud are sampled and described. The mud logging company is normally contracted by the oil company (or operator). They then organize this information in the form of a graphic log, showing the data charted on a representation of the wellbore. The oil company representative (Company Man, or "CoMan"), together with the tool pusher and well-site geologist (WSG), provides mud loggers with their instructions. The mud logging company's contract specifies when to start well-logging activity and what services to provide. Mud logging may begin on the first day of drilling, known as the "spud in" date, but is more likely to begin at some later time (and depth) determined by the oil industry geologist's research. The mud logger may also possess logs from wells drilled in the surrounding area. This information (known as "offset data") can provide valuable clues as to the characteristics of the particular geostrata that the rig crew is about to drill through. Mud loggers connect various sensors to the drilling apparatus and install specialized equipment to monitor or "log" drill activity. This can be physically and mentally challenging, especially when it must be done while drilling is under way. Much of the equipment requires precise calibration or alignment by the mud logger to provide accurate readings. Mud logging technicians observe and interpret the indicators in the mud returns during the drilling process, and at regular intervals log properties such as drilling rate, mud weight, flowline temperature, oil indicators, pump pressure, pump rate, lithology (rock type) of the drilled cuttings, and other data. Mud logging requires a good deal of diligence and attention. Sampling the drilled cuttings must be performed at predetermined intervals, and can be difficult during rapid drilling. Another important task of the mud logger is to monitor gas levels (and types) and notify other personnel on the rig when gas is approaching dangerous levels, so appropriate steps can be taken to avoid a well blowout. 
Because of the lag time between drilling and the time required for the mud and cuttings to return to the surface, a modern augmentation has come into use: Measurement while drilling. The MWD technician, often a separate service company employee, logs data in a similar manner but the data is different in source and content. Most of the data logged by an MWD technician comes from expensive and complex, sometimes electronic, tools that are downhole installed at or near the drill bit. Scope Mud logging includes observation and microscopic examination of drill cuttings (formation rock chips), and evaluation of gas hydrocarbon and its constituents, basic chemical and mechanical parameters of drilling fluid or drilling mud (such as chlorides and temperature), as well as compiling other information about the drilling parameters. Then data is plotted on a graphic log called a mud log. Example1, Example2. Other real-time drilling parameters that may be compiled include, but are not limited to; rate of penetration (ROP) of the bit (sometimes called the drill rate), pump rate (quantity of fluid being pumped), pump pressure, weight on bit, drill string weight, rotary speed, rotary torque, RPM (Revolutions per minute), SPM (Strokes per minute) mud volumes, mud weight and mud viscosity. This information is usually obtained by attaching monitoring devices to the drilling rig's equipment with a few exceptions such as the mud weight and mud viscosity which are measured by the derrickhand or the mud engineer. Rate of drilling is affected by the pressure of the column of mud in the borehole and its relative counterbalance to the internal pore pressures of the encountered rock. A rock pressure greater than the mud fluid will tend to cause rock fragments to spall as it is cut and can increase the drilling rate. "D-exponents" are mathematical trend lines which estimate this internal pressure. Thus both visual evidence of spalling and mathematical plotting assist in formulating recommendations for optimum drilling mud densities for both safety (blowout prevention) and economics. (Faster drilling is generally preferred.) Mud logging is often written as a single word "mudlogging". The finished product can be called a "mud log" or "mudlog". The occupational description is "mud logger" or "mudlogger". In most cases, the two word usage seems to be more common. The mud log provides a reliable time log of drilled formations. Details The rate of penetration in Figure 1 & 2 is represented by the black line on the left side of the log. The farther to the left that the line goes, the faster the rate of penetration. On this mud log, ROP is measured in feet per hour, but on some older, hand-drawn mud logs, it is measured in minutes per foot. The porosity in Figure 1 is represented by the blue line farthest to the left of the log. It indicates the pore space within the rock structure. Oil and gas reside within this pore space. Note how far to the left the porosity goes, where all the sand (in yellow) is. This indicates that the sand has good porosity. Porosity is not a direct or physical measurement of the pore space but rather an extrapolation from other drilling parameters and, therefore, is not always reliable. The lithology in Figure 1 & 2 is represented by the cyan, gray/black and yellow blocks of color. Cyan = lime, gray/black = shale and yellow = sand. More yellow represents more sand identified at that depth. 
The lithology is measured as a percentage of the total sample as visually inspected under a microscope, normally at 10× magnification (Figure 3). These are but a fraction of the different types of formations that might be encountered. (Color coding is not necessarily standardized among different mud logging companies, though the symbol representations for each are very similar.) In Figure 3, a sample of cuttings is seen under a microscope at 10× magnification after they have been washed off. Some of the larger shale and lime fragments are separated from this sample by running it through sieves and must be considered when estimating percentages. Also, this image view is only a fragment of the total sample, and some of the sand at the bottom of the tray cannot be seen and must also be considered in the total estimation. Thus, this sample would be considered to be about 90% shale, 5% sand and 5% lime (in 5% increments). The gas in Figure 1 & 2 is represented by the green line and is measured in units as the quantity of total gas, but does not represent the actual quantity of oil or gas the reservoir contains. In (Figure 1) the squared-off dash-dot lines just to the right of the sand (in yellow) and left of the gas (in green) represents the heavier hydrocarbons detected. Cyan = C2 (ethane), purple = C3 (propane) and blue = C4 (butane). Detecting and analyzing these heavy gases help to determine the type of oil or gas the formation contains. See also Directional drilling Drilling fluid (mud) Geosteering LWD (Logging While Drilling) MWD (Measurement While Drilling) Well logging References Further reading Chambre Syndicale de la recherche et de la production du petrole et du gaz naturel, 1982, Geological and mud logging in drilling control: catalogue of typical cases, Houston, TX: Gulf Publishing Company and Paris: Editions technip, 81 p. Exlog, 1979, Field geologist's training guide: an introduction to oilfield geology, mud logging and formation evaluation, Sacramento, CA: Exploration Logging, Inc., 301 p. Privately published with no ISBN Whittaker, Alun, 1991, Mud logging handbook, Englewood Cliffs, NJ: Prentice Hall, 531 p. External links Articles and books on mud logging Hand drawn mud logs Geoservices definition of Mud Logging Maverick Energy Lexicon Mud Logging Gas Detectors Well logging Petroleum geology
Mud logging
[ "Chemistry", "Engineering" ]
1,906
[ "Petroleum", "Petroleum geology", "Petroleum engineering", "Well logging" ]
4,436,145
https://en.wikipedia.org/wiki/Apex%20%28radio%20band%29
Apex radio stations (also known as skyscraper and pinnacle) was the name commonly given to a short-lived group of United States broadcasting stations, which were used to evaluate transmitting on frequencies that were much higher than the ones used by standard amplitude modulation (AM) and shortwave stations. Their name came from the tall height of their transmitter antennas, which were needed because coverage was primarily limited to local line-of-sight distances. These stations were assigned to what at the time were described as "ultra-high shortwave" frequencies, between roughly 25 and 44 MHz. They employed amplitude modulation (AM) transmissions, although in most cases using a wider bandwidth than standard broadcast band AM stations, in order to provide high fidelity sound with less static and distortion. In 1937 the Federal Communications Commission (FCC) formally allocated an Apex station band, consisting of 75 transmitting frequencies running from 41.02 to 43.98 MHz. These stations were never given permission to operate commercially, although they were allowed to retransmit programming from standard AM stations. Most operated under experimental licenses, however this band was the first to include a formal "non-commercial educational" station classification. The FCC eventually concluded that frequency modulation (FM) transmissions were superior, and the Apex band was eliminated effective January 1, 1941, in order to make way for the creation of the original FM band, assigned to 42 to 50 MHz. Initial development During the 1920s and 1930s, radio engineers and government regulators investigated the characteristics of transmitting frequencies higher than those currently in use. In the United States, by 1930 the original AM broadcasting band consisted of 96 frequencies from 550 to 1500 kHz, with a 10 kHz spacing between adjacent assignments. On this band, a station's coverage during the daytime consisted exclusively of its groundwave signal, which for the most powerful stations might exceed 200 miles (320 kilometers), although it was significantly less for the average station. However, during the nighttime, changes in the ionosphere resulted in additional long distance skywave signals, that were commonly reflected for up to hundreds of kilometers. Over time, technology was developed to transmit on progressively higher frequencies. (Although initially these were in general called "ultra-high shortwave" frequencies, radio spectrum nomenclature was later standardized, with 3 to 30 MHz transmissions becoming known as "High Frequency" (HF), 30 to 300 MHz called "Very High Frequency" (VHF), and 300 to 3,000 MHz called "Ultra High Frequency" (UHF)). It soon became apparent that there were significant differences in the propagation characteristics of various frequency ranges. Signals from shortwave stations, operating roughly in the range from 5 MHz to 20 MHz, were found to be readily reflected by the ionosphere during both the day and at night, resulting in stations that sometimes could transmit halfway around the world. Investigations of increasingly higher frequencies found that, above around 20 MHz, signal propagation by both groundwave and skywave generally became minimal, which meant that station coverage now began to be limited to just line-of-sight distances from the transmitting antenna. 
This was considered to be a valuable characteristic by the FCC, because it would allow the establishment of broadcasting stations with limited but consistent day and night coverage, that could only be received by their local communities. It also meant that multiple stations could operate on the same frequency throughout the country without interfering with each other. Because the standard AM broadcast band was considered to be too full to allow any meaningful increase in the number of stations, the FCC began to issue licenses to parties interested in testing the suitability of higher frequencies. Most Apex stations operated under experimental licenses, and were commonly affiliated with and subsidized by a commercially licensed AM station. Until the late 1930s, commercially made radio receivers did not cover these high frequencies, so early Apex station listeners constructed their own receivers, or built converters for existing models. On March 18, 1934, W8XH in Buffalo, New York, a companion station to AM station WBEN, became the first Apex station to air a regular schedule. Although most of these stations merely retransmitted the programs of their AM station partners, in a few cases efforts were made to provide original programming. In 1936, The Milwaukee Journal's W9XAZ, which initially had relayed the programming of WTMJ, became the first Apex station to originate its own programming on a regular basis. While monitoring the first group of stations, it was soon realized that, due to the strengthening of the ionosphere during periods of high solar activity, at times the lower end of the VHF frequencies would produce strong, and undesirable, skywave signals. (The December 1937 issue of All-Wave Radio reported that W6XKG in Los Angeles, transmitting on 25.95 MHz, had been heard in both Asia and Europe, while W9XAZ, 26.4 MHz in Milwaukee, Wisconsin had "a strong signal in Australia", and W8XAI, 31.6 MHz in Rochester, New York, "is another station that is often heard in Australia.") This most commonly occurred during the summer months, and during peaks in the 11-year sunspot cycle. This determination led to the FCC moving the developing broadcasting service stations, which by now began to include experimental FM radio and TV stations, to higher frequencies that were less affected by solar influences. Apex band establishment (1937) In October 1937, the FCC announced a sweeping allocation of frequency assignments for the various competing services, including television, relay, and public service, which covered 10 kHz to 300 MHz. Included was a band of Apex stations, consisting of 75 channels with 40 kHz separations, and spanning from 41.02 to 43.98 MHz. The 40 kHz spacing between adjacent frequencies was four times as much as the 10 kHz spacing on the standard AM broadcast band, which reduced adjacent-frequency interference, and provided more bandwidth for high-fidelity programming. At the time it was estimated that there were about 50 Apex-style stations currently in operation, although transmitting on a variety of frequencies. In January 1938 the band's first 25 channels, from 41.02 to 41.98 MHz, were reserved for non-commercial educational stations, with the Cleveland City Board of Education's WBOE in Cleveland, Ohio, the first station to begin operation within this group. 
Apex band assignments (1937–1941) Conversion to FM (1941) At the time the Apex band was established, the FCC noted that "The Commission at an early date will consider carefully the needs and requirements for high-frequency broadcast stations using both conventional [AM] modulation and frequency modulation". As of January 15, 1940, only 2 non-commercial and 14 experimental stations held Apex band licenses, all of which were assigned operating frequencies in the bottom half of the band. (A similar number of experimental stations held grants for frequencies in the 25–26 MHz region.) In addition, at this same time 20 experimental FM stations had been assigned slots within the top half of the Apex band frequencies. The commission's studies soon found significant advantages to FM transmissions over the Apex AM signals. Sound quality, and especially resistance to interference from static, including from lightning, was found to be far superior for FM. Although FM assignments required five times the bandwidth of Apex stations (200 kHz vs. 40 kHz), the "capture effect" allowed FM stations operating on the same frequency to be spaced closer together than Apex stations. By 1939 the FCC began encouraging Apex stations to consider changing to the technically superior FM transmissions. In May 1940, the FCC decided to authorize a commercial FM band effective January 1, 1941, operating on 40 channels spanning 42–50 MHz. (This was later changed to 88–106 MHz, and still later to 88–108 MHz, which increased the number of channels to 100.) This new assignment also resulted in the elimination of the Apex band, and the Apex stations were informed that they needed to either go silent or convert to FM. With this change, a few of the original Apex stations were converted into some of the earliest FM stations. The three educational stations were allowed some leeway in making the conversion to FM, with WBOE switching over in February 1941, WNYE receiving permission to continue as an Apex station until June 29, 1941, and WBKY receiving a series of authorizations to continue using its AM transmitter until May 1, 1944. Currently, the frequencies that had been used by the Apex band are allocated for land mobile communication. There would be at least one attempt to revive the Apex band concept. Beginning in May 1946, consulting radio engineer Sarkes Tarzian operated a 200-watt experimental AM station, W9XHZ, on 87.75 MHz in Bloomington, Indiana. After two years of successful operation of what he referred to as his "HIFAM" station, in 1948 he proposed that the FCC allocate a small high-frequency broadcast band, 400 kHz wide with 10 kHz spacing between frequency assignments. Tarzian promoted this as a low-cost alternative to expensive FM transmitters and receivers, saying that a $5.95 converter could be added to existing AM radios that would allow them to pick up the HIFAM stations. He continued to operate his experimental station, which eventually became KS2XAP, until 1950, although by then its transmitting hours were greatly restricted, as the FCC required the station to remain off the air whenever nearby WFBM-TV in Indianapolis was broadcasting. This was due to the fact that the TV station's audio transmitter used the same frequency as Tarzian's station. Moreover, after his station's final license expired on June 1, 1950, the FCC denied Tarzian any further renewals, concluding it would not reverse its earlier determination that there was no need for a second AM broadcast band. 
Notes References External links America's Apex Broadcasting Stations of the 1930s by John Schneider, Monitoring Times Magazine, December 2010. (theradiohistorian.org) A Detroit Apex Station in 1936 (W8XWJ) by John Schneider, September 17, 2013. (radioworld.com) Pre-History: Detroit's Experimental Amplitude Modulation (AM) "Apex" Station, W8XWJ (michiguide.com) FCC History Cards for W8XWJ (covering 1936–1941) Apex Radio in Milwaukee (W9XAZ) (jeff560.tripod.com) Apex and FM chronology (jeff560.tripod.com) "High Frequency Broadcast Stations in the United States" (Licensed by FCC as of January 1, 1937), Broadcasting Yearbook (1937 edition), page 331. "High Frequency (Apex) Broadcast Stations in the United States" (authorized by FCC as of January 1, 1938), Broadcasting Yearbook (1938 edition), page 290. "High Frequency (Apex) Broadcast Stations in the United States" (authorized by FCC as of January 1, 1939), Broadcasting Yearbook (1939 edition), page 369. (jeff560.tripod.com) "High Frequency Broadcasting Stations in the United States" (Authorized by FCC as of January 15, 1940), Broadcasting Yearbook (1940 edition), page 374. High Frequency and FM broadcast stations in the U.S. in 1942 Radio Annual (jeff560.tripod.com) "Sarkes Tarzian and His HiFAM Experiment" by Andrew Mitz, Radio Age, July 2004. Radio technology History of radio in the United States 1930s in American music Telecommunications-related introductions in 1934 Bandplans Broadcast engineering
Apex (radio band)
[ "Technology", "Engineering" ]
2,353
[ "Information and communications technology", "Broadcast engineering", "Telecommunications engineering", "Radio technology", "Electronic engineering" ]
4,436,559
https://en.wikipedia.org/wiki/Stetter%20reaction
The Stetter reaction is a reaction used in organic chemistry to form carbon-carbon bonds through a 1,4-addition reaction utilizing a nucleophilic catalyst. While the related 1,2-addition reaction, the benzoin condensation, was known since the 1830s, the Stetter reaction was not reported until 1973 by Dr. Hermann Stetter. The reaction provides synthetically useful 1,4-dicarbonyl compounds and related derivatives from aldehydes and Michael acceptors. Unlike 1,3-dicarbonyls, which are easily accessed through the Claisen condensation, or 1,5-dicarbonyls, which are commonly made using a Michael reaction, 1,4-dicarbonyls are challenging substrates to synthesize, yet are valuable starting materials for several organic transformations, including the Paal–Knorr synthesis of furans and pyrroles. Traditionally utilized catalysts for the Stetter reaction are thiazolium salts and cyanide anion, but more recent work toward the asymmetric Stetter reaction has found triazolium salts to be effective. The Stetter reaction is an example of umpolung chemistry, as the inherent polarity of the aldehyde is reversed by the addition of the catalyst to the aldehyde, rendering the carbon center nucleophilic rather than electrophilic. Mechanism As the Stetter reaction is an example of umpolung chemistry, the aldehyde is converted from an electrophile to a nucleophile under the reaction conditions. This is accomplished by activation from some catalyst - either cyanide (CN−) or thiazolium salt. For the use of either catalyst, the mechanism is very similar; the only difference is that with thiazolium salts, the catalyst must be deprotonated first to form the active catalytic species. The active catalyst can be described as the combination of two contributing resonance forms - an ylide or a carbene, both of which portray the nucleophilic character at carbon. The thiazolium ylide or CN− can then add into the aldehyde substrate, forming a cyanohydrin in the case of CN− or the Breslow intermediate in the case of thiazolium salt. The Breslow intermediate was proposed by Ronald Breslow in 1958 and is a common intermediate for all thiamine-catalyzed reactions, whether in vitro or in vivo. Once the "nucleophilic aldehyde" synthon is formed, whether as a cyanohydrin or stabilized by a thiazolium ylide, the reaction can proceed down two pathways. The faster pathway is self-condensation with another molecule of aldehyde to give benzoin products. However, benzoin condensation is completely reversible, and therefore does not interfere with product formation in the Stetter reaction. In fact, benzoins can be used instead of aldehydes as substrates to achieve the same overall Stetter transformation, because benzoins can be restored to their aldehyde precursors under the reaction conditions. The desired pathway toward the Stetter product is the 1,4-addition of the nucleophilic aldehyde to a Michael-type acceptor. After 1,4-addition, the reaction is irreversible and ultimately, the 1,4-dicarbonyl is formed when the catalyst is kicked out to regenerate CN− or the thiazolium ylide. Scope The Stetter reaction produces classically difficult to access 1,4-dicarbonyl compounds and related derivatives. The traditional Stetter reaction is quite versatile, working on a wide variety of substrates. Aromatic aldehydes, heteroaromatic aldehydes, and benzoins can all be used as acyl anion precursors with thiazolium salt and cyanide catalysts. 
However, aliphatic aldehydes can only be utilized if a thiazolium salt is used as a catalyst, as they undergo aldol condensation side reaction when a cyanide catalyst is used. In addition, α,β-unsaturated esters, ketones, nitriles, nitros, and aldehydes are all appropriate Michael acceptors with either catalyst. However, the general scope of asymmetric Stetter reactions is more limited. Intramolecular asymmetric Stetter reactions enjoy a range of acceptable Michael acceptors and acyl anion precursors in essentially any combination. Intramolecular asymmetric Stetter reactions can utilize aromatic, heteroaromatic and aliphatic aldehydes with a tethered α,β-unsaturated ester, ketone, thioester, malonate, nitrile or Weinreb amide. It has been shown that α,β-unsaturated nitros and aldehydes are not suitable Michael acceptors and have markedly decreased enantiomeric excess in such reactions. Another limitation encountered with intramolecular asymmetric Stetter reactions is that only substrates that result in the formation of a six-membered ring show synthetically useful enantiomeric excess; substrates which form five and seven-membered rings either do not react or show low stereoinduction. On the other hand, intermolecular asymmetric reactions are quite confined to specifically matched combinations of acyl anion precursor and Michael acceptor, such as an aliphatic aldehyde with a nitroalkene. In addition, these substrates tend to be rather activated, as the intermolecular asymmetric Stetter reaction is still in the early stages of development. Variations Several variations of the Stetter reaction have been developed since its discovery in 1973. In 2001, Murry et al reported a Stetter reaction of aromatic aldehydes onto acylimine derivatives to give α-amido ketone products. The acylimine acceptors were generated in situ from α-tosylamide substrates, which underwent elimination in the presence of base. Good to excellent yields (75-90%) were observed. Mechanistic investigations showed that the corresponding benzoins were not adequate substrates, contrary to traditional Stetter reactions. From this, the authors conclude the Stetter reaction of acylimines is under kinetic control, rather than thermodynamic control. Another variation of the Stetter reaction involves the use of 1,2-dicarbonyls as precursors to the acyl anion intermediate. In 2005, Scheidt and coworkers reported the use of sodium pyruvate, which loses CO2 to form the Breslow intermediate. Similarly, in 2011 Bortolini and coworkers demonstrated the use of α-diketones to generate an acyl anion. Under the conditions they developed, 2,3-butadienone is cleaved after addition to the thiazolium catalyst to release ethyl acetate and generate the Breslow intermediate necessary for the Stetter reaction to proceed. In addition, they showed the atom economy and utility of using a cyclic α-diketone to generate the Stetter product with a tethered ethyl ester. The reaction precedes through the same mechanism as the acyclic version, but the ester generated by attack of ethanol remains tethered to the product. However, the conditions only allow for the generation of ethyl esters, due to the necessity of ethanol as solvent. Substitution of ethanol with tert-butanol resulted in no product. The authors speculate this is due to the difference in acidity between the two alcoholic solvents. In 2004, Scheidt and coworkers introduced acyl silanes as competent substrates in the Stetter reaction, a variation they termed the "sila-Stetter reaction." 
Under their reaction conditions, the thiazolium catalyst induces a [1,2] Brook rearrangement, which is followed by desilylation by an isopropanol additive to give the common Breslow intermediate of the traditional Stetter reaction. The desilylation step was found to be necessary, and the reaction does not proceed without an alcoholic additive. Acyl silanes are less electrophilic than the corresponding aldehydes, preventing typical benzoin-type byproducts often observed in the Stetter reaction. Asymmetric Stetter Reaction The first asymmetric variant of the Stetter reaction was reported in 1996 by Enders et al, employing a chiral triazolium catalyst 1. Subsequently, several other catalysts were reported for asymmetric Stetter reactions, including 2, 3, and 4. The success of the Rovis group's catalyst 2 led them to further explore this family of catalysts and expand their use for asymmetric Stetter reactions. In 2004, they reported the enantioselective formation of quaternary centers from aromatic aldehydes in an intramolecular Stetter reaction with a slightly modified catalyst. Further work extended the scope of this reaction to include aliphatic aldehydes as well. Subsequently, it was shown that the olefin geometry of the Michael acceptor dictates diastereoselectivity in these reactions, whereby the catalyst dictates the enantioselectivity of the initial carbon bond formation and allylic strain minimization dictates the diastereoselective intramolecular protonation. The inherent difficulties of controlling enantioselectivity in intermolecular reactions made the development of an intermolecular asymmetric Stetter reaction a challenge. While limited enantiomeric excess had been reported by Enders in the early 1990s for the reaction of n-butanal with chalcone, conditions for a synthetically useful asymmetric intermolecular Stetter reaction were not reported until 2008 when both the groups of Enders and Rovis published such reactions. The Enders group utilized a triazolium-based catalyst to effect the coupling of aromatic aldehydes with chalcone derivatives with moderate yields. The concurrent publication from the Rovis group also employed a triazolium-based catalyst and reported the Stetter reaction between glyoxamides and alkylidenemalonates in good to excellent yields. Rovis and coworkers subsequently went on to explore the asymmetric intermolecular Stetter reaction of heterocyclic aldehydes and nitroalkenes. During optimization of this reaction, it was found that a catalyst with a fluorinated backbone greatly enhanced enantioselectivity in the reaction. It was proposed that the fluorinated backbone helps to lock the conformation of the catalyst in a way the increases enantioselectivity. Further computational studies on this system verified that the stereoelectronic attraction between the developing partial negative charge on the nitroalkene in the transition state and the partial positive charge of the C-F dipole is responsible for the increase in enantiomeric excess observed with the use of the catalyst with backbone fluorination. While this is a marked advance in the area of intermolecular asymmetric Stetter reactions, the substrate scope is limited and the catalyst is optimized for the specific substrates being utilized. Another contribution to the development of asymmetric intermolecular Stetter reactions came from Glorius and coworkers in 2011. They demonstrated the synthesis of α-amino acids enantioselectively by utilizing N-acylamido acrylate as the conjugate acceptor. 
Significantly, the reaction can be run on a 5 mmol scale without loss of yield or enantioselectivity. Applications The Stetter reaction is an effective tool in organic synthesis. The products of the Stetter reaction, 1,4-dicarbonyls, are valuable moieties for the synthesis of complex molecules. For example, Trost and coworkers employed a Stetter reaction as one step in their synthesis of rac-hirsutic acid C. The intramolecular coupling of an aliphatic aldehyde with a tethered α,β-unsaturated ester led to the desired tricyclic 1,4-dicarbonyl in 67% yield. This intermediate was converted into rac-hirsutic acid C in seven more steps. The Stetter reaction is commonly used in sequence with the Paal-Knorr synthesis of furans and pyrroles, which a 1,4-dicarbonyl undergoes condensation with itself or in the presence of an amine under high temperature, acidic conditions. In 2001, Tius and coworkers reported the asymmetric total synthesis of roseophilin utilizing an intermolecular Stetter reaction to couple an aliphatic aldehyde with a cyclic enone. After ring-closing metathesis and alkene reduction, the 1,4-dicarbonyl product was converted to a pyrrole via the Paal-Knorr synthesis and further elaborated to the natural product. In 2004, a one-pot coupling-isomerization-Stetter-Paal Knorr sequence was reported. This procedure first utilizes palladium cross-coupling chemistry to couple aryl halides with propargylic alcohols to give α,β-unsaturated ketones, which can then undergo a Stetter reaction with an aldehyde. Once the 1,4-dicarbonyl compound is formed, heating in the presence of acid will give the furan, while heating in the presence of ammonium chloride and acid will give the pyrrole. The entire sequence is performed in one-pot with no work-up or purification between steps. Ma and coworkers developed an alternative method for accessing furans utilizing the Stetter reaction. In their report, 3-aminofurans are synthesized under Stetter conditions for coupling aromatic aldehydes with dimethyl acetylenedicarboxylate (DMAD), whereby the thiazolium ylide is hydrolyzed by the aromatization of the furan product. As the thiazolium is destroyed under these conditions, it is not catalytic and must be used in stoichiometric quantities. They further elaborated on this work by developing a method in which 2-aminofurans are synthesized by cyclization onto a nitrile. In this method, the thiazolium ylide is employed catalytically and the free amine product is generated. Related Baylis–Hillman reaction References Addition reactions Carbon-carbon bond forming reactions Name reactions
Stetter reaction
[ "Chemistry" ]
3,053
[ "Coupling reactions", "Name reactions", "Carbon-carbon bond forming reactions", "Organic reactions" ]
4,437,283
https://en.wikipedia.org/wiki/Mass%20transfer%20coefficient
In engineering, the mass transfer coefficient is a diffusion rate constant that relates the mass transfer rate, mass transfer area, and concentration change as driving force: n_A = k_c · A · ΔC_A, where k_c is the mass transfer coefficient [mol/(s·m2)/(mol/m3)], or m/s, n_A is the mass transfer rate [mol/s], A is the effective mass transfer area [m2], and ΔC_A is the driving force concentration difference [mol/m3]. This can be used to quantify the mass transfer between phases, immiscible and partially miscible fluid mixtures (or between a fluid and a porous solid). Quantifying mass transfer allows for design and manufacture of separation process equipment that can meet specified requirements, estimate what will happen in real-life situations (e.g., a chemical spill), etc. Mass transfer coefficients can be estimated from many different theoretical equations, correlations, and analogies that are functions of material properties, intensive properties and flow regime (laminar or turbulent flow). Selection of the most applicable model is dependent on the materials and the system, or environment, being studied. Mass transfer coefficient units The units of the coefficient follow from the defining relation: (mol/s)/(m2·mol/m3) = m/s. Note that the units will vary based upon which units the driving force is expressed in. The driving force shown here as ΔC_A is expressed in units of moles per unit of volume, but in some cases the driving force is represented by other measures of concentration with different units. For example, the driving force may be partial pressures when dealing with mass transfer in a gas phase and thus use units of pressure. See also Mass transfer Separation process Sieving coefficient References Transport phenomena
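To make the defining relation above concrete, here is a small Python sketch that evaluates the transfer rate from an assumed coefficient, area, and concentration difference. The function name and all numerical values are illustrative assumptions chosen for demonstration, not data for any particular system.

```python
def mass_transfer_rate(k_c, area, delta_c):
    """Molar transfer rate n_A = k_c * A * delta_C_A.

    k_c     : mass transfer coefficient [m/s]
    area    : effective mass transfer area [m^2]
    delta_c : driving-force concentration difference [mol/m^3]
    returns : molar transfer rate [mol/s]
    """
    return k_c * area * delta_c

# Illustrative values only (assumed, not measured):
k_c = 2.0e-5   # m/s, roughly the order of magnitude often quoted for liquid-phase transfer
area = 1.5     # m^2 of interfacial area
delta_c = 40.0 # mol/m^3 concentration difference
print(mass_transfer_rate(k_c, area, delta_c))  # 1.2e-3 mol/s
```

The calculation also shows why the coefficient carries units of m/s when the driving force is a volumetric concentration: multiplying m/s by m2 and mol/m3 leaves mol/s.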
Mass transfer coefficient
[ "Physics", "Chemistry", "Engineering" ]
342
[ "Transport phenomena", "Chemical engineering", "Physical phenomena" ]
4,438,763
https://en.wikipedia.org/wiki/Gillespie%20algorithm
In probability theory, the Gillespie algorithm (or the Doob–Gillespie algorithm or stochastic simulation algorithm, the SSA) generates a statistically correct trajectory (possible solution) of a stochastic equation system for which the reaction rates are known. It was created by Joseph L. Doob and others (circa 1945), presented by Dan Gillespie in 1976, and popularized in 1977 in a paper in which he used it to simulate chemical or biochemical systems of reactions efficiently and accurately using limited computational power (see stochastic simulation). As computers have become faster, the algorithm has been used to simulate increasingly complex systems. The algorithm is particularly useful for simulating reactions within cells, where the number of reagents is low and keeping track of every single reaction is computationally feasible. Mathematically, it is a variant of a dynamic Monte Carlo method and similar to the kinetic Monte Carlo methods. It is used heavily in computational systems biology. History The process that led to the algorithm involved several important steps. In 1931, Andrei Kolmogorov introduced the differential equations corresponding to the time-evolution of stochastic processes that proceed by jumps, today known as Kolmogorov equations (Markov jump process) (a simplified version is known as the master equation in the natural sciences). It was William Feller, in 1940, who found the conditions under which the Kolmogorov equations admitted (proper) probabilities as solutions. In his Theorem I (1940 work) he establishes that the time-to-the-next-jump is exponentially distributed and the probability of the next event is proportional to the rate. As such, he established the relation of Kolmogorov's equations with stochastic processes. Later, Doob (1942, 1945) extended Feller's solutions beyond the case of pure-jump processes. The method was implemented in computers by David George Kendall (1950) using the Manchester Mark 1 computer and later used by Maurice S. Bartlett (1953) in his studies of epidemic outbreaks. Gillespie (1977) obtained the algorithm in a different manner by making use of a physical argument. Idea Mathematics In a reaction chamber, there are a finite number of molecules. At each infinitesimal slice of time, a single reaction might take place. Naively, we could simulate the trajectory of the reaction chamber by discretizing time and then simulating each time-step. However, there might be long stretches of time during which no reaction occurs. The Gillespie algorithm instead samples a random waiting time until some reaction occurs, then takes another random sample to decide which reaction has occurred. The key assumptions are that each reaction is Markovian in time and that there are no correlations between reactions. Given these two assumptions, the random waiting time for some reaction is exponentially distributed, with the exponential rate being the sum of the individual reactions' rates. Validity in biochemical simulations Traditional continuous and deterministic biochemical rate equations do not accurately predict cellular reactions since they rely on bulk reactions that require the interactions of millions of molecules. They are typically modeled as a set of coupled ordinary differential equations. In contrast, the Gillespie algorithm allows a discrete and stochastic simulation of a system with few reactants because every reaction is explicitly simulated. 
A trajectory corresponding to a single Gillespie simulation represents an exact sample from the probability mass function that is the solution of the master equation. The physical basis of the algorithm is the collision of molecules within a reaction vessel. It is assumed that collisions are frequent, but collisions with the proper orientation and energy are infrequent. It is assumed that the reaction environment is well mixed. Algorithm A review (Gillespie, 2007) outlines three different, but equivalent formulations: the direct, first-reaction, and first-family methods, whereby the former two are special cases of the latter. The formulation of the direct and first-reaction methods is centered on performing the usual Monte Carlo inversion steps on the so-called "fundamental premise of stochastic chemical kinetics", which mathematically is the function p(τ, j | x, t) = a_j(x) exp(-a_0(x) τ), where each a_j(x) is the propensity function of an elementary reaction, whose argument is x, the vector of species counts, and a_0(x) is the sum of all the propensities. The parameter τ is the time to the next reaction (or sojourn time), and t is the current time. To paraphrase Gillespie, this expression is read as "the probability, given the state x at time t, that the system's next reaction will occur in the infinitesimal time interval [t + τ, t + τ + dτ), and will be of stoichiometry corresponding to the j-th reaction". This formulation provides a window to the direct and first-reaction methods by implying that τ is an exponentially-distributed random variable with rate a_0(x), and that j is "a statistically independent integer random variable with point probabilities a_j(x)/a_0(x)". Thus, the Monte Carlo generating method is simply to draw two pseudorandom numbers, r_1 and r_2, uniformly distributed on the unit interval, and compute τ = (1/a_0(x)) ln(1/r_1) and j as the smallest integer satisfying a_1(x) + a_2(x) + ... + a_j(x) > r_2 a_0(x). Utilizing this generating method for the sojourn time and next reaction, the direct method algorithm is stated by Gillespie as 1. Initialize the time t = t_0 and the system's state x = x_0. 2. With the system in state x at time t, evaluate all the propensities a_j(x) and their sum a_0(x). 3. Calculate the above values of τ and j. 4. Effect the next reaction by replacing t with t + τ and x with x + ν_j. 5. Record (x, t) as desired. Return to step 2, or else end the simulation. Here ν_j is the state-change vector of the j-th reaction, so replacing x with x + ν_j adds the corresponding changes in species counts. This family of algorithms is computationally expensive and thus many modifications and adaptations exist, including the next reaction method (Gibson & Bruck), tau-leaping, as well as hybrid techniques where abundant reactants are modeled with deterministic behavior. Adapted techniques generally compromise the exactitude of the theory behind the algorithm as it connects to the master equation, but offer reasonable realizations for greatly improved timescales. The computational cost of exact versions of the algorithm is determined by the coupling class of the reaction network. In weakly coupled networks, the number of reactions that is influenced by any other reaction is bounded by a small constant. In strongly coupled networks, a single reaction firing can in principle affect all other reactions. An exact version of the algorithm with constant-time scaling for weakly coupled networks has been developed, enabling efficient simulation of systems with very large numbers of reaction channels (Slepoy Thompson Plimpton 2008). The generalized Gillespie algorithm that accounts for the non-Markovian properties of random biochemical events with delay has been developed by Bratsun et al. 2005 and independently Barrio et al. 2006, as well as (Cai 2007). See the articles cited below for details. Partial-propensity formulations, as developed independently by both Ramaswamy et al. 
(2009, 2010) and Indurkhya and Beal (2010), are available to construct a family of exact versions of the algorithm whose computational cost is proportional to the number of chemical species in the network, rather than the (larger) number of reactions. These formulations can reduce the computational cost to constant-time scaling for weakly coupled networks and to scale at most linearly with the number of species for strongly coupled networks. A partial-propensity variant of the generalized Gillespie algorithm for reactions with delays has also been proposed (Ramaswamy Sbalzarini 2011). The use of partial-propensity methods is limited to elementary chemical reactions, i.e., reactions with at most two different reactants. Every non-elementary chemical reaction can be equivalently decomposed into a set of elementary ones, at the expense of a linear (in the order of the reaction) increase in network size. Examples Reversible binding of A and B to form AB dimers A simple example may help to explain how the Gillespie algorithm works. Consider a system of molecules of two types, A and B. In this system, A and B reversibly bind together to form AB dimers, such that two reactions are possible: either A and B react to form an AB dimer, or an AB dimer dissociates into A and B. The rate constant for a given single A molecule reacting with a given single B molecule is k_D, and the rate constant for an AB dimer breaking up is k_B. If at time t there is one molecule of each type then the rate of dimer formation is k_D, while if there are n_A molecules of type A and n_B molecules of type B, the rate of dimer formation is k_D n_A n_B. If there are n_AB dimers then the rate of dimer dissociation is k_B n_AB. The total reaction rate, R_tot, at time t is then given by R_tot = k_D n_A n_B + k_B n_AB. So, we have now described a simple model with two reactions. This definition is independent of the Gillespie algorithm. We will now describe how to apply the Gillespie algorithm to this system. In the algorithm, we advance forward in time in two steps: calculating the time to the next reaction, and determining which of the possible reactions the next reaction is. Reactions are assumed to be completely random, so if the total reaction rate at a time t is R_tot, then the time, δt, until the next reaction occurs is a random number drawn from an exponential distribution with mean 1/R_tot. Thus, we advance time from t to t + δt. The probability that this reaction is an A molecule binding to a B molecule is simply the fraction of the total rate due to this type of reaction, i.e., k_D n_A n_B / R_tot. The probability that the next reaction is an AB dimer dissociating is just 1 minus that. So with these two probabilities we either form a dimer by reducing n_A and n_B by one and increasing n_AB by one, or we dissociate a dimer and increase n_A and n_B by one and decrease n_AB by one. Now we have both advanced time to t + δt, and performed a single reaction. The Gillespie algorithm just repeats these two steps as many times as needed to simulate the system for however long we want (i.e., for as many reactions). The result of a Gillespie simulation that starts with n_A = n_B = 10 and n_AB = 0 at t = 0 is shown at the right. For the parameter values used there, on average there are 8 dimers and 2 each of free A and B, but due to the small numbers of molecules fluctuations around these values are large. The Gillespie algorithm is often used to study systems where these fluctuations are important. That was just a simple example, with two reactions; a minimal code sketch of this example is given below. More complex systems with more reactions are handled in the same way. 
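A minimal Python sketch of the direct method applied to this dimerization example follows. The function name, the rate-constant values, and the end time are assumptions chosen purely for demonstration; they are not the values behind the figure discussed above.

```python
import random

def gillespie_dimerization(n_A, n_B, n_AB, k_D, k_B, t_max, seed=None):
    """Direct-method Gillespie simulation of A + B <-> AB.

    Returns a list of (time, n_A, n_B, n_AB) tuples, one per reaction event.
    """
    rng = random.Random(seed)
    t = 0.0
    trajectory = [(t, n_A, n_B, n_AB)]
    while t < t_max:
        # Propensities (reaction rates) in the current state.
        a_form = k_D * n_A * n_B   # A + B -> AB
        a_diss = k_B * n_AB        # AB -> A + B
        a_total = a_form + a_diss
        if a_total == 0:
            break                  # no reaction can occur any more
        # Waiting time: exponentially distributed with mean 1/a_total.
        t += rng.expovariate(a_total)
        # Pick the reaction with probability proportional to its rate.
        if rng.random() * a_total < a_form:
            n_A, n_B, n_AB = n_A - 1, n_B - 1, n_AB + 1
        else:
            n_A, n_B, n_AB = n_A + 1, n_B + 1, n_AB - 1
        trajectory.append((t, n_A, n_B, n_AB))
    return trajectory

# Example run; the rate constants and end time are illustrative values only.
if __name__ == "__main__":
    traj = gillespie_dimerization(n_A=10, n_B=10, n_AB=0,
                                  k_D=0.1, k_B=0.5, t_max=50.0, seed=1)
    print(traj[-1])  # final (time, n_A, n_B, n_AB)
```

Adding further reactions only means extending the list of propensities and the corresponding state changes, which is exactly the generalization described next.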
All reaction rates must be calculated at each time step, and one chosen with probability equal to its fractional contribution to the rate. Time is then advanced as in this example. References Further reading (Slepoy Thompson Plimpton 2008): (Bratsun et al. 2005): (Barrio et al. 2006): (Cai 2007): (Barnes Chu 2010): (Ramaswamy González-Segredo Sbalzarini 2009): (Ramaswamy Sbalzarini 2010): (Indurkhya Beal 2010): (Ramaswamy Sbalzarini 2011): (Yates Klingbeil 2013): Chemical kinetics Computational chemistry Monte Carlo methods Stochastic simulation
Gillespie algorithm
[ "Physics", "Chemistry" ]
2,239
[ "Chemical reaction engineering", "Monte Carlo methods", "Computational physics", "Theoretical chemistry", "Computational chemistry", "Chemical kinetics" ]
4,439,998
https://en.wikipedia.org/wiki/Uranium%20borohydride
Uranium borohydride is the inorganic compound with the empirical formula U(BH4)4. Two polymeric forms are known, as well as a monomeric derivative that exists in the gas phase. Because the polymers convert to the gaseous form at mild temperatures, uranium borohydride once attracted much attention. It is a green solid. Structure It is a homoleptic coordination complex with borohydride (also called tetrahydroborate). These anions can serve as bidentate (κ2-BH4−) bridges between two uranium atoms or as tridentate ligands (κ3-BH4−) on single uranium atoms. In the solid state, a polymeric form exists that has a 14-coordinate structure with two tridentate terminal groups and four bidentate bridging groups. Gaseous U(BH4)4 features a monomeric 12-coordinate uranium, with four κ3-BH4− ligands, which envelop the metal, conferring volatility. Preparation This compound was first prepared by treating uranium tetrafluoride with aluminium borohydride: UF4 + 2 Al(BH4)3 → U(BH4)4 + 2 Al(BH4)F2 It may also be prepared by the solid-state reaction of uranium tetrachloride with lithium borohydride: UCl4 + 4 LiBH4 → U(BH4)4 + 4 LiCl Although solid U(BH4)4 is a polymer, it undergoes cracking, converting to the monomer. The related methylborohydride complex U(BH3CH3)4 is monomeric as a solid and hence more volatile. History During the Manhattan Project, the need arose to find volatile compounds of uranium suitable for use in the diffusion separation of uranium isotopes. Uranium borohydride is, after uranium hexafluoride, the most volatile compound of uranium known, with an appreciable vapor pressure at 60 °C. Uranium borohydride was discovered by Hermann Irving Schlesinger and Herbert C. Brown, who also discovered sodium borohydride. Uranium hexafluoride is corrosive, which led to serious consideration of the borohydride. However, by the time the synthesis method was finalized, the problems related to uranium hexafluoride had been solved. Borohydrides are nonideal ligands for isotope separations, since there are isotopes of boron that occur naturally in high abundance: 10B (20%) and 11B (80%), while fluorine-19 is the only isotope of fluorine that occurs in nature in more than trace quantities. References Uranium(IV) compounds Borohydrides Inorganic polymers Coordination polymers
Uranium borohydride
[ "Chemistry" ]
584
[ "Inorganic polymers", "Inorganic compounds" ]
4,440,593
https://en.wikipedia.org/wiki/Histone%20methylation
Histone methylation is a process by which methyl groups are transferred to amino acids of histone proteins that make up nucleosomes, which the DNA double helix wraps around to form chromosomes. Methylation of histones can either increase or decrease transcription of genes, depending on which amino acids in the histones are methylated, and how many methyl groups are attached. Methylation events that weaken chemical attractions between histone tails and DNA increase transcription because they enable the DNA to uncoil from nucleosomes so that transcription factor proteins and RNA polymerase can access the DNA. This process is critical for the regulation of gene expression that allows different cells to express different genes. Function Histone methylation, as a mechanism for modifying chromatin structure, is associated with stimulation of neural pathways known to be important for formation of long-term memories and learning. Histone methylation is crucial for almost all phases of animal embryonic development. Animal models have shown methylation and other epigenetic regulation mechanisms to be associated with conditions of aging, neurodegenerative diseases, and intellectual disability (Rubinstein–Taybi syndrome, X-linked intellectual disability). Misregulation of H3K4, H3K27, and H4K20 is associated with cancers. This modification alters the properties of the nucleosome and affects its interactions with other proteins, particularly with regard to gene transcription processes. Histone methylation can be associated with either transcriptional repression or activation. For example, trimethylation of histone H3 at lysine 4 (H3K4me3) is an active mark for transcription and is upregulated in the hippocampus one hour after contextual fear conditioning in rats. However, dimethylation of histone H3 at lysine 9 (H3K9me2), a signal for transcriptional silencing, is increased after exposure to either the fear conditioning or a novel environment alone. Methylation of some lysine (K) and arginine (R) residues of histones results in transcriptional activation. Examples include methylation of lysine 4 of histone 3 (H3K4me1), and arginine (R) residues on H3 and H4. Addition of methyl groups to histones by histone methyltransferases can either activate or further repress transcription, depending on the amino acid being methylated and the presence of other methyl or acetyl groups in the vicinity. Most often, for histone lysine methylations, histone methyltransferases (which methylate) are characterized as "writers", whereas demethylases are characterized as "erasers". Mechanism The fundamental unit of chromatin, called a nucleosome, contains DNA wound around a protein octamer. This octamer consists of two copies each of four histone proteins: H2A, H2B, H3, and H4. Each one of these proteins has a tail extension, and these tails are the targets of nucleosome modification by methylation. Histones can be methylated on lysine (K) and arginine (R) residues only, but methylation is most commonly observed on lysine residues of histone tails H3 and H4. The tail end furthest from the nucleosome core is the N-terminus (residues are numbered starting at this end). Common sites of methylation associated with gene activation include H3K4, H3K36, and H3K79. Common sites for gene inactivation include H3K9 and H3K27. 
Studies of these sites have found that methylation of histone tails at different residues serves as a marker for the recruitment of various proteins or protein complexes that serve to regulate chromatin activation or inactivation. Lysine and arginine residues both contain amino groups, which confer basic and hydrophobic characteristics. Lysine is able to be mono-, di-, or trimethylated, with a methyl group replacing each hydrogen of its NH3+ group. With a free NH2 and NH2+ group, arginine is able to be mono- or dimethylated. This dimethylation can occur asymmetrically on the NH2 group or symmetrically with one methylation on each group. Each addition of a methyl group on each residue requires a specific set of protein enzymes with various substrates and cofactors. Generally, methylation of an arginine residue requires a complex including protein arginine methyltransferase (PRMT), while lysine requires a specific histone methyltransferase (HMT), usually containing an evolutionarily conserved SET domain. Different degrees of residue methylation can confer different functions, as exemplified in the methylation of the commonly studied H4K20 residue. Monomethylated H4K20 (H4K20me1) is involved in the compaction of chromatin and therefore transcriptional repression. However, H4K20me2 is vital in the repair of damaged DNA. When dimethylated, the residue provides a platform for the binding of protein 53BP1, which is involved in the repair of double-stranded DNA breaks by non-homologous end joining. H4K20me3 is observed to be concentrated in heterochromatin, and reductions in this trimethylation are observed in cancer progression. Therefore, H4K20me3 serves an additional role in chromatin repression. Repair of DNA double-stranded breaks in chromatin also occurs by homologous recombination, which likewise involves histone methylation (H3K9me3) to facilitate access of the repair enzymes to the sites of damage. Histone methyltransferase The genome is tightly condensed into chromatin, which needs to be loosened for transcription to occur. In order to halt the transcription of a gene, the DNA must be wound tighter. This can be done by modifying histones at certain sites by methylation. Histone methyltransferases are enzymes which transfer methyl groups from S-Adenosyl methionine (SAM) onto the lysine or arginine residues of the H3 and H4 histones. There are instances of the core globular domains of histones being methylated as well. The histone methyltransferases are specific to either lysine or arginine. The lysine-specific transferases are further divided according to whether they have a SET domain or a non-SET domain. These domains specify exactly how the enzyme catalyzes the transfer of the methyl group from SAM to the transfer protein and further to the histone residue. The methyltransferases can add one to three methyl groups to the target residues. These methyls that are added to the histones act to regulate transcription by blocking or encouraging DNA access to transcription factors. In this way the integrity of the genome and epigenetic inheritance of genes are under the control of the actions of histone methyltransferases. Histone methylation is key in maintaining the integrity of the genome and in determining the genes that are expressed by cells, thus giving the cells their identities. Methylated histones can either repress or activate transcription. 
For example, H3K4me2, H3K4me3, and H3K79me3 are generally associated with transcriptional activity, whereas H3K9me2, H3K9me3, H3K27me2, H3K27me3, and H4K20me3 are associated with transcriptional repression. Epigenetics Modifications made on the histone have an effect on the genes that are expressed in a cell, and this is the case when methyls are added to the histone residues by the histone methyltransferases. Histone methylation plays an important role in the assembly of the heterochromatin mechanism and the maintenance of gene boundaries between genes that are transcribed and those that are not. These changes are passed down to progeny and can be affected by the environment that the cells are subject to. Epigenetic alterations are reversible, meaning that they can be targets for therapy. The activities of histone methyltransferases are offset by the activity of histone demethylases. This allows for the switching on or off of transcription by reversing pre-existing modifications. It is necessary for the activities of both histone methyltransferases and histone demethylases to be tightly regulated. Misregulation of either can lead to changes in gene expression that increase susceptibility to disease. Many cancers arise from the inappropriate epigenetic effects of misregulated methylation. However, because these processes are at times reversible, there is interest in utilizing their activities in concert with anti-cancer therapies. In X chromosome inactivation In female organisms, a sperm containing an X chromosome fertilizes the egg, giving the embryo two copies of the X chromosome. Females, however, do not initially require both copies of the X chromosome, as it would only double the amount of protein products transcribed, as described by the hypothesis of dosage compensation. The paternal X chromosome is quickly inactivated during the first few divisions. This inactive X chromosome (Xi) is packed into an incredibly tight form of chromatin called heterochromatin. This packing occurs due to the methylation of the different lysine residues that help form different histones. In humans, X inactivation is a random process that is mediated by the non-coding RNA XIST. Although methylation of lysine residues occurs on many different histones, the most characteristic of Xi occurs on the ninth lysine of the third histone (H3K9). While a single methylation of this region allows for the genes bound to remain transcriptionally active, in heterochromatin this lysine residue is often methylated twice or three times, H3K9me2 or H3K9me3 respectively, to ensure that the DNA bound is inactive. More recent research has shown that H3K27me3 and H4K20me1 are also common in early embryos. Other methylation markings associated with transcriptionally active areas of DNA, H3K4me2 and H3K4me3, are missing from the Xi chromosome, along with many acetylation markings. Although it was known that certain Xi histone methylation markings stayed relatively constant between species, it has recently been discovered that different organisms and even different cells within a single organism can have different markings for their X inactivation. Through histone methylation, there is genetic imprinting, so that the same X homolog stays inactivated through chromosome replications and cell divisions. Mutations Because histone methylation regulates much of which genes become transcribed, even slight changes to the methylation patterns can have dire effects on the organism. 
Mutations that increase or decrease methylation cause great changes in gene regulation, while mutations in enzymes such as methyltransferases and demethylases can completely alter which proteins are expressed in a given cell. Overmethylation of a chromosome can cause certain genes that are necessary for normal cell function to become inactivated. In a certain yeast strain, Saccharomyces cerevisiae, a mutation that causes three lysine residues on the third histone, H3K4, H3K36, and H3K79, to become methylated causes a delay in the mitotic cell cycle, as many genes required for this progression are inactivated. This extreme mutation leads to the death of the organism. It has been discovered that the deletion of the genes that would eventually allow for the production of histone methyltransferase allows this organism to live, as its lysine residues are not methylated. In recent years it has come to the attention of researchers that many types of cancer are caused largely by epigenetic factors. Cancer can be caused in a variety of ways due to differential methylation of histones. Since the discovery of oncogenes as well as tumor suppressor genes, it has been known that a large factor in causing and suppressing cancer lies within our own genome. If areas around oncogenes become unmethylated, these cancer-causing genes have the potential to be transcribed at an alarming rate. Opposite of this is the methylation of tumor suppressor genes. In cases where the areas around these genes were highly methylated, the tumor suppressor gene was not active and therefore cancer was more likely to occur. These changes in methylation pattern are often due to mutations in methyltransferases and demethylases. Other types of mutations, in proteins such as isocitrate dehydrogenase 1 (IDH1) and isocitrate dehydrogenase 2 (IDH2), can cause the inactivation of histone demethylases, which in turn can lead to a variety of cancers, including gliomas and leukemias, depending on the cells in which the mutation occurs. One-carbon metabolism modifies histone methylation In one-carbon metabolism, the amino acids glycine and serine are converted via the folate and methionine cycles to nucleotide precursors and SAM. Multiple nutrients fuel one-carbon metabolism, including glucose, serine, glycine, and threonine. High levels of the methyl donor SAM influence histone methylation, which may explain how high SAM levels prevent malignant transformation. See also Histone code Histone acetylation and deacetylation Histone methyltransferase Methylation Methyllysine Genetic imprinting DNA methylation References Further reading Orouji, Elias & Utikal, Jochen. (2018). Tackling malignant melanoma epigenetically: histone lysine methylation. Clinical Epigenetics 2018 10:145 https://clinicalepigeneticsjournal.biomedcentral.com/articles/10.1186/s13148-018-0583-z Gozani, O., & Shi, Y. (2014). Histone Methylation in Chromatin Signaling. In: Fundamentals of Chromatin (pp. 213–256). Springer New York. Molecular genetics Cellular processes Epigenetics Post-translational modification
Histone methylation
[ "Chemistry", "Biology" ]
3,030
[ "Gene expression", "Biochemical reactions", "Post-translational modification", "Molecular genetics", "Cellular processes", "Molecular biology" ]
16,783,683
https://en.wikipedia.org/wiki/Magnetic%20nanoring
Magnetic nanorings are a form of magnetic nanoparticle, typically made of iron oxide in the shape of a ring. They have multiple applications in the medical field and in computer engineering. In experimental trials, they provide a more localized form of cancer treatment by attacking individual cells instead of a general cancerous region of the body, as well as a clearer image of tumors by improving the accuracy of cancer cell identification. They also allow for a more efficient and smaller MRAM (a memory storage unit in a computer), which helps reduce the size of the technology that houses it. Magnetic nanorings can be produced in various compositions, shapes, and sizes by using hematite nanorings as the base structure. Applications Cancer treatment Magnetic nanorings have been experimentally shown to improve the accuracy of hyperthermia cancer treatment and cancer imaging. Hyperthermia Cancer Treatment Multiple studies have shown that magnetic nanorings improve magnetic hyperthermia cancer treatment by targeting cancer cells and limiting the amount of environmental heating, thus creating a more tailored treatment. Magnetic hyperthermia is an experimental subdivision of hyperthermia cancer treatment which utilizes cancer cells' vulnerability to high temperatures, typically 40-44 degrees Celsius, to initiate cell death. Magnetic hyperthermia utilizes the heating properties of magnetic hysteresis by injecting magnetic nanoparticles into the cancerous area and then applying an alternating magnetic field to generate heat. The use of magnetic nanoparticles is particularly useful because it can reach regions of the body that surface treatments (such as microwaves, ultrasounds, and radiation) cannot, and the particles can remain in the cancerous region for an extended period of time, allowing for multiple treatment sessions per injection. In addition, there is easy control of the amount of heat based on the size and shape of the magnetic nanoparticle, and it can temporarily bond with antibodies for effective targeting of the tumor. While there may be concern regarding acute toxicity from the use of foreign metals, the dose is well below the acute toxicity range, and studies have suggested it is safer than other methods because of its accuracy and effectiveness within a lower temperature range. Studies have also shown that magnetic nanoring based hyperthermia treatment can be used in conjunction with immune checkpoint blockade techniques, which are a way to trigger the body's immune system to attack the cancerous region. Specifically, inducing the Fenton reaction can more effectively kill cancer cells and prevent new ones from growing. The Fenton reaction, a reaction involving iron ions, functions by transforming the acidic cancerous environment into a basic environment that is inhospitable to cancer cells. Consequently, iron-containing magnetic nanorings are particularly useful for cancer treatment. Past methods of magnetic hyperthermia cancer treatment used Superparamagnetic Iron Oxide Nanoparticles (SPIONs) in the shape of a sphere, which would nonspecifically heat the environment around the tumor, killing healthy cells. In comparison, Vortex Iron Oxide Particles (VIPs), a form of magnetic nanoring, allow for more controlled and precise intracellular hyperthermia. Intracellular hyperthermia occurs when the VIP enters the cell and heats it from the inside, allowing for an even more targeted form of hyperthermia. 
VIPs can also produce a magnetic vortex, which occurs when the magnetic moments (a measure of the intensity and direction of magnetism) of the VIPs curl inward under an alternating magnetic field. The curling-inward direction of the magnetic moments causes heat production only within the vortex, allowing for a more efficient and less harmful form of treatment. Cancer Imaging Magnetic nanorings have been shown in experiments to create clearer MRIs and photoacoustic images of tumors. This form of magnetic nanoring contains gold and is shaped like a wreath. Once again, the magnetic nanoring more effectively identifies cancer cells than previous methods because the wreath shape will disassemble in response to a magnetic field and high levels of glutathione, a chemical found at elevated levels in cancer cells, which allows for higher-contrast imaging. MRAM Magnetic nanorings are used in MRAM (magnetic random access memory) because of their capability to rapidly switch currents. Magnetic nanorings replaced GMR (giant magnetoresistance) particles in the CIMS (current induced magnetization switching) of MRAM because the long oval or rectangular shape of GMR particles would cause interference with neighboring GMR particles. This interference would create magnetic noise, thus decreasing the effectiveness of MRAM. In comparison, the symmetrical structure of magnetic nanorings reduces the interactions with neighboring nanorings, thus creating a more consistent and reliable MRAM. The smaller size of the nanorings also allows for decreased power consumption and the creation of a more compact MRAM, ultimately decreasing the size of electronics. Synthesis Magnetic nanorings are created through hydrothermal synthesis (a synthesis reaction that occurs at high temperatures), with microwaves used to facilitate a faster reaction rate. α-Fe2O3 (hematite) Almost all forms of magnetic nanorings are formed by modifying hematite (α-Fe2O3), which is created by combining aqueous iron(III) chloride and aqueous ammonium dihydrogen phosphate at 220 degrees Celsius. Altering the amount of reactants controls the shape and size of the produced hematite. FeCl3(aq) + NH4H2PO4(aq) -> α-Fe2O3(s) Fe3O4 and γ-Fe2O3 Fe3O4 is produced by combining hematite with hydrogen gas at 420 degrees Celsius for 120 minutes. α-Fe2O3(s) + H2(g) -> Fe3O4(s) γ-Fe2O3 is produced by cooling Fe3O4 to 210 degrees Celsius in air for 120 minutes. Fe3O4(s) + air -> γ-Fe2O3(s) MFe2O4 M is a metal with a 2+ charge, such as Co, Mn, Ni, or Cu. MFe2O4 is produced by mixing α-Fe2O3 with an aqueous solution of metal ions and hydroxide ions at 60 degrees Celsius; a metal hydroxide (M(OH)2) coating then forms on top of the hematite. The hematite with a metal hydroxide coating is then heated at 300 degrees Celsius for 30 minutes with hydrogen gas, and then heated again at 720 degrees Celsius for 3 hours with air to form MFe2O4. α-Fe2O3(s) + M(OH)2(aq) -> α-Fe2O3(s) + M(OH)2(s) α-Fe2O3(s) + M(OH)2(s) -> MFe2O4(s) See also Magnetic nanoparticles Hyperthermia therapy Magnetic hysteresis References Nanoelectronics
Magnetic nanoring
[ "Materials_science" ]
1,444
[ "Nanotechnology", "Nanoelectronics" ]
16,784,189
https://en.wikipedia.org/wiki/Oligomer%20restriction
Oligomer Restriction (abbreviated OR) is a procedure to detect an altered DNA sequence in a genome. A labeled oligonucleotide probe is hybridized to a target DNA, and then treated with a restriction enzyme. If the probe exactly matches the target, the restriction enzyme will cleave the probe, changing its size. If, however, the target DNA does not exactly match the probe, the restriction enzyme will have no effect on the length of the probe. The OR technique, now rarely performed, was closely associated with the development of the popular polymerase chain reaction (PCR) method. Example In part 1a of the schematic the oligonucleotide probe, labeled on its left end (asterisk), is shown on the top line. It is fully complementary to its target DNA (here taken from the human β-hemoglobin gene), as shown on the next line. Part of the probe includes the Recognition site for the restriction enzyme Dde I (underlined). In part 1b, the restriction enzyme has cleaved the probe and its target (Dde I leaves three bases unpaired at each end). The labeled end of the probe is now just 8 bases in length, and is easily separated by Gel electrophoresis from the uncut probe, which was 40 bases long. In part 2, the same probe is shown hybridized to a target DNA which includes a single base mutation (here the mutation responsible for Sickle Cell Anemia, or SCA). The mismatched hybrid no longer acts as a recognition site for the restriction enzyme, and the probe remains at its original length. History The Oligomer Restriction technique was developed as a variation of the Restriction Fragment Length Polymorphism (RFLP) assay method, with the hope of avoiding the laborious Southern blotting step used in RFLP analysis. OR was conceived by Randall Saiki and Henry Erlich in the early 1980s, working at Cetus Corporation in Emeryville, California. It was patented in 1984 and published in 1985, having been applied to the genomic mutation responsible for Sickle Cell Anemia. OR was soon replaced by the more general technique of Allele Specific Oligonucleotide (ASO) probes. Problems The Oligomer Restriction method was beset by a number of problems: It could be applied only to the small set of DNA polymorphisms which alter a restriction site, and only to those sites for which sequence information was known. Many of the known RFLP assays detected polymorphisms which were far away from their probe locations. It is difficult to label oligonucleotides to a level high enough to use them as probes for genomic DNA. This problem also plagued the development of ASO probes. It is difficult to design oligonucleotides and use them in a way that they become hybridization probes for just a single site within a genome. Binding to non-specific locations can often obscure the effect of the probe at the target location. Not all restriction enzymes have the desired specificity for their recognition sequence. Some can recognize and cut single-stranded DNA, and some show a low level of cleavage for mismatched sites. Even a small amount of non-specific cleavage can swamp the weak signal expected from the target sequence. It was difficult to design an OR method that included controls for both of the alleles being tested. In part 2 of the simplified example described above, the probe was not cleaved when hybridized to a mutant target. But the same (non-) result would occur for the large excess of unhybridized probe, as well as if any problem occurred preventing the complete digestion by restriction enzyme. 
In the actual method reported, a second non-polymorphic restriction site was used to cut all of the hybridized probe, and a second unlabeled oligonucleotide was used to 'block' the unhybridized probe. These controls would not have been available for other targets. Relationship to PCR Despite its limitations, the OR technique benefited from its close association with the development of the polymerase chain reaction. Kary Mullis, who also worked at Cetus, had synthesized the oligonucleotide probes being tested by Saiki and Erlich. Aware of the problems they were encountering, he envisioned an alternative method for analyzing the SCA mutation that would use components of the Sanger DNA sequencing technique. Realizing the difficulty of hybridizing an oligonucleotide primer to a single location in the genome, he considered using a second primer on the opposite strand. He then generalized that process and realized that repeated extensions of the two primers would lead to an exponential increase in the segment of DNA between the primers - a chain reaction of replication catalyzed by DNA polymerase. As Mullis encountered his own difficulties in demonstrating PCR, he joined an existing group of researchers that were addressing the problems with OR. Together, they developed the combined PCR-OR assay. Thus, OR became the first method used to analyze PCR-amplified genomic DNA. Mullis also encountered difficulties in publishing the basic idea of PCR (scientific journals rarely publish concepts without accompanying results). When his manuscript for the journal Nature was rejected, the basic description of PCR was hurriedly added to the paper originally intended to report the OR method (Mullis was also a co-author there). This OR paper thus became the first publication of PCR, and for several years would become the report most cited by other researchers. References DNA profiling techniques Molecular biology techniques Molecular biology Laboratory techniques
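Returning to the β-globin example described earlier, the toy Python sketch below mimics the OR readout: it checks whether the probe–target hybrid contains a Dde I recognition site (CTNAG) and reports the resulting labeled-fragment length. The 40-base probe and 8-base labeled fragment come from the description above; the 15-nt sequence windows and the function name are illustrative stand-ins, not the actual probe used in the original work.

```python
import re

DDE_I_SITE = re.compile(r"CT[ACGT]AG")  # Dde I recognizes C^TNAG

def labeled_fragment_length(hybrid_seq, probe_len=40, cut_fragment=8):
    """Return the length of the labeled probe fragment after Dde I digestion.

    hybrid_seq   : probe/target hybrid sequence around the polymorphic site
    probe_len    : full probe length (40 nt in the example above)
    cut_fragment : labeled fragment length if the site is cleaved (8 nt)
    """
    if DDE_I_SITE.search(hybrid_seq):
        return cut_fragment   # site present -> probe is cut, short labeled fragment
    return probe_len          # site absent (e.g. sickle mutation) -> probe stays full length

# Hypothetical 15-nt windows around beta-globin codon 6 (illustrative only):
wild_type = "ACCTGAGGAGAAGTC"   # contains CTGAG, a Dde I site
sickle    = "ACCTGTGGAGAAGTC"   # the A->T mutation destroys the site
print(labeled_fragment_length(wild_type))  # 8  -> cleaved
print(labeled_fragment_length(sickle))     # 40 -> uncleaved
```

The gel-readable size difference between the 8-base and 40-base labeled species is what the assay ultimately detects.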
Oligomer restriction
[ "Chemistry", "Biology" ]
1,155
[ "Genetics techniques", "DNA profiling techniques", "Molecular biology techniques", "nan", "Molecular biology", "Biochemistry" ]
16,785,076
https://en.wikipedia.org/wiki/Infiltration%20basin
An infiltration basin (or recharge basin) is a form of engineered sump or percolation pond that is used to manage stormwater runoff, prevent flooding and downstream erosion, and improve water quality in an adjacent river, stream, lake or bay. It is essentially a shallow artificial pond that is designed to infiltrate stormwater through permeable soils into the groundwater aquifer. Infiltration basins do not release water except by infiltration, evaporation or emergency overflow during flood conditions. It is distinguished from a detention basin, sometimes called a dry pond, which is designed to discharge to a downstream water body (although it may incidentally infiltrate some of its volume to groundwater); and from a retention basin, which is designed to include a permanent pool of water. Design considerations Infiltration basins must be carefully designed to infiltrate the soil on a given site at a rate that will not cause flooding. They may be less effective in areas with: High groundwater levels, close to the infiltrating surface Compacted soils High levels of sediment in stormwater High clay soil content. At some sites infiltration basins have worked effectively where the installation also includes an extended detention basin as a pretreatment stage, to remove sediment. The basins may fail where they cannot be frequently maintained, and their use is discouraged in some areas of the United States. For example, they are not recommended for use in the U.S. state of Georgia, which has many areas with high clay soil content, unless soil on the particular site is modified ("engineered soil") during construction, to improve the infiltration characteristics. See also Dry well French drain Percolation trench Rain garden Sustainable urban drainage systems Septic drain field Tree box filter Semicircular bund References External links Maryland Stormwater Design Manual - See Section 3.3 for Infiltration Feasibility Criteria & Design Diagrams International Stormwater BMP Database - Performance Data on Urban Stormwater Best Management Practices Stormwater management Tools: Model for Urban Stormwater Improvement Conceptualisation (MUSIC) Environmental engineering Hydrology Infrastructure Stormwater management Hydraulic structures Drainage
Infiltration basin
[ "Chemistry", "Engineering", "Environmental_science" ]
423
[ "Hydrology", "Water treatment", "Stormwater management", "Chemical engineering", "Water pollution", "Construction", "Civil engineering", "Environmental engineering", "Infrastructure" ]
7,690,175
https://en.wikipedia.org/wiki/RNA-dependent%20RNA%20polymerase
RNA-dependent RNA polymerase (RdRp) or RNA replicase is an enzyme that catalyzes the replication of RNA from an RNA template. Specifically, it catalyzes synthesis of the RNA strand complementary to a given RNA template. This is in contrast to typical DNA-dependent RNA polymerases, which all organisms use to catalyze the transcription of RNA from a DNA template. RdRp is an essential protein encoded in the genomes of most RNA-containing viruses that lack a DNA stage, including SARS-CoV-2. Some eukaryotes also contain RdRps, which are involved in RNA interference and differ structurally from viral RdRps. History Viral RdRps were discovered in the early 1960s from studies on mengovirus and polio virus when it was observed that these viruses were not sensitive to actinomycin D, a drug that inhibits cellular DNA-directed RNA synthesis. This lack of sensitivity suggested the action of a virus-specific enzyme that could copy RNA from an RNA template. Distribution RdRps are highly conserved in viruses and are related to telomerase, though the reason for this was an ongoing question as of 2009. The similarity led to speculation that viral RdRps are ancestral to human telomerase. The most famous example of RdRp is in the polio virus. The viral genome is composed of RNA, which enters the cell through receptor-mediated endocytosis. From there, the RNA acts as a template for complementary RNA synthesis. The complementary strand acts as a template for the production of new viral genomes that are packaged and released from the cell, ready to infect more host cells. The advantage of this method of replication is that no DNA stage complicates replication. The disadvantage is that no 'back-up' DNA copy is available. Many RdRps associate tightly with membranes, making them difficult to study. The best-known RdRps are polioviral 3Dpol, vesicular stomatitis virus L, and hepatitis C virus NS5B protein. Many eukaryotes have RdRps that are involved in RNA interference: these amplify microRNAs and small temporal RNAs and produce double-stranded RNA using small interfering RNAs as primers. These RdRps are used in defense mechanisms and can be appropriated by RNA viruses. Their evolutionary history predates the divergence of major eukaryotic groups. Replication RdRp differs from DNA-dependent RNA polymerase as it catalyzes the synthesis of RNA strands complementary to a given RNA template. The RNA replication process is a four-step mechanism: Nucleoside triphosphate (NTP) binding – initially, the RdRp presents with a vacant active site in which an NTP binds, complementary to the corresponding nucleotide on the template strand. Correct NTP binding causes the RdRp to undergo a conformational change. Active site closure – the conformational change, initiated by the correct NTP binding, results in the restriction of active site access and produces a catalytically competent state. Phosphodiester bond formation – two Mg2+ ions are present in the catalytically active state and arrange themselves around the newly synthesized RNA chain such that the substrate NTP undergoes a nucleotidyl transfer and forms a phosphodiester bond with the new chain. Without the use of these Mg2+ ions, the active site is no longer catalytically stable and the RdRp complex changes to an open conformation. Translocation – once the active site is open, the RNA template strand moves by one position through the RdRp protein complex and continues chain elongation by binding a new NTP, unless otherwise specified by the template. 
RNA synthesis can be performed by a primer-independent (de novo) or a primer-dependent mechanism that utilizes a viral protein genome-linked (VPg) primer. The de novo initiation consists of the addition of an NTP to the 3'-OH of the first initiating NTP. During the following elongation phase, this nucleotidyl transfer reaction is repeated with subsequent NTPs to generate the complementary RNA product. Termination of the nascent RNA chain produced by RdRp is not completely understood; however, RdRp termination is sequence-independent. One major drawback of RNA-dependent RNA polymerase replication is the transcription error rate. RdRps lack fidelity, with error rates on the order of one error per 10^4 nucleotides, which is thought to be a direct result of inadequate proofreading. This variation rate is favored in viral genomes, as it allows the pathogen to overcome host defenses that try to prevent infection, allowing for evolutionary growth. Structure Viral/prokaryotic RdRps, along with many single-subunit DdRps, employ a fold whose organization has been linked to the shape of a right hand with three subdomains termed fingers, palm, and thumb. Only the palm subdomain, composed of a four-stranded antiparallel beta sheet with two alpha helices, is well conserved. In RdRp, the palm subdomain comprises three well-conserved motifs (A, B, and C). Motif A (D-x(4,5)-D) and motif C (GDD) are spatially juxtaposed; the aspartic acid residues of these motifs are implicated in the binding of Mg2+ and/or Mn2+. The asparagine residue of motif B is involved in selection of ribonucleoside triphosphates over dNTPs and, thus, determines whether RNA rather than DNA is synthesized. The domain organization and the 3D structure of the catalytic centre of a wide range of RdRps, even those with a low overall sequence homology, are conserved. The catalytic center is formed by several motifs containing conserved amino acid residues. Eukaryotic RNA interference requires a cellular RdRp (c RdRp). Unlike the "hand" polymerases, they resemble simplified multi-subunit DdRps, specifically in the catalytic β/β' subunits, in that they use two sets of double-psi β-barrels in the active site. QDE1 in Neurospora crassa, which has both barrels in the same chain, is an example of such a c RdRp enzyme. Bacteriophage homologs of c RdRp, including the similarly single-chain DdRp yonO, appear to be closer to c RdRps than DdRps are. Viruses Four superfamilies of viruses cover all RNA-containing viruses with no DNA stage: Viruses containing positive-strand RNA or double-strand RNA, except retroviruses and Birnaviridae All positive-strand RNA eukaryotic viruses with no DNA stage, such as Coronaviridae All RNA-containing bacteriophages; the two families of RNA-containing bacteriophages are Fiersviridae (positive ssRNA phages) and Cystoviridae (dsRNA phages) dsRNA virus family Reoviridae, Totiviridae, Hypoviridae, Partitiviridae Mononegavirales (negative-strand RNA viruses with non-segmented genomes) Negative-strand RNA viruses with segmented genomes, such as orthomyxoviruses and bunyaviruses dsRNA virus family Birnaviridae Flaviviruses produce a polyprotein from the ssRNA genome. The polyprotein is cleaved to a number of products, one of which is NS5, an RdRp. It possesses short regions and motifs homologous to other RdRps. RNA replicases found in positive-strand ssRNA viruses are related to each other, forming three large superfamilies. Birnaviral RNA replicase is unique in that it lacks motif C (GDD) in the palm. 
Mononegaviral RdRp (PDB 5A22) has been automatically classified as similar to (+)ssRNA RdRps, specifically one from Pestivirus and one from Leviviridae. Bunyaviral RdRp monomer (PDB 5AMQ) resembles the heterotrimeric complex of Orthomyxoviral (Influenza; PDB 4WSB) RdRp. Since it is a protein universal to RNA-containing viruses, RdRp is a useful marker for understanding their evolution. Recombination When replicating its (+)ssRNA genome, the poliovirus RdRp is able to carry out recombination. Recombination appears to occur by a copy choice mechanism in which the RdRp switches (+)ssRNA templates during negative strand synthesis. Recombination frequency is determined in part by the fidelity of RdRp replication. RdRp variants with high replication fidelity show reduced recombination, and low fidelity RdRps exhibit increased recombination. Recombination by RdRp strand switching occurs frequently during replication in the (+)ssRNA plant carmoviruses and tombusviruses. Intragenic complementation Sendai virus (family Paramyxoviridae) has a linear, single-stranded, negative-sense, nonsegmented RNA genome. The viral RdRp consists of two virus-encoded subunits, a smaller one P and a larger one L. When different inactive RdRp mutants with defects throughout the length of the L subunit were tested in pairwise combinations, restoration of viral RNA synthesis was observed in some combinations. This positive L–L interaction is referred to as intragenic complementation and indicates that the L protein is an oligomer in the viral RNA polymerase complex. Drug therapies RdRps can be used as drug targets for viral pathogens as their function is not necessary for eukaryotic survival. When RdRp function is inhibited, new RNAs cannot be replicated from an RNA template strand; however, DNA-dependent RNA polymerase remains functional. Some antiviral drugs against Hepatitis C and COVID-19 specifically target RdRp. These include Sofosbuvir and Ribavirin against Hepatitis C and remdesivir, an FDA-approved drug against COVID-19. GS-441524 triphosphate is a substrate for RdRp, but not for mammalian polymerases. It results in premature chain termination and inhibition of viral replication. GS-441524 triphosphate is the biologically active form of remdesivir. Remdesivir is classified as a nucleotide analog that inhibits RdRp function by being covalently incorporated into the nascent RNA and interrupting its synthesis through early or delayed termination, or by preventing further elongation of the RNA polynucleotide. This early termination leads to nonfunctional RNA that gets degraded through normal cellular processes. RNA interference RdRp plays a major role in RNA interference in eukaryotes, a process used to silence gene expression via small interfering RNAs (siRNAs) binding to mRNAs and rendering them inactive. Eukaryotic RdRp becomes active in the presence of dsRNA, and is less widely distributed than other RNAi components, as it has been lost in some animals, though it is still found in C. elegans, P. tetraurelia, and plants. The presence of dsRNA triggers the activation of RdRp and RNAi processes by priming the initiation of RNA transcription through the introduction of siRNAs. In C. elegans, siRNAs are integrated into the RNA-induced silencing complex, RISC, which works alongside mRNAs targeted for interference to recruit more RdRps to synthesize more secondary siRNAs and repress gene expression. See also Spiegelman's Monster NS5B inhibitor Notes References External links Gene expression RNA EC 2.7.7
RNA-dependent RNA polymerase
[ "Chemistry", "Biology" ]
2,429
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
7,690,493
https://en.wikipedia.org/wiki/2006%20Missouri%20Amendment%202
Missouri Constitutional Amendment 2 (The Missouri Stem Cell Research and Cures Initiative) was a state constitutional amendment initiative that concerned stem cell research and human cloning. It allows any stem cell research and therapy in the U.S. state of Missouri that is legal under federal law, including somatic cell nuclear transfer to produce human embryos for stem cell production. It prohibits cloning or attempting to clone a human being, which is defined to mean "to implant in a uterus or attempt to implant in a uterus anything other than the product of fertilization of an egg of a human female by a sperm of a human male for the purpose of initiating a pregnancy that could result in the creation of a human fetus, or the birth of a human being". Commercials supporting and opposing the amendment aired during the 2006 World Series, in which the St. Louis Cardinals participated. The issue became especially intertwined with the 2006 U.S. Senate election in Missouri, with the Republican and Democratic candidates on opposite sides of the issue. Missouri Constitutional Amendment 2 appeared on the ballot for the November 2006 general election and passed with 51% of the vote. Support The organization that led the movement to get the initiative on the ballot and later supported its adoption was called the Missouri Coalition for Lifesaving Cures. The measure was proposed to stop repeated attempts by the Missouri Legislature to ban certain types of stem cell research, namely SCNT. Claire McCaskill, the Democratic nominee for U.S. Senate, supported the measure. During the 2006 World Series, which was partially held in St. Louis, a television ad featuring actor Michael J. Fox aired. The ad was paid for by McCaskill's campaign, and the primary reason Fox gave for his support for McCaskill was her stance in favor of stem cell research. The advertisement was controversial because Fox was visibly suffering tremors, which were side effects of the medications used to treat Parkinson's disease. Rush Limbaugh, a conservative radio host and Missouri native, criticized Fox for allowing himself to be used by special interests supporting the measure. Limbaugh criticized the uncontrollable movements that Fox made in the commercial, and claimed that Fox had either deliberately stopped taking his medication or was feigning his tremors. Opposition The coalition that initially led the opposition to the amendment was called Missourians Against Human Cloning. Later in the effort, when the coalition was unable to raise the money for the "Vote No" ads, Life Communications Fund took the lead in doing so. They created a series of "Vote No" ads for television, radio and print. Earlier in the campaign, the Vitae Foundation ran a series of educational ads, a "prophetic voice campaign," on the differences between adult and embryonic stem cell research, which was a major gain for those opposed to the Amendment because, according to them, the "cures" were only occurring as a result of adult stem cell treatments, not via embryonic stem cells. Drawing awareness to the differences between adult and embryonic stem cell research was critical to their strategy. That was the goal of the first ad created in the series. Each ad then slowly moved the target audience (Catholics, Protestants and Evangelicals) to oppose the amendment. The final ad attempted to link embryonic stem cell research to human cloning. A majority of Missourians, especially in their target audience, were opposed to human cloning. 
The prophetic voice campaign ran for about 6 months. The "Vote No" ads ran for roughly 3 months. Jim Talent, the incumbent Republican U.S. Senator facing re-election, was one of several candidates opposed to the amendment. In rebuttal to the Michael J. Fox advertisement (which never directly mentioned Amendment 2), a Life Communications television ad with several celebrities appeared in opposition to the measure. At least three of the celebrities opposed the measure for religious reasons: Kurt Warner, former St. Louis Rams quarterback; Kansas City Royals baseball player Mike Sweeney, and Jim Caviezel, who played Jesus in The Passion of the Christ. Patricia Heaton, from Everybody Loves Raymond, opposed the amendment on the grounds that low-income women would be exploited for their eggs. Jeff Suppan, a pitcher for the St. Louis Cardinals, also appeared in opposition to the amendment. Polling As election day drew near, public support seemed to be shifting away from Amendment 2. Polls had shown support as high as 68% in favor of the Amendment in December 2005. By October 29, 2006, support had fallen to 51%, with 35% opposed. Results and aftermath On November 7, 2006, Amendment 2 passed by a margin of 2.4% (or 50,800 votes). The final tally of votes ended in 51.2% for yes and 48.8% for no. The measure failed in 97 of the 114 counties in the state, but picked up enough votes in St. Louis, Kansas City, and Columbia (and their surrounding counties) to pass statewide. Democrat Claire McCaskill (an amendment supporter) unseated Republican incumbent U.S. Senator Jim Talent (an amendment opponent) the same night that the amendment passed. The very expensive campaigns for and against the amendment broke every record on political spending on statewide races in Missouri. Following the passage of the amendment, the Stowers Institute for Medical Research canceled plans for a major expansion in Kansas City. Because of the very close vote, the Institute asserted that the political climate in Missouri was too hostile for investment in stem cell research. References External links Full-text of Amendment 2 Michael J. Fox ad for Claire McCaskill and stem cells Celebrity Opposition Ad Stem cell research 2006 Missouri elections 2006 ballot measures in the United States Missouri ballot measures Constitution of Missouri U.S. state constitutional amendments
2006 Missouri Amendment 2
[ "Chemistry", "Biology" ]
1,177
[ "Translational medicine", "Tissue engineering", "Stem cell research" ]
7,691,702
https://en.wikipedia.org/wiki/The%20Computer%20Wore%20Tennis%20Shoes%20%281995%20film%29
The Computer Wore Tennis Shoes is a 1995 American made-for-television science fiction comedy film directed by Peyton Reed (in his feature directorial debut) and written by Joseph L. McEveety and Ryan Rowe. The film is a remake of the 1969 film of the same name. It premiered on ABC as an ABC Family Movie on February 18, 1995. It is the second in a series of four remakes of Disney live-action films produced for broadcast on the network during the 1994–95 television season, the other three being The Shaggy Dog, Escape to Witch Mountain, and Freaky Friday. The film stars Kirk Cameron in the lead role of Dexter Riley, a boy who becomes an instant genius with encyclopedic knowledge. The film also co-stars Larry Miller, while Dean Jones plays the role of an evil dean from a competing school. Plot Dexter, a once lazy and underachieving student at Medfield College, becomes an instant genius when a freak computer lab accident transfers an entire online encyclopedia to his brain. He uses his newfound intellect to ace a physics midterm in under ten minutes. The dean of Medfield College wishes to capitalize on Dexter’s superb intelligence by placing him on the school's quiz bowl team and allowing him to recruit his friends as teammates. Medfield College wins several quiz bowl matches with Dexter answering every question. He receives national recognition and generates positive publicity for Medfield College. Dexter's success also has a negative impact. His friends believe he is becoming too conceited and contemplate leaving the quiz bowl team. Norwood Gill, a child prodigy and computer hacker from a rival school that has won the quiz bowl three times, develops an obsession with Dexter and probes into his background. Government agents also investigate Dexter as a potential computer hacker known as "Viper". Norwood ultimately uncovers the origin of Dexter's intellect. He exposes the information during a quiz bowl broadcast, but the revelation is largely dismissed by the audience. As Norwood prepares to compete against Dexter in the quiz bowl championship, he infects him with a computer virus that negates his encyclopedic knowledge. The virus takes full effect midway through the championship, forcing Dexter to rely on his teammates for support. Medfield College ultimately wins the championship and celebrates. Norwood is apprehended by government agents for committing several cybercrimes when he exposes himself as the Viper. Cast Uncredited Julia Sweeney (television reporter) Reception Variety gave the film a moderately positive review, calling it an "utterly silly yarn" that "lacks the zaniness of the original", and complimented Larry Miller's performance. People gave it a B+ rating and called it a "fun, facile remake" with a good cast. References External links 1995 television films 1995 films 1995 comedy films 1995 science fiction films 1990s American films 1990s English-language films 1990s science fiction comedy films American Broadcasting Company original films American comedy television films American science fiction comedy films American science fiction television films Films about computing Films directed by Peyton Reed Films shot in Los Angeles Disney film remakes Disney television films Medfield College films Remakes of American films Television remakes of films Walt Disney anthology television series episodes English-language science fiction comedy films
The Computer Wore Tennis Shoes (1995 film)
[ "Technology" ]
657
[ "Works about computing", "Films about computing" ]
7,692,767
https://en.wikipedia.org/wiki/Visual%20hull
A visual hull is a geometric entity created by the shape-from-silhouette 3D reconstruction technique introduced by A. Laurentini. This technique assumes that the foreground object in an image can be separated from the background. Under this assumption, the original image can be thresholded into a foreground/background binary image, which we call a silhouette image. The foreground mask, known as a silhouette, is the 2D projection of the corresponding 3D foreground object. Along with the camera viewing parameters, the silhouette defines a back-projected generalized cone that contains the actual object. This cone is called a silhouette cone. The upper right thumbnail shows two such cones produced from two silhouette images taken from different viewpoints. The intersection of the two cones is called a visual hull, which is a bounding geometry of the actual 3D object (see the bottom right thumbnail). When the reconstructed geometry is only used for rendering from a different viewpoint, the implicit reconstruction together with rendering can be done using graphics hardware. In two dimensions A technique used in some modern touchscreen devices employs cameras placed in the corners situated opposite infrared LEDs. The one-dimensional projection (shadow) of objects on the surface may be used to reconstruct the convex hull of the object. The visual hull generation method has also been used within experimental tele-meeting systems that aim to allow a user in a remote location to interact with virtual objects. The method uses multiple cameras to capture the real-world movements and interactions of the "sender", employing a hardware-accelerated volumetric visual hull representation to create a 3D volume from 2D multi-view images. Its ultimate aim is to allow 3D collaboration between the two users in the virtual realm, with the visual hull technique reducing the computational power required to allow this type of interaction and enabling the use of consumer goods such as the Wii Remote as a tool for interaction. See also 3D reconstruction from multiple images Tomographic reconstruction References Computer graphics Geometry in computer vision Projective geometry Photogrammetry 3D imaging Convex hull algorithms
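The cone intersection described above is usually approximated in practice by carving a set of candidate 3D points: a point survives only if it projects into the foreground of every silhouette image. The following is a minimal sketch of that test, assuming boolean silhouette masks and 3×4 camera projection matrices are already available; the function and variable names are illustrative rather than taken from any particular library.

```python
import numpy as np

def carve_visual_hull(points, silhouettes, projections):
    """Keep only the candidate 3D points whose projection falls inside every
    silhouette (foreground mask); the survivors approximate the visual hull.

    points:      (N, 3) array of candidate voxel centres
    silhouettes: list of (H, W) boolean foreground masks
    projections: list of (3, 4) camera projection matrices
    """
    homog = np.hstack([points, np.ones((len(points), 1))])   # (N, 4) homogeneous coordinates
    keep = np.ones(len(points), dtype=bool)
    for mask, P in zip(silhouettes, projections):
        uvw = homog @ P.T                                     # (N, 3) projective image coordinates
        z = uvw[:, 2]
        in_front = z > 1e-9                                   # discard points behind the camera
        u = np.divide(uvw[:, 0], z, out=np.full_like(z, -1.0), where=in_front)
        v = np.divide(uvw[:, 1], z, out=np.full_like(z, -1.0), where=in_front)
        h, w = mask.shape
        in_image = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        ui = np.clip(u, 0, w - 1).astype(int)
        vi = np.clip(v, 0, h - 1).astype(int)
        keep &= in_front & in_image & mask[vi, ui]            # must be foreground in every view
    return points[keep]
```

Evaluating this test over a dense grid covering the cameras' common viewing volume yields a point-cloud approximation of the visual hull; as the article notes, when the hull is only needed for rendering, the same per-view test can be performed directly on graphics hardware.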
Visual hull
[ "Mathematics" ]
402
[ "Geometry in computer vision", "Geometry" ]
7,694,425
https://en.wikipedia.org/wiki/M.%20Stanley%20Whittingham
Sir Michael Stanley Whittingham (born 22 December 1941) is a British-American chemist. He is a professor of chemistry and director of both the Institute for Materials Research and the Materials Science and Engineering program at Binghamton University, State University of New York. He also serves as director of the Northeastern Center for Chemical Energy Storage (NECCES) of the U.S. Department of Energy at Binghamton. He was awarded the Nobel Prize in Chemistry in 2019 alongside Akira Yoshino and John B. Goodenough. Whittingham is a key figure in the history of lithium-ion batteries, which are used in everything from mobile phones to electric vehicles. He discovered intercalation electrodes and thoroughly described intercalation reactions in rechargeable batteries in the 1970s. He holds the patents on the concept of using intercalation chemistry in high power-density, highly reversible lithium-ion batteries. He also invented the first rechargeable lithium metal battery (LMB), patented in 1977 and assigned to Exxon for commercialization in small devices and electric vehicles. Whittingham's rechargeable lithium metal battery is based on a LiAl anode and an intercalation-type TiS2 cathode. His work on lithium batteries laid the foundation for others' developments, so he is called the founding father of lithium-ion batteries. Education and career Whittingham was born in the Carlton suburb of Nottingham, England, on 22 December 1941. His father was a civil engineer, the first in the family to go to college. His mother Dorothy Mary (née Findley) was a chemist before marriage. He was educated at Stamford School from 1951 to 1960, before going up to New College, Oxford to read chemistry. At the University of Oxford, he took his BA (1964), MA (1967), and DPhil (1968). After completing his graduate studies, Whittingham became a postdoctoral fellow at Stanford University. He worked 16 years for Exxon Research & Engineering Company and four years working for Schlumberger prior to becoming a professor at Binghamton University. From 1994 to 2000, he served as the university's vice provost for research. He also served as vice-chair of the Research Foundation of the State University of New York for six years. He is a Distinguished Professor of Chemistry and Materials Science and Engineering at Binghamton University. Whittingham was named Chief Scientific Officer of NAATBatt International in 2017. Whittingham co-chaired the DOE study of Chemical Energy Storage in 2007, and is a director of the Northeastern Center for Chemical Energy Storage (NECCES), a U.S. Department of Energy Energy Frontier Research Center (EFRC) at Binghamton. In 2014, NECCES was awarded $12.8 million, from the U.S. Department of Energy to help accelerate scientific breakthroughs needed to build the 21st-century economy. In 2018, NECCES was granted another $3 million by the Department of Energy to continue its research on batteries. The NECCES team is using the funding to improve energy-storage materials and to develop new materials that are "cheaper, environmentally friendly, and able to store more energy than current materials can". Research Whittingham conceived the intercalation electrode. Exxon manufactured Whittingham's lithium-ion battery in the 1970s, based on a titanium disulfide cathode and a lithium-aluminum anode. The battery had high energy density and the diffusion of lithium ions into the titanium disulfide cathode was reversible, making the battery rechargeable. 
In addition, titanium disulfide has a particularly fast rate of lithium ion diffusion into the crystal lattice. Exxon threw its resources behind the commercialization of a Li/LiClO4/TiS2 battery. However, safety concerns led Exxon to end the project. Whittingham and his team continued to publish their work in academic journals of electrochemistry and solid-state physics. He left Exxon in 1984 and spent four years at Schlumberger as a manager. In 1988, he became a professor in the Chemistry Department at Binghamton University to pursue his academic interests. "All these batteries are called intercalation batteries. It’s like putting jam in a sandwich. In the chemical terms, it means you have a crystal structure, and we can put lithium ions in, take them out, and the structure’s exactly the same afterwards," Whittingham said. "We retain the crystal structure. That’s what makes these lithium batteries so good, allows them to cycle for so long." Lithium batteries have limited capacity because less than one lithium ion/electron is reversibly intercalated per transition metal redox center. To achieve higher energy densities, one approach is to go beyond the one-electron redox intercalation reactions. Whittingham's research has advanced to multi-electron intercalation reactions, which can increase the storage capacity by intercalating multiple lithium ions. A few multi-electron intercalation materials have been successfully developed by Whittingham, like LiVOPO4/VOPO4. The multivalent vanadium cation (V3+ ↔ V5+) plays an important role in accomplishing the multi-electron reactions. These promising materials offer the battery industry a route to rapidly increasing energy density. Whittingham received the Young Author Award from The Electrochemical Society in 1971, the Battery Research Award in 2003, and was elected a Fellow in 2004. In 2010, he was listed as one of the Top 40 innovators for contributions to advancing green technology by Greentech Media. In 2012, Whittingham received the IBA Yeager Award for Lifetime Contribution to Lithium Battery Materials Research, and he was elected a Fellow of Materials Research Society in 2013. He was listed, along with John B. Goodenough, for pioneering research leading to the development of the lithium-ion battery on a list of Clarivate Citation Laureates for the Nobel Prize in Chemistry by Thomson Reuters in 2015. In 2018, Whittingham was elected to the National Academy of Engineering, "for pioneering the application of intercalation chemistry for energy storage materials." In 2019, Whittingham, along with John B. Goodenough and Akira Yoshino, was awarded the 2019 Nobel Prize in Chemistry "for the development of lithium-ion batteries." Personal life Stanley is married to Dr. Georgina Whittingham, a professor of Spanish at the State University of New York at Oswego. He has two children, Michael Whittingham and Jenniffer Whittingham-Bras. Recognition 2007 Chancellor's Award for Excellence in Scholarship and Creative Activities, and Outstanding Research Award, State University of New York 2010 Award for Lifetime Contributions from the American Chemical Society 2015 Thomson Reuters Citation Laureate 2017 Senior Scientist Award from the International Society for Solid State Ionics 2018 Turnbull Award from the Materials Research Society 2018 Member National Academy of Engineering 2019 Nobel Prize in Chemistry with John B. 
Goodenough and Akira Yoshino 2020 Great Immigrants Award by the Carnegie Corporation of New York 2023 VinFuture Grand Prize with Martin Green, Rachid Yazami and Akira Yoshino 2024 Knighted in the 2024 King's Birthday Honours "for services to chemistry". Books Most-cited papers (:) References External links M. Stanley Whittingham's profile at Binghamton University website M. Stanley Whittingham's interview at École supérieure de physique et de chimie industrielles de la ville de Paris history of science website including the Nobel Lecture on Sunday 8 December 2019 The Origins of the Lithium Battery 1941 births Living people People educated at Stamford School Alumni of New College, Oxford Binghamton University faculty State University of New York faculty Nobel laureates in Chemistry English Nobel laureates British Nobel laureates British emigrants to the United States English chemists English inventors Schlumberger people ExxonMobil people Inorganic chemists Solid state chemists Knights Bachelor
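As a numerical aside to the Research section above, the theoretical gravimetric capacity of an intercalation electrode is commonly estimated as Q = nF/(3.6 M) mAh/g, where n is the number of electrons (lithium ions) exchanged per formula unit, F is the Faraday constant and M the molar mass. The short sketch below applies this textbook formula to TiS2 and to a two-electron vanadium phosphate; the rounded figures are illustrative estimates, not values taken from Whittingham's papers.

```python
F_C_PER_MOL = 96485.0   # Faraday constant, C/mol

def capacity_mah_per_g(n_electrons, molar_mass_g_per_mol):
    """Theoretical gravimetric capacity Q = n*F / (3.6*M) in mAh per gram."""
    return n_electrons * F_C_PER_MOL / (3.6 * molar_mass_g_per_mol)

# One lithium per TiS2 formula unit (M ~ 112 g/mol): roughly 240 mAh/g.
print(round(capacity_mah_per_g(1, 112.0)))
# A two-electron V3+/V5+ couple in LiVOPO4 (M ~ 169 g/mol): roughly 320 mAh/g.
print(round(capacity_mah_per_g(2, 168.9)))
```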
M. Stanley Whittingham
[ "Chemistry" ]
1,664
[ "Solid state chemists", "Inorganic chemists" ]
7,699,045
https://en.wikipedia.org/wiki/Liquid%20capacitive%20inclinometers
Liquid capacitive inclinometers are inclinometers (or clinometers) whose sensing elements are made with a liquid-filled differential capacitor; they sense the local direction of acceleration due to gravity (or movement). A capacitive inclinometer has a disc-like cavity that is partly filled with a dielectric liquid. One of the sides of the cavity has an etched conductor plate that is used to form one of the conductors of a variable parallel plate capacitor. The liquid, along with the other side of the cavity, forms the other plate of the capacitor. In operation, the sensor is mounted so that the disc is in a vertical plane with its axis horizontal. Gravity then acts on the liquid, pulling it down in the cavity to form a semicircle. As the sensor is rotated, the liquid remains in this semicircular pattern but covers a different area of the etched plate. This change in area results in a change in the capacitance. The change in capacitance is then electronically converted into an output signal that is linear with respect to the input angle. References External links Diagram of liquid capacitive inclinometer Dimensional instruments Inclinometers Accelerometers
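The conversion from capacitance change to angle described above is typically done ratiometrically for a differential capacitor, so that common-mode effects (such as temperature drift of the dielectric liquid) cancel. The sketch below is a minimal illustration under the article's stated assumption that the output is linear in the tilt angle; the sensitivity, offset and function names are made up for the example and would come from a per-device calibration in practice.

```python
def tilt_angle_deg(c1_pf, c2_pf, degrees_per_unit=30.0, offset_deg=0.0):
    """Convert a differential capacitance pair into a tilt angle.

    The ratiometric term (C1 - C2) / (C1 + C2) is dimensionless and cancels
    changes that affect both halves of the capacitor equally; degrees_per_unit
    and offset_deg stand in for a calibration of the particular sensor.
    """
    ratio = (c1_pf - c2_pf) / (c1_pf + c2_pf)
    return degrees_per_unit * ratio + offset_deg

print(tilt_angle_deg(12.0, 12.0))   # 0.0 degrees: level
print(tilt_angle_deg(14.0, 10.0))   # 5.0 degrees: tilted toward the first electrode
```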
Liquid capacitive inclinometers
[ "Physics", "Mathematics", "Technology", "Engineering" ]
262
[ "Accelerometers", "Dimensional instruments", "Physical quantities", "Acceleration", "Quantity", "Measuring instruments", "Size" ]
1,705,815
https://en.wikipedia.org/wiki/Dark-energy%20star
A dark-energy star is a hypothetical compact astrophysical object, which a minority of physicists think might constitute an alternative explanation for observations of astronomical black hole candidates. The concept was proposed by physicist George Chapline. The theory states that infalling matter is converted into vacuum energy or dark energy, as the matter falls through the event horizon. The space within the event horizon would end up with a large value for the cosmological constant and have negative pressure to exert against gravity. There would be no information-destroying singularity. Theory In March 2005, physicist George Chapline claimed that quantum mechanics makes it a "near certainty" that black holes do not exist and are instead dark-energy stars. The dark-energy star is a different concept from that of a gravastar. Dark-energy stars were first proposed because in quantum physics, absolute time is required; however, in general relativity, an object falling towards a black hole would, to an outside observer, seem to have time pass infinitely slowly at the event horizon. The object itself would feel as if time flowed normally. In order to reconcile quantum mechanics with black holes, Chapline theorized that a phase transition in the phase of space occurs at the event horizon. He based his ideas on the physics of superfluids. As a column of superfluid grows taller, at some point, density increases, slowing down the speed of sound, so that it approaches zero. However, at that point, quantum physics makes sound waves dissipate their energy into the superfluid, so that the zero sound speed condition is never encountered. In the dark-energy star hypothesis, infalling matter approaching the event horizon decays into successively lighter particles. Nearing the event horizon, environmental effects accelerate proton decay. This may account for high-energy cosmic-ray sources and positron sources in the sky. When the matter falls through the event horizon, the energy equivalent of some or all of that matter is converted into dark energy. This negative pressure counteracts the mass the star gains, avoiding a singularity. The negative pressure also gives a very high number for the cosmological constant. Furthermore, 'primordial' dark-energy stars could form by fluctuations of spacetime itself, which is analogous to "blobs of liquid condensing spontaneously out of a cooling gas". This not only alters the understanding of black holes, but has the potential to explain the dark energy and dark matter that are indirectly observed. See also Black star (semiclassical gravity) Dark energy Dark matter Gravastar Stellar black hole References Sources External links MPIE Galactic Center Research (subscription only) Black holes Dark concepts in astrophysics Dark matter Hypothetical stars Quantum gravity Fringe physics Dark energy
Dark-energy star
[ "Physics", "Astronomy" ]
557
[ "Physical phenomena", "Black holes", "Physical quantities", "Unsolved problems in physics", "Dark energy", "Stellar phenomena", "Physics beyond the Standard Model", "Dark matter", "Concepts in astronomy", "Energy (physics)", "Exotic matter", "Wikipedia categories named after physical quantitie...
1,706,048
https://en.wikipedia.org/wiki/Battery%20room
A battery room is a room that houses batteries for backup or uninterruptible power systems. The rooms are found in telecommunication central offices, and provide standby power for computing equipment in datacenters. Batteries provide direct current (DC) electricity, which may be used directly by some types of equipment, or which may be converted to alternating current (AC) by uninterruptible power supply (UPS) equipment. The batteries may provide power for minutes, hours or days, depending on each system's design, although they are most commonly activated during brief electric utility outages lasting only seconds. Battery rooms were used to segregate the fumes and corrosive chemicals of wet cell batteries (often lead–acid) from the operating equipment, and for better control of temperature and ventilation. In 1890, the Western Union central telegraph office in New York City had 20,000 wet cells, mostly of the primary zinc-copper type. Telecommunications Telephone system central offices contain large battery systems to provide power for customer telephones, telephone switches, and related apparatus. Terrestrial microwave links, cellular telephone sites, fibre optic apparatus and satellite communications facilities also have standby battery systems, which may be large enough to occupy a separate room in the building. In normal operation power from the local commercial utility operates telecommunication equipment, and batteries provide power if the normal supply is interrupted. These can be sized for the expected full duration of an interruption, or may be required only to provide power while a standby generator set or other emergency power supply is started. Batteries often used in battery rooms are the flooded lead-acid battery, the valve regulated lead-acid battery or the nickel–cadmium battery. Batteries are installed in groups. Several batteries are wired together in a series circuit forming a group providing DC electric power at 12, 24, 48 or 60 volts (or higher). Usually there are two or more groups of series-connected batteries. These groups of batteries are connected in a parallel circuit. This arrangement allows an individual group of batteries to be taken offline for service or replacement without compromising the availability of uninterruptible power. Generally, the larger the battery room's electrical capacity, the larger the size of each individual battery and the higher the room's DC voltage. Electrical utilities Battery rooms are also found in electric power plants and substations where reliable power is required for operation of switchgear, critical standby systems, and possibly black start of the station. Often batteries for large switchgear line-ups are 125 V or 250 V nominal systems, and feature redundant battery chargers with independent power sources. Separate battery rooms may be provided to protect against loss of the station due to a fire in a battery bank. For stations that are capable of black start, power from the battery system may be required for many purposes including switchgear operations. Very large utility batteries may be used for grid energy storage. Submarines and ocean-going vessels Battery rooms are found on diesel-electric submarines, where they contain the lead-acid batteries used for undersea propulsion of the vessel. Even nuclear submarines contain large battery rooms as backups to provide maneuvering power if the nuclear reactor is shut down. Batteries in surface vessels may also be contained in a battery room. 
Battery rooms on ocean-going vessels must prevent seawater from contacting battery acid, as this could produce toxic chlorine gas. This is of particular concern on submarines. Design issues Since several types of secondary batteries give off hydrogen if overcharged, ventilation of a battery room is critical to maintain the concentration below the lower explosive limit. The number of air changes per hour required to prevent unsafe accumulation can be calculated from the number of cells and the charging current, given the chemistry of the battery. The life span of secondary batteries is reduced at high temperature and the energy storage capacity is reduced at low temperature, so a battery room must have heating or cooling to maintain the proper temperature. Batteries may contain large quantities of corrosive electrolytes such as sulfuric acid used in lead-acid batteries or caustic potash (aka potassium hydroxide) used in NiCd batteries. Materials of the battery room must resist corrosion and contain any accidental spills. Plant personnel must be protected from spilled electrolyte. In some jurisdictions, large battery systems may contain reportable amounts of sulfuric acid, a concern for fire departments. Battery rooms in industrial and utility installations typically have an eye-wash station or decontamination showers nearby, so that workers who are accidentally splashed with electrolyte can immediately wash it away from the eyes and skin. See also List of battery types Battery storage power station References Further reading Kusko, Alexander (1989). Emergency/Standby Power Systems, pp. 99–117. New York: McGraw-Hill Book Co., . National Fire Protection Association (2005). 'NFPA 111: Standard on Stored Electrical Energy Emergency and Standby Power' Room Electric power Rooms Telecommunications infrastructure
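As a rough illustration of the ventilation sizing mentioned under Design issues, hydrogen evolution on overcharge can be estimated from the electrolysis stoichiometry (about 0.45 L of H2 per ampere-hour per cell near room temperature, from 2 H2O → 2 H2 + O2), and the dilution airflow chosen so the room stays well below hydrogen's roughly 4% lower explosive limit. The figures and margins in this sketch are assumptions for illustration only and are not a substitute for the applicable codes and standards.

```python
def hydrogen_rate_l_per_h(cells, overcharge_current_a, l_per_ah=0.45):
    """Hydrogen evolved by a battery string on overcharge, in litres per hour.

    Assumes roughly 0.45 L of H2 per ampere-hour per cell near room temperature,
    from the water-electrolysis stoichiometry 2 H2O -> 2 H2 + O2.
    """
    return cells * overcharge_current_a * l_per_ah

def air_changes_per_hour(h2_l_per_h, room_volume_m3, max_h2_fraction=0.01):
    """Air changes per hour needed to hold the steady-state hydrogen concentration
    below max_h2_fraction (1% here, i.e. one quarter of the ~4% lower explosive limit)."""
    dilution_air_m3_per_h = (h2_l_per_h / 1000.0) / max_h2_fraction
    return dilution_air_m3_per_h / room_volume_m3

rate = hydrogen_rate_l_per_h(cells=180, overcharge_current_a=2.0)   # an assumed 180-cell string
print(round(rate, 1), "L/h of hydrogen")                            # 162.0 L/h
print(round(air_changes_per_hour(rate, room_volume_m3=60.0), 2), "air changes per hour")  # 0.27
```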
Battery room
[ "Physics", "Engineering" ]
1,009
[ "Physical quantities", "Rooms", "Power (physics)", "Electric power", "Electrical engineering", "Architecture" ]
1,706,275
https://en.wikipedia.org/wiki/Raney%20nickel
Raney nickel is a fine-grained solid composed mostly of nickel derived from a nickel–aluminium alloy. Several grades are known, of which most are gray solids. Some are pyrophoric, but most are used as air-stable slurries. Raney nickel is used as a reagent and as a catalyst in organic chemistry. It was developed in 1926 by American engineer Murray Raney for the hydrogenation of vegetable oils. Raney Nickel is a registered trademark of W. R. Grace and Company. Other major producers are Evonik and Johnson Matthey. Preparation Alloy preparation The Ni–Al alloy is prepared by dissolving nickel in molten aluminium followed by cooling ("quenching"). Depending on the Ni:Al ratio, quenching produces a number of different phases. During the quenching procedure, small amounts of a third metal, such as zinc or chromium, are added to enhance the activity of the resulting catalyst. This third metal is called a "promoter". The promoter changes the mixture from a binary alloy to a ternary alloy, which can lead to different quenching and leaching properties during activation. Activation In the activation process, the alloy, usually as a fine powder, is treated with a concentrated solution of sodium hydroxide. The simplified leaching reaction is given by the following chemical equation: 2 Al + 2 NaOH + 6 H2O → 2 Na[Al(OH)4] + 3 H2 The formation of sodium aluminate (Na[Al(OH)4]) requires that solutions of high concentration of sodium hydroxide be used to avoid the formation of aluminium hydroxide, which otherwise would precipitate as bayerite. Hence sodium hydroxide solutions with concentrations of up to 5 M are used. The temperature used to leach the alloy has a marked effect on the properties of the catalyst. Commonly, leaching is conducted between 70 and 100 °C. The surface area of Raney nickel (and related catalysts in general) tends to decrease with increasing leaching temperature. This is due to structural rearrangements within the alloy that may be considered analogous to sintering, where alloy ligaments would start adhering to each other at higher temperatures, leading to the loss of the porous structure. During the activation process, Al is leached out of the NiAl3 and Ni2Al3 phases that are present in the alloy, while most of the Ni remains, in the form of NiAl. The removal of Al from some phases but not others is known as "selective leaching". The NiAl phase has been shown to provide the structural and thermal stability of the catalyst. As a result, the catalyst is quite resistant to decomposition ("breaking down", commonly known as "aging"). This resistance allows Raney nickel to be stored and reused for an extended period; however, fresh preparations are usually preferred for laboratory use. For this reason, commercial Raney nickel is available in both "active" and "inactive" forms. Before storage, the catalyst can be washed with distilled water at ambient temperature to remove remaining sodium aluminate. Oxygen-free (degassed) water is preferred for storage to prevent oxidation of the catalyst, which would accelerate its aging process and result in reduced catalytic activity. Properties Macroscopically, Raney nickel is a finely divided, grey powder. Microscopically, each particle of this powder is a three-dimensional mesh, with pores of irregular size and shape, the vast majority of which are created during the leaching process. 
Raney nickel is notable for being thermally and structurally stable, as well as having a large Brunauer-Emmett-Teller (BET ) surface area. These properties are a direct result of the activation process and contribute to a relatively high catalytic activity. The surface area is typically determined by a BET measurement using a gas that is preferentially adsorbed on metallic surfaces, such as hydrogen. Using this type of measurement, almost all the exposed area in a particle of the catalyst has been shown to have Ni on its surface. Since Ni is the active metal of the catalyst, a large Ni surface area implies a large surface is available for reactions to occur simultaneously, which is reflected in an increased catalyst activity. Commercially available Raney nickel has an average Ni surface area of 100 m2 per gram of catalyst. A high catalytic activity, coupled with the fact that hydrogen is absorbed within the pores of the catalyst during activation, makes Raney nickel a useful catalyst for many hydrogenation reactions. Its structural and thermal stability (i.e., it does not decompose at high temperatures) allows its use under a wide range of reaction conditions. Additionally, the solubility of Raney nickel is negligible in most common laboratory solvents, with the exception of mineral acids such as hydrochloric acid, and its relatively high density (about 6.5 g cm−3) also facilitates its separation from a liquid phase after a reaction is completed. Applications Raney nickel is used in a large number of industrial processes and in organic synthesis because of its stability and high catalytic activity at room temperature. Industrial applications In a commercial application, Raney nickel is used as a catalyst for the hydrogenation of benzene to cyclohexane. Other heterogeneous catalysts, such as those using platinum group elements are used in some cases. Platinum metals tend to be more active, requiring milder temperatures, but they are more expensive than Raney nickel. The cyclohexane thus produced may be used in the synthesis of adipic acid, a raw material used in the industrial production of polyamides such as nylon. Other industrial applications of Raney nickel include the conversion of: Dextrose to sorbitol; Nitro compounds to amines, for example, 2,4-dinitrotoluene to 2,4-toluenediamine; Nitriles to amines, for example, stearonitrile to stearylamine and adiponitrile to hexamethylenediamine; Olefins to paraffins, for example, sulfolene to sulfolane; Acetylenes to paraffins, for example, 1,4-butynediol to 1,4-butanediol. Applications in organic synthesis Desulfurization Raney nickel is used in organic synthesis for desulfurization. For example, thioacetals will be reduced to hydrocarbons in the last step of the Mozingo reduction: Thiols, and sulfides can be removed from aliphatic, aromatic, or heteroaromatic compounds. Likewise, Raney nickel will remove the sulfur of thiophene to give a saturated alkane. Reduction of functional groups It is typically used in the reduction of compounds with multiple bonds, such as alkynes, alkenes, nitriles, dienes, aromatics and carbonyl-containing compounds. Additionally, Raney nickel will reduce heteroatom-heteroatom bonds, such as hydrazines, nitro groups, and nitrosamines. It has also found use in the reductive alkylation of amines and the amination of alcohols. When reducing a carbon-carbon double bond, Raney nickel will add hydrogen in a syn fashion. Related catalysts Raney cobalt has also been described. 
In contrast to the pyrophoric nature of some forms of Raney nickel, nickel silicide-based catalysts represent potentially safer alternatives. Raney alloys include FeTi and other non-nickel alloys; FeTi has been considered for low-pressure hydrogen storage. Aldrichimica Acta (available free from Sigma-Aldrich) has published a complete list of Raney alloys. Safety Due to its large surface area and high volume of contained hydrogen gas, dry, activated Raney nickel is a pyrophoric material that requires handling under an inert atmosphere. Raney nickel is typically supplied as a 50% slurry in water. Even after reaction, residual Raney nickel contains significant amounts of hydrogen gas and may spontaneously ignite when exposed to air. Additionally, acute exposure to Raney nickel may cause irritation of the respiratory tract and nasal cavities, and causes pulmonary fibrosis if inhaled. Ingestion may lead to convulsions and intestinal disorders. It can also cause eye and skin irritation. Chronic exposure may lead to pneumonitis and other signs of sensitization to nickel, such as skin rashes ("nickel itch"). Nickel is also rated as being a possible human carcinogen by the IARC (Group 2B, EU category 3) and teratogen, while the inhalation of fine aluminium oxide particles is associated with Shaver's disease. Development Murray Raney graduated as a mechanical engineer from the University of Kentucky in 1909. In 1915 he joined the Lookout Oil and Refining Company in Tennessee and was responsible for the installation of electrolytic cells for the production of hydrogen which was used in the hydrogenation of vegetable oils. During that time the industry used a nickel catalyst prepared from nickel(II) oxide. Believing that better catalysts could be produced, around 1921 he started to perform independent research while still working for Lookout Oil. In 1924 a 1:1 ratio Ni/Si alloy was produced, which after treatment with sodium hydroxide, was found to be five times more active than the best catalyst used in the hydrogenation of cottonseed oil. A patent for this discovery was issued in December 1925. Subsequently, Raney produced a 1:1 Ni/Al alloy following a procedure similar to the one used for the nickel-silicon catalyst. He found that the resulting catalyst was even more active and filed a patent application in 1926. This is now a common alloy composition for modern Raney nickel catalysts. Other common alloy compositions include 21:29 Ni/Al and 3:7 Ni/Al. Both the activity and preparation protocols for these catalysts vary. Following the development of Raney nickel, other alloy systems with aluminium were considered, of which the most notable include copper, ruthenium and cobalt. Further research showed that adding a small amount of a third metal to the binary alloy would promote the activity of the catalyst. Some widely used promoters are zinc, molybdenum and chromium. An alternative way of preparing enantioselective Raney nickel has been devised by surface adsorption of tartaric acid. See also Nickel aluminide Urushibara nickel Rieke nickel Nickel boride catalyst Raney cobalt, a similar cobalt/aluminum alloy catalyst which is sometimes more selective for certain hydrogenation products (e.g. primary amines via nitrile reduction). References External links International Chemical Safety Card 0062 NIOSH Pocket Guide to Chemical Hazards 1941 paper describing the preparation of W-2 grade Raney nickel: Catalysts Nickel alloys Hydrogenation catalysts Pyrophoric materials
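A back-of-the-envelope calculation based on the leaching equation given in the Activation section (2 Al + 2 NaOH + 6 H2O → 2 Na[Al(OH)4] + 3 H2) shows why activation and the spent catalyst demand the precautions listed under Safety: leaching the aluminium from a kilogram of a nominal 50:50 wt% Ni–Al alloy releases several hundred litres of hydrogen. The alloy composition and molar volume below are assumptions for illustration.

```python
AL_MOLAR_MASS = 26.98     # g/mol
MOLAR_VOLUME_L = 24.5     # litres per mole of ideal gas near 25 degC and 1 atm (assumed)

def hydrogen_from_leaching(alloy_mass_g, al_weight_fraction=0.50):
    """Litres of H2 evolved if all the aluminium in the alloy is leached
    (3 mol H2 per 2 mol Al, from the equation in the Activation section)."""
    mol_al = alloy_mass_g * al_weight_fraction / AL_MOLAR_MASS
    mol_h2 = 1.5 * mol_al
    return mol_h2 * MOLAR_VOLUME_L

print(round(hydrogen_from_leaching(1000.0)))   # ~680 L of H2 per kg of 50:50 alloy
```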
Raney nickel
[ "Chemistry", "Technology" ]
2,282
[ "Nickel alloys", "Catalysis", "Catalysts", "Hydrogenation catalysts", "Alloys", "Hydrogenation", "Chemical kinetics" ]
1,706,360
https://en.wikipedia.org/wiki/Rado%27s%20theorem%20%28Ramsey%20theory%29
Rado's theorem is a theorem from the branch of mathematics known as Ramsey theory. It is named for the German mathematician Richard Rado. It was proved in his thesis, Studien zur Kombinatorik. Statement Let Ax = 0 be a system of linear equations, where A is an m × n matrix with integer entries. This system is said to be r-regular if, for every r-coloring of the natural numbers 1, 2, 3, ..., the system has a monochromatic solution. A system is regular if it is r-regular for all r ≥ 1. Rado's theorem states that a system is regular if and only if the matrix A satisfies the columns condition. Let ci denote the i-th column of A. The matrix A satisfies the columns condition provided that there exists a partition C1, C2, ..., Cn of the column indices such that, writing si for the sum of the columns cj with j in Ci: s1 = 0, and for all i ≥ 2, si can be written as a rational linear combination of the cj's in all the Ck with k < i. This means that si is in the linear subspace of Q^m spanned by the set of those cj's. Special cases Folkman's theorem, the statement that there exist arbitrarily large sets of integers all of whose nonempty sums are monochromatic, may be seen as a special case of Rado's theorem concerning the regularity of the system of equations xT = Σi∈T xi, where T ranges over each nonempty subset of the set {1, 2, ..., n}. Other special cases of Rado's theorem are Schur's theorem and Van der Waerden's theorem. For proving the former, apply Rado's theorem to the 1 × 3 matrix (1 1 −1), which encodes the single equation x + y − z = 0. For Van der Waerden's theorem, with m chosen to be the length of the monochromatic arithmetic progression, one can similarly write down a matrix encoding the equations that define an arithmetic progression of length m. Computability Given a system of linear equations it is a priori unclear how to check computationally that it is regular. Fortunately, Rado's theorem provides a criterion which is testable in finite time. Instead of considering colourings (of infinitely many natural numbers), it must be checked that the given matrix satisfies the columns condition. Since the matrix consists only of finitely many columns, this property can be verified in finite time. However, the subset sum problem can be reduced to the problem of computing the required partition C1, C2, ..., Cn of columns: Given an input set S for the subset sum problem we can write the elements of S in a matrix of shape 1 × |S|. Then the elements of S corresponding to vectors in the partition C1 sum to zero. The subset sum problem is NP-complete. Hence, verifying that a system of linear equations is regular is also an NP-complete problem. References Ramsey theory Theorems in discrete mathematics
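Because the columns condition involves only finitely many columns, it can be checked mechanically for small matrices by trying every ordered partition of the column indices, exactly as the Computability section describes (with running time that is, as that section notes, necessarily super-polynomial in general). The following brute-force sketch works over the rationals; the function names are ours and the approach is only practical for very small matrices.

```python
from fractions import Fraction
from itertools import permutations

def rank(rows):
    """Rank of a matrix (given as a list of rows) over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    ncols = len(m[0]) if m else 0
    for c in range(ncols):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def set_partitions(items):
    """Yield every partition of `items` into non-empty blocks."""
    if not items:
        yield []
        return
    head, rest = items[0], items[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):                     # put head into an existing block
            yield part[:i] + [[head] + part[i]] + part[i + 1:]
        yield [[head]] + part                          # or into a new block of its own

def columns_condition(A):
    """Brute-force test of Rado's columns condition for a small integer matrix A
    (list of rows). Exponential in the number of columns, so only for tiny examples."""
    nrows, ncols = len(A), len(A[0])
    cols = [[row[j] for row in A] for j in range(ncols)]          # columns of A
    for part in set_partitions(list(range(ncols))):
        for order in permutations(part):                          # ordered partition C1, ..., Ck
            sums = [[sum(cols[j][i] for j in block) for i in range(nrows)] for block in order]
            if any(x != 0 for x in sums[0]):                      # s1 must be the zero vector
                continue
            earlier, ok = [], True
            for i, block in enumerate(order):
                if i >= 1 and rank(earlier) != rank(earlier + [sums[i]]):
                    ok = False                                    # si is not in the rational span
                    break
                earlier += [cols[j] for j in block]
            if ok:
                return True
    return False

print(columns_condition([[1, 1, -1]]))   # True: Schur's equation x + y - z = 0 is regular
print(columns_condition([[1, 1, -3]]))   # False: x + y = 3z is not regular
```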
Rado's theorem (Ramsey theory)
[ "Mathematics" ]
579
[ "Discrete mathematics", "Combinatorics", "Theorems in discrete mathematics", "Mathematical problems", "Mathematical theorems", "Ramsey theory" ]
1,706,851
https://en.wikipedia.org/wiki/Tropinone
Tropinone is an alkaloid, famously synthesised in 1917 by Robert Robinson as a synthetic precursor to atropine, a scarce commodity during World War I. Tropinone and the alkaloids cocaine and atropine all share the same tropane core structure. Its corresponding conjugate acid, the major species at pH 7.3, is known as tropiniumone. Synthesis The first synthesis of tropinone was by Richard Willstätter in 1901. It started from the seemingly related cycloheptanone, but required many steps to introduce the nitrogen bridge; the overall yield for the synthesis path is only 0.75%. Willstätter had previously synthesized cocaine from tropinone, in what was the first synthesis and elucidation of the structure of cocaine. Robinson's "double Mannich" reaction The 1917 synthesis by Robinson is considered a classic in total synthesis due to its simplicity and biomimetic approach. Tropinone is a bicyclic molecule, but the reactants used in its preparation are fairly simple: succinaldehyde, methylamine and acetonedicarboxylic acid (or even acetone). The synthesis is a good example of a biomimetic reaction or biogenetic-type synthesis because biosynthesis makes use of the same building blocks. It also demonstrates a tandem reaction in a one-pot synthesis. Furthermore, the yield of the synthesis was 17% and with subsequent improvements exceeded 90%. This reaction is described as an intramolecular "double Mannich reaction" for obvious reasons. It is not unique in this regard, as others have also attempted it in piperidine synthesis. Acetonedicarboxylic acid is used in place of acetone as its "synthetic equivalent"; its 1,3-dicarboxylic acid groups are so-called "activating groups" that facilitate the ring-forming reactions. The calcium salt is there as a "buffer", as it is claimed that higher yields are possible if the reaction is conducted at "physiological pH". Reaction mechanism The main features apparent from the reaction sequence below are: nucleophilic addition of methylamine to succinaldehyde, followed by loss of water to create an imine; intramolecular addition of the imine to the second aldehyde unit and first ring closure; intermolecular Mannich reaction of the enolate of acetone dicarboxylate; new enolate formation and new imine formation with loss of water; second intramolecular Mannich reaction and second ring closure; loss of the two carboxylic acid groups to give tropinone. Some authors have actually tried to retain one of the CO2H groups. CO2R-tropinone has 4 stereoisomers, although the corresponding ecgonidine alkyl ester has only a pair of enantiomers. From cycloheptanone IBX dehydrogenation (oxidation) of cycloheptanone (suberone) to 2,6-cycloheptadienone [1192-93-4] followed by reaction with an amine is a versatile way of forming tropinones. The mechanism invoked is clearly a double Michael reaction (i.e. conjugate addition). Biochemistry method Reduction of tropinone The reduction of tropinone is mediated by NADPH-dependent reductase enzymes, which have been characterized in multiple plant species. These plant species all contain two types of the reductase enzymes, tropinone reductase I (TRI) and tropinone reductase II (TRII). TRI produces tropine and TRII produces pseudotropine. Owing to the differing kinetic and pH/activity characteristics of the enzymes and the 25-fold higher activity of TRI over TRII, the majority of the tropinone reduction proceeds through TRI to form tropine. 
See also Benztropine Daturaolone 2-Carbomethoxytropinone (2-CMT) an intermediate in the creation of ecgonine cocaine analogues Ecgonidine References External links MSDS for tropinone Tropane alkaloids Ketones Total synthesis
Tropinone
[ "Chemistry" ]
878
[ "Ketones", "Functional groups", "Tropane alkaloids", "Alkaloids by chemical classification", "Chemical synthesis", "Total synthesis" ]
1,706,886
https://en.wikipedia.org/wiki/Purkinje%20effect
The Purkinje effect or Purkinje phenomenon (sometimes called the Purkinje shift) is the tendency for the peak luminance sensitivity of the eye to shift toward the blue end of the color spectrum at low illumination levels as part of dark adaptation. In consequence, reds will appear darker relative to other colors as light levels decrease. The effect is named after the Czech anatomist Jan Evangelista Purkyně. While the effect is often described from the perspective of the human eye, it is well established in a number of animals under the same name to describe the general shifting of spectral sensitivity due to pooling of rod and cone output signals as a part of dark/light adaptation. This effect introduces a difference in color contrast under different levels of illumination. For instance, in bright sunlight, geranium flowers appear bright red against the dull green of their leaves, or adjacent blue flowers, but in the same scene viewed at dusk, the contrast is reversed, with the red petals appearing a dark red or black, and the leaves and blue petals appearing relatively bright. The sensitivity to light in scotopic vision varies with wavelength, though the perception is essentially black-and-white. The Purkinje shift is the relation between the absorption maximum of rhodopsin, reaching a maximum at about 500 nm, and that of the opsins in the longer-wavelength cones that dominate in photopic vision, about 555 nm (green). In visual astronomy, the Purkinje shift can affect visual estimates of variable stars when using comparison stars of different colors, especially if one of the stars is red. Physiology The Purkinje effect occurs at the transition between primary use of the photopic (cone-based) and scotopic (rod-based) systems, that is, in the mesopic state: as intensity dims, the rods take over, and before color disappears completely, it shifts towards the rods' top sensitivity. The effect occurs because in mesopic conditions the outputs of cones in the retina, which are generally responsible for the perception of color in daylight, are pooled with outputs of rods which are more sensitive under those conditions and have peak sensitivity at a blue-green wavelength of 507 nm. Use of red lights The insensitivity of rods to long-wavelength (i.e. red) light has led to the use of red lights under certain special circumstances—for example, in the control rooms of submarines, in research laboratories, aircraft, and in naked-eye astronomy. Red lights are used in conditions where it is desirable to activate both the photopic and scotopic systems. Submarines are well lit to facilitate the vision of the crew members working there, but the control room must be lit differently to allow crew members to read instrument panels yet remain dark adapted. By using red lights or wearing red goggles (called "dark adaptor goggles"), the cones can receive enough light to provide photopic vision (namely the high-acuity vision required for reading). The rods are not saturated by the bright red light because they are not sensitive to long-wavelength light, so the crew members remain dark adapted. Similarly, airplane cockpits use red lights so pilots can read their instruments and maps while maintaining night vision to see outside the aircraft. Red lights are also often used in research settings. Many research animals (such as rats and mice) have limited photopic vision, as they have far fewer cone photoreceptors. 
The animal subjects do not perceive red lights and thus experience darkness (the active period for nocturnal animals), but the human researchers, who have one kind of cone (the "L cone") that is sensitive to long wavelengths, are able to read instruments or perform procedures that would be impractical even with fully dark adapted (but low acuity) scotopic vision. For the same reason, zoo displays of nocturnal animals often are illuminated with red light. History The effect was discovered in 1819 by Jan Evangelista Purkyně. Purkyně was a polymath who would often meditate at dawn during long walks in the blossomed Bohemian fields. Purkyně noticed that his favorite flowers appeared bright red on a sunny afternoon, while at dawn they looked very dark. He reasoned that the eye has not one but two systems adapted to see colors, one for bright overall light intensity, and the other for dusk and dawn. Purkyně wrote in his Neue Beiträge: Objectively, the degree of illumination has a great influence on the intensity of color quality. In order to prove this most vividly, take some colors before daybreak, when it begins slowly to get lighter. Initially one sees only black and grey. Particularly the brightest colors, red and green, appear darkest. Yellow cannot be distinguished from a rosy red. Blue became noticeable to me first. Nuances of red, which otherwise burn brightest in daylight, namely carmine, cinnabar and orange, show themselves as darkest for quite a while, in contrast to their average brightness. Green appears more bluish to me, and its yellow tint develops with increasing daylight only. See also Kruithof curve Dark adaptation Dark adaptor goggles Skot (unit) Nox (unit) References External links Color Optical Illusions, Purkinje Effect Color appearance phenomena Color vision Visual perception
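A deliberately crude numerical illustration of the shift described above can be made by standing in Gaussian curves for the scotopic (rod, peak near 507 nm) and photopic (cone, peak near 555 nm) sensitivities. These are not the CIE standard luminosity functions, and the widths are rough guesses; the point is only to show why a red stimulus loses far more apparent brightness than a blue-green one as vision becomes rod-dominated.

```python
import math

def gaussian(x, peak, sigma):
    return math.exp(-((x - peak) ** 2) / (2 * sigma ** 2))

def photopic(wavelength_nm):    # cone-dominated (daylight) sensitivity, rough stand-in only
    return gaussian(wavelength_nm, 555.0, 45.0)

def scotopic(wavelength_nm):    # rod-dominated (dark-adapted) sensitivity, rough stand-in only
    return gaussian(wavelength_nm, 507.0, 45.0)

for name, wl in [("red ~620 nm", 620.0), ("blue-green ~490 nm", 490.0)]:
    print(f"{name}: photopic {photopic(wl):.2f}, scotopic {scotopic(wl):.3f}")
# red ~620 nm:        photopic 0.35, scotopic 0.043  -> nearly vanishes in dim light
# blue-green ~490 nm: photopic 0.35, scotopic 0.931  -> stays relatively bright
```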
Purkinje effect
[ "Physics" ]
1,081
[ "Optical phenomena", "Physical phenomena", "Color appearance phenomena" ]
1,707,086
https://en.wikipedia.org/wiki/Tag%20%28metadata%29
In information systems, a tag is a keyword or term assigned to a piece of information (such as an Internet bookmark, multimedia, database record, or computer file). This kind of metadata helps describe an item and allows it to be found again by browsing or searching. Tags are generally chosen informally and personally by the item's creator or by its viewer, depending on the system, although they may also be chosen from a controlled vocabulary. Tagging was popularized by websites associated with Web 2.0 and is an important feature of many Web 2.0 services. It is now also part of other database systems, desktop applications, and operating systems. Overview People use tags to aid classification, mark ownership, note boundaries, and indicate online identity. Tags may take the form of words, images, or other identifying marks. An analogous example of tags in the physical world is museum object tagging. People were using textual keywords to classify information and objects long before computers. Computer based search algorithms made the use of such keywords a rapid way of exploring records. Tagging gained popularity due to the growth of social bookmarking, image sharing, and social networking websites. These sites allow users to create and manage labels (or "tags") that categorize content using simple keywords. Websites that include tags often display collections of tags as tag clouds, as do some desktop applications. On websites that aggregate the tags of all users, an individual user's tags can be useful both to them and to the larger community of the website's users. Tagging systems have sometimes been classified into two kinds: top-down and bottom-up. Top-down taxonomies are created by an authorized group of designers (sometimes in the form of a controlled vocabulary), whereas bottom-up taxonomies (called folksonomies) are created by all users. This definition of "top down" and "bottom up" should not be confused with the distinction between a single hierarchical tree structure (in which there is one correct way to classify each item) versus multiple non-hierarchical sets (in which there are multiple ways to classify an item); the structure of both top-down and bottom-up taxonomies may be either hierarchical, non-hierarchical, or a combination of both. Some researchers and applications have experimented with combining hierarchical and non-hierarchical tagging to aid in information retrieval. Others are combining top-down and bottom-up tagging, including in some large library catalogs (OPACs) such as WorldCat. When tags or other taxonomies have further properties (or semantics) such as relationships and attributes, they constitute an ontology. Metadata tags as described in this article should not be confused with the use of the word "tag" in some software to refer to an automatically generated cross-reference; examples of the latter are tags tables in Emacs and smart tags in Microsoft Office. History The use of keywords as part of an identification and classification system long predates computers. Paper data storage devices, notably edge-notched cards, that permitted classification and sorting by multiple criteria were already in use prior to the twentieth century, and faceted classification has been used by libraries since the 1930s. 
In the late 1970s and early 1980s, Emacs, the text editor for Unix systems, offered a companion software program called Tags that could automatically build a table of cross-references called a tags table that Emacs could use to jump between a function call and that function's definition. This use of the word "tag" did not refer to metadata tags, but was an early use of the word "tag" in software to refer to a word index. Online databases and early websites deployed keyword tags as a way for publishers to help users find content. In the early days of the World Wide Web, the keywords meta element was used by web designers to tell web search engines what the web page was about, but these keywords were only visible in a web page's source code and were not modifiable by users. In 1997, the collaborative portal "A Description of the Equator and Some ØtherLands" produced by documenta X, Germany, used the folksonomic term Tag for its co-authors and guest authors on its Upload page. In "The Equator" the term Tag for user-input was described as an abstract literal or keyword to aid the user. However, users defined singular Tags, and did not share Tags at that point. In 2003, the social bookmarking website Delicious provided a way for its users to add "tags" to their bookmarks (as a way to help find them later); Delicious also provided browseable aggregated views of the bookmarks of all users featuring a particular tag. Within a couple of years, the photo sharing website Flickr allowed its users to add their own text tags to each of their pictures, constructing flexible and easy metadata that made the pictures highly searchable. The success of Flickr and the influence of Delicious popularized the concept, and other social software websites—such as YouTube, Technorati, and Last.fm—also implemented tagging. In 2005, the Atom web syndication standard provided a "category" element for inserting subject categories into web feeds, and in 2007 Tim Bray proposed a "tag" URN. Examples Within a blog Many systems (and other web content management systems) allow authors to add free-form tags to a post, along with (or instead of) placing the post into a predetermined category. For example, a post may display that it has been tagged with baseball and tickets. Each of those tags is usually a web link leading to an index page listing all of the posts associated with that tag. The blog may have a sidebar listing all the tags in use on that blog, with each tag leading to an index page. To reclassify a post, an author edits its list of tags. All connections between posts are automatically tracked and updated by the blog software; there is no need to relocate the page within a complex hierarchy of categories. Within application software Some desktop applications and web applications feature their own tagging systems, such as email tagging in Gmail and Mozilla Thunderbird, bookmark tagging in Firefox, audio tagging in iTunes or Winamp, and photo tagging in various applications. Some of these applications display collections of tags as tag clouds. Assigned to computer files There are various systems for applying tags to the files in a computer's file system. In Apple's Mac System 7, released in 1991, users could assign one of seven editable colored labels (with editable names such as "Essential", "Hot", and "In Progress") to each file and folder. 
In later iterations of the Mac operating system ever since OS X 10.9 was released in 2013, users could assign multiple arbitrary tags as extended file attributes to any file or folder, and before that time the open-source OpenMeta standard provided similar tagging functionality for Mac OS X. Several semantic file systems that implement tags are available for the Linux kernel, including Tagsistant. Microsoft Windows allows users to set tags only on Microsoft Office documents and some kinds of picture files. Cross-platform file tagging standards include Extensible Metadata Platform (XMP), an ISO standard for embedding metadata into popular image, video and document file formats, such as JPEG and PDF, without breaking their readability by applications that do not support XMP. XMP largely supersedes the earlier IPTC Information Interchange Model. Exif is a standard that specifies the image and audio file formats used by digital cameras, including some metadata tags. TagSpaces is an open-source cross-platform application for tagging files; it inserts tags into the filename. For an event An official tag is a keyword adopted by events and conferences for participants to use in their web publications, such as blog entries, photos of the event, and presentation slides. Search engines can then index them to make relevant materials related to the event searchable in a uniform way. In this case, the tag is part of a controlled vocabulary. In research A researcher may work with a large collection of items (e.g. press quotes, a bibliography, images) in digital form. If he/she wishes to associate each with a small number of themes (e.g. to chapters of a book, or to sub-themes of the overall subject), then a group of tags for these themes can be attached to each of the items in the larger collection. In this way, freeform classification allows the author to manage what would otherwise be unwieldy amounts of information. Special types Triple tags A triple tag or machine tag uses a special syntax to define extra semantic information about the tag, making it easier or more meaningful for interpretation by a computer program. Triple tags comprise three parts: a namespace, a predicate, and a value. For example, geo:long=50.123456 is a tag for the geographical longitude coordinate whose value is 50.123456. This triple structure is similar to the Resource Description Framework model for information. The triple tag format was first devised for geolicious in November 2004, to map Delicious bookmarks, and gained wider acceptance after its adoption by Mappr and GeoBloggers to map Flickr photos. In January 2007, Aaron Straup Cope at Flickr introduced the term machine tag as an alternative name for the triple tag, adding some questions and answers on purpose, syntax, and use. Specialized metadata for geographical identification is known as geotagging; machine tags are also used for other purposes, such as identifying photos taken at a specific event or naming species using binomial nomenclature. Hashtags A hashtag is a kind of metadata tag marked by the prefix #, sometimes known as a "hash" symbol. This form of tagging is used on microblogging and social networking services such as Twitter, Facebook, Google+, VK and Instagram. The hash is used to distinguish tag text, as distinct, from other text in the post. Knowledge tags A knowledge tag is a type of meta-information that describes or defines some aspect of a piece of information (such as a document, digital image, database table, or web page). 
Knowledge tags are more than traditional non-hierarchical keywords or terms; they are a type of metadata that captures knowledge in the form of descriptions, categorizations, classifications, semantics, comments, notes, annotations, hyperdata, hyperlinks, or references that are collected in tag profiles (a kind of ontology). These tag profiles reference an information resource that resides in a distributed, and often heterogeneous, storage repository. Knowledge tags are part of a knowledge management discipline that leverages Enterprise 2.0 methodologies for users to capture insights, expertise, attributes, dependencies, or relationships associated with a data resource. Different kinds of knowledge can be captured in knowledge tags, including factual knowledge (that found in books and data), conceptual knowledge (found in perspectives and concepts), expectational knowledge (needed to make judgments and hypothesis), and methodological knowledge (derived from reasoning and strategies). These forms of knowledge often exist outside the data itself and are derived from personal experience, insight, or expertise. Knowledge tags are considered an expansion of the information itself that adds additional value, context, and meaning to the information. Knowledge tags are valuable for preserving organizational intelligence that is often lost due to turnover, for sharing knowledge stored in the minds of individuals that is typically isolated and unharnessed by the organization, and for connecting knowledge that is often lost or disconnected from an information resource. Advantages and disadvantages In a typical tagging system, there is no explicit information about the meaning or semantics of each tag, and a user can apply new tags to an item as easily as applying older tags. Hierarchical classification systems can be slow to change, and are rooted in the culture and era that created them; in contrast, the flexibility of tagging allows users to classify their collections of items in the ways that they find useful, but the personalized variety of terms can present challenges when searching and browsing. When users can freely choose tags (creating a folksonomy, as opposed to selecting terms from a controlled vocabulary), the resulting metadata can include homonyms (the same tags used with different meanings) and synonyms (multiple tags for the same concept), which may lead to inappropriate connections between items and inefficient searches for information about a subject. For example, the tag "orange" may refer to the fruit or the color, and items related to a version of the Linux kernel may be tagged "Linux", "kernel", "Penguin", "software", or a variety of other terms. Users can also choose tags that are different inflections of words (such as singular and plural), which can contribute to navigation difficulties if the system does not include stemming of tags when searching or browsing. Larger-scale folksonomies address some of the problems of tagging, in that users of tagging systems tend to notice the current use of "tag terms" within these systems, and thus use existing tags in order to easily form connections to related items. In this way, folksonomies may collectively develop a partial set of tagging conventions. Complex system dynamics Despite the apparent lack of control, research has shown that a simple form of shared vocabulary emerges in social bookmarking systems. Collaborative tagging exhibits a form of complex systems dynamics (or self-organizing dynamics). 
Thus, even if no central controlled vocabulary constrains the actions of individual users, the distribution of tags converges over time to stable power law distributions. Once such stable distributions form, simple folksonomic vocabularies can be extracted by examining the correlations that form between different tags. In addition, research has suggested that it is easier for machine learning algorithms to learn tag semantics when users tag "verbosely"—when they annotate resources with a wealth of freely associated, descriptive keywords. Spamming Tagging systems open to the public are also open to tag spam, in which people apply an excessive number of tags or unrelated tags to an item (such as a YouTube video) in order to attract viewers. This abuse can be mitigated using human or statistical identification of spam items. The number of tags allowed may also be limited to reduce spam. Syntax Some tagging systems provide a single text box to enter tags, so to be able to tokenize the string, a separator must be used. Two popular separators are the space character and the comma. To enable the use of separators in the tags, a system may allow for higher-level separators (such as quotation marks) or escape characters. Systems can avoid the use of separators by allowing only one tag to be added to each input widget at a time, although this makes adding multiple tags more time-consuming. A syntax for use within HTML is to use the rel-tag microformat which uses the rel attribute with value "tag" (i.e., rel="tag") to indicate that the linked-to page acts as a tag for the current context. See also Annotation Collective intelligence Concept map Enterprise bookmarking Enterprise social software Expert system Explicit knowledge Human–computer interaction Information ecology Knowledge transfer Knowledge worker Management information system Meta-knowledge Organizational memory RRID Semantics Semantic Web Social network aggregation Subject (documents) Subject indexing Notes References Collective intelligence Computer jargon Information retrieval techniques Knowledge representation Metadata Reference Web 2.0
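The Syntax section above mentions single-text-box tag entry with space or comma separators and quotation marks as higher-level separators. As a rough illustration of how such input might be tokenized, the sketch below uses Python's standard shlex and csv modules; real tagging systems each define their own rules, so this mirrors the conventions described above rather than any particular product's parser.

```python
import csv
import io
import shlex

def split_space_separated(raw: str):
    """Space-separated tags; quotation marks group multi-word tags."""
    return shlex.split(raw)

def split_comma_separated(raw: str):
    """Comma-separated tags; quotation marks protect embedded commas."""
    row = next(csv.reader(io.StringIO(raw), skipinitialspace=True))
    return [tag.strip() for tag in row if tag.strip()]

if __name__ == "__main__":
    print(split_space_separated('baseball tickets "san francisco"'))
    # ['baseball', 'tickets', 'san francisco']
    print(split_comma_separated('baseball, tickets, "giants, sf", 2024'))
    # ['baseball', 'tickets', 'giants, sf', '2024']
```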
Tag (metadata)
[ "Technology" ]
3,215
[ "Computing terminology", "Computer jargon", "Metadata", "Data", "Natural language and computing" ]
1,708,182
https://en.wikipedia.org/wiki/Designer%20baby
A designer baby is a baby whose genetic makeup has been selected or altered, often to exclude a particular gene or to remove genes associated with disease. This process usually involves analysing a wide range of human embryos to identify genes associated with particular diseases and characteristics, and selecting embryos that have the desired genetic makeup; a process known as preimplantation genetic diagnosis. Screening for single genes is commonly practiced, and polygenic screening is offered by a few companies. Other methods by which a baby's genetic information can be altered involve directly editing the genome before birth, which is not routinely performed and only one instance of this is known to have occurred as of 2019, where Chinese twins Lulu and Nana were edited as embryos, causing widespread criticism. Genetically altered embryos can be achieved by introducing the desired genetic material into the embryo itself, or into the sperm and/or egg cells of the parents; either by delivering the desired genes directly into the cell or using gene-editing technology. This process is known as germline engineering and performing this on embryos that will be brought to term is typically prohibited by law. Editing embryos in this manner means that the genetic changes can be carried down to future generations, and since the technology concerns editing the genes of an unborn baby, it is considered controversial and is subject to ethical debate. While some scientists condone the use of this technology to treat disease, concerns have been raised that this could be translated into using the technology for cosmetic purposes and enhancement of human traits. Pre-implantation genetic diagnosis Pre-implantation genetic diagnosis (PGD or PIGD) is a procedure in which embryos are screened prior to implantation. The technique is used alongside in vitro fertilisation (IVF) to obtain embryos for evaluation of the genome – alternatively, ovocytes can be screened prior to fertilisation. The technique was first used in 1989. PGD is used primarily to select embryos for implantation in the case of possible genetic defects, allowing identification of mutated or disease-related alleles and selection against them. It is especially useful in embryos from parents where one or both carry a heritable disease. PGD can also be used to select for embryos of a certain sex, most commonly when a disease is more strongly associated with one sex than the other (as is the case for X-linked disorders which are more common in males, such as haemophilia). Infants born with traits selected following PGD are sometimes considered to be designer babies. One application of PGD is the selection of 'saviour siblings', children who are born to provide a transplant (of an organ or group of cells) to a sibling with a usually life-threatening disease. Saviour siblings are conceived through IVF and then screened using PGD to analyze genetic similarity to the child needing a transplant, to reduce the risk of rejection. Process Embryos for PGD are obtained from IVF procedures in which the oocyte is artificially fertilised by sperm. Oocytes from the woman are harvested following controlled ovarian hyperstimulation (COH), which involves fertility treatments to induce production of multiple oocytes. After harvesting the oocytes, they are fertilised in vitro, either during incubation with multiple sperm cells in culture, or via intracytoplasmic sperm injection (ICSI), where sperm is directly injected into the oocyte. 
The resulting embryos are usually cultured for 3–6 days, allowing them to reach the blastomere or blastocyst stage. Once embryos reach the desired stage of development, cells are biopsied and genetically screened. The screening procedure varies based on the nature of the disorder being investigated. Polymerase chain reaction (PCR) is a process in which DNA sequences are amplified to produce many more copies of the same segment, allowing screening of large samples and identification of specific genes. The process is often used when screening for monogenic disorders, such as cystic fibrosis. Another screening technique, fluorescent in situ hybridisation (FISH) uses fluorescent probes which specifically bind to highly complementary sequences on chromosomes, which can then be identified using fluorescence microscopy. FISH is often used when screening for chromosomal abnormalities such as aneuploidy, making it a useful tool when screening for disorders such as Down syndrome. Following the screening, embryos with the desired trait (or lacking an undesired trait such as a mutation) are transferred into the mother's uterus, then allowed to develop naturally. Regulation PGD regulation is determined by individual countries' governments, with some prohibiting its use entirely, including in Austria, China, and Ireland. In many countries, PGD is permitted under very stringent conditions for medical use only, as is the case in France, Switzerland, Italy and the United Kingdom. Whilst PGD in Italy and Switzerland is only permitted under certain circumstances, there is no clear set of specifications under which PGD can be carried out, and selection of embryos based on sex is not permitted. In France and the UK, regulations are much more detailed, with dedicated agencies setting out framework for PGD. Selection based on sex is permitted under certain circumstances, and genetic disorders for which PGD is permitted are detailed by the countries' respective agencies. In contrast, the United States federal law does not regulate PGD, with no dedicated agencies specifying regulatory framework by which healthcare professionals must abide. Elective sex selection is permitted, accounting for around 9% of all PGD cases in the U.S., as is selection for desired conditions such as deafness or dwarfism. Pre-implantation Genetic Testing Based on the specific analysis conducted: PGT-M (Preimplantation Genetic Testing for monogenic diseases): It is used to detect hereditary diseases caused by the mutation or alteration of the DNA sequence of a single gene. PGT-A (Preimplantation Genetic Testing for aneuploidy): It is used to diagnose numerical abnormalities (aneuploidies). Human germline engineering Human germline engineering is a process in which the human genome is edited within a germ cell, such as a sperm cell or oocyte (causing heritable changes), or in the zygote or embryo following fertilization. Germline engineering results in changes in the genome being incorporated into every cell in the body of the offspring (or of the individual following embryonic germline engineering). This process differs from somatic cell engineering, which does not result in heritable changes. Most human germline editing is performed on individual cells and non-viable embryos, which are destroyed at a very early stage of development. In November 2018, however, a Chinese scientist, He Jiankui, announced that he had created the first human germline genetically edited babies. 
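Returning briefly to the embryo-selection step of PGD described above, the following toy Python sketch filters hypothetical genotype calls for a single recessive disease gene and keeps the embryos predicted to be unaffected. The embryo identifiers and allele labels are invented for the example, and real PGD interpretation and reporting involve far more than this.

```python
# Genotypes for a recessive disorder: an embryo is affected only if both
# copies of the gene carry the pathogenic allele. All data here are invented.
embryo_genotypes = {
    "embryo_01": ("pathogenic", "pathogenic"),   # affected
    "embryo_02": ("pathogenic", "wild_type"),    # unaffected carrier
    "embryo_03": ("wild_type", "wild_type"),     # unaffected non-carrier
}

def is_affected_recessive(genotype):
    """True if both alleles are pathogenic (recessive inheritance)."""
    return all(allele == "pathogenic" for allele in genotype)

def select_for_transfer(genotypes):
    """Return embryos predicted unaffected, non-carriers listed first."""
    unaffected = [eid for eid, gt in genotypes.items()
                  if not is_affected_recessive(gt)]
    return sorted(unaffected,
                  key=lambda eid: genotypes[eid].count("pathogenic"))

if __name__ == "__main__":
    print(select_for_transfer(embryo_genotypes))
    # ['embryo_03', 'embryo_02']
```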
Genetic engineering relies on a knowledge of human genetic information, made possible by research such as the Human Genome Project, which identified the position and function of all the genes in the human genome. As of 2019, high-throughput sequencing methods allow genome sequencing to be conducted very rapidly, making the technology widely available to researchers. Germline modification is typically accomplished through techniques which incorporate a new gene into the genome of the embryo or germ cell in a specific location. This can be achieved by introducing the desired DNA directly to the cell for it to be incorporated, or by replacing a gene with one of interest. These techniques can also be used to remove or disrupt unwanted genes, such as ones containing mutated sequences. Whilst germline engineering has mostly been performed in mammals and other animals, research on human cells in vitro is becoming more common. Most commonly used in human cells are germline gene therapy and the engineered nuclease system CRISPR/Cas9. Germline gene modification Gene therapy is the delivery of a nucleic acid (usually DNA or RNA) into a cell as a pharmaceutical agent to treat disease. Most commonly it is carried out using a vector, which transports the nucleic acid (usually DNA encoding a therapeutic gene) into the target cell. A vector can transduce a desired copy of a gene into a specific location to be expressed as required. Alternatively, a transgene can be inserted to deliberately disrupt an unwanted or mutated gene, preventing transcription and translation of the faulty gene products to avoid a disease phenotype. Gene therapy in patients is typically carried out on somatic cells in order to treat conditions such as some leukaemias and vascular diseases. Human germline gene therapy in contrast is restricted to in vitro experiments in some countries, whilst others prohibited it entirely, including Australia, Canada, Germany and Switzerland. Whilst the National Institutes of Health in the US does not currently allow in utero germline gene transfer clinical trials, in vitro trials are permitted. The NIH guidelines state that further studies are required regarding the safety of gene transfer protocols before in utero research is considered, requiring current studies to provide demonstrable efficacy of the techniques in the laboratory. Research of this sort is currently using non-viable embryos to investigate the efficacy of germline gene therapy in treatment of disorders such as inherited mitochondrial diseases. Gene transfer to cells is usually by vector delivery. Vectors are typically divided into two classes – viral and non-viral. Viral vectors Viruses infect cells by transducing their genetic material into a host's cell, using the host's cellular machinery to generate viral proteins needed for replication and proliferation. By modifying viruses and loading them with the therapeutic DNA or RNA of interest, it is possible to use these as a vector to provide delivery of the desired gene into the cell. Retroviruses are some of the most commonly used viral vectors, as they not only introduce their genetic material into the host cell, but also copy it into the host's genome. In the context of gene therapy, this allows permanent integration of the gene of interest into the patient's own DNA, providing longer lasting effects. Viral vectors work efficiently and are mostly safe but present with some complications, contributing to the stringency of regulation on gene therapy. 
Despite partial inactivation of viral vectors in gene therapy research, they can still be immunogenic and elicit an immune response. This can impede viral delivery of the gene of interest, as well as cause complications for the patient themselves when used clinically, especially in those who already have a serious genetic illness. Another difficulty is the possibility that some viruses will randomly integrate their nucleic acids into the genome, which can interrupt gene function and generate new mutations. This is a significant concern when considering germline gene therapy, due to the potential to generate new mutations in the embryo or offspring. Non-viral vectors Non-viral methods of nucleic acid transfection involve injecting a naked DNA plasmid into the cell for incorporation into the genome. This method used to be relatively ineffective, with a low frequency of integration; however, efficiency has since greatly improved, using methods to enhance the delivery of the gene of interest into cells. Furthermore, non-viral vectors are simple to produce on a large scale and are not highly immunogenic. Some non-viral methods are detailed below: Electroporation is a technique in which high voltage pulses are used to carry DNA into the target cell across the membrane. The method is believed to function due to the formation of pores across the membrane, but although these are temporary, electroporation results in a high rate of cell death, which has limited its use. An improved version of this technology, electron-avalanche transfection, has since been developed, which involves shorter (microsecond) high voltage pulses that result in more effective DNA integration and less cellular damage. The gene gun is a physical method of DNA transfection, where a DNA plasmid is coated onto a particle of heavy metal (usually gold) and loaded into the 'gun'. The device generates a force to penetrate the cell membrane, allowing the DNA to enter whilst retaining the metal particle. Oligonucleotides are used as chemical vectors for gene therapy, often to disrupt mutated DNA sequences and prevent their expression. Disruption in this way can be achieved by introduction of small RNA molecules, called siRNA, which signal cellular machinery to cleave the unwanted mRNA sequences and so prevent their translation. Another method utilises double-stranded oligonucleotides, which bind transcription factors required for transcription of the target gene. By competitively binding these transcription factors, the oligonucleotides can prevent the gene's expression. ZFNs Zinc-finger nucleases (ZFNs) are enzymes generated by fusing a zinc finger DNA-binding domain to a DNA-cleavage domain. A zinc-finger array recognizes between 9 and 18 bases of sequence, so by mixing and matching these modules it becomes easier to target virtually any sequence researchers wish to alter within complex genomes. A ZFN is a macromolecular complex formed by monomers in which each subunit contains a zinc-finger domain and a FokI endonuclease domain. The FokI domains must dimerize to be active, which narrows the target area by requiring two DNA-binding events in close proximity. The resulting cleavage event is what enables most genome-editing technologies to work. After a break is created, the cell seeks to repair it. One method is NHEJ, in which the cell polishes the two ends of the broken DNA and seals them back together, often producing a frameshift. An alternative method is homology-directed repair, in which the cell tries to fix the damage by using a copy of the sequence as a backup.
By supplying their own template, researchers can have the system insert a desired sequence instead. The success of using ZFNs in gene therapy depends on the insertion of genes into the chromosomal target area without causing damage to the cell. Custom ZFNs offer an option for gene correction in human cells. TALENs Another method, transcription activator-like effector nucleases (TALENs), targets single nucleotides. TALENs are made by fusing a TAL effector DNA-binding domain to a DNA cleavage domain. The specificity of a TALEN depends on how its repeat modules are arranged: TALENs are "built from arrays of 33-35 amino acid modules…by assembling those arrays…researchers can target any sequence they like". The pair of variable amino acids within each module that determines base recognition is referred to as the Repeat Variable Diresidue (RVD). The relationship between these amino acids and the bases they recognize enables researchers to engineer a domain for a specific DNA sequence. The TALEN enzymes are designed to cut specific parts of the DNA strands so that the section can be replaced, which enables edits to be made. TALENs can be used to edit genomes using non-homologous end joining (NHEJ) and homology directed repair. CRISPR/Cas9 The CRISPR/Cas9 system (CRISPR – Clustered Regularly Interspaced Short Palindromic Repeats, Cas9 – CRISPR-associated protein 9) is a genome editing technology based on the bacterial antiviral CRISPR/Cas system. The bacterial system has evolved to recognize viral nucleic acid sequences and cut these sequences upon recognition, damaging infecting viruses. The gene editing technology uses a simplified version of this process, manipulating the components of the bacterial system to allow location-specific gene editing. The CRISPR/Cas9 system broadly consists of two major components – the Cas9 nuclease and a guide RNA (gRNA). The gRNA contains a Cas-binding sequence and a ~20 nucleotide spacer sequence, which is specific and complementary to the target sequence on the DNA of interest. Editing specificity can therefore be changed by modifying this spacer sequence. Upon system delivery to a cell, Cas9 and the gRNA bind, forming a ribonucleoprotein complex. This causes a conformational change in Cas9, allowing it to cleave DNA if the gRNA spacer sequence binds with sufficient homology to a particular sequence in the host genome. When the gRNA binds to the target sequence, Cas9 will cleave the locus, causing a double-strand break (DSB). The resulting DSB can be repaired by one of two mechanisms. Non-Homologous End Joining (NHEJ) is an efficient but error-prone mechanism, which often introduces insertions and deletions (indels) at the DSB site; this means it is often used in knockout experiments to disrupt genes and introduce loss-of-function mutations. Homology Directed Repair (HDR) is a less efficient but high-fidelity process which is used to introduce precise modifications into the target sequence. The process requires adding a DNA repair template including a desired sequence, which the cell's machinery uses to repair the DSB, incorporating the sequence of interest into the genome. Since NHEJ is more efficient than HDR, most DSBs will be repaired via NHEJ, introducing gene knockouts. To increase the frequency of HDR, inhibiting genes associated with NHEJ and performing the process in particular cell cycle phases (primarily S and G2) appear effective.
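As a rough illustration of the spacer–target complementarity described above, the sketch below scans a DNA sequence for sites matching a 20-nucleotide gRNA spacer within a small number of mismatches. The sequences and mismatch threshold are invented; a real guide-design tool would also require an adjacent PAM (for example, NGG for SpCas9) and would model cleavage efficiency and off-target risk far more carefully.

```python
def reverse_complement(seq: str) -> str:
    """Reverse-complement a DNA string (A<->T, C<->G)."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def mismatches(a: str, b: str) -> int:
    """Count position-wise differences between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def find_target_sites(genome: str, spacer: str, max_mismatches: int = 3):
    """Yield (position, strand, mismatch_count) for candidate target sites."""
    k = len(spacer)
    rc_spacer = reverse_complement(spacer)
    for i in range(len(genome) - k + 1):
        window = genome[i:i + k]
        for strand, probe in (("+", spacer), ("-", rc_spacer)):
            mm = mismatches(window, probe)
            if mm <= max_mismatches:
                yield i, strand, mm

if __name__ == "__main__":
    # Invented sequences: one exact site and one site with two mismatches.
    genome = "TTACGGATCCGATTACACCGGTAGCTACGGATGCGATTACACCGGACC"
    spacer = "ACGGATCCGATTACACCGGT"   # 20 nt, invented
    for pos, strand, mm in find_target_sites(genome, spacer):
        print(pos, strand, mm)
    # expected: the exact site (position 2) and the two-mismatch site (position 26)
```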
CRISPR/Cas9 is an effective way of manipulating the genome in vivo in animals as well as in human cells in vitro, but some issues with the efficiency of delivery and editing mean that it is not considered safe for use in viable human embryos or the body's germ cells. As well as the higher efficiency of NHEJ making inadvertent knockouts likely, CRISPR can introduce DSBs to unintended parts of the genome, called off-target effects. These arise when the spacer sequence of the gRNA has sufficient sequence homology to other loci in the genome, which can introduce random mutations throughout. If performed in germline cells, such mutations could be introduced to all the cells of a developing embryo. There are ongoing efforts to prevent these unintended consequences of gene editing, and a race to develop new technologies for suppressing or detecting off-target effects, including biased off-target detection and anti-CRISPR proteins. In biased off-target detection, several tools are used to predict the locations where off-target effects may take place. Two main models are used: alignment-based models, in which the gRNA sequence is aligned against genome sequences and off-target locations are then predicted, and scoring-based models, in which each gRNA is scored for its off-target potential according to the position of mismatches. Regulation on CRISPR use In 2015, the International Summit on Human Gene Editing was held in Washington D.C., hosted by scientists from China, the UK and the U.S. The summit concluded that genome editing of somatic cells using CRISPR and other genome editing tools would be allowed to proceed under FDA regulations, but human germline engineering would not be pursued. In February 2016, scientists at the Francis Crick Institute in London were given a license permitting them to edit human embryos using CRISPR to investigate early development. Regulations were imposed to prevent the researchers from implanting the embryos and to ensure experiments were stopped and embryos destroyed after seven days. In November 2018, Chinese scientist He Jiankui announced that he had performed the first germline engineering on viable human embryos, which have since been brought to term. The research claims received significant criticism, and Chinese authorities suspended He's research activity. Following the event, scientists and government bodies have called for more stringent regulations to be imposed on the use of CRISPR technology in embryos, with some calling for a global moratorium on germline genetic engineering. Chinese authorities have announced that stricter controls will be imposed, with Communist Party general secretary Xi Jinping and government premier Li Keqiang calling for new gene-editing legislation to be introduced. As of January 2020, germline genetic alterations are prohibited by law in 24 countries and by guidelines in 9 others. The Council of Europe's Convention on Human Rights and Biomedicine, also known as the Oviedo Convention, states in its Article 13, "Interventions on the human genome": "An intervention seeking to modify the human genome may only be undertaken for preventive, diagnostic or therapeutic purposes and only if its aim is not to introduce any modification in the genome of any descendants".
Nonetheless, wide public debate has emerged over whether Article 13 of the Oviedo Convention should be revisited and renewed, especially because it was drafted in 1997 and may be out of date given recent technological advancements in the field of genetic engineering. Lulu and Nana controversy The Lulu and Nana controversy refers to the two Chinese twin girls born in November 2018, who had been genetically modified as embryos by the Chinese scientist He Jiankui. The twins are believed to be the first genetically modified babies. The girls' parents had participated in a clinical project run by He, which involved IVF, PGD and genome editing procedures in an attempt to edit the gene CCR5. CCR5 encodes a protein used by HIV to enter host cells, so by introducing a specific mutation, CCR5 Δ32, into the gene, He claimed that the process would confer innate resistance to HIV. The project run by He recruited couples wanting children where the man was HIV-positive and the woman uninfected. During the project, He performed IVF with sperm and eggs from the couples and then introduced the CCR5 Δ32 mutation into the genomes of the embryos using CRISPR/Cas9. He then used PGD on the edited embryos, during which he sequenced biopsied cells to identify whether the mutation had been successfully introduced. He reported some mosaicism in the embryos, whereby the mutation had integrated into some cells but not all, suggesting the offspring would not be entirely protected against HIV. He claimed that during the PGD and throughout the pregnancy, fetal DNA was sequenced to check for off-target errors introduced by the CRISPR/Cas9 technology; however, the NIH released a statement in which they announced that "the possibility of damaging off-target effects has not been satisfactorily explored". The girls were born in early November 2018, and were reported by He to be healthy. His research was conducted in secret until November 2018, when documents were posted on the Chinese clinical trials registry and MIT Technology Review published a story about the project. Following this, He was interviewed by the Associated Press and presented his work on 27 November at the Second International Human Genome Editing Summit, which was held in Hong Kong. Although the information available about this experiment is relatively limited, it is considered that, in conducting this trial, the scientist violated many ethical, social and moral norms, as well as China's guidelines and regulations, which prohibited germ-line genetic modification in human embryos. From a technological point of view, the CRISPR/Cas9 technique is one of the most precise and least expensive methods of gene modification to this day, although there are still a number of limitations that keep the technique from being labelled safe and efficient. During the First International Summit on Human Gene Editing in 2015 the participants agreed that a halt must be set on germline genetic alterations in clinical settings unless and until: "(1) the relevant safety and efficacy issues have been resolved, based on appropriate understanding and balancing of risks, potential benefits, and alternatives, and (2) there is broad societal consensus about the appropriateness of the proposed application".
However, during the second International Summit in 2018 the topic was raised again, with the summit statement noting: "Progress over the last three years and the discussions at the current summit, however, suggest that it is time to define a rigorous, responsible translational pathway toward such trials". Arguing that the ethical and legal aspects should indeed be revisited, G. Daley, a representative of the summit's organisers and Dean of Harvard Medical School, depicted Dr. He's experiment as "a wrong turn on the right path". The experiment was met with widespread criticism and was very controversial, globally as well as in China. Several bioethicists, researchers and medical professionals have released statements condemning the research, including Nobel laureate David Baltimore, who deemed the work "irresponsible", and a pioneer of the CRISPR/Cas9 technology, biochemist Jennifer Doudna at the University of California, Berkeley. The director of the NIH, Francis S. Collins, stated that the "medical necessity for inactivation of CCR5 in these infants is utterly unconvincing" and condemned He Jiankui and his research team for 'irresponsible work'. Other scientists, including geneticist George Church of Harvard University, suggested gene editing for disease resistance was "justifiable" but expressed reservations regarding the conduct of He's work. The Safe Genes program run by DARPA aims to protect soldiers against gene-editing warfare tactics; the program draws on input from ethics experts to better predict and understand current and future gene-editing issues. The World Health Organization has launched a global registry to track research on human genome editing, after a call to halt all work on genome editing. The Chinese Academy of Medical Sciences responded to the controversy in the journal Lancet, condemning He for violating ethical guidelines documented by the government and emphasising that germline engineering should not be performed for reproductive purposes. The academy assured that it would "issue further operational, technical and ethical guidelines as soon as possible" to impose tighter regulation on human embryo editing. Ethical considerations The editing of embryos and germ cells and the generation of designer babies are the subject of ethical debate, as a result of the implications of modifying genomic information in a heritable manner. This includes arguments over unbalanced gender selection and gamete selection. Despite regulations set by individual countries' governing bodies, the absence of a standardized regulatory framework leads to frequent discourse about germline engineering among scientists, ethicists and the general public. Arthur Caplan, the head of the Division of Bioethics at New York University, suggests that establishing an international group to set guidelines for the topic would greatly benefit global discussion, and proposes instating "religious and ethics and legal leaders" to impose well-informed regulations. In many countries, editing embryos and germline modification for reproductive use is illegal. As of 2017, the U.S. restricts the use of germline modification and the procedure is under heavy regulation by the FDA and NIH. The American National Academy of Sciences and National Academy of Medicine indicated they would provide qualified support for human germline editing "for serious conditions under stringent oversight", should safety and efficiency issues be addressed. In 2019, the World Health Organization called human germline genome editing "irresponsible".
Since genetic modification poses risk to any organism, researchers and medical professionals must give the prospect of germline engineering careful consideration. The main ethical concern is that these types of treatments will produce a change that can be passed down to future generations, and therefore any error, known or unknown, will also be passed down and will affect the offspring. Theologian Ronald Green of Dartmouth College has raised concern that this could result in a decrease in genetic diversity and the accidental introduction of new diseases in the future. When considering support for research into germline engineering, ethicists have often suggested that it can be considered unethical not to consider a technology that could improve the lives of children who would be born with congenital disorders. Geneticist George Church claims that he does not expect germline engineering to increase societal disadvantage, and recommends lowering costs and improving education surrounding the topic to dispel these views. He emphasizes that allowing germline engineering in children who would otherwise be born with congenital defects could save around 5% of babies from living with potentially avoidable diseases. Jackie Leach Scully, professor of social ethics and bioethics at Newcastle University, acknowledges that the prospect of designer babies could leave those living with diseases and unable to afford the technology feeling marginalized and without medical support. However, Professor Leach Scully also suggests that germline editing provides the option for parents "to try and secure what they think is the best start in life" and does not believe it should be ruled out. Similarly, Nick Bostrom, an Oxford philosopher known for his work on the risks of artificial intelligence, proposed that "super-enhanced" individuals could "change the world through their creativity and discoveries, and through innovations that everyone else would use". Many bioethicists emphasize that germline engineering is usually considered in the best interest of a child, and should therefore be supported. Dr James Hughes, a bioethicist at Trinity College, Connecticut, suggests that the decision may not differ greatly from other, well-accepted decisions made by parents – choosing with whom to have a child and using contraception to determine when a child is conceived. Julian Savulescu, a bioethicist and philosopher at Oxford University, believes parents "should allow selection for non‐disease genes even if this maintains or increases social inequality", coining the term procreative beneficence to describe the idea that the children "expected to have the best life" should be selected. The Nuffield Council on Bioethics said in 2017 that there was "no reason to rule out" changing the DNA of a human embryo if performed in the child's interest, but stressed that this was only provided that it did not contribute to societal inequality. Furthermore, the Nuffield Council in 2018 detailed applications that would preserve equality and benefit humanity, such as eliminating hereditary disorders and adapting to a warmer climate. David Pearce, a philosopher and Director of Bioethics at the non-profit Invincible Wellbeing, argues that "the question [of designer babies] comes down to an analysis of risk-reward ratios - and our basic ethical values, themselves shaped by our evolutionary past."
According to Pearce, "it's worth recalling that each act of old-fashioned sexual reproduction is itself an untested genetic experiment", often compromising a child's wellbeing and pro-social capacities even if the child grows up in a healthy environment. Pearce thinks that as the technology matures, more people may find it unacceptable to rely on the "genetic roulette of natural selection". Conversely, several concerns have been raised regarding the possibility of generating designer babies, especially concerning the inefficiencies currently presented by the technologies. Green stated that although the technology was "unavoidably in our future", he foresaw "serious errors and health problems as unknown genetic side effects in 'edited' children" arising. Furthermore, Green warned against the possibility that "the well-to-do" could more easily access the technologies "...that make them even better off". This concern that germline editing could exacerbate a societal and financial divide is shared amongst other researchers, with the chair of the Nuffield Council on Bioethics, Professor Karen Yeung, stressing that if funding of the procedures "were to exacerbate social injustice, in our view that would not be an ethical approach". Social and religious worries also arise over the possibility of editing human embryos. In a survey conducted by the Pew Research Center, it was found that only a third of the Americans surveyed who identified as strongly Christian approved of germline editing. Catholic leaders are in the middle ground. This stance is because, according to Catholicism, a baby is a gift from God, and Catholics believe that people are created to be perfect in God's eyes; thus, altering the genetic makeup of an infant is unnatural. In 1984, Pope John Paul II stated that genetic manipulation aimed at healing diseases is acceptable in the Church. He stated that it "will be considered in principle as desirable provided that it tends to the real promotion of the personal well-being of man, without harming his integrity or worsening his life conditions". However, it is unacceptable if designer babies are used to create a superior race, including by cloning humans. The Catholic Church rejects human cloning even if its purpose is to produce organs for therapeutic usage. The Vatican has stated that "The fundamental values connected with the techniques of artificial human procreation are two: the life of the human being called into existence and the special nature of the transmission of human life in marriage". According to them, it violates the dignity of the individual and is morally illicit. A survey conducted by the Mayo Clinic in the Midwestern United States in 2017 found that most of the participants were against the creation of designer babies, with some noting its eugenic undertones. The participants also felt that gene editing may have unintended consequences that manifest later in life for those who undergo it. Some who took the survey worried that gene editing may lead to a decrease in the genetic diversity of the population. The survey also noted that the participants were worried about the potential socioeconomic inequalities that designer babies may exacerbate.
The authors of the survey noted that the results showed a greater need for interaction between the public and the scientific community concerning the possible implications and the recommended regulation of gene editing, since it was unclear to them how much the participants knew about gene editing and its effects before taking the survey. In Islam, the positive attitude towards genetic engineering is based on the general principle that Islam aims at facilitating human life. However, the negative view comes from the process used to create a designer baby, which oftentimes involves the destruction of some embryos. Muslims believe that "embryos already has a soul" at conception. Thus, the destruction of embryos is against the teaching of the Qur'an, Hadith, and Shari'ah law, which teach the responsibility to protect human life. Moreover, the procedure would be viewed as "acting like God/Allah". Regarding the idea that parents could choose the gender of their child, Islamic teaching holds that humans have no say in choosing the gender, and that "gender selection is only up to God". Since 2020, there have been discussions about American studies that used the CRISPR/Cas9 technique modified with HDR (homology-directed repair) on embryos that were not implanted; the conclusions were that gene editing technologies are currently not mature enough for real-world use and that more studies generating safe results over a longer period of time are needed. An article in the journal Bioscience Reports discussed how health, in genetic terms, is not straightforward, and argued that there should be extensive deliberation over operations involving gene editing once the technology is mature enough for real-world use, with all of the potential effects considered on a case-by-case basis to prevent undesired effects on the subject or patient being operated on. Social aspects also raise concern, as highlighted by Josephine Quintavalle, director of Comment on Reproductive Ethics at Queen Mary University of London, who states that selecting children's traits is "turning parenthood into an unhealthy model of self-gratification rather than a relationship". One major worry among scientists, including Marcy Darnovsky at the Center for Genetics and Society in California, is that permitting germline engineering for correction of disease phenotypes is likely to lead to its use for cosmetic purposes and enhancement. Meanwhile, Henry Greely, a bioethicist at Stanford University in California, states that "almost everything you can accomplish by gene editing, you can accomplish by embryo selection", suggesting the risks undertaken by germline engineering may not be necessary. Alongside this, Greely emphasizes that the belief that genetic engineering will lead to enhancement is unfounded, and that claims that we will enhance intelligence and personality are far off – "we just don't know enough and are unlikely to for a long time – or maybe for ever".
See also Biohappiness Directed evolution (transhumanism) Epidemiology of genetic disorder Eugenics New eugenics Genetically modified organism Human enhancement Human genetic engineering Human germline engineering Lulu and Nana (Gene edited babies in China 2018) Moral enhancement Reprogenetics Transhumanism References Further reading 1989 introductions 2018 introductions Bioethics Fertility medicine Genetic engineering Genome editing Human reproduction Transhumanism
Designer baby
[ "Chemistry", "Technology", "Engineering", "Biology" ]
7,477
[ "Bioethics", "Genetics techniques", "Biological engineering", "Genome editing", "Genetic engineering", "Transhumanism", "Ethics of science and technology", "Molecular biology" ]
1,708,335
https://en.wikipedia.org/wiki/Sanger%20sequencing
Sanger sequencing is a method of DNA sequencing that involves electrophoresis and is based on the random incorporation of chain-terminating dideoxynucleotides by DNA polymerase during in vitro DNA replication. After first being developed by Frederick Sanger and colleagues in 1977, it became the most widely used sequencing method for approximately 40 years. An automated instrument using slab gel electrophoresis and fluorescent labels was first commercialized by Applied Biosystems in March 1987. Later, automated slab gels were replaced with automated capillary array electrophoresis. More recently, higher volume Sanger sequencing has been replaced by next generation sequencing methods, especially for large-scale, automated genome analyses. However, the Sanger method remains in wide use for smaller-scale projects and for validation of deep sequencing results. It still has the advantage over short-read sequencing technologies (like Illumina) in that it can produce DNA sequence reads of > 500 nucleotides and maintains a very low error rate with accuracies around 99.99%. Sanger sequencing is still actively being used in public health initiatives, such as sequencing the spike protein from SARS-CoV-2, as well as for the surveillance of norovirus outbreaks through the Centers for Disease Control and Prevention's (CDC) CaliciNet surveillance network. Method The classical chain-termination method requires a single-stranded DNA template, a DNA primer, a DNA polymerase, normal deoxynucleotide triphosphates (dNTPs), and modified di-deoxynucleotide triphosphates (ddNTPs), the latter of which terminate DNA strand elongation. These chain-terminating nucleotides lack a 3'-OH group required for the formation of a phosphodiester bond between two nucleotides, causing DNA polymerase to cease extension of DNA when a modified ddNTP is incorporated. The ddNTPs may be radioactively or fluorescently labelled for detection in automated sequencing machines. The DNA sample is divided into four separate sequencing reactions, containing all four of the standard deoxynucleotides (dATP, dGTP, dCTP and dTTP) and the DNA polymerase. To each reaction is added only one of the four dideoxynucleotides (ddATP, ddGTP, ddCTP, or ddTTP), while the other added nucleotides are ordinary ones. The deoxynucleotide concentration should be approximately 100-fold higher than that of the corresponding dideoxynucleotide (e.g. 0.5 mM dTTP : 0.005 mM ddTTP) to allow enough fragments to be produced while still transcribing the complete sequence (but the concentration of ddNTP also depends on the desired length of sequence). In total, four separate reactions are therefore needed, one for each ddNTP. Following rounds of template DNA extension from the bound primer, the resulting DNA fragments are heat denatured and separated by size using gel electrophoresis. This is frequently performed using a denaturing polyacrylamide-urea gel with each of the four reactions run in one of four individual lanes (lanes A, T, G, C). In the original publication of 1977, the formation of base-paired loops of ssDNA was a cause of serious difficulty in resolving bands at some locations. The DNA bands may then be visualized by autoradiography or UV light, and the DNA sequence can be directly read off the X-ray film or gel image. When X-ray film is exposed to the gel, the dark bands correspond to DNA fragments of different lengths.
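A toy simulation can make this four-reaction read-out concrete: each lane's fragment lengths mark the positions where the corresponding ddNTP terminated synthesis, and sorting all fragments by length recovers the sequence of the newly synthesized strand. The following Python sketch ignores the primer, complementarity to the template, and incorporation statistics; it is purely illustrative.

```python
def lane_fragments(new_strand: str):
    """Map each ddNTP lane (A, C, G, T) to its terminated fragment lengths."""
    lanes = {base: [] for base in "ACGT"}
    for position, base in enumerate(new_strand, start=1):
        lanes[base].append(position)   # termination after this many bases
    return lanes

def read_gel(lanes: dict) -> str:
    """Read the sequence bottom-to-top: shortest fragment first."""
    bands = sorted((length, base)
                   for base, lengths in lanes.items()
                   for length in lengths)
    return "".join(base for _, base in bands)

if __name__ == "__main__":
    synthesized = "ATGCGTTAC"          # invented example sequence
    lanes = lane_fragments(synthesized)
    for base in "ACGT":
        print(f"dd{base} lane: fragment lengths {lanes[base]}")
    print("read sequence:", read_gel(lanes))   # ATGCGTTAC
```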
A dark band in a lane indicates a DNA fragment that is the result of chain termination after incorporation of a dideoxynucleotide (ddATP, ddGTP, ddCTP, or ddTTP). The relative positions of the different bands among the four lanes, from bottom to top, are then used to read the DNA sequence. Technical variations of chain-termination sequencing include tagging with nucleotides containing radioactive phosphorus for radiolabelling, or using a primer labeled at the 5' end with a fluorescent dye. Dye-primer sequencing facilitates reading in an optical system for faster and more economical analysis and automation. The later development by Leroy Hood and coworkers of fluorescently labeled ddNTPs and primers set the stage for automated, high-throughput DNA sequencing. Chain-termination methods have greatly simplified DNA sequencing. For example, chain-termination-based kits are commercially available that contain the reagents needed for sequencing, pre-aliquoted and ready to use. Limitations include non-specific binding of the primer to the DNA, affecting accurate read-out of the DNA sequence, and DNA secondary structures affecting the fidelity of the sequence. Dye-terminator sequencing Dye-terminator sequencing utilizes labelling of the chain terminator ddNTPs, which permits sequencing in a single reaction rather than four reactions as in the labelled-primer method. In dye-terminator sequencing, each of the four dideoxynucleotide chain terminators is labelled with a fluorescent dye, each of which emits light at a different wavelength. Owing to its greater expediency and speed, dye-terminator sequencing is now the mainstay in automated sequencing. Its limitations include dye effects due to differences in the incorporation of the dye-labelled chain terminators into the DNA fragment, resulting in unequal peak heights and shapes in the electronic DNA sequence trace electropherogram (a type of chromatogram) after capillary electrophoresis. This problem has been addressed with the use of modified DNA polymerase enzyme systems and dyes that minimize incorporation variability, as well as methods for eliminating "dye blobs". The dye-terminator sequencing method, along with automated high-throughput DNA sequence analyzers, was used for the vast majority of sequencing projects until the introduction of next generation sequencing. Automation and sample preparation Automated DNA-sequencing instruments (DNA sequencers) can sequence up to 384 DNA samples in a single batch. Batch runs may occur up to 24 times a day. DNA sequencers separate strands by size (or length) using capillary electrophoresis, detect and record dye fluorescence, and output data as fluorescent peak trace chromatograms. Sequencing reactions (thermocycling and labelling), cleanup and re-suspension of samples in a buffer solution are performed separately, before loading samples onto the sequencer. A number of commercial and non-commercial software packages can trim low-quality DNA traces automatically. These programs score the quality of each peak and remove low-quality base peaks (which are generally located at the ends of the sequence). The accuracy of such algorithms is inferior to visual examination by a human operator, but is adequate for automated processing of large sequence data sets. Applications of dye-terminating sequencing The field of public health plays many roles to support patient diagnostics as well as environmental surveillance of potential toxic substances and circulating biological pathogens.
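In a very simplified picture, the dye-terminator read-out described above amounts to calling, at each trace position, the base whose dye channel gives the strongest signal. The intensities in the following Python sketch are invented, and real base callers model peak shape, spacing, and the dye effects discussed earlier.

```python
# One entry per base position: fluorescence intensity for each dye channel.
# All numbers are invented for illustration.
trace = [
    {"A": 910, "C": 35, "G": 60, "T": 22},
    {"A": 40, "C": 55, "G": 870, "T": 30},
    {"A": 25, "C": 640, "G": 410, "T": 38},   # mixed peak -> lower purity
    {"A": 33, "C": 28, "G": 45, "T": 905},
]

def call_bases(trace):
    """Return (sequence, per-position purity) from channel intensities."""
    sequence, purities = [], []
    for peaks in trace:
        best = max(peaks, key=peaks.get)          # strongest channel wins
        total = sum(peaks.values())
        sequence.append(best)
        purities.append(peaks[best] / total if total else 0.0)
    return "".join(sequence), purities

if __name__ == "__main__":
    seq, purity = call_bases(trace)
    print(seq)                                    # AGCT
    print([round(p, 2) for p in purity])
```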
Public health laboratories (PHL) and other laboratories around the world have played a pivotal role in providing rapid sequencing data for the surveillance of the virus SARS-CoV-2, the causative agent of COVID-19, during the pandemic that was declared a public health emergency on January 30, 2020. Laboratories were tasked with the rapid implementation of sequencing methods and asked to provide accurate data to assist in the decision-making models for the development of policies to mitigate spread of the virus. Many laboratories resorted to next generation sequencing methodologies while others supported efforts with Sanger sequencing. The sequencing efforts for SARS-CoV-2 are many: while most laboratories implemented whole genome sequencing of the virus, others have opted to sequence very specific genes of the virus, such as the S-gene, which encodes the information needed to produce the spike protein. The high mutation rate of SARS-CoV-2 leads to genetic differences within the S-gene, and these differences have played a role in the infectivity of the virus. Sanger sequencing of the S-gene provides a quick, accurate, and more affordable method of retrieving the genetic code. Laboratories in lower income countries may not have the capabilities to implement expensive applications such as next generation sequencing, so Sanger methods may prevail in supporting the generation of sequencing data for surveillance of variants. Sanger sequencing is also the "gold standard" for norovirus surveillance methods for the Centers for Disease Control and Prevention's (CDC) CaliciNet network. CaliciNet is an outbreak surveillance network that was established in March 2009. The goal of the network is to collect sequencing data of circulating noroviruses in the United States and activate downstream action to determine the source of infection to mitigate the spread of the virus. The CaliciNet network has identified many infections as foodborne illnesses. These data can then be published and used to develop recommendations for future action to prevent the contamination of food. The methods employed for detection of norovirus involve targeted amplification of specific areas of the genome. The amplicons are then sequenced using dye-terminator Sanger sequencing, and the chromatograms and sequences generated are analyzed with a software package developed in BioNumerics. Sequences are tracked and strain relatedness is studied to infer epidemiological relevance. Challenges Common challenges of DNA sequencing with the Sanger method include poor quality in the first 15–40 bases of the sequence due to primer binding and deteriorating quality of sequencing traces after 700–900 bases. Base calling software such as Phred typically provides an estimate of quality to aid in trimming of low-quality regions of sequences (a minimal trimming sketch appears below). In cases where DNA fragments are cloned before sequencing, the resulting sequence may contain parts of the cloning vector. In contrast, PCR-based cloning and next-generation sequencing technologies based on pyrosequencing often avoid using cloning vectors. Recently, one-step Sanger sequencing (combined amplification and sequencing) methods such as Ampliseq and SeqSharp have been developed that allow rapid sequencing of target genes without cloning or prior amplification. Current methods can directly sequence only relatively short (300–1000 nucleotides long) DNA fragments in a single reaction.
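Here is the minimal quality-trimming sketch referred to above: a crude per-base end trimmer. The Phred scores, threshold, and read are invented, and production tools such as Phred use more sophisticated algorithms (for example, a modified Mott algorithm).

```python
def trim_low_quality(seq: str, quals: list, min_q: int = 20):
    """Drop low-quality bases from both ends of a read.

    seq   : base calls
    quals : Phred-style quality score for each base (same length as seq)
    min_q : minimum acceptable quality at the read ends
    """
    assert len(seq) == len(quals)
    start = 0
    while start < len(quals) and quals[start] < min_q:
        start += 1                      # trim the low-quality 5' end
    end = len(quals)
    while end > start and quals[end - 1] < min_q:
        end -= 1                        # trim the low-quality 3' end
    return seq[start:end], quals[start:end]

if __name__ == "__main__":
    # Invented read: noisy ends (often reported as N) flanking a clean middle.
    seq = "NNACGTACGTACGTACGTNN"
    quals = [3, 5, 31, 33, 35, 34, 36, 32, 30, 29,
             33, 35, 30, 31, 34, 32, 30, 28, 6, 4]
    trimmed_seq, trimmed_quals = trim_low_quality(seq, quals)
    print(trimmed_seq)                  # ACGTACGTACGTACGT
```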
The main obstacle to sequencing DNA fragments above this size limit is insufficient power of separation for resolving large DNA fragments that differ in length by only one nucleotide. Microfluidic Sanger sequencing Microfluidic Sanger sequencing is a lab-on-a-chip application for DNA sequencing, in which the Sanger sequencing steps (thermal cycling, sample purification, and capillary electrophoresis) are integrated on a wafer-scale chip using nanoliter-scale sample volumes. This technology generates long and accurate sequence reads, while obviating many of the significant shortcomings of the conventional Sanger method (e.g. high consumption of expensive reagents, reliance on expensive equipment, personnel-intensive manipulations, etc.) by integrating and automating the Sanger sequencing steps. In its modern inception, high-throughput genome sequencing involves fragmenting the genome into small single-stranded pieces, followed by amplification of the fragments by polymerase chain reaction (PCR). Adopting the Sanger method, each DNA fragment is irreversibly terminated with the incorporation of a fluorescently labeled dideoxy chain-terminating nucleotide, thereby producing a DNA “ladder” of fragments that each differ in length by one base and bear a base-specific fluorescent label at the terminal base. Amplified base ladders are then separated by capillary array electrophoresis (CAE) with automated, in situ “finish-line” detection of the fluorescently labeled ssDNA fragments, which provides an ordered sequence of the fragments. These sequence reads are then computer assembled into overlapping or contiguous sequences (termed "contigs") which resemble the full genomic sequence once fully assembled. Sanger methods achieve maximum read lengths of approximately 800 bp (typically 500–600 bp with non-enriched DNA). The longer read lengths in Sanger methods display significant advantages over other sequencing methods especially in terms of sequencing repetitive regions of the genome. A challenge of short-read sequence data is particularly an issue in sequencing new genomes (de novo) and in sequencing highly rearranged genome segments, typically those seen of cancer genomes or in regions of chromosomes that exhibit structural variation. Applications of microfluidic sequencing technologies Other useful applications of DNA sequencing include single nucleotide polymorphism (SNP) detection, single-strand conformation polymorphism (SSCP) heteroduplex analysis, and short tandem repeat (STR) analysis. Resolving DNA fragments according to differences in size and/or conformation is the most critical step in studying these features of the genome. Device design The sequencing chip has a four-layer construction, consisting of three 100-mm-diameter glass wafers (on which device elements are microfabricated) and a polydimethylsiloxane (PDMS) membrane. Reaction chambers and capillary electrophoresis channels are etched between the top two glass wafers, which are thermally bonded. Three-dimensional channel interconnections and microvalves are formed by the PDMS and bottom manifold glass wafer. The device consists of three functional units, each corresponding to the Sanger sequencing steps. The thermal cycling (TC) unit is a 250-nanoliter reaction chamber with integrated resistive temperature detector, microvalves, and a surface heater. Movement of reagent between the top all-glass layer and the lower glass-PDMS layer occurs through 500-μm-diameter via-holes. 
After thermal cycling, the reaction mixture undergoes purification in the capture/purification chamber and is then injected into the capillary electrophoresis (CE) chamber. The CE unit consists of a 30-cm capillary which is folded into a compact switchback pattern via 65-μm-wide turns. Sequencing chemistry Thermal cycling In the TC reaction chamber, dye-terminator sequencing reagent, template DNA, and primers are loaded into the TC chamber and thermal-cycled for 35 cycles (at 95 °C for 12 seconds and at 60 °C for 55 seconds). Purification The charged reaction mixture (containing extension fragments, template DNA, and excess sequencing reagent) is conducted through a capture/purification chamber at 30 °C via a 33-V/cm electric field applied between the capture outlet and inlet ports. The capture gel, through which the sample is driven, consists of 40 μM oligonucleotide (complementary to the primers) covalently bound to a polyacrylamide matrix. Extension fragments are immobilized by the gel matrix, and excess primer, template, free nucleotides, and salts are eluted through the capture waste port. The capture gel is heated to 67–75 °C to release the extension fragments. Capillary electrophoresis Extension fragments are injected into the CE chamber, where they are electrophoresed through a 125–167 V/cm field. Platforms The Apollo 100 platform (Microchip Biotechnologies Inc., Dublin, California) integrates the first two Sanger sequencing steps (thermal cycling and purification) in a fully automated system. The manufacturer claims that samples are ready for capillary electrophoresis within three hours of the sample and reagents being loaded into the system. The Apollo 100 platform requires sub-microliter volumes of reagents. Comparisons to other sequencing techniques The ultimate goal of high-throughput sequencing is to develop systems that are low-cost and extremely efficient at obtaining extended (longer) read lengths. Longer read lengths in each electrophoretic separation substantially reduce the cost associated with de novo DNA sequencing and the number of templates needed to sequence DNA contigs at a given redundancy. Microfluidics may allow for faster, cheaper and easier sequence assembly. See also Maxam–Gilbert sequencing Second-generation sequencing Third-generation sequencing References Further reading External links MBI Says New Tool That Automates Sanger Sample Prep Cuts Reagent and Labor Costs DNA sequencing methods Molecular biology techniques 1977 in biotechnology
Sanger sequencing
[ "Chemistry", "Biology" ]
3,402
[ "Genetics techniques", "DNA sequencing methods", "Molecular biology techniques", "DNA sequencing", "Molecular biology" ]
1,708,398
https://en.wikipedia.org/wiki/Cryptobiosis
Cryptobiosis or anabiosis is a metabolic state entered by extremophilic organisms in response to adverse environmental conditions such as desiccation, freezing, and oxygen deficiency. In the cryptobiotic state, all measurable metabolic processes stop, preventing reproduction, development, and repair. When environmental conditions return to being hospitable, the organism will return to its metabolic state of life as it was prior to cryptobiosis. Forms Anhydrobiosis Anhydrobiosis is the most studied form of cryptobiosis and occurs in situations of extreme desiccation. The term anhydrobiosis derives from the Greek for "life without water" and is most commonly used for the desiccation tolerance observed in certain invertebrate animals such as bdelloid rotifers, tardigrades, brine shrimp, nematodes, and at least one insect, a species of chironomid (Polypedilum vanderplanki). However, other life forms exhibit desiccation tolerance. These include the resurrection plant Craterostigma plantagineum, the majority of plant seeds, and many microorganisms such as bakers' yeast. Studies have shown that some anhydrobiotic organisms can survive for decades, even centuries, in the dry state. Invertebrates undergoing anhydrobiosis often contract into a smaller shape and some proceed to form a sugar called trehalose. Desiccation tolerance in plants is associated with the production of another sugar, sucrose. These sugars are thought to protect the organism from desiccation damage. In some creatures, such as bdelloid rotifers, no trehalose has been found, which has led scientists to propose other mechanisms of anhydrobiosis, possibly involving intrinsically disordered proteins. In 2011, Caenorhabditis elegans, a nematode that is also one of the best-studied model organisms, was shown to undergo anhydrobiosis in the dauer larva stage. Further research taking advantage of genetic and biochemical tools available for this organism revealed that, in addition to trehalose biosynthesis, a set of other functional pathways is involved in anhydrobiosis at the molecular level. These are mainly defense mechanisms against reactive oxygen species and xenobiotics, expression of heat shock proteins and intrinsically disordered proteins, as well as biosynthesis of polyunsaturated fatty acids and polyamines. Some of them are conserved among anhydrobiotic plants and animals, suggesting that anhydrobiotic ability may depend on a set of common mechanisms. Understanding these mechanisms in detail might enable modification of non-anhydrobiotic cells, tissues, organs or even organisms so that they can be preserved in a dried state of suspended animation over long time periods. As of 2004, anhydrobiosis was being applied to vaccines: the process can produce a dry vaccine that reactivates once it is injected into the body. In theory, dry-vaccine technology could be used on any vaccine, including live vaccines such as the one for measles. It could also potentially be adapted to allow a vaccine's slow release, eliminating the need for boosters. This approach would eliminate the need to refrigerate vaccines, thus making dry vaccines more widely available throughout the developing world, where refrigeration, electricity, and proper storage are less accessible. Based on similar principles, lyopreservation has been developed as a technique for preservation of biological samples at ambient temperatures. Lyopreservation is a biomimetic strategy based on anhydrobiosis to preserve cells at ambient temperatures.
It has been explored as an alternative technique to cryopreservation. The technique has the advantage of being able to preserve biological samples at ambient temperatures, without the need for refrigeration or the use of cryogenic temperatures. Anoxybiosis In situations lacking oxygen (anoxia), many cryptobionts (such as M. tardigradum) take in water and become turgid and immobile, but can survive for prolonged periods of time. Some ectothermic vertebrates and some invertebrates, such as brine shrimps, copepods, nematodes, and sponge gemmules, are capable of surviving in a seemingly inactive state during anoxic conditions for months to decades. Studies of the metabolic activity of these idling organisms during anoxia have been mostly inconclusive. This is because it is difficult to measure very small degrees of metabolic activity reliably enough to prove a cryptobiotic state rather than ordinary metabolic rate depression (MRD). Many experts are skeptical of the biological feasibility of anoxybiosis, since the organism would have to prevent damage to its cellular structures without expending any free energy of its own, despite being surrounded by abundant water and thermal energy. However, there is evidence that the stress-induced protein p26 may act as a protein chaperone that requires no energy in cystic Artemia franciscana (sea monkey) embryos, and most likely an extremely specialized and slow guanine polynucleotide pathway continues to provide metabolic free energy to the A. franciscana embryos during anoxic conditions. It seems that A. franciscana approaches but does not reach true anoxybiosis. Chemobiosis Chemobiosis is the cryptobiotic response to high levels of environmental toxins. It has been observed in tardigrades. Cryobiosis Cryobiosis is a form of cryptobiosis that takes place in reaction to decreased temperature. Cryobiosis begins when the water surrounding the organism's cells has been frozen. Halting molecular mobility allows the organism to endure the freezing temperatures until more hospitable conditions return. Organisms capable of enduring these conditions typically feature molecules that facilitate freezing of water in preferential locations while also prohibiting the growth of large ice crystals that could otherwise damage cells. One such organism is the lobster. Osmobiosis Osmobiosis is the least studied of all types of cryptobiosis. Osmobiosis occurs in response to increased solute concentration in the solution the organism lives in. Little is known for certain, other than that osmobiosis appears to involve a cessation of metabolism. Examples The brine shrimp Artemia salina, which can be found in the Makgadikgadi Pans in Botswana, survives over the dry season when the water of the pans evaporates, leaving a virtually desiccated lake bed. The tardigrade, or water bear, can undergo all five types of cryptobiosis. While in a cryptobiotic state, its metabolism falls to less than 0.01% of normal, and its water content can drop to 1% of normal. It can withstand extreme temperature, radiation, and pressure while in a cryptobiotic state. Some nematodes and rotifers can also undergo cryptobiosis. See also References Further reading David A. Wharton, Life at the Limits: Organisms in Extreme Environments, Cambridge University Press, 2002, hardcover, Physiology Senescence Articles containing video clips
Cryptobiosis
[ "Chemistry", "Biology" ]
1,506
[ "Senescence", "Cellular processes", "Metabolism", "Physiology" ]
1,708,412
https://en.wikipedia.org/wiki/Restriction%20digest
A restriction digest is a procedure used in molecular biology to prepare DNA for analysis or other processing. It is sometimes termed DNA fragmentation, though this term is used for other procedures as well. In a restriction digest, DNA molecules are cleaved at specific restriction sites of 4–12 nucleotides in length by use of restriction enzymes which recognize these sequences. The resulting digested DNA is very often selectively amplified using polymerase chain reaction (PCR), making it more suitable for analytical techniques such as agarose gel electrophoresis and chromatography. It is used in genetic fingerprinting, plasmid subcloning, and RFLP analysis. Restriction site A given restriction enzyme cuts DNA segments within a specific nucleotide sequence, at what is called a restriction site. These recognition sequences are typically four, six, eight, ten, or twelve nucleotides long and generally palindromic (i.e. both strands read the same in the 5' – 3' direction). Because there are only so many ways to arrange the four nucleotides that compose DNA (adenine, thymine, guanine and cytosine) into a four- to twelve-nucleotide sequence, recognition sequences tend to occur by chance in any long sequence. Restriction enzymes specific to hundreds of distinct sequences have been identified and synthesized for sale to laboratories, and as a result, several potential "restriction sites" appear in almost any gene or locus of interest on any chromosome. Furthermore, almost all artificial plasmids include an (often entirely synthetic) polylinker (also called "multiple cloning site") that contains dozens of restriction enzyme recognition sequences within a very short segment of DNA. This allows the insertion of almost any specific fragment of DNA into plasmid vectors, which can be efficiently "cloned" by insertion into replicating bacterial cells. After restriction digest, DNA can then be analysed using agarose gel electrophoresis. In gel electrophoresis, a sample of DNA is first "loaded" onto a slab of agarose gel (literally pipetted into small wells at one end of the slab). The gel is then subjected to an electric field, which draws the negatively charged DNA across it. The molecules travel at different rates (and therefore end up at different distances) depending on their net charge (more highly charged particles travel further) and size (smaller particles travel further). Since none of the four nucleotide bases carries any charge, the net charge of a fragment is produced entirely by its sugar-phosphate backbone and is proportional to its length, so size is the main factor affecting the rate of migration through the gel. This is in contrast to proteins, in which the backbone is effectively uncharged and net charge is generated by different combinations and numbers of charged amino acids. Possible uses Restriction digest is most commonly used as part of the process of the molecular cloning of a DNA fragment into a vector (such as a cloning vector or an expression vector). The vector typically contains a multiple cloning site where many restriction sites may be found, and a foreign piece of DNA may be inserted into the vector by first cutting the restriction sites in the vector as well as the DNA fragment, followed by ligation of the DNA fragment into the vector.
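As a concrete illustration of how recognition sites are located and how a digest fragments a sequence, here is a minimal sketch. The enzyme EcoRI and its palindromic recognition sequence GAATTC (cut after the first G) are real and well known, but the helper names and the toy input sequence are made up for illustration; a real workflow would more likely use an established library such as Biopython.

```python
# Minimal sketch: locate recognition sites and compute digest fragment sizes.
# EcoRI (GAATTC) is a well-known palindromic 6-cutter; the function names
# and the toy sequence below are illustrative only.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    return seq.translate(COMPLEMENT)[::-1]

def is_palindromic(site):
    """A recognition site is palindromic if it equals its reverse complement."""
    return site == reverse_complement(site)

def find_sites(seq, site):
    """Return 0-based start positions of every occurrence of `site` in `seq`."""
    positions, start = [], seq.find(site)
    while start != -1:
        positions.append(start)
        start = seq.find(site, start + 1)
    return positions

def fragment_lengths(seq, site, cut_offset):
    """Fragment sizes after a complete digest of a linear DNA molecule.

    `cut_offset` is where the enzyme cuts within its site on the top strand
    (EcoRI cuts G^AATTC, i.e. offset 1).
    """
    cuts = [p + cut_offset for p in find_sites(seq, site)]
    edges = [0] + cuts + [len(seq)]
    return [b - a for a, b in zip(edges, edges[1:])]

dna = "TTGAATTCAAACCCGGGATCCTTTGAATTCGG"   # toy sequence, two EcoRI sites
print(is_palindromic("GAATTC"))            # True
print(find_sites(dna, "GAATTC"))           # [2, 24]
print(fragment_lengths(dna, "GAATTC", 1))  # [3, 22, 7]
```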
Restriction digests are also necessary for performing any of the following analytical techniques: RFLP – Restriction fragment length polymorphism AFLP – Amplified fragment length polymorphism STRP – Short tandem repeat polymorphism Various restriction enzymes There are numerous types of restriction enzymes, each of which cuts DNA differently. The most commonly used restriction enzymes are Type II restriction endonucleases (see the article on restriction enzymes for examples). Some cut a three-base-pair sequence, while others recognize sequences of four, six, or even eight base pairs. Each enzyme has distinct properties that determine how efficiently it can cut and under what conditions. Manufacturers of such enzymes typically provide a specific buffer solution that contains the unique mix of cations and other components that aid the enzyme in cutting as efficiently as possible. Different restriction enzymes may also have different optimal temperatures at which they function. Note that for efficient digestion of DNA, the restriction site should not be located at the very end of a DNA fragment. Restriction enzymes may require a minimum number of base pairs between the restriction site and the end of the DNA in order to work efficiently. This number varies between enzymes, but for most commonly used restriction enzymes around 6–10 base pairs are sufficient. See also Agarose gel electrophoresis DNA sequencing Genetic fingerprinting PCR Restriction fragment length polymorphism References External links New England Biolabs – Producer of restriction enzymes. This site contains highly detailed information on numerous enzymes, their optimal temperatures, and recognition sequences. REBASE Genetics techniques Molecular biology
Restriction digest
[ "Chemistry", "Engineering", "Biology" ]
982
[ "Genetics techniques", "Biochemistry", "Genetic engineering", "Molecular biology" ]
1,708,801
https://en.wikipedia.org/wiki/Transverse%20isotropy
A transversely isotropic material is one with physical properties that are symmetric about an axis that is normal to a plane of isotropy. This transverse plane has infinite planes of symmetry and thus, within this plane, the material properties are the same in all directions. Hence, such materials are also known as "polar anisotropic" materials. In geophysics, vertically transverse isotropy (VTI) is also known as radial anisotropy. This type of material exhibits hexagonal symmetry (though technically this ceases to be true for tensors of rank 6 and higher), so the number of independent constants in the (fourth-rank) elasticity tensor is reduced to 5 (from a total of 21 independent constants in the case of a fully anisotropic solid). The (second-rank) tensors of electrical resistivity, permeability, etc. have two independent constants. Example of transversely isotropic materials An example of a transversely isotropic material is the so-called on-axis unidirectional fiber composite lamina where the fibers are circular in cross section. In a unidirectional composite, the plane normal to the fiber direction can be considered as the isotropic plane, at long wavelengths (low frequencies) of excitation. In the figure to the right, the fibers would be aligned with the axis, which is normal to the plane of isotropy. In terms of effective properties, geological layers of rocks are often interpreted as being transversely isotropic. Calculating the effective elastic properties of such layers in petrology has been termed Backus upscaling, which is described below. Material symmetry matrix The material matrix has a symmetry with respect to a given orthogonal transformation () if it does not change when subjected to that transformation. For invariance of the material properties under such a transformation we require Hence the condition for material symmetry is (using the definition of an orthogonal transformation) Orthogonal transformations can be represented in Cartesian coordinates by a matrix given by Therefore, the symmetry condition can be written in matrix form as For a transversely isotropic material, the matrix has the form where the -axis is the axis of symmetry. The material matrix remains invariant under rotation by any angle about the -axis. In physics Linear material constitutive relations in physics can be expressed in the form where are two vectors representing physical quantities and is a second-order material tensor. In matrix form, Examples of physical problems that fit the above template are listed in the table below. Using in the matrix implies that . Using leads to and . Energy restrictions usually require and hence we must have . Therefore, the material properties of a transversely isotropic material are described by the matrix In linear elasticity Condition for material symmetry In linear elasticity, the stress and strain are related by Hooke's law, i.e., or, using Voigt notation, The condition for material symmetry in linear elastic materials is, where Elasticity tensor Using the specific values of in matrix , it can be shown that the fourth-rank elasticity stiffness tensor may be written in 2-index Voigt notation as the matrix The elasticity stiffness matrix has 5 independent constants, which are related to well-known engineering elastic moduli in the following way. These engineering moduli are experimentally determined. The compliance matrix (inverse of the elastic stiffness matrix) is where .
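The matrices referred to in the passage above did not survive extraction. For reference, the standard textbook form of the transversely isotropic stiffness matrix in Voigt notation, with the 3-axis taken as the axis of symmetry, is sketched below in LaTeX; this is the well-known general form with five independent constants, not a reconstruction of the article's exact notation or numbering.

```latex
% Standard transversely isotropic stiffness matrix (Voigt notation),
% with x_3 as the axis of symmetry. Five independent constants:
% C_{11}, C_{12}, C_{13}, C_{33}, C_{44}; C_{66} = (C_{11}-C_{12})/2.
\[
\mathbf{C} =
\begin{bmatrix}
C_{11} & C_{12} & C_{13} & 0 & 0 & 0 \\
C_{12} & C_{11} & C_{13} & 0 & 0 & 0 \\
C_{13} & C_{13} & C_{33} & 0 & 0 & 0 \\
0 & 0 & 0 & C_{44} & 0 & 0 \\
0 & 0 & 0 & 0 & C_{44} & 0 \\
0 & 0 & 0 & 0 & 0 & \tfrac{1}{2}\left(C_{11}-C_{12}\right)
\end{bmatrix}
\]
```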
In engineering notation, Comparing these two forms of the compliance matrix shows us that the longitudinal Young's modulus is given by Similarly, the transverse Young's modulus is The in-plane shear modulus is and the Poisson's ratio for loading along the polar axis is . Here, L represents the longitudinal (polar) direction and T represents the transverse direction. In geophysics In geophysics, a common assumption is that the rock formations of the crust are locally polar anisotropic (transversely isotropic); this is the simplest case of geophysical interest. Backus upscaling is often used to determine the effective transversely isotropic elastic constants of layered media for long wavelength seismic waves. Assumptions that are made in the Backus approximation are: All materials are linearly elastic No sources of intrinsic energy dissipation (e.g. friction) Valid in the infinite wavelength limit, hence good results only if layer thickness is much smaller than wavelength The statistics of distribution of layer elastic properties are stationary, i.e., there is no correlated trend in these properties. For shorter wavelengths, the behavior of seismic waves is described using the superposition of plane waves. Transversely isotropic media support three types of elastic plane waves: a quasi-P wave (polarization direction almost equal to propagation direction) a quasi-S wave an S-wave (polarized orthogonal to the quasi-S wave, to the symmetry axis, and to the direction of propagation). Solutions to wave propagation problems in such media may be constructed from these plane waves, using Fourier synthesis. Backus upscaling (long wavelength approximation) A layered model of homogeneous and isotropic material can be up-scaled to a transversely isotropic medium, as proposed by Backus. Backus presented an equivalent medium theory: a heterogeneous medium can be replaced by a homogeneous one that predicts wave propagation in the actual medium. Backus showed that layering on a scale much finer than the wavelength has an impact and that a number of isotropic layers can be replaced by a homogeneous transversely isotropic medium that behaves exactly in the same manner as the actual medium under static load in the infinite wavelength limit. If each layer is described by 5 transversely isotropic parameters , specifying the matrix The elastic moduli for the effective medium will be where denotes the volume-weighted average over all layers. This includes isotropic layers, as the layer is isotropic if , and . Short and medium wavelength approximation Solutions to wave propagation problems in linear elastic transversely isotropic media can be constructed by superposing solutions for the quasi-P wave, the quasi S-wave, and an S-wave polarized orthogonal to the quasi S-wave. However, the equations for the angular variation of velocity are algebraically complex, and the plane-wave velocities are functions of the propagation angle. The direction-dependent wave speeds for elastic waves through the material can be found by using the Christoffel equation and are given by where is the angle between the axis of symmetry and the wave propagation direction, is mass density and the are elements of the elastic stiffness matrix. The Thomsen parameters are used to simplify these expressions and make them easier to understand. Thomsen parameters Thomsen parameters are dimensionless combinations of elastic moduli that characterize transversely isotropic materials, which are encountered, for example, in geophysics.
In terms of the components of the elastic stiffness matrix, these parameters are defined as: where index 3 indicates the axis of symmetry () . These parameters, in conjunction with the associated P wave and S wave velocities, can be used to characterize wave propagation through weakly anisotropic, layered media. Empirically, the Thomsen parameters for most layered rock formations are much lower than 1. The name refers to Leon Thomsen, professor of geophysics at the University of Houston, who proposed these parameters in his 1986 paper "Weak Elastic Anisotropy". Simplified expressions for wave velocities In geophysics the anisotropy in elastic properties is usually weak, in which case . When the exact expressions for the wave velocities above are linearized in these small quantities, they simplify to where are the P and S wave velocities in the direction of the axis of symmetry () (in geophysics, this is usually, but not always, the vertical direction). Note that may be further linearized, but this does not lead to further simplification. The approximate expressions for the wave velocities are simple enough to be physically interpreted, and sufficiently accurate for most geophysical applications. These expressions are also useful in some contexts where the anisotropy is not weak. See also Hooke's law Linear elasticity Orthotropic material References Crystallography Orientation (geometry) Elasticity (physics)
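The Thomsen-parameter definitions and the linearized phase-velocity expressions described above can be sketched numerically. The formulas used below (epsilon, gamma, and delta in terms of the stiffness constants, and Thomsen's weak-anisotropy approximations for the quasi-P, quasi-SV, and SH velocities) are the standard published relations from Thomsen (1986); the stiffness and density values are made-up illustrative numbers, not data from this article.

```python
import math

def thomsen_parameters(c11, c33, c44, c66, c13, rho):
    """Thomsen (1986) parameters and symmetry-axis velocities for a VTI medium.

    Standard definitions (x3 is the symmetry axis):
      epsilon = (C11 - C33) / (2 C33)
      gamma   = (C66 - C44) / (2 C44)
      delta   = ((C13 + C44)^2 - (C33 - C44)^2) / (2 C33 (C33 - C44))
    With stiffnesses in GPa and density in g/cm^3, velocities come out in km/s.
    """
    vp0 = math.sqrt(c33 / rho)
    vs0 = math.sqrt(c44 / rho)
    eps = (c11 - c33) / (2.0 * c33)
    gam = (c66 - c44) / (2.0 * c44)
    dlt = ((c13 + c44) ** 2 - (c33 - c44) ** 2) / (2.0 * c33 * (c33 - c44))
    return vp0, vs0, eps, gam, dlt

def weak_anisotropy_velocities(vp0, vs0, eps, gam, dlt, theta_deg):
    """Thomsen's linearized phase velocities at angle theta from the symmetry axis."""
    t = math.radians(theta_deg)
    s2, c2 = math.sin(t) ** 2, math.cos(t) ** 2
    vp = vp0 * (1.0 + dlt * s2 * c2 + eps * s2 ** 2)
    vsv = vs0 * (1.0 + (vp0 / vs0) ** 2 * (eps - dlt) * s2 * c2)
    vsh = vs0 * (1.0 + gam * s2)
    return vp, vsv, vsh

# Illustrative stiffness constants (GPa) and density (g/cm^3); the numbers
# are of a plausible order for a shale-like rock but are invented here.
vp0, vs0, eps, gam, dlt = thomsen_parameters(
    c11=34.3, c33=22.7, c44=5.4, c66=10.5, c13=10.7, rho=2.34
)
for angle in (0, 30, 60, 90):
    print(angle, weak_anisotropy_velocities(vp0, vs0, eps, gam, dlt, angle))
```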
Transverse isotropy
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,708
[ "Physical phenomena", "Continuum mechanics", "Elasticity (physics)", "Deformation (mechanics)", "Classical mechanics", "Materials science", "Crystallography", "Materials", "Space", "Condensed matter physics", "Topology", "Geometry", "Spacetime", "Orientation (geometry)", "Physical proper...
1,710,040
https://en.wikipedia.org/wiki/Indentation%20hardness
Indentation hardness tests are used in mechanical engineering to determine the hardness of a material, i.e., its resistance to deformation. Several such tests exist, wherein the examined material is indented until an impression is formed; these tests can be performed on a macroscopic or microscopic scale. When testing metals, indentation hardness correlates roughly linearly with tensile strength, but it is an imperfect correlation often limited to small ranges of strength and hardness for each indentation geometry. This relation permits economically important nondestructive testing of bulk metal deliveries with lightweight, even portable equipment, such as hand-held Rockwell hardness testers. Material hardness Different techniques are used to quantify material characteristics at smaller scales. Measuring mechanical properties of materials such as thin films cannot be done using conventional uniaxial tensile testing. As a result, techniques testing material "hardness" by indenting a material with a very small impression have been developed to attempt to estimate these properties. Hardness measurements quantify the resistance of a material to plastic deformation. Indentation hardness tests compose the majority of processes used to determine material hardness, and can be divided into three classes: macro, micro and nanoindentation tests. Microindentation tests typically have forces less than . Hardness, however, cannot be considered to be a fundamental material property. Classical hardness testing usually creates a number which can be used to provide a relative idea of material properties. As such, hardness can only offer a comparative idea of the material's resistance to plastic deformation, since different hardness techniques have different scales. The equation-based definition of hardness is the pressure applied over the contact area between the indenter and the material being tested. As a result, hardness values are typically reported in units of pressure, although this is only a "true" pressure if the indenter and surface interface is perfectly flat. Instrumented indentation Instrumented indentation presses a sharp tip into the surface of a material to obtain a force-displacement curve. The results provide a wealth of information about the mechanical behavior of the material, including hardness, elastic moduli, and plastic deformation. One key feature of instrumented indentation testing is that the tip is controlled by force or displacement, which can be measured simultaneously throughout the indentation cycle. Current technology can realize accurate force control over a wide range. Therefore, hardness can be characterized at many different length scales, from hard materials like ceramics to soft materials like polymers. The earliest work was done by Bulychev, Alekhin, and Shorshorov in the 1970s, who showed that the Young's modulus of a material can be determined from the slope of a force vs. displacement indentation curve as: : material stiffness, which is the slope of the curve : the tip-sample contact area : reduced modulus, defined as: where and are the Young's modulus and Poisson's ratio of the sample, and and are those of the indenter. Since the indenter is typically much stiffer than the sample, the second term can often be ignored. The most critical information, hardness, can be calculated by: Commonly used indentation techniques, as well as the detailed calculation for each different method, are discussed as follows.
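The standard relations behind the instrumented-indentation quantities just described (the Bulychev–Alekhin–Shorshorov stiffness relation, the reduced modulus, and hardness as load over contact area) can be sketched as follows. The equations are the widely used textbook forms (geometry correction factors of order one are omitted); the numerical inputs are made up for illustration and the function names are not from any particular library.

```python
import math

def reduced_modulus(E_sample, nu_sample, E_indenter=1141.0, nu_indenter=0.07):
    """Reduced (contact) modulus E_r in GPa.

    1/E_r = (1 - nu^2)/E + (1 - nu_i^2)/E_i
    Defaults are typical textbook values for a diamond indenter.
    """
    inv = (1 - nu_sample**2) / E_sample + (1 - nu_indenter**2) / E_indenter
    return 1.0 / inv

def modulus_from_stiffness(S, A):
    """Invert the stiffness relation S = 2 * E_r * sqrt(A / pi).

    S in mN/nm (unloading slope), A in nm^2 (contact area); returns E_r in GPa.
    """
    E_r_pa = (math.sqrt(math.pi) / 2.0) * (S * 1e6) / math.sqrt(A * 1e-18)
    return E_r_pa / 1e9

def hardness(P_max, A):
    """Hardness = maximum load over projected contact area.

    P_max in mN, A in nm^2; returns hardness in GPa.
    """
    return (P_max * 1e-3) / (A * 1e-18) / 1e9

# Made-up numbers of a plausible order for a nanoindentation test:
S = 0.055      # unloading stiffness, mN/nm
A = 5.0e5      # projected contact area, nm^2
P_max = 5.0    # maximum load, mN

print(modulus_from_stiffness(S, A))              # reduced modulus, GPa
print(hardness(P_max, A))                        # hardness, GPa
print(reduced_modulus(E_sample=70.0, nu_sample=0.17))  # sample + diamond tip, GPa
```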
Macroindentation tests The term "macroindentation" is applied to tests with a larger test load, such as 1 kgf or more. There are various macroindentation tests, including: Vickers hardness test (HV), which has one of the widest scales. Widely used to test hardness of all kinds of metal materials (steel, nonferrous metals, tinsel, cemented carbide, sheet metal, etc.); surface layer / coating (Carburization, nitriding, decarburization layer, surface hardening layer, galvanized coating, etc.). Brinell hardness test (HB) BHN and HBW are widely used Knoop hardness test (HK), for measurement over small areas, widely used to test glass or ceramic material. Janka hardness test, for wood Meyer hardness test Rockwell hardness test (HR), principally used in the USA. HRA, HRB and HRC scales are most widely used. Shore hardness test, for polymers, widely used in the rubber industry. Barcol hardness test, for composite materials. There is, in general, no simple relationship between the results of different hardness tests. Though there are practical conversion tables for hard steels, for example, some materials show qualitatively different behaviors under the various measurement methods. The Vickers and Brinell hardness scales correlate well over a wide range, however, with Brinell only producing overestimated values at high loads. Indentation procedures can, however, be used to extract genuine stress-strain relationships. Certain criteria need to be met if reliable results are to be obtained. These include the need to deform a relatively large volume, and hence to use large loads. The methodologies involved are often grouped under the term Indentation plastometry, which is described in a separate article. Microindentation tests The term "microhardness" has been widely employed in the literature to describe the hardness testing of materials with low applied loads. A more precise term is "microindentation hardness testing." In microindentation hardness testing, a diamond indenter of specific geometry is impressed into the surface of the test specimen using a known applied force (commonly called a "load" or "test load") of 1 to 1000 gf. Microindentation tests typically have forces of 2 N (roughly 200 gf) and produce indentations of about 50 μm. Due to their specificity, microhardness testing can be used to observe changes in hardness on the microscopic scale. Unfortunately, it is difficult to standardize microhardness measurements; it has been found that the microhardness of almost any material is higher than its macrohardness. Additionally, microhardness values vary with load and work-hardening effects of materials. The two most commonly used microhardness tests are tests that also can be applied with heavier loads as macroindentation tests: Vickers hardness test (HV) Knoop hardness test (HK) In microindentation testing, the hardness number is based on measurements made of the indent formed in the surface of the test specimen. The hardness number is based on the applied force divided by the surface area of the indent itself, giving hardness units in kgf/mm2. Microindentation hardness testing can be done using Vickers as well as Knoop indenters. For the Vickers test, both the diagonals are measured and the average value is used to compute the Vickers pyramid number. In the Knoop test, only the longer diagonal is measured, and the Knoop hardness is calculated based on the projected area of the indent divided by the applied force, also giving test units in kgf/mm2. 
The Vickers microindentation test is carried out in a similar manner to the Vickers macroindentation tests, using the same pyramid. The Knoop test uses an elongated pyramid to indent material samples. This elongated pyramid creates a shallow impression, which is beneficial for measuring the hardness of brittle materials or thin components. Both the Knoop and Vickers indenters require polishing of the surface to achieve accurate results. Scratch tests at low loads, such as the Bierbaum microcharacter test, performed with either 3 gf or 9 gf loads, preceded the development of microhardness testers using traditional indenters. In 1925, Smith and Sandland of the UK developed an indentation test that employed a square-based pyramidal indenter made from diamond. They chose the pyramidal shape with an angle of 136° between opposite faces in order to obtain hardness numbers that would be as close as possible to Brinell hardness numbers for the specimen. The Vickers test has the great advantage of using one hardness scale to test all materials. The first reference to the Vickers indenter with low loads was made in the annual report of the National Physical Laboratory in 1932. Lips and Sack described the first Vickers tester using low loads in 1936. There is some disagreement in the literature regarding the load range applicable to microhardness testing. ASTM Specification E384, for example, states that the load range for microhardness testing is 1 to 1000 gf. For loads of 1 kgf and below, the Vickers hardness (HV) is calculated with an equation, wherein load (L) is in grams force and the mean of two diagonals (d) is in millimeters: For any given load, the hardness increases rapidly at low diagonal lengths, with the effect becoming more pronounced as the load decreases. At low loads, and in the vertical portion of the curves, small measurement errors will therefore produce large hardness deviations; thus one should always use the highest possible load in any test. Nanoindentation tests Sources of error The main sources of error with indentation tests are poor technique, poor calibration of the equipment, and the strain hardening effect of the process. However, it has been experimentally determined through "strainless hardness tests" that the effect is minimal with smaller indentations. Surface finish of the part and the indenter do not have an effect on the hardness measurement, as long as the indentation is large compared to the surface roughness. This proves to be useful when measuring the hardness of practical surfaces. It is also helpful when leaving a shallow indentation, because a finely etched indenter leaves an indentation that is much easier to read than one left by a smooth indenter. The indentation that is left after the indenter and load are removed is known to "recover", or spring back slightly. This effect is properly known as shallowing. For spherical indenters the indentation is known to stay symmetrical and spherical, but with a larger radius. For very hard materials the radius can be three times as large as the indenter's radius. This effect is attributed to the release of elastic stresses. Because of this effect, the diameter and depth of the indentation contain errors. The error from the change in diameter is known to be only a few percent, with the error for the depth being greater. Another effect the load has on the indentation is the piling-up or sinking-in of the surrounding material.
If the metal is work-hardened it has a tendency to pile up and form a "crater". If the metal is annealed it will sink in around the indentation. Both of these effects add to the error of the hardness measurement. Relation to yield stress When hardness, , is defined as the mean contact pressure (load divided by projected contact area), the yield stress, , of many materials is proportional to the hardness through a constant known as the constraint factor, C. The hardness differs from the uni-axial compressive yield stress of the material because different compressive failure modes apply. A uni-axial test only constrains the material in one dimension, which allows the material to fail as a result of shear. Indentation hardness, on the other hand, is constrained in three dimensions, which prevents shear from dominating the failure. See also Leeb rebound hardness test Meyer's law References External links "Pinball Tester Reveals Hardness." Popular Mechanics, November 1945, p. 75. Bibliography Hardness tests Physical quantities
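The Vickers number calculation and the constraint-factor relation discussed above can be combined in a short worked sketch. The Vickers constant 1.8544 (i.e., 2 sin 68°) for load in kgf and diagonal in mm, and the rule-of-thumb constraint factor C of about 3 for metals, are standard textbook values; the specific test numbers below are made up for illustration.

```python
import math

def vickers_hardness(load_kgf, mean_diagonal_mm):
    """Vickers hardness number (kgf/mm^2) from load and mean indent diagonal.

    HV = 2 * sin(136 deg / 2) * F / d^2, i.e. roughly 1.8544 * F / d^2,
    with F in kgf and d in mm (the standard Vickers definition).
    """
    return 2.0 * math.sin(math.radians(136.0 / 2.0)) * load_kgf / mean_diagonal_mm**2

def yield_stress_estimate(hardness_kgf_mm2, constraint_factor=3.0):
    """Rough yield-stress estimate in MPa via H ~ C * sigma_y.

    C of about 3 is the usual rule of thumb for metals; 1 kgf/mm^2 is
    about 9.807 MPa.
    """
    hardness_mpa = hardness_kgf_mm2 * 9.80665
    return hardness_mpa / constraint_factor

# Made-up microindentation result: 0.5 kgf load, 0.070 mm mean diagonal.
hv = vickers_hardness(0.5, 0.070)
print(round(hv))                          # Vickers number, kgf/mm^2
print(round(yield_stress_estimate(hv)))   # rough yield-stress estimate, MPa
```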
Indentation hardness
[ "Physics", "Materials_science", "Mathematics" ]
2,321
[ "Physical phenomena", "Physical quantities", "Quantity", "Materials testing", "Hardness tests", "Physical properties" ]