Dataset columns: id (int64), url (string), text (string), source (string), categories (list), token_count (int64), subcategories (list).
621,215
https://en.wikipedia.org/wiki/Cointerpretability
In mathematical logic, cointerpretability is a binary relation on formal theories: a formal theory T is cointerpretable in another such theory S when the language of S can be translated into the language of T in such a way that S proves every formula whose translation is a theorem of T. The "translation" here is required to preserve the logical structure of formulas. This concept, in a sense dual to interpretability, was introduced by Giorgi Japaridze (1993), who also proved that, for theories of Peano arithmetic and any stronger theories with effective axiomatizations, cointerpretability is equivalent to Σ₁-conservativity. See also Cotolerance Interpretability logic Tolerance (in logic) Mathematical relations Mathematical logic
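In symbols, the definition can be sketched as follows (the relation symbol and the name τ for the translation are ad hoc, introduced here only for illustration):

% T is cointerpretable in S via a structure-preserving translation tau
% from the language of S into the language of T:
\[
  T \mathbin{\triangleright_{\mathrm{coint}}} S
  \iff
  \exists\,\tau\ \forall\varphi\
  \bigl( T \vdash \tau(\varphi) \implies S \vdash \varphi \bigr).
\]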
Cointerpretability
[ "Mathematics" ]
151
[ "Mathematical analysis", "Predicate logic", "Mathematical logic", "Basic concepts in set theory", "Mathematical relations" ]
621,887
https://en.wikipedia.org/wiki/Tectonophysics
Tectonophysics, a branch of geophysics, is the study of the physical processes that underlie tectonic deformation. This includes measurement or calculation of the stress and strain fields on Earth's surface and the rheologies of the crust, mantle, lithosphere and asthenosphere. Overview Tectonophysics is concerned with movements in the Earth's crust and deformations over scales from meters to thousands of kilometers. These govern processes on local and regional scales and at structural boundaries, such as the destruction of continental crust (e.g. gravitational instability) and oceanic crust (e.g. subduction), convection in the Earth's mantle (availability of melts), the course of continental drift, and second-order effects of plate tectonics such as thermal contraction of the lithosphere. This involves the measurement of a hierarchy of strains in rocks and plates as well as deformation rates; the study of laboratory analogues of natural systems; and the construction of models for the history of deformation. History Tectonophysics was adopted as the name of a new section of AGU on April 19, 1940, at AGU's 21st Annual Meeting. According to the AGU website (https://tectonophysics.agu.org/agu-100/section-history/), using the words of Norman Bowen, the main goal of the tectonophysics section was to "designate this new borderline field between geophysics, physics and geology … for the solution of problems of tectonics." From 1940 onward, members of AGU presented papers at its meetings whose contents defined the scope of the field. The field received a systematic formulation in 1954, when Mikhail Vladimirovich Gzovskii published three papers in the journal Izvestiya Akad. Nauk SSSR, Seriya Geofizicheskaya: "On the tasks and content of tectonophysics", "Tectonic stress fields", and "Modeling of tectonic stress fields". He defined the main goals of tectonophysical research to be the study of the mechanisms of folding and faulting as well as of large structural units of the Earth's crust. He later created the Laboratory of Tectonophysics at the Institute of Physics of the Earth, Academy of Sciences of the USSR, Moscow. Applications In coal mines, large horizontal stresses in the rock (around two to three times greater than the vertical pressure from the overlying rock) are caused by tectonic stress and can be predicted with plate tectonic stress maps. For example, because West Virginia experiences tectonic stress from east to west, as of around the 1990s significantly more roof collapses occurred in mines running north to south than in mines running east to west. See also Geodynamics Palaeogeography Rock mechanics Seafloor spreading Structural geology Tectonophysics (journal) Notes References External links American Geophysical Union Tectonophysics Section Geophysics Tectonics
Tectonophysics
[ "Physics" ]
669
[ "Applied and interdisciplinary physics", "Geophysics" ]
622,053
https://en.wikipedia.org/wiki/S-matrix
In physics, the S-matrix or scattering matrix is a matrix that relates the initial state and the final state of a physical system undergoing a scattering process. It is used in quantum mechanics, scattering theory and quantum field theory (QFT). More formally, in the context of QFT, the S-matrix is defined as the unitary matrix connecting sets of asymptotically free particle states (the in-states and the out-states) in the Hilbert space of physical states: a multi-particle state is said to be free (or non-interacting) if it transforms under Lorentz transformations as a tensor product, or direct product in physics parlance, of one-particle states, as prescribed below. Asymptotically free then means that the state has this appearance in either the distant past or the distant future. While the S-matrix may be defined for any background (spacetime) that is asymptotically solvable and has no event horizons, it has a simple form in the case of the Minkowski space. In this special case, the Hilbert space is a space of irreducible unitary representations of the inhomogeneous Lorentz group (the Poincaré group); the S-matrix is the evolution operator between t = −∞ (the distant past) and t = +∞ (the distant future). It is defined only in the limit of zero energy density (or infinite particle separation distance). It can be shown that if a quantum field theory in Minkowski space has a mass gap, the states in the asymptotic past and in the asymptotic future are both described by Fock spaces. History The initial elements of S-matrix theory are found in Paul Dirac's 1927 paper "Über die Quantenmechanik der Stoßvorgänge" ("On the quantum mechanics of collision processes"). The S-matrix was first properly introduced by John Archibald Wheeler in the 1937 paper "On the Mathematical Description of Light Nuclei by the Method of Resonating Group Structure". In this paper Wheeler introduced a scattering matrix – a unitary matrix of coefficients connecting "the asymptotic behaviour of an arbitrary particular solution [of the integral equations] with that of solutions of a standard form", but did not develop it fully. In the 1940s, Werner Heisenberg independently developed and substantiated the idea of the S-matrix. Because of the problematic divergences present in quantum field theory at that time, Heisenberg was motivated to isolate the essential features of the theory that would not be affected by future changes as the theory developed. In doing so, he was led to introduce a unitary "characteristic" S-matrix. Today, however, exact S-matrix results are important for conformal field theory, integrable systems, and several further areas of quantum field theory and string theory. S-matrices are not substitutes for a field-theoretic treatment, but rather complement the end results of such a treatment. Motivation In high-energy particle physics one is interested in computing the probability for different outcomes in scattering experiments. These experiments can be broken down into three stages: Making a collection of incoming particles collide (usually two kinds of particles with high energies). Allowing the incoming particles to interact. These interactions may change the types of particles present (e.g. if an electron and a positron annihilate they may produce two photons). Measuring the resulting outgoing particles. The process by which the incoming particles are transformed (through their interaction) into the outgoing particles is called scattering.
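In the notation made precise later in the article, the matrix elements are overlaps of in- and out-states; a standard-form sketch (conventional symbols, since the article's displayed equations were lost in extraction):

% S-matrix element between an in-state and an out-state, and unitarity:
\[
  S_{fi} \;=\; \langle f,\mathrm{out} \,|\, i,\mathrm{in} \rangle
        \;=\; \langle f \,|\, \hat{S} \,|\, i \rangle ,
  \qquad
  \hat{S}^{\dagger}\hat{S} = \mathbb{1}.
\]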
For particle physics, a physical theory of these processes must be able to compute the probability for different outgoing particles when different incoming particles collide with different energies. The S-matrix in quantum field theory achieves exactly this. It is assumed that the small-energy-density approximation is valid in these cases. Use The S-matrix is closely related to the transition probability amplitude in quantum mechanics and to cross sections of various interactions; the elements (individual numerical entries) in the S-matrix are known as scattering amplitudes. Poles of the S-matrix in the complex-energy plane are identified with bound states, virtual states or resonances. Branch cuts of the S-matrix in the complex-energy plane are associated with the opening of a scattering channel. In the Hamiltonian approach to quantum field theory, the S-matrix may be calculated as a time-ordered exponential of the integrated Hamiltonian in the interaction picture; it may also be expressed using Feynman's path integrals. In both cases, the perturbative calculation of the S-matrix leads to Feynman diagrams. In scattering theory, the S-matrix is an operator mapping free particle in-states to free particle out-states (scattering channels) in the Heisenberg picture. This is very useful because often we cannot describe the interaction (at least, not the most interesting ones) exactly. In one-dimensional quantum mechanics A simple prototype in which the S-matrix is 2-dimensional is considered first, for the purposes of illustration. In it, particles with sharp energy E scatter from a localized potential according to the rules of 1-dimensional quantum mechanics. Already this simple model displays some features of more general cases, but is easier to handle. Each energy E yields a 2 × 2 matrix S = S(E) that depends on E. Thus, the total S-matrix could, figuratively speaking, be visualized, in a suitable basis, as a "continuous matrix" with every element zero except for 2 × 2 blocks along the diagonal for a given E. Definition Consider a localized one-dimensional potential barrier V(x), subjected to a beam of quantum particles with energy E. These particles are incident on the potential barrier from left to right. The solutions of Schrödinger's equation outside the potential barrier are plane waves given by ψ_L(x) = A e^{ikx} + B e^{−ikx} for the region to the left of the potential barrier, and ψ_R(x) = C e^{ikx} + D e^{−ikx} for the region to the right of the potential barrier, where k = √(2mE)/ħ is the wave vector. The time dependence is not needed in our overview and is hence omitted. The term with coefficient A represents the incoming wave, whereas the term with coefficient C represents the outgoing wave. B stands for the reflected wave. Since we set the incoming wave moving in the positive direction (coming from the left), D is zero and can be omitted. The "scattering amplitude", i.e., the transition overlap of the outgoing waves with the incoming waves, is a linear relation defining the S-matrix: (B, C)ᵀ = S (A, D)ᵀ. The above relation can be written as Ψ_out = S Ψ_in, where Ψ_out = (B, C)ᵀ, Ψ_in = (A, D)ᵀ, and S is the 2 × 2 matrix with elements S₁₁, S₁₂, S₂₁, S₂₂. The elements of S completely characterize the scattering properties of the potential barrier V(x). Unitary property The unitary property of the S-matrix is directly related to the conservation of the probability current in quantum mechanics. The probability current density J of the wave function ψ(x) is defined as J = (ħ/2mi)(ψ* ∂ψ/∂x − ψ ∂ψ*/∂x). The probability current density of ψ_L(x) to the left of the barrier is J_L = (ħk/m)(|A|² − |B|²), while the probability current density of ψ_R(x) to the right of the barrier is J_R = (ħk/m)(|C|² − |D|²). For conservation of the probability current, J_L = J_R, hence |A|² + |D|² = |B|² + |C|². This implies the S-matrix is a unitary matrix.
Time-reversal symmetry If the potential V(x) is real, then the system possesses time-reversal symmetry. Under this condition, if ψ(x) is a solution of Schrödinger's equation, then ψ*(x) is also a solution. The time-reversed solution is given by ψ*_L(x) = A* e^{−ikx} + B* e^{ikx} for the region to the left of the potential barrier, and ψ*_R(x) = C* e^{−ikx} + D* e^{ikx} for the region to the right of the potential barrier, where the terms with coefficients B*, C* represent the incoming wave, and the terms with coefficients A*, D* represent the outgoing wave. They are again related by the S-matrix, that is, (A*, D*)ᵀ = S (B*, C*)ᵀ. Now, the two relations together yield the condition S*S = I. This condition, in conjunction with the unitarity relation S†S = I, implies that the S-matrix is symmetric, as a result of time-reversal symmetry: S = Sᵀ. By combining the symmetry and the unitarity, the S-matrix can be expressed in a form determined by three real parameters. Transfer matrix The transfer matrix M relates the plane waves C e^{ikx} and D e^{−ikx} on the right side of the scattering potential to the plane waves A e^{ikx} and B e^{−ikx} on the left side, (C, D)ᵀ = M (A, B)ᵀ, and its components can be derived from the components of the S-matrix, whereby time-reversal symmetry is assumed. In the case of time-reversal symmetry, the transfer matrix can likewise be expressed by three real parameters. Finite square well The one-dimensional, non-relativistic problem with time-reversal symmetry of a particle with mass m that approaches a (static) finite square well has the potential function V(x) = −V₀ for |x| ≤ a and V(x) = 0 otherwise, with well depth V₀ > 0. The scattering can be solved by decomposing the wave packet of the free particle into plane waves with wave numbers k > 0 for a plane wave coming (from far away) from the left side or likewise (from far away) from the right side. The S-matrix for the plane wave with wave number k is obtained by matching the wave function and its derivative at the well edges; S₁₂ = S₂₁ in this case by time-reversal symmetry. Here k′ = √(2m(E + V₀))/ħ is the (increased) wave number of the plane wave inside the square well, as the energy eigenvalue associated with the plane wave has to stay constant: E = ħ²k²/2m = ħ²k′²/2m − V₀. The transmission is T = |S₂₁|² = [1 + (V₀²/(4E(E + V₀))) sin²(2k′a)]⁻¹. In the case of sin(2k′a) = 0, the reflection vanishes and T = 1; i.e. a plane wave with wave number k passes the well without reflection if 2k′a = nπ for a natural number n. Finite square barrier The square barrier is similar to the square well with the difference that V(x) = +V₀ > 0 for |x| ≤ a. There are three different cases depending on how the energy eigenvalue of the plane waves far away from the barrier compares with the barrier height: E > V₀, E = V₀, and 0 < E < V₀. Transmission coefficient and reflection coefficient The transmission coefficient from the left of the potential barrier is, when D = 0, T_L = |C|²/|A|² = |S₂₁|². The reflection coefficient from the left of the potential barrier is, when D = 0, R_L = |B|²/|A|² = |S₁₁|². Similarly, the transmission coefficient from the right of the potential barrier is, when A = 0, T_R = |B|²/|D|² = |S₁₂|². The reflection coefficient from the right of the potential barrier is, when A = 0, R_R = |C|²/|D|² = |S₂₂|². The relations between the transmission and reflection coefficients are T_L + R_L = 1 and T_R + R_R = 1. This identity is a consequence of the unitarity property of the S-matrix. With time-reversal symmetry, the S-matrix is symmetric and hence T_L = T_R and R_L = R_R. Optical theorem in one dimension In the case of free particles V(x) = 0, the S-matrix is the pure transmission matrix, S₁₂ = S₂₁ = 1 and S₁₁ = S₂₂ = 0. Whenever V(x) is different from zero, however, there is a departure of the S-matrix from the above form, to S = ((2ir, 1 + 2it), (1 + 2it, 2ir)). This departure is parameterized by two complex functions of energy, r(k) and t(k). From unitarity there also follows a relationship between these two functions, Im t = |r|² + |t|². The analogue of this identity in three dimensions is known as the optical theorem. Definition in quantum field theory Interaction picture A straightforward way to define the S-matrix begins with considering the interaction picture.
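A short numerical cross-check of the statements above: a sketch with hypothetical parameters, units ħ = m = 1, and the well placed on (0, L) for convenience. Matching ψ and ψ′ at the well edges for incidence from either side yields the 2 × 2 S-matrix, which indeed comes out unitary and symmetric:

import numpy as np

# Plane wave of energy E scattering off a square well V = -V0 on (0, L).
V0, L, E = 2.0, 1.5, 1.0                        # illustrative values only
k, kp = np.sqrt(2 * E), np.sqrt(2 * (E + V0))   # wave numbers outside/inside
eL, epL = np.exp(1j * k * L), np.exp(1j * kp * L)

# Rows: psi continuous at 0, psi' at 0, psi at L, psi' at L.
# Unknowns: (outgoing amp. on incident side, A, B, outgoing amp. far side).
M = np.array([[1,     -1,          -1,          0        ],
              [-1j*k, -1j*kp,       1j*kp,      0        ],
              [0,      epL,         1/epL,     -eL       ],
              [0,      1j*kp*epL,  -1j*kp/epL, -1j*k*eL  ]], dtype=complex)

b_left  = np.array([-1, -1j*k, 0, 0], dtype=complex)      # incident e^{+ikx}
b_right = np.array([0, 0, 1/eL, -1j*k/eL], dtype=complex) # incident e^{-ikx}

r, _, _, t   = np.linalg.solve(M, b_left)
tp, _, _, rp = np.linalg.solve(M, b_right)

S = np.array([[r, tp],
              [t, rp]])
print(np.allclose(S.conj().T @ S, np.eye(2)))  # unitarity  -> True
print(np.isclose(S[0, 1], S[1, 0]))            # symmetry   -> True
print(abs(t)**2 + abs(r)**2)                   # T + R = 1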
Let the Hamiltonian be split into the free part H₀ and the interaction V, H = H₀ + V. In this picture, the operators behave as free field operators and the state vectors have dynamics according to the interaction V. Let |Ψ(t)⟩ denote a state that has evolved from a free initial state |Φᵢ⟩. The S-matrix element is then defined as the projection of this state on the final state ⟨Φ_f|. Thus S_fi ≡ lim_{t→+∞} ⟨Φ_f|Ψ(t)⟩ = ⟨Φ_f|S|Φᵢ⟩, where S is the S-operator. The great advantage of this definition is that the time-evolution operator U evolving a state in the interaction picture is formally known as a time-ordered exponential of the interaction, where T denotes the time-ordered product (see the sketch after this passage). Expressed in this operator, S = U(+∞, −∞), from which the S-matrix elements follow. Expanding U using the knowledge about V gives a Dyson series, or, if V comes as a Hamiltonian density H_int(x), a manifestly covariant expression. Being a special type of time-evolution operator, S is unitary. For any initial state Φᵢ and any final state Φ_f one finds S_fi = ⟨Φ_f|S|Φᵢ⟩. This approach is somewhat naïve in that potential problems are swept under the carpet. This is intentional. The approach works in practice and some of the technical issues are addressed in the other sections. In and out states Here a slightly more rigorous approach is taken in order to address potential problems that were disregarded in the interaction picture approach of above. The final outcome is, of course, the same as when taking the quicker route. For this, the notions of in and out states are needed. These will be developed in two ways, from vacuum, and from free particle states. Needless to say, the two approaches are equivalent, but they illuminate matters from different angles. From vacuum If a†(k) is a creation operator, its hermitian adjoint a(k) is an annihilation operator and destroys the vacuum, a(k)|*, 0⟩ = 0. In Dirac notation, define |*, 0⟩ as a vacuum quantum state, i.e. a state without real particles. The asterisk signifies that not all vacua are necessarily equal, and certainly not equal to the Hilbert space zero state 0. All vacuum states are assumed Poincaré invariant, invariance under translations, rotations and boosts; formally, where P^μ is the generator of translations in space and time, and M^{μν} is the generator of Lorentz transformations. Thus the description of the vacuum is independent of the frame of reference. Associated to the in and out states to be defined are the in and out field operators (aka fields) Φᵢ and Φₒ. Attention is here focused on the simplest case, that of a scalar theory, in order to exemplify with the least possible cluttering of the notation. The in and out fields satisfy the free Klein–Gordon equation. These fields are postulated to have the same equal time commutation relations (ETCR) as the free fields, where π(x) is the field canonically conjugate to Φ(x). Associated to the in and out fields are two sets of creation and annihilation operators, a†ᵢ(k) and a†ₒ(k), acting in the same Hilbert space, on two distinct complete sets (Fock spaces; initial space Hᵢ, final space Hₒ). These operators satisfy the usual commutation rules. The action of the creation operators on their respective vacua and on states with a finite number of particles in the in and out states is given by repeated application of a†ᵢ and a†ₒ, where issues of normalization have been ignored. See the next section for a detailed account on how a general state is normalized. The initial and final spaces are defined by the states obtained by applying the in (respectively out) creation operators to the vacuum. The asymptotic states are assumed to have well defined Poincaré transformation properties, i.e. they are assumed to transform as a direct product of one-particle states. This is a characteristic of a non-interacting field.
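The time-evolution operator in the interaction picture and the resulting S-operator referred to above have the following standard form (a sketch in conventional notation, H = H₀ + V; not recovered from the original markup):

% Time-ordered exponential of the interaction, and the S-operator:
\[
  U(t, t_0) = \mathcal{T}\exp\!\Bigl(-i\!\int_{t_0}^{t} d\tau\, V(\tau)\Bigr),
  \qquad
  S = U(+\infty, -\infty),
\]
% and, with an interaction Hamiltonian density:
\[
  S = \mathcal{T}\exp\!\Bigl(-i\!\int d^4x\, \mathcal{H}_{\mathrm{int}}(x)\Bigr).
\]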
From this it follows that the asymptotic states are all eigenstates of the momentum operator P^μ. In particular, they are eigenstates of the full Hamiltonian, H = P⁰. The vacuum is usually postulated to be stable and unique. The interaction is assumed adiabatically turned on and off. Heisenberg picture The Heisenberg picture is employed henceforth. In this picture, the states are time-independent. A Heisenberg state vector thus represents the complete spacetime history of a system of particles. The labeling of the in and out states refers to the asymptotic appearance. A state Ψ_{α, in} is characterized by the fact that as t → −∞ the particle content is that represented collectively by α. Likewise, a state Ψ_{β, out} will have the particle content represented by β for t → +∞. Using the assumption that the in and out states, as well as the interacting states, inhabit the same Hilbert space and assuming completeness of the normalized in and out states (postulate of asymptotic completeness), the initial states can be expanded in a basis of final states (or vice versa). The explicit expression is given later after more notation and terminology has been introduced. The expansion coefficients are precisely the S-matrix elements to be defined below. While the state vectors are constant in time in the Heisenberg picture, the physical states they represent are not. If a system is found to be in a state Ψ at time t = 0, then it will be found in the state U(τ)Ψ = e^{−iHτ}Ψ at time t = τ. This is not (necessarily) the same Heisenberg state vector, but it is an equivalent state vector, meaning that it will, upon measurement, be found to be one of the final states from the expansion with nonzero coefficient. Letting τ vary one sees that the observed Ψ(τ) = U(τ)Ψ (not measured) is indeed the Schrödinger picture state vector. By repeating the measurement sufficiently many times and averaging, one may say that the same state vector is indeed found at time t = τ as at time t = 0. This reflects the expansion above of an in state into out states. From free particle states For this viewpoint, one should consider how the archetypical scattering experiment is performed. The initial particles are prepared in well defined states where they are so far apart that they don't interact. They are somehow made to interact, and the final particles are registered when they are so far apart that they have ceased to interact. The idea is to look for states in the Heisenberg picture that in the distant past had the appearance of free particle states. These will be the in states. Likewise, an out state will be a state that in the distant future has the appearance of a free particle state. The notation from the general reference for this section will be used. A general non-interacting multi-particle state is given by Ψ_{p₁σ₁n₁; p₂σ₂n₂; …}, where p is momentum, σ is the spin z-component or, in the massless case, helicity, and n is the particle species. These states are normalized as a sum of products of delta functions, with one term for each permutation of the particles. Permutations work as such: if s is a permutation of k objects (for a k-particle state) such that n_{s(i)} = n_i for 1 ≤ i ≤ k, then a nonzero term results. The sign is plus unless s involves an odd number of fermion transpositions, in which case it is minus. The notation is usually abbreviated by letting one Greek letter stand for the whole collection describing the state. In abbreviated form the normalization becomes ⟨Ψ_{α′}|Ψ_α⟩ = δ(α′ − α). When integrating over free-particle states one writes in this notation ∫ dα ⋯, where the sum includes only terms such that no two terms are equal modulo a permutation of the particle type indices. The sets of states sought for are supposed to be complete.
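The abbreviated normalization and the integration convention take the following standard form (a sketch in Weinberg-style notation, consistent with the symbol fills above):

% Normalization of free multi-particle states, with one Greek letter
% standing for the full collection of momenta, spins and species,
% and the shorthand for integration over such states:
\[
  \langle \Psi_{\alpha'} \mid \Psi_{\alpha} \rangle = \delta(\alpha' - \alpha),
  \qquad
  \int d\alpha \;\equiv\;
  \sum_{n,\ \text{spins},\ \text{species}} \int d^3p_1 \cdots d^3p_n .
\]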
This is expressed as ∫ dα |Ψ_α^±⟩⟨Ψ_α^±| = 1, which could be paraphrased as a resolution of the identity, where for each fixed α, the term |Ψ_α^±⟩⟨Ψ_α^±| on the right hand side is a projection operator onto the state Ψ_α^±. Under an inhomogeneous Lorentz transformation (Λ, a), the states transform according to a rule involving W(Λ, p) and D^{(j)}, where W(Λ, p) is the Wigner rotation and D^{(j)} is the representation of W(Λ, p). By putting Λ = 1, a = (τ, 0, 0, 0), for which U(Λ, a) is e^{iHτ}, in this rule, it immediately follows that HΨ_α = E_αΨ_α, so the in and out states sought after are eigenstates of the full Hamiltonian that are necessarily non-interacting due to the absence of mixed particle energy terms. The discussion in the section above suggests that the in states and the out states should be such that for large positive and negative t the state has the appearance of the corresponding packet, represented by g, of free-particle states, assumed smooth and suitably localized in momentum. Wave packets are necessary, else the time evolution will yield only a phase factor indicating free particles, which cannot be the case. The right hand side follows from the fact that the in and out states are eigenstates of the Hamiltonian per above. To formalize this requirement, assume that the full Hamiltonian H can be divided into two terms, a free-particle Hamiltonian H₀ and an interaction V, H = H₀ + V, such that the eigenstates Φ_γ of H₀ have the same appearance as the in- and out-states with respect to normalization and Lorentz transformation properties. The in and out states are defined as eigenstates of the full Hamiltonian, HΨ_γ^± = E_γΨ_γ^±, satisfying the corresponding asymptotic condition for t → −∞ or t → +∞, respectively. Define then Ω(τ) = e^{iHτ}e^{−iH₀τ}, so that Ψ_γ^± = Ω(∓∞)Φ_γ. This last expression will work only using wave packets. From these definitions it follows that the in and out states are normalized in the same way as the free-particle states, and the three sets are unitarily equivalent. Now rewrite the eigenvalue equation, where the terms ±iε have been added to make the operator on the LHS invertible. Since the in and out states reduce to the free-particle states for V = 0, put ±iεΦ_α on the RHS to obtain Ψ_α^± = Φ_α + (E_α − H₀ ± iε)⁻¹VΨ_α^±. Then use the completeness of the free-particle states to finally obtain the Lippmann–Schwinger equation (see the displayed form after this passage); here H₀ has been replaced by its eigenvalue on the free-particle states. In states expressed as out states The initial states can be expanded in a basis of final states (or vice versa). Using the completeness relation, Ψ_α^+ can be expanded over the Ψ_β^−, where |⟨Ψ_β^−|Ψ_α^+⟩|² is the probability that the interaction transforms the in state Ψ_α^+ into the out state Ψ_β^−. By the ordinary rules of quantum mechanics one may write Ψ_α^+ = ∫ dβ Ψ_β^− ⟨Ψ_β^−|Ψ_α^+⟩. The expansion coefficients are precisely the S-matrix elements to be defined below.
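The Lippmann–Schwinger equation referred to above has the standard form (in the notation used in this section):

% The +/- i epsilon prescription makes the inverted operator well
% defined; the upper sign gives in states, the lower sign out states.
\[
  \Psi_\alpha^{\pm} \;=\; \Phi_\alpha
    \;+\; \int d\beta\,
    \frac{\langle \Phi_\beta \,|\, V \,|\, \Psi_\alpha^{\pm} \rangle}
         {E_\alpha - E_\beta \pm i\epsilon}\; \Phi_\beta .
\]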
This means that the S-matrix can be expressed as S_{βα} = ⟨Ψ_β^−|Ψ_α^+⟩. If S describes an interaction correctly, these properties must also be true: If the system is made up of a single particle in a momentum eigenstate |k⟩, then S|k⟩ = |k⟩. This follows from the calculation above as a special case. The S-matrix element may be nonzero only where the output state has the same total momentum as the input state. This follows from the required Lorentz invariance of the S-matrix. Evolution operator U Define a time-dependent creation and annihilation operator by a†(k, t) = e^{iHt}a†(k)e^{−iHt} and a(k, t) = e^{iHt}a(k)e^{−iHt}; so the fields evolve correspondingly in the Heisenberg picture. We allow for a phase difference between the interacting field and the asymptotic fields, because for t → ∓∞ the interacting field approaches the in (out) field only up to such a phase. Substituting the explicit expression for U, one has S = T exp(−i ∫_{−∞}^{+∞} dτ V(τ)), where V is the interaction part of the Hamiltonian and T is the time ordering. By inspection, it can be seen that this formula is not explicitly covariant. Dyson series The most widely used expression for the S-matrix is the Dyson series. This expresses the S-matrix operator as the series S = Σ_{n=0}^{∞} ((−i)ⁿ/n!) ∫ d⁴x₁ ⋯ d⁴x_n T[H_int(x₁) ⋯ H_int(x_n)], where: T denotes time-ordering, and H_int(x) denotes the interaction Hamiltonian density which describes the interactions in the theory. The not-S-matrix Since the transformation of particles from black hole to Hawking radiation could not be described with an S-matrix, Stephen Hawking proposed a "not-S-matrix", for which he used the dollar sign ($), and which therefore was also called the "dollar matrix". See also Feynman diagram LSZ reduction formula Wick's theorem Haag's theorem Interaction picture Levinson's theorem Initial and final state radiation Remarks Notes References Quantum field theory Scattering theory Matrices Mathematical physics
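A conventional way to make the momentum-conservation property explicit is to split off a reduced amplitude (a standard-form sketch; normalization conventions vary between texts):

% S-matrix element with overall four-momentum conservation factored out:
\[
  S_{\beta\alpha} \;=\; \delta(\beta - \alpha)
    \;-\; 2\pi i\, \delta^{4}\!\bigl(p_\beta - p_\alpha\bigr)\, M_{\beta\alpha},
\]
so the S-matrix element vanishes unless the total four-momentum of the out state equals that of the in state.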
S-matrix
[ "Physics", "Chemistry", "Mathematics" ]
4,527
[ "Quantum field theory", "Scattering theory", "Applied mathematics", "Theoretical physics", "Mathematical objects", "Quantum mechanics", "Matrices (mathematics)", "Scattering", "Mathematical physics" ]
622,571
https://en.wikipedia.org/wiki/Ubiquitin%20ligase
A ubiquitin ligase (also called an E3 ubiquitin ligase) is a protein that recruits an E2 ubiquitin-conjugating enzyme that has been loaded with ubiquitin, recognizes a protein substrate, and assists or directly catalyzes the transfer of ubiquitin from the E2 to the protein substrate. In simple and more general terms, the ligase enables movement of ubiquitin from a ubiquitin carrier to another protein (the substrate) by some mechanism. The ubiquitin, once it reaches its destination, ends up being attached by an isopeptide bond to a lysine residue, which is part of the target protein. E3 ligases interact with both the target protein and the E2 enzyme, and so impart substrate specificity to the E2. Commonly, E3s polyubiquitinate their substrate with Lys48-linked chains of ubiquitin, targeting the substrate for destruction by the proteasome. However, many other types of linkages are possible and alter a protein's activity, interactions, or localization. Ubiquitination by E3 ligases regulates diverse areas such as cell trafficking, DNA repair, and signaling and is of profound importance in cell biology. E3 ligases are also key players in cell cycle control, mediating the degradation of cyclins, as well as cyclin dependent kinase inhibitor proteins. The human genome encodes over 600 putative E3 ligases, allowing for tremendous diversity in substrates. Ubiquitination system The ubiquitin ligase is referred to as an E3, and operates in conjunction with an E1 ubiquitin-activating enzyme and an E2 ubiquitin-conjugating enzyme. There is one major E1 enzyme, shared by all ubiquitin ligases, that uses ATP to activate ubiquitin for conjugation and transfers it to an E2 enzyme. The E2 enzyme interacts with a specific E3 partner and transfers the ubiquitin to the target protein. The E3, which may be a multi-protein complex, is, in general, responsible for targeting ubiquitination to specific substrate proteins. The ubiquitylation reaction proceeds in three or four steps depending on the mechanism of action of the E3 ubiquitin ligase. In the conserved first step, an E1 cysteine residue attacks the ATP-activated C-terminal glycine on ubiquitin, resulting in a thioester Ub-S-E1 complex. The energy from ATP and diphosphate hydrolysis drives the formation of this reactive thioester, and subsequent steps are thermoneutral. Next, a transthiolation reaction occurs, in which an E2 cysteine residue attacks and replaces the E1. HECT domain type E3 ligases will have one more transthiolation reaction to transfer the ubiquitin molecule onto the E3, whereas the much more common RING finger domain type ligases transfer ubiquitin directly from E2 to the substrate. The final step in the first ubiquitylation event is an attack from the target protein lysine amine group, which will remove the cysteine, and form a stable isopeptide bond. One notable exception to this is p21 protein, which appears to be ubiquitylated using its N-terminal amine, thus forming a peptide bond with ubiquitin. Ubiquitin ligase families Humans have an estimated 500-1000 E3 ligases, which impart substrate specificity onto the E1 and E2. The E3 ligases are classified into four families: HECT, RING-finger, U-box, and PHD-finger. The RING-finger E3 ligases are the largest family and contain ligases such as the anaphase-promoting complex (APC) and the SCF complex (Skp1-Cullin-F-box protein complex). SCF complexes consist of four proteins: Rbx1, Cul1, Skp1, which are invariant among SCF complexes, and an F-box protein, which varies. 
Around 70 human F-box proteins have been identified. F-box proteins contain an F-box, which binds the rest of the SCF complex, and a substrate binding domain, which gives the E3 its substrate specificity. Mono- and poly-ubiquitylation Ubiquitin signaling relies on the diversity of ubiquitin tags for the specificity of its message. A protein can be tagged with a single ubiquitin molecule (monoubiquitylation), or a variety of different chains of ubiquitin molecules (polyubiquitylation). E3 ubiquitin ligases catalyze polyubiquitination events much in the same way as the single ubiquitylation mechanism, using instead a lysine residue from a ubiquitin molecule currently attached to the substrate protein to attack the C-terminus of a new ubiquitin molecule. For example, a common 4-ubiquitin tag, linked through the lysine at position 48 (K48), recruits the tagged protein to the proteasome for subsequent degradation. However, all seven of the ubiquitin lysine residues (K6, K11, K27, K29, K33, K48, and K63), as well as the N-terminal methionine, are used in chains in vivo. Monoubiquitination has been linked to membrane protein endocytosis pathways. For example, phosphorylation of the tyrosine at position 1045 in the Epidermal Growth Factor Receptor (EGFR) can recruit the RING type E3 ligase c-Cbl, via an SH2 domain. c-Cbl monoubiquitylates EGFR, signaling for its internalization and trafficking to the lysosome. Monoubiquitination also can regulate cytosolic protein localization. For example, the E3 ligase MDM2 ubiquitylates p53 either for degradation (K48 polyubiquitin chain), or for nuclear export (monoubiquitylation). These events occur in a concentration dependent fashion, suggesting that modulating E3 ligase concentration is a cellular regulatory strategy for controlling protein homeostasis and localization. Substrate recognition Ubiquitin ligases are the final, and potentially the most important, determinant of substrate specificity in ubiquitination of proteins. The ligases must simultaneously distinguish their protein substrate from thousands of other proteins in the cell, and from other (ubiquitination-inactive) forms of the same protein. This can be achieved by different mechanisms, most of which involve recognition of degrons: specific short amino acid sequences or chemical motifs on the substrate. N-degrons Proteolytic cleavage can lead to exposure of residues at the N-terminus of a protein. According to the N-end rule, different N-terminal amino acids (or N-degrons) are recognized to a different extent by their appropriate ubiquitin ligase (N-recognin), influencing the half-life of the protein. For instance, positively charged (Arg, Lys, His) and bulky hydrophobic amino acids (Phe, Trp, Tyr, Leu, Ile) are recognized preferentially and thus considered destabilizing degrons since they allow faster degradation of their proteins. Phosphodegrons A degron can be converted into its active form by a post-translational modification such as phosphorylation of a tyrosine, serine or threonine residue. In this case, the ubiquitin ligase exclusively recognizes the phosphorylated version of the substrate due to stabilization within the binding site. For example, FBW7, the F-box substrate recognition unit of the SCF^FBW7 ubiquitin ligase, stabilizes a phosphorylated substrate by hydrogen bonding of its arginine residues to the phosphate. In the absence of the phosphate, residues of FBW7 repel the substrate.
Oxygen and small molecule dependent degrons The presence of oxygen or other small molecules can influence degron recognition. The von Hippel-Lindau (VHL) protein (substrate recognition part of a specific E3 ligase), for instance, recognizes the hypoxia-inducible factor alpha (HIF-α) only under normal oxygen conditions, when its proline is hydroxylated. Under hypoxia, on the other hand, HIF-α is not hydroxylated, evades ubiquitination and thus operates in the cell at higher concentrations, which can initiate the transcriptional response to hypoxia. Another example of small molecule control of protein degradation is the phytohormone auxin in plants. Auxin binds to TIR1 (the substrate recognition domain of the SCF^TIR1 ubiquitin ligase), increasing the affinity of TIR1 for its substrates (transcriptional repressors: Aux/IAA), and promoting their degradation. Misfolded and sugar degrons In addition to recognizing amino acids, ubiquitin ligases can also detect unusual features on substrates that serve as signals for their destruction. For example, San1 (Sir antagonist 1), a nuclear protein quality control factor in yeast, has a disordered substrate binding domain, which allows it to bind to hydrophobic domains of misfolded proteins. Misfolded or excess unassembled glycoproteins of the ERAD pathway, on the other hand, are recognized by Fbs1 and Fbs2, mammalian F-box proteins of the E3 ligases SCF^Fbs1 and SCF^Fbs2. These recognition domains have small hydrophobic pockets allowing them to bind high-mannose-containing glycans. Structural motifs In addition to linear degrons, the E3 ligase can in some cases also recognize structural motifs on the substrate. In this case, the 3D motif can allow the substrate to directly relate its biochemical function to ubiquitination. This relation can be demonstrated with the TRF1 protein (regulator of human telomere length), which is recognized by its corresponding E3 ligase (FBXO4) via an intermolecular beta sheet interaction. TRF1 cannot be ubiquitylated while telomere bound, likely because the same TRF1 domain that binds to its E3 ligase also binds to telomeres. Disease relevance E3 ubiquitin ligases regulate homeostasis, cell cycle, and DNA repair pathways, and as a result, a number of these proteins are involved in a variety of cancers, including famously MDM2, BRCA1, and the Von Hippel-Lindau tumor suppressor. For example, a mutation of MDM2 has been found in stomach cancer, renal cell carcinoma, and liver cancer (amongst others) to deregulate MDM2 concentrations by increasing its promoter's affinity for the Sp1 transcription factor, causing increased transcription of MDM2 mRNA. Several proteomics-based experimental techniques are available for identifying E3 ubiquitin ligase-substrate pairs, such as proximity-dependent biotin identification (BioID), ubiquitin ligase-substrate trapping, and tandem ubiquitin-binding entities (TUBEs). Examples A RING (Really Interesting New Gene) domain binds the E2 conjugase and might be found to mediate enzymatic activity in the E2-E3 complex An F-box domain (as in the SCF complex) binds the ubiquitinated substrate. (e.g., Cdc4, which binds the target protein Sic1; Grr1, which binds Cln). A HECT domain, which is involved in the transfer of ubiquitin from the E2 to the substrate.
Individual E3 ubiquitin ligases E3A mdm2 Anaphase-promoting complex (APC) UBR5 (EDD1) SOCS/ BC-box/ eloBC/ CUL5/ RING LNXp80 CBX4, CBLL1 HACE1 HECTD1, HECTD2, HECTD3, HECTD4 HECW1, HECW2 HERC1, HERC2, HERC3, HERC4, HERC5, HERC6 HUWE1, ITCH NEDD4, NEDD4L PPIL2 PRPF19 PIAS1, PIAS2, PIAS3, PIAS4 RANBP2 RNF4, RNF167 RBX1 SMURF1, SMURF2 STUB1 TOPORS TRIP12 UBE3A, UBE3B, UBE3C, UBE3D UBE4A, UBE4B UBOX5 UBR5 VHL WWP1, WWP2 Parkin MKRN1 See also ERAD Ubiquitin Ubiquitin-activating enzyme Ubiquitin-conjugating enzyme References External links Quips article describing E3 Ligase function at PDBe EC 6.3 Post-translational modification
Ubiquitin ligase
[ "Chemistry" ]
2,810
[ "Post-translational modification", "Gene expression", "Biochemical reactions" ]
623,150
https://en.wikipedia.org/wiki/Windage
Windage is a term used in aerodynamics, firearms ballistics, and automobiles that mainly relates to the effects of air (e.g., wind) on an object of interest. The term is also used for the similar effects of liquids, such as oil. Usage Aerodynamics Windage is a force created on an object by friction when there is relative movement between air and the object. Windage loss is the reduction in efficiency due to windage forces. For example, electric motors are affected by friction between the rotor and air. Large alternators have significant losses due to windage. To reduce losses, hydrogen gas may be used, since it is less dense. There are two causes of windage: The object is moving and being slowed by resistance from the air. A wind is blowing, producing a force on the object. The term can refer to: The effect of the force, for example the deflection of a missile or an aircraft by a cross wind. The area and shape of the object that make it susceptible to friction, for example those parts of a boat that are exposed to the wind. Aerodynamic streamlining can be used to reduce windage. There is a hydrodynamic effect similar to windage, hydrodynamic drag. Ballistics In firearms parlance, the word windage refers to the sight adjustment used to compensate for the horizontal deviation of the projectile trajectory from the intended point of impact due to wind drift or Coriolis effect. By contrast, the adjustment for the vertical deviation is the elevation. The colloquial term "Kentucky windage" refers to the practice of holding the aim to the upwind side of the target (also known as deflection shooting or "leading" the wind) to compensate for wind drift, without actually changing the existing adjustment settings on the gunsight. In muzzleloading firearms, windage also refers to the difference in diameter between the bore and the ball, especially in muskets and cannons. The bore gap allows the shot to be loaded quickly, but reduces the efficiency of the weapon's internal ballistics, as it allows gas to leak past the projectile. It also reduces the accuracy, as the ball takes a zig-zag path along the barrel, emerging out of the muzzle at an unpredictable angle. Automobiles In automotive parlance, windage refers to parasitic drag on the crankshaft due to sump oil splashing on the crank train during rough driving, as well as dissipating energy in turbulence from the crank train moving the crankcase gas and oil mist at high RPM. Windage may also inhibit the migration of oil into the sump and back to the oil pump, creating lubrication problems. Some manufacturers and aftermarket vendors have developed special scrapers to remove excess oil from the counterweights and windage screens to create a barrier between the crankshaft and oil sump. See also Deflection (ballistics) Drag (physics) References Navigation Ballistics Nautical terminology Engines
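The sight-adjustment arithmetic described under Ballistics can be illustrated numerically. A rough sketch using the classic lag-rule approximation (crosswind drift ≈ crosswind speed × lag time, where lag time is the actual time of flight minus the vacuum time of flight); all values are hypothetical:

# Rough windage-correction estimate via the lag rule. Illustrative only.
range_m = 500.0   # distance to target, m
v0      = 800.0   # muzzle velocity, m/s
tof     = 0.70    # actual time of flight, s (from a drop table)
wind    = 4.0     # full-value crosswind, m/s

lag   = tof - range_m / v0      # lag time, s
drift = wind * lag              # lateral drift at the target, m

# Convert the required hold-off to milliradians for a sight adjustment:
mrad = drift / range_m * 1000.0
print(f"drift = {drift:.2f} m, correction = {mrad:.1f} mrad")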
Windage
[ "Physics", "Technology" ]
608
[ "Machines", "Applied and interdisciplinary physics", "Engines", "Physical systems", "Ballistics" ]
624,033
https://en.wikipedia.org/wiki/Reynolds-averaged%20Navier%E2%80%93Stokes%20equations
The Reynolds-averaged Navier–Stokes equations (RANS equations) are time-averaged equations of motion for fluid flow. The idea behind the equations is Reynolds decomposition, whereby an instantaneous quantity is decomposed into its time-averaged and fluctuating quantities, an idea first proposed by Osborne Reynolds. The RANS equations are primarily used to describe turbulent flows. These equations can be used with approximations based on knowledge of the properties of flow turbulence to give approximate time-averaged solutions to the Navier–Stokes equations. For a stationary flow of an incompressible Newtonian fluid, these equations can be written in Einstein notation in Cartesian coordinates as: ρ ū_j ∂ū_i/∂x_j = ρ f̄_i + ∂/∂x_j [−p̄ δ_ij + μ(∂ū_i/∂x_j + ∂ū_j/∂x_i) − ρ \overline{u′_i u′_j}]. The left hand side of this equation represents the change in mean momentum of a fluid element owing to the unsteadiness in the mean flow and the convection by the mean flow. This change is balanced by the mean body force, the isotropic stress owing to the mean pressure field, the viscous stresses, and apparent stress (−ρ \overline{u′_i u′_j}) owing to the fluctuating velocity field, generally referred to as the Reynolds stress. This nonlinear Reynolds stress term requires additional modeling to close the RANS equation for solving, and has led to the creation of many different turbulence models. The time-average operator is a Reynolds operator. Derivation of RANS equations The basic tool required for the derivation of the RANS equations from the instantaneous Navier–Stokes equations is the Reynolds decomposition. Reynolds decomposition refers to separation of the flow variable (like velocity u) into the mean (time-averaged) component (ū) and the fluctuating component (u′). Because the mean operator is a Reynolds operator, it has a set of properties. One of these properties is that the mean of the fluctuating quantity is equal to zero, \overline{u′} = 0. Thus, u(x, t) = ū(x) + u′(x, t), where x is the position vector. Some authors prefer using U instead of ū for the mean term (since an overbar is sometimes used to represent a vector). In this case, the fluctuating term u′ is represented instead by u. This is possible because the two terms do not appear simultaneously in the same equation. To avoid confusion, the notation u, ū, and u′ will be used to represent the instantaneous, mean, and fluctuating terms, respectively. The properties of Reynolds operators are useful in the derivation of the RANS equations. Using these properties, the Navier–Stokes equations of motion, expressed in tensor notation, are (for an incompressible Newtonian fluid): ∂u_i/∂x_i = 0 and ∂u_i/∂t + u_j ∂u_i/∂x_j = f_i − (1/ρ) ∂p/∂x_i + ν ∂²u_i/∂x_j∂x_j, where f_i is a vector representing external forces. Next, each instantaneous quantity can be split into time-averaged and fluctuating components, and the resulting equation time-averaged, to yield: ∂ū_i/∂t + ū_j ∂ū_i/∂x_j + \overline{u′_j ∂u′_i/∂x_j} = f̄_i − (1/ρ) ∂p̄/∂x_i + ν ∂²ū_i/∂x_j∂x_j. The momentum equation can also be written as ∂ū_i/∂t + ū_j ∂ū_i/∂x_j = f̄_i − (1/ρ) ∂p̄/∂x_i + ν ∂²ū_i/∂x_j∂x_j − ∂\overline{u′_i u′_j}/∂x_j. On further manipulations this yields ρ ∂ū_i/∂t + ρ ū_j ∂ū_i/∂x_j = ρ f̄_i + ∂/∂x_j [−p̄ δ_ij + 2μ S̄_ij − ρ \overline{u′_i u′_j}], where S̄_ij = ½(∂ū_i/∂x_j + ∂ū_j/∂x_i) is the mean rate of strain tensor. Finally, since integration in time removes the time dependence of the resultant terms, the time derivative must be eliminated, leaving: ρ ū_j ∂ū_i/∂x_j = ρ f̄_i + ∂/∂x_j [−p̄ δ_ij + 2μ S̄_ij − ρ \overline{u′_i u′_j}]. Equations of Reynolds stress The time evolution equation of the Reynolds stress \overline{u′_i u′_j} is a transport equation with production, diffusion, pressure–strain and dissipation terms. This equation is very complicated. If \overline{u′_i u′_j} is traced, the turbulence kinetic energy k = ½\overline{u′_i u′_i} is obtained. The last term is the turbulent dissipation rate ε. All RANS models are based on the above equation.
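The decomposition at the heart of the derivation above is easy to demonstrate numerically. A minimal sketch on synthetic data (all signals hypothetical): subtract the time average to get the fluctuation, verify its mean vanishes, and estimate the apparent Reynolds stress −ρ\overline{u′v′} from the fluctuations:

import numpy as np

# Reynolds decomposition on synthetic stationary velocity signals.
rng = np.random.default_rng(0)
n, rho = 100_000, 1.0

# Hypothetical means plus correlated turbulent fluctuations:
common = rng.standard_normal(n)
u = 2.0 + 0.3 * common + 0.1 * rng.standard_normal(n)
v = 0.5 - 0.2 * common + 0.1 * rng.standard_normal(n)

u_mean, v_mean = u.mean(), v.mean()   # time averages (bar quantities)
u_f, v_f = u - u_mean, v - v_mean     # fluctuations (primed quantities)

print(abs(u_f.mean()) < 1e-10)            # mean of fluctuation ~ 0
tau = -rho * np.mean(u_f * v_f)           # apparent (Reynolds) stress
print(tau)                                # nonzero: must be modelled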
Applications (RANS modelling) When combined with the vortex lattice method (VLM) or the boundary element method (BEM), RANS has been found useful for modelling the flow of water between two contra-rotating propellers: VLM or BEM is applied to the propellers themselves, while RANS is used for the dynamically varying flow between them. The RANS equations have been widely utilized as a model for determining flow characteristics and assessing wind comfort in urban environments. This computational approach can be executed through direct calculations involving the solution of the RANS equations, or through an indirect method involving the training of machine learning algorithms using the RANS equations as a basis. The direct approach is more accurate than the indirect approach, but it requires expertise in numerical methods and computational fluid dynamics (CFD), as well as substantial computational resources to handle the complexity of the equations. Notes See also Favre averaging References Fluid dynamics Turbulence Turbulence models Computational fluid dynamics
Reynolds-averaged Navier–Stokes equations
[ "Physics", "Chemistry", "Engineering" ]
829
[ "Turbulence", "Computational fluid dynamics", "Chemical engineering", "Computational physics", "Piping", "Fluid dynamics" ]
9,590,201
https://en.wikipedia.org/wiki/Valve%20actuator
A valve actuator is the mechanism for opening and closing a valve. Manually operated valves require someone in attendance to adjust them using a direct or geared mechanism attached to the valve stem. Power-operated actuators, using gas pressure, hydraulic pressure or electricity, allow a valve to be adjusted remotely, or allow rapid operation of large valves. Power-operated valve actuators may be the final elements of an automatic control loop which automatically regulates some flow, level or other process. Actuators may serve only to open and close the valve, or may allow intermediate positioning; some valve actuators include switches or other ways to remotely indicate the position of the valve. Used for the automation of industrial valves, actuators can be found in all kinds of process plants. They are used in waste water treatment plants, power plants, refineries, mining and nuclear processes, food factories, and pipelines. Valve actuators play a major part in automating process control. The valves to be automated vary both in design and dimension. The diameters of the valves range from one-tenth of an inch to several feet. Types The common types of actuators are: manual, pneumatic, hydraulic, electric and spring. Manual A manual actuator employs levers, gears, or wheels to move the valve stem with a certain action. Manual actuators are powered by hand. Manual actuators are inexpensive, typically self-contained, and easy to operate by humans. However, some large valves are impossible to operate manually, and some valves may be located in remote, toxic, or hostile environments that prevent manual operation in some conditions. In addition, some situations may, as a safety feature, require quicker operation to close the valve than manual actuators can provide. Pneumatic Air (or other gas) pressure is the power source for pneumatic valve actuators. They are used on linear or quarter-turn valves. Air pressure acts on a piston or bellows diaphragm creating linear force on a valve stem. Alternatively, a quarter-turn vane-type actuator produces torque to provide rotary motion to operate a quarter-turn valve. A pneumatic actuator may be arranged to be spring-closed or spring-opened, with air pressure overcoming the spring to provide movement. A "double acting" actuator uses air applied to different inlets to move the valve in the opening or closing direction. A central compressed air system can provide the clean, dry, compressed air needed for pneumatic actuators. In some types, for example, regulators for compressed gas, the supply pressure is provided from the process gas stream, and waste gas is either vented to the air or dumped into lower-pressure process piping. Hydraulic Hydraulic actuators convert fluid pressure into motion. Similar to pneumatic actuators, they are used on linear or quarter-turn valves. Fluid pressure acting on a piston provides linear thrust for gate or globe valves. A quarter-turn actuator produces torque to provide rotary motion to operate a quarter-turn valve. Most types of hydraulic actuators can be supplied with fail-safe features to close or open a valve under emergency circumstances. Hydraulic pressure can be supplied by a self-contained hydraulic pressure pump. In some applications, such as water pumping stations, the process fluid can provide hydraulic pressure, although the actuators must use materials compatible with the fluid. Electric Electric actuators use an electric motor to provide the torque to operate a valve. They are quiet, non-toxic and energy efficient.
However, electricity must be available, which is not always the case; alternatively, they can operate on batteries. Spring Spring-based actuators hold back a loaded spring. Once any anomaly is detected, or power is lost, the spring is released, operating the valve. They can only operate once, without resetting, and so are used for one-use purposes such as emergencies. They have the advantage that they do not require a powerful electric supply to move the valve, so they can operate from restricted battery power, or automatically when all power has been lost. Actuator movement A linear actuator opens and closes valves that can be operated via linear force, the type sometimes called a "rising stem" valve. These types of valves include globe valves, rising stem ball valves, control valves and gate valves. The two main types of linear actuators are diaphragm and piston. Diaphragm actuators are made of a round piece of rubber squeezed around its edges between two sides of a cylinder or chamber; air pressure entering on either side pushes the diaphragm in one direction or the other. A rod is connected to the center of the diaphragm so that it moves as the pressure is applied. The rod is then connected to a valve stem, which transmits the linear motion to the valve, thereby opening or closing it. A diaphragm actuator is useful if the supply pressure is moderate and the valve travel and thrust required are low. Piston actuators use a piston which moves along the length of a cylinder. The piston rod conveys the force on the piston to the valve stem. Piston actuators allow higher pressures, longer travel ranges, and higher thrust forces than diaphragm actuators. A spring is used to provide defined behavior in the case of loss of power. This is important in safety-related incidents and is sometimes the driving factor in specifications. An example of loss of power is when the air compressor (the main source of compressed air that provides the fluid for the actuator to move) shuts down. If there is a spring inside the actuator, it will force the valve open or closed and will keep it in that position while power is restored. An actuator may be specified "fail open" or "fail close" to describe its behavior. In the case of an electric actuator, losing power will keep the valve stationary unless there is a backup power supply. A typical representative of the valves to be automated is a plug-type control valve. Just like the plug in the bathtub is pressed into the drain, the plug is pressed into the plug seat by a stroke movement. The pressure of the medium acts upon the plug, while the thrust unit has to provide the same amount of thrust to be able to hold and move the plug against this pressure. Features of an electric actuator Motor (1) Robust asynchronous three-phase AC motors are mostly used as the driving force; for some applications, single-phase AC or DC motors are used as well. These motors are specially adapted for valve automation as they provide higher torques from standstill than comparable conventional motors, a necessary requirement to unseat sticky valves. The actuators are expected to operate under extreme ambient conditions; however, they are generally not used for continuous operation since the motor heat buildup can be excessive. Limit and torque sensors (2) The limit switches signal when an end position has been reached. The torque switching measures the torque present in the valve. When a set limit is exceeded, this is signaled in the same way.
Actuators are often equipped with a remote position transmitter which indicates the valve position as a continuous 4–20 mA current or voltage signal. Gearing (3) Often a worm gearing is used to reduce the high output speed of the electric motor. This enables a high reduction ratio within the gear stage, leading to a low back-driving efficiency, which in this case is desirable: the gearing is therefore self-locking, i.e. it prevents accidental and undesired changes of the valve position by acting upon the valve's closing element. Valve attachment (4) The valve attachment consists of two elements. First: the flange used to firmly connect the actuator to the counterpart on the valve side. The higher the torque to be transmitted, the larger the flange required. Second: the output drive type used to transmit the torque or the thrust from the actuator to the valve shaft. Just as there is a multitude of valves, there is also a multitude of valve attachments. Dimensions and design of the valve mounting flange and valve attachments are stipulated in the standards EN ISO 5210 for multi-turn actuators and EN ISO 5211 for part-turn actuators. The design of valve attachments for linear actuators is generally based on DIN 3358. Manual operation (5) In their basic version, most electric actuators are equipped with a handwheel for operating the actuator during commissioning or power failure. The handwheel does not move during motor operation. The electronic torque limiting switches are not functional during manual operation. Mechanical torque-limiting devices are commonly used to prevent torque overload during manual operation. Actuator controls (6) Both actuator signals and operation commands of the DCS are processed within the actuator controls. This task can in principle be assumed by external controls, e.g. a PLC. Modern actuators include integral controls which process signals locally without any delay. The controls also include the switchgear required to control the electric motor. This can be either reversing contactors or thyristors, which, being electronic components, are not subject to mechanical wear. The controls use the switchgear to switch the electric motor on or off depending on the signals or commands present. Another task of the actuator controls is to provide the DCS with feedback signals, e.g. when reaching a valve end position. Electrical connection (7) The supply cables of the motor and the signal cables for transmitting the commands to the actuator and sending feedback signals on the actuator status are connected to the electrical connection. The electrical connection can be designed as a separately sealed terminal compartment or a plug/socket connector. For maintenance purposes, the wiring should be easily disconnected and reconnected. Fieldbus connection (8) Fieldbus technology is increasingly used for data transmission in process automation applications. Electric actuators can therefore be equipped with all common fieldbus interfaces used in process automation. Special connections are required for the connection of fieldbus data cables. Functions Automatic switching off in the end positions After receiving an operation command, the actuator moves the valve in direction OPEN or CLOSE. When reaching the end position, an automatic switch-off procedure is started. Two fundamentally different switch-off mechanisms can be used. The controls switch off the actuator as soon as the set tripping point has been reached. This is called limit seating.
However, there are valve types for which the closing element has to be moved into the end position at a defined force or a defined torque to ensure that the valve seals tightly. This is called torque seating. The controls are programmed so as to ensure that the actuator is switched off when the set torque limit is exceeded. The end position is signalled by a limit switch. Safety functions The torque switching is not only used for torque seating in the end position, but it also serves as overload protection over the whole travel and protects the valve against excessive torque. If excessive torque acts upon the closing element in an intermediate position, e.g. due to a trapped object, the torque switching will trip when reaching the set tripping torque. In this situation the end position is not signalled by the limit switch. The controls can therefore distinguish between normal torque switch tripping in one of the end positions and switching off in an intermediate position due to excessive torque. Temperature sensors are required to protect the motor against overheating. In some applications, the increase of the motor current is also monitored. Thermoswitches or PTC thermistors embedded in the motor windings usually fulfil this task reliably. They trip when the temperature limit has been exceeded, and the controls switch off the motor. Process control functions Due to increasing decentralisation in automation technology and the introduction of microprocessors, more and more functions have been transferred from the DCS to the field devices. The data volume to be transmitted was reduced accordingly, in particular by the introduction of fieldbus technology. Electric actuators, whose functions have been considerably expanded, are also affected by this development. The simplest example is position control. Modern positioners are equipped with self-adaptation, i.e. the positioning behaviour is monitored and continuously optimised via controller parameters. Meanwhile, electric actuators are equipped with fully-fledged process controllers (PID controllers). Especially for remote installations, e.g. the flow control to an elevated tank, the actuator can assume the tasks of a PLC which otherwise would have to be additionally installed. Diagnosis Modern actuators have extensive diagnostic functions which can help identify the cause of a failure. They also log the operating data. Study of the logged data allows the operation to be optimised by changing the parameters and the wear of both actuator and valve to be reduced. Duty types Open-close duty If a valve is used as a shut-off valve, then it will be either open or closed, and intermediate positions are not held. Positioning duty Defined intermediate positions are approached for setting a static flow through a pipeline. The same running time limits as in open-close duty apply. Modulating duty The most distinctive feature of a closed-loop application is that changing conditions require frequent adjustment of the actuator, for example, to set a certain flow rate. Sensitive closed-loop applications require adjustments within intervals of a few seconds. The demands on the actuator are higher than in open-close or positioning duty. Actuator design must be able to withstand the high number of starts without any deterioration in control accuracy. Service conditions Actuators are specified for the desired life and reliability for a given set of application service conditions.
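The distinction between limit seating and torque seating, and the use of the torque switch as mid-travel overload protection, can be summarized as simple decision logic. A minimal sketch (illustrative only, not any manufacturer's control scheme):

from enum import Enum

# Evaluate the two switch-off mechanisms described above: limit seating
# stops on the position limit switch; torque seating stops on the torque
# switch at the end position; a torque trip WITHOUT the limit switch
# indicates an obstruction in an intermediate position.
class Stop(Enum):
    RUNNING = 0
    LIMIT_SEATED = 1
    TORQUE_SEATED = 2
    OBSTRUCTION_FAULT = 3

def evaluate(position_switch: bool, torque_switch: bool,
             torque_seating: bool) -> Stop:
    if torque_switch and not position_switch:
        return Stop.OBSTRUCTION_FAULT    # excessive torque mid-travel
    if torque_switch and position_switch and torque_seating:
        return Stop.TORQUE_SEATED        # tight seal reached at end position
    if position_switch and not torque_seating:
        return Stop.LIMIT_SEATED         # set tripping point reached
    return Stop.RUNNING

# Example: torque trip in an intermediate position (e.g. trapped object):
print(evaluate(position_switch=False, torque_switch=True,
               torque_seating=True))     # -> Stop.OBSTRUCTION_FAULT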
Service conditions
Actuators are specified for the desired life and reliability under a given set of application service conditions. In addition to the static and dynamic load and the response time required for the valve, the actuator must withstand the temperature range, corrosion environment and other conditions of a specific application. Valve actuator applications are often safety related, so plant operators put high demands on the reliability of the devices. Failure of an actuator may cause accidents in process-controlled plants, and toxic substances may leak into the environment. Process-control plants are often operated for several decades, which justifies the higher demands put on the lifetime of the devices. For this reason, actuators are always designed with a high degree of enclosure protection. Manufacturers put a lot of work and knowledge into corrosion protection.

Enclosure protection
The enclosure protection types are defined according to the IP codes of EN 60529. The basic versions of most electric actuators are designed to the second highest enclosure protection, IP 67. This means they are protected against the ingress of dust and against water during immersion (30 min at a maximum head of water of 1 m). Most actuator manufacturers also supply devices to enclosure protection IP 68, which provides protection against submersion up to a maximum head of water of 6 m.

Ambient temperatures
In Siberia, temperatures down to −60 °C may occur, while in technical process plants +100 °C may be exceeded. Using the proper lubricant is crucial for full operation under these conditions. Greases which may be used at room temperature can become too solid at low temperatures for the actuator to overcome the resistance within the device. At high temperatures, these greases can liquefy and lose their lubricating power. When sizing the actuator, the ambient temperature and the selection of the correct lubricant are of major importance.

Explosion protection
Actuators are used in applications where potentially explosive atmospheres may occur. These include, among others, refineries, pipelines, oil and gas exploration, and even mining. When a potentially explosive gas-air mixture or gas-dust mixture occurs, the actuator must not act as an ignition source. Hot surfaces on the actuator as well as ignition sparks created by the actuator have to be avoided. This can be achieved by a flameproof enclosure, where the housing is designed to prevent ignition sparks from leaving the housing even if there is an explosion inside. Actuators designed for these applications, being explosion-proof devices, have to be qualified by a test authority (notified body). Explosion protection is not standardized worldwide. Within the European Union, ATEX 94/9/EC applies; in the US, the NEC (approval by FM); and in Canada, the CEC (approval by the CSA). Explosion-proof actuators have to meet the design requirements of these directives and regulations.

Additional uses
Small electric actuators can be used in a wide variety of assembly, packaging and testing applications. Such actuators can be linear, rotary, or a combination of the two, and can be combined to perform work in three dimensions. They are often used to replace pneumatic cylinders.

References

Actuators
Fluid technology
Valve actuator
[ "Physics", "Chemistry", "Engineering" ]
3,449
[ "Fluid technology", "Physical systems", "Valves", "Hydraulics", "Mechanical engineering by discipline", "Piping" ]
9,590,785
https://en.wikipedia.org/wiki/Feature-oriented%20positioning
Feature-oriented positioning (FOP) is a method of precise movement of the scanning microscope probe across the surface under investigation. With this method, surface features (objects) are used as reference points for microscope probe attachment. In essence, FOP is a simplified variant of feature-oriented scanning (FOS). With FOP, no topographical image of the surface is acquired. Instead, the probe is only moved from feature to feature, from the start surface point A (the neighborhood of the start feature) to the destination point B (the neighborhood of the destination feature), along some route that passes through intermediate features of the surface. The method may also be referred to by another name, object-oriented positioning (OOP).

A distinction is made between "blind" FOP, where the coordinates of the features used for probe movement are unknown in advance, and FOP by an existing feature "map", where the relative coordinates of all features are known, for example because they were obtained during preliminary FOS. Probe movement by a navigation structure is a combination of the two methods.

The FOP method may be used in bottom-up nanofabrication to implement high-precision movement of the nanolithograph/nanoassembler probe along the substrate surface. Moreover, once made along some route, an FOP pass may then be exactly repeated the required number of times. After moving to the specified position, an influence on the surface or a manipulation of a surface object (nanoparticle, molecule, atom) is performed. All the operations are carried out in automatic mode. With multiprobe instruments, the FOP approach allows any number of specialized technological and/or analytical probes to be applied successively to a surface feature/object or to a specified point in the feature/object neighborhood. That opens the prospect of building complex nanofabrication processes consisting of a large number of technological, measuring, and checking operations.

See also
Feature-oriented scanning

References

External links
Feature-oriented positioning, Research section, Lapshin's Personal Page on SPM & Nanotechnology

Microscopes
Nanotechnology
Scanning probe microscopy
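As a highly simplified illustration of the hop-by-feature idea, the Python sketch below moves a probe along an ordered route of features, re-attaching to each one as a local reference point; the stubbed function stands in for instrument-specific scanning commands, and all names and coordinates are hypothetical.

# Hedged, simplified sketch of feature-oriented positioning (FOP): the probe
# hops from a start feature to a destination feature through intermediate
# features. A real instrument would replace the stub with hardware commands
# that rescan each feature's neighborhood and re-center the probe on it.

def attach_to_feature(probe, feature):
    # Stub: re-center the probe on the feature to cancel drift (assumption).
    probe["position"] = feature

def move_by_features(route):
    """Move along an ordered list of (x, y) features; return the logged path."""
    probe = {"position": route[0]}
    path = []
    for feature in route:
        attach_to_feature(probe, feature)
        path.append(probe["position"])  # the log allows exact repetition later
    return path

route = [(0.0, 0.0), (3.2, 1.1), (7.5, 2.4)]  # A, intermediate, B (arbitrary units)
print(move_by_features(route))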
Feature-oriented positioning
[ "Chemistry", "Materials_science", "Technology", "Engineering" ]
429
[ "Materials science", "Measuring instruments", "Microscopes", "Scanning probe microscopy", "Microscopy", "Nanotechnology" ]
9,593,892
https://en.wikipedia.org/wiki/Carbamoyl%20aspartic%20acid
Carbamoyl aspartic acid (or ureidosuccinic acid) is a carbamate derivative, serving as an intermediate in pyrimidine biosynthesis.

References

Ureas
Dicarboxylic acids
Carbamoyl aspartic acid
[ "Chemistry" ]
50
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs", "Ureas" ]
9,596,101
https://en.wikipedia.org/wiki/Shared%20mesh
A shared mesh (also known as a 'traditional' or 'best effort' mesh) is a wireless mesh network that uses a single radio to communicate via mesh backhaul links with all the neighboring nodes in the mesh. This is a first-generation mesh in which the total available bandwidth of the radio channel is 'shared' between all the neighboring nodes in the mesh. The capacity of the channel is further consumed by traffic being forwarded from one node to the next in the mesh, reducing the end-to-end traffic that can be passed.

Because bandwidth is shared amongst all nodes in the mesh, and because every link in the mesh uses additional capacity, this type of network offers much lower end-to-end transmission rates than a switched mesh, and its capacity degrades as nodes are added to the mesh.

Wireless mesh nodes typically include both mesh backhaul links and client access. A dual-radio shared mesh node uses separate access and mesh backhaul radios; only the mesh backhaul radio is shared. In a single-radio shared mesh node, access and mesh backhaul are collapsed onto a single radio, so the available bandwidth is shared between both the mesh links and client access, further reducing the end-to-end traffic available.

See also
Wireless mesh network
IEEE 802.11
Mesh networking
Switched mesh
Wi-Fi
Wireless LAN
802.16

External links
White Paper: Capacity of Wireless Mesh Networks Understanding single radio, dual radio and multi radio wireless mesh networks.
What is Third Generation Mesh? Review of three generations of mesh networking architectures.
Ugly Truths About Mesh Networks Performance issues of First and Second Generation Mesh products.

Wireless networking
Network topology
Radio technology
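A rough, hedged model of the capacity penalty: on a single shared channel every forwarding hop re-transmits the same payload, so end-to-end throughput falls roughly in proportion to hop count. The sketch below is a deliberate simplification; it ignores contention overhead (which makes things worse) and spatial reuse (which can help), and the 54 Mbit/s figure is just an illustrative raw rate.

# Hedged, simplified model: in a shared single-radio mesh every hop repeats
# the payload on the same channel, so usable end-to-end throughput is roughly
# the raw channel rate divided by the number of hops.

def end_to_end_throughput(channel_rate_mbps, hops):
    return channel_rate_mbps / hops

for hops in (1, 2, 4, 8):
    print(hops, "hops ->", end_to_end_throughput(54.0, hops), "Mbit/s")
# 54, 27, 13.5, 6.75 -> capacity degrades as nodes (and hops) are added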
Shared mesh
[ "Mathematics", "Technology", "Engineering" ]
327
[ "Information and communications technology", "Telecommunications engineering", "Network topology", "Wireless networking", "Computer networks engineering", "Radio technology", "Topology" ]
9,596,904
https://en.wikipedia.org/wiki/F%C3%BCrst-Plattner%20Rule
The Fürst-Plattner rule (also known as the trans-diaxial effect) describes the stereoselective addition of nucleophiles to cyclohexene derivatives.

Introduction
Cyclohexene derivatives, such as imines, epoxides, and halonium ions, react with nucleophiles in a stereoselective fashion, affording trans-diaxial addition products. The term "trans-diaxial addition" describes the mechanism of the addition; however, the products are likely to equilibrate by ring flip to the lower-energy conformer, placing the new substituents in the equatorial position.

Mechanism and Stereochemistry
Epoxidation of a substituted cyclohexene affords a product where the R group resides in the pseudo-equatorial position. Nucleophilic ring-opening of this class of epoxides can occur by attack at either the C1- or the C2-position. It is well known that nucleophilic ring-opening reactions of these substrates can proceed with excellent regioselectivity. The Fürst-Plattner rule attributes this regiochemical control to a large preference for the reaction pathway that follows the more stable chair-like transition state (attack at the C1-position) over the one proceeding through the disfavored twist-boat-like transition state (attack at the C2-position). Attack at the C1-position proceeds over a reaction barrier that is lower by around 5 kcal/mol, depending on the specific conditions. Similarly, the Fürst-Plattner rule applies to nucleophilic additions to imines and halonium ions.

Examples

Epoxide addition
A recent example of the Fürst-Plattner rule can be seen in the work of Chrisman et al., where limonene is epoxidized to give a 1:1 mixture of diastereomers. Exposure to a nitrogen nucleophile in water at reflux provides only one ring-opened product in 75–85% ee.

Mechanism
The half-chair conformation indicates that attack occurs stereoselectively on the diastereomer where the electrophilic carbon can receive the nucleophile and proceed to the favored chair conformation.

Woodward's Reserpine Synthesis
Although not well understood at the time, the Fürst-Plattner rule played a critical role in R. B. Woodward's synthesis of reserpine. The problematic stereocenter is highlighted in red, below. Woodward's synthetic strategy used a Bischler-Napieralski reaction to form the tetrahydrocarbazole portion of reserpine. The subsequent imine intermediate was treated with sodium borohydride, affording the wrong stereoisomer due to the Fürst-Plattner effect. Examining the intermediate structure shows that the hydride preferentially added to the 3-carbon via the top face of the imine to avoid an unfavorable twist-boat intermediate. Unfortunately, this outcome required Woodward to perform several additional steps to complete the total synthesis of reserpine with the proper stereochemistry.

References

Stereochemistry
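As a rough, hedged transition-state-theory estimate of what such a barrier difference implies (assuming ΔΔG‡ ≈ 5 kcal/mol and T = 298 K; these numbers are illustrative, not taken from the original studies), the rate ratio for the two attack pathways is

\frac{k_{\mathrm{C1}}}{k_{\mathrm{C2}}} \approx \exp\!\left(\frac{\Delta\Delta G^{\ddagger}}{RT}\right) = \exp\!\left(\frac{5000\ \mathrm{cal\ mol^{-1}}}{(1.987\ \mathrm{cal\ mol^{-1}\ K^{-1}})(298\ \mathrm{K})}\right) \approx 5 \times 10^{3},

consistent with the essentially exclusive regioselectivity described above.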
Fürst-Plattner Rule
[ "Physics", "Chemistry" ]
648
[ "Spacetime", "Stereochemistry", "Space", "nan" ]
9,597,496
https://en.wikipedia.org/wiki/Ataxia%20telangiectasia%20and%20Rad3%20related
Serine/threonine-protein kinase ATR, also known as ataxia telangiectasia and Rad3-related protein (ATR) or FRAP-related protein 1 (FRP1), is an enzyme that, in humans, is encoded by the ATR gene. It is a large kinase of about 301.66 kDa. ATR belongs to the phosphatidylinositol 3-kinase-related kinase protein family. ATR is activated in response to single-stranded DNA and works with ATM to ensure genome integrity.

Function
ATR is a serine/threonine-specific protein kinase that is involved in sensing DNA damage and activating the DNA damage checkpoint, leading to cell cycle arrest in eukaryotes. ATR is activated in response to persistent single-stranded DNA, which is a common intermediate formed during DNA damage detection and repair. Single-stranded DNA occurs at stalled replication forks and as an intermediate in DNA repair pathways such as nucleotide excision repair and homologous recombination repair. ATR is activated during more persistent problems with DNA damage; within cells, most DNA damage is repaired quickly and faithfully through other mechanisms.

ATR works with a partner protein called ATRIP to recognize single-stranded DNA coated with RPA. RPA binds specifically to ATRIP, which then recruits ATR through an ATR-activating domain (AAD) on its surface. This association of ATR with RPA is how ATR specifically binds to and works on single-stranded DNA; this was demonstrated through experiments with cells that had mutated nucleotide excision pathways. In these cells, ATR could not be activated after UV damage, showing the need for single-stranded DNA for ATR activity. The acidic alpha-helix of ATRIP binds to a basic cleft in the large RPA subunit to create a site for effective ATR binding.

Many other proteins recruited to the site of ssDNA are needed for ATR activation. While RPA recruits ATRIP, the RAD9-RAD1-HUS1 (9-1-1) complex is loaded onto the DNA adjacent to the ssDNA; though ATRIP and the 9-1-1 complex are recruited independently to the site of DNA damage, they interact extensively through massive phosphorylation once colocalized. The 9-1-1 complex, a ring-shaped molecule related to PCNA, allows the accumulation of ATR in a damage-specific way. For effective association of the 9-1-1 complex with DNA, RAD17-RFC is also needed. This complex also brings in topoisomerase binding protein 1 (TOPBP1), which binds ATR through a highly conserved AAD. TOPBP1 binding is dependent on the phosphorylation of the Ser387 residue of the RAD9 subunit of the 9-1-1 complex. This is likely one of the main functions of the 9-1-1 complex within this DNA damage response.

Another important protein that binds ATR was identified by Haahr et al. in 2016: Ewing's tumor-associated antigen 1 (ETAA1). This protein works in parallel with TOPBP1 to activate ATR through a conserved AAD. It is hypothesized that these two pathways, which work independently of each other, divide labor and possibly respond to differential needs within the cell: one pathway may be most active when ATR is carrying out normal support for replicating cells, and the other may be active when the cell is under more extreme replicative stress.

It is not just ssDNA that activates ATR, though the existence of RPA-associated ssDNA is important. Rather, ATR activation is heavily dependent on all the proteins described above, which colocalize around the site of DNA damage.
An experiment in which RAD9, ATRIP, and TOPBP1 were overexpressed showed that these proteins alone were enough to activate ATR in the absence of ssDNA, demonstrating their importance in triggering this pathway.

Once ATR is activated, it phosphorylates Chk1, initiating a signal transduction cascade that culminates in cell cycle arrest. ATR activates Chk1 through a claspin intermediate which binds the two proteins together. This claspin intermediate needs to be phosphorylated at two sites in order to do this job, something that can be carried out by ATR but is most likely under the control of some other kinase. This response, mediated by Chk1, is essential to regulating replication within a cell; through the Chk1-CDC25 pathway, which affects levels of CDC2, this response is thought to reduce the rate of DNA synthesis in the cell and inhibit origin firing during replication.

In addition to its role in activating the DNA damage checkpoint, ATR is thought to function in unperturbed DNA replication. The response is dependent on how much ssDNA accumulates at stalled replication forks. ATR is activated during every S phase, even in normally cycling cells, as it monitors replication forks, triggering repair and halting cell cycling when needed. This means that ATR is active at normal, background levels within all healthy cells. Many points in the genome are susceptible to stalling during replication due to complex DNA sequences or endogenous damage that occurs during replication. In these cases, ATR works to stabilize the forks so that DNA replication can proceed as it should.

ATR is related to a second checkpoint-activating kinase, ATM, which is activated by double-strand breaks in DNA or chromatin disruption. ATR has also been shown to act on double-strand breaks (DSBs), providing a slower response to the end resections that commonly occur at DSBs and leave long stretches of ssDNA (which then go on to signal ATR). In this circumstance, ATM recruits ATR and they work in partnership to respond to the DNA damage. They are responsible for the "slow" DNA damage response that can eventually trigger p53 in healthy cells and thus lead to cell cycle arrest or apoptosis.

ATR as an essential protein
Mutations in ATR are very uncommon. Total knockout of ATR causes early death of mouse embryos, showing that the protein has essential life functions. It is hypothesized that this could be related to its likely activity in stabilizing Okazaki fragments on the lagging strands of DNA during replication, or to its job of stabilizing stalled replication forks, which occur naturally. In this setting, ATR is essential to preventing fork collapse, which would lead to extensive double-strand breakage across the genome. The accumulation of these double-strand breaks could lead to cell death.

Clinical significance
Mutations in ATR are responsible for Seckel syndrome, a rare human disorder that shares some characteristics with ataxia telangiectasia, which results from ATM mutation. ATR is also linked to familial cutaneous telangiectasia and cancer syndrome.

Inhibitors
ATR/Chk1 inhibitors can potentiate the effect of DNA cross-linking agents such as cisplatin and nucleoside analogues such as gemcitabine. The first clinical trials using inhibitors of ATR were initiated by AstraZeneca, preferably in ATM-mutated chronic lymphocytic leukaemia (CLL), prolymphocytic leukaemia (PLL) or B-cell lymphoma patients, and by Vertex Pharmaceuticals in advanced solid tumours.
ATR is an attractive target in these solid tumors, as many tumors depend on an activated DNA damage response. Such tumor cells rely on pathways like ATR to reduce the replicative stress that comes with uncontrolled division, and the same cells could therefore be very susceptible to ATR knockout. In ATR-Seckel mice, after exposure to cancer-causing agents, the DNA damage response pathway actually conferred resistance to tumor development (6).

After many screens to identify specific ATR inhibitors, four have entered phase I or phase II clinical trials since 2013: AZD6738, M6620 (VX-970), BAY1895344 (elimusertib), and M4344 (VX-803) (10). These ATR inhibitors work by pushing cells into p53-independent apoptosis, as well as by forcing mitotic entry that leads to mitotic catastrophe.

One study by Flynn et al. found that ATR inhibitors work especially well in cancer cells which rely on the alternative lengthening of telomeres (ALT) pathway. This is due to the presence of RPA when ALT is being established, which recruits ATR to regulate homologous recombination. The ALT pathway proved extremely fragile under ATR inhibition, so using these inhibitors to target the pathway that keeps such cancer cells immortal could provide high specificity against otherwise stubborn cancer cells. Examples include berzosertib.

Aging
Deficiency of ATR expression in adult mice leads to the appearance of age-related alterations such as hair graying, hair loss, kyphosis (rounded upper back), osteoporosis and thymic involution. Furthermore, there are dramatic reductions with age in tissue-specific stem and progenitor cells, and exhaustion of tissue renewal and homeostatic capacity. There was also an early and permanent loss of spermatogenesis. However, there was no significant increase in tumor risk.

Seckel syndrome
In humans, hypomorphic mutations (partial loss of gene function) in the ATR gene are linked to Seckel syndrome, an autosomal recessive condition characterized by proportionate dwarfism, developmental delay, marked microcephaly, dental malocclusion and thoracic kyphosis. A senile or progeroid appearance has also been frequently noted in Seckel patients. For many years, the mutation found in the two families first diagnosed with Seckel syndrome was the only mutation known to cause the disease. In 2012, Ogi and colleagues discovered multiple new mutations that also cause the disease. One form of the disease, which involves mutation in the gene encoding the ATRIP partner protein, is considered more severe than the form that was first discovered. This mutation led to severe microcephaly and growth delay, microtia, micrognathia, dental crowding, and skeletal problems (evidenced in unique patellar growth). Sequencing revealed that this ATRIP mutation most likely arose from mis-splicing, which produced transcripts lacking exon 2. The cells also had a nonsense mutation in exon 12 of the ATR gene, which led to a truncated ATR protein. Both of these mutations resulted in lower levels of ATR and ATRIP than in wild-type cells, leading to an insufficient DNA damage response and the severe form of Seckel syndrome noted above. Researchers also found that heterozygous mutations in ATR can cause Seckel syndrome: two novel mutations in one copy of the ATR gene caused under-expression of both ATR and ATRIP.
Homologous recombinational repair
Somatic cells of mice deficient in ATR have a decreased frequency of homologous recombination and an increased level of chromosomal damage. This finding implies that ATR is required for homologous recombinational repair of endogenous DNA damage.

Drosophila mitosis and meiosis
Mei-41 is the Drosophila ortholog of ATR. During mitosis in Drosophila, DNA damage caused by exogenous agents is repaired by a homologous recombination process that depends on mei-41 (ATR). Mutants defective in mei-41 (ATR) have increased sensitivity to killing by exposure to the DNA-damaging agents UV and methyl methanesulfonate. Deficiency of mei-41 (ATR) also causes reduced spontaneous allelic recombination (crossing over) during meiosis, suggesting that wild-type mei-41 (ATR) is employed in recombinational repair of spontaneous DNA damage during meiosis.

Interactions
Ataxia telangiectasia and Rad3-related protein has been shown to interact with: BRCA1, CHD4, HDAC2, MSH2, p53, RAD17, and RHEB.

See also
Ceralasertib, investigational new drug

References

Further reading

External links
Drosophila meiotic-41 - The Interactive Fly

Proteins
EC 2.7.11
Ataxia telangiectasia and Rad3 related
[ "Chemistry" ]
2,621
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
4,249,694
https://en.wikipedia.org/wiki/Coupling%20%28electronics%29
In electronics, electric power and telecommunication, coupling is the transfer of electrical energy from one circuit to another, or between parts of a circuit. Coupling can be deliberate as part of the function of the circuit, or it may be undesirable, for instance due to coupling to stray fields. For example, energy is transferred from a power source to an electrical load by means of conductive coupling, which may be either resistive or direct coupling. An AC potential may be transferred from one circuit segment to another having a DC potential by use of a capacitor. Electrical energy may be transferred from one circuit segment to another segment with different impedance by use of a transformer; this is known as impedance matching. These are examples of electrostatic and electrodynamic inductive coupling.

Types
Electrical conduction:
Direct coupling, also called conductive coupling and galvanic coupling
Resistive conduction
Atmospheric plasma channel coupling
Electromagnetic induction:
Electrodynamic induction — commonly called inductive coupling, also magnetic coupling
Capacitive coupling
Evanescent wave coupling
Electromagnetic radiation:
Radio waves — wireless telecommunications
Electromagnetic interference (EMI) — sometimes called radio frequency interference (RFI), is unwanted coupling. Electromagnetic compatibility (EMC) requires techniques to avoid such unwanted coupling, such as electromagnetic shielding.
Microwave power transmission
Other kinds of energy coupling:
Acoustic coupler

See also
Antenna noise temperature
Coupling loss
Aperture-to-medium coupling loss
Coupling coefficient of resonators
Directional coupler
Equilibrium length
Optocoupler
Fiber-optic coupling
Loading coil
Shield
List of electronics topics
AC Coupling
Impedance matching
Impedance bridging
Decoupling
Crosstalk
Wireless power transfer

References

Communication circuits
Electromagnetic compatibility
Electronics
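A hedged worked example of capacitive (AC) coupling as described above: a series coupling capacitor feeding a resistive load forms a first-order high-pass filter with corner frequency f_c = 1/(2πRC), so DC is blocked while AC above f_c passes. The component values below are illustrative, not from any particular design.

# Hedged example: AC coupling via a series capacitor into a resistive load.
# Corner frequency of the resulting first-order high-pass: 1 / (2*pi*R*C).

import math

def corner_frequency_hz(r_ohms, c_farads):
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Illustrative: a 10 kOhm load with a 1 uF coupling capacitor.
print(corner_frequency_hz(10e3, 1e-6))  # ~15.9 Hz -> audio passes, DC blocked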
Coupling (electronics)
[ "Engineering" ]
341
[ "Electromagnetic compatibility", "Radio electronics", "Telecommunications engineering", "Electrical engineering", "Communication circuits" ]
4,250,298
https://en.wikipedia.org/wiki/Auxiliary%20field
In physics, and especially quantum field theory, an auxiliary field is one whose equations of motion admit a single solution. Therefore, the Lagrangian describing such a field contains an algebraic quadratic term and an arbitrary linear term, while it contains no kinetic terms (derivatives of the field):

\mathcal{L}_\text{aux} = \frac{1}{2}b^2 + b\,f(\phi).

The equation of motion for b is

b(\phi) = -f(\phi),

and the Lagrangian becomes

\mathcal{L}_\text{aux} = -\frac{1}{2}f(\phi)^2.

Auxiliary fields generally do not propagate, and hence the content of any theory can remain unchanged in many circumstances by adding such fields by hand. If we have an initial Lagrangian \mathcal{L}_0 describing a field \phi, then the Lagrangian describing both fields is

\mathcal{L} = \mathcal{L}_0(\phi) + \mathcal{L}_\text{aux} = \mathcal{L}_0(\phi) - \frac{1}{2}f(\phi)^2 + \frac{1}{2}\bigl(b + f(\phi)\bigr)^2.

Therefore, auxiliary fields can be employed to cancel quadratic terms in \phi in \mathcal{L}_0 and linearize the action \mathcal{S}.

Examples of auxiliary fields are the complex scalar field F in a chiral superfield, the real scalar field D in a vector superfield, the scalar field B in BRST and the auxiliary scalar field introduced in the Hubbard–Stratonovich transformation.

The quantum mechanical effect of adding an auxiliary field is the same as the classical one, since the path integral over such a field is Gaussian. To wit:

\int_{-\infty}^{\infty} db\; e^{-\frac{1}{2}b^2 + bf} = \sqrt{2\pi}\, e^{\frac{1}{2}f^2}.

See also
Bosonic field
Fermionic field
Composite field

References

Quantum field theory
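As a hedged illustration of the linearization just described (a Hubbard–Stratonovich-type manipulation; the coupling g and the field name σ are chosen here purely for illustration), a quartic self-interaction can be traded for a linear coupling to an auxiliary field:

\mathcal{L}_0 = \frac{1}{2}(\partial\phi)^2 - \frac{g}{2}\phi^4
\quad\longrightarrow\quad
\mathcal{L} = \frac{1}{2}(\partial\phi)^2 + \frac{1}{2}\sigma^2 + \sqrt{g}\,\sigma\phi^2.

Substituting the equation of motion \sigma = -\sqrt{g}\,\phi^2 back into \mathcal{L} reproduces the original -\frac{g}{2}\phi^4 term, so the two Lagrangians describe the same dynamics, while the second is only quadratic in \phi at fixed \sigma.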
Auxiliary field
[ "Physics" ]
244
[ "Quantum field theory", "Quantum mechanics" ]
4,250,553
https://en.wikipedia.org/wiki/Gene
In biology, the word gene has two meanings. The Mendelian gene is a basic unit of heredity. The molecular gene is a sequence of nucleotides in DNA that is transcribed to produce a functional RNA. There are two types of molecular genes: protein-coding genes and non-coding genes. During gene expression (the synthesis of RNA or protein from a gene), DNA is first copied into RNA. RNA can be directly functional or be the intermediate template for the synthesis of a protein.

The transmission of genes to an organism's offspring is the basis of the inheritance of phenotypic traits from one generation to the next. These genes make up different DNA sequences, together called a genotype, that is specific to every given individual, within the gene pool of the population of a given species. The genotype, along with environmental and developmental factors, ultimately determines the phenotype of the individual. Most biological traits occur under the combined influence of polygenes (a set of different genes) and gene–environment interactions. Some genetic traits are instantly visible, such as eye color or the number of limbs; others are not, such as blood type, the risk for specific diseases, or the thousands of basic biochemical processes that constitute life.

A gene can acquire mutations in its sequence, leading to different variants, known as alleles, in the population. These alleles encode slightly different versions of a gene, which may cause different phenotypical traits. Genes evolve due to natural selection or survival of the fittest and genetic drift of the alleles.

Definitions
There are many different ways to use the term "gene" based on different aspects of their inheritance, selection, biological function, or molecular structure, but most of these definitions fall into two categories, the Mendelian gene or the molecular gene.

The Mendelian gene is the classical gene of genetics and it refers to any heritable trait. This is the gene described in The Selfish Gene. More thorough discussions of this version of a gene can be found in the articles Genetics and Gene-centered view of evolution.

The molecular gene definition is more commonly used across biochemistry, molecular biology, and most of genetics—the gene that is described in terms of DNA sequence. There are many different definitions of this gene—some of which are misleading or incorrect.

Very early work in the field that became molecular genetics suggested the concept that one gene makes one protein (originally 'one gene – one enzyme'). However, genes that produce repressor RNAs were proposed in the 1950s and by the 1960s, textbooks were using molecular gene definitions that included those that specified functional RNA molecules such as ribosomal RNA and tRNA (noncoding genes) as well as protein-coding genes. This idea of two kinds of genes is still part of the definition of a gene in most textbooks. For example,

The important parts of such definitions are: (1) that a gene corresponds to a transcription unit; (2) that genes produce both mRNA and noncoding RNAs; and (3) regulatory sequences control gene expression but are not part of the gene itself. However, there is one other important part of the definition and it is emphasized in Kostas Kampourakis' book Making Sense of Genes.

The emphasis on function is essential because there are stretches of DNA that produce non-functional transcripts and they do not qualify as genes.
These include obvious examples such as transcribed pseudogenes as well as less obvious examples such as junk RNA produced as noise due to transcription errors. In order to qualify as a true gene, by this definition, one has to prove that the transcript has a biological function.

Early speculations on the size of a typical gene were based on high-resolution genetic mapping and on the size of proteins and RNA molecules. A length of 1500 base pairs seemed reasonable at the time (1965). This was based on the idea that the gene was the DNA that was directly responsible for production of the functional product. The discovery of introns in the 1970s meant that many eukaryotic genes were much larger than the size of the functional product would imply. Typical mammalian protein-coding genes, for example, are about 62,000 base pairs in length (transcribed region) and since there are about 20,000 of them they occupy about 35–40% of the mammalian genome (including the human genome).

In spite of the fact that both protein-coding genes and noncoding genes have been known for more than 50 years, there are still a number of textbooks, websites, and scientific publications that define a gene as a DNA sequence that specifies a protein. In other words, the definition is restricted to protein-coding genes. Here is an example from a recent article in American Scientist.

This restricted definition is so common that it has spawned many recent articles that criticize this "standard definition" and call for a new expanded definition that includes noncoding genes. However, some modern writers still do not acknowledge noncoding genes although this so-called "new" definition has been recognised for more than half a century.

Although some definitions can be more broadly applicable than others, the fundamental complexity of biology means that no definition of a gene can capture all aspects perfectly. Not all genomes are DNA (e.g. RNA viruses), bacterial operons are multiple protein-coding regions transcribed into single large mRNAs, alternative splicing enables a single genomic region to encode multiple distinct products, and trans-splicing concatenates mRNAs from shorter coding sequences across the genome. Since molecular definitions exclude elements such as introns, promoters, and other regulatory regions, these are instead thought of as "associated" with the gene and affect its function.

An even broader operational definition is sometimes used to encompass the complexity of these diverse phenomena, where a gene is defined as a union of genomic sequences encoding a coherent set of potentially overlapping functional products. This definition categorizes genes by their functional products (proteins or RNA) rather than their specific DNA loci, with regulatory elements classified as gene-associated regions.

History

Discovery of discrete inherited units
The existence of discrete inheritable units was first suggested by Gregor Mendel (1822–1884). From 1857 to 1864, in Brno, Austrian Empire (today's Czech Republic), he studied inheritance patterns in 8000 common edible pea plants, tracking distinct traits from parent to offspring. He described these mathematically as 2^n combinations, where n is the number of differing characteristics in the original peas. Although he did not use the term gene, he explained his results in terms of discrete inherited units that give rise to observable physical characteristics.
This description prefigured Wilhelm Johannsen's distinction between genotype (the genetic material of an organism) and phenotype (the observable traits of that organism). Mendel was also the first to demonstrate independent assortment, the distinction between dominant and recessive traits, the distinction between a heterozygote and homozygote, and the phenomenon of discontinuous inheritance.

Prior to Mendel's work, the dominant theory of heredity was one of blending inheritance, which suggested that each parent contributed fluids to the fertilization process and that the traits of the parents blended and mixed to produce the offspring. Charles Darwin developed a theory of inheritance he termed pangenesis, from Greek pan ("all, whole") and genesis ("birth") / genos ("origin"). Darwin used the term gemmule to describe hypothetical particles that would mix during reproduction.

Mendel's work went largely unnoticed after its first publication in 1866, but was rediscovered in the late 19th century by Hugo de Vries, Carl Correns, and Erich von Tschermak, who (claimed to have) reached similar conclusions in their own research. Specifically, in 1889, Hugo de Vries published his book Intracellular Pangenesis, in which he postulated that different characters have individual hereditary carriers and that inheritance of specific traits in organisms comes in particles. De Vries called these units "pangenes" (Pangens in German), after Darwin's 1868 pangenesis theory.

Twenty years later, in 1909, Wilhelm Johannsen introduced the term "gene" (inspired by the ancient Greek: γόνος, gonos, meaning offspring and procreation) and, in 1906, William Bateson, that of "genetics", while Eduard Strasburger, among others, still used the term "pangene" for the fundamental physical and functional unit of heredity.

Discovery of DNA
Advances in understanding genes and inheritance continued throughout the 20th century. Deoxyribonucleic acid (DNA) was shown to be the molecular repository of genetic information by experiments in the 1940s to 1950s. The structure of DNA was studied by Rosalind Franklin and Maurice Wilkins using X-ray crystallography, which led James D. Watson and Francis Crick to publish a model of the double-stranded DNA molecule whose paired nucleotide bases indicated a compelling hypothesis for the mechanism of genetic replication.

In the early 1950s the prevailing view was that the genes in a chromosome acted like discrete entities arranged like beads on a string. The experiments of Benzer using mutants defective in the rII region of bacteriophage T4 (1955–1959) showed that individual genes have a simple linear structure and are likely to be equivalent to a linear section of DNA.

Collectively, this body of research established the central dogma of molecular biology, which states that proteins are translated from RNA, which is transcribed from DNA. This dogma has since been shown to have exceptions, such as reverse transcription in retroviruses. The modern study of genetics at the level of DNA is known as molecular genetics.

In 1972, Walter Fiers and his team were the first to determine the sequence of a gene: that of bacteriophage MS2 coat protein. The subsequent development of chain-termination DNA sequencing in 1977 by Frederick Sanger improved the efficiency of sequencing and turned it into a routine laboratory tool. An automated version of the Sanger method was used in early phases of the Human Genome Project.
Modern synthesis and its successors
The theories developed in the early 20th century to integrate Mendelian genetics with Darwinian evolution are called the modern synthesis, a term introduced by Julian Huxley. This view of evolution was emphasized by George C. Williams' gene-centric view of evolution. He proposed that the Mendelian gene is a unit of natural selection with the definition: "that which segregates and recombines with appreciable frequency." Related ideas emphasizing the centrality of Mendelian genes and the importance of natural selection in evolution were popularized by Richard Dawkins.

The development of the neutral theory of evolution in the late 1960s led to the recognition that random genetic drift is a major player in evolution and that neutral theory should be the null hypothesis of molecular evolution. This led to the construction of phylogenetic trees and the development of the molecular clock, which is the basis of all dating techniques using DNA sequences. These techniques are not confined to molecular gene sequences but can be used on all DNA segments in the genome.

Molecular basis

DNA
The vast majority of organisms encode their genes in long strands of DNA (deoxyribonucleic acid). DNA consists of a chain made from four types of nucleotide subunits, each composed of: a five-carbon sugar (2-deoxyribose), a phosphate group, and one of the four bases adenine, cytosine, guanine, and thymine.

Two chains of DNA twist around each other to form a DNA double helix with the phosphate–sugar backbone spiralling around the outside, and the bases pointing inward with adenine base pairing to thymine and guanine to cytosine. The specificity of base pairing occurs because adenine and thymine align to form two hydrogen bonds, whereas cytosine and guanine form three hydrogen bonds. The two strands in a double helix must, therefore, be complementary, with their sequence of bases matching such that the adenines of one strand are paired with the thymines of the other strand, and so on.

Due to the chemical composition of the pentose residues of the bases, DNA strands have directionality. One end of a DNA polymer contains an exposed hydroxyl group on the deoxyribose; this is known as the 3' end of the molecule. The other end contains an exposed phosphate group; this is the 5' end. The two strands of a double-helix run in opposite directions. Nucleic acid synthesis, including DNA replication and transcription, occurs in the 5'→3' direction, because new nucleotides are added via a dehydration reaction that uses the exposed 3' hydroxyl as a nucleophile.

The expression of genes encoded in DNA begins by transcribing the gene into RNA, a second type of nucleic acid that is very similar to DNA, but whose monomers contain the sugar ribose rather than deoxyribose. RNA also contains the base uracil in place of thymine. RNA molecules are less stable than DNA and are typically single-stranded. Genes that encode proteins are composed of a series of three-nucleotide sequences called codons, which serve as the "words" in the genetic "language". The genetic code specifies the correspondence during protein translation between codons and amino acids. The genetic code is nearly the same for all known organisms.

Chromosomes
The total complement of genes in an organism or cell is known as its genome, which may be stored on one or more chromosomes. A chromosome consists of a single, very long DNA helix on which thousands of genes are encoded.
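The base-pairing and antiparallel-strand rules described under DNA above can be captured in a few lines of Python (a minimal sketch; the sequence is arbitrary):

# Hedged illustration of the base-pairing rules described above: given one
# strand read 5'->3', the complementary strand is obtained by pairing
# A<->T and G<->C and reversing (the two strands are antiparallel).

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand_5to3):
    return "".join(PAIR[base] for base in reversed(strand_5to3))

print(reverse_complement("ATGCGT"))  # ACGCAT, read 5'->3' on the other strand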
The region of the chromosome at which a particular gene is located is called its locus. Each locus contains one allele of a gene; however, members of a population may have different alleles at the locus, each with a slightly different gene sequence.

The majority of eukaryotic genes are stored on a set of large, linear chromosomes. The chromosomes are packed within the nucleus in complex with storage proteins called histones to form a unit called a nucleosome. DNA packaged and condensed in this way is called chromatin. The manner in which DNA is stored on the histones, as well as chemical modifications of the histone itself, regulate whether a particular region of DNA is accessible for gene expression. In addition to genes, eukaryotic chromosomes contain sequences involved in ensuring that the DNA is copied without degradation of end regions and sorted into daughter cells during cell division: replication origins, telomeres, and the centromere. Replication origins are the sequence regions where DNA replication is initiated to make two copies of the chromosome. Telomeres are long stretches of repetitive sequences that cap the ends of the linear chromosomes and prevent degradation of coding and regulatory regions during DNA replication. The length of the telomeres decreases each time the genome is replicated and has been implicated in the aging process. The centromere is required for binding spindle fibres to separate sister chromatids into daughter cells during cell division.

Prokaryotes (bacteria and archaea) typically store their genomes on a single, large, circular chromosome. Similarly, some eukaryotic organelles contain a remnant circular chromosome with a small number of genes. Prokaryotes sometimes supplement their chromosome with additional small circles of DNA called plasmids, which usually encode only a few genes and are transferable between individuals. For example, the genes for antibiotic resistance are usually encoded on bacterial plasmids and can be passed between individual cells, even those of different species, via horizontal gene transfer.

Whereas the chromosomes of prokaryotes are relatively gene-dense, those of eukaryotes often contain regions of DNA that serve no obvious function. Simple single-celled eukaryotes have relatively small amounts of such DNA, whereas the genomes of complex multicellular organisms, including humans, contain an absolute majority of DNA without an identified function. This DNA has often been referred to as "junk DNA". However, more recent analyses suggest that, although protein-coding DNA makes up barely 2% of the human genome, about 80% of the bases in the genome may be expressed, so the term "junk DNA" may be a misnomer.

Structure and function

Structure
The structure of a protein-coding gene consists of many elements of which the actual protein coding sequence is often only a small part. These include introns and untranslated regions of the mature mRNA. Noncoding genes can also contain introns that are removed during processing to produce the mature functional RNA.

All genes are associated with regulatory sequences that are required for their expression. First, genes require a promoter sequence. The promoter is recognized and bound by transcription factors that recruit and help RNA polymerase bind to the region to initiate transcription. The recognition typically occurs as a consensus sequence like the TATA box. A gene can have more than one promoter, resulting in messenger RNAs (mRNA) that differ in how far they extend in the 5' end.
Highly transcribed genes have "strong" promoter sequences that form strong associations with transcription factors, thereby initiating transcription at a high rate. Other genes have "weak" promoters that form weak associations with transcription factors and initiate transcription less frequently. Eukaryotic promoter regions are much more complex and difficult to identify than prokaryotic promoters.

Additionally, genes can have regulatory regions many kilobases upstream or downstream of the gene that alter expression. These act by binding to transcription factors which then cause the DNA to loop so that the regulatory sequence (and bound transcription factor) become close to the RNA polymerase binding site. For example, enhancers increase transcription by binding an activator protein which then helps to recruit the RNA polymerase to the promoter; conversely, silencers bind repressor proteins and make the DNA less available for RNA polymerase.

The mature messenger RNA produced from protein-coding genes contains untranslated regions at both ends which contain binding sites for ribosomes, RNA-binding proteins, and miRNA, as well as terminator, start, and stop codons. In addition, most eukaryotic open reading frames contain untranslated introns, which are removed, and exons, which are connected together in a process known as RNA splicing. Finally, the ends of gene transcripts are defined by cleavage and polyadenylation (CPA) sites, where newly produced pre-mRNA gets cleaved and a string of ~200 adenosine monophosphates is added at the 3' end. The poly(A) tail protects mature mRNA from degradation and has other functions, affecting translation, localization, and transport of the transcript from the nucleus. Splicing, followed by CPA, generates the final mature mRNA, which encodes the protein or RNA product. Many noncoding genes in eukaryotes have different transcription termination mechanisms and do not have poly(A) tails.

Many prokaryotic genes are organized into operons, with multiple protein-coding sequences that are transcribed as a unit. The genes in an operon are transcribed as a continuous messenger RNA, referred to as a polycistronic mRNA. The term cistron in this context is equivalent to gene. The transcription of an operon's mRNA is often controlled by a repressor that can occur in an active or inactive state depending on the presence of specific metabolites. When active, the repressor binds to a DNA sequence at the beginning of the operon, called the operator region, and represses transcription of the operon; when the repressor is inactive, transcription of the operon can occur (see e.g. Lac operon). The products of operon genes typically have related functions and are involved in the same regulatory network.

Complexity
Though many genes have simple structures, as with much of biology, others can be quite complex or represent unusual edge-cases. Eukaryotic genes often have introns that are much larger than their exons, and those introns can even have other genes nested inside them. Associated enhancers may be many kilobases away, or even on entirely different chromosomes operating via physical contact between two chromosomes. A single gene can encode multiple different functional products by alternative splicing, and conversely a gene may be split across chromosomes but those transcripts are concatenated back together into a functional sequence by trans-splicing.
It is also possible for overlapping genes to share some of their DNA sequence, either on opposite strands or the same strand (in a different reading frame, or even the same reading frame).

Gene expression
In all organisms, two steps are required to read the information encoded in a gene's DNA and produce the protein it specifies. First, the gene's DNA is transcribed to messenger RNA (mRNA). Second, that mRNA is translated to protein. RNA-coding genes must still go through the first step, but are not translated into protein. The process of producing a biologically functional molecule of either RNA or protein is called gene expression, and the resulting molecule is called a gene product.

Genetic code
The nucleotide sequence of a gene's DNA specifies the amino acid sequence of a protein through the genetic code. Sets of three nucleotides, known as codons, each correspond to a specific amino acid. The principle that three sequential bases of DNA code for each amino acid was demonstrated in 1961 using frameshift mutations in the rIIB gene of bacteriophage T4 (see Crick, Brenner et al. experiment).

Additionally, a "start codon" and three "stop codons" indicate the beginning and end of the protein-coding region. There are 64 possible codons (four possible nucleotides at each of three positions, hence 4³ possible codons) and only 20 standard amino acids; hence the code is redundant and multiple codons can specify the same amino acid. The correspondence between codons and amino acids is nearly universal among all known living organisms.

Transcription
Transcription produces a single-stranded RNA molecule known as messenger RNA, whose nucleotide sequence is complementary to the DNA from which it was transcribed. The mRNA acts as an intermediate between the DNA gene and its final protein product. The gene's DNA is used as a template to generate a complementary mRNA. The mRNA matches the sequence of the gene's DNA coding strand because it is synthesised as the complement of the template strand. Transcription is performed by an enzyme called an RNA polymerase, which reads the template strand in the 3' to 5' direction and synthesizes the RNA from 5' to 3'. To initiate transcription, the polymerase first recognizes and binds a promoter region of the gene. Thus, a major mechanism of gene regulation is the blocking or sequestering of the promoter region, either by tight binding by repressor molecules that physically block the polymerase or by organizing the DNA so that the promoter region is not accessible.

In prokaryotes, transcription occurs in the cytoplasm; for very long transcripts, translation may begin at the 5' end of the RNA while the 3' end is still being transcribed. In eukaryotes, transcription occurs in the nucleus, where the cell's DNA is stored. The RNA molecule produced by the polymerase is known as the primary transcript and undergoes post-transcriptional modifications before being exported to the cytoplasm for translation. One of the modifications performed is the splicing of introns, which are sequences in the transcribed region that do not encode a protein. Alternative splicing mechanisms can result in mature transcripts from the same gene having different sequences and thus coding for different proteins. This is a major form of regulation in eukaryotic cells and also occurs in some prokaryotes.

Translation
Translation is the process by which a mature mRNA molecule is used as a template for synthesizing a new protein.
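A minimal Python sketch of these two steps on a toy sequence, using a deliberately tiny subset of the (nearly universal) codon table for illustration only:

# Hedged sketch of gene expression: transcribe the DNA coding strand into
# mRNA (T -> U), then translate codon by codon until a stop codon. Only a
# handful of codons from the standard genetic code are included here.

CODON_TABLE = {
    "AUG": "Met",  # start codon
    "UUU": "Phe", "GGC": "Gly", "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def transcribe(coding_strand):
    return coding_strand.replace("T", "U")

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate(transcribe("ATGTTTGGCAAATAA")))  # ['Met', 'Phe', 'Gly', 'Lys']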
Translation is carried out by ribosomes, large complexes of RNA and protein responsible for carrying out the chemical reactions to add new amino acids to a growing polypeptide chain by the formation of peptide bonds. The genetic code is read three nucleotides at a time, in units called codons, via interactions with specialized RNA molecules called transfer RNA (tRNA). Each tRNA has three unpaired bases known as the anticodon that are complementary to the codon it reads on the mRNA. The tRNA is also covalently attached to the amino acid specified by the complementary codon. When the tRNA binds to its complementary codon in an mRNA strand, the ribosome attaches its amino acid cargo to the new polypeptide chain, which is synthesized from amino terminus to carboxyl terminus. During and after synthesis, most new proteins must fold to their active three-dimensional structure before they can carry out their cellular functions.

Regulation
Genes are regulated so that they are expressed only when the product is needed, since expression draws on limited resources. A cell regulates its gene expression depending on its external environment (e.g. available nutrients, temperature and other stresses), its internal environment (e.g. cell division cycle, metabolism, infection status), and its specific role if in a multicellular organism. Gene expression can be regulated at any step: from transcriptional initiation, to RNA processing, to post-translational modification of the protein. The regulation of lactose metabolism genes in E. coli (lac operon) was the first such mechanism to be described, in 1961.

RNA genes
A typical protein-coding gene is first copied into RNA as an intermediate in the manufacture of the final protein product. In other cases, the RNA molecules are the actual functional products, as in the synthesis of ribosomal RNA and transfer RNA. Some RNAs known as ribozymes are capable of enzymatic function, while others such as microRNAs and riboswitches have regulatory roles. The DNA sequences from which such RNAs are transcribed are known as non-coding RNA genes.

Some viruses store their entire genomes in the form of RNA, and contain no DNA at all. Because they use RNA to store genes, their cellular hosts may synthesize their proteins as soon as they are infected and without the delay of waiting for transcription. On the other hand, RNA retroviruses, such as HIV, require the reverse transcription of their genome from RNA into DNA before their proteins can be synthesized.

Inheritance
Organisms inherit their genes from their parents. Asexual organisms simply inherit a complete copy of their parent's genome. Sexual organisms have two copies of each chromosome because they inherit one complete set from each parent.

Mendelian inheritance
According to Mendelian inheritance, variations in an organism's phenotype (observable physical and behavioral characteristics) are due in part to variations in its genotype (particular set of genes). Each gene specifies a particular trait, with a different sequence of a gene (alleles) giving rise to different phenotypes. Most eukaryotic organisms (such as the pea plants Mendel worked on) have two alleles for each trait, one inherited from each parent. Alleles at a locus may be dominant or recessive; dominant alleles give rise to their corresponding phenotypes when paired with any other allele for the same trait, whereas recessive alleles give rise to their corresponding phenotype only when paired with another copy of the same allele.
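A minimal sketch of a monohybrid cross between two heterozygotes makes the dominant/recessive bookkeeping concrete (the allele names T and t are purely illustrative; the pea example discussed next uses the same pattern):

# Hedged sketch: cross two Tt heterozygotes and tally offspring genotypes.
# 'T' (dominant) and 't' (recessive) are illustrative allele labels.

from collections import Counter
from itertools import product

def cross(parent1, parent2):
    # Each parent contributes one allele per gamete (Mendelian segregation).
    return Counter("".join(sorted(pair)) for pair in product(parent1, parent2))

genotypes = cross("Tt", "Tt")
print(genotypes)  # Counter({'Tt': 2, 'TT': 1, 'tt': 1}) -> classic 1:2:1
dominant_phenotype = sum(n for g, n in genotypes.items() if "T" in g)
print(dominant_phenotype, ":", genotypes["tt"])  # 3 : 1 phenotype ratio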
When the genotypes of the organisms are known, it is possible to determine which alleles are dominant and which are recessive. For example, if the allele specifying tall stems in pea plants is dominant over the allele specifying short stems, then pea plants that inherit one tall allele from one parent and one short allele from the other parent will also have tall stems. Mendel's work demonstrated that alleles assort independently in the production of gametes, or germ cells, ensuring variation in the next generation. Although Mendelian inheritance remains a good model for many traits determined by single genes (including a number of well-known genetic disorders), it does not include the physical processes of DNA replication and cell division.

DNA replication and cell division
The growth, development, and reproduction of organisms relies on cell division: the process by which a single cell divides into two usually identical daughter cells. This requires first making a duplicate copy of every gene in the genome in a process called DNA replication. The copies are made by specialized enzymes known as DNA polymerases, which "read" one strand of the double-helical DNA, known as the template strand, and synthesize a new complementary strand. Because the DNA double helix is held together by base pairing, the sequence of one strand completely specifies the sequence of its complement; hence only one strand needs to be read by the enzyme to produce a faithful copy. The process of DNA replication is semiconservative; that is, the copy of the genome inherited by each daughter cell contains one original and one newly synthesized strand of DNA. The rate of DNA replication in living cells was first measured as the rate of phage T4 DNA elongation in phage-infected E. coli and found to be impressively rapid. During the period of exponential DNA increase at 37 °C, the rate of elongation was 749 nucleotides per second.

After DNA replication, the cell must physically separate the two genome copies and divide into two distinct membrane-bound cells. In prokaryotes (bacteria and archaea) this usually occurs via a relatively simple process called binary fission, in which each circular genome attaches to the cell membrane and is separated into the daughter cells as the membrane invaginates to split the cytoplasm into two membrane-bound portions. Binary fission is extremely fast compared to the rates of cell division in eukaryotes. Eukaryotic cell division is a more complex process known as the cell cycle; DNA replication occurs during a phase of this cycle known as S phase, whereas the process of segregating chromosomes and splitting the cytoplasm occurs during M phase.

Molecular inheritance
The duplication and transmission of genetic material from one generation of cells to the next is the basis for molecular inheritance and the link between the classical and molecular pictures of genes. Organisms inherit the characteristics of their parents because the cells of the offspring contain copies of the genes in their parents' cells. In asexually reproducing organisms, the offspring will be a genetic copy or clone of the parent organism. In sexually reproducing organisms, a specialized form of cell division called meiosis produces cells called gametes or germ cells that are haploid, or contain only one copy of each gene. The gametes produced by females are called eggs or ova, and those produced by males are called sperm.
Two gametes fuse to form a diploid fertilized egg, a single cell that has two sets of genes, with one copy of each gene from the mother and one from the father. During the process of meiotic cell division, an event called genetic recombination or crossing-over can sometimes occur, in which a length of DNA on one chromatid is swapped with a length of DNA on the corresponding homologous non-sister chromatid. This can result in reassortment of otherwise linked alleles. The Mendelian principle of independent assortment asserts that each of a parent's two genes for each trait will sort independently into gametes; which allele an organism inherits for one trait is unrelated to which allele it inherits for another trait. This is in fact only true for genes that do not reside on the same chromosome, or that are located very far from one another on the same chromosome. The closer two genes lie on the same chromosome, the more closely they will be associated in gametes and the more often they will appear together (known as genetic linkage). Genes that are very close are essentially never separated because it is extremely unlikely that a crossover point will occur between them. Genome The genome is the total genetic material of an organism and includes both the genes and non-coding sequences. Eukaryotic genes can be annotated using tools such as FINDER. Number of genes Genome size, and the number of genes a genome encodes, vary widely between organisms. The smallest genomes occur in viruses and viroids (which act as a single non-coding RNA gene). Conversely, plants can have extremely large genomes, with rice containing >46,000 protein-coding genes. The total number of protein-coding genes (the Earth's proteome) is estimated to be 5 million sequences. Although the number of base-pairs of DNA in the human genome has been known since the 1950s, the estimated number of genes has changed over time as definitions of genes, and methods of detecting them, have been refined. Initial theoretical predictions of the number of human genes in the 1960s and 1970s were based on mutation load estimates and the numbers of mRNAs, and these estimates tended to be about 30,000 protein-coding genes. During the 1990s there were guesstimates of up to 100,000 genes, and early data on detection of mRNAs (expressed sequence tags) suggested more than the traditional value of 30,000 genes that had been reported in the textbooks during the 1980s. The initial draft sequences of the human genome confirmed the earlier predictions of about 30,000 protein-coding genes; however, that estimate has fallen to about 19,000 with the ongoing GENCODE annotation project. The number of noncoding genes is not known with certainty, but the latest estimates from Ensembl suggest 26,000 noncoding genes. Essential genes Essential genes are the set of genes thought to be critical for an organism's survival. This definition assumes the abundant availability of all relevant nutrients and the absence of environmental stress. Only a small portion of an organism's genes are essential. In bacteria, an estimated 250–400 genes are essential for Escherichia coli and Bacillus subtilis, which is less than 10% of their genes. Half of these genes are orthologs in both organisms and are largely involved in protein synthesis. In the budding yeast Saccharomyces cerevisiae the number of essential genes is slightly higher, at 1000 genes (~20% of its genes).
Although the number is more difficult to measure in higher eukaryotes, mice and humans are estimated to have around 2000 essential genes (~10% of their genes). The synthetic organism, Syn 3, has a minimal genome of 473 essential genes and quasi-essential genes (necessary for fast growth), although 149 of these have unknown function. Essential genes include housekeeping genes (critical for basic cell functions) as well as genes that are expressed at different times in the organism's development or life cycle. Housekeeping genes are used as experimental controls when analysing gene expression, since they are constitutively expressed at a relatively constant level. Genetic and genomic nomenclature Gene nomenclature has been established by the HUGO Gene Nomenclature Committee (HGNC), a committee of the Human Genome Organisation, for each known human gene, in the form of an approved gene name and symbol (short-form abbreviation), which can be accessed through a database maintained by HGNC. Symbols are chosen to be unique, and each gene has only one symbol (although approved symbols sometimes change). Symbols are preferably kept consistent with other members of a gene family and with homologs in other species, particularly the mouse due to its role as a common model organism. Genetic engineering Genetic engineering is the modification of an organism's genome through biotechnology. Since the 1970s, a variety of techniques have been developed to specifically add, remove and edit genes in an organism. Recently developed genome engineering techniques use engineered nuclease enzymes to create targeted DNA breaks in a chromosome, so that a gene is disrupted or edited when the break is repaired. The related term synthetic biology is sometimes used to refer to extensive genetic engineering of an organism. Genetic engineering is now a routine research tool with model organisms. For example, genes are easily added to bacteria, and lineages of knockout mice with a specific gene's function disrupted are used to investigate that gene's function. Many organisms have been genetically modified for applications in agriculture, industrial biotechnology, and medicine. For multicellular organisms, typically the embryo is engineered, which grows into the adult genetically modified organism. However, the genomes of cells in an adult organism can be edited using gene therapy techniques to treat genetic diseases. See also References Citations Sources Main textbook – A molecular biology textbook available free online through NCBI Bookshelf.
Glossary Ch 1: Cells and genomes 1.1: The Universal Features of Cells on Earth Ch 2: Cell Chemistry and Biosynthesis 2.1: The Chemical Components of a Cell Ch 3: Proteins Ch 4: DNA and Chromosomes 4.1: The Structure and Function of DNA 4.2: Chromosomal DNA and Its Packaging in the Chromatin Fiber Ch 5: DNA Replication, Repair, and Recombination 5.2: DNA Replication Mechanisms 5.4: DNA Repair 5.5: General Recombination Ch 6: How Cells Read the Genome: From DNA to Protein 6.1: DNA to RNA 6.2: RNA to Protein Ch 7: Control of Gene Expression 7.1: An Overview of Gene Control 7.2: DNA-Binding Motifs in Gene Regulatory Proteins 7.3: How Genetic Switches Work 7.5: Posttranscriptional Controls 7.6: How Genomes Evolve Ch 14: Energy Conversion: Mitochondria and Chloroplasts 14.4: The Genetic Systems of Mitochondria and Plastids Ch 18: The Mechanics of Cell Division 18.1: An Overview of M Phase 18.2: Mitosis Ch 20: Germ Cells and Fertilization 20.2: Meiosis Further reading External links Comparative Toxicogenomics Database DNA From The Beginning – a primer on genes and DNA Gene – a searchable database of genes Genes – an Open Access journal IDconverter – converts gene IDs between public databases iHOP – Information Hyperlinked over Proteins TranscriptomeBrowser – Gene expression profile analysis The Protein Naming Utility, a database to identify and correct deficient gene names IMPC (International Mouse Phenotyping Consortium) – Encyclopedia of mammalian gene function Global Genes Project – Leading non-profit organization supporting people living with genetic diseases Encode threads explorer, Nature Characterization of intergenic regions and gene definition, Nature Cloning Molecular biology Wikipedia articles with sections published in WikiJournal of Medicine
Gene
[ "Chemistry", "Engineering", "Biology" ]
7,934
[ "Cloning", "Biochemistry", "Genetic engineering", "Molecular biology" ]
4,251,102
https://en.wikipedia.org/wiki/Three-point%20flexural%20test
The three-point bending flexural test provides values for the modulus of elasticity in bending E_f, flexural stress σ_f, flexural strain ε_f and the flexural stress–strain response of the material. This test is performed on a universal testing machine (tensile testing machine or tensile tester) with a three-point or four-point bend fixture. The main advantage of a three-point flexural test is the ease of the specimen preparation and testing. However, this method also has some disadvantages: the results of the testing method are sensitive to specimen and loading geometry and strain rate. Testing method The test method for conducting the test usually involves a specified test fixture on a universal testing machine. Details of the test preparation, conditioning, and conduct affect the test results. The sample is placed on two supporting pins a set distance apart. Calculation of the flexural stress: σ_f = 3FL/(2bd^2) for a rectangular cross section, and σ_f = FL/(πR^3) for a circular cross section. Calculation of the flexural strain: ε_f = 6Dd/L^2. Calculation of the flexural modulus: E_f = L^3 m/(4bd^3). In these formulas the following parameters are used: σ_f = flexural stress, or Modulus of Rupture, the stress required to fracture the sample (MPa); ε_f = strain in the outer surface (mm/mm); E_f = flexural modulus of elasticity (MPa); F = load at a given point on the load–deflection curve (N); L = support span (mm); b = width of test beam (mm); d = depth or thickness of tested beam (mm); D = maximum deflection of the center of the beam (mm); m = the gradient (i.e., slope) of the initial straight-line portion of the load–deflection curve (N/mm); R = the radius of the beam (mm). Fracture toughness testing The fracture toughness of a specimen can also be determined using a three-point flexural test. The stress intensity factor at the crack tip of a single edge notch bending specimen is K_I = (4P/B) √(π/W) [1.6(a/W)^(1/2) − 2.6(a/W)^(3/2) + 12.3(a/W)^(5/2) − 21.2(a/W)^(7/2) + 21.8(a/W)^(9/2)], where P is the applied load, B is the thickness of the specimen, a is the crack length, and W is the width of the specimen. In a three-point bend test, a fatigue crack is created at the tip of the notch by cyclic loading. The length of the crack is measured. The specimen is then loaded monotonically. A plot of the load versus the crack opening displacement is used to determine the load at which the crack starts growing. This load is substituted into the above formula to find the fracture toughness K_IC. The ASTM D5045-14 and E1290-08 standards suggest the relation K_IC = (P/(B√W)) f(a/W), where f(a/W) = 6(a/W)^(1/2) [1.99 − (a/W)(1 − a/W)(2.15 − 3.93(a/W) + 2.7(a/W)^2)] / [(1 + 2(a/W))(1 − a/W)^(3/2)]. The predicted values of K_I are nearly identical for the ASTM and Bower equations for crack lengths a/W less than 0.6. Standards ISO 12135: Metallic materials. Unified method for the determination of quasi-static fracture toughness. ISO 12737: Metallic materials. Determination of plane-strain fracture toughness. ISO 178: Plastics—Determination of flexural properties. ASTM C293: Standard Test Method for Flexural Strength of Concrete (Using Simple Beam With Center-Point Loading). ASTM D790: Standard test methods for flexural properties of unreinforced and reinforced plastics and electrical insulating materials. ASTM E1290: Standard Test Method for Crack-Tip Opening Displacement (CTOD) Fracture Toughness Measurement. ASTM D7264: Standard Test Method for Flexural Properties of Polymer Matrix Composite Materials. ASTM D5045: Standard Test Methods for Plane-Strain Fracture Toughness and Strain Energy Release Rate of Plastic Materials. See also References Materials testing Mechanics
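As a quick numerical companion to the formulas above, here is a small illustrative Python sketch (not part of any cited standard); the specimen dimensions and load values are made-up assumptions for the demonstration.

```python
# Sketch: flexural stress, strain and modulus for a rectangular specimen in a
# three-point bend test, using the formulas given above.
def flexural_rectangular(F, L, b, d, D, m):
    """F: load (N); L: support span (mm); b: width (mm); d: depth (mm);
    D: midspan deflection (mm); m: slope of the initial load-deflection
    line (N/mm). Returns (stress in MPa, strain in mm/mm, modulus in MPa)."""
    stress = 3 * F * L / (2 * b * d ** 2)     # sigma_f = 3FL / (2bd^2)
    strain = 6 * D * d / L ** 2               # eps_f   = 6Dd / L^2
    modulus = L ** 3 * m / (4 * b * d ** 3)   # E_f     = L^3 m / (4bd^3)
    return stress, strain, modulus

# Hypothetical specimen: 64 mm span, 12.7 mm wide, 3.2 mm thick,
# 100 N load at 1 mm deflection, initial slope 120 N/mm.
print(flexural_rectangular(F=100, L=64, b=12.7, d=3.2, D=1.0, m=120))
```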
Three-point flexural test
[ "Physics", "Materials_science", "Engineering" ]
715
[ "Materials testing", "Mechanics", "Materials science", "Mechanical engineering" ]
4,253,950
https://en.wikipedia.org/wiki/List%20of%20elements%20by%20stability%20of%20isotopes
This is a list of chemical elements by the stability of their isotopes. Of the first 82 elements in the periodic table, 80 have isotopes considered to be stable. Overall, there are 251 known stable isotopes in total. Background Atomic nuclei consist of protons and neutrons, which attract each other through the nuclear force, while protons repel each other via the electric force due to their positive charge. These two forces compete, leading to some combinations of neutrons and protons being more stable than others. Neutrons stabilize the nucleus, because they attract protons, which helps offset the electrical repulsion between protons. As a result, as the number of protons increases, an increasing ratio of neutrons to protons is needed to form a stable nucleus; if too many or too few neutrons are present with regard to the optimum ratio, the nucleus becomes unstable and subject to certain types of nuclear decay. Unstable isotopes decay through various radioactive decay pathways, most commonly alpha decay, beta decay, or electron capture. Many rare types of decay, such as spontaneous fission or cluster decay, are known. (See Radioactive decay for details.) Of the first 82 elements in the periodic table, 80 have isotopes considered to be stable. The 83rd element, bismuth, was traditionally regarded as having the heaviest stable isotope, bismuth-209, but in 2003 researchers in Orsay, France, measured the half-life of bismuth-209 to be 1.9 × 10^19 years. Technetium and promethium (atomic numbers 43 and 61, respectively) and all the elements with an atomic number over 82 only have isotopes that are known to decompose through radioactive decay. No undiscovered elements are expected to be stable; therefore, lead is considered the heaviest stable element. However, it is possible that some isotopes that are now considered stable will be revealed to decay with extremely long half-lives (as with bismuth-209). This list depicts what is agreed upon by the consensus of the scientific community as of 2023. For each of the 80 stable elements, the number of the stable isotopes is given. Only 90 isotopes are expected to be perfectly stable, and an additional 161 are energetically unstable, but have never been observed to decay. Thus, 251 isotopes (nuclides) are stable by definition (including tantalum-180m, for which no decay has yet been observed). Those that may in the future be found to be radioactive are expected to have half-lives longer than 10^22 years (for example, xenon-134). In April 2019 it was announced that the half-life of xenon-124 had been measured to be 1.8 × 10^22 years. This is the longest half-life directly measured for any unstable isotope; only the half-life of tellurium-128 is longer. Of the chemical elements, only 1 element (tin) has 10 such stable isotopes, 5 have 7 stable isotopes, 7 have 6 stable isotopes, 11 have 5 stable isotopes, 9 have 4 stable isotopes, 5 have 3 stable isotopes, 16 have 2 stable isotopes, and 26 have 1 stable isotope. Additionally, about 31 nuclides of the naturally occurring elements have unstable isotopes with a half-life larger than the age of the Solar System (~10^9 years or more). An additional four nuclides have half-lives longer than 100 million years, which is far less than the age of the Solar System, but long enough for some of them to have survived. These 35 radioactive naturally occurring nuclides comprise the radioactive primordial nuclides.
The total number of primordial nuclides is then 251 (the stable nuclides) plus the 35 radioactive primordial nuclides, for a total of 286 primordial nuclides. This number is subject to change if new shorter-lived primordials are identified on Earth. One of the primordial nuclides is tantalum-180m, which is predicted to have a half-life in excess of 10^15 years, but has never been observed to decay. The even longer half-life of 2.2 × 10^24 years of tellurium-128 was measured by a unique method of detecting its radiogenic daughter xenon-128 and is the longest known experimentally measured half-life. Another notable example is the only naturally occurring isotope of bismuth, bismuth-209, which had long been predicted to be unstable with a very long half-life, and has since been observed to decay. Because of their long half-lives, such isotopes are still found on Earth in various quantities, and together with the stable isotopes they are called primordial isotopes. All the primordial isotopes are given in order of their decreasing abundance on Earth. For a list of primordial nuclides in order of half-life, see List of nuclides. 118 chemical elements are known to exist. All elements to element 94 are found in nature, and the remainder of the discovered elements are artificially produced, with isotopes all known to be highly radioactive with relatively short half-lives (see below). The elements in this list are ordered according to the lifetime of their most stable isotope. Of these, three elements (bismuth, thorium, and uranium) are primordial because they have half-lives long enough to still be found on the Earth, while all the others are produced either by radioactive decay or are synthesized in laboratories and nuclear reactors. Only 13 of the 38 known-but-unstable elements have isotopes with a half-life of at least 100 years. Every known isotope of the remaining 25 elements is highly radioactive; these are used in academic research and sometimes in industry and medicine. Some of the heavier elements in the periodic table may be revealed to have yet-undiscovered isotopes with longer lifetimes than those listed here. About 338 nuclides are found naturally on Earth. These comprise 251 stable isotopes, and with the addition of the 35 long-lived radioisotopes with half-lives longer than 100 million years, a total of 286 primordial nuclides, as noted above. The nuclides found naturally comprise not only the 286 primordials, but also include about 52 more short-lived isotopes (defined by a half-life less than 100 million years, too short to have survived from the formation of the Earth) that are daughters of primordial isotopes (such as radium from uranium); or else are made by energetic natural processes, such as carbon-14 made from atmospheric nitrogen by bombardment from cosmic rays.
(Double beta decay directly from even–even to even–even, skipping over an odd-odd nuclide, is only occasionally possible, and is a process so strongly hindered that it has a half-life greater than a billion times the age of the universe.) This makes for a larger number of stable even–even nuclides, up to three for some mass numbers, and up to seven for some atomic (proton) numbers and at least four for all stable even-Z elements beyond iron (except strontium and lead). Since a nucleus with an odd number of protons is relatively less stable, odd-numbered elements tend to have fewer stable isotopes. Of the 26 "monoisotopic" elements that have only a single stable isotope, all but one have an odd atomic number—the single exception being beryllium. In addition, no odd-numbered element has more than two stable isotopes, while every even-numbered element with stable isotopes, except for helium, beryllium, and carbon, has at least three. Only a single odd-numbered element, potassium, has three primordial isotopes; none have more than three. Tables The following tables give the elements with primordial nuclides, which means that the element may still be identified on Earth from natural sources, having been present since the Earth was formed out of the solar nebula. Thus, none are shorter-lived daughters of longer-lived parental primordials. Two nuclides which have half-lives long enough to be primordial, but have not yet been conclusively observed as such (244Pu and 146Sm), have been excluded. The tables of elements are sorted in order of decreasing number of nuclides associated with each element. (For a list sorted entirely in terms of half-lives of nuclides, with mixing of elements, see List of nuclides.) Stable and unstable (marked decays) nuclides are given, with symbols for unstable (radioactive) nuclides in italics. Note that the sorting does not quite give the elements purely in order of stable nuclides, since some elements have a larger number of long-lived unstable nuclides, which place them ahead of elements with a larger number of stable nuclides. By convention, nuclides are counted as "stable" if they have never been observed to decay by experiment or from observation of decay products (extremely long-lived nuclides unstable only in theory, such as tantalum-180m, are counted as stable). The first table is for even-atomic numbered elements, which tend to have far more primordial nuclides, due to the stability conferred by proton-proton pairing. A second separate table is given for odd-atomic numbered elements, which tend to have far fewer stable and long-lived (primordial) unstable nuclides. Elements with no primordial isotopes See also Island of stability List of nuclides List of radioactive nuclides by half-life Primordial nuclide Stable nuclide Stable isotope ratio Table of nuclides Footnotes References Stability of isotopes
List of elements by stability of isotopes
[ "Chemistry" ]
2,157
[ "Lists of chemical elements" ]
4,254,315
https://en.wikipedia.org/wiki/Atkinson%20friction%20factor
Atkinson friction factor is a measure of the resistance to airflow of a duct. It is widely used in the mine ventilation industry but is rarely referred to outside of it. Atkinson friction factor is represented by the symbol k and has the same units as air density (kilograms per cubic metre in SI units, lbf·min^2/ft^4 in Imperial units). It is related to the more widespread Fanning friction factor f by k = fρ/2, in which ρ is the density of air in the shaft or roadway under consideration and f is the Fanning friction factor (dimensionless). It is related to the Darcy friction factor λ by k = λρ/8, in which λ is the Darcy friction factor (dimensionless). It was introduced by John J Atkinson in an early mathematical treatment of mine ventilation (1862) and has been known under his name ever since. See also Atkinson resistance References NCB Mining Dept., Ventilation in coal mines: a handbook for colliery ventilation officers, National Coal Board 1979. Further reading 1999 paper giving the derivation of k Atkinson, J J, Gases met with in Coal Mines, and the general principles of Ventilation, Transactions of the Manchester Geological Society, Vol. III, p. 218, 1862 Fluid dynamics Mine ventilation
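A hedged illustration of these conversions follows (my own sketch; the "standard" air density value is a conventional assumption in ventilation work, not a figure from the references above).

```python
# Sketch: converting between the Atkinson friction factor k and the Fanning
# and Darcy friction factors, using the relations quoted above.
RHO_STD = 1.2  # kg/m^3, a conventional "standard" air density assumption

def atkinson_from_fanning(f, rho=RHO_STD):
    return f * rho / 2.0          # k = f * rho / 2

def atkinson_from_darcy(lam, rho=RHO_STD):
    return lam * rho / 8.0        # k = lambda * rho / 8 (Darcy lambda = 4f)

# Example: a Fanning factor of 0.01 at standard density gives k = 0.006 kg/m^3.
print(atkinson_from_fanning(0.01))
```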
Atkinson friction factor
[ "Chemistry", "Engineering" ]
235
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
4,254,363
https://en.wikipedia.org/wiki/Size%20consistency%20and%20size%20extensivity
In quantum chemistry, size consistency and size extensivity are concepts relating to how the behaviour of quantum-chemistry calculations changes with the system size. Size consistency (or strict separability) is a property that guarantees the consistency of the energy behaviour when interaction between the involved molecular subsystems is nullified (for example, by distance). Size extensivity, introduced by Bartlett, is a more mathematically formal characteristic which refers to the correct (linear) scaling of a method with the number of electrons. Let A and B be two non-interacting systems. If a given theory for the evaluation of the energy is size-consistent, then the energy of the supersystem A + B, separated by a sufficiently large distance such that there is essentially no shared electron density, is equal to the sum of the energy of A plus the energy of B taken by themselves: E(A + B) = E(A) + E(B). This property of size consistency is of particular importance to obtain correctly behaving dissociation curves. Others have more recently argued that the entire potential energy surface should be well-defined. Size consistency and size extensivity are sometimes used interchangeably in the literature. However, there are very important distinctions to be made between them. Hartree–Fock (HF), coupled cluster, many-body perturbation theory (to any order), and full configuration interaction (FCI) are size-extensive but not always size-consistent. For example, the restricted Hartree–Fock model is not able to correctly describe the dissociation curves of H2, and therefore all post-HF methods that employ HF as a starting point will fail in that matter (so-called single-reference methods). Sometimes numerical errors can cause a method that is formally size-consistent to behave in a non-size-consistent manner. Core extensivity is yet another related property, which extends the requirement to the proper treatment of excited states. References Quantum chemistry
Size consistency and size extensivity
[ "Physics", "Chemistry" ]
395
[ "Quantum chemistry stubs", "Quantum chemistry", "Theoretical chemistry stubs", "Quantum mechanics", "Theoretical chemistry", " molecular", "Atomic", "Physical chemistry stubs", " and optical physics" ]
3,121,095
https://en.wikipedia.org/wiki/Polyhedral%20skeletal%20electron%20pair%20theory
In chemistry the polyhedral skeletal electron pair theory (PSEPT) provides electron counting rules useful for predicting the structures of clusters such as borane and carborane clusters. The electron counting rules were originally formulated by Kenneth Wade, and were further developed by others including Michael Mingos; they are sometimes known as Wade's rules or the Wade–Mingos rules. The rules are based on a molecular orbital treatment of the bonding. These rules have been extended and unified in the form of the Jemmis mno rules. Predicting structures of cluster compounds Different rules (4n, 5n, or 6n) are invoked depending on the number of electrons per vertex. The 4n rules are reasonably accurate in predicting the structures of clusters having about 4 electrons per vertex, as is the case for many boranes and carboranes. For such clusters, the structures are based on deltahedra, which are polyhedra in which every face is triangular. The 4n clusters are classified as closo-, nido-, arachno- or hypho-, based on whether they represent a complete (closo-) deltahedron, or a deltahedron that is missing one (nido-), two (arachno-) or three (hypho-) vertices. However, hypho clusters are relatively uncommon because the electron count is then high enough to start to fill antibonding orbitals and destabilize the 4n structure. If the electron count is close to 5 electrons per vertex, the structure often changes to one governed by the 5n rules, which are based on 3-connected polyhedra. As the electron count increases further, the structures of clusters with 5n electron counts become unstable, so the 6n rules can be implemented. The 6n clusters have structures that are based on rings. A molecular orbital treatment can be used to rationalize the bonding of cluster compounds of the 4n, 5n, and 6n types. 4n rules The following polyhedra are closo polyhedra, and are the basis for the 4n rules; each of these has triangular faces. The number of vertices in the cluster determines what polyhedron the structure is based on. Using the electron count, the predicted structure can be found. n is the number of vertices in the cluster. The 4n rules are enumerated in the following table. When counting electrons for each cluster, the number of valence electrons is enumerated. For each transition metal present, 10 electrons are subtracted from the total electron count. For example, in Rh6(CO)16 the total number of electrons would be 6 × 9 + 16 × 2 − 6 × 10 = 26 (each Rh contributes 9 valence electrons and each CO contributes 2). Therefore, the cluster is a closo polyhedron because n = 6, with 4n + 2 = 26. Other rules may be considered when predicting the structure of clusters (a short electron-counting sketch follows this list): For clusters consisting mostly of transition metals, any main group elements present are often best counted as ligands or interstitial atoms, rather than vertices. Larger and more electropositive atoms tend to occupy vertices of high connectivity and smaller more electronegative atoms tend to occupy vertices of low connectivity. In the special case of boron hydride clusters, each boron atom connected to 3 or more vertices has one terminal hydride, while a boron atom connected to two other vertices has two terminal hydrogen atoms. If more hydrogen atoms are present, they are placed in open face positions to even out the coordination number of the vertices. For the special case of transition metal clusters, ligands are added to the metal centers to give the metals reasonable coordination numbers, and if any hydrogen atoms are present they are placed in bridging positions to even out the coordination numbers of the vertices.
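The 4n bookkeeping described above lends itself to a tiny illustrative sketch (my own simplification, not a published algorithm); the valence-electron table is truncated to the elements used in the examples, and only the total-electron-count branch of the rules is implemented.

```python
# Sketch: classify a main-group or transition-metal cluster as closo / nido /
# arachno / hypho from its total valence electron count, using the 4n rules
# (2 electrons per CO ligand, minus 10 electrons per transition metal).
VALENCE = {"B": 3, "C": 4, "S": 6, "P": 5, "Pb": 4, "Os": 8, "Rh": 9, "H": 1}

def classify_4n(atoms, n_vertices, charge=0, n_CO=0, n_transition_metals=0):
    electrons = sum(VALENCE[a] for a in atoms) + 2 * n_CO - charge
    electrons -= 10 * n_transition_metals
    labels = {
        4 * n_vertices:     "capped closo (4n)",
        4 * n_vertices + 2: "closo (4n + 2)",
        4 * n_vertices + 4: "nido (4n + 4)",
        4 * n_vertices + 6: "arachno (4n + 6)",
        4 * n_vertices + 8: "hypho (4n + 8)",
    }
    return electrons, labels.get(electrons, "outside the 4n series")

print(classify_4n(["Pb"] * 10, n_vertices=10, charge=-2))          # 42, closo
print(classify_4n(["Os"] * 6, 6, n_CO=18, n_transition_metals=6))  # 24, capped closo
print(classify_4n(["B"] * 5 + ["H"] * 5, n_vertices=5, charge=-4)) # 24, nido
```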
In general, closo structures with n vertices are n-vertex polyhedra. To predict the structure of a nido cluster, the closo cluster with n + 1 vertices is used as a starting point; if the cluster is composed of small atoms a high connectivity vertex is removed, while if the cluster is composed of large atoms a low connectivity vertex is removed. To predict the structure of an arachno cluster, the closo polyhedron with n + 2 vertices is used as the starting point, and the n + 1 vertex nido complex is generated by following the rule above; a second vertex adjacent to the first is removed if the cluster is composed of mostly small atoms, a second vertex not adjacent to the first is removed if the cluster is composed mostly of large atoms. Example: [Pb10]2− Electron count: 10 × Pb + 2 (for the negative charge) = 10 × 4 + 2 = 42 electrons. Since n = 10, 4n + 2 = 42, so the cluster is a closo bicapped square antiprism. Example: [S4]2+ Electron count: 4 × S − 2 (for the positive charge) = 4 × 6 − 2 = 22 electrons. Since n = 4, 4n + 6 = 22, so the cluster is arachno. Starting from an octahedron, a vertex of high connectivity is removed, and then a non-adjacent vertex is removed. Example: Os6(CO)18 Electron count: 6 × Os + 18 × CO − 60 (for 6 osmium atoms) = 6 × 8 + 18 × 2 − 60 = 24 Since n = 6, 4n = 24, so the cluster is capped closo. Starting from a trigonal bipyramid, a face is capped. The carbonyls have been omitted for clarity. Example: [B5H5]4− Electron count: 5 × B + 5 × H + 4 (for the negative charge) = 5 × 3 + 5 × 1 + 4 = 24 Since n = 5, 4n + 4 = 24, so the cluster is nido. Starting from an octahedron, one of the vertices is removed. The rules are also useful in predicting the structure of carboranes. Example: C2B7H13 Electron count = 2 × C + 7 × B + 13 × H = 2 × 4 + 7 × 3 + 13 × 1 = 42 Since n in this case is 9, 4n + 6 = 42, the cluster is arachno. The bookkeeping for deltahedral clusters is sometimes carried out by counting skeletal electrons instead of the total number of electrons. The skeletal orbital (electron pair) and skeletal electron counts for the four types of deltahedral clusters are: n-vertex closo: n + 1 skeletal orbitals, 2n + 2 skeletal electrons n-vertex nido: n + 2 skeletal orbitals, 2n + 4 skeletal electrons n-vertex arachno: n + 3 skeletal orbitals, 2n + 6 skeletal electrons n-vertex hypho: n + 4 skeletal orbitals, 2n + 8 skeletal electrons The skeletal electron counts are determined by summing the total of the following number of electrons: 2 from each BH unit 3 from each CH unit 1 from each additional hydrogen atom (over and above the ones on the BH and CH units) the anionic charge electrons 5n rules As discussed previously, the 4n rule mainly deals with clusters with electron counts near 4n, in which approximately 4 electrons are on each vertex. As more electrons are added per vertex, the number of the electrons per vertex approaches 5. Rather than adopting structures based on deltahedra, the 5n-type clusters have structures based on a different series of polyhedra known as the 3-connected polyhedra, in which each vertex is connected to 3 other vertices. The 3-connected polyhedra are the duals of the deltahedra. The common types of 3-connected polyhedra are listed below. The 5n rules are as follows. Example: P4 Electron count: 4 × P = 4 × 5 = 20 It is a 5n structure with n = 4, so it is tetrahedral. Example: P4S3 Electron count: 4 × P + 3 × S = 4 × 5 + 3 × 6 = 38 It is a 5n + 3 structure with n = 7.
Three vertices are inserted into edges. Example: P4O6 Electron count: 4 × P + 6 × O = 4 × 5 + 6 × 6 = 56 It is a 5n + 6 structure with n = 10. Six vertices are inserted into edges. 6n rules As more electrons are added to a 5n cluster, the number of electrons per vertex approaches 6. Instead of adopting structures based on 4n or 5n rules, the clusters tend to have structures governed by the 6n rules, which are based on rings. The rules for the 6n structures are as follows. Example: S8 Electron count = 8 × S = 8 × 6 = 48 electrons. Since n = 8, 6n = 48, so the cluster is an 8-membered ring. Hexane (C6H14) Electron count = 6 × C + 14 × H = 6 × 4 + 14 × 1 = 38 Since n = 6, 6n = 36 and 6n + 2 = 38, so the cluster is a 6-membered chain. Isolobal vertex units Provided a vertex unit is isolobal with BH then it can, in principle at least, be substituted for a BH unit, even though BH and CH are not isoelectronic. The CH+ unit is isolobal, hence the rules are applicable to carboranes. This can be explained due to a frontier orbital treatment. Additionally there are isolobal transition-metal units. For example, Fe(CO)3 provides 2 electrons. The derivation of this is briefly as follows: Fe has 8 valence electrons. Each carbonyl group is a net 2 electron donor after the internal σ- and π-bonding are taken into account, making 14 electrons. 3 pairs are considered to be involved in Fe–CO σ-bonding and 3 pairs are involved in π-backbonding from Fe to CO, reducing the 14 to 2. Bonding in cluster compounds closo-[B6H6]2− The boron atoms lie on each vertex of the octahedron and are sp hybridized. One sp-hybrid radiates away from the structure, forming the bond with the hydrogen atom. The other sp-hybrid radiates into the center of the structure, forming a large bonding molecular orbital at the center of the cluster. The remaining two unhybridized orbitals lie along the tangent of the sphere-like structure, creating more bonding and antibonding orbitals between the boron vertices. The orbital diagram breaks down as follows: The 18 framework molecular orbitals (MOs) derived from the 18 boron atomic orbitals are: 1 bonding MO at the center of the cluster and 5 antibonding MOs from the 6 sp-radial hybrid orbitals 6 bonding MOs and 6 antibonding MOs from the 12 tangential p-orbitals. The total number of skeletal bonding orbitals is therefore 7, i.e. n + 1. Transition metal clusters Transition metal clusters use the d orbitals for bonding. Thus, they have up to nine bonding orbitals, instead of only the four present in boron and main group clusters. PSEPT also applies to metallaboranes. Clusters with interstitial atoms Owing to their large radii, transition metals generally form clusters that are larger than main group elements. As one consequence of their increased size, these clusters often contain atoms at their centers. A prominent example is [Fe6C(CO)16]2−. In such cases, the rules of electron counting assume that the interstitial atom contributes all valence electrons to cluster bonding. In this way, [Fe6C(CO)16]2− is equivalent to [Fe6(CO)16]6− or [Fe6(CO)18]2−. See also Styx rule References General references Chemical bonding Inorganic chemistry Organometallic chemistry Cluster chemistry
Polyhedral skeletal electron pair theory
[ "Physics", "Chemistry", "Materials_science" ]
2,421
[ "Cluster chemistry", "Condensed matter physics", "nan", "Chemical bonding", "Organometallic chemistry" ]
3,122,600
https://en.wikipedia.org/wiki/Eilenberg%E2%80%93Steenrod%20axioms
In mathematics, specifically in algebraic topology, the Eilenberg–Steenrod axioms are properties that homology theories of topological spaces have in common. The quintessential example of a homology theory satisfying the axioms is singular homology, developed by Samuel Eilenberg and Norman Steenrod. One can define a homology theory as a sequence of functors satisfying the Eilenberg–Steenrod axioms. The axiomatic approach, which was developed in 1945, allows one to prove results, such as the Mayer–Vietoris sequence, that are common to all homology theories satisfying the axioms. If one omits the dimension axiom (described below), then the remaining axioms define what is called an extraordinary homology theory. Extraordinary cohomology theories first arose in K-theory and cobordism. Formal definition The Eilenberg–Steenrod axioms apply to a sequence of functors H_n from the category of pairs (X, A) of topological spaces to the category of abelian groups, together with a natural transformation ∂: H_n(X, A) → H_{n−1}(A) called the boundary map (here H_{n−1}(A) is a shorthand for H_{n−1}(A, ∅)). The axioms are: Homotopy: Homotopic maps induce the same map in homology. That is, if g: (X, A) → (Y, B) is homotopic to h: (X, A) → (Y, B), then their induced homomorphisms are the same. Excision: If (X, A) is a pair and U is a subset of A such that the closure of U is contained in the interior of A, then the inclusion map (X − U, A − U) → (X, A) induces an isomorphism in homology. Dimension: Let P be the one-point space; then H_n(P) = 0 for all n ≠ 0. Additivity: If X = ⨆_α X_α, the disjoint union of a family of topological spaces X_α, then H_n(X) ≅ ⨁_α H_n(X_α). Exactness: Each pair (X, A) induces a long exact sequence in homology, via the inclusions i: A → X and j: X → (X, A): ⋯ → H_n(A) → H_n(X) → H_n(X, A) → H_{n−1}(A) → ⋯, where the connecting map H_n(X, A) → H_{n−1}(A) is the boundary map ∂. If P is the one-point space, then H_0(P) is called the coefficient group. For example, singular homology (taken with integer coefficients, as is most common) has as coefficients the integers. Consequences Some facts about homology groups can be derived directly from the axioms, such as the fact that homotopically equivalent spaces have isomorphic homology groups. The homology of some relatively simple spaces, such as n-spheres, can be calculated directly from the axioms. From this it can be easily shown that the (n − 1)-sphere is not a retract of the n-disk. This is used in a proof of the Brouwer fixed point theorem. Dimension axiom A "homology-like" theory satisfying all of the Eilenberg–Steenrod axioms except the dimension axiom is called an extraordinary homology theory (dually, extraordinary cohomology theory). Important examples of these were found in the 1950s, such as topological K-theory and cobordism theory, which are extraordinary cohomology theories, and come with homology theories dual to them. See also Zig-zag lemma Notes References Homology theory Mathematical axioms
Eilenberg–Steenrod axioms
[ "Mathematics" ]
614
[ "Mathematical logic", "Mathematical axioms" ]
3,123,067
https://en.wikipedia.org/wiki/Corey%E2%80%93Fuchs%20reaction
The Corey–Fuchs reaction, also known as the Ramirez–Corey–Fuchs reaction, is a series of chemical reactions designed to transform an aldehyde into an alkyne. The formation of the 1,1-dibromoolefins via phosphine-dibromomethylenes was originally discovered by Desai, McKelvie and Ramirez. The phosphine can be partially substituted by zinc dust, which can improve yields and simplify product separation. The second step of the reaction, converting the dibromoolefins to alkynes, is known as the Fritsch–Buttenberg–Wiechell rearrangement. The overall combined transformation of an aldehyde to an alkyne by this method is named after its developers, American chemists Elias James Corey and Philip L. Fuchs. By suitable choice of base, it is often possible to stop the reaction at the 1-bromoalkyne, a useful functional group for further transformation. Reaction mechanism The Corey–Fuchs reaction is based on a special case of the Wittig reaction, where two equivalents of triphenylphosphine are used with carbon tetrabromide to produce the triphenylphosphine-dibromomethylene ylide. This ylide undergoes a Wittig reaction when exposed to an aldehyde. Alternatively, using a ketone generates a gem-dibromoalkene. The second part of the reaction converts the isolable gem-dibromoalkene intermediate to the alkyne. Deuterium-labelling studies show that this step proceeds through a carbene mechanism. Lithium–bromine exchange is followed by α-elimination to afford the carbene. A 1,2-shift then affords the deuterium-labelled terminal alkyne. The 50% H-incorporation could be explained by removal of the (acidic) terminal deuteron by excess BuLi. See also Appel reaction Fritsch–Buttenberg–Wiechell rearrangement Seyferth–Gilbert homologation Wittig reaction References Corey, E. J.; Fuchs, P. L. Tetrahedron Lett. 1972, 13, 3769–3772. Mori, M.; Tonogaki, K.; Kinoshita, A. Organic Syntheses, Vol. 81, p. 1 (2005). Marshall, J. A.; Yanik, M. M.; Adams, N. D.; Ellis, K. C.; Chobanian, H. R. Organic Syntheses, Vol. 81, p. 157 (2005). Desai, N. B.; McKelvie, N.; Ramirez, F. JACS, Vol. 84, p. 1745–1747 (1962). External links Corey-Fuchs Alkyne Synthesis Carbon-carbon bond forming reactions Rearrangement reactions Name reactions
Corey–Fuchs reaction
[ "Chemistry" ]
615
[ "Name reactions", "Carbon-carbon bond forming reactions", "Rearrangement reactions", "Organic reactions" ]
3,123,914
https://en.wikipedia.org/wiki/Feigenbaum%20function
In the study of dynamical systems the term Feigenbaum function has been used to describe two different functions introduced by the physicist Mitchell Feigenbaum: the solution to the Feigenbaum-Cvitanović functional equation; and the scaling function that described the covers of the attractor of the logistic map Idea Period-doubling route to chaos In the logistic map we have a family of functions f_r(x) = r x(1 − x), and we want to study what happens when we iterate the map many times. The map might fall into a fixed point, a fixed cycle, or chaos. When the map falls into a stable fixed cycle of length n, we would find that the graph of f_r^n and the graph of x ↦ x intersect at n points, and the slope of the graph of f_r^n is bounded in (−1, +1) at those intersections. For example, for r slightly below 3 we have a single such intersection, with slope bounded in (−1, +1), indicating that it is a stable single fixed point. As r increases beyond 3, the intersection point splits into two, which is a period doubling. For example, when r = 3.4, there are three intersection points, with the middle one unstable, and the two others stable. As r approaches the next bifurcation value 1 + √6 ≈ 3.449, another period-doubling occurs in the same way. The period-doublings occur more and more frequently, until at a certain r_∞ ≈ 3.5699, the period doublings become infinite, and the map becomes chaotic. This is the period-doubling route to chaos. Scaling limit Looking at plots of the iterated map, one can notice that at the point of chaos r = r_∞, the curve of the iterated map looks like a fractal. Furthermore, as we repeat the period-doublings, the graphs seem to resemble each other, except that they are shrunken towards the middle, and rotated by 180 degrees. This suggests to us a scaling limit: if we repeatedly double the function, then scale it up by a certain constant α, then at the limit we would end up with a function g that satisfies the fixed-point equation g(x) = −α g(g(x/α)). Further, as the period-doubling intervals become shorter and shorter, the ratio between two successive period-doubling intervals converges to a limit, the first Feigenbaum constant δ ≈ 4.6692. The constant α can be numerically found by trying many possible values; for the wrong values, the rescaled iterates do not converge to a limit, but when it is α ≈ 2.5029, they converge. This is the second Feigenbaum constant. (A short numerical sketch that estimates δ from the logistic map is given at the end of this section.) Chaotic regime In the chaotic regime, r > r_∞, the limit of the iterates of the map becomes chaotic: dark bands interspersed with non-chaotic bright bands. Other scaling limits When r approaches the accumulation point of the period-doubling cascade that begins inside the period-3 window, we have another period-doubling approach to chaos, but this time with periods 3, 6, 12, ... This again has the same Feigenbaum constants δ and α, and the limit of the rescaled iterates is the same function. This is an example of universality. We can also consider a period-tripling route to chaos by picking a sequence of parameter values r_n such that r_n is the lowest value in the period-3^n window of the bifurcation diagram; this route has a different pair of Feigenbaum constants, and its rescaled iterates converge to a different fixed-point function. As another example, period-4-pling has a pair of Feigenbaum constants distinct from that of period-doubling, even though period-4-pling is reached by two period-doublings. In detail, define the sequence r_n such that r_n is the lowest value in the period-4^n window of the bifurcation diagram; the limit of this sequence again has a different pair of Feigenbaum constants. In general, each period-multiplying route to chaos has its own pair of Feigenbaum constants. In fact, there are typically more than one. For example, for period-7-pling, there are at least 9 different pairs of Feigenbaum constants. The constants for different period-multiplyings are approximately related, and the relation becomes exact as both numbers increase to infinity.
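The period-doubling cascade described above can be probed numerically. The following is a small illustrative sketch (my own, not Feigenbaum's original procedure); the bracketing multipliers and the number of doublings are ad hoc assumptions, and the bisection brackets may need widening for other maps.

```python
# Illustrative sketch (my own, not Feigenbaum's original method): estimate the
# first Feigenbaum constant delta from the logistic map f(x) = r*x*(1-x) by
# locating the "superstable" parameters r_n at which the critical point
# x = 1/2 lies on a cycle of period 2^n, then taking ratios of parameter gaps.

def iterate(r, x, k):
    """Apply the logistic map k times starting from x."""
    for _ in range(k):
        x = r * x * (1.0 - x)
    return x

def superstable_r(n, lo, hi):
    """Bisect for the r in [lo, hi] at which f_r^(2^n)(1/2) = 1/2."""
    g = lambda r: iterate(r, 0.5, 2 ** n) - 0.5
    g_lo = g(lo)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if g_lo * g(mid) <= 0.0:
            hi = mid
        else:
            lo, g_lo = mid, g(mid)
    return 0.5 * (lo + hi)

rs = [2.0, 1.0 + 5 ** 0.5]   # superstable r for periods 1 and 2 (known exactly)
delta = 4.7                  # rough starting guess, refined on each round
for n in range(2, 12):
    gap = rs[-1] - rs[-2]
    # The next superstable point lies roughly gap/delta beyond the last one;
    # bracket it fairly tightly so the bisection sees only this root.
    rs.append(superstable_r(n, rs[-1] + 0.5 * gap / delta,
                               rs[-1] + 1.2 * gap / delta))
    delta = (rs[-2] - rs[-3]) / (rs[-1] - rs[-2])
    print(n, round(rs[-1], 7), round(delta, 5))   # delta -> 4.66920...
```

Run as-is, the printed estimates of delta settle towards 4.66920, and the superstable parameters accumulate at r_∞ ≈ 3.5699 quoted above.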
Feigenbaum-Cvitanović functional equation This functional equation arises in the study of one-dimensional maps that, as a function of a parameter, go through a period-doubling cascade. Discovered by Mitchell Feigenbaum and Predrag Cvitanović, the equation is the mathematical expression of the universality of period doubling. It specifies a function g and a parameter α by the relation g(x) = −α g(g(x/α)) with the initial conditions g(0) = 1 and g′(0) = 0. For a particular form of solution with a quadratic dependence of the solution near x = 0, α = 2.5029... is one of the Feigenbaum constants. The power series of g is approximately g(x) ≈ 1 − 1.5276 x^2 + 0.1048 x^4 + 0.0267 x^6 − ⋯ Renormalization The Feigenbaum function can be derived by a renormalization argument. The Feigenbaum function satisfies the functional equation above as the fixed point of period-doubling renormalization, for any map with a quadratic maximum on the real line at the onset of chaos. Scaling function The Feigenbaum scaling function provides a complete description of the attractor of the logistic map at the end of the period-doubling cascade. The attractor is a Cantor set, and just like the middle-third Cantor set, it can be covered by a finite set of segments, all bigger than a minimal size d_n. For a fixed d_n the set of segments forms a cover Δ_n of the attractor. The ratio of segments from two consecutive covers, Δ_n and Δ_{n+1}, can be arranged to approximate a function σ, the Feigenbaum scaling function. See also Logistic map Presentation function Notes Bibliography Bound as Order in Chaos, Proceedings of the International Conference on Order and Chaos held at the Center for Nonlinear Studies, Los Alamos, New Mexico 87545, USA 24–28 May 1982, Eds. David Campbell, Harvey Rose; North-Holland, Amsterdam. Chaos theory Dynamical systems
Feigenbaum function
[ "Physics", "Mathematics" ]
1,107
[ "Mechanics", "Dynamical systems" ]
12,059,679
https://en.wikipedia.org/wiki/Umbrella%20sampling
Umbrella sampling is a technique in computational physics and chemistry, used to improve sampling of a system (or different systems) where ergodicity is hindered by the form of the system's energy landscape. It was first suggested by Torrie and Valleau in 1977. It is a particular physical application of the more general importance sampling in statistics. Systems in which an energy barrier separates two regions of configuration space may suffer from poor sampling. In Metropolis Monte Carlo runs, the low probability of overcoming the potential barrier can leave inaccessible configurations poorly sampled—or even entirely unsampled—by the simulation. An easily visualised example occurs with a solid at its melting point: considering the state of the system with an order parameter Q, both liquid (low Q) and solid (high Q) phases are low in energy, but are separated by a free-energy barrier at intermediate values of Q. This prevents the simulation from adequately sampling both phases. Umbrella sampling is a means of "bridging the gap" in this situation. The standard Boltzmann weighting for Monte Carlo sampling is replaced by a potential chosen to cancel the influence of the energy barrier present. The Markov chain generated has a distribution given by π(r^N) ∝ w(r^N) exp(−U(r^N)/k_B T), with U the potential energy and w(r^N) a function chosen to promote configurations that would otherwise be inaccessible to a Boltzmann-weighted Monte Carlo run. In the example above, w may be chosen such that w = w(Q), taking high values at intermediate Q and low values at low/high Q, facilitating barrier crossing. Values for a thermodynamic property A deduced from a sampling run performed in this manner can be transformed into canonical-ensemble values by applying the formula ⟨A⟩ = ⟨A/w⟩_π / ⟨1/w⟩_π, with the subscript π indicating values from the umbrella-sampled simulation. The effect of introducing the weighting function w(r^N) is equivalent to adding a biasing potential V(r^N) = −k_B T ln w(r^N) to the potential energy of the system. If the biasing potential is strictly a function of a reaction coordinate or order parameter ξ, then the (unbiased) free-energy profile on the reaction coordinate can be calculated by subtracting the biasing potential from the biased free-energy profile: F(ξ) = F_b(ξ) − V(ξ), where F(ξ) is the free-energy profile of the unbiased system, and F_b(ξ) is the free-energy profile calculated for the biased, umbrella-sampled system. Series of umbrella sampling simulations can be analyzed using the weighted histogram analysis method (WHAM) or its generalization. WHAM can be derived using the maximum likelihood method. Subtleties exist in deciding the most computationally efficient way to apply the umbrella sampling method, as described in Frenkel and Smit's book Understanding Molecular Simulation. Alternatives to umbrella sampling for computing potentials of mean force or reaction rates are free-energy perturbation and transition interface sampling. A further alternative, which functions in full non-equilibrium, is S-PRES. References Further reading Daan Frenkel and Berend Smit: "Understanding Molecular Simulation: From Algorithms to Applications". Academic Press 2001, Johannes Kästner: "Umbrella Sampling", WIREs Computational Molecular Science 1, 932 (2011) doi:10.1002/wcms.66 Monte Carlo methods Molecular dynamics Computational chemistry Computational physics Theoretical chemistry
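As a toy illustration of the method (my own sketch, not from Torrie and Valleau or the cited textbook), the following samples a one-dimensional double-well potential with a harmonic bias that cancels the barrier, then removes the bias from the histogram; every numerical parameter is an arbitrary assumption made for the demonstration.

```python
import math, random
from collections import Counter

kT = 0.15                              # thermal energy; the barrier height is 1
U = lambda x: (x * x - 1.0) ** 2       # double well: minima at x = +/-1, barrier at 0
V = lambda x: 2.0 * x * x              # harmonic bias; note U(x) + V(x) = x**4 + 1

def metropolis(energy, n_steps, x0=1.0, dx=0.2):
    """Metropolis sampling of exp(-energy(x)/kT); returns the visited points."""
    x, samples = x0, []
    for _ in range(n_steps):
        xn = x + random.uniform(-dx, dx)
        dE = energy(xn) - energy(x)
        if dE <= 0.0 or random.random() < math.exp(-dE / kT):
            x = xn
        samples.append(x)
    return samples

# Biased run: sample exp(-(U+V)/kT). The bias cancels the barrier, so the
# walker crosses between the wells far more often than an unbiased one would.
biased = metropolis(lambda x: U(x) + V(x), 200_000)

# Histogram the biased samples, then subtract the bias to recover the
# unbiased free-energy profile F(x) = -kT ln p_b(x) - V(x), up to a constant.
bins = Counter(round(x, 1) for x in biased)
total = sum(bins.values())
for xb in sorted(b for b in bins if abs(b) <= 1.5):
    F = -kT * math.log(bins[xb] / total) - V(xb)
    print(f"x = {xb:+.1f}   F(x) = {F:.3f}")
```

Here the bias exactly cancels the barrier at x = 0; in production work a series of overlapping window potentials is used instead and the windows are combined with WHAM, as noted above.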
Umbrella sampling
[ "Physics", "Chemistry" ]
656
[ "Molecular physics", "Monte Carlo methods", "Computational physics", "Molecular dynamics", "Computational chemistry", "Theoretical chemistry", "nan" ]
12,063,277
https://en.wikipedia.org/wiki/Methoxy%20arachidonyl%20fluorophosphonate
Methoxy arachidonyl fluorophosphonate, commonly referred as MAFP, is an irreversible active site-directed enzyme inhibitor that inhibits nearly all serine hydrolases and serine proteases. It inhibits phospholipase A2 and fatty acid amide hydrolase with special potency, displaying IC50 values in the low-nanomolar range. In addition, it binds to the CB1 receptor in rat brain membrane preparations (IC50 = 20 nM), but does not appear to agonize or antagonize the receptor, though some related derivatives do show cannabinoid-like properties. See also DIFP – diisopropyl fluorophosphate, a related inhibitor IDFP – isopropyl dodecylfluorophosphonate, another related inhibitor with selectivity for FAAH and MAGL Activity-based probes References Cannabinoids Phosphonofluoridates Serine protease inhibitors Arachidonyl compounds
Methoxy arachidonyl fluorophosphonate
[ "Chemistry", "Biology" ]
215
[ "Biotechnology stubs", "Functional groups", "Biochemistry stubs", "Phosphonofluoridates", "Biochemistry" ]
12,065,768
https://en.wikipedia.org/wiki/Fire%20retardant%20gel
Fire-retardant gels are superabsorbent polymer slurries with a "consistency almost like petroleum jelly." Fire-retardant gels can also be slurries composed of a combination of water, starch, and clay. Used as fire retardants, they can be used for structure protection and in direct-attack applications against wildfires. Fire-retardant gels are short-term fire suppressants, typically applied with ground equipment. They are also used in the movie industry to protect stunt persons from flames when filming action movie scenes. History The practical use of gels was limited until the 1950s, as advances in copolymerization techniques led to reproducible, batchwise preparation of swellable resins with uniform cross-linking. This technology was later used in the development of a "substantially continuous, adherent, particulate coating composition of water-swollen, gelled particles of a crosslinked, water-insoluble, water-swellable polymer." The water-absorbent polymers in fire-retardant gels are similar to those used in diapers. Mechanism of retardation The polymer in gels soaks up hundreds of times its weight in water, creating millions of tiny drops of water, each surrounded by and protected by a polymer shell. The result is a "bubblet": a drop of water surrounded by a polymer shell, in contrast to a bubble, which is air surrounded by liquid. As the gel and water are sprayed onto an exposed surface, millions of tiny "bubblets" are stacked one on top of another. The stacking of the water "bubblets" forms a thermal protective "blanket" over the surface to which it is applied. In order for the heat of the fire to penetrate the protected surface, the fire must burn off each layer of the bubblet coating. Each layer holds the heat away from the next layer of bubblets beneath. The polymer shell of each bubblet, and the stacking of the bubblets, significantly retard water evaporation. The stacking of the bubblets is similar to aspirated fire fighting foam or compressed air foam systems, except that bubblets are water filled, whereas foam bubbles are only filled with air. Due to the high specific heat of water, it requires more energy to raise the temperature of water than of air. Therefore, water-filled bubblets will absorb more heat than the air-filled foam bubbles (which are more effective for vapor suppression). When gel is applied to a surface such as an exterior wall, the water-filled bubblets can absorb much of the heat given off by the fire, thereby slowing the fire from reaching the wall. Gels can provide thermal protection from fire for extended periods, even at high temperatures. Depending on the fire conditions, applied fire retardant gels offer fire protection for periods of 6 to 36 hours. After the retained water has completely evaporated from a gel, fire resistance is lost, but can be restored by re-wetting the surface if gel material is still adherent. Uses Fire retardant gels create a fire-protective gel coating that completely repels flaming embers and is extremely effective in cases of emergency in protecting structures from the flame fronts typical of most wildfires. During a fire in the Black Hills National Forest, "nearly all homes coated with a slimy gel were saved while dozens of houses nearby burned to the ground." Certain supplemental fire protection insurance may include the application of fire-retardant gel to homes during wildfire. Claimed to work "best when applied hours before a fire approaches", gel is applied by private firms using specially designed trucks.
However, the danger may be high, and private firms may interfere with firefighting efforts. In response to such a concern, Sam DiGiovanna, chief of Firebreak response program, a private response team, stated: "If whoever is running the fire thinks it's too dangerous to go into a particular area, we don't go into that area." These gels are useful when filming scenes that call for the illusion that someone is on fire. To do so, the gel is applied to an area of the body. Next, a fuel is placed on top of the gel. When ready to film the scene, the fuel is lit on fire. The gel insulates the person from the energy released from the burning fuel. The energy from the burning fuel goes into the gel rather than the stunt person. Thus, the stunt person is protected from being burned. References James H. Meidl: "Flammable Hazardous Materials", Glencoe Press Fire Science Series, 1970. External links Fire suppression Fire suppression agents Wildfire suppression Fire protection
Fire retardant gel
[ "Engineering" ]
955
[ "Building engineering", "Fire protection" ]
12,069,013
https://en.wikipedia.org/wiki/Recurrence%20period%20density%20entropy
Recurrence period density entropy (RPDE) is a method, in the fields of dynamical systems, stochastic processes, and time series analysis, for determining the periodicity, or repetitiveness, of a signal. Overview Recurrence period density entropy is useful for characterising the extent to which a time series repeats the same sequence, and is therefore similar to linear autocorrelation and time-delayed mutual information, except that it measures repetitiveness in the phase space of the system, and is thus a more reliable measure based upon the dynamics of the underlying system that generated the signal. It has the advantage that it does not require the assumptions of linearity, Gaussianity or dynamical determinism. It has been successfully used to detect abnormalities in biomedical contexts such as speech signals. The RPDE value is a scalar in the range zero to one. For purely periodic signals, H_norm = 0, whereas for purely i.i.d., uniform white noise, H_norm ≈ 1. Method description The RPDE method first requires the embedding of a time series in phase space, which, according to stochastic extensions of Takens' embedding theorem, can be carried out by forming time-delayed vectors X_n = [x_n, x_{n+τ}, x_{n+2τ}, …, x_{n+(M−1)τ}] for each value x_n in the time series, where M is the embedding dimension and τ is the embedding delay. These parameters are obtained by systematic search for the optimal set (due to lack of practical embedding parameter techniques for stochastic systems) (Stark et al. 2003). Next, around each point X_n in the phase space, an ε-neighbourhood (an M-dimensional ball with this radius) is formed, and every time the time series returns to this ball, after having left it, the time difference T between successive returns is recorded in a histogram. This histogram is normalised to sum to unity, to form an estimate of the recurrence period density function P(T). The normalised entropy of this density, H_norm = −(ln T_max)^(−1) Σ_{t=1}^{T_max} P(t) ln P(t), is the RPDE value, where T_max is the largest recurrence value (typically on the order of 1000 samples). Note that RPDE is intended to be applied to both deterministic and stochastic signals, therefore, strictly speaking, Takens' original embedding theorem does not apply and needs some modification. RPDE in practice RPDE has the ability to detect subtle changes in natural biological time series, such as the breakdown of regular periodic oscillation in abnormal cardiac function, which are hard to detect using classical signal processing tools such as the Fourier transform or linear prediction. The recurrence period density is a sparse representation for nonlinear, non-Gaussian and nondeterministic signals, whereas the Fourier transform is only sparse for purely periodic signals. (A simple illustrative implementation is sketched below.) See also Recurrence plot, a powerful visualisation tool of recurrences in dynamical (and other) systems. Recurrence quantification analysis, another approach to quantify recurrence properties. References External links Fast MATLAB code for calculating the RPDE value. http://www.recurrence-plot.tk/ Signal processing Entropy Stochastic processes Dynamical systems
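The procedure above can be condensed into a small illustrative implementation (a simplified reading of the method, not the authors' reference code); the embedding parameters, the ball radius, and the "first return per point" simplification are assumptions made for the demonstration.

```python
import math, random

def rpde(x, m=4, tau=10, eps=0.12, t_max=1000):
    # Time-delay embedding: X_n = (x_n, x_{n+tau}, ..., x_{n+(m-1)tau}).
    pts = [tuple(x[i + k * tau] for k in range(m))
           for i in range(len(x) - (m - 1) * tau)]

    def inside(a, b):
        return math.dist(a, b) < eps

    # For each point, wait until the trajectory leaves its eps-ball, then
    # record the elapsed time of the first return (a simplification of the
    # "all successive returns" histogram described above).
    hist = [0] * (t_max + 1)
    for i, p in enumerate(pts):
        j = i + 1
        while j < len(pts) and inside(pts[j], p):
            j += 1
        while j < len(pts) and not inside(pts[j], p):
            j += 1
        if j < len(pts) and j - i <= t_max:
            hist[j - i] += 1

    # Normalised entropy of the recurrence period density P(T);
    # assumes at least one recurrence was found.
    total = sum(hist)
    probs = [h / total for h in hist if h > 0]
    return -sum(p * math.log(p) for p in probs) / math.log(t_max)

# A pure sinusoid should score near 0; i.i.d. uniform noise near 1.
sine = [math.sin(2 * math.pi * n / 40.0) for n in range(4000)]
noise = [random.uniform(-1, 1) for _ in range(4000)]
print(rpde(sine), rpde(noise))
```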
Recurrence period density entropy
[ "Physics", "Chemistry", "Mathematics", "Technology", "Engineering" ]
652
[ "Thermodynamic properties", "Telecommunications engineering", "Physical quantities", "Computer engineering", "Signal processing", "Quantity", "Entropy", "Mechanics", "Asymmetry", "Wikipedia categories named after physical quantities", "Symmetry", "Dynamical systems" ]
17,575,156
https://en.wikipedia.org/wiki/Bacterial%20motility
Bacterial motility is the ability of bacteria to move independently using metabolic energy. Most motility mechanisms that evolved among bacteria also evolved in parallel among the archaea. Most rod-shaped bacteria can move using their own power, which allows colonization of new environments and discovery of new resources for survival. Bacterial movement depends not only on the characteristics of the medium, but also on the use of different appendages to propel. Swarming and swimming movements are both powered by rotating flagella. Whereas swarming is a multicellular 2D movement over a surface and requires the presence of surfactants, swimming is movement of individual cells in liquid environments. Other types of movement occurring on solid surfaces include twitching, gliding and sliding, which are all independent of flagella. Twitching depends on the extension, attachment to a surface, and retraction of type IV pili which pull the cell forwards in a manner similar to the action of a grappling hook, providing energy to move the cell forward. Gliding uses different motor complexes, such as the focal adhesion complexes of Myxococcus. Unlike twitching and gliding motilities, which are active movements where the motive force is generated by the individual cell, sliding is a passive movement. It relies on the motive force generated by the cell community due to the expansive forces caused by cell growth within the colony in the presence of surfactants, which reduce the friction between the cells and the surface. The overall movement of a bacterium can be the result of alternating tumble and swim phases. As a result, the trajectory of a bacterium swimming in a uniform environment will form a random walk with relatively straight swims interrupted by random tumbles that reorient the bacterium. Bacteria can also exhibit taxis, which is the ability to move towards or away from stimuli in their environment. In chemotaxis the overall motion of bacteria responds to the presence of chemical gradients. In phototaxis bacteria can move towards or away from light. This can be particularly useful for cyanobacteria, which use light for photosynthesis. Likewise, magnetotactic bacteria align their movement with the Earth's magnetic field. Some bacteria have escape reactions allowing them to back away from stimuli that might harm or kill. This is fundamentally different from navigation or exploration, since response times must be rapid. Escape reactions are achieved by action potential-like phenomena, and have been observed in biofilms as well as in single cells such as cable bacteria. Currently there is interest in developing biohybrid microswimmers, microscopic swimmers which are part biological and part engineered by humans, such as swimming bacteria modified to carry cargo. Background In 1828, the British biologist Robert Brown discovered the incessant jiggling motion of pollen in water and described his finding in his article "A Brief Account of Microscopical Observations…", leading to extended scientific discussion about the origin of this motion. This enigma was resolved only in 1905, when Albert Einstein published his celebrated essay Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen. Einstein not only deduced the diffusion of suspended particles in quiescent liquids, but also suggested these findings could be used to determine particle size — in a sense, he was the world's first microrheologist. 
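The closing remark about particle sizing can be made concrete with the Stokes–Einstein relation $D = k_B T / (6 \pi \mu a)$, the standard result connecting a suspended sphere's diffusion coefficient to its radius. The sketch below uses illustrative values (water at room temperature, an assumed 1 μm-diameter bead), not figures taken from the text:

```python
import math

k_B = 1.380649e-23   # J/K, Boltzmann constant
T = 293.0            # K, room temperature (assumed)
mu = 1.0e-3          # Pa*s, viscosity of water
a = 0.5e-6           # m, particle radius (assumed 1 um-diameter bead)

# Stokes-Einstein: a measured diffusion coefficient D fixes the radius a,
# which is the size determination Einstein suggested.
D = k_B * T / (6 * math.pi * mu * a)
print(f"D = {D:.2e} m^2/s")   # ~4e-13 m^2/s for a micron-sized bead in water
```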
Ever since Newton established his equations of motion, the mystery of motion on the microscale has emerged frequently in scientific history, as famously demonstrated by a couple of articles that should be discussed briefly. First, an essential concept, popularized by Osborne Reynolds, is that the relative importance of inertia and viscosity for the motion of a fluid depends on certain details of the system under consideration. The Reynolds number $\mathrm{Re}$, named in his honor, quantifies this comparison as a dimensionless ratio of characteristic inertial and viscous forces: $\mathrm{Re} = \frac{\rho U L}{\mu}$. Here, $\rho$ represents the density of the fluid; $U$ is a characteristic velocity of the system (for instance, the velocity of a swimming particle); $L$ is a characteristic length scale (e.g., the swimmer size); and $\mu$ is the viscosity of the fluid. Taking the suspending fluid to be water, and using experimentally observed values for $U$ and $L$, one can determine that inertia is important for macroscopic swimmers like fish ($\mathrm{Re} \approx 100$), while viscosity dominates the motion of microscale swimmers like bacteria ($\mathrm{Re} \approx 10^{-4}$). The overwhelming importance of viscosity for swimming at the micrometer scale has profound implications for swimming strategy. This has been discussed memorably by E. M. Purcell, who invited the reader into the world of microorganisms and theoretically studied the conditions of their motion. In the first place, propulsion strategies of large-scale swimmers often involve imparting momentum to the surrounding fluid in periodic discrete events, such as vortex shedding, and coasting between these events through inertia. This cannot be effective for microscale swimmers like bacteria: due to the large viscous damping, the inertial coasting time of a micron-sized object is on the order of 1 μs. The coasting distance of a microorganism moving at a typical speed is about 0.1 angstroms (Å). Purcell concluded that only forces that are exerted in the present moment on a microscale body contribute to its propulsion, so a constant energy conversion method is essential. Microorganisms have optimized their metabolism for continuous energy production, while purely artificial microswimmers (microrobots) must obtain energy from the environment, since their on-board storage capacity is very limited. As a further consequence of the continuous dissipation of energy, biological and artificial microswimmers do not obey the laws of equilibrium statistical physics, and need to be described by non-equilibrium dynamics. Mathematically, Purcell explored the implications of low Reynolds number by taking the Navier-Stokes equation and eliminating the inertial terms: $\mu \nabla^2 \mathbf{u} = \nabla p$, where $\mathbf{u}$ is the velocity of the fluid and $\nabla p$ is the gradient of the pressure. As Purcell noted, the resulting equation, the Stokes equation, contains no explicit time dependence. This has some important consequences for how a suspended body (e.g., a bacterium) can swim through periodic mechanical motions or deformations (e.g., of a flagellum). First, the rate of motion is practically irrelevant for the motion of the microswimmer and of the surrounding fluid: changing the rate of motion will change the scale of the velocities of the fluid and of the microswimmer, but it will not change the pattern of fluid flow. Secondly, reversing the direction of mechanical motion will simply reverse all velocities in the system. These properties of the Stokes equation severely restrict the range of feasible swimming strategies. 
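As a quick check on the figures quoted above, the Reynolds number can be evaluated directly from its definition. The sizes and speeds below are illustrative assumptions (a centimetre-scale fish at about one body length per second, a micron-sized bacterium at roughly 100 μm/s), chosen to reproduce the orders of magnitude in the text:

```python
rho, mu = 1000.0, 1.0e-3          # water: density (kg/m^3), viscosity (Pa*s)

def reynolds(U, L):
    """Re = rho*U*L/mu for a swimmer of speed U (m/s) and size L (m)."""
    return rho * U * L / mu

print(reynolds(U=1e-2, L=1e-2))   # small fish, ~1 cm at 1 cm/s   -> Re ~ 1e2
print(reynolds(U=1e-4, L=1e-6))   # bacterium, ~1 um at 100 um/s  -> Re ~ 1e-4
```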
As a concrete illustration, consider a mathematical scallop that consists of two rigid pieces connected by a hinge. Can the "scallop" swim by periodically opening and closing the hinge? No: regardless of how the cycle of opening and closing depends on time, the scallop will always return to its starting point at the end of the cycle. This is the origin of the striking quote: "Fast or slow, it exactly retraces its trajectory and it's back where it started". In light of this scallop theorem, Purcell developed approaches concerning how artificial motion at the microscale can be generated. This paper continues to inspire ongoing scientific discussion; for example, recent work by the Fischer group from the Max Planck Institute for Intelligent Systems experimentally confirmed that the scallop principle is only valid for Newtonian fluids. Motile systems have developed in the natural world over time and length scales spanning several orders of magnitude, and have evolved anatomically and physiologically to attain optimal strategies for self-propulsion and overcome the implications of high viscosity forces and Brownian motion. Some of the smallest known motile systems are motor proteins, i.e., proteins and protein complexes present in cells that carry out a variety of physiological functions by transducing chemical energy into mechanical energy. These motor proteins are classified as myosins, kinesins, or dyneins. Myosin motors are responsible for muscle contractions and the transport of cargo, using actin filaments as tracks. Dynein motors and kinesin motors, on the other hand, use microtubules to transport vesicles across the cell. The mechanism these protein motors use to convert chemical energy into movement depends on ATP hydrolysis, which leads to a conformational change in the globular motor domain, leading to directed motion. Bacteria can be roughly divided into two fundamentally different groups, gram-positive and gram-negative bacteria, distinguished by the architecture of their cell envelope. In each case the cell envelope is a complex multi-layered structure that protects the cell from its environment. In gram-positive bacteria, the cytoplasmic membrane is only surrounded by a thick cell wall of peptidoglycan. By contrast, the envelope of gram-negative bacteria is more complex and consists (from inside to outside) of the cytoplasmic membrane, a thin layer of peptidoglycan, and an additional outer membrane, also called the lipopolysaccharide layer. Other bacterial cell surface structures range from disorganised slime layers to highly structured capsules. These are made from secreted slimy or sticky polysaccharides or proteins that provide protection for the cells and are in direct contact with the environment. They have other functions, including attachment to solid surfaces. Additionally, protein appendages can be present on the surface: fimbriae and pili can have different lengths and diameters and their functions include adhesion and twitching motility. Specifically, for microorganisms that live in aqueous environments, locomotion refers to swimming, and hence the world is full of different classes of swimming microorganisms, such as bacteria, spermatozoa, protozoa, and algae. Bacteria move due to rotation of hair-like filaments called flagella, which are anchored to a protein motor complex on the bacterial cell wall. Movement mechanisms Bacteria have two different primary mechanisms they use for movement. 
The flagellum is used for swimming and swarming, and the pilus (or fimbria) is used for twitching. Flagellum The flagellum (plural, flagella; a group of flagella is called a tuft) is a helical, thin and long appendage attached to the cell surface by one of its ends, performing a rotational motion to push or pull the cell. During the rotation of the bacterial flagellar motor, which is located in the membrane, the flagella rotate at speeds between 200 and 2000 rpm, depending on the bacterial species. The hook substructure of the bacterial flagellum acts as a universal joint connecting the motor to the flagellar filament. Prokaryotes, both bacteria and archaea, primarily use flagella for locomotion. Bacterial flagella are helical filaments, each with a rotary motor at its base which can turn clockwise or counterclockwise. They provide two of several kinds of bacterial motility. Archaeal flagella are called archaella, and function in much the same way as bacterial flagella. Structurally the archaellum is superficially similar to a bacterial flagellum, but it differs in many details and is considered non-homologous. Some eukaryotic cells also use flagella; they can be found in some protists and plants as well as animal cells. Eukaryotic flagella are complex cellular projections that lash back and forth, rather than in a circular motion. Prokaryotic flagella use a rotary motor, and the eukaryotic flagella use a complex sliding filament system. Eukaryotic flagella are ATP-driven, while prokaryotic flagella can be ATP-driven (archaea) or proton-driven (bacteria). Different types of cell flagellation are found depending on the number and arrangement of the flagella on the cell surface, e.g., only at the cell poles or spread over the cell surface. In polar flagellation, the flagella are present at one or both ends of the cell: if a single flagellum is attached at one pole, the cell is called monotrichous; if a tuft of flagella is located at one pole, the cell is lophotrichous; when flagella are present at both ends, the cell is amphitrichous. In peritrichous flagellation, the flagella are distributed in different locations around the cell surface. Nevertheless, variations within this classification can be found, like lateral and subpolar (instead of polar) monotrichous and lophotrichous flagellation. The rotary motor model used by bacteria uses the protons of an electrochemical gradient in order to move their flagella. Torque in the flagella of bacteria is created by particles that conduct protons around the base of the flagellum. The direction of rotation of the flagella in bacteria comes from the occupancy of the proton channels along the perimeter of the flagellar motor. The bacterial flagellum is a protein nanomachine that converts electrochemical energy in the form of a gradient of H+ or Na+ ions into mechanical work. The flagellum is composed of three parts: the basal body, the hook, and the filament. The basal body is a reversible motor that spans the bacterial cell envelope. It is composed of the central rod and several rings: in Gram-negative bacteria, these are the outer L-ring (lipopolysaccharide) and P-ring (peptidoglycan), and the inner MS-ring (membrane/supramembrane) and C-ring (cytoplasmic). In Gram-positive bacteria only the inner rings are present. The Mot proteins (MotA and MotB) surround the inner rings in the cytoplasmic membrane; ion translocation through the Mot proteins provides the energy for flagellar rotation. 
The Fli proteins allow reversal of the direction of rotation of the flagella in response to specific stimuli. The hook connects the filament to the motor protein in the base. The helical filament is composed of many copies of the protein flagellin, and it can rotate clockwise (CW) and counterclockwise (CCW). Pilus (fimbria) A pilus (Latin for 'hair') is a hair-like appendage found on the surface of many bacteria and archaea. The terms pilus and fimbria (Latin for 'fringe') can be used interchangeably, although some researchers reserve the term pilus for the appendage required for bacterial conjugation. Dozens of these structures can exist on the bacterial and archaeal surface. Twitching motility is a form of crawling bacterial motility used to move over surfaces. Twitching is mediated by the activity of a particular type of pilus called type IV pilus which extends from the cell's exterior, binds to surrounding solid substrates and retracts, pulling the cell forwards in a manner similar to the action of a grappling hook. Pili are not used just for twitching. They are also antigenic and are required for the formation of biofilm, as they attach bacteria to host surfaces for colonisation during infection. They are fragile and constantly replaced, sometimes with pili of different composition. Other Gliding motility is a type of translocation that is independent of propulsive structures such as flagella or pili. Gliding allows microorganisms to travel along the surface of low aqueous films. The mechanisms of this motility are only partially known. Gliding motility uses a highly diverse set of different motor complexes, including, e.g., the focal adhesion complexes of Myxococcus. The speed of gliding varies between organisms, and the reversal of direction is seemingly regulated by some sort of internal clock. Modes of locomotion Most rod-shaped bacteria can move using their own power, which allows colonization of new environments and discovery of new resources for survival. Bacterial movement depends not only on the characteristics of the medium, but also on the use of different appendages to propel. Swarming and swimming movements are both powered by rotating flagella. Whereas swarming is a multicellular 2D movement over a surface and requires the presence of surfactant substances, swimming is movement of individual cells in liquid environments. Other types of movement occurring on solid surfaces include twitching, gliding and sliding, which are all independent of flagella. Twitching motility depends on the extension, attachment to a surface, and retraction of type IV pili which provide the energy required to pull the cell forward. Gliding motility uses a highly diverse set of different motor complexes, including, e.g., the focal adhesion complexes of Myxococcus. Unlike twitching and gliding motilities, which are active movements where the motive force is generated by the individual cell, sliding is a passive movement. It relies on the motive force generated by the cell community due to the expansive forces caused by cell growth within the colony in the presence of surfactants, which reduce the friction between the cells and the surface. Swimming Many bacteria swim, propelled by rotation of the flagella outside the cell body. In contrast to protist flagella, bacterial flagella are rotors and, irrespective of species and type of flagellation, they have only two modes of operation: clockwise (CW) or counterclockwise (CCW) rotation. 
Bacterial swimming is used in bacterial taxis (mediated by specific receptors and signal transduction pathways) for the bacterium to move in a directed manner along gradients and reach more favorable conditions for life. The direction of flagellar rotation is controlled by the type of molecules detected by the receptors on the surface of the cell: in the presence of an attractant gradient, the rate of smooth swimming increases, while the presence of a repellent gradient increases the rate of tumbling. The archetype of bacterial swimming is represented by the well-studied model organism Escherichia coli. With its peritrichous flagellation, E. coli performs a run-and-tumble swimming pattern. CCW rotation of the flagellar motors leads to flagellar bundle formation that pushes the cell in a forward run, parallel to the long axis of the cell. CW rotation disassembles the bundle and the cell rotates randomly (tumbling). After the tumbling event, straight swimming is recovered in a new direction. That is, CCW rotation results in steady motion and CW rotation in tumbling; CCW rotation in a given direction is maintained longer in the presence of molecules of interest (like sugars or amino acids). However, the type of swimming movement (propelled by rotation of flagella outside the cell body) varies significantly with the species and number/distribution of flagella on the cell body. For example, the marine bacterium Vibrio alginolyticus, with its single polar flagellum, swims in a cyclic, three-step (forward, reverse, and flick) pattern. Forward swimming occurs when the flagellum pushes the cell head, while backward swimming is based on the flagellum pulling the head upon motor reversal. Besides these 180° reversals, the cells can reorient (a "flick") by an angle around 90°, referred to as turning by buckling. Rhodobacter sphaeroides, with its subpolar monotrichous flagellation, represents yet another motility strategy: the flagellum only rotates in one direction, and it stops and coils against the cell body from time to time, leading to cell body reorientations. In the soil bacterium Pseudomonas putida, a tuft of helical flagella is attached to its posterior pole. P. putida alternates between three swimming modes: pushing, pulling, and wrapping. In the pushing mode, the rotating flagella (assembled in a bundle or as an open tuft of individual filaments) drive the motion from the rear end of the cell body. The trajectories are either straight or, in the vicinity of a solid surface, curved to the right, due to hydrodynamic interaction of the cell with the surface. The direction of curvature indicates that pushers are driven by a left-handed helix turning in CCW direction. In the pulling mode, the rotating flagellar bundle is pointing ahead. In this case the trajectories are either straight or with a tendency to bend to the left, indicating that pullers swim by turning a left-handed helical bundle in CW direction. Finally, P. putida can swim by wrapping the filament bundle around its cell body, with the posterior pole pointing in the direction of motion. In that case, the flagellar bundle takes the form of a left-handed helix that turns in CW direction, and the trajectories are predominantly straight. Swarming Swarming motility is a rapid (2–10 μm/s) and coordinated translocation of a bacterial population across solid or semi-solid surfaces, and is an example of bacterial multicellularity and swarm behaviour. 
Swarming motility was first reported in 1972 by Jorgen Henrichsen. The transition from swimming to swarming motility is usually associated with an increase in the number of flagella per cell, accompanied by cell elongation. Experiments with Proteus mirabilis showed that swarming requires contact between cells: swarming cells move in side-by-side groups called rafts, which dynamically add or lose cells: when a cell is left behind the raft, its movement stops after a short time; when a group of cells moving in a raft make contact with a stationary cell, it is reactivated and incorporated into the raft. More recently, Swiecicki and coworkers designed a polymer microfluidic system to confine E. coli cells in a quasi-two-dimensional layer of motility buffer in order to study different behaviors of cells transitioning from swimming to swarming movement. For this, they forced E. coli planktonic cells into a swarming-cell phenotype by inhibiting cell division (leading to cell elongation) and by deletion of the chemosensory system (leading to smooth swimming cells that do not tumble). The increase of bacterial density inside the channel led to the formation of progressively larger rafts. Cells colliding with the raft contributed to increase its size, while cells moving at a velocity different from the mean velocity within the raft separated from it. Cell trajectories and flagellar motion during swarming were thoroughly studied for E. coli, in combination with fluorescently labeled flagella. The authors described four different types of tracks during bacterial swarming: forward movement, reversals, lateral movement, and stalls. In forward movement, the long axis of the cell, the flagellar bundle and the direction of movement are aligned, and propulsion is similar to the propulsion of a freely swimming cell. In a reversal, the flagellar bundle loosens, with the filaments in the bundle changing from their "normal form" (left-handed helices) into a "curly" form of right-handed helices with lower pitch and amplitude. Without changing its orientation, the cell body moves backwards through the loosened bundle. The bundle re-forms from curly filaments on the opposite pole of the cell body, and the filaments eventually relax back into their normal form. Lateral motion can be caused by collisions with other cells or by a motor reversal. Finally, stalled cells are paused but the flagella continue spinning and pumping fluid in front of the swarm, usually at the swarm edge. Twitching Twitching motility is a form of crawling bacterial motility used to move over surfaces. Twitching is mediated by the activity of hair-like filaments called type IV pili which extend from the cell's exterior, bind to surrounding solid substrates and retract, pulling the cell forwards in a manner similar to the action of a grappling hook. The name twitching motility is derived from the characteristic jerky and irregular motions of individual cells when viewed under the microscope. A bacterial biofilm is a bacterial community attached to a surface through extracellular polymeric materials. Prior to biofilm formation, bacteria may need to deposit on the surface from their planktonic state. After bacteria deposit on surfaces they may "twitch" or crawl over the surface using appendages called type IV pili to "explore" the substratum to find suitable sites for growth and thus biofilm formation. Pili emanate from the bacterial surface and they can be up to several micrometres long (though they are nanometres in diameter). 
Bacterial twitching occurs through cycles of polymerization and depolymerization of type IV pili. Polymerization causes the pilus to elongate and eventually attach to surfaces. Depolymerization makes the pilus retract and detach from the surface. Pili retraction produces pulling forces on the bacterium, which will be pulled in the direction of the vector sum of the pili forces, resulting in a jerky movement. A typical type IV pilus can produce a force exceeding 100 piconewtons, and a bundle of pili can produce pulling forces up to several nanonewtons. Bacteria may use pili not only for twitching but also for cell-cell interactions, surface sensing, and DNA uptake. Gliding Gliding motility is a type of translocation that is independent of propulsive structures such as flagella or pili. Gliding allows microorganisms to travel along the surface of low aqueous films. The mechanisms of this motility are only partially known. The speed of gliding varies between organisms, and the reversal of direction is seemingly regulated by some sort of internal clock. For example, apicomplexans can travel at fast rates of 1–10 μm/s. In contrast, Myxococcus xanthus, a slime bacterium, can glide at a rate of 5 μm/min. In myxobacteria, individual bacteria move together to form waves of cells that then differentiate to form fruiting bodies containing spores. Myxobacteria move only when on solid surfaces, unlike, say, E. coli, which is motile in liquid or solid media. Non-motile Non-motile species lack the ability and structures that would allow them to propel themselves, under their own power, through their environment. When non-motile bacteria are cultured in a stab tube, they only grow along the stab line. If the bacteria are motile, the line will appear diffuse and extend into the medium. Bacterial taxis: Directed motion Bacteria are said to exhibit taxis if they move in a manner directed toward or away from some stimulus in their environment. This behaviour allows bacteria to reposition themselves in relation to the stimulus. Different types of taxis can be distinguished according to the nature of the stimulus controlling the directed movement, such as chemotaxis (chemical gradients like glucose), aerotaxis (oxygen), phototaxis (light), thermotaxis (heat), and magnetotaxis (magnetic fields). Chemotaxis The overall movement of a bacterium can be the result of alternating tumble and swim phases. As a result, the trajectory of a bacterium swimming in a uniform environment will form a random walk with relatively straight swims interrupted by random tumbles that reorient the bacterium. Bacteria such as E. coli are unable to choose the direction in which they swim, and are unable to swim in a straight line for more than a few seconds due to rotational diffusion; in other words, bacteria "forget" the direction in which they are going. By repeatedly evaluating their course, and adjusting if they are moving in the wrong direction, bacteria can direct their random walk motion toward favorable locations. In the presence of a chemical gradient, bacteria will chemotax, or direct their overall motion based on the gradient. If the bacterium senses that it is moving in the correct direction (toward attractant/away from repellent), it will keep swimming in a straight line for a longer time before tumbling; however, if it is moving in the wrong direction, it will tumble sooner. Bacteria like E. 
coli use temporal sensing to decide whether their situation is improving or not, and in this way, find the location with the highest concentration of attractant, detecting even small differences in concentration. This biased random walk is a result of simply choosing between two methods of random movement; namely tumbling and straight swimming. The helical nature of the individual flagellar filament is critical for this movement to occur. The protein structure that makes up the flagellar filament, flagellin, is conserved among all flagellated bacteria. Vertebrates seem to have taken advantage of this fact by possessing an immune receptor (TLR5) designed to recognize this conserved protein. As in many instances in biology, there are bacteria that do not follow this rule. Many bacteria, such as Vibrio, are monoflagellated and have a single flagellum at one pole of the cell. Their method of chemotaxis is different. Others possess a single flagellum that is kept inside the cell wall. These bacteria move by spinning the whole cell, which is shaped like a corkscrew. The ability of marine microbes to navigate toward chemical hotspots can determine their nutrient uptake and has the potential to affect the cycling of elements in the ocean. The link between bacterial navigation and nutrient cycling highlights the need to understand how chemotaxis functions in the context of marine microenvironments. Chemotaxis hinges on the stochastic binding/unbinding of molecules with surface receptors, the transduction of this information through an intracellular signaling cascade, and the activation and control of flagellar motors. The intrinsic randomness of these processes is a central challenge that cells must deal with in order to navigate, particularly under dilute conditions where noise and signal are similar in magnitude. Such conditions are ubiquitous in the ocean, where nutrient concentrations are often extremely low and subject to rapid variation in space (e.g., particulate matter, nutrient plumes) and time (e.g., diffusing sources, fluid mixing). The fine-scale interactions between marine bacteria and both dissolved and particulate organic matter underpin marine biogeochemistry, thereby supporting productivity and influencing carbon storage and sequestration in the planet's oceans. It has been historically very difficult to characterize marine environments on the microscales that are most relevant to individual bacteria. Rather, research efforts have typically sampled much larger volumes of water and made comparisons from one sampling site to another. However, at the length scales relevant to individual microbes, the ocean is an intricate and dynamic landscape of nutrient patches, at times too small to be mixed by turbulence. The capacity for microbes to actively navigate these structured environments using chemotaxis can strongly influence their nutrient uptake. Although some work has examined time-dependent chemical profiles, past investigations of chemotaxis using E. coli and other model organisms have routinely examined steady chemical gradients strong enough to elicit a discernible chemotactic response. However, the typical chemical gradients wild marine bacteria encounter are often very weak, ephemeral in nature, and with low background concentrations. Shallow gradients are relevant for marine bacteria because, in general, gradients become weaker as one moves away from the source. Yet, detecting such gradients at distance has tremendous value, because they point toward nutrient sources. 
Shallow gradients are important precisely because they can be used to navigate to regions in the vicinity of sources where gradients become steep, concentrations are high, and bacteria can acquire resources at a high rate. Phototaxis Phototaxis is a kind of taxis, or locomotory movement, that occurs when a whole organism moves towards or away from a stimulus of light. This is advantageous for phototrophic organisms as they can orient themselves most efficiently to receive light for photosynthesis. Phototaxis is called positive if the movement is in the direction of increasing light intensity and negative if the direction is opposite. Two types of positive phototaxis are observed in prokaryotes. The first is called "scotophobotaxis" (from the word "scotophobia"), which is observed only under a microscope. This occurs when a bacterium swims by chance out of the area illuminated by the microscope. Entering darkness signals the cell to reverse the direction of flagellar rotation and reenter the light. The second type of phototaxis is true phototaxis, which is a directed movement up a gradient to an increasing amount of light. This is analogous to positive chemotaxis except that the attractant is light rather than a chemical. Phototactic responses are observed in a number of bacteria and archaea, such as Serratia marcescens. Photoreceptor proteins are light-sensitive proteins involved in the sensing and response to light in a variety of organisms. Some examples are bacteriorhodopsin and bacteriophytochromes in some bacteria. See also: phytochrome and phototropism. Most prokaryotes (bacteria and archaea) are unable to sense the direction of light, because at such a small scale it is very difficult to make a detector that can distinguish a single light direction. Still, prokaryotes can measure light intensity and move in a light-intensity gradient. Some gliding filamentous prokaryotes can even sense light direction and make directed turns, but their phototactic movement is very slow. Some bacteria and archaea are phototactic. In most cases the mechanism of phototaxis is a biased random walk, analogous to bacterial chemotaxis. Halophilic archaea, such as Halobacterium salinarum, use sensory rhodopsins (SRs) for phototaxis. Rhodopsins are seven-transmembrane proteins that bind retinal as a chromophore. Light triggers the isomerization of retinal, which leads to phototransductory signalling via a two-component phosphotransfer relay system. Halobacterium salinarum has two SRs, SRI and SRII, which signal via the transducer proteins HtrI and HtrII (halobacterial transducers for SRs I and II), respectively. The downstream signalling in phototactic archaebacteria involves CheA, a histidine kinase, which phosphorylates the response regulator, CheY. Phosphorylated CheY induces swimming reversals. The two SRs in Halobacterium have different functions. SRI acts as an attractant receptor for orange light and, through a two-photon reaction, a repellent receptor for near-UV light, while SRII is a repellent receptor for blue light. Depending on which receptor is expressed, if a cell swims up or down a steep light gradient, the probability of flagellar switch will be low. If light intensity is constant or changes in the wrong direction, a switch in the direction of flagellar rotation will reorient the cell in a new, random direction. As the length of the tracks is longer when the cell follows a light gradient, cells will eventually get closer to or further away from the light source. 
This strategy does not allow orientation along the light vector and only works if a steep light gradient is present (i.e. not in open water). Some cyanobacteria (e.g. Anabaena, Synechocystis) can slowly orient along a light vector. This orientation occurs in filaments or colonies, but only on surfaces and not in suspension. The unicellular cyanobacterium Synechocystis is capable of both positive and negative two-dimensional phototactic orientation. The positive response is probably mediated by a bacteriophytochrome photoreceptor, TaxD1. This protein has two chromophore-binding GAF domains, which bind biliverdin chromophore, and a C-terminal domain typical for bacterial taxis receptors (MCP signal domain). TaxD1 also has two N-terminal transmembrane segments that anchor the protein to the membrane. The photoreceptor and signalling domains are cytoplasmic and signal via a CheA/CheY-type signal transduction system to regulate motility by type IV pili. TaxD1 is localized at the poles of the rod-shaped cells of Synechococcus elongatus, similarly to MCP-containing chemosensory receptors in bacteria and archaea. How the steering of the filaments is achieved is not known. The slow steering of these cyanobacterial filaments is the only light-direction sensing behaviour prokaryotes could evolve owing to the difficulty in detecting light direction at this small scale. Magnetotaxis Magnetotactic bacteria orient themselves along the magnetic field lines of Earth's magnetic field. This alignment is believed to aid these organisms in reaching regions of optimal oxygen concentration. To perform this task, these bacteria have biomineralised organelles called magnetosomes that contain magnetic crystals. The biological phenomenon of microorganisms tending to move in response to the environment's magnetic characteristics is known as magnetotaxis. However, this term is misleading in that every other application of the term taxis involves a stimulus-response mechanism. In contrast to the magnetoreception of animals, the bacteria contain fixed magnets that force the bacteria into alignment: even dead cells are dragged into alignment, just like a compass needle. Escape response An escape response is a form of negative taxis. Stimuli that have the potential to harm or kill demand rapid detection. This is fundamentally distinct from navigation or exploration, in terms of the timescales available for response. Most motile species harbour a form of phobic or emergency response distinct from their steady-state locomotion. Escape reactions are not strictly oriented, but commonly involve backward movement, sometimes with a negatively geotactic component. In bacteria and archaea, action potential-like phenomena have been observed in biofilms and also single cells such as cable bacteria. The archaeon Halobacterium salinarum shows a photophobic response characterized by a 180° reversal of its swimming direction induced by a reversal in the direction of flagellar rotation. At least some aspects of this response are likely mediated by changes in membrane potential by bacteriorhodopsin, a light-driven proton pump. Action potential-like phenomena in prokaryotes are dissimilar from classical eukaryotic action potentials. The former are less reproducible, slower and exhibit a broader distribution in pulse amplitude and duration. Other taxes Aerotaxis is the response of an organism to variation in oxygen concentration, and is mainly found in aerobic bacteria. 
Energy taxis is the orientation of bacteria towards conditions of optimal metabolic activity by sensing the internal energetic conditions of the cell. Therefore, in contrast to chemotaxis (taxis towards or away from a specific extracellular compound), energy taxis responds to an intracellular stimulus (e.g. proton motive force, activity of NDH-1) and requires metabolic activity. Mathematical modelling The mathematical models used to describe the bacterial swimming dynamics can be classified into two categories. The first category is based on a microscopic (i.e. cell-level) view of bacterial swimming through a set of equations where each equation describes the state of a single agent. The second category provides a macroscopic (i.e. population-level) view via continuum-based partial differential equations that capture the dynamics of population density over space and time, without considering the intracellular characteristics directly. Among the present models, Schnitzer uses the Smoluchowski equation to describe the biased random walk of the bacteria during chemotaxis to search for food. To focus on a detailed description of the motion taking place during one run interval of the bacteria, de Gennes derives the average run length travelled by bacteria during one counterclockwise interval. Along the same lines, to consider the environmental condition affecting the biased random walk of bacteria, Croze and his co-workers study experimentally and theoretically the effect of soft agar concentration on chemotaxis of bacteria. To study the effect of obstacles (another environmental condition) on the motion of bacteria, Chepizhko and his co-workers study the motion of self-propelled particles in a heterogeneous two-dimensional environment and show that the mean square displacement of particles is dependent on the density of obstacles and the particle turning speed. Building on these models, Cates highlights that bacterial dynamics does not always obey detailed balance, which means it is a biased diffusion process depending on the environmental conditions. Moreover, Ariel and his co-workers focus on diffusion of bacteria and show that the bacteria perform super-diffusion during swarming on a surface. See also Cyanobacterial movement Protist locomotion References External links Review of the hydrodynamics of bacterial swimming: On-line textbook on bacteriology (2015) Bacteria Bacteriology Microswimmers
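To tie the chemotaxis and modelling discussions above together, the biased random walk (run longer when conditions improve, tumble sooner when they do not) can be sketched in a few lines. The two-dimensional geometry, the linear attractant field, and the run speed and tumble rates below are illustrative assumptions, not parameters from the literature cited in this article:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_and_tumble(steps=20000, dt=0.01, speed=20.0):
    """Biased random walk: the cell runs straight and tumbles (picks a new
    random direction) at a rate that is lower when the attractant
    concentration is rising along its path (temporal sensing)."""
    c = lambda p: p[0]                         # attractant grows along +x (assumed)
    pos = np.zeros(2)
    angle = rng.uniform(0, 2 * np.pi)
    c_prev = c(pos)
    traj = [pos.copy()]
    for _ in range(steps):
        pos = pos + speed * dt * np.array([np.cos(angle), np.sin(angle)])
        rising = c(pos) > c_prev               # is the concentration improving?
        c_prev = c(pos)
        tumble_rate = 0.2 if rising else 2.0   # runs last longer up-gradient
        if rng.random() < tumble_rate * dt:
            angle = rng.uniform(0, 2 * np.pi)  # tumble: new random direction
        traj.append(pos.copy())
    return np.array(traj)

traj = run_and_tumble()
print("net drift along the gradient:", traj[-1, 0] - traj[0, 0])  # positive on average
```

Even though each tumble picks a fully random direction, the asymmetry in run durations alone produces net drift up the gradient, which is the point made above about choosing between only two modes of random movement.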
Bacterial motility
[ "Physics", "Biology" ]
8,610
[ "Physical phenomena", "Prokaryotes", "Microswimmers", "Motion (physics)", "Bacteria", "Microorganisms" ]
17,578,531
https://en.wikipedia.org/wiki/Aerobic%20granulation
The biological treatment of wastewater in sewage treatment plants is often accomplished using conventional activated sludge systems. These systems generally require large surface areas for treatment and biomass separation units due to the generally poor settling properties of the sludge. Aerobic granules are a type of sludge that can self-immobilize flocs and microorganisms into spherical and strong compact structures. The advantages of aerobic granular sludge are excellent settleability, high biomass retention, simultaneous nutrient removal and tolerance to toxicity. Recent studies show that aerobic granular sludge treatment could be a promising method to treat high-strength wastewaters containing nutrients and toxic substances. Aerobic granular sludge is usually cultivated in sequencing batch reactors (SBRs) and has been applied successfully to treat high-strength wastewater, toxic wastewater and domestic wastewater. Compared with conventional aerobic granular processes for COD removal, current research focuses more on simultaneous removal of COD, phosphorus and nitrogen, under stress conditions such as high salinity or thermophilic conditions. In recent years, new technologies have been developed to improve settleability. The use of aerobic granular sludge technology is one of them. Context Proponents of aerobic granular sludge technology claim "it will play an important role as an innovative technology alternative to the present activated sludge process in industrial and municipal wastewater treatment in the near future" and that it "can be readily established and profitably used in activated sludge plants". However, in 2011 it was characterised as "not yet established as a large-scale application ... with limited and unpublished full-scale applications for municipal wastewater treatment." Aerobic granular biomass The following definition differentiates an aerobic granule from a simple floc with relatively good settling properties and came out of discussions which took place at the 1st IWA-Workshop Aerobic Granular Sludge in Munich (2004): Formation of aerobic granules Granular sludge biomass is developed in sequencing batch reactors (SBR) and without carrier materials. These systems fulfil most of the requirements for granule formation, such as: Feast–famine regime: short feeding periods must be selected to create feast and famine periods (Beun et al. 1999), characterized by the presence or absence of organic matter in the liquid media, respectively. With this feeding strategy, the selection of the appropriate micro-organisms to form granules is achieved. When the substrate concentration in the bulk liquid is high, the granule-forming organisms can store the organic matter in the form of poly-β-hydroxybutyrate to be consumed in the famine period, giving an advantage over filamentous organisms. When anaerobic feeding is applied, this effect is enhanced, minimising the importance of short settling time and higher hydrodynamic forces. Short settling time: This hydraulic selection pressure on the microbial community allows the retention of granular biomass inside the reactor while flocculent biomass is washed out (Qin et al. 2004). Hydrodynamic shear force: Evidence shows that the application of high shear forces favours the formation of aerobic granules and their physical integrity. 
It was found that aerobic granules could be formed only above a threshold shear force, corresponding to a superficial upflow air velocity above 1.2 cm/s in a column SBR, and more regular, rounder, and more compact aerobic granules were developed at high hydrodynamic shear forces (Tay et al., 2001). Granular activated sludge is also developed in flow-through reactors using the Hybrid Activated Sludge (HYBACS) process, comprising an attached-growth reactor with short retention time upstream of a suspended growth reactor. The attached bacteria in the first reactor, known as a SMART unit, are exposed to a constant high COD, triggering the expression of high concentrations of hydrolytic enzymes in the EPS layer around the bacteria. The accelerated hydrolysis liberates soluble, readily degradable COD, which promotes the formation of granular activated sludge. Advantages The development of biomass in the form of aerobic granules is being studied for its application in the removal of organic matter, nitrogen and phosphorus compounds from wastewater. Aerobic granules in an aerobic SBR present several advantages compared to the conventional activated sludge process, such as: Stability and flexibility: the SBR system can be adapted to fluctuating conditions with the ability to withstand shock and toxic loadings Low energy requirements: the aerobic granular sludge process has a higher aeration efficiency due to operation at increased height, while there are no return sludge or nitrate recycle streams, nor mixing and propulsion requirements Reduced footprint: The increase in biomass concentration that is possible because of the high settling velocity of the aerobic sludge granules and the absence of a final settler result in a significant reduction in the required footprint. Good biomass retention: higher biomass concentrations inside the reactor can be achieved, and higher substrate loading rates can be treated. Presence of aerobic and anoxic zones inside the granules: allowing different biological processes to be performed simultaneously in the same system (Beun et al. 1999) Reduced investment and operational costs: the cost of running a wastewater treatment plant working with aerobic granular sludge can be reduced by at least 20% and space requirements can be reduced by as much as 75% (de Kreuk et al., 2004). The HYBACS process has the additional benefit of being a flow-through process, thus avoiding the complexities of SBR systems. It is also readily applied to the upgrading of existing flow-through activated sludge processes, by installing the attached growth reactors upstream of the aeration tank. Upgrading to a granular activated sludge process enables the capacity of an existing wastewater treatment plant to be doubled. Treatment of industrial wastewater Synthetic wastewater was used in most of the works carried out with aerobic granules. These works were mainly focused on the study of granule formation, stability and nutrient removal efficiencies under different operational conditions and their potential use to remove toxic compounds. The potential of this technology to treat industrial wastewater is under study; some of the results are summarised below: Arrojo et al. (2004) operated two reactors that were fed with industrial wastewater produced in a laboratory for analysis of dairy products (Total COD: 1500–3000 mg/L; soluble COD: 300–1500 mg/L; total nitrogen: 50–200 mg/L). These authors applied organic and nitrogen loading rates up to 7 g COD/(L·d) and 0.7 g N/(L·d), obtaining removal efficiencies of 80%. Schwarzenbeck et al. 
(2004) treated malting wastewater, which had a high content of particulate organic matter (0.9 g TSS/L). They found that particles with average diameters lower than 25–50 μm were removed at 80% efficiency, whereas particles bigger than 50 μm were only removed at 40% efficiency. These authors observed that the ability of aerobic granular sludge to remove particulate organic matter from the wastewaters was due to both incorporation into the biofilm matrix and the metabolic activity of the protozoa population covering the surface of the granules. Cassidy and Belia (2005) obtained removal efficiencies for COD and P of 98% and for N and VSS over 97% operating a granular reactor fed with slaughterhouse wastewater (Total COD: 7685 mg/L; soluble COD: 5163 mg/L; TKN: 1057 mg/L and VSS: 1520 mg/L). To obtain these high removal percentages, they operated the reactor at a DO saturation level of 40%, which is the optimal value predicted by Beun et al. (2001) for N removal, and with an anaerobic feeding period which helped to maintain the stability of the granules when the DO concentration was limited. Inizan et al. (2005) treated industrial wastewaters from the pharmaceutical industry and observed that the suspended solids in the inlet wastewater were not removed in the reactor. Tsuneda et al. (2006), when treating wastewater from a metal-refinery process (1.0–1.5 g NH4+-N/L and up to 22 g/L of sodium sulphate), treated a nitrogen loading rate of 1.0 kg N/(m3·d) with an efficiency of 95% in a system containing autotrophic granules. Usmani et al. (2008) found that high superficial air velocity, a relatively short settling time of 5–30 min, a high reactor height-to-diameter ratio (H/D = 20) and optimum organic load facilitate the cultivation of regular, compact and circular granules. Figueroa et al. (2008) treated wastewater from the fish canning industry. Applied OLRs were up to 1.72 kg COD/(m3·d) with full organic matter depletion. Ammonia nitrogen was removed via nitrification-denitrification up to 40% at nitrogen loading rates of 0.18 kg N/(m3·d). The formation of mature aerobic granules occurred after 75 days of operation, with a diameter of 3.4 mm, an SVI of 30 mL/g VSS and a density around 60 g VSS/L-granule. Farooqi et al. (2008) noted that wastewaters from fossil fuel refining, pharmaceuticals, and pesticides are the main sources of phenolic compounds, and that those with more complex structures are often more toxic than simple phenol. Their study was aimed at assessing the efficacy of granular sludge in UASB and SBR systems for the treatment of mixtures of phenolic compounds. The results indicate that anaerobic treatment by UASB and aerobic treatment by SBR can be successfully used for phenol/cresol mixtures, representative of major substrates in chemical and petrochemical wastewater, and show that a proper acclimatization period is essential for the degradation of m-cresol and phenol. Moreover, the SBR was found to be a better alternative than the UASB reactor, as it is more efficient and higher concentrations of m-cresol can be successfully degraded. López-Palau et al. (2009) treated winery wastewater. Granules were first formed using a synthetic substrate and, after 120 days of operation, the synthetic medium was replaced by real winery wastewater, with a COD loading of 6 kg COD/(m3·d). Dobbeleers et al. (2017) treated wastewater from the potato industry. Granulation was successfully achieved and simultaneous nitrification/denitrification was possible by short-cutting the nitrogen cycle. Caluwé et al. 
(2017) compared an aerobic feast/famine strategy with an anaerobic-feast/aerobic-famine strategy for the formation of aerobic granular sludge during the treatment of industrial petrochemical wastewater. Both strategies were successful. Pilot research in aerobic granular sludge Aerobic granulation technology for wastewater treatment is well developed at laboratory scale. Large-scale experience is growing rapidly and multiple institutions are making efforts to improve this technology: Since 1999, Royal HaskoningDHV (formerly DHV Water), Delft University of Technology (TUD), STW (Dutch Foundation for Applied Technology) and STOWA (Dutch Foundation for Applied Water Research) have been cooperating closely on the development of the aerobic granular sludge technology (Nereda). In September 2003, the first extensive pilot-plant study was carried out at STP Ede, the Netherlands, with a focus on obtaining stable granulation and biological nutrient removal. Following the positive outcome, the parties, together with six Dutch Water Boards, decided to establish a Public-Private Partnership (PPP), the National Nereda Research Program (NNOP), to mature, further scale up and implement several full-scale units. As part of this PPP, extensive pilot tests were executed between 2003 and 2010 at multiple sewage treatment plants. Currently more than 20 plants are running or under construction across three continents. Building on aerobic granular sludge but using a containment system for the granules, a sequencing batch biofilter granular reactor (SBBGR) with a volume of 3.1 m3 was developed by IRSA (Istituto di Ricerca Sulle Acque, Italy). Different studies were carried out in this plant treating sewage at an Italian wastewater treatment plant. The use of aerobic granules prepared in the laboratory as a starter culture, before addition to the main system, is the basis of the ARGUS (Aerobic Granules Upgrade System) technology developed by EcoEngineering Ltd. The granules are cultivated on-site in small bioreactors called propagators and fill up only 2 to 3% of the main bioreactor or fermentor (digestor) capacity. This system is being used in a pilot plant with a volume of 2.7 m3 located at a Hungarian pharmaceutical plant. The Group of Environmental Engineering and Bioprocesses from the University of Santiago de Compostela is currently operating a 100 L pilot plant reactor. The feasibility study showed that the aerobic granular sludge technology seems very promising (de Bruin et al., 2004). Based on total annual costs, a GSBR (granular sludge sequencing batch reactor) with pre-treatment and a GSBR with post-treatment prove to be more attractive than the reference activated sludge alternatives (6–16%). A sensitivity analysis shows that the GSBR technology is less sensitive to land price and more sensitive to rainwater flow. Because of the high allowable volumetric load, the footprint of the GSBR variants is only 25% of that of the references. However, the GSBR with only primary treatment cannot meet the present effluent standards for municipal wastewater, mainly because of exceeding the suspended solids effluent standard caused by washout of poorly settleable biomass. Full scale application Aerobic granulation technology is already successfully applied for the treatment of wastewater. Since 2005, RoyalHaskoningDHV has implemented more than 100 full-scale aerobic granular sludge technology systems (Nereda) for the treatment of both industrial and municipal wastewater across five continents. 
One example is STP Epe, the Netherlands, with a capacity of 59,000 p.e. and 1,500 m3/h, being the first full-scale municipal Nereda in the Netherlands. Examples of the latest Nereda sewage treatment plants (2012–2013) include Wemmershoek (South Africa) and Dinxperlo, Vroomshoop and Garmerwolde (the Netherlands). EcoEngineering applied the aerobic granulation process at three pharmaceutical plants: Krka d.d. (Novo mesto, Slovenia), Lek d.d. (Lendava, Slovenia) and Gedeon Richter Rt. (Dorog, Hungary). These wastewater treatment plants have already been running for more than five years. See also Agricultural wastewater treatment Effluent guidelines Industrial wastewater treatment List of waste water treatment technologies Sedimentation (water treatment) Water purification Sequencing batch reactor References General references Van der Roest H., de Bruin B., van Dalen R., Uijterlinde C. (2012) Maakt Nereda-installatie Epe hooggespannen verwachtingen waar?, Vakblad H2O, nr.23, 2012, p30-p34. Giesen A., van Loosdrecht M.C.M., Niermans R. (2012) Aerobic granular biomass: the new standard for domestic and industrial wastewater treatment?, Water21, April 2012, p28-p30. Zilverentant A., de Bruin B., Giesen A. (2011) Nereda: The new Standard for Energy and Cost Effective Industrial and Municipal Wastewater treatment, SKIW, Het National Water Symposium, May 2011. Water Sewage & Effluent (2010) 'Water Nymph' at Gansbaai, Water Sewage & Effluent, Water Management solutions for Africa, Volume 30 no.2, 2010, p50-p53. Gao D., Liu L., Liang H., Wu W.M. (2010), Aerobic granular sludge: characterization, mechanism of granulation and application to wastewater treatment, Critical Reviews in Biotechnology. Dutch Water Sector (2012), Commissioning Nereda at wwtp Epe: Wonder granule keeps its promise. Kolver (2012), Success at Gansbaai leads to construction of another Nereda plant, engineeringnews. Nadaba (2009), Gansbaai wastewater project incorporates techno innovation, engineeringnews. Euronews (2012), Dutch Investor cleans up water treatment. External links Royal HaskoningDHV-NEREDA TUDELFT – Delft University Aquatic ecology Environmental engineering Environmental soil science Waste treatment technology
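The hydraulic selection pressure mentioned under "Formation of aerobic granules" admits a simple back-of-the-envelope form: biomass is retained only if its settling velocity exceeds the effective settling height divided by the settling time. The sketch below assumes an illustrative 1 m settling height and uses the 5–30 min settling times quoted above; the granule and floc settling velocities in the closing comment are typical literature ranges, not values from this article:

```python
# Critical settling velocity: particles slower than this are washed out.
# v_crit = H / t_settle, with H the height from water surface to discharge port.
H = 1.0                                    # m, assumed effective settling height

for t_settle_min in (5, 10, 30):
    v_crit = H / (t_settle_min * 60.0)     # m/s
    print(f"settling time {t_settle_min:2d} min -> v_crit = {v_crit * 3600:5.1f} m/h")

# Granules typically settle at tens of m/h, flocs at only a few m/h, so a
# short settling time retains granules while washing out flocculent sludge.
```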
Aerobic granulation
[ "Chemistry", "Engineering", "Biology", "Environmental_science" ]
3,500
[ "Water treatment", "Chemical engineering", "Aquatic ecology", "Environmental soil science", "Civil engineering", "Ecosystems", "Environmental engineering", "Waste treatment technology" ]
17,580,393
https://en.wikipedia.org/wiki/Quantum%20dissipation
Quantum dissipation is the branch of physics that studies the quantum analogues of the process of irreversible loss of energy observed at the classical level. Its main purpose is to derive the laws of classical dissipation from the framework of quantum mechanics. It shares many features with the subjects of quantum decoherence and quantum theory of measurement. Models The typical approach to describe dissipation is to split the total system in two parts: the quantum system where dissipation occurs, and a so-called environment or bath into which the energy of the former will flow. The way both systems are coupled depends on the details of the microscopic model, and hence, the description of the bath. To include an irreversible flow of energy (i.e., to avoid Poincaré recurrences in which the energy eventually flows back to the system), requires that the bath contain an infinite number of degrees of freedom. Notice that by virtue of the principle of universality, it is expected that the particular description of the bath will not affect the essential features of the dissipative process, as far as the model contains the minimal ingredients to provide the effect. The simplest way to model the bath was proposed by Feynman and Vernon in a seminal paper from 1963. In this description the bath is a sum of an infinite number of harmonic oscillators, that in quantum mechanics represents a set of free bosonic particles. Caldeira–Leggett or harmonic bath model In 1981, Amir Caldeira and Anthony J. Leggett proposed a simple model to study in detail the way dissipation arises from a quantum point of view. It describes a quantum particle in one dimension coupled to a bath. The Hamiltonian reads:
H = \frac{p^2}{2m} + V(x) + \sum_k \left( \frac{p_k^2}{2m_k} + \frac{1}{2} m_k \omega_k^2 x_k^2 \right) - x \sum_k c_k x_k + x^2 \sum_k \frac{c_k^2}{2 m_k \omega_k^2}.
The first two terms correspond to the Hamiltonian of a quantum particle of mass m and momentum p, in a potential V(x) at position x. The third term describes the bath as an infinite sum of harmonic oscillators with masses m_k and momenta p_k, at positions x_k. The \omega_k are the frequencies of the harmonic oscillators. The next term describes the way that the system and bath are coupled. In the Caldeira–Leggett model, the bath is coupled to the position of the particle. The c_k are coefficients which depend on the details of the coupling. The last term is a counter-term which must be included to ensure that dissipation is homogeneous in all space. As the bath couples to the position, if this term is not included the model is not translationally invariant, in the sense that the coupling is different wherever the quantum particle is located. This gives rise to an unphysical renormalization of the potential, which can be shown to be suppressed by employing real potentials. To provide a good description of the dissipation mechanism, a relevant quantity is the bath spectral function, defined as follows:
J(\omega) = \pi \sum_k \frac{c_k^2}{2 m_k \omega_k} \, \delta(\omega - \omega_k).
The bath spectral function provides a constraint in the choice of the coefficients c_k. When this function has the form J(\omega) = \eta \omega, the corresponding classical kind of dissipation can be shown to be Ohmic. A more generic form is J(\omega) \propto \omega^s. In this case, if s > 1 the dissipation is called "super-ohmic", while if s < 1 it is sub-ohmic. An example of a super-ohmic bath is the electro-magnetic field under certain circumstances. As mentioned, the main idea in the field of quantum dissipation is to explain the way classical dissipation can be described from a quantum mechanics point of view.
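The spectral-density construction lends itself to a quick numerical check. The sketch below is a minimal illustration, not from the source: the discretization scheme, the mode count N and the hard cutoff w_c are assumptions. It samples an Ohmic J(ω) = ηω into N oscillators and verifies that the discrete counter-term sum Σ_k c_k²/(2 m_k ω_k²) reproduces the continuum value (1/π)∫ J(ω)/ω dω = η ω_c/π.

```python
import numpy as np

# Minimal sketch: discretize an Ohmic bath J(w) = eta*w with a hard cutoff
# w_c into N harmonic oscillators (unit masses). Scheme and parameters are
# illustrative assumptions, not prescribed by the article.
eta, w_c, N = 0.5, 10.0, 2000
dw = w_c / N
w = (np.arange(N) + 0.5) * dw          # mode frequencies
J = eta * w                            # Ohmic spectral density at the modes
m = np.ones(N)                         # oscillator masses
c2 = (2.0 / np.pi) * m * w * J * dw    # couplings chosen so the discrete
                                       # spectral function approximates J(w)

# Counter-term sum c_k^2 / (2 m_k w_k^2); continuum limit is eta * w_c / pi.
counter_discrete = np.sum(c2 / (2 * m * w**2))
counter_exact = eta * w_c / np.pi
print(counter_discrete, counter_exact)  # agree (exactly here, J/w is constant)
```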
To get the classical limit of the Caldeira–Leggett model, the bath must be integrated out (or traced out), which can be understood as taking the average over all the possible realizations of the bath and studying the effective dynamics of the quantum system. As a second step, the limit \hbar \to 0 must be taken to recover classical mechanics. To proceed with those technical steps mathematically, the path integral description of quantum mechanics is usually employed. The resulting classical equations of motion are:
m\ddot{x}(t) + V'(x) + \int_0^t dt' \, K(t - t') \, \dot{x}(t') = 0
where K(t - t') is a kernel which characterizes the effective force that affects the motion of the particle in the presence of dissipation. For so-called Markovian baths, which do not keep memory of the interaction with the system, and for Ohmic dissipation, the equations of motion simplify to the classical equations of motion of a particle with friction:
m\ddot{x}(t) + \eta \dot{x}(t) + V'(x) = 0.
Hence, one can see how the Caldeira–Leggett model fulfills the goal of getting classical dissipation from the quantum mechanics framework. The Caldeira–Leggett model has been used to study quantum dissipation problems since its introduction in 1981, being extensively used as well in the field of quantum decoherence. Dissipative two-level system The dissipative two-level system is a particular realization of the Caldeira–Leggett model that deserves special attention due to its interest in the field of quantum computation. The aim of the model is to study the effects of dissipation in the dynamics of a particle that can hop between two different positions rather than a continuous degree of freedom. This reduced Hilbert space allows the problem to be described in terms of spin-1/2 operators. This is sometimes referred to in the literature as the spin-boson model, and it is closely related to the Jaynes–Cummings model. The Hamiltonian for the dissipative two-level system reads:
H = \frac{\Delta}{2} \sigma_x + \sigma_z \sum_k c_k x_k + \sum_k \left( \frac{p_k^2}{2 m_k} + \frac{1}{2} m_k \omega_k^2 x_k^2 \right),
where \sigma_x and \sigma_z are the Pauli matrices and \Delta is the amplitude of hopping between the two possible positions. Notice that in this model the counter-term is no longer needed, as the coupling to \sigma_z gives already homogeneous dissipation. The model has many applications. In quantum dissipation, it is used as a simple model to study the dynamics of a dissipative particle confined in a double-well potential. In the context of quantum computation, it represents a qubit coupled to an environment, which can produce decoherence. In the study of amorphous solids, it provides the basis of the standard theory to describe their thermodynamic properties. The dissipative two-level system represents also a paradigm in the study of quantum phase transitions. For a critical value of the coupling to the bath it shows a phase transition from a regime in which the particle is delocalized among the two positions to another in which it is localized in only one of them. The transition is of Kosterlitz–Thouless kind, as can be seen by deriving the renormalization group flow equations for the hopping term. Energy dissipation in Hamiltonian formalism A different approach to describe energy dissipation is to consider time dependent Hamiltonians. Contrary to a common misunderstanding, the resulting unitary dynamics can describe energy dissipation, as certain degrees of freedom lose energy and others gain energy. However, the quantum mechanical state of the system stays pure, so such an approach cannot describe dephasing unless a subsystem is chosen and the reduced density matrix of this open quantum system is analyzed.
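To make the Ohmic friction equation above concrete, here is a minimal numerical sketch (not from the source; the harmonic potential, parameter values and integrator are illustrative assumptions) integrating m ẍ + η ẋ + V'(x) = 0 and confirming that the mechanical energy decays irreversibly:

```python
# Sketch: classical Ohmic limit of the Caldeira-Leggett model, i.e. the
# damped equation of motion m*x'' + eta*x' + V'(x) = 0 with V(x) = k x^2 / 2.
# All parameter values are illustrative assumptions.
m, eta, k = 1.0, 0.3, 4.0
dt, steps = 1e-3, 20000
x, v = 1.0, 0.0                          # initial displacement and velocity

energy0 = 0.5 * m * v**2 + 0.5 * k * x**2
for _ in range(steps):
    a = -(eta * v + k * x) / m           # acceleration from friction + spring
    v += a * dt                          # semi-implicit (symplectic) Euler
    x += v * dt

energy = 0.5 * m * v**2 + 0.5 * k * x**2
print(f"E(0) = {energy0:.3f}, E(T) = {energy:.6f}")  # energy has dissipated
```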
Dephasing leads to quantum decoherence or information dissipation and is often important when describing open quantum systems. Nevertheless, the time-dependent Hamiltonian approach is typically used, for example, in the description of optical experiments, where a light pulse (described by a time-dependent semi-classical Hamiltonian) can change the energy in the system by stimulated absorption or emission. See also Dissipation model for extended environment Jaynes–Cummings model Open quantum system Lindblad equation Quantum decoherence Dephasing References Sources U. Weiss, Quantum Dissipative Systems (1992), World Scientific. P. Hänggi and G.L. Ingold, Fundamental Aspects of Quantum Brownian Motion, Chaos, vol. 15, 026105 (2005); http://www.physik.uni-augsburg.de/theo1/hanggi/Papers/378.pdf External links Visualizing Quantum Dynamics: The Spin-Boson Hamiltonian , Jared Ostmeyer and Julio Gea-Banacloche, University of Arkansas. Visualizing Quantum Dynamics: The Jaynes-Cummings Model , Jared Ostmeyer and Julio Gea-Banacloche, University of Arkansas. Condensed matter physics Statistical mechanics Quantum mechanics
Quantum dissipation
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,683
[ "Theoretical physics", "Phases of matter", "Quantum mechanics", "Materials science", "Condensed matter physics", "Statistical mechanics", "Matter" ]
17,583,603
https://en.wikipedia.org/wiki/Macrophage-activating%20factor
A macrophage-activating factor (MAF) is a lymphokine or other receptor-based signal that primes macrophages towards cytotoxicity to tumors, cytokine secretion, or clearance of pathogens. Similar molecules may cause development of an inhibitory, regulatory phenotype. A MAF can also alter the ability of macrophages to present MHC I antigen, participate in Th responses, and/or affect other immune responses. MAFs typically act in combination to produce a specific phenotype. Macrophage activated phenotypes Macrophages inherently display tissue- and environment-dependent plasticity. In addition, the phenotypes of the macrophages in a certain environment play a fundamental role in determining the immune activity and response within the tissue. Depending on the combination of MAFs signaling to the macrophage, the macrophage's activated phenotype falls into one of three major categories: classically activated, wound healing, or regulatory. Regulatory-phenotype macrophages have only recently been recognized as an important contributor to tissue microenvironments. Tumor-associated macrophages may be any of these types, and they have been found to be important players in the tumor microenvironment. Analysis of the macrophage population and signaling in a tumor may provide useful clinical data. Clarifications on terminology Macrophages have been classified as M1 or M2 depending on the adaptive immune response that elicited the phenotype: Th1 or Th2 respectively. The phrase 'alternatively activated macrophage' is used to refer to M2 macrophages. Regulatory macrophages do not fit into the M1/M2 classification system, and they display different markers. Classically activated macrophages After receiving signaling from both IFNγ and TNF, macrophages acquire a phenotype with higher activity against both pathogens and tumor cells. They also secrete inflammatory cytokines. IFNγ signaling can initially originate from Natural Killer (NK) cells, but adaptive immune cells are required to sustain a population of classically activated macrophages. Toll-like receptor agonists may also cause macrophage activation. Wound healing macrophages Interleukin 4, secreted by granulocytes after tissue damage or by adaptive immune cells within a Th2 response, causes macrophages to secrete minimal amounts of pro-inflammatory cytokines and to have lower activity against intracellular pathogens. They also promote extracellular matrix synthesis through arginase-mediated production of ornithine, which is used as a precursor for extracellular matrix components. The overall result is a macrophage population that promotes wound healing. The specific roles macrophages play in the Th2 response are still under investigation. Regulatory macrophages Glucocorticoids can contribute to the development of regulatory macrophages. These macrophages produce Interleukin 10 and inhibit immune system response (see Effect on cancer below). Tumor-associated macrophages may contain a large population of regulatory macrophages. Effect on cancer Initially, MAFs were thought to increase a macrophage's cytotoxic response, allowing enhanced clearance of the tumor cells. However, they also have wider-ranging effects. Chronic inflammation associated with activated macrophages may lead to the development of neoplasia, such as those found surrounding tuberculosis scars. Dysregulation of macrophage activation may cause increased inflammation and eventual neoplasia. Moreover, macrophages infiltrating the tumor microenvironment can transition towards a regulatory phenotype.
Regulatory macrophages produce Interleukin 10, which can inhibit cytotoxic responses of other lymphocytes to cancer cell antigens. The stromal reaction surrounding a tumor, as well as prostaglandins and hypoxia may play a role in this transition. Epithelial-mesenchymal transition has been found to be influenced by all types of macrophages, which cause both pro and anti-inflammatory responses that can promote EMT. Non-cytokine examples of macrophage-activating factors Pathogenic antigens can bind to toll-like receptors that stimulate macrophage activation and response. Examples include heat shock proteins released during apoptosis, and bacterial lipopolysaccharide. Examples Interferon-gamma Interleukin 4 TNF alpha CD36 Miscellaneous It has been suggested that MAF can be formed by probiotic bacteria in a yoghurt medium. This probiotic mixture has been found to be helpful in various immune disturbances including ME/CFS. References External links Cytokines Macrophages
Macrophage-activating factor
[ "Chemistry" ]
971
[ "Cytokines", "Signal transduction" ]
17,584,701
https://en.wikipedia.org/wiki/Arithmetic%20dynamics
Arithmetic dynamics is a field that amalgamates two areas of mathematics, dynamical systems and number theory. Part of the inspiration comes from complex dynamics, the study of the iteration of self-maps of the complex plane or other complex algebraic varieties. Arithmetic dynamics is the study of the number-theoretic properties of integer, rational, p-adic, or algebraic points under repeated application of a polynomial or rational function. A fundamental goal is to describe arithmetic properties in terms of underlying geometric structures. Global arithmetic dynamics is the study of analogues of classical diophantine geometry in the setting of discrete dynamical systems, while local arithmetic dynamics, also called p-adic or nonarchimedean dynamics, is an analogue of complex dynamics in which one replaces the complex numbers by a p-adic field such as Q_p or C_p and studies chaotic behavior and the Fatou and Julia sets. The following table describes a rough correspondence between Diophantine equations, especially abelian varieties, and dynamical systems:
Diophantine equations | Dynamical systems
Rational and integer points on a variety | Rational and integer points in an orbit
Torsion points on an abelian variety | Preperiodic points of a rational function
Definitions and notation from discrete dynamics Let S be a set and let F : S → S be a map from S to itself. The iterate of F with itself n times is denoted
F^n = F \circ F \circ \cdots \circ F.
A point P ∈ S is periodic if F^n(P) = P for some n ≥ 1. The point is preperiodic if F^m(P) is periodic for some m ≥ 0. The (forward) orbit of P is the set
O_F(P) = \{ P, F(P), F^2(P), F^3(P), \ldots \}.
Thus P is preperiodic if and only if its orbit O_F(P) is finite. Number theoretic properties of preperiodic points Let F(x) be a rational function of degree at least two with coefficients in Q. A theorem of Douglas Northcott says that F has only finitely many Q-rational preperiodic points, i.e., F has only finitely many preperiodic points in P^1(Q). The uniform boundedness conjecture for preperiodic points of Patrick Morton and Joseph Silverman says that the number of preperiodic points of F in P^1(Q) is bounded by a constant that depends only on the degree of F. More generally, let F : P^N → P^N be a morphism of degree at least two defined over a number field K. Northcott's theorem says that F has only finitely many preperiodic points in P^N(K), and the general Uniform Boundedness Conjecture says that the number of preperiodic points in P^N(K) may be bounded solely in terms of N, the degree of F, and the degree of K over Q. The Uniform Boundedness Conjecture is not known even for quadratic polynomials F_c(x) = x^2 + c over the rational numbers Q. It is known in this case that F_c(x) cannot have periodic points of period four, five, or six, although the result for period six is contingent on the validity of the conjecture of Birch and Swinnerton-Dyer. Bjorn Poonen has conjectured that F_c(x) cannot have rational periodic points of any period strictly larger than three. Integer points in orbits The orbit of a rational map may contain infinitely many integers. For example, if F(x) is a polynomial with integer coefficients and if a is an integer, then it is clear that the entire orbit O_F(a) consists of integers. Similarly, if F(x) is a rational map and some iterate F^n(x) is a polynomial with integer coefficients, then every n-th entry in the orbit is an integer. An example of this phenomenon is the map F(x) = x^{-2}, whose second iterate F^2(x) = x^4 is a polynomial. It turns out that this is the only way that an orbit can contain infinitely many integers. Theorem. Let F(x) ∈ Q(x) be a rational function of degree at least two, and assume that no iterate of F is a polynomial. Let a ∈ Q. Then the orbit O_F(a) contains only finitely many integers.
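The orbit and preperiodicity definitions above translate directly into code. The following sketch is illustrative only: the map x² − 29/16 (a standard example with a rational 3-cycle) and the iteration bounds are assumptions. It iterates a quadratic map over exact rationals and classifies a starting point as preperiodic exactly when its orbit revisits a value:

```python
from fractions import Fraction

def orbit_is_finite(f, x0, max_iters=50):
    """Iterate f from x0 over exact rationals; return (True, orbit) if the
    orbit revisits a point (i.e. x0 is preperiodic), else (False, orbit)."""
    seen = set()
    x, orbit = x0, []
    for _ in range(max_iters):
        if x in seen:
            return True, orbit           # orbit closed up: x0 is preperiodic
        seen.add(x)
        orbit.append(x)
        x = f(x)
    return False, orbit                  # inconclusive within the bound

# f(x) = x^2 - 29/16 has the rational 3-cycle -1/4 -> -7/4 -> 5/4 -> -1/4.
c = Fraction(-29, 16)
f = lambda x: x * x + c
print(orbit_is_finite(f, Fraction(-1, 4)))                # (True, 3-cycle)
# A non-preperiodic start: numerator/denominator heights explode, so keep
# the iteration bound small to avoid huge exact arithmetic.
print(orbit_is_finite(f, Fraction(0), max_iters=12)[0])   # False
```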
Dynamically defined points lying on subvarieties There are general conjectures due to Shouwu Zhang and others concerning subvarieties that contain infinitely many periodic points or that intersect an orbit in infinitely many points. These are dynamical analogues of, respectively, the Manin–Mumford conjecture, proven by Michel Raynaud, and the Mordell–Lang conjecture, proven by Gerd Faltings. The following conjectures illustrate the general theory in the case that the subvariety is a curve. Conjecture. Let F : P^N → P^N be a morphism and let C ⊂ P^N be an irreducible algebraic curve. Suppose that there is a point P ∈ P^N such that C contains infinitely many points in the orbit O_F(P). Then C is periodic for F in the sense that there is some iterate F^k of F that maps C to itself. p-adic dynamics The field of p-adic (or nonarchimedean) dynamics is the study of classical dynamical questions over a field K that is complete with respect to a nonarchimedean absolute value. Examples of such fields are the field of p-adic rationals Q_p and the completion of its algebraic closure C_p. The metric on K and the standard definition of equicontinuity leads to the usual definition of the Fatou and Julia sets of a rational map F(x) ∈ K(x). There are many similarities between the complex and the nonarchimedean theories, but also many differences. A striking difference is that in the nonarchimedean setting, the Fatou set is always nonempty, but the Julia set may be empty. This is the reverse of what is true over the complex numbers. Nonarchimedean dynamics has been extended to Berkovich space, which is a compact connected space that contains the totally disconnected non-locally compact field C_p. Generalizations There are natural generalizations of arithmetic dynamics in which Q and Q_p are replaced by number fields and their p-adic completions. Another natural generalization is to replace self-maps of P^1 or P^N with self-maps (morphisms) of other affine or projective varieties. Other areas in which number theory and dynamics interact There are many other problems of a number theoretic nature that appear in the setting of dynamical systems, including: dynamics over finite fields. dynamics over function fields such as F_p(t). iteration of formal and p-adic power series. dynamics on Lie groups. arithmetic properties of dynamically defined moduli spaces. equidistribution and invariant measures, especially on p-adic spaces. dynamics on Drinfeld modules. number-theoretic iteration problems that are not described by rational maps on varieties, for example, the Collatz problem. symbolic codings of dynamical systems based on explicit arithmetic expansions of real numbers. The Arithmetic Dynamics Reference List gives an extensive list of articles and books covering a wide range of arithmetical dynamical topics. See also Arithmetic geometry Arithmetic topology Combinatorics and dynamical systems Arboreal Galois representation Notes and references Further reading Lecture Notes on Arithmetic Dynamics Arizona Winter School, March 13–17, 2010, Joseph H. Silverman Chapter 15 of A first course in dynamics: with a panorama of recent developments, Boris Hasselblatt, A. B. Katok, Cambridge University Press, 2003, External links The Arithmetic of Dynamical Systems home page Arithmetic dynamics bibliography Analysis and dynamics on the Berkovich projective line Book review of Joseph H. Silverman's "The Arithmetic of Dynamical Systems", reviewed by Robert L. Benedetto Dynamical systems Algebraic number theory
Arithmetic dynamics
[ "Physics", "Mathematics" ]
1,387
[ "Recreational mathematics", "Arithmetic dynamics", "Mechanics", "Algebraic number theory", "Number theory", "Dynamical systems" ]
17,585,131
https://en.wikipedia.org/wiki/Killer%20yeast
A killer yeast is a yeast, such as Saccharomyces cerevisiae, which is able to secrete one of a number of toxic proteins which are lethal to susceptible cells. These "killer toxins" are polypeptides that kill sensitive cells of the same or related species, often functioning by creating pores in target cell membranes. These yeast cells are immune to the toxic effects of the protein due to an intrinsic immunity. Killer yeast strains can be a problem in commercial processing because they can kill desirable strains. The killer yeast system was first described in 1963. Study of killer toxins helped to better understand the secretion pathway of yeast, which is similar to those of more complex eukaryotes. It also can be used in treatment of some diseases, mainly those caused by fungi. Saccharomyces cerevisiae The best-characterized toxin system is from yeast (Saccharomyces cerevisiae), which was found to spoil the brewing of beer. In S. cerevisiae, the toxins are encoded by a double-stranded RNA virus, translated into a precursor protein, cleaved, and secreted outside of the cells, where they may affect susceptible yeast. There are other killer systems in S. cerevisiae, such as the KHR1 and KHS1 genes encoded on chromosomes IX and V, respectively. RNA virus The virus, L-A, is an icosahedral virus of S. cerevisiae comprising a 4.6 kb genomic segment and several satellite double-stranded RNA sequences, called M dsRNAs. The genomic segment encodes the viral coat protein and a protein which replicates the viral genomes. The M dsRNAs encode the toxin, of which there are at least three variants in S. cerevisiae, and many more variants across all species. The L-A virus uses the yeast Ski complex (super killer) and MAK (maintenance of killer) chromosomal genes for its preservation in the cell. The virus is not released into the environment. It spreads between cells during yeast mating. Viruses of the family Totiviridae in general support the maintenance of M-type dsRNAs in a wide variety of yeasts. Toxins The initial protein product from translation of the M dsRNA is called the preprotoxin, which is targeted to the yeast secretory pathway. The preprotoxin is processed and cleaved to produce an α/β dimer, which is the active form of the toxin, and is released into the environment. The two most studied variant toxins in S. cerevisiae are K1 and K28. There are numerous apparently unrelated M dsRNAs, their only similarity being their genome and preprotoxin organization. K1 binds to the β-1,6-D-glucan receptor on the target cell wall, moves inside, and then binds to the plasma membrane receptor Kre1p. It forms a cation-selective ion channel in the membrane, which is lethal to the cell. K28 uses the α-1,6-mannoprotein receptor to enter the cell, and utilizes the secretory pathway in reverse by displaying the endoplasmic reticulum HDEL signal. From the ER, K28 moves into the cytoplasm and shuts down DNA synthesis in the nucleus, triggering apoptosis. Immunity Sesti, Shih, Nikolaeva and Goldstein (2001) claimed that K1 inhibits the TOK1 membrane potassium channel before secretion, and although the toxin reenters through the cell wall it is unable to reactivate TOK1. However, Breinig, Tipper and Schmitt (2002) showed that the TOK1 channel was not the primary receptor for K1, and that TOK1 inhibition does not confer immunity.
Vališ, Mašek, Novotná, Pospíšek and Janderová (2006) experimented with mutants which produce K1 but do not have immunity to it, and suggested that cell membrane receptors were being degraded in the secretion pathway of immune cells, apparently due to the actions of unprocessed α chains. Breinig, Sendzik, Eisfeld and Schmitt (2006) showed that K28 toxin is neutralized in toxin-expressing cells by the α chain in the cytosol, which has not yet been fully processed and still contains part of a γ chain attached to the C terminus. The uncleaved α chain neutralizes the K28 toxin by forming a complex with it. Kluyveromyces lactis Killer properties of Kluyveromyces lactis are associated with linear DNA plasmids, which carry proteins attached to their 5′ ends that enable them to replicate themselves, in a way similar to adenoviruses. It is an example of protein priming in DNA replication. MAK genes are not known. The toxin consists of three subunits, which are matured in the Golgi complex by signal peptidase and are glycosylated. The mechanism of action appears to be the inhibition of adenylate cyclase in sensitive cells. Affected cells are arrested in G1 phase and lose viability. Other yeast Other toxin systems are found in other yeasts: Pichia and Williopsis Hanseniaspora uvarum Zygosaccharomyces bailii Ustilago maydis: a smut fungus that produces KP4-family fungal killer toxins. Debaryomyces hansenii Use of toxins The susceptibility to toxins varies greatly between yeast species and strains. Several experiments have made use of this to reliably identify strains. Morace, Archibusacci, Sestito and Polonelli (1984) used the toxins produced by 25 species of yeasts to differentiate between 112 pathogenic strains, based on their sensitivity to each toxin. This was extended by Morace et al. (1989) to use toxins to differentiate between 58 bacterial cultures. Vaughan-Martini, Cardinali and Martini (1996) used 24 strains of killer yeast from 13 species to find a resistance signature for each of 13 strains of S. cerevisiae which were used as starters in wine-making. It was shown that sensitivity to toxins could be used to discriminate between 91 strains of Candida albicans and 223 other Candida strains. Others experimented with using killer yeasts to control undesirable yeasts. Palpacelli, Ciani and Rosini (1991) found that Kluyveromyces phaffii was effective against Kloeckera apiculata, Saccharomycodes ludwigii and Zygosaccharomyces rouxii – all of which cause problems in the food industry. Polonelli et al. (1994) used a killer yeast to vaccinate against C. albicans in rats. Lowes et al. (2000) created a synthetic gene for the toxin HMK normally produced by Williopsis mrakii, which they inserted into Aspergillus niger and showed that the engineered strain could control aerobic spoilage in maize silage and yoghurt. A toxin-producing strain of Kluyveromyces phaffii has been used to control apiculate yeasts in wine-making. A toxin produced by Candida nodaensis was effective at preventing spoilage of highly salted food by yeasts. Several experiments suggest that antibodies that mimic the biological activity of killer toxins have application as antifungal agents. Killer yeasts from flowers of Indian medicinal plants were isolated and the effect of their killer toxins on sensitive yeast cells as well as fungal pathogens was determined. The toxins of Saccharomyces cerevisiae and Pichia kluyveri inhibited Dekkera anomala, which accumulated methylene blue-stained cells on Yeast Extract Peptone Dextrose agar (pH 4.2) at 21 °C.
There was no inhibition of growth or competition between the yeast cells in the mixed population of S. cerevisiae isolated from Acalypha indica. S. cerevisiae and P. kluyveri were found to tolerate 50% and 40% glucose, respectively, while D. anomala tolerated 40% glucose. Neither S. cerevisiae nor P. kluyveri inhibited the growth of Aspergillus niger. Control methods Young and Yagiu (1978) experimented with methods of curing killer yeasts. They found that using a cycloheximide solution at 0.05 ppm was effective in eliminating killer activity in one strain of S. cerevisiae. Incubating the yeast at 37 °C eliminated activity in another strain. The methods were not effective at reducing toxin production in other yeast species. Many toxins are sensitive to pH levels; for example, K1 is permanently inactivated at pH levels over 6.5. The greatest potential for control of killer yeasts appears to be the addition of the L-A virus and M dsRNA, or an equivalent gene, into the industrially desirable variants of yeast, so they achieve immunity to the toxin, and also kill competing strains. See also Yeast in winemaking References Further reading Yeasts
Killer yeast
[ "Biology" ]
1,911
[ "Yeasts", "Fungi" ]
17,586,014
https://en.wikipedia.org/wiki/Pedestrian%20village
A pedestrian village is a compact, pedestrian-oriented neighborhood or town with a mixed-use village center. It features shared-use lanes for pedestrians and for those using bicycles, Segways, wheelchairs, and other small rolling conveyances that do not use internal combustion engines. Generally, these lanes are in front of the houses and businesses, and streets for motor vehicles are always at the rear. Some pedestrian villages might be nearly car-free, with cars either hidden below the buildings or kept on the boundary of the village. Venice, Italy, is essentially a pedestrian village with canals. Other examples of a pedestrian village include Giethoorn village, located in the Dutch province of Overijssel, Netherlands; Mont-Tremblant Pedestrian Village, located beside Mont-Tremblant, Quebec, Canada; and Culdesac Tempe in Tempe, Arizona. The canal district in Venice, California, on the other hand, combines the front lane/rear street approach with canals and walkways, or just walkways. See also List of car-free islands New Urbanism Principles of intelligent urbanism Urban vitality Walkability Walking audit Walking city Infrastructure: References External links Pedestrian Villages website World Carfree Network Village Homes, Davis, California Urban planning Transportation planning Neighbourhoods by type
Pedestrian village
[ "Engineering" ]
257
[ "Urban planning", "Architecture" ]
13,620,098
https://en.wikipedia.org/wiki/Human%20Genome%20Sequencing%20Center
The Baylor College of Medicine Human Genome Sequencing Center (BCM-HGSC) was established by Richard A. Gibbs in 1996 when Baylor College of Medicine was chosen as one of six worldwide sites to complete the final phase of the international Human Genome Project. Gibbs is the current director of the BCM-HGSC. It employs over 180 staff and is one of three National Institutes of Health-funded genome centers that were involved in the completion of the first human genome sequence. The BCM-HGSC contributed approximately 10 percent of the total project by sequencing chromosomes 3, 12 and X. The BCM-HGSC collaborated with researchers at the U.S. Department of Energy's Lawrence Berkeley National Laboratory and Celera Genomics to sequence the first species of fruit fly, Drosophila melanogaster. The BCM-HGSC also completed the second species of fruit fly (Drosophila pseudoobscura), the honeybee (Apis mellifera), and led an international consortium to sequence the brown Norway rat. The BCM-HGSC subsequently sequenced and annotated the genome of the cow (Bos taurus), the sea urchin, rhesus macaque, tammar wallaby, Dictyostelium discoideum, and a number of bacteria that cause serious infections (Rickettsia typhi, Enterococcus faecium, Mannheimia haemolytica, and Fusobacterium nucleatum). The BCM-HGSC was a major contributor to the Mammalian Gene Collection program to sequence all human cDNAs, as well as the International Haplotype Mapping Project (HapMap). Other research within the BCM-HGSC includes new molecular technologies for mapping and sequencing, novel chemistries for DNA tagging, instrumentation for DNA manipulation, new computer programs for genomic data analysis, the genes expressed in childhood leukemias, the genomic differences that lead to evolutionary changes, the role of host genetic variation in the course of infectious disease, and the molecular basis of specific genetic diseases. The sequencing for the Drosophila Genetic Reference Panel (DGRP) was performed here. The DGRP is a collaborative effort started by Trudy Mackay to establish a common standard for Drosophila melanogaster research. The HGSC has an active bioinformatics program, with research projects involving biologists and computer scientists. Problems under study focus on developing tools for generating, manipulating, and analyzing genome data. The BCM-HGSC is also involved with the Human Heredity and Health in Africa (H3Africa) Consortium. This collaboration resulted in a major study led by Neil Hanchard in which whole genome sequencing was performed on 426 individuals from 50 ethnolinguistic groups across Africa. As part of this study, more than 3 million previously undescribed variants were uncovered. References Human genome projects
Human Genome Sequencing Center
[ "Biology" ]
610
[ "Human genome projects", "Genome projects" ]
13,620,523
https://en.wikipedia.org/wiki/Cauchy%E2%80%93Hadamard%20theorem
In mathematics, the Cauchy–Hadamard theorem is a result in complex analysis named after the French mathematicians Augustin Louis Cauchy and Jacques Hadamard, describing the radius of convergence of a power series. It was published in 1821 by Cauchy, but remained relatively unknown until Hadamard rediscovered it. Hadamard's first publication of this result was in 1888; he also included it as part of his 1892 Ph.D. thesis. Theorem for one complex variable Consider the formal power series in one complex variable z of the form
f(z) = \sum_{n=0}^{\infty} c_n (z - a)^n
where a, c_n \in \mathbb{C}. Then the radius of convergence R of f at the point a is given by
\frac{1}{R} = \limsup_{n \to \infty} |c_n|^{1/n}
where \limsup denotes the limit superior, the limit as n approaches infinity of the supremum of the sequence values after the nth position. If the sequence values are unbounded so that the \limsup is ∞, then the power series does not converge near a, while if the \limsup is 0 then the radius of convergence is ∞, meaning that the series converges on the entire plane. Proof Without loss of generality assume that a = 0. We will show first that the power series \sum_n c_n z^n converges for |z| < R, and then that it diverges for |z| > R. First suppose |z| < R. Let t = 1/R and suppose t is not 0 or ±∞. For any \varepsilon > 0, there exists only a finite number of n such that |c_n|^{1/n} \geq t + \varepsilon. Now |c_n| \leq (t + \varepsilon)^n for all but a finite number of n, so the series \sum_n c_n z^n converges if |z| < 1/(t + \varepsilon). This proves the first part. Conversely, for \varepsilon > 0, |c_n| \geq (t - \varepsilon)^n for infinitely many n, so if |z| = 1/(t - \varepsilon) > R, we see that the series cannot converge because its nth term does not tend to 0. Theorem for several complex variables Let \alpha = (\alpha_1, \ldots, \alpha_n) be an n-dimensional vector of natural numbers with \|\alpha\| = \alpha_1 + \cdots + \alpha_n. Then the multidimensional power series
\sum_{\alpha \geq 0} c_\alpha (z - a)^\alpha
converges with radius of convergence \rho = (\rho_1, \ldots, \rho_n) (which is also an n-dimensional vector) if and only if
\limsup_{\|\alpha\| \to \infty} \sqrt[\|\alpha\|]{|c_\alpha| \rho^\alpha} = 1.
Proof Set z = a + t\rho (componentwise, z_i = a_i + t\rho_i). Then
\sum_{\alpha \geq 0} c_\alpha (z - a)^\alpha = \sum_{\alpha \geq 0} c_\alpha \rho^\alpha t^{\|\alpha\|} = \sum_{\mu = 0}^{\infty} \Big( \sum_{\|\alpha\| = \mu} c_\alpha \rho^\alpha \Big) t^\mu.
This is a power series in one variable t which converges absolutely for |t| < 1 and diverges for |t| > 1. Therefore, by the Cauchy–Hadamard theorem for one variable,
\limsup_{\mu \to \infty} \Big( \sum_{\|\alpha\| = \mu} |c_\alpha| \rho^\alpha \Big)^{1/\mu} = 1.
Setting A_\mu = \max_{\|\alpha\| = \mu} |c_\alpha| \rho^\alpha gives us an estimate
A_\mu \leq \sum_{\|\alpha\| = \mu} |c_\alpha| \rho^\alpha \leq \binom{\mu + n - 1}{n - 1} A_\mu.
Because \binom{\mu + n - 1}{n - 1}^{1/\mu} \to 1 as \mu \to \infty, therefore
\limsup_{\|\alpha\| \to \infty} \sqrt[\|\alpha\|]{|c_\alpha| \rho^\alpha} = 1.
Notes External links Augustin-Louis Cauchy Mathematical series Theorems in complex analysis
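As a quick numerical illustration of the one-variable theorem (not part of the article; the example coefficient sequences and truncation depths are assumptions), one can approximate 1/R = limsup |c_n|^{1/n} by taking the supremum of |c_n|^{1/n} over the tail of a truncated coefficient sequence:

```python
import math

def radius_estimate(coeffs, tail=0.5):
    """Crude estimate of the radius of convergence of sum c_n z^n by
    approximating limsup |c_n|^{1/n} with a supremum over the last part
    of the truncated coefficient sequence."""
    n0 = int(len(coeffs) * tail)         # ignore the early terms
    t = max(abs(c) ** (1.0 / n)
            for n, c in enumerate(coeffs) if n >= n0 and c != 0)
    return 1.0 / t

# c_n = 2^n  ->  limsup |c_n|^{1/n} = 2, so R = 1/2.
print(radius_estimate([2.0**n for n in range(200)]))                # ~0.5
# c_n = 1/n! ->  limsup is 0, so R is infinite; the estimate keeps
# growing as the truncation depth increases.
print(radius_estimate([1.0 / math.factorial(n) for n in range(150)]))
```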
Cauchy–Hadamard theorem
[ "Mathematics" ]
403
[ "Sequences and series", "Theorems in mathematical analysis", "Mathematical structures", "Series (mathematics)", "Calculus", "Theorems in complex analysis" ]
13,624,160
https://en.wikipedia.org/wiki/Churchill%E2%80%93Bernstein%20equation
In convective heat transfer, the Churchill–Bernstein equation is used to estimate the surface averaged Nusselt number for a cylinder in cross flow at various velocities. The need for the equation arises from the inability to solve the Navier–Stokes equations in the turbulent flow regime, even for a Newtonian fluid. When the concentration and temperature profiles are independent of one another, the mass-heat transfer analogy can be employed. In the mass-heat transfer analogy, heat transfer dimensionless quantities are replaced with analogous mass transfer dimensionless quantities. This equation is named after Stuart W. Churchill and M. Bernstein, who introduced it in 1977. This equation is also called the Churchill–Bernstein correlation. Heat transfer definition
\mathrm{Nu}_D = 0.3 + \frac{0.62\,\mathrm{Re}_D^{1/2}\,\mathrm{Pr}^{1/3}}{\left[1 + (0.4/\mathrm{Pr})^{2/3}\right]^{1/4}} \left[1 + \left(\frac{\mathrm{Re}_D}{282\,000}\right)^{5/8}\right]^{4/5} \qquad \mathrm{Re}_D\,\mathrm{Pr} \geq 0.2
where: Nu_D is the surface averaged Nusselt number with characteristic length of diameter; Re_D is the Reynolds number with the cylinder diameter as its characteristic length; Pr is the Prandtl number. The Churchill–Bernstein equation is valid for a wide range of Reynolds numbers and Prandtl numbers, as long as the product of the two is greater than or equal to 0.2, as defined above. The Churchill–Bernstein equation can be used for any object of cylindrical geometry in which boundary layers develop freely, without constraints imposed by other surfaces. Properties of the external free stream fluid are to be evaluated at the film temperature in order to account for the variation of the fluid properties at different temperatures. One should not expect much more than 20% accuracy from the above equation due to the wide range of flow conditions that the equation encompasses. The Churchill–Bernstein equation is a correlation and cannot be derived from principles of fluid dynamics. The equation yields the surface averaged Nusselt number, which is used to determine the average convective heat transfer coefficient. Newton's law of cooling (in the form of heat loss per surface area being equal to heat transfer coefficient multiplied by temperature gradient) can then be invoked to determine the heat loss or gain from the object, fluid and/or surface temperatures, and the area of the object, depending on what information is known. Mass transfer definition
\mathrm{Sh}_D = 0.3 + \frac{0.62\,\mathrm{Re}_D^{1/2}\,\mathrm{Sc}^{1/3}}{\left[1 + (0.4/\mathrm{Sc})^{2/3}\right]^{1/4}} \left[1 + \left(\frac{\mathrm{Re}_D}{282\,000}\right)^{5/8}\right]^{4/5} \qquad \mathrm{Re}_D\,\mathrm{Sc} \geq 0.2
where: Sh_D is the Sherwood number related to hydraulic diameter; Sc is the Schmidt number. Using the mass-heat transfer analogy, the Nusselt number is replaced by the Sherwood number, and the Prandtl number is replaced by the Schmidt number. The same restrictions described in the heat transfer definition are applied to the mass transfer definition. The Sherwood number can be used to find an overall mass transfer coefficient and applied to Fick's law of diffusion to find concentration profiles and mass transfer fluxes. See also Prandtl number Reynolds number Notes References Heat transfer Convection
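A direct implementation of the correlation is straightforward. The sketch below is illustrative, not from the article: the function name and the sample Re/Pr values are assumptions. It evaluates the surface-averaged Nusselt number and enforces the Re·Pr ≥ 0.2 validity check:

```python
def churchill_bernstein(re_d: float, pr: float) -> float:
    """Surface-averaged Nusselt number for a cylinder in cross flow.
    Valid (to within roughly 20%) for re_d * pr >= 0.2."""
    if re_d * pr < 0.2:
        raise ValueError("Churchill-Bernstein requires Re_D * Pr >= 0.2")
    return 0.3 + (0.62 * re_d**0.5 * pr**(1 / 3)) \
        / (1 + (0.4 / pr)**(2 / 3))**0.25 \
        * (1 + (re_d / 282_000)**(5 / 8))**0.8

# Example: air (Pr ~ 0.71) in cross flow over a cylinder at Re_D = 10_000.
nu = churchill_bernstein(10_000, 0.71)
print(f"Nu_D = {nu:.1f}")   # h = Nu * k / D then follows from Nu = h*D/k
```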
Churchill–Bernstein equation
[ "Physics", "Chemistry" ]
529
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Convection", "Thermodynamics" ]
13,625,345
https://en.wikipedia.org/wiki/Schr%C3%B6dinger%20field
In quantum mechanics and quantum field theory, a Schrödinger field, named after Erwin Schrödinger, is a quantum field which obeys the Schrödinger equation. While any situation described by a Schrödinger field can also be described by a many-body Schrödinger equation for identical particles, the field theory is more suitable for situations where the particle number changes. A Schrödinger field is also the classical limit of a quantum Schrödinger field, a classical wave which satisfies the Schrödinger equation. Unlike the quantum mechanical wavefunction, if there are interactions between the particles the equation will be nonlinear. These nonlinear equations describe the classical wave limit of a system of interacting identical particles. The path integral of a Schrödinger field is also known as a coherent state path integral, because the field itself is an annihilation operator whose eigenstates can be thought of as coherent states of the harmonic oscillations of the field modes. Schrödinger fields are useful for describing Bose–Einstein condensation, the Bogolyubov–de Gennes equation of superconductivity, superfluidity, and many-body theory in general. They are also a useful alternative formalism for nonrelativistic quantum mechanics. A Schrödinger field is the nonrelativistic limit of a Klein–Gordon field. Summary A Schrödinger field is a quantum field whose quanta obey the Schrödinger equation. In the classical limit, it can be understood as the classical wave equation of a Bose–Einstein condensate or a superfluid. Free field A Schrödinger field has the free field Lagrangian
\mathcal{L} = \psi^\dagger \left( i\partial_t + \frac{\nabla^2}{2m} \right) \psi.
When \psi is a complex valued field in a path integral, or equivalently an operator with canonical commutation relations, it describes a collection of identical non-relativistic bosons. When \psi is a Grassmann valued field, or equivalently an operator with canonical anti-commutation relations, the field describes identical fermions. External potential If the particles interact with an external potential V(x), the interaction makes a local contribution to the action:
S = \int_{t,x} \psi^\dagger \left( i\partial_t + \frac{\nabla^2}{2m} \right)\psi - V(x)\,\psi^\dagger\psi.
The field operators obey the Euler–Lagrange equations of motion, corresponding to the Schrödinger field Lagrangian density:
\mathcal{L} = \psi^\dagger \left( i\partial_t + \frac{\nabla^2}{2m} - V(x) \right)\psi
Yielding the Schrödinger equations of motion:
i\partial_t \psi = -\frac{\nabla^2}{2m}\psi + V(x)\,\psi
If the ordinary Schrödinger equation for V has known energy eigenstates \phi_i(x) with energies E_i, then the field in the action can be rotated into a diagonal basis by a mode expansion:
\psi(x) = \sum_i \psi_i\, \phi_i(x).
The action becomes:
S = \int_t \sum_i \psi_i^\dagger \left( i\partial_t - E_i \right) \psi_i
which is the position-momentum path integral for a collection of independent harmonic oscillators. To see the equivalence, note that decomposed into real and imaginary parts \psi_i = \psi_r + i\psi_I the action is:
S = \int_t \sum_i 2\psi_r\, \partial_t \psi_I - E_i \left( \psi_r^2 + \psi_I^2 \right)
after an integration by parts. Integrating over \psi_r gives the action
S = \int_t \sum_i \frac{(\partial_t \psi_I)^2}{E_i} - E_i \psi_I^2
which, rescaling \psi_I, is a harmonic oscillator action with frequency E_i. Pair potential When the particles interact with a pair potential V(x, y), the interaction is a nonlocal contribution to the action:
S = \int_{t,x} \psi^\dagger \left( i\partial_t + \frac{\nabla^2}{2m} \right)\psi - \frac{1}{2}\int_{t,x,y} \psi^\dagger(x)\psi^\dagger(y)\, V(x, y)\, \psi(y)\psi(x)
A pair-potential is the non-relativistic limit of a relativistic field coupled to electrodynamics. Ignoring the propagating degrees of freedom, the interaction between nonrelativistic electrons is the Coulomb repulsion. In 2+1 dimensions, this is:
V(x, y) = \frac{e^2}{|x - y|}
When coupled to an external potential to model classical positions of nuclei, a Schrödinger field with this pair potential describes nearly all of condensed matter physics. The exceptions are effects like superfluidity, where the quantum mechanical interference of nuclei is important, and inner shell electrons where the electron motion can be relativistic.
Nonlinear Schrödinger equation A special case of a delta-function interaction is widely studied, and is known as the nonlinear Schrödinger equation. Because the interactions always happen when two particles occupy the same point, the action for the nonlinear Schrödinger equation is local:
S = \int_{t,x} \psi^\dagger \left( i\partial_t + \frac{\nabla^2}{2m} \right)\psi - \frac{\lambda}{2} (\psi^\dagger \psi)^2
The interaction strength \lambda requires renormalization in dimensions higher than 2 and in two dimensions it has logarithmic divergence. In any dimensions, and even with power-law divergence, the theory is well defined. If the particles are fermions, the interaction vanishes. Many-body potentials The potentials can include many-body contributions. The interacting Lagrangian is then:
\mathcal{L}_{int} = \int_{x_1 \ldots x_n} \psi^\dagger(x_1) \cdots \psi^\dagger(x_n)\, V(x_1, \ldots, x_n)\, \psi(x_1) \cdots \psi(x_n)
These types of potentials are important in some effective descriptions of close-packed atoms. Higher order interactions are less and less important. Canonical formalism The canonical momentum associated with the field is
\Pi(x) = i\psi^\dagger(x).
The canonical commutation relations are like an independent harmonic oscillator at each point:
[\psi(x), \psi^\dagger(y)] = \delta(x - y).
The field Hamiltonian is
H = \int_x \frac{\nabla\psi^\dagger \cdot \nabla\psi}{2m} + V(x)\,\psi^\dagger\psi + \frac{1}{2}\int_y \psi^\dagger(x)\psi^\dagger(y)\, V(x, y)\, \psi(y)\psi(x)
and the field equation for any interaction is a nonlinear and nonlocal version of the Schrödinger equation. For pairwise interactions:
i\partial_t \psi(x) = -\frac{\nabla^2}{2m}\psi(x) + \left( \int_y V(x, y)\, \psi^\dagger(y)\psi(y) \right)\psi(x)
Perturbation theory The expansion in Feynman diagrams is called many-body perturbation theory. The propagator is
G(k, \omega) = \frac{1}{\omega - \frac{k^2}{2m} + i\epsilon}.
The interaction vertex is the Fourier transform of the pair-potential. In all the interactions, the number of incoming and outgoing lines is equal. Exposition Identical particles The many body Schrödinger equation for identical particles describes the time evolution of the many-body wavefunction ψ(x_1, x_2, ..., x_N) which is the probability amplitude for N particles to have the listed positions. The Schrödinger equation for ψ is:
i\partial_t \psi = \left( -\frac{\nabla_1^2}{2m} - \frac{\nabla_2^2}{2m} - \cdots - \frac{\nabla_N^2}{2m} + V(x_1, x_2, \ldots, x_N) \right)\psi
with Hamiltonian
H = \sum_{i=1}^{N} \frac{p_i^2}{2m} + V(x_1, x_2, \ldots, x_N).
Since the particles are indistinguishable, the wavefunction has some symmetry under switching positions. Either
\psi(\ldots, x_i, \ldots, x_j, \ldots) = \psi(\ldots, x_j, \ldots, x_i, \ldots) \quad \text{(bosons)},
or
\psi(\ldots, x_i, \ldots, x_j, \ldots) = -\psi(\ldots, x_j, \ldots, x_i, \ldots) \quad \text{(fermions)}.
Since the particles are indistinguishable, the potential V must be unchanged under permutations. If
V(x_1, \ldots, x_N) = V_1(x_1) + V_2(x_2) + \cdots + V_N(x_N)
then it must be the case that V_1 = V_2 = \cdots = V_N. If
V(x_1, \ldots, x_N) = \sum_{i<j} V_{i,j}(x_i, x_j)
then all the V_{i,j} must be the same function, and so on. In the Schrödinger equation formalism, the restrictions on the potential are ad-hoc, and the classical wave limit is hard to reach. It also has limited usefulness if a system is open to the environment, because particles might coherently enter and leave. Nonrelativistic Fock space A Schrödinger field is defined by extending the Hilbert space of states to include configurations with arbitrary particle number. A nearly complete basis for this set of states is the collection:
|N; x_1, \ldots, x_N\rangle
labeled by the total number of particles and their position. An arbitrary state with particles at separated positions is described by a superposition of states of this form. In this formalism, keep in mind that any two states whose positions can be permuted into each other are really the same, so the integration domains need to avoid double counting. Also keep in mind that the states with more than one particle at the same point have not yet been defined. The quantity \psi_0, the coefficient of the zero-particle configuration, is the amplitude that no particles are present, and its absolute square is the probability that the system is in the vacuum. In order to reproduce the Schrödinger description, the inner product on the basis states should be
\langle 1; x | 1; y \rangle = \delta(x - y)
and so on. Since the discussion is nearly formally identical for bosons and fermions, although the physical properties are different, from here on the particles will be bosons. There are natural operators in this Hilbert space. One operator, called \psi^\dagger(x), is the operator which introduces an extra particle at x. It is defined on each basis state:
\psi^\dagger(x)\, |N; x_1, \ldots, x_N\rangle = |N{+}1; x_1, \ldots, x_N, x\rangle
with slight ambiguity when a particle is already at x.
Another operator removes a particle at x, and is called \psi(x). This operator is the conjugate of the operator \psi^\dagger(x). Because \psi^\dagger(x) has no matrix elements which connect to states with no particle at x, \psi(x) must give zero when acting on such a state. The position basis is an inconvenient way to understand coincident particles because states with a particle localized at one point have infinite energy, so intuition is difficult. In order to see what happens when two particles are at exactly the same point, it is mathematically simplest either to make space into a discrete lattice, or to Fourier transform the field in a finite volume. The operator
\psi^\dagger_k = \int_x e^{ikx}\, \psi^\dagger(x)
creates a superposition of one particle states in a plane wave state with momentum k, in other words, it produces a new particle with momentum k. The operator
\psi_k = \int_x e^{-ikx}\, \psi(x)
annihilates a particle with momentum k. If the potential energy for interaction of infinitely distant particles vanishes, the Fourier transformed operators in infinite volume create states which are noninteracting. The states are infinitely spread out, and the chance that the particles are nearby is zero. The matrix elements for the operators between non-coincident points reconstruct the matrix elements of the Fourier transform between all modes:
[\psi_k, \psi^\dagger_{k'}] = \delta(k - k')
where the delta function is either the Dirac delta function or the Kronecker delta, depending on whether the volume is infinite or finite. The commutation relations now determine the operators completely, and when the spatial volume is finite, there is no conceptual hurdle to understand coinciding momenta because momenta are discrete. In a discrete momentum basis, the basis states are:
|n_1, n_2, n_3, \ldots\rangle
where the n's are the number of particles at each momentum. For fermions and anyons, the number of particles at any momentum is always either zero or one. The operators have harmonic-oscillator like matrix elements between states, independent of the interaction:
\psi^\dagger_k |\ldots, n_k, \ldots\rangle = \sqrt{n_k + 1}\, |\ldots, n_k + 1, \ldots\rangle, \qquad \psi_k |\ldots, n_k, \ldots\rangle = \sqrt{n_k}\, |\ldots, n_k - 1, \ldots\rangle
So that the operator
N = \sum_k \psi^\dagger_k \psi_k
counts the total number of particles. Now it is easy to see that the matrix elements of \psi(x) and \psi^\dagger(x) have harmonic oscillator commutation relations too. So that there really is no difficulty with coincident particles in position space. The operator \psi^\dagger(x)\psi(x), which removes and replaces a particle, acts as a sensor to detect if a particle is present at x. The operator \psi^\dagger(x)\nabla\psi(x) acts to multiply the state by the gradient of the many body wavefunction. The operator
H = \int_x \psi^\dagger(x) \left( -\frac{\nabla^2}{2m} \right)\psi(x) + V(x)\,\psi^\dagger(x)\psi(x)
acts to reproduce the right hand side of the Schrödinger equation when acting on any basis state, so that
i\partial_t \psi = -\frac{\nabla^2}{2m}\psi + V(x)\,\psi
holds as an operator equation. Since this is true for an arbitrary state, it is also true without the state vector. To add interactions, add nonlinear terms in the field equations. The field form automatically ensures that the potentials obey the restrictions from symmetry. Field Hamiltonian The field Hamiltonian which reproduces the equations of motion is
H = \int_x \frac{\nabla\psi^\dagger \cdot \nabla\psi}{2m} + V(x)\,\psi^\dagger\psi + \frac{1}{2}\int_y \psi^\dagger(x)\psi^\dagger(y)\, V(x, y)\, \psi(y)\psi(x).
The Heisenberg equations of motion for this operator reproduce the equation of motion for the field. To find the classical field Lagrangian, apply a Legendre transform to the classical limit of the Hamiltonian. Although this is correct classically, the quantum mechanical transformation is not completely conceptually straightforward because the path integral is over eigenvalues of operators ψ which are not hermitian and whose eigenvectors are not orthogonal. The path integral over field states therefore seems naively to be overcounting. This is not the case, because the time derivative term in L includes the overlap between the different field states.
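The harmonic-oscillator-like matrix elements above are easy to verify numerically for a single mode in a truncated Fock space. The sketch below is a minimal illustration, not from the source: the truncation dimension is an assumption, and truncation necessarily spoils the commutator in the highest retained state. It builds the matrices of ψ_k and ψ_k† and checks the number operator and the canonical commutator:

```python
import numpy as np

# Single-mode truncated Fock space |0>, |1>, ..., |d-1>.
d = 8
a = np.zeros((d, d))             # annihilation operator psi_k
for n in range(1, d):
    a[n - 1, n] = np.sqrt(n)     # psi_k |n> = sqrt(n) |n-1>
adag = a.T                       # creation operator psi_k^dagger

num = adag @ a                   # number operator for the mode
comm = a @ adag - adag @ a       # canonical commutator [psi, psi^dagger]

print(np.diag(num))              # [0, 1, 2, ..., d-1]
print(np.diag(comm))             # 1 everywhere except the truncated top state
```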
Relation to Klein–Gordon field The non-relativistic limit as c \to \infty of any Klein–Gordon field is two Schrödinger fields, representing the particle and anti-particle. For clarity, all units and constants are preserved in this derivation. From the momentum space annihilation operators of the relativistic field, one defines slowly varying operators, such that the rapidly oscillating rest-energy phase e^{-imc^2 t/\hbar} is made explicit. Defining two "non-relativistic" fields A and B, one built from the particle modes and one from the antiparticle modes, which factor out a rapidly oscillating phase due to the rest mass plus a vestige of the relativistic measure, the Lagrangian density becomes a sum of Schrödinger-type terms plus corrections, where terms proportional to the rapidly oscillating phases are represented with ellipses and disappear in the non-relativistic limit. When the four-gradient is expanded, the total divergence is ignored and terms proportional to 1/c^2 also disappear in the non-relativistic limit. After an integration by parts, the final Lagrangian takes the form
\mathcal{L} = A^\dagger \left( i\hbar\,\partial_t + \frac{\hbar^2 \nabla^2}{2m} \right) A + B^\dagger \left( i\hbar\,\partial_t + \frac{\hbar^2 \nabla^2}{2m} \right) B.
Notes References Field Quantum field theory
Schrödinger field
[ "Physics" ]
2,343
[ "Quantum field theory", "Equations of physics", "Eponymous equations of physics", "Quantum mechanics", "Schrödinger equation" ]
13,629,438
https://en.wikipedia.org/wiki/U4atac%20minor%20spliceosomal%20RNA
U4atac minor spliceosomal RNA is a ncRNA which is an essential component of the minor U12-type spliceosome complex. The U12-type spliceosome is required for removal of the rarer class of eukaryotic introns (AT-AC, U12-type). U4atac snRNA is proposed to form a base-paired complex with another spliceosomal RNA U6atac via two stem loop regions. These interacting stem loops have been shown to be required for in vivo splicing. U4atac also contains a 3' Sm protein binding site which has been shown to be essential for splicing activity. U4atac is the functional analog of U4 spliceosomal RNA in the major U2-type spliceosomal complex. The Drosophila U4atac snRNA has an additional predicted 3' stem loop terminal to the Sm binding site. Disease It has been shown that mutations in the U4atac snRNA can cause microcephalic osteodysplastic primordial dwarfism type I (MOPD I), also called Taybi-Linder syndrome (TALS). MOPD I is a developmental disorder that is associated with brain and skeletal abnormalities. It has been shown that the mutations cause defective U12 splicing. References External links Non-coding RNA Spliceosome RNA splicing
U4atac minor spliceosomal RNA
[ "Chemistry" ]
296
[ "Biochemistry stubs", "Molecular and cellular biology stubs" ]
13,629,713
https://en.wikipedia.org/wiki/Applicability%20domain
The applicability domain (AD) of a QSAR model, a concept used in both chemistry and machine learning, is the physico-chemical, structural or biological space, knowledge or information on which the training set of the model has been developed, and within which the model is applicable to make predictions for new compounds. The purpose of the AD is to state whether the model's assumptions are met, and for which chemicals the model is reliably applicable. In general, this is the case for interpolation rather than for extrapolation. Up to now there is no single generally accepted algorithm for determining the AD: a comprehensive survey can be found in a Report and Recommendations of ECVAM Workshop 52. There exists a rather systematic approach for defining interpolation regions. The process involves the removal of outliers and a probability density distribution method using kernel-weighted sampling. Another widely used approach for the structural AD of regression QSAR models is based on the leverage calculated from the diagonal values of the hat matrix of the modeling molecular descriptors. A recent rigorous benchmarking study of several AD algorithms identified the standard deviation of model predictions as the most reliable approach. To investigate the AD of a training set of chemicals one can directly analyse properties of the multivariate descriptor space of the training compounds, or more indirectly via distance (or similarity) metrics. When using distance metrics, care should be taken to use an orthogonal and significant vector space. This can be achieved by different means of feature selection and successive principal components analysis. Notes Cheminformatics Medicinal chemistry Drug discovery
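The leverage approach mentioned above is simple to implement. The sketch below is illustrative only: the random descriptor data, the function names and the warning threshold h* = 3(p+1)/n are assumptions, the threshold being one common convention rather than a rule fixed by the text. It computes leverages as the diagonal of the hat matrix H = X(XᵀX)⁻¹Xᵀ and flags query compounds that fall outside the domain:

```python
import numpy as np

def leverages(X_train, X_query):
    """Leverage h = x^T (X^T X)^{-1} x for each query descriptor vector."""
    G = np.linalg.inv(X_train.T @ X_train)    # (p x p) inverse Gram matrix
    return np.einsum("ij,jk,ik->i", X_query, G, X_query)

rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.normal(size=(n, p))                   # training-set descriptors
h_star = 3 * (p + 1) / n                      # conventional warning threshold

queries = np.vstack([rng.normal(size=p),      # interpolation: in-domain
                     rng.normal(size=p) + 8]) # far outside the training space
for h in leverages(X, queries):
    print(f"h = {h:.3f} -> {'inside' if h <= h_star else 'OUTSIDE'} the AD")
```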
Applicability domain
[ "Chemistry", "Biology" ]
322
[ "Biochemistry", "Life sciences industry", "Drug discovery", "Medicinal chemistry stubs", "Biochemistry stubs", "Computational chemistry", "nan", "Medicinal chemistry", "Cheminformatics" ]
11,071,463
https://en.wikipedia.org/wiki/Entropy%20rate
In the mathematical theory of probability, the entropy rate or source information rate is a function assigning an entropy to a stochastic process. For a strongly stationary process, the conditional entropy of the latest random variable eventually tends towards this rate value. Definition A process X with a countable index gives rise to the sequence of its joint entropies H_n(X) := H(X_1, X_2, \ldots, X_n). If the limit exists, the entropy rate is defined as
H(X) := \lim_{n \to \infty} \tfrac{1}{n} H_n(X).
Note that given any sequence (a_n)_n with a_0 = 0 and letting \Delta a_k := a_k - a_{k-1}, by telescoping one has a_n = \sum_{k=1}^{n} \Delta a_k. The entropy rate thus computes the mean of the first n such entropy changes, with n going to infinity. The behaviour of joint entropies from one index to the next is also the explicit subject of some characterizations of entropy. Discussion While X may be understood as a sequence of random variables, the entropy rate H(X) represents the average entropy change per one random variable, in the long term. It can be thought of as a general property of stochastic sources - this is the subject of the asymptotic equipartition property. For strongly stationary processes A stochastic process also gives rise to a sequence of conditional entropies, comprising more and more random variables. For strongly stationary stochastic processes, the entropy rate equals the limit of that sequence
H(X) = \lim_{n \to \infty} H(X_n \mid X_{n-1}, X_{n-2}, \ldots, X_1).
The quantity given by the limit on the right is also denoted H'(X), which is motivated to the extent that here this is then again a rate associated with the process, in the above sense. For Markov chains Since a stochastic process defined by a Markov chain that is irreducible, aperiodic and positive recurrent has a stationary distribution, the entropy rate is independent of the initial distribution. For example, consider a Markov chain defined on a countable number of states. Given its right stochastic transition matrix P_{ij} and an entropy
h_i := -\sum_j P_{ij} \log P_{ij}
associated with each state i, one finds
H(X) = \sum_i \mu_i h_i
where \mu_i is the asymptotic distribution of the chain. In particular, it follows that the entropy rate of an i.i.d. stochastic process is the same as the entropy of any individual member in the process. For hidden Markov models The entropy rate of hidden Markov models (HMM) has no known closed-form solution. However, it has known upper and lower bounds. Let the underlying Markov chain X_{1:\infty} be stationary, and let Y_{1:\infty} be the observable states, then we have
H(Y_n \mid Y_{1:n-1}, X_1) \leq H(Y) \leq H(Y_n \mid Y_{1:n-1})
and at the limit of n \to \infty, both sides converge to the middle. Applications The entropy rate may be used to estimate the complexity of stochastic processes. It is used in diverse applications ranging from characterizing the complexity of languages, blind source separation, through to optimizing quantizers and data compression algorithms. For example, a maximum entropy rate criterion may be used for feature selection in machine learning. See also Information source (mathematics) Markov information source Asymptotic equipartition property Maximal entropy random walk - chosen to maximize entropy rate References Cover, T. and Thomas, J. (1991) Elements of Information Theory, John Wiley and Sons, Inc., Information theory Entropy Markov models Temporal rates
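The Markov-chain formula above is a two-liner once the stationary distribution is known. This sketch is illustrative, not from the article: the example transition matrix is an assumption. It finds μ as the left eigenvector of P for eigenvalue 1 and evaluates H = −Σ_i μ_i Σ_j P_ij log₂ P_ij:

```python
import numpy as np

def entropy_rate(P):
    """Entropy rate in bits per step of an irreducible, aperiodic Markov
    chain with right stochastic transition matrix P (rows sum to 1)."""
    vals, vecs = np.linalg.eig(P.T)
    mu = np.real(vecs[:, np.argmax(np.real(vals))])  # stationary distribution
    mu = mu / mu.sum()                               # normalize (fixes sign)
    logP = np.zeros_like(P)
    np.log2(P, where=(P > 0), out=logP)              # convention 0*log 0 = 0
    return float(-(mu[:, None] * P * logP).sum())

# Two-state chain: stay with probability 0.9, switch with probability 0.1.
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
print(entropy_rate(P))   # binary entropy H(0.1), about 0.469 bits per step
```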
Entropy rate
[ "Physics", "Chemistry", "Mathematics", "Technology", "Engineering" ]
612
[ "Temporal quantities", "Thermodynamic properties", "Telecommunications engineering", "Physical quantities", "Applied mathematics", "Quantity", "Temporal rates", "Computer science", "Entropy", "Information theory", "Asymmetry", "Wikipedia categories named after physical quantities", "Symmetry...
11,074,125
https://en.wikipedia.org/wiki/Abradable%20coating
An abradable coating is a coating made of an abradable material – meaning that if it rubs against a more abrasive material in motion, the former will be worn whereas the latter will face no wear. Abradable coatings provide a 0.1 to 0.2% performance improvement compared to engines without coating. Abradable coatings are used in aircraft jet engines in the compressor and turbine sections, where a minimal clearance is needed between the blade tips and the casing. Abradable coatings have been in use by aero-engine manufacturers in some form or fashion for roughly 50 years. Abradable powder coatings provide an economical and environmentally friendly way to improve the efficiency of engines, compressors and pumps by fine-tuning the operational fit of internal components such as pistons, rotors and cases. In typical turbomachinery, the clearance between blade tips and the casing must account for thermal and inertial expansion as well as changes in concentricity due to shock loading events. To prevent catastrophic tip-to-casing contact, conservatively large clearances must be employed. In small turboprop aircraft, the angle at which the abradable coating is applied is constrained by the need to perform the coating process at spray angles of less than 60 degrees. The role of abradable coatings is not only to allow for closer clearances, but to automatically adjust clearances, in situ, to accommodate physical events and/or thermal scenarios that may occur in a device's operational history. Manufacturing Thermal spray: many techniques are available (plasma, flame, etc.). Sintering: honeycomb coatings are sintered onto the casing. Casting: in the case of polymer coatings. References Gas turbine technology
Abradable coating
[ "Physics" ]
346
[ "Materials stubs", "Materials", "Matter" ]
11,074,998
https://en.wikipedia.org/wiki/Mooring%20%28oceanography%29
A mooring in oceanography is a collection of devices connected to a wire and anchored on the sea floor. It is the Eulerian way of measuring ocean currents, since a mooring is stationary at a fixed location. In contrast to that, the Lagrangian way measures the motion of an oceanographic drifter, the Lagrangian drifter. Construction principle The mooring is held up in the water column with various forms of buoyancy such as glass balls and syntactic foam floats. The attached instrumentation is wide-ranging but often includes CTDs (conductivity, temperature, depth sensors), current meters (e.g. acoustic Doppler current profilers or deprecated rotor current meters), and biological sensors to measure various parameters. Long-term moorings can be deployed for durations of two years or more, powered with alkaline or lithium battery packs. Components Top buoy Surface buoys Moorings often include surface buoys that transmit real-time data back to shore. The traditional approach is to use the Argos System. Alternatively, one may use the commercial Iridium satellites, which allow higher data rates. Submerged buoys In deeper waters, areas covered by sea ice, areas within or near shipping lanes, or areas that are prone to theft or vandalism, moorings are often submerged with no surface markers. Submerged moorings typically use an acoustic release or a timed release that connects the mooring to an anchor weight on the sea floor. The release is triggered by sending a coded acoustic command signal; the anchor weight stays on the sea floor. Deep water anchors are typically made from steel and may be as large as 100 kg. A common deep water anchor consists of a stack of 2–4 railroad wheels. In shallow waters anchors may consist of a concrete block or a small portable anchor. The buoyancy of the floats, i.e. of the top buoy plus additional packs of glass balls or foam, is sufficient to carry the instruments back to the surface. In order to avoid entangled ropes, it has been practical to place additional floats directly above each instrument. Instrument housing Prawlers Prawlers (profiling crawlers) are sensor bodies which climb and descend the cable, to observe multiple depths. The energy to move is "free," harnessed by ratcheting upward via wave energy, then returning downward via gravity. Depth correction Similar to a kite in the wind, the mooring line will follow a so-called (half-)catenary. The influence of currents (and wind if the top buoy is above the sea surface) can be modeled and the shape of the mooring line can be determined by software. If the currents are strong (above 0.1 m/s) and the mooring lines are long (more than 1 km), the instrument position may vary by up to 50 m. See also Benthic lander, a mooring which does not have any mooring line References Oceanography Physical oceanography Oceanographic instrumentation Ocean currents Biological oceanography
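The depth-correction software mentioned above solves the full catenary problem; as a rough illustration of the effect, the following back-of-envelope sketch lumps all drag on the line into one horizontal force and tilts the mooring as a single rigid segment. Every number and the one-segment model itself are invented assumptions for illustration, not values or methods from the article.

import math

# Illustrative assumptions only.
L   = 2000.0   # mooring line length (m)
U   = 0.1      # depth-averaged current speed (m/s)
d   = 0.01     # effective line/instrument diameter (m)
Cd  = 1.2      # order-of-magnitude drag coefficient for a cylinder
rho = 1025.0   # seawater density (kg/m^3)
B   = 2000.0   # net buoyancy force of the flotation (N)

# Total drag on the line, lumped into a single horizontal force.
F_drag = 0.5 * rho * Cd * d * L * U**2

# Crude single-segment balance: the line tilts until the vertical pull of
# the buoyancy balances the horizontal drag.
theta = math.atan2(F_drag, B)

excursion = L * math.sin(theta)          # horizontal displacement of top buoy
knockdown = L * (1.0 - math.cos(theta))  # apparent sinking of the instruments
print(f"tilt {math.degrees(theta):.1f} deg, "
      f"excursion {excursion:.0f} m, knockdown {knockdown:.1f} m")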
Mooring (oceanography)
[ "Physics", "Chemistry", "Technology", "Engineering", "Environmental_science" ]
623
[ "Ocean currents", "Hydrology", "Oceanographic instrumentation", "Applied and interdisciplinary physics", "Oceanography", "Measuring instruments", "Physical oceanography", "Fluid dynamics" ]
11,076,807
https://en.wikipedia.org/wiki/C%C3%A9a%27s%20lemma
Céa's lemma is a lemma in mathematics. Introduced by Jean Céa in his Ph.D. dissertation, it is an important tool for proving error estimates for the finite element method applied to elliptic partial differential equations. Lemma statement Let V be a real Hilbert space with the norm ‖·‖. Let a : V × V → ℝ be a bilinear form with the properties |a(v, w)| ≤ γ‖v‖‖w‖ for some constant γ > 0 and all v, w in V (continuity), and a(v, v) ≥ α‖v‖² for some constant α > 0 and all v in V (coercivity or V-ellipticity). Let L : V → ℝ be a bounded linear operator. Consider the problem of finding an element u in V such that a(u, v) = L(v) for all v in V. Consider the same problem on a finite-dimensional subspace V_h of V; so, u_h in V_h satisfies a(u_h, v) = L(v) for all v in V_h. By the Lax–Milgram theorem, each of these problems has exactly one solution. Céa's lemma states that ‖u − u_h‖ ≤ (γ/α) ‖u − v‖ for all v in V_h. That is to say, the subspace solution u_h is "the best" approximation of u in V_h, up to the constant γ/α. The proof is straightforward: α‖u − u_h‖² ≤ a(u − u_h, u − u_h) = a(u − u_h, u − v) + a(u − u_h, v − u_h) = a(u − u_h, u − v) ≤ γ‖u − u_h‖ ‖u − v‖ for all v in V_h. We used the a-orthogonality of u − u_h and V_h, which follows directly from a(u − u_h, v) = 0 for all v in V_h. Note: Céa's lemma holds on complex Hilbert spaces also; one then uses a sesquilinear form instead of a bilinear one. The coercivity assumption then becomes |a(v, v)| ≥ α‖v‖² for all v in V (notice the absolute value sign around a(v, v)). Error estimate in the energy norm In many applications, the bilinear form a is symmetric, so a(v, w) = a(w, v) for all v, w in V. This, together with the above properties of this form, implies that a(·, ·) is an inner product on V. The resulting norm ‖v‖_a = √(a(v, v)) is called the energy norm, since it corresponds to a physical energy in many problems. This norm is equivalent to the original norm ‖·‖. Using the a-orthogonality of u − u_h and V_h and the Cauchy–Schwarz inequality, ‖u − u_h‖_a² = a(u − u_h, u − v) ≤ ‖u − u_h‖_a ‖u − v‖_a for all v in V_h. Hence, in the energy norm, the inequality in Céa's lemma becomes ‖u − u_h‖_a ≤ ‖u − v‖_a for all v in V_h (notice that the constant on the right-hand side is no longer present). This states that the subspace solution u_h is the best approximation to the full-space solution u with respect to the energy norm. Geometrically, this means that u_h is the projection of the solution u onto the subspace V_h with respect to the inner product a(·, ·) (see the adjacent picture). Using this result, one can also derive a sharper estimate in the norm ‖·‖. Since α‖u − u_h‖² ≤ ‖u − u_h‖_a² ≤ ‖u − v‖_a² ≤ γ‖u − v‖² for all v in V_h, it follows that ‖u − u_h‖ ≤ √(γ/α) ‖u − v‖ for all v in V_h. An application of Céa's lemma We will apply Céa's lemma to estimate the error of calculating the solution to an elliptic differential equation by the finite element method. Consider the problem of finding a function u : [a, b] → ℝ satisfying the conditions −u″ = f on (a, b) and u(a) = u(b) = 0, where f is a given continuous function. Physically, the solution u to this two-point boundary value problem represents the shape taken by a string under the influence of a force such that at every point x between a and b the force density is f(x)e (where e is a unit vector pointing vertically, while the endpoints of the string are on a horizontal line, see the adjacent picture). For example, that force may be the gravity, when f is a constant function (since the gravitational force is the same at all points). Let the Hilbert space V be the Sobolev space H¹₀(a, b), which is the space of all square-integrable functions v defined on [a, b] that have a weak derivative v′ on [a, b] with v′ also being square integrable, and v satisfies the conditions v(a) = v(b) = 0. The inner product on this space is (v, w) = ∫ₐᵇ (v(x)w(x) + v′(x)w′(x)) dx for all v and w in V. After multiplying the original boundary value problem by v in this space and performing an integration by parts, one obtains the equivalent problem a(u, v) = L(v) for all v in V, with a(u, v) = ∫ₐᵇ u′(x)v′(x) dx, and L(v) = ∫ₐᵇ f(x)v(x) dx. It can be shown that the bilinear form a(·, ·) and the operator L satisfy the assumptions of Céa's lemma.
In order to determine a finite-dimensional subspace V_h of V, consider a partition a = x₀ < x₁ < ⋯ < xₙ = b of the interval [a, b], and let V_h be the space of all continuous functions that are affine on each subinterval in the partition (such functions are called piecewise-linear). In addition, assume that any function in V_h takes the value 0 at the endpoints of [a, b]. It follows that V_h is a vector subspace of V whose dimension is n − 1 (the number of points in the partition that are not endpoints). Let u_h be the solution to the subspace problem a(u_h, v) = L(v) for all v in V_h, so one can think of u_h as a piecewise-linear approximation to the exact solution u. By Céa's lemma, there exists a constant C > 0 dependent only on the bilinear form a(·, ·) such that ‖u − u_h‖ ≤ C ‖u − v‖ for all v in V_h. To explicitly calculate the error between u and u_h, consider the function πu in V_h that has the same values as u at the nodes of the partition (so πu is obtained by linear interpolation on each interval [xᵢ, xᵢ₊₁] from the values of u at the interval's endpoints). It can be shown using Taylor's theorem that there exists a constant K that depends only on the endpoints a and b, such that ‖u′ − (πu)′‖ ≤ K h ‖u″‖ for all u with square-integrable second derivative, where h is the largest length of the subintervals [xᵢ, xᵢ₊₁] in the partition, and the norm on the right-hand side is the L2 norm. This inequality then yields an estimate for the error ‖u − u_h‖. Then, by substituting v = πu in Céa's lemma it follows that ‖u − u_h‖ ≤ C h ‖u″‖, where C is a different constant from the above (it depends only on the bilinear form, which implicitly depends on the interval [a, b]). This result is of fundamental importance, as it states that the finite element method can be used to approximately calculate the solution of our problem, and that the error in the computed solution decreases proportionally to the partition size h. Céa's lemma can be applied along the same lines to derive error estimates for finite element problems in higher dimensions (here the domain of u was one-dimensional), and while using higher order polynomials for the subspace V_h. References (Original work from J. Céa) Numerical differential equations Hilbert spaces Lemmas in analysis
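As an illustration of the error estimate just derived, the following sketch (assuming NumPy is available) solves −u″ = f on (0, 1) with piecewise-linear finite elements for the test problem f = π² sin(πx), whose exact solution is u = sin(πx), and checks that the energy-norm error shrinks roughly in proportion to h. The lumped load vector below is a simplification of the exact integrals, not part of the lemma itself.

import numpy as np

def fem_solve(n):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # Stiffness matrix of the bilinear form a(u, v) = int u' v' dx
    # on the hat-function basis: tridiagonal with 2/h and -1/h entries.
    K = (np.diag(2.0 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h
    # Lumped load vector b_i ~ int f phi_i dx ~ h * f(x_i).
    b = h * np.pi**2 * np.sin(np.pi * x[1:-1])
    u_h = np.zeros(n + 1)
    u_h[1:-1] = np.linalg.solve(K, b)  # boundary values stay zero
    return x, u_h

for n in (8, 16, 32, 64):
    x, u_h = fem_solve(n)
    # Energy-norm error: compare derivatives elementwise, with the exact
    # derivative evaluated at element midpoints.
    du_exact = np.pi * np.cos(np.pi * (x[:-1] + x[1:]) / 2)
    du_h = np.diff(u_h) * n
    err = np.sqrt(np.sum((du_exact - du_h) ** 2) / n)
    print(n, err)  # err should drop by about half each time n doubles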
Céa's lemma
[ "Physics", "Mathematics" ]
1,137
[ "Theorems in mathematical analysis", "Quantum mechanics", "Lemmas in mathematical analysis", "Hilbert spaces", "Lemmas" ]
11,081,803
https://en.wikipedia.org/wiki/Phase%20retrieval
Phase retrieval is the process of algorithmically finding solutions to the phase problem. Given a complex spectrum F(k), of amplitude |F(k)| and phase ψ(k), so that F(k) = |F(k)| e^{iψ(k)}, where x is an M-dimensional spatial coordinate and k is an M-dimensional spatial frequency coordinate, phase retrieval consists of finding the phase that satisfies a set of constraints for a measured amplitude. Important applications of phase retrieval include X-ray crystallography, transmission electron microscopy and coherent diffractive imaging, for which M = 2. Uniqueness theorems for both 1-D and 2-D cases of the phase retrieval problem, including the phaseless 1-D inverse scattering problem, were proven by Klibanov and his collaborators (see References). Problem formulation Here we consider the 1-D discrete Fourier transform (DFT) phase retrieval problem: recover a signal from the magnitudes of its oversampled DFT, i.e. its DFT evaluated on a frequency grid finer than the signal length. Since the DFT operator is bijective, this is equivalent to recovering the phase. It is common to recover a signal from its autocorrelation sequence instead of its Fourier magnitude: denoting by x̄ the vector x after padding with zeros, the DFT of the autocorrelation sequence of x̄ equals the squared Fourier magnitude of x. Methods Error reduction algorithm The error reduction algorithm is a generalization of the Gerchberg–Saxton algorithm. It solves for f(x) from measurements of |F(u)| by iterating a four-step process. For the k-th iteration the steps are as follows: Step (1): G_k(u), φ_k(u) and g_k(x) are estimates of, respectively, F(u), ψ(u) and f(x). In the first step we calculate the Fourier transform of g_k(x): G_k(u) = FT[g_k(x)]. Step (2): The experimental value of |F(u)|, calculated from the diffraction pattern via the signal equation, is then substituted for |G_k(u)|, giving an estimate of the Fourier transform: G′_k(u) = |F(u)| e^{iφ_k(u)}, where the ′ denotes an intermediate result that will be discarded later on. Step (3): the estimate of the Fourier transform is then inverse Fourier transformed: g′_k(x) = IFT[G′_k(u)]. Step (4): g′_k(x) then must be changed so that the new estimate of the object, g_{k+1}(x), satisfies the object constraints. g_{k+1}(x) is therefore defined piecewise as g_{k+1}(x) = g′_k(x) for x ∉ γ and g_{k+1}(x) = 0 for x ∈ γ, where γ is the domain in which g′_k(x) does not satisfy the object constraints. A new estimate g_{k+1}(x) is obtained and the four-step process is repeated. This process is continued until both the Fourier constraint and object constraint are satisfied. Theoretically, the process will always lead to convergence, but the large number of iterations needed to produce a satisfactory image (generally >2000) results in the error-reduction algorithm by itself being unsuitable for practical applications. Hybrid input-output algorithm The hybrid input-output algorithm is a modification of the error-reduction algorithm - the first three stages are identical. However, g_k(x) no longer acts as an estimate of f(x), but as the input function corresponding to the output function g′_k(x), which is an estimate of f(x). In the fourth step, when the function g′_k(x) violates the object constraints, the value of g_{k+1}(x) is forced towards zero, but optimally not to zero: g_{k+1}(x) = g′_k(x) for x ∉ γ and g_{k+1}(x) = g_k(x) − βg′_k(x) for x ∈ γ. The chief advantage of the hybrid input-output algorithm is that the function g_k(x) contains feedback information concerning previous iterations, reducing the probability of stagnation. It has been shown that the hybrid input-output algorithm converges to a solution significantly faster than the error reduction algorithm. Its convergence rate can be further improved through step size optimization algorithms. Here β is a feedback parameter which can take a value between 0 and 1.
For most applications, a feedback value of β ≈ 0.9 gives optimal results (Scientific Reports 8, 6436 (2018)). Shrinkwrap For a two-dimensional phase retrieval problem, there is a degeneracy of solutions, as f(x) and its conjugate mirror image f*(−x) have the same Fourier modulus. This leads to "image twinning", in which the phase retrieval algorithm stagnates, producing an image with features of both the object and its conjugate. The shrinkwrap technique periodically updates the estimate of the support by low-pass filtering the current estimate of the object amplitude (by convolution with a Gaussian) and applying a threshold, leading to a reduction in the image ambiguity. Semidefinite relaxation-based algorithm for short time Fourier transform Phase retrieval is an ill-posed problem. To uniquely identify the underlying signal, besides methods that add prior information, like the Gerchberg–Saxton algorithm, another way is to add magnitude-only measurements, like those of the short time Fourier transform (STFT). The method introduced below is mainly based on the work of Jaganathan et al. Short time Fourier transform Given a discrete signal x sampled from an underlying signal, we use a window w of length W to compute the STFT of x; one parameter denotes the separation in time between adjacent short-time sections, and another denotes the number of short-time sections considered. Another interpretation (the sliding window interpretation) of the STFT can be given with the help of the discrete Fourier transform (DFT): letting w̃ denote the window element obtained from the shifted and flipped window w, the STFT samples are DFTs of the signal multiplied by shifted copies of w̃. Problem definition Let the measurements be the magnitude-square of the STFT of x. STFT phase retrieval can then be stated as: find a signal consistent with all of these magnitude measurements, each of which involves a column of the inverse DFT matrix. Intuitively, a computational complexity that grows with the number of measurements makes the method impractical. In fact, however, in most practical cases we only need to consider a subset of the measurements. More specifically, if both the signal and the window are nowhere vanishing, that is, x[n] ≠ 0 for all n and w[n] ≠ 0 for all n, the signal x can be uniquely identified from its STFT magnitude provided adjacent short-time sections overlap sufficiently. The proof can be found in Jaganathan's work, which reformulates STFT phase retrieval as a least-squares problem. The algorithm, although lacking theoretical recovery guarantees, is empirically able to converge to the global minimum when there is substantial overlap between adjacent short-time sections. Semidefinite relaxation-based algorithm To establish recovery guarantees, one way is to formulate the problem as a semidefinite program (SDP), by embedding it in a higher-dimensional space using the lifting X = xx* and relaxing the rank-one constraint to obtain a convex program. Once the relaxed problem is solved for X, we can recover the signal x by a best rank-one approximation. Applications Phase retrieval is a key component of coherent diffraction imaging (CDI). In CDI, the intensity of the diffraction pattern scattered from a target is measured. The phase of the diffraction pattern is then obtained using phase retrieval algorithms, and an image of the target is constructed. In this way, phase retrieval allows for the conversion of a diffraction pattern into an image without an optical lens.
Using phase retrieval algorithms, it is possible to characterize complex optical systems and their aberrations. For example, phase retrieval was used to diagnose and repair the flawed optics of the Hubble Space Telescope. Other applications of phase retrieval include X-ray crystallography and transmission electron microscopy. See also Phase problem Crystallography X-ray crystallography Coherent diffraction imaging Transport-of-Intensity Equation Phase correlation References Crystallography Mathematical physics Mathematical chemistry Inverse problems
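The error-reduction and hybrid input-output iterations described above are straightforward to prototype. The following 1-D toy sketch (assuming NumPy; the object, support, and β value are invented for the example) applies the HIO update with support and non-negativity constraints. As the shrinkwrap discussion notes, such an iteration may stagnate or converge to a twin image, so the printed residual is on the measured Fourier magnitudes rather than on the object itself.

import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D object supported on the first quarter of the window.
n = 128
support = np.zeros(n, dtype=bool)
support[:n // 4] = True
f_true = np.zeros(n)
f_true[support] = rng.random(support.sum())
F_mag = np.abs(np.fft.fft(f_true))           # "measured" Fourier magnitudes

beta = 0.9                                   # feedback parameter
g = rng.random(n)                            # random initial input
for _ in range(2000):
    G = np.fft.fft(g)
    G_prime = F_mag * np.exp(1j * np.angle(G))   # impose measured magnitude
    g_prime = np.real(np.fft.ifft(G_prime))
    violates = ~support | (g_prime < 0)          # object-constraint violations
    # Hybrid input-output update: keep g' where the constraints hold,
    # push the input towards zero where they are violated.
    g = np.where(violates, g - beta * g_prime, g_prime)

print(np.linalg.norm(np.abs(np.fft.fft(g)) - F_mag))  # magnitude residual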
Phase retrieval
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,483
[ "Drug discovery", "Applied mathematics", "Theoretical physics", "Materials science", "Molecular modelling", "Mathematical chemistry", "Crystallography", "Theoretical chemistry", "Condensed matter physics", "Inverse problems", "Mathematical physics" ]
11,082,896
https://en.wikipedia.org/wiki/Medical%20statistics
Medical statistics (also health statistics) deals with applications of statistics to medicine and the health sciences, including epidemiology, public health, forensic medicine, and clinical research. Medical statistics has been a recognized branch of statistics in the United Kingdom for more than 40 years, but the term has not come into general use in North America, where the wider term 'biostatistics' is more commonly used. However, "biostatistics" more commonly connotes all applications of statistics to biology. Medical statistics is a subdiscipline of statistics. It is the science of summarizing, collecting, presenting and interpreting data in medical practice, and using them to estimate the magnitude of associations and test hypotheses. It has a central role in medical investigations. It not only provides a way of organizing information on a wider and more formal basis than relying on the exchange of anecdotes and personal experience, but also takes into account the intrinsic variation inherent in most biological processes. Use in medical hypothesis testing In medical hypothesis testing, the medical research is often evaluated by means of the confidence interval, the P value, or both. Confidence interval Frequently reported in medical research studies is the confidence interval (CI), which indicates the consistency and variability of the medical results of repeated medical trials. In other words, the confidence interval is the range within which the true estimate would be expected to lie if the study were repeated many times. Most biomedical research is not able to use a total population for a study. Instead, samples of the total population are what are often used for a study. From the sample, inferences can be made about the total population by means of a sample statistic and the estimation of error, presented as a range of values. P value Frequently used in medical studies is the statistical significance threshold of P < 0.05. The P value is the probability, assuming the null hypothesis of no effect or no difference is true, of obtaining a result at least as extreme as the one actually observed. The P stands for probability and measures how compatible the observed difference between groups is with chance alone. The P value ranges between 0 and 1. The closer to 0, the less compatible the results are with chance alone. The closer to 1, the more compatible the results are with chance alone. Pharmaceutical statistics Pharmaceutical statistics is the application of statistics to matters concerning the pharmaceutical industry. This can be from issues of design of experiments, to analysis of drug trials, to issues of commercialization of a medicine. There are many professional bodies concerned with this field including: European Federation of Statisticians in the Pharmaceutical Industry Statisticians In The Pharmaceutical Industry Clinical biostatistics Clinical biostatistics is concerned with research into the principles and methodology used in the design and analysis of clinical research and with applying statistical theory to clinical medicine. Clinical biostatistics is taught in postgraduate biostatistical and applied statistical degrees, for example as part of the BCA Master of Biostatistics program in Australia. Basic concepts For describing situations Incidence (epidemiology) vs. Prevalence vs. Cumulative incidence Many medical tests (such as pregnancy tests) have two possible results: positive or negative.
However, tests will sometimes yield incorrect results in the form of false positives or false negatives. False positives and false negatives can be described by the statistical concepts of type I and type II errors, respectively, where the null hypothesis is that the patient does not have the condition. The precision of a medical test is usually calculated in the form of positive predictive values (PPVs) and negative predictive values (NPVs). PPVs and NPVs of medical tests depend on intrinsic properties of the test as well as the prevalence of the condition being tested for. For example, if any pregnancy test was administered to a population of individuals who were biologically incapable of becoming pregnant, then the test's PPV will be 0% and its NPV will be 100%, simply because true positives and false negatives cannot exist in this population. Mortality rate vs. standardized mortality ratio vs. age-standardized mortality rate Pandemic vs. epidemic vs. endemic vs. syndemic Serial interval vs. incubation period Cancer cluster Sexual network Years of potential life lost Maternal mortality rate Perinatal mortality rate Low birth weight ratio For assessing the effectiveness of an intervention Absolute risk reduction Control event rate Experimental event rate Number needed to harm Number needed to treat Odds ratio Relative risk reduction Relative risk Relative survival Minimal clinically important difference Related statistical theory Survival analysis Proportional hazards models Active control trials: clinical trials in which a new treatment is compared with some other active agent rather than a placebo. ADLS (activities of daily living scale): a scale designed to measure physical ability/disability that is used in investigations of a variety of chronic disabling conditions, such as arthritis. This scale is based on scoring responses to questions about self-care, grooming, etc. Actuarial statistics: the statistics used by actuaries to calculate liabilities, evaluate risks and plan the financial course of insurance, pensions, etc. See also Herd immunity False positives and false negatives Rare disease Hilda Mary Woods – the first author (with William Russell) of the first British textbook of medical statistics, published in 1931 References Further reading External links Health-EU Portal EU health statistics Biostatistics Medical specialties Applied statistics Pharmaceutical statistics Clinical research
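As a small worked example of the dependence of predictive values on prevalence discussed above, the following sketch (plain Python; the sensitivity, specificity, and prevalence figures are invented for illustration) computes PPV and NPV via Bayes' rule.

def ppv_npv(sensitivity, specificity, prevalence):
    # Expected fractions of the tested population in each outcome cell.
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

for prev in (0.01, 0.30):
    ppv, npv = ppv_npv(sensitivity=0.95, specificity=0.95, prevalence=prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")
# The same hypothetical test gives a PPV of about 16% at 1% prevalence but
# about 89% at 30%, showing why predictive values depend on the population.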
Medical statistics
[ "Mathematics" ]
1,097
[ "Applied mathematics", "Applied statistics" ]
11,083,602
https://en.wikipedia.org/wiki/Omapatrilat
Omapatrilat (INN, proposed trade name Vanlev) is an experimental antihypertensive agent that was never marketed. It inhibits both neprilysin (neutral endopeptidase, NEP) and angiotensin-converting enzyme (ACE). NEP inhibition results in elevated natriuretic peptide levels, promoting natriuresis, diuresis, vasodilation, and reductions in preload and ventricular remodeling. It was discovered and developed by Bristol-Myers Squibb but failed in clinical trials as a potential treatment for congestive heart failure due to safety concerns about its causing angioedema. The angioedema seen with omapatrilat was attributed to its dual mechanism of action: by inhibiting both angiotensin-converting enzyme (ACE) and neprilysin (neutral endopeptidase), it blocks the two enzymes responsible for the metabolism of bradykinin, and the resulting accumulation of bradykinin causes vasodilation, angioedema, and airway obstruction. See also Gemopatrilat Cilazapril Sacubitril References Further reading ACE inhibitors Heterocyclic compounds with 2 rings Carboxylic acids Lactams Propionamides Thiols Nitrogen heterocycles Sulfur heterocycles Abandoned drugs
Omapatrilat
[ "Chemistry" ]
272
[ "Carboxylic acids", "Thiols", "Drug safety", "Functional groups", "Organic compounds", "Abandoned drugs" ]
8,914,599
https://en.wikipedia.org/wiki/Helix%E2%80%93coil%20transition%20model
Helix–coil transition models are formalized techniques in statistical mechanics developed to describe conformations of linear polymers in solution. The models are usually but not exclusively applied to polypeptides as a measure of the relative fraction of the molecule in an alpha helix conformation versus turn or random coil. The main attraction in investigating alpha helix formation is that one encounters many of the features of protein folding, but in their simplest version. Most of the helix–coil models contain parameters for the likelihood of helix nucleation from a coil region, and of helix propagation along the sequence once nucleated; because polypeptides are directional and have distinct N-terminal and C-terminal ends, propagation parameters may differ in each direction. The two states are the helix state, characterized by a common rotating pattern held together by hydrogen bonds (see alpha helix), and the coil state, a conglomerate of randomly ordered conformations of the chain (see random coil). Common transition models include the Zimm–Bragg model and the Lifson–Roig model, and their extensions and variations. The energy of a host poly-alanine helix in aqueous solution is expressed as a function of m, the number of residues in the helix. References Protein structure Statistical mechanics Thermodynamic models
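As a minimal numerical sketch of a helix–coil model, the following code evaluates the Zimm–Bragg partition function with a transfer matrix over (helix, coil) states and obtains the helix fraction by differentiating its logarithm. It assumes NumPy and one common convention for the statistical weights (propagation parameter s, nucleation parameter σ); the parameter values are illustrative, not fitted to poly-alanine.

import numpy as np

def partition_function(n, s, sigma):
    # Transfer matrix over (helix, coil) states of consecutive residues:
    # a helical residue after a helix weighs s, after a coil sigma*s,
    # and a coil residue always weighs 1.
    M = np.array([[s, 1.0],
                  [sigma * s, 1.0]])
    first = np.array([sigma * s, 1.0])   # weights for the first residue
    end = np.array([1.0, 1.0])
    return first @ np.linalg.matrix_power(M, n - 1) @ end

def helix_fraction(n, s, sigma, ds=1e-6):
    # theta = (s / n) * d ln Z / d s, evaluated by a finite difference,
    # since every helical residue carries one factor of s.
    z0 = partition_function(n, s, sigma)
    z1 = partition_function(n, s + ds, sigma)
    return (s / n) * (np.log(z1) - np.log(z0)) / ds

for s in (0.8, 1.0, 1.2):
    print(s, round(helix_fraction(n=30, s=s, sigma=1e-3), 3))
# The rise of the helix fraction around s = 1 is the helix-coil
# transition; a smaller sigma makes the transition more cooperative.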
Helix–coil transition model
[ "Physics", "Chemistry" ]
249
[ "Statistical mechanics stubs", "Thermodynamic models", "Thermodynamics", "Structural biology", "Statistical mechanics", "Protein structure" ]
8,918,323
https://en.wikipedia.org/wiki/Conservative%20system
In mathematics, a conservative system is a dynamical system which stands in contrast to a dissipative system. Roughly speaking, such systems have no friction or other mechanism to dissipate the dynamics, and thus, their phase space does not shrink over time. Precisely speaking, they are those dynamical systems that have a null wandering set: under time evolution, no portion of the phase space ever "wanders away", never to be returned to or revisited. Alternately, conservative systems are those to which the Poincaré recurrence theorem applies. An important special case of conservative systems are the measure-preserving dynamical systems. Informal introduction Informally, dynamical systems describe the time evolution of the phase space of some mechanical system. Commonly, such evolution is given by some differential equations, or quite often in terms of discrete time steps. However, in the present case, instead of focusing on the time evolution of discrete points, one shifts attention to the time evolution of collections of points. One such example would be Saturn's rings: rather than tracking the time evolution of individual grains of sand in the rings, one is instead interested in the time evolution of the density of the rings: how the density thins out, spreads, or becomes concentrated. Over short time-scales (hundreds of thousands of years), Saturn's rings are stable, and are thus a reasonable example of a conservative system and more precisely, a measure-preserving dynamical system. It is measure-preserving, as the number of particles in the rings does not change, and, per Newtonian orbital mechanics, the phase space is incompressible: it can be stretched or squeezed, but not shrunk (this is the content of Liouville's theorem). Formal definition Formally, a measurable dynamical system is conservative if and only if it is non-singular, and has no wandering sets. A measurable dynamical system (X, Σ, μ, τ) is a Borel space (X, Σ) equipped with a sigma-finite measure μ and a transformation τ. Here, X is a set, and Σ is a sigma-algebra on X, so that the pair (X, Σ) is a measurable space. μ is a sigma-finite measure on the sigma-algebra. The space X is the phase space of the dynamical system. A transformation (a map) τ : X → X is said to be Σ-measurable if and only if, for every σ ∈ Σ, one has τ⁻¹(σ) ∈ Σ. The transformation is a single "time-step" in the evolution of the dynamical system. One is interested in invertible transformations, so that the current state of the dynamical system came from a well-defined past state. A measurable transformation τ is called non-singular when μ(τ⁻¹(σ)) = 0 if and only if μ(σ) = 0. In this case, the system (X, Σ, μ, τ) is called a non-singular dynamical system. The condition of being non-singular is necessary for a dynamical system to be suitable for modeling (non-equilibrium) systems. That is, if a certain configuration of the system is "impossible" (i.e. μ(σ) = 0) then it must stay "impossible" (was always impossible: μ(τ⁻¹(σ)) = 0), but otherwise, the system can evolve arbitrarily. Non-singular systems preserve the negligible sets, but are not required to preserve any other class of sets. The sense of the word singular here is the same as in the definition of a singular measure, in that no portion of μ ∘ τ⁻¹ is singular with respect to μ, and vice versa. A non-singular dynamical system for which μ(τ⁻¹(σ)) = μ(σ) for all σ ∈ Σ is called invariant, or, more commonly, a measure-preserving dynamical system. A non-singular dynamical system is conservative if, for every set σ of positive measure and for every integer n > 0, one has some integer p > n such that μ(σ ∩ τ⁻ᵖ(σ)) > 0.
Informally, this can be interpreted as saying that the current state of the system revisits or comes arbitrarily close to a prior state; see Poincaré recurrence for more. A non-singular transformation τ is incompressible if, whenever one has τ⁻¹(σ) ⊆ σ, then μ(σ ∖ τ⁻¹(σ)) = 0. Properties For a non-singular transformation τ, the following statements are equivalent: τ is conservative. τ is incompressible. Every wandering set of τ is null. For all sets σ of positive measure, μ(σ ∖ ⋃ₙ₌₁^∞ τ⁻ⁿ(σ)) = 0. The above implies that, if μ(X) < ∞ and τ is measure-preserving, then the dynamical system is conservative. This is effectively the modern statement of the Poincaré recurrence theorem. A sketch of a proof of the equivalence of these four properties is given in the article on the Hopf decomposition. Suppose that μ(X) < ∞ and τ is measure-preserving. Let σ be a wandering set of τ. By definition of wandering sets and since τ preserves μ, X would thus contain a countably infinite union of pairwise disjoint sets that have the same μ-measure as σ. Since it was assumed μ(X) < ∞, it follows that σ is a null set, and so all wandering sets must be null sets. This argument fails for even the simplest examples if μ(X) = ∞. Indeed, consider for instance the real line with the Lebesgue measure λ, and consider the shift operator τ(x) = x + 1. Since the Lebesgue measure is translation-invariant, τ is measure-preserving. However, τ is not conservative. In fact, every interval of length strictly less than 1 contained in the real line is wandering. In particular, the real line can be written as a countable union of wandering sets. Hopf decomposition The Hopf decomposition states that every measure space with a non-singular transformation can be decomposed into an invariant conservative set and a wandering (dissipative) set. A commonplace informal example of Hopf decomposition is the mixing of two liquids (some textbooks mention rum and coke): The initial state, where the two liquids are not yet mixed, can never recur again after mixing; it is part of the dissipative set. Likewise any of the partially-mixed states. The result, after mixing (a cuba libre, in the canonical example), is stable, and forms the conservative set; further mixing does not alter it. In this example, the conservative set is also ergodic: if one added one more drop of liquid (say, lemon juice), it would not stay in one place, but would come to mix in everywhere. One word of caution about this example: although mixing systems are ergodic, ergodic systems are not in general mixing systems! Mixing implies an interaction which may not exist. The canonical example of an ergodic system that does not mix is the Bernoulli process: it is the set of all possible infinite sequences of coin flips (equivalently, the set of infinite strings of zeros and ones); each individual coin flip is independent of the others. Ergodic decomposition The ergodic decomposition theorem states, roughly, that every conservative system can be split up into components, each component of which is individually ergodic. An informal example of this would be a tub, with a divider down the middle, with liquids filling each compartment. The liquid on one side can clearly mix with itself, and so can the other, but, due to the partition, the two sides cannot interact. Clearly, this can be treated as two independent systems; leakage between the two sides, of measure zero, can be ignored. The ergodic decomposition theorem states that all conservative systems can be split into such independent parts, and that this splitting is unique (up to differences of measure zero).
Thus, by convention, the study of conservative systems becomes the study of their ergodic components. Formally, every ergodic system is conservative. Recall that an invariant set σ ∈ Σ is one for which τ(σ) = σ. For an ergodic system, the only invariant sets are those with measure zero or with full measure (are null or are conull); that they are conservative then follows trivially from this. When τ is ergodic, the following statements are equivalent: τ is conservative and ergodic. For all sets σ of positive measure, μ(X ∖ ⋃ₙ₌₁^∞ τ⁻ⁿ(σ)) = 0; that is, σ "sweeps out" all of X. For all sets σ of positive measure, and for almost every x in X, there exists a positive integer n such that τⁿ(x) ∈ σ. For all sets σ₁ and σ₂ of positive measure, there exists a positive integer n such that μ(τ⁻ⁿ(σ₁) ∩ σ₂) > 0. If τ⁻¹(σ) = σ, then either μ(σ) = 0 or the complement has zero measure: μ(X ∖ σ) = 0. See also KMS state, a description of thermodynamic equilibrium in quantum mechanical systems; dual to modular theories for von Neumann algebras. Notes References Further reading Ergodic theory Dynamical systems
Conservative system
[ "Physics", "Mathematics" ]
1,733
[ "Mechanics", "Ergodic theory", "Dynamical systems" ]
1,047,942
https://en.wikipedia.org/wiki/Muirhead%27s%20inequality
In mathematics, Muirhead's inequality, named after Robert Franklin Muirhead, also known as the "bunching" method, generalizes the inequality of arithmetic and geometric means. Preliminary definitions a-mean For any real vector a = (a₁, ..., aₙ), define the "a-mean" [a] of positive real numbers x₁, ..., xₙ by [a] = (1/n!) Σ_σ x_{σ(1)}^{a₁} ⋯ x_{σ(n)}^{aₙ}, where the sum extends over all permutations σ of { 1, ..., n }. When the elements of a are nonnegative integers, the a-mean can be equivalently defined via the monomial symmetric polynomial m_a as [a] = (k₁! ⋯ k_ℓ!/n!) m_a(x₁, ..., xₙ), where ℓ is the number of distinct elements in a, and k₁, ..., k_ℓ are their multiplicities. Notice that the a-mean as defined above only has the usual properties of a mean (e.g., if the mean of equal numbers is equal to them) if a₁ + ⋯ + aₙ = 1. In the general case, one can consider instead [a]^{1/(a₁ + ⋯ + aₙ)}, which is called a Muirhead mean. Examples For a = (1, 0, ..., 0), the a-mean is just the ordinary arithmetic mean of x₁, ..., xₙ. For a = (1/n, ..., 1/n), the a-mean is the geometric mean of x₁, ..., xₙ. For a = (x, 1 − x), the a-mean is the Heinz mean. The Muirhead mean for a = (−1, 0, ..., 0) is the harmonic mean. Doubly stochastic matrices An n × n matrix P is doubly stochastic precisely if both P and its transpose PT are stochastic matrices. A stochastic matrix is a square matrix of nonnegative real entries in which the sum of the entries in each column is 1. Thus, a doubly stochastic matrix is a square matrix of nonnegative real entries in which the sum of the entries in each row and the sum of the entries in each column is 1. Statement Muirhead's inequality states that [a] ≤ [b] for all x such that xᵢ > 0 for every i ∈ { 1, ..., n } if and only if there is some doubly stochastic matrix P for which a = Pb. Furthermore, in that case we have [a] = [b] if and only if a = b or all xᵢ are equal. The latter condition can be expressed in several equivalent ways; one of them is given below. The proof makes use of the fact that every doubly stochastic matrix is a weighted average of permutation matrices (Birkhoff-von Neumann theorem). Another equivalent condition Because of the symmetry of the sum, no generality is lost by sorting the exponents into decreasing order: a₁ ≥ a₂ ≥ ⋯ ≥ aₙ and b₁ ≥ b₂ ≥ ⋯ ≥ bₙ. Then the existence of a doubly stochastic matrix P such that a = Pb is equivalent to the following system of inequalities: a₁ + ⋯ + a_k ≤ b₁ + ⋯ + b_k for k = 1, ..., n − 1, together with a₁ + ⋯ + aₙ = b₁ + ⋯ + bₙ. (The last one is an equality; the others are weak inequalities.) The sequence b is said to majorize the sequence a. Symmetric sum notation It is convenient to use a special notation for the sums: Σ_sym x₁^{a₁} ⋯ xₙ^{aₙ} denotes the sum over all permutations of the exponents, so [a] = (1/n!) Σ_sym x₁^{a₁} ⋯ xₙ^{aₙ}. A success in reducing an inequality to this form means that the only condition for testing it is to verify whether one exponent sequence majorizes the other one. This notation requires developing every permutation, developing an expression made of n! monomials, for instance: Σ_sym x³y²z⁰ = x³y² + x³z² + y³x² + y³z² + z³x² + z³y². Examples Arithmetic-geometric mean inequality Let a_A = (1, 0, ..., 0) and a_G = (1/n, ..., 1/n). We have that a_A majorizes a_G. Then [a_A] ≥ [a_G], which is (x₁ + ⋯ + xₙ)/n ≥ (x₁ ⋯ xₙ)^{1/n}, yielding the inequality. Other examples We seek to prove that x² + y² ≥ 2xy by using bunching (Muirhead's inequality). We transform it into the symmetric-sum notation: Σ_sym x²y⁰ ≥ Σ_sym x¹y¹. The sequence (2, 0) majorizes the sequence (1, 1), thus the inequality holds by bunching. Similarly, we can prove the inequality x³ + y³ + z³ ≥ 3xyz by writing it using the symmetric-sum notation as Σ_sym x³y⁰z⁰ ≥ Σ_sym x¹y¹z¹, which is the same as 2(x³ + y³ + z³) ≥ 6xyz. Since the sequence (3, 0, 0) majorizes the sequence (1, 1, 1), the inequality holds by bunching. See also Inequality of arithmetic and geometric means Doubly stochastic matrix Maclaurin's inequality Monomial symmetric polynomial Newton's inequalities Notes References Combinatorial Theory by John N.
Guidi, based on lectures given by Gian-Carlo Rota in 1998, MIT Copy Technology Center, 2002. Kiran Kedlaya, A < B (A less than B), a guide to solving inequalities. Hardy, G.H.; Littlewood, J.E.; Pólya, G. (1952), Inequalities, Cambridge Mathematical Library (2nd ed.), Cambridge: Cambridge University Press, Section 2.18, Theorem 45. Inequalities Means
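A quick numerical sanity check of the statement, using the symmetric-mean definition given above (plain Python; the random test points are arbitrary): since (3, 0, 0) majorizes (1, 1, 1), the a-mean comparison below reproduces x³ + y³ + z³ ≥ 3xyz from the examples.

import math
import random
from itertools import permutations

# a-mean as defined above: (1/n!) times the sum over all permutations
# of the monomial with exponent vector a.
def a_mean(a, x):
    n = len(x)
    total = sum(math.prod(x[j] ** a[i] for i, j in enumerate(sigma))
                for sigma in permutations(range(n)))
    return total / math.factorial(n)

# (3, 0, 0) majorizes (1, 1, 1), so [3, 0, 0] >= [1, 1, 1] must hold
# for all positive x; this is exactly x^3 + y^3 + z^3 >= 3xyz.
random.seed(1)
for _ in range(5):
    x = [random.uniform(0.1, 10.0) for _ in range(3)]
    print(a_mean((3, 0, 0), x) >= a_mean((1, 1, 1), x))  # True each time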
Muirhead's inequality
[ "Physics", "Mathematics" ]
1,006
[ "Means", "Point (geometry)", "Mathematical theorems", "Mathematical analysis", "Geometric centers", "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Symmetry" ]
1,048,371
https://en.wikipedia.org/wiki/Edward%20Jenner%20Institute%20for%20Vaccine%20Research
The Edward Jenner Institute for Vaccine Research (EJIVR) was an independent research institute named after Edward Jenner, the inventor of vaccination. It was co-located with the Compton Laboratory of the Institute for Animal Health on a campus in the village of Compton in Berkshire, England. After occupying temporary laboratory space at the Institute for Animal Health from 1996, the Institute moved to a newly completed laboratory building in 1998. Funding of the Institute continued until October 2005 when it was closed. Jenner Institute A successor institute, formed by a partnership between the University of Oxford and the UK Institute for Animal Health, was established in November 2005. This Jenner Institute is headquartered in Oxford on the Old Road Campus and is supported by a specific charity, the Jenner Vaccine Foundation. References Research institutes established in 1996 Research institutes disestablished in 2005 Research institutes in Berkshire Former research institutes Medical research institutes in the United Kingdom Vaccination-related organizations
Edward Jenner Institute for Vaccine Research
[ "Biology" ]
190
[ "Vaccination-related organizations", "Vaccination" ]
1,048,518
https://en.wikipedia.org/wiki/Supramolecular%20chemistry
Supramolecular chemistry refers to the branch of chemistry concerning chemical systems composed of a discrete number of molecules. The strength of the forces responsible for spatial organization of the system ranges from weak intermolecular forces, electrostatic charge, or hydrogen bonding to strong covalent bonding, provided that the electronic coupling strength remains small relative to the energy parameters of the component. While traditional chemistry concentrates on the covalent bond, supramolecular chemistry examines the weaker and reversible non-covalent interactions between molecules. These forces include hydrogen bonding, metal coordination, hydrophobic forces, van der Waals forces, pi–pi interactions and electrostatic effects. Important concepts advanced by supramolecular chemistry include molecular self-assembly, molecular folding, molecular recognition, host–guest chemistry, mechanically-interlocked molecular architectures, and dynamic covalent chemistry. The study of non-covalent interactions is crucial to understanding many biological processes that rely on these forces for structure and function. Biological systems are often the inspiration for supramolecular research. History The existence of intermolecular forces was first postulated by Johannes Diderik van der Waals in 1873. However, Nobel laureate Hermann Emil Fischer developed supramolecular chemistry's philosophical roots. In 1894, Fischer suggested that enzyme–substrate interactions take the form of a "lock and key", the fundamental principles of molecular recognition and host–guest chemistry. In the early twentieth century non-covalent bonds were understood in gradually greater detail, with the hydrogen bond being described by Latimer and Rodebush in 1920. With a deeper understanding of non-covalent interactions, for example following the clear elucidation of the structure of DNA, chemists started to emphasize their importance. In 1967, Charles J. Pedersen discovered crown ethers, which are ring-like structures capable of chelating certain metal ions. Then, in 1969, Jean-Marie Lehn discovered a class of molecules similar to crown ethers, called cryptands. After that, Donald J. Cram synthesized many variations of crown ethers, as well as separate molecules capable of selective interaction with certain chemicals. The three scientists were awarded the Nobel Prize in Chemistry in 1987 for "development and use of molecules with structure-specific interactions of high selectivity". In 2016, Bernard L. Feringa, Sir J. Fraser Stoddart, and Jean-Pierre Sauvage were awarded the Nobel Prize in Chemistry, "for the design and synthesis of molecular machines". The term supermolecule (or supramolecule) was introduced by Karl Lothar Wolf et al. (Übermoleküle) in 1937 to describe hydrogen-bonded acetic acid dimers. The term supermolecule is also used in biochemistry to describe complexes of biomolecules, such as peptides and oligonucleotides composed of multiple strands. Eventually, chemists applied these concepts to synthetic systems. One breakthrough came in the 1960s with the synthesis of the crown ethers by Charles J. Pedersen. Following this work, other researchers such as Donald J. Cram, Jean-Marie Lehn and Fritz Vögtle reported a variety of three-dimensional receptors, and throughout the 1980s research in the area gathered pace rapidly, with concepts such as mechanically interlocked molecular architectures emerging.
The influence of supramolecular chemistry was established by the 1987 Nobel Prize for Chemistry which was awarded to Donald J. Cram, Jean-Marie Lehn, and Charles J. Pedersen in recognition of their work in this area. The development of selective "host–guest" complexes in particular, in which a host molecule recognizes and selectively binds a certain guest, was cited as an important contribution. Concepts Molecular self-assembly Molecular self-assembly is the construction of systems without guidance or management from an outside source (other than to provide a suitable environment). The molecules are directed to assemble through non-covalent interactions. Self-assembly may be subdivided into intermolecular self-assembly (to form a supramolecular assembly), and intramolecular self-assembly (or folding as demonstrated by foldamers and polypeptides). Molecular self-assembly also allows the construction of larger structures such as micelles, membranes, vesicles, liquid crystals, and is important to crystal engineering. Molecular recognition and complexation Molecular recognition is the specific binding of a guest molecule to a complementary host molecule to form a host–guest complex. Often, the definition of which species is the "host" and which is the "guest" is arbitrary. The molecules are able to identify each other using non-covalent interactions. Key applications of this field are the construction of molecular sensors and catalysis. Template-directed synthesis Molecular recognition and self-assembly may be used with reactive species in order to pre-organize a system for a chemical reaction (to form one or more covalent bonds). It may be considered a special case of supramolecular catalysis. Non-covalent bonds between the reactants and a "template" hold the reactive sites of the reactants close together, facilitating the desired chemistry. This technique is particularly useful for situations where the desired reaction conformation is thermodynamically or kinetically unlikely, such as in the preparation of large macrocycles. This pre-organization also serves purposes such as minimizing side reactions, lowering the activation energy of the reaction, and producing desired stereochemistry. After the reaction has taken place, the template may remain in place, be forcibly removed, or may be "automatically" decomplexed on account of the different recognition properties of the reaction product. The template may be as simple as a single metal ion or may be extremely complex. Mechanically interlocked molecular architectures Mechanically interlocked molecular architectures consist of molecules that are linked only as a consequence of their topology. Some non-covalent interactions may exist between the different components (often those that were used in the construction of the system), but covalent bonds do not. Supramolecular chemistry, and template-directed synthesis in particular, is key to the efficient synthesis of the compounds. Examples of mechanically interlocked molecular architectures include catenanes, rotaxanes, molecular knots, molecular Borromean rings and ravels. Dynamic covalent chemistry In dynamic covalent chemistry covalent bonds are broken and formed in a reversible reaction under thermodynamic control. While covalent bonds are key to the process, the system is directed by non-covalent forces to form the lowest energy structures. Biomimetics Many synthetic supramolecular systems are designed to copy functions of biological systems. 
These biomimetic architectures can be used to learn about both the biological model and the synthetic implementation. Examples include photoelectrochemical systems, catalytic systems, protein design and self-replication. Imprinting Molecular imprinting describes a process by which a host is constructed from small molecules using a suitable molecular species as a template. After construction, the template is removed leaving only the host. The template for host construction may be subtly different from the guest that the finished host binds to. In its simplest form, imprinting uses only steric interactions, but more complex systems also incorporate hydrogen bonding and other interactions to improve binding strength and specificity. Molecular machinery Molecular machines are molecules or molecular assemblies that can perform functions such as linear or rotational movement, switching, and entrapment. These devices exist at the boundary between supramolecular chemistry and nanotechnology, and prototypes have been demonstrated using supramolecular concepts. Jean-Pierre Sauvage, Sir J. Fraser Stoddart and Bernard L. Feringa shared the 2016 Nobel Prize in Chemistry for the 'design and synthesis of molecular machines'. Building blocks Supramolecular systems are rarely designed from first principles. Rather, chemists have a range of well-studied structural and functional building blocks that they are able to use to build up larger functional architectures. Many of these exist as whole families of similar units, from which the analog with the exact desired properties can be chosen. Synthetic recognition motifs The pi-pi charge-transfer interactions of bipyridinium with dioxyarenes or diaminoarenes have been used extensively for the construction of mechanically interlocked systems and in crystal engineering. The use of crown ether binding with metal or ammonium cations is ubiquitous in supramolecular chemistry. The formation of carboxylic acid dimers and other simple hydrogen bonding interactions. The complexation of bipyridines or terpyridines with ruthenium, silver or other metal ions is of great utility in the construction of complex architectures of many individual molecules. The complexation of porphyrins or phthalocyanines around metal ions gives access to catalytic, photochemical and electrochemical properties in addition to the complexation itself. These units are used a great deal by nature. Macrocycles Macrocycles are very useful in supramolecular chemistry, as they provide whole cavities that can completely surround guest molecules and may be chemically modified to fine-tune their properties. Cyclodextrins, calixarenes, cucurbiturils and crown ethers are readily synthesized in large quantities, and are therefore convenient for use in supramolecular systems. More complex cyclophanes, and cryptands can be synthesised to provide more tailored recognition properties. Supramolecular metallocycles are macrocyclic aggregates with metal ions in the ring, often formed from angular and linear modules. Common metallocycle shapes in these types of applications include triangles, squares, and pentagons, each bearing functional groups that connect the pieces via "self-assembly." Metallacrowns are metallomacrocycles generated via a similar self-assembly approach from fused chelate-rings. Structural units Many supramolecular systems require their components to have suitable spacing and conformations relative to each other, and therefore easily employed structural units are required. 
Commonly used spacers and connecting groups include polyether chains, biphenyls and triphenyls, and simple alkyl chains. The chemistry for creating and connecting these units is very well understood. Nanoparticles, nanorods, fullerenes and dendrimers offer nanometer-sized structural and encapsulation units. Surfaces can be used as scaffolds for the construction of complex systems and also for interfacing electrochemical systems with electrodes. Regular surfaces can be used for the construction of self-assembled monolayers and multilayers. The understanding of intermolecular interactions in solids has undergone a major renaissance via inputs from different experimental and computational methods in the last decade. This includes high-pressure studies in solids and "in situ" crystallization of compounds which are liquids at room temperature, along with the use of electron density analysis, crystal structure prediction and DFT calculations in the solid state to enable a quantitative understanding of the nature, energetics and topological properties associated with such interactions in crystals. Photo-chemically and electro-chemically active units Porphyrins and phthalocyanines have highly tunable photochemical and electrochemical activity as well as the potential to form complexes. Photochromic and photoisomerizable groups can change their shapes and properties, including binding properties, upon exposure to light. Tetrathiafulvalene (TTF) and quinones have multiple stable oxidation states, and therefore can be used in redox reactions and electrochemistry. Other units, such as benzidine derivatives, viologens, and fullerenes, are useful in supramolecular electrochemical devices. Biologically-derived units The extremely strong complexation between avidin and biotin is instrumental in blood clotting, and has been used as the recognition motif to construct synthetic systems. The binding of enzymes with their cofactors has been used as a route to produce modified enzymes, electrically contacted enzymes, and even photoswitchable enzymes. DNA has been used both as a structural and as a functional unit in synthetic supramolecular systems. Applications Materials technology Supramolecular chemistry has found many applications; in particular, molecular self-assembly processes have been applied to the development of new materials. Large structures can be readily accessed using bottom-up synthesis as they are composed of small molecules requiring fewer steps to synthesize. Thus most of the bottom-up approaches to nanotechnology are based on supramolecular chemistry. Many smart materials are based on molecular recognition. Catalysis A major application of supramolecular chemistry is the design and understanding of catalysts and catalysis. Non-covalent interactions influence the binding of reactants. Medicine Design based on supramolecular chemistry has led to numerous applications in the creation of functional biomaterials and therapeutics. Supramolecular biomaterials afford a number of modular and generalizable platforms with tunable mechanical, chemical and biological properties. These include systems based on supramolecular assembly of peptides, host–guest macrocycles, high-affinity hydrogen bonding, and metal–ligand interactions. A supramolecular approach has been used extensively to create artificial ion channels for the transport of sodium and potassium ions into and out of cells.
Supramolecular chemistry is also important to the development of new pharmaceutical therapies by understanding the interactions at a drug binding site. The area of drug delivery has also made critical advances as a result of supramolecular chemistry providing encapsulation and targeted release mechanisms. In addition, supramolecular systems have been designed to disrupt protein–protein interactions that are important to cellular function. Data storage and processing Supramolecular chemistry has been used to demonstrate computation functions on a molecular scale. In many cases, photonic or chemical signals have been used in these components, but electrical interfacing of these units has also been shown by supramolecular signal transduction devices. Data storage has been accomplished by the use of molecular switches with photochromic and photoisomerizable units, by electrochromic and redox-switchable units, and even by molecular motion. Synthetic molecular logic gates have been demonstrated on a conceptual level. Even full-scale computations have been achieved by semi-synthetic DNA computers. See also Organic chemistry Nanotechnology Reading References External links 2D and 3D Models of Dodecahedrane and Cuneane Assemblies Supramolecular Chemistry and Supramolecular Chemistry II – Thematic Series in the Open Access Beilstein Journal of Organic Chemistry Chemistry
Supramolecular chemistry
[ "Chemistry", "Materials_science" ]
3,037
[ "Nanotechnology", "nan", "Supramolecular chemistry" ]
1,048,909
https://en.wikipedia.org/wiki/ROOT
ROOT is an object-oriented computer program and library developed by CERN. It was originally designed for particle physics data analysis and contains several features specific to the field, but it is also used in other applications such as astronomy and data mining. The latest minor release is 6.32, as of 2024-05-26. Description CERN maintained the CERN Program Library written in FORTRAN for many years. Its development and maintenance were discontinued in 2003 in favour of ROOT, which is written in the C++ programming language. ROOT development was initiated by René Brun and Fons Rademakers in 1994. Some parts are published under the GNU Lesser General Public License (LGPL) and others are based on GNU General Public License (GPL) software, and are thus also published under the terms of the GPL. It provides platform independent access to a computer's graphics subsystem and operating system using abstract layers. Parts of the abstract platform are: a graphical user interface and a GUI builder, container classes, reflection, a C++ script and command line interpreter (CINT in version 5, cling in version 6), object serialization and persistence. The packages provided by ROOT include those for histogramming and graphing to view and analyze distributions and functions, curve fitting (regression analysis) and minimization of functionals, statistics tools used for data analysis, matrix algebra, four-vector computations, as used in high energy physics, standard mathematical functions, multivariate data analysis, e.g. using neural networks, image manipulation, used, for instance, to analyze astronomical pictures, access to distributed data (in the context of the Grid), distributed computing, to parallelize data analyses, persistence and serialization of objects, which can cope with changes in class definitions of persistent data, access to databases, 3D visualizations (geometry), creating files in various graphics formats, like PDF, PostScript, PNG, SVG, LaTeX, etc. interfacing Python code in both directions, interfacing Monte Carlo event generators. A key feature of ROOT is a data container called a tree, with its substructures branches and leaves. A tree can be seen as a sliding window onto the raw data, as stored in a file. Data from the next entry in the file can be retrieved by advancing the index in the tree. This avoids memory allocation problems associated with object creation, and allows the tree to act as a lightweight container while handling buffering invisibly. ROOT is designed for high computing efficiency, as it is required to process data from the Large Hadron Collider's experiments estimated at several petabytes per year. ROOT is mainly used in data analysis and data acquisition in particle physics (high energy physics) experiments, and most experimental plots and results in those subfields are obtained using ROOT. The inclusion of a C++ interpreter (CINT until version 5.34, Cling from version 6.00) makes this package very versatile as it can be used in interactive, scripted and compiled modes in a manner similar to commercial products like MATLAB. On July 4, 2012 the LHC's ATLAS and CMS experiments presented the status of the Standard Model Higgs search. All data plots presented that day were produced with ROOT. Applications Several particle physics collaborations have written software based on ROOT, often forgoing more generic solutions (e.g. using ROOT containers instead of the STL).
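As a brief illustration of the histogramming and tree facilities described above, a minimal PyROOT session might look as follows. This is a sketch under assumptions: it requires a ROOT installation with Python bindings, and the file, histogram, and branch names are invented for the example.

import array
import ROOT

f = ROOT.TFile("example.root", "RECREATE")

# Fill a histogram with 10k samples from ROOT's built-in Gaussian.
h = ROOT.TH1F("h_mass", "toy mass distribution", 100, -5.0, 5.0)
h.FillRandom("gaus", 10000)

# A TTree with one double-precision branch; Fill() appends one entry.
e = array.array("d", [0.0])
t = ROOT.TTree("events", "toy event record")
t.Branch("energy", e, "energy/D")
for _ in range(1000):
    e[0] = ROOT.gRandom.Exp(2.0)   # toy exponential "energy" spectrum
    t.Fill()

h.Write()
t.Write()
f.Close()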
Some of the running particle physics experiments using software based on ROOT ALICE ATLAS BaBar experiment Belle Experiment (an electron positron collider at KEK (Japan)) Belle II experiment (successor of the Belle experiment) BES III CB-ELSA/TAPS CMS COMPASS experiment (Common Muon and Proton Apparatus for Structure and Spectroscopy) CUORE (Cryogenic Underground Observatory for Rare Events) D0 experiment GlueX Experiment GRAPES-3 (Gamma Ray Astronomy PeV EnergieS) H1 (particle detector) at HERA collider at DESY, Hamburg LHCb MINERνA (Main Injector Experiment for ν-A) MINOS (Main injector neutrino oscillation search) NA61 experiment (SPS Heavy Ion and Neutrino Experiment) NOνA OPERA experiment PHENIX detector PHOBOS experiment at Relativistic Heavy Ion Collider SNO+ STAR detector (Solenoidal Tracker at RHIC) T2K experiment Future particle physics experiments currently developing software based on ROOT Mu2e Compressed Baryonic Matter experiment (CBM) PANDA experiment (antiProton Annihilation at Darmstadt (PANDA)) Deep Underground Neutrino Experiment (DUNE) Hyper-Kamiokande (HK (Japan)) Astrophysics (X-ray and gamma-ray astronomy, astroparticle physics) projects using ROOT AGILE Alpha Magnetic Spectrometer (AMS) Antarctic Impulse Transient Antenna (ANITA) ANTARES neutrino detector CRESST (Dark Matter Search) DMTPC DEAP-3600/Cryogenic Low-Energy Astrophysics with Neon(CLEAN) Fermi Gamma-ray Space Telescope ICECUBE HAWC High Energy Stereoscopic System (H.E.S.S.) Hitomi (ASTRO-H) MAGIC Milagro Pierre Auger Observatory VERITAS PAMELA POLAR PoGOLite Criticisms Criticisms of ROOT include its difficulty for beginners, as well as various aspects of its design and implementation. Frequent causes of frustration include extreme code bloat, heavy use of global variables, and an overcomplicated class hierarchy. From time to time these issues are discussed on the ROOT users mailing list. While scientists dissatisfied with ROOT have in the past managed to work around its flaws, some of the shortcomings are regularly addressed by the ROOT team. The CINT interpreter, for example, has been replaced by the Cling interpreter, and numerous bugs are fixed with every release. See also Matplotlib – a plotting and analysis system for Python SciPy – a scientific data analysis system for Python, based on the NumPy classes Perl Data Language – a set of array programming extensions to the Perl programming language HippoDraw – an alternative C++-based data analysis system Java Analysis Studio – a Java-based AIDA-compliant data analysis system R programming language AIDA (computing) – open interfaces and formats for particle physics data processing Geant4 – a platform for the simulation of the passage of particles through matter using Monte Carlo methods PAW IGOR Pro Scientific Linux Scientific computing OpenDX OpenScientist CERN Program Library – legacy program library written in Fortran77, still available but not updated References External links The ROOT System Home Page Image galleries ROOT User's Guide ROOT Reference Guide ROOT Forum The RooFit Toolkit for Data Modeling, an extension to ROOT to facilitate maximum likelihood fits The Toolkit for Multivariate Data Analysis with ROOT (TMVA) is a ROOT-integrated project providing a machine learning environment for the processing and evaluation of multivariate classification, both binary and multi class, and regression techniques targeting applications in high-energy physics.
C++ libraries Data analysis software Data management software Experimental particle physics Free physics software Free plotting software Free science software Free software programmed in C++ Numerical software Physics software Plotting software CERN software
ROOT
[ "Physics", "Mathematics" ]
1,499
[ "Mathematical software", "Computational physics", "Experimental physics", "Particle physics", "Numerical software", "Experimental particle physics", "Physics software" ]
1,050,195
https://en.wikipedia.org/wiki/Evolutionary%20robotics
Evolutionary robotics is an embodied approach to Artificial Intelligence (AI) in which robots are automatically designed using Darwinian principles of natural selection. The design of a robot, or a subsystem of a robot such as a neural controller, is optimized against a behavioral goal (e.g. run as fast as possible). Usually, designs are evaluated in simulations, as fabricating thousands or millions of designs and testing them in the real world is prohibitively expensive in terms of time, money, and safety. An evolutionary robotics experiment starts with a population of randomly generated robot designs. The worst performing designs are discarded and replaced with mutations and/or combinations of the better designs. This evolutionary algorithm continues until a prespecified amount of time elapses or some target performance metric is surpassed. Evolutionary robotics methods are particularly useful for engineering machines that must operate in environments in which humans have limited intuition (nanoscale, space, etc.). Evolved simulated robots can also be used as scientific tools to generate new hypotheses in biology and cognitive science, and to test old hypotheses that require experiments which have proven difficult or impossible to carry out in reality. History In the early 1990s, two separate European groups demonstrated different approaches to the evolution of robot control systems. Dario Floreano and Francesco Mondada at EPFL evolved controllers for the Khepera robot. Adrian Thompson, Nick Jakobi, Dave Cliff, Inman Harvey, and Phil Husbands evolved controllers for a Gantry robot at the University of Sussex. However, the body of these robots was presupposed before evolution. The first simulations of evolved robots were reported by Karl Sims and Jeffrey Ventrella of the MIT Media Lab, also in the early 1990s. However, these so-called virtual creatures never left their simulated worlds. The first evolved robots to be built in reality were 3D-printed by Hod Lipson and Jordan Pollack at Brandeis University at the turn of the 21st century. See also Bio-inspired robotics Evolutionary computation References Evolutionary computation Robotics
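The discard-and-mutate loop described above can be sketched in a few lines of Python. This is a toy illustration, not any published system: the genome encoding, fitness function, and parameters are hypothetical stand-ins for a real robot simulation:

import random

GENOME_LEN, POP_SIZE, GENERATIONS = 8, 20, 100

def fitness(genome):
    # Stand-in for evaluating a controller in simulation
    # (e.g. distance travelled); here we simply reward large gene values.
    return sum(genome)

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, 0.5) if random.random() < rate else g
            for g in genome]

# Start from a population of randomly generated designs.
population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]           # discard the worst designs
    children = [mutate(random.choice(survivors)) for _ in survivors]
    population = survivors + children                # next generation

print(max(fitness(g) for g in population))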
Evolutionary robotics
[ "Engineering", "Biology" ]
409
[ "Bioinformatics", "Evolutionary computation", "Robotics", "Automation" ]
1,050,506
https://en.wikipedia.org/wiki/Glycosyl
In organic chemistry, a glycosyl group is a univalent free radical or substituent structure obtained by removing the hydroxyl (−OH) group from the hemiacetal group found in the cyclic form of a monosaccharide and, by extension, of a lower oligosaccharide. Glycosyl groups are exchanged during glycosylation from the glycosyl donor, the electrophile, to the glycosyl acceptor, the nucleophile. The outcome of the glycosylation reaction is largely dependent on the reactivity of each partner. Glycosyl groups also react with inorganic acids, such as phosphoric acid, forming esters such as glucose 1-phosphate. Examples In cellulose, glycosyl groups link together 1,4-β-D-glucosyl units to form chains of (1,4-β-D-glucosyl)n. Other examples include ribityl in 6,7-Dimethyl-8-ribityllumazine, and glycosylamines. Alternative substituent groups Instead of the hemiacetal hydroxyl group, a hydrogen atom can be removed to form a substituent, for example the hydrogen from the C3 hydroxyl of a glucose molecule. Then the substituent is called D-glucopyranos-3-O-yl, as it appears in the name of the drug Mifamurtide. C-glycosyl pyrene has recently been used to detect Au3+ in vivo; its fluorescence and permeability through cell membranes enable the detection. See also Acyl group References Substituents Biomolecules Monosaccharides Oligosaccharides
Glycosyl
[ "Chemistry", "Biology" ]
386
[ "Carbohydrates", "Natural products", "Substituents", "Monosaccharides", "Organic compounds", "Oligosaccharides", "Biomolecules", "Structural biology", "Biochemistry", "Molecular biology" ]
1,050,551
https://en.wikipedia.org/wiki/Multiple-criteria%20decision%20analysis
Multiple-criteria decision-making (MCDM) or multiple-criteria decision analysis (MCDA) is a sub-discipline of operations research that explicitly evaluates multiple conflicting criteria in decision making (both in daily life and in settings such as business, government and medicine). It is also known as multiple attribute utility theory, multiple attribute value theory, multiple attribute preference theory, and multi-objective decision analysis. Conflicting criteria are typical in evaluating options: cost or price is usually one of the main criteria, and some measure of quality is typically another criterion, easily in conflict with the cost. In purchasing a car, cost, comfort, safety, and fuel economy may be some of the main criteria we consider – it is unusual that the cheapest car is the most comfortable and the safest one. In portfolio management, managers are interested in getting high returns while simultaneously reducing risks; however, the stocks that have the potential of bringing high returns typically carry high risk of losing money. In a service industry, customer satisfaction and the cost of providing service are fundamental conflicting criteria. In their daily lives, people usually weigh multiple criteria implicitly and may be comfortable with the consequences of such decisions that are made based on only intuition. On the other hand, when stakes are high, it is important to properly structure the problem and explicitly evaluate multiple criteria. In making the decision of whether to build a nuclear power plant or not, and where to build it, there are not only very complex issues involving multiple criteria, but there are also multiple parties who are deeply affected by the consequences. Structuring complex problems well and considering multiple criteria explicitly leads to more informed and better decisions. There have been important advances in this field since the start of the modern multiple-criteria decision-making discipline in the early 1960s. A variety of approaches and methods, many implemented by specialized decision-making software, have been developed for their application in an array of disciplines, ranging from politics and business to the environment and energy. Foundations, concepts, definitions MCDM or MCDA are acronyms for multiple-criteria decision-making and multiple-criteria decision analysis. Stanley Zionts helped popularize the acronym with his 1979 article "MCDM – If not a Roman Numeral, then What?", intended for an entrepreneurial audience. MCDM is concerned with structuring and solving decision and planning problems involving multiple criteria. The purpose is to support decision-makers facing such problems. Typically, there does not exist a unique optimal solution for such problems and it is necessary to use decision-makers' preferences to differentiate between solutions. "Solving" can be interpreted in different ways. It could correspond to choosing the "best" alternative from a set of available alternatives (where "best" can be interpreted as "the most preferred alternative" of a decision-maker). Another interpretation of "solving" could be choosing a small set of good alternatives, or grouping alternatives into different preference sets. An extreme interpretation could be to find all "efficient" or "nondominated" alternatives (which we will define shortly). The difficulty of the problem originates from the presence of more than one criterion.
There is no longer a unique optimal solution to an MCDM problem that can be obtained without incorporating preference information. The concept of an optimal solution is often replaced by the set of nondominated solutions. A solution is called nondominated if it is not possible to improve it in any criterion without sacrificing it in another. Therefore, it makes sense for the decision-maker to choose a solution from the nondominated set. Otherwise, they could do better in terms of some or all of the criteria, and not do worse in any of them. Generally, however, the set of nondominated solutions is too large to be presented to the decision-maker for the final choice. Hence, we need tools that help the decision-maker focus on the preferred solutions (or alternatives). Normally one has to "trade off" certain criteria for others. MCDM has been an active area of research since the 1970s. There are several MCDM-related organizations including the International Society on Multi-criteria Decision Making, Euro Working Group on MCDA, and INFORMS Section on MCDM. For a history see: Köksalan, Wallenius and Zionts (2011). MCDM draws upon knowledge in many fields including: Mathematics Decision analysis Economics Computer technology Software engineering Information systems A typology There are different classifications of MCDM problems and methods. A major distinction between MCDM problems is based on whether the solutions are explicitly or implicitly defined. Multiple-criteria evaluation problems: These problems consist of a finite number of alternatives, explicitly known in the beginning of the solution process. Each alternative is represented by its performance in multiple criteria. The problem may be defined as finding the best alternative for a decision-maker (DM), or finding a set of good alternatives. One may also be interested in "sorting" or "classifying" alternatives. Sorting refers to placing alternatives in a set of preference-ordered classes (such as assigning credit-ratings to countries), and classifying refers to assigning alternatives to non-ordered sets (such as diagnosing patients based on their symptoms). Some of the MCDM methods in this category have been studied in a comparative manner in the book by Triantaphyllou on this subject, 2000. Multiple-criteria design problems (multiple objective mathematical programming problems): In these problems, the alternatives are not explicitly known. An alternative (solution) can be found by solving a mathematical model. The number of alternatives is either finite or infinite (countable or not countable), but typically exponentially large (in the number of variables ranging over finite domains). Whether it is an evaluation problem or a design problem, preference information of DMs is required in order to differentiate between solutions. The solution methods for MCDM problems are commonly classified based on the timing of preference information obtained from the DM. There are methods that require the DM's preference information at the start of the process, transforming the problem into essentially a single criterion problem. These methods are said to operate by "prior articulation of preferences". Methods based on estimating a value function or using the concept of "outranking relations", analytical hierarchy process, and some rule-based decision methods try to solve multiple criteria evaluation problems utilizing prior articulation of preferences.
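The notion of a nondominated solution introduced above lends itself to direct computation over a finite set of alternatives. A minimal sketch in Python, assuming maximization in every criterion; the score vectors are made up purely for illustration:

def dominates(a, b):
    """a dominates b: a is at least as good everywhere, strictly better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def nondominated(alternatives):
    return [a for a in alternatives
            if not any(dominates(b, a) for b in alternatives if b != a)]

# Example: (return, safety) scores for four hypothetical portfolios.
alts = [(8, 3), (6, 6), (7, 5), (5, 4)]
print(nondominated(alts))   # (5, 4) is dominated by (7, 5) and drops out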
Similarly, there are methods developed to solve multiple-criteria design problems using prior articulation of preferences by constructing a value function. Perhaps the most well-known of these methods is goal programming. Once the value function is constructed, the resulting single objective mathematical program is solved to obtain a preferred solution. Some methods require preference information from the DM throughout the solution process. These are referred to as interactive methods or methods that require "progressive articulation of preferences". These methods have been well-developed for both the multiple criteria evaluation (see for example, Geoffrion, Dyer and Feinberg, 1972, and Köksalan and Sagala, 1995) and design problems (see Steuer, 1986). Multiple-criteria design problems typically require the solution of a series of mathematical programming models in order to reveal implicitly defined solutions. For these problems, a representation or approximation of "efficient solutions" may also be of interest. This category is referred to as "posterior articulation of preferences", implying that the DM's involvement starts posterior to the explicit revelation of "interesting" solutions (see for example Karasakal and Köksalan, 2009). When the mathematical programming models contain integer variables, the design problems become harder to solve. Multiobjective Combinatorial Optimization (MOCO) constitutes a special category of such problems posing substantial computational difficulty (see Ehrgott and Gandibleux, 2002, for a review). Representations and definitions The MCDM problem can be represented in the criterion space or the decision space. Alternatively, if different criteria are combined by a weighted linear function, it is also possible to represent the problem in the weight space. Below are the demonstrations of the criterion and weight spaces as well as some formal definitions. Criterion space representation Let us assume that we evaluate solutions in a specific problem situation using several criteria. Let us further assume that more is better in each criterion. Then, among all possible solutions, we are ideally interested in those solutions that perform well in all considered criteria. However, it is unlikely to have a single solution that performs well in all considered criteria. Typically, some solutions perform well in some criteria and some perform well in others. Finding a way of trading off between criteria is one of the main endeavors in the MCDM literature. Mathematically, the MCDM problem corresponding to the above arguments can be represented as "max" q subject to q ∈ Q, where q is the vector of k criterion functions (objective functions) and Q is the feasible set, Q ⊆ R^k. If Q is defined explicitly (by a set of alternatives), the resulting problem is called a multiple-criteria evaluation problem. If Q is defined implicitly (by a set of constraints), the resulting problem is called a multiple-criteria design problem. The quotation marks are used to indicate that the maximization of a vector is not a well-defined mathematical operation. This corresponds to the argument that we will have to find a way to resolve the trade-off between criteria (typically based on the preferences of a decision maker) when a solution that performs well in all criteria does not exist. Decision space representation The decision space corresponds to the set of possible decisions that are available to us. The criteria values will be consequences of the decisions we make.
Hence, we can define a corresponding problem in the decision space. For example, in designing a product, we decide on the design parameters (decision variables) each of which affects the performance measures (criteria) with which we evaluate our product. Mathematically, a multiple-criteria design problem can be represented in the decision space as follows: "max" q = f(x) = (f_1(x), ..., f_k(x)) subject to x ∈ X, where X is the feasible set and x is the decision variable vector of size n. A well-developed special case is obtained when X is a polyhedron defined by linear inequalities and equalities. If all the objective functions are linear in terms of the decision variables, this variation leads to multiple objective linear programming (MOLP), an important subclass of MCDM problems. There are several definitions that are central in MCDM. Two closely related definitions are those of nondominance (defined based on the criterion space representation) and efficiency (defined based on the decision variable representation). Definition 1. q* ∈ Q is nondominated if there does not exist another q ∈ Q such that q ≥ q* and q ≠ q*. Roughly speaking, a solution is nondominated so long as it is not inferior to any other available solution in all the considered criteria. Definition 2. x* ∈ X is efficient if there does not exist another x ∈ X such that f(x) ≥ f(x*) and f(x) ≠ f(x*). If an MCDM problem represents a decision situation well, then the most preferred solution of a DM has to be an efficient solution in the decision space, and its image is a nondominated point in the criterion space. The following definitions are also important. Definition 3. q* ∈ Q is weakly nondominated if there does not exist another q ∈ Q such that q > q*. Definition 4. x* ∈ X is weakly efficient if there does not exist another x ∈ X such that f(x) > f(x*). Weakly nondominated points include all nondominated points and some special dominated points. The importance of these special dominated points comes from the fact that they commonly appear in practice and special care is necessary to distinguish them from nondominated points. If, for example, we maximize a single objective, we may end up with a weakly nondominated point that is dominated. The dominated points of the weakly nondominated set are located either on vertical or horizontal planes (hyperplanes) in the criterion space. Ideal point: (in criterion space) represents the best (the maximum for maximization problems and the minimum for minimization problems) of each objective function and typically corresponds to an infeasible solution. Nadir point: (in criterion space) represents the worst (the minimum for maximization problems and the maximum for minimization problems) of each objective function among the points in the nondominated set and is typically a dominated point. The ideal point and the nadir point are useful to the DM to get the "feel" of the range of solutions (although it is not straightforward to find the nadir point for design problems having more than two criteria). Illustrations of the decision and criterion spaces A two-variable MOLP problem in the decision variable space helps demonstrate some of the key concepts graphically. In Figure 1, the extreme points "e" and "b" maximize the first and second objectives, respectively. The red boundary between those two extreme points represents the efficient set. It can be seen from the figure that, for any feasible solution outside the efficient set, it is possible to improve both objectives by some points on the efficient set. Conversely, for any point on the efficient set, it is not possible to improve both objectives by moving to any other feasible solution.
At these solutions, one has to sacrifice from one of the objectives in order to improve the other objective. Due to its simplicity, the above problem can be represented in criterion space by replacing the decision variables x with the criterion values q = f(x): "max" q = (q_1, q_2) subject to q ∈ Q, where Q is the image of the feasible set X under the objective functions. We present the criterion space graphically in Figure 2. It is easier to detect the nondominated points (corresponding to efficient solutions in the decision space) in the criterion space. The north-east region of the feasible space constitutes the set of nondominated points (for maximization problems). Generating nondominated solutions There are several ways to generate nondominated solutions. We will discuss two of these. The first approach can generate a special class of nondominated solutions whereas the second approach can generate any nondominated solution. Weighted sums (Gass & Saaty, 1955) If we combine the multiple criteria into a single criterion by multiplying each criterion with a positive weight and summing up the weighted criteria, then the solution to the resulting single criterion problem is a special efficient solution. These special efficient solutions appear at corner points of the set of available solutions. Efficient solutions that are not at corner points have special characteristics and this method is not capable of finding such points. Mathematically, we can represent this situation as max w_1 q_1 + ... + w_k q_k, with w_i > 0, subject to q ∈ Q. By varying the weights, weighted sums can be used for generating efficient extreme point solutions for design problems, and supported (convex nondominated) points for evaluation problems. Achievement scalarizing function (Wierzbicki, 1980) Achievement scalarizing functions also combine multiple criteria into a single criterion by weighting them in a very special way. They create rectangular contours going away from a reference point towards the available efficient solutions. This special structure empowers achievement scalarizing functions to reach any efficient solution. This is a powerful property that makes these functions very useful for MCDM problems. Mathematically, we can represent the corresponding problem as min s(g, q, w, ρ) = min { max_i [(g_i − q_i)/w_i] + ρ Σ_i (g_i − q_i) }, subject to q ∈ Q, where g is a reference point, w is a vector of positive weights and ρ is a small positive constant. The achievement scalarizing function can be used to project any point (feasible or infeasible) on the efficient frontier. Any point (supported or not) can be reached. The second term in the objective function is required to avoid generating inefficient solutions. Figure 3 demonstrates how a feasible point and an infeasible point are projected onto nondominated points, respectively, along the direction w using an achievement scalarizing function. The dashed and solid contours correspond to the objective function contours with and without the second term of the objective function, respectively. Solving MCDM problems Different schools of thought have developed for solving MCDM problems (both of the design and evaluation type). For a bibliometric study showing their development over time, see Bragge, Korhonen, H. Wallenius and J. Wallenius [2010]. Multiple objective mathematical programming school (1) Vector maximization: The purpose of vector maximization is to approximate the nondominated set; originally developed for Multiple Objective Linear Programming problems (Evans and Steuer, 1973; Yu and Zeleny, 1975). (2) Interactive programming: Phases of computation alternate with phases of decision-making (Benayoun et al., 1971; Geoffrion, Dyer and Feinberg, 1972; Zionts and Wallenius, 1976; Korhonen and Wallenius, 1988). No explicit knowledge of the DM's value function is assumed.
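The two scalarizations just described can be compared on a small discrete criterion set. A sketch in Python; the feasible points, weights, and reference point are invented for illustration, and rho is the small augmentation constant from the achievement function:

Q = [(8, 3), (6, 6), (7, 5)]          # nondominated criterion vectors (maximize)
w = (0.5, 0.5)                         # strictly positive weights
g = (9, 7)                             # reference (aspiration) point
rho = 0.001

# Weighted sum: picks a supported (extreme) nondominated point.
best_ws = max(Q, key=lambda q: sum(wi * qi for wi, qi in zip(w, q)))

def achievement(q):
    # The max term drives q toward g along direction w; the rho term
    # breaks ties so only nondominated points are returned.
    return max((gi - qi) / wi for gi, qi, wi in zip(g, q, w)) \
        + rho * sum(gi - qi for gi, qi in zip(g, q))

best_asf = min(Q, key=achievement)
print(best_ws, best_asf)   # the two rules can select different points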
Goal programming school The purpose is to set a priori target values for goals, and to minimize weighted deviations from these goals. Both importance weights as well as lexicographic pre-emptive weights have been used (Charnes and Cooper, 1961). Fuzzy-set theorists Fuzzy sets were introduced by Zadeh (1965) as an extension of the classical notion of sets. This idea is used in many MCDM algorithms to model and solve fuzzy problems. Ordinal data based methods Ordinal data has a wide application in real-world situations. In this regard, some MCDM methods were designed to handle ordinal data as input data. For example, Ordinal Priority Approach and Qualiflex method. Multi-attribute utility theorists Multi-attribute utility or value functions are elicited and used to identify the most preferred alternative or to rank order the alternatives. Elaborate interview techniques, which exist for eliciting linear additive utility functions and multiplicative nonlinear utility functions, may be used (Keeney and Raiffa, 1976). Another approach is to elicit value functions indirectly by asking the decision-maker a series of pairwise ranking questions involving choosing between hypothetical alternatives (PAPRIKA method; Hansen and Ombler, 2008). French school The French school focuses on decision aiding, in particular the ELECTRE family of outranking methods that originated in France during the mid-1960s. The method was first proposed by Bernard Roy (Roy, 1968). Evolutionary multiobjective optimization school (EMO) EMO algorithms start with an initial population, and update it by using processes designed to mimic natural survival-of-the-fittest principles and genetic variation operators to improve the average population from one generation to the next. The goal is to converge to a population of solutions which represent the nondominated set (Schaffer, 1984; Srinivas and Deb, 1994). More recently, there are efforts to incorporate preference information into the solution process of EMO algorithms (see Deb and Köksalan, 2010). Grey system theory based methods In the 1980s, Deng Julong proposed Grey System Theory (GST) and its first multiple-attribute decision-making model, called Deng's Grey relational analysis (GRA) model. Later, the grey systems scholars proposed many GST based methods like Liu Sifeng's Absolute GRA model, Grey Target Decision Making (GTDM) and Grey Absolute Decision Analysis (GADA). Analytic hierarchy process (AHP) The AHP first decomposes the decision problem into a hierarchy of subproblems. Then the decision-maker evaluates the relative importance of its various elements by pairwise comparisons. The AHP converts these evaluations to numerical values (weights or priorities), which are used to calculate a score for each alternative (Saaty, 1980). A consistency index measures the extent to which the decision-maker has been consistent in her responses. AHP is one of the more controversial techniques listed here, with some researchers in the MCDA community believing it to be flawed. Several papers reviewed the application of MCDM techniques in various disciplines such as fuzzy MCDM, classic MCDM, sustainable and renewable energy, VIKOR technique, transportation systems, service quality, TOPSIS method, energy management problems, e-learning, tourism and hospitality, SWARA and WASPAS methods.
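The pairwise-comparison step of the AHP reduces to an eigenvector computation. A sketch in Python with NumPy; the 3x3 reciprocal comparison matrix is a made-up example (criterion 1 judged 3 times as important as criterion 2, and so on), not data from any real study:

import numpy as np

# Reciprocal pairwise comparison matrix (Saaty 1-9 scale).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                  # index of the principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                  # normalized priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
print(w, ci)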
MCDM methods The following MCDM methods are available, many of which are implemented by specialized decision-making software: Aggregated Indices Randomization Method (AIRM) Analytic hierarchy process (AHP) Analytic network process (ANP) Balance Beam process Best worst method (BWM) Brown–Gibson model Characteristic Objects METhod (COMET) Choosing By Advantages (CBA) Conjoint Value Hierarchy (CVA) Data envelopment analysis Decision EXpert (DEX) Disaggregation – Aggregation Approaches (UTA*, UTAII, UTADIS) Rough set (Rough set approach) Dominance-based rough set approach (DRSA) ELECTRE (Outranking) Evaluation Based on Distance from Average Solution (EDAS) Evidential reasoning approach (ER) FITradeoff (www.fitradeoff.org) Goal programming (GP) Grey relational analysis (GRA) Inner product of vectors (IPV) Measuring Attractiveness by a categorical Based Evaluation Technique (MACBETH) Multi-Attribute Global Inference of Quality (MAGIQ) Multi-attribute utility theory (MAUT) Multi-attribute value theory (MAVT) Markovian Multi Criteria Decision Making New Approach to Appraisal (NATA) Nonstructural Fuzzy Decision Support System (NSFDSS) Ordinal Priority Approach (OPA) Potentially All Pairwise RanKings of all possible Alternatives (PAPRIKA) PROMETHEE (Outranking) Simple Multi-Attribute Rating Technique (SMART) Stratified Multi Criteria Decision Making (SMCDM) Stochastic Multicriteria Acceptability Analysis (SMAA) Superiority and inferiority ranking method (SIR method) System Redesigning to Creating Shared Value (SYRCS) Technique for the Order of Prioritisation by Similarity to Ideal Solution (TOPSIS) Value analysis (VA) Value engineering (VE) VIKOR method Weighted product model (WPM) Weighted sum model (WSM) See also Architecture tradeoff analysis method Decision-making Decision-making software Decision-making paradox Decisional balance sheet Multicriteria classification problems Rank reversals in decision-making Superiority and inferiority ranking method References Further reading A Brief History prepared by Steuer and Zionts Malakooti, B. (2013). Operations and Production Systems with Multiple Objectives. John Wiley & Sons. Decision analysis Management systems Mathematical optimization Utility
Multiple-criteria decision analysis
[ "Mathematics" ]
4,545
[ "Mathematical optimization", "Mathematical analysis" ]
1,050,741
https://en.wikipedia.org/wiki/Pythagorean%20trigonometric%20identity
The Pythagorean trigonometric identity, also called simply the Pythagorean identity, is an identity expressing the Pythagorean theorem in terms of trigonometric functions. Along with the sum-of-angles formulae, it is one of the basic relations between the sine and cosine functions. The identity is sin^2(θ) + cos^2(θ) = 1. As usual, sin^2(θ) means (sin θ)^2. Proofs and their relationships to the Pythagorean theorem Proof based on right-angle triangles Any similar triangles have the property that if we select the same angle in all of them, the ratio of the two sides defining the angle is the same regardless of which similar triangle is selected, regardless of its actual size: the ratios depend upon the three angles, not the lengths of the sides. Thus for either of the similar right triangles in the figure, the ratio of its horizontal side to its hypotenuse is the same, namely cos θ. The elementary definitions of the sine and cosine functions in terms of the sides of a right triangle with hypotenuse c, side b opposite the angle and side a adjacent to it are: sin θ = b/c and cos θ = a/c. The Pythagorean identity follows by squaring both definitions above, and adding; the left-hand side of the identity then becomes (b^2 + a^2)/c^2, which by the Pythagorean theorem is equal to 1. This definition is valid for all angles, due to defining x = cos θ and y = sin θ for the unit circle (and thus x = c cos θ and y = c sin θ for a circle of radius c), reflecting our triangle in the y-axis and setting a = x and b = y. Alternatively, the identities found at Trigonometric symmetry, shifts, and periodicity may be employed. By the periodicity identities we can say if the formula is true for −π < θ ≤ π then it is true for all real θ. Next we prove the identity in the range π/2 < θ ≤ π. To do this we let t = θ − π/2; t will now be in the range 0 < t ≤ π/2. We can then make use of squared versions of some basic shift identities (squaring conveniently removes the minus signs): sin^2(θ) = sin^2(t + π/2) = cos^2(t) and cos^2(θ) = cos^2(t + π/2) = sin^2(t), so the identity for θ follows from the identity for t. Finally, it remains to prove the formula for −π < θ < 0; this can be done by squaring the symmetry identities to get sin^2(θ) = sin^2(−θ) and cos^2(θ) = cos^2(−θ). Related identities The two identities 1 + tan^2(θ) = sec^2(θ) and 1 + cot^2(θ) = csc^2(θ) are also called Pythagorean trigonometric identities. If one leg of a right triangle has length 1, then the tangent of the angle adjacent to that leg is the length of the other leg, and the secant of the angle is the length of the hypotenuse. In this way, this trigonometric identity involving the tangent and the secant follows from the Pythagorean theorem. The angle opposite the leg of length 1 (this angle can be labeled π/2 − θ) has cotangent equal to the length of the other leg, and cosecant equal to the length of the hypotenuse. In that way, this trigonometric identity involving the cotangent and the cosecant also follows from the Pythagorean theorem. The two identities are related to the main identity by a factor or divisor: dividing sin^2(θ) + cos^2(θ) = 1 through by cos^2(θ) gives the tangent–secant identity, and dividing through by sin^2(θ) gives the cotangent–cosecant identity. Proof using the unit circle The unit circle centered at the origin in the Euclidean plane is defined by the equation x^2 + y^2 = 1. Given an angle θ, there is a unique point P on the unit circle at an anticlockwise angle of θ from the x-axis, and the x- and y-coordinates of P are x = cos θ and y = sin θ. Consequently, from the equation for the unit circle, cos^2(θ) + sin^2(θ) = 1, the Pythagorean identity. In the figure, for an angle in the second quadrant, the point has a negative x-coordinate, appropriately given by cos θ, which is a negative number, while its y-coordinate sin θ is positive. As θ increases from zero to the full circle θ = 2π, the sine and cosine change signs in the various quadrants to keep x and y with the correct signs. The figure shows how the sign of the sine function varies as the angle changes quadrant.
Because the x- and y-axes are perpendicular, this Pythagorean identity is equivalent to the Pythagorean theorem for triangles with hypotenuse of length 1 (which is in turn equivalent to the full Pythagorean theorem by applying a similar-triangles argument). See Unit circle for a short explanation. Proof using power series The trigonometric functions may also be defined using power series, namely (for an angle measured in radians): sin θ = Σ_{n≥0} (−1)^n θ^(2n+1)/(2n+1)! and cos θ = Σ_{n≥0} (−1)^n θ^(2n)/(2n)!. Using the multiplication formula for power series at Multiplication and division of power series (suitably modified to account for the form of the series here) we obtain the series for sin^2(θ) and cos^2(θ). In the expression for sin^2(θ), n must be at least 1, while in the expression for cos^2(θ), the constant term is equal to 1. The remaining terms of their sum cancel: for each n ≥ 1 the coefficient of θ^(2n) in sin^2(θ) + cos^2(θ) is (with common factors removed) a multiple of Σ_{k=0}^{2n} (−1)^k C(2n, k) = (1 − 1)^(2n) = 0 by the binomial theorem. Consequently, sin^2(θ) + cos^2(θ) = 1, which is the Pythagorean trigonometric identity. When the trigonometric functions are defined in this way, the identity in combination with the Pythagorean theorem shows that these power series parameterize the unit circle, which we used in the previous section. This definition constructs the sine and cosine functions in a rigorous fashion and proves that they are differentiable, so that in fact it subsumes the previous two. Proof using the differential equation Sine and cosine can be defined as the two solutions to the differential equation y''(θ) + y(θ) = 0, satisfying respectively y(0) = 0, y'(0) = 1 and y(0) = 1, y'(0) = 0. It follows from the theory of ordinary differential equations that the first solution, sine, has the second, cosine, as its derivative, and it follows from this that the derivative of cosine is the negative of the sine. The identity is equivalent to the assertion that the function z(θ) = sin^2(θ) + cos^2(θ) is constant and equal to 1. Differentiating using the chain rule gives z'(θ) = 2 sin θ cos θ + 2 cos θ (−sin θ) = 0, so z is constant. A calculation confirms that z(0) = 1, and z is a constant, so z = 1 for all θ, so the Pythagorean identity is established. A similar proof can be completed using power series as above to establish that the sine has as its derivative the cosine, and the cosine has as its derivative the negative sine. In fact, the definitions by ordinary differential equation and by power series lead to similar derivations of most identities. This proof of the identity has no direct connection with Euclid's demonstration of the Pythagorean theorem. Proof using Euler's formula Using Euler's formula e^(iθ) = cos θ + i sin θ and factoring 1 = e^(iθ) e^(−iθ) as the complex difference of two squares, 1 = (cos θ + i sin θ)(cos θ − i sin θ) = cos^2(θ) + sin^2(θ). See also Pythagorean theorem List of trigonometric identities Unit circle Power series Differential equation Notes Mathematical identities Articles containing proofs Trigonometry Identity
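The identity, and the tangent–secant form obtained by dividing through by cos^2(θ), can be checked symbolically. A small sketch in Python using the SymPy library (assuming it is installed):

import sympy as sp

theta = sp.symbols('theta', real=True)

# Main identity: sin^2 + cos^2 - 1 simplifies to 0.
print(sp.simplify(sp.sin(theta)**2 + sp.cos(theta)**2 - 1))   # 0

# Related identity: 1 + tan^2 = sec^2.
print(sp.simplify(1 + sp.tan(theta)**2 - sp.sec(theta)**2))   # 0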
Pythagorean trigonometric identity
[ "Mathematics" ]
1,298
[ "Mathematical theorems", "Planes (geometry)", "Euclidean plane geometry", "Mathematical objects", "Equations", "Pythagorean theorem", "Articles containing proofs", "Mathematical identities", "Mathematical problems", "Algebra" ]
1,051,315
https://en.wikipedia.org/wiki/Monocrete%20construction
Monocrete is a building construction method utilising modular bolt-together pre-cast concrete wall panels. Monocrete construction was widely used in the construction of government housing in the 1940s and 1950s in Canberra, Australia. The expansion of the new capital was exceeding the ability of the Government to build houses, so alternative construction methods were investigated. The Canberra monocrete homes are built on brick piers and surrounding brick footing, and all of the walls are of monocrete construction including interior ones. They are precast with steel windows and door frames set directly into the concrete. Steel plates in the ceiling space bolt the individual wall panels together. The floor and roof are of normal construction - wood and tile respectively. The gaps between the wall panels are filled with a flexible gap-filling compound and covered with tape on the interior. It has been suggested that the panels tend to move separately to one another, opening up cracks in between them, and that the houses also tend to be susceptible to condensation build up and mold growth on the inside of the walls. A similar technique is used in the construction of some modern commercial buildings. References Building engineering Building materials Prefabricated houses
Monocrete construction
[ "Physics", "Engineering" ]
239
[ "Building engineering", "Architecture", "Construction", "Materials", "Civil engineering", "Matter", "Building materials" ]
1,051,627
https://en.wikipedia.org/wiki/Szemer%C3%A9di%E2%80%93Trotter%20theorem
The Szemerédi–Trotter theorem is a mathematical result in the field of discrete geometry. It asserts that given n points and m lines in the Euclidean plane, the number of incidences (i.e., the number of point-line pairs, such that the point lies on the line) is O(n^(2/3)m^(2/3) + n + m). This bound cannot be improved, except in terms of the implicit constants in its big O notation. An equivalent formulation of the theorem is the following. Given n points and an integer k ≥ 2, the number of lines which pass through at least k of the points is O(n^2/k^3 + n/k). The original proof of Endre Szemerédi and William T. Trotter was somewhat complicated, using a combinatorial technique known as cell decomposition. Later, László Székely discovered a much simpler proof using the crossing number inequality for graphs. This method has been used to produce an explicit upper bound of the form c·n^(2/3)m^(2/3) + n + m on the number of incidences. Subsequent research has lowered the constant c, coming from the crossing lemma, from 2.5 to 2.44. On the other hand, this bound would not remain valid if one replaces the coefficient 2.44 with 0.42. The Szemerédi–Trotter theorem has a number of consequences, including Beck's theorem in incidence geometry and the Erdős-Szemerédi sum-product problem in additive combinatorics. Proof of the first formulation We may discard the lines which contain two or fewer of the points, as they can contribute at most 2m incidences to the total number. Thus we may assume that every line contains at least three of the points. If a line contains k points, then it will contain k − 1 line segments which connect two consecutive points along the line. Because k ≥ 3 after discarding the two-point lines, it follows that k − 1 ≥ k/2, so the number of these line segments on each line is at least half the number of incidences on that line. Summing over all of the lines, the number of these line segments is again at least half the total number of incidences. Thus if e denotes the number of such line segments, it will suffice to show that e = O(n^(2/3)m^(2/3) + n + m). Now consider the graph formed by using the n points as vertices, and the e line segments as edges. Since each line segment lies on one of the m lines, and any two lines intersect in at most one point, the crossing number of this graph is at most the number of points where two lines intersect, which is at most m(m − 1)/2. The crossing number inequality implies that either e = O(n), or that m(m − 1)/2 = Ω(e^3/n^2), i.e. e = O(n^(2/3)m^(2/3)). In either case e = O(n^(2/3)m^(2/3) + n), giving the desired bound. Proof of the second formulation Since every pair of points can be connected by at most one line, there can be at most n(n − 1)/2 lines which can connect at k or more points, since k ≥ 2. This bound will prove the theorem when k is small (e.g. if k ≤ C for some absolute constant C). Thus, we need only consider the case when k is large, say k > C. Suppose that there are m lines that each contain at least k points. These lines generate at least mk incidences, and so by the first formulation of the Szemerédi–Trotter theorem, we have mk = O(n^(2/3)m^(2/3) + n + m), and so at least one of the statements mk = O(n^(2/3)m^(2/3)), mk = O(n), or mk = O(m) is true. The third possibility is ruled out since k was assumed to be large, so we are left with the first two. But in either of these two cases, some elementary algebra will give the bound m = O(n^2/k^3 + n/k) as desired. Optimality Except for its constant, the Szemerédi–Trotter incidence bound cannot be improved. To see this, consider for any positive integer N a set of points on the integer lattice P = {(a, b) : 1 ≤ a ≤ N, 1 ≤ b ≤ 2N^2} and a set of lines L = {y = cx + d : 1 ≤ c ≤ N, 1 ≤ d ≤ N^2}. Clearly, |P| = 2N^3 and |L| = N^3. Since each line in L is incident to N points of P (i.e., once for each x in {1, ..., N}), the number of incidences is N^4, which matches the upper bound. Generalization to R^d One generalization of this result to arbitrary dimension, R^d, was found by Agarwal and Aronov.
Given a set of n points, S, and the set of m hyperplanes, H, which are each spanned by S, the number of incidences between S and H is bounded above by O(m^(2/3)n^(d/3) + n^(d−1)), provided the number of hyperplanes is not too small relative to the number of points. Equivalently, the number of hyperplanes in H containing k or more points is bounded above by O(n^d/k^3 + n^(d−1)/k). A construction due to Edelsbrunner shows this bound to be asymptotically optimal. József Solymosi and Terence Tao obtained near sharp upper bounds for the number of incidences between points and algebraic varieties in higher dimensions, when the points and varieties satisfy "certain pseudo-line type axioms". Their proof uses the Polynomial Ham Sandwich Theorem. In the complex plane Many proofs of the Szemerédi–Trotter theorem over the real numbers rely in a crucial way on the topology of Euclidean space, so do not extend easily to other fields, e.g. the original proof of Szemerédi and Trotter; the polynomial partitioning proof and the crossing number proof do not extend to the complex plane. Tóth successfully generalized the original proof of Szemerédi and Trotter to the complex plane by introducing additional ideas. This result was also obtained independently and through a different method by Zahl. The implicit constant in the bound is not the same in the complex numbers: in Tóth's proof the constant is explicit; the constant is not explicit in Zahl's proof. When the point set is a Cartesian product, Solymosi and Tardos show that the Szemerédi-Trotter bound holds using a much simpler argument. In finite fields Let F be a field. A Szemerédi-Trotter bound is impossible in general due to the following example, stated here in a finite field with q elements: let P be the set of all q^2 points of the plane and let L be the set of all q^2 + q lines in the plane. Since each line contains q points, there are q^3 + q^2 incidences. On the other hand, a Szemerédi-Trotter bound would give O((q^2)^(2/3)(q^2)^(2/3) + q^2) = O(q^(8/3)) incidences. This example shows that the trivial, combinatorial incidence bound is tight. Bourgain, Katz and Tao show that if this example is excluded, then an incidence bound that is an improvement on the trivial bound can be attained. Incidence bounds over finite fields are of two types: (i) when at least one of the set of points or lines is 'large' in terms of the characteristic of the field; (ii) both the set of points and the set of lines are 'small' in terms of the characteristic. Large set incidence bounds Let q be an odd prime power. Then Vinh showed that the number of incidences between n points and m lines in the projective plane PG(2, q) is at most nm/q + sqrt(q·n·m). Note that there is no implicit constant in this bound. Small set incidence bounds Let F be a field of characteristic p. Stevens and de Zeeuw show that the number of incidences between n points and m lines in F^2 is O(m^(11/15)n^(11/15)), under a condition relating n and m to the characteristic p in positive characteristic. (In a field of characteristic zero, this condition is not necessary.) This bound is better than the trivial incidence estimate when n and m are of comparable size. If the point set is a Cartesian product, then they show an improved incidence bound: let P = A × B be a finite set of points with |A| ≤ |B| and let L be a set of lines in the plane. Under suitable conditions on the sizes of A, B and L (and, in positive characteristic, on p), they bound the number of incidences between P and L, and this bound is optimal. Note that by point-line duality in the plane, this incidence bound can be rephrased for an arbitrary point set and a set of lines having a Cartesian product structure. In both the reals and arbitrary fields, Rudnev and Shkredov show an incidence bound for when both the point set and the line set have a Cartesian product structure. This is sometimes better than the above bounds.
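The lattice construction from the Optimality section above can be checked by brute force for small N. A sketch in Python; it counts incidences directly and compares them with N^4 (only feasible for small N, since the point set has size 2N^3):

def incidences(N):
    points = {(a, b) for a in range(1, N + 1) for b in range(1, 2 * N * N + 1)}
    lines = [(c, d) for c in range(1, N + 1) for d in range(1, N * N + 1)]
    count = 0
    for c, d in lines:
        for x in range(1, N + 1):
            if (x, c * x + d) in points:   # y = c*x + d stays within the grid
                count += 1
    return count

for N in (2, 3, 4):
    print(N, incidences(N), N ** 4)   # the two counts agree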
See also Hopcroft's problem, the algorithmic problem of detecting a point-line incidence References Euclidean plane geometry Theorems in discrete geometry Theorems in combinatorics Articles containing proofs
Szemerédi–Trotter theorem
[ "Mathematics" ]
1,557
[ "Theorems in combinatorics", "Theorems in discrete geometry", "Euclidean plane geometry", "Theorems in discrete mathematics", "Combinatorics", "Theorems in geometry", "Articles containing proofs", "Planes (geometry)" ]
1,052,019
https://en.wikipedia.org/wiki/Mundane%20astrology
Mundane astrology, also known as political astrology, is the branch of astrology dealing with politics, the government, and the laws governing a particular nation, state, or city. The name derives from the Latin word mundus, 'world'. Certain countries are held to have astrological charts (or horoscopes), just as a person is said to in astrology; for example, the chart for the United States is widely thought to be sometime during the day of July 4, 1776, for this is the exact day that the Declaration of Independence was signed and made fully official, thus causing the "birth" of the United States as a nation. Indeed, July 4 is a major national holiday in America and unequivocally thought of as the "birthday" of the entire nation. History Mundane astrology is widely believed by astrological historians to be the most ancient branch of astrology. Early Babylonian astrology was exclusively concerned with mundane astrology, being geographically oriented, specifically applied to countries, cities and nations, and almost wholly concerned with the welfare of the state and the king as the governing head of the nation. Astrological practices of divination and planetary interpretation have been used for millennia to answer political questions, but only with the gradual emergence of horoscopic astrology, from the sixth century BC, did astrology develop into the two distinct branches of mundane astrology and natal astrology. Techniques and principles Astrologically, the affairs of a nation are judged from the horoscope set up at the time of its official inauguration or the birth chart of its leader, or various phenomena such as eclipses, lunations, great conjunctions, planetary stations, comets and ingresses. The techniques of the subject were discussed in detail in the 2nd century work of the Alexandrian astronomer Ptolemy, who outlined its principles in the second book of his Tetrabiblos. Ptolemy set this topic before his discussion of individual birth charts because he argued that the astrological assessment of any 'particular' individual must rest upon prior knowledge of the 'general' temperament of their ethnic type; and that the circumstances of individual lives are subsumed, to some extent, within the fate of their community. The third chapter of his work offers an association between planets, zodiac signs and the national characteristics of 73 nations. It concludes with three assertions which act as core principles of mundane astrology: Each of the fixed stars has familiarity with the countries attributed to the sign of its ecliptic rising. The time of the first founding of a city (or nation) can be used in a similar way to an individual horoscope, to astrologically establish the characteristics and experiences of that city. The most significant considerations are the regions of the zodiac which mark the place of the Sun and Moon, and the four angles of the chart – in particular the ascendant. If the time of the foundation of the city or nation is not known, a similar use can be made of the horoscope of whoever holds office or is king at the time, with particular attention given to the midheaven of that chart. Practice The first English astrologer for whom we have evidence of astrological practice is Richard Trewythian, whose notebook is largely concerned with mundane astrology. He constructed horoscopes for the Sun's ingress into Aries over thirty years, and recorded general predictions for twelve of those years between 1430 and 1458.
His notebooks demonstrate how he recorded the logic for his conclusions. He also made several predictions concerning the king (Henry VI), such as one he made in 1433 where he noted: "it seems that the king will be sick this year because Saturn is lord of the tenth house". Notes References Works cited External links 17th Century study in the Ancient Art of Mundane Astrology hosted by Skyscript (accessed 1 July 2012). The complete fourth book of William Ramesey's Astrologiae Restaurata, 'Astrology Restored' (London, 1653), edited and annotated by Steven Birchfield (1.43MB). The Fourth book is entitled Astrologia Munda, 'Mundane Astrology' - said by Birchfield to be the closest thing we have to an accessible textbook on traditional mundane astrology. Astrology
Mundane astrology
[ "Astronomy" ]
868
[ "Astrology", "History of astronomy" ]
1,052,135
https://en.wikipedia.org/wiki/Human%20action%20cycle
The human action cycle is a psychological model which describes the steps humans take when they interact with computer systems. The model was proposed by Donald A. Norman, a scholar in the discipline of human–computer interaction. The model can be used to help evaluate the efficiency of a user interface (UI). Understanding the cycle requires an understanding of the user interface design principles of affordance, feedback, visibility and tolerance. The human action cycle describes how humans may form goals and then develop a series of steps required to achieve that goal, using the computer system. The user then executes the steps, thus the model includes both cognitive activities and physical activities. The three stages of the human action cycle The model is divided into three stages of seven steps in total, and is (approximately) as follows: Goal formation stage 1. Goal formation. Execution stage 2. Translation of goals into a set of unordered tasks required to achieve goals. 3. Sequencing the tasks to create the action sequence. 4. Executing the action sequence. Evaluation stage 5. Perceiving the results after having executed the action sequence. 6. Interpreting the actual outcomes based on the expected outcomes. 7. Comparing what happened with what the user wished to happen. Use in evaluation of user interfaces Typically, an evaluator of the user interface will pose a series of questions for each of the cycle's steps, an evaluation of the answer provides useful information about where the user interface may be inadequate or unsuitable. These questions might be: Step 1, Forming a goal: Do the users have sufficient domain and task knowledge and sufficient understanding of their work to form goals? Does the UI help the users form these goals? Step 2, Translating the goal into a task or a set of tasks: Do the users have sufficient domain and task knowledge and sufficient understanding of their work to formulate the tasks? Does the UI help the users formulate these tasks? Step 3, Planning an action sequence: Do the users have sufficient domain and task knowledge and sufficient understanding of their work to formulate the action sequence? Does the UI help the users formulate the action sequence? Step 4, Executing the action sequence: Can typical users easily learn and use the UI? Do the actions provided by the system match those required by the users? Are the affordance and visibility of the actions good? Do the users have an accurate mental model of the system? Does the system support the development of an accurate mental model? Step 5, Perceiving what happened: Can the users perceive the system’s state? Does the UI provide the users with sufficient feedback about the effects of their actions? Step 6, Interpreting the outcome according to the users’ expectations: Are the users able to make sense of the feedback? Does the UI provide enough feedback for this interpretation? Step 7, Evaluating what happened against what was intended: Can the users compare what happened with what they were hoping to achieve? Further reading Norman, D. A. (1988). The Design of Everyday Things. New York, Doubleday/Currency Ed. Related terms Gulf of evaluation exists when the user has trouble performing the evaluation stage of the human action cycle (steps 5 to 7). Gulf of execution exists when the user has trouble performing the execution stage of the human action cycle (steps 2 to 4). OODA Loop is an equivalent in military strategy. Human–computer interaction Motor control Psychological models
Human action cycle
[ "Engineering", "Biology" ]
685
[ "Human–computer interaction", "Behavior", "Human–machine interaction", "Motor control" ]
2,265,144
https://en.wikipedia.org/wiki/Syneresis%20%28chemistry%29
Syneresis (also spelled 'synæresis' or 'synaeresis'), in chemistry, is the extraction or expulsion of a liquid from a gel, such as when serum drains from a contracting clot of blood. Another example of syneresis is the collection of whey on the surface of yogurt. Syneresis can also be observed when the amount of diluent in a swollen polymer exceeds the solubility limit as the temperature changes. A household example of this is the counterintuitive expulsion of water from dry gelatin when the temperature increases. Syneresis has also been proposed as the mechanism of formation for the amorphous silica composing the frustule of diatoms. Examples In the processing of dairy milk, for example during cheese making, syneresis is the formation of the curd due to the sudden removal of the hydrophilic macropeptides, which causes an imbalance in intermolecular forces. Bonds between hydrophobic sites start to develop and are enforced by calcium bonds, which form as the water molecules in the micelles start to leave the structure. This process is usually referred to as the phase of coagulation and syneresis. The splitting of the bond between residues 105 and 106 in the κ-casein molecule is often called the primary phase of the rennet action, while the phase of coagulation and syneresis is referred to as the secondary phase. In cooking, syneresis is the sudden release of moisture contained within protein molecules, usually caused by excessive heat, which over-hardens the protective shell. Moisture inside expands upon heating. The hard protein shell pops, expelling the moisture. This process is responsible for transforming juicy rare steak into dry steak when cooked thoroughly. It creates weeping in scrambled eggs, with dry protein curd swimming in the released moisture. It also causes emulsified sauces, such as hollandaise, to "break" ("split"). Additionally, it creates unsightly moisture pockets within baked custard dishes, such as flan or crème brûlée. Gels formed from agarose are prone to syneresis, and the degree of syneresis is inversely proportional to the concentration of the agarose in the gels. In dentistry, syneresis is the expulsion of water or other liquid molecules from dental impression materials (for instance, alginate) after an impression has been taken. Due to this process, the impression shrinks a little and therefore its size is no longer accurate. For this reason, many dental impression companies strongly recommend to pour the dental cast as soon as possible to prevent distortion of the dimension of the teeth and objects in the impression. The opposite process of syneresis is imbibition, which is the process of a material absorbing water molecules from the surroundings. Alginate also demonstrates imbibition because it will absorb water if soaked in it. See also Coagulation Flocculation References Chemical processes Chemical mixtures Colloidal chemistry
Syneresis (chemistry)
[ "Physics", "Chemistry", "Materials_science" ]
624
[ "Colloidal chemistry", "Colloids", "Surface science", "Chemical processes", "Chemical mixtures", "Condensed matter physics", "nan", "Chemical process engineering", "Chemical process stubs" ]
2,265,316
https://en.wikipedia.org/wiki/Queen%27s%20metal
Queen's Metal, an alloy of nine parts tin and one each of antimony, lead, and bismuth, is intermediate in hardness between pewter and britannia metal. It was developed by English pewtersmiths in the 16th century; the recipe was initially a secret and was reserved for pieces made for the English royal family. References Fusible alloys Tin alloys Lead alloys Antimony alloys Bismuth alloys
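The stated recipe translates directly into mass fractions; a minimal sketch (assuming the parts are measured by weight, which the article does not state explicitly):

```python
# Queen's metal: 9 parts tin + 1 part each of antimony, lead, bismuth = 12 parts.
parts = {"tin": 9, "antimony": 1, "lead": 1, "bismuth": 1}
total = sum(parts.values())
for metal, p in parts.items():
    print(f"{metal}: {100 * p / total:.1f}%")  # tin 75.0%, the others 8.3% each
```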
Queen's metal
[ "Chemistry", "Materials_science" ]
85
[ "Lead alloys", "Alloy stubs", "Metallurgy", "Bismuth alloys", "Fusible alloys", "Tin alloys", "Alloys", "Antimony alloys" ]
2,267,331
https://en.wikipedia.org/wiki/Perfect%20mixing
Perfect mixing is a term heavily used in the definition of models that predict the behavior of chemical reactors. Perfect mixing assumes that, within a given physical envelope, there are no spatial gradients in quantities such as: concentration (with respect to any chemical species) temperature chemical potential catalytic activity Physical chemistry
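Under the perfect-mixing assumption, the composition inside a continuous stirred-tank reactor is uniform and equals the outlet composition, so a species balance collapses to a single ordinary differential equation. A minimal sketch, with illustrative values for the flow rate Q, volume V and first-order rate constant k (all assumptions, not from the article):

```python
Q, V, k = 0.5, 10.0, 0.05    # flow (L/s), tank volume (L), rate constant (1/s)
c_in, c = 1.0, 0.0           # inlet and initial concentrations (mol/L)
dt = 0.1

# dc/dt = (Q/V)*(c_in - c) - k*c : a single equation suffices because perfect
# mixing means c is the same everywhere in the tank (no spatial gradients).
for _ in range(int(600 / dt)):           # simple explicit Euler integration
    c += dt * ((Q / V) * (c_in - c) - k * c)

print(f"steady-state estimate: {c:.4f}")
print(f"analytic steady state: {(Q / V) * c_in / (Q / V + k):.4f}")  # 0.5000
```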
Perfect mixing
[ "Physics", "Chemistry" ]
59
[ "Physical chemistry", "Applied and interdisciplinary physics", "Physical chemistry stubs", "nan" ]
5,677,733
https://en.wikipedia.org/wiki/Total%20variation%20diminishing
In numerical methods, total variation diminishing (TVD) is a property of certain discretization schemes used to solve hyperbolic partial differential equations. The most notable application of this method is in computational fluid dynamics. The concept of TVD was introduced by Ami Harten. Model equation In systems described by partial differential equations, such as the following hyperbolic advection equation,

\frac{\partial u}{\partial t} + a \frac{\partial u}{\partial x} = 0,

the total variation (TV) is given by

TV(u(\cdot,t)) = \int \left| \frac{\partial u}{\partial x} \right| dx,

and the total variation for the discrete case is

TV(u^n) = \sum_j \left| u_{j+1}^n - u_j^n \right|,

where u_j^n = u(x_j, t^n). A numerical method is said to be total variation diminishing (TVD) if

TV(u^{n+1}) \le TV(u^n).

Characteristics A numerical scheme is said to be monotonicity preserving if the following property is maintained: if u^n is monotonically increasing (or decreasing) in space, then so is u^{n+1}. Harten (1983) proved the following properties for a numerical scheme: A monotone scheme is TVD, and A TVD scheme is monotonicity preserving. Application in CFD In computational fluid dynamics, a TVD scheme is employed to capture sharper shock predictions without any misleading oscillations when the variation of the field variable \phi is discontinuous. To capture the variation, fine grids (very small \Delta x) are needed and the computation becomes heavy and therefore uneconomic. The use of coarse grids with the central difference scheme, upwind scheme, hybrid difference scheme or power law scheme gives false shock predictions. A TVD scheme enables sharper shock predictions on coarse grids, saving computation time, and as the scheme preserves monotonicity there are no spurious oscillations in the solution. Discretisation Consider the steady-state one-dimensional convection–diffusion equation,

\nabla \cdot (\rho \mathbf{u} \phi) = \nabla \cdot (\Gamma \nabla \phi) + S_\phi,

where \rho is the density, \mathbf{u} is the velocity vector, \phi is the property being transported, \Gamma is the coefficient of diffusion and S_\phi is the source term responsible for generation of the property \phi. Making the flux balance of this property about a control volume we get

\int_A \mathbf{n} \cdot (\rho \mathbf{u} \phi)\, dA = \int_A \mathbf{n} \cdot (\Gamma \nabla \phi)\, dA + \int_{CV} S_\phi\, dV.

Here \mathbf{n} is the normal to the surface of the control volume. Ignoring the source term and integrating over a one-dimensional control volume centred on node P, with neighbouring nodes W and E and west and east faces w and e, the equation reduces to a balance of convective and diffusive fluxes across the two faces,

F_e \phi_e - F_w \phi_w = D_e (\phi_E - \phi_P) - D_w (\phi_P - \phi_W),

where F = \rho u A is the convective flux and D = \Gamma A / \delta x the diffusive conductance. The continuity equation also has to be satisfied in one of its equivalent forms for this problem: F_e = F_w = F. Assuming that the diffusivity is a homogeneous property and that the grid spacing is equal, D_e = D_w = D, the equation further reduces to

F (\phi_e - \phi_w) = D (\phi_E - 2\phi_P + \phi_W).

The equation above can be written as

P (\phi_e - \phi_w) = \phi_E - 2\phi_P + \phi_W,

where P = F/D = \rho u\, \delta x / \Gamma is the Péclet number. TVD scheme A total variation diminishing scheme makes an assumption for the values of \phi_w and \phi_e to be substituted in the discretized equation: each face value is written as a weighted combination of the neighbouring nodal values, with a weighting function f to be determined from

f = f(\phi_U, \phi_{UU}, \phi_D),

where U refers to the node upstream of the face, UU refers to the node upstream of U, and D refers to the node downstream. Two weighting functions are needed: f^+ applies when the flow is in the positive direction (i.e., from left to right) and f^- when the flow is in the negative direction, from right to left; they are selected automatically through the factors (P + |P|)/2P and (P - |P|)/2P. If the flow is in the positive direction, the Péclet number P is positive and the term (P - |P|)/2P = 0, so the function f^- plays no role in the assumed face values. Likewise, when the flow is in the negative direction, P is negative and (P + |P|)/2P = 0, so the function f^+ plays no role. The scheme therefore takes into account the values of the property depending on the direction of flow and, using the weighting functions, tries to achieve monotonicity in the solution, thereby producing results with no spurious shocks. Limitations Monotone schemes are attractive for solving engineering and scientific problems because they do not produce non-physical solutions.
Godunov's theorem proves that linear schemes which preserve monotonicity are, at most, only first order accurate. Higher order linear schemes, although more accurate for smooth solutions, are not TVD and tend to introduce spurious oscillations (wiggles) where discontinuities or shocks arise. To overcome these drawbacks, various high-resolution, non-linear techniques have been developed, often using flux/slope limiters. See also Flux limiters Godunov's theorem High-resolution scheme MUSCL scheme Sergei K. Godunov Total variation References Further reading Hirsch, C. (1990), Numerical Computation of Internal and External Flows, Vol 2, Wiley. Laney, C. B. (1998), Computational Gas Dynamics, Cambridge University Press. Toro, E. F. (1999), Riemann Solvers and Numerical Methods for Fluid Dynamics, Springer-Verlag. Tannehill, J. C., Anderson, D. A., and Pletcher, R. H. (1997), Computational Fluid Mechanics and Heat Transfer, 2nd Ed., Taylor & Francis. Wesseling, P. (2001), Principles of Computational Fluid Dynamics, Springer-Verlag. Anil W. Date Introduction to Computational Fluid Dynamics, Cambridge University Press. Numerical differential equations Computational fluid dynamics
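To make the TVD property concrete, the sketch below advances the linear advection equation with the first-order upwind scheme, which is monotone and hence TVD, and verifies numerically that the discrete (periodic) total variation never increases across a discontinuity. This is an illustrative sketch, not code from the references above:

```python
import numpy as np

def total_variation(u):
    # Discrete total variation on a periodic grid: sum_j |u[j+1] - u[j]|.
    return np.sum(np.abs(np.roll(u, -1) - u))

# Linear advection u_t + a u_x = 0, first-order upwind, periodic boundaries.
nx, a, cfl = 200, 1.0, 0.8
dx = 1.0 / nx
dt = cfl * dx / a
u = np.where(np.linspace(0.0, 1.0, nx) < 0.5, 1.0, 0.0)  # step (discontinuity)

tv = total_variation(u)
for _ in range(100):
    u = u - (a * dt / dx) * (u - np.roll(u, 1))   # upwind update (a > 0)
    tv_new = total_variation(u)
    assert tv_new <= tv + 1e-12                   # TVD: TV never increases
    tv = tv_new

print("final total variation:", tv)  # starts at 2.0 and never grows
```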
Total variation diminishing
[ "Physics", "Chemistry" ]
967
[ "Computational fluid dynamics", "Fluid dynamics", "Computational physics" ]
5,678,057
https://en.wikipedia.org/wiki/Godunov%27s%20theorem
In numerical analysis and computational fluid dynamics, Godunov's theorem — also known as Godunov's order barrier theorem — is a mathematical theorem important in the development of the theory of high-resolution schemes for the numerical solution of partial differential equations. The theorem states that linear numerical schemes for solving partial differential equations, having the property of not generating new extrema (monotone schemes), can be at most first-order accurate. Professor Sergei Godunov originally proved the theorem as a Ph.D. student at Moscow State University. It is his most influential work in the area of applied and numerical mathematics and has had a major impact on science and engineering, particularly in the development of methods used in computational fluid dynamics (CFD) and other computational fields. One of his major contributions was to prove the theorem (Godunov, 1954; Godunov, 1959) that bears his name. The theorem We generally follow Wesseling (2001). Assume a continuum problem described by a PDE is to be computed using a numerical scheme based upon a uniform computational grid and a one-step, constant step-size, M grid point, integration algorithm, either implicit or explicit. Then if x_j = j\,\Delta x and t^n = n\,\Delta t, such a scheme can be described by

\sum_{m=1}^{M} \beta_m\, u_{j+m}^{n+1} = \sum_{m=1}^{M} \alpha_m\, u_{j+m}^{n}. \quad (1)

In other words, the solution u_j^{n+1} at time n+1 and location j is a linear function of the solution at the previous time step n. We assume that equation (1) determines u_j^{n+1} uniquely. Now, since the above equation represents a linear relationship between u_j^{n+1} and u_j^{n}, we can perform a linear transformation to obtain the following equivalent form,

u_j^{n+1} = \sum_{m} \gamma_m\, u_{j+m}^{n}. \quad (2)

Theorem 1: Monotonicity preserving The above scheme of equation (2) is monotonicity preserving if and only if

\gamma_m \ge 0, \quad \forall m. \quad (3)

Proof - Godunov (1959) Case 1: (sufficient condition) Assume (3) applies and that u_j^n is monotonically increasing with j. Then, because u_j^n \le u_{j+1}^n it therefore follows that u_j^{n+1} \le u_{j+1}^{n+1}, because

u_{j+1}^{n+1} - u_j^{n+1} = \sum_{m} \gamma_m \left( u_{j+m+1}^{n} - u_{j+m}^{n} \right) \ge 0.

This means that monotonicity is preserved for this case. Case 2: (necessary condition) We prove the necessary condition by contradiction. Assume that \gamma_p < 0 for some p and choose the following monotonically increasing u_j^n:

u_j^n = 0 \text{ for } j < k, \qquad u_j^n = 1 \text{ for } j \ge k.

Then from equation (2) we get

u_{j+1}^{n+1} - u_j^{n+1} = \sum_{m} \gamma_m \left( u_{j+m+1}^{n} - u_{j+m}^{n} \right) = \gamma_{k-j-1}.

Now choose j = k - p - 1, to give

u_{j+1}^{n+1} - u_j^{n+1} = \gamma_p < 0,

which implies that u_j^{n+1} is NOT increasing, and we have a contradiction. Thus, monotonicity is NOT preserved for \gamma_p < 0, which completes the proof. Theorem 2: Godunov’s Order Barrier Theorem Linear one-step second-order accurate numerical schemes for the convection equation

\frac{\partial u}{\partial t} + c\, \frac{\partial u}{\partial x} = 0, \qquad t > 0, \; x \in \mathbb{R}, \quad (4)

cannot be monotonicity preserving unless

\sigma = c\, \frac{\Delta t}{\Delta x} \in \mathbb{Z},

where \sigma is the signed Courant–Friedrichs–Lewy (CFL) number. Proof - Godunov (1959) Assume a numerical scheme of the form described by equation (2). Over one time step the exact solution of equation (4) is a pure translation, u(x, t^{n+1}) = u(x - c\,\Delta t, t^n); on the grid, the data are shifted by \sigma cells. A scheme that is at least second-order accurate must reproduce this translation exactly for initial data that are polynomials of degree up to two in j; substituting u_j^n = 1, u_j^n = j and u_j^n = j^2 in turn into equation (2) and requiring the exactly translated solution gives the moment conditions

\sum_m \gamma_m = 1, \qquad \sum_m m\, \gamma_m = -\sigma, \qquad \sum_m m^2\, \gamma_m = \sigma^2. \quad (15)

Suppose that the scheme IS monotonicity preserving; then, according to Theorem 1 above,

\gamma_m \ge 0, \quad \forall m. \quad (16)

Under (16) the \gamma_m form a set of non-negative weights summing to one, with first moment -\sigma and second moment \sigma^2; their variance, \sum_m m^2 \gamma_m - \left( \sum_m m\, \gamma_m \right)^2 = \sigma^2 - \sigma^2 = 0, vanishes, which is only possible if all the weight is concentrated on a single integer offset m_0 = -\sigma. Hence, unless \sigma is an integer, conditions (15) and (16) cannot hold simultaneously, and we have a contradiction, which completes the proof. The exceptional situation whereby \sigma is an integer is only of theoretical interest, since this cannot be realised with variable coefficients. Also, integer CFL numbers greater than unity would not be feasible for practical problems. See also Finite volume method Flux limiter Total variation diminishing References Godunov, Sergei K. (1954), Ph.D. Dissertation: Different Methods for Shock Waves, Moscow State University. Godunov, Sergei K. (1959), A Difference Scheme for Numerical Solution of Discontinuous Solution of Hydrodynamic Equations, Mat. Sbornik, 47, 271-306, translated US Joint Publ. Res. Service, JPRS 7226, 1969.
Numerical differential equations Theorems in analysis Computational fluid dynamics
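The order barrier can also be seen numerically. In the sketch below (illustrative only, not taken from the references), the classic Lax–Wendroff scheme, a linear one-step second-order scheme of the form of equation (2) whose coefficient \gamma_{+1} = -c(1-c)/2 is negative for 0 < c < 1, generates new extrema at a step, while first-order upwind stays within the initial bounds:

```python
import numpy as np

nx, cfl, steps = 200, 0.8, 60
u0 = np.where(np.arange(nx) < nx // 2, 1.0, 0.0)  # monotone step data

def upwind(u, c):
    return u - c * (u - np.roll(u, 1))            # first order, monotone

def lax_wendroff(u, c):
    # Second-order linear scheme: one of its three coefficients is negative.
    return (u - 0.5 * c * (np.roll(u, -1) - np.roll(u, 1))
              + 0.5 * c**2 * (np.roll(u, -1) - 2 * u + np.roll(u, 1)))

u1, u2 = u0.copy(), u0.copy()
for _ in range(steps):
    u1, u2 = upwind(u1, cfl), lax_wendroff(u2, cfl)

print("upwind range:      ", u1.min(), u1.max())   # stays within [0, 1]
print("lax-wendroff range:", u2.min(), u2.max())   # over- and undershoots
```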
Godunov's theorem
[ "Physics", "Chemistry", "Mathematics" ]
734
[ "Theorems in mathematical analysis", "Mathematical analysis", "Mathematical theorems", "Computational fluid dynamics", "Computational physics", "Mathematical problems", "Fluid dynamics" ]
5,679,969
https://en.wikipedia.org/wiki/Block%20and%20bleed%20manifold
A block and bleed manifold is a hydraulic manifold that combines one or more block/isolate valves, usually ball valves, and one or more bleed/vent valves, usually ball or needle valves, into one component for interface with other components (pressure measurement transmitters, gauges, switches, etc.) of a hydraulic (fluid) system. The purpose of the block and bleed manifold is to isolate or block the flow of fluid in the system so the fluid from upstream of the manifold does not reach other components of the system that are downstream. It then bleeds off or vents the remaining fluid from the system on the downstream side of the manifold. For example, a block and bleed manifold would be used to stop the flow of fluids to some component, then vent the fluid from that component’s side of the manifold, in order to effect some kind of work (maintenance/repair/replacement) on that component. Types of valves Block and Bleed A block and bleed manifold with one block valve and one bleed valve is also known as an isolation valve or block and bleed valve; a block and bleed manifold with multiple valves is also known as an isolation manifold. This valve is used in combustible gas trains in many industrial applications. Block and bleed needle valves are used in hydraulic and pneumatic systems because the needle valve allows for precise flow regulation when there is low flow in a non-hazardous environment. Double Block and Bleed (DBB Valves) These valves replace existing traditional techniques employed by pipeline engineers to generate a double block and bleed configuration in the pipeline. Two block valves and a bleed valve are combined as a unit, or manifold, to be installed for positive isolation. Used for critical process service, DBB valves are intended for high-pressure systems or toxic/hazardous fluid processes. Applications that use DBB valves include instrument drain, chemical injection connection, chemical seal isolation, and gauge isolation. DBB valves do the work of three separate valves (2 isolations and 1 drain), require less space and weigh less. Cartridge Type Standard Length DBB This type of double block and bleed valve has a patented design which incorporates two ball valves and a bleed valve into one compact cartridge-type unit with ANSI B16.5 tapped flanged connections. The major benefit of this design configuration is that the valve has the same face-to-face dimension as a single block ball valve (as specified in API 6D and ANSI B16.10), which means the valve can easily be installed into an existing pipeline without the need for any pipeline re-working. Three Piece Non Standard Length DBB This type of double block and bleed valve (DBB valve) features the traditional style of flange-by-flange valve and is available with ANSI B16.5 flanges, hub connections and welded ends to suit the pipeline system it is to be installed in. It features all the benefits of the single-unit DBB valve, with the added benefit of a bespoke face-to-face dimension if required. Single Unit DBB This design also has operational advantages: there are significantly fewer potential leak paths within the double block and bleed section of the pipeline. Because the valves are full bore with an uninterrupted flow orifice, they have a negligible pressure drop across the unit. The pipelines where these valves are installed can also be pigged without any problems. There are several advantages in using a double block and bleed valve. Significantly, because all the valve components are housed in a single unit, the space required for the installation is dramatically reduced, thus freeing up room for other pieces of essential equipment. Considering the operations and procedures executed before an operator can intervene, the double block and bleed manifold offers further advantages over the traditional hook-up. Because the volume of the cavity between the two balls is so small, the operator can evacuate this space efficiently, thereby quickly establishing a safe working environment. References Fluid mechanics Hydraulics Mechanical engineering
Block and bleed manifold
[ "Physics", "Chemistry", "Engineering" ]
800
[ "Applied and interdisciplinary physics", "Physical systems", "Hydraulics", "Civil engineering", "Mechanical engineering", "Fluid mechanics", "Fluid dynamics" ]
5,680,091
https://en.wikipedia.org/wiki/%C3%89cole%20nationale%20sup%C3%A9rieure%20d%27ing%C3%A9nieurs%20de%20constructions%20a%C3%A9ronautiques
The École nationale supérieure d'ingénieurs de constructions aéronautiques (meaning "National Higher School of Aeronautical Construction"), or ENSICA, was a French engineering school founded in 1945. It was located in Toulouse. In 2007, Ensica merged with Supaéro to form the Institut supérieur de l'aéronautique et de l'espace (ISAE). Ensica recruited its students from the French "Concours des Grandes Écoles", a competitive examination which requires studies at the "classes préparatoires". Classes préparatoires last two years, during which students work intensively on mathematics and physics. Studies at Ensica lasted three years, after which students received a master's degree in aeronautics. The areas of study cover all the fundamentals of aeronautics, including: aerodynamics, structures, fluid dynamics, thermal power, electronics, control theory, airframe systems, IT, and more. Students are also trained in management, manufacturing, certification, and foreign languages. Main employers are Airbus, Thales, Dassault, Safran (Sagem, Snecma), Rolls-Royce, Astrium, Eurocopter. History The decree giving birth to the "Ecole Nationale des Travaux Aéronautiques" (ENTA) was signed in 1945. The text was then ratified by Charles de Gaulle, president of the temporary government, and by René Pleven, Finance Minister. There were 25 students in the first class and 24 of them joined the "Ingénieurs Militaires des Travaux de l'Air" (IMTA). In 1957, the school changed its name to the "Ecole Nationale d'Ingénieurs des Constructions Aéronautiques" (ENICA). The course was extended to three years and the school embarked on its new civil vocation, welcoming a higher proportion of civil students. In 1961, ENICA was transferred to Toulouse, the director at that time being Emile Blouin. It then took on a new dimension and established its identity. In 1969, the school joined the competitive entrance examination system organised by the Ecoles Nationales Supérieures d'Ingénieurs (ENSI). It thus increased its recruitment standards to become one of the leading French schools. This excellence was rewarded in 1979 when it received the Médaille de l'Aéronautique from Général Georges Bousquet: ENICA then became ENSICA, Ecole Nationale Supérieure d'Ingénieurs de Constructions Aéronautiques. The eighties were marked by a profound diversification in the training courses offered: the opening of a "Mastère" degree and an Advanced Studies degree (DEA) in automatic control and mechanics, and specialisations in aircraft maintenance and helicopter techniques. ENSICA became the top-listed school for students with pass marks in ENSI competitive entrance examinations and continuously increased the share set aside for research. It also internationalised its training by implementing exchange programmes with English, American and German institutes and universities. In 1994, ENSICA became a public establishment, allowing it to sign, in its own name, agreements and conventions with other organisations and to receive research contracts. Today, ENSICA has a staff of 150 people including 25 scientific directors and almost 700 part-time lecturers. The school can accommodate more than 400 students on the initial training courses and the same number of persons doing further training. The 50th class recently graduated. It included a total of 98 graduates, 11 of whom did their third year of studies at a foreign university (USA, Great Britain, Germany and Sweden), and a high number of students carried out their end-of-study projects abroad.
Missions A public establishment under the auspices of the Ministry of Defence, ENSICA gives technological teaching courses for civil and military engineering students and offers a range of training: the "Diplôme d'Ingénieur" (engineer's diploma) course; training for and through scientific research; a set of "Mastère Spécialisé" courses; further education courses; and research. The engineer's course lasts three years. Departments of Ensica At ENSICA, research and training are integrated into the four training and research departments: avionics and systems, mechanical engineering, fluid mechanics, and applied mathematics and computer science. Each department has a scientific staff composed of lecturer-researchers with PhDs, lecturers and senior lecturers from universities, and full professors. They are responsible for the research work and pedagogical engineering, as well as the coordination of the lecturers' teams. In this way, they actively participate in international actions and in industrial relations. About one third of the lecturers come from the university and research world, one fourth from industry and one fourth from the DGA. Training in the humanities, economics, social sciences, languages and multicultural skills is the responsibility of three departments: human and social sciences, sports, and languages. The main departments are Avionics, Mechanical Engineering, Fluid Dynamics and Mathematics. Avionics The Avionics & Systems Department provides: in the first year, basic training in signal processing, automatic systems and electrical engineering; in the third year, two advanced tracks in the field: Signals - Communications and Control - Avionics. The department trains students in these multidisciplinary areas: aircraft systems, space systems, control and guidance, radar and telecommunications, and preparation for the DEA postgraduate diplomas (Advanced Studies Diploma) in signals-images-acoustics and automatic systems; the two advanced tracks prepare, respectively, for these two postgraduate diplomas. Taught subjects A functional approach to electronics and electrical engineering. Strong theoretical bases of signal processing, with applications in image processing, radar and telecommunications. Fundamentals of optics and optronics. Antenna and radar theory and applications in the aeronautical and space domains. An approach to real-time systems based on a concrete system built on a microcontroller. Finally, control: from modelling and control of simple processes to advanced applied methods in the aeronautical domain. Mechanical engineering The aim of the Mechanical Engineering Department's curriculum is to provide the students with basic knowledge in mechanics that is indispensable for their future jobs as engineers, within a multidisciplinary aerospace training framework. The Mechanical Engineering course lasts three years and includes: basic training covering fundamental knowledge, mainly concerning the calculation of structures and technological knowledge of mechanisms, manufacturing and materials; and training applied to aeronautics and space, this part increasing progressively throughout the three years. This common core is complemented, within the scope of the third-year optional modules, by courses given at ENSICA for the Mechanical Engineering advanced studies degree and more specialised courses related to aeronautics and space.
The Mechanical Engineering Department also coordinates the school's space activities: this specific space training corresponds to around 250 hours, and development is oriented both towards ultralight systems and crewed flight engineering. Fluid Dynamics The courses given by the Fluid Mechanics Department concern the thermodynamics of irreversible processes and continuum mechanics. The courses in these two disciplines are given in the first year and are completed by a basic fluid mechanics course (general equations of the motion of a Newtonian fluid and inviscid flows). In the second year, the studies concern the flow of incompressible viscous fluids and compressible inviscid fluids, dealing with boundary layer, shock wave and turbulence phenomena, with additional material on unsteady and hypersonic flows. From these theoretical bases, aeronautical applications are introduced in the second year. They mainly concern external aerodynamics, flight mechanics and handling qualities, and aeronautical turbine engines. Mathematics The goals of the CS training are: (1) to study the methods for developing programs (specification methods, object-oriented design, structured programming, algorithms, testing); (2) to learn the basics of algorithmics; (3) to study object programming in depth, learning an object-oriented methodology that uses UML as its modeling notation; (4) to study the specific features of "real-time" applications and systems and of new-generation network architectures, in close association with the research work carried out in the department. Practical implementations of theoretical concepts are based on the Java language. ENSICA is co-accredited for issuing the Toulouse Systems Postgraduate School's Computer-based Systems DEAs (Advanced Studies Degrees) in cooperation with UPS science university and the INSA and SUPAERO engineering schools, and the Toulouse CS and Telecommunications Postgraduate School's Networks and Telecommunications DEAs in cooperation with the INPT engineering school, UPS science university, and the SUPAERO, INSA, ENST and ENAC engineering schools. Training periods and international perspectives During the three years, students of Ensica have the opportunity to study for one semester or one year abroad, or to carry out an additional one-year training period in a company. Foreign partnerships include: Australia University of Technology Sydney Belgium Vrije Universiteit Brussel Katholieke Universiteit Leuven Université catholique de Louvain Canada Université de Sherbrooke Ecole Polytechnique de Montréal China Nanjing University Germany Technische Universität München Universität Stuttgart Rheinisch-Westfälische Technische Hochschule Aachen Technische Universität Braunschweig Italy Politecnico di Torino Politecnico di Milano Mexico Instituto Politécnico Nacional Netherlands Delft University of Technology Poland Warsaw University of Technology Lublin University of Technology Romania Polytechnic University of Bucharest Military Technical Academy Russia Samara State Aerospace University St. Petersburg State University Singapore National University of Singapore Nanyang Technological University Spain Universidad Politécnica de Madrid (CETSEI) Universitat Politècnica de Catalunya (ETSEIB - ETSEIAT) Universidad de Sevilla Sweden Kungl Tekniska Högskolan United Kingdom Cranfield University Imperial College University of Bristol University of Southampton University of Glasgow USA State University of New York at Buffalo Louisiana State University University of Wisconsin Madison University of Maryland at College Park Syracuse University Aerospace engineering organizations Aviation schools in France Educational institutions established in 1945 1945 establishments in France
École nationale supérieure d'ingénieurs de constructions aéronautiques
[ "Engineering" ]
2,067
[ "Aeronautics organizations", "Aerospace engineering organizations", "Aerospace engineering" ]
14,740,555
https://en.wikipedia.org/wiki/Simplified%20sewerage
Simplified sewerage, also called small-bore sewerage, is a sewer system that collects all household wastewater (blackwater and greywater) in small-diameter pipes laid at fairly flat gradients. Simplified sewers are laid in the front yard or under the pavement (sidewalk) or - if feasible - inside the back yard, rather than in the centre of the road as with conventional sewerage. It is suitable for existing unplanned low-income areas, as well as new housing estates with a regular layout. It allows for a more flexible design. With simplified sewerage it is crucial to have management arrangements in place to remove blockages, which are more frequent than with conventional sewers. It has been estimated that simplified sewerage reduces investment costs by up to 50% compared to conventional sewerage. Simplified sewerage is sometimes also referred to as conventional sewerage with appropriate standards, implying that most conventional sewers are overdesigned. The concept of simplified sewerage emerged in parallel in Natal, Brazil and Karachi, Pakistan in the early 1980s without any interaction or communication. In both cases particular emphasis was given to community mobilization, an essential element for the success of simplified sewerage. In Latin America, and particularly in Brazil, simplified sewerage is also known as condominial sewerage, a term that underscores the importance of community participation in planning and maintenance at the level of a housing block (known as condominio in the Spanish and Portuguese use of the term). Background In developing countries, connection to sewer systems is often costly for poor households, despite typically low monthly sewer tariffs. This apparent paradox is explained by the high costs of in-plot and in-house sanitary installations that have to be paid entirely by the user, by sometimes high sewer connection fees levied by utilities, and by a lack of community consultation. As a result, in many cities in developing countries conventional sewers are laid at high costs under a street, while many users on that street do not connect to them. In Brazil, in some cities connection rates in the early 1990s were less than 40% of the intended beneficiary population. Application Simplified sewerage is most widely used in Brazil. It is estimated that in Brazil some 5 million people in over 200 towns and cities are served with simplified sewerage - or condominial sewerage. This corresponds to about 3% of the population of Brazil and about 6% of the population connected to sewers. They serve poor and rich alike. Simplified sewerage has also been used in Bolivia, beginning with a pilot project in El Alto; Honduras, primarily in marginal areas of Tegucigalpa where simplified sewerage has been introduced in 20 communities with 24,000 inhabitants; Peru, primarily in marginal areas of Lima; in South Africa, where pilot projects were carried out in Johannesburg and Durban; in Sri Lanka, where the National Housing Development Authority implemented over 20 schemes in the 1980s and 90s. In Pakistan, beginning with the Orangi Pilot Project in Karachi, a variation of simplified sewerage using larger diameter pipes has been used. Community participation Community participation in the planning of any sewer system is a fundamental requirement to achieve higher household connection rates and to increase the likelihood of proper maintenance of in-block sewers. 
In addition, it can motivate users to assume parts of the costs of the sewer system that they are able to assume, such as contribution of labor for construction and/or maintenance. Typically, in the planning process for a simplified sewerage system, meetings are carried out at the housing block (condominio) level for information, discussions and clarifications required for a joint group decision on network design, community contributions during construction and maintenance responsibilities. Users might finance and implement in-house sanitary installations and household connections and would agree on a suitable type of condominial branch. They are asked to comply with agreements established for construction and operation of the condominial branch, as well as payment of tariffs. In turn, the service provider agrees to fulfill its responsibilities as established in the “Terms of Connection” between the parties. The community participation process also provides a good opportunity for complementary actions like hygiene promotion, which can have a significant impact on public health at a relatively limited cost. Design and construction Simplified sewers are usually laid in the front yard or under the pavement (sidewalk). In some rare cases it is possible to lay them in the back yard. Sidewalk branches are usually preferred in regular urbanizations, while the front and back yard branches are particularly suited to neighborhoods with challenging topography or urbanization patterns. However, in some cases neither of these options is possible. For example, in South Asia, in many cities there is no sidewalk or front yard, so pipes have to be laid in the middle of the street as with conventional sewers. In Latin America typical simplified sewer diameters are 100 mm, laid at a gradient of 1 in 200 (0.5 percent). Such a sewer will serve around 200 households of 5 people with a wastewater flow of 80 litres per person per day. In Pakistan, however, there are no rigorous standards for sewer diameters. In a small pilot as part of the Orangi Pilot Project, pipes with a diameter of 150 mm were used. Laying small diameter pipes at fairly flat gradients requires careful construction techniques. Plastic pipes are best used as they are more easily jointed correctly. This reduces wastewater leakage from the sewer and groundwater infiltration into it. With simplified sewerage there is no need to have the large expensive manholes of the type used for conventional sewerage — simple brick or plastic junction chambers are used instead. Construction can be carried out by contractors or by trained and properly supervised community members. Training and proper supervision are actually needed in both cases, since contractors in many cities are not familiar with simplified sewerage. Investment cost comparison The cost of sewerage - conventional or simplified - is always site-specific, and estimates are subject to controversies. Construction costs of simplified sewerage are up to half the costs of conventional sewerage. Investment cost savings come from various design features that may or may not be present in a particular simplified sewerage system. Cost-saving features of any simplified sewerage system are a smaller diameter of pipes, smaller and shallower trenches and simplified manholes. The two latter features are estimated to account for most of the cost savings.
Other features that could further reduce costs may only be present in some systems, such as: shorter networks; avoidance of the need to damage pavements and sidewalks (if they already exist and if pipes are laid in front or back yards); decentralized, small-scale wastewater treatment, and consequently elimination of main collectors and sewage pumping stations. An element that may slightly increase costs compared to conventional sewerage is the introduction of grease traps or of interceptor chambers for the settlement of sludge. The latter are more common in South Asia and are not used in the condominial model. A 2006 study of four countries showed cost savings of 31-57% from the use of simplified sewerage compared to conventional sewerage, with unit costs varying from US$119 per connection in a neighborhood in Bolivia to US$759 per connection in a small town in Paraguay. A detailed estimate gives the costs of simplified sewerage in Lima as at least US$700 per household (US$120–140 per person), including in-house sanitary facilities (US$100 per household) and including design, supervision and social intermediation costs (US$126 per household, which are common costs shared with water infrastructure), but excluding taxes. In general, at higher population densities sewer systems are cheaper than on-site sanitation (such as septic tanks). The switching value at which sewerage becomes less costly is largely determined by the type of sewerage, conventional or simplified. A 1983 study in Natal showed that the investment costs for simplified sewerage were lower than for on-site systems at the quite low population density of about 160 people per hectare. Conventional sewerage, however, was cheaper only at densities above 400 people per hectare. Operation and maintenance Good operation and maintenance (O&M) is essential for the long-term sustainability of any sewerage system, but particularly for simplified sewerage, since the small diameter of pipes and low gradients make the system highly vulnerable to clogging. Solids can readily block the small diameter piping, and the shallow grade of pipe alignment prevents sewage flow from reaching scouring velocity, meaning that solids fall out of suspension and deposit within the low-gradient pipe before reaching the downstream receiving body. The original concept of householders being responsible for O&M of in-block condominial sewers has not worked well in the long term. A study of simplified sewerage systems in Brazil has shown that effective maintenance of sewers by utilities has often been the result of community pressure by neighborhood associations. Without such pressure maintenance by utilities has often been inadequate, and community maintenance has not come about either. Where these maintenance issues cannot be managed, simplified sewers may not be an appropriate sanitation solution; alternative management systems have therefore had to be developed to mitigate them, and a few examples are provided below: In rural Ceará a villager is employed by the Residents’ Association to maintain the sewers and the wastewater treatment plant (typically, a single facultative waste stabilization pond). He is also responsible for the water supply. In parts of Recife in northeast Brazil the state water and sewerage company employs local contracting firms for O&M. Usually this is done by a small team comprising a technician-engineer and two laborers who work in a low-income area served by simplified sewerage and to whom residents report any problems.
In Brasília the water and sewerage company, which has over 1,200 km of condominial sewers, uses van-mounted water jet units to clear any blockages. Concerning maintenance costs, available information indicates similar costs and requirements for the simplified and the conventional system under the same conditions. Simplified systems typically require more interventions, but the cost per intervention is lower. Comparative analytical studies are not yet available, however. Constraints for application According to Jose Carlos Melo, who is considered to be the "father" of condominial sewers in Brazil, some important constraints for the application of simplified sewerage are: Lack of information on fundamentals and techniques of the approach, or lack of experience in its application. Resistance to change: Institutional, technical and operational changes required by the service provider for implementing the condominial approach usually provoke resistance and can hinder the application. Normative and legal restrictions: Existing conservative design and construction standards linked to conventional systems can be a major constraint on the introduction and dissemination of the systems. In recent years, countries like Bolivia and Peru reviewed and modernized technical standards according to methods and criteria established and accepted in Brazil in the 1980s, thus overcoming the latter constraint. See also Effluent sewer or solids-free sewer References External links Simplified Sewerage Design, Microsoft Producer presentations and supporting Material, Duncan Mara, Leeds University CONDOMINIAL SYSTEMS - BRAZILIAN PANORAMA AND CONCEPTUAL ELEMENTS, Leeds University PC-Based Simplified Sewerage Design Program, Leeds University Sewerage Environmental engineering
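The design rule of thumb quoted in the article (a 100 mm pipe at a 1-in-200 gradient serving about 200 households of 5 people at 80 litres per person per day) can be sanity-checked with Manning's equation. A minimal sketch; the Manning roughness coefficient and the peak factor are assumed illustrative values, not taken from the sources cited:

```python
import math

# Design load from the article: 200 households x 5 people x 80 L/person/day.
avg_flow = 200 * 5 * 80 / 86_400 / 1000           # m^3/s  (~0.93 L/s)
peak_flow = 1.8 * avg_flow                        # assumed peak factor

# Full-bore capacity of a 100 mm pipe at slope 1/200 via Manning's equation:
# Q = (1/n) * A * R^(2/3) * sqrt(S), hydraulic radius R = D/4 when full.
D, S, n = 0.100, 1 / 200, 0.013                   # diameter (m), slope, assumed n
A = math.pi * D**2 / 4
R = D / 4
capacity = (1 / n) * A * R ** (2 / 3) * math.sqrt(S)

print(f"peak flow: {peak_flow*1000:.2f} L/s, capacity: {capacity*1000:.2f} L/s")
# The capacity (~3.7 L/s) comfortably exceeds the peak flow (~1.7 L/s),
# consistent with the design rule quoted in the article.
```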
Simplified sewerage
[ "Chemistry", "Engineering", "Environmental_science" ]
2,267
[ "Chemical engineering", "Water pollution", "Sewerage", "Civil engineering", "Environmental engineering" ]
14,743,159
https://en.wikipedia.org/wiki/City%20Solar
City Solar AG is a producer of large-scale photovoltaic power plants, taking care of all aspects of production. This includes site location, planning, construction, and management. The company was started in 2002 in Bad Kreuznach, Germany, but now has offices in Saarbrücken, Berlin, Chemnitz, Augsburg, and Madrid. City Solar has produced over a dozen power stations, including the world's largest photovoltaic power plant, located in Beneixama, Spain. The Beneixama photovoltaic power plant is a 10 MWp power station, with 100,000 solar modules, encompassing an area of approximately 500,000 m². As of 2007, City Solar had 4 more plants under construction or in development. See also Photovoltaic power stations List of photovoltaics companies References Solar energy companies of Germany Photovoltaics manufacturers
City Solar
[ "Engineering" ]
186
[ "Photovoltaics manufacturers", "Engineering companies" ]
14,748,503
https://en.wikipedia.org/wiki/Mageu
Mageu (Setswana spelling), Mahewu (Shona/Chewa/Nyanja spelling), Mahleu (Sesotho spelling), Magau (xau-Namibia) (Khoikhoi spelling), Madleke (Tsonga spelling), Mabundu (Tshivenda spelling), maHewu, amaRhewu (Xhosa spelling) or amaHewu (Zulu and Northern Ndebele spelling) is a traditional Southern African non-alcoholic drink among many of the Chewa/Nyanja, Shona, Ndebele, Nama Khoikhoi and Damara people, Sotho people, Tswana people and Nguni people, made from fermented mealie pap. Home production is still widely practised, but the drink is also available at many supermarkets, being produced at factories. Its taste is derived predominantly from the lactic acid that is produced during fermentation, but commercial mageu is often flavoured and sweetened, much in the way commercially available yogurt is. Similar beverages are also made in other parts of Africa. Fermentation process Thin mealie pap (maize meal) is prepared, to which wheat flour is added, providing the inoculum of lactate-producing bacteria. The mixture is left to ferment, typically in a warm area. Pasteurisation is done in commercial operations to extend shelf-life. Nutrition Nutritionally, it is similar to its parent mealie meal, but with the glucose metabolized to lactate during fermentation. Commercial preparations are often enriched (in South Africa, the term 'fortification' is only allowed legally for specific, government-sanctioned nutrition programs, e.g. that of bread) with vitamins and minerals. Although typically considered non-alcoholic, very small amounts (less than 1%) of ethanol have been reported. See also Ogi Tejuino Boza References Steinkraus, Keith H. "Industrialization of indigenous fermented foods". Google books. Accessed May 2010. South African cuisine Fermented drinks Maize-based drinks
Mageu
[ "Biology" ]
439
[ "Fermented drinks", "Biotechnology products" ]
14,749,308
https://en.wikipedia.org/wiki/Iodine%20oxide
Iodine oxides are chemical compounds of oxygen and iodine. Iodine has only two stable oxides which are isolatable in bulk, diiodine tetroxide and diiodine pentoxide, but a number of other oxides are formed in trace quantities or have been hypothesized to exist. The chemistry of these compounds is complicated, with only a few having been well characterized. Many have been detected in the atmosphere and are believed to be particularly important in the marine boundary layer. Molecular compounds Diiodine monoxide (I2O) has largely been the subject of theoretical study, but there is some evidence that it may be prepared in a similar manner to dichlorine monoxide, via a reaction between HgO and I2. The compound appears to be highly unstable but can react with alkenes to give halogenated products. Radical iodine monoxide (IO) and iodine dioxide (IO2), collectively referred to as IOx, and diiodine tetroxide (I2O4) all possess significant and interconnected atmospheric chemistry. They are formed, in very small quantities, in the marine boundary layer by the photooxidation of diiodomethane, which is produced by macroalgae such as seaweed, or through the oxidation of molecular iodine, produced by the reaction of gaseous ozone with iodide present at the sea surface. Despite the small quantities produced (typically below parts per trillion), they are thought to be powerful ozone depletion agents. Diiodine pentoxide (I2O5) is the anhydride of iodic acid and the only stable anhydride of an iodine oxoacid. Tetraiodine nonoxide (I4O9) has been prepared by the gas-phase reaction of I2 with O3 but has not been extensively studied. Iodate anions Iodine oxides also form negatively charged anions, which (associated with complementary cations) are components of acids or salts. These include the iodates and periodates. Their conjugate acids are: oxidation state −1, hydrogen iodide (HI); +1, hypoiodous acid (HIO); +3, iodous acid (HIO2); +5, iodic acid (HIO3); and +7, periodic acid (HIO4). The −1 oxidation state, hydrogen iodide, is not an oxide, but it is included in this list for completeness. The periodates include two variants: metaperiodate (IO4−) and orthoperiodate (IO65−). See also Oxygen fluoride Chlorine oxide Bromine oxide References Oxides Iodides Iodine compounds
Iodine oxide
[ "Chemistry" ]
489
[ "Oxides", "Salts" ]
14,749,434
https://en.wikipedia.org/wiki/Water%20immersion%20objective
In light microscopy, a water immersion objective is a specially designed objective lens used to increase the resolution of the microscope. This is achieved by immersing both the lens and the specimen in water, which has a higher refractive index than air, thereby increasing the numerical aperture of the objective lens. Applications Water immersion objectives are used not only at very large magnifications that require high resolving power, but also at moderate power: water immersion objectives exist with magnifications as low as 4X. Objectives with high-power magnification have short focal lengths, facilitating the use of water. The water is applied to the specimen (conventional microscope), and the stage is raised, immersing the objective in water. With water-dipping objectives, the objective is sometimes immersed directly in the aqueous solution that contains the specimens to be examined. Electrophoretic preparations used in the case of the comet assay can benefit from the use of water objectives. The refractive index of water (1.33) is closer to those of imaged materials and to the glass of the cover-slip, so more light will be collected and focused by this type of objective compared to air objectives, leading to a range of higher numerical apertures (NA). Correction collar Unlike oil, water does not have the same or nearly identical refractive index as the cover slip glass, so a correction collar is needed to compensate for variations in its thickness. Lenses without a correction collar are generally made for use with a 0.17 mm cover slip or for use without a coverslip (dipping lens). See also Oil immersion objective Microscopy Optical microscope Index-matching material References Microscopy
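The resolution gain follows from the definition of numerical aperture, NA = n sin(θ), where n is the refractive index of the immersion medium and θ the half-angle of the accepted light cone. A small worked sketch; the half-angle and wavelength are illustrative assumptions, not values from the article:

```python
import math

theta = math.radians(60)     # assumed acceptance half-angle of the objective
wavelength = 550e-9          # green light (m), illustrative choice

for medium, n in [("air", 1.00), ("water", 1.33)]:
    na = n * math.sin(theta)            # NA = n * sin(theta)
    d = wavelength / (2 * na)           # Abbe diffraction limit ~ lambda/(2 NA)
    print(f"{medium}: NA = {na:.2f}, resolution ~ {d*1e9:.0f} nm")
# Water immersion raises NA from ~0.87 to ~1.15 here, improving the
# resolvable distance from ~318 nm to ~239 nm for the same half-angle.
```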
Water immersion objective
[ "Chemistry" ]
335
[ "Microscopy" ]
7,413,289
https://en.wikipedia.org/wiki/Beer%20engine
A beer engine is a device for pumping beer from a cask, which is usually located in a pub's cellar. The beer engine was invented by John Lofting, a Dutch inventor, merchant and manufacturer who moved from Amsterdam to London in about 1688 and patented a number of inventions including a fire hose and engine for extinguishing fires and a thimble knurling machine. The London Gazette of 17 March 1691 stated "the patentee hath also projected a very useful engine for starting of beers and other liquors which will deliver from 20 to 30 barrels an hour which are completely fixed with brass joints and screws at reasonable rates." The locksmith and hydraulic engineer Joseph Bramah developed beer pumping further in 1797. The beer engine is normally manually operated, although electrically powered and gas powered pumps are occasionally used; when manually powered, the term handpump is often used to refer to both the pump and the associated handle. The beer engine is normally located below the bar, with the visible handle being used to draw the beer through a flexible tube to the spout, below which the glass is placed. Modern hand pumps may clamp onto the edge of the bar or be mounted on the top of the bar. A pump clip is usually attached to the handle giving the name and sometimes the brewery, beer type and alcoholic strength of the beer being served through that handpump. The handle of a handpump is often used as a symbol of cask ale. This style of beer has continued fermentation and uses porous and non-porous pegs, called spiles, to respectively release and retain the gases generated by fermentation and thus achieve the optimum level of carbonation in the beer. In the 1970s many breweries were keen to replace cask conditioned ale with keg versions for financial benefit, and started to disguise keg taps by adorning them with cosmetic hand pump handles. This practice was opposed as fraudulent by the Campaign for Real Ale and was discontinued. Swan neck A swan neck is a curved spout. This is often used in conjunction with a sparkler - a nozzle containing small holes - fitted to the spout to aerate the beer as it enters the glass, giving a frothier head; this presentation style is more popular in the north of England than in the south. Sparkler A sparkler is a device that can be attached to the nozzle of a beer engine. Designed rather like a shower-head, beer dispensed through a sparkler becomes aerated and frothy, which results in a noticeable head. The sparkler works via the venturi effect. As the beer flows through the nozzle, air is drawn into the beer. Consequently, the beer will have a head, whether or not the beer is alive (fresh). Real ale only produces a head while the yeast is alive and producing carbon dioxide. Typically, three days after a barrel of beer has been opened, the yeast will die and the beer will go flat. A sparkler will disguise flat beer, replacing the missing carbon dioxide with nitrogen and oxygen. Whether or not the beer is alive (fresh), whisking it changes its texture and gaseous composition, which can change the taste. There is an argument that the sparkler can reduce the flavour and aroma, especially of the hops, in some beers. The counter argument is that the sparkler takes away harshness and produces a smoother, creamier beer that is easier to quaff. Breweries may state whether or not a sparkler is preferred when serving their beers.
Generally, breweries in northern England serve their beers with a sparkler attached and breweries in the south without, but this is by no means definitive. Pump clips Pump clips are badges that are attached to handpumps in pubs to show which cask ales are available. In addition to the name of the beer served through the pump, they may give other details such as the brewer's name and the alcoholic strength of the beer, and serve as advertising. Pump clips can be made of various materials. For beers that are brewed regularly by the big breweries, high quality plastic, metal or ceramic pump clips are used. Smaller breweries typically use printed plastic pump clips, and for one-off beers laminated paper is used. The material used and the gaudiness or tastefulness of the decoration vary depending on how much the brewery wants to market its beers at the point of sale. Novelty pump clips have also been made of wood, slate and compact discs. Some even incorporate electronic flashing lights. Older pump clips were made of enamel. The term pump clip originates from the clip that attaches it to the pump handle. These consist of a two-piece plastic ring which clamps to the handle with two screws. Plastic and laminated paper pump clips usually have a white plastic clip fixed with a sticky double-sided pad that pushes onto the handle. See also Beer tap References External links DeeCee's Beer Pump Clips National pump clip museum in Nottingham Beer vessels and serving Pumps Bartending equipment
Beer engine
[ "Physics", "Chemistry" ]
1,042
[ "Physical systems", "Hydraulics", "Turbomachinery", "Pumps" ]
7,414,275
https://en.wikipedia.org/wiki/Kopp%27s%20law
Kopp's law can refer to either of two relationships discovered by the German chemist Hermann Franz Moritz Kopp (1817–1892). Kopp found "that the molecular heat capacity of a solid compound is the sum of the atomic heat capacities of the elements composing it; the elements having atomic heat capacities lower than those required by the Dulong–Petit law retain these lower values in their compounds." In studying organic compounds, Kopp found a regular relationship between boiling points and the number of CH2 groups present. Kopp–Neumann law The Kopp–Neumann law, named for Kopp and Franz Ernst Neumann, is a common approach for determining the specific heat C (in J·kg−1·K−1) of compounds using the following equation:

C = \sum_{i=1}^{N} C_i f_i,

where N is the total number of compound constituents, and Ci and fi denote the specific heat and mass fraction of the i-th constituent. This law works surprisingly well at room-temperature conditions, but poorly at elevated temperatures. See also Rule of mixtures References Frederick Seitz, The Modern Theory of Solids, McGraw-Hill, New York, USA, 1940, ASIN: B000OLCK08 Laws of thermodynamics
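As a numerical illustration of the Kopp–Neumann rule (a sketch only; the constituents and their room-temperature specific heats are approximate handbook figures used purely as an example):

```python
# Kopp-Neumann estimate: C = sum(f_i * C_i) over the constituents.
# Example: a 90/10 copper-tin bronze, with approximate room-temperature
# specific heats in J/(kg*K) -- illustrative values only.
constituents = [
    ("copper", 0.90, 385.0),   # (name, mass fraction, specific heat)
    ("tin",    0.10, 228.0),
]

assert abs(sum(f for _, f, _ in constituents) - 1.0) < 1e-9  # fractions sum to 1
C = sum(f * c for _, f, c in constituents)
print(f"estimated specific heat: {C:.1f} J/(kg*K)")  # ~369 J/(kg*K)
```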
Kopp's law
[ "Physics", "Chemistry" ]
246
[ "Thermodynamics stubs", "Physical chemistry stubs", "Thermodynamics", "Laws of thermodynamics" ]
7,415,870
https://en.wikipedia.org/wiki/Motion%20analysis
Motion analysis is used in computer vision, image processing, high-speed photography and machine vision; it studies methods and applications in which two or more consecutive images from an image sequence, e.g., produced by a video camera or high-speed camera, are processed to produce information based on the apparent motion in the images. In some applications, the camera is fixed relative to the scene and objects are moving around in the scene; in some applications the scene is more or less fixed and the camera is moving; and in some cases both the camera and the scene are moving. In the simplest case, the motion analysis processing detects motion, i.e., finds the points in the image where something is moving. More complex types of processing can be to track a specific object in the image over time, to group points that belong to the same rigid object that is moving in the scene, or to determine the magnitude and direction of the motion of every point in the image. The information that is produced is often related to a specific image in the sequence, corresponding to a specific time-point, but then depends also on the neighboring images. This means that motion analysis can produce time-dependent information about motion. Applications of motion analysis can be found in rather diverse areas, such as surveillance, medicine, the film industry, automotive crash safety, ballistic firearm studies, biological science, flame propagation, and navigation of autonomous vehicles, to name a few examples. Background A video camera can be seen as an approximation of a pinhole camera, which means that each point in the image is illuminated by some (normally one) point in the scene in front of the camera, usually by means of light that the scene point reflects from a light source. Each visible point in the scene is projected along a straight line that passes through the camera aperture and intersects the image plane. This means that at a specific point in time, each point in the image refers to a specific point in the scene. This scene point has a position relative to the camera, and if this relative position changes, it corresponds to a relative motion in 3D. It is a relative motion since it does not matter if it is the scene point, or the camera, or both, that are moving. It is only when there is a change in the relative position that the camera is able to detect that some motion has happened. By projecting the relative 3D motion of all visible points back into the image, the result is the motion field, describing the apparent motion of each image point in terms of a magnitude and direction of velocity of that point in the image plane. A consequence of this observation is that if the relative 3D motion of some scene points is along their projection lines, the corresponding apparent motion is zero. The camera measures the intensity of light at each image point, a light field. In practice, a digital camera measures this light field at discrete points, pixels, but given that the pixels are sufficiently dense, the pixel intensities can be used to represent most characteristics of the light field that falls onto the image plane. A common assumption of motion analysis is that the light reflected from the scene points does not vary over time. As a consequence, if an intensity I has been observed at some point in the image, at some point in time, the same intensity I will be observed at a position that is displaced relative to the first one as a consequence of the apparent motion.
Another common assumption is that there is a fair amount of variation in the detected intensity over the pixels in an image. A consequence of this assumption is that if the scene point that corresponds to a certain pixel in the image has a relative 3D motion, then the pixel intensity is likely to change over time. Methods Motion detection One of the simplest types of motion analysis is to detect image points that refer to moving points in the scene. The typical result of this processing is a binary image where all image points (pixels) that relate to moving points in the scene are set to 1 and all other points are set to 0. This binary image is then further processed, e.g., to remove noise, group neighboring pixels, and label objects. Motion detection can be done using several methods; the two main groups are differential methods and methods based on background segmentation. Applications Human motion analysis In the areas of medicine, sports, video surveillance, physical therapy, and kinesiology, human motion analysis has become an investigative and diagnostic tool. See the section on motion capture for more detail on the technologies. Human motion analysis can be divided into three categories: human activity recognition, human motion tracking, and analysis of body and body part movement. Human activity recognition is most commonly used for video surveillance, specifically automatic motion monitoring for security purposes. Most efforts in this area rely on state-space approaches, in which sequences of static postures are statistically analyzed and compared to modeled movements. Template-matching is an alternative method whereby static shape patterns are compared to pre-existing prototypes. Human motion tracking can be performed in two or three dimensions. Depending on the complexity of analysis, representations of the human body range from basic stick figures to volumetric models. Tracking relies on the correspondence of image features between consecutive frames of video, taking into consideration information such as position, color, shape, and texture. Edge detection can be performed by comparing the color and/or contrast of adjacent pixels, looking specifically for discontinuities or rapid changes. Three-dimensional tracking is fundamentally identical to two-dimensional tracking, with the added factor of spatial calibration. Motion analysis of body parts is critical in the medical field. In postural and gait analysis, joint angles are used to track the location and orientation of body parts. Gait analysis is also used in sports to optimize athletic performance or to identify motions that may cause injury or strain. Tracking software that does not require the use of optical markers is especially important in these fields, where the use of markers may impede natural movement. Motion analysis in manufacturing Motion analysis is also applicable in the manufacturing process. Using high-speed video cameras and motion analysis software, one can monitor and analyze assembly lines and production machines to detect inefficiencies or malfunctions. Manufacturers of sports equipment, such as baseball bats and hockey sticks, also use high-speed video analysis to study the impact of projectiles. An experimental setup for this type of study typically uses a triggering device, external sensors (e.g., accelerometers, strain gauges), data acquisition modules, a high-speed camera, and a computer for storing the synchronized video and data.
Motion analysis software calculates parameters such as distance, velocity, acceleration, and deformation angles as functions of time. This data is then used to design equipment for optimal performance. Additional applications for motion analysis The object and feature detecting capabilities of motion analysis software can be applied to count and track particles, such as bacteria, viruses, "ionic polymer-metal composites", micron-sized polystyrene beads, aphids, and projectiles. See also Mechanography Structure from motion Video motion analysis X-ray motion analysis References Research methods Motion in computer vision
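As a sketch of the parameter extraction described under Motion analysis in manufacturing above, velocity and acceleration can be recovered from tracked positions by numerical differentiation of the position-time series. The frame rate and positions below are invented numbers for illustration; a real system would read positions from the synchronized high-speed video.

import numpy as np

fps = 1000.0                                # assumed high-speed camera frame rate
t = np.arange(8) / fps                      # time stamps of consecutive frames, s
x = np.array([0.0, 0.9, 2.1, 3.2, 4.0, 4.5, 4.8, 5.0])  # tracked position, mm

v = np.gradient(x, t)                       # velocity, mm/s (central differences)
a = np.gradient(v, t)                       # acceleration, mm/s^2
print("velocity:", np.round(v, 1))
print("acceleration:", np.round(a, 1))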
Motion analysis
[ "Physics" ]
1,445
[ "Physical phenomena", "Motion (physics)", "Motion in computer vision" ]
7,416,843
https://en.wikipedia.org/wiki/Peskin%E2%80%93Takeuchi%20parameter
In particle physics, the Peskin–Takeuchi parameters are a set of three measurable quantities, called S, T, and U, that parameterize potential new physics contributions to electroweak radiative corrections. They are named after physicists Michael Peskin and Tatsu Takeuchi, who proposed the parameterization in 1990; proposals from two other groups (see References below) came almost simultaneously. The Peskin–Takeuchi parameters are defined so that they are all equal to zero at a reference point in the Standard Model, with a particular value chosen for the (then unmeasured) Higgs boson mass. The parameters are then extracted from a global fit to the high-precision electroweak data from particle collider experiments (mostly the Z pole data from the CERN LEP collider) and atomic parity violation. The measured values of the Peskin–Takeuchi parameters agree with the Standard Model. They can then be used to constrain models of new physics beyond the Standard Model. The Peskin–Takeuchi parameters are only sensitive to new physics that contributes to the oblique corrections, i.e., the vacuum polarization corrections to four-fermion scattering processes. Definitions The Peskin–Takeuchi parameterization is based on the following assumptions about the nature of the new physics: The electroweak gauge group is given by SU(2)L × U(1)Y, and thus there are no additional electroweak gauge bosons beyond the photon, Z boson, and W boson. In particular, this framework assumes there are no Z' or W' gauge bosons. If there are such particles, the S, T, U parameters do not in general provide a complete parameterization of the new physics effects. New physics couplings to light fermions are suppressed, and hence only oblique corrections need to be considered. In particular, the framework assumes that the nonoblique corrections (i.e., vertex corrections and box corrections) can be neglected. If this is not the case, then the process by which the S, T, U parameters are extracted from the precision electroweak data is no longer valid, and they no longer provide a complete parameterization of the new physics effects. The energy scale at which the new physics appears is large compared to the electroweak scale. This assumption is inherent in defining S, T, U independent of the momentum transfer in the process. With these assumptions, the oblique corrections can be parameterized in terms of four vacuum polarization functions: the self-energies of the photon, Z boson, and W boson, and the mixing between the photon and the Z boson induced by loop diagrams. Assumption number 3 above allows us to expand the vacuum polarization functions in powers of q²/M², where M represents the heavy mass scale of the new interactions, and keep only the constant and linear terms in q². We have Π_XY(q²) ≈ Π_XY(0) + q² Π′_XY(0) for XY = γγ, γZ, ZZ, WW, where Π′_XY denotes the derivative of the vacuum polarization function with respect to q². The constant pieces Π_γγ(0) and Π_γZ(0) are zero because of the renormalization conditions. We thus have six parameters to deal with.
Three of these may be absorbed into the renormalization of the three input parameters of the electroweak theory, which are usually chosen to be the fine structure constant α, as determined from quantum electrodynamic measurements (there is a significant running of α between the scale of the mass of the electron and the electroweak scale, and this needs to be corrected for); the Fermi coupling constant GF, as determined from muon decay, which measures the weak current coupling strength at close to zero momentum transfer; and the Z boson mass MZ. This leaves three measurable parameters, because we are not able to determine which contribution comes from the Standard Model proper and which from physics beyond the Standard Model (BSM) when measuring these three inputs: the low energy processes could equally well have come from a pure Standard Model with redefined values of e, GF and MZ. The remaining three are the Peskin–Takeuchi parameters S, T and U, defined as S ≡ (4 s_w² c_w² / α) [ Π′_ZZ(0) − ((c_w² − s_w²)/(c_w s_w)) Π′_γZ(0) − Π′_γγ(0) ], T ≡ (1/α) [ Π_WW(0)/M_W² − Π_ZZ(0)/M_Z² ], and U ≡ (4 s_w² / α) [ Π′_WW(0) − c_w² Π′_ZZ(0) − 2 s_w c_w Π′_γZ(0) − s_w² Π′_γγ(0) ], where sw and cw are the sine and cosine of the weak mixing angle, respectively. The definitions are carefully chosen so that: Any BSM correction which is indistinguishable from a redefinition of e, GF and MZ (or equivalently, g₁, g₂ and the Higgs vacuum expectation value v) in the Standard Model proper at the tree level does not contribute to S, T or U. Assuming that the Higgs sector consists of electroweak doublet(s) H, the effective action term |H†D_μH|² only contributes to T and not to S or U. This term violates custodial symmetry. Assuming that the Higgs sector consists of electroweak doublet(s) H, the effective action term H†W^{μν}B_{μν}H only contributes to S and not to T or U. (The contribution of H†B^{μν}B_{μν}H can be absorbed into g₁ and the contribution of H†W^{μν}W_{μν}H can be absorbed into g₂.) Assuming that the Higgs sector consists of electroweak doublet(s) H, the effective action term |H†W^{μν}H|² contributes to U. Uses The S parameter measures the difference between the number of left-handed fermions and the number of right-handed fermions that carry weak isospin. It tightly constrains the allowable number of new fourth-generation chiral fermions. This is a problem for theories, like the simplest version of technicolor, that contain a large number of extra fermion doublets. The T parameter measures isospin violation, since it is sensitive to the difference between the loop corrections to the Z boson vacuum polarization function and the W boson vacuum polarization function. An example of isospin violation is the large mass splitting between the top quark and the bottom quark, which are isospin partners and in the limit of isospin symmetry would have equal mass. The S and T parameters are both affected by varying the mass of the Higgs boson (recall that the zero point of S and T is defined relative to a reference value of the Standard Model Higgs mass). Before the Higgs-like boson was discovered at the LHC, experiments at the CERN LEP collider set a lower bound of 114 GeV on its mass. If we assume that the Standard Model is correct, a best fit value of the Higgs mass can be extracted from the S, T fit. The best fit was near the LEP lower bound, and the 95% confidence level upper bound was around 200 GeV. Thus the measured mass of 125–126 GeV fits comfortably in this prediction, suggesting the Standard Model may be a good description up to energies past the TeV (= 1,000 GeV) scale. The U parameter tends not to be very useful in practice, because the contributions to U from most new physics models are very small.
This is because U actually parameterizes the coefficient of a dimension-eight operator, while S and T can be represented as dimension-six operators. See also Parameterized post-Newtonian formalism - a similar parametrization in the gravitational context References The following papers constitute the original proposals for the S, T, U parameters: The first detailed global fits were presented in: For a review, see: Electroweak theory Physics beyond the Standard Model
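As a rough numerical illustration of how the T parameter registers isospin violation, the following sketch evaluates the familiar one-loop top–bottom contribution to Δρ = αT, relative to a degenerate doublet. This is a back-of-the-envelope estimate, not the global-fit machinery of the article: it assumes the standard one-loop expression Δρ = N_c G_F F(m_t², m_b²)/(8√2 π²) with F(x, y) = x + y − 2xy ln(x/y)/(x − y), and the coupling and mass values below are round illustrative inputs.

import math

def F(x: float, y: float) -> float:
    """Veltman's isospin-violation function for a fermion doublet
    with squared masses x and y; zero for a degenerate doublet."""
    if x == y:
        return 0.0
    return x + y - 2 * x * y / (x - y) * math.log(x / y)

G_F   = 1.1664e-5          # Fermi constant, GeV^-2
alpha = 1 / 129.0          # electromagnetic coupling near the Z pole (assumed value)
m_t, m_b = 173.0, 4.18     # quark masses in GeV (round values)
N_c = 3                    # QCD color factor

delta_rho = N_c * G_F / (8 * math.sqrt(2) * math.pi**2) * F(m_t**2, m_b**2)
T = delta_rho / alpha      # T parameter, since Delta-rho = alpha * T
print(f"top-bottom contribution: Delta-rho = {delta_rho:.4f}, T = {T:.2f}")
# -> T of order 1, showing why the heavy top dominates isospin violation

The result, T on the order of one, makes concrete why the top quark's large mass splitting from the bottom is the dominant source of isospin violation in the Standard Model reference point.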
Peskin–Takeuchi parameter
[ "Physics" ]
1,525
[ "Physical phenomena", "Unsolved problems in physics", "Electroweak theory", "Fundamental interactions", "Particle physics", "Physics beyond the Standard Model" ]
7,417,294
https://en.wikipedia.org/wiki/Dodman
A dodman (plural "dodmen") or a hoddyman dod is a local English vernacular word for a land snail. The word is used in some of the counties of England. This word is found in the Norfolk dialect, according to the Oxford English Dictionary. Fairfax, in his Bulk and Selvedge (1674), speaks of "a snayl or dodman". Hodimadod is a similar word for snail that is more commonly used in the Buckinghamshire dialect. Alternatively (and apparently now more commonly used in the Norfolk dialect) are the closely related words Dodderman or Doddiman. In everyday folklore, these words are popularly said to be derived from the surname of a travelling cloth seller called Dudman, who supposedly had a bent back and carried a large roll of cloth on his back. The words to dodder, doddery, doddering, meaning to progress in an unsteady manner, are popularly said to have the same derivation. A traditional Norfolk rhyme goes as follows: The 'inventor' of ley lines, Alfred Watkins, thought that in the words "dodman" and the builder's "hod" there was a survival of an ancient British term for a surveyor. Watkins felt that the name came about because the snail's two horns resembled a surveyor's two surveying rods. Watkins also supported this idea with an etymology from 'doddering along' and 'dodge' (akin, in his mind, to the series of actions a surveyor would carry out in moving his rod back and forth until it accurately lined up with another one as a backsight or foresight) and the Welsh verb 'dodi' meaning to lay or place. He thus decided that The Long Man of Wilmington was an image of an ancient surveyor. References Mollusc common names Pseudoarchaeology Surveying
Dodman
[ "Engineering" ]
373
[ "Surveying", "Civil engineering" ]
7,417,940
https://en.wikipedia.org/wiki/Acoustic%20transmission%20line
An acoustic transmission line is the use of a long duct, which acts as an acoustic waveguide and is used to produce or transmit sound in an undistorted manner. Technically it is the acoustic analog of the electrical transmission line, typically conceived as a rigid-walled duct or tube, that is long and thin relative to the wavelength of sound present in it. Examples of transmission line (TL) related technologies include the (mostly obsolete) speaking tube, which transmitted sound to a different location with minimal loss and distortion; wind instruments such as the pipe organ, woodwind and brass, which can be modeled in part as transmission lines (although their design also involves generating sound, controlling its timbre, and coupling it efficiently to the open air); and transmission line based loudspeakers, which use the same principle to produce accurate extended low bass frequencies and avoid distortion. The comparison between an acoustic duct and an electrical transmission line is useful in "lumped-element" modeling of acoustical systems, in which acoustic elements like volumes, tubes, pistons, and screens can be modeled as single elements in a circuit. With the substitution of pressure for voltage, and volume particle velocity for current, the equations are essentially the same. Electrical transmission lines can be used to describe acoustic tubes and ducts, provided the frequency of the waves in the tube is below the critical frequency, such that they are purely planar. Design principles Phase inversion is achieved by selecting a length of line that is equal to the quarter wavelength of the target lowest frequency. The effect is illustrated in Fig. 1, which shows a hard boundary at one end (the speaker) and the open-ended line vent at the other. The phase relationship between the bass driver and vent is in phase in the pass band until the frequency approaches the quarter wavelength, when the relationship reaches 90 degrees as shown. However, by this time the vent is producing most of the output (Fig. 2). Because the line is operating over several octaves with the drive unit, cone excursion is reduced, providing higher SPLs and lower distortion levels compared with reflex and infinite baffle designs. The calculation of the length of the line required for a certain bass extension appears to be straightforward, based on a simple formula: L = c / (4f), where f is the sound frequency in hertz (Hz), c is the speed of sound in air at 20 °C in metres per second, and L is the length of the transmission line in metres. The complex loading of the bass drive unit demands specific Thiele–Small driver parameters to realise the full benefits of a TL design. However, most drive units in the marketplace are developed for the more common reflex and infinite baffle designs and are usually not suitable for TL loading. High efficiency bass drivers with extended low frequency ability are usually designed to be extremely light and flexible, having very compliant suspensions. While such drivers perform well in a reflex design, these characteristics do not match the demands of a TL design. The drive unit is effectively coupled to a long column of air which has mass. This lowers the resonant frequency of the drive unit, negating the need for a highly compliant device.
Furthermore, the column of air provides greater force on the driver itself than a driver opening onto a large volume of air (in simple terms it provides more resistance to the driver's attempt to move it), so controlling the movement of air requires an extremely rigid cone, to avoid deformation and consequent distortion. The introduction of absorption materials reduces the velocity of sound through the line, as discovered by Bailey in his original work. Bradbury published his extensive tests to determine this effect in a paper in the Journal of the Audio Engineering Society (JAES) in 1976, and his results agreed that heavily damped lines could reduce the velocity of sound by as much as 50%, although 35% is typical in medium damped lines. Bradbury's tests were carried out using fibrous materials, typically longhaired wool and glass fibre. These kinds of materials, however, produce highly variable effects that are not consistently repeatable for production purposes. They are also liable to produce inconsistencies due to movement, climatic factors and effects over time. High-specification acoustic foams, developed by loudspeaker manufacturers such as PMC, with similar characteristics to longhaired wool, provide repeatable results for consistent production. The density of the polymer, the diameter of the pores and the sculptured profiling are all specified to provide the correct absorption for each speaker model. The quantity and position of the foam are critical to engineer a low-pass acoustic filter that provides adequate attenuation of the upper bass frequencies, whilst allowing an unimpeded path for the low bass frequencies. Discovery and development The concept was termed "acoustical labyrinth" by Stromberg-Carlson Co. when used in their console radios beginning in 1936 (see, for example, the Concert Grand 837G documented at Radiomuseum). Benjamin Olney, who worked for Stromberg-Carlson, was the inventor of the acoustical labyrinth and wrote an article for the Journal of the Acoustical Society of America in October 1936 entitled "A Method of Eliminating Cavity Resonance, Extending Low Frequency Response and Increasing Acoustic Damping in Cabinet Type Loudspeakers". Stromberg-Carlson started manufacturing an acoustical labyrinth speaker enclosure meant for a 12" or 15" coaxial driver as early as 1952, as evidenced by an Audio Engineering article in July 1952 (page 28) and numerous ads in High Fidelity magazine in 1952 and thereafter. The transmission line type of loudspeaker enclosure was proposed in October 1965 by Dr A. R. Bailey and A. H. Radford in Wireless World magazine (pp. 483–486). The article postulated that energy from the rear of a driver unit could be essentially absorbed, without damping the cone's motion or superimposing internal reflections and resonance, so Bailey and Radford reasoned that the rear wave could be channeled down a long pipe. If the acoustic energy was absorbed, it would not be available to excite resonances. A pipe of sufficient length could be tapered and stuffed so that the energy loss was almost complete, minimizing output from the open end. No broad consensus on the ideal taper (expanding, uniform cross-section, or contracting) has been established. Uses Loudspeaker design Acoustic transmission lines gained attention in their use within loudspeakers in the 1960s and 1970s.
In 1965, A. R. Bailey's article in Wireless World, “A Non-resonant Loudspeaker Enclosure Design”, detailed a working transmission line, which was commercialized by John Wright and partners under the brand names IMF and later TDL, and sold by audiophile Irving M. "Bud" Fried in the United States. A transmission line is used in loudspeaker design to reduce time-, phase- and resonance-related distortions, and in many designs to gain exceptional bass extension to the lower end of human hearing, in some cases into the near-infrasonic (below 20 Hz). TDL's 1980s reference speaker range (now discontinued) contained models with low-frequency extension from 20 Hz down to as low as 7 Hz, without needing a separate subwoofer. Irving M. Fried, an advocate of TL design, stated: "I believe that speakers should preserve the integrity of the signal waveform and the Audio Perfectionist Journal has presented a great deal of information about the importance of time domain performance in loudspeakers. I’m not the only one who appreciates time- and phase-accurate speakers but I have been virtually the only advocate to speak out in print in recent years. There’s a reason for that." In practice, the duct is folded inside a conventionally shaped cabinet, so that the open end of the duct appears as a vent on the speaker cabinet. There are many ways in which the duct can be folded, and the line is often tapered in cross section to avoid parallel internal surfaces that encourage standing waves. Depending upon the drive unit and the quantity, and various physical properties, of the absorbent material, the amount of taper will be adjusted during the design process to tune the duct to remove irregularities in its response. The internal partitioning provides substantial bracing for the entire structure, reducing cabinet flexing and colouration. The inside faces of the duct or line are treated with an absorbent material to provide the correct termination with frequency to load the drive unit as a TL. A theoretically perfect TL would absorb all frequencies entering the line from the rear of the drive unit, but it remains theoretical, as it would have to be infinitely long. The physical constraints of the real world demand that the length of the line must often be less than 4 meters before the cabinet becomes too large for any practical application, so not all the rear energy can be absorbed by the line. In a realized TL, only the upper bass is TL loaded in the true sense of the term (i.e. fully absorbed); the low bass is allowed to freely radiate from the vent in the cabinet. The line therefore effectively works as a low-pass filter, another crossover point in fact, achieved acoustically by the line and its absorbent filling. Below this “crossover point” the low bass is loaded by the column of air formed by the length of the line. The length is specified to reverse the phase of the rear output of the drive unit as it exits the vent. This energy combines with the output of the bass unit, extending its response and effectively creating a second driver. Sound ducts as transmission lines A duct for sound propagation also behaves like a transmission line (e.g. air conditioning duct, car muffler, ...). Its length may be similar to the wavelength of the sound passing through it, but the dimensions of its cross-section are normally smaller than one quarter the wavelength. Sound is introduced at one end of the tube by forcing the pressure across the whole cross-section to vary with time.
An almost planar wavefront travels down the line at the speed of sound. When the wave reaches the end of the transmission line, behaviour depends on what is present at the end of the line. There are three possible scenarios: The frequency of the pulse generated at the transducer produces a pressure peak at the terminus exit (odd-ordered harmonic open pipe resonance), giving an effectively low acoustic impedance of the duct and a high level of energy transfer. The frequency of the pulse generated at the transducer produces a pressure null at the terminus exit (even-ordered harmonic open pipe anti-resonance), giving an effectively high acoustic impedance of the duct and a low level of energy transfer. The frequency of the pulse generated at the transducer produces neither a peak nor a null, in which case energy transfer is nominal, in keeping with typical energy dissipation with distance from the source. See also Frequency response Loudspeaker acoustics Loudspeaker measurement Speaking tube Transmission line loudspeaker References External links Quarterwave loudspeakers – Martin J King, developer of TL modeling software – TL theory & design Transmission Line Speakers Pages – TL projects, history & more Brines Acoustics Articles (Archived 2009-10-24) – Application, tips, essays Quarter Wave Tube - DiracDelta.co.uk – description of operation, equation and online calculation Loudspeaker technology Audio engineering
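The quarter-wave relationship given under Design principles, combined with the 35–50% reduction in effective sound velocity caused by the damping material, can be put into a short calculation. This is a sketch only: the 35% slowdown is the "typical" medium-damping figure quoted above from Bradbury, and the 25 Hz target is an arbitrary illustrative choice.

def line_length(f_low: float, damping_slowdown: float = 0.35,
                c: float = 343.0) -> float:
    """Quarter-wavelength transmission line length in metres.

    f_low            target lowest frequency, Hz
    damping_slowdown fractional reduction of sound velocity caused by
                     the absorbent filling (0.35 typical, per Bradbury)
    c                speed of sound in air at 20 C, m/s
    """
    c_eff = c * (1.0 - damping_slowdown)
    return c_eff / (4.0 * f_low)

print(f"undamped line for 25 Hz: {line_length(25, 0.0):.2f} m")  # ~3.43 m
print(f"damped line for 25 Hz:   {line_length(25):.2f} m")       # ~2.23 m

The slower effective velocity inside a damped line is why a practical cabinet can be physically shorter than the bare quarter-wavelength formula suggests while still tuning to the same target frequency.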
Acoustic transmission line
[ "Engineering" ]
2,351
[ "Electrical engineering", "Audio engineering" ]
355,377
https://en.wikipedia.org/wiki/Photonic%20crystal
A photonic crystal is an optical nanostructure in which the refractive index changes periodically. This affects the propagation of light in the same way that the structure of natural crystals gives rise to X-ray diffraction and that the atomic lattices (crystal structure) of semiconductors affect their conductivity of electrons. Photonic crystals occur in nature in the form of structural coloration and animal reflectors, and, as artificially produced, promise to be useful in a range of applications. Photonic crystals can be fabricated for one, two, or three dimensions. One-dimensional photonic crystals can be made of thin film layers deposited on each other. Two-dimensional ones can be made by photolithography, or by drilling holes in a suitable substrate. Fabrication methods for three-dimensional ones include drilling under different angles, stacking multiple 2-D layers on top of each other, direct laser writing, or, for example, instigating self-assembly of spheres in a matrix and dissolving the spheres. Photonic crystals can, in principle, find uses wherever light must be manipulated. For example, dielectric mirrors are one-dimensional photonic crystals which can produce ultra-high reflectivity mirrors at a specified wavelength. Two-dimensional photonic crystals called photonic-crystal fibers are used for fiber-optic communication, among other applications. Three-dimensional crystals may one day be used in optical computers, and could lead to more efficient photovoltaic cells. Although the energy of light (and all electromagnetic radiation) is quantized in units called photons, the analysis of photonic crystals requires only classical physics. "Photonic" in the name is a reference to photonics, a modern designation for the study of light (optics) and optical engineering. Indeed, the first research into what we now call photonic crystals may have been as early as 1887 when the English physicist Lord Rayleigh experimented with periodic multi-layer dielectric stacks, showing they can effect a photonic band-gap in one dimension. Research interest grew with work in 1987 by Eli Yablonovitch and Sajeev John on periodic optical structures with more than one dimension—now called photonic crystals. Introduction Photonic crystals are composed of periodic dielectric, metallo-dielectric—or even superconductor microstructures or nanostructures that affect electromagnetic wave propagation in the same way that the periodic potential in a semiconductor crystal affects the propagation of electrons, determining allowed and forbidden electronic energy bands. Photonic crystals contain regularly repeating regions of high and low refractive index. Light waves may propagate through this structure or propagation may be disallowed, depending on their wavelength. Wavelengths that may propagate in a given direction are called modes, and the ranges of wavelengths which propagate are called bands. Disallowed bands of wavelengths are called photonic band gaps. This gives rise to distinct optical phenomena, such as inhibition of spontaneous emission, high-reflecting omni-directional mirrors, and low-loss-waveguiding. The bandgap of photonic crystals can be understood as the destructive interference of multiple reflections of light propagating in the crystal at each interface between layers of high- and low- refractive index regions, akin to the bandgaps of electrons in solids. There are two strategies for opening up the complete photonic band gap. 
The first is to increase the refractive index contrast so that the band gap in each direction becomes wider, and the second is to make the Brillouin zone more similar to a sphere. However, the former is limited by the available technologies and materials, and the latter is restricted by the crystallographic restriction theorem. For this reason, the photonic crystals with a complete band gap demonstrated to date have a face-centered cubic lattice, which has the most spherical Brillouin zone, and are made of high-refractive-index semiconductor materials. Another approach is to exploit quasicrystalline structures, which are not subject to crystallographic limits. A complete photonic bandgap was reported for low-index polymer quasicrystalline samples manufactured by 3D printing. The periodicity of the photonic crystal structure must be around or greater than half the wavelength (in the medium) of the light waves in order for interference effects to be exhibited. Visible light ranges in wavelength from about 400 nm (violet) to about 700 nm (red), and the resulting wavelength inside a material requires dividing that by the average index of refraction. The repeating regions of high and low dielectric constant must, therefore, be fabricated at this scale. In one dimension, this is routinely accomplished using the techniques of thin-film deposition. History Photonic crystals have been studied in one form or another since 1887, but no one used the term photonic crystal until over 100 years later—after Eli Yablonovitch and Sajeev John published two milestone papers on photonic crystals in 1987. The early history is well documented in the account written when it was identified as one of the landmark developments in physics by the American Physical Society. Before 1987, one-dimensional photonic crystals in the form of periodic multi-layer dielectric stacks (such as the Bragg mirror) were studied extensively. Lord Rayleigh started their study in 1887, by showing that such systems have a one-dimensional photonic band-gap, a spectral range of large reflectivity, known as a stop-band. Today, such structures are used in a diverse range of applications—from reflective coatings to enhancing LED efficiency to highly reflective mirrors in certain laser cavities (see, for example, VCSEL). The pass-bands and stop-bands in photonic crystals were first reduced to practice by Melvin M. Weiner, who called those crystals "discrete phase-ordered media." Weiner achieved those results by extending Darwin's dynamical theory for x-ray Bragg diffraction to arbitrary wavelengths, angles of incidence, and cases where the incident wavefront at a lattice plane is scattered appreciably in the forward-scattered direction. A detailed theoretical study of one-dimensional optical structures was performed by Vladimir P. Bykov, who was the first to investigate the effect of a photonic band-gap on the spontaneous emission from atoms and molecules embedded within the photonic structure. Bykov also speculated as to what could happen if two- or three-dimensional periodic optical structures were used. The concept of three-dimensional photonic crystals was then discussed by Ohtaka in 1979, who also developed a formalism for the calculation of the photonic band structure. However, these ideas did not take off until after the publication of two milestone papers in 1987 by Yablonovitch and John. Both these papers concerned high-dimensional periodic optical structures, i.e., photonic crystals.
Yablonovitch's main goal was to engineer the photonic density of states in order to control the spontaneous emission of materials embedded in the photonic crystal. John's idea was to use photonic crystals to affect localisation and control of light. After 1987, the number of research papers concerning photonic crystals began to grow exponentially. However, due to the difficulty of fabricating these structures at optical scales (see Fabrication challenges), early studies were either theoretical or in the microwave regime, where photonic crystals can be built on the more accessible centimetre scale. (This fact is due to a property of the electromagnetic fields known as scale invariance. In essence, electromagnetic fields, as the solutions to Maxwell's equations, have no natural length scale—so solutions for centimetre-scale structures at microwave frequencies are the same as for nanometre-scale structures at optical frequencies.) By 1991, Yablonovitch had demonstrated the first three-dimensional photonic band-gap in the microwave regime. The structure that Yablonovitch was able to produce involved drilling an array of holes in a transparent material, where the holes of each layer form an inverse diamond structure – today it is known as Yablonovite. In 1996, Thomas Krauss demonstrated a two-dimensional photonic crystal at optical wavelengths. This opened the way to fabricating photonic crystals in semiconductor materials by borrowing methods from the semiconductor industry. Pavel Cheben demonstrated a new type of photonic crystal waveguide, the subwavelength grating (SWG) waveguide. The SWG waveguide operates in the subwavelength region, away from the bandgap. It allows the waveguide properties to be controlled directly by nanoscale engineering of the resulting metamaterial while mitigating wave interference effects. This provided “a missing degree of freedom in photonics” and resolved an important limitation of silicon photonics: its restricted set of available materials, insufficient to achieve complex optical on-chip functions. Today, such techniques use photonic crystal slabs, which are two-dimensional photonic crystals "etched" into slabs of semiconductor. Total internal reflection confines light to the slab, and allows photonic crystal effects, such as engineering photonic dispersion in the slab. Researchers around the world are looking for ways to use photonic crystal slabs in integrated computer chips, to improve optical processing of communications—both on-chip and between chips. The autocloning fabrication technique, proposed for infrared and visible-range photonic crystals by Sato et al. in 2002, uses electron-beam lithography and dry etching: lithographically formed layers of periodic grooves are stacked by regulated sputter deposition and etching, resulting in "stationary corrugations" and periodicity. Titanium dioxide/silica and tantalum pentoxide/silica devices were produced, exploiting their dispersion characteristics and suitability for sputter deposition. Such techniques have yet to mature into commercial applications, but two-dimensional photonic crystals are commercially used in photonic crystal fibres (otherwise known as holey fibres, because of the air holes that run through them). Photonic crystal fibres were first developed by Philip Russell in 1998, and can be designed to possess enhanced properties over (normal) optical fibres. Study has proceeded more slowly in three-dimensional than in two-dimensional photonic crystals, because fabrication is more difficult.
Three-dimensional photonic crystal fabrication had no inheritable semiconductor-industry techniques to draw on. Attempts have been made, however, to adapt some of the same techniques, and quite advanced examples have been demonstrated, for example in the construction of "woodpile" structures built on a planar layer-by-layer basis. Another strand of research has tried to construct three-dimensional photonic structures from self-assembly—essentially letting a mixture of dielectric nano-spheres settle from solution into three-dimensionally periodic structures that have photonic band-gaps. Vasily Astratov's group from the Ioffe Institute realized in 1995 that natural and synthetic opals are photonic crystals with an incomplete bandgap. The first demonstration of an "inverse opal" structure with a complete photonic bandgap came in 2000, from researchers at the University of Toronto and the Institute of Materials Science of Madrid (ICMM-CSIC), Spain. The ever-expanding field of natural photonics, bioinspiration and biomimetics—the study of natural structures to better understand and use them in design—is also helping researchers in photonic crystals. For example, in 2006 a naturally occurring photonic crystal was discovered in the scales of a Brazilian beetle. Analogously, in 2012 a diamond crystal structure was found in a weevil and a gyroid-type architecture in a butterfly. More recently, gyroid photonic crystals have been found in the feather barbs of blue-winged leafbirds and are responsible for the bird's shimmery blue coloration. Some publications suggest the feasibility of a complete photonic band gap in the visible range in photonic crystals with optically saturated media, which could be implemented by using laser light as an external optical pump. Construction strategies The fabrication method depends on the number of dimensions in which the photonic bandgap must exist. One-dimensional photonic crystals To produce a one-dimensional photonic crystal, thin-film layers of different dielectric constant may be periodically deposited on a surface, which leads to a band gap in a particular propagation direction (such as normal to the surface). A Bragg grating is an example of this type of photonic crystal. One-dimensional photonic crystals can include layers of non-linear optical materials in which the non-linear behaviour is accentuated due to field enhancement at wavelengths near a so-called degenerate band edge. This field enhancement (in terms of intensity) can reach N², where N is the total number of layers. However, by using layers which include an optically anisotropic material, it has been shown that the field enhancement can reach N⁴, which, in conjunction with non-linear optics, has potential applications such as in the development of an all-optical switch. A one-dimensional photonic crystal can be implemented using repeated alternating layers of a metamaterial and vacuum. If the metamaterial is such that the relative permittivity and permeability follow the same wavelength dependence, then the photonic crystal behaves identically for TE and TM modes, that is, for both s and p polarizations of light incident at an angle. Recently, researchers fabricated a graphene-based Bragg grating (one-dimensional photonic crystal) and demonstrated that it supports excitation of surface electromagnetic waves in the periodic structure, using a 633 nm He-Ne laser as the light source. Besides, a novel type of one-dimensional graphene-dielectric photonic crystal has also been proposed.
This structure can act as a far-IR filter and can support low-loss surface plasmons for waveguide and sensing applications. 1D photonic crystals doped with bio-active metals (e.g. silver) have also been proposed as sensing devices for bacterial contaminants. Similar planar 1D photonic crystals made of polymers have been used to detect volatile organic compound vapors in the atmosphere. In addition to solid-phase photonic crystals, some liquid crystals with defined ordering can demonstrate photonic color. For example, studies have shown that several liquid crystals with short- or long-range one-dimensional positional ordering can form photonic structures. Two-dimensional photonic crystals In two dimensions, holes may be drilled in a substrate that is transparent to the wavelength of radiation that the bandgap is designed to block. Triangular and square lattices of holes have been successfully employed. The holey fiber or photonic crystal fiber can be made by taking cylindrical rods of glass in a hexagonal lattice and then heating and stretching them; the triangle-like airgaps between the glass rods become the holes that confine the modes. Three-dimensional photonic crystals There are several structure types that have been constructed: Spheres in a diamond lattice. Yablonovite. The woodpile structure – "rods" are repeatedly etched with beam lithography, filled in, and covered with a layer of new material. As the process repeats, the channels etched in each layer are perpendicular to the layer below, and parallel to and out of phase with the channels two layers below. The process repeats until the structure is of the desired height. The fill-in material is then dissolved using an agent that dissolves the fill-in material but not the deposition material. It is generally hard to introduce defects into this structure. Inverse opals or inverse colloidal crystals – spheres (such as polystyrene or silicon dioxide) can be allowed to deposit into a cubic close-packed lattice suspended in a solvent. Then a hardener is introduced that makes a transparent solid out of the volume occupied by the solvent. The spheres are then dissolved with an acid such as hydrochloric acid. The colloids can be either spherical or nonspherical. One demonstrated three-dimensional structure, a beam splitter, contains in excess of 750,000 polymer nanorods; light focused on this beam splitter penetrates or is reflected, depending on polarization. Photonic crystal cavities Beyond the band gap itself, photonic crystals can show another effect if the symmetry is partially removed through the creation of a nanosize cavity. Such a defect can guide or trap light, with the same function as a nanophotonic resonator, and it is characterized by the strong dielectric modulation in the photonic crystal. For the waveguide, the propagation of light depends on the in-plane control provided by the photonic band gap and on the long-range confinement of light induced by the dielectric mismatch. For the light trap, the light is strongly confined in the cavity, resulting in further interactions with the materials. First, if we put a pulse of light inside the cavity, it will be delayed by nano- or picoseconds, the delay being proportional to the quality factor of the cavity. Finally, if we put an emitter inside the cavity, the emission can also be enhanced significantly, and the resonant coupling can even undergo Rabi oscillation. This is related to cavity quantum electrodynamics, and the interactions are defined by the weak and strong coupling of the emitter and the cavity.
The first studies of cavities in one-dimensional photonic slabs were typically in grating or distributed feedback structures. Two-dimensional photonic crystal cavities are useful for making efficient photonic devices in telecommunication applications, as they can provide very high quality factors, up to millions, with smaller-than-wavelength mode volumes. For three-dimensional photonic crystal cavities, several methods have been developed, including a lithographic layer-by-layer approach, surface ion beam lithography, and micromanipulation techniques. All of the mentioned photonic crystal cavities that tightly confine light offer very useful functionality for integrated photonic circuits, but it is challenging to produce them in a manner that allows them to be easily relocated. There is no full control over cavity creation, cavity location, or emitter position relative to the field maximum of the cavity, and studies to solve those problems are still ongoing. A movable nanowire cavity in a photonic crystal is one solution for tailoring this light–matter interaction. Fabrication challenges Higher-dimensional photonic crystal fabrication faces two major challenges: Making them with enough precision to prevent scattering losses blurring the crystal properties. Designing processes that can robustly mass-produce the crystals. One promising fabrication method for two-dimensionally periodic photonic crystals is the photonic-crystal fiber, such as a holey fiber. Using fiber draw techniques developed for communications fiber, it meets these two requirements, and photonic crystal fibres are commercially available. Another promising method for developing two-dimensional photonic crystals is the so-called photonic crystal slab. These structures consist of a slab of material—such as silicon—that can be patterned using techniques from the semiconductor industry. Such chips offer the potential to combine photonic processing with electronic processing on a single chip. For three-dimensional photonic crystals, various techniques have been used—including photolithography and etching techniques similar to those used for integrated circuits. Some of these techniques are already commercially available. To avoid the complex machinery of nanotechnological methods, some alternate approaches involve growing photonic crystals from colloidal crystals as self-assembled structures. Mass-scale 3D photonic crystal films and fibres can now be produced using a shear-assembly technique that stacks 200–300 nm colloidal polymer spheres into perfect films of fcc lattice. Because the particles have a softer transparent rubber coating, the films can be stretched and molded, tuning the photonic bandgaps and producing striking structural color effects. Computing photonic band structure The photonic band gap (PBG) is essentially the gap between the air-line and the dielectric-line in the dispersion relation of the PBG system. To design photonic crystal systems, it is essential to engineer the location and size of the bandgap by computational modeling using any of the following methods: Plane wave expansion method. Inverse dispersion method. Finite element method. Finite difference time domain method. Order-n spectral method. KKR method. Bloch wave – MoM method. Essentially, these methods solve for the frequencies (normal modes) of the photonic crystal for each value of the propagation direction given by the wave vector, or vice versa.
The various lines in the band structure correspond to the different cases of n, the band index. For an introduction to photonic band structure, see the books by K. Sakoda and by Joannopoulos. The plane wave expansion method can be used to calculate the band structure using an eigen formulation of Maxwell's equations, solving for the eigenfrequencies for each of the propagation directions of the wave vectors. It directly solves for the dispersion diagram. Electric field strength values can also be calculated over the spatial domain of the problem using the eigenvectors of the same problem. As an example, the band structure of a 1D distributed Bragg reflector (DBR) with an air core interleaved with a dielectric material of relative permittivity 12.25, and a lattice-period-to-air-core-thickness ratio (d/a) of 0.8, can be solved using 101 plane waves over the first irreducible Brillouin zone. The inverse dispersion method also exploits plane wave expansion, but formulates Maxwell's equations as an eigenproblem for the wave vector k, while the frequency ω is treated as a parameter. Thus, it solves the dispersion relation k(ω) instead of ω(k), which the plane wave method solves. The inverse dispersion method makes it possible to find complex values of the wave vector, e.g. in the bandgap, which allows one to distinguish photonic crystals from metamaterials. Moreover, the method readily accommodates frequency dispersion of the permittivity. To speed calculation of the frequency band structure, the Reduced Bloch Mode Expansion (RBME) method can be used. The RBME method applies "on top" of any of the primary expansion methods mentioned above. For large unit cell models, the RBME method can reduce the time for computing the band structure by up to two orders of magnitude. Applications Photonic crystals are attractive optical materials for controlling and manipulating light flow. One-dimensional photonic crystals are already in widespread use, in the form of thin-film optics, with applications from low- and high-reflection coatings on lenses and mirrors to colour-changing paints and inks. Higher-dimensional photonic crystals are of great interest for both fundamental and applied research, and the two-dimensional ones are beginning to find commercial applications. The first commercial products involving two-dimensionally periodic photonic crystals are already available in the form of photonic-crystal fibers, which use a microscale structure to confine light with radically different characteristics compared to conventional optical fiber for applications in nonlinear devices and guiding exotic wavelengths. The three-dimensional counterparts are still far from commercialization but may offer additional features such as the optical nonlinearity required for the operation of optical transistors used in optical computers, once some technological aspects such as manufacturability and principal difficulties such as disorder are under control. SWG photonic crystal waveguides have facilitated new integrated photonic devices for controlling transmission of light signals in photonic integrated circuits, including fibre-chip couplers, waveguide crossovers, wavelength and mode multiplexers, ultra-fast optical switches, athermal waveguides, biochemical sensors, polarization management circuits, broadband interference couplers, planar waveguide lenses, anisotropic waveguides, nanoantennas and optical phased arrays.
SWG nanophotonic couplers permit highly efficient and polarization-independent coupling between photonic chips and external devices. They have been adopted for fibre-chip coupling in volume optoelectronic chip manufacturing. These coupling interfaces are particularly important because every photonic chip needs to be optically connected with the external world, and the chips themselves appear in many established and emerging applications, such as 5G networks, data center interconnects, chip-to-chip interconnects, metro- and long-haul telecommunication systems, and automotive navigation. In addition to the foregoing, photonic crystals have been proposed as platforms for the development of solar cells and optical sensors, including chemical sensors and biosensors. See also References External links Business report on Photonic Crystals in Metamaterials – see also Scope and Analyst Photonic crystals tutorials by Prof S. Johnson at MIT Photonic crystals an introduction Invisibility cloak created in 3-D; Photonic crystals (BBC) Condensed matter physics Metamaterials Photonics
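To make the band-gap computation concrete, the following sketch finds the allowed bands of the 1D air/dielectric stack from the band-structure example above (relative permittivity 12.25, d/a = 0.8). Instead of the plane-wave expansion named in the text, it uses the equivalent analytic transfer-matrix dispersion relation for a two-layer unit cell at normal incidence, cos(Ka) = cos(k1·d1)·cos(k2·d2) − (1/2)(n1/n2 + n2/n1)·sin(k1·d1)·sin(k2·d2), with ki = ni·ω/c. Frequencies where the right-hand side exceeds 1 in magnitude admit no real Bloch wave vector K and therefore lie inside a photonic band gap. Only numpy is assumed; the frequency grid is an arbitrary choice.

import numpy as np

a  = 1.0                       # lattice period (arbitrary units; the result scales)
d1 = 0.8 * a                   # air-core thickness (d/a = 0.8, as in the text)
d2 = a - d1                    # dielectric thickness
n1, n2 = 1.0, np.sqrt(12.25)   # refractive indices: air, and epsilon = 12.25

# Normalized frequency w = omega*a/(2*pi*c); scan the first few bands.
w = np.linspace(1e-4, 0.8, 4000)
ph1 = 2 * np.pi * w * n1 * d1 / a      # optical phase k1*d1
ph2 = 2 * np.pi * w * n2 * d2 / a      # optical phase k2*d2

rhs = (np.cos(ph1) * np.cos(ph2)
       - 0.5 * (n1 / n2 + n2 / n1) * np.sin(ph1) * np.sin(ph2))

in_gap = np.abs(rhs) > 1.0             # |cos(Ka)| <= 1 only in allowed bands
edges = w[np.nonzero(np.diff(in_gap.astype(int)))[0]]
print("band-gap edges at normalized frequencies:", np.round(edges, 3))

The first gap appears near the normalized frequency where the total optical phase across the cell reaches pi, illustrating the multiple-reflection (Bragg) picture of the bandgap given in the Introduction.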
Photonic crystal
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
5,071
[ "Metamaterials", "Phases of matter", "Materials science", "Condensed matter physics", "Matter" ]
355,547
https://en.wikipedia.org/wiki/London%20dispersion%20force
London dispersion forces (LDF, also known as dispersion forces, London forces, instantaneous dipole–induced dipole forces, fluctuating induced dipole bonds or loosely as van der Waals forces) are a type of intermolecular force acting between atoms and molecules that are normally electrically symmetric; that is, the electrons are symmetrically distributed with respect to the nucleus. They are part of the van der Waals forces. The LDF is named after the German physicist Fritz London. They are the weakest intermolecular force. Introduction The electron distribution around an atom or molecule undergoes fluctuations in time. These fluctuations create instantaneous electric fields which are felt by other nearby atoms and molecules, which in turn adjust the spatial distribution of their own electrons. The net effect is that the fluctuations in electron positions in one atom induce a corresponding redistribution of electrons in other atoms, such that the electron motions become correlated. While the detailed theory requires a quantum-mechanical explanation (see quantum mechanical theory of dispersion forces), the effect is frequently described as the formation of instantaneous dipoles that (when separated by vacuum) attract each other. The magnitude of the London dispersion force is frequently described in terms of a single parameter called the Hamaker constant, typically symbolized A. For atoms that are located closer together than the wavelength of light, the interaction is essentially instantaneous and is described in terms of a "non-retarded" Hamaker constant. For entities that are farther apart, the finite time required for the fluctuation at one atom to be felt at a second atom ("retardation") requires use of a "retarded" Hamaker constant. While the London dispersion force between individual atoms and molecules is quite weak and decreases quickly with separation like 1/R⁶, in condensed matter (liquids and solids) the effect is cumulative over the volume of materials, or within and between organic molecules, such that London dispersion forces can be quite strong in bulk solids and liquids and decay much more slowly with distance. For example, the total force per unit area between two bulk solids decreases as 1/D³, where D is the separation between them. The effects of London dispersion forces are most obvious in systems that are very non-polar (e.g., that lack ionic bonds), such as hydrocarbons and highly symmetric molecules like bromine (Br2, a liquid at room temperature) or iodine (I2, a solid at room temperature). In hydrocarbons and waxes, the dispersion forces are sufficient to cause condensation from the gas phase into the liquid or solid phase. Sublimation heats of e.g. hydrocarbon crystals reflect the dispersion interaction. Liquefaction of oxygen and nitrogen gases into liquid phases is also dominated by attractive London dispersion forces. When atoms/molecules are separated by a third medium (rather than vacuum), the situation becomes more complex. In aqueous solutions, the effects of dispersion forces between atoms or molecules are frequently less pronounced due to competition with polarizable solvent molecules. That is, the instantaneous fluctuations in one atom or molecule are felt both by the solvent (water) and by other molecules. Larger and heavier atoms and molecules exhibit stronger dispersion forces than smaller and lighter ones. This is due to the increased polarizability of molecules with larger, more dispersed electron clouds.
The polarizability is a measure of how easily electrons can be redistributed in response to a field; a large polarizability implies loosely held, easily redistributed electrons. This trend is exemplified by the halogens (from smallest to largest: F2, Cl2, Br2, I2). The same increase of dispersive attraction occurs within and between organic molecules in the order RF, RCl, RBr, RI (from smallest to largest) or with other more polarizable heteroatoms. Fluorine and chlorine are gases at room temperature, bromine is a liquid, and iodine is a solid. The London forces are thought to arise from the motion of electrons. Quantum mechanical theory The first explanation of the attraction between noble gas atoms was given by Fritz London in 1930. He used a quantum-mechanical theory based on second-order perturbation theory. The perturbation is because of the Coulomb interaction between the electrons and nuclei of the two moieties (atoms or molecules). The second-order perturbation expression of the interaction energy contains a sum over states. The states appearing in this sum are simple products of the stimulated electronic states of the monomers. Thus, no intermolecular antisymmetrization of the electronic states is included, and the Pauli exclusion principle is only partially satisfied. London wrote a Taylor series expansion of the perturbation in 1/R, where R is the distance between the nuclear centers of mass of the moieties. This expansion is known as the multipole expansion because the terms in this series can be regarded as energies of two interacting multipoles, one on each monomer. Substitution of the multipole-expanded form of V into the second-order energy yields an expression that resembles an expression describing the interaction between instantaneous multipoles (see the qualitative description above). Additionally, an approximation, named after Albrecht Unsöld, must be introduced in order to obtain a description of London dispersion in terms of polarizability volumes, α′, and ionization energies, I (ancient term: ionization potentials). In this manner, the following approximation is obtained for the dispersion interaction E_AB between two atoms A and B: E_AB = −(3 α′_A α′_B I_A I_B) / (2 (I_A + I_B) R⁶). Here α′_A and α′_B are the polarizability volumes of the respective atoms, I_A and I_B are the first ionization energies of the atoms, and R is the intermolecular distance. Note that this final London equation does not contain instantaneous dipoles (see molecular dipoles). The "explanation" of the dispersion force as the interaction between two such dipoles was invented after London arrived at the proper quantum mechanical theory. The authoritative work contains a criticism of the instantaneous dipole model and a modern and thorough exposition of the theory of intermolecular forces. The London theory has much similarity to the quantum mechanical theory of light dispersion, which is why London coined the phrase "dispersion effect". In physics, the term "dispersion" describes the variation of a quantity with frequency, which is the fluctuation of the electrons in the case of the London dispersion. Relative magnitude Of the three van der Waals forces (orientation, induction, dispersion), dispersion is usually the dominant contribution between atoms and molecules, with the exception of molecules that are small and highly polar, such as water.
The following contribution of the dispersion to the total intermolecular interaction energy has been given: See also Dispersion (chemistry) van der Waals force van der Waals molecule Non-covalent interactions References Intermolecular forces Chemical bonding
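As a worked example of the London approximation reconstructed above: for two identical atoms the formula reduces to E = −(3/4)·I·α′²/R⁶. The sketch below evaluates it for a pair of argon atoms; the polarizability volume and ionization energy are standard textbook values, and the chosen separation is merely illustrative.

# London dispersion energy between two identical atoms:
#   E = -(3/2) * (I_A*I_B/(I_A+I_B)) * (alpha_A*alpha_B) / R**6
# which for identical atoms becomes E = -(3/4) * I * alpha**2 / R**6.
# With alpha as a polarizability *volume* (m^3), alpha**2/R**6 is
# dimensionless, so E comes out directly in the units of I.

alpha_Ar = 1.64e-30   # polarizability volume of argon, m^3 (textbook value)
I_Ar_eV  = 15.76      # first ionization energy of argon, eV
R        = 3.8e-10    # interatomic separation, m (illustrative choice)

E_eV = -0.75 * I_Ar_eV * alpha_Ar**2 / R**6
print(f"London dispersion energy: {E_eV:.4e} eV  ({E_eV * 96.485:.3f} kJ/mol)")

The result, roughly −0.01 eV (about −1 kJ/mol) at typical contact separation, is consistent with the weakness of the force noted in the introduction, yet it is enough to liquefy argon at low temperature.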
London dispersion force
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,442
[ "Molecular physics", "Materials science", "Intermolecular forces", "Condensed matter physics", "nan", "Chemical bonding" ]
356,382
https://en.wikipedia.org/wiki/Gene%20regulatory%20network
A gene (or genetic) regulatory network (GRN) is a collection of molecular regulators that interact with each other and with other substances in the cell to govern the gene expression levels of mRNA and proteins which, in turn, determine the function of the cell. GRNs also play a central role in morphogenesis, the creation of body structures, which in turn is central to evolutionary developmental biology (evo-devo). The regulator can be DNA, RNA, protein or any combination of two or more of these three that form a complex, such as a specific sequence of DNA and a transcription factor to activate that sequence. The interaction can be direct or indirect (through transcribed RNA or translated protein). In general, each mRNA molecule goes on to make a specific protein (or set of proteins). In some cases this protein will be structural, and will accumulate at the cell membrane or within the cell to give it particular structural properties. In other cases the protein will be an enzyme, i.e., a micro-machine that catalyses a certain reaction, such as the breakdown of a food source or toxin. Some proteins, though, serve only to activate other genes, and these are the transcription factors that are the main players in regulatory networks or cascades. By binding to the promoter region at the start of other genes they turn them on, initiating the production of another protein, and so on. Some transcription factors are inhibitory. In single-celled organisms, regulatory networks respond to the external environment, optimising the cell at a given time for survival in this environment. Thus a yeast cell, finding itself in a sugar solution, will turn on genes to make enzymes that process the sugar to alcohol. This process, which we associate with wine-making, is how the yeast cell makes its living, gaining energy to multiply, which under normal circumstances would enhance its survival prospects. In multicellular animals the same principle has been put in the service of gene cascades that control body shape. Each time a cell divides, two cells result which, although they contain the same genome in full, can differ in which genes are turned on and making proteins. Sometimes a 'self-sustaining feedback loop' ensures that a cell maintains its identity and passes it on. Less understood is the mechanism of epigenetics by which chromatin modification may provide cellular memory by blocking or allowing transcription. A major feature of multicellular animals is the use of morphogen gradients, which in effect provide a positioning system that tells a cell where in the body it is, and hence what sort of cell to become. A gene that is turned on in one cell may make a product that leaves the cell and diffuses through adjacent cells, entering them and turning on genes only when it is present above a certain threshold level. These cells are thus induced into a new fate, and may even generate other morphogens that signal back to the original cell. Over longer distances morphogens may use the active process of signal transduction. Such signalling controls embryogenesis, the building of a body plan from scratch through a series of sequential steps. They also control and maintain adult bodies through feedback processes, and the loss of such feedback because of a mutation can be responsible for the cell proliferation that is seen in cancer. In parallel with this process of building structure, the gene cascade turns on genes that make structural proteins that give each cell the physical properties it needs.
Overview At one level, biological cells can be thought of as "partially mixed bags" of biological chemicals – in the discussion of gene regulatory networks, these chemicals are mostly the messenger RNAs (mRNAs) and proteins that arise from gene expression. These mRNAs and proteins interact with each other with various degrees of specificity. Some diffuse around the cell. Others are bound to cell membranes, interacting with molecules in the environment. Still others pass through cell membranes and mediate long range signals to other cells in a multi-cellular organism. These molecules and their interactions comprise a gene regulatory network. A typical gene regulatory network is drawn as a graph of nodes and edges. The nodes of such a network can represent genes, proteins, mRNAs, protein/protein complexes or cellular processes. Nodes depicted as lying along vertical lines are associated with the cell/environment interfaces, while the others are free-floating and can diffuse. Edges between nodes represent interactions between the nodes, which can correspond to individual molecular reactions between DNA, mRNA, miRNA, proteins or molecular processes through which the products of one gene affect those of another, though the lack of experimentally obtained information often implies that some reactions are not modeled at such a fine level of detail. These interactions can be inductive (usually represented by arrowheads or the + sign), with an increase in the concentration of one leading to an increase in the other, inhibitory (represented with filled circles, blunt arrows or the minus sign), with an increase in one leading to a decrease in the other, or dual, when depending on the circumstances the regulator can activate or inhibit the target node. The nodes can regulate themselves directly or indirectly, creating feedback loops, which form cyclic chains of dependencies in the topological network. The network structure is an abstraction of the system's molecular or chemical dynamics, describing the manifold ways in which one substance affects all the others to which it is connected. In practice, such GRNs are inferred from the biological literature on a given system and represent a distillation of the collective knowledge about a set of related biochemical reactions. To speed up the manual curation of GRNs, some recent efforts try to use text mining, curated databases, network inference from massive data, model checking and other information extraction technologies for this purpose. Genes can be viewed as nodes in the network, with inputs being proteins such as transcription factors, and outputs being the level of gene expression. The value of each node is given by a function of the values of its regulators in previous time steps (in the Boolean network described below these are Boolean functions, typically AND, OR, and NOT). These functions have been interpreted as performing a kind of information processing within the cell, which determines cellular behavior. The basic drivers within cells are concentrations of some proteins, which determine both spatial (location within the cell or tissue) and temporal (cell cycle or developmental stage) coordinates of the cell, as a kind of "cellular memory". The gene networks are only beginning to be understood, and a next step for biology is to attempt to deduce the function of each gene "node", to help understand the behavior of the system in increasing levels of complexity, from gene to signaling pathway, cell or tissue level.
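As a minimal computational sketch of the network abstraction just described (the gene names and regulatory signs are hypothetical; the networkx library is used only as a convenient directed-graph container):

import networkx as nx

# Nodes are genes/proteins; signed, directed edges are regulatory interactions.
grn = nx.DiGraph()
grn.add_edge("TF_A", "gene_B", sign=+1)   # inductive: A activates B
grn.add_edge("TF_A", "gene_C", sign=-1)   # inhibitory: A represses C
grn.add_edge("gene_B", "TF_A", sign=-1)   # feedback loop: B represses its activator

# List the regulators feeding into each node, as a curated GRN would record them.
for gene in grn.nodes:
    regs = [(src, grn.edges[src, gene]["sign"]) for src in grn.predecessors(gene)]
    print(gene, "<-", regs)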
Mathematical models of GRNs have been developed to capture the behavior of the system being modeled, and in some cases generate predictions corresponding with experimental observations. In other cases, models have made accurate novel predictions that can be tested experimentally, suggesting approaches to explore that would not otherwise have been considered in the design of an experimental protocol. Modeling techniques include differential equations (ODEs), Boolean networks, Petri nets, Bayesian networks, graphical Gaussian network models, stochastic models, and process calculi. Conversely, techniques have been proposed for generating models of GRNs that best explain a set of time series observations. Recently it has been shown that ChIP-seq signals of histone modifications are more correlated with transcription factor motifs at promoters than RNA levels are. Hence it is proposed that time-series histone modification ChIP-seq could provide more reliable inference of gene regulatory networks than methods based on expression levels. Structure and evolution Global feature Gene regulatory networks are generally thought to be made up of a few highly connected nodes (hubs) and many poorly connected nodes nested within a hierarchical regulatory regime. Thus gene regulatory networks approximate a hierarchical scale free network topology. This is consistent with the view that most genes have limited pleiotropy and operate within regulatory modules. This structure is thought to evolve due to the preferential attachment of duplicated genes to more highly connected genes. Recent work has also shown that natural selection tends to favor networks with sparse connectivity. There are primarily two ways that networks can evolve, both of which can occur simultaneously. The first is that network topology can be changed by the addition or subtraction of nodes (genes), or that parts of the network (modules) may be expressed in different contexts. The Drosophila Hippo signaling pathway provides a good example. The Hippo signaling pathway controls both mitotic growth and post-mitotic cellular differentiation. Recently it was found that the network the Hippo signaling pathway operates in differs between these two functions, which in turn changes the behavior of the pathway. This suggests that the Hippo signaling pathway operates as a conserved regulatory module that can be used for multiple functions depending on context. Thus, changing network topology can allow a conserved module to serve multiple functions and alter the final output of the network. The second way networks can evolve is by changing the strength of interactions between nodes, such as how strongly a transcription factor may bind to a cis-regulatory element. Such variation in the strength of network edges has been shown to underlie between-species variation in vulva cell fate patterning of Caenorhabditis worms. Local feature Another widely cited characteristic of gene regulatory networks is their abundance of certain repetitive sub-networks known as network motifs. Network motifs can be regarded as repetitive topological patterns that recur when a big network is divided into small blocks. Previous analysis found several types of motifs that appeared more often in gene regulatory networks than in randomly generated networks. As an example, one such motif is the feed-forward loop, which consists of three nodes.
This motif is the most abundant among all possible motifs made up of three nodes, as is shown in the gene regulatory networks of fly, nematode, and human. The enriched motifs have been proposed to follow convergent evolution, suggesting they are "optimal designs" for certain regulatory purposes. For example, modeling shows that feed-forward loops are able to coordinate the change in node A (in terms of concentration and activity) and the expression dynamics of node C, creating different input-output behaviors. The galactose utilization system of E. coli contains a feed-forward loop which accelerates the activation of the galactose utilization operon galETK, potentially facilitating the metabolic transition to galactose when glucose is depleted. The feed-forward loop in the arabinose utilization systems of E. coli delays the activation of the arabinose catabolism operon and transporters, potentially avoiding unnecessary metabolic transition due to temporary fluctuations in upstream signaling pathways. Similarly in the Wnt signaling pathway of Xenopus, the feed-forward loop acts as a fold-change detector that responds to the fold change, rather than the absolute change, in the level of β-catenin, potentially increasing the resistance to fluctuations in β-catenin levels. Following the convergent evolution hypothesis, the enrichment of feed-forward loops would be an adaptation for fast response and noise resistance. A recent study found that yeast grown in an environment of constant glucose developed mutations in glucose signaling pathways and in the growth regulation pathway, suggesting that regulatory components responding to environmental changes are dispensable in a constant environment. On the other hand, some researchers hypothesize that the enrichment of network motifs is non-adaptive. In other words, gene regulatory networks can evolve to a similar structure without specific selection on the proposed input-output behavior. Support for this hypothesis often comes from computational simulations. For example, fluctuations in the abundance of feed-forward loops in a model that simulates the evolution of gene regulatory networks by randomly rewiring nodes may suggest that the enrichment of feed-forward loops is a side-effect of evolution. In another model of gene regulatory network evolution, the ratio of the frequencies of gene duplication and gene deletion shows great influence on network topology: certain ratios lead to the enrichment of feed-forward loops and create networks that show features of hierarchical scale free networks. De novo evolution of coherent type 1 feed-forward loops has been demonstrated computationally in response to selection for their hypothesized function of filtering out a short spurious signal, supporting adaptive evolution, but for non-idealized noise, a dynamics-based system of feed-forward regulation with different topology was instead favored (a minimal simulation of this filtering behaviour is sketched below). Bacterial regulatory networks Regulatory networks allow bacteria to adapt to almost every environmental niche on Earth. A network of interactions among diverse types of molecules, including DNA, RNA, proteins and metabolites, is utilised by the bacteria to achieve regulation of gene expression. In bacteria, the principal function of regulatory networks is to control the response to environmental changes, for example nutritional status and environmental stress. A complex organization of networks permits the microorganism to coordinate and integrate multiple environmental signals.
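The feed-forward loop motif discussed above is one way such networks filter environmental signals. A minimal discrete-time sketch (thresholds and pulse lengths are hypothetical) of a coherent type-1 FFL with AND logic rejecting a brief spurious input:

# X activates Y and Z; Z requires BOTH X and Y (an AND gate). Y accumulates
# slowly, so a brief X pulse ends before Y crosses its threshold and Z never
# fires, while a persistent X signal does activate Z.
def simulate_ffl(x_pulse_len, steps=20, y_threshold=3):
    y_level, z_history = 0, []
    for t in range(steps):
        x_on = t < x_pulse_len
        y_level = y_level + 1 if x_on else 0    # Y builds up only while X is on
        z_on = x_on and y_level >= y_threshold  # AND gate at the promoter of Z
        z_history.append(z_on)
    return z_history

print(any(simulate_ffl(2)))   # False: short spurious pulse is filtered out
print(any(simulate_ffl(8)))   # True: persistent signal switches Z on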
One example of stress is when the environment suddenly becomes poor in nutrients. This triggers a complex adaptation process in bacteria, such as E. coli. After this environmental change, thousands of genes change expression level. However, these changes are predictable from the topology and logic of the gene network that is reported in RegulonDB. Specifically, on average, the response strength of a gene was predictable from the difference between the numbers of activating and repressing input transcription factors of that gene. Modelling Coupled ordinary differential equations It is common to model such a network with a set of coupled ordinary differential equations (ODEs) or SDEs, describing the reaction kinetics of the constituent parts. Suppose that our regulatory network has N nodes, and let S_1(t), S_2(t), ..., S_N(t) represent the concentrations of the N corresponding substances at time t. Then the temporal evolution of the system can be described approximately by dS_j/dt = f_j(S_1, S_2, ..., S_N), where the functions f_j express the dependence of S_j on the concentrations of the other substances present in the cell. The functions f_j are ultimately derived from basic principles of chemical kinetics or simple expressions derived from these, e.g. Michaelis–Menten enzymatic kinetics. Hence, the functional forms of the f_j are usually chosen as low-order polynomials or Hill functions that serve as an ansatz for the real molecular dynamics. Such models are then studied using the mathematics of nonlinear dynamics. System-specific information, like reaction rate constants and sensitivities, is encoded as constant parameters. By solving for the fixed points of the system, dS_j/dt = 0 for all j, one obtains (possibly several) concentration profiles of proteins and mRNAs that are theoretically sustainable (though not necessarily stable). Steady states of kinetic equations thus correspond to potential cell types, and oscillatory solutions of the above equation correspond to naturally cyclic cell types. Mathematical stability of these attractors can usually be characterized by the sign of higher derivatives at critical points, and corresponds to biochemical stability of the concentration profile. Critical points and bifurcations in the equations correspond to critical cell states in which small state or parameter perturbations could switch the system between one of several stable differentiation fates. Trajectories correspond to the unfolding of biological pathways and transients of the equations to short-term biological events. For a more mathematical discussion, see the articles on nonlinearity, dynamical systems, bifurcation theory, and chaos theory. Boolean network The following example illustrates how a Boolean network can model a GRN together with its gene products (the outputs) and the substances from the environment that affect it (the inputs). Stuart Kauffman was amongst the first biologists to use the metaphor of Boolean networks to model genetic regulatory networks. Each gene, each input, and each output is represented by a node in a directed graph in which there is an arrow from one node to another if and only if there is a causal link between the two nodes. Each node in the graph can be in one of two states: on or off. For a gene, "on" corresponds to the gene being expressed; for inputs and outputs, "on" corresponds to the substance being present. Time is viewed as proceeding in discrete steps. At each step, the new state of a node is a Boolean function of the prior states of the nodes with arrows pointing towards it.
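A minimal sketch of such a synchronous Boolean update (the three-gene network and its rules are hypothetical, chosen only to illustrate the formalism):

# State: gene -> on/off. Each update rule is a Boolean function of the
# previous states of the nodes with arrows pointing towards that gene.
rules = {
    "A": lambda s: s["C"],                 # C activates A
    "B": lambda s: s["A"] and not s["C"],  # A activates B, C represses it
    "C": lambda s: not s["B"],             # B represses C
}

def step(state):
    # Synchronous update: every gene reads the *prior* state.
    return {gene: bool(rule(state)) for gene, rule in rules.items()}

state = {"A": False, "B": False, "C": True}
for t in range(6):
    print(t, state)
    state = step(state)   # this toy network quickly settles into an attractor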
The validity of the model can be tested by comparing simulation results with time series observations. A partial validation of a Boolean network model can also come from testing the predicted existence of a yet unknown regulatory connection between two particular transcription factors that each are nodes of the model. Continuous networks Continuous network models of GRNs are an extension of the Boolean networks described above. Nodes still represent genes and connections between them regulatory influences on gene expression. Genes in biological systems display a continuous range of activity levels, and it has been argued that using a continuous representation captures several properties of gene regulatory networks not present in the Boolean model. Formally, most of these approaches are similar to an artificial neural network, as inputs to a node are summed up and the result serves as input to a sigmoid function; proteins do, however, often control gene expression in a synergistic, i.e. non-linear, way. However, there is now a continuous network model that allows grouping of inputs to a node, thus realizing another level of regulation. This model is formally closer to a higher order recurrent neural network. The same model has also been used to mimic the evolution of cellular differentiation and even multicellular morphogenesis. Stochastic gene networks Experimental results have demonstrated that gene expression is a stochastic process. Thus, many authors are now using the stochastic formalism, after the work by Arkin et al. Works on single gene expression and small synthetic genetic networks, such as the genetic toggle switch of Tim Gardner and Jim Collins, provided additional experimental data on the phenotypic variability and the stochastic nature of gene expression. The first versions of stochastic models of gene expression involved only instantaneous reactions and were driven by the Gillespie algorithm. Since some processes, such as gene transcription, involve many reactions and could not be correctly modeled as an instantaneous reaction in a single step, it was proposed to model these reactions as single-step multiple delayed reactions in order to account for the time it takes for the entire process to be complete. From here, a set of reactions were proposed that allow generating GRNs. These are then simulated using a modified version of the Gillespie algorithm that can simulate multiple time-delayed reactions (chemical reactions where each of the products is provided a time delay that determines when it will be released into the system as a "finished product"). For example, basic transcription of a gene can be represented by the following single-step reaction (RNAP is the RNA polymerase, RBS is the RNA ribosome binding site, and Pro_i is the promoter region of gene i): RNAP + Pro_i → Pro_i(τ1) + RBS_i(τ1) + RNAP(τ2), where the parenthesized delays τ1 and τ2 determine when each product is released into the system. Furthermore, there seems to be a trade-off between the noise in gene expression, the speed with which genes can switch, and the metabolic cost associated with their functioning. More specifically, for any given level of metabolic cost, there is an optimal trade-off between noise and processing speed, and increasing the metabolic cost leads to better speed–noise trade-offs. A recent work proposed a simulator (SGNSim, Stochastic Gene Networks Simulator) that can model GRNs in which transcription and translation are modeled as multiple time-delayed events, with dynamics driven by a stochastic simulation algorithm (SSA) able to deal with multiple time-delayed events.
The time delays can be drawn from several distributions and the reaction rates from complex functions or from physical parameters. SGNSim can generate ensembles of GRNs within a set of user-defined parameters, such as topology. It can also be used to model specific GRNs and systems of chemical reactions. Genetic perturbations such as gene deletions, gene over-expression, insertions and frame-shift mutations can also be modeled. The GRN is created from a graph with the desired topology, imposing in-degree and out-degree distributions. Gene promoter activities are affected by other genes' expression products, which act as inputs in the form of monomers or combined into multimers, and are set as direct or indirect. Next, each direct input is assigned to an operator site, and different transcription factors can be allowed, or not, to compete for the same operator site, while indirect inputs are given a target. Finally, a function is assigned to each gene, defining the gene's response to a combination of transcription factors (promoter state). The transfer functions (that is, how genes respond to a combination of inputs) can be assigned to each combination of promoter states as desired. In other recent work, multiscale models of gene regulatory networks have been developed that focus on synthetic biology applications. Simulations have been used that model all biomolecular interactions in transcription, translation, regulation, and induction of gene regulatory networks, guiding the design of synthetic systems. Prediction Other work has focused on predicting the gene expression levels in a gene regulatory network. The approaches used to model gene regulatory networks have been constrained to be interpretable and, as a result, are generally simplified versions of the network. For example, Boolean networks have been used due to their simplicity and ability to handle noisy data, but lose data information by having a binary representation of the genes. Also, artificial neural networks omit the hidden layer so that they can be interpreted, losing the ability to model higher-order correlations in the data. Using a model that is not constrained to be interpretable, a more accurate model can be produced. Being able to predict gene expression more accurately provides a way to explore how drugs affect a system of genes as well as to find which genes are interrelated in a process. This has been encouraged by the DREAM competition, which invites competing prediction algorithms. Some other recent work has used artificial neural networks with a hidden layer. Applications Multiple sclerosis There are three classes of multiple sclerosis: relapsing-remitting (RRMS), primary progressive (PPMS) and secondary progressive (SPMS). Gene regulatory networks play a vital role in understanding the disease mechanism across these three classes of multiple sclerosis.
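Returning to the stochastic formalism described earlier, here is a minimal Gillespie-style simulation of a single gene's expression (all rates are hypothetical, and for brevity this sketches the basic SSA rather than the delayed variant that SGNSim uses):

import math, random

# Species counts and hypothetical rate constants (per unit time).
m, p = 0, 0                      # mRNA and protein copy numbers
K_TX, K_TL = 0.5, 2.0            # transcription; translation per mRNA
G_M, G_P = 0.1, 0.05             # mRNA and protein degradation per copy

t, t_end = 0.0, 200.0
while t < t_end:
    # Propensities of the four reactions in the current state.
    a = [K_TX, K_TL * m, G_M * m, G_P * p]
    a0 = sum(a)
    # Time to the next reaction is exponentially distributed.
    t += -math.log(1.0 - random.random()) / a0
    # Choose which reaction fires, proportionally to its propensity.
    r, acc = random.random() * a0, 0.0
    for i, ai in enumerate(a):
        acc += ai
        if r < acc:
            break
    if i == 0:   m += 1          # transcription: make one mRNA
    elif i == 1: p += 1          # translation: make one protein
    elif i == 2: m -= 1          # mRNA decay
    else:        p -= 1          # protein decay

print(f"after t={t_end}: {m} mRNAs, {p} proteins")

Run repeatedly, this reproduces the copy-number fluctuations around the deterministic steady state (here roughly K_TX/G_M mRNAs) that motivate the stochastic treatment.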
See also Body plan Cis-regulatory module Genenetwork (database) Morphogen Operon Synexpression Systems biology Weighted gene co-expression network analysis References Further reading External links Plant Transcription Factor Database and Plant Transcriptional Regulation Data and Analysis Platform Open source web service for GRN analysis BIB: Yeast Biological Interaction Browser Graphical Gaussian models for genome data – Inference of gene association networks with GGMs A bibliography on learning causal networks of gene interactions – regularly updated, contains hundreds of links to papers from bioinformatics, statistics, machine learning. https://web.archive.org/web/20060907074456/http://mips.gsf.de/proj/biorel/ BIOREL is a web-based resource for quantitative estimation of the gene network bias in relation to available database information about gene activity/function/properties/associations/interactions. Evolving Biological Clocks using Genetic Regulatory Networks – Information page with model source code and Java applet. Engineered Gene Networks Tutorial: Genetic Algorithms and their Application to the Artificial Evolution of Genetic Regulatory Networks BEN: a web-based resource for exploring the connections between genes, diseases, and other biomedical entities Global protein-protein interaction and gene regulation network of Arabidopsis thaliana Gene expression Networks Systems biology Evolutionary developmental biology
Gene regulatory network
[ "Chemistry", "Biology" ]
4,770
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry", "Systems biology" ]
357,190
https://en.wikipedia.org/wiki/Kondo%20effect
In physics, the Kondo effect describes the scattering of conduction electrons in a metal due to magnetic impurities, resulting in a characteristic change, i.e. a minimum, in electrical resistivity with temperature. The cause of the effect was first explained by Jun Kondo, who applied third-order perturbation theory to the problem to account for scattering of s-orbital conduction electrons off d-orbital electrons localized at impurities (Kondo model). Kondo's calculation predicted that the scattering rate and the resulting part of the resistivity should increase logarithmically as the temperature approaches 0 K. Extended to a lattice of magnetic impurities, the Kondo effect likely explains the formation of heavy fermions and Kondo insulators in intermetallic compounds, especially those involving rare earth elements such as cerium, praseodymium, and ytterbium, and actinide elements such as uranium. The Kondo effect has also been observed in quantum dot systems. Theory The dependence of the resistivity ρ on temperature T, including the Kondo effect, is written as ρ(T) = ρ_0 + aT^2 + c_m ln(μ/T) + bT^5, where ρ_0 is the residual resistivity, the term aT^2 shows the contribution from the Fermi liquid properties, and the term bT^5 is from the lattice vibrations; a, b, c_m and μ are constants independent of temperature. Jun Kondo derived the third term, c_m ln(μ/T), with its logarithmic dependence on temperature and the experimentally observed concentration dependence. Setting dρ/dT = 0 and neglecting the small Fermi-liquid term aT^2 places the resulting resistance minimum at the non-zero temperature T = (c_m/5b)^(1/5). History In 1930, Walther Meissner and B. Voigt observed that the resistivity of nominally pure gold reaches a minimum at 10 K, and similarly for nominally pure Cu at 2 K. Similar results were discovered in other metals. Kondo described the three puzzling aspects that frustrated previous researchers who tried to explain the effect: The resistivity of a truly pure metal is expected to decrease monotonically, because with lower temperature, the probability of electron-phonon scattering decreases. The resistivity should rapidly plateau when the temperature drops below the Debye temperature of the phonons, corresponding with the highest allowed mode of vibration of the metal. However, in the AuFe alloy, the resistivity continues to rise sharply below 0.01 K, yet there seemed to be no energy gap in AuFe alloy that small. The phenomenon is universal, so any explanation should apply in general. Experiments in the 1960s by Myriam Sarachik at Bell Laboratories showed that the phenomenon was caused by magnetic impurities in nominally pure metals. When Kondo sent a preview of his paper to Sarachik, Sarachik confirmed the data fit the theory. Kondo's solution was derived using perturbation theory, resulting in a divergence as the temperature approaches 0 K, but later methods used non-perturbative techniques to refine his result. These improvements produced a finite resistivity but retained the feature of a resistance minimum at a non-zero temperature. One defines the Kondo temperature as the energy scale limiting the validity of the Kondo results. The Anderson impurity model and accompanying Wilsonian renormalization theory were an important contribution to understanding the underlying physics of the problem. Based on the Schrieffer–Wolff transformation, it was shown that the Kondo model lies in the strong coupling regime of the Anderson impurity model. The Schrieffer–Wolff transformation projects out the high energy charge excitations in the Anderson impurity model, obtaining the Kondo model as an effective Hamiltonian. The Kondo effect can be considered as an example of asymptotic freedom, i.e.
a situation where the coupling becomes non-perturbatively strong at low temperatures and low energies. In the Kondo problem, the coupling refers to the interaction between the localized magnetic impurities and the itinerant electrons. Examples Extended to a lattice of magnetic ions, the Kondo effect likely explains the formation of heavy fermions and Kondo insulators in intermetallic compounds, especially those involving rare earth elements such as cerium, praseodymium, and ytterbium, and actinide elements such as uranium. In heavy fermion materials, the non-perturbative growth of the interaction leads to quasi-electrons with masses up to thousands of times the free electron mass, i.e., the electrons are dramatically slowed by the interactions. In a number of instances they are superconductors. It is believed that a manifestation of the Kondo effect is necessary for understanding the unusual metallic delta-phase of plutonium. The Kondo effect has been observed in quantum dot systems. In such systems, a quantum dot with at least one unpaired electron behaves as a magnetic impurity, and when the dot is coupled to a metallic conduction band, the conduction electrons can scatter off the dot. This is completely analogous to the more traditional case of a magnetic impurity in a metal. Band-structure hybridization and flat band topology in Kondo insulators have been imaged in angle-resolved photoemission spectroscopy experiments. In 2012, Beri and Cooper proposed that a topological Kondo effect could be found with Majorana fermions, while it has been shown that quantum simulations with ultracold atoms may also demonstrate the effect. In 2017, teams from the Vienna University of Technology and Rice University conducted experiments into the development of new materials made from the metals cerium, bismuth and palladium in specific combinations and theoretical work experimenting with models of such structures, respectively. The results of the experiments were published in December 2017 and, together with the theoretical work, led to the discovery of a new state, a correlation-driven Weyl semimetal. The team dubbed this new quantum material Weyl-Kondo semimetal. References Further reading Kondo Effect - 40 Years after the Discovery - special issue of the Journal of the Physical Society of Japan. Monograph by Kondo himself. Monograph on newer versions of the Kondo effect in non-magnetic contexts. Electrical resistance and conductance Correlated electrons Electric and magnetic fields in matter Physical phenomena
Kondo effect
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,228
[ "Physical phenomena", "Physical quantities", "Quantity", "Electric and magnetic fields in matter", "Materials science", "Condensed matter physics", "Correlated electrons", "Wikipedia categories named after physical quantities", "Electrical resistance and conductance" ]
357,353
https://en.wikipedia.org/wiki/Large%20Hadron%20Collider
The Large Hadron Collider (LHC) is the world's largest and highest-energy particle accelerator. It was built by the European Organization for Nuclear Research (CERN) between 1998 and 2008 in collaboration with over 10,000 scientists and hundreds of universities and laboratories across more than 100 countries. It lies in a tunnel 27 kilometres (17 mi) in circumference and as deep as 175 metres (574 ft) beneath the France–Switzerland border near Geneva. The first collisions were achieved in 2010 at an energy of 3.5 teraelectronvolts (TeV) per beam, about four times the previous world record. The discovery of the Higgs boson at the LHC was announced in 2012. Between 2013 and 2015, the LHC was shut down and upgraded; after those upgrades it reached 6.5 TeV per beam (13.0 TeV total collision energy). At the end of 2018, it was shut down for maintenance and further upgrades, and reopened over three years later in April 2022. The collider has four crossing points where the accelerated particles collide. Nine detectors, each designed to detect different phenomena, are positioned around the crossing points. The LHC primarily collides proton beams, but it can also accelerate beams of heavy ions, such as in lead–lead collisions and proton–lead collisions. The LHC's goal is to allow physicists to test the predictions of different theories of particle physics, including measuring the properties of the Higgs boson, searching for the large family of new particles predicted by supersymmetric theories, and studying other unresolved questions in particle physics. Background The term hadron refers to subatomic composite particles composed of quarks held together by the strong force (analogous to the way that atoms and molecules are held together by the electromagnetic force). The best-known hadrons are the baryons such as protons and neutrons; hadrons also include mesons such as the pion and kaon, which were discovered during cosmic ray experiments in the late 1940s and early 1950s. A collider is a type of particle accelerator that brings two opposing particle beams together such that the particles collide. In particle physics, colliders, though harder to construct, are a powerful research tool because they reach a much higher center of mass energy than fixed target setups. Analysis of the byproducts of these collisions gives scientists good evidence of the structure of the subatomic world and the laws of nature governing it. Many of these byproducts are produced only by high-energy collisions, and they decay after very short periods of time. Thus many of them are hard or nearly impossible to study in other ways. Purpose Many physicists hope that the Large Hadron Collider will help answer some of the fundamental open questions in physics, which concern the basic laws governing the interactions and forces among elementary particles and the deep structure of space and time, particularly the interrelation between quantum mechanics and general relativity. These high-energy particle experiments can provide data to support different scientific models. For example, the Standard Model and Higgsless model required high-energy particle experiment data to validate their predictions and allow further theoretical development. The Standard Model was completed by detection of the Higgs boson by the LHC in 2012. LHC collisions have explored other questions, including: Do all known particles have supersymmetric partners, as part of supersymmetry in an extension of the Standard Model and Poincaré symmetry?
Are there extra dimensions, as predicted by various models based on string theory, and can we detect them? What is the nature of the dark matter, a hypothetical form of matter which appears to account for 27% of the mass-energy of the universe? Other open questions that may be explored using high-energy particle collisions include: It is already known that electromagnetism and the weak nuclear force are different manifestations of a single force called the electroweak force. The LHC may clarify whether the electroweak force and the strong nuclear force are similarly just different manifestations of one universal unified force, as predicted by various Grand Unification Theories. Why is the fourth fundamental force (gravity) so many orders of magnitude weaker than the other three fundamental forces? See also Hierarchy problem. Are there additional sources of quark flavour mixing beyond those already present within the Standard Model? Why are there apparent violations of the symmetry between matter and antimatter? See also CP violation. What are the nature and properties of quark–gluon plasma, thought to have existed in the early universe and in certain compact and strange astronomical objects today? This will be investigated by heavy ion collisions, mainly in ALICE, but also in CMS, ATLAS and LHCb. First observed in 2010, findings published in 2012 confirmed the phenomenon of jet quenching in heavy-ion collisions. Design The collider is contained in a circular tunnel, with a circumference of 26.7 kilometres (16.6 mi), at a depth ranging from 50 to 175 metres (164 to 574 ft) underground. The variation in depth was deliberate, to reduce the amount of tunnel that lies under the Jura Mountains to avoid having to excavate a vertical access shaft there. A tunnel was chosen to avoid having to purchase expensive land on the surface and to take advantage of the shielding against background radiation that the Earth's crust provides. The 3.8-metre-wide concrete-lined tunnel, constructed between 1983 and 1988, was formerly used to house the Large Electron–Positron Collider. The tunnel crosses the border between Switzerland and France at four points, with most of it in France. Surface buildings hold ancillary equipment such as compressors, ventilation equipment, control electronics and refrigeration plants. The collider tunnel contains two adjacent parallel beamlines (or beam pipes) each containing a beam, which travel in opposite directions around the ring. The beams intersect at four points around the ring, which is where the particle collisions take place. Some 1,232 dipole magnets keep the beams on their circular path, while an additional 392 quadrupole magnets are used to keep the beams focused, with stronger quadrupole magnets close to the intersection points in order to maximize the chances of interaction where the two beams cross. Magnets of higher multipole orders are used to correct smaller imperfections in the field geometry. In total, about 10,000 superconducting magnets are installed, with the dipole magnets having a mass of over 27 tonnes. About 96 tonnes of superfluid helium-4 is needed to keep the magnets, made of copper-clad niobium-titanium, at their operating temperature of 1.9 K (−271.25 °C), making the LHC the largest cryogenic facility in the world at liquid helium temperature. The LHC uses 470 tonnes of Nb–Ti superconductor.
During LHC operations, the CERN site draws roughly 200 MW of electrical power from the French electrical grid, which, for comparison, is about one-third the energy consumption of the city of Geneva; the LHC accelerator and detectors draw about 120 MW thereof. Each day of its operation generates 140 terabytes of data. When running at an energy of 6.5 TeV per proton, once or twice a day the protons are accelerated from 450 GeV to 6.5 TeV, and the field of the superconducting dipole magnets is increased from 0.54 to 7.7 teslas (T). The protons each have an energy of 6.5 TeV, giving a total collision energy of 13 TeV. At this energy, the protons have a Lorentz factor of about 6,930 and move at about 0.999999990 c, or about 3.1 metres per second slower than the speed of light (c). It takes less than 90 microseconds for a proton to travel the 26.7 km around the main ring. This results in about 11,245 revolutions per second for protons whether the particles are at low or high energy in the main ring, since the speed difference between these energies is beyond the fifth decimal (these figures are checked in the short numerical sketch at the end of this section). Rather than having continuous beams, the protons are bunched together, into up to 2,808 bunches per beam, with 1.15×10¹¹ protons in each bunch, so that interactions between the two beams take place at discrete intervals, mainly 25 nanoseconds (ns) apart, providing a bunch collision rate of 40 MHz. It was operated with fewer bunches in the first years. The design luminosity of the LHC is 10³⁴ cm⁻²s⁻¹, which was first reached in June 2016. By 2017, twice this value was achieved. Before being injected into the main accelerator, the particles are prepared by a series of systems that successively increase their energy. The first system is the linear particle accelerator Linac4 generating 160 MeV negative hydrogen ions (H− ions), which feeds the Proton Synchrotron Booster (PSB). There, both electrons are stripped from the hydrogen ions leaving only the nucleus containing one proton. Protons are then accelerated to 2 GeV and injected into the Proton Synchrotron (PS), where they are accelerated to 26 GeV. Finally, the Super Proton Synchrotron (SPS) is used to increase their energy further to 450 GeV before they are at last injected (over a period of several minutes) into the main ring. Here, the proton bunches are accumulated, accelerated (over a period of about 20 minutes) to their peak energy, and finally circulated for 5 to 24 hours while collisions occur at the four intersection points. The LHC physics programme is mainly based on proton–proton collisions. However, during shorter running periods, typically one month per year, heavy-ion collisions are included in the programme. While lighter ions are considered as well, the baseline scheme deals with lead ions (see A Large Ion Collider Experiment). The lead ions are first accelerated by the linear accelerator LINAC 3, and the Low Energy Ion Ring (LEIR) is used as an ion storage and cooler unit. The ions are then further accelerated by the PS and SPS before being injected into the LHC ring, where they reach an energy of 2.3 TeV per nucleon (or 522 TeV per ion), higher than the energies reached by the Relativistic Heavy Ion Collider. The aim of the heavy-ion programme is to investigate quark–gluon plasma, which existed in the early universe. Detectors Nine detectors have been built in large caverns excavated at the LHC's intersection points. Two of them, the ATLAS experiment and the Compact Muon Solenoid (CMS), are large general-purpose particle detectors. ALICE and LHCb have more specialized roles, while the other five—TOTEM, MoEDAL, LHCf, SND and FASER—are much smaller and are for very specialized research.
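A quick numerical check of the beam parameters quoted above (a sketch; the constants are standard rounded values and the ring length is taken from the text):

import math

E_BEAM_GEV = 6500.0          # proton energy per beam, from the text
M_PROTON_GEV = 0.938272      # proton rest mass-energy
C = 299_792_458.0            # speed of light, m/s
RING_M = 26_659.0            # LHC ring circumference, ~26.7 km

gamma = E_BEAM_GEV / M_PROTON_GEV        # Lorentz factor, ~6,930
beta = math.sqrt(1.0 - 1.0 / gamma**2)   # v/c
deficit = (1.0 - beta) * C               # m/s slower than light, ~3.1
t_rev = RING_M / (beta * C)              # ~89 microseconds per lap
f_rev = 1.0 / t_rev                      # ~11,245 revolutions per second

print(f"gamma = {gamma:.0f}, v = {beta:.9f} c")
print(f"{deficit:.1f} m/s slower than light")
print(f"{t_rev * 1e6:.1f} us per revolution -> {f_rev:.0f} rev/s")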
The ATLAS and CMS experiments discovered the Higgs boson, which is strong evidence that the Standard Model has the correct mechanism of giving mass to elementary particles. Computing and analysis facilities Data produced by the LHC, as well as LHC-related simulation, were estimated at 200 petabytes per year. The LHC Computing Grid was constructed as part of the LHC design, to handle the massive amounts of data expected for its collisions. It is an international collaborative project that consists of a grid-based computer network infrastructure initially connecting 140 computing centres in 35 countries (later over 170 in more than 40 countries). It was designed by CERN to handle the significant volume of data produced by LHC experiments, incorporating both private fibre optic cable links and existing high-speed portions of the public Internet to enable data transfer from CERN to academic institutions around the world. The LHC Computing Grid consists of global federations across Europe, Asia Pacific and the Americas. The distributed computing project LHC@home was started to support the construction and calibration of the LHC. The project uses the BOINC platform, enabling anybody with an Internet connection and a computer running Mac OS X, Windows or Linux to use their computer's idle time to simulate how particles will travel in the beam pipes. With this information, the scientists are able to determine how the magnets should be calibrated to gain the most stable "orbit" of the beams in the ring. In August 2011, a second application (Test4Theory) went live which performs simulations against which to compare actual test data, to determine confidence levels of the results. By 2012, data from over 6 quadrillion (6×10¹⁵) LHC proton–proton collisions had been analysed. The LHC Computing Grid had become the world's largest computing grid in 2012, comprising over 170 computing facilities in a worldwide network across more than 40 countries. Operational history The LHC first went operational on 10 September 2008, but initial testing was delayed for 14 months, from 19 September 2008 to 20 November 2009, following a magnet quench incident that caused extensive damage to over 50 superconducting magnets, their mountings, and the vacuum pipe. During its first run (2010–2013), the LHC collided two opposing particle beams of either protons at up to 4 teraelectronvolts (4 TeV, or about 0.64 microjoules per proton), or lead nuclei (574 TeV per nucleus, or 2.76 TeV per nucleon). Its first run discoveries included the long-sought Higgs boson, several composite particles (hadrons) like the χb (3P) bottomonium state, the first creation of a quark–gluon plasma, and the first observations of the very rare decay of the Bs meson into two muons (Bs0 → μ+μ−), which challenged the validity of existing models of supersymmetry. Construction Operational challenges The size of the LHC constitutes an exceptional engineering challenge with unique operational issues on account of the amount of energy stored in the magnets and the beams. While operating, the total energy stored in the magnets is 10 gigajoules (GJ) and the total energy carried by the two beams reaches 724 megajoules (MJ). Loss of only one ten-millionth part (10⁻⁷) of the beam is sufficient to quench a superconducting magnet, while each of the two beam dumps must absorb 362 MJ.
These energies are carried by very little matter: under nominal operating conditions (2,808 bunches per beam, 1.15×10¹¹ protons per bunch), the beam pipes contain 1.0×10⁻⁹ gram of hydrogen, which, in standard conditions for temperature and pressure, would fill the volume of one grain of fine sand. Cost With a budget of €7.5 billion (about $9bn or £6.19bn), the LHC is one of the most expensive scientific instruments ever built. The total cost of the project is expected to be of the order of SFr 4.6bn (Swiss francs; about $4.4bn, €3.1bn, or £2.8bn) for the accelerator and SFr 1.16bn (about $1.1bn, €0.8bn, or £0.7bn) for the CERN contribution to the experiments. The construction of the LHC was approved in 1995 with a budget of SFr 2.6bn, with another SFr 210M toward the experiments. However, cost overruns, estimated in a major review in 2001 at around SFr 480M for the accelerator and SFr 50M for the experiments, along with a reduction in CERN's budget, pushed the completion date from 2005 to April 2007. The superconducting magnets were responsible for SFr 180M of the cost increase. There were also further costs and delays owing to engineering difficulties encountered while building the cavern for the Compact Muon Solenoid, and also due to magnet supports which were insufficiently strongly designed and failed their initial testing (2007) and damage from a magnet quench and liquid helium escape (inaugural testing, 2008). Because electricity costs are lower during the summer, the LHC normally does not operate over the winter months, although exceptions over the 2009/10 and 2012/2013 winters were made to make up for the 2008 start-up delays and to improve the precision of measurements of the new particle discovered in 2012, respectively. Construction accidents and delays On 25 October 2005, José Pereira Lages, a technician, was killed in the LHC when a switchgear that was being transported fell on top of him. On 27 March 2007, a cryogenic magnet support designed and provided by Fermilab and KEK broke during an initial pressure test involving one of the LHC's inner triplet (focusing quadrupole) magnet assemblies. No one was injured. Fermilab director Pier Oddone stated "In this case we are dumbfounded that we missed some very simple balance of forces". The fault had been present in the original design, and remained during four engineering reviews over the following years. Analysis revealed that its design, made as thin as possible for better insulation, was not strong enough to withstand the forces generated during pressure testing. Details are available in a statement from Fermilab, with which CERN is in agreement. Repairing the broken magnet and reinforcing the eight identical assemblies used by the LHC delayed the start-up date, then planned for November 2007. On 19 September 2008, during initial testing, a faulty electrical connection led to a magnet quench (the sudden loss of a superconducting magnet's superconducting ability owing to warming or electric field effects). Six tonnes of supercooled liquid helium—used to cool the magnets—escaped, with sufficient force to break 10-ton magnets nearby from their mountings, and caused considerable damage and contamination of the vacuum tube. Repairs and safety checks caused a delay of around 14 months. Two vacuum leaks were found in July 2009, and the start of operations was further postponed to mid-November 2009. Exclusion of Russia With the 2022 Russian invasion of Ukraine, the participation of Russians with CERN was called into question.
About 8% of the workforce are of Russian nationality. In June 2022, CERN said the governing council "intends to terminate" CERN's cooperation agreements with Belarus and Russia when they expire, respectively in June and December 2024. CERN said it would monitor developments in Ukraine and remains prepared to take additional steps as warranted. CERN further said that it would reduce the Ukrainian contribution to CERN for 2022 to the amount already remitted to the Organization, thereby waiving the second installment of the contribution. Initial lower magnet currents In both of its runs (2010 to 2012 and 2015 onwards), the LHC was initially run at energies below its planned operating energy, and ramped up to just 2 × 4 TeV energy on its first run and 2 × 6.5 TeV on its second run, below the design energy of 2 × 7 TeV. This is because massive superconducting magnets require considerable magnet training to handle the high currents involved without losing their superconducting ability, and the high currents are necessary to allow a high proton energy. The "training" process involves repeatedly running the magnets with lower currents to provoke any quenches or minute movements that may result. It also takes time to cool down magnets to their operating temperature of around 1.9 K (close to absolute zero). Over time, the magnet "beds in" and ceases to quench at these lesser currents; CERN media describe the magnets as "shaking out" the unavoidable tiny manufacturing imperfections in their crystals and positions that had initially impaired their ability to handle their planned currents, so that, with training, they gradually become able to handle the full design current without quenching. Inaugural tests (2008) The first beam was circulated through the collider on the morning of 10 September 2008. CERN successfully fired the protons around the tunnel in stages, three kilometres at a time. The particles were fired in a clockwise direction into the accelerator and successfully steered around it at 10:28 local time. The LHC successfully completed its major test: after a series of trial runs, two white dots flashed on a computer screen showing the protons travelled the full length of the collider. It took less than one hour to guide the stream of particles around its inaugural circuit. CERN next successfully sent a beam of protons in an anticlockwise direction, taking slightly longer at one and a half hours owing to a problem with the cryogenics, with the full circuit being completed at 14:59. Quench incident On 19 September 2008, a magnet quench occurred in about 100 bending magnets in sectors 3 and 4, where an electrical fault vented about six tonnes of liquid helium (the magnets' cryogenic coolant) into the tunnel. The escaping vapour expanded with explosive force, damaging 53 superconducting magnets and their mountings, and contaminating the vacuum pipe, which also lost vacuum conditions. Shortly after the incident, CERN reported that the most likely cause of the problem was a faulty electrical connection between two magnets. It estimated that repairs would take at least two months, owing to the time needed to warm up the affected sectors and then cool them back down to operating temperature. CERN released an interim technical report and preliminary analysis of the incident on 15 and 16 October 2008 respectively, and a more detailed report on 5 December 2008.
The analysis of the incident by CERN confirmed that an electrical fault had indeed been the cause. The faulty electrical connection had led (correctly) to a failsafe power abort of the electrical systems powering the superconducting magnets, but had also caused an electric arc (or discharge) which damaged the integrity of the supercooled helium's enclosure and vacuum insulation, causing the coolant's temperature and pressure to rapidly rise beyond the ability of the safety systems to contain it, and leading to a temperature rise of about 100 degrees Celsius in some of the affected magnets. Energy stored in the superconducting magnets and electrical noise induced in other quench detectors also played a role in the rapid heating. Around two tonnes of liquid helium escaped explosively before detectors triggered an emergency stop, and a further four tonnes leaked at lower pressure in the aftermath. A total of 53 magnets were damaged in the incident and were repaired or replaced during the winter shutdown. This accident was thoroughly discussed in a 22 February 2010 Superconductor Science and Technology article by CERN physicist Lucio Rossi. In the original schedule for LHC commissioning, the first "modest" high-energy collisions at a centre-of-mass energy of 900 GeV were expected to take place before the end of September 2008, and the LHC was expected to be operating at 10 TeV by the end of 2008. However, owing to the delay caused by the incident, the collider was not operational until November 2009. Despite the delay, the LHC was officially inaugurated on 21 October 2008, in the presence of political leaders, science ministers from CERN's 20 Member States, CERN officials, and members of the worldwide scientific community. Most of 2009 was spent on repairs and reviews from the damage caused by the quench incident, along with two further vacuum leaks identified in July 2009; this pushed the start of operations to November of that year. Run 1: first operational run (2009–2013) On 20 November 2009, low-energy beams circulated in the tunnel for the first time since the incident, and shortly after, on 30 November, the LHC achieved 1.18 TeV per beam to become the world's highest-energy particle accelerator, beating the Tevatron's previous record of 0.98 TeV per beam held for eight years. The early part of 2010 saw the continued ramp-up of beam energies and early physics experiments towards 3.5 TeV per beam, and on 30 March 2010, the LHC set a new record for high-energy collisions by colliding proton beams at a combined energy level of 7 TeV. The attempt was the third that day, after two unsuccessful attempts in which the protons had to be "dumped" from the collider and new beams had to be injected. This also marked the start of the main research programme. The first proton run ended on 4 November 2010. A run with lead ions started on 8 November 2010, and ended on 6 December 2010, allowing the ALICE experiment to study matter under extreme conditions similar to those shortly after the Big Bang. CERN originally planned that the LHC would run through to the end of 2012, with a short break at the end of 2011 to allow for an increase in beam energy from 3.5 to 4 TeV per beam. At the end of 2012, the LHC was planned to be temporarily shut down until around 2015 to allow an upgrade to a planned beam energy of 7 TeV per beam. In late 2012, in light of the July 2012 discovery of the Higgs boson, the shutdown was postponed for some weeks into early 2013, to allow additional data to be obtained before shutdown.
Long Shutdown 1 (2013–2015) The LHC was shut down on 13 February 2013 for its two-year upgrade called Long Shutdown 1 (LS1), which was to touch on many aspects of the LHC: enabling collisions at 14 TeV, enhancing its detectors and pre-accelerators (the Proton Synchrotron and Super Proton Synchrotron), as well as replacing its ventilation system and 100 kilometres of cabling impaired by high-energy collisions from its first run. The upgraded collider began its long start-up and testing process in June 2014, with the Proton Synchrotron Booster starting on 2 June 2014, the final interconnection between magnets completing and the Proton Synchrotron circulating particles on 18 June 2014, and the first section of the main LHC supermagnet system reaching its operating temperature of 1.9 K a few days later. Due to the slow progress with "training" the superconducting magnets, it was decided to start the second run with a lower energy of 6.5 TeV per beam, corresponding to a current in the magnets of 11,000 amperes. The first of the main LHC magnets were reported to have been successfully trained by 9 December 2014, while training of the other magnet sectors was finished in March 2015. Run 2: second operational run (2015–2018) On 5 April 2015, the LHC restarted after a two-year break, during which the electrical connectors between the bending magnets were upgraded to safely handle the current required for 7 TeV per beam (14 TeV collision energy). However, the bending magnets were only trained to handle up to 6.5 TeV per beam (13 TeV collision energy), which became the operating energy for 2015 to 2018. The energy was first reached on 10 April 2015. The upgrades culminated in colliding protons together with a combined energy of 13 TeV. On 3 June 2015, the LHC started delivering physics data after almost two years offline. In the following months, it was used for proton–proton collisions, while in November, the machine switched to collisions of lead ions and in December, the usual winter shutdown started. In 2016, the machine operators focused on increasing the luminosity for proton–proton collisions. The design value was first reached on 29 June, and further improvements increased the collision rate to 40% above the design value. The total number of collisions in 2016 exceeded the number from Run 1 – at a higher energy per collision. The proton–proton run was followed by four weeks of proton–lead collisions. In 2017, the luminosity was increased further and reached twice the design value. The total number of collisions was higher than in 2016 as well. The 2018 physics run began on 17 April and stopped on 3 December, including four weeks of lead–lead collisions. Long Shutdown 2 (2018–2022) Long Shutdown 2 (LS2) started on 10 December 2018. The LHC and the whole CERN accelerator complex were maintained and upgraded. The goal of the upgrades was to implement the High Luminosity Large Hadron Collider (HL-LHC) project, which will increase the luminosity by a factor of 10. LS2 ended in April 2022. The Long Shutdown 3 (LS3) in the 2020s will take place before the HL-LHC project is done. Run 3: third operational run (2022) The LHC became operational again on 22 April 2022 with a new maximum beam energy of 6.8 TeV (13.6 TeV collision energy), which was first achieved on 25 April. It officially commenced its run 3 physics season on 5 July 2022. This round is expected to continue until 2026.
In addition to a higher energy, the LHC is expected to reach a higher luminosity, which is expected to increase even further with the upgrade to the HL-LHC after Run 3. Findings and discoveries An initial focus of research was to investigate the possible existence of the Higgs boson, a key part of the Standard Model of physics which was predicted by theory but had not been observed before due to its high mass and elusive nature. CERN scientists estimated that, if the Standard Model was correct, the LHC would produce several Higgs bosons every minute, allowing physicists to finally confirm or disprove the Higgs boson's existence. In addition, the LHC allowed the search for supersymmetric particles and other hypothetical particles as possible unknown areas of physics. Some extensions of the Standard Model predict additional particles, such as the heavy W' and Z' gauge bosons, which are also estimated to be within reach of the LHC to discover. First run (data taken 2009–2013) The first physics results from the LHC, involving 284 collisions which took place in the ALICE detector, were reported on 15 December 2009. The results of the first proton–proton collisions at energies higher than Fermilab's Tevatron proton–antiproton collisions were published by the CMS collaboration in early February 2010, yielding greater-than-predicted charged-hadron production. After the first year of data collection, the LHC experimental collaborations started to release their preliminary results concerning searches for new physics beyond the Standard Model in proton–proton collisions. No evidence of new particles was detected in the 2010 data. As a result, bounds were set on the allowed parameter space of various extensions of the Standard Model, such as models with large extra dimensions, constrained versions of the Minimal Supersymmetric Standard Model, and others. On 24 May 2011, it was reported that quark–gluon plasma (the densest matter thought to exist besides black holes) had been created in the LHC. Between July and August 2011, results of searches for the Higgs boson and for exotic particles, based on the data collected during the first half of the 2011 run, were presented in conferences in Grenoble and Mumbai. In the latter conference, it was reported that, despite hints of a Higgs signal in earlier data, ATLAS and CMS exclude with 95% confidence level (using the CLs method) the existence of a Higgs boson with the properties predicted by the Standard Model over most of the mass region between 145 and 466 GeV. The searches for new particles did not yield signals either, allowing further constraints on the parameter space of various extensions of the Standard Model, including its supersymmetric extensions. On 13 December 2011, CERN reported that the Standard Model Higgs boson, if it exists, is most likely to have a mass constrained to the range 115–130 GeV. Both the CMS and ATLAS detectors have also shown intensity peaks in the 124–125 GeV range, consistent with either background noise or the observation of the Higgs boson. On 22 December 2011, it was reported that a new composite particle had been observed, the χb (3P) bottomonium state. On 4 July 2012, both the CMS and ATLAS teams announced the discovery of a boson in the mass region around 125–126 GeV, with a statistical significance at the level of 5 sigma each. This meets the formal level required to announce a new particle.
The observed properties were consistent with the Higgs boson, but scientists were cautious as to whether it could formally be identified as actually being the Higgs boson, pending further analysis. On 14 March 2013, CERN announced confirmation that the observed particle was indeed the predicted Higgs boson. On 8 November 2012, the LHCb team reported on an experiment seen as a "golden" test of supersymmetry theories in physics, by measuring the very rare decay of the Bs meson into two muons (Bs0 → μ+μ−). The results, which match those predicted by the non-supersymmetrical Standard Model rather than the predictions of many branches of supersymmetry, show the decays are less common than some forms of supersymmetry predict, though they could still match the predictions of other versions of supersymmetry theory. The initial results fell short of proof but carried a relatively high 3.5 sigma level of significance. The result was later confirmed by the CMS collaboration. In August 2013, the LHCb team revealed an anomaly in the angular distribution of B meson decay products which could not be predicted by the Standard Model; this anomaly had a statistical certainty of 4.5 sigma, just short of the 5 sigma needed to be officially recognized as a discovery. It is unknown what the cause of this anomaly would be, although the Z' boson has been suggested as a possible candidate. On 19 November 2014, the LHCb experiment announced the discovery of two new heavy subatomic particles, Ξ′b− and Ξ∗b−. Both of them are baryons that are composed of one bottom, one down, and one strange quark. They are excited states of the bottom Xi baryon. The LHCb collaboration has observed multiple exotic hadrons, possibly pentaquarks or tetraquarks, in the Run 1 data. On 4 April 2014, the collaboration confirmed the existence of the tetraquark candidate Z(4430) with a significance of over 13.9 sigma. On 13 July 2015, results consistent with pentaquark states in the decay of bottom Lambda baryons (Λb0) were reported. On 28 June 2016, the collaboration announced four tetraquark-like particles decaying into a J/ψ and a φ meson, only one of which, X(4140), was well established before; the others were X(4274), X(4500) and X(4700). In December 2016, ATLAS presented a measurement of the W boson mass, reaching the precision of analyses done at the Tevatron. Second run (2015–2018) At the conference EPS-HEP 2015 in July, the collaborations presented first cross-section measurements of several particles at the higher collision energy. On 15 December 2015, the ATLAS and CMS experiments both reported a number of preliminary results for Higgs physics, supersymmetry (SUSY) searches and exotics searches using 13 TeV proton collision data. Both experiments saw a moderate excess around 750 GeV in the two-photon invariant mass spectrum, but in an August 2016 report the experiments did not confirm the existence of the hypothetical particle. In July 2017, many analyses based on the large dataset collected in 2016 were shown. The properties of the Higgs boson were studied in more detail and the precision of many other results was improved. As of March 2021, the LHC experiments had discovered 59 new hadrons in the data collected during the first two runs. Third run (2022 – present) The third run of the LHC began in July 2022, after more than three years of upgrades, and is planned to last until July 2026.
On 5 July 2022, LHCb reported the discovery of a new type of pentaquark made up of a charm quark, a charm antiquark, and an up, a down and a strange quark, observed in an analysis of decays of charged B mesons. The first ever pair of tetraquarks was also reported. On 18 September 2024, ATLAS reported the first observation of quantum entanglement between quarks, which was also the highest-energy observation of entanglement to date. Future plans "High-luminosity" upgrade After some years of running, any particle physics experiment typically begins to suffer from diminishing returns: as the key results reachable by the device begin to be completed, later years of operation discover proportionately less than earlier years. A common response is to upgrade the devices involved, typically in collision energy or luminosity, or with improved detectors. In addition to a possible increase to 14 TeV collision energy, a luminosity upgrade of the LHC, called the High Luminosity Large Hadron Collider, started in June 2018; it will boost the accelerator's potential for new discoveries in physics, starting in 2027. The upgrade aims at increasing the luminosity of the machine by a factor of 10, up to 10^35 cm−2s−1, providing a better chance to see rare processes and improving statistically marginal measurements. Proposed Future Circular Collider CERN has several preliminary designs for a Future Circular Collider (FCC)—which would be the most powerful particle accelerator ever built—with different types of collider ranging in cost from around €9 billion (US$10.2 billion) to €21 billion. It would use the LHC ring as a preaccelerator, similar to how the LHC uses the smaller Super Proton Synchrotron. It is CERN's opening bid in a priority-setting process called the European Strategy for Particle Physics Update, and will affect the field's future well into the second half of the century. As of 2023, no fixed plan exists and it is unknown whether the construction will be funded. Safety of particle collisions The experiments at the Large Hadron Collider sparked fears that the particle collisions might produce doomsday phenomena, involving the production of stable microscopic black holes or the creation of hypothetical particles called strangelets. Two CERN-commissioned safety reviews examined these concerns and concluded that the experiments at the LHC present no danger and that there is no reason for concern, a conclusion endorsed by the American Physical Society. The reports also noted that the physical conditions and collision events that exist in the LHC and similar experiments occur naturally and routinely in the universe without hazardous consequences, including ultra-high-energy cosmic rays observed to impact Earth with energies far higher than those in any human-made collider. The Oh-My-God particle, for example, carried 320 million TeV of energy, corresponding to a collision energy tens of times greater than that of the most energetic collisions produced in the LHC. Popular culture The Large Hadron Collider gained a considerable amount of attention from outside the scientific community and its progress is followed by most popular science media. The LHC has also inspired works of fiction including novels, TV series, video games and films. CERN employee Katherine McAlpine's "Large Hadron Rap" surpassed 8 million YouTube views as of 2022. The band Les Horribles Cernettes was founded by women from CERN. The name was chosen so as to have the same initials as the LHC.
National Geographic Channel's World's Toughest Fixes, Season 2 (2010), Episode 6 "Atom Smasher" features the replacement of the last superconducting magnet section in the repair of the collider after the 2008 quench incident. The episode includes actual footage from the repair facility to the inside of the collider, and explanations of the function, engineering, and purpose of the LHC. The song "Munich" on the 2012 studio album Scars & Stories by The Fray is inspired by the Large Hadron Collider. Lead singer Isaac Slade said in an interview with The Huffington Post, "There's this large particle collider out in Switzerland that is kind of helping scientists peel back the curtain on what creates gravity and mass. Some very big questions are being raised, even some things that Einstein proposed, that have just been accepted for decades are starting to be challenged. They're looking for the God Particle, basically, the particle that holds it all together. That song is really just about the mystery of why we're all here and what's holding it all together, you know?" The Large Hadron Collider was the focus of the 2012 student film Decay, with the movie being filmed on location in CERN's maintenance tunnels. Fiction The novel Angels & Demons, by Dan Brown, involves antimatter created at the LHC to be used in a weapon against the Vatican. In response, CERN published a "Fact or Fiction?" page discussing the accuracy of the book's portrayal of the LHC, CERN, and particle physics in general. The movie version of the book has footage filmed on-site at one of the experiments at the LHC; the director, Ron Howard, met with CERN experts in an effort to make the science in the story more accurate. The novel FlashForward, by Robert J. Sawyer, involves the search for the Higgs boson at the LHC. CERN published a "Science and Fiction" page interviewing Sawyer and physicists about the book and the TV series based on it. See also List of accelerators in particle physics Accelerator projects Circular Electron Positron Collider Compact Linear Collider Future Circular Collider International Linear Collider Very Large Hadron Collider References External links Overview of the LHC at CERN's public webpage CERN Courier magazine LHC Portal Web portal Full documentation for design and construction of the LHC and its six detectors (2008). Video Animation of LHC in collision production mode (June 2015) News Eight Things To Know As The Large Hadron Collider Breaks Energy Records Buildings and structures in Ain Buildings and structures in the canton of Geneva CERN accelerators E-Science International science experiments Laboratories in France Laboratories in Switzerland Particle physics facilities Physics beyond the Standard Model Underground laboratories CERN facilities Government buildings completed in 2008
Large Hadron Collider
[ "Physics" ]
8,669
[ "Unsolved problems in physics", "Particle physics", "Physics beyond the Standard Model" ]
357,565
https://en.wikipedia.org/wiki/Potassium%20sodium%20tartrate
Potassium sodium tartrate tetrahydrate, also known as Rochelle salt, is a double salt of tartaric acid first prepared (in about 1675) by an apothecary, Pierre Seignette, of La Rochelle, France. Potassium sodium tartrate and monopotassium phosphate were the first materials discovered to exhibit piezoelectricity. This property led to its extensive use in crystal phonograph cartridges, microphones and earpieces during the post-World War II consumer electronics boom of the mid-20th century. Such transducers had an exceptionally high output, with typical pick-up cartridge outputs of 2 volts or more. Rochelle salt is deliquescent, so any transducers based on the material deteriorated if stored in damp conditions. It has been used medicinally as a laxative. It has also been used in the process of silvering mirrors. It is an ingredient of Fehling's solution (a reagent for reducing sugars). It is used in electroplating, in electronics and piezoelectricity, and as a combustion accelerator in cigarette paper (similar to an oxidizer in pyrotechnics). In organic synthesis, it is used in aqueous workups to break up emulsions, particularly for reactions in which an aluminium-based hydride reagent was used. Sodium potassium tartrate is also important in the food industry. It is a common precipitant in protein crystallography and is also an ingredient in the Biuret reagent, which is used to measure protein concentration. This ingredient maintains cupric ions in solution at an alkaline pH. Preparation The starting material is tartar with a minimum 68% tartaric acid content. This is first dissolved in water or in the mother liquor of a previous batch. It is then basified with hot saturated sodium hydroxide solution to pH 8, decolorized with activated charcoal, and chemically purified before being filtered. The filtrate is evaporated to 42 °Bé at 100 °C, and passed to granulators in which Seignette's salt crystallizes on slow cooling. The salt is separated from the mother liquor by centrifugation, accompanied by washing of the granules, and is dried in a rotary furnace and sieved before packaging. Commercially marketed grain sizes range from 2000 μm to < 250 μm (powder). Larger crystals of Rochelle salt have been grown under conditions of reduced gravity and convection on board Skylab. Rochelle salt crystals will begin to dehydrate when the relative humidity drops to about 30% and will begin to dissolve at relative humidities above 84%. Piezoelectricity In 1824, Sir David Brewster demonstrated piezoelectric effects using Rochelle salts, which led to him naming the effect pyroelectricity. In 1919, Alexander McLean Nicolson worked with Rochelle salt, developing audio-related inventions like microphones and speakers at Bell Labs. References Potassium compounds Organic sodium salts Piezoelectric materials Ferroelectric materials Tartrates Food acidity regulators Food antioxidants Double salts E-number additives Deliquescent materials
Potassium sodium tartrate
[ "Physics", "Chemistry", "Materials_science" ]
658
[ "Physical phenomena", "Ferroelectric materials", "Double salts", "Salts", "Organic sodium salts", "Materials", "Electrical phenomena", "Deliquescent materials", "Piezoelectric materials", "Hysteresis", "Matter" ]
357,657
https://en.wikipedia.org/wiki/Bloch%27s%20theorem
In condensed matter physics, Bloch's theorem states that solutions to the Schrödinger equation in a periodic potential can be expressed as plane waves modulated by periodic functions. The theorem is named after the Swiss physicist Felix Bloch, who discovered the theorem in 1929. Mathematically, they are written ψ(r) = e^(ik·r) u(r), where r is position, ψ is the wave function, u is a periodic function with the same periodicity as the crystal, the wave vector k is the crystal momentum vector, e is Euler's number, and i is the imaginary unit. Functions of this form are known as Bloch functions or Bloch states, and serve as a suitable basis for the wave functions or states of electrons in crystalline solids. The description of electrons in terms of Bloch functions, termed Bloch electrons (or less often Bloch waves), underlies the concept of electronic band structures. These eigenstates are written with subscripts as ψ_nk, where n is a discrete index, called the band index, which is present because there are many different wave functions with the same k (each has a different periodic component u). Within a band (i.e., for fixed n), ψ_nk varies continuously with k, as does its energy. Also, ψ_nk is unique only up to a constant reciprocal lattice vector K, or, ψ_nk = ψ_n,(k+K). Therefore, the wave vector k can be restricted to the first Brillouin zone of the reciprocal lattice without loss of generality. Applications and consequences Applicability The most common example of Bloch's theorem is describing electrons in a crystal, especially in characterizing the crystal's electronic properties, such as electronic band structure. However, a Bloch-wave description applies more generally to any wave-like phenomenon in a periodic medium. For example, a periodic dielectric structure in electromagnetism leads to photonic crystals, and a periodic acoustic medium leads to phononic crystals. It is generally treated in the various forms of the dynamical theory of diffraction. Wave vector Suppose an electron is in a Bloch state ψ(r) = e^(ik·r) u(r), where u is periodic with the same periodicity as the crystal lattice. The actual quantum state of the electron is entirely determined by ψ, not k or u directly. This is important because k and u are not unique. Specifically, if ψ can be written as above using k, it can also be written using k + K, where K is any reciprocal lattice vector. Therefore, wave vectors that differ by a reciprocal lattice vector are equivalent, in the sense that they characterize the same set of Bloch states. The first Brillouin zone is a restricted set of values of k with the property that no two of them are equivalent, yet every possible k is equivalent to one (and only one) vector in the first Brillouin zone. Therefore, if we restrict k to the first Brillouin zone, then every Bloch state has a unique k. Therefore, the first Brillouin zone is often used to depict all of the Bloch states without redundancy, for example in a band structure, and it is used for the same reason in many calculations. When k is multiplied by the reduced Planck constant ħ, it equals the electron's crystal momentum. Related to this, the group velocity of an electron can be calculated based on how the energy of a Bloch state varies with k; for more details see crystal momentum. Detailed example For a detailed example in which the consequences of Bloch's theorem are worked out in a specific situation, see the article Particle in a one-dimensional lattice (periodic potential).
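The band picture described above can be checked numerically. The following is a minimal illustrative sketch, not part of the original article: it diagonalizes the Bloch Hamiltonian in a truncated plane-wave basis for a one-dimensional cosine potential. The lattice constant, potential strength, basis size, and units (ħ = m = 1) are all assumptions chosen purely for illustration.

```python
import numpy as np

# Minimal 1D band-structure sketch (illustrative parameters; hbar = m = 1).
# The potential U(x) = 2*U0*cos(2*pi*x/a) has Fourier components U0 at
# G = +/- 2*pi/a, so the plane-wave ("central equation") matrix couples
# waves whose wave vectors differ by one reciprocal lattice vector.
a = 1.0                      # lattice constant (assumed)
U0 = 0.5                     # potential Fourier component (assumed)
N = 7                        # plane waves per k-point: G = 2*pi*n/a, n = -3..3
G = 2 * np.pi * np.arange(-(N // 2), N // 2 + 1) / a

def bands(k, num_bands=3):
    """Lowest eigenvalues E_n(k) of H_k in the truncated plane-wave basis."""
    H = np.diag(0.5 * (k + G) ** 2)      # kinetic term (k+G)^2 / 2
    for i in range(N - 1):               # U couples G and G +/- 2*pi/a
        H[i, i + 1] = H[i + 1, i] = U0
    return np.linalg.eigvalsh(H)[:num_bands]

ks = np.linspace(-np.pi / a, np.pi / a, 101)   # first Brillouin zone
E = np.array([bands(k) for k in ks])
# E_n(k) is periodic under k -> k + 2*pi/a (up to basis-truncation error),
# so restricting k to the first Brillouin zone loses no states.
assert np.allclose(bands(0.3), bands(0.3 + 2 * np.pi / a), atol=1e-4)
print("gap at the zone edge:", E[0, 1] - E[0, 0])   # roughly 2*U0 for weak U
```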
Statement A second and equivalent way to state the theorem is the following: if ψ is an energy eigenstate of the crystal Hamiltonian, then for every Bravais lattice translation vector R, ψ(r + R) = e^(ik·R) ψ(r) for some wave vector k. Proof Using lattice periodicity Since Bloch's theorem is a statement about lattice periodicity, in this proof all the symmetries are encoded as translation symmetries of the wave function itself. Using operators In this proof all the symmetries are encoded as commutation properties of the translation operators. Using group theory Apart from the group theory technicalities, this proof is interesting because it becomes clear how to generalize the Bloch theorem for groups that are not only translations. This is typically done for space groups, which are a combination of a translation and a point group, and it is used for computing the band structure, spectrum and specific heats of crystals given a specific crystal group symmetry like FCC or BCC and possibly an extra basis. In this proof it is also possible to notice how it is key that the extra point group arises from a symmetry in the effective potential, and that it must commute with the Hamiltonian. In the generalized version of the Bloch theorem, the Fourier transform, i.e. the wave function expansion, gets generalized from a discrete Fourier transform, which is applicable only for cyclic groups and therefore translations, into a character expansion of the wave function, where the characters are given by the specific finite point group. Here it is also possible to see how the characters (as the invariants of the irreducible representations) can be treated as the fundamental building blocks instead of the irreducible representations themselves. Velocity and effective mass If we apply the time-independent Schrödinger equation to the Bloch wave function we obtain H_k u_k(r) = E_k u_k(r), with the Bloch Hamiltonian H_k = (1/2m)(−iħ∇ + ħk)² + U(r) and periodic boundary conditions u_k(r) = u_k(r + R). Given that this is defined in a finite volume, we expect an infinite family of eigenvalues; here k is a parameter of the Hamiltonian and therefore we arrive at a "continuous family" of eigenvalues E_n(k) dependent on the continuous parameter k, and thus at the basic concept of an electronic band structure. This shows how the effective momentum can be seen as composed of two parts, a standard momentum −iħ∇ and a crystal momentum ħk. More precisely, the crystal momentum is not a momentum, but it stands for the momentum in the same way as the electromagnetic momentum in the minimal coupling, and as part of a canonical transformation of the momentum. For the effective velocity we can derive v_n(k) = (1/ħ) ∇_k E_n(k). For the effective mass we can likewise derive the second derivative ∂²E_n(k)/∂k_i∂k_j. This quantity multiplied by the factor 1/ħ² defines the inverse effective mass tensor M⁻¹(k), and we can use it to write a semi-classical equation for a charge carrier in a band, a = M⁻¹(k) F, where a is an acceleration. This equation is analogous to the de Broglie wave type of approximation ħ dk/dt = F. As an intuitive interpretation, both of the previous two equations formally resemble, and are in a semi-classical analogy with, Newton's second law for an electron in an external Lorentz force. History and related equations The concept of the Bloch state was developed by Felix Bloch in 1928 to describe the conduction of electrons in crystalline solids. The same underlying mathematics, however, was also discovered independently several times: by George William Hill (1877), Gaston Floquet (1883), and Alexander Lyapunov (1892). As a result, a variety of nomenclatures are common: applied to ordinary differential equations, it is called Floquet theory (or occasionally the Lyapunov–Floquet theorem). The general form of a one-dimensional periodic potential equation is Hill's equation: d²y/dt² + f(t) y = 0, where f(t) is a periodic potential.
Specific periodic one-dimensional equations include the Kronig–Penney model and Mathieu's equation. Mathematically, Bloch's theorem is interpreted in terms of unitary characters of a lattice group, and it is applied to spectral geometry. See also Bloch oscillations Bloch wave – MoM method Electronic band structure Nearly free electron model Periodic boundary conditions Symmetries in quantum mechanics Tight-binding model Wannier function References Further reading Eponymous theorems of physics Quantum mechanics Condensed matter physics
Bloch's theorem
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,476
[ "Theorems in quantum mechanics", "Equations of physics", "Phases of matter", "Quantum mechanics", "Materials science", "Eponymous theorems of physics", "Theorems in mathematical physics", "Condensed matter physics", "Matter", "Physics theorems" ]
9,598,851
https://en.wikipedia.org/wiki/Atomic%20carbon
Atomic carbon, systematically named carbon and λ0-methane, is a colourless gaseous inorganic chemical with the chemical formula C (also written [C]). It is kinetically unstable at ambient temperature and pressure, being removed through autopolymerisation. Atomic carbon is the simplest of the allotropes of carbon, and is also the progenitor of carbon clusters. In addition, it may be considered to be the monomer of all (condensed) carbon allotropes like graphite and diamond. Nomenclature The trivial name monocarbon is the most commonly used and preferred IUPAC name. The systematic name carbon, a valid IUPAC name, is constructed according to the compositional nomenclature. However, as a compositional name, it does not distinguish between different forms of pure carbon. The systematic name λ0-methane, also a valid IUPAC name, is constructed according to the substitutive nomenclature. Along with monocarbon, this name does distinguish the titular compound, as such names are derived using structural information about the molecule. To better reflect its structure, free atomic carbon is often written as [C]. λ2-methylium (CH+) is the ion resulting from the gain of a proton by atomic carbon. Properties Amphotericity A Lewis acid can join with an electron pair of atomic carbon, and an electron pair of a Lewis base can join with atomic carbon by adduction: :[C] + M → [MC] and [C] + :L → [CL] Because of this donation or acceptance of an adducted electron pair, atomic carbon has Lewis amphoteric character. Atomic carbon has the capacity to donate up to two electron pairs to Lewis acids, or accept up to two pairs from Lewis bases. A proton can join with the atomic carbon by protonation: C + H+ → CH+ Because of this capture of the proton (H+), atomic carbon and its adducts of Lewis bases, such as water, also have Brønsted–Lowry basic character. Atomic carbon's conjugate acid is λ2-methylium (CH+): CH+ ⇌ C + H+ Aqueous solutions of adducts are, however, unstable due to hydration of the carbon centre and the λ2-methylium group to produce λ2-methanol (CHOH) or λ2-methane (CH2), or hydroxymethylium (CH2OH+) groups, respectively. H2O + C → CHOH and H2O + CH+ → CH2OH+ The λ2-methanol group in adducts can potentially isomerise to form formaldehyde, or be further hydrated to form methanediol. The hydroxymethylium group in adducts can potentially be further hydrated to form dihydroxymethylium, or be oxidised by water to form formylium (HCO+). Electromagnetic properties The electrons in atomic carbon are distributed among the atomic orbitals according to the aufbau principle to produce unique quantum states, with corresponding energy levels. The state with the lowest energy level, or ground state, is a triplet diradical state (3P0), closely followed by 3P1 and 3P2. The next two excited states that are relatively close in energy are a singlet (1D2) and a singlet diradical (1S0). The non-radical state of atomic carbon is systematically named λ2-methylidene, and the diradical states that include the ground state are named carbon(2•) or λ2-methanediyl. The 1D2 and 1S0 states lie 121.9 kJ mol−1 and 259.0 kJ mol−1 above the ground state, respectively. Transitions between these three states are formally forbidden from occurring due to the requirement of spin flipping and/or electron pairing. This means that atomic carbon phosphoresces in the near-infrared region of the electromagnetic spectrum at 981.1 nm. It can also fluoresce in the infrared and phosphoresce in the blue region at 873.0 nm and 461.9 nm, respectively, upon excitation by ultraviolet radiation.
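As a numeric cross-check of the level energies and emission wavelengths quoted above, the molar excitation energies can be converted to photon wavelengths via λ = hc·N_A/E_m. A minimal sketch, not part of the original article; the constants are standard values and the energies are taken from the text:

```python
# Convert the quoted molar excitation energies of atomic carbon to photon
# wavelengths: E_photon = E_molar / N_A, then lambda = h*c / E_photon.
N_A = 6.02214076e23      # Avogadro constant, 1/mol
h = 6.62607015e-34       # Planck constant, J s
c = 2.99792458e8         # speed of light, m/s

transitions = [("1D2 -> 3P (ground)", 121.9),
               ("1S0 -> 3P (ground)", 259.0),
               ("1S0 -> 1D2",         259.0 - 121.9)]
for name, e_kj_per_mol in transitions:
    e_photon = e_kj_per_mol * 1e3 / N_A          # J per photon
    wavelength_nm = h * c / e_photon * 1e9
    print(f"{name}: {wavelength_nm:.1f} nm")
# Prints roughly 981 nm, 462 nm, and 873 nm, matching the phosphorescence
# and fluorescence wavelengths quoted in the text.
```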
The different states of atomic carbon exhibit varying chemical behaviours. For example, reactions of the triplet radical with non-radical species generally involve abstraction, whereas reactions of the singlet non-radical involve not only abstraction, but also addition by insertion. [C]2•(3P0) + H2O → [CHOH] → [CH] + [HO] [C](1D2) + H2O → [CHOH] → CO + H2, or formaldehyde (HCHO) Production One method of synthesis, developed by Phil Shevlin, who has done the principal work in the field, is to pass a large current through two adjacent carbon rods, generating an electric arc. The way this species is made is closely related to the formation of fullerenes C60, the chief difference being that a much lower vacuum is used in atomic carbon formation. Atomic carbon is generated in the thermolysis of 5-diazotetrazole upon extrusion of 3 equivalents of dinitrogen: CN6 → :C: + 3N2 A clean source of atomic carbon can be obtained from the thermal decomposition of tantalum carbide. In the developed source, carbon is loaded into a thin-walled tantalum tube. After being sealed, it is heated by direct electric current. The dissolved carbon atoms diffuse to the outer surface of the tube and, when the temperature rises, the evaporation of atomic carbon from the surface of the tantalum tube is observed. The source provides purely carbon atoms without the presence of any additional species. Carbon suboxide decarbonylation Atomic carbon can be produced by carbon suboxide decarbonylation. In this process, carbon suboxide decomposes to produce atomic carbon and carbon monoxide according to the equation: C3O2 → 2 CO + [C] The process involves dicarbon monoxide as an intermediate, and occurs in two steps. Photolytic far ultraviolet radiation is needed for both decarbonylations. C3O2 → [CCO] + CO [CCO] → CO + [C] Uses Normally, a sample of atomic carbon exists as a mixture of excited states in addition to the ground state in thermodynamic equilibrium. Each state contributes differently to the reaction mechanisms that can take place. A simple test used to determine which state is involved is to make use of the diagnostic reaction of the triplet state with O2: if the reaction yield is unchanged, it indicates that the singlet state is involved. The diradical ground state normally undergoes abstraction reactions. Atomic carbon has been used to generate "true" carbenes by the abstraction of oxygen atoms from carbonyl groups: R2C=O + :C: → R2C: + CO Carbenes formed in this way will exhibit true carbenic behaviour. Carbenes prepared by other methods, such as from diazo compounds, might exhibit properties better attributed to the diazo precursor used to make the carbene (which can mimic carbene behaviour) rather than to the carbene itself. This is important for a mechanistic understanding of true carbene behaviour. Reactions As atomic carbon is an electron-deficient species, it spontaneously autopolymerises in its pure form, or converts to an adduct upon treatment with a Lewis acid or base. Oxidation of atomic carbon gives carbon monoxide, whereas reduction gives λ2-methane. Non-metals, including oxygen, strongly attack atomic carbon, forming divalent carbon compounds: 2 [C] + O2 → 2 CO Atomic carbon is highly reactive; most reactions are very exothermic. They are generally carried out in the gas phase at liquid nitrogen temperatures (77 K). Typical reactions with organic compounds include: Insertion into a C–H bond in alkanes to form a carbene Deoxygenation of carbonyl groups in ketones and aldehydes to form a carbene, 2-butanone forming 2-butanylidene.
Insertion into carbon–carbon double bonds to form a cyclopropylidene which undergoes ring-opening, a simple example being insertion into an alkene to form a cumulene. With water, insertion into the O–H bond forms the carbene H–C–OH, which rearranges to formaldehyde, HCHO. References Further reading Allotropes of carbon
Atomic carbon
[ "Chemistry" ]
1,730
[ "Allotropes of carbon", "Allotropes" ]
9,599,147
https://en.wikipedia.org/wiki/Melanotroph
A melanotroph (or melanotrope) is a cell in the pituitary gland that generates melanocyte-stimulating hormone (α‐MSH) from its precursor pro-opiomelanocortin. Chronic stress can induce the secretion of α‐MSH in melanotrophs and lead to their subsequent degeneration. See also Chromophobe cell Chromophil Acidophil cell Basophil cell Oxyphil cell Oxyphil cell (parathyroid) Pituitary gland Neuroendocrine cell List of distinct cell types in the adult human body References Endocrine system
Melanotroph
[ "Chemistry", "Biology" ]
134
[ "Endocrine system", "Biotechnology stubs", "Biochemistry stubs", "Organ systems", "Biochemistry" ]
9,606,667
https://en.wikipedia.org/wiki/Pervious%20concrete
Pervious concrete (also called porous concrete, permeable concrete, no fines concrete and porous pavement) is a special type of concrete with a high porosity used for concrete flatwork applications that allows water from precipitation and other sources to pass directly through, thereby reducing the runoff from a site and allowing groundwater recharge. Pervious concrete is made using large aggregates with little to no fine aggregates. The concrete paste then coats the aggregates and allows water to pass through the concrete slab. Pervious concrete is traditionally used in parking areas, areas with light traffic, residential streets, pedestrian walkways, and greenhouses. It is an important application for sustainable construction and is one of many low impact development techniques used by builders to protect water quality. History Pervious concrete was first used in the 1800s in Europe as pavement surfacing and load bearing walls. Cost efficiency was the main motive due to a decreased amount of cement. It became popular again in the 1920s for two storey homes in Scotland and England. It became increasingly viable in Europe after WWII due to the scarcity of cement. It did not become as popular in the US until the 1970s. In India it became popular in 2000. Stormwater management The proper utilization of pervious concrete is a recognized Best Management Practice by the U.S. Environmental Protection Agency (EPA) for providing first flush pollution control and stormwater management. As regulations further limit stormwater runoff, it is becoming more expensive for property owners to develop real estate, due to the size and expense of the necessary drainage systems. Pervious concrete lowers the NRCS Runoff Curve Number or CN by retaining stormwater on site. This allows the planner/designer to achieve pre-development stormwater goals for pavement intense projects. Pervious concrete reduces the runoff from paved areas, which reduces the need for separate stormwater retention ponds and allows the use of smaller capacity storm sewers. This allows property owners to develop a larger area of available property at a lower cost. Pervious concrete also naturally filters storm water and can reduce pollutant loads entering into streams, ponds, and rivers. Pervious concrete functions like a storm water infiltration basin and allows the storm water to infiltrate the soil over a large area, thus facilitating recharge of precious groundwater supplies locally. All of these benefits lead to more effective land use. Pervious concrete can also reduce the impact of development on trees. A pervious concrete pavement allows the transfer of both water and air to root systems to help trees flourish even in highly developed areas. Properties Pervious concrete consists of cement, coarse aggregate (size should be 9.5 mm to 12.5 mm) and water with little to no fine aggregates. The addition of a small amount of sand will increase the strength. The mixture has a water-to-cement ratio of 0.28 to 0.40 with a void content of 15 to 25 percent. The correct quantity of water in the concrete is critical. A low water to cement ratio will increase the strength of the concrete, but too little water may cause surface failure. A proper water content gives the mixture a wet-metallic appearance. As this concrete is sensitive to water content, the mixture should be field checked. Entrained air may be measured by a Rapid Air system, where the concrete is stained black and sections are analyzed under a microscope. 
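The stormwater benefit described earlier (lowering the NRCS Runoff Curve Number) can be illustrated with the standard NRCS runoff equation, Q = (P − 0.2S)²/(P + 0.8S) with S = 1000/CN − 10 in inches. A minimal sketch, not from the article; the storm depth and both CN values are illustrative assumptions:

```python
def runoff_inches(P, CN):
    """NRCS curve-number runoff; P (storm depth) and result in inches."""
    S = 1000.0 / CN - 10.0      # potential maximum retention
    Ia = 0.2 * S                # initial abstraction
    if P <= Ia:
        return 0.0              # small storms produce no runoff
    return (P - Ia) ** 2 / (P + 0.8 * S)

P = 3.0  # storm depth in inches (assumed)
for label, CN in [("conventional pavement", 98), ("pervious concrete", 80)]:
    print(f"{label} (CN={CN}): runoff = {runoff_inches(P, CN):.2f} in")
# Lowering the curve number sharply reduces the runoff volume that storm
# sewers and retention ponds must handle.
```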
A common flatwork form has riser strips on top such that the screed is 3/8–1/2 inches (9 to 12 mm) above final pavement elevation. Mechanical screeds are preferable to manual ones. The riser strips are removed to guide compaction. Immediately after screeding, the concrete is compacted to improve the bond and smooth the surface. Excessive compaction of pervious concrete results in higher compressive strength, but lower porosity (and thus lower permeability). Jointing varies little from that of other concrete slabs. Joints are tooled with a rolling jointing tool prior to curing or saw cut after curing. Curing consists of covering the concrete with 6 mil plastic sheeting within 20 minutes of concrete discharge. However, this contributes a substantial amount of waste to landfills. Alternatively, preconditioned absorptive lightweight aggregate as well as internal curing admixture (ICA) have been used to effectively cure pervious concrete without waste generation. Testing and inspection Pervious concrete has a common strength of though strengths up to can be reached. There is no standardized test for compressive strength. Acceptance is based on the unit weight of a sample of poured concrete using ASTM standard no. C1688. An acceptable tolerance for the density is plus or minus of the design density. Slump and air content tests are not applicable to pervious concrete because of its unique composition. The designer of a storm water management plan should ensure that the pervious concrete is functioning properly through visual observation of its drainage characteristics prior to opening of the facility. Cold climates Concerns over resistance to the freeze-thaw cycle have limited the use of pervious concrete in cold weather environments. The rate of freezing in most applications is dictated by the local climate. Entrained air may help protect the paste as it does in regular concrete. The addition of a small amount of fine aggregate to the mixture increases the durability of the pervious concrete. Avoiding saturation during the freeze cycle is the key to the longevity of the concrete. Relatedly, having a well-prepared 8 to 24 inch (200 to 600 mm) sub-base and good drainage preventing water stagnation will reduce the possibility of freeze-thaw damage. Using permeable concrete for pavements can make them safer for pedestrians in the winter because water will not settle on the surface and freeze, leading to dangerously icy conditions. Roads can also be made safer for cars by the use of permeable concrete, as the reduction in the formation of standing water will reduce the possibility of aquaplaning, and porous roads will also reduce tire noise. Maintenance To prevent reduction in permeability, pervious concrete needs to be cleaned regularly. Cleaning can be accomplished through wetting the surface of the concrete and vacuum sweeping. See also References Further reading US EPA. Office of Research and Development. "Research Highlights: Porous Pavements: Managing Rainwater Runoff." October 17, 2008. External links National Pervious Concrete Pavement Association Pervious Concrete Design Resources American Concrete Institute Building materials Concrete Environmental engineering
Pervious concrete
[ "Physics", "Chemistry", "Engineering" ]
1,304
[ "Structural engineering", "Building engineering", "Chemical engineering", "Architecture", "Construction", "Materials", "Civil engineering", "Environmental engineering", "Concrete", "Matter", "Building materials" ]
9,609,051
https://en.wikipedia.org/wiki/Thiamine%20triphosphate
Thiamine triphosphate (ThTP) is a biomolecule found in most organisms including bacteria, fungi, plants and animals. Chemically, it is the triphosphate derivative of the vitamin thiamine. Function It has been proposed that ThTP has a specific role in nerve excitability, but this has never been confirmed, and recent results suggest that ThTP probably plays a role in cell energy metabolism. Low or absent levels of thiamine triphosphate have been found in Leigh's disease. In E. coli, ThTP is accumulated in the presence of glucose during amino acid starvation. On the other hand, suppression of the carbon source leads to the accumulation of adenosine thiamine triphosphate (AThTP). Metabolism It has been shown that in brain ThTP is synthesized in mitochondria by a chemiosmotic mechanism, perhaps similar to that of ATP synthase. In mammals, ThTP is hydrolyzed to thiamine pyrophosphate (ThDP) by a specific thiamine-triphosphatase. It can also be converted into ThDP by thiamine-diphosphate kinase. History Thiamine triphosphate (ThTP) was chemically synthesized in 1948, at a time when the only organic triphosphate known was ATP. The first claim of the existence of ThTP in living organisms was made in rat liver, followed by baker's yeast. Its presence was later confirmed in rat tissues and in plant germs, but not in seeds, where thiamine was essentially unphosphorylated. In all those studies, ThTP was separated from other thiamine derivatives using a paper chromatographic method, followed by oxidation to fluorescent thiochrome compounds with ferricyanide in alkaline solution. This method is at best semi-quantitative, and the development of liquid chromatographic methods suggested that ThTP represents far less than 10% of total thiamine in animal tissues. References Biomolecules Organophosphates Thiazoles Thiamine Pyrimidines Phosphate esters
Thiamine triphosphate
[ "Chemistry", "Biology" ]
441
[ "Natural products", "Organic compounds", "Structural biology", "Biomolecules", "Biochemistry", "Molecular biology" ]
9,610,679
https://en.wikipedia.org/wiki/Morse%E2%80%93Palais%20lemma
In mathematics, the Morse–Palais lemma is a result in the calculus of variations and the theory of Hilbert spaces. Roughly speaking, it states that a smooth enough function near a critical point can be expressed as a quadratic form after a suitable change of coordinates. The Morse–Palais lemma was originally proved in the finite-dimensional case by the American mathematician Marston Morse, using the Gram–Schmidt orthogonalization process. This result plays a crucial role in Morse theory. The generalization to Hilbert spaces is due to Richard Palais and Stephen Smale. Statement of the lemma Let (H, ⟨·,·⟩) be a real Hilbert space, and let U be an open neighbourhood of the origin in H. Let f : U → R be a (k + 2)-times continuously differentiable function with k ≥ 1; that is, f ∈ C^(k+2)(U; R). Assume that f(0) = 0 and that 0 is a non-degenerate critical point of f; that is, the second derivative D²f(0) defines an isomorphism of H with its continuous dual space H* by x ↦ D²f(0)(x, ·). Then there exists a subneighbourhood V of 0 in U, a diffeomorphism φ : V → V that is C^k with C^k inverse, and an invertible symmetric operator A : H → H, such that f(x) = ⟨Aφ(x), φ(x)⟩ for all x ∈ V. Corollary Let f : U → R be C^(k+2) such that 0 is a non-degenerate critical point. Then there exists a C^k-with-C^k-inverse diffeomorphism ψ : V → V and an orthogonal decomposition H = G ⊕ G⊥, such that, if one writes ψ(x) = y + z with y ∈ G and z ∈ G⊥, then f(ψ(x)) = ⟨y, y⟩ − ⟨z, z⟩ for all x ∈ V. See also References Calculus of variations Hilbert spaces Lemmas in analysis
Morse–Palais lemma
[ "Physics", "Mathematics" ]
264
[ "Theorems in mathematical analysis", "Quantum mechanics", "Lemmas in mathematical analysis", "Hilbert spaces", "Lemmas" ]
16,350,686
https://en.wikipedia.org/wiki/CGNS
CGNS stands for CFD General Notation System. It is a general, portable, and extensible standard for the storage and retrieval of CFD analysis data. It consists of a collection of conventions, and free and open software implementing those conventions. It is self-descriptive, cross-platform (also termed platform- or machine-independent), documented, and administered by an international steering committee. It is also an American Institute of Aeronautics and Astronautics (AIAA) recommended practice. The CGNS project originated in 1994 as a joint effort between Boeing and NASA, and has since grown to include many other contributing organizations worldwide. In 1999, control of CGNS was completely transferred to a public forum known as the CGNS Steering Committee. This Committee is made up of international representatives from government and private industry. The CGNS system consists of two parts: (1) a standard format (known as the Standard Interface Data Structure, or SIDS) for recording the data, and (2) software that reads, writes, and modifies data in that format. The format is a conceptual entity established by the documentation; the software is a physical product supplied to enable developers to access and produce data recorded in that format. The CGNS system is designed to facilitate the exchange of data between sites and applications, and to help stabilize the archiving of aerodynamic data. The data are stored in a compact, binary format and are accessible through a complete and extensible library of functions. The application programming interface (API) is cross-platform and can be easily implemented in C, C++, Fortran and Fortran 90 applications. A MEX interface, mexCGNS, also exists for calling the CGNS API in the high-level programming languages MATLAB and GNU Octave; an object-oriented interface, CGNS++, and a Python module, pyCGNS, exist as well. The principal target of CGNS is data normally associated with compressible viscous flow (i.e., the Navier-Stokes equations), but the standard is also applicable to subclasses such as Euler and potential flows. The CGNS standard includes the following types of data: Structured, unstructured, and hybrid grids Flow solution data, which may be nodal, cell-centered, face-centered, or edge-centered Multizone interface connectivity, both abutting and overset Boundary conditions Flow equation descriptions, including the equation of state, viscosity and thermal conductivity models, turbulence models, multi-species chemistry models, and electromagnetics Time-dependent flow, including moving and deforming grids Dimensional units and nondimensionalization information Reference states Convergence history Association to CAD geometry definitions User-defined data Much of the standard and the software is applicable to computational field physics in general. Disciplines other than fluid dynamics would need to augment the data definitions and storage conventions, but the fundamental database software, which provides platform independence, is not specific to fluid dynamics. CGNS is self-describing, allowing an application to interpret the structure and contents of a file without any outside information.
CGNS can make use of either of two different low-level data formats: an internally developed and supported method called Advanced Data Format (ADF), based on a common file format system previously in use at McDonnell Douglas; and HDF5, a widely used hierarchical data format. Tools and Guides In addition to the CGNS library itself, the following tools and guides are available from GitHub: CGNSTools - Includes ADFVIEWER, a browser and editor for CGNS files Users Guide code - small practical example CGNS programs written in both Fortran and C F77 Examples - example computer programs written in Fortran that demonstrate all CGNS functionality HDFql enables users to manage CGNS/HDF5 files through a high-level language (similar to SQL) in C, C++, Java, Python, C#, Fortran and R. See also Common Data Format (CDF) EAS3 (Ein-Ausgabe-System) FITS (Flexible Image Transport System) GRIB (GRIdded Binary) Hierarchical Data Format (HDF) NetCDF (Network Common Data Form) Tecplot binary files XMDF (eXtensible Model Data Format) External links CGNS home page CGNS Mid Level Library MEX interface of CGNS for MATLAB and Octave pyCGNS CGNS 4.5 Release notes Computer file formats Computational fluid dynamics C (programming language) libraries
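Because the HDF5 flavour of CGNS stores everything as an ordinary HDF5 tree, the self-describing node hierarchy can be browsed with generic HDF5 tooling rather than the CGNS mid-level library. A minimal sketch using the Python h5py package; the filename is a placeholder, and the printed attributes are whatever raw metadata the file carries:

```python
import h5py

# Walk the raw HDF5 node tree of a CGNS file (HDF5 flavour).
# "grid.cgns" is a placeholder name; any CGNS/HDF5 file will do.
with h5py.File("grid.cgns", "r") as f:
    def show(name, obj):
        # Each node is an HDF5 group or dataset; CGNS keeps its SIDS
        # metadata (node name, label, data type) in HDF5 attributes.
        kind = "dataset" if isinstance(obj, h5py.Dataset) else "group"
        print(f"{name} [{kind}] attrs={sorted(obj.attrs)}")
    f.visititems(show)
```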
CGNS
[ "Physics", "Chemistry" ]
926
[ "Computational fluid dynamics", "Fluid dynamics", "Computational physics" ]
12,073,631
https://en.wikipedia.org/wiki/Satraplatin
Satraplatin (INN, codenamed JM216) is a platinum-based antineoplastic agent that was under investigation as a treatment for patients with advanced prostate cancer who have failed previous chemotherapy. It has not yet received approval from the U.S. Food and Drug Administration. First mentioned in the medical literature in 1993, satraplatin is the first orally active platinum-based chemotherapeutic drug; other available platinum analogues—cisplatin, carboplatin, and oxaliplatin—must be given intravenously. The drug has also been used in the treatment of lung and ovarian cancers. The proposed mode of action is that the compound binds to the DNA of cancer cells, rendering them incapable of dividing. Mode of action The proposed mode of action is that the compound binds to the DNA of cancer cells, rendering them incapable of dividing. In addition, some cisplatin-resistant tumour cell lines were sensitive to satraplatin treatment in vitro. This may be due to an altered mechanism of cellular uptake (satraplatin enters by passive diffusion instead of the active transport used by e.g. cisplatin). Clinical development Satraplatin has been developed for the treatment of men with castrate-refractory, metastatic prostate cancer for several reasons: its relative ease of administration, potential lack of cross-resistance with other platinum agents, clinical benefits seen in early studies of prostate cancer, and an unmet need in this patient population after docetaxel failure at that time. The only Phase III trial with satraplatin (SPARC Trial) was conducted in pretreated metastatic castrate-resistant prostate cancer (CRPC), revealing a 33% reduction in risk of progression or death versus a placebo. However, no difference in overall survival was observed. An FDA- or EMA-approved indication has not yet been achieved. Satraplatin appears to have clinical activity against a variety of malignancies such as breast, prostate and lung cancer. Especially in combination with radiotherapy, it appears to have good efficacy for lung and squamous head and neck cancer. In a phase I study from Vanderbilt University, seven of eight patients with squamous cell carcinoma of the head and neck, who were treated with 10 to 30 mg of satraplatin thrice a week concurrently with radiotherapy, achieved a complete response. Side effects Satraplatin is similar in toxicity profile to carboplatin, with no nephrotoxicity, neurotoxicity, or ototoxicity observed. Moreover, it is much better tolerated than cisplatin and does not require hydration for each dose. A somewhat more intense hematotoxicity is observed. Side effects include anemia, diarrhea, constipation, nausea or vomiting, increased risk of infection, and bruising. Possible risks and complications Thrombus: Cancer can increase the risk of developing a blood clot, and chemotherapy may increase this risk further. A blood clot may cause symptoms such as pain, redness and swelling in a leg, or breathlessness and chest pain. Most clots can be treated with drugs that thin the blood. Fertility: Satraplatin can affect a person's ability to become pregnant and may cause sterility in men. Contraception: Satraplatin may harm a developing baby. It is important to use effective contraception while taking this drug and for at least a few months afterwards. Detailed mechanism of action Many human tumors including testicular, bladder, lung, head, neck, and cervical cancers have been treated with platinum compounds.
That all of the marketed platinum analogues must be administered via intravenous infusion is one of the main disadvantages of these platinum compounds, alongside their severe, dose-limiting effects. An acquired resistance to cisplatin/carboplatin in ovarian cancer was discovered, due to insufficient amounts of platinum reaching the target DNA or failure to achieve cell death. These drawbacks led to the development of the next generation of platinum analogues, such as satraplatin. Satraplatin is a prodrug, meaning it is metabolized in the body and transformed into its working form. The two polar acetate groups on satraplatin increase the drug's bioavailability, which in turn allows a large fraction of the administered dose to make it into the bloodstream, where metabolism begins. Once the molecule makes it to the bloodstream, the drug loses its acetate groups. At this point the drug is structurally similar to cisplatin, with the exception of one cyclohexylamine group in place of an amine group. Since the drug is now structurally similar to cisplatin, its mechanism of action is also very similar. The chlorine atoms are displaced and the platinum atom in the drug binds to guanine residues in DNA. This unfortunately happens not only to cancer cells but to other normally functioning cells as well, causing some of the harsh side effects. By binding to guanine residues, satraplatin inhibits DNA replication and transcription, which leads to subsequent apoptosis. Where satraplatin differs is its cyclohexylamine group. In cisplatin the two amine groups are symmetrical, while satraplatin's cyclohexylamine makes it asymmetrical, which contributes to some of the drug's special properties. A large problem with cisplatin and other platinum-based anti-cancer drugs is that the body can develop resistance to them. A major way that this happens is through a mammalian nucleotide excision repair pathway which repairs damaged DNA. However, some studies show that satraplatin, compared to other platinum anti-cancer drugs, can be elusive and is not recognized by the DNA repair proteins, due to the different adducts on the molecule (cyclohexylamine). Since satraplatin is not recognized by the DNA repair proteins, the DNA remains damaged, the DNA cannot be replicated, the cell dies, and the problem of resistance is mitigated. In vitro experiments have shown that satraplatin is more effective than cisplatin in well-defined hematological cancers. MTAP deficiency and Bcl-2 gene mutation were identified as biomarkers of enhanced efficacy. References Coordination complexes Platinum(IV) compounds Acetates Platinum-based antineoplastic agents Ammine complexes
Satraplatin
[ "Chemistry" ]
1,304
[ "Coordination chemistry", "Coordination complexes" ]
12,074,874
https://en.wikipedia.org/wiki/Industry%20Technology%20Facilitator
Industry Technology Facilitator (ITF) is an oil industry trade organisation established in 1999. It is owned by 30 global oil majors and oilfield service companies. The group has offices in Aberdeen, UK, Houston, USA, Abu Dhabi, UAE, Perth, Australia and Kuala Lumpur, Malaysia. Members ITF currently has a membership of 30 global operators and service companies including: Aramco Services Company BG Group BP Chevron ConocoPhillips DONG Energy ENI EnQuest ExxonMobil GE Oil and Gas Kuwait Oil Company Maersk Marathon Oil Corporation Nexen Petrofac Petronas Petroleum Development Oman Premier Oil PTTEP QatarEnergy Schlumberger Shell Siemens Statoil Technip Total Tullow Oil Weatherford Wintershall Wood Group Woodside Energy Awards and recognition Alick Buchanan Smith Spirit of Enterprise: 2009 Winner – ITF Investors in People - Gold Scottish Offshore Achievement Awards: 2009 Rising Star Winner - Ryan McPherson IoD Scotland - Emerging Director Finalist: Neil Poxon The topics addressed by ITF-sponsored technologies include seismic resolution, complex reservoirs, cost-effective drilling and intervention, subsea, maximising production, integrity management, and environmental performance. References External links ITF official website ITF Single Strategy Offshore Industry Energy KTN Petroleum organizations Organizations established in 1999 Energy business associations Organisations based in Scotland 1999 establishments in Scotland
Industry Technology Facilitator
[ "Chemistry", "Engineering" ]
274
[ "Petroleum", "Petroleum organizations", "Energy organizations" ]
1,616,027
https://en.wikipedia.org/wiki/Neon-burning%20process
The neon-burning process is a set of nuclear fusion reactions that take place in evolved massive stars with at least 8 solar masses. Neon burning requires high temperatures and densities (around 1.2×10^9 K or 100 keV and 4×10^9 kg/m3). At such high temperatures photodisintegration becomes a significant effect, so some neon nuclei decompose, absorbing 4.73 MeV and releasing alpha particles. This free helium nucleus can then fuse with neon to produce magnesium, releasing 9.316 MeV: 20Ne + γ → 16O + 4He, followed by 20Ne + 4He → 24Mg + γ. Alternatively: 20Ne + n → 21Ne + γ, then 21Ne + 4He → 24Mg + n, where the neutron consumed in the first step is regenerated in the second. A secondary reaction causes helium to fuse with magnesium to produce silicon: 24Mg + 4He → 28Si + γ Contraction of the core leads to an increase of temperature, allowing neon to fuse directly as follows: 20Ne + 20Ne → 16O + 24Mg Neon burning takes place after carbon burning has consumed all carbon in the core and built up a new oxygen–neon–sodium–magnesium core. The core ceases producing fusion energy and contracts. This contraction increases density and temperature up to the ignition point of neon burning. The increased temperature around the core allows carbon to burn in a shell, and there will be shells burning helium and hydrogen outside. During neon burning, oxygen and magnesium accumulate in the central core while neon is consumed. After a few years the star consumes all its neon and the core ceases producing fusion energy and contracts. Again, gravitational pressure takes over and compresses the central core, increasing its density and temperature until the oxygen-burning process can start. References External links Arnett, W. D. Advanced evolution of massive stars. V – Neon burning / Astrophysical Journal, vol. 193, Oct. 1, 1974, pt. 1, p. 169–176. Nucleosynthesis
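The quoted energy absorptions and releases follow from the atomic mass differences of the participating nuclides. A minimal numeric check, not part of the original article; the masses are in unified atomic mass units, rounded from standard tables:

```python
# Q-values for the neon-burning reactions from atomic mass differences,
# Q = (mass_in - mass_out) * 931.494 MeV/u.
u_to_MeV = 931.494
m = {"Ne20": 19.9924402, "O16": 15.9949146,
     "He4": 4.0026033, "Mg24": 23.9850417}

q_photo = (m["Ne20"] - m["O16"] - m["He4"]) * u_to_MeV    # 20Ne(gamma,alpha)16O
q_alpha = (m["Ne20"] + m["He4"] - m["Mg24"]) * u_to_MeV   # 20Ne(alpha,gamma)24Mg

print(f"20Ne + gamma -> 16O + 4He: Q = {q_photo:+.2f} MeV")   # about -4.73 (absorbed)
print(f"20Ne + 4He -> 24Mg + gamma: Q = {q_alpha:+.2f} MeV")  # about +9.32 (released)
```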
Neon-burning process
[ "Physics", "Chemistry", "Astronomy" ]
497
[ "Nuclear fission", "Astronomy stubs", "Astrophysics", "Nucleosynthesis", "Stellar astronomy stubs", "Nuclear chemistry stubs", "Nuclear physics", "Nuclear fusion" ]
1,616,141
https://en.wikipedia.org/wiki/Topological%20module
In mathematics, a topological module is a module over a topological ring such that scalar multiplication and addition are continuous. Examples A topological vector space is a topological module over a topological field. An abelian topological group can be considered as a topological module over Z, where Z is the ring of integers with the discrete topology. A topological ring is a topological module over each of its subrings. A more complicated example is the I-adic topology on a ring and its modules. Let I be an ideal of a ring R. The sets of the form x + I^n for all x ∈ R and all positive integers n form a base for a topology on R that makes R into a topological ring. Then for any left R-module M, the sets of the form x + I^n M for all x ∈ M and all positive integers n form a base for a topology on M that makes M into a topological module over the topological ring R. See also References Abstract algebra Topology Topological algebra Topological groups
Topological module
[ "Physics", "Mathematics" ]
171
[ "Algebra stubs", "Abstract algebra", "Space (mathematics)", "Topological algebra", "Topological spaces", "Fields of abstract algebra", "Topology", "Space", "Topology stubs", "Geometry", "Topological groups", "Spacetime", "Algebra" ]
1,616,775
https://en.wikipedia.org/wiki/Dissociation%20%28chemistry%29
Dissociation in chemistry is a general process in which molecules (or ionic compounds such as salts, or complexes) separate or split into other things such as atoms, ions, or radicals, usually in a reversible manner. For instance, when an acid dissolves in water, a covalent bond between an electronegative atom and a hydrogen atom is broken by heterolytic fission, which gives a proton (H+) and a negative ion. Dissociation is the opposite of association or recombination. Dissociation constant For reversible dissociations in a chemical equilibrium AB <=> A + B the dissociation constant Kd is the ratio of dissociated to undissociated compound, Kd = [A][B]/[AB], where the brackets denote the equilibrium concentrations of the species. Dissociation degree The dissociation degree is the fraction of original solute molecules that have dissociated. It is usually indicated by the Greek symbol α. More accurately, degree of dissociation refers to the amount of solute dissociated into ions or radicals per mole. In case of very strong acids and bases, the degree of dissociation will be close to 1. Less powerful acids and bases will have a lesser degree of dissociation. There is a simple relationship between this parameter and the van 't Hoff factor i. If the solute substance dissociates into n ions, then i = 1 + α(n − 1). For instance, for the following dissociation KCl <=> K+ + Cl- we have n = 2, so i = 1 + α. Salts The dissociation of salts by solvation in a solution, such as water, means the separation of the anions and cations. The salt can be recovered by evaporation of the solvent. An electrolyte refers to a substance that contains free ions and can be used as an electrically conductive medium. Most of the solute does not dissociate in a weak electrolyte, whereas in a strong electrolyte a higher ratio of solute dissociates to form free ions. A weak electrolyte is a substance whose solute exists in solution mostly in the form of molecules (which are said to be "undissociated"), with only a small fraction in the form of ions. Simply because a substance does not readily dissolve does not make it a weak electrolyte. Acetic acid (CH3COOH) and ammonium (NH4+) are good examples. Acetic acid is extremely soluble in water, but most of the compound dissolves into molecules, rendering it a weak electrolyte. Weak bases and weak acids are generally weak electrolytes. In an aqueous solution there will be some CH3COOH and some CH3COO- and H+. A strong electrolyte is a solute that exists in solution completely or nearly completely as ions. Again, the strength of an electrolyte is defined as the percentage of solute that is ions, rather than molecules. The higher the percentage, the stronger the electrolyte. Thus, even if a substance is not very soluble, but does dissociate completely into ions, the substance is defined as a strong electrolyte. Similar logic applies to a weak electrolyte. Strong acids and bases are good examples, such as HCl and NaOH. These will all exist as ions in an aqueous medium. Gases The degree of dissociation in gases is denoted by the symbol α, where α refers to the percentage of gas molecules which dissociate. Various relationships between Kp and α exist depending on the stoichiometry of the equation. The example of dinitrogen tetroxide (N2O4) dissociating to nitrogen dioxide (NO2) will be taken. If the initial concentration of dinitrogen tetroxide is 1 mole per litre, this will decrease by α at equilibrium, giving, by stoichiometry, 2α moles of NO2. The equilibrium constant (in terms of pressure) is given by the equation Kp = p(NO2)^2 / p(N2O4), where p represents the partial pressure.
Acids in aqueous solution

The reaction of an acid in water solvent is often described as a dissociation

HA <=> H+ + A-

where HA is a proton acid such as acetic acid, CH3COOH. The double arrow means that this is an equilibrium process, with dissociation and recombination occurring at the same time. This implies that the acid dissociation constant is

Ka = [H+][A-] / [HA]

However, a more explicit description is provided by the Brønsted–Lowry acid–base theory, which specifies that the proton H+ does not exist as such in solution but is instead accepted by (bonded to) a water molecule to form the hydronium ion H3O+. The reaction can therefore be written as

HA + H2O <=> H3O+ + A-

and is better described as an ionization, or formation of ions (for the case when HA has no net charge). The equilibrium constant is then

Ka = [H3O+][A-] / [HA]

where [H2O] is not included because in dilute solution the solvent is essentially a pure liquid with a thermodynamic activity of one. Ka is variously named a dissociation constant, an acid ionization constant, an acidity constant or an ionization constant. It serves as an indicator of acid strength: stronger acids have a higher Ka value (and a lower pKa value).

Fragmentation

Fragmentation of a molecule can take place by a process of heterolysis or homolysis.

Receptors

Receptors are proteins that bind small ligands. The dissociation constant Kd is used as an indicator of the affinity of the ligand for the receptor: the higher the affinity of the ligand for the receptor, the lower the Kd value (and the higher the pKd value).

See also
Bond-dissociation energy
Photodissociation, dissociation of molecules by photons (light, gamma rays, x-rays)
Radiolysis, dissociation of molecules by ionizing radiation
Thermal decomposition

References

Chemical processes
Equilibrium chemistry
Dissociation (chemistry)
[ "Chemistry" ]
1,357
[ "Chemical process engineering", "nan", "Chemical processes", "Equilibrium chemistry" ]
1,616,845
https://en.wikipedia.org/wiki/Aerated%20lagoon
An aerated lagoon (or aerated pond) is a simple wastewater treatment system consisting of a pond with artificial aeration to promote the biological oxidation of wastewaters. There are many other aerobic biological processes for treatment of wastewaters, for example activated sludge, trickling filters, rotating biological contactors and biofilters. They all have in common the use of oxygen (or air) and microbial action to reduce the pollutants in wastewaters.

Types

Suspension mixed lagoons, where sufficient energy is provided by the aeration equipment to keep the sludge in suspension.
Facultative lagoons, where insufficient energy is provided by the aeration equipment to keep the sludge in suspension, so solids settle to the lagoon floor. The biodegradable solids in the settled sludge then degrade as in an anaerobic lagoon.

Suspension mixed lagoons

Suspension mixed lagoons are flow-through activated sludge systems in which the effluent has the same composition as the mixed liquor in the lagoon. Typically the sludge will have a residence time, or sludge age, of 1 to 5 days. This means that relatively little chemical oxygen demand (COD) is removed and the effluent is therefore unacceptable for discharge into receiving waters. The objective of the lagoon is therefore to act as a biologically assisted flocculator which converts the soluble biodegradable organics in the influent to a biomass that is able to settle as a sludge. Usually the effluent is then passed to a second pond where the sludge can settle. The effluent, with a low chemical oxygen demand, can then be drawn off from the top, while the sludge accumulates on the floor and undergoes anaerobic stabilisation.

Methods of aerating lagoons or basins

There are many methods for aerating a lagoon or basin:
Motor-driven submerged or floating jet aerators
Motor-driven floating surface aerators
Motor-driven fixed-in-place surface aerators
Injection of compressed air through submerged diffusers

Floating surface aerators

Ponds or basins using floating surface aerators achieve 80 to 90% removal of BOD with retention times of 1 to 10 days. The ponds or basins may range in depth from 1.5 to 5.0 meters. In a surface-aerated system, the aerators provide two functions: they transfer into the basins the air required by the biological oxidation reactions, and they provide the mixing required for dispersing the air and for contacting the reactants (that is, oxygen, wastewater and microbes). Typically, floating high-speed surface aerators are rated to deliver the equivalent of 1 to 1.2 kg O2/kWh. However, they do not provide as good mixing as is normally achieved in activated sludge systems, so aerated basins do not achieve the same performance level as activated sludge units.

With low-speed surface aerators, the Standard Oxygen Transfer Efficiency (SOTE) is higher thanks to better mixing capacity. The mixing capacity of an impeller depends strongly on the impeller diameter, and low-speed surface aerators have large-diameter impellers. The SOTE for low-speed surface aerators is therefore about 2 to 2.5 kg O2/kWh. This is why low-speed surface aerators are mostly used in sewage or industrial treatment, where treatment plants are larger and saving energy becomes very attractive.

Biological oxidation processes are sensitive to temperature: between 0 °C and 40 °C, the rate of biological reactions increases with temperature. Most surface-aerated vessels operate at between 4 °C and 32 °C.
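As a rough illustration of how the aeration-efficiency figures above are used, the Python sketch below estimates the aerator power needed to supply a given daily oxygen demand. This is a simplified sizing sketch added for illustration only; the BOD load, the oxygen-demand ratio of 1.2 kg O2 per kg BOD, and the operating-hours figure are assumed example values, not data from the article.

```python
def aerator_power_kw(bod_load_kg_per_day: float,
                     o2_per_kg_bod: float = 1.2,
                     transfer_kg_o2_per_kwh: float = 1.1,
                     hours_per_day: float = 24.0) -> float:
    """Estimate average aerator power (kW) for an aerated lagoon.

    bod_load_kg_per_day     -- BOD entering the lagoon (kg/day)
    o2_per_kg_bod           -- assumed oxygen demand per kg BOD removed
    transfer_kg_o2_per_kwh  -- aerator rating; ~1.0-1.2 for high-speed,
                               ~2.0-2.5 for low-speed surface aerators
    hours_per_day           -- aerator operating hours per day
    """
    o2_needed = bod_load_kg_per_day * o2_per_kg_bod        # kg O2/day
    energy_kwh = o2_needed / transfer_kg_o2_per_kwh        # kWh/day
    return energy_kwh / hours_per_day                      # average kW

if __name__ == "__main__":
    load = 500.0  # assumed example: 500 kg BOD/day
    print(f"high-speed aerators: {aerator_power_kw(load, transfer_kg_o2_per_kwh=1.1):.1f} kW")
    print(f"low-speed aerators:  {aerator_power_kw(load, transfer_kg_o2_per_kwh=2.2):.1f} kW")
```

The comparison makes the article's point numerically: at the same load, the better transfer efficiency of low-speed aerators roughly halves the power draw.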
Submerged diffused aeration

Submerged diffused air is essentially a form of a diffuser grid inside a lagoon. There are two main types of submerged diffused aeration systems for lagoon applications: floating lateral and submerged lateral. Both these systems utilize fine or medium bubble diffusers to provide aeration and mixing to the process water. The diffusers can be suspended slightly above the lagoon floor or may rest on the bottom. Flexible airline or weighted air hose supplies air to the diffuser unit from the air lateral (either floating or submerged).

See also
Industrial wastewater treatment
List of waste water treatment technologies
Retention basin
Rotating biological contactor
Sewage treatment
Waste stabilization pond
Water aeration
Water pollution

References

External links
Wastewater Lagoon Systems in Maine
Aerated, Partial Mix Lagoons (Wastewater Technology Fact Sheet by the U.S. Environmental Protection Agency)
Aerated Lagoon Technology (Linvil G. Rich, Professor Emeritus, Department of Environmental Engineering and Science, Clemson University)

Waste treatment technology
Aerated lagoon
[ "Chemistry", "Engineering" ]
933
[ "Water treatment", "Waste treatment technology", "Environmental engineering" ]
1,616,855
https://en.wikipedia.org/wiki/Carbon%20chauvinism
Carbon chauvinism is a neologism meant to disparage the assumption that the chemical processes of hypothetical extraterrestrial life must be constructed primarily from carbon (organic compounds), because as far as is known, carbon's chemical and thermodynamic properties render it far superior to all other elements at forming the molecules used in living organisms. The expression "carbon chauvinism" is also used to criticize the idea that artificial intelligence cannot in theory be sentient or truly intelligent because the underlying matter is not biological. Furthermore, the term is used by transhumanists to object to the commonly held view that biological life has an inherently higher moral value than hypothetical artificial consciousness.

Concept

The term was used as early as 1973, when the scientist Carl Sagan described it and other human chauvinisms that limit the imagination of possible extraterrestrial life. It suggests that human beings, as carbon-based life forms who have never encountered any life that evolved outside the Earth's environment, may find it difficult to envision radically different biochemistries.

Carbon alternatives

Like carbon, silicon can form four stable bonds with itself and other elements, as well as long chemical chains known as silane polymers, which are very similar to the hydrocarbons essential to life on Earth. Silicon is more reactive than carbon, which could make it optimal for extremely cold environments. However, silanes spontaneously burn in the presence of oxygen at relatively low temperatures, so an oxygen atmosphere may be deadly to silicon-based life. On the other hand, it is worth considering that alkanes are as a rule quite flammable, yet carbon-based life on Earth does not store energy directly as alkanes but as sugars, lipids, alcohols, and other hydrocarbon compounds with very different properties. Water as a solvent would also react with silanes, but again, this only matters if for some reason silanes are used or mass-produced by such organisms.

Silicon lacks an important property of carbon: single, double, and triple carbon-carbon bonds are all relatively stable. Aromatic carbon structures underpin DNA, which could not exist without this property of carbon. By comparison, compounds containing silicon double bonds (such as silabenzene, an unstable analogue of benzene) exhibit far lower stability than the equivalent carbon compounds. A pair of Si-Si single bonds has significantly greater total enthalpy than a single Si=Si double bond, so simple disilenes readily autopolymerise, and silicon favors the formation of linear chains of single bonds (see the double bond rule). Hydrocarbons and organic compounds are abundant in meteorites, comets, and interstellar clouds, while their silicon analogs have never been observed in nature. Silicon does, however, form complex one-, two- and three-dimensional polymers in which oxygen atoms form bridges between silicon atoms. These are termed silicates. They are both stable and abundant under terrestrial conditions, and have been proposed as a basis for a pre-organic form of evolution on Earth (see the clay hypothesis).
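The enthalpy argument in the last paragraph can be made concrete with tabulated average bond enthalpies. The Python sketch below is an illustration added here, not part of the original article; the bond-enthalpy values are approximate textbook averages in kJ/mol, vary between sources and molecular environments, and should be treated as assumed inputs.

```python
# Approximate average bond enthalpies in kJ/mol (illustrative textbook
# values; they vary by source and by molecular environment).
BOND_ENTHALPY = {
    "C-C": 347, "C=C": 614,
    "Si-Si": 226, "Si=Si": 310,
}

def polymerisation_drive(element: str) -> float:
    """Rough enthalpy change (kJ/mol) when one X=X double bond is replaced
    by two X-X single bonds, as happens when a double bond opens during
    addition polymerisation. Negative = exothermic = favoured."""
    double = BOND_ENTHALPY[f"{element}={element}"]
    single = BOND_ENTHALPY[f"{element}-{element}"]
    return double - 2 * single

for el in ("C", "Si"):
    print(f"{el}: {polymerisation_drive(el):+.0f} kJ/mol")
```

With these rough numbers the replacement is exothermic for both elements, but markedly more so for silicon, which is one way to read the claim above that disilenes readily autopolymerise while carbon double bonds are comparatively stable.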
See also

References

Astrobiology
Biological hypotheses
Astronomical hypotheses
Chauvinism
Carbon
Biochemistry
Carbon chauvinism
[ "Chemistry", "Astronomy", "Biology" ]
648
[ "Astronomical hypotheses", "Origin of life", "Speculative evolution", "Astrobiology", "Astronomical controversies", "nan", "Biochemistry", "Astronomical sub-disciplines", "Biological hypotheses" ]
1,616,932
https://en.wikipedia.org/wiki/Amperometric%20titration
Amperometric titration refers to a class of titrations in which the equivalence point is determined through measurement of the electric current produced by the titration reaction. It is a form of quantitative analysis.

Background

Consider a solution containing the analyte, A, in the presence of some conductive buffer. If an electrolytic potential is applied to the solution through a working electrode, then the measured current depends (in part) on the concentration of the analyte. Measurement of this current can be used to determine the concentration of the analyte directly; this is a form of amperometry. The difficulty, however, is that the measured current depends on several other variables, and it is not always possible to control all of them adequately. This limits the precision of direct amperometry.

If the potential applied to the working electrode is sufficient to reduce the analyte, then the concentration of analyte close to the working electrode will decrease. More of the analyte will slowly diffuse into the volume of solution close to the working electrode, restoring the concentration. If the potential applied to the working electrode is great enough (an overpotential), then the concentration of analyte next to the working electrode will depend entirely on the rate of diffusion. In such a case, the current is said to be diffusion limited. As the analyte is reduced at the working electrode, the concentration of the analyte in the whole solution will very slowly decrease; how quickly depends on the size of the working electrode compared to the volume of the solution.

What happens if some other species which reacts with the analyte (the titrant) is added? (For instance, chromate ions can be added to oxidize lead ions.) After a small quantity of the titrant (chromate) is added, the concentration of the analyte (lead) decreases due to the reaction with chromate, and the current from the reduction of lead ion at the working electrode decreases with it. The addition is repeated, and the current decreases again. A plot of the current against the volume of added titrant will be a straight line. After enough titrant has been added to react completely with the analyte, the excess titrant may itself be reduced at the working electrode. Since this is a different species with different diffusion characteristics (and a different half-reaction), the plot of current versus added titrant will have a different slope after the equivalence point. This change in slope marks the equivalence point, in the same way that, for instance, the sudden change in pH marks the equivalence point in an acid-base titration.

The electrode potential may also be chosen such that the titrant is reduced but the analyte is not. In this case, the presence of excess titrant is easily detected by the increase in current above the background (charging) current.

Advantages

The chief advantage over direct amperometry is that the magnitude of the measured current is of interest only as an indicator. Thus, factors that are of critical importance to quantitative amperometry, such as the surface area of the working electrode, completely disappear from amperometric titrations. The chief advantage over other types of titration is the selectivity offered by the electrode potential, as well as by the choice of titrant. For instance, lead ion is reduced at a potential of -0.60 V (relative to the saturated calomel electrode), while zinc ions are not; this allows the determination of lead in the presence of zinc. Clearly this advantage depends entirely on the other species present in the sample.
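Since the equivalence point is located graphically as the intersection of the two straight-line segments of the current-volume plot, the procedure is easy to automate. The Python sketch below is an illustration added here, not part of the original article; the data points are invented example readings for a lead/chromate titration. It fits a line to the points before and after the break and solves for the intersection.

```python
import numpy as np

def equivalence_point(volumes, currents, n_before, n_after):
    """Locate the equivalence point of an amperometric titration.

    Fits straight lines to the first n_before points (analyte being
    consumed) and the last n_after points (excess titrant), then returns
    the volume at which the two lines intersect.
    """
    v = np.asarray(volumes, dtype=float)
    i = np.asarray(currents, dtype=float)
    m1, b1 = np.polyfit(v[:n_before], i[:n_before], 1)   # pre-equivalence
    m2, b2 = np.polyfit(v[-n_after:], i[-n_after:], 1)   # post-equivalence
    return (b2 - b1) / (m1 - m2)

# Invented example readings: current (uA) falls as lead is consumed,
# then stays near the background level once all the lead has reacted.
volumes  = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]     # mL titrant
currents = [8.0, 6.4, 4.8, 3.2, 1.6, 0.3, 0.3, 0.3]     # uA

print(f"equivalence point near {equivalence_point(volumes, currents, 5, 3):.2f} mL")
```

In practice one excludes the points right around the break, where curvature from incomplete reaction blurs the two segments, before fitting.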
See also
Titration

References

Electroanalytical methods
Titration
Amperometric titration
[ "Chemistry" ]
745
[ "Electroanalytical methods", "Instrumental analysis", "Titration", "Electroanalytical chemistry" ]