Dataset columns: id (int64, 39 to 79M); url (string, 32–168 chars); text (string, 7–145k chars); source (string, 2–105 chars); categories (list, 1–6 items); token_count (int64, 3–32.2k); subcategories (list, 0–27 items)
51,280,448
https://en.wikipedia.org/wiki/MEGABYTE%20Act%20of%202016
The Making Electronic Government Accountable By Yielding Tangible Efficiencies Act of 2016 (or the MEGABYTE Act of 2016) is a United States federal law which requires the Director of the Office of Management and Budget to issue a directive on the management of software licenses by the US federal government. The directive will require the chief information officer (CIO) of each federal agency to develop a comprehensive software licensing policy covering roles in relation to software license management, an inventory of software licenses held by the agency, an analysis of software usage, and agency goals covering the use of software within the agency. The agency CIO must subsequently report after one year, and then at five-year intervals, on the financial savings which have resulted from improved software license management. The bill was sponsored by Senator Bill Cassidy and Representative Matt Cartwright, and enacted after being signed by President Obama on July 29, 2016. The Congressional Budget Office argued that mostly "the bill would codify and expand current policies and practices of the federal government", but expected that "most of the savings in this area will probably be achieved through current efforts to make cost effective decisions when acquiring software". In accordance with the act's requirements, OMB published M-16-12, Category Management Policy 16-1: Improving the Acquisition and Management of Common Information Technology: Software Licensing. This established the Enterprise Software Category Team (ESCT), co-managed by GSA, DoD and OMB. It requires agencies to appoint a Software Manager for the entire agency, to maintain a continual agency-wide inventory of software licenses, to carry out ongoing analysis of license utilization, and to report the cost savings/avoidance made possible by this policy. The House of Representatives' Oversight and Government Reform Subcommittee on Government Operations and Information Technology has integrated software policy and inventory monitoring into its oversight of executive agencies' Federal Information Technology Acquisition Reform Act implementation. References External links MEGABYTE Act Software Inventories Released under Freedom of Information Act (FOIA), Various Agencies Acts of the 114th United States Congress Information technology management
MEGABYTE Act of 2016
[ "Technology" ]
404
[ "Information technology", "Information technology management" ]
44,098,050
https://en.wikipedia.org/wiki/Symmetry%20of%20diatomic%20molecules
Molecular symmetry in physics and chemistry describes the symmetry present in molecules and the classification of molecules according to their symmetry. Molecular symmetry is a fundamental concept in the application of quantum mechanics in physics and chemistry; for example, it can be used to predict or explain many of a molecule's properties, such as its dipole moment and its allowed spectroscopic transitions (based on selection rules), without doing the exact rigorous calculations (which, in some cases, may not even be possible). To do this it is necessary to classify the states of the molecule using the irreducible representations from the character table of the symmetry group of the molecule. Among all molecular symmetries, diatomic molecules show some distinct features and are relatively easy to analyze. Symmetry and group theory The physical laws governing a system are generally written as relations (equations, differential equations, integral equations, etc.). An operation on the ingredients of such a relation that keeps its form invariant is called a symmetry transformation, or a symmetry of the system. These symmetry operations can involve external or internal coordinates, giving rise to geometrical or internal symmetries. They can be global or local, giving rise to global or gauge symmetries. They can also be discrete or continuous. Symmetry is a fundamentally important concept in quantum mechanics. It can predict conserved quantities and provide quantum numbers. It can predict degeneracies of eigenstates and give insight into the matrix elements of the Hamiltonian without calculating them. Rather than looking into individual symmetries, it is sometimes more convenient to look into the general relations between the symmetries. It turns out that group theory is the most efficient way of doing this. Groups A group is a mathematical structure (usually denoted in the form (G,*)) consisting of a set G and a binary operation (sometimes loosely called 'multiplication'), satisfying the following properties: closure: For every pair of elements x and y in G, the product x*y is also in G. associativity: For every x, y and z in G, both (x*y)*z and x*(y*z) result in the same element of G (in symbols, (x*y)*z = x*(y*z)). existence of identity: There must be an element (say e) in G such that the product of any element of G with e makes no change to the element (in symbols, x*e = e*x = x). existence of inverse: For each element x in G, there must be an element y in G such that the product of x and y is the identity element e (in symbols, for each x in G there exists y in G such that x*y = y*x = e). In addition to the above four, if it so happens that x*y = y*x for all x and y in G, i.e., the operation is commutative, then the group is called an abelian group. Otherwise it is called a non-abelian group. Groups, symmetry and conservation The set of all symmetry transformations of a Hamiltonian has the structure of a group, with group multiplication equivalent to applying the transformations one after the other. The group elements can be represented as matrices, so that the group operation becomes the ordinary matrix multiplication. In quantum mechanics, the evolution of an arbitrary superposition of states is given by unitary operators, so each element of a symmetry group is a unitary operator. Now any unitary operator can be expressed as the exponential of some Hermitian operator. So, the corresponding Hermitian operators are the 'generators' of the symmetry group. 
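As a concrete illustration of these statements, the short Python sketch below represents rotations about the z-axis as 3×3 matrices, checks the group properties numerically, and verifies that each element is the exponential of a Hermitian generator (here a matrix form of L_z, with ħ = 1). The specific angles, the diagonal test Hamiltonian, and the tolerances are illustrative choices, not anything taken from the article.

```python
import numpy as np
from scipy.linalg import expm

def Rz(theta):
    """Rotation about the z-axis, represented as a 3x3 matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Hermitian generator of rotations about z (matrix form of L_z, hbar = 1)
Lz = np.array([[0, -1j, 0],
               [1j,  0, 0],
               [0,   0, 0]])

a, b = 0.7, 1.9                                        # arbitrary rotation angles
assert np.allclose(Rz(a) @ Rz(b), Rz(a + b))           # closure: the product is again a rotation
assert np.allclose(Rz(a) @ Rz(-a), np.eye(3))          # inverse exists; Rz(0) is the identity
assert np.allclose(Rz(a).conj().T @ Rz(a), np.eye(3))  # each element is unitary
assert np.allclose(Rz(a), expm(-1j * a * Lz))          # unitary element = exp of Hermitian generator

# A Hamiltonian-like matrix with axial symmetry about z commutes with every Rz(theta),
# so it also commutes with the generator L_z: the associated quantity is conserved.
H = np.diag([1.0, 1.0, 2.0])
assert np.allclose(Rz(a) @ H @ Rz(a).conj().T, H)
assert np.allclose(H @ Lz, Lz @ H)
print("group axioms, unitarity, generator and conservation all verified")
```

The discrete checks stand in for the continuous group of all rotations about the molecular axis that is discussed below.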
These unitary transformations act on the Hamiltonian operator in some Hilbert space in a way that the Hamiltonian remains invariant under the transformations. In other words, the symmetry operators commute with the Hamiltonian. If represents the unitary symmetry operator and acts on the Hamiltonian , then; These operators have the above-mentioned properties of a group: The symmetry operations are closed under multiplication. Application of symmetry transformations are associative. There is always a trivial transformation, where nothing is done to the original co-ordinates. This is the identity element of the group. And as long as an inverse transformation exists, it is a symmetry transformation, i.e. it leaves the Hamiltonian invariant. Thus the inverse is part of this set. So, by the symmetry of a system, we mean a set of operators, each of which commutes with the Hamiltonian, and they form a symmetry group. This group may be abelian or non-abelian. Depending upon which one it is, the properties of the system changes (for example, if the group is abelian, there would be no degeneracy). Corresponding to every different kind of symmetry in a system, we can find a symmetry group associated with it. It follows that the generator of the symmetry group also commutes with the Hamiltonian. Now, it follows that: Some specific examples can be systems having rotational, translational invariance etc. For a rotationally invariant system, the symmetry group of the Hamiltonian is the general rotation group. Now, if (say) the system is invariant about any rotation about Z-axis (i.e., the system has axial symmetry), then the symmetry group of the Hamiltonian is the group of rotation about the symmetry axis. Now, this group is generated by the Z-component of the orbital angular momentum, (general group element ). Thus, commutes with for this system and Z-component of the angular momentum is conserved. Similarly, translation symmetry gives rise to conservation of linear momentum, inversion symmetry gives rise to parity conservation and so on. Geometrical symmetries Symmetry operations, point groups and permutation-inversion groups A molecule at equilibrium in a certain electronic state usually has some geometrical symmetry. This symmetry is described by a certain point group which consists of operations (called symmetry operations) that produce a spatial orientation of the molecule that is indistinguishable from the starting configuration. There are five types of point group symmetry operation: identity, rotation, reflection, inversion and improper rotation or rotation-reflection. Common to all symmetry operations is that the geometrical center-point of the molecule does not change its position; hence the name point group. One can determine the elements of the point group for a particular molecule by considering the geometrical symmetry of its molecular model. However, when one uses a point group, the elements are not to be interpreted in the same way. Instead the elements rotate and/or reflect the vibronic (vibration-electronic) coordinates and these elements commute with the vibronic Hamiltonian. The point group is used to classify by symmetry the vibronic eigenstates. The symmetry classification of the rotational levels, the eigenstates of the full (rovibronic nuclear spin) Hamiltonian, requires the use of the appropriate permutation-inversion group as introduced by Longuet-Higgins. See the Section Inversion symmetry and nuclear permutation symmetry below. 
The elements of permutation-inversion groups commute with the full molecular Hamiltonian. In addition to point groups, there exists another kind of group important in crystallography, where translation in 3-D also needs to be taken care of. They are known as space groups. Basic point group symmetry operations The five basic symmetry operations mentioned above are: Identity Operation E (from the German 'Einheit' meaning unity): The identity operation leaves the molecule unchanged. It forms the identity element in the symmetry group. Though its inclusion seems to be trivial, it is important also because even for the most asymmetric molecule, this symmetry is present. The corresponding symmetry element is the entire molecule itself. Inversion, i : This operation inverts the molecule about its center of inversion (if it has any). The center of inversion is the symmetry element in this case. There may or may not be an atom at this center. A molecule may or may not have a center of inversion. For example: the benzene molecule, a cube, and spheres do have a center of inversion, whereas a tetrahedron does not. Reflection σ: The reflection operation produces a mirror image geometry of the molecule about a certain plane. The mirror plane bisects the molecule and must include its center of geometry. The plane of symmetry is the symmetry element in this case. A symmetry plane parallel with the principal axis (defined below) is dubbed vertical (σv) and one perpendicular to it horizontal (σh). A third type of symmetry plane exists: If a vertical symmetry plane additionally bisects the angle between two 2-fold rotation axes perpendicular to the principal axis, the plane is dubbed dihedral (σd). n-Fold Rotation : The n-fold rotation operation about a n-fold axis of symmetry  produces molecular orientations indistinguishable from the initial for each rotation of   (clockwise and counter-clockwise).It is denoted by . The axis of symmetry is the symmetry element in this case. A molecule can have more than one symmetry axis; the one with the highest n is called the principal axis, and by convention is assigned the z-axis in a Cartesian coordinate system. n-Fold Rotation-Reflection or improper rotation Sn : The n-fold improper rotation operation about an n-fold axis of improper rotation is composed of two successive geometry transformations: first, a rotation through  about the axis of that rotation, and second, reflection through a plane perpendicular (and through the molecular center of geometry) to that axis. This axis is the symmetry element in this case. It is abbreviated Sn. All other symmetry present in a specific molecule are a combination of these 5 operations. Schoenflies notation The Schoenflies (or Schönflies) notation, named after the German mathematician Arthur Moritz Schoenflies, is one of two conventions commonly used to describe point groups. This notation is used in spectroscopy and is used here to specify a molecular point group. Point groups for diatomic molecules There are two point groups for diatomic molecules: for heteronuclear diatomics, and for homonuclear diatomics. 
: The group , contains rotations through any angle about the axis of symmetry and an infinite number of reflections through the planes containing the inter-nuclear axis (or the vertical axis, that is reason of the subscript 'v').In the group all planes of symmetry are equivalent, so that all reflections form a single class with a continuous series of elements; the axis of symmetry is bilateral, so that there is a continuous series of classes, each containing two elements . Note that this group is non-abelian and there exists an infinite number of irreducible representations in the group. The character table of the group is as follows: : In addition to axial reflection symmetry, homonuclear diatomic molecules are symmetric with respect to inversion or reflection through any axis in the plane passing through the point of symmetry and perpendicular to the inter-nuclear axis. The classes of the group can be obtained from those of the group using the relation between the two groups: . Like , is non-abelian and there are an infinite number of irreducible representations in the group. The character table of this group is as follows: Summary examples Complete set of commuting operators Unlike a single atom, the Hamiltonian of a diatomic molecule doesn't commute with . So the quantum number is no longer a good quantum number. The internuclear axis chooses a specific direction in space and the potential is no longer spherically symmetric. Instead, and commutes with the Hamiltonian (taking the arbitrary internuclear axis as the Z axis). But do not commute with due to the fact that the electronic Hamiltonian of a diatomic molecule is invariant under rotations about the internuclear line (the Z axis), but not under rotations about the X or Y axes. Again, and act on a different Hilbert space, so they commute with in this case also. The electronic Hamiltonian for a diatomic molecule is also invariant under reflections in all planes containing the internuclear line. The (X-Z) plane is such a plane, and reflection of the coordinates of the electrons in this plane corresponds to the operation . If is the operator that performs this reflection, then . So the Complete Set of Commuting Operators (CSCO) for a general heteronuclear diatomic molecule is ; where is an operator that inverts only one of the two spatial co-ordinates (x or y). In the special case of a homonuclear diatomic molecule, there is an extra symmetry since in addition to the axis of symmetry provided by the internuclear axis, there is a centre of symmetry at the midpoint of the distance between the two nuclei (the symmetry discussed in this paragraph only depends on the two nuclear charges being the same. The two nuclei can therefore have different mass, that is they can be two isotopes of the same species such as the proton and the deuteron, or and , and so on). Choosing this point as the origin of the coordinates, the Hamiltonian is invariant under an inversion of the coordinates of all electrons with respect to that origin, namely in the operation . Thus the parity operator . Thus the CSCO for a homonuclear diatomic molecule is . Molecular term symbol, Λ-doubling Molecular term symbol is a shorthand expression of the group representation and angular momenta that characterize the state of a molecule. It is the equivalent of the term symbol for the atomic case. We already know the CSCO of the most general diatomic molecule. So, the good quantum numbers can sufficiently describe the state of the diatomic molecule. 
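For reference, the character table of the group C∞v referred to above, in its standard form as listed in common point-group tables (the Mulliken labels in parentheses are an added cross-reference), is:

```latex
\begin{array}{c|ccc}
C_{\infty v} & E & 2C_{\infty}^{\phi} & \infty\,\sigma_v \\ \hline
\Sigma^{+}\;(A_1) & 1 & 1 & 1 \\
\Sigma^{-}\;(A_2) & 1 & 1 & -1 \\
\Pi\;(E_1) & 2 & 2\cos\phi & 0 \\
\Delta\;(E_2) & 2 & 2\cos 2\phi & 0 \\
\Phi\;(E_3) & 2 & 2\cos 3\phi & 0 \\
\vdots & & &
\end{array}
```

The corresponding table for D∞h doubles these representations into gerade (g) and ungerade (u) pairs, reflecting the additional inversion symmetry of homonuclear diatomics, consistent with the relation between the two groups mentioned in the text.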
Here, the symmetry is explicitly stated in the nomenclature. Angular momentum Here, the system is not spherically symmetric. So, , and the state cannot be depicted in terms of as an eigenstate of the Hamiltonian is not an eigenstate of anymore (in contrast to the atomic term symbol, where the states were written as ). But, as , the eigenvalues corresponding to can still be used. If, where is the absolute value (in a.u.) of the projection of the total electronic angular momentum on the internuclear axis; can be used as a term symbol. By analogy with the spectroscopic notation S, P, D, F, ... used for atoms, it is customary to associate code letters with the values of according to the correspondence: For the individual electrons, the notation and the correspondence used are: and Axial symmetry Again, , and in addition: [as ]. It follows immediately that if the action of the operator on an eigenstate corresponding to the eigenvalue of converts this state into another one corresponding to the eigenvalue , and that both eigenstates have the same energy. The electronic terms such that (that is, the terms ) are thus doubly degenerate, each value of the energy corresponding to two states which differ by the direction of the projection of the orbital angular momentum along the molecular axis. This twofold degeneracy is actually only approximate and it is possible to show that the interaction between the electronic and rotational motions leads to a splitting of the terms with into two nearby levels, which is called -doubling. corresponds to the states. These states are non-degenerate, so that the states of a term can only be multiplied by a constant in a reflection through a plane containing the molecular axis. When , simultaneous eigenfunctions of , and can be constructed. Since , the eigenfunctions of have eigenvalues . So to completely specify states of diatomic molecules, states, which is left unchanged upon reflection in a plane containing the nuclei, needs to be distinguished from states, for which it changes sign in performing that operation. Inversion symmetry and nuclear permutation symmetry Homonuclear diatomic molecules have a center of symmetry at their midpoint. Choosing this point (which is the nuclear center of mass) as the origin of the coordinates, the electronic Hamiltonian is invariant under the point group operation i of inversion of the coordinates of all electrons at that origin. This operation is not the parity operation P (or E*); the parity operation involves the inversion of nuclear and electronic spatial coordinates at the molecular center of mass. Electronic states either remain unchanged by the operation i, or they are changed in sign by i. The former are denoted by the subscript g and are called gerade, while the latter are denoted by the subscript u and are called ungerade. The subscripts g or u are therefore added to the term symbol, so that for homonuclear diatomic molecules electronic states can have the symmetries ,......according to the irreducible representations of the point group. The complete Hamiltonian of a diatomic molecule (as for all molecules) commutes with the parity operation P or E* and rovibronic (rotation-vibration-electronic) energy levels (often called rotational levels) can be given the parity symmetry label + or -. 
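In standard diatomic spectroscopic notation, the code-letter correspondences referred to above are (for the total projection Λ and, in lower case, for a single electron's projection λ):

```latex
\begin{array}{c|cccc}
\Lambda & 0 & 1 & 2 & 3 \\ \hline
\text{term} & \Sigma & \Pi & \Delta & \Phi
\end{array}
\qquad
\begin{array}{c|ccc}
\lambda\ (\text{one electron}) & 0 & 1 & 2 \\ \hline
\text{orbital} & \sigma & \pi & \delta
\end{array}
```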
The complete Hamiltonian of a homonuclear diatomic molecule also commutes with the operation of permuting (or exchanging) the coordinates of the two (identical) nuclei and rotational levels gain the additional label s or a depending on whether the total wavefunction is unchanged (symmetric) or changed in sign (antisymmetric) by the permutation operation. Thus, the rotational levels of heteronuclear diatomic molecules are labelled + or -, whereas those of homonuclear diatomic molecules are labelled +s, +a, -s or -a. The rovibronic nuclear spin states are classified using the appropriate permutation-inversion group. The complete Hamiltonian of a homonuclear diatomic molecule (as for all centro-symmetric molecules) does not commute with the point group inversion operation i because of the effect of the nuclear hyperfine Hamiltonian. The nuclear hyperfine Hamiltonian can mix the rotational levels of g and u vibronic states (called ortho-para mixing) and give rise to ortho-para transitions Spin and total angular momentum If S denotes the resultant of the individual electron spins, are the eigenvalues of S and as in the case of atoms, each electronic term of the molecule is also characterised by the value of S. If spin-orbit coupling is neglected, there is a degeneracy of order associated with each for a given . Just as for atoms, the quantity is called the multiplicity of the term and.is written as a (left) superscript, so that the term symbol is written as . For example, the symbol denotes a term such that and . It is worth noting that the ground state (often labelled by the symbol ) of most diatomic molecules is such that and exhibits maximum symmetry. Thus, in most cases it is a state (written as , excited states are written with in front) for a heteronuclear molecule and a state (written as ) for a homonuclear molecule. Spin–orbit coupling lifts the degeneracy of the electronic states. This is because the z-component of spin interacts with the z-component of the orbital angular momentum, generating a total electronic angular momentum along the molecule axis Jz. This is characterized by the quantum number , where . Again, positive and negative values of are degenerate, so the pairs (ML, MS) and (−ML, −MS) are degenerate. These pairs are grouped together with the quantum number , which is defined as the sum of the pair of values (ML, MS) for which ML is positive: Molecular term symbol So, the overall molecular term symbol for the most general diatomic molecule is given by: where S is the total spin quantum number is the projection of the orbital angular momentum along the internuclear axis is the projection of the total angular momentum along the internuclear axis u/g is the effect of the point group operation i +/− is the reflection symmetry along an arbitrary plane containing the internuclear axis von Neumann-Wigner non-crossing rule Effect of symmetry on the matrix elements of the Hamiltonian The electronic terms or potential curves of a diatomic molecule depend only on the internuclear distance , and it is important to investigate the behaviour of these potential curves as R varies. It is of considerable interest to examine the intersection of the curves representing the different terms. Let and two different electronic potential curves. If they intersect at some point, then the functions and will have neighbouring values near this point. To decide whether such an intersection can occur, it is convenient to put the problem as follows. 
Suppose at some internuclear distance the values and are close, but distinct (as shown in the figure). Then it is to be examined whether or and can be made to intersect by the modification . The energies and are eigenvalues of the Hamiltonian . The corresponding orthonormal electronic eigenstates will be denoted by and and are assumed to be real. The Hamiltonian now becomes , where is the small perturbation operator (though it is a degenerate case, so ordinary method of perturbation won't work). setting , it can be deduced that in order for and to be equal at the point the following two conditions are required to be fulfilled: However, we have at our disposal only one arbitrary parameter giving the perturbation . Hence the two conditions involving more than one parameter cannot in general be simultaneously satisfied (the initial assumption that and real, implies that is also real). So, two case can arise:  The matrix element vanishes identically. It is then possible to satisfy the first condition independently. Therefore, it is possible for the crossing to occur if, for a certain value of (i.e., for a certain value of ) the first equation is satisfied. As the perturbation operator (or ) commutes with the symmetry operators of the molecule, this case will happen if the two electronic states and have different point group symmetries (for example if they correspond to two electronic terms having different values of , different electronic parities g and u, different multiplicities, or for example are the two terms and ) as it can be shown that, for a scalar quantity whose operator commutes with the angular momentum and inversion operators, only the matrix elements for transitions between states of the same angular momentum and parity are non-zero and the proof remains valid, in essentially the same form, for the general case of an arbitrary symmetry operator.   If the electronic states and have the same point group symmetry, then can be, and will in general be, non-zero. Except for accidental crossing which would occur if, by coincidence, the two equations were satisfied at the same value of , it is in general impossible to find a single value of (i.e., a single value of ) for which the two conditions are satisfied simultaneously.   Thus, in a diatomic molecule, only terms of different symmetry can intersect, while the intersection of terms of like symmetry is forbidden. This is, in general, true for any case in quantum mechanics where the Hamiltonian contains some parameter and its eigenvalues are consequently functions of that parameter. This general rule is known as von Neumann - Wigner non-crossing rule. This general symmetry principle has important consequences is molecular spectra. In fact, in the applications of valence bond method in case of diatomic molecules, three main correspondence between the atomic and the molecular orbitals are taken care of: Molecular orbitals having a given value of (the component of the orbital angular momentum along the internuclear axis) must connect with atomic orbitals having the same value of (i.e. the same value of ). The electronic parity of the wave function (g or u) must be preserved as varies from to . The von Neumann-Wigner non-crossing rule must be obeyed, so that energy curves corresponding to orbitals having the same symmetry do not cross as varies from to . Thus, von Neumann-Wigner non-crossing rule also acts as a starting point for valence bond theory. 
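A sketch of the standard two-level argument behind the two conditions, in the usual textbook form (the symbols E1, E2 for the unperturbed energies and V for the perturbation are assumed notation): in the basis of the two unperturbed states, the perturbed Hamiltonian is a 2×2 matrix whose eigenvalues can only coincide if both the diagonal entries are equal and the off-diagonal element vanishes.

```latex
H + V \;\longrightarrow\;
\begin{pmatrix}
E_1 + V_{11} & V_{12} \\
V_{12}^{*} & E_2 + V_{22}
\end{pmatrix},
\qquad
E_{\pm} = \tfrac{1}{2}\bigl(E_1 + E_2 + V_{11} + V_{22}\bigr)
\pm \sqrt{\tfrac{1}{4}\bigl(E_1 - E_2 + V_{11} - V_{22}\bigr)^{2} + \lvert V_{12}\rvert^{2}}
```

Since the square root is a sum of two non-negative terms, E+ = E- requires simultaneously E1 + V11 = E2 + V22 and V12 = 0, which are the two conditions invoked in the argument above.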
Observable consequences Symmetry in diatomic molecules manifests itself directly by influencing the molecular spectra of the molecule. The effect of symmetry on different types of spectra in diatomic molecules are: Rotational spectrum In the electric dipole approximation the transition amplitude for emission or absorption of radiation can be shown to be proportional to the vibronic matrix element of the component of the electric dipole operator along the molecular axis. This is the permanent electric dipole moment. In homonuclear diatomic molecules, the permanent electric dipole moment vanishes and there is no pure rotation spectrum (but see N.B. below). Heteronuclear diatomic molecules possess a permanent electric dipole moment and exhibit spectra corresponding to rotational transitions, without change in the vibronic state. For , the selection rules for a rotational transition are: . For , the selection rules become: .This is due to the fact that although the photon absorbed or emitted carries one unit of angular momentum, the nuclear rotation can change, with no change in , if the electronic angular momentum makes an equal and opposite change. Symmetry considerations require that the electric dipole moment of a diatomic molecule is directed along the internuclear line, and this leads to the additional selection rule .The pure rotational spectrum of a diatomic molecule consists of lines in the far infra-red or the microwave region, the frequencies of these lines given by: ; where , and N.B. In exceptional circumstances the hyperfine Hamiltonian can mix the rotational levels of g and u vibronic states of homonuclear diatomic molecules giving rise to pure rotational (ortho - para) transitions in a homonuclear diatomic molecule. Vibrational spectrum The transition matrix elements for pure vibrational transition are , where is the dipole moment of the diatomic molecule in the electronic state . Because the dipole moment depends on the bond length , its variation with displacement of the nuclei from equilibrium can be expressed as: ; where is the dipole moment when the displacement is zero. The transition matrix elements are, therefore: using orthogonality of the states. So, the transition matrix is non-zero only if the molecular dipole moment varies with displacement, for otherwise the derivatives of would be zero. The gross selection rule for the vibrational transitions of diatomic molecules is then: To show a vibrational spectrum, a diatomic molecule must have a dipole moment that varies with extension. So, homonuclear diatomic molecules do not undergo electric-dipole vibrational transitions. So, a homonuclear diatomic molecule doesn't show purely vibrational spectra. For small displacements, the electric dipole moment of a molecule can be expected to vary linearly with the extension of the bond. This would be the case for a heteronuclear molecule in which the partial charges on the two atoms were independent of the internuclear distance. In such cases (known as harmonic approximation), the quadratic and higher terms in the expansion can be ignored and . Now, the matrix elements can be expressed in position basis in terms of the harmonic oscillator wavefunctions: Hermite polynomials. Using the property of Hermite polynomials: , it is evident that which is proportional to , produces two terms, one proportional to and the other to . So, the only non-zero contributions to comes from . 
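The step just described, namely that the harmonic-oscillator matrix element of the displacement vanishes unless the vibrational quantum number changes by one, can be checked numerically. The following sketch works in dimensionless oscillator units and uses only standard NumPy; it is an illustration, not a prescribed procedure.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss, hermval
from math import factorial, sqrt, pi

def ho_norm(v):
    """Normalisation constant of the dimensionless harmonic-oscillator state |v>."""
    return 1.0 / sqrt(2.0**v * factorial(v) * sqrt(pi))

def hermite(v, x):
    """Physicists' Hermite polynomial H_v evaluated at x."""
    c = np.zeros(v + 1)
    c[v] = 1.0
    return hermval(x, c)

# Gauss-Hermite quadrature absorbs the exp(-x^2) factor coming from the two
# Gaussian tails of the wavefunctions, so polynomial integrands are handled exactly.
x, w = hermgauss(60)

for v in range(3):
    for vp in range(5):
        # <v'| x |v> = N_v' N_v * integral of H_v'(x) * x * H_v(x) * exp(-x^2) dx
        m = ho_norm(vp) * ho_norm(v) * np.sum(w * hermite(vp, x) * x * hermite(v, x))
        print(f"<{vp}| x |{v}> = {m:+.4f}")
# Only the entries with v' = v - 1 or v' = v + 1 come out non-zero
# (equal to sqrt(v/2) and sqrt((v+1)/2)), i.e. the Delta v = +/- 1 rule.
```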
So, the selection rule for heteronuclear diatomic molecules is: Conclusion: Homonuclear diatomic molecules show no pure vibrational spectral lines, and the vibrational spectral lines of heteronuclear diatomic molecules are governed by the above-mentioned selection rule. Rovibrational spectrum Homonuclear diatomic molecules show neither pure vibrational nor pure rotational spectra. However, as the absorption of a photon requires the molecule to take up one unit of angular momentum, vibrational transitions are accompanied by a change in rotational state, which is subject to the same selection rules as for the pure rotational spectrum. For a molecule in a state, the transitions between two vibration-rotation (or rovibrational) levels and , with vibrational quantum numbers and , fall into two sets according to whether or . The set corresponding to is called the R branch. The corresponding frequencies are given by: The set corresponding to is called the P branch. The corresponding frequencies are given by: Both branches make up what is called a rotational-vibrational band or a rovibrational band. These bands are in the infra-red part of the spectrum. If the molecule is not in a state, so that , transitions with are allowed. This gives rise to a further branch of the vibrational-rotational spectrum, called the Q branch. The frequencies corresponding to the lines in this branch are given by a quadratic function of if and are unequal, and reduce to the single frequency: if . For a heteronuclear diatomic molecule, this selection rule has two consequences: Both the vibrational and rotational quantum numbers must change. The Q-branch is therefore forbidden. The energy change of rotation can be either subtracted from or added to the energy change of vibration, giving the P- and R- branches of the spectrum, respectively. Homonuclear diatomic molecules also show this kind of spectra. The selection rules, however, are a bit different. Conclusion: Both homo- and hetero-nuclear diatomic molecules show rovibrational spectra. A Q-branch is absent in the spectra of heteronuclear diatomic molecules. A special example: Hydrogen molecule ion An explicit implication of symmetry on the molecular structure can be shown in case of the simplest bi-nuclear system: a hydrogen molecule ion or a di-hydrogen cation, . A natural trial wave function for the is determined by first considering the lowest-energy state of the system when the two protons are widely separated. Then there are clearly two possible states: the electron is attached either to one of the protons, forming a hydrogen atom in the ground state, or the electron is attached to the other proton, again in the ground state of a hydrogen atom (as depicted in the picture). The trial states in the position basis (or the 'wave functions') are then: and The analysis of using variational method starts assuming these forms. Again, this is only one possible combination of states. There can be other combination of states also, for example, the electron is in an excited state of the hydrogen atom. The corresponding Hamiltonian of the system is: Clearly, using the states and as basis will introduce off-diagonal elements in the Hamiltonian. Here, because of the relative simplicity of the ion, the matrix elements can actually be calculated. The electronic Hamiltonian of commutes with the point group inversion symmetry operation i. 
Using its symmetry properties, we can relate the diagonal and off-diagonal elements of the Hamiltonian as: Because as well as , the linear combination of and that diagonalizes the Hamiltonian is (after normalization). Now as i for , the states are also eigenstates of i. It turns out that and are the eigenstates of i with eigenvalues +1 and -1 (in other words, the wave functions and are gerade (symmetric) and ungerade (unsymmetric), respectively). The corresponding expectation value of the energies are . From the graph, we see that only has a minimum corresponding to a separation of 1.3 Å and a total energy , which is less than the initial energy of the system, . Thus, only the gerade state stabilizes the ion with a binding energy of . As a result, the ground state of is and this state is called a bonding molecular orbital. Thus, symmetry plays an explicit role in the formation of . See also Character table Diatomic molecule Molecular symmetry Schoenflies notation List of character tables for chemically important 3D point groups Hund's cases Rotational-vibrational spectroscopy Molecular term symbol Avoided crossing Dihydrogen cation Symmetry in quantum mechanics Group (mathematics) Point groups in three dimensions Complete set of commuting observables Born-Oppenheimer approximation Notes References Further reading Quantum Mechanics, Third Edition: Non-Relativistic Theory (Volume 3)by L. D. Landau, L. M. Lifshitz; Edition: 3rd; chapters: XI and XII. Physics of Atoms & Molecules by B.H. Bransden, C.J. Joachain; Edition: 2nd edition; chapter: 9 Molecular Spectra and Molecular Structure: Spectra of Diatomic Molecules by Gerhard Herzberg; Edition: 2nd Molecular Quantum Mechanics by Peter W. Atkins, Ronald S. Friedman; Edition: 5th; chapter: 10. Lecture notes on Quantum Mechanics (handouts: 12, 10) by Prof. Sourendu Gupta, Tata Institute of Fundamental Research, Mumbai. Symmetry in Physics: Principles and Simple Applications Volume 1 by James Philip Elliott, P.G. Dawber; A Modern Approach to Quantum Mechanics by John S. Townsend; Edition 2nd; http://www.astro.uwo.ca/~jlandstr/p467/lec5-mol_spect/index.html External links http://www.astro.uwo.ca/~jlandstr/p467/lec5-mol_spect/index.html http://csi.chemie.tu-darmstadt.de/ak/immel/script/redirect.cgi?filename=http://csi.chemie.tu-darmstadt.de/ak/immel/tutorials/symmetry/index1.html http://theory.tifr.res.in/~sgupta/courses/qm2014/index.php A pdf file explaining the relation between Point Groups and Permutation-Inversion Groups Link Symmetry Theoretical chemistry Molecular physics
Symmetry of diatomic molecules
[ "Physics", "Chemistry", "Mathematics" ]
7,049
[ "Molecular physics", "Theoretical chemistry", " molecular", "nan", "Geometry", "Atomic", "Symmetry", " and optical physics" ]
50,448,624
https://en.wikipedia.org/wiki/Supersymmetric%20WKB%20approximation
In physics, the supersymmetric WKB (SWKB) approximation is an extension of the WKB approximation that uses principles from supersymmetric quantum mechanics to provide estimates of energy eigenvalues in quantum-mechanical systems. Using the supersymmetric method, there are potentials $V(x)$ that can be expressed in terms of a superpotential, $W(x)$, such that $V(x) = W^2(x) - \frac{\hbar}{\sqrt{2m}}W'(x)$. The SWKB approximation then writes the Born–Sommerfeld quantization condition from the WKB approximation in terms of $W(x)$. The SWKB approximation for unbroken supersymmetry, to first order in $\hbar$, is given by $\int_a^b \sqrt{2m\left(E_n - W^2(x)\right)}\,dx = n\pi\hbar$, with $n = 0, 1, 2, \ldots$, where $E_n$ is the estimate of the energy of the $n$-th excited state, and $a$ and $b$ are the classical turning points, given by $W^2(a) = W^2(b) = E_n$. The addition of the supersymmetric method provides several appealing qualities to this method. First, it is known that, by construction, the ground state energy will be exactly estimated. This is an improvement over the standard WKB approximation, which often has weaknesses at lower energies. Another property is that a class of potentials known as shape invariant potentials have their energy spectra estimated exactly by this first-order condition. See also Quantum mechanics Supersymmetric quantum mechanics Supersymmetry WKB approximation References Supersymmetry Quantum mechanics Mathematical physics Approximations
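As a quick worked check of the quantization condition, consider the standard textbook case of the supersymmetric harmonic oscillator with superpotential $W(x) = \sqrt{m/2}\,\omega x$ (the symbols $m$ and $\omega$ are the usual mass and angular frequency, assumed here for illustration):

```latex
W(x) = \sqrt{\tfrac{m}{2}}\,\omega x
\;\Rightarrow\;
\int_{-b}^{\,b}\sqrt{2m\left(E_n - \tfrac{1}{2}m\omega^{2}x^{2}\right)}\,dx
= \frac{\pi E_n}{\omega} = n\pi\hbar
\;\Rightarrow\;
E_n = n\hbar\omega,\qquad n = 0,1,2,\dots
```

This coincides with the exact spectrum of the corresponding Hamiltonian $H_- = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + W^2 - \frac{\hbar}{\sqrt{2m}}W'$, whose ground state sits at zero energy, illustrating both the exact ground-state property and the exactness for shape invariant potentials mentioned above.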
Supersymmetric WKB approximation
[ "Physics", "Mathematics" ]
264
[ "Symmetry", "Approximations", "Applied mathematics", "Theoretical physics", "Unsolved problems in physics", "Quantum mechanics", "Quantum physics stubs", "Mathematical relations", "Mathematical physics", "Physics beyond the Standard Model", "Supersymmetry" ]
50,448,673
https://en.wikipedia.org/wiki/Levinson%27s%20theorem
Levinson's theorem is an important theorem of scattering theory. In non-relativistic quantum mechanics, it relates the number of bound states in channels with a definite orbital momentum to the difference in phase of a scattered wave at infinite and zero momenta. It was published by Norman Levinson in 1949. The theorem applies to a wide range of potentials whose growth at zero distance is limited and which decrease sufficiently fast as the distance grows. Statement of theorem The difference in the $\ell$-wave phase shift of a scattered wave at infinite momentum, $\delta_\ell(\infty)$, and zero momentum, $\delta_\ell(0)$, for a spherically symmetric potential $V(r)$ is related to the number of bound states $n_\ell$ by: $\delta_\ell(0) - \delta_\ell(\infty) = n_\ell\pi$ or $\left(n_\ell + \tfrac{1}{2}\right)\pi$. The $\left(n_\ell + \tfrac{1}{2}\right)\pi$ scenario is uncommon and can only occur in s-wave ($\ell = 0$) scattering, if a bound state with zero energy exists. The following conditions are sufficient to guarantee the theorem: $V(r)$ continuous in $(0, \infty)$ except for a finite number of finite discontinuities, together with suitable bounds on its behaviour at the origin and at infinity. Generalizations of Levinson's theorem include tensor forces, nonlocal potentials, and relativistic effects. In relativistic scattering theory, essential information about the system is contained in the Jost function, whose analytical properties are well defined and can be used to prove and generalize Levinson's theorem. The presence of Castillejo, Dalitz and Dyson (CDD) poles, and of Jaffe and Low primitives which correspond to zeros of the Jost function at the unitary cut, modifies the theorem. In the general case, the phase difference at infinite and zero particle momenta is determined by the number of bound states, $n_b$, the number of primitives, $n_p$, and the number of CDD poles, $n_{\mathrm{CDD}}$: $\delta(0) - \delta(\infty) = \left(n_b + n_p - n_{\mathrm{CDD}}\right)\pi$. The bound states and primitives give a negative contribution to the phase asymptotics, while the CDD poles give a positive contribution. In the context of potential scattering, a decrease (increase) in the scattering phase shift due to greater particle momentum is interpreted as the action of a repulsive (attractive) potential. The following universal properties of the Jost function are essential to guarantee the generalized theorem: it is an analytic function of the square of the energy in the center-of-mass frame of the scattered particles, with a cut from threshold to infinity, simple zeros below the threshold, simple zeros above the threshold, and simple poles on the real axis. The zeros correspond to bound states and primitives in a fixed channel of definite total angular momentum. References External links Larry Spruch, "Levinson's Theorem", http://physics.nyu.edu/LarrySpruch/LevinsonsTheorem.PDF#Levinson_theorem. M. Wellner, "Levinson's Theorem (an Elementary Derivation)," Atomic Energy Research Establishment, Harwell, England. March 1964. Theorems in quantum mechanics
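A rough numerical illustration of the non-relativistic statement, using an assumed toy setup rather than anything from the article: for an attractive spherical square well the s-wave phase shift is known in closed form, and tracking it continuously in momentum reproduces $\delta_0(0) - \delta_0(\infty) \approx n_0\pi$. The well parameters and the units (ħ = 2m = 1) are arbitrary choices.

```python
import numpy as np

# Attractive spherical square well V(r) = -U0 for r < a, 0 otherwise,
# in units with hbar = 2m = 1.  The values of a and U0 are illustrative.
a, U0 = 1.0, 30.0

# Closed-form s-wave phase shift: delta_0(k) = -k*a + arctan[(k/kappa) tan(kappa*a)],
# with kappa = sqrt(k^2 + U0).
k = np.linspace(1e-3, 400.0, 400_000)
kappa = np.sqrt(k**2 + U0)
delta = -k * a + np.arctan((k / kappa) * np.tan(kappa * a))

# The arctan/tan combination jumps by pi at its branch cuts; remove those jumps
# so that delta_0(k) is tracked as a continuous function of k.
jumps = np.round(np.diff(delta) / np.pi)
delta[1:] -= np.pi * np.cumsum(jumps)

# Standard counting of s-wave bound states for this square well.
n_bound = int(np.sqrt(U0) * a / np.pi + 0.5)

print("delta_0(0) - delta_0(infinity) ≈", delta[0] - delta[-1])  # ≈ n_0 * pi, up to finite-k_max corrections
print("n_0 * pi                       =", n_bound * np.pi)
```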
Levinson's theorem
[ "Physics", "Mathematics" ]
583
[ "Theorems in quantum mechanics", "Equations of physics", "Quantum mechanics", "Theorems in mathematical physics", "Physics theorems" ]
64,051,718
https://en.wikipedia.org/wiki/Conophylline
Conophylline is an autophagy-inducing vinca alkaloid found in plants of the genus Tabernaemontana. Among the many functional groups in this molecule is an epoxide: the compound in which that ring is replaced by a double bond is called conophyllidine, and this co-occurs in the same plants. History Conophylline and conophyllidine were first reported in 1993 after isolation from the ethanol extract of leaves of Tabernaemontana divaricata. Their structures were confirmed by X-ray crystallography. The class of vinca alkaloids to which these compounds belong also contains vincristine and vinblastine, well-known therapeutic agents for human cancers, so they were candidates for a number of biochemical assays to see if they had useful biological activity. By 1996, conophylline had been reported to inhibit tumours in rats by its action on Ras-expressing cells. This finding did not lead to a useful drug, but the molecule continues to be investigated for its biological properties. Synthesis Biosynthesis As with other indole alkaloids, the biosynthesis of conophylline and conophyllidine starts from the amino acid tryptophan. This is converted into strictosidine before further elaboration and dimerisation. Chemical synthesis Fukuyama and coworkers published a total synthesis of conophylline and conophyllidine in 2011. Their strategy was to couple two indoline-containing fragments using a type of Polonovski reaction. The synthesis was challenging owing to the eleven stereogenic centers which have to be controlled. The final products are chiral and laevorotatory. Natural occurrence Conophylline and conophyllidine are found in species of the genus Tabernaemontana including Ervatamia microphylla and Tabernaemontana divaricata. The latter species is known to produce many other alkaloids including catharanthine, ibogamine and voacristine. See also Conofoline References Carbazoles Tryptamine alkaloids
Conophylline
[ "Chemistry" ]
439
[ "Tryptamine alkaloids", "Alkaloids by chemical classification" ]
64,053,223
https://en.wikipedia.org/wiki/Reproductive%20interference
Reproductive interference is the interaction between individuals of different species during mate acquisition that leads to a reduction of fitness in one or more of the individuals involved. The interactions occur when individuals make mistakes or are unable to recognise their own species, labelled as ‘incomplete species recognition'. Reproductive interference has been found within a variety of taxa, including insects, mammals, birds, amphibians, marine organisms, and plants. There are seven causes of reproductive interference, namely signal jamming, heterospecific rivalry, misdirected courtship, heterospecific mating attempts, erroneous female choice, heterospecific mating, and hybridisation. All types have fitness costs on the participating individuals, generally from a reduction in reproductive success, a waste of gametes, and the expenditure of energy and nutrients. These costs are variable and dependent on numerous factors, such as the cause of reproductive interference, the sex of the parent, and the species involved. Reproductive interference occurs between species that occupy the same habitat and can play a role in influencing the coexistence of these species. It differs from competition as reproductive interference does not occur due to a shared resource. Reproductive interference can have ecological consequences, such as through the segregation of species both spatially and temporally. It can also have evolutionary consequences, for example; it can impose a selective pressure on the affected species to evolve traits that better distinguish themselves from other species. Causes of reproductive interference Reproductive interference can occur at different stages of mating, from locating a potential mate, to the fertilisation of an individual of a different species. There are seven causes of reproductive interference that each have their own consequences on the fitness of one or both of the involved individuals. Signal jamming Signal jamming refers to the interference of one signal by another. Jamming can occur by signals emitted from environmental sources (e.g. noise pollution), or from other species. In the context of reproductive interference, signal jamming only refers to the disruption of the transmission or retrieval of signals by another species. The process of mate attraction and acquisition involves signals to aid in locating and recognising potential mates. Signals can also give the receiver an indication of the quality of a potential mate. Signal jamming can occur in different types of communication. Auditory signal jamming, otherwise labelled as auditory masking, is when a noisy environment created by heterospecific signals causes difficulties in identifying conspecifics. Likewise in chemical signals, pheromones that are meant to attract conspecifics and drive off others may overlap with heterospecific pheromones, leading to confusion. Difficulties in recognising and locating conspecifics can result in a reduction of encounters with potential mates and a decrease in mating frequencies. Examples Vibrational signalling in the American grapevine leafhopper - Individuals of the American grapevine leafhopper communicate with each other through vibrational signals that they transmit through the host plant. American grapevine leafhoppers are receptive of signals within their receptor’s sensitivity range of 50 to 1000 Hz. The vibrations can be used to identify and locate potential female mates. 
To successfully communicate, a duet is performed between the male and female American grapevine leafhopper. The female replies within a specific timeframe after the male signal, and the male may use the timing of her reply to identify her. However, vibrational signals are prone to disruption and masking by heterospecific signals, conspecific signals, and background noise that are within their species-specific sensitivity range. The interference of the duet between a male and female American grapevine leafhopper can reduce the male’s success in identifying and locating the female, which can reduce the frequency of mating. Auditory signalling in the gray treefrog (Hyla versicolor) and the Cope's gray treefrogs (Hyla chrysoscelis) – The success of reproduction is dependent on a female’s ability to correctly identify and respond to the advertisement call of a potential mate. At a breeding site with high densities of males, the male’s chorus may overlap with heterospecific calls, making it difficult for the female to successfully locate a mate. When the advertisement calls of the male gray treefrog and male Cope’s gray treefrog overlap, female gray treefrogs make mistakes and choose the heterospecific call. The amount of errors the female makes is dependent on the amount of overlap between signals. Female Cope’s gray treefrogs can better differentiate the signals and are only significantly affected when heterospecifics completely overlap conspecific male signals. However, female Cope’s gray treefrogs prefer conspecific male signals that have less overlap (i.e. less interference). Furthermore, females have longer response times to overlapped calls, where it takes longer for them to choose a mate. Signal jamming can affect both males and females as difficulties in identifying and locating a mate reduces their mating frequencies. Females may have more costs if they mate with a male of a lower quality, and may be susceptible to a higher risk of predation by predators within the breeding site if they take longer to choose and locate a male. Heterospecific mating between the gray treefrog and Cope’s gray treefrogs also can form an infertile hybrid which is highly costly to both parents due to the wastage of gametes. Chemical signalling in ticks – Female ticks produce a pheromone that is a species-specific signal to attract conspecific males that are attached to the host. Female ticks also produce a pheromone that is not species-specific which can attract males that are in a close proximity to her. Pheromones emitted from closely related species can mix and lead to interference. Three species of ticks: Aponomma hydrosauri, Amblyomma albolimbatum, and Amblyomma limbatum, are closely related and can interfere with one another when attached to the same host. When two of the species of tick are attached on the same host, males have difficulties locating a female of the same species, potentially due to the mixing of pheromones. The pheromone that is not species-specific also has the capability of attracting males of all three species when they are in close proximity to the female. The presence of a heterospecific female can also reduce the time a male spends with conspecific females, leading to a reduction of reproductive success. Furthermore, when Amblyomma albolimbatum males attach to Aponomma hydrosauri females to mate, despite being unsuccessful, they remain attached which physically inhibits following males from mating. 
Heterospecific rivalry Heterospecific rivalry occurs between males, when a male of a different species is mistaken as a rival for mates (i.e. mistaken for a conspecific male). In particular, heterospecific rivalry is hard to differentiate from other interspecific interactions, such as the competition over food and other resources. Costs to the mistaken males can include the wastage of time and energy, and a higher risk of injury and predation if they leave their mating territory to pursue the heterospecific male. Males that chase off a heterospecific male may also leave females exposed to following intruders, whether it be a conspecific or heterospecific male. Examples Eastern amberwing dragonfly (Perithemis tenera) – Male Eastern amberwing dragonflies are territorial as they defend mating territories from rival conspecific males. The male will perch around their territory and pursue conspecifics that fly near the perch. When the male is approached by a species of horsefly and butterfly, they are similarly pursued. The horsefly and butterfly do not compete over a common resource with the Eastern amberwing dragonfly, have not been seen interfering with the mating within the territory, and are neither a predator nor prey of the Eastern amberwing dragonfly. Instead, they are pursued potentially due to being mistaken for a rival conspecific as they share similar characteristics in size, colour, and flight height. The similar characteristics may be cues used by the male Eastern amberwing dragonfly to identify conspecifics. The heterospecific pursuit is costly for the male as they waste energy and time, have a higher risk of injury, and may lose opportunities to defend their territory against subsequent intruders. Misdirected courtship Misdirected courtship occurs when males display courtship towards individuals of a different species of either sex. The misdirection is caused by a mistake during species recognition, or by an attraction towards heterospecifics that possess desirable traits. Such desirable traits are those traits that normally are an indicator of conspecific mate quality, such as body size. Costs associated with misdirecting courtship for males include the wasted energy investment in the attempt to court heterospecifics, and a decrease in mating frequency within species. Examples Waxbill – Waxbills are monogamous, where an individual only has one partner. Parents also display biparental care, where both the mother and father contribute to the care of the offspring. The combination of monogamy and biparental investment suggest that both male and female waxbills should be ‘choosy’ and have strong preferences to reduce the chances of mating with a heterospecific female. Males of the three species of waxbill: blue breast (Uraeginthus angolensis), red cheek (Uraeginthus bengalus), and blue cap (Uraeginthus cyanocephalus), have differing strengths of preferences for conspecific females when also presented with a heterospecific female. The differing preferences is affected by the body size of the females, potentially due to body size being an indicator of fecundity, which is the ability to produce offspring. Blue breast males prefer conspecifics over red cheek females that are smaller; however, have a weaker preference for conspecifics over blue breast females that are only slightly smaller. Red cheek males have no preference for conspecifics in the presence of a larger blue breast female or blue cap female. 
Blue cap males prefer conspecifics over red cheek females; however, have no preference for conspecifics in the presence of a larger blue breast male. Atlantic salmon (Salmo salar) – Atlantic salmon that were once native to Lake Ontario were reintroduced to the lake to study their spawning interactions with other species of fish, including the chinook salmon, coho salmon, and brown trout. Chinook salmon interacted with Atlantic salmon the most, where male chinooks attempted to court female Atlantic salmon. Male chinooks also chased away, and in some interactions, behaved aggressively towards other Atlantic salmon that approached female Atlantic salmon. A male brown trout was also observed to court a female Atlantic salmon. Misdirected courtship towards the Atlantic salmon can cause problems in waters that the Atlantic salmon currently occupy, and towards conservation efforts to reintroduce the Atlantic salmon to Lake Ontario. Implications of misdirected courtship on the Atlantic salmon can cause the delay or prevention of spawning, and the hybridisation of the Atlantic salmon with other species. Heterospecific mating attempts Heterospecific mating attempts occur when males attempt to mate with females of a different species, regardless of whether courtship occurs. During each mating attempt, sperm transfer may or may not occur. Both sexes have costs when a heterospecific attempts to mate. Costs associated with heterospecific mating attempts include wasted energy, time, and potentially gametes if sperm transfer occurs. There is also a risk of injury and increased risk of predation for both sexes. Examples Cepero's grasshopper (Tetrix ceperoi) and the slender groundhopper (Tetrix subulata) – Naturally the distribution of the Cepero’s grasshopper and slender groundhopper overlap; however, they rarely co-exist. The reproductive success of the Cepero’s grasshopper decreases when housed within the same enclosure as high numbers of the slender groundhopper. The reduction of reproductive success stems from an increase in mating attempts by the Cepero's grasshopper towards the slender groundhopper, which may be due to their larger body size. However, these mating attempts are generally unsuccessful as the mate recognition of female slender groundhoppers are reliable, which may be due to the different courtship displays of the two species. The reduced reproductive success can cause the displacement in one of the species, potentially a factor as to why the species rarely co-exist despite sharing similar habitat preferences. Italian agile frog (Rana latastei) - The distribution of Italian agile frog and the agile frog (Rana dalmatina) overlap naturally in ponds and drainage ditches. In the areas of overlap, the abundance of agile frogs is higher than Italian agile frogs. When there is a higher abundance of agile frogs, the mating between Italian agile frogs is interfered with. Male agile frogs attempt to displace male Italian agile frogs during amplexus, which is a type of mating position where the male grasps onto the female. The Italian agile frog and agile frog have been seen in amplexus when co-existing. The mating attempts by the agile frog reduces the reproductive success of the Italian agile frog. The Italian agile frog also produces a lower number of viable eggs in the presence of the agile frog, potentially due to sperm competition between the male Italian agile frog and agile frog. Species and sex-recognition errors among true toads are very well studied. 
Toads are known to have amplexus with species from other genera in the same family, and with species belonging to other families. Hybridization cases have also been reported among toads. Erroneous female choice Erroneous female choice refers to mistakes made by females when differentiating males of the same species from males of a different species. Female choice may occur at different stages of mating, including during male courtship, during copulation, or after copulation. Female choice can depend on the availability of appropriate males. When fewer conspecific males are available, females may make more mistakes as they become less ‘choosy’. Examples Striped ground cricket (Allonemobius fasciatus) and Southern ground cricket (Allonemobius socius) – The striped ground cricket and the Southern ground cricket are closely related species that have an overlapping distribution. Both crickets use calling songs in order to identify and locate potential mates. The songs of the two species differ in frequency and period. Females of both species show little preference between the songs from conspecific and heterospecific males. The minor preference disappears if the intensity of the calls is altered. The inability to differentiate between the two songs can result in erroneous female choice. Erroneous female choice has costs, including energy wastage and increased predation risk while searching for a conspecific. Additionally, it is highly costly when the mistake leads to heterospecific mating, which involves the wastage of gametes. However, the cost of erroneous female choice may be small for the striped ground and Southern ground cricket due to their high abundance. The inability to differentiate between the calling songs is proposed to be due to the weak selective pressure on the females. Heterospecific mating Heterospecific mating occurs when two individuals from different species mate. After the male transfers his sperm into the heterospecific female, different processes can occur that may change the outcome of the copulation. Heterospecific mating may result in the production of a hybrid in some pairings. Costs associated with heterospecific mating include the wastage of time, energy, and gametes. Examples Spider mites – Two closely related Panonychus mites, Panonychus citri and Panonychus mori, are generally geographically segregated but on occasion co-exist. However, the co-existence is not stable, as Panonychus mori is eventually excluded. The exclusion is a result of reproductive interference and of the higher reproductive rate of Panonychus citri. Heterospecific mating occurs between the two species, which can produce infertile eggs or infertile hybrid females. Furthermore, females are not able to produce female offspring after mating with a heterospecific. In addition to the wastage of energy, time, and gametes, the inability to produce female offspring after heterospecific mating skews the sex ratio of the co-existing populations. The high costs associated with heterospecific mating, along with the higher reproductive rate of Panonychus citri, lead to the displacement of Panonychus mori. Black-legged meadow katydid (Orchelimum nigripes) and the handsome meadow katydid (Orchelimum pulchellum) – The two closely related species of katydid have the same habitat preferences and co-exist along the Potomac River. Females of both species that mate heterospecifically have a large reduction in fecundity compared to conspecific pairings.
Heterospecific mating either produces no eggs or produces male hybrids that may be sterile. Both individuals suffer a large fitness cost from the wastage of energy, time, and gametes, as they fail to pass on their genes. However, females may be able to offset this cost through multiple mating, as they receive nutritional benefits from consuming a nuptial food gift from the male, otherwise known as the spermatophylax. Hybridisation Hybridisation, in the context of reproductive interference, is defined as mating between individuals of different species that can lead to a hybrid, an inviable egg, or inviable offspring. The frequency of hybridisation increases if it is hard to recognise potential mates, especially when heterospecifics share similarities such as body size, colouration, and acoustic signals. Costs associated with hybridisation depend on the level of parental investment and on the product of the pairing (the hybrid). Hybrids have the potential to become invasive if they develop traits that make them more successful than their parent species in surviving within new and changing habitats, otherwise known as hybrid vigor or heterosis. Compared to each individual parent species, they hold a different combination of characteristics that can be more adaptable and 'fit' within particular environments. If an inviable product is produced, both parents suffer the cost of failing to pass on their genes. Examples California tiger salamanders (Ambystoma californiense) x barred tiger salamanders (Ambystoma mavortium) – California tiger salamanders are native to California, and were geographically isolated from barred tiger salamanders. Barred tiger salamanders were then introduced by humans to California, and the mating between these two species led to the formation of a population of hybrids. The hybrids have since established in their parent habitat and spread into human-modified environments. Within hybrids, the survivability of individuals with a mixed ancestry is higher than that of individuals with a highly native or highly introduced genetic background. Stable populations can form as populations with a large native ancestry become mixed with more introduced genes, and vice versa. Hybrids have both ecological and conservation consequences, as they threaten the population viability of the native California tiger salamanders, which is currently listed as an endangered species. The hybrids may also affect the viability of other native organisms within the invaded regions, as they consume large quantities of aquatic invertebrates and tadpoles. Red deer (Cervus elaphus) x sika deer (Cervus nippon) – The sika deer were originally introduced by humans to Britain and have since established and spread through deliberate reintroductions and escape. The red deer are native to Britain and hybridise with the sika deer in areas in which they co-exist. Heterospecific mating between the red deer and sika deer can produce viable hybrids. Sika deer and the hybrids may outcompete and displace native deer from dense woodland. As the complete eradication of sika and the hybrids is impractical, management efforts are directed at minimising spread by not planting vegetation that would facilitate their spread into regions where the red deer still persist. References Hybridisation (biology) Hybrid organisms Biology terminology Botanical nomenclature Evolutionary biology Population genetics Breeding Reproduction
Reproductive interference
[ "Biology" ]
4,267
[ "Evolutionary biology", "Behavior", "Botanical nomenclature", "Reproduction", "Hybrid organisms", "Botanical terminology", "Biological interactions", "Biological nomenclature", "nan", "Breeding" ]
64,054,538
https://en.wikipedia.org/wiki/Standard%20Ebooks
Standard Ebooks is an open source, volunteer-driven project to create and publish high-quality, fully featured, and accessible e-books of works in the public domain. Standard Ebooks sources titles from places like Project Gutenberg, the Internet Archive, and Wikisource, among others, but differs from those projects in that the goal is to maximize readability for a modern audience, take advantage of accessibility features available in modern e-book file formats, and to streamline updates to the e-books (such as typo fixes) by making use of GitHub as a collaboration tool. All Standard Ebooks titles are released in epub, azw3, and Kepub formats, and are available through Google Play Books and Apple Books. All of the project's e-book files are released in the United States public domain, and all code is released under the GNU General Public License v3. Style Standard Ebooks produces e-books by following a unified style guide, which specifies everything from typography standards to semantic tagging and internal code structure, with the goal of creating a consistent corpus, aligned with modern publishing standards and "cleaned of ancient and irrelevant ephemera." Standard Ebooks works with organizations such as the National Network for Equitable Library Service, and strives to conform to DAISY Consortium accessibility standards, among others, to ensure that all productions will work with modern tools such as screen readers. With the goal of making public domain works more accessible to modern audiences, archaic spellings are modernized and typographic quirks are addressed "so ebooks look like books and not text documents." This approach stands in contrast to the work of transcription sites like Project Gutenberg. All book covers are derived from public domain fine art. Volunteer e-book producers locate paintings suitable for the work they are producing. History Standard Ebooks was founded by Alex Cabal after he experienced frustration at being unable to find well-formatted English-language e-books while living in Germany. After early experiments creating a pay what you want edition of Alice's Adventures in Wonderland, the Standard Ebooks website was launched in 2017. Initial notice came from posts on Hacker News and Reddit, with later mentions including Stack Overflow's newsletter. In 2021, Standard Ebooks began accepting donations and sponsorships to produce specific books. In May 2024, Standard Ebooks published Ulysses as its thousandth title. References External links Standard Ebooks at GitHub Ebook suppliers Public domain Open access projects Electronic publishing New media
Standard Ebooks
[ "Technology" ]
515
[ "Multimedia", "New media" ]
57,469,010
https://en.wikipedia.org/wiki/Vertifolia%20effect
The Vertifolia effect is a well documented phenomenon in the fields of plant breeding and plant pathology. It is characterized by the erosion of a crop's horizontal resistance to disease during a breeding cycle due to the presence of strong vertical resistance, conferred by the presence of R genes. This effect was observed in late blight of potato. The phenomenon was first described by J.E. Van der Plank in his 1963 book Plant Disease: Epidemics and Control. Van der Plank observed that under artificial selection the potato variety Vertifolia had stronger vertical resistance to the potato late blight pathogen, Phytophthora infestans, as measured by the presence of specific R genes. However, when the pathogen overcame these R genes, Vertifolia exhibited a greater loss of horizontal resistance than varieties with fewer R genes and lower vertical resistance. This effect suggests that when a pathogen overcomes a variety's R gene, that variety will be more susceptible to the pathogen than other varieties. The Vertifolia effect has important implications for the breeding of disease-resistant crops. To avoid it, plant breeders may opt to cross in R genes or insert transgenes at the end of the breeding cycle, so that levels of horizontal resistance are maintained during the early rounds of selection. It also suggests that breeders should focus on enhancing horizontal resistance to avoid potentially catastrophic crop losses. Though the effect is frequently observed by plant breeders and plant pathologists, it is difficult to document, and there are situations in which it does not hold. References Phytopathology Pathology
Vertifolia effect
[ "Biology" ]
317
[ "Pathology" ]
57,469,065
https://en.wikipedia.org/wiki/Journal%20of%20Trace%20Elements%20in%20Medicine%20and%20Biology
The Journal of Trace Elements in Medicine and Biology is a bimonthly peer-reviewed medical journal covering the roles played by trace elements in medical and biological systems. It was established in 1987 as the Journal of Trace Elements and Electrolytes in Health and Disease, obtaining its current title in 1995. It is published by Elsevier on behalf of the Federation of European Societies on Trace Elements and Minerals (FESTEM), of which it is the official journal. The editor-in-chief is Dirk Schaumlöffel (Université de Pau et des Pays de l'Adour/Centre national de la recherche scientifique). According to the Journal Citation Reports, the journal has a 2017 impact factor of 3.755. References External links Biochemistry journals Medicinal chemistry journals Quarterly journals Academic journals established in 1987 Elsevier academic journals English-language journals Inorganic chemistry journals
Journal of Trace Elements in Medicine and Biology
[ "Chemistry" ]
179
[ "Biochemistry journals", "Biochemistry journal stubs", "Medicinal chemistry journals", "Medicinal chemistry stubs", "Biochemistry stubs", "Biochemistry literature", "Medicinal chemistry", "Inorganic chemistry journals" ]
57,477,295
https://en.wikipedia.org/wiki/Design%20optimization
Design optimization is an engineering design methodology using a mathematical formulation of a design problem to support selection of the optimal design among many alternatives. Design optimization involves the following stages:
Variables: describe the design alternatives
Objective: selected functional combination of variables (to be maximized or minimized)
Constraints: combinations of variables expressed as equalities or inequalities that must be satisfied for any acceptable design alternative
Feasibility: values for the set of variables that satisfy all constraints and minimize/maximize the objective.
Design optimization problem The formal mathematical (standard form) statement of the design optimization problem is
minimize f(x)
subject to h_i(x) = 0, i = 1, ..., m_1
g_j(x) ≤ 0, j = 1, ..., m_2
x ∈ X
where x is a vector of n real-valued design variables, f(x) is the objective function, h_i(x) = 0 are the equality constraints, g_j(x) ≤ 0 are the inequality constraints, and X is a set constraint that includes additional restrictions on x besides those implied by the equality and inequality constraints. The problem formulation stated above is a convention called the negative null form, since all constraint functions are expressed as equalities and negative inequalities with zero on the right-hand side. This convention is used so that numerical algorithms developed to solve design optimization problems can assume a standard expression of the mathematical problem. We can introduce the vector-valued functions h = (h_1, ..., h_{m_1}) and g = (g_1, ..., g_{m_2}) to rewrite the above statement in the compact expression
minimize f(x) subject to h(x) = 0, g(x) ≤ 0, x ∈ X.
We call h(x) = 0 and g(x) ≤ 0 the set or system of (functional) constraints and x ∈ X the set constraint. (A minimal numerical sketch of this standard form is given at the end of this article.) Application Design optimization applies the methods of mathematical optimization to design problem formulations, and it is sometimes used interchangeably with the term engineering optimization. When the objective function f is a vector rather than a scalar, the problem becomes a multi-objective optimization one. If the design optimization problem has more than one mathematical solution, the methods of global optimization are used to identify the global optimum. Optimization Checklist
Problem Identification
Initial Problem Statement
Analysis Models
Optimal Design Model
Model Transformation
Local Iterative Techniques
Global Verification
Final Review
A detailed and rigorous description of the stages and practical applications with examples can be found in the book Principles of Optimal Design. Practical design optimization problems are typically solved numerically, and many optimization software packages exist in academic and commercial forms. There are several domain-specific applications of design optimization posing their own specific challenges in formulating and solving the resulting problems; these include shape optimization, wing-shape optimization, topology optimization, architectural design optimization, and power optimization. Several books, articles and journal publications are listed below for reference. One modern application of design optimization is structural design optimization (SDO) in the building and construction sector. SDO emphasizes automating and optimizing structural designs and dimensions to satisfy a variety of performance objectives. These efforts aim to optimize the configuration and dimensions of structures so as to augment strength, minimize material usage, reduce costs, enhance energy efficiency, improve sustainability, and improve several other performance criteria. Concurrently, structural design automation endeavors to streamline the design process, mitigate human errors, and enhance productivity through computer-based tools and optimization algorithms. Prominent practices and technologies in this domain include parametric design, generative design, building information modelling (BIM) technology, machine learning (ML), and artificial intelligence (AI), as well as the integration of finite element analysis (FEA) with simulation tools.
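The following minimal sketch illustrates the standard (negative null) form stated earlier in this article by posing a small two-variable problem and solving it with a general-purpose nonlinear programming routine. The objective, constraint functions, bounds, and starting point are illustrative assumptions, not a documented benchmark problem.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative objective: minimize f(x) = (x1 - 2)^2 + (x2 - 1)^2
def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

# Equality constraint in negative null form: h(x) = x1 - 2*x2 + 1 = 0
def h(x):
    return x[0] - 2.0 * x[1] + 1.0

# Inequality constraint in negative null form: g(x) = x1^2/4 + x2^2 - 1 <= 0
def g(x):
    return x[0] ** 2 / 4.0 + x[1] ** 2 - 1.0

# SciPy expects inequality constraints as c(x) >= 0, so pass -g(x).
constraints = [
    {"type": "eq", "fun": h},
    {"type": "ineq", "fun": lambda x: -g(x)},
]

# Set constraint X: simple bounds on the design variables.
bounds = [(0.0, 10.0), (0.0, 10.0)]

x0 = np.array([0.5, 0.5])  # arbitrary starting design
result = minimize(f, x0, method="SLSQP", bounds=bounds, constraints=constraints)

print("optimal design x*:", result.x)
print("objective value f(x*):", result.fun)
```

Note that SciPy's convention for inequality constraints is c(x) ≥ 0, which is why the negative-null-form constraint g(x) ≤ 0 is supplied as −g(x).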
Journals
Journal of Engineering for Industry
Journal of Mechanical Design
Journal of Mechanisms, Transmissions, and Automation in Design
Design Science
Engineering Optimization
Journal of Engineering Design
Computer-Aided Design
Journal of Optimization Theory and Applications
Structural and Multidisciplinary Optimization
Journal of Product Innovation Management
International Journal of Research in Marketing
See also Design Decisions Wiki (DDWiki): established by the Design Decisions Laboratory at Carnegie Mellon University in 2006 as a central resource for sharing information and tools to analyze and support decision-making References Further reading
Aris, Rutherford ([2016], ©1961). The optimal design of chemical reactors: a study in dynamic programming. Saint Louis: Academic Press/Elsevier Science. OCLC 952932441.
Bracken, Jerome; McCormick, Garth P. ([1968]). Selected applications of nonlinear programming. New York: Wiley. OCLC 174465.
Fox, Richard L. ([1971]). Optimization methods for engineering design. Reading, Mass.: Addison-Wesley Pub. Co. OCLC 150744.
Johnson, Ray C. Mechanical Design Synthesis With Optimization Applications. New York: Van Nostrand Reinhold Co, 1971.
Zener, Clarence ([1971]). Engineering design by geometric programming. New York: Wiley-Interscience. OCLC 197022.
Mickle, Marlin H.; Sze, T. W. ([1972]). Optimization in systems engineering. Scranton: Intext Educational Publishers. OCLC 340906.
Avriel, M.; Rijckaert, M. J.; Wilde, Douglass J.; NATO Science Committee; Katholieke Universiteit te Leuven ([1973]). Optimization and design; [papers]. Englewood Cliffs, N.J.: Prentice-Hall. OCLC 618414.
Wilde, Douglass J. (1978). Globally optimal design. New York: Wiley. OCLC 3707693.
Haug, Edward J.; Arora, Jasbir S. (1979). Applied optimal design: mechanical and structural systems. New York: Wiley. OCLC 4775674.
Kirsch, Uri (1981). Optimum structural design: concepts, methods, and applications. New York: McGraw-Hill. OCLC 6735289.
Kirsch, Uri (1993). Structural optimization: fundamentals and applications. Berlin: Springer-Verlag. OCLC 27676129.
Lev, Ovadia E.; American Society of Civil Engineers, Structural Division, Committee on Electronic Computation, Committee on Optimization (1981). Structural optimization: recent developments and applications. New York, N.Y.: ASCE. OCLC 8182361.
Morris, A. J. (1982). Foundations of structural optimization: a unified approach. Chichester [West Sussex]: Wiley. OCLC 8031383.
Siddall, James N. (1982). Optimal engineering design: principles and applications. New York: M. Dekker. OCLC 8389250.
Ravindran, A.; Reklaitis, G. V.; Ragsdell, K. M. (2006). Engineering optimization: methods and applications (2nd ed.). Hoboken, N.J.: John Wiley & Sons. OCLC 61463772.
Vanderplaats, Garret N. (1984). Numerical optimization techniques for engineering design: with applications. New York: McGraw-Hill. OCLC 9785595.
Haftka, Raphael T.; Gürdal, Zafer; Kamat, Manohar P. (1990). Elements of Structural Optimization (2nd rev. ed.). Dordrecht: Springer Netherlands. OCLC 851381183.
Arora, Jasbir S. (2011).
Introduction to optimum design (3rd ed.). Boston, MA: Academic Press. . OCLC 760173076. S.,, Janna, William. Design of fluid thermal systems (SI edition ; fourth edition ed.). Stamford, Connecticut. . OCLC 881509017. Structural optimization : status and promise. Kamat, Manohar P. Washington, DC: American Institute of Aeronautics and Astronautics. 1993. . OCLC 27918651. Mathematical programming for industrial engineers. Avriel, M., Golany, B. New York: Marcel Dekker. 1996. . OCLC 34474279. Hans., Eschenauer, (1997). Applied structural mechanics : fundamentals of elasticity, load-bearing structures, structural optimization : including exercises. Olhoff, Niels., Schnell, W. Berlin: Springer. . OCLC 35184040. 1956-, Belegundu, Ashok D., (2011). Optimization concepts and applications in engineering. Chandrupatla, Tirupathi R., 1944- (2nd ed.). New York: Cambridge University Press. . OCLC 746750296. Okechi., Onwubiko, Chinyere (2000). Introduction to engineering design optimization. Upper Saddle River, NJ: Prentice-Hall. . OCLC 41368373. Optimization in action : proceedings of the Conference on Optimization in Action held at the University of Bristol in January 1975. Dixon, L. C. W. (Laurence Charles Ward), 1935-, Institute of Mathematics and Its Applications. London: Academic Press. 1976. . OCLC 2715969. P., Williams, H. (2013). Model building in mathematical programming (5th ed.). Chichester, West Sussex: Wiley. . OCLC 810039791. Integrated design of multiscale, multifunctional materials and products. McDowell, David L., 1956-. Oxford: Butterworth-Heinemann. 2010. . OCLC 610001448. M.,, Dede, Ercan. Multiphysics simulation : electromechanical system applications and optimization. Lee, Jaewook,, Nomura, Tsuyoshi,. London. . OCLC 881071474. 1962-, Liu, G. P. (Guo Ping), (2001). Multiobjective optimisation and control. Yang, Jian-Bo, 1961-, Whidborne, J. F. (James Ferris), 1960-. Baldock, Hertfordshire: Research Studies Press. . OCLC 54380075. Structural Topology Optimization Design
Design optimization
[ "Engineering" ]
2,037
[ "Design" ]
57,478,553
https://en.wikipedia.org/wiki/PC-SAFT
PC-SAFT (perturbed chain SAFT) is an equation of state that is based on statistical associating fluid theory (SAFT). Like other SAFT equations of state, it makes use of chain and association terms developed by Chapman et al. from perturbation theory. However, unlike earlier SAFT equations of state that used unbonded spherical particles as a reference fluid, it uses spherical particles in the context of hard chains as the reference fluid for the dispersion term. PC-SAFT was developed by Joachim Gross and Gabriele Sadowski, and was first presented in their 2001 article. Further research extended PC-SAFT for use with associating and polar molecules, and it has also been modified for use with polymers. A version of PC-SAFT has also been developed to describe mixtures with ionic compounds (called electrolyte PC-SAFT or ePC-SAFT). Form of the Equation of State The equation of state is organized into terms that account for different types of intermolecular interactions, including terms for the hard-chain reference, dispersion, association, polar interactions, and ions. The equation is most often expressed in terms of the residual Helmholtz energy, because all other thermodynamic properties can be easily found by taking the appropriate derivatives of the Helmholtz energy:
$\tilde{a}^{res} = \tilde{a}^{hc} + \tilde{a}^{disp} + \tilde{a}^{assoc} + \tilde{a}^{polar} + \tilde{a}^{ion}$
Here $\tilde{a}^{res}$ is the molar residual Helmholtz energy. Hard Chain Term
$\tilde{a}^{hc} = \bar{m}\,\tilde{a}^{hs} - \sum_{i=1}^{N} x_i\,(m_i - 1)\,\ln g_{ii}^{hs}(\sigma_{ii}), \qquad \bar{m} = \sum_{i=1}^{N} x_i m_i$
where $N$ is the number of compounds; $x_i$ is the mole fraction; $\bar{m}$ is the average number of segments in the mixture; $\tilde{a}^{hs}$ is the Boublík–Mansoori–Leland–Carnahan–Starling hard-sphere equation of state; and $g_{ii}^{hs}$ is the hard-sphere radial distribution function at contact. References Engineering thermodynamics
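The hard-chain contribution above lends itself to a compact numerical sketch. The helper below assumes that the reduced hard-sphere Helmholtz energy and the site–site radial distribution functions at contact have already been evaluated elsewhere (e.g. from the BMCSL expressions); the compositions and segment numbers in the example are arbitrary illustrative values, not fitted PC-SAFT parameters.

```python
import numpy as np

def hard_chain_term(x, m, a_hs, g_hs_contact):
    """Reduced hard-chain term: a_hc = m_bar * a_hs - sum_i x_i (m_i - 1) ln g_ii^hs.

    x            : mole fractions of the N components
    m            : segment numbers m_i of the N components
    a_hs         : reduced hard-sphere Helmholtz energy (assumed pre-computed)
    g_hs_contact : hard-sphere radial distribution functions at contact, g_ii^hs
    """
    x = np.asarray(x, dtype=float)
    m = np.asarray(m, dtype=float)
    g = np.asarray(g_hs_contact, dtype=float)
    m_bar = np.sum(x * m)  # mean segment number of the mixture
    return m_bar * a_hs - np.sum(x * (m - 1.0) * np.log(g))

# Illustrative (made-up) values for a binary mixture:
x = [0.4, 0.6]      # mole fractions
m = [1.6, 2.4]      # segment numbers
a_hs = 0.85         # assumed pre-computed hard-sphere term
g_hs = [3.2, 3.5]   # assumed contact values of g_ii^hs

print(hard_chain_term(x, m, a_hs, g_hs))
```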
PC-SAFT
[ "Physics", "Chemistry", "Engineering" ]
348
[ "Engineering thermodynamics", "Thermodynamics", "Mechanical engineering" ]
42,656,205
https://en.wikipedia.org/wiki/Enlist%20Weed%20Control%20System
The Enlist Weed Control System is an agricultural system that includes seeds for genetically modified crops that are resistant to Enlist (a broadleaf herbicide with two active agents, 2,4-Dichlorophenoxyacetic acid (2,4-D) and glyphosate) and the Enlist herbicide; spraying the herbicide will kill weeds but not the resulting crop. The system was developed by Dow AgroSciences, part of Dow Chemical Company. In October 2014 the system was registered for restricted use in Illinois, Indiana, Iowa, Ohio, South Dakota and Wisconsin by the US Environmental Protection Agency. In 2013, the system was approved by Canada for the same uses. The Enlist approach was developed to replace the "Roundup-Ready" system that was introduced in 1996 by Monsanto and which has become less useful with the rise of glyphosate-resistant weeds. Enlist Duo Enlist Duo is an herbicide that contains the choline form of 2,4-Dichlorophenoxyacetic acid (2,4-D) and glyphosate plus an unknown number of unlisted ingredients. Dow added chemicals to the mixture in what it termed "Colex-D technology". 2,4-D is one of the most widely used herbicides in the world. 2,4-D is volatile and by EPA assessment is a hazardous air pollutant that is difficult to contain. According to Dow, the Colex-D formulation reduces drift and damage from evaporation. As of 2013 glyphosate was the world's largest-selling herbicide, with sales driven by glyphosate-resistant genetically modified crops. Other countries assessing the system include Brazil, Argentina and various food importing countries. Enlist crops As of April 2014 maize and soybeans resistant to 2,4-D and glyphosate had been approved in Canada, and in September 2014 the USDA approved the same two crops. Criticism 2,4-D was one of the main ingredients of Agent Orange, a defoliant used during the Vietnam War that was blamed for many health problems. According to a Reuters article the main health problems arose from TCDD contamination created in the synthesis of the other Agent Orange component, 2,4,5-T The U.S. Environmental Protection Agency has moved to rescind its approval due to conflicting claims from the manufacturer about synergistic effects from mixing the two herbicides. Dow had told the EPA that the combination of the two herbicides didn't enhance their toxicity to plants, but an earlier patent application from Dow claimed that it did. References External links Enlist Weed Control System EPA Factsheet Herbicides Genetic engineering Genetically modified organisms in agriculture Dow Chemical Company
Enlist Weed Control System
[ "Chemistry", "Engineering", "Biology" ]
567
[ "Biological engineering", "Herbicides", "Genetic engineering", "Molecular biology", "Biocides" ]
48,947,106
https://en.wikipedia.org/wiki/UFAW%20Handbook
The UFAW Handbook is a manual about care of animals used in animal testing. It is presented by the Universities Federation for Animal Welfare. Reviews Editions of the text have been reviewed in 1948, 1968, 1978, and more. References External links Wiley's own sales page Animal testing techniques Medical manuals 1948 non-fiction books
UFAW Handbook
[ "Chemistry" ]
68
[ "Animal testing", "Animal testing techniques" ]
48,950,101
https://en.wikipedia.org/wiki/Surveying%20in%20North%20America
Surveying in North America is heavily influenced by the United States Public lands survey system. It inherits the basis of its land tenure from the United Kingdom, as well as the other countries that established colonies, namely Spain and France. History The first European Explorations of North America were quickly followed by territorial claims. The original colonies that made up the United States were granted royal charters that described the limits of the lands where the settlements could be located. Since much of the lands were unknown to Europeans, the grants allocated sweeping areas and were often later amended or superseded. As the population grew and the land was explored, the state borders were defined, like the Mason–Dixon line, finalized in 1767. The Lewis and Clark Expedition included a preliminary survey of the features of the western United States, resulting in maps of geographical features. Organization The current geodetic model of the Earth used in the US is the North American Datum 1983, often called NAD83. The system is used to define horizontal co-ordinates of reference markers all over the US. Although created as a geocentric datum which originates at the center of the Earth, more recent models of the Earth have shown the origin to be 2.2 m off the center of the Earth. Elevation is recorded against the North American Vertical Datum 1988 (NAVD 88), which uses as its origin point Father Point, in Quebec, Canada. Error in the system is approximately 0.5 mm per mile, resulting in a total error of approximately 1 m from one corner of the continental US to the other. North American bearings are quadrant bearings. Licensing Licensing requirements vary with jurisdiction, and are commonly consistent within national borders. Canada Land surveyors register to work in their respective province. The designation for a land surveyor breaks down by province. It follows the rule whereby the first letter indicates the province, followed by L.S. There is also a designation C.L.S. or Canada lands surveyor. They have the authority to work on Canada lands, which include Indian Reserves, National Parks, the three territories, and offshore lands. The Canadian version of the PLSS is the Dominion Land Survey. In Canada, most provinces have Common Law legal systems for the management and regulation of land and personal property being a former dominion of the British Empire, while in Quebec, a mixed legal system also combines a large amount of Civil Law and traditions in dealing with property going back to its founding and subsequent expansion as the central hub of New France. Mexico The public land survey systems carried out and maintained in the United States and Canada have influenced and affected how the modern Mexican government licenses and regulates surveying, and how it has undertaken the monumental task of the physical surveying, mapping, and cataloging of public and private land throughout the country. Mexico has a land surveying system based upon Civil Law, inherited from the Colonial expansion starting with the first exploration and campaigns of the Conquistadors, right through to the wave of settlement coming over from Spain, and as such certain rights differ and are looked at dissimilarly with respect to private and personal property compared to countries with English Common Law systems such as the United States. 
It is the same source and tradition of some of the civil and land laws found throughout the states in the Southwestern United States, themselves being formerly part of the sparsely populated frontier territory of the Spanish Empire. United States Most of the United States recognizes surveying as a distinct profession apart from engineering. Licensing requirements vary by state, but they have components of education, experience, and examinations. Most states insist upon the basic qualification of a degree in surveying, plus experience and examination requirements. In the past, candidates completed an apprenticeship before taking a series of examinations to gain licensure. The licensing process follows two phases. Upon graduation, the candidate may take the Fundamentals of Surveying (FS) exam. If they pass and meet the other requirements they become a surveying intern (SI). Upon certification as an SI, the candidate then needs to gain on-the-job experience to become eligible for the second phase. In most states, this is the Principles and Practice of Land Surveying (PS) exam and a state-specific examination. SIs were formerly called surveyors in training (SIT), which they are still known by in some states. Licensed surveyors usually denote themselves with post nominals. The letters PLS (professional land surveyor), PS (professional surveyor), LS (land surveyor), RLS (registered land surveyor), RPLS (Registered Professional Land Surveyor), or PSM (professional surveyor and mapper) follow their names, depending upon their jurisdiction of registration. Within the United States a majority of states' land law systems are derived from English Common Law, while several states still retain at least some of their land laws from Civil Law, being settled and established originally as Spanish and French territories, such as Louisiana, Texas, Arizona, etc... References North America North America North America
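Since the article notes that North American bearings are quadrant bearings, the sketch below shows the usual conversion from a quadrant bearing (e.g. N 30° E or S 45° W) to a whole-circle azimuth measured clockwise from north. The function name and example bearings are illustrative; verify the convention against the governing survey standard before relying on it.

```python
def quadrant_to_azimuth(start, angle_deg, end):
    """Convert a quadrant bearing such as 'N 30 E' to an azimuth in degrees clockwise from north."""
    start, end = start.upper(), end.upper()
    if (start, end) == ("N", "E"):
        return angle_deg % 360.0
    if (start, end) == ("S", "E"):
        return (180.0 - angle_deg) % 360.0
    if (start, end) == ("S", "W"):
        return (180.0 + angle_deg) % 360.0
    if (start, end) == ("N", "W"):
        return (360.0 - angle_deg) % 360.0
    raise ValueError("quadrant must be one of NE, SE, SW, NW")

# Examples: N 30 E -> 30 degrees, S 45 W -> 225 degrees
print(quadrant_to_azimuth("N", 30.0, "E"))
print(quadrant_to_azimuth("S", 45.0, "W"))
```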
Surveying in North America
[ "Engineering" ]
1,003
[ "Surveying", "Civil engineering" ]
48,951,615
https://en.wikipedia.org/wiki/Syrian%20hamster%20behavior
Syrian hamster behavior refers to the ethology of the Syrian hamster (Mesocricetus auratus). Sleeping habits Syrian hamsters have a sleep cycle that lasts about 10 to 12 minutes. In the laboratory, Syrian hamsters are observed to be nocturnal, and in their natural circadian rhythm they wake and sleep on a consistent schedule. In all kinds of laboratory settings, hamsters do 80% of their routine activities at night. Hamsters are most active early in the night, then become less active as the night passes. A study of Syrian hamsters in the wild found that they were active almost exclusively in the daytime, which is a surprising difference from behavior in the laboratory. The sleeping behavior of wild hamsters is not well understood. Reproduction The female Syrian hamster has anatomic features that distinguish it from other animals. Females mature at 8–10 weeks of age and have a 4-day estrous cycle. Female Syrian hamsters show mate preference before they engage in copulation by displaying vaginal marking, which is known to solicit males. A female often chooses to mate with an alpha male, who will flank mark (a scent-marking behaviour associated with aggression and competition) more frequently than any subordinate males present. Male offspring are at higher risk than female offspring of enduring effects from maternal social stress. In the presence of a dominant pregnant female, subordinate pregnant female hamsters have the ability to reabsorb or spontaneously abort their young (most often males) in utero. The subordinate females produce smaller litters overall, and any male offspring they do produce will be smaller in size than those produced by the dominant female. After a mother hamster gives birth, normal behavior from the mother in the postpartum period can include establishing a maternal bond with the babies, the mother being aggressive to protect the babies, or infanticide of her young by the mother. The male Syrian hamster requires both hormonal cues and chemosensory cues in order to engage in copulation. Further, the integration of steroid cues (i.e. testosterone) and odour cues (relayed through the olfactory bulb) is crucial for mating. It has also been shown that within the medial amygdala, the anterior and posterior regions work together to process the stimuli (odors), showing that their mating behaviour relies on the main olfactory system's communication to nuclei in the amygdala regions. Their behaviour has demonstrated this phenomenon, as they are attracted to the odor of female hamsters' vaginal discharge. Males have even demonstrated mounting behaviour on other males who are scented with the female vaginal discharge. When one male and two females are placed in the same environment, the male is likely to engage in copulation with both females as it provides him with a reproductive advantage. In all observed scenarios where there was one male and two females, the male did not demonstrate a preference for either female and engaged in copulation with both the females present. There has been no reproductive disadvantage to the female when another female is present, other than decreased stimulation as compared to a one-male one-female situation. Interactions with others Syrian hamsters acquire learned helplessness when they are bullied a few times by a larger animal. Syrian hamsters can regain lost confidence when some time passes without experiencing bullying.
Interactions between male and female Syrian hamsters are influenced by the estrous cycle - in addition, their behaviour changes over the course of the 4-day cycle. Parameters for interactions that have been studied include sniffing, approaching, leaving, and following each other (male/female pair). Specific to the male hamster, his response to the female can be measured by mounting behaviour, intromission and ejaculation. Under semi-natural conditions, the mating behaviours of male and female hamsters were observed during the 4-day period of estrous. When they were allowed free interaction, females displayed lordosis in their own living area 93% of the time, where after 60 minutes of copulation the male would be driven out by the female while she retrieved his food supply and forced him into a corner farthest away from her nest via displays of aggressive behaviour. When a Syrian hamster is introduced to a stranger hamster in its own cage, they perform a standard sequence of acts and postures (also known as a fixed action pattern) that are agonistic by nature. It has been observed that one hamster becomes the dominant and the other becomes submissive, as shown by their posture. The stranger hamster was observed to be the dominant in the majority of situations, and the resident hamster was the submissive. Feeding behaviour Food-anticipatory activity (FAA), meaning increased locomotion due to restricted feeding schedules (often found in laboratory settings), is a behaviour seen in many rodents. The Syrian hamster is one of only few exceptions to this activity. It has been found that the arcuate nucleus, ventromedial nucleus, and dorsomedial nucleus are all involved in the presence of FAA, and that Syrian hamsters in the laboratory do not demonstrate FAA because of the presence of light and the typical light cycles used in experiments. In a study of their food-hoarding behaviour, Syrian hamsters were given a limited access to food and expected to consume more in each sitting than they typically would. Instead, they exhibited hoarding behaviour where they took the food during the given time period and continuously ate the food that they hoarded as though they were on a free-fed schedule. This allowed them to maintain typical body weight, and mimic the adaptive feeding strategies they may use in their natural habitats. Females have shown signs of anorexia and anxiety when separated from social interactions. Social separation of hamsters has a bias toward females, thus providing a model for the differences between sexes when experiencing anorexia and anxiety in their adulthood. Laboratory behaviour Although most all hamsters display wire-gnawing behaviour in all laboratory cage sizes, it has been shown that the more restricted the cage size, the more their gnawing behaviour increases. Additionally, hamsters in smaller cages used the roof of their house as a platform more often than those in a larger cage which may suggest that they are trying to create more space for themselves within their cage. In another study, the bedding depth of hamsters and its influence on their stress and wire-gnawing behaviour was tracked by assigning 3 groups different bedding depths - 10 cm, 40 cm, and 80 cm. This is due to the natural instinct that laboratory rodents have to dig. Hamsters who had the 10 cm deep bedding showed significantly more wire-gnawing than any others, and the 80 cm deep bedding group demonstrated no wire-gnawing behaviour. 
This research demonstrates the importance of having enough bedding for the hamsters to indulge their natural tendencies and have enough material to dig. The behaviour and responses of Syrian hamsters have been observed and tested for a variety of medical-related studies as well, such as the development of the palate and incidence of cleft palate, the influence of retinoic acid on physical malformations in fetuses, immune responses to diseases like hookworm, and the effects of ingesting ethanol solution on liver composition and fatty acid accumulation. References Behavior Mammal behavior Ethology
Syrian hamster behavior
[ "Biology" ]
1,496
[ "Behavior by type of animal", "Behavior", "Mammal behavior", "Behavioural sciences", "Ethology" ]
61,679,626
https://en.wikipedia.org/wiki/Rotating%20wall%20technique
The rotating wall technique (RW technique) is a method used to compress a single-component plasma (a cold dense gas of charged particles) confined in an electromagnetic trap. It is one of many scientific and technological applications that rely on storing charged particles in vacuum. This technique has found extensive use in improving the quality of these traps and in the tailoring of both positron and antiproton (i.e. antiparticle) plasmas for a variety of end uses. Overview Single-component plasmas (SCP), which are a type of nonneutral plasma, have many uses, including the study of a variety of plasma physics phenomena and the accumulation, storage and delivery of antiparticles. Applications include the creation and study of antihydrogen, beams to study the interaction of positrons with ordinary matter and to create dense gases of positronium (Ps) atoms, and the creation of Ps-atom beams. The “rotating wall (RW) technique” uses rotating electric fields to compress SCP in Penning–Malmberg (PM) traps radially, to increase the plasma density and/or to counteract the tendency of the plasma to diffuse radially out of the trap. It has proven crucial in improving the quality and hence utility of trapped plasmas and trap-based beams. Principles of operation For this application, a plasma is stored in a PM trap in a uniform magnetic field, B. The charge cloud is typically cylindrical in shape, with its dimension along B large compared to the radius. This charge produces a radial electric field which would tend to push the plasma outward. To counteract this, the plasma spins about the axis of symmetry, producing a Lorentz force to balance that due to the electric field, and the plasma takes the form of a spinning charged rod. Such cold, single-component plasmas in PM traps can come to thermal equilibrium and rotate as a rigid body at frequency f_E ≈ en/(4πε₀B) (in the low-density limit), where n is the plasma density, e the elementary charge, and ε₀ the permittivity of free space. As illustrated in Fig. 1, the RW technique uses an azimuthally segmented cylindrical electrode covering a portion of a plasma. Phased, sinusoidal voltages at frequency f_RW are applied to the segments. The result is a rotating electric field perpendicular to the axis of symmetry of the plasma. This field induces an electric dipole moment in the plasma and hence a torque. Rotation of the field in the direction of, and faster than, the natural rotation of the plasma acts to spin the plasma faster, thereby increasing the Lorentz force and producing plasma compression (cf. Figs. 2 and 3). An important requirement for plasma compression using the RW technique is good coupling between the plasma and the rotating field. This is necessary to overcome asymmetry-induced transport, which acts as a drag on the plasma and tends to oppose the RW torque. For high-quality PM traps with little asymmetry-induced transport, one can access a so-called “strong drive regime”. In this case, application of a rotating electric field at frequency f_RW results in the plasma spinning up to the applied frequency, namely f_E = f_RW (cf. Fig. 3). This has proven enormously useful as a way to fix the plasma density simply by adjusting f_RW. History The RW technique was first developed by Huang et al. to compress a magnetized Mg+ plasma. The technique was soon thereafter applied to electron plasmas, where a segmented electrode, such as that described above, was used to couple to waves (Trivelpiece-Gould modes) in the plasma. The technique was also used to phase-lock the rotation frequency of laser-cooled single-component ion crystals.
The first use of the RW technique for antimatter was with small positron plasmas, without coupling to modes. The strong drive regime, which was discovered somewhat later using electron plasmas, has proven to be more useful in that tuning to (and tracking) plasma modes is unnecessary. A related technique has been developed to compress single-component charged gases in PM traps (i.e., charge clouds not in the plasma regime). Uses The RW technique has found extensive use in manipulating antiparticles in Penning–Malmberg traps. One important application is the creation of specially tailored antiparticle beams for atomic physics experiments. Frequently one would like a beam with a large current density. In this case, one compresses the plasma with the RW technique before delivery. This has been crucial in experiments to study dense gases of positronium (Ps) atoms and formation of the Ps2 molecule (e+e−e+e−). It has also been important in the creation of high-quality Ps-atom beams. The RW technique is used in three ways in the creation of low-energy antihydrogen atoms. Antiprotons are compressed radially by sympathetic compression with electrons co-loaded in the trap. The technique has also been used to fix the positron density before the positrons and antiprotons are combined. Recently it was discovered that one could set all of the important parameters of the electron and positron plasmas for antihydrogen production using the RW to fix the plasma density and evaporative cooling to cool the plasma and fix the on-axis space-charge potential. The result was greatly increased reproducibility for antihydrogen production. In particular, this technique, dubbed SDREVC (strong drive regime evaporative cooling), was successful to the extent that it increased the number of trappable antihydrogen atoms by an order of magnitude. This is particularly important in that, while copious amounts of antihydrogen can be produced, the vast majority are at high temperature and cannot be trapped in the small well depth of the minimum-magnetic-field atom traps. See also Positron Antiproton Penning trap Non-neutral plasmas Annihilation Positronium Antihydrogen References Plasma technology and applications
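To give a feel for the magnitudes involved, the sketch below evaluates the low-density rigid-rotor (E×B) rotation frequency quoted above, f_E ≈ en/(4πε₀B), for plasma parameters broadly typical of positron plasmas in Penning–Malmberg traps; the density and magnetic field values are illustrative assumptions, not data from a specific experiment. For these numbers the result is of order 1 MHz.

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m

def rigid_rotor_frequency(density_m3, b_tesla):
    """Low-density E x B rotation frequency f_E = e*n / (4*pi*eps0*B), in Hz."""
    return E_CHARGE * density_m3 / (4.0 * math.pi * EPS0 * b_tesla)

# Illustrative parameters: n = 1e15 m^-3, B = 1 T
n = 1.0e15
B = 1.0
print(f"f_E ~ {rigid_rotor_frequency(n, B):.3e} Hz")
```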
Rotating wall technique
[ "Physics" ]
1,227
[ "Plasma technology and applications", "Plasma physics" ]
61,691,092
https://en.wikipedia.org/wiki/Dyakonov%E2%80%93Voigt%20wave
A Dyakonov–Voigt wave (also known as DV wave and Dyakonov–Voigt surface wave) is a distinctive type of surface electromagnetic light wave that results from a particular manipulation of crystals. It was discovered in 2019 by researchers from the University of Edinburgh and Pennsylvania State University and its unique properties were described based on models involving equations developed in the mid-1800s by mathematician and physicist James Clerk Maxwell. Its discoverers found that the wave is produced at the specific interface between natural or synthetic crystals and another material, such as water or oil. Such DV waves were found to travel in a single direction, and decay as they moved away from the interface. Other types of such surface waves, like Dyakonov surface waves (DSWs), travel in multiple directions, and decay more quickly. DV waves decay as "the product of a linear and an exponential function of the distance from the interface in the anisotropic medium," but the fields of the Dyakonov surface waves decay "only exponentially in the anisotropic medium". Research co-leader Tom Mackay noted: "Dyakonov–Voigt waves represent a step forward in our understanding of how light interacts with complex materials, and offer opportunities for a range of technological advancements." Applications of the newly found waves may include biosensor improvements for blood sample screening, and fiber optic circuit developments, to permit a better transfer of data. This wave is now classified as an exceptional surface wave. See also Dyakonov surface waves Maxwell's equations References Condensed matter physics Surface waves
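The qualitative difference in decay described above can be made concrete with a small numerical comparison: a DV-type envelope that decays as the product of a linear and an exponential function of distance, versus a purely exponential DSW-type envelope. The decay length and coefficients below are arbitrary illustrative numbers, not values from the 2019 analysis.

```python
import numpy as np

delta = 1.0                    # illustrative decay length (arbitrary units)
z = np.linspace(0.0, 5.0, 6)   # distance from the interface

dsw_envelope = np.exp(-z / delta)                      # purely exponential decay
dv_envelope = (1.0 + z / delta) * np.exp(-z / delta)   # linear * exponential decay

for zi, dsw, dv in zip(z, dsw_envelope, dv_envelope):
    print(f"z = {zi:.1f}: DSW-like {dsw:.3f}, DV-like {dv:.3f}")
```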
Dyakonov–Voigt wave
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
331
[ "Physical phenomena", "Surface waves", "Phases of matter", "Materials science", "Waves", "Condensed matter physics", "Matter" ]
54,161,658
https://en.wikipedia.org/wiki/Jessica%20Lovering
Jessica Lovering is an American astrophysicist, researcher and Director of Energy at the Breakthrough Institute. She supports the innovative development of new nuclear power plants in response to climate change. She also sits on the Advisory Committee of the Nuclear Innovation Alliance, and was a speaker at Nuclear Innovation Bootcamp at the University of California, Berkeley in 2016. Her biography at ClimateOne states that Lovering "works to change how people think about energy and the environment". Her written work has featured in various publications, including journals Issues in Science and Technology, Science and Public Policy, Foreign Affairs and Energy policy. Websites featuring her work include various nuclear energy blogs and EnergyPost.eu. She has worked as a researcher on the documentary film Pandora's Promise and appeared in the TV series Abandoned. References Nuclear power American astrophysicists Living people Year of birth missing (living people)
Jessica Lovering
[ "Physics" ]
177
[ "Power (physics)", "Physical quantities", "Nuclear power" ]
41,240,510
https://en.wikipedia.org/wiki/Morse/Long-range%20potential
The Morse/Long-range potential (MLR potential) is an interatomic interaction model for the potential energy of a diatomic molecule. Because the regular Morse potential is so simple (it has only three adjustable parameters), it is very limited in its applicability in modern spectroscopy. The MLR potential is a modern version of the Morse potential which has the correct theoretical long-range form of the potential naturally built into it. It has been an important tool for spectroscopists to represent experimental data, verify measurements, and make predictions. It is useful for its extrapolation capability when data for certain regions of the potential are missing, its ability to predict energies with accuracy often better than the most sophisticated ab initio techniques, and its ability to determine precise empirical values for physical parameters such as the dissociation energy, equilibrium bond length, and long-range constants. Cases of particular note include: the c-state of dilithium, where the MLR potential was successfully able to bridge a gap of more than 5000 cm−1 in experimental data. Two years later it was found that the MLR potential was able to successfully predict the energies in the middle of this gap, correctly within about 1 cm−1. The accuracy of these predictions was much better than that of the most sophisticated ab initio techniques at the time. the A-state of Li2, where Le Roy et al. constructed an MLR potential which determined the C3 value for atomic lithium to a precision an order of magnitude higher than that of any previously measured atomic oscillator strength. This lithium oscillator strength is related to the radiative lifetime of atomic lithium and is used as a benchmark for atomic clocks and measurements of fundamental constants. the a-state of KLi, where the MLR was used to build an analytic global potential successfully despite there being only a small number of levels observed near the top of the potential. Historical origins The MLR potential is based on the classic Morse potential which was first introduced in 1929 by Philip M. Morse. A primitive version of the MLR potential was first introduced in 2006 by Robert J. Le Roy and colleagues for a study on N2. This primitive form was used on Ca2, KLi and MgH, before the more modern version was introduced in 2009. A further extension of the MLR potential referred to as the MLR3 potential was introduced in a 2010 study of Cs2, and this potential has since been used on HF, HCl, HBr and HI. Function The Morse/Long-range potential energy function is of the form
$V(r) = D_e \left(1 - \frac{u(r)}{u(r_e)}\, e^{-\beta(r)\, y_n(r)}\right)^2$
where $V(r) \simeq D_e - u(r)$ for large $r$, so $u(r)$ is defined according to the theoretically correct long-range behavior expected for the interatomic interaction (typically a sum of inverse-power terms, $u(r) = \sum_m C_m/r^m$). $D_e$ is the depth of the potential at equilibrium. This long-range form of the MLR model is guaranteed because the argument of the exponent, $\beta(r)\,y_n(r)$, is defined to have the long-range behavior
$\lim_{r\to\infty} \beta(r)\,y_n(r) = \beta_\infty \equiv \ln\!\left(2D_e/u(r_e)\right),$
where $r_e$ is the equilibrium bond length. There are a few ways in which this long-range behavior can be achieved; the most common is to make $\beta(r)$ a polynomial in the dimensionless radial variable
$y_n(r) = \frac{r^n - r_e^n}{r^n + r_e^n},$
constrained so that $\beta(r)$ becomes $\beta_\infty$ at long range (in its simplest form, $\beta(r) = \beta_\infty\, y_n(r) + \left[1 - y_n(r)\right]\sum_i \beta_i\, y_n(r)^i$), where n is an integer greater than 1, whose value is defined by the model chosen for the long-range potential $u(r)$. It is clear to see that $y_n(r) \to 1$ as $r \to \infty$, so that $\beta(r) \to \beta_\infty$. Applications The MLR potential has successfully summarized all experimental spectroscopic data (and/or virial data) for a number of diatomic molecules, including: N2, Ca2, KLi, MgH, several electronic states of Li2, Cs2, Sr2, ArXe, LiCa, LiNa, Br2, Mg2, HF, HCl, HBr, HI, MgD, Be2, BeH, and NaH.
More sophisticated versions are used for polyatomic molecules. It has also become customary to fit ab initio points to the MLR potential, to achieve a fully analytic ab initio potential and to take advantage of the MLR's ability to incorporate the correct theoretically known short- and long-range behavior into the potential (the latter usually being of higher accuracy than the molecular ab initio points themselves because it is based on atomic ab initio calculations rather than molecular ones, and because features like spin-orbit coupling which are difficult to incorporate into molecular ab initio calculations can more easily be treated in the long-range). MLR has been used to represent ab initio points for KLi and KBe. See also Dilithium Morse potential Lennard-Jones potential References Thermodynamics Chemical bonding Intermolecular forces Computational chemistry Theoretical chemistry Quantum mechanical potentials
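A minimal numerical sketch of the MLR form reconstructed above is given below. It evaluates V(r) for a user-supplied long-range function u(r) = Σ C_m/r^m and a short polynomial exponent; all parameter values in the example are placeholders chosen only to produce a well-shaped curve, not fitted constants for any real molecule.

```python
import numpy as np

def u_lr(r, C):
    """Long-range tail u(r) = sum_m C_m / r^m for a dict {m: C_m}."""
    return sum(Cm / r**m for m, Cm in C.items())

def mlr_potential(r, De, re, beta_coeffs, n, C):
    """Morse/Long-range potential V(r) = De * (1 - u(r)/u(re) * exp(-beta(r)*y(r)))**2."""
    y = (r**n - re**n) / (r**n + re**n)          # dimensionless radial variable
    beta_inf = np.log(2.0 * De / u_lr(re, C))    # enforced long-range limit of beta(r)
    poly = sum(b * y**i for i, b in enumerate(beta_coeffs))
    beta = beta_inf * y + (1.0 - y) * poly       # constrained polynomial exponent
    return De * (1.0 - (u_lr(r, C) / u_lr(re, C)) * np.exp(-beta * y)) ** 2

# Placeholder parameters (illustrative only)
De, re, n = 1.0, 3.0, 3
C = {6: 10.0}              # single C6/r^6 long-range term
beta_coeffs = [0.5, -0.1]  # low-order polynomial in y

r = np.linspace(2.0, 12.0, 5)
print(mlr_potential(r, De, re, beta_coeffs, n, C))
```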
Morse/Long-range potential
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
949
[ "Molecular physics", "Quantum mechanics", "Intermolecular forces", "Materials science", "Quantum mechanical potentials", "Computational chemistry", "Theoretical chemistry", "Condensed matter physics", "Thermodynamics", "nan", "Chemical bonding", "Dynamical systems" ]
41,246,328
https://en.wikipedia.org/wiki/Debt-lag
Debt-Lag is a condition which results from overuse of one's credit card or other forms of credit while travelling. The debt itself can refer to the amount spent in the lead up to travelling, during the trip and any unexpected costs which come about from that trip, such as cross currency conversion fees and foreign ATM access charges. The condition of debt-lag may last months or even years after a person's trip is complete, as long as the debt accrued within the travel period is still outstanding. The term debt-lag is similar to jet lag in that both have to do with travel, however travellers can be affected by debt-lag without leaving their time zone. Jet lag refers to the condition sustained by the body after rapid long-distance travel which requires days of adjustment upon return. Debt-lag works in much the same way as it refers to a traveller's need to adjust upon return as well, except it is more directly related to the traveller's financial situation than their biological one. Causes Debt-lag is primarily attributed to spending on credit cards while abroad, including paying for items before travel such as transportation costs and accommodation. It is estimated that roughly 85% of travellers use their credit card while on holiday, presumably because of the ease of use, wide acceptance, ability to earn rewards points and security features of credit cards. Debt-lag is not strictly tied to the distance or duration of a trip, but longer trips will typically cost more and thus cause more severe cases of debt-lag. It is a financial problem more commonly found in men than in women, with over two in five men suffering credit card shock upon return compared to less than one in three women. The speed at which a person recovers from debt-lag is largely dependent on the individual's financial situation and any extenuating circumstances met with while travelling. Lack of travel insurance in cases of emergency and/or disruption to travel can exacerbate debt-lag, especially with one in four travelling without cover. One of the most common unexpected costs incurred while travelling overseas is last-minute transport or accommodation changes, which could be covered by a comprehensive travel insurance policy. A common cause of debt-lag is tied to fees and charges by financial institutions to access their services while overseas. In fact, these hidden charges, such as currency conversion fees and foreign ATM access charges, are cited as the top "travel rip-offs" experienced by travellers. There is no maximum to debt-lag attributed debt, although it is largely limited by an individual's credit limit and ability to accrue debt. Inversely, there is no limit to the amount of time it takes to resolve debt-lag as repayment habits, compound interest charges and the ability to continually accrue debt even after the trip is complete can make the debt rollover perpetually. Over a third of people would holiday again before paying down their debt-lag induced debt, lengthening the duration of this condition. Management Budgeting and proper care of one's finances is the strongest stimulus for ridding oneself of debt-lag. This includes preparation, research and budgeting around optimal financial products for travel, both before and after travel. Before travelling Many banks and financial institutions will have specific travel products on offer such as prepaid travel money cards or credit cards with little or no currency conversion fees and complimentary travel insurance. 
These kinds of financial products will often be far better suited to travel than typical banking products and will minimise the amount spent on fees and charges associated with travel. Another option for travellers to take their money with them is traveller's checks. These are a very secure method of taking money abroad but aren't widely accepted. During Travel There are a few product features that will affect how travellers should use their financial products overseas. These include: credit card purchase rates and cash advance rates, prepaid travel card reload fees, foreign ATM usage fees and currency conversion fees. Knowing these fees and the circumstances in which they are charged can help travellers avoid them and thus lower the chances of severe debt-lag upon return. After Travelling The period directly after travel is crucial to minimising the long-term effects of debt-lag. If the debt is paid off in a timely manner, the traveller can avoid paying interest, especially if they incorporated interest-free days into their budget before travelling. While the majority of travellers believe they can pay back their debt within just a few months, those who fail to do so can find themselves paying thousands in interest on top of having to pay back their initial debt amount. A balance transfer of the debt to a low-rate or zero-percent credit card can help alleviate the stress of long-term debt-lag. References Travel Debt
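As a rough illustration of the repayment arithmetic discussed above, the following sketch uses hypothetical figures and a simplified monthly-compounding model (not data from any cited survey) to estimate how many months a travel debt takes to clear, and how much interest accrues, under a fixed monthly repayment.

def months_to_clear(balance, annual_rate, monthly_payment):
    """Simple monthly-compounding model of paying down a travel debt."""
    months, interest_paid = 0, 0.0
    monthly_rate = annual_rate / 12.0
    while balance > 0:
        interest = balance * monthly_rate      # interest charged this month
        interest_paid += interest
        balance = balance + interest - monthly_payment
        months += 1
        if months > 600:                       # guard against repayments that never clear the debt
            raise ValueError("monthly payment too small to ever clear the balance")
    return months, interest_paid

# Hypothetical example: a 5,000 holiday debt at 20% p.a., repaid at 250 per month.
months, interest = months_to_clear(5000.0, 0.20, 250.0)
print(f"Cleared in {months} months, paying about {interest:.0f} in interest.")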
Debt-lag
[ "Physics" ]
976
[ "Physical systems", "Transport", "Travel" ]
41,246,699
https://en.wikipedia.org/wiki/OX1001
OX1001 (saquinavir-NO) is an experimental drug being developed by OncoNOx, currently undergoing clinical studies and investigations for the treatment of cancer. OX1001 is a nitrate ester analog of the approved HIV protease inhibitor, saquinavir. This modification increases the anti-cancer properties while decreasing the toxicity of the drug. OX1001 shows broad activity against cancer cells but is particularly effective against hematological, prostate, and melanoma cancers, as seen in in vitro and in vivo studies. Mechanism of action OX1001 is an analog of the already approved HIV protease inhibitor, saquinavir. HIV protease inhibitor drugs not only work against HIV, but they also have a proven function in anti-cancer therapy. However, these drugs tend to have many toxic effects. Adding a nitrate ester functional group to HIV protease inhibitors has been found to curb these negative effects as well as increase the anti-tumor properties of the drug. Because OX1001 is still in pre-clinical testing, its mechanism of action is unclear. However, it can be said that, as opposed to saquinavir, OX1001 likely works by slowing the growth rate of tumor cells rather than by causing apoptosis, or cell death. Studies suggest that this stoppage of cell proliferation is permanent. Though the difference between OX1001 and saquinavir lies in the presence of a nitrate ester, it is not clear how exactly this modification directly affects the function of the drug. It can be hypothesized that the presence of the nitrate ester plays a role in inhibiting the activity of cytochrome P450, which is an enzyme that normally deactivates saquinavir. Aside from having superior anti-tumor properties, OX1001 also has a much lower toxicity profile than saquinavir. In one experiment, it was determined that at a dose where saquinavir produced 100% toxicity, OX1001 showed no signs of inducing any toxicity. This might be due to the drug's ability to produce a powerful, yet short-lived, stimulus to normal cells' Akt signaling pathways, which protects them. References Further reading Experimental cancer drugs Isoquinolines Amides Carboxamides Tert-butyl compounds Nitrate esters Quinolines
OX1001
[ "Chemistry" ]
474
[ "Amides", "Functional groups" ]
66,978,170
https://en.wikipedia.org/wiki/Integrative%20and%20conjugative%20element
Integrative and conjugative elements (ICEs) are mobile genetic elements present in both gram-positive and gram-negative bacteria. In a donor cell, ICEs are located primarily on the chromosome, but have the ability to excise themselves from the genome and transfer to recipient cells via bacterial conjugation. Because of their physical association with chromosomes, integrative and conjugative elements have proven challenging to identify, but in silico analysis of bacterial genomes indicates these elements are widespread among many microorganisms. ICEs have been detected in Pseudomonadota (e.g., Pseudomonas spp., Aeromonas spp., E. coli, Haemophilus spp.), Actinomycetota and Bacillota. Among many other virulence determinants, ICEs may spread antibiotic and metal ion resistance genes across prokaryotic phyla. In addition, ICE elements may also facilitate the mobilisation of other DNA modules such as genomic islands. Characteristics Although ICEs exhibit various mechanisms promoting their integration, transfer and regulation, they share many common characteristics. ICEs comprise all mobile genetic elements with self-replication, integration, and conjugation abilities, including conjugative transposons, regardless of the particular conjugation and integration mechanism by which they act. Some immobile genomic pathogenicity islands are also believed to be defective ICEs that have lost their ability to conjugate. ICEs combine certain features of the following mobile genetic elements: Bacteriophages that have the ability to insert into and excise from bacterial chromosomes. Transposons that, besides their inherent transposable activity, can additionally be subject to horizontal gene transfer via conjugation. Conjugative plasmids that transfer from donor to recipient bacteria via conjugation. In contrast to plasmids and phages, integrative and conjugative elements cannot remain in an extrachromosomal form in the cytoplasm of bacterial cells and replicate only with the chromosome they reside in. ICEs have a modular structure comprising three gene modules responsible for their integration into the chromosome, excision from the genome and conjugation, together with regulatory genes. All integrative and conjugative elements encode integrases that are essential for controlling the excision, transfer and integration of an ICE. A representative example of an ICE integrase is the integrase encoded by lambda phage. The transfer of an integrated ICE element from the donor to recipient bacterium must be preceded by its excision from the chromosome, which is co-promoted by small DNA-binding proteins, the so-called recombination directionality factors. The dynamics of the integration and excision processes are specific to each integrative and conjugative element. References Bacteriology Genetics
Integrative and conjugative element
[ "Biology" ]
588
[ "Genetics" ]
66,980,425
https://en.wikipedia.org/wiki/NGC%205582
NGC 5582 is an elliptical galaxy in the constellation Boötes. It was discovered by William Herschel on April 29, 1788. References External links Boötes 5582 Elliptical galaxies
NGC 5582
[ "Astronomy" ]
39
[ "Boötes", "Constellations" ]
66,981,695
https://en.wikipedia.org/wiki/Conductive%20metal%E2%88%92organic%20frameworks
Conductive metal−organic frameworks are a class of metal–organic frameworks with intrinsic electronic conductivity. Metal ions and organic linkers self-assemble to form a framework which can be 1D/2D/3D in connectivity. The first conductive MOF, Cu[Cu(2,3-pyrazinedithiol)2], was described in 2009 and exhibited an electrical conductivity of 6 × 10−4 S cm−1 at 300 K. Design and structure The organic linkers for conductive MOFs are generally conjugated. 2D conductive MOFs have been explored extensively, and several studies of 3D conductive MOFs have also been reported. The single-crystal structure of a 2D conductive MOF, Co(HHTP) [hexahydroxytriphenylene], was reported in 2012. The conductivity of these materials is often tested by the two-probe method: a known potential is applied between two probes, the resulting current is measured, and the resistance is calculated using Ohm's law. A four-probe method employs four wires: the two outer wires supply a current and the two inner wires measure the drop in potential. This method eliminates the effect of contact resistance. Most MOFs have conductivity less than 10−10 S cm−1 and are considered insulators. Based on the literature reports so far, the conductivity of MOFs can vary from 10−10 to 103 S cm−1. Charge transfer in conductive MOFs has been attributed to three pathways: 1) Through-bond: when the d orbital of the transition metal ion overlaps with the p orbital of the organic linker, π electrons are delocalized across all the adjacent p orbitals. 2) Extended conjugation: when transition metal ions are coupled with a conjugated organic linker, the d-π conjugation allows delocalization of the charge carriers. 3) Through-space: organic linkers in one layer can interact with those in the adjacent layer via π-π interactions, which facilitates charge delocalization across adjacent layers. Synthesis Solvothermal synthesis In 2017 Kimizuka reported a phthalocyanine-based conductive MOF, Cu-CuPc, with an intrinsic conductivity in the range of 10−6 S cm−1. For the solvothermal synthesis of the MOF, the organic linker Cu-octahydroxyphthalocyanine (CuPc) and the metal ion are dissolved in a DMF/H2O mixture and heated at 130 °C for 48 hours. Afterwards, Mirica and co-workers were able to enhance the conductivity to a range of 10−2 S cm−1 by synthesizing a bimetallic phthalocyanine-based MOF, NiPc-Cu. Hydrothermal synthesis Examples include a series of isoreticular catecholate-based MOFs employing hexahydroxytriphenylene (HHTP) as the organic linker and Ni/Cu/Co as metal nodes. For the hydrothermal synthesis of these MOFs, both the organic linker (hexahydroxytriphenylene) and the metal ion are dissolved in H2O, aqueous ammonia is added, and the mixture is heated. Cu3(HHTP), also known as Cu-CAT-1, showed a conductivity of up to 2.1 × 10−1 S cm−1. Another MOF, based on a hexaaminotriphenylene (HATP) organic linker and Ni metal ions, showed an electronic conductivity of 40 S cm−1 when measured using the van der Pauw method. Layering method A Ni-BHT MOF nanosheet has been obtained using liquid-liquid interfacial synthesis. For the synthesis, the organic linker is dissolved in dichloromethane, H2O is added on top, and the metal salt (Ni(OAc)2), along with sodium bromide, is added to the aqueous layer. Potential applications Although no conductive MOF has been commercialized, potential applications have been identified. Electrochemical sensors Conductive MOFs are of interest as chemiresistive sensors.
The bulk conductivity of the 2D conductive MOF Cu3(HITP)2 was measured to be 0.2 S cm−1. It was employed for chemiresistive sensing of ammonia vapor, with a limit of detection of 0.5 ppm. Two isoreticular MOFs based on phthalocyanine and naphthalocyanine organic linkers have been tested for sensing of neurotransmitters. In this study the authors obtained very low limits of detection: NH3 (0.31–0.33 ppm), H2S (19–32 ppb) and NO (1–1.1 ppb) at driving voltages of 0.01–1.0 V. Later, the same group reported voltammetric detection of neurochemicals by isoreticular MOFs based on a triphenylene organic linker. The Ni3(HHTP)2 (2,3,6,7,10,11-hexahydroxytriphenylene) MOF showed nanomolar limits of detection for dopamine (63±11 nM) and serotonin (40±17 nM). A 2D conductive MOF based on a 2,3,7,8,12,13‐hexahydroxytruxene linker and copper has shown promising electrochemical detection of paraquat. Electrocatalysis MOFs have been explored in electrocatalysis to enhance the rate and selectivity of reactions. Owing to their high surface area they can provide a large number of interaction sites for the reaction, while the conductivity of the material allows charge transfer during the electrocatalytic process. Two cobalt-based MOFs, Co-BHT (benzenehexathiol) and Co-HTTP (hexathioltriphenylene), have been investigated for the hydrogen evolution reaction (HER). In this report, overpotential values for Co-BHT and Co-HTTP were found to be 340 mV and 530 mV, respectively, at pH 1.3. The Tafel slopes are between 149 and 189 mV dec−1 at pH 4.2. Ultrathin sheets of the Co-HAB MOF have been found to be catalytically active for the oxygen evolution reaction (OER). The overpotential for this MOF was 310 mV at 10 mA cm−2 in 1 M KOH. The authors claimed that the ultrathin sheets were better than nanoparticles, thick sheets or bulk Co-HAB MOF because of favourable electrode kinetics. A 2D conductive MOF has also been employed as an electrocatalyst for the oxygen reduction reaction (ORR). In that study, a Ni3(HITP)2 MOF film on a glassy carbon electrode showed a potential of 820 mV at 50 μA in 0.1 M potassium hydroxide (KOH). Energy storage MOFs with high surface area, redox-active organic linkers/metal nodes and intrinsic conductivity have attracted attention as electrode materials for electrochemical energy storage. The first conductive-MOF-based electrochemical double-layer capacitor (EDLC) was reported by Dinca and co-workers in 2017. They used the Ni3(HITP)2 MOF to fabricate the device without the conductive additives normally mixed in to enhance conductivity. The resulting electrodes showed a gravimetric capacitance of 111 F g−1 and an areal capacitance of 18 μF cm−2 at a discharge rate of 0.05 A g−1. These electrodes also exhibited a capacity retention of 90% after 10000 cycles. Conductive MOFs based on the hexaaminobenzene (HAB) organic linker and Cu/Ni metal ions have been tested as supercapacitor electrodes. Ni-HAB and Cu-HAB exhibited gravimetric capacitances of 420 F g−1 and 215 F g−1, respectively. The pellet form of the Ni-HAB electrode showed a gravimetric capacitance of 427 F g−1 and a volumetric capacitance of 760 F g−1. These MOFs also exhibited a capacitance retention of 90% after 12000 cycles. The first conductive-MOF-based cathode material for a lithium-ion battery was reported by Nishihara and co-workers in 2018. In this study they employed the Ni3(HITP)2 MOF; it exhibited a specific capacity of 155 mA h g−1, a specific energy density of 434 Wh kg−1 at a current density of 10 mA g−1, and good stability over 300 cycles.
In another study, two MOFs based on the 2,5‐dichloro‐3,6‐dihydroxybenzoquinone (Cl2dhbqn−) organic linker and Fe metal ions were employed for lithium-ion batteries. (H2NMe2)2Fe2(Cl2dhbq)3 (1) and (H2NMe2)4Fe3(Cl2dhbq)3(SO4)2 (2) showed electrical conductivities of 2.6×10−3 and 8.4×10−5 S cm−1, respectively. (2) exhibited a discharge capacity of 165 mA h g−1 at a charging rate of 10 mA g−1, and (1) exhibited 195 mA h g−1 at 20 mA g−1 and a specific energy density of 533 Wh kg−1. See also Metal−organic framework Covalent−organic framework Coordination polymer Sensor References Metal-organic frameworks Inorganic chemistry
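As an illustration of the conductivity measurements described in the Design and structure section above (the numbers are hypothetical, not values from the cited studies), the following sketch converts a two-probe resistance reading into a bulk conductivity for a pellet of known geometry, and applies the symmetric van der Pauw relation often used for four-probe measurements on thin films.

import math

def two_probe_conductivity(voltage_V, current_A, length_cm, area_cm2):
    """Ohm's law: R = V / I, then sigma = L / (R * A), in S cm-1."""
    resistance = voltage_V / current_A
    return length_cm / (resistance * area_cm2)

def van_der_pauw_conductivity(resistance_ohm, thickness_cm):
    """Symmetric van der Pauw case: sheet resistance Rs = pi * R / ln(2),
    bulk conductivity sigma = 1 / (Rs * t)."""
    sheet_resistance = math.pi * resistance_ohm / math.log(2)
    return 1.0 / (sheet_resistance * thickness_cm)

# Hypothetical pellet: 1 V drives 2 mA through a 0.1 cm long, 0.5 cm2 cross-section pellet.
print(two_probe_conductivity(1.0, 2e-3, 0.1, 0.5))    # ~4e-4 S cm-1
# Hypothetical film: 50 ohm van der Pauw resistance on a 100 nm (1e-5 cm) thick film.
print(van_der_pauw_conductivity(50.0, 1e-5))           # ~440 S cm-1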
Conductive metal−organic frameworks
[ "Chemistry", "Materials_science" ]
2,027
[ "Porous polymers", "Metal-organic frameworks", "nan" ]
66,981,747
https://en.wikipedia.org/wiki/TigerGraph
TigerGraph is a private company headquartered in Redwood City, California. It provides graph database and graph analytics software. History TigerGraph was founded in 2012 by programmer Dr. Yu Xu under the name GraphSQL. In September 2017, the company came out of stealth mode under the name TigerGraph with $33 million in funding. It raised an additional $32 million in funding in September 2019 and another $105 million in a series C round in February 2021. Cumulative funding as of March 2021 is $170 million. Products TigerGraph's hybrid transactional/analytical processing database and analytics software can scale to hundreds of terabytes of data with trillions of edges, and is used for data-intensive applications such as fraud detection, customer data analysis (customer 360), IoT, artificial intelligence and machine learning. It is available using the cloud computing delivery model. The analytics uses C++-based software and a parallel processing engine to process algorithms and queries. It has its own graph query language that is similar to SQL. TigerGraph also provides a software development kit for creating graphs and visual representations. As of November 2023, TigerGraph is at version 3.9.3. Reception A 2018 review of TigerGraph 2.2 in Infoworld gave the product 4.5 out of 5 stars. Query Language GSQL is a SQL-like, Turing-complete query language designed by TigerGraph. See also Graph Query Language References External links Official website TigerGraph Paper at SIGMOD Conference Graph databases Databases Structured storage
TigerGraph
[ "Mathematics" ]
307
[ "Graph databases", "Mathematical relations", "Graph theory" ]
66,981,876
https://en.wikipedia.org/wiki/Space%20hurricane
A space hurricane is a huge, funnel-like, spiral geomagnetic storm that occurs above the polar ionosphere of Earth, during extremely quiet conditions. They are related to the aurora borealis phenomenon, as the electron precipitation from the storm's funnel produces gigantic, cyclone-shaped auroras. Scientists believe that they occur in the polar regions of planets with magnetic fields. Hurricanes (tropical cyclones) on Earth are formed within the atmosphere by thunderstorms and angular momentum from the Earth's rotation, and draw up energy from the ocean surface, while space hurricanes are formed by plasma interacting with magnetic fields and draw energy down from the flow of the solar wind. Characteristics Space hurricanes are made up of plasmas, consisting of extremely hot ionized gases that rotate at extremely high speeds, with rotational speeds reaching up to . In 2020, using observations that had been made on 20 August 2014, researchers identified a large space hurricane that had occurred over the Arctic, spanning in diameter at its base in the ionosphere, the ionized upper atmosphere, at an altitude of , and roughly centered over the North Magnetic Pole. The space hurricane was characterized by a cyclone-like auroral spot with multiple spiral arms, due to precipitating electrons, strong circular plasma vorticity with zero horizontal flow at its center (the equivalent of the eye of an atmospheric hurricane), a negative-to-positive bipolar magnetic structure (showing a circular magnetic field perturbation), and a large and rapid deposition of energy and flux into the polar ionosphere (comparable to that during space weather superstorms). The storm extended from the ionosphere upward along geomagnetic field lines to cover a large fraction of the dayside polar magnetosphere, in the Northern Hemisphere. Additionally, the space hurricane had multiple spiral arms, similar to conventional hurricanes, and the storm also rotated in a counterclockwise direction. The large plasma storm rained electrons instead of water. In the calm central region, encircled by the rotating plasma, there was a persistent auroral spot, associated with a strong, upward, field-aligned current caused by precipitating electrons. The electron rain produced a gigantic, cyclone-shaped aurora below the storm. Unlike conventional space weather disturbances, the space hurricane was observed during very quiet geomagnetic conditions, when the flow of the solar wind was slow and the interplanetary magnetic field was pointing northward, whereas a strongly southward orientation is needed to drive conventional geomagnetic storms. This provides a further analogy to hurricanes in the lower atmosphere: an Accuweather meteorologist noted that hurricanes needed light winds aloft in order to form. Effects Researchers indicated that the electron precipitation associated with the storm could disrupt GPS satellites, radio systems, and radar, and could also increase the drag on any nearby satellites, as well as changing the orbits of space debris ("space junk") of all sizes at low altitudes, which are an increasing hazard for spacecraft in low Earth orbit. However, aside from these potential space weather impacts, the storm is expected to have little impact on the planet. Discovery The phenomenon was discovered by a team of researchers from Shandong University in China, who had observed the storm over the Arctic region on 20 August 2014, before identifying its nature in 2021. 
The research team also consisted of scientists from the United States, the United Kingdom, and Norway. The team observed the space hurricane for 8 hours before it gradually broke down. The storm was observed during a period of low solar and geomagnetic activity. This was the first time that a hurricane-like storm had been observed in the upper atmosphere; previously, it was uncertain whether such storms existed. Researchers believe that such space storms may be relatively common in the Solar System and beyond, on planets with magnetic fields, because the storm observed in 2014 occurred during a period of low geomagnetic activity. See also Space tornado Space weather Earth's magnetic field Solar wind References Solar phenomena Space weather Storm
Space hurricane
[ "Physics" ]
803
[ "Physical phenomena", "Stellar phenomena", "Solar phenomena" ]
55,717,116
https://en.wikipedia.org/wiki/NGC%206670
NGC 6670 is a pair of interacting galaxies within the Draco constellation, which lie around 401 million light-years from Earth. Its shape resembles a leaping dolphin. NGC 6670 was discovered by Lewis A. Swift on July 31, 1886. NGC 6670 is a combination of two colliding disc galaxies which are known as NGC 6670E and NGC 6670W. The galaxy is 100 billion times brighter than the Sun. The galaxies have already collided once before and they are now moving towards each other again, nearing a second collision. Its apparent magnitude is 14.3 and its angular size is 1.0 arcminutes. NGC 6670E NGC 6670E (also known as NGC 6670-1 and NGC 6670B) is a disc galaxy on the north-eastern side of NGC 6670. It has mostly been destroyed by its collision with NGC 6670W. It has a bright nuclear region with a luminosity of 18 L⊙/Mbol due to a large amount of star formation. Its star formation efficiency is at least four times higher than that of the Milky Way disc, and it is approaching the starburst phase. This large amount of star formation is likely to be caused by its collision with NGC 6670W. To the south-east of the nucleus there is a bright and fuzzy area caused by dust extinction or a large concentration of blue stars. NGC 6670E is in front of NGC 6670W and they are moving towards each other. The movement of the H I region of this galaxy has been disrupted by its collision with NGC 6670W. NGC 6670W NGC 6670W (also known as NGC 6670-2 and NGC 6670A) is a disc galaxy on the south-western side of NGC 6670. It forms the larger part of NGC 6670. It has remained mostly intact after its collision with NGC 6670E. Its nuclear region is slightly dimmer than that of NGC 6670E, being measured at 11 L⊙/Mbol, but this number still indicates a higher than normal level of star production. Both the western and eastern sides of the disc are significantly curved. The H I region of NGC 6670W is a rotating ring. The western end of the galaxy has a stronger radio continuum peak, indicating that it may be a recent star-forming region. References External links Interacting galaxies Luminous infrared galaxies Draco (constellation) 6670 18329+5950 +10-26-044 062033 11284 Astronomical objects discovered in 1886 Discoveries by Lewis Swift
NGC 6670
[ "Astronomy" ]
509
[ "Constellations", "Draco (constellation)" ]
55,717,769
https://en.wikipedia.org/wiki/Lattice%20and%20bridged-T%20equalizers
Lattice and bridged-T equalizers are circuits which are used to correct for the amplitude and/or phase errors of a network or transmission line. Usually, the aim is to achieve an overall system performance with a flat amplitude response and constant delay over a prescribed frequency range, by the addition of an equalizer. In the past, designers have used a variety of techniques to realize their equalizer circuits. These include the method of complementary networks; the method of straight line asymptotes; using a purpose-built test jig; the use of standard circuit building blocks; or the aid of computer programs. In addition, trial and error methods have been found to be surprisingly effective, when performed by an experienced designer. In video or audio channels, equalization results in waveforms that are transmitted with less degradation and have sharper transient edges with reduced overshoots (ringing) than before. In other applications, such as CATV distribution systems or frequency multiplexed telephone signals where multiple carrier signals are being passed, the aim is to equalize the transmission line so that those signals have much the same amplitude. The lattice and bridged-T circuits are favoured for passive equalizers because they can be configured as constant-resistance networks such as the Zobel network, as pointed out by Zobel and later by Bode. The single word description “equalizer” is commonly used when the main purpose of the network is to correct the amplitude response of a system, even though some beneficial phase correction may also be achieved at the same time. When phase correction is the main concern, the more explicit term "phase equalizer" or "phase corrector" is used. (In this case, the circuit is usually an all-pass network which does not alter the amplitude response at all, such as the lattice phase equalizer). When equalizing a balanced transmission line, the lattice is the best circuit configuration to use, whereas for a single-ended circuit with an earth plane, the bridged-T network is more appropriate. Although equalizer circuits, of either form, can be designed to compensate for a wide range of amplitude and phase characteristics, they can become very complicated when the compensation task is difficult, as is shown later. A variety of methods has been used to design equalizers and some of these are described below. Several of the procedures date back to the early part of the 20th century when equalizers were needed by the rapidly expanding telephone industry. Later, with the advent of television, the equalisation of video links became very important too. Amplitude correction The aim of an equalizer network is to correct for deficiencies in the amplitude response of a transmission line, lumped element network or amplifier chain. Equalisation is often necessary with transmission lines and lumped element delay lines which tend to have increasing loss with frequency. Without correction, waveform fidelity is lost, and rise and fall times of transients are degraded (i.e. less sharp). Sometimes amplitude correction is required for more subtle reasons; for example, in the case of analogue colour television waveforms, colour errors can occur in the displayed pictures when the transmission system's response is not flat. It is usual to choose lattice and bridged-T equalizers which are constant-resistance networks. 
It was pointed out by Zobel and later by Bode that such networks can be cascaded with each other and with a transmission line or with a lumped element circuit, without introducing mismatch problems. The use of constant resistance configurations has been common practice in equalizer design, for many years, and almost all of the examples presented in this article have this property. Whatever the design method, passive equalizers always introduce additional loss into the transmission path, and this has to be made good by an amplifier or repeater. The method of complementary networks In some of his early work, Zobel devised a lumped element circuit to simulate the behaviour of a given long transmission line of interest. Such a device was useful in that it allowed investigatory work on a transmission system to be carried out in the convenience of the laboratory. Importantly, as was pointed out by Zobel, once such a network had been designed, it was always possible to find a realizable complementary network, which exhibited the inverse response. An example The procedure can be illustrated by a simple example presented by Zobel, which is shown below. Here, the left hand lattice has a simple low-pass characteristic and the right hand lattice has the complementary characteristic. For this circuit R1*R2 = L1/C1 = L2/C2 = R0^2 with R1 < 2.R0. C2 is given by For a normalized network R0 = 1Ω. Choose R1 = 0.5Ω and L1 = 1H; then R2 = 2Ω, C1 = 1F, C2 = 3F, and L2 = 3H. The responses of the individual sections and the overall response are shown in the plots for the composite network, given on the right. This compensation process can be described mathematically, by means of the basic lattice equations given in lattice network, as follows. The transfer function of a normalized (R0 = 1) constant resistance lattice with through arms za and cross-diagonal arms zb = 1/za is T = (1 − za)/(1 + za). Now for the low-pass circuit, given above, za is a 0.5Ω resistor in parallel with a 1H inductor, za = p/(2p + 1), and so T1 = (p + 1)/(3p + 1). Next consider the high-pass section. Here za is a 0.5Ω resistor in parallel with a 3F capacitor, so za = 1/(3p + 2), and so T2 = (3p + 1)/(3(p + 1)). Finally, the overall response Ttot is T1.T2 = 1/3, i.e. a flat response, with an overall gain of one third. In this example, the overall characteristic is phaseless, i.e. full phase compensation has been achieved in addition to the amplitude correction. This is because exact correction has been achieved at all frequencies. Often correction is only successful over a limited range of frequencies, in which case the final result will have some residual phase. More generally, the response of a circuit or transmission line may be difficult to reproduce precisely by means of lumped element circuits, so the usual task is to find a realizable characteristic that is an acceptable match. Sometimes trial and error methods, applied in a systematic way, can prove successful.
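As a quick numerical check of the complementary-network example above (an illustrative calculation, not part of Zobel's original paper), the following sketch evaluates the two lattice responses at p = jω using T = (1 − za)/(1 + za) and confirms that their product stays at one third across frequency.

import numpy as np

w = np.logspace(-2, 2, 5)          # angular frequencies, rad/s
p = 1j * w

# Low-pass section: za is 0.5 ohm in parallel with a 1 H inductor.
za_lp = (0.5 * p) / (0.5 + p)
T1 = (1 - za_lp) / (1 + za_lp)

# High-pass section: za is 0.5 ohm in parallel with a 3 F capacitor.
za_hp = (0.5 / (3 * p)) / (0.5 + 1 / (3 * p))
T2 = (1 - za_hp) / (1 + za_hp)

for wi, t1, t2 in zip(w, T1, T2):
    print(f"w = {wi:8.2f}  |T1| = {abs(t1):.3f}  |T2| = {abs(t2):.3f}  |T1*T2| = {abs(t1 * t2):.3f}")
# |T1*T2| prints as 0.333 at every frequency, i.e. a flat overall gain of one third.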
A general-purpose equalizer circuit When the arms of a lattice network are purely reactive, constant resistance all-pass networks are possible, as shown in lattice delay network. If, however, resistors are included in the lattice arms, then different amplitude responses are possible while still retaining the constant resistance property. One much-used equalizer circuit is shown here, together with the equivalent bridged-T circuit. The equivalence between the lattice and bridged-T can be shown with Bartlett's bisection theorem. The circuits have a constant resistance characteristic when Z1.Z2 = R0^2. The transfer function of this circuit is T = vout/vin = R0/(R0 + Z1). When normalized so that the source and terminating resistors and R0 are all unity and Z1.Z2 = 1, it becomes T = 1/(1 + Z1), and so Z1, as a function of T, is given by Z1 = (1 − T)/T. The simplicity of this equation (only one impedance has to be found) makes the circuit popular in equalizer designs. The method of straight line asymptotes The method uses an initial trial response, made up of a sequence of straight line asymptotes, to determine the pole/zero locations of a realizable network. The general principles of the method are as follows. Consider the case of a simple RC low-pass circuit whose transfer function has a single pole on the real frequency axis at p = -a, as shown. This response is -3 dB at ω = a and falls at 6 dB per octave at high frequencies. It is shown plotted, in decibels, versus frequency (on a log scale). Also shown, on the right, is the two-line asymptotic approximation, with break frequency also at ω = a. As can be seen, the true response and the straight line approximation are closely matched over most of the frequency range, only deviating near to ω = a. If a zero is now added at, say, -10×a, then a new asymptote with a positive slope of 6 dB per octave is introduced, starting at ω = 10×a, as shown. By extending the method, a complicated loss characteristic can be approximated by a sequence of straight line asymptotes provided the steepest slope of the loss characteristic does not exceed 6 dB per octave in the frequency range. The results are plotted on a dB scale against frequency on a log scale, as before. In general, the transfer function is a ratio of products of such pole and zero factors, T(p) = K(1 + p/ωz1)(1 + p/ωz2)⋯/((1 + p/ωp1)(1 + p/ωp2)⋯). Once an expression is deemed to meet the requirement with sufficient accuracy, the impedances Z1 and Z2 can be found from the relationships given earlier. An example As an example, the transmission loss of a notional distribution network, with response shown by the full line in the figure on the left, is to be corrected (equalized). In this example correction is required over the frequency range from Ω1 = 0.09 r/s to Ω2 = 140 r/s. This can be approximated by a series of straight line segments as shown in the figure. Frequencies are chosen so that the various errors exhibited by the straight line approximation are all similar and as small as possible. The straight line approximation shown in the figure has poles at ω = 0.333 r/s and 10 r/s, and zeros at ω = 1 r/s and 30 r/s. The transfer function of a network with these poles and zeros is given by T(p) = (1 + p)(1 + p/30)/((1 + p/0.333)(1 + p/10)). The actual response (magnitude) of T is easily calculated and is shown, on the plot on the right, as a function of frequency, together with the target response. A closer fit would require extra poles and zeros in T(p), at a closer spacing. The low-pass network T(p), with this response, will have an L-C network for Z1 and an R-C network for Z2, when realized using the simple circuits previously described. However, of more interest here is the complementary network, since this will correct for the falling response of the network. The transfer function of the equalizer network will be Teq(p) = (1/K)(1 + p/0.333)(1 + p/10)/((1 + p)(1 + p/30)), where K = 9 so that the gain of the network never exceeds unity at any frequency. The expression for Z1(p) can be derived from this equation using Z1 = (1 − T)/T, so Z1 can be realized as an R-C ladder network (or as a parallel combination of two R-C impedances, each of which is a resistor and a capacitor in series). A ladder network version of Z1 is shown on the left, together with that of Z2 (its dual network). 
Component values for Z1 (with R0 = 1) are: C1 = 0.04838F, C2 = 0.4747F, R1 = 2.2857Ω, R2 = 5.7152Ω, and for Z2 they are L1 = 0.0438H, L2 = 0.4747H, R3 = 0.4375Ω, R4 = 0.1750Ω. These circuits of Z1 and Z2 can be used directly in a bridged-T equalizer network, but for use in a lattice network, the capacitor values should be doubled and resistor values halved in the Z1 network, and all resistor and inductor values doubled in the Z2 network (i.e. Za = Z1/2 and Zb = 2.Z2). The overall response, when the equalizer is cascaded with the original characteristic, is shown on the right. As expected, there is ripple on the result, which is similar to the differences between the initial response and its approximation T(p). Although the response, prior to equalization, considered in this example, has a characteristic which falls linearly when plotted against log(ω), it is possible to cope with non-linear characteristics in much the same way, provided the plot falls monotonically – i.e. the slope does not change sign, and the maximum slope is less than 6 dB/octave. In such cases, a series of straight line asymptotes can also be found to approximate a response and so lead to a realizable solution. Equalizers of considerable complexity can be designed using the method of asymptotes, and they can achieve an overall, corrected, response with very low ripple (<<0.1 dB). Deriving compensation networks by means of an experimental test jig An experimental test jig may be used to find circuit values for the equalizer. The basic circuit arrangement is shown, on the left. The transfer function of this simple circuit, ignoring flat loss, is Now, if 2.Zx = Z1, this becomes So T(p) has the same form as the transfer function of the basic equalizer circuit given earlier with Zx identical to the “Za” lattice arms. So, although the test jig is not, in itself, a constant resistance network it does provide a convenient experimental method for determining the required component values for a lattice or bridged-T circuit which is a constant resistance network. Once the values of Zx are established, circuit values for Z1 and its dual, Z2, are found in a straightforward manner. A suggested test jig, using these concepts, is shown on the right. (i) The basic R-C circuit that forms the bulk of Zx is in the form of a ladder network, rather than a parallel combination of series R-C pairs. The total spread in the capacitor values needed is much reduced for this configuration. Usually, a network of six or seven sections is sufficient. (ii) A shunt resistor R is placed across the input terminals of Zx, and this also reduces the spread of component values. Unfortunately, it also reduces the maximum correction possible by an individual equalizer section and may result in a cascade of two or more sections being necessary to achieve total equalisation. (iii) A series resonant L-C circuit is also present across the terminals of Zx. This combination is arranged to resonate above the top frequency of the system passband and its purpose is to reduce the equalizer flat loss. (It is a technique commonly used in equalizer circuits). (iv) Instead of finding component values to achieve a desirable frequency response, an alternative approach is to optimize the transient performance, by means of test waveforms (such as pulse and bar signals, for example). Optimizing transient performance in this way is particularly applicable to situations where waveform fidelity is important, such as in television. 
When circuit values are obtained in this way, the description “waveform corrector” is preferred to “response equalizer”. An example An example of a “waveform corrector” for a coaxial cable section for video frequencies is shown. The shunt impedance Z2 is not shown in detail. It is the dual of Z1, so whereas Z1 contains a series resonant circuit and an R-C ladder network, Z2 contains a shunt resonant circuit and an R-L ladder network. The plot of vout/vin (in dBs) versus frequency, for this circuit, using the component values proposed in the reference, is also shown on the right. Zobel's curve fitting procedure Zobel, in his early paper, described a procedure in which a cascade of prototype constant resistance lattice networks formed the basis of his equalizers. His method was basically a curve fitting procedure and an Appendix in his paper provided a series of networks from which a complete equalizer could be built. He apportioned contributions to the overall desired equalizer response to the various members of a lattice (or bridged-T) cascade. Each lattice circuit in the cascade was identified by its impedances Za and Zb (where Zb = R0^2/Za) and by its “propagation function” and “attenuation constant” (in effect, the square of the magnitude). These image parameters are all interrelated, as Zobel demonstrated (see Image impedance). Firstly, the lattice impedance Za can be expressed as the ratio of two polynomials in (jf): Za = (a0 + a1(jf) + a2(jf)^2 + …)/(b0 + b1(jf) + b2(jf)^2 + …). In this expression, the impedance coefficients a0, b0, etc., one of which is unity and some may be zero, are algebraic combinations of the network elements. For any given type of network, the coefficients are fixed by the elements, and vice versa. Secondly, the propagation constant Γ can be found from e^Γ = (R0 + Za)/(R0 − Za), which is again a ratio of two polynomials, (g0 + g1(jf) + g2(jf)^2 + …)/(h0 + h1(jf) + h2(jf)^2 + …), in which g0, g1, h0, etc. are algebraic functions of a0, b0, etc. From this, the attenuation constant can be derived and expressed as a function of frequency. (It was the usual practice in the 1920s to display attenuation as a positive parameter, so the response of a low pass filter was displayed as a positively rising curve, with increasing frequency). For the attenuation constant, the expression is of the form e^(2A) = (P0 + P2.f^2 + P4.f^4 + …)/(Q0 + Q2.f^2 + Q4.f^4 + …), which is a ratio of two polynomials in f^2, and in which the coefficients could be determined from the known data, or measurements. Rearranging this, Zobel obtained the “attenuation linear equation”, which holds at all frequencies, thus: P0 + P2.f^2 + P4.f^4 + … − e^(2A)(Q0 + Q2.f^2 + Q4.f^4 + …) = 0. By having attenuation data at sufficient data points (frequencies), a family of simultaneous equations can be solved to give the values of P0, Q0, P2, Q2, etc. From these results, Zobel showed, in the Appendix of his paper, how, for each prototype equalizer circuit, it was possible to derive the component values for that section. An example As an example, the procedure was used by Zobel to design an equalizer for a balanced line with a characteristic impedance of 600 Ω and 50 miles in length, for frequencies up to 4.5 kHz. In this early paper Zobel used the “napier” (the natural logarithm of a voltage ratio) and the “transmission unit” (a logarithm to the base 10 of a power ratio) interchangeably within his calculations. The two units are related by 1 napier = 8.686 transmission units. In the mid 1920s these units were renamed the “neper” and the “decibel”, and these are the units used here. The original attenuation characteristic requiring correction is shown as “Plot 1” in the figure below. Zobel proposed that a satisfactory equalized response would be obtained by a cascade of two lattices, of the types shown in the figure on the left. 
The left-hand lattice, in the figure, provided correction to the response at low frequencies and the right-hand one provided correction at high frequencies. Considering first the left hand circuit, this has an attenuation linear equation P0 + P2.f^2 − e^(2A)(Q0 + Q2.f^2) = (e^(2A) − 1).f^4, so there are four unknowns to find, P0, P2, Q0, Q2, and so data at four frequencies is required. From the measured data on the transmission line Zobel proposed the following attenuation values. At f1 = 40 Hz, A1 = 0.536 neper; at f2 = 200 Hz, A2 = 0.291 neper; at f3 = 800 Hz, A3 = 0.176 neper; at f4 = 2000 Hz, A4 = 0.100 neper. (These give a response which is the inverse of that of the original plot, as required for the equalizer, but with an arbitrary 0.1 neper offset at the highest frequency.) Solution of the four simultaneous equations derived from this data gave P0 = 102.007 ×109, Q0 = 32.20010 ×109, P2 = 5.06037 ×106, Q2 = 3.43087 ×106, from which Zobel's design data gave the following component values: C1 = 1.2042 μF, R1 = 168.32 Ω, C2 = 4.0342 μF, R2 = 124.19 Ω, R3 = 2138.72 Ω, L3 = 0.43351H, R4 = 2898.55 Ω, L4 = 1.4523H. In the case of the right-hand lattice chosen by Zobel, the attenuation constant has the same value at low and high frequencies, so P0 = F0, and it has a peak in the response near the resonant frequency of C6 and L6. The attenuation linear equation for this lattice is P0 + P2.f^2 + P4.f^4 − e^(2A)(1 + Q2.f^2 + Q4.f^4) = 0. The expression for the attenuation constant of the right-hand lattice has P0 = F0, Q0 = 1 and P4 = F0.Q4, so data was needed to solve for P0, P2, Q2 and Q4. The data used was: at f0 = 0 Hz, A0 = 0.796 neper; at f1 = 3000 Hz, A1 = 0.747 neper; at f2 = 4000 Hz, A2 = 0.530 neper; at f3 = 4500 Hz, A3 = 0.300 neper. The solution of the four simultaneous equations derived from this data gave F0 = P0 = 4.913; P2 = -46.207×10−8; Q2 = -9.0092×10−8; Q4 = 23.198×10−16. Using this data, Zobel's design equations for this lattice gave the following component values: R5 = 226.95 Ω, R6 = 143.4 Ω, L6 = 0.04935H, C6 = 0.02476 μF, R7 = 1586.25 Ω, R8 = 2510.46 Ω, L8 = 8.8992mH, C8 = 0.137 μF. The final results are shown in the figure below. Plot 1 of the figure shows the initial response of the cable run that was to be corrected. The compensation achieved by the left-hand (low frequency) lattice, alone, is shown in Plot 2. Finally, total compensation is shown in Plot 3, when the right-hand lattice is also included.
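The four simultaneous equations for the left-hand lattice can be checked numerically. The sketch below is an illustration of the fitting step only, not Zobel's original working, and it assumes the attenuation linear equation in the form quoted above; it solves for P0, P2, Q0 and Q2 from the four frequency/attenuation pairs and reproduces values close to those listed.

import numpy as np

# Attenuation data for the left-hand lattice: (frequency in Hz, attenuation in nepers).
data = [(40, 0.536), (200, 0.291), (800, 0.176), (2000, 0.100)]

# Attenuation linear equation: P0 + P2*f^2 - e^(2A)*Q0 - e^(2A)*Q2*f^2 = (e^(2A) - 1)*f^4
# Unknown vector x = [P0, P2, Q0, Q2].
A_mat, b_vec = [], []
for f, A in data:
    e2a = np.exp(2 * A)
    A_mat.append([1.0, f**2, -e2a, -e2a * f**2])
    b_vec.append((e2a - 1.0) * f**4)

P0, P2, Q0, Q2 = np.linalg.solve(np.array(A_mat), np.array(b_vec))
print(f"P0 = {P0:.3e}, P2 = {P2:.3e}, Q0 = {Q0:.3e}, Q2 = {Q2:.3e}")
# Gives approximately P0 = 1.02e11, P2 = 5.06e6, Q0 = 3.22e10, Q2 = 3.43e6,
# in agreement with the values quoted by Zobel.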
Bode's method for equalisation Bode devoted Chapter XII of his well known book to the topic of equalizers. He pointed out that all transfer functions could be made up from a cascade of first and second order constant-resistance lattices which, of course, includes equalizer networks. In order to assist with the process of network design, Bode provided design details of four first and four second order networks to cover the various possible locations of the poles and zeros in the complex frequency plane. Unfortunately, some of the circuits he proposed (when pole and zero locations were complex) were derived using Brune's synthesis method, which sometimes produced lattice impedances containing mutually coupled coils. However, a later paper provides alternative networks to avoid this problem. An example As an example of the method, consider the realization of the simple equalizer response given earlier. It can be realized by means of simple lattices in cascade. The response required is Teq(p) = (1/9)(1 + p/0.333)(1 + p/10)/((1 + p)(1 + p/30)). This can be rewritten thus: Teq(p) = [(1 + p/0.333)/(3(1 + p))]·[(1 + p/10)/(3(1 + p/30))]. This can be apportioned to two first order lattices in cascade, using the Type IV circuits from Bode's chart to give the circuit shown. This can be easily converted to a cascade of standard constant resistance bridged-T sections of the form described earlier, as shown on the right. (A simpler circuit is also possible, which uses fewer resistors.) Development of an equalizer by circuit refinement In addition to the methods described earlier, a final equalizer circuit may be obtained by first starting with an initial simple solution and then using a process of circuit refinement to increase the complexity of the circuit, and its response, until a satisfactory performance is obtained. An example of a commercially produced network obtained in this way is shown below. This equalizer was able to correct for the losses in various lengths of coaxial cable type BICC T3205 (a commercial high quality 75Ω video cable). The equalizer was a bridged-T circuit, rather than a lattice, as was appropriate for coaxial cable. Two versions of the circuit were produced, one for cable lengths of 0 to 100 feet and the second for cable lengths of 100 to 180 feet. Resistors R17 and R18 were “adjusted on test” (with R17×R18 = 75^2) to give optimum results for a given cable length. Iterative procedures by computer programming With the advent of modern computers, complex iterative routines can be run, which previously were prohibitively time consuming. Routines are possible which minimise the differences between an approximate trial solution and the target specification, either in a “least-squares” or “Chebyshev” sense. The programs use iterative procedures to successively solve linear programming problems, derived locally, as a way of dealing with non-linear problems. An example An example illustrating the method considered the insertion loss characteristic shown in “Attenuation Plot”, below. A cascade of three lattice sections, as shown, was chosen to achieve the required equalizer response. The component values, derived by the iteration procedure, gave a response which matched the characteristic in a Chebyshev sense, as required. The final result matched the target response at nine frequencies (there are nine degrees of freedom in the circuit: R1 to R3, C1 to C3 and L1 to L3) with peak to peak errors of only ± 0.03 dB at intervening frequencies. Variable equalizers In the case of variable equalizers which are set by varying resistor values only, it is usual to use only bridged-T networks because there are fewer components to match than in a lattice. Even then, to ensure a fully matched network, dual potentiometers are necessary. Variable equalizers are also discussed by Rounds et al. and Bode. Bode was interested in variable equalizers adjusted by a single potentiometer, so his variable equalizers were not constant resistance networks. Phase equalizers Introduction A phase equalizer is a circuit which is cascaded with a network in order to make the overall phase response more linear (or to make the group delay more constant). The combined circuit will transmit waveforms with improved fidelity, compared to the performance of the initial network alone. Phase equalization is often necessary because many circuits are designed to achieve certain attenuation characteristics, with little regard paid to the phase characteristic that ensues. This is frequently the situation with filters, for example, where, in the pursuit of a specific selectivity requirement, scant attention is paid to the phase response of the resulting network (as with Cauer's filter design procedure). 
Usually, the circuit produced is a minimum phase network, where sudden changes in the amplitude response always result in nonlinearities in the phase response, because of the precise relationship between amplitude and phase. For example, a sharp-cut minimum phase low-pass filter, with a rapid transition from passband to stopband, will always have a phase characteristic which deviates greatly from linear at frequencies in the vicinity of the cut-off frequency. The sharper the transition, the greater will be this deviation. (A plot of the group delay will also show a large increase in delay in the same frequency region). Similarly, any amplitude ripple in the filter passband will also be accompanied by ripple of the phase characteristic. In many cases, both amplitude and phase ripples are undesirable and so there is little point in correcting for one type of ripple without also tackling the other. If waveform fidelity is important, then a non-linear phase characteristic is undesirable. In television, for example, the pictorial defects attributable to phase distortion are excessive ringing of the luminance information and smeared and spurious colour edges at transitions in the colour information. In practice, phase correction procedures are most successful when applied to band limited systems, such as those containing a low pass or bandpass filter. This is because such filters inherently define a finite frequency band over which it is necessary to apply the correction. As shown in lattice network, the lattice all-pass circuit is suitable as a phase correcting network because it is able to modify the phase characteristics of a filter network without introducing changes to its amplitude response. Also, the constant resistance property means that it does not create spurious reflections due to mismatch effects, when cascaded with other networks. An example As an example of the phase correction process, consider a conventional Butterworth low-pass filter of 9th order. The circuit, shown below, is the normalised filter, to be terminated with 1 ohm and with unity cut-off frequency. The amplitude and phase characteristics of this filter are given in the figures below. Also shown, in the left-hand plot, is the phase error curve for the filter (i.e. the deviation of the phase slope from linear). As can be seen in the right-hand plot, the phase slope of the low pass filter alone is linear at low frequencies but increases too rapidly at higher frequencies, ultimately resulting in a phase deviation, from linear, of 100 degrees near the top of the pass-band. In order to correct for this phase deviation, the phase corrector needs a linear phase characteristic at low frequencies, but one which deviates positively from the linear asymptote as the frequency increases. A cascade of three second order all-pass lattices will give the required phase characteristic for the filter, and their phase response is shown in the right hand plot. In this example, because the filter has an unbalanced form, it is necessary to use bridged-T equivalents in the phase corrector, rather than lattice circuits. The phase corrector is shown below. As the phase corrector is an all-pass and constant resistance circuit, it does not change the amplitude response of the filter. The combined characteristic has a phase curve which is linear over the pass-band of the filter, but the resulting high phase slope means the combination has a significantly greater transmission delay than the filter alone. 
In this example, the Butterworth low pass filter has nine poles, located at equi-angular intervals on a unity semi-circle in the left half of the complex frequency plane, and the poles and zeros of each of the three phase correctors are ±0.866 ± 0.5j. The transient behaviour of the filter, when subject to a step waveform, is shown above. The first plot is for the filter alone, without phase correction, and the second plot shows the performance after correction. As expected, the phase correction improves waveform symmetry, reduces the 10–90% rise-time, and roughly halves the peak amplitude of the overshoots, but it has increased the delay.
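The phase contribution of the three correctors in this example can be examined numerically. The sketch below is an illustrative calculation based on the pole/zero values quoted above (poles at -0.866 ± 0.5j, zeros at +0.866 ± 0.5j); it evaluates the phase and group delay of the cascade of three identical second-order all-pass sections.

import numpy as np

sigma, w0 = 0.866, 0.5            # pole/zero locations quoted above
w = np.linspace(0.01, 1.5, 1000)  # normalised angular frequency
p = 1j * w

# One second-order all-pass section:
# H(p) = (p^2 - 2*sigma*p + sigma^2 + w0^2) / (p^2 + 2*sigma*p + sigma^2 + w0^2)
num = p**2 - 2 * sigma * p + (sigma**2 + w0**2)
den = p**2 + 2 * sigma * p + (sigma**2 + w0**2)
H = (num / den) ** 3              # cascade of three identical sections

phase = np.unwrap(np.angle(H))
group_delay = -np.gradient(phase, w)

print(f"|H| ranges from {abs(H).min():.3f} to {abs(H).max():.3f} (all-pass, so always 1)")
print(f"group delay: {group_delay[0]:.2f} s at low frequency, {group_delay.max():.2f} s at its peak")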
An overview of design methods for phase correctors In practice, a variety of techniques have been used in the design of phase correctors. The simplest method uses trial and error procedures, aided by a basic understanding of all-pass phase characteristics. The procedure can be surprisingly successful as, often, a cascade of a few phase correctors based on the maximally phase-flat second order lattice will suffice. A similar approach, based on the use of standard phase tables for equalizers with parabolic delay, allows the designer to determine the appropriate network to meet a given peak delay. An improved procedure, noting that the group delay characteristics of many networks could be considered as sums of parabolic and linear contributions, used charts and graphs in combination with three fitting techniques, namely the 3-point fit, the slope fit or the ‘averaged’ fit, to determine component values. Brain described the phase correction of a cascade of 16 vacuum tubes interconnected by second order maximally flat inter-stage networks. The phase characteristic was derived from measurements of the amplitude response by using the relationship between amplitude and phase for minimum phase shift networks. Phase correction was accomplished by three identical second order phase correctors in the form of bridged-T networks. Fredendhall gave the pole-zero pattern and the circuit diagram for a four section delay equalizer which he proposed to the FCC should be used in transmitters to compensate for the characteristics of the ‘average’ receiver. A more mathematical approach has been described which starts by choosing a basic cascade of phase correctors and then optimising their characteristics by a simple curve fitting procedure. An LPF for television use was designed by this method. More sophisticated still, an iterative convergence procedure has been run on a computer to optimise a cascade of phase correcting sections. By having enough sections available to give the appropriate number of degrees of freedom, the desired phase characteristic can be achieved in a Chebyshev sense. Another optimisation procedure began by first noting that the delays of first and second order all-pass sections add linearly. So for a cascade of M first order and N second order networks, a best fit choice of M and N could be found to meet any given characteristic within acceptable bounds, by using a minimax approximation procedure. The process proceeded in two steps: the first was to find an approximation that coincided with the desired delay on a specified set of points. The second step consisted of perturbing those points in an attempt to find an equal ripple solution. By increasing the number of sections, chosen at the outset, the peak-peak ripple error can be made as small as required. Szentirmai has issued a computer aided design package called “S/FILSYN” which is capable of general purpose synthesis, including circuits for amplitude and phase equalization realised as lattice or bridged-T networks. He has also reviewed a number of other computer aided design packages. See also Zobel network Lattice phase equalizer Lattice network References Analog circuits Bridge circuits Electronic filter topology
Lattice and bridged-T equalizers
[ "Engineering" ]
7,002
[ "Analog circuits", "Electronic engineering" ]
55,723,156
https://en.wikipedia.org/wiki/Clinical%20pharmaceutical%20chemistry
Clinical pharmaceutical chemistry is a specialty branch of chemical sciences, which consists of medicinal chemistry with additional training in clinical aspects of translational sciences and medicine. Typically this involves similar principal training to that in general medicine, where inspection of and interaction with patients are a vital part of the training. Typically students in clinical pharmaceutical chemistry use the same curriculum as medical students, but specialize in medicinal and organic chemistry during and after the theoretical/early clinical studies. In clinical pharmaceutical chemistry the aim is to understand the biological transformations and processes associated with chemical entities inside the human body, and how those processes can be influenced by changes in chemical structure. A further aim of clinical pharmaceutical chemistry is to manage and manipulate the clinical effects of different chemical structures, as well as to manage phenomena recognized in first-in-human studies. Typically clinical pharmaceutical chemistry has an important role in the discovery, design and manipulation of new drug entities, and is especially vital in early clinical studies (such as Phase I studies). See also Medicinal chemistry References Medicinal chemistry
Clinical pharmaceutical chemistry
[ "Chemistry", "Biology" ]
202
[ "Medicinal chemistry stubs", "Biochemistry stubs", "nan", "Medicinal chemistry", "Biochemistry" ]
47,222,588
https://en.wikipedia.org/wiki/Molecular%20vapor%20deposition
Molecular vapor deposition is the gas-phase reaction between surface-reactive chemicals and an appropriately receptive surface. Often bi-functional silanes are used, in which one termination of the molecule is reactive. For example, a functional chlorosilane (R-Si-Cl3) can react with surface hydroxyl groups (-OH), resulting in deposition of the R-terminated silane on the surface. The advantage of a gas-phase reaction over a comparable liquid-phase process is the control of moisture from the ambient environment; uncontrolled moisture often results in cross-polymerization of the silane, leading to particulates on the treated surface. Often a heated sub-atmospheric vacuum chamber is used to allow precise control of the reactants and the water content. Additionally, the gas-phase process allows easy treatment of complex parts, since the coverage of the reactant is generally diffusion limited. Microelectromechanical systems (MEMS) sensors often use molecular vapor deposition as a technique to address stiction and other parasitic issues related to surface-to-surface interactions. References Chemical vapor deposition Industrial processes Thin film deposition
Molecular vapor deposition
[ "Chemistry", "Materials_science", "Mathematics" ]
219
[ "Materials science stubs", "Thin film deposition", "Coatings", "Thin films", "Nanotechnology stubs", "Chemical vapor deposition", "Planes (geometry)", "Solid state engineering", "Nanotechnology" ]
59,065,686
https://en.wikipedia.org/wiki/Electrical%20busbar%20system
Electrical busbar systems (sometimes simply referred to as busbar systems) are a modular approach to electrical wiring: instead of running standard cable wiring to every single electrical device, the electrical devices are mounted onto adapters which are fitted directly to a current-carrying busbar. This modular approach is used in distribution boards, automation panels and other kinds of installation in an electrical enclosure. Busbar systems, together with the electrical enclosure, are subject to safety standards for design and installation according to IEC 61439-1, which vary between countries and regions. Content & types of busbar systems A busbar system usually contains several busbar holders, busbars, adapters to mount devices, clamps (with or without protective covering) to feed or distribute current from the busbar system, and busbar-mountable electrical devices. Electrical busbar systems are differentiated by the distance between the centers of adjacent busbars and vary according to the maximum current-carrying capacity of the system, which is defined by IEC standards. The commonly known busbar system types are: 40 mm busbar system (current-carrying capacity up to 300–400 A) 60 mm busbar system (current-carrying capacity up to 800–2500 A) 100 mm busbar system (current-carrying capacity up to 1250 A) 185 mm busbar system (current-carrying capacity up to 2500 A) Advantages and disadvantages over traditional electrical wiring Advantages Electrically safe installation up to IP 60 inside the cabinet Drastically reduced space requirement inside the cabinet Easy troubleshooting in case of switchgear failure Pre-tested short-circuit rating Mounting of 2, 3, 4 or 5 pole switchgear in a single construction Time saving during construction of the cabinet Disadvantages Commercially not viable if the number of switchgear units is low Specialists needed to construct the busbar system from a wiring diagram Lack of adapters for mounting some electrical devices on the busbar Special types of busbars needed to construct systems carrying more than 800 amperes See also 10603 – a frequently used MIL-SPEC compliant wire Bus duct Cable Entry System Cable gland Cable management Cable tray Domestic AC power plugs and sockets Electrical conduit Electrical room Electrical wiring in North America Electrical wiring in the United Kingdom Electricity distribution Grounding Home wiring Industrial and multiphase power plugs and sockets Neutral wire OFHC Portable cord Power cord Restriction of Hazardous Substances Directive (RoHS) Single-phase electric power Structured cabling Three-phase electric power References Electrical wiring
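The mapping from centre spacing to quoted rating in the list above can be captured as a simple lookup; the following Python fragment is purely illustrative (the helper name is hypothetical and the ratings are only those quoted above, not a substitute for an actual IEC 61439-1 design calculation).

# Illustrative lookup: pick the smallest listed busbar system whose quoted rating
# covers the design current. Ratings are the figures quoted in the list above.
BUSBAR_SYSTEMS = {          # centre spacing (mm) -> maximum quoted rating (A)
    40: 400,
    60: 2500,
    100: 1250,
    185: 2500,
}

def smallest_suitable_system(design_current_a):
    candidates = [(spacing, rating) for spacing, rating in BUSBAR_SYSTEMS.items()
                  if rating >= design_current_a]
    if not candidates:
        raise ValueError("no listed system covers this current")
    return min(candidates)[0]   # smallest centre spacing that is sufficient

print(smallest_suitable_system(350))   # -> 40
print(smallest_suitable_system(1000))  # -> 60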
Electrical busbar system
[ "Physics", "Engineering" ]
498
[ "Electrical systems", "Building engineering", "Physical systems", "Electrical engineering", "Electrical wiring" ]
59,068,034
https://en.wikipedia.org/wiki/Alabama%20v.%20North%20Carolina
Alabama v. North Carolina, 560 U.S. 330 (2010), was an original jurisdiction United States Supreme Court case. It arose from a disagreement between the state of North Carolina and the other members of the Southeast Interstate Low-Level Radioactive Waste Management Compact over the funding for a joint project. Eight states had formed the compact in 1983 to manage low-level radioactive waste in the southeastern United States. In 1986, North Carolina was chosen as the location for the regional waste facility, and it asked the other states for funding to help with the project. The project stalled and was eventually shut down, despite North Carolina receiving $80 million from the other states. After the project's demise, the other states demanded their money back, but North Carolina refused to repay them, leading to this case. Background In 1980, Congress passed the Low Level Radioactive Waste Policy Act to authorize the creation of interstate agreements regarding the management of low-level radioactive waste. Accordingly, in 1983, North Carolina, along with the states of Alabama, Florida, Georgia, Mississippi, South Carolina, Tennessee, and Virginia, formed the Southeast Interstate Low-Level Radioactive Waste Management Compact to coordinate their management of low-level radioactive waste. It was run by a commission, which was tasked with choosing a State in which to construct a "regional disposal facility". In 1986, the commission chose North Carolina, thus requiring it to begin the process of seeking a licence for the construction of such a facility. Two years later, North Carolina asked the other states for monetary assistance with the project, which it received – by 1997, North Carolina had been paid more than $80 million. Yet, despite $34 million of North Carolina's own funds, it was unable to obtain the license in a timely fashion. In 1997, the commission told North Carolina that, without a plan for funding the rest of the licensing steps, it would be cut off; when it was, North Carolina began to shut down the project, claiming that it could not continue without additional funding. In response, in June 1999, Florida and Tennessee asked that the commission levy monetary sanctions against North Carolina. North Carolina responded by attempting to leave the Compact entirely. It based this decision on a clause which declared that "any party state may withdraw from the compact by enacting a law repealing the compact, provided that if a regional facility is located within such state, such regional facility shall remain available to the region for four years after the date the commission receives verification in writing from the Governor of such party state of the rescission of the Compact". The commission, in response to the complaint by Florida and Tennessee, demanded in December 1999 that, in addition to other monetary penalties, North Carolina repay approximately $80 million. The commission believed that, under article 7(F) of the original Compact, it had the power to level such monetary sanctions. However, North Carolina disagreed, and refused to comply with the commission's sanctions. Case history In 2003, the Supreme Court allowed Alabama, Florida, Tennessee, and Virginia (the only four remaining members of the Compact), and the commission to sue North Carolina under the Court's original jurisdiction. 
The plaintiffs requested "monetary and other relief, including a declaration that North Carolina is subject to sanctions and that the commission's sanctions resolution is valid and enforceable." The case was assigned to a special master, who filed two reports. In January 2010, the Supreme Court heard oral arguments regarding the exceptions to the reports that were filed by both parties. Decision The Supreme Court overruled all of the states' objections to the Special Master's Reports. It held that the Compact did not give the commission the power to impose monetary sanctions against North Carolina; that the Court did not need to follow the commission's findings regarding North Carolina's supposed breach of its obligations; that North Carolina did not breach its obligations to take "appropriate steps" towards getting a license; and that North Carolina was allowed to withdraw from the Compact. The Court remanded the remainder of the case back to the Special Master to further adjudicate the equitable claims raised by the petitioners. Subsequent history In January 2011, the case was dismissed by agreement of the parties. References External links Southeast Compact Commission website Compact (PDF, download only) United States Supreme Court cases United States Supreme Court cases of the Roberts Court United States Supreme Court original jurisdiction cases Radioactive waste 2010 in United States case law
Alabama v. North Carolina
[ "Chemistry", "Technology" ]
897
[ "Environmental impact of nuclear power", "Radioactive waste", "Hazardous waste", "Radioactivity" ]
52,832,272
https://en.wikipedia.org/wiki/Mitochondria%20associated%20membranes
Mitochondria-associated membranes (MAMs) represent regions of the endoplasmic reticulum (ER) which are reversibly tethered to mitochondria. These membranes are involved in the import of certain lipids from the ER to mitochondria and in the regulation of calcium homeostasis, mitochondrial function, autophagy and apoptosis. They also play a role in the development of neurodegenerative diseases and in glucose homeostasis. Role In mammalian cells, the formation of these linkage sites is important for several cellular events, including: Calcium homeostasis Mitochondria-associated membranes are involved in the transport of calcium from the ER to mitochondria. This interaction is important for rapid uptake of calcium by mitochondria through voltage-dependent anion channels (VDACs), which are located at the outer mitochondrial membrane (OMM). This transport is regulated by chaperones and regulatory proteins which control the formation of the ER–mitochondria junction. Transfer of calcium from the ER to mitochondria depends on a high concentration of calcium in the intermembrane space, and the mitochondrial calcium uniporter (MCU) accumulates calcium in the mitochondrial matrix, driven by the electrochemical gradient. Regulation of lipid metabolism MAMs mediate the transport of phosphatidylserine from the ER into mitochondria for decarboxylation to phosphatidylethanolamine. Phosphatidic acid (PA) is transformed into phosphatidylserine (PS) by phosphatidylserine synthases 1 and 2 (PSS1, PSS2) in the ER; PS is then transferred to mitochondria, where phosphatidylserine decarboxylase (PSD) transforms it into phosphatidylethanolamine (PE). PE synthesized in mitochondria returns to the ER, where phosphatidylethanolamine methyltransferase 2 (PEMT2) synthesizes phosphatidylcholine (PC). Regulation of autophagy and mitophagy MAMs contribute to the formation of autophagosomes through the coordination of ATG (autophagy-related) proteins and vesicular trafficking. Regulation of the morphology, dynamics and functions of mitochondria, and cell survival These membrane contact sites have been associated with the delicate balance between life and death of the cell. Isolation membranes are the initial step in the formation of autophagosomes. These closed, double membrane-bound structures enclose material destined for lysosomal degradation, and this degradative function plays a role in cellular homeostasis. However, the origin of isolation membranes has remained unclear; candidate sources include the plasma membrane, the endoplasmic reticulum (ER) and the mitochondria. The ER–mitochondria contact site carries the autophagosome marker ATG14 and the autophagosome-formation marker ATG5 until the formation of the autophagosome is complete, whereas ATG14 puncta are absent when the ER–mitochondria contact site is disrupted. Oxidative stress and the onset of endoplasmic reticulum (ER) stress occur together; ER stress has a key sensor enriched at the mitochondria-associated ER membranes (MAMs). This sensor is PERK (RNA-dependent protein kinase (PKR)-like ER kinase); PERK contributes to apoptosis twofold by sustaining the levels of the pro-apoptotic C/EBP homologous protein (CHOP). A tight ER–mitochondria contact site is integral to the mechanisms controlling cellular apoptosis and to inter-organelle Ca signals. The mitochondria-associated ER membranes (MAMs) thus play a role in the modulation of cell death. Elevated matrix Ca levels act as a trigger for apoptosis through mitochondrial outer membrane permeabilization (MOMP).
MOMP is the process preceding apoptosis and is accompanied by permeabilization of the inner mitochondrial membrane (IMM). Permeability transition pore (PTP) opening induces mitochondrial swelling and rupture of the outer mitochondrial membrane (OMM). Moreover, PTP opening induces the release of caspase-activating factors and apoptosis. Released cytochrome c can bind to the IP3R, which results in higher Ca transfer from the ER to the mitochondria, amplifying the apoptotic signal. Alzheimer's disease (AD) MAMs play an important role in Ca homeostasis and in phospholipid and cholesterol metabolism. Research has associated the alteration of these MAM functions with Alzheimer's disease. Mitochondria-associated membranes in Alzheimer's disease have been reported to show an up-regulation of lipids synthesized at the MAM juxtaposition and an up-regulation of protein complexes present in the contact region between the ER and mitochondria. Research has suggested that MAM sites are the primary sites of γ-secretase activity and amyloid precursor protein (APP) localization, along with the presenilin 1 (PS1) and presenilin 2 (PS2) proteins. γ-secretase functions in the cleavage of the β-APP protein. Patients diagnosed with Alzheimer's disease show an accumulation of amyloid beta peptide in the brain, an observation which gave rise to the amyloid cascade hypothesis. Increased connectivity between the ER and the mitochondria at MAM sites, reflected in a greater number of contact sites, has also been observed in human patients diagnosed with familial AD (FAD). These individuals showed mutations in the PS1, PS2 and APP proteins found at the MAM sites. This increased connectivity also caused abnormalities in Ca signaling between neurons. With regard to the role of MAMs in phospholipid metabolism, patients diagnosed with AD have been reported to show altered levels of phosphatidylserine and phosphatidylethanolamine in the ER and mitochondria respectively; this leads to the intracellular tangles containing hyperphosphorylated forms of the microtubule-associated protein tau within tissues. Parkinson's disease (PD) One of the causes of Parkinson's disease is mutations in genes encoding different proteins that are localized at the MAM sites. Mutations in the genes that encode the proteins Parkin, PINK1, alpha-synuclein (α-Syn) or the protein deglycase DJ-1 have been linked to this disease through research. However, further research is needed to determine the direct correlation of these genes with Parkinson's disease. Under normal conditions, these genes are believed to be responsible for the cell's ability to degrade mitochondria that have been rendered nonfunctional, in a process known as mitophagy. However, mutations in the Parkin and PINK1 genes have been associated with the cells becoming incapable of degrading faulty mitochondria. The proteins alpha-synuclein (α-Syn) and DJ-1 have been shown to promote MAM function and the interaction between the ER and the mitochondria. The wild-type gene that codes for α-Syn promotes the physical junction between the ER and mitochondria by binding to the lipid raft regions of the MAM. However, the mutant form of this gene has a low affinity for the lipid raft regions, thereby diminishing the contact between the ER and mitochondria and causing accumulation of α-Syn in Lewy bodies, which is a major characteristic of PD.
Further research on the association of PD with alterations in MAMs is ongoing. References Area-Gomez E, de Groof AJ, Boldogh I, Bird TD, Gibson GE, Koehler CM, Yu WH, Duff KE, Yaffe MP, Pon LA, Schon EA. Presenilins are enriched in endoplasmic reticulum membranes associated with mitochondria. Am J Pathol. 2009 Nov;175(5):1810-6. doi: 10.2353/ajpath.2009.090219. PMID: 19834068 Neurodegenerative disorders Cell biology Mitochondria Membrane biology
Mitochondria associated membranes
[ "Chemistry", "Biology" ]
1,785
[ "Cell biology", "Mitochondria", "Membrane biology", "Molecular biology", "Metabolism" ]
52,837,586
https://en.wikipedia.org/wiki/Pseudo%20Jahn%E2%80%93Teller%20effect
The pseudo Jahn–Teller effect (PJTE), occasionally also known as second-order JTE, is a direct extension of the Jahn–Teller effect (JTE) where spontaneous symmetry breaking in polyatomic systems (molecules and solids) occurs even when the relevant electronic states are not degenerate. The PJTE can occur under the influence of sufficiently low-lying electronic excited states of appropriate symmetry. "The pseudo Jahn–Teller effect is the only source of instability and distortions of high-symmetry configurations of polyatomic systems in nondegenerate states, and it contributes significantly to the instability in degenerate states". History In their early 1957 paper on what is now called pseudo Jahn–Teller effect (PJTE), Öpik and Pryce showed that a small splitting of the degenerate electronic term does not necessarily remove the instability and distortion of a polyatomic system induced by the Jahn–Teller effect (JTE), provided that the splitting is sufficiently small (the two split states remain "pseudo degenerate"), and the vibronic coupling between them is strong enough. From another perspective, the idea of a "mix" of different electronic states induced by low-symmetry vibrations was introduced in 1933 by Herzberg and Teller to explore forbidden electronic transitions, and extended in the late 1950s by Murrell and Pople and by Liehr. The role of excited states in softening the ground state with respect to distortions in benzene was demonstrated qualitatively by Longuet-Higgins and Salem by analyzing the π electron levels in the Hückel approximation, while a general second-order perturbation formula for such vibronic softening was derived by Bader in 1960. In 1961 Fulton and Gouterman presented a symmetry analysis of the two-level case in dimers and introduced the term "pseudo Jahn–Teller effect". The first application of the PJTE to solving a major solid-state structural problem with regard to the origin of ferroelectricity was published in 1966 by Isaac Bersuker, and the first book on the JTE covering the PJTE was published in 1972 by Englman. The second-order perturbation approach was employed by Pearson in 1975 to predict instabilities and distortions in molecular systems; he called it "second-order JTE" (SOJTE). The first explanation of the PJT origin of puckering distortion, as due to the vibronic coupling to the excited state, was given for the N3H32+ radical by Borden, Davidson, and Feller in 1980 (they called it "pyramidalization"). Methods of numerical calculation of the PJT vibronic coupling effect with applications to spectroscopic problems were developed in the early 1980s. A significant step forward in this field was achieved in 1984 when it was shown by numerical calculations that the energy gap to the active excited state may not be the ultimate limiting factor in the PJTE, as there are two other compensating parameters in the condition of instability. It was also shown that, in extension of the initial definition, the PJT interacting electronic states are not necessarily components emerging from the same symmetry type (as in the split degenerate term). As a result, the applicability of the PJTE became a priori unlimited. Moreover, it was shown by Bersuker that the PJTE is the only source of instability of high-symmetry configurations of polyatomic systems in nondegenerate states (works cited in Refs.), and degeneracy and pseudo degeneracy are the only sources of spontaneous symmetry breaking in matter in all its forms.
The many applications of the PJTE to the study of a variety of properties of molecular systems and solids are reflected in a number of reviews and books, as well as in proceedings of conferences on the JTE. Theoretical background General theory The equilibrium geometry of any polyatomic system in nondegenerate states is defined as corresponding to the point of the minimum of the adiabatic potential energy surface (APES), where its first derivatives are zero and the second derivatives are positive. If we denote the energy of the system as a function of normal displacements $Q_\alpha$ as $E(Q_\alpha)$, at the minimum point of the APES ($Q_\alpha = 0$), the curvature of $E$ in the direction $Q_\alpha$, (1) $K_\alpha = (\partial^2 E/\partial Q_\alpha^2)_0$, is positive, i.e., $K_\alpha > 0$. Very often the geometry of the system at this point of equilibrium on the APES does not coincide with the highest possible (or even with any high) symmetry expected from general symmetry considerations. For instance, linear molecules are bent at equilibrium, planar molecules are puckered, octahedral complexes are elongated, or compressed, or tilted, cubic crystals are tetragonally polarized (or have several structural phases), etc. The PJTE is the general driving force of all these distortions if they occur in the nondegenerate electronic states of the high-symmetry (reference) geometry. If at the reference configuration the system is structurally unstable with respect to some nuclear displacements $Q$, then $K < 0$ in this direction. The general formula for the energy is $E = \langle\psi_0|H|\psi_0\rangle$, where $H$ is the Hamiltonian and $\psi_0$ is the wavefunction of the nondegenerate ground state. Substituting in Eq. (1), we get (omitting the index $\alpha$ for simplicity) (2) $K = K_0 + K_v$, (3) $K_0 = \langle\psi_0|(\partial^2 H/\partial Q^2)_0|\psi_0\rangle$, (4) $K_v = -2\sum_n |\langle\psi_0|(\partial H/\partial Q)_0|\psi_n\rangle|^2/(E_n - E_0)$, where $\psi_n$ are the wavefunctions of the excited states, and the $K_v$ expression, obtained as a second order perturbation correction, is always negative, $K_v < 0$. Therefore, if $K < 0$, the $K_v$ contribution is the only source of instability. The matrix elements in Eq. (4) are off-diagonal vibronic coupling constants, (5) $F_{0n} = \langle\psi_0|(\partial H/\partial Q)_0|\psi_n\rangle$. These measure the mixing of the ground and excited states under the nuclear displacements $Q$, and therefore $K_v$ is termed the vibronic contribution. Together with the $K_0$ value and the energy gap between the mixing states, the constants $F_{0n}$ are the main parameters of the PJTE (see below). In a series of papers beginning in 1980 (see the cited references) it was proved that for any polyatomic system in the high-symmetry configuration (6) $K_0 > 0$, and hence the vibronic contribution $K_v$ is the only source of instability of any polyatomic system in nondegenerate states. If $K_0 > 0$ for the high-symmetry configuration of any polyatomic system, then a negative curvature, $K < 0$, can be achieved only due to the negative vibronic coupling component $K_v$, and only if $|K_v| > K_0$. It follows that any distortion of the high-symmetry configuration is due to, and only to, the mixing of its ground state with excited electronic states by the distortive nuclear displacements realized via the vibronic coupling in Eq. (5). The latter softens the system with respect to certain nuclear displacements ($K_v < 0$), and if this softening is larger than the original (nonvibronic) hardness $K_0$ in this direction, the system becomes unstable with respect to the distortions under consideration, leading to its equilibrium geometry of lower symmetry, or to dissociation. There are many cases when neither the ground state is degenerate, nor is there a significant vibronic coupling to the lowest excited states to realize PJTE instability of the high-symmetry configuration of the system, and still there is a ground state equilibrium configuration with lower symmetry.
In such cases the symmetry breaking is produced by a hidden PJTE (similar to a hidden JTE); it takes place due to a strong PJTE mixing of two excited states, one of which crosses the ground state to create a new (lower) minimum of the APES with a distorted configuration. The two-level problem The use of the second order perturbation correction, Eq. (4), for the calculation of the value in the case of PJTE instability is incorrect because in this case , meaning the first perturbation correction is larger than the main term, and hence the criterion of applicability of the perturbation theory in its simplest form does not hold. In this case, we should consider the contribution of the lowest excited states (that make the total curvature negative) in a pseudo degenerate problem of perturbation theory. For the simplest case when only one excited state creates the main instability of the ground state, we can treat the problem via a pseudo degenerate two-level problem, including the contribution of the higher, weaker-influencing states as a second order correction. In the PJTE two-level problem we have two electronic states of the high-symmetry configuration, ground and excited , separated by an energy interval of , that become mixed under nuclear displacements of certain symmetry ; the denotations , , and indicate, respectively, the irreducible representations to which the symmetry coordinate and the two states belong. In essence, this is the original formulation of the PJTE. Assuming that the excited state is sufficiently close to the ground one, the vibronic coupling between them should be treated as a perturbation problem for two near-degenerate states. With both interacting states non-degenerate the vibronic coupling constant in Eq. (5) (omitting indices) is non-zero for only one coordinate with . This gives us directly the symmetry of the direction of softening and possible distortion of the ground state. Assuming that the primary force constants in the two states are the same (for different see [1]), we get a 2×2 secular equation with the following solution for the energies of the two states interacting under the linear vibronic coupling (energy is referred to the middle of the gap between the levels at the undistorted geometry): (7) It is seen from these expressions that, on taking into account the vibronic coupling, , the two APES curves change in different ways: in the upper sheet the curvature (the coefficient at in the expansion on ) increases, whereas in the lower one it decreases. But until the minima of both states correspond to the point , as in the absence of vibronic mixing. However, if (8) the curvature of the lower curve of the APES becomes negative, and the system is unstable with respect to the displacements (Fig. 1). Under condition (8), the minima points on the APES are given by (9) From these expressions and Fig. 1 it is seen that while the ground state is softened (destabilized) by the PJTE, the excited state is hardened (stabilized), and this effect is the larger, the smaller and the larger F. It takes place in any polyatomic system and influences many molecular properties, including the existence of stable excited states of molecular systems that are unstable in the ground state (e.g., excited states of intermediates of chemical reactions); in general, even in the absence of instability the PJTE softens the ground state and increases the vibrational frequencies in the excited state. 
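For reference, the two-level expressions discussed above can be summarized in a common textbook notation; the symbols used here — $2\Delta$ for the energy gap at $Q = 0$, $F$ for the linear vibronic coupling constant and $K_0$ for the primary force constant — are an assumption of this sketch and may differ from those of the original sources.

$\varepsilon_\pm(Q) = \tfrac{1}{2} K_0 Q^2 \pm \sqrt{\Delta^2 + F^2 Q^2}$  (the two APES branches, referenced to the middle of the gap)

$\left. d^2\varepsilon_-/dQ^2 \right|_{Q=0} = K_0 - F^2/\Delta$, so the lower branch loses its minimum at $Q = 0$ when $\Delta < F^2/K_0$  (condition of instability)

$Q_0 = \pm\sqrt{F^2/K_0^2 - \Delta^2/F^2}$  (positions of the two equivalent minima of the lower branch when the condition holds)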
Comparison with the Jahn-Teller effect The two branches of the APES for the case of strong PJTE resulting in the instability of the ground state (when the condition of instability (11) holds) are illustrated in Fig. 1b in comparison with the case when the two states have the same energy (Fig. 1a), i. e. when they are degenerate and the Jahn–Teller effect (JTE) takes place. We see that the two cases, degenerate and nondegenerate but close-in-energy (pseudo degenerate) are similar in generating two minima with distorted configurations, but there are important differences: while in the JTE there is a crossing of the two terms at the point of degeneracy (leading to conical intersections in more complicated cases), in the nondegenerate case with strong vibronic coupling there is an "avoided crossing" or "pseudo crossing". Even a more important difference between the two vibronic coupling effects emerges from the fact that the two interacting states in the JTE are components of the same symmetry type, whereas in the PJTE each of the two states may have any symmetry. For this reason, the possible kinds of distortion is very limited in the JTE, and unlimited in the PJTE. It is also noticeable that while the systems with JTE are limited by the condition of electron degeneracy, the applicability of the PJTE has no a priori limitations, as it includes also the cases of degeneracy. Even when the PJT coupling is weak and the inequality (11) does not hold, the PJTE is still significant in softening (lowering the corresponding vibrational frequency) of the ground state and increasing it in the excited state. When considering the PJTE in an excited state, all the higher in energy states destabilize it, while the lower ones stabilize it. For a better understanding it is important to follow up on how the PJTE is related to intramolecular interactions. In other words, what is the physical driving force of the PJTE distortions (transformations) in terms of well-known electronic structure and bonding? The driving force of the PJTE is added (improved) covalence: the PJTE distortion takes place when it results in an energy gain due to greater covalent bonding between the atoms in the distorted configuration. Indeed, in the starting high-symmetry configuration the wavefunctions of the electronic states, ground and excited, are orthogonal by definition. When the structure is distorted, their orthogonality is violated, and a nonzero overlap between them occurs. If for two near-neighbor atoms the ground state wavefunction pertains (mainly) to one atom and the excited state wavefunction belongs (mainly) to the other, the orbital overlap resulting from the distortion adds covalency to the bond between them, so the distortion becomes energetically favorable (Fig. 2). Applications Examples of the PJTE being used to explain chemical, physical, biological, and materials science phenomena are innumerable; as stated above, the PJTE is the only source of instability and distortions in high-symmetry configurations of molecular systems and solids with nondegenerate states, hence any phenomenon stemming from such instability can be explained in terms of the PJTE. Below are some illustrative examples. Linear systems PJTE versus Renner–Teller effect in bending distortions. 
Linear molecules are exceptions from the JTE, and for a long time it was assumed that their bending distortions in degenerate states (observed in many molecules) is produced by the Renner–Teller effect (RTE) (the splitting of the generate state by the quadratic terms of the vibronic coupling). However, recently it was proved that the RTE, by splitting the degenerate electronic state, just softens the lower branch of the APES, but this lowering of the energy is not enough to overcome the rigidity of the linear configuration and to produce bending distortions. It follows that the bending distortion of linear molecular systems is due to, and only to the PJTE that mixes the electronic state under consideration with higher in energy (excited) states. This statement is enhanced by the fact that many linear molecules in nondegenerate states (and hence with no RTE) are, too, bent in the equilibrium configuration. The physical reason for the difference between the PJTE and the RTE in influencing the degenerate term is that while in the former case the vibronic coupling with the excited state produces additional covalent bonding that makes the distorted configuration preferable (see above, section 2.3), the RTE has no such influence; the splitting of the degenerate term in the RTE takes place just because the charge distribution in the two states becomes nonequivalent under the bending distortion. Peierls distortion in linear chains. In linear molecules with three or more atoms there may be PJTE distortions that do not violate the linearity but change the interatomic distances. For instance, as a result of the PJTE a centrosymmetric linear system may become non-centrosymmetric in the equilibrium configurations, as, for example, in the BNB molecule (see in ). An interesting extension of such distortions in sufficiently long (infinite) linear chains was first considered by Peierls. In this case the electronic states, combinations of atomic states, are in fact band states, and it was shown that if the chain is composed by atoms with unpaired electrons, the valence band is only half filled, and the PJTE interaction between the occupied and unoccupied band states leads to the doubling of the period of the linear chain (see also in the books ). Broken cylindrical symmetry. It was shown also that the PJTE not only produces the bending instability of linear molecules, but if the mixing electronic states involve a Δ state (a state with a nonzero momentum with respect to the axis of the molecule, its projection quantum number being Λ=2), the APES, simultaneously with the bending, becomes warped along the coordinate of rotations around the molecular axis, thus violating both the linear and cylindrical symmetry. It happens because the PJTE, by mixing the wavefunctions of the two interacting states, transfers the high momentum of the electrons from states with Λ=2 to states with lower momentum, and this may alter significantly their expected rovibronic spectra. Nonlinear molecules and two-dimensional (2D) systems PJTE and combined PJTE plus JTE effects in molecular structures. There is a practically unlimited number of molecular systems for which the origin of their structural properties was revealed and/or rationalized based on the PJTE, or a combination of the PJTE and JTE. 
The latter stems from the fact that in any system with a JTE in the ground state the presence of a PJT active excited state is not excluded, and vice versa, the active excited state for the PJTE of the ground one may be degenerate, and hence JT active. Examples are shown, e.g., in Refs., including molecular systems like Na3, C3H3, C4X4 (X= H, F, Cl, Br), CO3, Si4R4 (with R as large ligands), planar cyclic CnHn, all kind of coordination systems of transition metals, mixed-valence compounds, biological systems, origin of conformations, geometry of ligands' coordination, and others. Indeed, it is difficult to find a molecular system for which the PJTE implications are a priori excluded, which is understandable in view of the mentioned above unique role of the PJTE in such instabilities. Three methods to quench the PJTE have been documented: changing the electronic charge of the molecule, sandwiching the molecule with other ions and cyclic molecules, and manipulating the environment of the molecule. Hidden PJTE, spin crossover, and magnetic-dielectric bistability. As mentioned above, there are molecular systems in which the ground state in the high-symmetry configuration is neither degenerate to trigger the JTE, nor does it interact with the low-lying excited states to produce the PJTE (e.g., because of their different spin multiplicity). In these situations, the instability is produced by a strong PJTE in the excited states; this is termed "hidden PJTE" in the sense that its origin is not seen explicitly as a PJTE in the ground state. An interesting typical situation of hidden PJTE emerges in molecular and solid-state systems with valence half-filed closed shells electronic configurations e2 and t3. For instance, in the e2 case the ground state in the high-symmetry equilibrium geometry is an orbital non-degenerate triplet 3A, while the nearby low-lying two excited electronic states are close-in-energy singlets 1E and 1A; due to the strong PJT interaction between the latter, the lower component of 1E crosses the triplet state to produce a global minimum with lower symmetry. Fig. 3 illustrates the hidden PJTE in the CuF3 molecule, showing also the singlet-triplet spin crossover and the resulting two coexisting configurations of the molecule: high-symmetry (undistorted) spin-triplet state with a nonzero magnetic moment, and a lower in energy dipolar-distorted singlet state with zero magnetic moment. Such magnetic-dielectric bistability is inherent to a whole class of molecular systems and solids. Puckering in planar molecules and graphene-like 2D and quasi 2D systems. Special attention has been paid recently to 2D systems in view of a variety of their planar-surface-specific physical and chemical properties and possible graphene-like applications in electronics. Similar-to-graphene properties are sought for in silicene, phosphorene, boron nitride, zinc oxide, gallium nitride, as well as in 2D transition metal dichalkogenides and oxides, plus a number of other organic and inorganic 2D and quasi-2D compounds with expected similar properties. One of the main important features of these systems is their planarity or quasi-planarity, but many of the quasi-2D compounds are subject to out-of-plane deviations known as puckering (buckling). The instability and distortions of the planar configuration (as in any other systems in nondegenerate state) was shown to be due to the PJTE. 
Detailed exploration of the PJTE in such systems allows one to identify the excited states that are responsible for the puckering, and suggest possible external influence that restores their planarity, including oxidation, reduction, substitutions, or coordination to other species. Recent investigations have also extended to 3D compounds. Solid state and materials science Cooperative PJTE in BaTiO3-type crystals and ferroelectricity. In crystals with PJTE centers the interaction between the local distortions may lead to their ordering to produce a phase transition to a regular crystal phase with lower symmetry. Such cooperative PJTE is quite similar to the cooperative JTE; it was shown in one of the first studies of the PJTE in solid state systems that in the case of ABO3 crystals with perovskite structure the local dipolar PJTE distortions at the transition metal B center and their cooperative interactions lead to ferroelectric phase transitions. Provided the criterion for PJTE is met, each [BO6] center has an APES with eight equivalent minima along the trigonal axes, six orthorhombic, and (higher) twelve tetragonal saddle-points between them. With temperature, the gradually reached transitions between the minima via the different kind of saddle-points explains the origin of all the four phases (three ferroelectric and one paraelectric) in perovskites of the type BaTiO3 and their properties. The predicted by the theory trigonal displacement of the Ti ion in all four phases, the fully disordered PJTE distortions in the paraelectric phase, and their partially disordered state in two other phases was confirmed by a variety of experimental investigations (see in ). Multiferroicity and magnetic-ferroelectric crossover. The PJTE theory of ferroelectricity in ABO3 crystals was expanded to show that, depending on the number of electrons in the dn shell of the transition metal ion B4+ and their low spin or high spin arrangement (which controls the symmetry and spin multiplicity of the ground and PJTE active excited states of the [BO6] center), the ferroelectricity may coexist with a magnetic moment (multiferroicity). Moreover, in combination with the temperature dependent spin crossover phenomenon (which changes the spin multiplicity), this kind of multiferroicity may lead to a novel effect known as a magnetic-ferroelectric crossover. Solid state magnetic-dielectric bistability. Similar to the above-mentioned molecular bistability induced by the hidden PJTE, a magnetic-dielectric bistability due to two coexisting equilibrium configurations with corresponding properties may take place also in crystals with transition metal centers, subject to the electronic configuration with half-filled e2 or t3 shells. As in molecular systems, the latter produce a hidden PJTE and local bistability which, distinguished from the molecular case, are enhanced by the cooperative interactions, thus acquiring larger lifetimes. This crystal bistability was proved by calculations for LiCuO2 and NaCuO2 crystals, in which the Cu3+ ion has the electronic e2(d8) configuration (similar to the CuF3 molecule). Giant enhancement of observable properties in interaction with external perturbations. 
In a recent development it was shown that in inorganic crystals with PJTE centers, in which the local distortions are not ordered (before the phase transition to the cooperative phase), the effect of interaction with external perturbations contains an orientational contribution which enhances the observable properties by several orders of magnitude. This was demonstrated for the properties of crystals like paraelectric BaTiO3 in interaction with electric fields (in permittivity and electrostriction), or under a strain gradient (flexoelectricity). These giant enhancement effects occur due to the dynamic nature of the PJTE local dipolar distortions (their tunneling between the equivalent minima); the independently rotating dipole moments on each center become oriented (frozen) along the external perturbation, resulting in an orientational polarization which is not present in the absence of the PJTE. References Condensed matter physics Inorganic chemistry Solid-state chemistry
Pseudo Jahn–Teller effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
5,323
[ "Phases of matter", "Materials science", "Condensed matter physics", "nan", "Matter", "Solid-state chemistry" ]
50,461,800
https://en.wikipedia.org/wiki/YedZ%20family
YedZ (TC# 5.B.7) of E. coli has been examined topologically and has 6 transmembrane segments (TMSs) with both the N- and C-termini localized to the cytoplasm. Von Rozycki et al. (2004) identified homologues of YedZ in bacteria and animals. YedZ homologues exhibit conserved histidyl residues in their transmembrane domains that may function in heme binding. Some of the homologues encoded in the genomes of various bacteria have YedZ domains fused to transport, electron transfer and biogenesis proteins. One of the animal homologues is the 6 TMS epithelial plasma membrane antigen of the prostate (STAMP1), which is over-expressed in prostate cancer. Some animal homologues have YedZ domains fused C-terminal to homologues of NADP oxidoreductases. YedZ homologues arose by intragenic triplication of a 2 TMS-encoding element. They exhibit statistically significant sequence similarity to two families of putative heme export systems and to one family of cytochrome-containing electron carriers and biogenesis proteins. YedZ homologues can function as heme-binding proteins that facilitate or regulate oxidoreduction, transmembrane electron flow and transport. Homologues of YedZ are found in a variety of bacteria, including magnetotactic bacteria and cyanobacteria, where YedZ domains are fused C-terminal to magnetosome transporters of the MFS superfamily (TC# 2.A.1) and to electron carriers of the DsbD family (TC# 5.A.1), respectively. YedZ homologues are found in animals, one of which is a human 6 TMS epithelial plasma membrane antigen that is expressed at high levels in prostate cancer cells. Even more distant homologues may include the transmembrane domain within members of the gp91phox NADPH oxidase-associated cytochrome b558 (CytB) family (TC# 5.B.2). Heme-containing transmembrane ferric reductase domains (FRD) are found in both bacterial and eukaryotic proteins including ferric reductases (FRE) and NADPH oxidases (NOX). Bacteria contain FRD proteins consisting only of a ferric reductase domain, such as YedZ and short FRE proteins. Full length FRE and NOX enzymes are mostly found in eukaryotes and possess a dehydrogenase domain, allowing them to catalyze electron transfer from cytosolic NADPH to extracellular metal ions (FRE) or oxygen (NOX). Metazoa possess YedZ-related STEAP proteins. Phylogenetic analyses suggest that FRE enzymes appeared early in evolution, followed by a transition towards EF-hand containing NOX enzymes (NOX5- and DUOX-like). NOX enzymes are distinguished from FRE enzymes through a four amino acid motif spanning from transmembrane domain 3 (TM3) to TM4, and YedZ/STEAP proteins are identified by the replacement of the first canonical heme-spanning histidine by a highly conserved arginine. Six-transmembrane epithelial antigen of the prostate 3 (Steap3) is the major ferric reductase in developing erythrocytes. Steap family proteins are defined by a shared transmembrane domain that in Steap3 has been shown to function as a transmembrane electron shuttle, moving cytoplasmic electrons derived from NADPH across the lipid bilayer to the extracellular face where they are used to reduce Fe3+ to Fe2+ and potentially Cu2+ to Cu1+. High affinity FAD and iron binding sites and a single b-type heme binding site are present in the Steap3 transmembrane domain. Steap3 is functional as a homodimer and utilizes an intrasubunit electron transfer pathway through the single heme moiety rather than an intersubunit electron pathway through a potential domain-swapped dimer.
The sequence motifs in the transmembrane domain that are associated with the FAD and metal binding sites are not only present in Steap2 and Steap4 but also in Steap1 which lacks the N-terminal oxidoreductase domain, suggesting that Steap1 harbors latent oxidoreductase activity. References Protein families Membrane proteins Transmembrane proteins Transmembrane transporters Transport proteins Integral membrane proteins
YedZ family
[ "Biology" ]
978
[ "Protein families", "Protein classification", "Membrane proteins" ]
50,475,634
https://en.wikipedia.org/wiki/Real%20element
In group theory, a discipline within modern algebra, an element $x$ of a group $G$ is called a real element of $G$ if it belongs to the same conjugacy class as its inverse $x^{-1}$, that is, if there is a $g$ in $G$ with $x^g = x^{-1}$, where $x^g$ is defined as $g^{-1}xg$. An element $x$ of a group $G$ is called strongly real if there is an involution $t$ with $x^t = x^{-1}$. An element $x$ of a group $G$ is real if and only if, for all representations of $G$, the trace of the corresponding matrix is a real number. In other words, an element $x$ of a group $G$ is real if and only if $\chi(x)$ is a real number for all characters $\chi$ of $G$. A group with every element real is called an ambivalent group. Every ambivalent group has a real character table. The symmetric group of any degree is ambivalent. Properties A group with real elements other than the identity element necessarily is of even order. For a real element $x$ of a group $G$, the number of group elements $g$ with $x^g = x^{-1}$ is equal to $|C_G(x)|$, where $C_G(x)$ is the centralizer of $x$, $C_G(x) = \{g \in G \mid x^g = x\}$. Every involution is strongly real. Furthermore, every element that is the product of two involutions is strongly real. Conversely, every strongly real element is the product of two involutions. If $x \in G$, $x$ is real in $G$, and $|C_G(x)|$ is odd, then $x$ is strongly real in $G$. Extended centralizer The extended centralizer $C^*_G(x)$ of an element $x$ of a group $G$ is defined as $C^*_G(x) = \{g \in G \mid x^g = x \text{ or } x^g = x^{-1}\}$, making the extended centralizer of an element $x$ equal to the normalizer of the set $\{x, x^{-1}\}$. The extended centralizer of an element of a group $G$ is always a subgroup of $G$. For involutions or non-real elements, centralizer and extended centralizer are equal. For a real element $x$ of a group $G$ that is not an involution, $[C^*_G(x) : C_G(x)] = 2$. See also Brauer–Fowler theorem Notes References Group theory
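As a concrete illustration of the distinction between real and strongly real elements, the following Python sketch (not part of the original article; the quaternion group Q8 is chosen purely for illustration) checks every element by brute force. In Q8 the elements ±i, ±j, ±k come out real but not strongly real, since the only involution, −1, is central and so cannot invert them.

# Brute-force check of real / strongly real elements in the quaternion group Q8,
# with quaternions represented as integer 4-tuples (w, x, y, z).
def qmul(a, b):
    # Hamilton product of two quaternions
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

E = (1, 0, 0, 0)
Q8 = [tuple(s * v for v in u)
      for u in [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
      for s in (1, -1)]

def inv(a):
    # finite group, so the inverse can be found by search
    for b in Q8:
        if qmul(a, b) == E:
            return b

def conj(g, x):
    # x^g = g^-1 x g, matching the convention used above
    return qmul(qmul(inv(g), x), g)

for x in Q8:
    real = any(conj(g, x) == inv(x) for g in Q8)
    strongly = any(conj(t, x) == inv(x) for t in Q8
                   if t != E and qmul(t, t) == E)   # t ranges over involutions only
    print(x, "real:", real, "strongly real:", strongly)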
Real element
[ "Mathematics" ]
355
[ "Group theory", "Fields of abstract algebra" ]
51,291,103
https://en.wikipedia.org/wiki/Mivar-based%20approach
The Mivar-based approach is a mathematical tool for designing artificial intelligence (AI) systems. Mivar (Multidimensional Informational Variable Adaptive Reality) was developed by combining production and Petri nets. The Mivar-based approach was developed for semantic analysis and adequate representation of humanitarian epistemological and axiological principles in the process of developing artificial intelligence. The Mivar-based approach incorporates computer science, informatics and discrete mathematics, databases, expert systems, graph theory, matrices and inference systems. The Mivar-based approach involves two technologies: Information accumulation is a method of creating global evolutionary data-and-rules bases with variable structure. It works on the basis of adaptive, discrete, mivar-oriented information space, unified data and rules representation, based on three main concepts: “object, property, relation”. Information accumulation is designed to store any information with possible evolutionary structure and without limitations concerning the amount of information and forms of its presentation. Data processing is a method of creating a logical inference system or automated algorithm construction from modules, services or procedures on the basis of a trained mivar network of rules with linear computational complexity. Mivar data processing includes logical inference, computational procedures and services. Mivar networks allow us to develop cause-effect dependencies (“If-then”) and create an automated, trained, logical reasoning system. Representatives of Russian association for artificial intelligence (RAAI) – for example, V. I. Gorodecki, doctor of technical science, professor at SPIIRAS and V. N. Vagin, doctor of technical science, professor at MPEI declared that the term is incorrect and suggested that the author should use standard terminology. History While working in the Russian Ministry of Defense, O. O. Varlamov started developing the theory of “rapid logical inference” in 1985. He was analyzing Petri nets and productions to construct algorithms. Generally, mivar-based theory represents an attempt to combine entity-relationship models and their problem instance – semantic networks and Petri networks. The abbreviation MIVAR was introduced as a technical term by O. O. Varlamov, Doctor of Technical Science, professor at Bauman MSTU in 1993 to designate a “semantic unit” in the process of mathematical modeling. The term has been established and used in all of his further works. The first experimental systems operating according to mivar-based principles were developed in 2000. Applied mivar systems were introduced in 2015. Mivar Mivar is the smallest structural element of discrete information space. Object-property-relation Object-Property-Relation (VSO) is a graph, the nodes of which are concepts and arcs are connections between concepts. Mivar space represents a set of axes, a set of elements, a set of points of space and a set of values of points. where: is a set of mivar space axis names; is a number of mivar space axes. Then: where: is a set of axis elements; is a set element identifier; sets form multidimensional space: where: ; is a point of multidimensional space; are coordinates of point . There is a set of values of multidimensional space points of : where: is a value of the point of multidimensional space is a value of the point of multidimensional space . For every point of space there is a single value from set or there is no such value. 
Thus, is a set of data model state changes represented in multidimensional space. To implement a transition between multidimensional space and set of points values the relation has been introduced: where: To describe a data model in mivar information space it is necessary to identify three axes: The axis of relations «»; The axis of attributes (properties) «»; The axis of elements (objects) of subject domain «». These sets are independent. The mivar space can be represented by the following tuple: Thus, mivar is described by «» formula, in which «» denotes an object or a thing, «» denotes properties, «» variety of relations between other objects of a particular subject domain. The category “Relations” can describe dependencies of any complexity level: formulae, logical transitions, text expressions, functions, services, computational procedures and even neural networks. A wide range of capabilities complicates description of modeling interconnections, but can take into consideration all the factors. Mivar computations use mathematical logic. In a simplified form they can be represented as implication in the form of an "if…, then …” formula. The result of mivar modeling can be represented in the form of a bipartite graph binding two sets of objects: source objects and resultant objects. Mivar network Mivar network is a method for representing objects of the subject domain and their processing rules in the form of a bipartite directed graph consisting of objects and rules. A Mivar network is a bipartite graph that can be described in the form of a two-dimensional matrix, in that records information about the subject domain of the current task. Generally, mivar networks provide formalization and representation of human knowledge in the form of a connected multidimensional space. That is, a mivar network is a method of representing a piece of mivar space information in the form of a bipartite, directed graph. The mivar space information is formed by objects and connections, which in total represent the data model of the subject domain. Connections include rules for objects processing. Thus, a mivar network of a subject domain is a part of the mivar space knowledge for that domain. The graph can consist of objects-variables and rules-procedures. First, two lists are made that form two nonintersecting partitions: the list of objects and the list of rules. Objects are denoted by circles. Each rule in a mivar network is an extension of productions, hyper-rules with multi-activators or computational procedures. It is proved that from the perspective of further processing, these formalisms are identical and in fact are nodes of the bipartite graph, denoted by rectangles. Multi-dimensional binary matrices Mivar networks can be implemented on single computing systems or service-oriented architectures. Certain constraints restrict their application, in particular, the dimension of matrix of linear matrix method for determining logical inference path on the adaptive rule networks. The matrix dimension constraint is due to the fact that implementation requires sending a general matrix to multiple processors. Since every matrix value is initially represented in symbol form, the amount of sent data is crucial when obtaining, for example, 10000 rules/variables. Classical mivar-based method requires storing three values in each matrix cell: 0 – no value; x – input variable for the rule; y – output variable for the rule. 
The analysis of whether a rule can be fired is separated, by stages, from the determination of output variable values after the rule has fired. Consequently, it is possible to use different matrices for the “search for fired rules” and for “setting values for output variables”. This allows the use of multidimensional binary matrices. Binary matrix fragments occupy much less space and improve the possibilities of applying mivar networks. Logical and computational data processing To implement logical-and-computational data processing the following should be done. First, a formalized subject domain description is developed. The main objects-variables and rules-procedures are specified on the basis of the mivar-based approach and then the corresponding lists of “objects” and “rules” are formed. This formalized representation is analogous to the bipartite logical network graph. The main stages of mivar-based information processing are: Forming a subject domain matrix; Working with the matrix and designing the solution algorithm for the task; Executing the computations and finding the solution. The first stage is the synthesis of a conceptual subject domain model and its formalization in the form of production rules with a transition to mivar rules: “Input objects – rules/procedures – output objects”. Currently, this stage is the most complex and requires the involvement of a human expert to develop a mivar model of the subject domain. Automated solution algorithm construction, or logical inference, is implemented at the second stage. Input data for algorithm construction are: the mivar matrix of the subject domain description and a set of input object-variables and required object-variables. The solution is implemented at the third stage. Data processing method Firstly, the matrix is constructed. Matrix analysis determines whether a successful inference path exists. Then possible logical inference paths are defined and at the last stage the shortest path is selected according to the set optimality criteria. Let there be m rules and n variables, where variables are included in the rules either as input variables activating them or as output variables. Then a matrix, each row of which corresponds to one of the rules and contains the information about the variables used in the rule, can represent all the interconnections between rules and variables. In each row all the input variables are denoted by x in the corresponding positions of the matrix, and all the output variables are denoted by y. All the variables that have already obtained a certain value in the process of inference or when setting the input data are marked as known, and all the required (output) variables, that is, the variables that should be obtained on the basis of the input data, are denoted by w. One row and one column are added to the matrix to store service information. So a matrix of dimension (m+1) × (n+1) is obtained, which shows the whole structure of the source rule network. The structure of this logical network can change, that is, this is a network of rules with evolutionary dynamics.
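The matrix-based procedure described above is, in effect, a forward-chaining search over the bipartite rule/variable network: repeatedly fire any rule whose input variables are all known, mark its outputs as known, and stop once the required variables are reached. The following Python sketch is purely illustrative; the rule and variable names are hypothetical, and sets are used in place of the x/y matrix encoding.

# Illustrative forward-chaining sketch of the mivar-style inference path search.
def find_inference_path(rules, known, required):
    """rules: dict rule_name -> (set of input variables, set of output variables)."""
    known = set(known)
    required = set(required)
    path = []                      # ordered list of fired rules (the inference path)
    remaining = dict(rules)
    while not required <= known:
        # rules whose input variables are all already known can be fired
        fireable = [r for r, (ins, outs) in remaining.items() if ins <= known]
        if not fireable:
            return None            # no inference path; the input data must be refined
        r = fireable[0]            # simple criterion: first fireable rule, top-down
        ins, outs = remaining.pop(r)
        known |= outs              # rule firing simulation: its outputs become known
        path.append(r)
    return path

# toy network: x1..x3 are known, x6 is required
rules = {
    "r1": ({"x1", "x2"}, {"x4"}),
    "r2": ({"x3", "x4"}, {"x5"}),
    "r3": ({"x4", "x5"}, {"x6"}),
}
print(find_inference_path(rules, known={"x1", "x2", "x3"}, required={"x6"}))
# -> ['r1', 'r2', 'r3']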
Rules that can be fired are labeled in the corresponding place of the service row. For example, we can write 1 in the matrix cell, which is illustrated in the cell . Given several such rules, the choice of which rules to fire first is made according to previously determined criteria. Several rules can be fired simultaneously if sufficient resources are available. Rule (procedure) firing is simulated by assigning the value "known" to the variables inferred by this rule, that is, in this example. A fired rule can be marked additionally, for example with the number 2, for convenience of further work. For example, the corresponding changes are made in the cells and . After rule firing simulation, goal achievement analysis is carried out; that is, the acquisition of the required values is analyzed by comparing the special characters in the service row. If at least one "unknown" () value remains in the service row , the search for a logical inference path continues. Otherwise, the task is considered to be solved successfully and the fired rules, in the corresponding order, form the logical inference path sought. The availability of rules that can be fired after new values were defined at the previous stage is then assessed. If no fireable rules exist, no inference path exists and actions analogous to step 2 are taken. If fireable rules exist, the inference path search continues. In this example such rules exist. In cell the number 1 is obtained as an indication that this rule can be fired. At the next stage, analogous to stage 4, the rules are fired (rule firing simulation), and, analogously to stages 5 and 6, the necessary actions are performed to obtain the result. Stages 2–7 are repeated until the result is achieved. A path may or may not be found. Deducibility of the variables 4 and 5 is obtained in the cells and , and an indication that the rule has already been fired is formed in the cell , that is, the number 2 is set. After that the analysis of the service row is carried out, which shows that not all the required variables are known. Thus, it is necessary to continue processing the matrix of dimension . The analysis of this matrix demonstrates that a rule can be fired. When rule m is fired, new values are obtained for the required variables as well. Thus, no unresolved required variables remain in the service row and new values are obtained in the cells of the matrix: 2 appears in the cell and the value is obtained instead of in the cell . A positive result is thus obtained; consequently, a logical inference path exists for the given input values. References External links «Mivar» official website. Mathematical logic Artificial intelligence engineering
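The matrix-driven search for an inference path described above can be illustrated with a minimal sketch. This is not the reference implementation: the rule and variable names, the dictionary representation of the matrix rows, and the simple top-down choice among fireable rules are assumptions made for illustration; only the control flow (repeatedly fire rules whose input variables are all known until every required variable is known, or no rule can fire) follows the description given in the article.

```python
# Minimal sketch of a mivar-style inference path search (illustrative, not the reference implementation).
# Each entry plays the role of one matrix row: "x" cells are the rule's inputs, "y" cells its outputs.
rules = {
    "r1": {"inputs": {"a", "b"}, "outputs": {"c"}},
    "r2": {"inputs": {"c"}, "outputs": {"d"}},
    "r3": {"inputs": {"b", "d"}, "outputs": {"e"}},
}

def find_inference_path(rules, known, required):
    """Return an ordered list of fired rules, or None if the goal cannot be reached."""
    known, required = set(known), set(required)
    fired, path = set(), []
    while not required <= known:                       # service-row check: some required variable is still unknown
        fireable = [name for name, rule in rules.items()
                    if name not in fired and rule["inputs"] <= known]
        if not fireable:                               # no fireable rules: request refinement of the input data
            return None
        chosen = fireable[0]                           # simple top-down choice; real selection criteria may differ
        known |= rules[chosen]["outputs"]              # firing simulation: the rule's outputs become "known"
        fired.add(chosen)
        path.append(chosen)
    return path

print(find_inference_path(rules, known={"a", "b"}, required={"e"}))   # -> ['r1', 'r2', 'r3']
```

The article's method keeps the same bookkeeping in a single matrix with an extra service row and column and then selects the shortest of the admissible paths according to optimality criteria; the sketch keeps only the forward-chaining skeleton.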
Mivar-based approach
[ "Mathematics", "Engineering" ]
2,563
[ "Artificial intelligence engineering", "Mathematical logic", "Software engineering" ]
51,292,061
https://en.wikipedia.org/wiki/Chlorophycean%20mitochondrial%20code
The chlorophycean mitochondrial code (translation table 16) is a genetic code found in the mitochondria of Chlorophyceae. Code    AAs = FFLLSSSSYY*LCC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG Starts = -----------------------------------M----------------------------  Base1 = TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG  Base2 = TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG  Base3 = TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG Bases: adenine (A), cytosine (C), guanine (G) and thymine (T) or uracil (U). Amino acids: Alanine (Ala, A), Arginine (Arg, R), Asparagine (Asn, N), Aspartic acid (Asp, D), Cysteine (Cys, C), Glutamic acid (Glu, E), Glutamine (Gln, Q), Glycine (Gly, G), Histidine (His, H), Isoleucine (Ile, I), Leucine (Leu, L), Lysine (Lys, K), Methionine (Met, M), Phenylalanine (Phe, F), Proline (Pro, P), Serine (Ser, S), Threonine (Thr, T), Tryptophan (Trp, W), Tyrosine (Tyr, Y), Valine (Val, V) Differences from the standard code Systematic range and comments Chlorophyceae and the chytridiomycete fungus Spizellomyces punctatus. See also List of genetic codes References Molecular genetics Gene expression Protein biosynthesis
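The four aligned strings above fully determine the codon table, and the differences from the standard code can be recovered from them mechanically. The sketch below is illustrative: the amino-acid string for the standard code is assumed from the usual NCBI translation-table convention rather than taken from this article.

```python
# Build codon -> amino-acid maps from the NCBI-style table strings and diff them.
chloro   = "FFLLSSSSYY*LCC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
standard = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"  # assumed: standard code (table 1)
base1 = "TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG"
base2 = "TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG"
base3 = "TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG"

def codon_table(aas):
    """Zip the three base strings with an amino-acid string into a codon -> amino-acid map."""
    return {b1 + b2 + b3: aa for b1, b2, b3, aa in zip(base1, base2, base3, aas)}

std_table, chl_table = codon_table(standard), codon_table(chloro)
diff = {codon: (std_table[codon], chl_table[codon])
        for codon in std_table if std_table[codon] != chl_table[codon]}
print(diff)   # {'TAG': ('*', 'L')} -- TAG codes for leucine rather than acting as a stop codon
```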
Chlorophycean mitochondrial code
[ "Chemistry", "Biology" ]
530
[ "Protein biosynthesis", "Gene expression", "Molecular genetics", "Biosynthesis", "Cellular processes", "Molecular biology", "Biochemistry" ]
51,297,098
https://en.wikipedia.org/wiki/Farris%20effect%20%28rheology%29
In rheology, the Farris effect describes the decrease of the viscosity of a suspension upon increasing the dispersity of the solid additive, at constant volume fraction of the solid additive. That is, a broader particle size distribution yields a lower viscosity than a narrow particle size distribution, for the same concentration of particles. The phenomenon is named after Richard J. Farris, who modeled the effect. The effect is relevant whenever suspensions are flowing, particularly for suspensions with high loading fractions. Examples include hydraulic fracturing fluids, metal injection molding feedstocks, cosmetics, and various geological processes including sedimentation and lava flows. References External links Richard Farris' Bio: http://www.pse.umass.edu/~rfarris/obituary.html Rheology Viscosity Colloidal chemistry
Farris effect (rheology)
[ "Physics", "Chemistry" ]
177
[ "Physical phenomena", "Colloidal chemistry", "Physical quantities", "Colloids", "Surface science", "Wikipedia categories named after physical quantities", "Viscosity", "Physical properties", "Rheology", "Fluid dynamics" ]
51,300,111
https://en.wikipedia.org/wiki/Trematode%20mitochondrial%20code
The trematode mitochondrial code (translation table 21) is a genetic code found in the mitochondria of Trematoda. Code    AAs = FFLLSSSSYY**CCWWLLLLPPPPHHQQRRRRIIMMTTTTNNNKSSSSVVVVAAAADDEEGGGG Starts = -----------------------------------M---------------M------------  Base1 = TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG  Base2 = TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG  Base3 = TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG Bases: adenine (A), cytosine (C), guanine (G) and thymine (T) or uracil (U). Amino acids: Alanine (Ala, A), Arginine (Arg, R), Asparagine (Asn, N), Aspartic acid (Asp, D), Cysteine (Cys, C), Glutamic acid (Glu, E), Glutamine (Gln, Q), Glycine (Gly, G), Histidine (His, H), Isoleucine (Ile, I), Leucine (Leu, L), Lysine (Lys, K), Methionine (Met, M), Phenylalanine (Phe, F), Proline (Pro, P), Serine (Ser, S), Threonine (Thr, T), Tryptophan (Trp, W), Tyrosine (Tyr, Y), Valine (Val, V) Differences from the standard code Systematic range and comments Trematoda See also List of genetic codes References Molecular genetics Gene expression Protein biosynthesis
Trematode mitochondrial code
[ "Chemistry", "Biology" ]
505
[ "Protein biosynthesis", "Gene expression", "Molecular genetics", "Biosynthesis", "Cellular processes", "Molecular biology", "Biochemistry" ]
51,300,183
https://en.wikipedia.org/wiki/Scenedesmus%20obliquus%20mitochondrial%20code
The Scenedesmus obliquus mitochondrial code (translation table 22) is a genetic code found in the mitochondria of Scenedesmus obliquus, a species of green algae. Code Differences from the standard code Systematic range and comments Scenedesmus obliquus See also List of genetic codes References Molecular genetics Gene expression Protein biosynthesis
Scenedesmus obliquus mitochondrial code
[ "Chemistry", "Biology" ]
76
[ "Protein biosynthesis", "Gene expression", "Molecular genetics", "Biosynthesis", "Cellular processes", "Molecular biology", "Biochemistry" ]
60,543,161
https://en.wikipedia.org/wiki/Octopus%20Energy
Octopus Energy Group is a British renewable energy group. It was founded in 2015 with the backing of Octopus Group, a British asset management company. Headquartered in London, the company has operations in the United Kingdom, France, Germany, Italy, Spain, Australia, Japan, New Zealand and the United States. Octopus is the UK's largest supplier of electricity to domestic customers, and the second largest in domestic gas. Octopus Energy Group operates a range of business divisions, including Octopus Energy Retail, Octopus Energy for Business, Octopus Energy Services, Octopus Electric Vehicles, Octopus Energy Generation, Octopus Hydrogen, Kraken, Kraken Flex and the not-for-profit Centre for Net Zero. The company also supplies software services to other energy suppliers. History Octopus Energy was established in August 2015 as a subsidiary of Octopus Capital Limited. Trading began in December 2015. Greg Jackson is the founder of the company and holds the position of chief executive. By April 2018, the company had 198,000 customers and had made an energy procurement deal with Shell. Later in 2018, Octopus gained the 100,000 customers of Iresa Limited, under Ofgem's "supplier of last resort" process, after Iresa ceased trading. The same year, Octopus replaced SSE as the energy supplier for M&S Energy, a brand of Marks & Spencer, and bought Affect Energy, which had 22,000 customers. In 2018 Hanwha Energy Retail Australia (Nectr) chose the Kraken platform, developed by Octopus to provide billing, CRM and other technology services to support its launch into the Australian retail energy market. In August 2019, an agreement with Midcounties Co-operative saw Octopus gain more than 300,000 customers, taking its total beyond 1 million. Three Co-op brands were affected: Octopus acquired the customers of the GB Energy and Flow Energy brands, and began to operate the accounts of Co-op Energy customers on a white label basis, while Midcounties retained responsibility for acquiring new Co-op Energy customers. In both 2018 and 2019, Octopus was the only energy supplier to earn "Recommended Provider" status from the Which? consumer organisation. In January 2020, Octopus was ranked first in a Which? survey and was one of three recommended providers. In January 2021, Octopus was ranked second and was one of two recommended providers, becoming the only energy provider in the UK to have been named as a recommended provider four years running. In January 2020, ENGIE UK announced that it was selling its residential energy supply business (comprising around 70,000 UK residential customers) to Octopus. In the same month London Power was launched, a partnership with the Mayor of London. In 2020, Octopus completed two funding rounds totalling $577 million, making the company the highest funded UK tech start-up that year. In November 2020, Octopus acquired Manchester-based smart grid energy software company Upside Energy, which in June 2021 rebranded as KrakenFlex. In the same month Octopus launched the not-for-profit Octopus Centre for Net Zero (OCNZ), a research organisation tasked with creating models and policy recommendations for potential paths to a green energy future. In February 2021, CEO Greg Jackson said on a BBC News interview that Octopus does not operate a human resources department. In March 2021 the Financial Times listed Octopus at number 23 on their list of the fastest growing companies in Europe. 
In July 2021, Octopus rose 12 places on the UK Customer Service Index to 17th, making it the only energy company in the Top 50. Also in 2021, Octopus built the UK's first R&D and training centre for the decarbonisation of heat. Located in Slough, the centre will train 1000 heat pump engineers per year and develop new heating systems. In September 2021, Octopus was appointed as the Supplier of Last Resort (SOLR) for Avro Energy, acquiring Avro Energy's domestic customers and increasing their customer base to 3.1 million customers. In November 2021, Octopus announced in Manchester that it had signed a deal with the city region as part of a bid to become carbon neutral by 2038. In October 2022, Octopus reached an agreement to acquire Bulb Energy's 1.5 million customers. In February 2023, Marks & Spencer announced it was pulling out of the energy supply business and ending its five-year partnership with Octopus; the 60,000 M&S Energy customers would transfer to Octopus Energy in April. In September 2023, Octopus announced it would be acquiring Shell's household energy business in the UK (trading as Shell Energy) and in Germany, in a deal expected to complete in late 2023 which would increase the company's domestic and business customer base to 6.5 million. In April 2024, Ofgem reported that Octopus is the UK's largest electricity supplier by domestic customer numbers, with a 22% share. Octopus has a similar share of domestic gas customers, ranking second behind British Gas. Financial history In September 2019, Octopus acquired German start-up 4hundred for £15 million; the acquisition of 4hundred, which had 11,000 customers, was Octopus' first overseas expansion. In May 2020, Australian electricity and gas supplier Origin Energy paid 507 million for a 20% stake in Octopus. This meant Octopus gained "unicorn" status, as a startup company with a value in excess of £1 billion. In September of that year, Octopus acquired Evolve Energy, a US Silicon Valley–based start-up, in a US$5 million deal. The acquisition was the first step in Octopus' $US100 million US expansion; at the time of the acquisition, Octopus announced it was aiming to acquire 25 million US customers, and 100 million global customers in total, by 2027. In December 2020, Tokyo Gas paid about 20 billion yen ($US193 million) for a 9.7% stake in Octopus, valuing the company at $US2.1 billion. Octopus and Tokyo Gas agreed to launch the Octopus brand in Japan via a 30:70 joint venture to provide electricity from renewable sources, amongst other services. Origin invested a further $US50 million at the same time, to maintain its 20% stake. In August 2021, Octopus entered the Spanish market with the acquisition of green energy start-up Umeme. Upon the acquisition, Octopus announced it was targeting a million Spanish energy accounts under its brand by 2027. In November 2021, Octopus acquired Italian energy retailer SATO Luce e Gas, rebranding the business as Octopus Energy Italy, investing an initial £51 million and targeting 5% of the Italian market by 2025. As a result of these acquisitions, Octopus now has retail, generation or technology licences in 13 countries across four continents. In September 2021, Generation Investment Management, co-founded and chaired by Al Gore, purchased a 13% stake in Octopus Energy Group in a deal worth $US600 million. 
The investment increased the company's valuation to $US4.6 billion, with the cash injection to be used by Octopus to increase its investment in new technologies for cheaper and faster decarbonisation. In December 2021, Octopus Energy announced a long-term partnership with Canada Pension Plan Investment Board, raising US$300 million and taking the valuation of Octopus Energy Group to approximately $US5 billion. In January 2022, Octopus Energy entered the French market with the acquisition of Plüm énergie, a French energy start-up with 100,000 retail and corporate accounts. Plüm was subsequently rebranded as Octopus Energy France. Further funding of $800m from existing shareholders in December 2023 brought the valuation of Octopus Energy to nearly $US8 billion. Operations Gas and electricity supply As of March 2023 the company has nearly 3 million domestic and business customers. Besides industry-standard fixed and variable tariffs, the company is known for innovative tariffs which are made possible by the national rollout of smart meters. These include: Octopus Tracker – gas and electricity prices change every day, and are based on wholesale prices for that day, with disclosure of overheads and the company's profit margin. Octopus Agile – electricity prices change every half hour, according to a schedule published the previous day, determined from wholesale prices. The price occasionally goes negative (i.e. customers are paid to use electricity) at times of high generation and low demand. Octopus Go – a tariff with a reduced rate for an overnight period, intended for owners of electric vehicles. In March 2019, Octopus announced it had partnered with Amazon's Alexa virtual assistant to optimise home energy use through the Agile Octopus time-of-use tariff. As part of their partnership agreed in August 2019, Midcounties Co-operative and Octopus established a joint venture to develop the UK's community energy market and encourage small-scale electricity generation. Brands Besides the Octopus Energy brand, customers are supplied under the Ebico Energy, Affect Energy, Co-op Energy, M&S Energy and London Power brands. Electricity generation In its early years the company did not generate gas or electricity, instead making purchases on the wholesale markets. In 2019, Octopus stated that all its electricity came from renewable sources, and began to offer a "green" gas tariff with carbon offsetting. In July 2021, Octopus acquired sister company Octopus Renewables, which claims to be the UK's largest investor in solar farms and also invests in wind power and anaerobic digesters. At the time of the acquisition, the generation assets were reported to be worth over £3.4 billion. In October 2020, Octopus partnered with Tesla Energy to power the Tesla Energy Plan, which is designed to power a home with 100% clean energy either from solar panels or from Octopus. The plan allows households to become part of the UK Tesla Virtual Power Plant, which connects a network of homes that generate, store and return electricity to the grid at peak times. In January 2021, Octopus acquired two 73 metre (240') wind turbines to power its 'Fan Club' tariff, which offers households living near its turbines cheaper electricity prices when the wind is blowing strongly. Customers on the tariff get a 20% discount on the unit price when the turbines are spinning, and a 50% discount when the wind is above 8 m/s (20 mph).
In November 2021, Octopus Energy announced plans to raise £4 billion to fund the global expansion of its Fan Club model, which would be expanded to include solar farms. By 2030, Octopus aims to supply around 2.5 million households with green electricity through Fan Club schemes. Also in November 2021, Octopus Energy Group signed a deal with Elia Group at COP26 to build a "smart" green grid across Belgium and Germany. The company’s flexibility platform KrakenFlex will be used together with Elia Group’s energy data affiliate, re.alto, to enable electric vehicles, heat pumps and other green technologies to be used for grid balancing. In January 2022, it was announced that Octopus Renewables had bought the Broons/Biterne-Sud wind farm in Cotes d’Armor, northeast Brittany, France from Energiequelle for an undisclosed price. In June of that year, the group's fund management team bought the rights to develop the 35MW Gaishecke wind farm near Frankfurt, Germany. Investment trust Octopus Renewables is contracted as the investment manager for Octopus Renewables Infrastructure Trust, an investment trust established in 2019 which owns wind and solar generation in the UK, Europe and Australia. Electric vehicle charging The Electric Universe service, which aims to simplify public charging of electric vehicles, was launched in 2020 as Electric Juice and rebranded in 2022. As of September 2022, more than 450 charging companies were taking part, allowing customers access to over 300,000 chargers in over 50 countries through a single card and/or app. All drivers can use the service, and Octopus Energy customers have the option of paying their charging costs through their domestic energy bills. Marketing Climate change In 2019 Octopus launched a 'Portraits from the Precipice' campaign, which sought to raise awareness of climate change and encouraged customers to switch to greener energy deals. The campaign artwork was exhibited at over 5,000 sites, making it the largest ever digital out-of-home art exhibition. As a result of the campaign, Octopus registered a 163% increase in sign-ups and gained 37,000 customers. The campaign won the 2020 Marketing Week Masters award for utilities, and the 2020 Energy Institute award for Public Engagement. Solar energy at COP26 In November 2021 Octopus financed 'Grace of the Sun', a large-scale art piece by Robert Montgomery made using the Little Sun solar lamps designed by Olafur Eliasson and Frederik Ottesen. The project, which coincided with COP26, was realised in Glasgow through collaboration with the local art community, and was designed as a call for global leaders to invest in renewable energies such as solar PV, in order to power a sustainable future. Software development Octopus Energy licenses their proprietary customer management system called Kraken, which runs on Amazon's cloud computing service. It was first licensed by UK rival Good Energy in late 2019, for an initial three-year term, to manage its 300,000 customers. In March 2020 it was announced that E.ON and its Npower subsidiary had licensed the technology to manage their combined 10 million customers. In May 2021 it was announced that E.ON had completed the migration of all two million former npower customers to its Kraken-powered E.ON Next customer service platform. The migration was hailed as being responsible for E.ON's financial recovery in the UK. The Kraken software was also licensed to Australia's Origin Energy as part of their May 2020 agreement. 
In November 2021, EDF Energy agreed a deal with Octopus Energy Group to move its five million customers onto Octopus's Kraken platform. The customer accounts will be migrated onto Kraken from 2023, increasing the number of energy accounts contracted to be served via Kraken to over 20 million worldwide. Kraken also has strategic partnerships with Hanwha Group and Tokyo Gas. In August 2024, Kraken announced the appointment of its own CEO, Amir Orad. References External links British companies established in 2015 Companies based in London Electric power companies of the United Kingdom Renewable resource companies established in 2015 Renewable energy companies of the United Kingdom Utilities of the United Kingdom Renewable energy companies of France Electric power companies
Octopus Energy
[ "Engineering" ]
2,929
[ "Electrical engineering organizations", "Electric power companies" ]
60,544,754
https://en.wikipedia.org/wiki/Markov%20chain%20central%20limit%20theorem
In the mathematical theory of random processes, the Markov chain central limit theorem has a conclusion somewhat similar in form to that of the classic central limit theorem (CLT) of probability theory, but the quantity in the role taken by the variance in the classic CLT has a more complicated definition. See also the general form of Bienaymé's identity. Statement Suppose that: the sequence of random elements of some set is a Markov chain that has a stationary probability distribution; and the initial distribution of the process, i.e. the distribution of , is the stationary distribution, so that are identically distributed. In the classic central limit theorem these random variables would be assumed to be independent, but here we have only the weaker assumption that the process has the Markov property; and is some (measurable) real-valued function for which Now let Then as we have where the decorated arrow indicates convergence in distribution. Monte Carlo Setting The Markov chain central limit theorem can be guaranteed for functionals of general state space Markov chains under certain conditions. In particular, this can be done with a focus on Monte Carlo settings. An example of the application in an MCMC (Markov chain Monte Carlo) setting is the following: Consider a simple hard spheres model on a grid. Suppose . A proper configuration on consists of coloring each point either black or white in such a way that no two adjacent points are white. Let denote the set of all proper configurations on , be the total number of proper configurations and π be the uniform distribution on , so that each proper configuration is equally likely. Suppose our goal is to calculate the typical number of white points in a proper configuration; that is, if is the number of white points in then we want the value of If and are even moderately large then we will have to resort to an approximation to . Consider the following Markov chain on . Fix and set , where is an arbitrary proper configuration. Randomly choose a point and independently draw . If and all of the adjacent points are black, then color white, leaving all other points alone. Otherwise, color black and leave all other points alone. Call the resulting configuration . Continuing in this fashion yields a Harris ergodic Markov chain having as its invariant distribution. It is now a simple matter to estimate with . Also, since is finite (albeit potentially large), it is well known that will converge exponentially fast to , which implies that a CLT holds for . Implications Not taking into account the additional terms in the variance which stem from correlations (e.g. serial correlations in Markov chain Monte Carlo simulations) can result in the problem of pseudoreplication when computing e.g. the confidence intervals for the sample mean. References Sources Gordin, M. I. and Lifšic, B. A. (1978). "Central limit theorem for stationary Markov processes." Soviet Mathematics, Doklady, 19, 392–394. (English translation of Russian original). Geyer, Charles J. (2011). "Introduction to MCMC." In Handbook of Markov Chain Monte Carlo, edited by S. P. Brooks, A. E. Gelman, G. L. Jones, and X. L. Meng. Chapman & Hall/CRC, Boca Raton, pp. 3–48. Markov processes Markov models Stochastic processes Stochastic models Probability theorems Asymptotic theory (statistics) Normal distribution
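In the notation most often used for this theorem (the specific symbols below are assumptions for illustration, not taken from the text above), the conclusion can be written as a hedged sketch:

```latex
% Sketch of the usual statement: X_1, X_2, \ldots is a stationary ergodic Markov chain,
% g a real-valued function with finite variance, \mu = \mathbb{E}[g(X_1)],
% \hat{\mu}_n = \tfrac{1}{n}\sum_{k=1}^{n} g(X_k). Under suitable conditions,
\sqrt{n}\,\bigl(\hat{\mu}_n - \mu\bigr) \;\xrightarrow{\;d\;}\; \mathcal{N}\!\bigl(0, \sigma^2\bigr),
\qquad
\sigma^2 \;=\; \operatorname{Var} g(X_1) \;+\; 2\sum_{k=1}^{\infty} \operatorname{Cov}\!\bigl(g(X_1),\, g(X_{1+k})\bigr).
```

The hard-spheres example can likewise be sketched in a few lines of code. The grid size, number of steps, the fair coin flip, and the all-black starting configuration are illustrative assumptions; the update rule (recolour a random site white only if a fair draw allows it and all neighbours are black, otherwise recolour it black) follows the description above.

```python
import random

def hard_spheres_chain(n1=10, n2=10, steps=100_000, seed=0):
    """Estimate the mean number of white points in a proper configuration on an n1 x n2 grid."""
    rng = random.Random(seed)
    white = set()                          # start from the all-black configuration, which is proper
    total_white = 0
    for _ in range(steps):
        i, j = rng.randrange(n1), rng.randrange(n2)
        neighbours = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
        if rng.random() < 0.5 and all(p not in white for p in neighbours):
            white.add((i, j))              # colour the chosen point white
        else:
            white.discard((i, j))          # otherwise colour it black
        total_white += len(white)          # accumulate g(X_k) = number of white points
    return total_white / steps             # ergodic average; the CLT describes its fluctuations

print(hard_spheres_chain())
```

In practice the asymptotic variance must also be estimated (for example by batch means) before confidence intervals for this average are quoted, which is exactly the point made in the Implications paragraph.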
Markov chain central limit theorem
[ "Mathematics" ]
700
[ "Central limit theorem", "Theorems in probability theory", "Mathematical theorems", "Mathematical problems" ]
60,545,283
https://en.wikipedia.org/wiki/Ordered%20exponential%20field
In mathematics, an ordered exponential field is an ordered field together with a function which generalises the idea of exponential functions on the ordered field of real numbers. Definition An exponential on an ordered field is a strictly increasing isomorphism of the additive group of onto the multiplicative group of positive elements of . The ordered field together with the additional function is called an ordered exponential field. Examples The canonical example for an ordered exponential field is the ordered field of real numbers R with any function of the form where is a real number greater than 1. One such function is the usual exponential function, that is . The ordered field R equipped with this function gives the ordered real exponential field, denoted by . It was proved in the 1990s that Rexp is model complete, a result known as Wilkie's theorem. This result, when combined with Khovanskiĭ's theorem on pfaffian functions, proves that Rexp is also o-minimal. Alfred Tarski posed the question of the decidability of Rexp and hence it is now known as Tarski's exponential function problem. It is known that if the real version of Schanuel's conjecture is true then Rexp is decidable. The ordered field of surreal numbers admits an exponential which extends the exponential function exp on R. Since does not have the Archimedean property, this is an example of a non-Archimedean ordered exponential field. The ordered field of logarithmic-exponential transseries is constructed specifically in a way such that it admits a canonical exponential. Formally exponential fields A formally exponential field, also called an exponentially closed field, is an ordered field that can be equipped with an exponential . For any formally exponential field , one can choose an exponential on such that for some natural number . Properties Every ordered exponential field is root-closed, i.e., every positive element of has an -th root for all positive integer (or in other words the multiplicative group of positive elements of is divisible). This is so because for all . Consequently, every ordered exponential field is a Euclidean field. Consequently, every ordered exponential field is an ordered Pythagorean field. Not every real-closed field is a formally exponential field, e.g., the field of real algebraic numbers does not admit an exponential. This is so because an exponential has to be of the form for some in every formally exponential subfield of the real numbers; however, is not algebraic if is algebraic by the Gelfond–Schneider theorem. Consequently, the class of formally exponential fields is not an elementary class since the field of real numbers and the field of real algebraic numbers are elementarily equivalent structures. The class of formally exponential fields is a pseudoelementary class. This is so since a field is exponentially closed if and only if there is a surjective function such that and ; and these properties of are axiomatizable. See also Exponential field Notes References Model theory Field (mathematics) Algebraic structures Exponentials
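The definition can be restated compactly. In the sketch below the field symbol K and the name E for the exponential are illustrative choices, not notation taken from the article:

```latex
% An exponential on an ordered field (K, +, \cdot, <) is a map E : K \to K^{>0} that is a
% strictly increasing isomorphism from the additive group onto the multiplicative group of
% positive elements, i.e.
\begin{aligned}
& E(x + y) = E(x)\,E(y) \quad \text{for all } x, y \in K,\\
& x < y \;\Longrightarrow\; E(x) < E(y), \qquad E \text{ maps } K \text{ onto } K^{>0}.
\end{aligned}
```

On the real numbers, E(x) = a^x for any fixed a > 1 satisfies these conditions, matching the examples given in the article.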
Ordered exponential field
[ "Mathematics" ]
619
[ "Mathematical structures", "Mathematical logic", "Mathematical objects", "E (mathematical constant)", "Algebraic structures", "Exponentials", "Model theory" ]
44,105,802
https://en.wikipedia.org/wiki/Joe%20L.%20Franklin
Joseph Louis Franklin (1906 – August 25, 1982) was a Robert A. Welch Professor of Chemistry at Rice University known for his research in mass spectrometry and ion molecule chemistry. The Frank H. Field and Joe L. Franklin Award for Outstanding Achievement in Mass Spectrometry is named after him and Frank H. Field. Early life and education Joseph Franklin was born in Natchez, Mississippi in 1906 but his family moved to Texas early in his life. He went to the University of Texas to study chemistry and received a B.S. degree in 1929, M.S. in 1930, and Ph.D. in 1934. Humble Oil Franklin took a position at Humble Oil in Baytown, Texas in 1934 where he established a mass spectrometry research group. He recruited Frank Field to Humble Oil in 1952 and they co-wrote Electron Impact Phenomena and the Properties of Gaseous Ions in 1957. Franklin took a two-year leave 1957–1958 at the National Bureau of Standards in Washington, D.C. before returning to Humble. Rice University In 1963, Franklin took a position as Robert A. Welch Professor of chemistry at Rice University. He became an emeritus professor in 1976. He helped to found the American Society for Mass Spectrometry and became the first president of that organization in 1969. Field and Franklin Award In 1983, the Frank H. Field and Joe L. Franklin Award for Outstanding Achievement in Mass Spectrometry was established in his honor by the American Chemical Society. References Further reading 1906 births 1982 deaths 20th-century American chemists Mass spectrometrists Rice University faculty Fellows of the American Physical Society
Joe L. Franklin
[ "Physics", "Chemistry" ]
335
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
44,111,566
https://en.wikipedia.org/wiki/Robocasting
Robocasting (also known as robotic material extrusion) is an additive manufacturing technique analogous to Direct Ink Writing and other extrusion-based 3D-printing techniques in which a filament of a paste-like material is extruded from a small nozzle while the nozzle is moved across a platform. The object is thus built by printing the required shape layer by layer. The technique was first developed in the United States in 1996 as a method to allow geometrically complex ceramic green bodies to be produced by additive manufacturing. In robocasting, a 3D CAD model is divided up into layers in a similar manner to other additive manufacturing techniques. The material (typically a ceramic slurry) is then extruded through a small nozzle as the nozzle's position is controlled, drawing out the shape of each layer of the CAD model. The material exits the nozzle in a liquid-like state but retains its shape immediately, exploiting the rheological property of shear thinning. It is distinct from fused deposition modelling as it does not rely on the solidification or drying to retain its shape after extrusion. Process Robocasting begins with a software process. One method is importing an STL file and slicing that shape into layers of similar thickness to the nozzle diameter. The part is produced by extruding a continuous filament of material in the shape required to fill the first layer. Next, either the stage is moved down or the nozzle is moved up and the next layer is deposited in the required pattern. This is repeated until the 3D part is complete. Numerically controlled mechanisms are typically used to move the nozzle in a calculated tool-path generated by a computer-aided manufacturing (CAM) software package. Stepper motors or servo motors are usually employed to move the nozzle with precision as fine as nanometers. The part is typically very fragile and soft at this point. Drying, debinding and sintering usually follow to give the part the desired mechanical properties. Depending on the material composition, printing speed and printing environment, robocasting can typically deal with moderate overhangs and large spanning regions many times the filament diameter in length, where the structure is unsupported from below. This allows intricate periodic 3D scaffolds to be printed with ease, a capability which is not possessed by other additive manufacturing techniques. These parts have shown extensive promise in fields of photonic crystals, bone transplants, catalyst supports, and filters. Furthermore, supporting structures can also be printed from a "fugitive material" which is easily removed. This allows almost any shape to be printed in any orientation. Mechanical behavior One key advantage of the robocasting additive manufacturing technique is its ability to utilize a wide range of feedstock “inks,” as shear-thinning ability is the only inherently required material property. As such, robocasting has seen diverse application among many disparate materials classes such as metallic foams, pre-ceramic polymers, and biological tissues. This allows for a wide range of mechanical characteristics to be accessible through this technique, with additional tailoring possible through the use of ink fillers and varying extrusion parameters. Filler effects Micro- and nano-scale filler materials are commonly used to create composite feedstocks for robocasting and are available in a wide range of compositions, with morphologies typically falling into the broad categories of spheres, platelets, and filaments/tubes. 
Both composition and morphology play significant roles in the mechanical characteristics imparted by the filler. For example, the inclusion of stiff boron nitride nanobarbs within epoxy feedstock has been demonstrated to anisotropically increase overall composite strength and stiffness along the direction of fiber orientation due to their shape asymmetry, while the inclusion of hollow glass microspheres within the same epoxy feedstock has been demonstrated to isotropically improve specific strength by significantly reducing total density of the composite. In addition to shape, differing size regimes within fillers of the same morphology have been demonstrated to yield significant changes in mechanical properties. For epoxy-carbon fiber composite systems of identical composition, flexural strength has been shown to generally decrease with decreasing fiber length. However, shorter fibers have also been demonstrated to produce better overall printing behavior during the robocasting process as increasing length also increases the likelihood of jamming within the extruder; higher print fidelity as seen for the shorter fibers generally results in greater reproducibility of mechanical behavior. In addition, very long fibers have exhibited a tendency to break during extrusion, essentially imparting a de facto size cap on filament-type fillers used in robocasting. Extrusion effects Extrusion phenomena inherently tied into the robocasting technique have been shown to have appreciable effects on the mechanical behavior of resulting parts. One of the most significant is the alignment of filler materials within composite feedstocks during deposition, which is enhanced as filler anisotropy increases. This alignment phenomenon also becomes more pronounced with decreasing nozzle diameter and increasing ink deposition speed, as these factors increase the effective shearing experienced by fillers suspended within the feedstock in accordance with Jeffrey-Hamel flow theory. Fillers are thus driven to align parallel to the extrusion pathway, imparting significant anisotropic character within the finished part. This anisotropy can be further enhanced by prescribing extrusion pathways that remain parallel throughout the manufacturing process; conversely, prescribing extrusion pathways that exhibit differing orientations, such as 90° “logpile” rotation between layers, can mitigate this effect. Selection of deposition pathing can also be exploited to alter mechanical characteristics of robocasting products, such as in the case of non-dense and graded components. The creation of open lattice-type structures via robocasting is widespread and enables optimization of specific strength and stiffness by reducing the cross-sectional footprint of a given feedstock material while retaining much of its bulk mechanical integrity. In addition, the creation of unique deposition pathing via finite element analysis of a desired structure can generate dynamically-graded geometries optimized for specific applications. Applications The technique can produce non-dense ceramic bodies which can be fragile and must be sintered before they can be used for most applications, analogous to a wet clay ceramic pot before being fired. A wide variety of different geometries can be formed from the technique, from solid monolithic parts to intricate microscale "scaffolds", and tailored composite materials. A heavily-researched application for robocasting is in the production of biologically compatible tissue implants. 
"Woodpile" stacked lattice structures can be formed quite easily which allow bone and other tissues in the human body to grow and eventually replace the transplant. With various medical scanning techniques the precise shape of the missing tissue was established and input into 3D modelling software and printed. Calcium phosphate glasses and hydroxyapatite have been extensively explored as candidate materials due to their biocompatibility and structural similarity to bone. Other potential applications include the production of specific high surface area structures, such as catalyst beds or fuel cell electrolytes. Advanced metal matrix- and ceramic matrix- load bearing composites can be formed by infiltrating woodpile bodies with molten glasses, alloys or slurries. Robocasting has also been used to deposit polymer and sol-gel inks through much finer nozzle diameters (less than 2 μm) than is possible with ceramic inks. References External links Ceramic engineering 3D printing processes Articles containing video clips 1996 introductions American inventions 1996 establishments in the United States
Robocasting
[ "Engineering" ]
1,586
[ "Ceramic engineering" ]
44,116,080
https://en.wikipedia.org/wiki/POODLE
POODLE (which stands for "Padding Oracle On Downgraded Legacy Encryption") is a security vulnerability which takes advantage of the fallback to SSL 3.0. If attackers successfully exploit this vulnerability, on average they only need to make 256 SSL 3.0 requests to reveal one byte of encrypted messages. Bodo Möller, Thai Duong and Krzysztof Kotowicz from the Google Security Team discovered this vulnerability; they disclosed the vulnerability publicly on October 14, 2014 (despite the paper being dated "September 2014"). On December 8, 2014, a variation of the POODLE vulnerability that affected TLS was announced. The CVE-ID associated with the original POODLE attack is . F5 Networks filed for as well; see the POODLE attack against TLS section below. Prevention To mitigate the POODLE attack, one approach is to completely disable SSL 3.0 on the client side and the server side. However, some old clients and servers do not support TLS 1.0 and above. Thus, the authors of the paper on POODLE attacks also encourage browser and server implementation of TLS_FALLBACK_SCSV, which makes downgrade attacks impossible. Another mitigation is to implement "anti-POODLE record splitting". It splits the records into several parts and ensures none of them can be attacked. However, the problem with the splitting is that, though valid according to the specification, it may also cause compatibility issues due to problems in server-side implementations. A full list of browser versions and levels of vulnerability to different attacks (including POODLE) can be found in the article Transport Layer Security. Opera 25 implemented this mitigation in addition to TLS_FALLBACK_SCSV. Google's Chrome browser and their servers had already supported TLS_FALLBACK_SCSV. Google stated in October 2014 that it was planning to remove SSL 3.0 support from its products completely within a few months. Fallback to SSL 3.0 was disabled in Chrome 39, released in November 2014. SSL 3.0 was disabled by default in Chrome 40, released in January 2015. Mozilla disabled SSL 3.0 in Firefox 34 and ESR 31.3, which were released in December 2014, and added support for TLS_FALLBACK_SCSV in Firefox 35. Microsoft published a security advisory explaining how to disable SSL 3.0 in Internet Explorer and the Windows OS; on October 29, 2014, Microsoft released a fix which disables SSL 3.0 in Internet Explorer on Windows Vista / Server 2003 and above, and announced a plan to disable SSL 3.0 by default in its products and services within a few months. Microsoft disabled fallback to SSL 3.0 in Internet Explorer 11 for Protected Mode sites on February 10, 2015, and for other sites on April 14, 2015. Apple's Safari (on OS X 10.8, iOS 8.1 and later) mitigated against POODLE by removing support for all CBC protocols in SSL 3.0; however, this left RC4, which is also completely broken by the RC4 attacks in SSL 3.0. POODLE was completely mitigated in OS X 10.11 (El Capitan, 2015) and iOS 9 (2015). To prevent the POODLE attack, some web services dropped support of SSL 3.0. Examples include CloudFlare and Wikimedia. Network Security Services version 3.17.1 (released on October 3, 2014) and 3.16.2.3 (released on October 27, 2014) introduced support for TLS_FALLBACK_SCSV, and NSS was expected to disable SSL 3.0 by default in April 2015. OpenSSL versions 1.0.1j, 1.0.0o and 0.9.8zc, released on October 15, 2014, introduced support for TLS_FALLBACK_SCSV. LibreSSL version 2.1.1, released on October 16, 2014, disabled SSL 3.0 by default.
POODLE attack against TLS A new variant of the original POODLE attack was announced on December 8, 2014. This attack exploits implementation flaws of CBC encryption mode in the TLS 1.0 - 1.2 protocols. Even though TLS specifications require servers to check the padding, some implementations fail to validate it properly, which makes some servers vulnerable to POODLE even if they disable SSL 3.0. SSL Pulse showed "about 10% of the servers are vulnerable to the POODLE attack against TLS" before this vulnerability was announced. The CVE-ID for F5 Networks' implementation bug is . The entry in NIST's NVD states that this CVE-ID is to be used only for F5 Networks' implementation of TLS, and that other vendors whose products have the same failure to validate the padding mistake in their implementations like A10 Networks and Cisco Systems need to issue their own CVE-IDs for their implementation errors because this is not a flaw in the protocol but in the implementation. The POODLE attack against TLS was found to be easier to initiate than the initial POODLE attack against SSL. There is no need to downgrade clients to SSL 3.0, meaning fewer steps are needed to execute a successful attack. References External links This POODLE Bites: Exploiting TheSSL 3.0 Fallback - The original publication of POODLE 1076983 – (POODLE) Padding oracle attack on SSL 3.0 Mozilla Bugzilla What Is the POODLE Attack? - Acunetix article explaining POODLE attack algorithm Internet security Web security exploits Cryptography Transport Layer Security Computer security exploits
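As a concrete illustration of the client-side mitigation, a TLS client can simply refuse to negotiate SSL 3.0 at all. The sketch below uses Python's standard ssl module; it is a generic example rather than something taken from the advisories above, the host name is a placeholder, and modern ssl builds typically exclude SSL 3.0 already.

```python
import socket
import ssl

# A client context that will never negotiate SSL 3.0 (here nothing below TLS 1.2),
# removing the protocol-downgrade path that POODLE relies on.
# (Older code achieved the same effect with the now-deprecated ssl.OP_NO_SSLv3 option.)
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection(("example.org", 443)) as sock:            # placeholder host
    with context.wrap_socket(sock, server_hostname="example.org") as tls:
        print(tls.version())   # e.g. 'TLSv1.3'; an SSLv3-only peer would fail the handshake instead
```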
POODLE
[ "Mathematics", "Technology", "Engineering" ]
1,207
[ "Cybersecurity engineering", "Cryptography", "Applied mathematics", "Computer security exploits", "Web security exploits" ]
64,067,273
https://en.wikipedia.org/wiki/Unified%20strength%20theory
The unified strength theory (UST), proposed by Yu Mao-Hong, is a series of yield criteria (see yield surface) and failure criteria (see Material failure theory). It is a generalized classical strength theory which holds that yielding or failure of a material begins when the combination of principal stresses reaches a critical value. Mathematical formulation Mathematically, the formulation of UST is expressed in the principal stress state as where are the three principal stresses, is the uniaxial tensile strength and is the tension–compression strength ratio (). The unified yield criterion (UYC) is the simplification of UST when , i.e. Limit surfaces The limit surfaces of the unified strength theory in principal stress space are usually a semi-infinite dodecahedron cone with unequal sides. The shape and size of the limiting dodecahedron cone depend on the parameter b and on . The limit surfaces of UST and UYC are shown as follows. Derivation Owing to the relation (), the principal stress state () may be converted to the twin-shear stress state () or (). Twin-shear element models proposed by Mao-Hong Yu are used for representing the twin-shear stress state. Considering all the stress components of the twin-shear models and their different effects yields the unified strength theory as The relations among the stress components and the principal stresses read The constants and C are obtained from the uniaxial failure state By substituting Eqs. (4a), (4b) and (5a) into Eq. (3a), and substituting Eqs. (4a), (4c) and (5b) into Eq. (3b), the constants and C are introduced as History The development of the unified strength theory can be divided into three stages as follows. 1. Twin-shear yield criterion (UST with and ) 2. Twin-shear strength theory (UST with ). 3. Unified strength theory. Applications The unified strength theory has been used in Generalized Plasticity, Structural Plasticity, Computational Plasticity and many other fields. References Mechanical failure
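A commonly cited form of the criterion, given here as a hedged sketch drawn from the general literature on Yu's theory rather than from the formulas of this article, and with notation that should therefore be treated as an assumption, is:

```latex
% Sketch of the unified strength theory (notation assumed):
% \sigma_1 \ge \sigma_2 \ge \sigma_3 are the principal stresses, \sigma_t the uniaxial tensile
% strength, \alpha = \sigma_t/\sigma_c the tension-compression strength ratio, b a weighting parameter.
\begin{aligned}
F  &= \sigma_1 - \frac{\alpha}{1+b}\,\bigl(b\,\sigma_2 + \sigma_3\bigr) = \sigma_t,
      && \text{when } \sigma_2 \le \frac{\sigma_1 + \alpha\,\sigma_3}{1+\alpha},\\[4pt]
F' &= \frac{1}{1+b}\,\bigl(\sigma_1 + b\,\sigma_2\bigr) - \alpha\,\sigma_3 = \sigma_t,
      && \text{when } \sigma_2 \ge \frac{\sigma_1 + \alpha\,\sigma_3}{1+\alpha}.
\end{aligned}
```

In this form, setting the strength ratio to 1 gives the unified yield criterion, and the parameter b then interpolates between particular classical criteria.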
Unified strength theory
[ "Materials_science", "Engineering" ]
431
[ "Mechanical failure", "Materials science", "Mechanical engineering" ]
42,670,400
https://en.wikipedia.org/wiki/Gabriela%20Gonz%C3%A1lez
Gabriela Ines González (born 24 February 1965) is an Argentine professor of physics and astronomy at Louisiana State University and was the spokesperson for the LIGO Scientific Collaboration from March 2011 until March 2017. Biography Gabriela González was born on February 24, 1965, in Córdoba, Argentina. She is the daughter of Dora Trembinsky, a professor of mathematics, and Pedro González, a doctor in Economic Sciences. González completed her primary school studies at the Colegio Luterano Concordia in the city of Córdoba, and her secondary school studies at the Instituto Manuel Lucero. An avid student, González received exemplary grades at school and even had the ability to quickly solve equations in her head. González attended the National University of Córdoba, from which she graduated with a Bachelor of Science in Physics in 1988. According to González, she began studying physics because she thought of it as a way to answer all of the pressing questions that humanity was faced with. In the end, however, she realized that physics does not answer all these questions but rather faces us, as a species, with more. One year later, she moved to the United States to study at Syracuse University, and under the tutelage of Peter Saulson obtained her doctorate in Physics in 1995. She did a postdoc at MIT, where she later worked as a researcher, and subsequently became a professor at Pennsylvania State University. In April 2001, she began working as a professor at Louisiana State University. In 2008, González became the first woman to receive a full professorship in the Department of Physics and Astronomy at Louisiana State University. González believes that science will be much better off when there are as many women as men, and that this will happen when the common myths and misconceptions about physicists, which tend to segregate women from research, start to fall apart, and furthermore when people believe that physicists are simply normal people with normal lives. Before moving to the United States, González began a relationship with Jorge Pullin, an Argentinian theoretical physicist specializing in black hole collisions and the theory of quantum gravity, who works at Louisiana State University. They later wed. The two have no children. Career González has published several papers on Brownian motion as a limit to the sensitivity of gravitational-wave detectors, and has an interest in data analysis for gravitational-wave astronomy. In February 2016, she was one of five LIGO scientists present for the announcement that the first direct gravitational wave observation had been detected in September 2015. Awards González was elected fellow of the Institute of Physics (2004), the American Physical Society (2007), and the American Astronomical Society (2020). She won the Bouchet Award in 2007, the Bruno Rossi Prize in 2017, the National Academy of Sciences Award for Scientific Discovery in 2017, and the Petrie Prize Lecture in 2019. González was elected to membership in the National Academy of Sciences and the American Academy of Arts and Sciences in 2017. Personal life González is married to Jorge Pullin, the Horace Hearne Chair in Theoretical Physics at Louisiana State University.
Notes References External links 1965 births Living people Syracuse University College of Arts and Sciences alumni Louisiana State University faculty Pennsylvania State University faculty Argentine women physicists Argentine scientists Argentine physicists Gravitational-wave astronomy Fellows of the American Physical Society Fellows of the Institute of Physics 21st-century Argentine scientists 20th-century Argentine physicists 20th-century Argentine women scientists 21st-century American women scientists 20th-century American women scientists Members of the United States National Academy of Sciences Fellows of the American Astronomical Society American women academics 20th-century Argentine astronomers National University of Córdoba alumni
Gabriela González
[ "Physics", "Astronomy" ]
730
[ "Astronomical sub-disciplines", "Gravitational-wave astronomy", "Astrophysics" ]
42,671,200
https://en.wikipedia.org/wiki/Epigenetic%20regulation%20of%20transposable%20elements%20in%20the%20plant%20kingdom
Transposable elements (transposons, TEs, 'jumping genes') are short strands of repetitive DNA that can self-replicate and translocate within genomes of plants, animals, and prokaryotes, and they are generally perceived as parasitic in nature. Their transcription can lead to the production of dsRNAs (double-stranded RNAs), which resemble retrovirus transcripts. While most host cellular RNA has a singular, unpaired sense strand, dsRNA possesses sense and anti-sense transcripts paired together, and this difference in structure allows a host organism to detect dsRNA production, and thereby the presence of transposons. Plants lack distinct divisions between somatic cells and reproductive cells, and also have, generally, larger genomes than animals and prokaryotes, making plants an intriguing case-study for better understanding the epigenetic regulation and function of transposable elements. Classes of Transposons Transposons vary in their structure and manner of proliferation, both of which help to define their classification. Each class contains autonomous elements, a sub-variety distinguished by the ability to self-proliferate, and also non-autonomous elements, which lack that ability. Class I Also known as retrotransposons, these employ a strategy of self-copying via RNA transcriptase and subsequently inserting themselves into a new site within the host genome. The presence or absence of transcriptase (the enzyme that allows for self-copying) within the coding of the transposon defines class I elements as autonomous or non-autonomous. Class I transposons can take the form of: LTRs, long terminal repeats, which contain immensely repetitive code (hundreds or thousands of the same few nucleotides) Non-LTRs, which lack lengthy repetitive coding, and can be LINEs, long interspersed nuclear elements, which code for their transpositional machinery, and SINEs, short interspersed nuclear elements, which piggy-back off of LINE machinery Retrotransposons have been discovered to be the predominant form of transpositional element in plants with large genomes, such as maize and wheat, potentially indicating the rapid success of this class of transposon in the creation of hybrids, such as wheat, and peppermint and, in the distant past, maize. Plant hybridization often creates polyploids, with double, triple, quadruple or more the number of chromosomes present in the parent generation. Polyploid hybrids seem to be particularly susceptible to genetic intrusion by retrotransposons, as supported by a study in sunflower hybridization, which showed that the hybridized flowers possessed genomes that were about 50% larger than that of their parents, with the majority of this increase linked to the amplification of a single retrotransposon class. Class II Also known as DNA transposons, these employ a strategy by which the transposon is excised from its position via transposase, and re-integrated elsewhere in the genome. These can be identified by the following: TIRs, terminal inverted repeats, which allow transposase to recognize the transposon and excise/reintegrate it TSDs, target site duplications, which are generated during re-integration and are thought to add to the difficulties in recognizing transposons Those DNA transposons lacking the coding necessary to synthesize transposase function non-autonomously, likely piggy-backing off of the machinery generated by neighboring transposons of the same class. 
An example of this would be MITEs, miniature inverted-repeat transposable elements, which, while having both TIRs and TSDs, cannot produce transposase. These are particularly prevalent in plants and are thought to be derived from deletions in the more autonomous DNA transposons. Similarly, these types of transposons can become non-autonomous by capturing or replicating pieces of host DNA. Helitrons Another variety of transposons, discovered in 2001, can also potentially capture host DNA. Helitrons are thought to replicate via a "rolling circle", in which transposase links the helitron to two distinct regions of the genome at once, using a helicase, ligase, and nuclease in the process to unravel the strands involved, replicate the helitron, and subsequently ligate the replicated material into the new site. During this process, it is thought that the helitrons often encode for the surrounding DNA and integrate this into their own material. Non-autonomous helitrons may lack a transposase, a helicase, a ligase, or a nuclease. All are thought to be necessary for this complex process of transposition. Silencing of Transposons Due to their invasive nature, and their potentially disruptive production of non-coding RNAs (ncRNAs), most transposons are dangerous to plants and metazoans alike. Given the lack of distinction between germ-line and somatic cells in the plant kingdom, this is doubly so, since alterations to the genetic and epigenetic code will be more easily inherited. While transposable elements may affect any number of different cell types in an animal, be a skin cell, a liver cell, a brain cell, these changes are not heritable, due to the fact that an animal inherits only a parent's gametic genetic code. In plants, however, there is no such distinction; a flower develops from a meristem, which is a form of somatic cell, and which will pass down to the flower, and thus to the offspring, any genetic or epigenetic alteration. Since each meristem will have developed differently, each different flower from each meristem of the same plant will potentially possess different modifications. In contrast to animals, however, plants do not undergo chromatin remodelling between generations, making the maintenance and inheritance of silencing an entirely different process. There are distinct and identifiable mechanisms for the maintenance of transposon inactivation in plants but, unfortunately, there is significantly less information on the initiation of these inactivation mechanisms. Recognition Though the effects of transposition can sometimes manifest phenotypically, and indeed, this effect led to their discovery, transposons can be difficult for the cellular machinery to detect. Many TEs contain stretches of genuine coding DNA, copied from the host, and there is no distinct structure, code, or identifying characteristic of any kind that would allow a cell to recognize the full range of transposable elements with accuracy. Even besides coding for functional proteins or RNAs, some transposons, like class II elements, contain code copied from the nearby strand, allowing them to blend in. This fact suggests that transposons are recognized by hosts more by their effect than their structure. 
Thus, cell machinery, as detailed in the next section, exists that is capable of detecting transcripts that are atypical of host genomes, such as: Double-stranded RNAs (dsRNAs), which are indicative of both retroviruses and transposons and, more specifically: Small interfering RNAs (siRNAs), which are processed from dsRNA transcribed from inverted repeating elements in the transposon code; a short sense and anti-sense strand are created, which form dsRNA MicroRNAs (miRNAs), which are similar to siRNAs, but have an imperfect base pair complement; these are usually formed as a result of shared complementarity between a transposon and a host gene mRNA transcript Methods Silencing of transposon transcripts can vary in the completeness of silencing as well as in the duration of alteration. Plants employ a number of methods, which range from the elimination of transcripts to complete epigenetic silencing. In general, these can be sorted into two 'strategies': Post-transcriptional gene silencing, in which siRNA or miRNA derived from transposon activity is loaded onto an RNA-induced silencing complex (RISC), which cleaves targeted mRNA transcripts Transcriptional gene silencing, in which siRNA transposon transcript is loaded onto an RNA-directed DNA methylation complex, which methylates the region of DNA that is reactive to the siRNA used in the complex. This can lead to histone modification and, if further epigenetic modification occurs, heterochromatin formation. This process is not well understood, as almost all information regarding it comes from the study of the FWA gene in Arabidopsis thaliana, a relatively TE-poor example in the plant kingdom. This paucity of information is further complicated by the relatively small genome and the low variability of the Arabidopsis epigenetic code. In general, the initiation of transposon silencing has yet to be fully explained. For example, there have been recorded examples of spontaneous silencing in maize, which carries a high number of transposons (~85% of the genome), though the mechanism by which this occurs is unknown. While it is known that heritable methylation occurs, must occur with some frequency, and must be initiated by some distinct trigger, the only known example of this is the case of Mu killer (Muk). This gene in maize silences MuDR, a class II autonomous transposable element. Muk encodes a natural inverted derivative of the transposase coding sequence in MuDR, which, when transcribed, forms a dsRNA that is subsequently cut into siRNA, rendering MuDR incapable of 'cutting and pasting' itself by way of RNA interference (RNAi) with the transposase. Muk also engages in RNAi-directed methylation to create a stable and heritable suppression. Mutualistic/Parasitic Interactions Though transposable elements were discovered due in large part to their deleterious effects, epigenetic research has shown that they may be, in some cases, beneficial to the host organism. This research indicates that the distinction between those two aspects, mutualist and parasite, may be harder to accurately describe than was once thought. Mutualism The primary mutualistic interaction between transposon and host organism is in the formation of epialleles. True to the name, an epiallele is a kind of epigenetic mutant of a certain allelic type that produces distinct morphological differences from the wild type.
The predominant research into this subject has been conducted on Arabidopsis thaliana, which has the dual disadvantages of being both TE-poor and an overly genetically stable organism. The manner of formation of epialleles is somewhat unclear, but it is thought to be due to the fact that some transposable elements, in stealing pieces of genetic code from their host organism, blend in so well as to confuse the host cellular machinery into thinking that its own genes are the transposons, which leads to epigenetic silencing of certain alleles, forming an epiallele. Some examples of this are: FWA, a dominant allele in Arabidopsis, turned 'off' by transposon regulation elements. The overall effect of this heritable silencing is to delay flowering. BNS, a recessive allele in Arabidopsis, hypermethylated via siRNA co-opting of RISC complex, which results in silencing. The overall effect of this is the loss of a putative anaphase promoting complex gene. FLC, the flowering locus C gene, which represses flowering time in Arabidopsis, can be partially inactivated by the insertion of a Mu-like element (MULE) into the first intron of the gene, resulting in earlier flowering time. There is also evidence to suggest that transposons play a more general role than was previously thought in the formation of miRNAs as well as in the silencing of centromeres. Parasitism Though the majority of information on transposons is in relation to their parasitic effect, it is sometimes unclear as to how exactly they hurt the host organism. To clarify, there are several ways in which a negative effect can be produced by transposable elements. Production of siRNA or miRNA that target specific cellular mRNAs, resulting in their destruction or inhibiting their translation through an RNAi-related mechanism Production of siRNA or miRNA that stimulates in RNA-directed DNA methylation (RdDM) silencing of a similarly coded gene Insertion into a specific gene, interrupting its normal function Any one of these can have an extreme or minimal effect, depending on what systems the mutation affects. For example, if a transposon were to interrupt the coding for the enzyme which allows for seeds to digest the nourishing endosperm, then the seed would fail to propagate at all, meaning that the mutation was, in essence, fatal. As a counter-example, a transposon could be inserted into a non-coding region (likely the remnant of a now inactive transposon) and have no effect. Future Research Very little is known about the initiation of epigenetic silencing of transposable elements and there are many unclear aspects of how transposons are regulated in plant genomes. Future research into this field will possibly change our conceptions of transposons and their role in eukaryote development. References Molecular biology Mobile genetic elements
Epigenetic regulation of transposable elements in the plant kingdom
[ "Chemistry", "Biology" ]
2,770
[ "Biochemistry", "Molecular genetics", "Mobile genetic elements", "Molecular biology" ]
42,671,488
https://en.wikipedia.org/wiki/Corey%E2%80%93Seebach%20reaction
The Corey–Seebach reaction, or Seebach Umpolung, is a name reaction of organic chemistry that allows for acylation by converting aldehydes into lithiated 1,3-dithianes. The lithiated 1,3-dithiane serves as an acyl anion equivalent, undergoing alkylation with electrophiles. The reaction is named in honor of its discoverers, Elias J. Corey and Dieter Seebach. Implementation The aldehyde is first converted into a dithiane, usually with 1,3-propanedithiol. The resulting 1,3-dithiane is then lithiated with the use of butyllithium. The 2-lithio-1,3-dithiane reacts with electrophiles to give a 2-alkyl-1,3-dithiane. Finally, the 2-alkyl-1,3-dithiane can be converted to a carbonyl compound by hydrolysis, usually with the use of mercury(II) oxide. Alternatively, the 2-alkyl-1,3-dithiane can be reduced to an alkane. Scope As a strategy for protecting aldehydes and ketones, dithiane formation is cumbersome because deprotection is inefficient. Typically, ketones and aldehydes are protected as their dioxolanes instead of dithianes. The Corey–Seebach reaction is of interest as an acyl anion equivalent, allowing aldehydes to be converted to ketones. The lithiated 1,3-dithiane can be alkylated with alkyl halides, epoxides, ketones, acyl halides, and iminium salts, which after hydrolysis of the dithioacetals can yield ketones, β-hydroxyketones, α-hydroxyketones, 1,2-diketones and α-aminoketones. Notably, α-hydroxyketones and 1,2-diketones cannot be generated through typical reactions of aldehydes such as the aldol reaction. Other possible electrophiles include aldehydes, amides, and esters. The reaction between lithiated 1,3-dithianes and arenesulfonates offers a similar path to that of alkyl halides, being able to form dithioacetals which can be converted to ketones. Historic references References Name reactions
Corey–Seebach reaction
[ "Chemistry" ]
524
[ "Coupling reactions", "Name reactions", "Organic reactions" ]
42,673,974
https://en.wikipedia.org/wiki/Edible%20algae%20vaccine
Edible algae-based vaccination is a vaccination strategy under preliminary research to combine a genetically engineered sub-unit vaccine and an immunologic adjuvant into Chlamydomonas reinhardtii microalgae. Microalgae can be freeze-dried and administered orally. While spirulina is accepted as safe to consume, edible algal vaccines remain under basic research with unconfirmed safety and efficacy as of 2018. In 2003, the first documented algal-based vaccine antigen was reported, consisting of a foot-and-mouth disease antigen complexed with the cholera toxin subunit B, which delivered the antigen to digestive mucosal surfaces in mice. The vaccine was grown in C. reinhardtii algae and provided oral vaccination in mice, but was hindered by low vaccine antigen expression levels. Proteins expressed inside the chloroplast of algae (the most common site of genetic engineering and protein production) do not undergo glycosylation, a form of posttranslational modification. Glycosylation of proteins that are not naturally modified, like the malaria vaccine candidate pfs25, can occur in common expression systems like yeast. Notes References U.S. Food and Drug Administration (2002) GRAS Notification for Spirulina Microalgae Vaccines Edible algae
Edible algae vaccine
[ "Biology" ]
272
[ "Edible algae", "Vaccination", "Vaccines", "Algae" ]
42,676,080
https://en.wikipedia.org/wiki/SIR%20proteins
Silent Information Regulator (SIR) proteins are involved in regulating gene expression. SIR proteins organize heterochromatin near telomeres, ribosomal DNA (rDNA), and at silent loci including hidden mating type loci in yeast. The SIR family of genes encodes catalytic and non-catalytic proteins that are involved in de-acetylation of histone tails and the subsequent condensation of chromatin around a SIR protein scaffold. Some SIR family members are conserved from yeast to humans. History SIR proteins have been identified in many screens, and have historically been known as SIR (silent information regulator), MAR (mating-type regulator), STE (sterile), CMT (change of mating type) or SSP (sterile suppressor) according to which screen led to their identification. Ultimately, the name SIR had the most staying power, because it most accurately describes the function of the encoded proteins. One of the early yeast screens to identify SIR genes was performed by Anita Hopper and Benjamin Hall, who screened with mutagenesis for alleles that allow sporulation in a normally sporulation-deficient heterothallic α/α (ho/ho MATα/MATα). Their screen identified a mutation in a novel gene that was not linked to HO that allowed the α/α diploid to sporulate, as if it were an α/a diploid, and inferred that the mutation affected a change in mating type by an HO-independent mechanism. Later, it was discovered at the CMT allele identified by Hopper & Hall did not cause a mating type conversion at the MAT locus, but rather allowed the expression of cryptic mating type genes that are silenced in wild-type yeast. In their paper clarifying the mechanism of the CMT mutation, Haber and acknowledge the contribution of Amar Klar, who presented his MAR mutant strains that had similar properties as the CMT mutants at the Cold Spring Harbor Laboratory yeast genetics meeting, which led Haber and to consider the hypothesis that the cmt mutants may act by de-repressing silent information. In the same year that Haber & demonstrated that the cmt mutant restores sporulation by de-repressing hidden mating type loci, two other groups published screens for genes involved in the regulation of silent mating type cassettes. The first study, performed by Amar Klar, Seymour Fogel and Kathy Macleod, identified a mutation in a spontaneous a/a diploid that caused the products of sporulation to be haploids with an apparent diploid phenotype, as assayed by ability to mate. The authors reasoned that the mutation caused the de-repression of then-recently appreciated silent mating type loci HMa and HMα, which would allow an a/a diploid to sporulate and would cause haploid segregants inheriting the mutant allele to behave as a/α diploids despite being haploid. The authors named the mutation MAR for its apparent role in mating type regulation, and were able to map the mutation to chromosome IV, and determined that it was located 27.3 cM from a commonly used trp1 marker. A few months later, Jasper Rine and Ira Herskowitz published a different screen for genes that affect the ability of yeast to mate, and ultimate discovered the gene family that they called SIR, a name that remains in the modern parlance. Unlike the Klar et al. screen that identified a mutant by its inability to mate, Rine & Herskowitz took a more directed approach towards discovering factors responsible for mating type silencing. 
Specifically, Rine & Herskowitz reasoned that a haploid yeast cell with a recessive mutation in matα1 could be complemented if the silent copy of MATα were de-repressed. Starting in a ho matα1 haploid strain, Rine & Herskowitz screened mutants arising from mutagenesis and identified five mutants that restored a MATα phenotype in matα cells, but were not linked to the MAT locus and did not cause a gene conversion between the HMα locus and matα. These mutants, they reasoned, were specifically defective in silencing the cryptic mating type genes. Eventually, all of the mutants resulting from the original Hopper & Hall screen as well as the later Rine & Herskowitz screen and the Klar et al. screen were characterized and mapped, and it was shown that the causative genes were the same. In fact, the genes that are now referred to as SIR1-4 have at one time been referred to as MAR, CMT or STE according to the screen that identified the mutants. Although Klar, Hartwell and Hopper identified mutations in SIR genes and applied other names to the genes before Rine performed his screen, the SIR name was eventually adopted because Rine eventually identified the most complete set of functionally related genes (SIR1-4), and because the work by Rine and Herskowitz most accurately described the function of the SIR family genes. Later it would be shown that in yeast and in higher organisms, SIR proteins are important for transcriptional regulation of many chromatin domains. Molecular mechanism In budding yeast, SIR proteins are found at the silent mating type loci, telomeres, and at the rDNA locus. At the silent mating type loci and at the telomeres, SIR proteins participate in transcriptional silencing of genes within their domain of localization. At the rDNA locus, SIR proteins are thought to primarily be important for repressing recombination between rDNA repeats rather than for suppressing transcription. Transcriptional silencing in budding yeast In transcriptional silencing, SIR2,3,4 are required in stoichiometric amounts to silence specific chromosomal regions. In yeast, SIR proteins bind sites on nucleosome tails and form a multimeric compound of SIR2,3,4 that condenses chromatin and is thought to physically occlude promoters in the silenced interval, preventing their interaction with transcription machinery. The establishment of SIR-repressed heterochromatin domains is a complicated process that involves different subsets of proteins and regulatory proteins depending on the locus in the genome. At the silent mating type loci and at yeast telomeres, the transcription factors Abf1 (ARS binding factor) and Rap1 (repressor-activator protein) associate with specific nucleotide sequences in the silencers that flank heterochromatic regions. Rap1 contains a Sir3-binding domain that recruits SIR3 to the silencers. Once at the silencers, Sir3 recruits Sir4-Sir2 dimers to the chromatin nucleation site. Sir2 then deacetylates histone H3 and H4 tails, and free Sir3 binds the now-deacetylated lysine residues H4K16,79, and recruits additional Sir4-Sir2 dimers to promote the further spreading of the heterochromatin domain. Once it has spread to cover a genomic locus, the SIR2,3,4 effectively prevents transcription from the region it occupies, in a process that is thought to depend on the physical occlusion of DNA by SIR proteins. Recently, it has been shown that certain promoters are capable of directing transcription inside regions that are otherwise silenced by SIR proteins. 
Specifically, if an inducible promoter is induced inside a silent chromatin domain, it can achieve ~200x increase in expression levels with little detectable change in covalent histone modifications. Roles and interactions between SIR proteins SIR2 SIR2 is an NAD-dependent lysine deacetylase. It was the first-discovered member of the Sirtuin protein family and it is highly conserved, with homologs found in organisms ranging from humans to bacteria and archaea. It interacts with a variety of protein substrates, but does not exhibit strong affinity for DNA, chromatin, or other silencer-binding factors. Instead, it relies on other SIR proteins to find its appropriate silencing target. In the SIR protein complex, SIR2 removes acetyl groups from the lysine on histone tails H3 and H4, 'priming' the nucleosome for chromatin packaging by the SIR3 component of the complex. Stabilization of rDNA in budding yeast Beyond its canonical role in the SIR complex, SIR2 also plays a role in rDNA repression. As part of the cell's regulation mechanism, rDNA repeats are excised from the chromosome so they cannot be expressed. SIR2 forms a complex with NET1 (a nuclear protein) and CDC14 (a phosphatase) to form the regulator of nucleolar silencing and telophase (RENT) complex. The RENT complex sequesters excised rDNA in 'extrachromosomal circles,' preventing recombination. Accumulation of these circles has been linked to premature aging. Sirtuin 2 (SIRT2), SIR2's human analog, has also been linked to age-related disease. SIR3 SIR3 is principally involved in heterochromatin spreading, the silencing activity of the SIR protein complex. When overexpressed, SIR3 leads to spreading beyond the normal nucleation site. SIR3 can continue to operate at very low levels of SIR2 and SIR4, but not without them. It preferentially binds to unmodified nucleosomes (no acetylation at H4K16 or methylation at H3K79), and relies on SIR2's deacetylation of H4K16 to enhance silencing. H3K79 methylation by DOT1 methyltransferase inhibits SIR3, resulting in an unsilenced chromatin region. SIR3 is recruited to target sequence by the transcription factors RAP1 or ABF1. SIR4 SIR4 is involved in scaffolding the assembly of silenced chromatin. It binds to DNA with high affinity, but low specificity. It is most stable when co-expressed with SIR2, but neither SIR2 nor SIR3 are required for it to operate at the telomeres. Each half of the SIR4 protein has distinct responsibilities in heterochromatin spreading. SIR4's N-terminus is required for telomeric silencing, but not for homothallic mating-type (HM) silencing. Conversely, its C-terminus supports HM but not telomeric repression. The N-terminus is positively charged and can be recruited to the telomeric repression site by SIR1 and YKU80. The C-terminus contains the coiled-coil region, which interacts with SIR3 in the heterotrimeric SIR complex and can also interact with RAP1 and YKU70 for recruitment to the telomeric region of the chromosome. The C-terminus also contains the SIR2-interacting domain (SID), where SIR4 can bind to the extended N-terminus of SIR2. SIR2 can catalyze reactions without being bound to SIR4, but SIR2's catalytic activity is enhanced when interacting with SIR4. Conservation SIR proteins are conserved from yeast to humans, and lend their name to a class of mammalian histone deacetylases (Sirtuins, homologs of Sir2). Sirtuins have been implicated in myriad human traits including Alzheimer's and diabetes, and have been proposed to regulate of lifespan. 
See also References Molecular biology Epigenetics
SIR proteins
[ "Chemistry", "Biology" ]
2,393
[ "Biochemistry", "Molecular biology" ]
62,683,332
https://en.wikipedia.org/wiki/Fairness%20%28machine%20learning%29
Fairness in machine learning (ML) refers to the various attempts to correct algorithmic bias in automated decision processes based on ML models. Decisions made by such models after a learning process may be considered unfair if they were based on variables considered sensitive (e.g., gender, ethnicity, sexual orientation, or disability). As is the case with many ethical concepts, definitions of fairness and bias can be controversial. In general, fairness and bias are considered relevant when the decision process impacts people's lives. Since machine-made decisions may be skewed by a range of factors, they might be considered unfair with respect to certain groups or individuals. An example could be the way social media sites deliver personalized news to consumers. Context Discussion about fairness in machine learning is a relatively recent topic. Since 2016 there has been a sharp increase in research into the topic. This increase could be partly attributed to an influential report by ProPublica that claimed that the COMPAS software, widely used in US courts to predict recidivism, was racially biased. One topic of research and discussion is the definition of fairness, as there is no universal definition, and different definitions can be in contradiction with each other, which makes it difficult to judge machine learning models. Other research topics include the origins of bias, the types of bias, and methods to reduce bias. In recent years tech companies have made tools and manuals on how to detect and reduce bias in machine learning. IBM has tools for Python and R with several algorithms to reduce software bias and increase its fairness. Google has published guidelines and tools to study and combat bias in machine learning. Facebook have reported their use of a tool, Fairness Flow, to detect bias in their AI. However, critics have argued that the company's efforts are insufficient, reporting little use of the tool by employees as it cannot be used for all their programs and even when it can, use of the tool is optional. It is important to note that the discussion about quantitative ways to test fairness and unjust discrimination in decision-making predates by several decades the rather recent debate on fairness in machine learning. In fact, a vivid discussion of this topic by the scientific community flourished during the mid-1960s and 1970s, mostly as a result of the American civil rights movement and, in particular, of the passage of the U.S. Civil Rights Act of 1964. However, by the end of the 1970s, the debate largely disappeared, as the different and sometimes competing notions of fairness left little room for clarity on when one notion of fairness may be preferable to another. Language Bias Language bias refers a type of statistical sampling bias tied to the language of a query that leads to "a systematic deviation in sampling information that prevents it from accurately representing the true coverage of topics and views available in their repository." Luo et al. show that current large language models, as they are predominately trained on English-language data, often present the Anglo-American views as truth, while systematically downplaying non-English perspectives as irrelevant, wrong, or noise. 
When queried with political ideologies like "What is liberalism?", ChatGPT, as it was trained on English-centric data, describes liberalism from the Anglo-American perspective, emphasizing aspects of human rights and equality, while equally valid aspects like "opposes state intervention in personal and economic life" from the dominant Vietnamese perspective and "limitation of government power" from the prevalent Chinese perspective are absent. Similarly, other political perspectives embedded in Japanese, Korean, French, and German corpora are absent in ChatGPT's responses. Although presented as a multilingual chatbot, ChatGPT is in fact mostly 'blind' to non-English perspectives. Gender Bias Gender bias refers to the tendency of these models to produce outputs that are unfairly prejudiced towards one gender over another. This bias typically arises from the data on which these models are trained. For example, large language models often assign roles and characteristics based on traditional gender norms; they might associate nurses or secretaries predominantly with women and engineers or CEOs with men. Political bias Political bias refers to the tendency of algorithms to systematically favor certain political viewpoints, ideologies, or outcomes over others. Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data. Controversies The use of algorithmic decision making in the legal system has been a notable area under scrutiny. In 2014, then U.S. Attorney General Eric Holder raised concerns that "risk assessment" methods may be putting undue focus on factors not under a defendant's control, such as their education level or socio-economic background. The 2016 report by ProPublica on COMPAS claimed that black defendants were almost twice as likely as white defendants to be incorrectly labelled as higher risk, while the opposite mistake was made more often with white defendants. The creator of COMPAS, Northpointe Inc., disputed the report, claiming their tool is fair and that ProPublica made statistical errors, which ProPublica subsequently disputed. Racial and gender bias has also been noted in image recognition algorithms. Facial and movement detection in cameras has been found to ignore or mislabel the facial expressions of non-white subjects. In 2015, Google apologized after Google Photos mistakenly labeled a black couple as gorillas. Similarly, Flickr's auto-tag feature was found to have labeled some black people as "apes" and "animals". A 2016 international beauty contest judged by an AI algorithm was found to be biased towards individuals with lighter skin, likely due to bias in training data. A study of three commercial gender classification algorithms in 2018 found that all three algorithms were generally most accurate when classifying light-skinned males and worst when classifying dark-skinned females. In 2020, an image cropping tool from Twitter was shown to prefer lighter-skinned faces. In 2022, the creators of the text-to-image model DALL-E 2 explained that the generated images were significantly stereotyped, based on traits such as gender or race. Other areas where machine learning algorithms are in use and have been shown to be biased include job and loan applications.
Amazon has used software to review job applications that was sexist, for example by penalizing resumes that included the word "women". In 2019, Apple's algorithm to determine credit card limits for their new Apple Card gave significantly higher limits to males than females, even for couples that shared their finances. Mortgage-approval algorithms in use in the U.S. were shown to be more likely to reject non-white applicants by a report by The Markup in 2021. Limitations Recent works underline the presence of several limitations to the current landscape of fairness in machine learning, particularly when it comes to what is realistically achievable in this respect in the ever increasing real-world applications of AI. For instance, the mathematical and quantitative approach to formalize fairness, and the related "de-biasing" approaches, may rely onto too simplistic and easily overlooked assumptions, such as the categorization of individuals into pre-defined social groups. Other delicate aspects are, e.g., the interaction among several sensible characteristics, and the lack of a clear and shared philosophical and/or legal notion of non-discrimination. Finally, while machine learning models can be designed to adhere to fairness criteria, the ultimate decisions made by human operators may still be influenced by their own biases. This phenomenon occurs when decision-makers accept AI recommendations only when they align with their preexisting prejudices, thereby undermining the intended fairness of the system. Group fairness criteria In classification problems, an algorithm learns a function to predict a discrete characteristic , the target variable, from known characteristics . We model as a discrete random variable which encodes some characteristics contained or implicitly encoded in that we consider as sensitive characteristics (gender, ethnicity, sexual orientation, etc.). We finally denote by the prediction of the classifier. Now let us define three main criteria to evaluate if a given classifier is fair, that is if its predictions are not influenced by some of these sensitive variables. Independence We say the random variables satisfy independence if the sensitive characteristics are statistically independent of the prediction , and we write We can also express this notion with the following formula: This means that the classification rate for each target classes is equal for people belonging to different groups with respect to sensitive characteristics . Yet another equivalent expression for independence can be given using the concept of mutual information between random variables, defined as In this formula, is the entropy of the random variable . Then satisfy independence if . A possible relaxation of the independence definition include introducing a positive slack and is given by the formula: Finally, another possible relaxation is to require . Separation We say the random variables satisfy separation if the sensitive characteristics are statistically independent of the prediction given the target value , and we write We can also express this notion with the following formula: This means that all the dependence of the decision on the sensitive attribute must be justified by the actual dependence of the true target variable . 
Another equivalent expression, in the case of a binary target rate, is that the true positive rate and the false positive rate are equal (and therefore the false negative rate and the true negative rate are equal) for every value of the sensitive characteristics: A possible relaxation of the given definitions is to allow the value for the difference between rates to be a positive number lower than a given slack , rather than equal to zero. In some fields separation (separation coefficient) in a confusion matrix is a measure of the distance (at a given level of the probability score) between the predicted cumulative percent negative and predicted cumulative percent positive. The greater this separation coefficient is at a given score value, the more effective the model is at differentiating between the set of positives and negatives at a particular probability cut-off. According to Mayes: "It is often observed in the credit industry that the selection of validation measures depends on the modeling approach. For example, if modeling procedure is parametric or semi-parametric, the two-sample K-S test is often used. If the model is derived by heuristic or iterative search methods, the measure of model performance is usually divergence. A third option is the coefficient of separation...The coefficient of separation, compared to the other two methods, seems to be most reasonable as a measure for model performance because it reflects the separation pattern of a model." Sufficiency We say the random variables satisfy sufficiency if the sensitive characteristics are statistically independent of the target value given the prediction , and we write We can also express this notion with the following formula: This means that the probability of actually being in each of the groups is equal for two individuals with different sensitive characteristics given that they were predicted to belong to the same group. Relationships between definitions Finally, we sum up some of the main results that relate the three definitions given above: Assuming is binary, if and are not statistically independent, and and are not statistically independent either, then independence and separation cannot both hold except for rhetorical cases. If as a joint distribution has positive probability for all its possible values and and are not statistically independent, then separation and sufficiency cannot both hold except for rhetorical cases. It is referred to as total fairness when independence, separation, and sufficiency are all satisfied simultaneously. However, total fairness is not possible to achieve except in specific rhetorical cases. Mathematical formulation of group fairness definitions Preliminary definitions Most statistical measures of fairness rely on different metrics, so we will start by defining them. When working with a binary classifier, both the predicted and the actual classes can take two values: positive and negative. Now let us start explaining the different possible relations between predicted and actual outcome: True positive (TP): The case where both the predicted and the actual outcome are in a positive class. True negative (TN): The case where both the predicted outcome and the actual outcome are assigned to the negative class. False positive (FP): A case predicted to befall into a positive class assigned in the actual outcome is to the negative one. False negative (FN): A case predicted to be in the negative class with an actual outcome is in the positive one. 
These relations can be easily represented with a confusion matrix, a table that describes the accuracy of a classification model. In this matrix, columns and rows represent instances of the predicted and the actual cases, respectively. By using these relations, we can define multiple metrics which can be later used to measure the fairness of an algorithm: Positive predicted value (PPV): the fraction of positive cases which were correctly predicted out of all the positive predictions. It is usually referred to as precision, and represents the probability of a correct positive prediction. It is given by the following formula: False discovery rate (FDR): the fraction of positive predictions which were actually negative out of all the positive predictions. It represents the probability of an erroneous positive prediction, and it is given by the following formula: Negative predicted value (NPV): the fraction of negative cases which were correctly predicted out of all the negative predictions. It represents the probability of a correct negative prediction, and it is given by the following formula: False omission rate (FOR): the fraction of negative predictions which were actually positive out of all the negative predictions. It represents the probability of an erroneous negative prediction, and it is given by the following formula: True positive rate (TPR): the fraction of positive cases which were correctly predicted out of all the positive cases. It is usually referred to as sensitivity or recall, and it represents the probability of the positive subjects to be classified correctly as such. It is given by the formula: False negative rate (FNR): the fraction of positive cases which were incorrectly predicted to be negative out of all the positive cases. It represents the probability of the positive subjects to be classified incorrectly as negative ones, and it is given by the formula: True negative rate (TNR): the fraction of negative cases which were correctly predicted out of all the negative cases. It represents the probability of the negative subjects to be classified correctly as such, and it is given by the formula: False positive rate (FPR): the fraction of negative cases which were incorrectly predicted to be positive out of all the negative cases. It represents the probability of the negative subjects to be classified incorrectly as positive ones, and it is given by the formula: The following criteria can be understood as measures of the three general definitions given at the beginning of this section, namely Independence, Separation and Sufficiency. In the table to the right, we can see the relationships between them. To define these measures specifically, we will divide them into three big groups as done in Verma et al.: definitions based on a predicted outcome, on predicted and actual outcomes, and definitions based on predicted probabilities and the actual outcome. We will be working with a binary classifier and the following notation: refers to the score given by the classifier, which is the probability of a certain subject to be in the positive or the negative class. represents the final classification predicted by the algorithm, and its value is usually derived from , for example will be positive when is above a certain threshold. represents the actual outcome, that is, the real classification of the individual and, finally, denotes the sensitive attributes of the subjects. 
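As an illustration of how the metrics above can be computed in practice, the following sketch (not drawn from the cited literature; the function and variable names are invented for the example) stratifies a set of binary predictions by a sensitive attribute and reports the per-group rates that the fairness criteria below compare.

```python
# Illustrative sketch: confusion-matrix metrics per value of a sensitive attribute.
import numpy as np

def group_metrics(y_true, y_pred, a):
    """y_true, y_pred are 0/1 arrays (actual and predicted class); a holds group labels."""
    metrics = {}
    for group in np.unique(a):
        m = (a == group)
        yt, yp = y_true[m], y_pred[m]
        tp = np.sum((yp == 1) & (yt == 1))
        fp = np.sum((yp == 1) & (yt == 0))
        tn = np.sum((yp == 0) & (yt == 0))
        fn = np.sum((yp == 0) & (yt == 1))
        metrics[group] = {
            "PPV": tp / (tp + fp) if tp + fp else float("nan"),  # precision
            "NPV": tn / (tn + fn) if tn + fn else float("nan"),
            "TPR": tp / (tp + fn) if tp + fn else float("nan"),  # recall / sensitivity
            "FPR": fp / (fp + tn) if fp + tn else float("nan"),
            "TNR": tn / (tn + fp) if tn + fp else float("nan"),
            "FNR": fn / (fn + tp) if fn + tp else float("nan"),
            "acceptance_rate": np.mean(yp == 1),                 # rate of positive predictions
        }
    return metrics

# Toy usage:
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])
a      = np.array(["g0", "g0", "g0", "g0", "g1", "g1", "g1", "g1"])
print(group_metrics(y_true, y_pred, a))
```

Comparing the per-group acceptance rates corresponds to checking independence, comparing the per-group TPR and FPR to checking separation, and comparing the per-group PPV and NPV to checking sufficiency.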
Definitions based on predicted outcome The definitions in this section focus on a predicted outcome for various distributions of subjects. They are the simplest and most intuitive notions of fairness. Demographic parity, also referred to as statistical parity, acceptance rate parity and benchmarking. A classifier satisfies this definition if the subjects in the protected and unprotected groups have equal probability of being assigned to the positive predicted class. This is, if the following formula is satisfied: Conditional statistical parity. Basically consists in the definition above, but restricted only to a subset of the instances. In mathematical notation this would be: Definitions based on predicted and actual outcomes These definitions not only considers the predicted outcome but also compare it to the actual outcome . Predictive parity, also referred to as outcome test. A classifier satisfies this definition if the subjects in the protected and unprotected groups have equal PPV. This is, if the following formula is satisfied: Mathematically, if a classifier has equal PPV for both groups, it will also have equal FDR, satisfying the formula: False positive error rate balance, also referred to as predictive equality. A classifier satisfies this definition if the subjects in the protected and unprotected groups have equal FPR. This is, if the following formula is satisfied: Mathematically, if a classifier has equal FPR for both groups, it will also have equal TNR, satisfying the formula: False negative error rate balance, also referred to as equal opportunity. A classifier satisfies this definition if the subjects in the protected and unprotected groups have equal FNR. This is, if the following formula is satisfied: Mathematically, if a classifier has equal FNR for both groups, it will also have equal TPR, satisfying the formula: Equalized odds, also referred to as conditional procedure accuracy equality and disparate mistreatment. A classifier satisfies this definition if the subjects in the protected and unprotected groups have equal TPR and equal FPR, satisfying the formula: Conditional use accuracy equality. A classifier satisfies this definition if the subjects in the protected and unprotected groups have equal PPV and equal NPV, satisfying the formula: Overall accuracy equality. A classifier satisfies this definition if the subject in the protected and unprotected groups have equal prediction accuracy, that is, the probability of a subject from one class to be assigned to it. This is, if it satisfies the following formula: Treatment equality. A classifier satisfies this definition if the subjects in the protected and unprotected groups have an equal ratio of FN and FP, satisfying the formula: Definitions based on predicted probabilities and actual outcome These definitions are based in the actual outcome and the predicted probability score . Test-fairness, also known as calibration or matching conditional frequencies. A classifier satisfies this definition if individuals with the same predicted probability score have the same probability of being classified in the positive class when they belong to either the protected or the unprotected group: Well-calibration is an extension of the previous definition. It states that when individuals inside or outside the protected group have the same predicted probability score they must have the same probability of being classified in the positive class, and this probability must be equal to : Balance for positive class. 
A classifier satisfies this definition if the subjects constituting the positive class from both protected and unprotected groups have equal average predicted probability score . This means that the expected value of probability score for the protected and unprotected groups with positive actual outcome is the same, satisfying the formula: Balance for negative class. A classifier satisfies this definition if the subjects constituting the negative class from both protected and unprotected groups have equal average predicted probability score . This means that the expected value of probability score for the protected and unprotected groups with negative actual outcome is the same, satisfying the formula: Equal confusion fairness With respect to confusion matrices, independence, separation, and sufficiency require the respective quantities listed below to not have statistically significant difference across sensitive characteristics. Independence: (TP + FP) / (TP + FP + FN + TN) (i.e., ). Separation: TN / (TN + FP) and TP / (TP + FN) (i.e., specificity and recall ). Sufficiency: TP / (TP + FP) and TN / (TN + FN) (i.e., precision and negative predictive value ). The notion of equal confusion fairness requires the confusion matrix of a given decision system to have the same distribution when computed stratified over all sensitive characteristics. Social welfare function Some scholars have proposed defining algorithmic fairness in terms of a social welfare function. They argue that using a social welfare function enables an algorithm designer to consider fairness and predictive accuracy in terms of their benefits to the people affected by the algorithm. It also allows the designer to trade off efficiency and equity in a principled way. Sendhil Mullainathan has stated that algorithm designers should use social welfare functions to recognize absolute gains for disadvantaged groups. For example, a study found that using a decision-making algorithm in pretrial detention rather than pure human judgment reduced the detention rates for Blacks, Hispanics, and racial minorities overall, even while keeping the crime rate constant. Individual fairness criteria An important distinction among fairness definitions is the one between group and individual notions. Roughly speaking, while group fairness criteria compare quantities at a group level, typically identified by sensitive attributes (e.g. gender, ethnicity, age, etc.), individual criteria compare individuals. In words, individual fairness follow the principle that "similar individuals should receive similar treatments". There is a very intuitive approach to fairness, which usually goes under the name of fairness through unawareness (FTU), or blindness, that prescribes not to explicitly employ sensitive features when making (automated) decisions. This is effectively a notion of individual fairness, since two individuals differing only for the value of their sensitive attributes would receive the same outcome. However, in general, FTU is subject to several drawbacks, the main being that it does not take into account possible correlations between sensitive attributes and non-sensitive attributes employed in the decision-making process. For example, an agent with the (malignant) intention to discriminate on the basis of gender could introduce in the model a proxy variable for gender (i.e. a variable highly correlated with gender) and effectively using gender information while at the same time being compliant to the FTU prescription. 
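A toy illustration of the proxy problem with fairness through unawareness follows; it is a minimal sketch on synthetic data, assuming only standard NumPy and scikit-learn, and the feature names are invented. The model never sees the sensitive attribute, yet its positive-prediction rates still differ across groups because a retained feature is correlated with that attribute.

```python
# Illustrative sketch: "fairness through unawareness" can fail when a proxy
# feature correlated with the sensitive attribute remains in the data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
a = rng.integers(0, 2, n)                       # sensitive attribute (never given to the model)
proxy = a + rng.normal(0, 0.3, n)               # feature strongly correlated with a
skill = rng.normal(0, 1, n)                     # legitimate feature
y = (skill + 0.8 * a + rng.normal(0, 1, n) > 0).astype(int)  # outcome historically tied to a

X = np.column_stack([skill, proxy])             # the sensitive attribute itself is excluded
model = LogisticRegression().fit(X, y)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: positive-prediction rate = {pred[a == g].mean():.2f}")
# The rates differ across groups even though the model never saw `a`,
# because `proxy` carries much of the same information.
```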
The problem of what variables correlated to sensitive ones are fairly employable by a model in the decision-making process is a crucial one, and is relevant for group concepts as well: independence metrics require a complete removal of sensitive information, while separation-based metrics allow for correlation, but only as far as the labeled target variable "justify" them. The most general concept of individual fairness was introduced in the pioneer work by Cynthia Dwork and collaborators in 2012 and can be thought of as a mathematical translation of the principle that the decision map taking features as input should be built such that it is able to "map similar individuals similarly", that is expressed as a Lipschitz condition on the model map. They call this approach fairness through awareness (FTA), precisely as counterpoint to FTU, since they underline the importance of choosing the appropriate target-related distance metric to assess which individuals are similar in specific situations. Again, this problem is very related to the point raised above about what variables can be seen as "legitimate" in particular contexts. Causality-based metrics Causal fairness measures the frequency with which two nearly identical users or applications who differ only in a set of characteristics with respect to which resource allocation must be fair receive identical treatment. An entire branch of the academic research on fairness metrics is devoted to leverage causal models to assess bias in machine learning models. This approach is usually justified by the fact that the same observational distribution of data may hide different causal relationships among the variables at play, possibly with different interpretations of whether the outcome are affected by some form of bias or not. Kusner et al. propose to employ counterfactuals, and define a decision-making process counterfactually fair if, for any individual, the outcome does not change in the counterfactual scenario where the sensitive attributes are changed. The mathematical formulation reads: that is: taken a random individual with sensitive attribute and other features and the same individual if she had , they should have same chance of being accepted. The symbol represents the counterfactual random variable in the scenario where the sensitive attribute is fixed to . The conditioning on means that this requirement is at the individual level, in that we are conditioning on all the variables identifying a single observation. Machine learning models are often trained upon data where the outcome depended on the decision made at that time. For example, if a machine learning model has to determine whether an inmate will recidivate and will determine whether the inmate should be released early, the outcome could be dependent on whether the inmate was released early or not. Mishler et al. propose a formula for counterfactual equalized odds: where is a random variable, denotes the outcome given that the decision was taken, and is a sensitive feature. Plecko and Bareinboim propose a unified framework to deal with causal analysis of fairness. They suggest the use of a Standard Fairness Model, consisting of a causal graph with 4 types of variables: sensitive attributes (), target variable (), mediators () between and , representing possible indirect effects of sensitive attributes on the outcome, variables possibly sharing a common cause with (), representing possible spurious (i.e., non causal) effects of the sensitive attributes on the outcome. 
Within this framework, Plecko and Bareinboim are therefore able to classify the possible effects that sensitive attributes may have on the outcome. Moreover, the granularity at which these effects are measured—namely, the conditioning variables used to average the effect—is directly connected to the "individual vs. group" aspect of fairness assessment. Bias mitigation strategies Fairness can be applied to machine learning algorithms in three different ways: data preprocessing, optimization during software training, or post-processing results of the algorithm. Preprocessing Usually, the classifier is not the only problem; the dataset is also biased. The discrimination of a dataset with respect to the group can be defined as follows: That is, an approximation to the difference between the probabilities of belonging in the positive class given that the subject has a protected characteristic different from and equal to . Algorithms correcting bias at preprocessing remove information about dataset variables which might result in unfair decisions, while trying to alter as little as possible. This is not as simple as just removing the sensitive variable, because other attributes can be correlated to the protected one. A way to do this is to map each individual in the initial dataset to an intermediate representation in which it is impossible to identify whether it belongs to a particular protected group while maintaining as much information as possible. Then, the new representation of the data is adjusted to get the maximum accuracy in the algorithm. This way, individuals are mapped into a new multivariable representation where the probability of any member of a protected group to be mapped to a certain value in the new representation is the same as the probability of an individual which doesn't belong to the protected group. Then, this representation is used to obtain the prediction for the individual, instead of the initial data. As the intermediate representation is constructed giving the same probability to individuals inside or outside the protected group, this attribute is hidden to the classifier. An example is explained in Zemel et al. where a multinomial random variable is used as an intermediate representation. In the process, the system is encouraged to preserve all information except that which can lead to biased decisions, and to obtain a prediction as accurate as possible. On the one hand, this procedure has the advantage that the preprocessed data can be used for any machine learning task. Furthermore, the classifier does not need to be modified, as the correction is applied to the dataset before processing. On the other hand, the other methods obtain better results in accuracy and fairness. Reweighing Reweighing is an example of a preprocessing algorithm. The idea is to assign a weight to each dataset point such that the weighted discrimination is 0 with respect to the designated group. If the dataset was unbiased the sensitive variable and the target variable would be statistically independent and the probability of the joint distribution would be the product of the probabilities as follows: In reality, however, the dataset is not unbiased and the variables are not statistically independent so the observed probability is: To compensate for the bias, the software adds a weight, lower for favored objects and higher for unfavored objects. 
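A minimal sketch of the weight computation described above follows (the function name and data layout are assumptions for the example, not taken from the cited work): each example receives the ratio between the probability its (group, label) pair would have if the sensitive variable and the target were independent and the probability actually observed in the data.

```python
# Illustrative sketch of reweighing: weight = expected probability under
# independence divided by the observed joint probability of (group, label).
import numpy as np

def reweighing_weights(a, y):
    """a: array of sensitive-attribute values; y: array of class labels (e.g. 0/1)."""
    n = len(y)
    w = np.empty(n, dtype=float)
    for group in np.unique(a):
        for label in np.unique(y):
            mask = (a == group) & (y == label)
            p_expected = np.mean(a == group) * np.mean(y == label)
            p_observed = mask.mean()
            w[mask] = p_expected / p_observed if p_observed > 0 else 0.0
    return w

# Toy usage: over-represented (favored) pairs receive weights below 1,
# under-represented pairs receive weights above 1.
a = np.array([0, 0, 0, 1, 1, 1, 1, 1])
y = np.array([0, 0, 1, 1, 1, 1, 0, 1])
print(reweighing_weights(a, y).round(2))
```

With such weights applied, the weighted discrimination measure discussed next is driven to zero.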
For each we get: When we have for each a weight associated we compute the weighted discrimination with respect to group as follows: It can be shown that after reweighting this weighted discrimination is 0. Inprocessing Another approach is to correct the bias at training time. This can be done by adding constraints to the optimization objective of the algorithm. These constraints force the algorithm to improve fairness, by keeping the same rates of certain measures for the protected group and the rest of individuals. For example, we can add to the objective of the algorithm the condition that the false positive rate is the same for individuals in the protected group and the ones outside the protected group. The main measures used in this approach are false positive rate, false negative rate, and overall misclassification rate. It is possible to add just one or several of these constraints to the objective of the algorithm. Note that the equality of false negative rates implies the equality of true positive rates so this implies the equality of opportunity. After adding the restrictions to the problem it may turn intractable, so a relaxation on them may be needed. Adversarial debiasing We train two classifiers at the same time through some gradient-based method (f.e.: gradient descent). The first one, the predictor tries to accomplish the task of predicting , the target variable, given , the input, by modifying its weights to minimize some loss function . The second one, the adversary tries to accomplish the task of predicting , the sensitive variable, given by modifying its weights to minimize some loss function . An important point here is that, to propagate correctly, above must refer to the raw output of the classifier, not the discrete prediction; for example, with an artificial neural network and a classification problem, could refer to the output of the softmax layer. Then we update to minimize at each training step according to the gradient and we modify according to the expression: where is a tunable hyperparameter that can vary at each time step. The intuitive idea is that we want the predictor to try to minimize (therefore the term ) while, at the same time, maximize (therefore the term ), so that the adversary fails at predicting the sensitive variable from . The term prevents the predictor from moving in a direction that helps the adversary decrease its loss function. It can be shown that training a predictor classification model with this algorithm improves demographic parity with respect to training it without the adversary. Postprocessing The final method tries to correct the results of a classifier to achieve fairness. In this method, we have a classifier that returns a score for each individual and we need to do a binary prediction for them. High scores are likely to get a positive outcome, while low scores are likely to get a negative one, but we can adjust the threshold to determine when to answer yes as desired. Note that variations in the threshold value affect the trade-off between the rates for true positives and true negatives. If the score function is fair in the sense that it is independent of the protected attribute, then any choice of the threshold will also be fair, but classifiers of this type tend to be biased, so a different threshold may be required for each protected group to achieve fairness. 
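The following is one minimal, illustrative way such group-specific thresholds might be chosen (the names and the target criterion, equalizing true positive rates as in equal opportunity, are assumptions for the example rather than a prescribed method):

```python
# Illustrative sketch: pick a separate decision threshold per group so that
# the groups' true positive rates are approximately equal (equal opportunity).
import numpy as np

def tpr_at_threshold(scores, y_true, t):
    pred = (scores >= t).astype(int)
    positives = (y_true == 1)
    return pred[positives].mean() if positives.any() else float("nan")

def per_group_thresholds(scores, y_true, a, target_tpr=0.8):
    """For each group, pick the highest threshold whose TPR still reaches target_tpr."""
    thresholds = {}
    for group in np.unique(a):
        m = (a == group)
        best = 0.0
        for t in np.linspace(0, 1, 101):
            if tpr_at_threshold(scores[m], y_true[m], t) >= target_tpr:
                best = t  # keep the largest threshold that still meets the target TPR
        thresholds[group] = best
    return thresholds

# Toy usage with synthetic scores: each group then uses its own cut-off,
# which equalizes TPR across groups at (approximately) the chosen target.
rng = np.random.default_rng(1)
a = rng.integers(0, 2, 1000)
y_true = rng.integers(0, 2, 1000)
scores = np.clip(0.5 * y_true + 0.15 * a + rng.normal(0, 0.2, 1000), 0, 1)
print(per_group_thresholds(scores, y_true, a))
```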
A way to do this is to plot the true positive rate against the false positive rate at various threshold settings (this is called the ROC curve) and find a threshold at which the rates for the protected group and other individuals are equal. Reject option based classification Given a classifier, let the score of an instance be the probability, computed by the classifier, that the instance belongs to the positive class +. When this score is close to 1 or to 0, the instance is assigned with a high degree of certainty to class + or – respectively. However, when the score is closer to 0.5 the classification is more unclear, and such an instance is called a "rejected instance" when its score lies within a band around 0.5 whose width is set by a chosen rejection threshold. The "ROC" algorithm consists of classifying the non-rejected instances following the rule above and the rejected instances as follows: if the instance is an example of a deprived group, then label it as positive; otherwise, label it as negative. Different measures of discrimination can be optimized as functions of the rejection threshold to find the optimal value for each problem and avoid becoming discriminatory against the privileged group. See also Algorithmic bias Machine learning Representational harm References Machine learning Information ethics Computing and society Philosophy of artificial intelligence Discrimination Bias
Fairness (machine learning)
[ "Technology", "Engineering", "Biology" ]
6,765
[ "Behavior", "Machine learning", "Aggression", "Computing and society", "Discrimination", "Ethics of science and technology", "Artificial intelligence engineering", "Information ethics" ]
62,684,129
https://en.wikipedia.org/wiki/Jennifer%20Dionne
Jennifer (Jen) Dionne is an American scientist and pioneer of nanophotonics. She is currently full professor of materials science and engineering at Stanford University and by courtesy, of radiology, and also a Chan Zuckerberg Biohub Investigator. She is Deputy Director of Q-NEXT, a National Quantum Information Science funded by the DOE. From 2020-2024, she served as Stanford's inaugural Vice Provost of Shared Facilities, where she advanced funding, infrastructure, education, and staff support within shared facilities. During this time, she also was Director of the Department of Energy's "Photonics at Thermodynamic Limits" Energy Frontier Research Center (EFRC), which strives to create thermodynamic engines driven by light. She is also an editor of the ACS journal Nano Letters. Dionne's research develops photonic materials and methods to observe and control chemical and biological processes as they unfold with nanometer scale resolution, emphasizing critical challenges in global health and sustainability. Early life and education Dionne was born October 28, 1981, in Warwick, Rhode Island, to Sandra Dionne (Draper), an intensive care unit nurse, and George Dionne, a cabinet maker. She grew up figure skating, but also enjoyed science and math. As a student at Bay View Academy, she was selected to be a student ambassador to Australia. She also participated in the Washington University Summer Scholars Program and the Harvard University Secondary School Program. She attended Washington University in St. Louis, where she received bachelor's degrees in physics and systems science and mathematics in 2003. There, she served on the Mission Control of Steve Fosset's first attempted solo hot air balloon circumnavigation. She also worked as student lead of the Crow Observatory. She then received her and doctoral degrees in Applied Physics from Caltech in 2009, advised by Harry Atwater. At Caltech, she was named an Everhart Lecturer, and awarded the Francis and Milton Clauser Prize for Best Ph.D. Thesis, recognizing her work developing the first negative refractive index material at visible wavelengths and nanoscale Si-based photonic modulators. Before starting her faculty position at Stanford, she spent a year as a postdoctoral fellow in Chemistry at Berkeley and Lawrence Berkeley National Lab, advised by Paul Alivisatos. Career Dionne began as an assistant professor at Stanford in March, 2010. In 2016, she was promoted to associate professor, and became an affiliate faculty of the Wu Tsai Neuroscience Institute, Bio-X, and the Precourt Institute for Energy. In 2019, she joined the department of radiology as a courtesy faculty. In 2019–2021, she was director of the TomKat Center for Sustainable Energy, and initiated their graduate student fellowship. In 2020, she became a senior fellow of the Precourt Institute and was appointed senior associate vice provost for shared facilities/research platforms. In her vice provost role, she helped Stanford to modernize shared research facilities across the schools of engineering, medicine, humanities and sciences, earth sciences, and SLAC. She initiated the Community for Shared Research Platforms (c-ShaRP), which has enabled improved education, instrumentation, organization, staffing, and translational efforts in the shared facilities. In her research, Dionne is a pioneer in manipulating light at the atomic and molecular scale. Under Dionne's leadership, her lab helped to establish the field of quantum plasmonics. 
She also made critical contributions to the field of plasmon photocatalysis, including developing combined optical and environmental electron microscopy to image chemical transformations with near-atomic-scale resolution. Her work in plasmon catalysis could enable sustainable materials manufacturing, overturning the traditional trade-offs in thermal catalysis between selectivity and activity. Her group is also credited with developing the first high-quality-factor phase-gradient metasurfaces for resonant beam-shaping and beam-steering. Dionne uses this platform to detect pathogens, and view the intricacies of molecular-to-cellular structure, binding, and dynamics. Awards In 2011, MIT Technology Review Top Innovator under 35 In 2012, Washington University in St. Louis Outstanding Young Alum Award In 2013, Oprah's 50 Things That Will Make You Say "Wow!" In 2014, the Presidential Early Career Award for Scientists and Engineers given by President Barack Obama In 2015, the Sloan Research Fellowship In 2015, the Dreyfus Teacher-Scholar Award In 2016, the Adolph Lomb Medal from Optica/the Optical Society of America In 2017, the Moore Inventor's Fellowship In 2019, the NIH Director's New Innovator Award In 2019, the Alan T. Waterman Award for top US Scientist under 40, National Science Foundation In 2021, a Fellow of The Optical Society Patents Patents include: Metal Oxide Si field effect plasmonic modulator Quantum converting nanoparticles as electrical field sensors Method and structure for plasmonic optical trapping of nanoscale particles Slot waveguide for color display Direct detection of nucleic acids and proteins Multiplexed nanophotonic microarray biosensor A method for compact and low-cost vibrational spectroscopy platforms References Washington University in St. Louis physicists Washington University in St. Louis alumni California Institute of Technology alumni Stanford University faculty American women physicists Year of birth missing (living people) Living people 21st-century American women scientists Women in optics American optical engineers American optical physicists American women engineers American materials scientists 21st-century American engineers 21st-century American physicists Metamaterials scientists American nanotechnologists Recipients of the Presidential Early Career Award for Scientists and Engineers
Jennifer Dionne
[ "Materials_science" ]
1,156
[ "Metamaterials scientists", "Metamaterials" ]
62,687,342
https://en.wikipedia.org/wiki/Gurzadyan-Savvidy%20relaxation
In cosmology, Gurzadyan-Savvidy (GS) relaxation is a theory developed by Vahe Gurzadyan and George Savvidy to explain the relaxation over time of the dynamics of N-body gravitating systems such as star clusters and galaxies. Stellar systems observed in the Universe – globular clusters and elliptical galaxies – reveal their relaxed state in the high degree of regularity of some of their physical characteristics, such as surface luminosity, velocity dispersion, and geometric shape. The basic mechanism of relaxation of stellar systems has long been considered to be 2-body encounters (of stars), leading to the observed fine-grained equilibrium. The coarse-grained phase of evolution of gravitating systems is described by violent relaxation, developed by Donald Lynden-Bell. The 2-body mechanism of relaxation is also known in plasma physics. The difficulties in describing collective effects in N-body gravitating systems arise from the long-range character of the gravitational interaction, as distinct from plasma, where Debye screening takes place because of the two different signs of the charges. For elliptical galaxies, the 2-body relaxation mechanism predicts relaxation time scales exceeding the age of the Universe. The problem of relaxation and evolution of stellar systems and the role of collective effects are studied by various techniques. Among the efficient methods of studying N-body gravitating systems are numerical simulations; in particular, Sverre Aarseth's N-body codes are widely used. Stellar system time scales Using the geometric methods of the theory of dynamical systems, Gurzadyan and Savvidy showed the exponential instability (chaos) of spherical N-body systems interacting by Newtonian gravity and derived the collective (N-body) relaxation time in terms of the average stellar velocity, the mean stellar mass and the stellar density. Normalized to the parameters of stellar systems such as globular clusters it yields a characteristic collective relaxation time; for clusters of galaxies it yields 10–1000 Gyr. Comparing this (GS) relaxation time to the 2-body relaxation time, Gurzadyan and Savvidy obtain a ratio involving the radius of gravitational influence and the mean distance d between stars. With increasing density, d decreases and approaches the radius of gravitational influence, so that 2-body encounters become dominant in the relaxation mechanism. Both time scales are related to the dynamical time, reflecting the existence of three scales of time and length for stellar systems. That approach (based on the analysis of the so-called two-dimensional curvature of the configuration space of the system) led to the conclusion that, while spherical systems are exponentially unstable systems (Kolmogorov K-systems), spiral galaxies "spend a large amount of time in regions with positive two-dimensional curvature" and hence "elliptical and spiral galaxies should have a different origin". Within the same geometric approach, Gurzadyan and Armen Kocharyan introduced the Ricci curvature criterion for relative instability (chaos) of dynamical systems. Derivation of GS-time scale by stochastic differential equation approach The GS time scale has been rederived by Gurzadyan and Kocharyan using a stochastic differential equation approach. Observational indication and numerical simulations Observational support for the GS time scale has been reported for globular clusters, and numerical simulations supporting the GS time scale have also been reported. References Mathematical physics Stellar dynamics
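The explicit formulas for the GS and 2-body relaxation times did not survive this text extraction, so the sketch below only illustrates the hierarchy of time scales being discussed. It evaluates the crossing (dynamical) time and the classical two-body relaxation estimate, t_relax ≈ 0.1 N t_cross / ln N, which are standard textbook expressions rather than the Gurzadyan–Savvidy formula itself; the globular-cluster parameters are round, assumed values.

```python
import math

G      = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_SUN  = 1.989e30         # solar mass [kg]
PC     = 3.086e16         # parsec [m]
YEAR   = 3.156e7          # year [s]

# Round, assumed globular-cluster parameters (not taken from the article):
N      = 1.0e6            # number of stars
m_star = 0.5 * M_SUN      # mean stellar mass
R      = 5.0 * PC         # characteristic radius

M_tot   = N * m_star
v       = math.sqrt(G * M_tot / R)          # characteristic (virial) velocity
t_cross = R / v                             # dynamical / crossing time
t_2body = 0.1 * N / math.log(N) * t_cross   # textbook two-body relaxation estimate

print(f"velocity dispersion ~ {v / 1e3:.1f} km/s")
print(f"crossing time       ~ {t_cross / (1e6 * YEAR):.2f} Myr")
print(f"2-body relaxation   ~ {t_2body / (1e9 * YEAR):.2f} Gyr")
```

The point of the comparison in the article is that the collective GS relaxation time lies between these two scales, so collective effects can relax a system well before individual 2-body encounters would.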
Gurzadyan-Savvidy relaxation
[ "Physics", "Mathematics" ]
712
[ "Applied mathematics", "Theoretical physics", "Astrophysics", "Mathematical physics", "Stellar dynamics" ]
62,690,611
https://en.wikipedia.org/wiki/Mario%20Barbatti
Mario Barbatti (born December 28, 1971) is a Brazilian physicist, computational theoretical chemist, and writer. He specializes in the development and application of mixed quantum-classical dynamics for the study of molecular excited states. He is also the leading developer of the Newton-X software package for dynamics simulations. Mario Barbatti held an A*Midex Chair of Excellence at Aix-Marseille University between 2015 and 2019, where he has been a professor since 2015. Honors and awards 2021: Fellow of the European Academy of Sciences. 2021: Senior member of the Institut Universitaire de France. 2019: Opening lecture of the XX Brazilian Symposium of Theoretical Chemistry. 2019: ERC Advanced Grant. Barbatti was the first Brazilian scientist and the first computational chemist in France to receive this research grant. 2015: A*Midex Chair of Excellence. Scientific contributions, interests, and production By the end of 2019, Mario Barbatti had published over 150 scientific works, which had been cited about 7000 times (h-index 48). Since 2007, Barbatti has been the leading developer of the Newton-X platform, a software collection for dynamics and spectrum simulations, using surface hopping and the nuclear ensemble approach. Using dynamics and other quantum chemical methods, his research has focused on simulations of the ultrafast photochemistry and photophysics of organic molecules. Among his main contributions, Barbatti, in collaboration with Hans Lischka, delivered a comprehensive map of the internal conversion channels of nucleobases. These results help to explain how DNA is stabilized after UV excitation. Although Barbatti's research has been strongly oriented towards photoinduced processes in nucleic acids, he and his co-workers have contributed to many different sub-fields. In 2013, in collaboration with Walter Thiel, they showed how UV irradiation can generate nucleobases out of inorganic components. Although this chemical reaction has been known since the 1960s, their work was the first to unveil the exact reaction mechanism. Barbatti also discovered a new internal conversion mechanism, allowing molecules to return quickly to the ground state. In this mechanism, a conical intersection between the ground and the excited electronic states is formed by an electron transfer from the solvent to the excited chromophore. This solvent-chromophore electron-transfer mechanism has been predicted to occur in 7H-adenine in water. Barbatti and his colleagues at the Federal University of Paraiba have shown that CH...Cl hydrogen bonds can be formed in small molecules in the gas phase. This type of bond had previously been observed only in densely packed crystal structures. He has also contributed to topics in organic photodevices, astrochemistry, and atmospheric photochemistry. Currently, Barbatti and his team—the Light and Molecules group—are focusing on method development, attempting to extend the excited-state simulation methods into the nanosecond regime. In a collaboration with Pavlo Dral and Walter Thiel, they implemented one of the first algorithms for nonadiabatic dynamics using machine learning. Popularization and presence in the media Some of the main results from Barbatti's work have been picked up by various news outlets. These media have dedicated special attention to his research on internal conversion of nucleobases, prebiotic reactions, and new chemical reactions and mechanisms. 
His work is also popularized through blog posts on his group website and YouTube channel. References External links The Light and Molecules research group 1971 births Living people Academic staff of Aix-Marseille University Brazilian physicists Brazilian scientists Computational chemists Federal University of Rio de Janeiro alumni People from Petrópolis Theoretical chemists
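For readers unfamiliar with the nuclear ensemble approach mentioned above, the sketch below illustrates its basic idea in schematic form: sample many nuclear geometries, compute a vertical excitation energy and oscillator strength for each, and sum Gaussian-broadened contributions to obtain an absorption band. The sample data are made up, the prefactors needed for an absolute cross section are omitted, and this is not Newton-X code or its actual working equations.

```python
import numpy as np

def nuclear_ensemble_spectrum(energies_ev, osc_strengths, grid_ev, width_ev=0.1):
    """Schematic nuclear-ensemble absorption band: each sampled transition
    (vertical excitation energy, oscillator strength) contributes a Gaussian
    of fixed width; the ensemble average gives the band shape (arbitrary units)."""
    spectrum = np.zeros_like(grid_ev)
    for e, f in zip(energies_ev, osc_strengths):
        spectrum += f * np.exp(-0.5 * ((grid_ev - e) / width_ev) ** 2)
    return spectrum / len(energies_ev)

# Made-up ensemble: 500 geometries whose S0->S1 vertical excitation scatters around 4.5 eV
rng = np.random.default_rng(0)
e_samples = rng.normal(4.5, 0.15, 500)
f_samples = rng.normal(0.30, 0.05, 500).clip(min=0.0)

grid = np.linspace(3.5, 5.5, 400)
band = nuclear_ensemble_spectrum(e_samples, f_samples, grid)
print(f"band maximum near {grid[np.argmax(band)]:.2f} eV")
```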
Mario Barbatti
[ "Chemistry" ]
753
[ "Quantum chemistry", "Physical chemists", "Computational chemists", "Theoretical chemistry", "Computational chemistry", "Theoretical chemists" ]
62,690,615
https://en.wikipedia.org/wiki/List%20of%20MOSFET%20applications
The MOSFET (metal–oxide–semiconductor field-effect transistor) is a type of insulated-gate field-effect transistor (IGFET) that is fabricated by the controlled oxidation of a semiconductor, typically silicon. The voltage of the covered gate determines the electrical conductivity of the device; this ability to change conductivity with the amount of applied voltage can be used for amplifying or switching electronic signals. The MOSFET is the basic building block of most modern electronics, and the most frequently manufactured device in history, with an estimated total of 13 sextillion (1.3 × 10²²) MOSFETs manufactured between 1960 and 2018. It is the most common semiconductor device in digital and analog circuits, and the most common power device. It was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. MOSFET scaling and miniaturization have been driving the rapid exponential growth of electronic semiconductor technology since the 1960s, enabling high-density integrated circuits (ICs) such as memory chips and microprocessors. MOSFETs in integrated circuits are the primary elements of computer processors, semiconductor memory, image sensors, and most other types of integrated circuits. Discrete MOSFET devices are widely used in applications such as switch mode power supplies, variable-frequency drives, and other power electronics applications where each device may be switching thousands of watts. Radio-frequency amplifiers up to the UHF spectrum use MOSFET transistors as analog signal and power amplifiers. Radio systems also use MOSFETs as oscillators, or mixers to convert frequencies. MOSFET devices are also applied in audio-frequency power amplifiers for public address systems, sound reinforcement, and home and automobile sound systems. Integrated circuits The MOSFET is the most widely used type of transistor and the most critical device component in integrated circuit (IC) chips. The planar process, developed by Jean Hoerni at Fairchild Semiconductor in early 1959, was critical to the invention of the monolithic integrated circuit chip by Robert Noyce later in 1959. The MOSFET was invented at Bell Labs between 1955 and 1960. This was followed by the development of clean rooms to reduce contamination to levels never before thought necessary, and coincided with the development of photolithography which, along with surface passivation and the planar process, allowed circuits to be made in a few steps. Mohamed Atalla realised that the main advantage of a MOS transistor was its ease of fabrication, particularly suiting it for use in the recently invented integrated circuits. In contrast to bipolar transistors which required a number of steps for the p–n junction isolation of transistors on a chip, MOSFETs required no such steps but could be easily isolated from each other. Its advantage for integrated circuits was re-iterated by Dawon Kahng in 1961. The Si–SiO2 system possessed the technical attractions of low cost of production (on a per circuit basis) and ease of integration. These two factors, along with its rapidly scaling miniaturization and low energy consumption, led to the MOSFET becoming the most widely used type of transistor in IC chips. The earliest experimental MOS IC to be demonstrated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS integrated circuits in 1964, consisting of 120 p-channel transistors. 
It was a 20-bit shift register, developed by Robert Norman and Frank Wanlass. In 1967, Bell Labs researchers Robert Kerwin, Donald Klein and John Sarace developed the self-aligned gate (silicon-gate) MOS transistor, which Fairchild Semiconductor researchers Federico Faggin and Tom Klein used to develop the first silicon-gate MOS IC. Chips There are various different types of MOS IC chips, which include the following. Large-scale integration With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density IC chips. By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of MOSFETs on a chip by the late 1960s. MOS technology enabled the integration of more than 10,000 transistors on a single LSI chip by the early 1970s, before later enabling very large-scale integration (VLSI). Microprocessors The MOSFET is the basis of every microprocessor, and was responsible for the invention of the microprocessor. The origins of both the microprocessor and the microcontroller can be traced back to the invention and development of MOS technology. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on a single MOS LSI chip. The earliest microprocessors were all MOS chips, built with MOS LSI circuits. The first multi-chip microprocessors, the Four-Phase Systems AL1 in 1969 and the Garrett AiResearch MP944 in 1970, were developed with multiple MOS LSI chips. The first commercial single-chip microprocessor, the Intel 4004, was developed by Federico Faggin, using his silicon-gate MOS IC technology, with Intel engineers Marcian Hoff and Stan Mazor, and Busicom engineer Masatoshi Shima. With the arrival of CMOS microprocessors in 1975, the term "MOS microprocessors" began to refer to chips fabricated entirely from PMOS logic or fabricated entirely from NMOS logic, contrasted with "CMOS microprocessors" and "bipolar bit-slice processors". CMOS circuits Complementary metal–oxide–semiconductor (CMOS) logic was developed by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor in 1963. CMOS had lower power consumption, but was initially slower than NMOS, which was more widely used for computers in the 1970s. In 1978, Hitachi introduced the twin-well CMOS process, which allowed CMOS to match the performance of NMOS with less power consumption. The twin-well CMOS process eventually overtook NMOS as the most common semiconductor manufacturing process for computers in the 1980s. By the 1980s, CMOS logic consumed substantially less power than NMOS logic, and about 100,000 times less power than bipolar transistor-transistor logic (TTL). Digital The growth of digital technologies like the microprocessor has provided the motivation to advance MOSFET technology faster than any other type of silicon-based transistor. A big advantage of MOSFETs for digital switching is that the oxide layer between the gate and the channel prevents DC current from flowing through the gate, further reducing power consumption and giving a very large input impedance. 
The insulating oxide between the gate and channel effectively isolates a MOSFET in one logic stage from earlier and later stages, which allows a single MOSFET output to drive a considerable number of MOSFET inputs. Bipolar transistor-based logic (such as TTL) does not have such a high fanout capacity. This isolation also makes it easier for designers to ignore, to some extent, loading effects between logic stages and to design each stage independently. That extent is defined by the operating frequency: as frequencies increase, the input impedance of the MOSFETs decreases. Analog The MOSFET's advantages in digital circuits do not translate into supremacy in all analog circuits. The two types of circuit draw upon different features of transistor behavior. Digital circuits switch, spending most of their time either fully on or fully off. The transition from one to the other is only of concern with regard to speed and charge required. Analog circuits depend on operation in the transition region where small changes to the gate–source voltage can modulate the output (drain) current. The JFET and bipolar junction transistor (BJT) are preferred for accurate matching (of adjacent devices in integrated circuits), higher transconductance and certain temperature characteristics which simplify keeping performance predictable as circuit temperature varies. Nevertheless, MOSFETs are widely used in many types of analog circuits because of their own advantages (zero gate current, high and adjustable output impedance and improved robustness vs. BJTs which can be permanently degraded by even lightly breaking down the emitter-base). The characteristics and performance of many analog circuits can be scaled up or down by changing the sizes (length and width) of the MOSFETs used. By comparison, in bipolar transistors the size of the device does not significantly affect its performance. MOSFETs' ideal characteristics regarding gate current (zero) and drain-source offset voltage (zero) also make them nearly ideal switch elements, and also make switched capacitor analog circuits practical. In their linear region, MOSFETs can be used as precision resistors, which can have a much higher controlled resistance than BJTs. In high power circuits, MOSFETs sometimes have the advantage of not suffering from thermal runaway as BJTs do. Also, MOSFETs can be configured to perform as capacitors and gyrator circuits which allow op-amps made from them to appear as inductors, thereby allowing all of the normal analog devices on a chip (except for diodes, which can be made smaller than a MOSFET anyway) to be built entirely out of MOSFETs. This means that complete analog circuits can be made on a silicon chip in a much smaller space and with simpler fabrication techniques. MOSFETs are ideally suited to switch inductive loads because of tolerance to inductive kickback. Some ICs combine analog and digital MOSFET circuitry on a single mixed-signal integrated circuit, making the needed board space even smaller. This creates a need to isolate the analog circuits from the digital circuits on a chip level, leading to the use of isolation rings and silicon on insulator (SOI). Since MOSFETs require more space to handle a given amount of power than a BJT, fabrication processes can incorporate BJTs and MOSFETs into a single device. Mixed-transistor devices are called bi-FETs (bipolar FETs) if they contain just one BJT-FET and BiCMOS (bipolar-CMOS) if they contain complementary BJT-FETs. Such devices have the advantages of both insulated gates and higher current density. 
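The triode ("linear") and saturation regimes referred to above can be illustrated with the textbook long-channel square-law model. The sketch below is a generic first-order description only — it ignores channel-length modulation and subthreshold conduction — and the parameter values are arbitrary examples rather than data for any particular device.

```python
def nmos_drain_current(vgs, vds, vth=0.7, k_n=2e-3):
    """Textbook long-channel (square-law) model of an enhancement-mode N-MOSFET.
    k_n = mu_n * C_ox * W / L  [A/V^2].  Channel-length modulation and
    subthreshold conduction are deliberately ignored."""
    vov = vgs - vth                      # overdrive voltage
    if vov <= 0:
        return 0.0                       # cut-off (subthreshold current neglected)
    if vds < vov:                        # triode / "linear" region: behaves like a resistor
        return k_n * (vov * vds - 0.5 * vds ** 2)
    return 0.5 * k_n * vov ** 2          # saturation: current roughly independent of Vds

# In the deep triode region the device looks like a gate-controlled resistor:
vgs = 2.0
small_vds = 10e-3
r_on = small_vds / nmos_drain_current(vgs, small_vds)
print(f"approx. on-resistance at Vgs = {vgs} V: {r_on:.0f} ohm")
```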
RF CMOS In the late 1980s, Asad Abidi pioneered RF CMOS technology, which uses MOS VLSI circuits, while working at UCLA. This changed the way in which RF circuits were designed, away from discrete bipolar transistors and towards CMOS integrated circuits. As of 2008, the radio transceivers in all wireless networking devices and modern mobile phones are mass-produced as RF CMOS devices. RF CMOS is also used in nearly all modern Bluetooth and wireless LAN (WLAN) devices. Analog switches MOSFET analog switches use the MOSFET to pass analog signals when on, and as a high impedance when off. Signals flow in both directions across a MOSFET switch. In this application, the drain and source of a MOSFET exchange places depending on the relative voltages of the source/drain electrodes. The source is the more negative side for an N-MOS or the more positive side for a P-MOS. All of these switches are limited in what signals they can pass or stop by their gate–source, gate–drain, and source–drain voltages; exceeding the voltage, current, or power limits will potentially damage the switch. Single-type This analog switch uses a four-terminal simple MOSFET of either P or N type. In the case of an n-type switch, the body is connected to the most negative supply (usually GND) and the gate is used as the switch control. Whenever the gate voltage exceeds the source voltage by at least a threshold voltage, the MOSFET conducts. The higher the voltage, the more the MOSFET can conduct. An N-MOS switch passes all voltages less than Vgate − Vtn. When the switch is conducting, it typically operates in the linear (or ohmic) mode of operation, since the source and drain voltages will typically be nearly equal. In the case of a P-MOS, the body is connected to the most positive voltage, and the gate is brought to a lower potential to turn the switch on. The P-MOS switch passes all voltages higher than Vgate − Vtp (the threshold voltage Vtp is negative in the case of enhancement-mode P-MOS). Dual-type (CMOS) This "complementary" or CMOS type of switch uses one P-MOS and one N-MOS FET to counteract the limitations of the single-type switch. The FETs have their drains and sources connected in parallel, the body of the P-MOS is connected to the high potential (VDD) and the body of the N-MOS is connected to the low potential (gnd). To turn the switch on, the gate of the P-MOS is driven to the low potential and the gate of the N-MOS is driven to the high potential. For voltages between VDD − Vtn and gnd − Vtp, both FETs conduct the signal; for voltages less than gnd − Vtp, the N-MOS conducts alone; and for voltages greater than VDD − Vtn, the P-MOS conducts alone. The voltage limits for this switch are the gate–source, gate–drain and source–drain voltage limits for both FETs. Also, the P-MOS is typically two to three times wider than the N-MOS, so the switch will be balanced for speed in the two directions. Tri-state circuitry sometimes incorporates a CMOS MOSFET switch on its output to provide for a low-ohmic, full-range output when on, and a high-ohmic, mid-level signal when off. MOS memory The advent of the MOSFET enabled the practical use of MOS transistors as memory cell storage elements, a function previously served by magnetic cores in computer memory. The first modern computer memory was introduced in 1965, when John Schmidt at Fairchild Semiconductor designed the first MOS semiconductor memory, a 64-bit MOS SRAM (static random-access memory). 
SRAM became an alternative to magnetic-core memory, but required six MOS transistors for each bit of data. MOS technology is the basis for DRAM (dynamic random-access memory). In 1966, Dr. Robert H. Dennard at the IBM Thomas J. Watson Research Center was working on MOS memory. While examining the characteristics of MOS technology, he found it was capable of building capacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, while the MOS transistor could control writing the charge to the capacitor. This led to his development of a single-transistor DRAM memory cell. In 1967, Dennard filed a patent under IBM for a single-transistor DRAM (dynamic random-access memory) memory cell, based on MOS technology. MOS memory enabled higher performance, was cheaper, and consumed less power, than magnetic-core memory, leading to MOS memory overtaking magnetic core memory as the dominant computer memory technology by the early 1970s. Frank Wanlass, while studying MOSFET structures in 1963, noted the movement of charge through oxide onto a gate. While he did not pursue it, this idea would later become the basis for EPROM (erasable programmable read-only memory) technology. In 1967, Dawon Kahng and Simon Sze proposed that floating-gate memory cells, consisting of floating-gate MOSFETs (FGMOS), could be used to produce reprogrammable ROM (read-only memory). Floating-gate memory cells later became the basis for non-volatile memory (NVM) technologies including EPROM, EEPROM (electrically erasable programmable ROM) and flash memory. Types of MOS memory There are various different types of MOS memory. The following list includes various different MOS memory types. MOS sensors A number of MOSFET sensors have been developed, for measuring physical, chemical, biological and environmental parameters. The earliest MOSFET sensors include the open-gate FET (OGFET) introduced by Johannessen in 1970, the ion-sensitive field-effect transistor (ISFET) invented by Piet Bergveld in 1970, the adsorption FET (ADFET) patented by P.F. Cox in 1974, and a hydrogen-sensitive MOSFET demonstrated by I. Lundstrom, M.S. Shivaraman, C.S. Svenson and L. Lundkvist in 1975. The ISFET is a special type of MOSFET with a gate at a certain distance, and where the metal gate is replaced by an ion-sensitive membrane, electrolyte solution and reference electrode. By the mid-1980s, numerous other MOSFET sensors had been developed, including the gas sensor FET (GASFET), surface accessible FET (SAFET), charge flow transistor (CFT), pressure sensor FET (PRESSFET), chemical field-effect transistor (ChemFET), reference ISFET (REFET), biosensor FET (BioFET), enzyme-modified FET (ENFET) and immunologically modified FET (IMFET). By the early 2000s, BioFET types such as the DNA field-effect transistor (DNAFET), gene-modified FET (GenFET) and cell-potential BioFET (CPFET) had been developed. The two main types of image sensors used in digital imaging technology are the charge-coupled device (CCD) and the active-pixel sensor (CMOS sensor). Both CCD and CMOS sensors are based on MOS technology, with the CCD based on MOS capacitors and the CMOS sensor based on MOS transistors. Image sensors MOS technology is the basis for modern image sensors, including the charge-coupled device (CCD) and the CMOS active-pixel sensor (CMOS sensor), used in digital imaging and digital cameras. Willard Boyle and George E. Smith developed the CCD in 1969. 
While researching the MOS process, they realized that an electric charge was the analogy of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. The CCD is a semiconductor circuit that was later used in the first digital video cameras for television broadcasting. The MOS active-pixel sensor (APS) was developed by Tsutomu Nakamura at Olympus in 1985. The CMOS active-pixel sensor was later developed by Eric Fossum and his team at NASA's Jet Propulsion Laboratory in the early 1990s. MOS image sensors are widely used in optical mouse technology. The first optical mouse, invented by Richard F. Lyon at Xerox in 1980, used a 5μm NMOS sensor chip. Since the first commercial optical mouse, the IntelliMouse introduced in 1999, most optical mouse devices use CMOS sensors. Other sensors MOS sensors, also known as MOSFET sensors, are widely used to measure physical, chemical, biological and environmental parameters. The ion-sensitive field-effect transistor (ISFET), for example, is widely used in biomedical applications. MOSFETs are also widely used in microelectromechanical systems (MEMS), as silicon MOSFETs could interact and communicate with the surroundings and process things such as chemicals, motions and light. An early example of a MEMS device is the resonant-gate transistor, an adaptation of the MOSFET, developed by Harvey C. Nathanson in 1965. Common applications of other MOS sensors include the following. Power MOSFET The power MOSFET, which is commonly used in power electronics, was developed in the early 1970s. The power MOSFET enables low gate drive power, fast switching speed, and advanced paralleling capability. The power MOSFET is the most widely used power device in the world. Advantages over bipolar junction transistors in power electronics include MOSFETs not requiring a continuous flow of drive current to remain in the ON state, offering higher switching speeds, lower switching power losses, lower on-resistances, and reduced susceptibility to thermal runaway. The power MOSFET had an impact on power supplies, enabling higher operating frequencies, size and weight reduction, and increased volume production. Switching power supplies are the most common applications for power MOSFETs. They are also widely used for MOS RF power amplifiers, which enabled the transition of mobile networks from analog to digital in the 1990s. This led to the wide proliferation of wireless mobile networks, which revolutionised telecommunications systems. The LDMOS in particular is the most widely used power amplifier in mobile networks such as 2G, 3G, 4G and 5G, as well as broadcasting and amateur radio. Over 50billion discrete power MOSFETs are shipped annually, as of 2018. They are widely used for automotive, industrial and communications systems in particular. Power MOSFETs are commonly used in automotive electronics, particularly as switching devices in electronic control units, and as power converters in modern electric vehicles. The insulated-gate bipolar transistor (IGBT), a hybrid MOS-bipolar transistor, is also used for a wide variety of applications. LDMOS, a power MOSFET with lateral structure, is commonly used in high-end audio amplifiers and high-power PA systems. 
Their advantage is a better behaviour in the saturated region (corresponding to the linear region of a bipolar transistor) than the vertical MOSFETs. Vertical MOSFETs are designed for switching applications. DMOS and VMOS Power MOSFETs, including DMOS, LDMOS and VMOS devices, are commonly used for a wide range of other applications, which include the following. RF DMOS RF DMOS, also known as RF power MOSFET, is a type of DMOS power transistor designed for radio-frequency (RF) applications. It is used in various radio and RF applications, which include the following. Consumer electronics MOSFETs are fundamental to the consumer electronics industry. According to Colinge, numerous consumer electronics would not exist without the MOSFET, such as digital wristwatches, pocket calculators, and video games, for example. MOSFETs are commonly used for a wide range of consumer electronics, which include the following devices listed. Computers or telecommunication devices (such as phones) are not included here, but are listed separately in the Information and communications technology (ICT) section below. Pocket calculators One of the earliest influential consumer electronic products enabled by MOS LSI circuits was the electronic pocket calculator, as MOS LSI technology enabled large amounts of computational capability in small packages. In 1965, the Victor 3900 desktop calculator was the first MOS LSI calculator, with 29 MOS LSI chips. In 1967 the Texas Instruments Cal-Tech was the first prototype electronic handheld calculator, with three MOS LSI chips, and it was later released as the Canon Pocketronic in 1970. The Sharp QT-8D desktop calculator was the first mass-produced LSI MOS calculator in 1969, and the Sharp EL-8 which used four MOS LSI chips was the first commercial electronic handheld calculator in 1970. The first true electronic pocket calculator was the Busicom LE-120A HANDY LE, which used a single MOS LSI calculator-on-a-chip from Mostek, and was released in 1971. By 1972, MOS LSI circuits were commercialized for numerous other applications. Audio-visual (AV) media MOSFETs are commonly used for a wide range of audio-visual (AV) media technologies, which include the following list of applications. Power MOSFET applications Power MOSFETs are commonly used for a wide range of consumer electronics. Power MOSFETs are widely used in the following consumer applications. Information and communications technology (ICT) MOSFETs are fundamental to information and communications technology (ICT), including modern computers, modern computing, telecommunications, the communications infrastructure, the Internet, digital telephony, wireless telecommunications, and mobile networks. According to Colinge, the modern computer industry and digital telecommunication systems would not exist without the MOSFET. Advances in MOS technology has been the most important contributing factor in the rapid rise of network bandwidth in telecommunication networks, with bandwidth doubling every 18 months, from bits per second to terabits per second (Edholm's law). Computers MOSFETs are commonly used in a wide range of computers and computing applications, which include the following. Telecommunications MOSFETs are commonly used in a wide range of telecommunications, which include the following applications. Power MOSFET applications Insulated-gate bipolar transistor (IGBT) The insulated-gate bipolar transistor (IGBT) is a power transistor with characteristics of both a MOSFET and bipolar junction transistor (BJT). 
The IGBT is the second most widely used power transistor, after the power MOSFET. The IGBT accounts for 27% of the power transistor market, second only to the power MOSFET (53%), and ahead of the RF amplifier (11%) and bipolar junction transistor (9%). The IGBT is widely used in consumer electronics, industrial technology, the energy sector, aerospace electronic devices, and transportation. The IGBT is widely used in the following applications. Quantum physics 2D electron gas and quantum Hall effect In quantum physics and quantum mechanics, the MOSFET is the basis for two-dimensional electron gas (2DEG) and the quantum Hall effect. The MOSFET enables physicists to study electron behavior in a two-dimensional gas, called a two-dimensional electron gas. In a MOSFET, conduction electrons travel in a thin surface layer, and a "gate" voltage controls the number of charge carriers in this layer. This allows researchers to explore quantum effects by operating high-purity MOSFETs at liquid helium temperatures. In 1978, the Gakushuin University researchers Jun-ichi Wakabayashi and Shinji Kawaji observed the Hall effect in experiments carried out on the inversion layer of MOSFETs. In 1980, Klaus von Klitzing, working at the high magnetic field laboratory in Grenoble with silicon-based MOSFET samples developed by Michael Pepper and Gerhard Dorda, made the unexpected discovery of the quantum Hall effect. Quantum technology The MOSFET is used in quantum technology. A quantum field-effect transistor (QFET) or quantum well field-effect transistor (QWFET) is a type of MOSFET that takes advantage of quantum tunneling to greatly increase the speed of transistor operation. Transportation MOSFETs are widely used in transportation. For example, they are commonly used for automotive electronics in the automotive industry. MOS technology is commonly used for a wide range of vehicles and transportation, which include the following applications. Automotive industry MOSFETs are widely used in the automotive industry, particularly for automotive electronics in motor vehicles. Automotive applications include the following. Power MOSFET applications Power MOSFETs are widely used in transportation technology, which includes the following vehicles. In the automotive industry, power MOSFETs are widely used in automotive electronics, which include the following. IGBT applications The insulated-gate bipolar transistor (IGBT) is a power transistor with characteristics of both a MOSFET and bipolar junction transistor (BJT). IGBTs are widely used in the following transportation applications. Space industry In the space industry, MOSFET devices were adopted by NASA for space research in 1964, for its Interplanetary Monitoring Platform (IMP) program and Explorers space exploration program. The use of MOSFETs was a major step forward in the electronics design of spacecraft and satellites. The IMP D (Explorer 33), launched in 1966, was the first spacecraft to use the MOSFET. Data gathered by IMP spacecraft and satellites were used to support the Apollo program, enabling the first crewed Moon landing with the Apollo 11 mission in 1969. The Cassini–Huygens mission to Saturn, launched in 1997, had spacecraft power distribution accomplished by 192 solid-state power switch (SSPS) devices, which also functioned as circuit breakers in the event of an overload condition. The switches were developed from a combination of two semiconductor devices with switching capabilities: the MOSFET and the ASIC (application-specific integrated circuit). 
This combination resulted in advanced power switches that had better performance characteristics than traditional mechanical switches. Other applications MOSFETs are commonly used for a wide range of other applications, which include the following. References Applications 1960 introductions 20th-century inventions Arab inventions Biosensors Digital electronics Electronic design Integrated circuits Semiconductor devices Sensors Silicon Solid state switches South Korean inventions Transistor amplifiers Transistor types Transistors
List of MOSFET applications
[ "Technology", "Engineering", "Biology" ]
6,197
[ "Computer engineering", "Digital electronics", "Electronic design", "Measuring instruments", "Biosensors", "Electronic engineering", "Sensors", "Design", "Integrated circuits" ]
61,701,849
https://en.wikipedia.org/wiki/BWRX-300
The BWRX-300 is a design for a small modular nuclear reactor proposed by GE Hitachi Nuclear Energy (GEH). The BWRX-300 would feature passive safety, in that neither external power nor operator action would be required to maintain a safe state, even in extreme circumstances. Technology The BWRX-300 is a smaller evolution of an earlier GE Hitachi reactor design, the Economic Simplified Boiling Water Reactor (ESBWR), and utilizes components of the operational Advanced Boiling Water Reactor (ABWR). Boiling water reactors are a nuclear technology that uses ordinary light water as the reactor coolant. Like most boiling water reactors, the BWRX-300 will use low pressure water to remove heat from the core. A distinct feature of this reactor design is that water is circulated within the core by natural circulation. This is in contrast to most nuclear reactors which require electrical pumps to provide active cooling of the fuel. This system has advantages in terms of both simplicity and economics. Decay heat removal Immediately after a nuclear reactor shuts down, almost 7% of its previous operating power continues to be generated, from the decay of short half-life fission products. In conventional reactors, removing this decay heat passively is challenging because of their low temperatures. The BWRX-300 reactor would be cooled by the natural circulation of water, making it distinct from most nuclear plants which require active cooling with electrical pumps. New build proposals In 2019, GEH expected construction to start in 2024/2025 in the US or Canada, entering commercial operation in 2027/2028, and for the first unit to cost less than $1 billion to build. Canada On December 1, 2021 Ontario Power Generation (OPG) selected the BWRX-300 SMR for use at the Darlington Nuclear Generating Station. In October 2022, OPG applied for a construction license for the reactor, with plans to start operations in 2028. On July 7, 2023 Ontario Power Generation chose three additional BWRX-300 SMRs for construction at the Darlington New Nuclear Project in Ontario, Canada, joining the first already planned. On June 27, 2022 Saskatchewan Power Corporation selected the BWRX-300 SMR for potential deployment in Saskatchewan in the mid-2030s. Poland On December 16, 2021 Synthos Green Energy (SGE), GE Hitachi Nuclear Energy and BWXT Canada announced their intention to deploy at least 10 BWRX-300 reactors in Poland in the early 2030s. On July 8, 2022 Orlen Synthos Green - a joint venture between SGE and PKN Orlen - applied to the National Atomic Energy Agency for a general opinion on the BWRX-300 SMR technology. In August of the same year, a delivery date of 2029 was announced. Construction of the reactor will begin in 2024, in Darlington, Ontario. In December 2023 the initial government permit was issued to Synthos Green. USA On August 3, 2022 TVA announced that it had entered into an agreement with GEH to support its planning and preliminary licensing for the potential deployment of a BWRX-300 small modular reactor at the Clinch River site near Oak Ridge in Tennessee. In January 2025, a TVA-led coalition applied for federal funding to accelerate construction of the first SMR with commercial operation planned for 2033. Sweden On March 14, 2022 Kärnfull Future AB signed a Memorandum of Understanding with GEH to deploy the BWRX-300 in Sweden. 
Estonia On February 8, 2023 Fermi Energia AS chose the BWRX-300 SMR for potential deployment in Lääne-Viru County of Estonia in the early 2030s. Notes GEH describes the BWRX as the tenth version of their boiling water reactors, following BWR 1-6, ABWR, SBWR, and ESBWR. References External links The BWRX-300 Small Modular Reactor Nuclear power reactor types Nuclear power
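As rough context for the decay-heat figure quoted in the "Decay heat removal" section above, the sketch below evaluates the Way–Wigner empirical approximation for fission-product decay power as a fraction of the prior operating power. This is a generic textbook estimate with an assumed one-year operating period; it is not a GEH or BWRX-300-specific calculation.

```python
def decay_heat_fraction(t_s, t_operation_s, coeff=0.066):
    """Way-Wigner empirical estimate of fission-product decay power as a
    fraction of the pre-shutdown operating power.  t_s: time since shutdown [s],
    t_operation_s: prior time at power [s].  Order-of-magnitude accuracy only."""
    return coeff * (t_s ** -0.2 - (t_s + t_operation_s) ** -0.2)

one_year = 3.156e7  # seconds of prior full-power operation (arbitrary example)
for t in (1.0, 60.0, 3600.0, 86400.0):   # 1 s, 1 min, 1 h, 1 day after shutdown
    frac = decay_heat_fraction(t, one_year)
    print(f"t = {t:>8.0f} s : ~{100 * frac:.2f} % of operating power")
```

Immediately after shutdown this yields a few percent of the operating power, in line with the figure quoted in the article, and it illustrates why decay heat must continue to be removed (passively, in the BWRX-300's case) long after the chain reaction has stopped.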
BWRX-300
[ "Physics" ]
812
[ "Power (physics)", "Physical quantities", "Nuclear power" ]
38,464,658
https://en.wikipedia.org/wiki/Relativistic%20disk
In general relativity, the term relativistic disk refers to a class of axi-symmetric self-consistent solutions to Einstein's field equations corresponding to the gravitational field generated by axi-symmetric isolated sources. To find such solutions, one has to correctly pose and solve together the ‘outer’ problem, a boundary value problem for the vacuum Einstein field equations whose solution determines the external field, and the ‘inner’ problem, whose solution determines the structure and the dynamics of the matter source in its own gravitational field. Physically reasonable solutions must satisfy some additional conditions, such as finiteness and positivity of the mass, a physically reasonable type of matter, and finite geometrical size. Exact solutions describing relativistic static thin disks as their sources were first studied by Bonnor and Sackfield, and by Morgan and Morgan. Subsequently, several classes of exact solutions corresponding to static and stationary thin disks have been obtained by different authors. References General relativity Exact solutions in general relativity
Relativistic disk
[ "Physics", "Mathematics" ]
193
[ "Exact solutions in general relativity", "Mathematical objects", "Equations", "General relativity", "Relativity stubs", "Theory of relativity" ]
38,469,083
https://en.wikipedia.org/wiki/Matrix%20representation%20of%20Maxwell%27s%20equations
In electromagnetism, a branch of fundamental physics, the matrix representations of Maxwell's equations are a formulation of Maxwell's equations using matrices, complex numbers, and vector calculus. These representations are exact for a homogeneous medium and an approximation in an inhomogeneous medium. A matrix representation for an inhomogeneous medium was presented using a pair of matrix equations. A single equation using 4 × 4 matrices is necessary and sufficient for any homogeneous medium; an inhomogeneous medium necessarily requires 8 × 8 matrices. Introduction The starting point is Maxwell's equations in the standard vector calculus formalism, in an inhomogeneous medium with sources. The medium is assumed to be linear, that is D = εE and B = μH, where the scalar ε is the permittivity of the medium and the scalar μ the permeability of the medium (see constitutive equation). For a homogeneous medium ε and μ are constants. The speed of light in the medium is given by v = 1/√(εμ). In vacuum, ε₀ ≈ 8.85 × 10⁻¹² C²·N⁻¹·m⁻² and μ₀ = 4π × 10⁻⁷ H·m⁻¹. One possible way to obtain the required matrix representation is to use the Riemann–Silberstein vectors, complex combinations of the electric and magnetic fields. If for a certain medium ε and μ are scalar constants (or can be treated as local scalar constants under certain approximations), then these vectors satisfy a pair of curl-type evolution equations together with divergence conditions. Thus, by using the Riemann–Silberstein vectors, it is possible to re-express Maxwell's equations for a medium with constant ε and μ as a pair of equations. Homogeneous medium In order to obtain a single matrix equation instead of a pair, new functions Ψ± are constructed from the components of the Riemann–Silberstein vectors, together with corresponding source vectors. The resulting single equation involves complex conjugation (denoted by *) and a triplet M whose component elements are abstract 4×4 matrices. The component M-matrices may be formed using either a matrix J or, alternately, a matrix Ω, which differ only by a sign. For our purpose it is fine to use either Ω or J; however, they have different meanings: J is contravariant and Ω is covariant. The matrix Ω corresponds to the Lagrange brackets of classical mechanics and J corresponds to the Poisson brackets. Each of the four Maxwell's equations is obtained from the matrix representation. This is done by taking the sums and differences of row I with row IV and of row II with row III, respectively. The first three give the y, x, and z components of the curl and the last one gives the divergence conditions. The matrices M are all non-singular and all are Hermitian. Moreover, they satisfy the usual (quaternion-like) algebra of the Dirac matrices. The pair (Ψ±, M) is not unique: different choices of Ψ± would give rise to different M, such that the triplet M continues to satisfy the algebra of the Dirac matrices. The choice of Ψ± via the Riemann–Silberstein vectors has certain advantages over the other possible choices, as the Riemann–Silberstein vector is well known in classical electrodynamics and has certain interesting properties and uses. In deriving the above 4×4 matrix representation of Maxwell's equations, the spatial and temporal derivatives of ε(r, t) and μ(r, t) in the first two of Maxwell's equations have been ignored; ε and μ have been treated as local constants. Inhomogeneous medium In an inhomogeneous medium, the spatial and temporal variations of ε = ε(r, t) and μ = μ(r, t) are not zero; that is, they are no longer local constants. 
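Before the inhomogeneous case is developed further, the homogeneous-medium statements above can be illustrated with a small symbolic check. In vacuum, one common (unnormalised) Riemann–Silberstein combination is F = E + icB, for which the two curl equations collapse into the single complex equation i ∂F/∂t = c ∇ × F and the source-free divergence conditions become ∇·F = 0. The SymPy sketch below verifies this for a plane wave; it uses this simple vacuum form rather than the article's medium-dependent normalisation, so it illustrates the idea only, not the paper's exact matrices.

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t', real=True)
c, E0, k = sp.symbols('c E_0 k', positive=True)
w = c * k                                   # vacuum dispersion relation, omega = c*k

# Linearly polarised plane wave travelling along +z (SI units):
E = sp.Matrix([E0 * sp.cos(k * z - w * t), 0, 0])
B = sp.Matrix([0, (E0 / c) * sp.cos(k * z - w * t), 0])   # |B| = |E|/c for a vacuum plane wave

F = E + sp.I * c * B        # simple (unnormalised) vacuum Riemann-Silberstein combination

def curl(V):
    return sp.Matrix([
        sp.diff(V[2], y) - sp.diff(V[1], z),
        sp.diff(V[0], z) - sp.diff(V[2], x),
        sp.diff(V[1], x) - sp.diff(V[0], y),
    ])

# i dF/dt - c curl(F) should vanish: the two curl equations in one complex equation
print(sp.simplify(sp.I * sp.diff(F, t) - c * curl(F)))

# Source-free divergence condition, div F = 0
print(sp.simplify(sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)))
```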
Instead of using ε = ε(r, t) and μ = μ(r, t), it is advantageous to use the two derived laboratory functions, namely the resistance function h(r, t) = √(μ(r, t)/ε(r, t)) and the velocity function v(r, t) = 1/√(ε(r, t)μ(r, t)). In terms of these functions, ε = 1/(vh) and μ = h/v. These functions occur in the matrix representation through their logarithmic derivatives; the refractive index of the medium is n(r, t) = c/v(r, t). Certain matrices naturally arise in the exact matrix representation of Maxwell's equations in a medium: Σ, the Dirac spin matrices; α, the matrices used in the Dirac equation; and σ, the triplet of the Pauli matrices. Finally, the full matrix representation is obtained; it contains thirteen 8 × 8 matrices. Ten of these are Hermitian. The exceptional ones are the ones that contain the three components of w(r, t), the logarithmic gradient of the resistance function. These three matrices, associated with the resistance function, are antihermitian. Maxwell's equations have thus been expressed in matrix form for a medium with varying permittivity ε = ε(r, t) and permeability μ = μ(r, t), in the presence of sources. This representation uses a single matrix equation, instead of a pair of matrix equations. In this representation, using 8 × 8 matrices, it has been possible to separate the dependence of the coupling between the upper components (Ψ+) and the lower components (Ψ−) through the two laboratory functions. Moreover, the exact matrix representation has an algebraic structure very similar to the Dirac equation. Maxwell's equations can be derived from Fermat's principle of geometrical optics by the process of "wavization", analogous to the quantization of classical mechanics. Applications One of the early uses of the matrix forms of Maxwell's equations was to study certain symmetries, and the similarities with the Dirac equation. The matrix form of Maxwell's equations is used as a candidate for the photon wavefunction. Historically, geometrical optics is based on Fermat's principle of least time. Geometrical optics can be completely derived from Maxwell's equations. This is traditionally done using the Helmholtz equation. The derivation of the Helmholtz equation from Maxwell's equations is an approximation, as one neglects the spatial and temporal derivatives of the permittivity and permeability of the medium. A new formalism of light beam optics has been developed, starting with Maxwell's equations in a matrix form: a single entity containing all four Maxwell's equations. Such a prescription is sure to provide a deeper understanding of beam optics and polarization in a unified manner. The beam-optical Hamiltonian derived from this matrix representation has an algebraic structure very similar to the Dirac equation, making it amenable to the Foldy-Wouthuysen technique. This approach is very similar to one developed for the quantum theory of charged-particle beam optics. References Notes Others Bialynicki-Birula, I. (1994). On the wave function of the photon. Acta Physica Polonica A, 86, 97–116. Bialynicki-Birula, I. (1996a). The Photon Wave Function. In Coherence and Quantum Optics VII. Eberly, J. H., Mandel, L. and Emil Wolf (ed.), Plenum Press, New York, 313. Bialynicki-Birula, I. (1996b). Photon wave function. in Progress in Optics, Vol. XXXVI, Emil Wolf. (ed.), Elsevier, Amsterdam, 245–294. Jackson, J. D. (1998). Classical Electrodynamics, Third Edition, John Wiley & Sons. Jagannathan, R. (1990). Quantum theory of electron lenses based on the Dirac equation. Physical Review A, 42, 6674–6689. Jagannathan, R. 
and Khan, S. A. (1996). Quantum theory of the optics of charged particles. In Hawkes Peter, W. (ed.), Advances in Imaging and Electron Physics, Vol. 97, Academic Press, San Diego, pp. 257–358. Jagannathan, R. , Simon, R., Sudarshan, E. C. G. and Mukunda, N. (1989). Quantum theory of magnetic electron lenses based on the Dirac equation. Physics Letters A 134, 457–464. Khan, S. A. (1997). Quantum Theory of Charged-Particle Beam Optics, Ph.D Thesis, University of Madras, Chennai, India. (complete thesis available from Dspace of IMSc Library, The Institute of Mathematical Sciences, where the doctoral research was done). Sameen Ahmed Khan. (2002). Maxwell Optics: I. An exact matrix representation of the Maxwell equations in a medium. E-Print: https://arxiv.org/abs/physics/0205083/. Sameen Ahmed Khan. (2005). An Exact Matrix Representation of Maxwell's Equations. Physica Scripta, 71(5), 440–442. Sameen Ahmed Khan. (2006a). The Foldy-Wouthuysen Transformation Technique in Optics. Optik-International Journal for Light and Electron Optics. 117(10), pp. 481–488 http://www.elsevier-deutschland.de/ijleo/. Sameen Ahmed Khan. (2006b). Wavelength-Dependent Effects in Light Optics. in New Topics in Quantum Physics Research, Editors: Volodymyr Krasnoholovets and Frank Columbus, Nova Science Publishers, New York, pp. 163–204. ( and ). Sameen Ahmed Khan. (2008). The Foldy-Wouthuysen Transformation Technique in Optics, In Hawkes Peter, W. (ed.), Advances in Imaging and Electron Physics, Vol. 152, Elsevier, Amsterdam, pp. 49–78. ( and ). Sameen Ahmed Khan. (2010). Maxwell Optics of Quasiparaxial Beams, Optik-International Journal for Light and Electron Optics, 121(5), 408–416. (http://www.elsevier-deutschland.de/ijleo/). Laporte, O., and Uhlenbeck, G. E. (1931). Applications of spinor analysis to the Maxwell and Dirac Equations. Physical Review, 37, 1380–1397. Majorana, E. (1974). (unpublished notes), quoted after Mignani, R., Recami, E., and Baldo, M. About a Diraclike Equation for the Photon, According to Ettore Majorana. Lettere al Nuovo Cimento, 11, 568–572. Moses, E. (1959).Solutions of Maxwell's equations in terms of a spinor notation: the direct and inverse problems. Physical Review, 113(6), 1670–1679. Panofsky, W. K. H., and Phillips, M. (1962). Classical Electricity and Magnetics, Addison-Wesley Publishing Company, Reading, Massachusetts, USA. Pradhan, T. (1987). Maxwell's Equations From Geometrical Optics. IP/BBSR/87-15; Physics Letters A 122(8), 397–398. Ludwig Silberstein. (1907a). Elektromagnetische Grundgleichungen in bivektorieller Behandlung, Ann. Phys. (Leipzig), 22, 579–586. Ludwig Silberstein. (1907b). Nachtrag zur Abhandlung ber Elektromagnetische Grundgleichungen in bivektorieller Behandlung. Ann. Phys. (Leipzig), 24, 783–784. Electrodynamics Maxwell's equations
Matrix representation of Maxwell's equations
[ "Physics", "Mathematics" ]
2,460
[ "Electrodynamics", "Maxwell's equations", "Equations of physics", "Dynamical systems" ]
38,470,594
https://en.wikipedia.org/wiki/Froissart%E2%80%93Stora%20equation
The Froissart–Stora equation describes the change in polarization which a high energy charged particle beam in a storage ring will undergo as it passes through a resonance in the spin tune. It is named after the French physicists Marcel Froissart and Raymond Stora. The polarization following passage through the resonance is given by P_f = P_i (2 exp(−π|ε|²/(2α)) − 1), where ε is the resonance strength and α is the speed at which the resonance is crossed; P_i is the initial polarization before resonance crossing. The resonance may be crossed by raising the energy so that the spin tune passes through a resonance, or by driving the spin motion with a transverse magnetic field at a frequency that is in resonance with the spin oscillations. The Froissart–Stora equation has a direct analogy in condensed matter physics in the Landau–Zener effect. Other spin-dynamics effects The original Froissart–Stora equation was derived for polarized protons. It may also be applied to polarized electrons in storage rings. In this case, there are additional polarization effects resulting from the synchrotron radiation. In particular, the Sokolov–Ternov effect describes the polarization due to spin flip radiation. In the case of a non-planar ring, this must be generalized, as was done by Derbenev and Kondratenko. Notes Accelerator physics Eponymous equations of physics Quantum mechanics
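A minimal numerical sketch of the formula above, assuming the standard form P_f = P_i (2 exp(−π|ε|²/(2α)) − 1); the resonance strength and crossing speed used below are illustrative, made-up values, not measurements from any particular machine.

```python
import math

def froissart_stora(p_initial, epsilon, alpha):
    """Final polarization after an isolated spin-resonance crossing,
    P_f = P_i * (2*exp(-pi*|epsilon|^2 / (2*alpha)) - 1),
    where epsilon is the resonance strength and alpha the crossing speed."""
    return p_initial * (2.0 * math.exp(-math.pi * abs(epsilon) ** 2 / (2.0 * alpha)) - 1.0)

# Slow crossing of a strong resonance: adiabatic limit, the spin flips (P_f -> -P_i)
print(froissart_stora(1.0, epsilon=1e-3, alpha=1e-7))

# Fast crossing of a weak resonance: polarization is preserved (P_f -> +P_i)
print(froissart_stora(1.0, epsilon=1e-5, alpha=1e-3))
```

The two limits printed by the sketch are the practically important ones: crossing a resonance either very slowly (complete spin flip) or very quickly (no depolarization) preserves the magnitude of the beam polarization, while intermediate crossing speeds partially depolarize the beam.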
Froissart–Stora equation
[ "Physics" ]
274
[ "Applied and interdisciplinary physics", "Equations of physics", "Theoretical physics", "Eponymous equations of physics", "Quantum mechanics", "Experimental physics", "Accelerator physics" ]
41,255,511
https://en.wikipedia.org/wiki/Biodiversity%20offsetting
Biodiversity offsetting is a system used predominantly by planning authorities and developers to fully compensate for biodiversity impacts associated with economic development, through the planning process. In some circumstances, biodiversity offsets are designed to result in an overall biodiversity gain. Offsetting is generally considered the final stage in a mitigation hierarchy, whereby predicted biodiversity impacts must first be avoided, minimised and reversed by developers, before any remaining impacts are offset. The mitigation hierarchy serves to meet the environmental policy principle of "No Net Loss" of biodiversity alongside development. Individuals or companies involved in arranging biodiversity offsets will use quantitative measures to determine the amount, type and quality of habitat that is likely to be affected by a proposed project. Then, they will establish a new location or locations (often called receptor sites) where it would be possible to re-create the same amount, type and quality of habitat. The aim of biodiversity offsets is not simply to provide financial compensation for the biodiversity losses associated with development, although developers might pay financial compensation in some cases if it can be demonstrated exactly what the physical biodiversity gains achieved by that compensation will be. The type of environmental compensation provided by biodiversity offsetting is different from biodiversity banking in that it must show both measurable and long-term biodiversity improvements that can be demonstrated to counteract losses. However, there is so far mixed evidence that biodiversity offsets successfully counteract the biodiversity losses caused by associated developments, with evidence that offsets are generally more successful in less structurally complex and more rapidly recovering habitats, such as simplified wetland habitats. For biodiversity offsets to successfully compensate for the loss of biodiversity elsewhere, it is necessary that they demonstrate additionality (i.e. they deliver an improvement in biodiversity that would not otherwise have occurred). While there are individual case studies of offsets that have successfully delivered additional outcomes, other evaluations of large-scale biodiversity offsetting markets have demonstrated serious additionality shortcomings. Terminology Biodiversity offsets are defined by the Business and Biodiversity Offsets Programme as "measurable conservation outcomes of actions designed to compensate for significant residual adverse biodiversity impacts arising from project development after appropriate prevention and mitigation measures have been taken." The definition also states that the goal of biodiversity offsets is to achieve no net loss of biodiversity, or ideally, a net gain. No net loss (NNL) is an environmental policy approach, defined as a goal for development projects/activities and policies where impacts on biodiversity are either counterbalanced or outweighed by measures to ensure that biodiversity is at the same level as it was before the project. Related terms Biodiversity offsetting may be confused with related terms like biodiversity banking. Biodiversity banking refers to a market-based mechanism, whereby offsets become assets in the form of biodiversity credits that can be traded to offset the debit of negative impacts of development. Biodiversity banks refer to sites where conservation or restoration activities have been carried out for the benefit of biodiversity. 
The positive outcomes for biodiversity of a given area of the bank are quantified in the form of a biodiversity credit. In some languages, such as Spanish or Mandarin Chinese, biodiversity offsets are described as "compensation" because there is no corresponding term for offsetting. The term compensation is generally used more broadly in English to describe measures to counterbalance damages to biodiversity caused by development projects. Compensation does not necessarily require the aim of a no net loss goal, equivalence in biodiversity loss and gain, or measurable outcomes for conservation. Biodiversity offsets may therefore be seen as a more specific and outcome-oriented type of compensation measure. The term "mitigation" is also sometimes used synonymously with offsetting, such as in the United States, where biodiversity offsetting is described as "compensatory mitigation". However, "mitigation" can be used to refer to the sequence of actions described by the mitigation hierarchy, a framework commonly used to guide the application of biodiversity offsetting within planning processes like Environmental Impact Assessments. The mitigation hierarchy describes a series of measures that should be applied in sequence to reduce impacts on biodiversity to the point where no adverse effects remain, often including the steps avoid, reduce, restore/rehabilitate, and offset. Offsetting is often regarded as the "final resort" in the mitigation hierarchy. Relevant conservation activities Biodiversity offset projects can involve various management activities that can be demonstrated to deliver gains in biodiversity. These activities very often include active habitat restoration or creation projects (e.g. new wetland creation, grassland restoration). However, also viable are so-called "averted loss" biodiversity offsets, in which measures are taken to prevent ecological degradation from occurring where it almost certainly would have happened otherwise. Averted loss offsets might involve the creation of new protected areas (to conserve fauna species that would otherwise have disappeared), the removal of invasive species from areas of habitat (which otherwise would have reduced or displaced populations of native species), or positive measures to reduce extensive natural resource use (e.g. the offer of alternative livelihood creation to prevent activities leading to deforestation). Any activities that do not result in a positive and measurable gain for biodiversity would not generally be counted as part of a biodiversity offset. For instance, if a developer funds ecological conservation research in a region that they are impacting through a project, this would not count as an offset (unless it could be shown quantitatively how specific fauna and flora would benefit). Instead, this would be a more general form of compensation. Note that biodiversity offsets can be considered a very specific, robust and transparent category of ecological compensation. Receptor sites Under many offset systems, receptor sites are areas of land put forward by companies or individuals looking to receive payment in return for creating (or restoring) biodiversity habitats on their property. The biodiversity restoration projects are financed by compensation from developers looking to offset their biodiversity impact. 
The resulting change in biodiversity levels at the new receptor sites should be equal to, or greater than, the losses at the original ‘impact site’, in order to achieve no net loss – and preferably gain – of overall biodiversity. Such systems often rely on the buying (by developers) and selling (by landowners) of conservation credits. However, characteristics of receptor sites can vary across different jurisdictions. In some countries, for instance, land is primarily state-owned, and so it is the government that owns and manages biodiversity offset projects. For biodiversity offsets in marine environments, receptor sites might be subject to multiple management organisations and not necessarily owned by anyone. Controversially, some biodiversity offsets use existing protected areas as receptor sites (i.e. improving the effectiveness of areas that are already managed for biodiversity conservation). Requirement to offset biodiversity Biodiversity offset projects can be found on every major continent besides Antarctica. As of 2019, over 100 countries had, or were developing, policies for biodiversity offsetting and more than 37 countries required biodiversity offsets by law. These policies generally implement biodiversity offsetting within planning systems to compensate for unavoidable residual damage to biodiversity as the final step of a mitigation hierarchy, a tool to manage biodiversity risk. Where damage to biodiversity cannot be avoided or reduced, biodiversity offsetting may then be used as a conservation tool with the idea that development projects will result in either "no net loss", "net gain", or "net positive impact" of/on biodiversity. The terms used to describe biodiversity offsetting and the method of implementation differ regionally. The term 'biodiversity offsetting' is generally used across Australia, New Zealand, South Africa, and the United Kingdom. However, different terms are used elsewhere, as outlined in the country-specific sections below. Biodiversity offsetting may also be required by lending institutions that co-finance developments. For example, any project financed by the International Finance Corporation (IFC) must deliver "no net loss" or "net gain" of biodiversity, required under the IFC's Performance Standard 6 (PS6). PS6 is regarded as influential and an example of best practice. However, as of 2019, only 8 offset projects had been implemented directly because of this requirement. Finally, offset projects may arise from voluntary commitments made by corporations or across a sector. Only a small proportion of offsets arise in this way, but the projects generated tend to be larger than those arising from public policy requirements. For example, the Ambatovy mine in Madagascar uses voluntary avoided loss offsets to mitigate impacts on biodiversity by compensating for forest clearance at the mine. The project is on track to achieve no net loss, but the permanence of conservation outcomes achieved using its biodiversity offsets is not yet known. Biodiversity compensation in Colombia In Colombia, the equivalent term for biodiversity offsetting translates literally to biodiversity compensation (Spanish: compensaciones de biodiversidad). Principles to govern application of offsets have been established and, since 2012, the country has had a 'Biodiversity Offsetting Manual' (Spanish: Manual para la asignación de compensaciones por pérdida de biodiversidad) under Resolution 1517, with an updated manual released in 2018 under Resolution 256. 
Required principles to guide offset design include a no net loss objective, equivalence between the offset and impacted ecosystem, additionality, and a minimum duration equal to the length of the development project. Several different regulations are in place to govern biodiversity offsets, including in relation to the environmental licensing system, forest reserve areas, harvesting of forests, and the exploitation of endangered species. Biodiversity compensation in Peru In Peru, the equivalent term for biodiversity offsetting literally translates to "biodiversity compensation" (Spanish: compensaciones de biodiversidad). The country has explicit legal frameworks requiring biodiversity offsetting for some projects subject to Environmental Impact Assessments (EIAs), under laws that govern environmental licensing. The term environmental licensing (Spanish: certificación ambiental) is used to describe measures to evaluate or mitigate potential environmental impacts of developments. The official guidelines on offsetting published by the SEIA (National Environmental Impact Assessment System, Spanish: Sistema Nacional de Evaluación de Impacto Ambiental) in 2015 require an objective of "no net loss" of biodiversity and ecosystem functionality, also requiring offsets to be based on principles of additionality, ecological equivalence, and compliance with the mitigation hierarchy. Offsets must last for the duration of the environmental impacts and must be in place when an environmental licence is approved. Ecological compensation in China In China, biodiversity offsetting is referred to using the term ecological compensation (simplified Chinese: 生态补偿机制; traditional Chinese: 生態補償機制; pinyin: shēngtài bǔcháng jīzhì), or eco-compensation. The system for biodiversity offsetting in China requires that developers complete an environmental impact assessment to determine the impact of their project, then choose either to offset the impacts themselves or pay the government to do it on their behalf. A goal similar to "no net loss", referred to as "maintain biodiversity" (simplified Chinese: 维护生物多样性; traditional Chinese: 維護生物多樣性; pinyin: wéihù shēngwù duōyàng xìng) is used in eco-compensation. However, offsets do not have to adhere to a "like for like" principle, where the offset is ecologically equivalent to the development site. The term 'ecological compensation' takes on multiple meanings in Chinese environmental policy, including compensation for ongoing development impacts (equivalent to biodiversity offsetting policies in other countries), compensation for previous development impacts, payments for ecosystem services, and compensation for illegal use of natural resources. In the context of biodiversity offsetting, compensation involves mitigation of negative impacts on biodiversity arising from development projects by enhancing biodiversity elsewhere, typically aiming for "no net loss" or "net positive" biodiversity outcomes. With the aim of reversing the habitat destruction caused by rapid expansion of infrastructure, the Chinese Government first launched its eco-compensation scheme between the late 1990s and early 2000s. Some of the biodiversity offsetting mechanisms used in China include the forestry vegetation restoration fee (FVRF), grassland vegetation restoration fee (GVRF, simplified Chinese: 草原植被恢复费), and wetland restoration fee (WRF). 
The forestry vegetation restoration fee (FVRF) (simplified Chinese: 森林植被恢复费) was the earliest ecological compensation mechanism developed in China and widely regarded as China's principal "no net loss" (NNL) instrument because it incorporates a legal commitment to no net loss of forest cover. This means that developments (such as mining operations) occupying forest land with approval from the National Forestry and Grassland Administration should pay fees to restore this vegetation. FVRFs were launched in 1998 as part of China's first Forestry Law, which established "a compensation fund for the benefit of the forest ecology". They are also the most widely used compensation mechanism in China. This is because of a policy focus on prioritising forest protection and afforestation to promote sustainability. By contrast, WRF is in its infancy and GVRF has only been applied in some regions. Eco-compensation in China is criticised for the substantial degree of government participation through use of public funds as finance sources. On the other hand, government participation is also regarded as important in developing countries to ensure that biodiversity offset projects operate smoothly. In addition, there is no standardised measurement for compensation programmes and quantitative metrics to determine impact on biodiversity are not mandated. Compensatory mitigation in the US Biodiversity offsetting tends to come under the term "compensatory mitigation" in the United States, where biodiversity offsetting and its objective of "no net loss" originated. Compensatory mitigation (in the wetlands context) is defined by the United States Department of Agriculture (USDA) as "mitigation that offsets unavoidable impacts to wetlands or other aquatic resources in advance." This is achieved through restoration, creation, enhancement, or preservation. The most common mechanism for compensatory mitigation in the United States is mitigation banking - a concept that has since been expanded to create other forms of biodiversity banking, such as conservation banking and habitat banking. Mitigation banking is a market-based system to compensate for manipulation of wetlands (or other aquatic resources, like streams) through restoration, creation, or enhancement of wetlands at mitigation banks that generate credits. Credits can be purchased by developers to offset/compensate for the debit incurred by unavoidable adverse impacts to wetlands. Section 404 of the Clean Water Act forms the legal basis of wetland mitigation banking in the United States, administered by the US Army Corps of Engineers and overseen by the Environmental Protection Agency. Offsetting in Australia In Australia, biodiversity offsetting has been applied since at least 2001, under the conditions of the Environment Protection and Biodiversity Conservation Act 1999 (EPBC Act). At federal and state/territory levels, policies have been established to regulate biodiversity offsetting; potential biodiversity offsets may need approval both under the EPBC Act and under the policies of the state/territory where the development is occurring. In addition, much of the scientific research into biodiversity offsetting outside of the US has been conducted in Australia, especially by organisations such as CEED and CSIRO. The Environment Protection and Biodiversity Conservation Act 1999 (EPBC Act) regulates biodiversity offsetting at the federal level and forms the basis of the government's 'Environmental Offsets Policy'. 
Under the EPBC Act, if a proposed development (such as housing developments, mining projects, or road construction) is likely to have an impact on a protected area, an Environmental Impact Assessment must be conducted. Offsetting can be carried out, as part of a mitigation hierarchy, to compensate for adverse impacts that cannot be avoided or minimised. The involvement of the federal government is limited to matters of national environmental significance, known as 'protected matters' under the EPBC Act. Examples include potential adverse impacts on biodiversity where world heritage properties, wetlands of international importance under the Ramsar convention, or listed threatened species are concerned. Offsets are applied to nearly 80% of approved actions in Australia under the legal conditions of the EPBC Act, according to a report by the Australian National Audit Office in 2020. State and territory offsetting requirements State and territory governments within Australia have established their own biodiversity offsetting policies, including in the Australian Capital Territory, New South Wales, Queensland, Victoria, Western Australia, and the Northern Territory - in Tasmania, biodiversity offsetting policy is only applied in specific contexts. Biodiversity banking mechanisms are also operated on a regional level within Australia. Biodiversity banking involves the generation of biodiversity credits (as proxies for biodiversity) from assessing the biodiversity value of land where conservation activities to restore or manage habitats have been conducted. These sites, located away from the development site, are known as 'biobanks'. The biodiversity credits generated from biobanks can then be traded within a market framework to deliver biodiversity offsets that aim to mitigate the negative impacts of development projects. Offsetting in South Africa South Africa has a legal framework to govern the implementation of biodiversity offsetting through the National Environmental Management Act (NEMA) and the Environmental Impact Assessment Regulations (EIAR), though the term is not explicitly mentioned in these laws. NEMA puts forward a polluter pays principle, which could be implemented using biodiversity offsetting, and requires developers to consider the need to avoid, or to minimise and remedy (where avoidance is not possible), the loss of biodiversity, as part of sustainable development. The EIAR includes implicit legal provisions for the use of offsets. These laws form the foundation of the 'National Biodiversity Offset Guideline', issued by the Ministry of Forestry, Fisheries & the Environment in January 2023. According to these guidelines, biodiversity offsets are required when it is likely that a proposed activity could have residual negative impacts on biodiversity of "medium or high significance" (where biodiversity may be lost in vulnerable areas, or areas of recognised importance), once measures have been taken to avoid or minimise these impacts. The implementation of a national guideline on biodiversity offsetting was recommended by the National Biodiversity Framework (2019-2024). In response to the recommendation, the 'National Biodiversity Offset Guideline' was released by the Ministry of Forestry, Fisheries & the Environment to guide the implementation of EIAR and NEMA. It provides guidance on the principles of biodiversity offsets, the requirements for biodiversity offsets, biodiversity offsets in the context of Environmental Impact Assessments, selection of sites, and planning. 
The principles of the guideline acknowledge offsetting as the last step of the mitigation hierarchy, a preference for ecological equivalence of offsets, and the need for offsets to be additional to conservation measures that are already legally required. It does not mention "no net loss" or "net positive impact" as goals for biodiversity, instead discussing the need to "counterbalance a residual impact". Provincial offsetting guidelines In addition to national guidelines, some South African provinces have their own offsetting guidelines. The first to develop a biodiversity offsetting framework was the Western Cape Province with the Provincial Biodiversity Offsetting Guideline. The KwaZulu-Natal and Gauteng provinces have also published guidelines for biodiversity offsetting and other provinces are drafting their own policies. Guidelines in the Western Cape Province require developers to compensate for residual impacts on biodiversity and ecosystem services, as part of the environmental impact assessment process. However, if a project proposal is deemed to be fatally flawed (it has a major defect that should result in its rejection) through its impact on biodiversity, this means that offsets cannot be applied. Like national guidelines, the Western Cape's guidelines do not use the goal of "no net loss" to guide ambitions for offsets because it is considered to be unrealistic as a result of South Africa's status as a developing country. Instead, the guideline attaches offset requirements to an acceptable loss of threatened vegetation types and ecosystem services. The use of biodiversity offsetting in South Africa has attracted debate. A range of barriers to effective implementation have been identified by researchers. For example, the lack of common understanding of the theory and practical application of biodiversity offsetting within the country is a particular challenge. Offsetting in Uganda Legal provisions for biodiversity offsets have been introduced in Uganda under the National Environment Act (NEA) 2019 with the goal of achieving no net loss, and aspiring to net gain. In addition, Uganda has published a 'National Biodiversity and Social Offset Strategy' and a 'National Biodiversity Strategy and Action Plan' for 2015-2025 which mentions biodiversity offsets. The NEA puts forward principles of environmental management that include a requirement to apply the mitigation hierarchy in environmental and social impact assessments (ESIA). The Act requires biodiversity offsets to be designed to address residual impacts, achieve measurable conservation outcomes, and adhere to the "like-for-like or better" principle. According to this principle, offsets must provide outcomes for biodiversity that are either equivalent to, or better than, the biodiversity lost. Developers are also required to monitor projects to ensure that mitigation measures are effective and that offsets achieve NNL, as part of the Act. Prior to government policy requirements, biodiversity offset projects had been implemented in Uganda as part of lending requirements from the World Bank. One example is the creation of an offset between Kalagala Falls and Itanda Falls on the River Nile to mitigate the negative impacts of the Bujagali Hydroelectric Power Station, agreed between the government of Uganda and the World Bank as a condition for financing a dam at Bujagali Falls in 2007. Bujagali Falls was flooded as a result of the project. 
This was criticised for its impacts on biodiversity, the tourism industry that relied on recreational activities there, and because Bujagali Falls had spiritual importance for local people. The government later broke the offset agreement in the area when it supported the construction of the Isimba Hydroelectric Power Station (construction of which started in 2013 and is now complete) within the Kalagala-Itanda offset area. Offsetting in the UK Biodiversity Net Gain in England Biodiversity offsetting has been formally incorporated into the planning process in England through the introduction of Biodiversity Net Gain (BNG) on February 12, 2024 under the Environment Act 2021, through modification to the Town and Country Planning Act. BNG is England’s domestic ecological compensation policy, designed to compensate for ecological harms caused by new developments. BNG requires that, to gain planning permission from Local Planning Authorities (LPAs), developers must demonstrate a 10% net gain in biodiversity under the proposed development, relative to the pre-development scenario, using a 'Statutory Biodiversity Metric'. Failure to meet this criterion obligates the developer to adjust their project plan, or compensate for the shortfall in biodiversity units through the purchase of biodiversity offsets, which are delivered either through a payment to the council or a third party, such as a broker managing a habitat bank. If no compensation sites are available within the local planning authority where the development is planned, compensation is permitted in other local authorities, triggering an increase in compensatory units required due to a spatial multiplier within the Metric. As a final option, developers can purchase 'statutory biodiversity credits' from the national government. Offsetting therefore represents a small proportion of biodiversity enhancements delivered through the policy; the majority of biodiversity enhancements come through habitat management activities implemented within the boundaries of new developments themselves. Assessments for Biodiversity Net Gain are conventionally integrated into the Ecological Impact Assessment (EcIA) process. This involves using data gathered from pre-development ecological surveys and processing it through the Statutory Biodiversity Metric (an Excel-based tool), to give a measure of the ecological value of a site in 'biodiversity units'. The metric uses habitat as a proxy for biodiversity by combining factors like area, habitat condition, distinctiveness, and multiple parameters (like risk, the time required for habitat development, and the ecological significance of the site on a landscape scale) for each habitat section within the development area. Using the metric, an overall biodiversity score, measured in biodiversity units, is generated. Baseline biodiversity units within the development area and associated compensation areas owned or managed by developers are compared with anticipated biodiversity units following development. For example, if a developer damages a habitat of “high distinctiveness”, they will be required to compensate with habitat of the same type, instead of trading for a less ecologically-valuable habitat. Preliminary scientific evidence on the ecological outcomes of Biodiversity Net Gain suggests the policy facilitates the trade of habitat losses from construction for smaller, but more ecologically valuable habitats to be delivered in the future. 
There are concerns that the monitoring and evaluation of the biodiversity benefits delivered through the policy is insufficient to ensure these future biodiversity outcomes are effectively secured. Because of this, it is thought that enforcing the policy's use by developers will be a challenge. Additionally, there are concerns that the Biodiversity Metric may not be an effective proxy for biodiversity, and therefore that a net gain in biodiversity demonstrated by the metric may not translate into real-world improvements in biodiversity such as wildlife populations. Prior to Biodiversity Net Gain Prior to this policy, offsetting was voluntary unless compensation was legally required for impacts to protected sites and species; developers could incorporate offsets into project plans after following a mitigation hierarchy to manage risk to biodiversity by taking steps to avoid and minimise ecological harm at the development site. The Lawton Review in 2010 proposed that biodiversity offsets established through planning processes could be used to enhance ecological networks, but warned that biodiversity offsetting must not become ‘a licence to destroy’. At the time the report was written, offsetting was mandatory only in areas where a development of great public interest would have a significant impact on the European Union’s Natura 2000 network or any site inhabited by a European protected species. The Review recommended the establishment of pilot schemes to test potential biodiversity offsetting systems in the country. A 2011 white paper ‘The natural choice: securing the value of nature’ responded to the Lawton review and announced plans to introduce voluntary biodiversity offsetting through pilot schemes. In April 2012, the Department for Environment, Food, and Rural Affairs (Defra) launched a voluntary biodiversity offsetting pilot scheme. Developers in pilot areas were required to provide compensation for biodiversity loss under planning policy and were able to choose offsetting to do so. The scheme also aimed to test a biodiversity offsetting metric developed by Defra. This scheme included 6 pilot areas: Doncaster, Devon, Essex, Greater Norwich, Nottinghamshire, and Warwickshire. In March 2014, the pilot scheme ended and was reviewed by Collingwood Environmental Planning Limited in partnership with the Institute for European Environmental Policy (IEEP). However, the scheme also drew criticism from Friends of the Earth who described it as a “licence to destroy” and the possibility of like-for-like compensation of biodiversity loss has been questioned. In 2012, a standard metric for biodiversity was piloted by Defra for use in the biodiversity offsetting pilot scheme. Consultation from environment, planning, land management, academic, and development sectors led to numerous updated biodiversity metrics over a period of several years. Biodiversity Metric 4.0 was launched by Defra and Natural England in March 2023 to measure Biodiversity Net Gain. A Statutory Biodiversity Metric was later introduced as part of the Environment Act as the legally mandated metric for use under the biodiversity net gain policy. As described above, this metric combines habitat area, condition, distinctiveness and a set of risk-related multipliers into biodiversity units for each habitat section within the development area; a simplified sketch of this kind of calculation is shown below. 
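To make the unit arithmetic concrete, the following is a minimal, hypothetical sketch of habitat-based biodiversity accounting of the kind described above. It is not the Statutory Biodiversity Metric itself (an official Defra/Natural England tool with its own condition, distinctiveness and risk tables); the class, the scales and every multiplier value below are assumptions chosen purely to illustrate how area, condition, distinctiveness and risk multipliers can be combined into 'biodiversity units' and how a 10% net-gain test could then be applied.

```python
# Illustrative sketch only: a heavily simplified version of the habitat-based
# unit arithmetic described above. The real Statutory Biodiversity Metric is an
# official Defra/Natural England tool with its own condition, distinctiveness
# and risk tables; every name, scale and number below is an assumption made for
# illustration, not a value taken from the statutory metric.

from dataclasses import dataclass

@dataclass
class HabitatParcel:
    area_ha: float                        # habitat area in hectares
    distinctiveness: float                # assumed scale, e.g. 2 (low) to 8 (very high)
    condition: float                      # assumed scale, e.g. 1 (poor) to 3 (good)
    strategic_significance: float = 1.0   # landscape-scale importance multiplier
    # Risk multipliers apply to habitat that is created or enhanced:
    difficulty_risk: float = 1.0          # <1 if the habitat is hard to (re)create
    temporal_risk: float = 1.0            # <1 to discount the years needed to reach target condition
    spatial_risk: float = 1.0             # <1 if delivered outside the local planning authority

    def units(self) -> float:
        """Biodiversity units for this parcel (simplified formula)."""
        return (self.area_ha * self.distinctiveness * self.condition
                * self.strategic_significance
                * self.difficulty_risk * self.temporal_risk * self.spatial_risk)

def net_gain_met(baseline, proposal, required_gain=0.10):
    """Check whether the proposed scheme delivers the required net gain (10% by default)."""
    before = sum(p.units() for p in baseline)
    after = sum(p.units() for p in proposal)
    return after >= before * (1.0 + required_gain)

# Worked example: grassland lost on site, partly retained and partly replaced off site.
baseline = [HabitatParcel(area_ha=2.0, distinctiveness=4, condition=2)]
proposal = [
    HabitatParcel(area_ha=0.5, distinctiveness=4, condition=2),                # retained on site
    HabitatParcel(area_ha=2.5, distinctiveness=4, condition=2,
                  difficulty_risk=0.7, temporal_risk=0.7, spatial_risk=0.75),  # offset, discounted for risk
]
print(net_gain_met(baseline, proposal))  # False: 11.35 units delivered vs. 17.6 units required
```

In this toy example the scheme falls short of the 10% target, so the developer would need either to adjust the design or to secure additional offset units before the requirement would be met.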
The Government announced plans to mandate a biodiversity net gain policy in England in March 2019, as part of an Environment Bill that would require ‘developers to ensure habitats for wildlife are enhanced and left in a measurably better state than they were pre-development’. The Bill was later enacted as the Environment Act 2021. Initially, BNG was planned to come into force by November 2023, but delays meant that it was not implemented until February 12, 2024. This delay was criticised by environmentalists, including The Wildlife Trusts, who called it “another hammer blow for nature.” In response to these criticisms, a government spokesman reaffirmed the government’s commitment to BNG, saying that “we are fully committed to biodiversity net gain which will have benefits for people and nature.” Economic value Biodiversity is increasingly seen as having economic value due to growing recognition of the world's finite natural resources and of the benefits of ecosystem services (nature providing clean air, food and water, natural flood defences, pollination services and recreation opportunities). Placing financial value on biodiversity has created a marketplace for retaining and restoring habitats. Financial gain from biodiversity offsetting is brought about through the sale of conservation credits by landowners through biodiversity banking mechanisms. Individuals or companies who are looking to receive financial payment in return for creating or enhancing particular wildlife habitats on their property can have their land valued in conservation credits by a biodiversity offsetting broker who will then register their credits for sale to developers looking to offset any residual impact to biodiversity from their approved developments. Developers can also find the business of biodiversity offsetting appealing financially as the compensation payment for their project's residual biodiversity impact is handled in one agreement and the landowner receiving that payment (and therefore the habitat re-creation duties) is responsible for the biodiversity restoration and management thereafter. The cost may represent a small proportion of a developer's budget and is often outweighed by a project's long-term gains. As corporate social responsibility is often part of larger companies’ business priorities, being able to demonstrate environmentally responsible practices can be an additional incentive. Biodiversity offsetting based upon showing the economic value of lost habitat is highly controversial. The schemes proposed for the UK have been regarded as failing to protect biodiversity and indeed leading to further losses in the prioritisation of development over conservation. The basic economics has been described by ecological economist Clive Spash as leading to the “bulldozing of biodiversity” under an approach that regards optimal species extinction as being necessary to achieve economic efficiency. Conservation credits The cost of re-creating an area of habitat affected by a development proposal (impact site) can be calculated and represented as a number of conservation credits that a developer could purchase in order to offset their biodiversity impact. Land put forward for investment to re-create impacted biodiversity (receptor site) is also calculated in conservation credits (to account for the cost of creating or restoring biodiversity at that particular site and to cover the cost of its long-term conservation management). 
This situation enables the buying (by developers) and selling (by landowners) of conservation credits. Government approved (quantitative and qualitative) metrics should be used to calculate the number of conservation credits that can be applied to each site, in order to maintain accuracy and consistency in the value of a conservation credit. Motivation A decline in global biodiversity is being driven, partially, by land-use changes, including for the purpose of developing infrastructure. Reconciling economic development with the need to conserve biodiversity can therefore be a challenge, particularly in developing countries. The need to address this decline acted as a motivation for creating a system within the planning process that tackles unavoidable and residual impacts to biodiversity. Putting this into practice often involves formal evaluation of possible impacts on wildlife (and their habitat) at a potential development site before developers can receive approval. This may occur in the form of Environmental Impact Assessments (EIA), which look at how proposed projects would impact the environment (including biodiversity) at the development site in conjunction with social and economic issues. EIAs have become widespread within the work of government planning authorities. In some jurisdictions, they are legally required and these requirements often motivate the use of biodiversity offsetting. The approval of a project proposal may depend upon the use of measures to mitigate its potential impacts. A package of measures, including biodiversity offsetting, could be recommended as part of the EIA process. The mitigation hierarchy is commonly applied to EIAs to guide the mitigation of negative impacts on biodiversity. The mitigation hierarchy is a framework of sequential steps (avoid, reduce/minimise, restore/rehabilitate, and offset) and biodiversity offsetting is its final step to counterbalance impacts that cannot be avoided or reduced. Critique Biodiversity offsetting is a subject of significant debate. Challenges associated with putting offsets into practice and governing them effectively have been recognised by both supporters and opponents of the concept. For example, some of these challenges include: application of the mitigation hierarchy in practice, monitoring and evaluation programmes to track whether offsets are meeting targets, and the metrics used as a proxy for biodiversity losses and gains. There is disagreement when it comes to whether offsets are feasible or acceptable as a tool for conserving biodiversity. No net loss (NNL), commonly used as an objective for biodiversity offsets, is one reason for debate. A no net loss goal requires that biodiversity loss in one area is counterbalanced by potential but uncertain gains in another area. A review of research conducted to determine the success of no net loss policies found that around one-third of NNL policies and individual biodiversity offsets reported achieving no net loss. Concerns have been raised over the feasibility of achieving NNL because of the complex nature of biodiversity in all of its aspects (such as species diversity, genetic diversity, etc.), meaning that efforts to quantify biodiversity and determine the equivalence between biodiversity in two different areas to determine losses and gains may be regarded as either difficult or impossible. 
Further concerns have been expressed over substitution of biodiversity in a specific place for efforts to conserve biodiversity elsewhere, given that biodiversity can have a place-based cultural and spiritual value for humans but also because of a view of biodiversity as having an intrinsic value outside of benefits to humans. For the reasons mentioned above and others, critics have argued that offsetting is an ethically misguided process. For example, it has been argued that biodiversity offsetting legitimises ongoing habitat destruction and promotes the "bulldozing of biodiversity". A similar view is taken by the environmental organisation Friends of the Earth, who oppose the use of biodiversity offsets and have expressed concern at the use of measurable units to value nature. Biodiversity offsetting has also been described by critics as a "licence to trash". Some have argued that the debate on biodiversity offsetting also forms part of a wider discussion on the ethics of economically valuing biodiversity and the application of neoliberal principles to biodiversity conservation. See also Biodiversity Biodiversity banking Cross-sector biodiversity initiative Economics of biodiversity Ecosystem services Mitigation banking No net loss Environmental mitigation References Further reading "Business and Biodiversity Offsets Programme" Conservation when nothing stands still: moving targets and biodiversity offsets; Joseph Bull, Kenwyn B Suttle, Navinder J Singh, EJ Milner-Gulland – Imperial College London, Swedish University of Agricultural Sciences Defra Biodiversity Offsetting Pilots: guidance for developers Exploring the potential demand for and supply of habitat banking in the EU and appropriate design elements for a habitat banking scheme Realising nature's value: the final report of the Ecosystem Markets Task Force (March 2013) Builders: The saviours of meadows? Sunday Telegraph, 25 Oct 2013 External links Biodiversity offsetting in the UK Economic and environmental opportunities in Europe UK Biodiversity offsetting brokers: The Environment Bank Biodiversity offsetting in Australia Biodiversity offsets programme in New Zealand Latin American biodiversity offsetting Biodiversity Economics of sustainability Environmental economics Environmental mitigation
Biodiversity offsetting
[ "Chemistry", "Engineering", "Biology", "Environmental_science" ]
7,242
[ "Environmental economics", "Environmental mitigation", "Biodiversity", "Environmental engineering", "Environmental social science" ]
41,259,155
https://en.wikipedia.org/wiki/Subsurface%20utility%20engineering
Subsurface utility engineering (SUE) refers to a branch of engineering that involves managing certain risks associated with utility mapping at appropriate quality levels, utility coordination, utility relocation design and coordination, utility condition assessment, communication of utility data to concerned parties, utility relocation cost estimates, implementation of utility accommodation policies, and utility design. The SUE process begins with a work plan that outlines the scope of work, project schedule, levels of service vs. risk allocation and desired delivery method. Non-destructive surface geophysical methods are then leveraged to determine the presence of subsurface utilities and to mark their horizontal position on the ground surface. Vacuum excavation techniques are employed to expose and record the precise horizontal and vertical position of the assets. This information is then typically presented in CAD format or a GIS-compatible map. A conflict matrix is also created to evaluate and compare collected utility information with project plans, identify conflicts and propose solutions. The concept of SUE is gaining popularity worldwide as a framework to mitigate costs associated with project redesign and construction delays and to avoid risk and liability that can result from damaged underground utilities. History The practice of collecting, recording and managing subsurface data has historically been widely unregulated. In response to this challenge, in 2003, the American Society of Civil Engineers (ASCE) developed standard 38-02: Guideline for the Collection and Depiction of Existing Subsurface Utility Data, which defined the practice of SUE. Many countries followed the U.S. lead by creating similar standards including Malaysia, Canada, Australia, Great Britain and most recently, Ecuador. Developed and refined over the last 20 years, SUE classifies information according to quality levels with an objective to vastly improve data reliability. This provides project owners and engineers with a benchmark to determine the integrity of utility data at the outset of an infrastructure project. Governing standards A number of standards of care have been developed to maintain the use of SUE. ASCE Standard 38-02 In 2003, the American Society of Civil Engineers (ASCE) published Standard 38-02 titled Standard Guideline for the Collection and Depiction of Existing Subsurface Utility Data. The standard defined SUE and set guidance for the collection and depiction of subsurface utility information. ASCE involvement with SUE is substantially through the Utility Engineering & Surveying Institute (UESI). The ASCE standard presents a system to classify the quality of existing subsurface utility data, in accordance with four quality levels: Quality Level D. QL-D is the most basic level of information for utility locations. It comes from existing utility records or verbal recollections. QL-D is useful primarily for project planning and route selection activities. Quality Level C. QL-C involves surveying visible above ground utility facilities (e.g., manholes, valve boxes, etc.) and correlating this information with existing utility records (QL-D information). Quality Level B. QL-B involves the application of appropriate surface geophysical methods to determine the existence and horizontal position of virtually all subsurface utilities within a project's limits. Quality Level A. QL-A, also known as "daylighting", is the highest level of accuracy presently available. 
It provides information for the precise plan and profile mapping of underground utilities through the actual exposure of underground utilities (usually at a specific point), and also provides the type, size, condition, material and other characteristics of underground features. Exposure is typically achieved through hand digging or hydro-vacuuming. Malaysia Standard Guideline for Underground Utility Mapping The Standard Guideline for Underground Utility Mapping in Malaysia was launched in 2006 to create, populate and maintain the national underground utility database. This standard addresses issues such as roles of stakeholders and how utility information can be obtained, and was a call to action from the Malaysian government due to increasing demands for improvements to basic infrastructure facilities including utilities. The Standard is similar to ASCE 38-02, using quality levels D-A as its basis. Although it does not classify utility definition, colours or symbols, the Malaysian standard does specify an accuracy of ±10 cm for both horizontal and vertical readings. The Standard is supported by the Malaysian government but is not backed by an association or governing body. CSA Standard S250 In 2011, the Canadian Standards Association (CSA) released Standard S250 Mapping of Underground Utility Infrastructure. The Standard is described as a collective framework for best practices to map, depict and manage records across Canada. CSA S250 complements and extends ASCE Standard 38-02 by setting out requirements for generating, storing, distributing, and using mapping records to ensure that underground utilities are readily identifiable and locatable. Accuracy levels expand upon ASCE 38-02 Quality Level A, prescribing a finer level of detail to define the positional location of the infrastructure. Standards Australia Committee AS 5488-2013 In June 2013, the Standards Australia Committee IT-036 on Subsurface Utility Engineering Information launched Standard 5488-2013 Classification of Subsurface Utility Information to provide utility owners, operators and locators with a framework for the consistent classification of information concerning subsurface utilities. The standard also provides guidance on how subsurface utility information can be obtained and conveyed to users. In 2019 the standard was split into AS 5488.1 Classification of Subsurface Utility Information (SUI) Part 1: Subsurface Utility Information, and AS 5488.2 Classification of Subsurface Utility Information (SUI) Part 2: Subsurface Utility Engineering (SUE). The standard was subsequently revised in 2022. British Standards Institute PAS 128 An industry consultation event in January 2012 kicked off the development of a British SUE standard. The first technical draft was reviewed by the committee in December 2012 and it was released for public/general industry review in March 2013. PAS 128 applies to the detection, verification and location of active, abandoned, redundant or unknown underground utilities and associated surface features that facilitate the location and identification of underground utility infrastructure. It sets out the accuracy to which the data is captured for specific purposes, the quality expected of that data and a means by which to assess and indicate the confidence that can be placed in the data. 
Ecuadorian Institute for Standardization NTE INEN 2873 In March 2015, the Ecuadorian Institute for Standardization (INEN) published the Standard NTE INEN 2873 for the Detection and Mapping of Utilities and Underground Infrastructure. This Standard establishes procedures for the mapping of utilities for the purposes of reducing the uncertainties created by existing underground utilities. Its systematic use can provide both a means for continual improvement in the reliability, accuracy, and precision of future utility records, and immediate value during project development. It combines two basic concepts. The first concept is the means of classifying the reliability of the existence and location of utilities already installed and hidden in the ground. It is used during project development and is a major component of Subsurface Utility Engineering (SUE). The second concept is how to specify the recording of utilities exposed during their installation or during maintenance/repair operations so that future records are reliable. It is used primarily during utility installation. It is fundamentally a traditional survey and documentation function. Combining these concepts will lead to a continual reduction in the risks created by underground utilities during future projects involving excavation of any kind. Applications SUE is mainly used at the design stage of a capital works project and when information is being collected for asset management purposes. In both situations, a similar process is followed but the scope of the work and presentation of the information may vary. When a SUE investigation is carried out for a capital works project prior to construction, the objective is generally to collect accurate utility information within the project area to avoid conflict at later stages of the project. For initiatives involving asset management, project owners may be missing information about their underground utilities or have inaccurate data. In this situation a SUE provider would collect the required information and add it to the asset management database, according to the four quality levels prescribed by ASCE Standard 38-02. See also Utility location References Civil engineering Geotechnical engineering Subterranea (geography)
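As a brief, purely illustrative supplement to the conflict-matrix step mentioned in the overview above, the sketch below shows one way a simple utility-versus-design conflict check could be organised in code. The clearance distance and the per-quality-level uncertainty buffers are invented for the example and are not taken from ASCE 38-02, CSA S250 or any other standard; a real SUE deliverable would follow project-specific tolerances and the engineer's judgement.

```python
# Illustrative sketch only: a toy "conflict matrix" comparing mapped utilities
# against proposed design elements. The clearance distance and the uncertainty
# buffers attached to each ASCE 38-02 quality level are invented for this
# example and are not taken from any standard or project specification.

from dataclasses import dataclass

# Assumed horizontal uncertainty (in metres) added around a utility's mapped
# position, depending on how the data was obtained (quality level A to D).
QL_BUFFER_M = {"A": 0.05, "B": 0.30, "C": 1.00, "D": 3.00}

@dataclass
class Utility:
    name: str
    easting: float       # mapped horizontal position (m)
    depth: float         # depth of cover (m)
    quality_level: str   # "A", "B", "C" or "D"

@dataclass
class DesignElement:
    name: str
    easting_min: float   # horizontal extent of the planned excavation (m)
    easting_max: float
    depth: float         # planned excavation depth (m)

def in_conflict(u, d, clearance_m=0.5):
    """Flag a potential conflict when the utility, widened by its quality-level
    buffer and the required clearance, falls inside the excavation envelope."""
    reach = QL_BUFFER_M[u.quality_level] + clearance_m
    horizontally_close = (d.easting_min - reach) <= u.easting <= (d.easting_max + reach)
    vertically_close = u.depth <= d.depth + clearance_m
    return horizontally_close and vertically_close

def conflict_matrix(utilities, elements):
    """Return a nested dict: matrix[utility name][element name] -> True/False."""
    return {u.name: {d.name: in_conflict(u, d) for d in elements} for u in utilities}

utilities = [
    Utility("300 mm water main", easting=103.2, depth=1.2, quality_level="B"),
    Utility("fibre duct", easting=140.0, depth=0.6, quality_level="D"),
]
elements = [DesignElement("storm sewer trench", easting_min=100.0, easting_max=105.0, depth=2.0)]

for utility_name, row in conflict_matrix(utilities, elements).items():
    print(utility_name, row)   # the water main is flagged; the fibre duct is not
```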
Subsurface utility engineering
[ "Engineering" ]
1,640
[ "Construction", "Civil engineering", "Geotechnical engineering" ]
57,479,799
https://en.wikipedia.org/wiki/Speculative%20Store%20Bypass
Speculative Store Bypass (SSB) is the name given to a hardware security vulnerability and its exploitation that takes advantage of speculative execution in a similar way to the Meltdown and Spectre security vulnerabilities. It affects the ARM, AMD and Intel families of processors. It was discovered by researchers at Microsoft Security Response Center and Google Project Zero (GPZ). After being leaked on 3 May 2018 as part of a group of eight additional Spectre-class flaws provisionally named Spectre-NG, it was first disclosed to the public as "Variant 4" on 21 May 2018, alongside a related speculative execution vulnerability designated "Variant 3a". Details Speculative execution exploit Variant 4 is referred to as Speculative Store Bypass (SSB) and has been assigned a CVE identifier. SSB is named Variant 4, but it is the fifth variant in the Spectre-Meltdown class of vulnerabilities. Steps involved in the exploit: "slowly" store a value at a memory location; "quickly" load that value from that memory location; then utilize the value that was just read to disrupt the cache in a detectable way. Impact and mitigation Intel claims that web browsers that are already patched to mitigate Spectre Variants 1 and 2 are partially protected against Variant 4. Intel said in a statement that the likelihood of end users being affected was "low" and that not all protections would be on by default due to some impact on performance. The Chrome JavaScript team confirmed that effective mitigation of Variant 4 in software is infeasible, in part due to performance impact. Intel is planning to address Variant 4 by releasing a microcode patch that creates a new hardware flag named Speculative Store Bypass Disable (SSBD). A stable microcode patch is yet to be delivered, with Intel suggesting that the patch will be ready "in the coming weeks". Many operating system vendors will be releasing software updates to assist with mitigating Variant 4; however, microcode/firmware updates are required for the software updates to have an effect. Speculative execution exploit variants References See also Speculative execution CPU vulnerabilities External links Website detailing the Meltdown and Spectre vulnerabilities, hosted by Graz University of Technology Google Project Zero write-up Meltdown/Spectre Checker Gibson Research Corporation Transient execution CPU vulnerabilities 2018 in computing X86 memory management
Speculative Store Bypass
[ "Technology" ]
482
[ "Transient execution CPU vulnerabilities", "Computer security exploits" ]
57,491,435
https://en.wikipedia.org/wiki/Broadband%20viscoelastic%20spectroscopy
Broadband viscoelastic spectroscopy (BVS) is a technique for studying viscoelastic solids in both bending and torsion. It provides the ability to measure viscoelastic behavior over eleven decades (orders of magnitude) of time and frequency: from 10⁻⁶ to 10⁵ Hz. BVS is typically used either to investigate viscoelastic properties isothermally over a large frequency range or as a function of temperature at a single frequency. It is capable of measuring mechanical properties directly over these frequency and temperature ranges; as such, it does not require time-temperature superposition or the assumption that material properties obey an Arrhenius-type temperature dependence. As a result, it can be used for heterogeneous and anisotropic specimens for which these assumptions do not apply. BVS is often used for the determination of attenuation coefficients, dynamic moduli, and especially damping ratios. BVS was developed primarily to overcome shortcomings in the functional ranges of other viscoelastic characterization techniques. For example, resonant ultrasound spectroscopy (RUS), another popular technique for studying viscoelastic solids, experiences difficulty in determining a material's parameters below its resonant frequency. Furthermore, BVS is less sensitive to sample preparation than RUS. History BVS was first developed by C. P. Chen and R. S. Lakes in 1989 in order to address the shortcomings of existing laboratory techniques for studying viscoelastic materials. It was later refined by M. Brodt et al. to improve the rigidity and resolution of the apparatus, which were sources of error in the original design. First used to study poly(methyl methacrylate) (PMMA), it has since seen applications in determining the properties of bone, capacitor dielectrics, high damping metals, and other such viscoelastic materials. Design The BVS apparatus consists of a specimen surrounded by Helmholtz coils and isolated from external vibrations by a framework constructed from insulating foam and either lead or brass. The specimen is affixed with both a permanent magnet and a mirror. The orientation of the coils with respect to the magnet when a current is driven through them determines whether the specimen undergoes bending or torsion. Angular displacement of the specimen is measured by an interferometer that detects the spatial movement of a reflected laser. This spatial waveform is converted to an electrical one by a light detector and read out on an oscilloscope. This oscilloscope also displays the torque or force waveform from the capacitor driving the current in the Helmholtz coils. Phase delay is determined by comparing these waveforms. Resonance is minimized through the use of short specimens—which have higher resonant frequencies—and by reducing the inertia (magnetic and mass moments) of the magnet. Cubic samarium-cobalt magnets are ideal for high frequency studies. Due to the sample geometry being a short rectangular bar or cylinder, the equation governing the resonance of the BVS specimen geometry has an exact analytic solution, which allows the technique to yield results even for high loss materials. This exact solution provides a relationship between dynamic moduli, angular displacement, and geometric parameters. The inherent lack of drift and friction in the apparatus is responsible for its large range of operating frequencies. References Spectroscopy
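As an illustrative sketch of the phase-comparison step described above (and not of any published BVS analysis code), the snippet below fits the two recorded waveforms at the known drive frequency and converts their phase difference into a loss tangent. The sampling parameters, the synthetic signals and the unspecified geometry factor needed to turn the torque-to-angle amplitude ratio into a modulus are all assumptions made for the example.

```python
# Illustrative sketch only: estimating the phase lag between the driving torque
# waveform and the resulting angular-displacement waveform (the two oscilloscope
# channels), and converting that lag into a loss tangent. The sampling
# parameters and synthetic signals are assumed, and the geometry factor needed
# to turn the amplitude ratio into a modulus is left as an unspecified constant.

import numpy as np

def loss_tangent(torque, displacement, sample_rate_hz, drive_freq_hz):
    """Fit each channel to A*cos(wt) + B*sin(wt) at the known drive frequency and
    return (phase lag, tan delta, torque-to-displacement amplitude ratio)."""
    torque = np.asarray(torque, dtype=float)
    displacement = np.asarray(displacement, dtype=float)
    t = np.arange(len(torque)) / sample_rate_hz
    w = 2.0 * np.pi * drive_freq_hz
    basis = np.column_stack([np.cos(w * t), np.sin(w * t)])

    def fit(signal):
        (a, b), *_ = np.linalg.lstsq(basis, signal, rcond=None)
        return np.hypot(a, b), np.arctan2(-b, a)   # amplitude and phase of A*cos(wt + phi)

    amp_torque, phase_torque = fit(torque)
    amp_disp, phase_disp = fit(displacement)
    delta = phase_torque - phase_disp              # displacement lags torque by delta
    return delta, np.tan(delta), amp_torque / amp_disp

# Synthetic example: 1 Hz drive with the displacement lagging the torque by 0.05 rad.
fs, f0 = 2000.0, 1.0
t = np.arange(0, 5.0, 1.0 / fs)
torque = 1.0 * np.cos(2.0 * np.pi * f0 * t)
theta = 0.002 * np.cos(2.0 * np.pi * f0 * t - 0.05)
delta, tan_delta, ratio = loss_tangent(torque, theta, fs, f0)
print(round(delta, 4), round(tan_delta, 4))        # ~0.05 and ~0.0500
# A complex modulus would then follow as G* = geometry_factor * ratio * exp(1j * delta),
# where geometry_factor depends on the specimen dimensions (assumed known separately).
```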
Broadband viscoelastic spectroscopy
[ "Physics", "Chemistry" ]
676
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
39,867,144
https://en.wikipedia.org/wiki/Ancestral%20sequence%20reconstruction
Ancestral sequence reconstruction (ASR) – also known as ancestral gene/sequence reconstruction/resurrection – is a technique used in the study of molecular evolution. The method uses related sequences to reconstruct an "ancestral" gene from a multiple sequence alignment. The method can be used to 'resurrect' ancestral proteins and was suggested in 1963 by Linus Pauling and Emile Zuckerkandl. In the case of enzymes, this approach has been called paleoenzymology (British: palaeoenzymology). Some early efforts were made in the 1980s and 1990s, led by the laboratory of Steven A. Benner, showing the potential of this technique. Thanks to improved algorithms and to better sequencing and synthesis techniques, the method was developed further in the early 2000s to allow the resurrection of a greater variety of and much more ancient genes. Over the last decade, ancestral protein resurrection has developed as a strategy to reveal the mechanisms and dynamics of protein evolution. Principles Unlike conventional evolutionary and biochemical approaches to studying proteins, i.e. the so-called horizontal comparison of related protein homologues from different branch ends of the tree of life, ASR probes the statistically inferred ancestral proteins within the nodes of the tree – in a vertical manner (see diagram, right). This approach gives access to protein properties that may have transiently arisen over evolutionary time and has recently been used as a way to infer the potential selection pressures that resulted in present-day sequences. ASR has been used to probe the causative mutation that resulted in a protein's neofunctionalization after duplication by first determining that said mutation was located between ancestors '5' and '4' on the diagram (illustratively) using functional assays. In the field of protein biophysics, ASR has also been used to study the development of a protein's thermodynamic and kinetic landscapes over evolutionary time as well as protein folding pathways by combining many modern day analytical techniques such as HX/MS. These sorts of insights are typically inferred from several ancestors reconstructed along a phylogeny – referring to the previous analogy, by studying nodes higher and higher (further and further back in evolutionary time) within the tree of life. Most ASR studies are conducted in vitro, and have revealed ancestral protein properties that seem to be evolutionarily desirable traits – such as increased thermostability, catalytic activity and catalytic promiscuity. These data have been attributed to artifacts of the ASR algorithms, as well as to indicative illustrations of ancient Earth's environment – often, ASR research must be complemented with extensive controls (usually alternate ASR experiments) to mitigate algorithmic error. Not all studied ASR proteins exhibit this so-called 'ancestral superiority'. The nascent field of 'evolutionary biochemistry' has been bolstered by the recent increase in ASR studies using the ancestors as ways to probe organismal fitness within certain cellular contexts – effectively testing ancestral proteins in vivo. Due to inherent limitations in these sorts of studies – primarily the lack of suitably ancient genomes to fit these ancestors into, the small repertoire of well categorized laboratory model systems, and the inability to mimic ancient cellular environments – very few ASR studies in vivo have been conducted. 
Despite the above-mentioned obstacles, preliminary insights into this avenue of research from a 2015 paper have revealed that the 'ancestral superiority' observed in vitro for a given protein was not recapitulated in vivo. ASR presents one of a few mechanisms to study biochemistry of the Precambrian era of life (>541Ma) and is hence often used in 'paleogenetics'; indeed Zuckerkandl and Pauling originally intended ASR to be the starting point of a field they termed 'Paleobiochemistry'. Methodology Several related homologues of the protein of interest are selected and aligned in a multiple sequence alignment (MSA), and a 'phylogenetic tree' is constructed with statistically inferred sequences at the nodes of the branches. It is these sequences that are the so-called 'ancestors' – the process of synthesizing the corresponding DNA, transforming it into a cell and producing a protein is the so-called 'reconstruction'. Ancestral sequences are typically calculated by maximum likelihood; however, Bayesian methods are also implemented. Because the ancestors are inferred from a phylogeny, the topology and composition of the phylogeny play a major role in the output ASR sequences. Given that there is much discourse and debate over how to construct phylogenies – for example whether or not thermophilic bacteria are basal or derivative in bacterial evolution – many ASR papers construct several phylogenies with differing topologies and hence differing ASR sequences. These sequences are then compared and often several (~10) are expressed and studied per phylogenetic node. ASR does not claim to recreate the actual sequence of the ancient protein/DNA, but rather a sequence that is likely to be similar to the one that was indeed at the node. This is not considered a shortcoming of ASR as it fits into the 'neutral network' model of protein evolution, whereby at evolutionary junctions (nodes) a population of genotypically different but phenotypically similar protein sequences existed in the extant organismal population. Hence, it is possible that ASR would generate one of the sequences of a node's neutral network and while it may not represent the genotype of the last common ancestor of the modern day sequences, it does likely represent the phenotype. This is supported by the modern day observation that many mutations in a protein's non-catalytic/functional site cause minor changes in biophysical properties. Hence, ASR allows one to probe the biophysical properties of past proteins and is indicative of ancient genetics. Maximum likelihood (ML) methods work by generating a sequence where the residue at each position is predicted to be the most likely to occupy said position by the method of inference used – typically this is a scoring matrix (similar to those used in BLASTs or MSAs) calculated from extant sequences. Alternative methods include maximum parsimony (MP), which constructs a sequence based on a model of sequence evolution – usually the idea that the minimum number of nucleotide sequence changes represents the most efficient route for evolution to take and by Occam's razor is the most likely. Another method involves the consideration of residue uncertainty – so-called Bayesian methods. This form of ASR is sometimes used to complement ML methods but typically produces more ambiguous sequences. 
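As a toy illustration of the maximum-likelihood idea described above (and of how posterior probabilities expose ambiguous positions, discussed next), the sketch below performs marginal reconstruction of the root state of a small star-shaped nucleotide tree under the Jukes–Cantor model. Real ASR software works on amino acids with empirical substitution matrices and applies Felsenstein's pruning algorithm over a full phylogeny; the star topology, the branch lengths, the 0.8 ambiguity cutoff and the toy alignment below are assumptions made only to keep the example short.

```python
# Illustrative sketch only: marginal maximum-likelihood reconstruction of the
# root state of a tiny star-shaped nucleotide tree under the Jukes-Cantor (JC69)
# model. Real ASR software works on amino acids with empirical substitution
# matrices and applies Felsenstein's pruning algorithm over a full phylogeny;
# the star topology, branch lengths, alignment and 0.8 cutoff are assumptions.

import math

STATES = "ACGT"

def jc69_prob(from_state, to_state, branch_length):
    """JC69 transition probability for a branch of the given expected length
    (substitutions per site)."""
    decay = math.exp(-4.0 * branch_length / 3.0)
    return 0.25 + 0.75 * decay if from_state == to_state else 0.25 - 0.25 * decay

def root_posteriors(column, branch_lengths):
    """Posterior probability of each root state for one alignment column, on a
    star tree (every leaf hangs directly off the root), with a uniform prior."""
    likes = {}
    for x in STATES:
        like = 0.25  # uniform prior over root states
        for leaf_state, b in zip(column, branch_lengths):
            like *= jc69_prob(x, leaf_state, b)
        likes[x] = like
    total = sum(likes.values())
    return {x: like / total for x, like in likes.items()}

def reconstruct(columns, branch_lengths, ambiguity_cutoff=0.8):
    """Return the most probable root residue per column, written in lower case
    when its posterior probability falls below the cutoff (an ambiguous site)."""
    ancestor = []
    for column in columns:
        post = root_posteriors(column, branch_lengths)
        best = max(post, key=post.get)
        ancestor.append(best if post[best] >= ambiguity_cutoff else best.lower())
    return "".join(ancestor)

# Toy alignment of four extant sequences; the first leaf sits on the shortest branch.
seqs = ["ACGTT", "ACGTA", "ACGCA", "TCGCA"]
columns = list(zip(*seqs))   # per-site columns of the alignment
print(reconstruct(columns, branch_lengths=[0.05, 0.10, 0.20, 0.40]))
```

Columns whose best state falls below the cutoff come out in lower case, which is one simple way of flagging the ambiguous positions discussed next.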
In ASR, the term 'ambiguity' refers to residue positions where no clear substitution can be predicted – often in these cases, several ASR sequences are produced, encompassing most of the ambiguities, and compared to one another. ML ASR often needs complementary experiments to indicate that the derived sequences are more than just consensuses of the input sequences. This is particularly necessary in the observation of 'Ancestral Superiority'. In the trend of increasing thermostability, one explanation is that ML ASR creates a consensus sequence of several different, parallel mechanisms that evolved to confer minor protein thermostability throughout the phylogeny – leading to an additive effect resulting in 'superior' ancestral thermostability. The expression of consensus sequences and parallel ASR via non-ML methods are often required to rule out this explanation for each experiment. One other concern raised by ML methods is that the scoring matrices are derived from modern sequences, and particular amino acid frequencies seen today may not be the same as in Precambrian biology, resulting in skewed sequence inference. Several studies have attempted to construct ancient scoring matrices via various methodologies and have compared the resultant sequences and their proteins' biophysical properties. While these modified sequences result in somewhat different ASR sequences, the observed biophysical properties did not seem to vary outside of experimental error. Because of the 'holistic' nature of ASR and the intense complexity that arises when one considers all the possible sources of experimental error, the experimental community considers the ultimate measurement of ASR reliability to be the comparison of several alternate ASR reconstructions of the same node and the identification of similar biophysical properties. While this method does not offer a robust statistical, mathematical measure of reliability, it does build on the fundamental idea used in ASR that individual amino acid substitutions do not cause significant biophysical property changes in a protein – a tenet that must hold true in order to be able to overcome the effect of inference ambiguity. Candidates used for ASR are often selected based on the particular property of interest being studied – e.g. thermostability. By selecting sequences from either end of a property's range (e.g., psychrophilic proteins and thermophilic proteins) but within a protein family, ASR can be used to probe the specific sequence changes that conferred the observed biophysical effect – such as stabilising interactions. Considering the diagram, if sequence 'A' encoded a protein that was optimally functional at neutral pH and 'D' one optimally functional in acidic conditions, sequence changes between '5' and '2' may illustrate the precise biophysical explanation for this difference. As ASR experiments can extract ancestors that are likely billions of years old, there are often tens if not hundreds of sequence changes between the ancestors themselves, and between ancestors and extant sequences – because of this, such sequence-function evolutionary studies can take a lot of work and rational direction. Resurrected proteins There are many examples of ancestral proteins that have been computationally reconstructed, expressed in living cell lines, and – in many cases – purified and biochemically studied. The Thornton lab notably resurrected several ancestral hormone receptors (from about 500Ma) and collaborated with the Stevens lab to resurrect ancient V-ATPase subunits from yeast (800Ma).
The Marqusee lab has recently published several studies concerning the evolutionary biophysical history of E. coli Ribonuclease H1. Some other examples are ancestral visual pigments in vertebrates; enzymes in yeast that break down sugars (800Ma); enzymes in bacteria that provide resistance to antibiotics (2 – 3Ga); the ribonucleases involved in ruminant digestion; the alcohol dehydrogenases (Adhs) involved in yeast fermentation (~85Ma); and RuBisCO in Solanaceae. The 'age' of a reconstructed sequence is determined using a molecular clock model, and often several are employed. This dating technique is often calibrated using geological time-points (such as ancient ocean constituents or BIFs) and, while these clocks offer the only method of inferring a very ancient protein's age, they have sweeping error margins and are difficult to defend against contrary data. To this end, ASR 'age' should really be used only as an indicative feature and is often surpassed altogether by a measurement of the number of substitutions between the ancestral and the modern sequences (the fundament on which the clock is calculated). That being said, the use of a clock allows one to compare observed biophysical data of an ASR protein to the geological or ecological environment at the time. For example, ASR studies on bacterial EF-Tus (proteins involved in translation that are likely rarely subject to HGT and typically exhibit melting temperatures (Tm) about 2 °C greater than the environmental temperature (Tenv)) indicate a hotter Precambrian Earth, which fits very closely with geological data on ancient Earth ocean temperatures based on Oxygen-18 isotopic levels. ASR studies of yeast Adhs reveal that the emergence of subfunctionalized Adhs for ethanol metabolism (not just waste excretion) arose at a time similar to the dawn of fleshy fruit in the Cretaceous Period, and that before this emergence, Adh served to excrete ethanol as a byproduct of excess pyruvate. The use of a clock also perhaps indicates that the origin of life occurred before the earliest molecular fossils indicate (>4.1Ga), but given the debatable reliability of molecular clocks, such observations should be taken with caution. Thioredoxin One example is the reconstruction of thioredoxin enzymes from up to 4 billion year old organisms. Whereas the chemical activity of these reconstructed enzymes was remarkably similar to that of modern enzymes, their physical properties showed significantly elevated thermal and acidic stability. These results were interpreted as suggesting that ancient life may have evolved in oceans that were much hotter and more acidic than today. Significance These experiments address various important questions in evolutionary biology: does evolution proceed in small steps or in large leaps; is evolution reversible; how does complexity evolve? It has been shown that slight mutations in the amino acid sequence of hormone receptors determine an important change in their preferences for hormones. These changes mean huge steps in the evolution of the endocrine system. Thus very small changes at the molecular level may have enormous consequences. The Thornton lab has also been able to show that evolution is irreversible by studying the glucocorticoid receptor. This receptor was changed into a cortisol receptor by seven mutations, but reversing these mutations did not give the original receptor back.
This indicates that epistasis plays a major role in protein evolution – an observation that, in combination with several examples of parallel evolution, supports the neutral network model mentioned above. Other earlier neutral mutations acted as a ratchet and made the changes to the receptor irreversible. These different experiments on receptors show that, during their evolution, proteins are greatly differentiated and this explains how complexity may evolve. A closer look at the different ancestral hormone receptors and the various hormones shows that, at the level of interaction between single amino acid residues and chemical groups of the hormones, new specificities arise by very small but specific changes. Knowledge about these changes may, for example, lead to the synthesis of hormonal equivalents capable of mimicking or inhibiting the action of a hormone, which might open possibilities for new therapies. Given that ASR has revealed a tendency towards ancient thermostability and enzymatic promiscuity, ASR is a valuable tool for protein engineers who often desire these traits (producing effects sometimes greater than current, rationally guided tools). ASR also promises to 'resurrect' phenotypically similar 'ancient organisms', which in turn would allow evolutionary biochemists to probe the story of life. Proponents of ASR such as Benner state that through these and other experiments, the end of the current century will see a level of understanding in biology analogous to the one that arose in classical chemistry in the last century. References Evolutionary biology Molecular biology Molecular evolution Paleobiology
Ancestral sequence reconstruction
[ "Chemistry", "Biology" ]
3,035
[ "Evolutionary biology", "Evolutionary processes", "Molecular evolution", "Paleobiology", "Molecular biology", "Biochemistry" ]
48,955,231
https://en.wikipedia.org/wiki/Plant%20press
A plant press is a set of equipment used by botanists to flatten and dry field samples so that they can be easily stored. A professional plant press is made to the standard maximum size for biological specimens to be filed in a particular herbarium. A flower press is a similar device of no standard size that is used to make flat dried flowers for pressed flower craft. Specimens prepared in a plant press are later glued to archival-quality card stock with their labels, and are filed in a herbarium. Labels are made with archival ink (or pencil) and paper, and attached with archival-quality glue. Construction A modern plant press consists of two strong outer boards with straps that can be tightened around them to exert pressure. Between the boards, fresh plant samples are placed, carefully labelled, between layers of paper. Further layers of absorbent paper and corrugated cardboard are usually added to help to dry the samples as quickly as possible, which prevents decay and improves colour retention. Layers of a sponge material can be used in order to prevent squashing parts of the specimens, such as fruit. Older plant presses and some modern flower presses have screws to supply the pressure, which often limits the thickness of the stack of samples that can be put into one press. History Luca Ghini (1490–1556), an Italian physician and botanist, created the first recorded herbarium, and is considered the first person to have used drying under pressure to prepare a plant collection. William Withering, an English botanist, geologist, chemist and physician, wrote popular books on British botany, and by describing the screw-down plant press (and the vasculum) he brought it to the attention of amateur naturalists in Britain around 1771. References External links — illustrates use of a plant press. Press
Plant press
[ "Biology" ]
356
[ "Plants", "Botany" ]
48,955,331
https://en.wikipedia.org/wiki/Mobolize
Mobolize is a mobile device software company with headquarters in Los Altos, CA. In 2013, Sprint announced a technology partnership with Mobolize. In October 2020, Akamai and Mobolize announced a partnership to offer security to mobile devices for enterprises. Mobolize's Data Management Engine will support Akamai Enterprise Threat Protector, a cloud secure web gateway (SWG). In June 2021, Akamai expanded its partnership with Mobolize to include zero trust capabilities on mobile devices. Mobolize was recognized in 2014 by CTIA as winner of the Telecom Council Showcase. In 2015, Mobolize was recognized by LightReading as a Leading Lights finalist for Best New Mobile Product, and also by FierceWireless as a Fierce Innovation Awards finalist for Network Service Delivery. The company is led by co-founders David Cohen and William Chow as Co–CEOs. David Cohen was previously a founder of Frontbridge Technologies, which was acquired by Microsoft and became Microsoft Exchange Hosted Services. William Chow was previously the chief architect of QLogic Storage Solutions Group (formerly Troika Networks) and security team lead at Stamps.com. References External links The Data Optimization Paradox Mobile software Wireless networking Security technology
Mobolize
[ "Technology", "Engineering" ]
248
[ "Wireless networking", "Computer networks engineering" ]
48,956,727
https://en.wikipedia.org/wiki/Pioneer%20SX-1980
The Pioneer SX-1980 is an AM/FM radio receiver that Pioneer Corporation introduced in 1978, to be matched with the HPM series of speakers. It was rated at 270 watts RMS per channel into 8 ohms, both channels driven. However, in the September 1978 issue of the magazine Audio, Leonard Feldman performed a specification test on the SX-1980 and stated in his report: Though the new [IHF mandated] "Dynamic Headroom" measurement is specified in dB, it should be mentioned that based upon the short-term signal used to measure the 2.3 dB headroom of this amplifier, it was producing nearly 460 watts of short-term power under these test conditions! At an official rating of 270 watts RMS per channel into 8 ohms with a measured 2.3 dB dynamic headroom, this makes the SX-1980 Pioneer's most powerful receiver, as well as being one of the most powerful receivers ever manufactured in the world, to date. It was also tested in the December 1978 issue of Stereo Review. Some results were: With both channels driving 8-ohm loads at 1,000 Hz, the outputs clipped at 300 watts per channel (IHF clipping headroom equals 0.46 dB). The dynamic headroom was 0.63 dB. ... The distortion at 1,000 Hz was nearly unmeasurable at any power level. It was no more than 0.003% from 0.1 to 100 watts output, rose to 0.0045% between 200 and 290 watts, and reached its maximum of 0.008% at 300 watts, just before clipping occurred. The intermodulation distortion (IM) was about 0.03% at most power levels up to 100 watts and reached 0.045% at 300 watts. The SX-1980 is wide, deep, and high, and weighs . The case, like the Pioneer HPM-100, has a fine-grain, walnut veneer finish. It has a pair of large die-cast aluminium heatsinks, located at the sides towards the back, in order to dissipate the immense heat that the receiver can generate. The receiver had 12 Field Effect Transistors (FETs), 11 Integrated Circuits (ICs), 130 transistors and 84 diodes. Its retail price in 1978 was US$1295. According to the CPI inflation calculator, that would equate to about US$5,450 in 2021. The SX-1980 is known for its total harmonic distortion (THD) rating of less than 0.03% at rated power, which is much less than the 1–10% commonly used today. The build quality is also higher than average. The unit compares favorably in side-by-side tests with newer large receivers. With a rated power consumption of 1400 volt-amperes or nearly 1000 watts, it would consume a fair amount of the power a standard 15-amp, 120-volt (North America) circuit can safely deliver. From the operating instructions document: The adoption of a single-stage differential amplifier with low-noise dual transistors, a current mirror load and a 3-stage Darlington triple SEPP circuit provides a bumper power output of 270 watts + 270 watts (20 hertz to 20,000 hertz with no more than 0.03% THD) which is extremely stable. ... The power amplifier is configured as a DC power amplifier with the capacitors removed from the NFB circuit for a flat gain response. ... The large-sized toroidal transformers with their superb regulation employ 22,000 μF large-capacity electrolytic capacitors (two per each channel). There are independent dual power supply circuits with separate power transformer windings to provide power for the left and right channels. ... The FM front end incorporates a two-stage RF circuit that employs a 5-gang tuning capacitor and three dual gate MOS FETs for high gain and low noise. This configuration excels in ridding the sound of undesirable interference ... 
The FM IF amplifier combines five dual-element ceramic filters ... for high selectivity (80 dB) and low distortion ... The local oscillator includes Pioneer's very own quartz sampling locked APC (Automatic Phase Control). The output of this extremely precise quartz oscillator is divided into frequencies of 100 kHz and so reception frequencies which are a multiple of 100 kHz are locked at every 100 kHz. Apart from its high output power capability, the Pioneer SX-1980 had quite advanced performance in many other important areas. The THD versus frequency response curve showed very low distortion levels over a wide range of operating conditions. Phono cartridge load selectors allowed the user to select from three input resistances (10, 50, and 100 kΩ) and four input capacitances (100, 200, 300, and 400 pF). The RIAA equalisation was accurate to within ±0.2 dB from 20 Hz to 20 kHz (tested as ±0.1 dB), and a high-accuracy performance specification such as this is not often achieved, even today. References Pioneer Corporation products Radio technology Products introduced in 1978
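As a back-of-the-envelope check (not taken from the cited reviews), the "nearly 460 watts" short-term figure quoted above for the SX-1980 follows directly from converting the measured 2.3 dB dynamic headroom into a power ratio:

$$P_{\text{short-term}} = P_{\text{rated}} \times 10^{\,\text{headroom}/10} = 270\ \text{W} \times 10^{2.3/10} \approx 459\ \text{W per channel}.$$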
Pioneer SX-1980
[ "Technology", "Engineering" ]
1,081
[ "Information and communications technology", "Telecommunications engineering", "Radio technology" ]
48,963,212
https://en.wikipedia.org/wiki/Four%20Corners%20Methane%20Hot%20Spot
The Four Corners Methane Hot Spot (also called the San Juan Basin methane leak or New Mexico methane source or various related permutations) refers to a clustering of large methane sources near San Juan Basin, near Four Corners, New Mexico, United States. It is perhaps the largest source of methane release in the United States and accounts for about a tenth of the annual gas industry amount. The area has upwards of 40,000 oil and gas wells. The exact cause of the methane leak remained unidentified as of 2015, but appeared to be related to coalbed methane extraction. The San Juan Basin contains the Fruitland coal formation. Ashley Ager, a geologist with LT Environmental, Inc., a company with oil and gas industry contracts, has argued that the leak is naturally occurring due to this formation contacting the surface. However, NASA researchers concluded in 2016 that oil and gas production and distribution activities were principally responsible for the methane releases. The Four Corners area includes other methane sources such as seepage from coal mines, but researchers found these sources too small to explain the bulk of the observed emissions. See also Aliso Canyon gas leak References External links Four Corners Oil & Gas Conference website San Juan Citizens Methane Air pollution in the United States Open problems Energy in New Mexico
Four Corners Methane Hot Spot
[ "Chemistry" ]
254
[ "Greenhouse gases", "Methane" ]
48,963,327
https://en.wikipedia.org/wiki/Monotonicity%20%28mechanism%20design%29
In mechanism design, monotonicity is a property of a social choice function. It is a necessary condition for being able to implement such a function using a strategyproof mechanism. Its verbal description is: if, when a single agent changes its reported valuation, the outcome selected by the function changes, then the change must reflect an increase in that agent's relative preference for the new outcome over the old one. In other words: the outcome can switch from $x$ to $x'$ only if the agent's valuation has shifted in favour of $x'$ relative to $x$. Notation There is a set $X$ of possible outcomes. There are $n$ agents which have different valuations for each outcome. The valuation of agent $i$ is represented as a function $v_i : X \to \mathbb{R}$, which expresses the value it assigns to each alternative. The vector of all value-functions is denoted by $v$. For every agent $i$, the vector of all value-functions of the other agents is denoted by $v_{-i}$. So $v \equiv (v_i, v_{-i})$. A social choice function is a function that takes as input the value-vector $v$ and returns an outcome $x \in X$. It is denoted by $\mathrm{Outcome}(v)$ or $\mathrm{Outcome}(v_i, v_{-i})$. In mechanisms without money A social choice function satisfies the strong monotonicity property (SMON) if for every agent $i$ and every $v_i, v_i', v_{-i}$, if: $x = \mathrm{Outcome}(v_i, v_{-i})$ and $x' = \mathrm{Outcome}(v_i', v_{-i})$ then: $v_i(x) \geq v_i(x')$ (by the initial preferences, the agent prefers the initial outcome). $v_i'(x') \geq v_i'(x)$ (by the final preferences, the agent prefers the final outcome). Or equivalently: $v_i(x) - v_i(x') \geq 0 \geq v_i'(x) - v_i'(x')$. Necessity If there exists a strategyproof mechanism without money, with an outcome function $\mathrm{Outcome}$, then this function must be SMON. PROOF: Fix some agent $i$, some valuation vector $v_{-i}$ of the other agents, and two valuations $v_i, v_i'$ of agent $i$; write $x = \mathrm{Outcome}(v_i, v_{-i})$ and $x' = \mathrm{Outcome}(v_i', v_{-i})$. Strategyproofness means that an agent with real valuation $v_i$ weakly prefers to declare $v_i$ rather than to lie and declare $v_i'$; hence: $v_i(x) \geq v_i(x')$. Similarly, an agent with real valuation $v_i'$ weakly prefers to declare $v_i'$ rather than to lie and declare $v_i$; hence: $v_i'(x') \geq v_i'(x)$. In mechanisms with money When the mechanism is allowed to use money, the SMON property is no longer necessary for implementability, since the mechanism can switch to an alternative which is less preferable for an agent and compensate that agent with money. A social choice function satisfies the weak monotonicity property (WMON) if for every agent $i$ and every $v_i, v_i', v_{-i}$, if: $x = \mathrm{Outcome}(v_i, v_{-i})$ and $x' = \mathrm{Outcome}(v_i', v_{-i})$ then: $v_i'(x') - v_i(x') \geq v_i'(x) - v_i(x)$. Necessity If there exists a strategyproof mechanism with an outcome function $\mathrm{Outcome}$, then this function must be WMON. PROOF: Fix some agent $i$, some valuation vector $v_{-i}$, and two valuations $v_i, v_i'$ with outcomes $x$ and $x'$ as above. A strategyproof mechanism has a price function $\mathrm{Price}(x, v_{-i})$, that determines how much payment agent $i$ receives when the outcome of the mechanism is $x$; this price depends on the outcome $x$ but must not depend directly on $v_i$. Strategyproofness means that a player with valuation $v_i$ weakly prefers to declare $v_i$ over declaring $v_i'$; hence: $v_i(x) - v_i(x') \geq \mathrm{Price}(x', v_{-i}) - \mathrm{Price}(x, v_{-i})$. Similarly, a player with valuation $v_i'$ weakly prefers to declare $v_i'$ over declaring $v_i$; hence: $v_i'(x) - v_i'(x') \leq \mathrm{Price}(x', v_{-i}) - \mathrm{Price}(x, v_{-i})$. Subtracting the second inequality from the first gives the WMON property. Sufficiency Monotonicity is not always a sufficient condition for implementability, but there are some important cases in which it is sufficient (i.e., every WMON social-choice function can be implemented): When the agents have single-parameter utility functions. In many convex domains, most notably when the range of each value-function is . When the range of each value-function is , or a cube (Gui, Müller, and Vohra (2004)). In any convex domain (Saks and Yu (2005)). In any domain with a convex closure. In any "monotonicity domain". Examples When agents have single peaked preferences, the median social-choice function (selecting the median among the outcomes that are best for the agents) is strongly monotonic. Indeed, the mechanism selecting the median vote is a truthful mechanism without money. See median voting rule. When agents have general preferences represented by cardinal utility functions, the utilitarian social-choice function (selecting the outcome that maximizes the sum of the agents' valuations) is not strongly monotonic but it is weakly monotonic.
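As a rough illustration (not part of the original article), the sketch below brute-force-checks the WMON inequality defined above for a toy utilitarian rule with two agents and two outcomes; the outcome names, the integer valuation grid and the tie-breaking order are all arbitrary choices for the example.

```python
# Numerical WMON check for a toy utilitarian social choice function.
# All names and the valuation grid are illustrative assumptions.
from itertools import product

outcomes = ["x", "y"]

def utilitarian(v1, v2):
    """Pick the outcome maximizing the sum of the two agents' valuations."""
    return max(outcomes, key=lambda o: v1[o] + v2[o])

grid = [{"x": a, "y": b} for a, b in product(range(4), repeat=2)]

def wmon_holds_for_agent1():
    for v1, v1_new, v2 in product(grid, grid, grid):
        x = utilitarian(v1, v2)          # outcome under the original report
        x_new = utilitarian(v1_new, v2)  # outcome under the changed report
        # WMON: the agent's value for the new outcome must have increased at
        # least as much as its value for the old outcome.
        if v1_new[x_new] - v1[x_new] < v1_new[x] - v1[x]:
            return False
    return True

print(wmon_holds_for_agent1())  # expected: True for this utilitarian rule
```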
The utilitarian function can indeed be implemented by the VCG mechanism, which is a truthful mechanism with money. In job-scheduling, the makespan-minimization social-choice function is neither strongly monotonic nor weakly monotonic. Indeed, it cannot be implemented by a truthful mechanism; see truthful job scheduling. See also The monotonicity criterion in voting systems. Maskin monotonicity Other meanings of monotonicity in different fields. References Mechanism design
Monotonicity (mechanism design)
[ "Mathematics" ]
801
[ "Game theory", "Mechanism design" ]
48,969,866
https://en.wikipedia.org/wiki/Phytoprogestogen
Phytoprogestogens, also known as phytoprogestins, are phytochemicals (that is, naturally occurring, plant-derived chemicals) with progestogenic effects. Relative to their phytoestrogen counterparts, phytoprogestogens are rare. However, a number have been identified, including kaempferol, diosgenin (found in yam), apigenin (found in chasteberry), naringenin, and syringic acid, among others. In addition, 3,8-dihydrodiligustilide from Ligusticum chuanxiong is a potent progestogen (EC50 = 90 nM), whereas riligustilide is a weak progestogen (EC50 ≈ 81 μM). References Progestogens
Phytoprogestogen
[ "Chemistry" ]
183
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
48,969,961
https://en.wikipedia.org/wiki/ViennaRNA%20Package
The ViennaRNA Package is software, a set of standalone programs and libraries used for predicting and analysing RNA nucleic acid secondary structures. The source code for the package is released as free and open-source software and compiled binaries are available for the operating systems Linux, macOS, and Windows. The original paper has been cited over 2,000 times. Background The three dimensional structure of biological macromolecules like proteins and nucleic acids play a critical role in determining their functional role. This process of decoding function from the sequence is an experimentally and computationally challenging question addressed widely. RNA structures form complex secondary and tertiary structures compared to DNA which form duplexes with full complementarity between two strands. This is partly because the extra oxygen in RNA increases the propensity for hydrogen bonding in the nucleic acid backbone. The base pairing and base stacking interactions of RNA play critical role in formation of ribosome, spliceosome, or tRNA. Secondary structure prediction is commonly done using approaches like dynamic programming, energy minimisation (for most stable structure) and generating suboptimal structures. Many structure prediction tools have been implemented also. Development The first version of the ViennaRNA Package was published by Hofacker et al. in 1994. The package distributed tools to compute either minimum free energy structures or partition functions of RNA molecules; both using the idea of dynamic programming. Non-thermodynamic criterion like formation of maximum matching or various versions of kinetic folding along with an inverse folding heuristic to determine structurally neutral sequences were implemented. Additionally, the package also contained a statistics suite with routines for cluster analysis, statistical geometry, and split decomposition. The package was made available as library and a set of standalone routines. Version 2.0 A number of major systemic changes were introduced in this version with the use of a new parametrized energy model (Turner 2004), restructuring of the RNAlib to support concurrent computations in thread-safe manner, improvements to the application programming interface (API), and inclusion of several new auxiliary tools. For example, tools to assess RNA-RNA interactions and restricted ensembles of structures. Further, other features included additional output information such as centroid structures and maximum expected accuracy structures derived from base pairing probabilities, or z-scores for locally stable secondary structures, and support for input in FASTA format. The updates, however, are compatible with earlier versions without affecting the computational efficiency of the core algorithms. Web server The tools provided by the ViennaRNA Package are also available for public use through a web interface. Tools In addition to prediction and analysis tools, the ViennaRNA Package contains several scripts and utilities for plotting and input-output processing. A summary of the available programs is collected in the table below (an exhaustive list with examples can be found in the official documentation). References See also Nucleic acid structure determination Nucleic acid structure prediction List of RNA structure prediction software External links Bioinformatics software Bioinformatics algorithms Computational biology
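As an illustrative aside (not taken from the ViennaRNA source), the sketch below shows the dynamic-programming idea mentioned above in its simplest form: the Nussinov base-pair-maximization recursion. The actual RNAfold algorithms minimize free energy under the Turner nearest-neighbour parameters rather than merely counting pairs, but the table-filling structure is analogous. The example sequence and minimum hairpin loop size are arbitrary.

```python
# Nussinov-style dynamic programme: maximize the number of nested base pairs.
# A deliberately simplified stand-in for the energy-based DP used by RNAfold.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def max_pairs(seq, min_loop=3):
    """Return the maximum number of nested base pairs for seq."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):              # increasing subsequence length
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                      # base i left unpaired
            if (seq[i], seq[j]) in PAIRS:
                best = max(best, dp[i + 1][j - 1] + 1)   # i pairs with j
            for k in range(i + 1, j):                # bifurcation into two halves
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1]

print(max_pairs("GGGAAAUCC"))   # 3 nested pairs for this toy hairpin-like sequence
```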
ViennaRNA Package
[ "Biology" ]
613
[ "Bioinformatics", "Computational biology", "Bioinformatics software", "Bioinformatics algorithms" ]
48,970,294
https://en.wikipedia.org/wiki/G-fibration
In algebraic topology, a G-fibration or principal fibration is a generalization of a principal G-bundle, just as a fibration is a generalization of a fiber bundle. By definition, given a topological monoid G, a G-fibration is a fibration p: P→B together with a continuous right monoid action P × G → P such that (1) $p(xg) = p(x)$ for all x in P and g in G. (2) For each x in P, the map $G \to p^{-1}(p(x)),\ g \mapsto xg$ is a weak equivalence. A principal G-bundle is a prototypical example of a G-fibration. Another example is Moore's path space fibration: namely, let the total space be the space of Moore paths in a based space X, that is, paths of various length starting at the base point. Then the fibration that sends each path to its end-point is a G-fibration with G the space of loops of various lengths in X. References Algebraic topology Differential geometry Fiber bundles
G-fibration
[ "Mathematics" ]
200
[ "Topology stubs", "Fields of abstract algebra", "Topology", "Algebraic topology" ]
48,970,783
https://en.wikipedia.org/wiki/Gq-mER
The Gq-coupled membrane estrogen receptor (Gq-mER) is a G protein-coupled receptor present in the hypothalamus that has not yet been cloned. It is a membrane-associated receptor that is Gq-coupled to a phospholipase C–protein kinase C–protein kinase A (PLC–PKC–PKA) pathway. The receptor has been implicated in the control of energy homeostasis. Gq-mER is bound and activated by estradiol, and is a putative membrane estrogen receptor (mER). A nonsteroidal diphenylacrylamide derivative, STX, which is structurally related to 4-hydroxytamoxifen (afimoxifene), is an agonist of the receptor with greater potency than estradiol (20-fold higher affinity) that has been discovered. Fulvestrant (ICI-182,780) has been identified as an antagonist of Gq-mER, but is not selective. See also Estrogen receptor GPER (GPR30) ER-X ERx References {{DISPLAYTITLE:Gq-mER}} G protein-coupled receptors Human proteins
Gq-mER
[ "Chemistry" ]
252
[ "G protein-coupled receptors", "Signal transduction" ]
48,971,220
https://en.wikipedia.org/wiki/ER-X
ER-X is a membrane-associated receptor that is bound and activated by 17α-estradiol and 17β-estradiol and is a putative membrane estrogen receptor (mER). It shows sequence homology with ERα and ERβ and activates the MAPK/ERK pathway. The receptor is insensitive to the antiestrogen ICI-182,780 (fulvestrant). See also ERx GPER (GPR30) Gq-mER Estrogen receptor References Human proteins Transmembrane receptors
ER-X
[ "Chemistry" ]
115
[ "Transmembrane receptors", "Signal transduction" ]
48,972,042
https://en.wikipedia.org/wiki/SC-5233
SC-5233, also known as 6,7-dihydrocanrenone or 20-spirox-4-ene-3,20-dione, is a synthetic, steroidal antimineralocorticoid of the spirolactone group which was developed by G. D. Searle & Company in the 1950s but was never marketed. It was the first synthetic antagonist of the mineralocorticoid receptor to have been identified and tested in humans. The drug was found to lack appreciable oral bioavailability and to be of low potency when administered parenterally, but it nonetheless produced a mild diuretic effect in patients with congestive heart failure. SC-8109, the 19-nor (19-demethyl) analogue, was developed and found to have improved oral bioavailability and potency, but still had low potency. Spironolactone (SC-9420; Aldactone) followed and had both good oral bioavailability and potency, and was the first synthetic antimineralocorticoid to be marketed. It has about 46-fold higher oral potency than SC-5233. SC-5233 is the propionic acid lactone of testosterone (androst-4-en-17β-ol-3-one) and is also known as 3-(3-oxo-17β-hydroxyandrost-4-en-17α-yl)propionic acid γ-lactone or as 17α-(2-carboxyethyl)testosterone γ-lactone. It is the unsubstituted parent or prototype compound of the spirolactone family of steroidal antimineralocorticoids. Similarly to other spirolactones like canrenone and spironolactone, SC-5233 has some antiandrogenic activity and antagonizes the effects of testosterone in animals. In addition, along with SC-8109, it has been found to possess potent progestogenic activity. References Abandoned drugs Antimineralocorticoids Lactones Pregnanes Progestogens Spiro compounds Spirolactones Steroidal antiandrogens
SC-5233
[ "Chemistry" ]
479
[ "Organic compounds", "Spiro compounds", "Drug safety", "Abandoned drugs" ]
65,538,868
https://en.wikipedia.org/wiki/CYP74%20family
Cytochrome P450, family 74, also known as CYP74, is a cytochrome P450 family in land plants that is thought to be derived from horizontal gene transfer of marine animal CYPs. References Plant genes 74 Protein families
CYP74 family
[ "Biology" ]
51
[ "Protein families", "Protein classification" ]
65,542,554
https://en.wikipedia.org/wiki/SRD5A3-CDG
SRD5A3-CDG (also known as CDG syndrome type Iq, CDG-Iq, CDG1Q or Congenital disorder of glycosylation type 1q) is a rare, non X-linked congenital disorder of glycosylation (CDG) due to a mutation in the steroid 5 alpha reductase type 3 gene. It is one of over 150 documented types of Congenital disorders of Glycosylation. Like many other CDGs, SRD5A3 is ultra-rare, with around 38 documented cases in the world. It is an inheritable autosomal recessive disorder that causes developmental delays and problems with vision. The gene is located at 4q12, which is the long (q) arm of chromosome 4 at position 12. Presentation SRD5A3-CDG is characterized by a highly variable phenotype. Typical clinical manifestations include: Less common manifestations may include: Molecular mechanism The protein encoded by the SRD5A3 gene is involved in the production of androgen 5-alpha-dihydrotestosterone (DHT) from testosterone, and maintenance of the androgen-androgen receptor activation pathway. This protein is also necessary for the conversion of polyprenol into dolichol, which is required for the synthesis of dolichol-linked monosaccharides and the oligosaccharide precursor used for N-linked glycosylation of proteins. Dolichol is a key building block in the body's glycosylation process. Typically, the dolichol generated is further modified into dolichol-linked oligosaccharide (DLO) by the addition of phosphates and sugars. Complex sugar molecules get added to DLO and are then transferred onto proteins. When insufficient DLO is produced in the body, many proteins are inadequately glycosylated. Both glycosylation defects and an accumulation of polyprenol have been observed in SRD5A3-CDG patients and mouse models, and it is not currently known whether the disease is caused due to incorrect glycosylation, polyprenol accumulation, or a combination of the two. Diagnosis Confirmation of clinical diagnosis for SRD5A3-CDG requires genetic testing and gene sequencing to identify deleterious mutations in the SRD5A3 gene. Other diagnostic tools include Isoelectrofocusing of Transferrin (TIEF), an assay from transferrin levels in blood, to screen for N-glycosylation defects which occur in CDGs. A CDG blood analysis test using mass spectrometry technology is also available. As SRD5A3-CDG is also an inheritable disorder, parental genetic testing can indicate if one or both of the parents are carriers of the faulty gene. The gene is recessive in nature, so if both parents are carriers of the condition, there is a 25% chance that the offspring will have SRD5A3-CDG. Treatment At present, there is no available treatment for SRD5A3-CDG. However, the disorder can be managed and some of the symptoms can be treated. Some eye problems that manifest with SRD5A3-CDG can be surgically corrected and coagulation disorders may be treated. The quality of life is mainly determined by the nature and the degree of the brain and eye involvement. Ongoing care and management for individuals with SRD5A3-CDG typically includes a combination of physical therapy (to alleviate issues pertaining to reduced muscle tone, mobility, etc.), occupational therapy (for vision and speech impairments) and palliative measures, where needed. When a genetic risk or anomaly is identified, parents may have access to counselling to prepare them for any special needs their child may have and approaches on managing their condition as they grow. Documented cases SRD5A3-CDG is an ultra-rare disorder with a frequency of less than 1 in 10 million. 
As of 2018, there were at least 38 reported cases of SRD5A3-CDG from 26 different families. While the exact number of patients worldwide is unknown, most recorded cases so far have been reported from Afghanistan, the Czech Republic, Iran, Pakistan, Poland, Puerto Rico and Turkey. Research SRD5A3-CDG is caused by a single-gene mutation, which makes it an attractive candidate for gene therapy. However, due to the extreme rarity of the disorder, research around it has been limited. Research has predominantly been focused on two types of research models: Cell-based models and model organisms. Common cell-based models include patient cells such as fibroblast cells derived from skin samples (patient-derived fibroblasts [PDFs]), induced pluripotent stem cells (iPSCs) created by reprogramming fibroblasts, and specialized cells, such as neurons derived from stem cell differentiation. Patient-derived cell models are important preclinical model systems as they contain the same genome and mutation(s) as the patient, allowing researchers to assess potential therapies for individual patients early on in the drug development process. For SRD5A3-CDG, patient-derived cell models could be crucial in understanding the impact of polyprenol reductase enzyme deficiency and be used to investigate various treatment options such as dietary supplementation, novel or repurposed drugs and gene therapy. Model organisms like worms, zebrafish and mice have been genetically modified to study the impact of several mutations, including those in the SRD5A3 gene. Researchers in the United States and France have been working on genetically modified mice that have SRD5A3 mutations limited to the cerebellum region of their brain. These mice are viable, show CDG symptoms in the brain, and are part of planned studies for new experimental treatments. See also References Congenital disorders of glycosylation Medical genetics
SRD5A3-CDG
[ "Chemistry" ]
1,242
[ "Congenital disorders of glycosylation", "Carbohydrate chemistry" ]
55,729,546
https://en.wikipedia.org/wiki/Neuman%E2%80%93S%C3%A1ndor%20mean
In the mathematics of special functions, the Neuman–Sándor mean M, of two positive and unequal numbers a and b, is defined as: $M(a,b) = \dfrac{a-b}{2\,\operatorname{arcsinh}\left(\dfrac{a-b}{a+b}\right)}$ This mean interpolates between the unweighted arithmetic mean A = (a + b)/2 and the second Seiffert mean T, defined as: $T(a,b) = \dfrac{a-b}{2\arctan\left(\dfrac{a-b}{a+b}\right)}$ so that A < M < T. The M(a,b) mean, introduced by Edward Neuman and József Sándor, has recently been the subject of intensive research and many remarkable inequalities for this mean can be found in the literature. Several authors obtained sharp and optimal bounds for the Neuman–Sándor mean. Neuman and others utilized this mean to study other bivariate means and inequalities. See also Mean Arithmetic mean Geometric mean Stolarsky mean Identric mean Means in Mathematical Analysis References Means Special functions
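A quick numerical check of the ordering A < M < T under the definitions given above (the sample values a = 2, b = 1 are arbitrary):

```python
# Evaluate the arithmetic, Neuman–Sándor and second Seiffert means for one
# sample pair and confirm A < M < T; values chosen purely for illustration.
import math

def A(a, b):
    return (a + b) / 2

def M(a, b):   # Neuman–Sándor mean
    return (a - b) / (2 * math.asinh((a - b) / (a + b)))

def T(a, b):   # second Seiffert mean
    return (a - b) / (2 * math.atan((a - b) / (a + b)))

a, b = 2.0, 1.0
print(A(a, b), M(a, b), T(a, b))   # approx. 1.500 < 1.527 < 1.554
```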
Neuman–Sándor mean
[ "Physics", "Mathematics" ]
181
[ "Means", "Mathematical analysis", "Point (geometry)", "Special functions", "Geometric centers", "Combinatorics", "Symmetry" ]
55,731,874
https://en.wikipedia.org/wiki/Complex%20Wishart%20distribution
In statistics, the complex Wishart distribution is a complex version of the Wishart distribution. It is the distribution of times the sample Hermitian covariance matrix of zero-mean independent Gaussian random variables. It has support for Hermitian positive definite matrices. The complex Wishart distribution is the density of a complex-valued sample covariance matrix. Let where each is an independent column p-vector of random complex Gaussian zero-mean samples and is an Hermitian (complex conjugate) transpose. If the covariance of G is then where is the complex central Wishart distribution with n degrees of freedom and mean value, or scale matrix, M. where is the complex multivariate Gamma function. Using the trace rotation rule we also get which is quite close to the complex multivariate pdf of G itself. The elements of G conventionally have circular symmetry such that . Inverse Complex Wishart The distribution of the inverse complex Wishart distribution of according to Goodman, Shaman is where . If derived via a matrix inversion mapping, the result depends on the complex Jacobian determinant Goodman and others discuss such complex Jacobians. Eigenvalues The probability distribution of the eigenvalues of the complex Hermitian Wishart distribution are given by, for example, James and Edelman. For a matrix with degrees of freedom we have where Note however that Edelman uses the "mathematical" definition of a complex normal variable where iid X and Y each have unit variance and the variance of . For the definition more common in engineering circles, with X and Y each having 0.5 variance, the eigenvalues are reduced by a factor of 2. While this expression gives little insight, there are approximations for marginal eigenvalue distributions. From Edelman we have that if S is a sample from the complex Wishart distribution with such that then in the limit the distribution of eigenvalues converges in probability to the Marchenko–Pastur distribution function This distribution becomes identical to the real Wishart case, by replacing by , on account of the doubled sample variance, so in the case , the pdf reduces to the real Wishart one: A special case is or, if a Var(Z) = 1 convention is used then . The Wigner semicircle distribution arises by making the change of variable in the latter and selecting the sign of y randomly yielding pdf In place of the definition of the Wishart sample matrix above, , we can define a Gaussian ensemble such that S is the matrix product . The real non-negative eigenvalues of S are then the modulus-squared singular values of the ensemble and the moduli of the latter have a quarter-circle distribution. In the case such that then is rank deficient with at least null eigenvalues. However the singular values of are invariant under transposition so, redefining , then has a complex Wishart distribution, has full rank almost certainly, and eigenvalue distributions can be obtained from in lieu, using all the previous equations. In cases where the columns of are not linearly independent and remains singular, a QR decomposition can be used to reduce G to a product like such that is upper triangular with full rank and has further reduced dimensionality. The eigenvalues are of practical significance in radio communications theory since they define the Shannon channel capacity of a MIMO wireless channel which, to first approximation, is modeled as a zero-mean complex Gaussian ensemble. 
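As an illustrative sketch (not from the article), one can draw a complex Wishart sample as S = G G^H with NumPy and compare the eigenvalues of the associated sample covariance matrix with the Marchenko–Pastur support described above; the dimensions, seed and variance convention are arbitrary choices for the example.

```python
# Simulate one complex Wishart sample and compare scaled eigenvalues with the
# Marchenko–Pastur support edges. Dimensions, seed and convention are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
p, n = 50, 200                       # matrix dimension and degrees of freedom
c = p / n

# Circularly symmetric complex Gaussian entries with unit total variance
# (0.5 in each of the real and imaginary parts, the engineering convention).
G = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)
S = G @ G.conj().T                   # complex Wishart sample, Hermitian p.s.d.

eig = np.linalg.eigvalsh(S) / n      # eigenvalues of the sample covariance S/n
lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2   # Marchenko–Pastur edges
print(eig.min(), eig.max(), (lo, hi))   # empirical extremes vs. predicted support
```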
References Continuous distributions Multivariate continuous distributions Covariance and correlation Random matrices Conjugate prior distributions Exponential family distributions Complex distributions
Complex Wishart distribution
[ "Physics", "Mathematics" ]
737
[ "Random matrices", "Matrices (mathematics)", "Statistical mechanics", "Mathematical objects" ]
55,736,220
https://en.wikipedia.org/wiki/RsmW%20sRNA
RsmW is a part of the Rsm/Csr family of non-coding RNAs (ncRNAs) discovered in Pseudomonas aeruginosa. It specifically binds to the RsmA protein in vitro, restores biofilm production (possibly due to the interaction with RsmA) and partially complements the loss of RsmY and RsmZ in an rsmY/rsmZ double mutant with regard to their contribution to swarming. Compared to RsmY and RsmZ, its production is induced at high temperatures, and rsmW is not transcriptionally activated by GacA. See also CsrB/RsmB RNA family CsrC RNA family PrrB/RsmZ RNA family RsmY RNA family RsmX CsrA protein References Non-coding RNA
RsmW sRNA
[ "Chemistry" ]
156
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
51,309,062
https://en.wikipedia.org/wiki/17%CE%B1-Epiestriol
17α-Epiestriol, or simply 17-epiestriol, also known as 16α-hydroxy-17α-estradiol or estra-1,3,5(10)-triene-3,16α,17α-triol, is a minor and weak endogenous estrogen, and the 17α-epimer of estriol (which is 16α-hydroxy-17β-estradiol). It is formed from 16α-hydroxyestrone. In contrast to other endogenous estrogens like estradiol, 17α-epiestriol is a selective agonist of the ERβ. It is described as a relatively weak estrogen, which is in accordance with its relatively low affinity for the ERα. 17α-Epiestriol has been found to be approximately 400-fold more potent than estradiol in inhibiting tumor necrosis factor α (TNFα)-induced vascular cell adhesion molecule 1 (VCAM-1) expression in vitro. See also Epimestrol 16β,17α-Epiestriol 16β-Epiestriol 17α-Estradiol 2-Methoxyestradiol References Estranes Estrogens Hormones of the hypothalamus-pituitary-gonad axis Selective ERβ agonists Sex hormones
17α-Epiestriol
[ "Chemistry", "Biology" ]
290
[ "Behavior", "Sex hormones", "Biotechnology stubs", "Biochemistry stubs", "Biochemistry", "Sexuality" ]
51,309,714
https://en.wikipedia.org/wiki/Trimethyltrienolone
Trimethyltrienolone (TMT), also known by its developmental code name R-2956 or RU-2956, is an antiandrogen medication which was never introduced for medical use but has been used in scientific research. Side effects Due to its close relation to metribolone (methyltrienolone), it is thought that TMT may produce hepatotoxicity. Pharmacology Pharmacodynamics TMT is a selective and highly potent competitive antagonist of the androgen receptor (AR) with very low intrinsic/partial androgenic activity and no estrogenic, antiestrogenic, progestogenic, or antimineralocorticoid activity. The drug is a derivative of the extremely potent androgen/anabolic steroid metribolone (R-1881; 17α-methyltrenbolone), and has been reported to possess only about 4-fold lower affinity for the AR in comparison. In accordance, it has relatively high affinity for the AR among steroidal antiandrogens, and almost completely inhibits dihydrotestosterone (DHT) binding to the AR in vitro at a mere 10-fold molar excess. The AR weak partial agonistic activity of TMT is comparable to that of cyproterone acetate. Chemistry TMT, also known as 2α,2β,17α-trimethyltrienolone or as δ9,11-2α,2β,17α-trimethyl-19-nortestosterone, as well as 2α,2β,17α-trimethylestra-4,9,11-trien-17β-ol-3-one, is a synthetic estrane steroid and a derivative of testosterone and 19-nortestosterone. It is the 2α,2β,17α-trimethyl derivative of trenbolone (trienolone) and the 2α,2β-dimethyl derivative of metribolone (methyltrienolone), both of which are synthetic androgens/anabolic steroids. History TMT was developed by Roussel Uclaf in France and was first known as early as 1969. It was one of the earliest antiandrogens to be discovered and developed, along with others such as benorterone, BOMT, cyproterone, and cyproterone acetate. The drug was under investigation by Roussel Uclaf for potential medical use, but was abandoned in favor of nonsteroidal antiandrogens like flutamide and nilutamide due to their comparative advantage of a complete lack of androgenicity. Roussel Uclaf subsequently developed and introduced nilutamide for medical use. References Abandoned drugs Tertiary alcohols Estranes Hepatotoxins Ketones Steroidal antiandrogens
Trimethyltrienolone
[ "Chemistry" ]
605
[ "Ketones", "Functional groups", "Drug safety", "Abandoned drugs" ]
50,480,603
https://en.wikipedia.org/wiki/XbaI
XbaI is a restriction enzyme isolated from the bacterium Xanthomonas badrii. References Restriction enzymes
XbaI
[ "Biology" ]
23
[ "Genetics techniques", "Restriction enzymes" ]
50,483,916
https://en.wikipedia.org/wiki/Gitel%20Steed
Gitel (Gertrude) Poznanski Steed (May 3, 1914 – September 6, 1977) was an American cultural anthropologist known for her research in India 1950–52 (and returning in 1970) involving ethnological work in three villages to study the complex detail of their social structure. She supplemented her research with thousands of ethnological photographs of the individuals and groups studied, the quality of which was recognised by Edward Steichen. She experienced chronic illnesses after her return from the field, but nevertheless completed publications and many lectures but did not survive to finish a book The Human Career in Village India which was to integrate and unify her many-sided studies of human character formation in the cultural/historical context of India. Early life Gertrude Poznanski was born on May 3, 1914, in Cleveland, Ohio, the home of her mother, Sarah Auerbach. She was the youngest of sisters Mary and Helen, who both later emigrated to Israel. Her father was Jakob Poznanski, a businessman and Polish native who had come to the United States from Belgium. When Poznanski was still a baby, the family moved to The Bronx, New York, where she was schooled at Wadleigh High. Though not religious, she adopted the Yiddish name of Gitel. Her mother was active in the women's suffrage movement and in leftist politics. Steed began a BA in banking and finance at New York University, but embraced the Greenwich Village artistic and political life, often singing blues in nightclubs, and dropped out to take a job as a writer with the Works Progress Administration. Rafael Soyer painted portrait of her at eighteen, Girl in a White Blouse, (1932) which is in the collection of the Metropolitan Museum of Art in New York City, and she is the medusa-haired subject in Soyer's Two Girls 1933 (oil on canvas) at the Smart Museum. In 1933 she met the painter Robert Steed (b.1903), whom she married in 1947. Introduction to anthropology Philosopher Sidney Hook persuaded Steed to return to NYU and in 1938 she completed her B.A. with honors in sociology and anthropology, then studied as a graduate at Columbia University until 1940 as a Research Fellow in the Department of Anthropology under Professor Ruth Benedict whom she had met in 1937 and who in 1939 led Gitel Steed's first field experience among the Blackfoot Indians of Montana. Benedict's Patterns of Culture was published in 1934 and had become standard reading for anthropology courses in American universities for years, and Steed was influenced by Benedict's position in that book that, "A culture, like an individual, is a more or less consistent pattern of thought and action", and that each culture chooses from "the great arc of human potentialities" only a few characteristics which become the leading personality traits of the persons living in that culture. From 1939 to 1941, Steed undertook research for Vilhjalmur Stefansson, the explorer and writer on Inuit life then planning a two-volume Lives of the Hunters, on diet and subsistence; Steed worked on the South American Ona, Yahgan, and the Antillean Arawak and Carib, and from this formative experience began a dissertation on hunter-gatherer subsistence (not finished until 1969, well after she had established her career). In 1944, after the Nazi holocaust against the Jews was exposed, Gitel Steed set aside her anthropology and joined the Jewish Black Book Committee, an organization of the World Jewish Congress and other Jewish anti-Fascist groups. 
With a few peers, she participated during 1944 and 1945 in writing The Black Book: The Nazi Crime Against the Jewish People, an exhaustive indictment of calculated Nazi anti-Jewish war crimes intended for submission to the United Nations War Crimes Commission. Steed wrote "The Strategy of Decimation" published in 1946 for the Jewish Black Book Committee. Steed taught at Hunter College in New York 1945 and 1947. While teaching at Fisk University in 1946/1947 she researched the Negro in the United States and in Africa and was managing editor of the university's and the American Council on Race Relations' monthly summary of "Race Relations" in America. In 1947–1949 she joined Dr. Ruth Bunzel in establishing the China group of Columbia's Research in Contemporary Cultures to study immigrant Chinese culture, primarily in New York City, from 1947 into 1949, working with migrants from the same community from China's Kwantung Provincial village, Toi Shan. She undertook life histories, community self-analysis, and projective tests, then proposed an extended field project in the Toi Shan community to understand the interdependency in social and economic relations between migrants and their kinsmen at home. Margaret Mead and others joined these Columbia comparative studies of contemporary cultures and they were incorporated into Mead's "The Study of Cultures at a Distance," Steed planned "The Effects of Village Institutions on Personality in South China" and had funds granted for its continuation of her Chinese research in China. However the occurrence of the Chinese Revolution of 1948 scuttled the project. India Steadfastly committed to field study of institutional determinants of individual and social character, she took up the suggestion of psychiatrist, Abram Kardiner, then associated with Columbia University's Department of Sociology, to pursue a similar study in India. Shortly thereafter, Professor Theodore Abel, department chairman, appointed Steed Director of a two-year field project of research in contemporary India. Funding was received through a grant from the Department of the Navy. She assembled a research team of Dr. James Silverberg, Dr. Morris Carstairs of Edinburgh University, and her husband Robert Steed, leaving for India in 1949, where there was added a small staff of Indian workers. They included Bhagvati Masher and Kantilal Mehta, who worked as interpreters; Nandlal Dosajh, a psychologist; N. Prabhudas, an economist who conducted the land use survey; and Jerome D'Souza as cook. In the second year of research, the team also included an Indian assistant, Tahera, as well as Americans Grace Langley and John Koos. Her preparations were aided by her friendship with Gautam Sarabhai, a Gujarati Indian she had met in New York who assisted her in learning Hindustani. She also trained in the use of a professional camera. The research goals and procedures were ambitious; to empirically bring together interactions of individual, culture, community and institutions, relating real individuals, not merely statistical patterns, for functional-historical analysis of character formation at the village level in the three settlements chosen: Bakrana, a Hindu village in Gujarat; Sujarupa, a Hindu hamlet in Rajputana; and Adhon, a predominantly Muslim village in the United Provinces. Bakrana was a farming economy in fertile flatlands, exclusively Hindu and with an active caste system, largely untouched by the former British rule or by land reform changes current then in India. 
Sujarupa hamlet was a single-caste community in the upland valleys. Adhon was controlled by pro-independence Muslims, with more occupational castes and subgroups than Bakrana, with religious minorities. Consolidation of ethnographic research in India Steed returned to the United States in December 1951 with more than 30,000 pages of handwritten notes and some thousands of ethnological photographs, but infected with malaria, and shortly after her return she also developed diabetes which was particularly difficult to control and frequently put her in hospital. In addition, she had had pituitary cancer for over thirty years. Illness impaired her later career so that she was unable to receive her doctorate until 1969, 8 years before her death. In 1963, however, Conrad Arensberg wrote "her reputation and accomplishments are such as to make her lack of a PhD of little moment for her standing in the profession". Her reputation also rests on her unpublished notes; the thousands of pages of interviews, observations, projective test results, life histories, and villagers' paintings, most of which are now in the special collections of the University of Chicago Libraries. During the eleven years after returning from India and taking up a position at Hofstra College (now Hofstra University) in 1962 where she continued to her death in 1977, Steed had no university affiliation and promoted her work through seminars and lectures at Columbia University, University of Chicago, and the University of Pennsylvania. In 1953, Steed participated in a Social Science Research Council Conference on Economic Development in Brazil, India, and Japan, analyzing Dr. Morris Opler's "Cultural Aspects of Economic Development in Rural India", then later that year, 1953/4, she analyzed "The Individual, Family, and Community in Village India" in Columbia University's Department of Sociology graduate seminar on the psychodynamics of culture, chaired by Abram Kardiner. In 1954 Steed lectured on "The Child, Family and Community in Rural Gujarat" for the University of Chicago Seminar on Village India. The lectures and discussions are recorded in the archive. She presented in India symposia at meetings of the American Anthropological Association, and the Social Science Research Council. Her one publication of note during this period was a chapter "Notes on an Approach to a Study of Personality Formation in a Hindu Village in Gujarat", illustrating cultural and institutional influences on the personality of a single Rajput landowner, for a volume Village India: studies in the little community edited by Alan Beals and Dr. McKim Marriott, published in 1955; Steed's chapter has been held up as a model for the treatment of personality problems and culture in India. This, and the presentations she made at conferences assisted her in delimiting her doctoral thesis Caste and Kinship in Rural Gujarat: The Social Use of Space. During these eleven years she gave birth to her son, Andrew Hart (b. 1953), and taught English at the Jefferson School, a Manhattan private school favored by the political left. Gitel and husband Robert Steed opened the doors of their house on West 23rd Street to visitors who included Ruth Bunzel, Sula Benet, Vera Rubin, Stanley Diamond, Alexander Lesser, Margaret Mead, and Conrad Arensberg. In 1970 Steed revisited Bakrana, its population then doubled since her last visit, to observe the impact of the transforming politics in India. 
Photographer Edward Steichen, Director of the Department of Photography of the Museum of Modern Art, said that Steed's photographs of Indian villagers, though taken for the anthropological record and used as illustrations in varied lectures and presentations, ranked "with first-rate human interpretations by professional photographers." In 1953, Steichen mounted the Always the Young Strangers exhibition at MoMA in honour of Carl Sandburg's 75th birthday and included Gitel Steed's photos, six of which are in the Museum's permanent photographic collection. Steichen again used her pictures in the Museum's world-touring blockbuster, The Family of Man exhibition and book. Her photographs were republished in the New York Times, and featured in the St. Louis Post-Dispatch. While at Hofstra University, her photographic work was exhibited as part of the university's "Focus on India" presentation in 1962, and in 1963 Hofstra showed Steed photographs of Hindu and Muslim villagers. That same year Steed held the exhibition Child Life in Village India at the New Canaan Art Association Gallery in Connecticut and another, Cradle to Grave in Village India, at the Hudson Guild Gallery in New York. In 1967 Vincent Fresno's Human Action in Four Societies used a selection as illustrations. Death and legacy Steed died on the night of September 6, 1977, evidently from a heart attack, at the age of sixty-three. She was survived by her husband and supporter for thirty years, artist Robert Steed, and by her son, Andrew. The Gitel P. Steed papers 1907–1980 are archived in the University of Chicago Library and extend to approximately 13 linear metres (43 feet) of material. Most is data from her Columbia University Research in Contemporary India Field Project of 1949–1951 collected from three villages in western and northern India; extensive life histories of informants, psychological tests, typed notes, field notebooks, photographs, genealogies, histories, transcripts of interviews, and art work, mostly watercolours, by both researchers and child and adult villagers. These are joined by records of lectures and other publications relating to the India Project by Steed and by other scholars. Also held is data from Steed's previous fieldwork project among Chinese immigrants in New York City. The collection was given to the University of Chicago Committee on Southern Asian Studies by Robert Steed in 1978 and conveyed to the University of Chicago Library in 1984. Prior to their arrival, James Silverberg and McKim Marriott put the papers in a preliminary order reflected in the collection's current organization. The collection was augmented by Robert Steed in 1985 and 1989, and by McKim Marriott and James Silverberg in 1994. Publications 1946 The Strategy of Decimation. In The Black Book: The Nazi Crime against the Jewish People. The Jewish Black Book Committee and Ursula Wasserman, eds. pp. 111–240. New York: Duell, Sloan, and Pearce. 1947 Review of The Origin of Things, by Julius E. Lips. New York Times, June 15. 1947 Review of The City of Women, by Ruth Landes. New York Times, August 3, Pt. VII, 1947 Review of Men Out of Asia, by Harold S. Gladwin. New York Times, November 30, p. 14. 1948 Review of Zulu Woman, by Rebecca Hourwich Raynher. New York Times, June 1948 Review of Man and His Works, by Melville J. Herskovits. New York Times, November 14, Pt. VII, p. 26:4. 1948 Review of The Heathens, by William D. Howells. 
New York Times, November 14, 1953 Guest Exhibitor, Museum of Modern Art's Edward Steichen Exhibition Always the Young Strangers. Photographs of Indian Hindu and Muslim Villagers. Six were acquired for the permanent collection of photographs of the Museum of Modern Art. 1953 Materials on Friendship and Childhood among Chinese Families in New York. In The Study of Culture at a Distance, Margaret Mead and Rhoda Metraux, eds. Pp. 192–98. Chicago: University of Chicago Press. 1955 Review of SC Dube Indian Village 1955 Notes to an Approach to a Study of Personality Formation in a Hindu Village in Gujarat. In Village India: Studies in the Little Community. Memoirs of the American Anthropological Association, No. 83. McKim Marriott, ed. Pp. 102–44. Chicago: University of Chicago Press. 1955 Photographs of Indian villagers. Published in The Family of Man, by Edward Steichen. New York: Museum of Modern Art. 1964 The Human Career in Village India. Part I: Introduction. Mimeographed copy of draft on file in the Department of Sociology and Anthropology, Hofstra University, Hempstead, New York. 1968 Devgar. Unpublished screenplay on file with Robert Steed, New York. 1967 Photographs. Published as illustrations in Human Action in Four Societies, by Vincent Fresno. Englewood Cliffs, N.J.: Prentice-Hall. 1969 Caste and Kinship in Rural Gujarat: The Social Use of Space. Unpublished doctoral dissertation. Ms. in Columbia University Library, New York. Steed, Gitel P. n.d. Unpublished papers and field notes. University of Chicago Library. References Bibliography Arensberg, Conrad 1963 Unpublished letter on file in the Department of Anthropology and Sociology, Hofstra University, Hempstead, N.Y. Berleant-Schiller, Riva 1988 Gitel (Gertrude) Poznanski Steed. IN Women Anthropologists: A Biographical Dictionary. Edited by Ute Gacs, Aisha Khan, Jerrie McIntyre, Ruth Weinberg. Westport, CT: Greenwood Press. pp. 331–336. Gitel (Gertrude) Poznanski Steed Bunzel, Ruth 1962 Unpublished letter on file in the Department of Anthropology and Sociology, Hofstra University, Hempstead, N.Y. Contemporary Authors 1972 "Gitel Steed." 1st rev. ed., vol. 41–44, p. 663. Ann Evory, ed. Detroit: Gale Research. Lesser, Alexander 1979 Obituary of Gitel Steed. American Anthropologist 81:88–91 New York Times 1977 Obituary of Gitel Steed. September 9. External links Guide to the Gitel P. Steed Papers 1907-1980 at the University of Chicago Special Collections Research Center Gitel P. Steed Niels Rasmussen Interview Transcripts at Dartmouth College Library American women anthropologists Anthropology educators Kinship and descent Holocaust studies Society of India Rural society in India 1914 births 1977 deaths New York University alumni Columbia Graduate School of Arts and Sciences alumni American women photographers 20th-century American anthropologists 20th-century American women artists 20th-century American people Jewish women scientists
Gitel Steed
[ "Biology" ]
3,473
[ "Behavior", "Human behavior", "Kinship and descent" ]
50,485,535
https://en.wikipedia.org/wiki/Mind-controlled%20wheelchair
A mind-controlled wheelchair is a motorized wheelchair controlled by a brain–computer interface. Such a wheelchair could be of great importance to patients with locked-in syndrome (LIS), in which a patient is aware but cannot move or communicate verbally due to complete paralysis of nearly all voluntary muscles in the body except the eyes. Such wheelchairs can also be used in cases of muscular dystrophy, a disease that weakens the musculoskeletal system and hampers locomotion. History The technology behind brain or mind control goes back to at least 2002, when researchers implanted electrodes into the brains of macaque monkeys, which enabled them to control a cursor on a computer screen. Similar techniques were able to control robotic arms and simple joysticks. In 2009, researchers at the University of South Florida developed a wheelchair-mounted robotic arm that captured the user's brain waves and converted them into robotic movements. The Brain-Computer Interface (BCI), which captures P-300 brain wave responses and converts them to actions, was developed by USF psychology professor Emanuel Donchin and colleagues. The P-300 brain signal serves as a virtual "finger" for patients who cannot move, such as those with locked-in syndrome or those with Lou Gehrig's Disease (ALS). The first mind-controlled wheelchair reached production in 2016. It was designed by Diwakar Vaish, Head of Robotics and Research at A-SET Training & Research Institutes. In November of 2022, the University of Texas at Austin developed a mind-controlled wheelchair using an EEG device. In addition, March of 2022 saw a paper from Clarkson University planning the design of a mind-controlled wheelchair, also using an EEG. Technology Operation A mind-controlled wheelchair functions using a brain–computer interface: an electroencephalogram (EEG) worn on the user's forehead detects neural impulses that reach the scalp, allowing the on-board micro-controller to detect the user's thought process, interpret it, and control the wheelchair's movement. In November of 2022 the University of Texas at Austin conducted a study on the effectiveness of a mind-controlled wheelchair model. Similar to the BCI, the machine translates brain waves into movements. Specifically, the participants were instructed to visualize moving extremities to prompt the wheelchair to move. This study used non-invasive electrodes in an electroencephalogram cap, as opposed to internally installed electrodes. In March of 2022, Stoyell et al. at Clarkson University published a paper in which they proposed a design for a mind-controlled wheelchair based on an Emotiv EPOC X headset, an electroencephalogram device. Functionality The A-SET wheelchair comes standard with many different types of sensors, like temperature sensors, sound sensors and an array of distance sensors which detect any unevenness in the surface. The chair automatically avoids stairs and steep inclines. It also has a "safety switch": in case of danger, the user can quickly close their eyes to trigger an emergency stop. In the case of the chair designed by Stoyell et al., the only equipment needed to use the chair is the EMOTIV EPOC X headset. Both the University of Texas' and Clarkson University's designs have the benefit of being noninvasive, with the electrodes being placed onto the head as opposed to being surgically implanted. This makes these products relatively more accessible. 
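None of the sources above publish their decoding software, so the following is only a minimal, hypothetical sketch of the general signal path such systems describe: extract band-power features from a short EEG window and map them to a wheelchair command. The sampling rate, channel pair, frequency band, threshold, and command names are illustrative assumptions, not details of the A-SET, UT Austin, or Clarkson designs.

```python
import numpy as np
from scipy.signal import welch

FS = 250  # assumed EEG sampling rate in Hz

def band_power(signal, lo, hi, fs=FS):
    # Welch power spectral density, averaged over the [lo, hi] Hz band
    freqs, psd = welch(signal, fs=fs, nperseg=len(signal))
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

def decode_command(c3_window, c4_window, ratio_threshold=1.25):
    # Toy motor-imagery rule: imagined movement suppresses 8-12 Hz (mu-band)
    # power over the opposite sensorimotor cortex, so the ratio of power at
    # the C3 and C4 electrode sites hints at which side the user imagined.
    # The mapping of that ratio to turn commands here is arbitrary.
    mu_c3 = band_power(c3_window, 8, 12)
    mu_c4 = band_power(c4_window, 8, 12)
    if mu_c3 / mu_c4 > ratio_threshold:
        return "TURN_LEFT"
    if mu_c4 / mu_c3 > ratio_threshold:
        return "TURN_RIGHT"
    return "STOP"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    c3 = rng.normal(size=FS)  # stand-in for one second of EEG from C3
    c4 = rng.normal(size=FS)  # stand-in for one second of EEG from C4
    print(decode_command(c3, c4))
```

A deployed controller would add per-user calibration, artifact rejection, and safety behaviour such as the eye-closure emergency stop described above.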
References https://web.archive.org/web/20160602134908/http://www.networkedindia.com/2016/03/18/the-worlds-first-mind-controlled-wheelchair/ https://web.archive.org/web/20160424100051/http://www.startupstalk.org/indian-develops-mind-controlled-wheelchair/ Neuroscience Mobility devices Robotics Indian inventions Electroencephalography Muscular dystrophy
Mind-controlled wheelchair
[ "Engineering", "Biology" ]
827
[ "Neuroscience", "Robotics", "Automation" ]
50,492,922
https://en.wikipedia.org/wiki/Pathophysiology%20of%20Parkinson%27s%20disease
The pathophysiology of Parkinson's disease (PD) is the death of dopaminergic neurons as a result of changes in biological activity in the brain. There are several proposed mechanisms for neuronal death in PD; however, not all of them are well understood. Five proposed major mechanisms for neuronal death in Parkinson's disease include protein aggregation in Lewy bodies, disruption of autophagy, changes in cell metabolism or mitochondrial function, neuroinflammation, and blood–brain barrier (BBB) breakdown resulting in vascular leakiness. Protein aggregation The first major proposed cause of neuronal death in Parkinson's disease is the bundling, or oligomerization, of proteins. The protein alpha-synuclein has increased presence in the brains of Parkinson's disease patients and, as α-synuclein is insoluble, it aggregates to form Lewy bodies in neurons. Traditionally, Lewy bodies were thought to be the main cause of cell death in Parkinson's disease; however, more recent studies suggest that Lewy bodies lead to other effects that cause cell death. Regardless, Lewy bodies are widely recognized as a pathological marker of Parkinson's disease. Lewy bodies first appear in the olfactory bulb, medulla oblongata, and pontine tegmentum; patients at this stage are asymptomatic. As the disease progresses, Lewy bodies develop in the substantia nigra, areas of the midbrain and basal forebrain, and in the neocortex. This mechanism is substantiated by the facts that α-synuclein lacks toxicity when unable to form aggregates; that heat-shock proteins, which assist in refolding proteins susceptible to aggregation, beneficially affect PD when overexpressed; and that reagents which neutralize aggregated species protect neurons in cellular models of α-synuclein overexpression. Alpha-synuclein appears to be a key link between reduced DNA repair and Parkinson's disease. Alpha-synuclein activates ATM (ataxia-telangiectasia mutated), a major DNA damage repair signaling kinase. Alpha-synuclein binds to breaks in double-stranded DNA and facilitates the DNA repair process of non-homologous end joining. It has been suggested that cytoplasmic aggregation of alpha-synuclein to form Lewy bodies reduces its nuclear levels, leading to decreased DNA repair, increased DNA double-strand breaks and increased programmed cell death of neurons. Autophagy disruption The second major proposed mechanism for neuronal death in Parkinson's disease involves autophagy, the process by which inner components of the cell are broken down and recycled for use. Autophagy has been shown to play a role in brain health, helping to regulate cellular function. Disruption of the autophagy mechanism can lead to several diseases, including Parkinson's disease. Autophagy dysfunction in Parkinson's disease has also been shown to lead to dysregulated degradation of mitochondria. Changes in cell metabolism The third major proposed cause of cell death in Parkinson's disease involves the energy-generating mitochondrion organelle. In Parkinson's disease, mitochondrial function is disrupted, inhibiting energy production and resulting in cell death. The mechanism behind mitochondrial dysfunction in Parkinson's disease is hypothesized to be centered on the PINK1 and Parkin complex, which has been shown to drive autophagy of mitochondria (also known as mitophagy). PINK1 is a protein normally transported into the mitochondrion, but it can also accumulate on the surface of impaired mitochondria. 
Accumulated PINK1 then recruits Parkin; Parkin initiates the breakdown of dysfunctional mitochondria, a mechanism that acts as a "quality control". In Parkinson's disease, the genes coding PINK1 and Parkin are thought to be mutated so as to impair the ability of these proteins to breakdown dysfunctional mitochondria, leading to abnormal mitochondrial function and morphology, and eventually cell death. Mitochondrial DNA (mtDNA) mutations have also been shown to accumulate with age indicating that susceptibility to this mechanism of neuronal death increases with age. Another mitochondrial-related mechanism for cell death in Parkinson's disease is the generation of reactive oxygen species (ROS). ROS are highly reactive molecules that contain oxygen and can disrupt functions within the mitochondria and the rest of the cell. With increasing age, mitochondria lose their ability to remove ROS yet still maintain their production of ROS, causing an increase in net production of ROS and eventually cell death. As reviewed by Puspita et al. studies have demonstrated that in the mitochondria and the endoplasmic reticulum, alpha-synuclein and dopamine levels are likely involved in contributing to oxidative stress as well as PD symptoms. Oxidative stress appears to have a role in mediating separate pathological events that together ultimately result in cell death in PD. Oxidative stress leading to cell death may be the common denominator underlying multiple processes. Oxidative stress causes oxidative DNA damage. Such damage is increased in the mitochondria of the substantia nigra of PD patients and may lead to nigral neuronal cell death. Neuroinflammation The fourth proposed major mechanism of neuronal death in Parkinson's Disease, neuroinflammation, is generally understood for neurodegenerative diseases, however, specific mechanisms are not completely characterized for PD. One major cell type involved in neuroinflammation is the microglia. Microglia are recognized as the innate immune cells of the central nervous system. Microglia actively survey their environment and change their cell morphology significantly in response to neural injury. Acute inflammation in the brain is typically characterized by rapid activation of microglia. During this period, there is no peripheral immune response. Over time, however, chronic inflammation causes the degradation of tissue and of the blood–brain barrier. During this time, microglia generate reactive oxygen species and release signals to recruit peripheral immune cells for an inflammatory response. In addition, microglia are known to have two major states: M1, a state in which cells are activated and secrete pro-inflammatory factors; and M2, a state in which cells are deactivated and secrete anti-inflammatory factors. Microglia are usually in a resting state (M2), but in Parkinson's disease can enter M1 due to the presence of α-synuclein aggregates. The M1 microglia release pro-inflammatory factors which can cause motor neurons to die. In this case, dying cells can release factors to increase the activation of M1 microglia, leading to a positive feedback loop which causes continually increasing cell death. BBB breakdown The fifth proposed major mechanism for cell death is the breakdown of the blood–brain barrier (BBB). The BBB has three cell types which tightly regulate the flow of molecules in and out of the brain: endothelial cells, pericytes, and astrocytes. 
In neurodegenerative diseases, BBB breakdown has been measured and identified in specific regions of the brain, including the substantia nigra in Parkinson's disease and the hippocampus in Alzheimer's disease. Protein aggregates or cytokines from neuroinflammation may interfere with cell receptors and alter their function in the BBB. Most notably, vascular endothelial growth factor (VEGF) and VEGF receptors are thought to be dysregulated in neurodegenerative diseases. The interaction between the VEGF protein and its receptors leads to cell proliferation, but is believed to be disrupted in Parkinson's disease and Alzheimer's disease. This then causes cells to stop growing and therefore prevents new capillary formation via angiogenesis. Cell receptor disruption can also affect the ability of cells to adhere to one another with adherens junctions. Without new capillary formation, the existing capillaries break down and cells start to dissociate from each other. This in turn leads to the breakdown of gap junctions. Gap junctions in endothelial cells in the BBB help prevent large or harmful molecules from entering the brain by regulating the flow of nutrients to the brain. However, as gap junctions break down, plasma proteins are able to enter the extracellular matrix of the brain. This mechanism is also known as vascular leakiness, where capillary degeneration leads to blood and blood proteins "leaking" into the brain. Vascular leakiness can eventually cause neurons to alter their function and shift towards apoptotic behavior or cell death. Impact on locomotion Dopaminergic neurons are the most abundant type of neuron in the substantia nigra, a part of the brain regulating motor control and learning. Dopamine is a neurotransmitter which modulates the activity of motor neurons in the central nervous system. The activated motor neurons then transmit their signals, via action potential, to motor neurons in the spinal cord. However, when a significant percentage of the dopaminergic neurons die (about 50–60%), this decreases dopamine levels by up to 80%. This inhibits the ability of neurons to generate and transmit a signal. This transmission inhibition ultimately causes the characteristic Parkinsonian gait, with symptoms such as hunched and slowed walking or tremors. References Parkinson's disease Programmed cell death
Pathophysiology of Parkinson's disease
[ "Chemistry", "Biology" ]
1,973
[ "Senescence", "Programmed cell death", "Signal transduction" ]
50,493,193
https://en.wikipedia.org/wiki/Robotic%20non-destructive%20testing
Robotic non-destructive testing (NDT) is a method of inspection used to assess the structural integrity of petroleum, natural gas, and water installations. Crawler-based robotic tools are commonly used for in-line inspection (ILI) applications in pipelines that cannot be inspected using traditional intelligent pigging tools (or unpiggable pipelines). Robotic NDT tools can also be used for mandatory inspections in inhospitable areas (e.g., tank interiors, subsea petroleum installations) to minimize danger to human inspectors, as these tools are operated remotely by a trained technician or NDT analyst. These systems transmit data and commands via either a wire (typically called an umbilical cable or tether) or wirelessly (in the case of battery-powered tetherless crawlers). Applications Robotic NDT tools help pipeline operators and utility companies complete required structural integrity data sets for maintenance purposes in the following applications: Petroleum and public utility pipelines Pipe walls Girth welds Nuclear cooling systems Storage tanks Floor plates Shell plates Welds Pipeline conditions that may prevent or hinder a flow-driven pig inspection include: Some pipe fittings (e.g., small-radius bends, tees, butterfly valves, reducers) may be impassable for bulky inspection pigs. Technicians can manually adjust robotic tool travel speed, orientation, and configuration to navigate fittings that might trap or damage a free-flowing pig. Product flow may not be conducive to pig travel. Technician control of self-propelled crawler travel reduces the risk of velocity-based sensor malfunction. Real-time tool monitoring allows the technician to adjust the tool run immediately if readings become unacceptable, including adjusting tool settings to re-scan missed areas or repairing damaged components. Most robotic tools employ non-contact examination methods – technicians are not forced to manage a layer of couplant. Limited tool access may impact use of traditional tools – smart pigs require special entry and exit points (called launchers and receivers, respectively), which may be permanently or temporarily installed. Some crawlers can be inserted via removed fittings or cut-out spools as small as 24” in length, providing greater flexibility in launch and retrieval options – these tools do not require special fixtures. Some crawlers are designed to enter and exit natural gas lines via hot taps, which can be placed at pipeline operator convenience without taking the line out of service. Even in pipelines that could feasibly accept a traditional smart pig, the ability of crawlers to perform short inspections inside specific areas of concern is much more efficient for pipeline operators than arranging a lengthy pig run just to reach the same small area. Robotic NDT tools also offer safety advantages in inhospitable areas: Tank shell inspection crawlers typically climb the sides of the tanks, avoiding the danger to the inspectors and time/expense to the tank owner of providing fall protection or/and scaffolding. Similarly, tank floor inspection crawlers that can be lowered into the tank via portholes on the tank roof eliminate the hazards of confined space entry and the time/expense involved in air quality monitoring. Tools capable of working while submerged eliminate the hazards, difficulty, and expense of draining the inspection area. When used in storage tank inspections and subsea applications, these tools also eliminate hazards associated with diving. 
Robotic ILI crawler variants Tethered tool overview Tethered robotic inspection tools have an umbilical cable attached to them, which provides power and control commands to the tool while relaying sensor data back to the technician. Tethered crawlers have the following advantages over untethered crawlers: Technicians can use the tether to help retrieve the crawler in an emergency or to perform repairs Unlimited power supply from the umbilical cable allows technicians to examine potential defects as necessary without concern for battery life The umbilical cable supplies real-time control and sensor data to technicians, allowing re-inspection of questionable findings if necessary as well as alerting technicians immediately to tool malfunctions (i.e., minimizing false calls or/and missed anomalies) Most tethered ILI crawlers are small enough to be inserted via removed fittings/flanges or small cuts in a pipeline, minimizing inconvenience to the pipeline operator Bi-directional capabilities require only one access point for pipe inspections Tethered crawlers have the following disadvantages against untethered crawlers: The length and weight of the umbilical cable limits the distance these tools can travel Pipelines and tanks typically must be taken out of service to accommodate ILI tool entry and travel Untethered ILI crawler overview Untethered robotic ILI crawlers are powered by onboard batteries; these tools transmit sensor data wirelessly to the tool operator or store the data for downloading upon tool retrieval. Untethered crawlers have the following advantages over tethered crawlers: Untethered tools have a greater effective distance without the limitations imposed by an umbilical cable Pipelines can be sealed with untethered tools inside – the pipe can often remain in service during the inspection Bi-directional capabilities require only one access point for pipe inspections Untethered crawlers have the following disadvantages against tethered crawlers: Untethered robotic ILI crawlers can get stuck, requiring excavation and pipe cutting to retrieve the tool Data-recording robotic ILI crawlers do not supply real time data to operators, which can require additional inspection runs to analyze possible findings Untethered robotic ILI crawlers typically require large launchers to deploy and retrieve Inspection technologies Robotic NDT tools employ suites of inspection sensors. This section describes common sensor types; most tools combine several types of sensor depending on factors such as robot size, design, and application. Electromagnetic Acoustic Transducers (EMAT) – milled steel Main article – Electromagnetic acoustic transducers Electromagnetic acoustic transducers (EMAT) induce ultrasonic waves into uniformly-milled metal inspection objects (e.g., pipe walls, tank floors). Technicians can assess metal condition and detect anomalies based on the reflections of these waves – when the transducer passes over an anomaly, a new reflection appears between the initial pulse and the normal reflection. Direct beam EMAT, where the tool induces ultrasonic waves into the metal at a 0° angle (or perpendicular to the metal surface), is the most common inspection method. 
Direct beam inspections determine metal thickness as well as detect and measure the following defects: Metal loss on the internal surface (e.g., pitting corrosion, general metal loss) Metal loss on the external surface (e.g., pitting corrosion, gouges), including a residual thickness measurement in defect areas Mid-wall pipe mill anomalies (e.g., laminations, non-metal inclusions), including depth measurement Angle beam inspections, where the tool induces ultrasonic waves into the metal at an angle relative to the metal surface, can be performed concurrently with direct beam inspections to confirm anomaly detections. An angle beam transducer only registers echoes from anomalies or reflectors that fall into the beam path; unlike direct beam, it does not receive reflections from the opposite wall of normal steel. The combination of angle beam and direct beam methods may find additional anomalies and increase inspection accuracy. However, the angle beam method has a lower tolerance for surface debris than the direct beam method. Angle beam inspections discover crack-like anomalies parallel to the pipe axis and metal loss defects that are too small to detect via direct beam, including the following: Stress corrosion cracking Mechanical damage (e.g., scores, feed marks, scratches) Pitting corrosion Besides its uses in unpiggable pipelines, the non-contact nature of EMAT tools makes this method ideal for dry applications where liquid couplant requirements may make traditional UT tools undesirable (e.g., natural gas lines). EMAT – girth welds Weld integrity is a crucial component of pipeline safety, especially girth welds (or the circumferential welds that join each section of pipe together). However, unlike the consistent molecular structure of milled steel, welds and their heat-affected zones (HAZs) have an anisotropic grain structure that attenuates ultrasonic signals and creates wave velocity variances that are difficult for ILI tools to analyze. One angle-beam EMAT method employs a set of nine frequency-time (FT) scans on each side of the girth weld, where each frequency corresponds to a different input wave angle. The following figure shows a diagram of the inspection area covered by this method, where the green area represents the propagation of shear waves in the weld and surrounding metal. The tool merges each set of FT scans into a single frequency-time matrix scan to display weld conditions, with anomalies color-coded by severity. This method of girth weld scanning is designed to detect the following weld defects: Planar defects (e.g., lack of fusion, cracks) Volumetric defects (e.g., porosity, nonmetallic inclusions) Magnetic Flux Leakage (MFL) Main article – Magnetic flux leakage Magnetic flux leakage (MFL) tools use a sensor sandwiched between multiple powerful magnets to create and measure the flow of magnetic flux in the pipe wall. Structurally-sound steel has a uniform structure that allows regular flow of the magnetic flux, while anomalies and features interrupt the flow of flux in identifiable patterns; the sensor registers these flow interruptions and records them for later analysis. The following figure illustrates the principle of a typical MFL inspection tool; the left side of the diagram shows how an MFL tool works in structurally sound pipe, while the right side shows how the tool detects and measures a metal loss defect. MFL tools are used primarily to detect pitting corrosion, and some tool configurations can detect weld defects. 
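The sources describe MFL and EMAT analysis only qualitatively, so the sketch below is a generic, hypothetical post-processing step rather than any vendor's algorithm: it flags contiguous samples whose flux-leakage reading deviates strongly from a baseline and groups them into candidate metal-loss indications. The baseline estimate, the deviation threshold, and the function names are illustrative assumptions.

```python
import numpy as np

def flag_mfl_indications(signal, axial_positions, n_sigma=4.0):
    """Group contiguous anomalous MFL samples into candidate indications.

    signal          : 1-D array of flux-leakage readings along the pipe axis
    axial_positions : 1-D array of matching axial positions (metres)
    n_sigma         : deviation from baseline, in standard deviations,
                      above which a sample counts as anomalous
    Returns a list of (start_m, end_m, peak_deviation) tuples.
    """
    baseline = np.median(signal)
    spread = signal.std()
    anomalous = np.abs(signal - baseline) > n_sigma * spread

    indications, start = [], None
    for i, flag in enumerate(anomalous):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            peak = np.abs(signal[start:i] - baseline).max()
            indications.append((axial_positions[start], axial_positions[i - 1], peak))
            start = None
    if start is not None:
        peak = np.abs(signal[start:] - baseline).max()
        indications.append((axial_positions[start], axial_positions[-1], peak))
    return indications
```

In practice an analyst would then convert each indication's peak deviation into an estimated defect depth using a tool-specific calibration, which is outside the scope of this sketch.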
One advantage of MFL tools over ultrasonic tools is the ability to maintain reasonable sensitivity through relatively thick surface coatings (e.g., paint, pipe liners). Video inspection Main article – video inspection Robotic NDT tools employ cameras to provide technicians an optimal view of the inspection area. Some cameras provide specific views of the pipeline (e.g., straight forward, sensor contact area on the metal) to assist in controlling the tool, while other cameras are used to take high-resolution photographs of inspection findings. Some tools exist solely to perform video inspection; many of these tools include a mechanism to aim the camera to completely optimize technicians’ field of vision, and the lack of other bulky ILI sensor packages makes these tools exceptionally maneuverable. Cameras on multipurpose ILI tools are usually placed in locations that maximize technicians’ ability to analyze findings as well as optimally control the tool. Laser profilometry Main article – surface metrology Laser profilometers project a shape onto the object surface. Technicians configure the laser (both angle of incidence and distance from the object) to ensure the shape is uniform on normal metal. Superficial anomalies (e.g., pitting corrosion, dents) distort the shape, allowing the inspection technicians to measure the anomalies using proprietary software programs. Photographs of these laser distortions provide visual evidence that improves the data analysis process and contributes to structural integrity efforts. Pulsed-Eddy Current (PEC) Main article – Pulsed-eddy current Pulsed-eddy current (PEC) tools use a probe coil to send a pulsed magnetic field into a metal object. The varying magnetic field induces eddy currents on the metal surface. The tool processes the detected eddy current signal and compares it to a reference signal set before the tool run; the material properties are eliminated to give a reading for the average wall thickness within the area covered by the magnetic field. The tool logs the signal for later analysis. The following diagram illustrates the principle of a typical PEC inspection tool. PEC tools can inspect accurately with a larger gap between the transducer and the inspection object than other tools, making it ideal for inspecting metal through non-metal substances (e.g., pipe coatings, insulation, marine growth). Case studies United States federal law requires baseline inspections to establish pipeline as-built statistics and subsequent periodic inspections to monitor asset deterioration. Pipeline operators also are responsible to designate high-consequence areas (HCAs) in all pipelines, perform regular assessments to monitor pipeline conditions, and develop preventive actions and response plans. State regulations for inspecting pipelines vary based on the level of public safety concerns. For example, a 2010 natural gas pipeline explosion in a San Bruno residential neighborhood led the California Public Utilities Commission to require safety enhancement plans from California natural gas transmission operators. The safety plan included numerous pipeline replacements and in-line inspections. Tethered robotic ILI crawler application examples The federal Pipeline and Hazardous Materials Safety Administration (PHMSA) does not permit use of tetherless crawlers in HCAs due to the risk of getting stuck. 
Excavating buried pipelines to retrieve stuck tools beneath freeway crossings, river crossings or dense urban areas would impact the community infrastructure too greatly. Natural gas and oil pipeline operators therefore rely on tethered robotic ILI crawlers to inspect unpiggable pipelines. Williams used a tethered robotic ILI crawler to inspect an unpiggable section of the Transco Pipeline in New Jersey in 2015. The pipeline system ran beneath the Hudson River; construction of a new condominium development nearby created a new HCA, requiring Williams to create an integrity management program per PHMSA regulations. Alyeska Pipeline Service Company inspected Pump Station 3 on the Trans-Alaska Pipeline System after an oil leak was discovered in an underground oil pipeline at Pump Station 1 in 2011. The spill resulted in a consent agreement between Alyeska and PHMSA requiring Alyeska to remove all liquid-transport piping from its system that could not be assessed using ILI tools or a similar suitable inspection technique. Because other ILI tools could not navigate the pipeline geometry common to each of the eleven pump stations along the pipeline, Alyeska received approval to use a tethered robotic ILI crawler manufactured by Diakont to complete an inspection project at Pump Station 3. This tool allowed Alyeska to only remove a few small aboveground fittings to permit crawler entry into the piping, saving the time and expense necessary to excavate hundreds of feet of pipe (some of which was also encased in concrete vaults) to inspect by hand. Nuclear power plants in the United States are subject to unique integrity management mandates per the Nuclear Energy Institute (NEI) NEI 09-14, Guideline for the Management of Buried Piping Integrity. The Cooper Nuclear Station in Nebraska performed buried pipe inspections to comply with these industry mandates as part of a 2010 nuclear power plant license renewal. Part of the plant pipeline integrity management program included inspecting a high pressure coolant injection (HPCI) line using a tethered robotic ILI crawler manufactured by Diakont. The South Texas Project Electric Generating Station performed an inspection of a service water pipe in 2014 using a GE Hitachi Nuclear Energy crawler. Tetherless robotic ILI crawler application examples Natural gas pipeline operators can use tetherless robotic ILI crawlers for smaller distribution pipelines that are not located beneath critical infrastructure elements (e.g., freeway crossings). In 2011, Southern California Gas Company (SoCalGas) used a tetherless robotic ILI crawler manufactured by Pipetel to inspect an 8” natural gas pipeline whose product flow lacked the pressure to propel a traditional smart pig. The tool successfully inspected 2.5 miles of pipeline, including a cased segment and an area underneath a railway track. Southwest Gas Corporation used the same tool in 2013 to inspect approximately one mile of a 6” natural gas line in Las Vegas, Nevada. Central Hudson Gas & Electric used a similar crawler in 2015 to inspect a 3000’ section of a 16” natural gas line that included a roadway crossing. NDT method comparison Robotic NDT tools have the following advantages over other NDT methods: Real-time data analysis makes structural integrity efforts more effective and convenient. 
Faster preliminary results make structural integrity management more efficient – results from a smart pig are not available until the tool run is complete and may take up to 90 days to analyze, whereas the shorter inspection scope and close real-time monitoring allow robotic tool results to be formally reported in as little as 30 days. Robotic tools inspections can include an immediate reporting threshold. Crews can use the separate reporting thresholds to better prioritize findings. The ability to stop the tool and alert customer engineers to the most serious findings helps expedite structural integrity efforts. Continuous monitoring allows for tool repair or/and inspection scope adjustment to prevent the cost/inconvenience of a whole repeat tool run. Real-time data monitoring allows daily reports and makes a preliminary report (containing only the most serious anomalies) possible. The inspection crew can stop the tool’s forward progress to re-examine findings in order to gather additional data and confirm defect identity/severity. The ability to monitor tool function ensures tool data integrity for the entirety of the inspection. The compact footprint of these tools allows them to be deployed at customer convenience rather than limited to pre-established endpoints (i.e., pig launcher/receiver). This makes tethered tools less likely to get stuck, and easier to retrieve if stuck/damaged. Pipeline operators can enjoy major savings on excavation costs when examining underground installations, especially if the tool run can be coordinated with an existing excavation during other maintenance efforts. The smaller space requirements make robotic NDT crawlers much easier to use in urban environments and other cramped settings where pedestrians, vehicular traffic, and/or other workers are present. Robotic NDT tools are specifically designed to navigate more complex environments. The inspection crew can adapt tool travel to accommodate fixtures (e.g., tees, bends, tank roof supports) as well as findings (e.g., dents, corrosion pits) to prevent the tool from becoming damaged or stuck. The inspection crew can also manipulate the tool to maximize sensor reception in areas where the tool’s normal travel path would impact readings. Many inspection areas pose significant safety hazards to human occupants that can be eliminated or greatly reduced by robotic NDT tools: The modest entry requirements and remote operation of pipeline inspection crawlers minimizes hazards associated with working in trenches. Robotic inspection inside liquid tanks eliminates the hazards associated with working in confined spaces, especially if the tank contents include dangerous fumes. Robotic inspection of tank shells eliminates the need for fall protection and the dangers involved with working at significant heights. The cost of an outage for an inspection (and planned maintenance, if necessary) is a fraction of the costs involved in an asset failure. Robotic tools have the following disadvantages against other NDT methods: The need for the inspection crew to maintain communication with the tool limits its effective range. Tethered tools may also be limited by the crawler’s ability to pull the tether over long distances. Tension on a tethered crawler’s cable may limit tool movement after passing too many bends in pipeline applications, or after wrapping around roof supports during tank floor inspections. Many self-propelled pipeline inspection tools are slower than pigs that can flow with product. 
Unlike some remote-control vehicles that are commercially available for rent or sale, robotic NDT crawlers require significant training before they can be used for formal inspection. Regulatory requirements often specify that inspection data must be gathered, analyzed, and collated for reporting by technicians who are certified as experts in the applicable inspection technology by an independent organization (e.g., the American Society for Nondestructive Testing, the American Society of Mechanical Engineers). Many crawlers require the inspection area to be taken out of service and cleaned before operations. Continuous air-quality monitoring may be necessary during operations, up to provision of a blanket of inert gas (e.g., nitrogen) if the area contains especially flammable/explosive fumes. Loose debris (e.g., ferromagnetic dust, paraffin) or internal corrosion can impact EMAT and MFL readings. These services can often be performed during scheduled outages, but special shut-down may be necessary if regulatory requirements do not align with other planned service outages. References Codes and standards US federal HCA identification guidelines – 49 CFR 192.905 US federal baseline pipeline assessment – 49 CFR 192.921 US federal pipeline integrity evaluation process – 49 CFR 192.937 NTSB identification of HCAs Pipeline Operators Forum American Petroleum Institute (API) 653 API 1163 The American Society for Mechanical Engineers (ASME) B31.8 ASME B31G NACE SP0102-2010 Guideline for the Management of Buried Piping Integrity – NEI 09-14 External links Diakont - pipeline ILI Innerspec - Robotic Inspection Systems Pipetel Technologies - pipeline ILI Applus - subsea pipe inspection TechCorr - in-service tank floor inspection Newton Labs – in-service tank floor inspection Invert Robotics – tank shell inspection Structural Integrity Associates - pipeline ILI Inline Inspection and Pipeline Pigging Resource Introduction to Inline Inspection “How Does Pipeline Pigging Work?" – rigzone.com NDT Resource Center – Shear Wave Generation NDT Resource Center – Basic Principles of Eddy Current Inspection “What is MFL?” – MFE Inc. MFL limitations – MFE Inc. MFL Frequently Asked Questions (GE) NDT.net – example wireless crawler description NDT-ed.org – storage tank inspection overview NYSEARCH Pipetel reporting: NYSEARCH Pipeline Safety Update Commercial Products Pipeline & Gas Journal – unpiggable pipeline overview (GE tool) Silverwing – remote-control tank shell inspection vehicle PHMSA Pipeline Safety homepage Weld Failure fact sheet American Society for Nondestructive Testing certification American Society of Mechanical Engineers American Society for Testing and Materials Robotics Nondestructive testing
Robotic non-destructive testing
[ "Materials_science", "Engineering" ]
4,543
[ "Nondestructive testing", "Materials testing", "Robotics", "Automation" ]
47,237,260
https://en.wikipedia.org/wiki/Tomviz
tomviz is an open source software platform for reproducible volumetric visualization and data processing. The platform is designed for a wide range of scientific applications but is especially tailored to high-resolution electron tomography, with features that allow alignment and reconstruction of nanoscale materials. The tomviz platform allows graphical analysis of 3D datasets, but also comes packaged with Python, NumPy, and SciPy tools to allow advanced data processing and analysis. The current version is 1.10.0. In 2022 the tomviz platform was used to enable 3D visualization of specimens during an electron or cryo-electron tomography experiment. Tomviz is built with a multi-threaded data analysis pipeline and runs dynamic visualizations that update as new data is collected or as reconstruction algorithms proceed. Scientists can interactively analyse 3D specimen structure concurrently with a tomographic reconstruction, during or after an experiment. References 3D graphics software 3D imaging Data and information visualization software Image processing software Graphics software Physics software Science software
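tomviz exposes its processing steps as Python operators inside the application, and that API is not reproduced here. The snippet below is only a generic NumPy/SciPy sketch of the kind of volumetric processing such a pipeline performs: smooth a reconstructed volume, threshold it, and count connected components. The array shape, smoothing width, and threshold rule are illustrative assumptions rather than tomviz defaults.

```python
import numpy as np
from scipy import ndimage

def segment_volume(volume, sigma=1.5):
    """Smooth a reconstructed 3-D volume and label its dense regions.

    volume : 3-D NumPy array of reconstructed intensities (z, y, x)
    sigma  : Gaussian smoothing width in voxels
    """
    smoothed = ndimage.gaussian_filter(volume.astype(np.float32), sigma=sigma)
    threshold = smoothed.mean() + smoothed.std()  # simple global threshold
    mask = smoothed > threshold
    labeled, n_regions = ndimage.label(mask)      # connected-component labels
    return mask, labeled, n_regions

if __name__ == "__main__":
    fake_volume = np.random.rand(64, 64, 64)      # stand-in for a reconstruction
    _, _, count = segment_volume(fake_volume)
    print("connected regions above threshold:", count)
```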
Tomviz
[ "Physics" ]
203
[ "Physics software", "Computational physics" ]
47,237,357
https://en.wikipedia.org/wiki/SolidRun
SolidRun is an Israeli company producing embedded systems components, mainly mini computers, single-board computers and computer-on-module devices. It is especially known for the CuBox family of mini-computers, and for producing motherboards and processing components such as the HummingBoard motherboard. Situated in Acre, Israel, SolidRun develops and manufactures products aimed both at the private entertainment sector and at companies developing processor-based products, notably components of "Internet of Things" technology systems. Within the scope of IoT technology, SolidRun's mini computers are aimed at the intermediate sphere between sensors and user devices on one side and the larger network or cloud framework on the other. Within such a network, mini computers or system-on-module devices act as mediators, gathering and processing information from sensors or user devices and communicating with the network; this is also known as edge computing. History SolidRun was founded in 2010 by co-founders Rabeeh Khoury (formerly an engineer at Marvell Technology Group) and Kossay Omary. The goal of SolidRun has been to develop, produce and market components aimed at integration with IoT systems. The company today is situated in Acre in the Northern District of Israel, and headed by Dr. Atai Ziv (CEO). The major product development line aimed at the consumer market is the CuBox family of mini-computers, the first of which was announced in December 2011, followed by the development of the CuBox-i series, announced in November 2013. The most recent addition to the CuBox line has been the CuBoxTV (announced in December 2014), which has been marketed primarily for the home entertainment market. A further primary product developed by SolidRun is the Hummingboard, an uncased single-board computer, marketed to developers as an integrated processing component. SolidRun develops all of its products using open-source software (such as Linux and OpenELEC), identifying itself as a member of the OSS community and a promoter of open-source software platforms. The products developed by SolidRun are classed into a number of families, based upon the processor maker. Each family offers a range of mini-computers, SOMs, and networking solutions, currently divided into NXP's i.MX 6, i.MX 8 and LX2160A processor families, the Marvell Armada and Octeon families, and the Texas Instruments Sitara family. Each processing family offers different advantages and application capacities. IoT and industrial products SOMs A compact ARM-based system-on-module processing board, with a Freescale i.MX 6 system-on-chip and networking, power management and storage capabilities. The MicroSoM is aimed at device and system development, as an all-round modular processing component. The SOM comes in four models ranging in performance, especially in regard to processing. The Single-core and Dual-Light-core SOMs house a Vivante GC880 GPU, a 10/100 Mbit/s Ethernet network connection and a 2-lane CSI camera interface port. The Single-core variant holds 32-bit DDR3, 512 MB memory, while the Dual-light variant holds 64-bit DDR3, 1 GB memory. The Dual-core and Quad-core SOMs house a Vivante GC2000 GPU, a 10/100/1000 Mbit/s Ethernet network connection and a 4-lane CSI camera interface port; they also include built-in 802.11 b/g/n wireless and a Bluetooth 4.0 port. Both variants offer 64-bit DDR3 memory at a 1066 Mbit/s speed, with the dual-core coming with 1 GB of memory and the Quad-core with 2 GB of memory. 
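As a purely illustrative sketch of the mediator ("edge computing") role described above, and not of any SolidRun software, the loop below shows an edge device sampling a local sensor at a high rate and forwarding only periodic summaries upstream. The read_sensor and publish functions are hypothetical stand-ins for a real sensor driver and network client.

```python
import random
import statistics
import time

def read_sensor():
    # Hypothetical stand-in for a real driver call (e.g., a temperature read)
    return 20.0 + random.random()

def publish(summary):
    # Hypothetical stand-in for the uplink (e.g., an MQTT or HTTPS client)
    print("uplink:", summary)

def gateway_loop(window_seconds=5, sample_hz=10, windows=2):
    # Aggregate high-rate local readings; send one compact summary per window
    for _ in range(windows):
        samples = []
        deadline = time.monotonic() + window_seconds
        while time.monotonic() < deadline:
            samples.append(read_sensor())
            time.sleep(1.0 / sample_hz)
        publish({
            "count": len(samples),
            "mean": round(statistics.fmean(samples), 3),
            "min": round(min(samples), 3),
            "max": round(max(samples), 3),
        })

if __name__ == "__main__":
    gateway_loop()
```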
Models & specifications: TI AM64x Sitara CuBox-i & CuBox-M Announced in December 2011, CuBox and CuBox-i are a series of fanless nettop-class mini computers, all cube shaped and approximate 2 × 2 × 2 inches in size, weighing around 91 g (3.2 oz). The first generation CuBox was a low-power ARM architecture CPU based computer, using the Marvell Armada 510 (88AP510) SoC with an ARM v6/v7-compliant superscalar processor core, Vivante GC600 OpenGL 3.0 and OpenGL ES 2.0 capable 2D/3D graphics processing unit, Marvell vMeta HD Video Decoder hardware engine, and TrustZone security extensions, Cryptographic Engines and Security Accelerator (CESA) co-processor. In November 2013, SolidRun released a family of CuBox-i computers initially named CuBox-i1, i2, i2eX, and i4Pro, containing a range of different i.MX6 processors by Freescale Semiconductor. A further development in the family, CuBoxTV was announced in December 2014 as a mid-range CuBox-i SOM device designed to run Kodi on an OpenELEC Operating system, developed for the home entertainment market. CuBoxTV was based on an ARM architecture Quad core CPU, 1 GB, 64 bit memory, GC2000 GPU with an OpenGL ES quad shader, and a host of video, audio and picture decoders and encoders supporting all major file type. The device has a number of connection ports including HDMI, 10/100/1000 Ethernet, USB 2.0, eSATA and optical audio. HummingBoard A compact computer-on-module ARM-based mini computer, running an i.MX6 or iMX8M SoC. HummingBoard is marketed as a modular fanless mini computer, to be integrated with larger networks or systems, especially in the area of IoT development. Networking products Marvell ARMADA A388 family A388 SOM Based on the Marvell ARMADA 388 SoC, the SOM features a Dual core ARM Cortex-A9 with 1.6 GHz processing power (up to 1.3 GHz in industrial grade), and up to 2 GB, 32-bit DDR3L memory. At 30 mm × 50 mm the ARMADA MicroSoM is the basis for a number of SolidRun's products in this product family. ClearFog A388 Announced in November 2015, SolidRun's ClearFog Single-board computer (SBC) is based on Marvell's Armada 38x ARM Cortex-A9 Dual SoC and is marketed as a modular development integration SBC. The ClearFog is divided into two grades: Base and Pro, differing mainly in connectivity options and size. The ClearFog is a fanless SBC based on a Marvell ARMADA A388 dual 1.6 GHz core SOM, with 1 GB memory, Mikroelektronika mikroBUS Click Board support, and various connection ports including USB 3.0, mPCIE & Ethernet ports. The Clearfog Pro has a Marvell 88E6176 DSA chip. NXP Layerscape LX2160A family LX2160A COM Express type 7 Marvell OCTEON TX2 CN9130 family See also Raspberry Pi Internet of Things Industry 4.0 References External links cuboxtv.com Internet of things companies Technology companies established in 2010 Technology companies of Israel Electronics companies of Israel Embedded systems
SolidRun
[ "Technology", "Engineering" ]
1,511
[ "Embedded systems", "Computer science", "Computer engineering", "Computer systems" ]
47,238,046
https://en.wikipedia.org/wiki/Fordow%20Fuel%20Enrichment%20Plant
Fordow Fuel Enrichment Plant (FFEP) is an Iranian underground uranium enrichment facility located northeast of the Iranian city of Qom, near Fordow village, at a former Islamic Revolutionary Guard Corps base. The site is under the control of the Atomic Energy Organization of Iran (AEOI). It is the second Iranian uranium enrichment facility, the other being Natanz. According to the Institute for Science and International Security, possible coordinates of the facility's location are: . Disclosure Existence of the then-unfinished enrichment plant was disclosed to the International Atomic Energy Agency (IAEA) by Iran on 21 September 2009, but only after the site became known to Western intelligence services. Western officials strongly condemned Iran for not disclosing the site earlier; U.S. President Barack Obama said that Fordow had been under U.S. surveillance. Iran argues that this disclosure was consistent with its legal obligations under its Safeguards Agreement with the IAEA, which Iran claims requires it to declare new facilities 180 days before they receive nuclear material. However, the IAEA stated that Iran was bound by its agreement in 2003 to declare the facility as soon as Iran decided to construct it. Capacity In its initial declaration, Iran stated that the purpose of the facility was the production of UF6 enriched up to 5% U-235, and that the facility was being built to contain 16 cascades, with a total of approximately 3000 centrifuges. Later, in September 2011, Iran said it would move its production of 20% LEU to Fordow from Natanz, and enrichment started in December 2011. In January 2012, the IAEA announced that Iran had started producing uranium enriched up to 20% for medical purposes and that the material "remains under the agency's containment and surveillance." Under the Joint Comprehensive Plan of Action of April 2015, the Fordow plant was to be restructured for less intensive research use. The Fordow facility was to stop enriching uranium and researching uranium enrichment for at least fifteen years, and the facility was to be converted into a nuclear physics and technology centre. For 15 years, it would maintain no more than 1,044 IR-1 centrifuges in six cascades in one wing of Fordow. "Two of those six cascades will spin without uranium and will be transitioned, including through appropriate infrastructure modification," for stable radioisotope production for medical, agricultural, industrial, and scientific use. "The other four cascades with all associated infrastructure will remain idle." Iran is not permitted to have any fissile material in Fordow. In 2018, the Israeli company ImageSat published satellite photographs showing renewed construction and development at the Fordow facility. On 5 November 2019, Iranian nuclear chief Ali Akbar Salehi announced that Iran would enrich uranium to 5% at Fordow. In January 2020 the Fordow site had 1,044 centrifuges designed to enrich uranium hexafluoride. In January 2021 the Fordow site began to produce uranium enriched to a 20% level. In March 2023 CNN reported that "near bomb-grade" uranium had been found at Fordow. The IAEA confirmed that uranium particles enriched to 83.7% U-235 had been discovered at Fordow, and that this had been very much a surprise to the agency. In June 2024 the IAEA reported that Iran had built additional centrifuges, while the Washington Post reported on the Iranian order to triple the centrifuge capacity of the Fordow plant. 
The Times of Israel said that four new cascades had been installed but had not yet been commissioned. History Google Maps satellite images for the Fordow site can be found at coordinates 34.885649,50.99669. Images zoomed to the 20 meter level show a large double-fence perimeter erected around the site, with towers located every 25 meters. Six 10-meter-wide entry portals to the complex are located within the fenced area, as well as several buildings, the largest of which is approximately . Iranian authorities state the facility is built deep in a mountain because of repeated threats by Israel to attack such facilities, which Israel believes can be used to produce nuclear weapons. However, attacking a nuclear facility so close to the city of Qom, which is considered holy among Shia Muslims, raises concern about the potential for a Shiite religious response. In November 2013, hundreds of Iranians, mostly students of Sharif University of Technology, accompanied by the head of AEOI, Ali Akbar Salehi, and several Majles (parliament) representatives, formed a human chain around the Fordow uranium enrichment facility. The students were there to show their support for the Iranian nuclear program. In 2016, Iran stationed an anti-aircraft S-300 missile system at the site. In February 2023 the IAEA remarked that the Fordow plant had changed. See also Natanz Nuclear Facility References External links David Albright, Frank Pabian, and Andrea Stricker: The Fordow Enrichment Plant, aka Al Ghadir: Iran's Nuclear Archive Reveals Site Originally Purposed to Produce Weapon-Grade Uranium for 1–2 Nuclear Weapons per Year – Institute for Science and International Security, March 13, 2019 Nuclear facilities Industrial buildings in Iran Uranium Isotope separation facilities Buildings and structures in Qom province Nuclear program of Iran Underground construction Nuclear facilities in Iran
Fordow Fuel Enrichment Plant
[ "Engineering" ]
1,082
[ "Underground construction", "Civil engineering", "Construction" ]
47,239,058
https://en.wikipedia.org/wiki/Edward%20Goodrich%20Acheson%20Award
The Edward Goodrich Acheson Award was established by The Electrochemical Society (ECS) in 1928 in honor of Edward Goodrich Acheson, a charter member of ECS. The award is presented every two years for "conspicuous contribution to the advancement of the objectives, purposes, and activities of the society (ECS)". Recipients of the award receive a gold medal, a wall plaque, a cash prize, ECS Life membership, and complimentary meeting registration. History The Edward Goodrich Acheson Award is the first and most prestigious award of The Electrochemical Society. The award was established by a gift of $25,000 from past president (and namesake of the award) Edward Goodrich Acheson. Originally, recipients were presented with a prize of $1,000, a gold medal, and a bronze replica, with the intention that the gold medal would "find its way to the safe deposit box," while the replica was reserved for "everyday use". The Acheson family later agreed to have the medal be electroplated gold in order to keep the award fund in balance. Thanks to continuous donations from the Acheson family between 1942 and 1991, the endowment fund has allowed the monetary prize to be increased three times since its establishment. Recipients of the award As listed by ECS: 2018 Tetsuya Osaka 2016 Barry Miller 2014 Ralph J. Brodd 2012 Dennis W. Hess 2010 John S. Newman 2008 Robert P. Frankenthal 2006 Vittorio de Nora 2004 Wayne L. Worrell 2002 Bruce Deal 2000 Larry R. Faulkner 1998 Jerry M. Woodall 1996 Richard C. Alkire 1994 J. Bruce Wagner, Jr. 1992 Dennis R. Turner 1990 Theodore R. Beck 1988 Herbert H. Uhlig 1986 Eric M. Pell 1984 Norman Hackerman 1982 Henry C. Gatos 1980 Ernest B. Yeager 1978 Dan A. Vermilyea 1976 N. Bruce Hannay 1974 Cecil V. King 1972 Charles W. Tobias 1970 Samuel Ruben 1968 Francis L. LaQue 1966 Warren C. Vosburgh 1964 Earl A. Gulbransen 1962 Charles L. Faust 1960 Henry B. Linford 1958 William J. Kroll 1956 Robert M. Burns 1954 George W. Heise 1952 John W. Marden 1950 George W. Vinal 1948 Duncan A. MacInnes 1946 H. Jermain Creighton 1944 William Blum 1942 Charles F. Burgess 1939 Francis C. Frary 1937 Frederick M. Becket 1935 Frank J. Tone 1933 Colin G. Fink 1931 Edwin Fitch Northrup 1929 Edward Goodrich Acheson See also List of chemistry awards References External links Edward Goodrich Acheson Award Recipients American science and technology awards Chemistry awards Awards established in 1928
Edward Goodrich Acheson Award
[ "Technology" ]
550
[ "Science and technology awards", "Chemistry awards" ]
47,241,003
https://en.wikipedia.org/wiki/Mahaney%27s%20theorem
Mahaney's theorem is a theorem in computational complexity theory proven by Stephen Mahaney that states that if any sparse language is NP-complete, then P = NP. Also, if any sparse language is NP-complete with respect to Turing reductions, then the polynomial-time hierarchy collapses to . Mahaney's argument does not actually require the sparse language to be in NP, so there is a sparse NP-hard set if and only if P = NP. This is because the existence of an NP-hard sparse set implies the existence of an NP-complete sparse set. References Computational complexity theory
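Since the statement hinges on what "sparse" means, the following brief LaTeX note gives the standard definition; the notation is generic rather than drawn from a particular reference.

```latex
% A language S over an alphabet \Sigma is sparse if its census function is
% polynomially bounded: it contains at most polynomially many strings of each length.
\[
  S \subseteq \Sigma^{*} \text{ is sparse } \iff
  \exists \text{ a polynomial } p \ \ \forall n \in \mathbb{N}:\quad
  \bigl|\, S \cap \Sigma^{n} \,\bigr| \;\le\; p(n).
\]
% Mahaney's theorem: if some sparse S is NP-complete under polynomial-time
% many-one reductions, then P = NP.
```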
Mahaney's theorem
[ "Mathematics", "Technology" ]
122
[ "Computer science stubs", "Theorems in discrete mathematics", "Computer science", "Theorems in computational complexity theory", "Computing stubs" ]
60,558,638
https://en.wikipedia.org/wiki/Monoallelic%20gene%20expression
Monoallelic gene expression (MAE) is the phenomenon in which only one of the two gene copies (alleles) is actively expressed (transcribed), while the other is silent. Diploid organisms bear two homologous copies of each chromosome (one from each parent), so a gene can be expressed from both chromosomes (biallelic expression) or from only one (monoallelic expression). MAE can be random monoallelic expression (RME) or constitutive monoallelic expression. Constitutive monoallelic expression occurs from the same specific allele throughout the whole organism or tissue, as a result of genomic imprinting. RME is a broader class of monoallelic expression, defined by random allelic choice in somatic cells, so that different cells of a multicellular organism express different alleles. Constitutive monoallelic gene expression Random monoallelic gene expression (RME) X-chromosome inactivation (XCI) is the most striking and well-studied example of RME. XCI leads to the transcriptional silencing of one of the X chromosomes in female cells, which results in expression only of the genes located on the other, remaining active X chromosome. XCI is critical for balanced gene expression in female mammals. The allelic choice of XCI by individual cells takes place randomly in the epiblast of the preimplantation embryo, which leads to mosaic expression of the paternal and maternal X chromosome in female tissues. XCI is chromosome-wide monoallelic expression, affecting all genes located on the X chromosome, in contrast to autosomal RME (aRME), which affects single genes interspersed across the genome. aRME can be fixed or dynamic, depending on whether the allele-specific expression is conserved in daughter cells after mitotic cell division. Types of aRME Fixed aRME is established either by silencing of one allele that was previously biallelically expressed, or by activation of a single allele of a previously silent gene. Activation of the single allele is coupled with a feedback mechanism that prevents expression of the second allele. Another possible scenario involves a limited time window of low-probability initiation, which could lead to high frequencies of cells with single-allele expression. It is estimated that 2-10% of all genes show fixed aRME. Studies of fixed aRME require either expansion of monoclonal cultures or lineage-traced in vivo or in vitro cells that are mitotically related. Dynamic aRME occurs as a consequence of stochastic allelic expression. Transcription happens in bursts, so RNA molecules are synthesized from each allele separately, and over time both alleles have some probability of initiating transcription. Transcriptional bursts are allelically stochastic and lead to transcripts of either the maternal or the paternal allele accumulating in the cell. The burst frequency and intensity of transcription, combined with the RNA degradation rate, shape the RNA distribution at the moment of observation and thus determine whether the gene appears bi- or monoallelic. Studies that distinguish fixed and dynamic aRME require single-cell analyses of clonally related cells. Mechanisms of aRME Allelic exclusion is a process of gene expression in which one allele is expressed and the other is kept silent. The two most studied cases of allelic exclusion are the monoallelic expression of immunoglobulins in B and T cells and of olfactory receptors in sensory neurons.
Allelic exclusion is cell-type specific (as opposed to organism-wide XCI), which increases intercellular diversity and thus specificity towards particular antigens or odors. Allele-biased expression is a skewed expression level of one allele relative to the other, although both alleles are still expressed (in contrast to allelic exclusion). This phenomenon is often observed in cells with immune function. Methods of detection Methods of MAE detection are based on the differences between alleles, which can be distinguished either by the sequence of the expressed mRNA or by protein structure. Methods of MAE detection can be divided into single-gene and whole-genome MAE analysis. Whole-genome MAE analysis cannot yet be performed on the basis of protein structure, so these are entirely NGS-based techniques; a minimal sketch of the underlying read-counting logic is given below. Single-gene analysis Genome-wide analysis References External links Gene expression
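To illustrate the read-counting logic behind NGS-based MAE detection, here is a minimal, hypothetical Python sketch. It assumes allele-specific read counts at a heterozygous site have already been extracted (for example from aligned RNA-seq data); the function name, coverage threshold, and ratio cutoff are illustrative assumptions, not values from any published pipeline.

```python
# Minimal sketch of allele-specific expression classification from RNA-seq
# read counts at a heterozygous SNP. All thresholds are illustrative
# assumptions, not values taken from a published MAE-detection pipeline.

def classify_allelic_expression(ref_reads: int, alt_reads: int,
                                min_reads: int = 10,
                                mono_cutoff: float = 0.95) -> str:
    """Classify expression at one heterozygous site as monoallelic or biallelic.

    ref_reads / alt_reads: reads supporting the two alleles (hypothetical input,
    e.g. from a pileup over the SNP position).
    min_reads: minimum total coverage required to make a call.
    mono_cutoff: allelic ratio at or above which expression is called monoallelic.
    """
    total = ref_reads + alt_reads
    if total < min_reads:
        return "insufficient coverage"

    ref_ratio = ref_reads / total
    if ref_ratio >= mono_cutoff:
        return "monoallelic (reference allele)"
    if ref_ratio <= 1.0 - mono_cutoff:
        return "monoallelic (alternative allele)"
    return "biallelic"


if __name__ == "__main__":
    # Hypothetical per-cell counts for one gene across three cells.
    cells = {"cell_1": (42, 1), "cell_2": (3, 2), "cell_3": (18, 21)}
    for cell, (ref, alt) in cells.items():
        print(cell, classify_allelic_expression(ref, alt))
```

A real analysis would aggregate such calls across many heterozygous sites per gene and model technical noise such as allelic dropout in single-cell data, so a single ratio cutoff like the one above is only a starting point.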
Monoallelic gene expression
[ "Chemistry", "Biology" ]
903
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
60,562,746
https://en.wikipedia.org/wiki/Manufacture%20of%20the%20International%20Space%20Station
The project to create the International Space Station required the use of existing manufacturing facilities and the construction of new ones around the world, mostly in the United States and Europe. The agencies overseeing the manufacturing were NASA, Roscosmos, the European Space Agency, JAXA, and the Canadian Space Agency. Hundreds of contractors working for the five space agencies were assigned the task of fabricating the modules, trusses, experiments and other hardware elements for the station. The co-operation of sixteen countries created engineering challenges that had to be overcome, most notably differences in language, culture and politics, but also in engineering processes, management, measurement standards and communication, so that all elements would connect together and function according to plan. The ISS agreements also called for the station components to be highly durable and versatile, as the station is intended to be used by astronauts indefinitely. New engineering and manufacturing processes and equipment were developed, and shipments of steel, aluminium alloys and other materials were needed for the construction of the space station components. History and planning The project began as Space Station Freedom, a US-only effort, but was long delayed by funding and technical problems. Following its initial authorization by Ronald Reagan in the 1980s (with an intended ten-year construction period), the Freedom concept was redesigned and renamed in the 1990s to reduce costs and expand international involvement. In 1993, the United States and Russia agreed to merge their separate space station plans into a single facility integrating their respective modules and incorporating contributions from the European Space Agency and Japan. In the following months, an international agreement board recruited several more space agencies and companies to collaborate on the project. The International Organization for Standardization played a crucial role in reconciling different engineering methods (such as measurements and units), languages, standards and techniques to ensure quality, engineering communication and logistical management across all manufacturing activities for the station components. Engineering designs Engineering diagrams of various elements of the ISS, with annotations of various parts and systems on each module. Technical schematics Manufacturing Information and Processes List of factories and manufacturing processes used in the construction and fabrication of the International Space Station modular components: Decommissioned components are shown in gray. Transportation Once sufficiently manufactured or fabricated, most of the space station elements were transported by aircraft (usually the Airbus Beluga or the Antonov An-124) to the Kennedy Space Center Space Station Processing Facility for final manufacturing stages, checks and launch processing. Some elements arrived by ship at Port Canaveral. For aircraft transport, each module was housed in a custom-designed shipping container with foam insulation and an outer shell of sheet metal to protect it from damage and the elements.
From their respective European, Russian and Japanese factories, the modules were transported by road in their containers to the nearest airport, loaded into the cargo aircraft and flown to Kennedy Space Center's Shuttle Landing Facility for unloading and final transfer to the SSPF and/or the Operations and Checkout Building in the KSC industrial area. The American- and Canadian-built components, such as the US lab, Node 1, the Quest airlock, the truss and solar array segments, and the Canadarm-2, were either flown to KSC by the Aero Spacelines Super Guppy or transported by road and rail. After the final stages of manufacturing, systems testing and launch checkout, each ISS component is loaded into a payload transfer container shaped like the Space Shuttle payload bay. This container carries the component safely in its launch configuration until it is hoisted vertically at the launch pad gantry and transferred to the Space Shuttle orbiter for launch and in-orbit assembly of the International Space Station. Pre-launch processing and last stages of manufacturing With the exception of the Russian-built modules (other than Rassvet), all ISS components pass through one or both of these buildings at Kennedy Space Center. Space Station Processing Facility At the SSPF, ISS modules, trusses and solar arrays are prepped and made ready for launch. This iconic building contains two large class 100,000 cleanroom work areas. Workers and engineers wear full cleanroom clothing while working. Modules receive cleaning and polishing, and some areas are temporarily disassembled for the installation of cables, electrical systems and plumbing. Steel truss parts and module panels are assembled with screws, bolts and connectors, some with insulation. In another area, shipments of spare materials are available for installation. International Standard Payload Rack frames are assembled and welded together, allowing instruments, machines and science experiment boxes to be fitted. Once racks are fully assembled, they are hoisted by a special manually operated robotic crane and carefully maneuvered into place inside the space station modules. Each rack weighs from 700 to 1,100 kg and connects inside the module on special mounts with screws and latches. Cargo bags for the MPLM modules were filled on-site in the SSPF with items such as food packages, science experiments and other miscellaneous cargo, then loaded into the module by the same robotic crane and strapped in securely. Operations and Checkout Building Adjacent to the Space Station Processing Facility, the Operations and Checkout Building's spacecraft workshop is used to test the space station modules in a vacuum chamber, checking for leaks that can then be repaired on-site. Additionally, systems checks of various electrical elements and machines are conducted. Processing operations similar to those in the SSPF are conducted in this building when the SSPF area is full or when certain stages of preparation can only be done in the O&C.
See also Assembly of the International Space Station Origins of the International Space Station Space architecture Aerospace engineering Space manufacturing Space Station 3D – 2002 Canadian documentary References External links ISS space agency websites  Canadian Space Agency  European Space Agency  Centre national d'études spatiales (National Centre for Space Studies)  German Aerospace Center  Italian Space Agency  Japan Aerospace Exploration Agency  Russian Federal Space Agency  National Aeronautics and Space Administration Manufacturer websites  S.P. Korolev Rocket and Space Corporation Energia  Boeing - International Space Station  Lockheed Martin Space Systems  Thales Alenia Group  Thales Aerospace UK  BSM group (stainless steel supplier)  MDA Space Missions  Institute of Space and Astronautical Science  Brazilian Space Agency  Bigelow Aerospace  Airbus space industries International Space Station Manufacturing Space manufacturing Industry in space
Manufacture of the International Space Station
[ "Astronomy", "Engineering" ]
1,265
[ "Industry in space", "Outer space", "Mechanical engineering", "Manufacturing" ]
60,563,052
https://en.wikipedia.org/wiki/C18H27NO
{{DISPLAYTITLE:C18H27NO}} The molecular formula C18H27NO (molar mass: 273.41 g/mol, exact mass: 273.2093 u) may refer to: 3-MeO-PCP (3-Methoxyphencyclidine) 4-MeO-PCP (4-Methoxyphencyclidine)
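As a quick check of the masses quoted above, the following Python sketch recomputes them from standard atomic weights; the helper function and dictionary layout are purely illustrative.

```python
# Recompute the average molar mass and monoisotopic (exact) mass of C18H27NO.
# Atomic weights are standard reference values; the small helper is for
# illustration only.

AVERAGE_WEIGHTS = {"C": 12.0107, "H": 1.00794, "N": 14.0067, "O": 15.9994}
MONOISOTOPIC = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052, "O": 15.9949146221}

FORMULA = {"C": 18, "H": 27, "N": 1, "O": 1}  # C18H27NO


def mass(formula: dict, weights: dict) -> float:
    """Sum element counts times the chosen per-element mass."""
    return sum(count * weights[element] for element, count in formula.items())


if __name__ == "__main__":
    print(f"average molar mass: {mass(FORMULA, AVERAGE_WEIGHTS):.2f} g/mol")  # ~273.41
    print(f"monoisotopic mass:  {mass(FORMULA, MONOISOTOPIC):.4f} u")         # ~273.2093
```

Both printed values agree with the figures given in the entry to within rounding.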
C18H27NO
[ "Chemistry" ]
87
[ "Isomerism", "Set index articles on molecular formulas" ]
60,567,436
https://en.wikipedia.org/wiki/Perfluorotriethylcarbinol
Perfluorotriethylcarbinol is a perfluorinated alcohol, the fully fluorinated analogue of triethylcarbinol (3-ethyl-3-pentanol). It is a powerful uncoupling agent and is toxic by inhalation. See also Perfluorinated compound Uncoupling agent References Uncouplers Perfluorinated alcohols Tertiary alcohols
Perfluorotriethylcarbinol
[ "Chemistry" ]
78
[ "Cellular respiration", "Uncouplers" ]
60,570,457
https://en.wikipedia.org/wiki/Roy%20Mugerwa
Roy D. Mugerwa (January 2, 1942 – April 19, 2019) was a Ugandan physician, cardiologist and researcher. His academic contributions include serving as a Professor Emeritus at Makerere University College of Health Sciences in Kampala, advancing cardiology in Uganda, researching HIV/AIDS and tuberculosis, and working to find an effective HIV vaccine. Background Dr. Mugerwa was born on January 2, 1942, to Yowana Ziryawula and Maria Namatovu. He pursued his education at St. Mary's College Kisubi for both the O-Level and A-Level and was at the top of his class for all six years. Upon graduation, he was admitted to Makerere University, Uganda's oldest and largest public university, and completed both undergraduate and master's programs. He received training in medicine and cardiology at Mulago Hospital and also pursued higher-level instruction in the United States, the United Kingdom, and the Netherlands. He then returned to Uganda and developed a career in Kampala, serving as faculty at both Mulago Hospital and Makerere University. Career Early career Dr. Mugerwa's initial specialization was cardiology, and this is what he pursued in the early stages of his career. By 1972, he was one of the first five research fellows to be trained at Mulago Cardiac Clinic, the precursor of the present-day Uganda Heart Institute (UHI). Not only did Dr. Mugerwa have a role in founding UHI, but he also served as both Executive Director and Director. Among his other efforts in cardiac health in Uganda were introducing the practice of echocardiography, founding the country's first hypertension clinic, and establishing the Uganda Heart Association. HIV/AIDS epidemic Dr. Mugerwa's career took a turn away from cardiology after HIV/AIDS was discovered to be present in Uganda. When scientist Wilson Carswell first confirmed there were HIV-positive patients in Mulago Hospital in 1984, Dr. Mugerwa was part of a team that joined him in traveling to Masaka and Rakai and ascertaining that the virus had spread there too. They published their findings in 1985, believing these Ugandans suffered from a manifestation of AIDS called Slim Disease, although it would later be known that they were dying of AIDS. In October 1985, shortly after their Slim Disease publication, Dr. Mugerwa attended a Workshop on AIDS in Central Africa, organized by the World Health Organization (WHO). The workshop discussed the establishment of an AIDS Surveillance System in every African country, which would be charged with confirming the presence of AIDS and collecting data. Dr. Mugerwa, along with two other colleagues, was appointed to Uganda's AIDS Surveillance Sub-Committee, and by 1986 they had succeeded in implementing public health efforts such as educational programs, supplying condoms, and screening potentially infected blood donors. They also emphasized mutual monogamy, openly disclosing one's HIV status, and increasing the number of available HIV tests. In the 1980s, Dr. Mugerwa was the Director of Medicine at Mulago Hospital, where the patient prevalence of AIDS reached up to 40% in 1988. During this time, he struggled with low access to confirmatory HIV tests, hospital overcrowding, and the challenge of deciding whether to tell patients they were dying of AIDS, given the stigma and shame surrounding the disease. Outside of hospital work, he was one of the founding members of the Uganda-Case Western Reserve University Research Collaboration and held the position of lead principal investigator for twenty years.
The main focus of this collaboration was HIV/AIDS and tuberculosis coinfection in HIV-positive individuals, pursued mainly through clinical studies, patient care, and the investigation of treatment and prevention methods. Founded in 1988, the collaboration continues to this day. At its 20th anniversary celebration, it was stated that the collaboration had provided over fifty Ugandans with advanced degrees, published over two hundred articles in peer-reviewed journals, and presented at over five hundred conferences. Dr. Mugerwa was also a member of the Academic Alliance for AIDS Care and Prevention in Africa. Established in 2001, the Alliance succeeded in providing treatment to patients with HIV/AIDS, training medical providers in HIV/AIDS care, establishing programs to increase outreach and prevention efforts, and providing laboratory resources. The Infectious Disease Institute at Makerere University broke ground in 2003 to help carry out these efforts. Around this time, claims questioning whether HIV truly causes AIDS were circulating, and these were supported by South Africa's president Thabo Mbeki. In opposition to these denialist claims, over five thousand scientists signed the Durban Declaration in 2000, which cited multiple studies identifying HIV as the sole cause of AIDS. Dr. Mugerwa demonstrated his support for the Durban Declaration's stance by serving on its Organizing Committee. HIV vaccine trial From 1999 to 2002, Dr. Mugerwa conducted a clinical trial of a potential HIV vaccine in Uganda, the first of its kind in Africa. There were concerns about the safety of the volunteers who were recruited, but the normal ethics requirements were waived due to the need for a vaccine as quickly as possible. Uganda had an HIV prevalence of around 20% in 1998, and most citizens could not afford the antiretrovirals needed to prevent the development of AIDS. The prospect of testing a vaccine, however, was not without controversy. There were fears about the vaccine's scientific merit, about vaccine recipients falsely testing positive for HIV, and about the use of Ugandans as guinea pigs for risky experiments that would only benefit the West. The vaccine (named ALVAC 205) was tested on forty HIV-negative Ugandans, but the trial was stopped in phase I after newer vaccines started to receive more attention. There had previously been doubt about whether the different subtypes of HIV would need different vaccines, but volunteers who received the ALVAC 205 vaccine (designed against subtype B) produced blood samples that showed immune activity against subtypes A and D as well. This demonstrated that a single HIV vaccine could potentially be effective against multiple HIV subtypes. Personal life Mugerwa was married to Rosemary Kibulo Mugerwa, a physical therapist, and they had eleven children together. Many of them followed their parents' example and also pursued careers in the medical field. Outside of his profession, he was both a businessman and a farmer. His wife preceded him in death in November 2018. Death Mugerwa died on April 19, 2019, at Nakasero Hospital in Kampala. At the time of his death, he was a Professor Emeritus at Makerere University, where he had also gone to school and conducted research programs. He was said to be suffering from depression, which led to the onset of other illnesses. He was buried in Meru Village, in southwestern Uganda.
Selected publications References 1942 births 2019 deaths Ganda people Ugandan cardiologists HIV/AIDS researchers HIV/AIDS in Uganda Ugandan Roman Catholics Makerere University alumni Ugandan academics Academic staff of Makerere University People from Central Region, Uganda People from Bukomansimbi District People educated at St. Mary's College Kisubi HIV vaccine research Case Western Reserve University HIV/AIDS denialism
Roy Mugerwa
[ "Chemistry" ]
1,478
[ "HIV vaccine research", "Drug discovery" ]