Dataset columns: id (int64, 39 to 79M) · url (string, 32–168 chars) · text (string, 7–145k chars) · source (string, 2–105 chars) · categories (list, 1–6 items) · token_count (int64, 3–32.2k) · subcategories (list, 0–27 items)
38,665,288
https://en.wikipedia.org/wiki/Vilafant%20Bridge
The High Speed Railway line connecting Barcelona and the French border crosses the Municipality of Vilafant 19 ft (6 m) below ground level. To cross the sunken railroad, two pedestrian bridges were constructed. The structure, with one span of 150 ft (46 m), is monolithically connected with the abutments. Unusual geometric shapes fabricated from stainless steel and GFRP are blended in an innovative fashion, giving rise to an austere and elegant solution. Description The two footbridges constitute a creative and innovative piece of engineering. The forms are soft and curved in a flat landscape, but the purity of line and the transparency achieved relate consistently to the environment. The shape evokes the flight of a butterfly over the flat landscape of the Empordà. The structure is both austere and elegant, visible in the distance but not imposing. It is a metaphor for innovation through the use of advanced materials, and represents a driver of technological development and progress of our society. The choice of materials and the design combine to balance economic, formal and constructive requirements with the best results. The lightness of the structure makes construction simple, fast and safe, and the simplicity of its design allows it to be prefabricated in the workshop, ensuring quality finishes and high-performance execution. The two crossings have been designed with the same solution: two bridges of about 164 ft (50 m) total span, connected elastically to their abutments in a single flush span (an integral solution). The bridges provide a generous 13 ft (4 m) wide platform for pedestrian and bicycle use. Construction The design concept is based on three basic ideas: the use of lightweight materials, the use of maintenance-free materials such as stainless steel and GFRP (fiberglass), and a minimalist approach (sober and elegant forms and clean lines, creating a bridge with a clear identity that does not dominate its surroundings). The two bridges have a main longitudinal span of 148 ft (45.2 m) and a deck width of 13 ft (4 m). The structures are built into both abutments. The cross-section consists of two Vierendeel trusses combined with double sheets of GFRP acting as structural webs. The height of the trusses is variable, being 11 ft (3.4 m) at the elastomeric support and 3 ft (1.2 m) at mid-span. References External links Fiberglass Bridges in Catalonia
Vilafant Bridge
[ "Chemistry", "Materials_science" ]
506
[ "Fiberglass", "Polymer chemistry" ]
34,684,267
https://en.wikipedia.org/wiki/Contact%20protection
Contact protection methods are designed to mitigate the wear and degradation occurring during the normal use of contacts within an electromechanical switch, relay or contactor and thus avoid an excessive increase in contact resistance or switch failure. Contact wear A “contact” is a pair of electrodes (typically, one moving; one stationary) designed to control electricity. Electromechanical switches, relays, and contactors “turn power on” when the moving electrode makes contact with the stationary electrode to carry current. Conversely, they “turn power off” when the moving electrode breaks contact and the resulting arc plasma stops burning as the dielectric gap widens sufficiently to prevent current flow. Power relays and contactors have two primary life expectancy ratings: “mechanical life” is based on operating either without current or below the wetting current (i.e., “Dry”) and “electrical life” is based on operating above the wetting current (i.e., “Wet”). These different ratings exist because contacts are designed to compensate for the destructive arcing that naturally occurs between the electrodes during normal Wet operation. Contact arcing is so destructive that the electrical life of power relays and contactors is most often a fraction of their respective mechanical life. Every time the contacts of an electromechanical switch, relay or contactor are opened or closed, there is a certain amount of contact wear. If the contact is cycling without electricity (Dry), the contact electrodes are slightly deformed by the impact, a form of cold forging. When the contact is operating under power (Wet), the wear results from high current densities in microscopic areas and from the electric arc. Contact wear includes material transfer between contacts, loss of contact material due to splattering and evaporation, and oxidation or corrosion of the contacts due to high temperatures and atmospheric influences. While a pair of contacts is closed, only a small part of the contacts is in intimate contact, due to asperities and low-conductivity films. Because of the constriction of the current to a very small area, the current density frequently becomes so high that it melts a microscopic portion of the contact. During the close-to-open (BREAK) transition, a microscopic molten bridge forms and eventually ruptures asymmetrically, transferring contact material between contacts and increasing the surface roughness. This can also occur during the open-to-close (MAKE) transition due to contact bounce. The electric arc occurs between the contact points (electrodes) both during the transition from closed to open (BREAK) and from open to closed (MAKE) when the contact gap is small and the voltage is high enough. Heating due to arcing and high current density can melt the contact surface temporarily. If some of the melted material solidifies while the contacts are closed, the contact may stick closed due to a micro-weld, similar to spot welding. The arc caused during the contact BREAK (the BREAK arc) is similar to arc welding; the BREAK arc is typically more energetic and more destructive. The arc can cause material transfer between contacts. The arc may also be hot enough to evaporate metal from the contact surface. The high temperatures can also cause the contact metals to oxidize and corrode more rapidly. Contacts reach end of life for one of two reasons. 
Either the contacts fail to BREAK because they are stuck (welded) closed, or the contacts fail to make (high resistance) because of contact corrosion or because excessive material is lost from one or both contacts. These conditions are the result of cumulative material transfer during successive switching operations, and of material loss due to evaporation and splattering. There are additional mechanisms for stuck closed failures, such as mechanical interlocking of rough contact surfaces due to contact wear. Protection The degradation of the contacts can be limited by including various contact protection methods. Below 2 Amperes, a variety of transient suppressing electronic components have been employed with varying success as arc suppressors, including: capacitors, snubbers, diodes, Zener diodes, transient voltage suppressors (TVS), resistors, varistors or in-rush current limiters (PTC and NTC resistors). However, this is the least effective method as these neither significantly influence the creation of nor suppress the arc between the contacts of electromechanical power switches, relays and contactors. Historically, the two most common approaches to contact protection (above 2 Amperes) have been making the contacts themselves larger, i.e., a contactor and/or making the contacts out of more durable metals or metal alloys such as tungsten. The most effective methods are to employ arc suppression circuitry including electronic power contact arc suppressors, solid state relays, hybrid power relays, mercury displacement relays and hybrid power contactors. See also Arc suppression Contact resistance Wetting current Wetting voltage References Switches Power engineering Electric arcs
Contact protection
[ "Physics", "Engineering" ]
1,021
[ "Electric arcs", "Physical phenomena", "Plasma phenomena", "Energy engineering", "Power engineering", "Electrical engineering" ]
34,684,796
https://en.wikipedia.org/wiki/Euhedral%20and%20anhedral
Euhedral and anhedral are terms used to describe opposite properties in the formation of crystals. Euhedral (also known as idiomorphic or automorphic) crystals are those that are well-formed, with sharp, easily recognised faces. The opposite is anhedral (also known as xenomorphic or allotriomorphic), which describes rock with a microstructure composed of mineral grains that have no well-formed crystal faces or cross-section shape in thin section. Anhedral crystal growth occurs in a competitive environment with no free space for the formation of crystal faces. An intermediate texture with some crystal face-formation is termed subhedral (also known as hypidiomorphic or hypautomorphic). Crystals that grow from cooling liquid magma typically do not form smooth faces or sharp crystal outlines. As magma cools, the crystals grow and eventually touch each other, preventing crystal faces from forming properly or at all. When snowflakes crystallize, they do not touch each other. Thus, snowflakes form euhedral, six-sided twinned crystals. In rocks, the presence of euhedral crystals may signify that they formed early in the crystallization of liquid magma or perhaps crystallized in a cavity or vug, without steric hindrance, or spatial restrictions, from other crystals. Etymology "Euhedral" is derived from the Greek eu meaning "well, good" and hedron meaning a seat or a face of a solid. “Anhedral” derives from the Greek “an”, meaning “not” or “without”. Relation of face orientation to structure Euhedral crystals have flat faces with sharp angles. The flat faces (also called facets) are oriented in a specific way relative to the underlying atomic arrangement of the crystal: They are planes of relatively low Miller index. This occurs because some surface orientations are more stable than others (lower surface energy). As a crystal grows, new atoms attach easily to the rougher and less stable parts of the surface, but less easily to the flat, stable surfaces. Therefore, the flat surfaces tend to grow larger and smoother, until the whole crystal surface consists of these plane surfaces. (See diagram.) See also Xenomorph (geology) Crystal habit Rock microstructure List of rock textures Notes References Crystallography Mineral habits Petrology
Euhedral and anhedral
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
478
[ "Crystallography", "Condensed matter physics", "Mineral habits", "Materials science" ]
1,446,277
https://en.wikipedia.org/wiki/Bessel%27s%20inequality
In mathematics, especially functional analysis, Bessel's inequality is a statement about the coefficients of an element in a Hilbert space with respect to an orthonormal sequence. The inequality was derived by F.W. Bessel in 1828. Let $H$ be a Hilbert space, and suppose that $e_1, e_2, \ldots$ is an orthonormal sequence in $H$. Then, for any $x$ in $H$ one has
$$\sum_{k=1}^{\infty} \left|\langle x, e_k \rangle\right|^2 \le \|x\|^2,$$
where ⟨·,·⟩ denotes the inner product in the Hilbert space $H$. If we define the infinite sum
$$x' = \sum_{k=1}^{\infty} \langle x, e_k \rangle e_k,$$
consisting of the "infinite sum" of the vector resolutes of $x$ in the directions $e_k$, Bessel's inequality tells us that this series converges. One can think of it as saying that there exists $x' \in H$ that can be described in terms of the potential basis $e_1, e_2, \ldots$. For a complete orthonormal sequence (that is, for an orthonormal sequence that is a basis), we have Parseval's identity, which replaces the inequality with an equality (and consequently $x'$ with $x$). Bessel's inequality follows from the identity
$$0 \le \left\| x - \sum_{k=1}^{n} \langle x, e_k \rangle e_k \right\|^2 = \|x\|^2 - 2\sum_{k=1}^{n} \left|\langle x, e_k \rangle\right|^2 + \sum_{k=1}^{n} \left|\langle x, e_k \rangle\right|^2 = \|x\|^2 - \sum_{k=1}^{n} \left|\langle x, e_k \rangle\right|^2,$$
which holds for any natural n. See also Cauchy–Schwarz inequality Parseval's theorem References External links Bessel's Inequality the article on Bessel's Inequality on MathWorld. Hilbert spaces Inequalities
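A quick numerical illustration of the inequality (a minimal sketch; the ambient dimension, the test vector and the orthonormal family obtained from a QR factorization are all arbitrary choices, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 50, 10                      # ambient dimension and size of the orthonormal family
x = rng.normal(size=n)             # an arbitrary vector in R^n

# Build an orthonormal family e_1, ..., e_k as the columns of Q from a QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(n, k)))

coeffs = Q.T @ x                   # coefficients <x, e_i>
bessel_sum = np.sum(coeffs**2)     # sum of squared coefficients
norm_sq = np.dot(x, x)             # ||x||^2

print(f"sum |<x,e_i>|^2 = {bessel_sum:.4f} <= ||x||^2 = {norm_sq:.4f}")
assert bessel_sum <= norm_sq + 1e-12   # Bessel's inequality holds
```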
Bessel's inequality
[ "Physics", "Mathematics" ]
249
[ "Mathematical theorems", "Quantum mechanics", "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Hilbert spaces", "Mathematical problems" ]
1,446,490
https://en.wikipedia.org/wiki/Temperature%20gradient
A temperature gradient is a physical quantity that describes in which direction and at what rate the temperature changes most rapidly around a particular location. The temperature spatial gradient is a vector quantity with dimension of temperature difference per unit length. The SI unit is kelvin per meter (K/m). Temperature gradients in the atmosphere are important in the atmospheric sciences (meteorology, climatology and related fields). Mathematical description Assuming that the temperature T is an intensive quantity, i.e., a single-valued, continuous and differentiable function of three-dimensional space (often called a scalar field), i.e., that
$$T = T(x, y, z),$$
where x, y and z are the coordinates of the location of interest, then the temperature gradient is the vector quantity defined as
$$\nabla T = \left( \frac{\partial T}{\partial x},\ \frac{\partial T}{\partial y},\ \frac{\partial T}{\partial z} \right).$$
Physical processes Meteorology Differences in air temperature between different locations are critical in weather forecasting and climate. The absorption of solar light at or near the planetary surface increases the temperature gradient and may result in convection (a major process of cloud formation, often associated with precipitation). Meteorological fronts are regions where the horizontal temperature gradient may reach relatively high values, as these are boundaries between air masses with rather distinct properties. Clearly, the temperature gradient may change substantially in time, as a result of diurnal or seasonal heating and cooling for instance. This is most pronounced during an inversion: for instance, during the day the temperature at ground level may be cold while it is warmer higher up in the atmosphere. As day shifts to night, the temperature might drop rapidly in some places while other places at the same elevation stay warmer or cooler. This sometimes happens on the West Coast of the United States due to geography. Weathering Expansion and contraction of rock, caused by temperature changes during a wildfire through thermal stress weathering, may result in thermal shock and subsequent structural failure. Indoor temperature See also Atmospheric temperature for gradient of Earth's atmosphere Geothermal gradient Gradient Lapse rate References External links IPCC Third Assessment Report Pictorial Representation of Temperature Gradient (Tools). Atmospheric dynamics Climatology Spatial gradient Temperature Physical quantities
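The definition above can be applied directly to gridded temperature data. The sketch below is an illustration only, using a made-up analytic temperature field; it uses numpy.gradient to estimate the three partial derivatives by finite differences:

```python
import numpy as np

# A made-up temperature field T(x, y, z) sampled on a regular grid (coordinates in m, T in K).
x = np.linspace(0.0, 1000.0, 101)
y = np.linspace(0.0, 1000.0, 101)
z = np.linspace(0.0, 500.0, 51)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
T = 288.0 + 0.005 * X - 0.002 * Y - 0.0065 * Z   # includes a ~6.5 K/km vertical lapse

# numpy.gradient returns one array per axis: finite-difference estimates of
# dT/dx, dT/dy and dT/dz, each in K/m.
dTdx, dTdy, dTdz = np.gradient(T, x, y, z)

# Magnitude and direction of the gradient at one grid point.
i, j, k = 50, 50, 25
grad = np.array([dTdx[i, j, k], dTdy[i, j, k], dTdz[i, j, k]])
print("grad T [K/m]:", grad, " |grad T| =", np.linalg.norm(grad))
```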
Temperature gradient
[ "Physics", "Chemistry", "Mathematics" ]
424
[ "Scalar physical quantities", "Physical phenomena", "Temperature", "Thermodynamic properties", "Physical quantities", "Atmospheric dynamics", "SI base quantities", "Intensive quantities", "Quantity", "Thermodynamics", "Wikipedia categories named after physical quantities", "Physical properties...
1,447,921
https://en.wikipedia.org/wiki/Pp-wave%20spacetime
In general relativity, the pp-wave spacetimes, or pp-waves for short, are an important family of exact solutions of Einstein's field equation. The term pp stands for plane-fronted waves with parallel propagation, and was introduced in 1962 by Jürgen Ehlers and Wolfgang Kundt. Overview The pp-wave solutions model radiation moving at the speed of light. This radiation may consist of: electromagnetic radiation, gravitational radiation, massless radiation associated with Weyl fermions, massless radiation associated with some hypothetical distinct type of relativistic classical field, or any combination of these, so long as the radiation is all moving in the same direction. A special type of pp-wave spacetime, the plane wave spacetimes, provides the most general analogue in general relativity of the plane waves familiar to students of electromagnetism. In particular, in general relativity, we must take into account the gravitational effects of the energy density of the electromagnetic field itself. When we do this, purely electromagnetic plane waves provide the direct generalization of ordinary plane wave solutions in Maxwell's theory. Furthermore, in general relativity, disturbances in the gravitational field itself can propagate, at the speed of light, as "wrinkles" in the curvature of spacetime. Such gravitational radiation is the gravitational field analogue of electromagnetic radiation. In general relativity, the gravitational analogue of electromagnetic plane waves are precisely the vacuum solutions among the plane wave spacetimes. They are called gravitational plane waves. There are physically important examples of pp-wave spacetimes which are not plane wave spacetimes. In particular, the physical experience of an observer who whizzes by a gravitating object (such as a star or a black hole) at nearly the speed of light can be modelled by an impulsive pp-wave spacetime called the Aichelburg–Sexl ultraboost. The gravitational field of a beam of light is modelled, in general relativity, by a certain axisymmetric pp-wave. An example of a pp-wave in the presence of matter is the gravitational field surrounding a neutral Weyl fermion: the system consists of a gravitational field that is a pp-wave, no electrodynamic radiation, and a massless spinor exhibiting axial symmetry. In the Weyl–Lewis–Papapetrou spacetime, there exists a complete set of exact solutions for both gravity and matter. Pp-waves were introduced by Hans Brinkmann in 1925 and have been rediscovered many times since, most notably by Albert Einstein and Nathan Rosen in 1937. Mathematical definition A pp-wave spacetime is any Lorentzian manifold whose metric tensor can be described, with respect to Brinkmann coordinates, in the form
$$ds^2 = H(u, x, y)\, du^2 + 2\, du\, dv + dx^2 + dy^2,$$
where $H$ is any smooth function. This was the original definition of Brinkmann, and it has the virtue of being easy to understand. The definition which is now standard in the literature is more sophisticated. It makes no reference to any coordinate chart, so it is a coordinate-free definition. It states that any Lorentzian manifold which admits a covariantly constant null vector field $k$ is called a pp-wave spacetime. That is, the covariant derivative of $k$ must vanish identically:
$$\nabla k = 0.$$
This definition was introduced by Ehlers and Kundt in 1962. To relate Brinkmann's definition to this one, take $k = \partial_v$, the coordinate vector field orthogonal to the hypersurfaces $u = u_0$. In the index-gymnastics notation for tensor equations, the condition on $k$ can be written $k_{a;b} = 0$. 
Neither of these definitions make any mention of any field equation; in fact, they are entirely independent of physics. The vacuum Einstein equations are very simple for pp waves, and in fact linear: the metric obeys these equations if and only if . But the definition of a pp-wave spacetime does not impose this equation, so it is entirely mathematical and belongs to the study of pseudo-Riemannian geometry. In the next section we turn to physical interpretations of pp-wave spacetimes. Ehlers and Kundt gave several more coordinate-free characterizations, including: A Lorentzian manifold is a pp-wave if and only if it admits a one-parameter subgroup of isometries having null orbits, and whose curvature tensor has vanishing eigenvalues. A Lorentzian manifold with nonvanishing curvature is a (nontrivial) pp-wave if and only if it admits a covariantly constant bivector. (If so, this bivector is a null bivector.) Physical interpretation It is a purely mathematical fact that the characteristic polynomial of the Einstein tensor of any pp-wave spacetime vanishes identically. Equivalently, we can find a Newman–Penrose complex null tetrad such that the Ricci-NP scalars (describing any matter or nongravitational fields which may be present in a spacetime) and the Weyl-NP scalars (describing any gravitational field which may be present) each have only one nonvanishing component. Specifically, with respect to the NP tetrad the only nonvanishing component of the Ricci spinor is and the only nonvanishing component of the Weyl spinor is This means that any pp-wave spacetime can be interpreted, in the context of general relativity, as a null dust solution. Also, the Weyl tensor always has Petrov type N as may be verified by using the Bel criteria. In other words, pp-waves model various kinds of classical and massless radiation traveling at the local speed of light. This radiation can be gravitational, electromagnetic, Weyl fermions, or some hypothetical kind of massless radiation other than these three, or any combination of these. All this radiation is traveling in the same direction, and the null vector plays the role of a wave vector. Relation to other classes of exact solutions Unfortunately, the terminology concerning pp-waves, while fairly standard, is highly confusing and tends to promote misunderstanding. In any pp-wave spacetime, the covariantly constant vector field always has identically vanishing optical scalars. Therefore, pp-waves belong to the Kundt class (the class of Lorentzian manifolds admitting a null congruence with vanishing optical scalars). Going in the other direction, pp-waves include several important special cases. From the form of Ricci spinor given in the preceding section, it is immediately apparent that a pp-wave spacetime (written in the Brinkmann chart) is a vacuum solution if and only if is a harmonic function (with respect to the spatial coordinates ). Physically, these represent purely gravitational radiation propagating along the null rays . Ehlers and Kundt and Sippel and Gönner have classified vacuum pp-wave spacetimes by their autometry group, or group of self-isometries. This is always a Lie group, and as usual it is easier to classify the underlying Lie algebras of Killing vector fields. It turns out that the most general pp-wave spacetime has only one Killing vector field, the null geodesic congruence . However, for various special forms of , there are additional Killing vector fields. 
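The vacuum condition just described (the metric function must be harmonic in the transverse coordinates) is easy to check symbolically for candidate profiles. The following is an illustrative sketch only, with arbitrarily chosen example profiles; it verifies that two harmonic choices of H, and their sum, all satisfy the two-dimensional Laplace equation, which also illustrates the linear superposition of vacuum pp-waves noted later in the article:

```python
import sympy as sp

u, x, y = sp.symbols("u x y", real=True)

def is_vacuum_profile(H):
    """Check the vacuum pp-wave condition: H harmonic in the transverse coordinates x, y."""
    laplacian = sp.diff(H, x, 2) + sp.diff(H, y, 2)
    return sp.simplify(laplacian) == 0

# Two example profiles, both harmonic in (x, y); the u-dependent amplitudes are arbitrary.
H1 = sp.exp(-u**2) * (x**2 - y**2)     # '+' polarization-type profile
H2 = sp.sin(u) * (2 * x * y)           # 'x' polarization-type profile

print(is_vacuum_profile(H1))        # True
print(is_vacuum_profile(H2))        # True
print(is_vacuum_profile(H1 + H2))   # True: vacuum pp-waves with the same wave vector superpose
print(is_vacuum_profile(x**4))      # False: not harmonic, so not a vacuum solution
```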
The most important class of particularly symmetric pp-waves are the plane wave spacetimes, which were first studied by Baldwin and Jeffery. A plane wave is a pp-wave in which is quadratic, and can hence be transformed to the simple form Here, are arbitrary smooth functions of . Physically speaking, describe the wave profiles of the two linearly independent polarization modes of gravitational radiation which may be present, while describes the wave profile of any nongravitational radiation. If , we have the vacuum plane waves, which are often called plane gravitational waves. Equivalently, a plane-wave is a pp-wave with at least a five-dimensional Lie algebra of Killing vector fields , including and four more which have the form where Intuitively, the distinction is that the wavefronts of plane waves are truly planar; all points on a given two-dimensional wavefront are equivalent. This not quite true for more general pp-waves. Plane waves are important for many reasons; to mention just one, they are essential for the beautiful topic of colliding plane waves. A more general subclass consists of the axisymmetric pp-waves, which in general have a two-dimensional Abelian Lie algebra of Killing vector fields. These are also called SG2 plane waves, because they are the second type in the symmetry classification of Sippel and Gönner. A limiting case of certain axisymmetric pp-waves yields the Aichelburg/Sexl ultraboost modeling an ultrarelativistic encounter with an isolated spherically symmetric object. (See also the article on plane wave spacetimes for a discussion of physically important special cases of plane waves.) J. D. Steele has introduced the notion of generalised pp-wave spacetimes. These are nonflat Lorentzian spacetimes which admit a self-dual covariantly constant null bivector field. The name is potentially misleading, since as Steele points out, these are nominally a special case of nonflat pp-waves in the sense defined above. They are only a generalization in the sense that although the Brinkmann metric form is preserved, they are not necessarily the vacuum solutions studied by Ehlers and Kundt, Sippel and Gönner, etc. Another important special class of pp-waves are the sandwich waves. These have vanishing curvature except on some range , and represent a gravitational wave moving through a Minkowski spacetime background. Relation to other theories Since they constitute a very simple and natural class of Lorentzian manifolds, defined in terms of a null congruence, it is not very surprising that they are also important in other relativistic classical field theories of gravitation. In particular, pp-waves are exact solutions in the Brans–Dicke theory, various higher curvature theories and Kaluza–Klein theories, and certain gravitation theories of J. W. Moffat. Indeed, B. O. J. Tupper has shown that the common vacuum solutions in general relativity and in the Brans/Dicke theory are precisely the vacuum pp-waves (but the Brans/Dicke theory admits further wavelike solutions). Hans-Jürgen Schmidt has reformulated the theory of (four-dimensional) pp-waves in terms of a two-dimensional metric-dilaton theory of gravity. Pp-waves also play an important role in the search for quantum gravity, because as Gary Gibbons has pointed out, all loop term quantum corrections vanish identically for any pp-wave spacetime. This means that studying tree-level quantizations of pp-wave spacetimes offers a glimpse into the yet unknown world of quantum gravity. 
It is natural to generalize pp-waves to higher dimensions, where they enjoy similar properties to those we have discussed. C. M. Hull has shown that such higher-dimensional pp-waves are essential building blocks for eleven-dimensional supergravity. Geometric and physical properties PP-waves enjoy numerous striking properties. Some of their more abstract mathematical properties have already been mentioned. In this section a few additional properties are presented. Consider an inertial observer in Minkowski spacetime who encounters a sandwich plane wave. Such an observer will experience some interesting optical effects. If he looks into the oncoming wavefronts at distant galaxies which have already encountered the wave, he will see their images undistorted. This must be the case, since he cannot know the wave is coming until it reaches his location, for it is traveling at the speed of light. However, this can be confirmed by direct computation of the optical scalars of the null congruence . Now suppose that after the wave passes, our observer turns about face and looks through the departing wavefronts at distant galaxies which the wave has not yet reached. Now he sees their optical images sheared and magnified (or demagnified) in a time-dependent manner. If the wave happens to be a polarized gravitational plane wave, he will see circular images alternately squeezed horizontally while expanded vertically, and squeezed vertically while expanded horizontally. This directly exhibits the characteristic effect of a gravitational wave in general relativity on light. The effect of a passing polarized gravitational plane wave on the relative positions of a cloud of (initially static) test particles will be qualitatively very similar. We might mention here that in general, the motion of test particles in pp-wave spacetimes can exhibit chaos. The fact that Einstein's field equation is nonlinear is well known. This implies that if you have two exact solutions, there is almost never any way to linearly superimpose them. PP waves provide a rare exception to this rule: if you have two PP waves sharing the same covariantly constant null vector (the same geodesic null congruence, i.e. the same wave vector field), with metric functions respectively, then gives a third exact solution. Roger Penrose has observed that near a null geodesic, every Lorentzian spacetime looks like a plane wave. To show this, he used techniques imported from algebraic geometry to "blow up" the spacetime so that the given null geodesic becomes the covariantly constant null geodesic congruence of a plane wave. This construction is called a Penrose limit. Penrose also pointed out that in a pp-wave spacetime, all the polynomial scalar invariants of the Riemann tensor vanish identically, yet the curvature is almost never zero. This is because in four-dimension all pp-waves belong to the class of VSI spacetimes. Such statement does not hold in higher-dimensions since there are higher-dimensional pp-waves of algebraic type II with non-vanishing polynomial scalar invariants. If you view the Riemann tensor as a second rank tensor acting on bivectors, the vanishing of invariants is analogous to the fact that a nonzero null vector has vanishing squared length. Penrose was also the first to understand the strange nature of causality in pp-sandwich wave spacetimes. He showed that some or all of the null geodesics emitted at a given event will be refocused at a later event (or string of events). 
The details depend upon whether the wave is purely gravitational, purely electromagnetic, or neither. Every pp-wave admits many different Brinkmann charts. These are related by coordinate transformations, which in this context may be considered to be gauge transformations. In the case of plane waves, these gauge transformations allow us to always regard two colliding plane waves to have parallel wavefronts, and thus the waves can be said to collide head-on. This is an exact result in fully nonlinear general relativity which is analogous to a similar result concerning electromagnetic plane waves as treated in special relativity. Examples There are many noteworthy explicit examples of pp-waves. ("Explicit" means that the metric functions can be written down in terms of elementary functions or perhaps well-known special functions such as Mathieu functions.) Explicit examples of axisymmetric pp-waves include The Aichelburg–Sexl ultraboost is an impulsive plane wave which models the physical experience of an observer who whizzes by a spherically symmetric gravitating object at nearly the speed of light, The Bonnor beam is an axisymmetric plane wave which models the gravitational field of an infinitely long beam of incoherent electromagnetic radiation. Explicit examples of plane wave spacetimes include exact monochromatic gravitational plane wave and monochromatic electromagnetic plane wave solutions, which generalize solutions which are well-known from weak-field approximation, exact solutions of the gravitational field of a Weyl fermion, the Schwarzschild generating plane wave, a gravitational plane wave which, should it collide head-on with a twin, will produce in the interaction zone of the resulting colliding plane wave solution a region which is locally isometric to part of the interior of a Schwarzschild black hole, thereby permitting a classical peek at the local geometry inside the event horizon, the uniform electromagnetic plane wave; this spacetime is foliated by spacelike hyperslices which are isometric to , the wave of death is a gravitational plane wave exhibiting a strong nonscalar null curvature singularity, which propagates through an initially flat spacetime, progressively destroying the universe, homogeneous plane waves, or SG11 plane waves (type 11 in the Sippel and Gönner symmetry classification), which exhibit a weak nonscalar null curvature singularity and which arise as the Penrose limits of an appropriate null geodesic approaching the curvature singularity which is present in many physically important solutions, including the Schwarzschild black holes and FRW cosmological models. See also Gravitational wave Newman–Penrose formalism Notes References See Section 24.5 See Section 2-5 Yi-Fei Chen and J.X. Lu (2004), "Generating a dynamical M2 brane from super-gravitons in a pp-wave background" Bum-Hoon Lee (2005), "D-branes in the pp-wave background" H.-J. Schmidt (1998). "A two-dimensional representation of four-dimensional gravitational waves," Int. J. Mod. Phys. D7 (1998) 215–224 (arXiv:gr-qc/9712034). Albert Einstein, "On Gravitational Waves," J. Franklin Inst. 223 (1937). 43–54. Nathan Rosen, "Plane Polarized Waves in the General Theory of Relativity," Phys. Z. Sowjetunion 12 (1937). External links Pp-wave on arxiv.org Exact solutions in general relativity
Pp-wave spacetime
[ "Mathematics" ]
3,646
[ "Exact solutions in general relativity", "Mathematical objects", "Equations" ]
1,449,031
https://en.wikipedia.org/wiki/Activity%20coefficient
In thermodynamics, an activity coefficient is a factor used to account for deviation of a mixture of chemical substances from ideal behaviour. In an ideal mixture, the microscopic interactions between each pair of chemical species are the same (or, macroscopically equivalent, the enthalpy change of solution and volume variation in mixing are zero) and, as a result, properties of the mixtures can be expressed directly in terms of simple concentrations or partial pressures of the substances present, e.g. Raoult's law. Deviations from ideality are accommodated by modifying the concentration by an activity coefficient. Analogously, expressions involving gases can be adjusted for non-ideality by scaling partial pressures by a fugacity coefficient. The concept of activity coefficient is closely linked to that of activity in chemistry. Thermodynamic definition The chemical potential, $\mu_B$, of a substance B in an ideal mixture of liquids or an ideal solution is given by
$$\mu_B = \mu_B^{\ominus} + RT \ln x_B,$$
where $\mu_B^{\ominus}$ is the chemical potential of the pure substance B, and $x_B$ is the mole fraction of the substance in the mixture. This is generalised to include non-ideal behavior by writing
$$\mu_B = \mu_B^{\ominus} + RT \ln a_B,$$
where $a_B$ is the activity of the substance in the mixture, $a_B = x_B \gamma_B$, where $\gamma_B$ is the activity coefficient, which may itself depend on $x_B$. As $\gamma_B$ approaches 1, the substance behaves as if it were ideal. For instance, if $\gamma_B \approx 1$, then Raoult's law is accurate. For $\gamma_B > 1$ and $\gamma_B < 1$, substance B shows positive and negative deviation from Raoult's law, respectively. A positive deviation implies that substance B is more volatile. In many cases, as $x_B$ goes to zero, the activity coefficient of substance B approaches a constant; this relationship is Henry's law for the solvent. These relationships are related to each other through the Gibbs–Duhem equation. Note that in general activity coefficients are dimensionless. In detail: Raoult's law states that the partial pressure of component B is related to its vapor pressure (saturation pressure) $p_B^{*}$ and its mole fraction $x_B$ in the liquid phase,
$$p_B = \gamma_B x_B p_B^{*},$$
with the convention $\lim_{x_B \to 1} \gamma_B = 1$. In other words: pure liquids represent the ideal case. At infinite dilution, the activity coefficient approaches its limiting value, $\gamma_B^{\infty}$. Comparison with Henry's law, $p_B = K_{H,B} x_B$ for $x_B \to 0$, immediately gives
$$K_{H,B} = p_B^{*} \gamma_B^{\infty}.$$
In other words: the compound shows nonideal behavior in the dilute case. The above definition of the activity coefficient is impractical if the compound does not exist as a pure liquid. This is often the case for electrolytes or biochemical compounds. In such cases, a different definition is used that considers infinite dilution as the ideal state:
$$\gamma_B^{\dagger} \equiv \gamma_B / \gamma_B^{\infty},$$
with $\lim_{x_B \to 0} \gamma_B^{\dagger} = 1$ and $a_B = \gamma_B^{\dagger} \gamma_B^{\infty} x_B$. The symbol $\dagger$ has been used here to distinguish between the two kinds of activity coefficients. Usually it is omitted, as it is clear from the context which kind is meant. But there are cases where both kinds of activity coefficients are needed and may even appear in the same equation, e.g., for solutions of salts in (water + alcohol) mixtures. This is sometimes a source of errors. Modifying mole fractions or concentrations by activity coefficients gives the effective activities of the components, and hence allows expressions such as Raoult's law and equilibrium constants to be applied to both ideal and non-ideal mixtures. Knowledge of activity coefficients is particularly important in the context of electrochemistry since the behaviour of electrolyte solutions is often far from ideal, due to the effects of the ionic atmosphere. 
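As a small worked illustration of the Raoult's-law convention above (a sketch with made-up numbers, not data from the article): given a measured partial pressure, the liquid-phase mole fraction and the pure-component vapor pressure, the activity coefficient follows directly from $p_B = \gamma_B x_B p_B^{*}$.

```python
def activity_coefficient_raoult(p_partial, x_liquid, p_pure):
    """Activity coefficient from the Raoult's-law convention: p_B = gamma_B * x_B * p_B*."""
    return p_partial / (x_liquid * p_pure)

# Hypothetical example values (kPa); gamma > 1 indicates positive deviation from Raoult's law.
gamma_B = activity_coefficient_raoult(p_partial=12.0, x_liquid=0.25, p_pure=40.0)
print(f"gamma_B = {gamma_B:.2f}")   # 1.20 -> positive deviation, B is more volatile than ideal
```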
Additionally, they are particularly important in the context of soil chemistry due to the low volumes of solvent and, consequently, the high concentration of electrolytes. Ionic solutions For solution of substances which ionize in solution the activity coefficients of the cation and anion cannot be experimentally determined independently of each other because solution properties depend on both ions. Single ion activity coefficients must be linked to the activity coefficient of the dissolved electrolyte as if undissociated. In this case a mean stoichiometric activity coefficient of the dissolved electrolyte, γ±, is used. It is called stoichiometric because it expresses both the deviation from the ideality of the solution and the incomplete ionic dissociation of the ionic compound which occurs especially with the increase of its concentration. For a 1:1 electrolyte, such as NaCl it is given by the following: where and are the activity coefficients of the cation and anion respectively. More generally, the mean activity coefficient of a compound of formula is given by Single-ion activity coefficients can be calculated theoretically, for example by using the Debye–Hückel equation. The theoretical equation can be tested by combining the calculated single-ion activity coefficients to give mean values which can be compared to experimental values. The prevailing view that single ion activity coefficients are unmeasurable independently, or perhaps even physically meaningless, has its roots in the work of Guggenheim in the late 1920s. However, chemists have never been able to give up the idea of single ion activities, and by implication single ion activity coefficients. For example, pH is defined as the negative logarithm of the hydrogen ion activity. If the prevailing view on the physical meaning and measurability of single ion activities is correct then defining pH as the negative logarithm of the hydrogen ion activity places the quantity squarely in the unmeasurable category. Recognizing this logical difficulty, International Union of Pure and Applied Chemistry (IUPAC) states that the activity-based definition of pH is a notional definition only. Despite the prevailing negative view on the measurability of single ion coefficients, the concept of single ion activities continues to be discussed in the literature. Concentrated ionic solutions For concentrated ionic solutions the hydration of ions must be taken into consideration, as done by Stokes and Robinson in their hydration model from 1948. The activity coefficient of the electrolyte is split into electric and statistical components by E. Glueckauf who modifies the Robinson–Stokes model. The statistical part includes hydration index number , the number of ions from the dissociation and the ratio between the apparent molar volume of the electrolyte and the molar volume of water and molality . Concentrated solution statistical part of the activity coefficient is: The Stokes–Robinson model has been analyzed and improved by other investigators. The problem with this widely accepted idea that electrolyte activity coefficients are driven at higher concentrations by changes in hydration is that water activities are completely dependent on the concentration of the ions themselves, as imposed by a thermodynamic relationship called the Gibbs-Duhem equation. This means that the activity coefficients and the corresponding water activities are linked together fundamentally, regardless of molecular-level hypotheses. 
Due to this high correlation, such hypotheses are not independent enough to be satisfactorily tested. The rise in activity coefficients found with most aqueous strong electrolyte systems can be explained more plausibly by increasing electrostatic repulsions between ions of the same charge which are forced together as the available space between them decreases. In this way, the initial attractions between cations and anions at the low concentrations described by Debye and Hueckel are progressively overcome. It has been proposed that these electrostatic repulsions take place predominantly through the formation of so-called ion trios in which two ions of like charge interact, on average and at distance, with the same counterion as well as with each other. This model accurately reproduces the experimental patterns of activity and osmotic coefficients exhibited by numerous 3-ion aqueous electrolyte mixtures. Experimental determination of activity coefficients Activity coefficients may be determined experimentally by making measurements on non-ideal mixtures. Use may be made of Raoult's law or Henry's law to provide a value for an ideal mixture against which the experimental value may be compared to obtain the activity coefficient. Other colligative properties, such as osmotic pressure may also be used. Radiochemical methods Activity coefficients can be determined by radiochemical methods. At infinite dilution Activity coefficients for binary mixtures are often reported at the infinite dilution of each component. Because activity coefficient models simplify at infinite dilution, such empirical values can be used to estimate interaction energies. Examples are given for water: Theoretical calculation of activity coefficients Activity coefficients of electrolyte solutions may be calculated theoretically, using the Debye–Hückel equation or extensions such as the Davies equation, Pitzer equations or TCPC model. Specific ion interaction theory (SIT) may also be used. For non-electrolyte solutions correlative methods such as UNIQUAC, NRTL, MOSCED or UNIFAC may be employed, provided fitted component-specific or model parameters are available. COSMO-RS is a theoretical method which is less dependent on model parameters as required information is obtained from quantum mechanics calculations specific to each molecule (sigma profiles) combined with a statistical thermodynamics treatment of surface segments. For uncharged species, the activity coefficient γ0 mostly follows a salting-out model: This simple model predicts activities of many species (dissolved undissociated gases such as CO2, H2S, NH3, undissociated acids and bases) to high ionic strengths (up to 5 mol/kg). The value of the constant b for CO2 is 0.11 at 10 °C and 0.20 at 330 °C. For water as solvent, the activity aw can be calculated using: where ν is the number of ions produced from the dissociation of one molecule of the dissolved salt, b is the molality of the salt dissolved in water, φ is the osmotic coefficient of water, and the constant 55.51 represents the molality of water. In the above equation, the activity of a solvent (here water) is represented as inversely proportional to the number of particles of salt versus that of the solvent. Link to ionic diameter The ionic activity coefficient is connected to the ionic diameter by the formula obtained from Debye–Hückel theory of electrolytes: where A and B are constants, zi is the valence number of the ion, and I is ionic strength. 
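The mean ionic activity coefficient and the Debye–Hückel expression quoted above can be sketched numerically as follows. This is an illustration only: the single-ion values are hypothetical (they are not independently measurable, as discussed), and the constants A ≈ 0.509 and B ≈ 0.328 are commonly quoted values for water at 25 °C (ion-size parameter in ångströms) assumed here rather than taken from the article.

```python
import math

def mean_activity_coefficient(gamma_plus, gamma_minus, p, q):
    """Mean ionic activity coefficient of a salt A_p B_q:
    gamma_pm = (gamma_+^p * gamma_-^q)^(1/(p+q)); for a 1:1 salt this is sqrt(g+ * g-)."""
    return (gamma_plus**p * gamma_minus**q) ** (1.0 / (p + q))

def ionic_strength(molalities, charges):
    """I = 1/2 * sum(m_i * z_i^2), summed over all ionic species."""
    return 0.5 * sum(m * z**2 for m, z in zip(molalities, charges))

# Assumed Debye-Hueckel constants for water at 25 degC (not from the article).
A_DH = 0.509   # (kg/mol)^0.5
B_DH = 0.328   # per angstrom per (kg/mol)^0.5

def log10_gamma_debye_hueckel(z, I, a_angstrom):
    """Extended Debye-Hueckel law: log10(gamma_i) = -A z_i^2 sqrt(I) / (1 + B a sqrt(I))."""
    s = math.sqrt(I)
    return -A_DH * z**2 * s / (1.0 + B_DH * a_angstrom * s)

# Hypothetical 0.01 mol/kg NaCl solution.
I = ionic_strength([0.01, 0.01], [+1, -1])
g_na = 10 ** log10_gamma_debye_hueckel(+1, I, a_angstrom=4.0)
g_cl = 10 ** log10_gamma_debye_hueckel(-1, I, a_angstrom=3.0)
print(f"I = {I:.3f} mol/kg, gamma_pm = {mean_activity_coefficient(g_na, g_cl, 1, 1):.3f}")
```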
Dependence on state parameters The derivative of an activity coefficient with respect to temperature is related to excess molar enthalpy by Similarly, the derivative of an activity coefficient with respect to pressure can be related to excess molar volume. Application to chemical equilibrium At equilibrium, the sum of the chemical potentials of the reactants is equal to the sum of the chemical potentials of the products. The Gibbs free energy change for the reactions, ΔrG, is equal to the difference between these sums and therefore, at equilibrium, is equal to zero. Thus, for an equilibrium such as Substitute in the expressions for the chemical potential of each reactant: Upon rearrangement this expression becomes The sum is the standard free energy change for the reaction, . Therefore, where is the equilibrium constant. Note that activities and equilibrium constants are dimensionless numbers. This derivation serves two purposes. It shows the relationship between standard free energy change and equilibrium constant. It also shows that an equilibrium constant is defined as a quotient of activities. In practical terms this is inconvenient. When each activity is replaced by the product of a concentration and an activity coefficient, the equilibrium constant is defined as where [S] denotes the concentration of S, etc. In practice equilibrium constants are determined in a medium such that the quotient of activity coefficients is constant and can be ignored, leading to the usual expression which applies under the conditions that the activity quotient has a particular (constant) value. References External links AIOMFAC online-model An interactive group-contribution model for the calculation of activity coefficients in organic–inorganic mixtures. Electrochimica Acta Single-ion activity coefficients Thermodynamic models Equilibrium chemistry Dimensionless numbers of chemistry
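The activity-based equilibrium constant described above can be made concrete with a small sketch (hypothetical concentrations, activity coefficients and stoichiometry, assumed purely for illustration):

```python
import math

R = 8.314  # J/(mol K)

def equilibrium_constant(conc, gamma, stoich):
    """K as a quotient of activities a_i = gamma_i * [i]; stoich > 0 for products, < 0 for reactants."""
    K = 1.0
    for species, nu in stoich.items():
        K *= (gamma[species] * conc[species]) ** nu
    return K

# Hypothetical reaction A + 2 B <=> S with made-up equilibrium concentrations (mol/L).
conc   = {"A": 0.10, "B": 0.20, "S": 0.05}
gamma  = {"A": 0.90, "B": 0.85, "S": 0.95}
stoich = {"A": -1, "B": -2, "S": +1}

K = equilibrium_constant(conc, gamma, stoich)
dG0 = -R * 298.15 * math.log(K)     # standard Gibbs free energy change, Delta_r G = -RT ln K
print(f"K = {K:.2f}, Delta_rG standard = {dG0/1000:.1f} kJ/mol")
```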
Activity coefficient
[ "Physics", "Chemistry" ]
2,409
[ "Thermodynamic models", "Dimensionless numbers of chemistry", "Thermodynamics", "Equilibrium chemistry" ]
1,449,329
https://en.wikipedia.org/wiki/Plasma%20arc%20welding
Plasma arc welding (PAW) is an arc welding process similar to gas tungsten arc welding (GTAW). The electric arc is formed between an electrode (which is usually but not always made of sintered tungsten) and the workpiece. The key difference from GTAW is that in PAW, the electrode is positioned within the body of the torch, so the plasma arc is separated from the shielding gas envelope. The plasma is then forced through a fine-bore copper nozzle which constricts the arc and the plasma exits the orifice at high velocities (approaching the speed of sound) and a temperature approaching 28,000 °C (50,000 °F) or higher. Arc plasma is a temporary state of a gas. The gas gets ionized by electric current passing through it and it becomes a conductor of electricity. In ionized state, atoms are broken into electrons (−) and cations (+) and the system contains a mixture of ions, electrons and highly excited atoms. The degree of ionization may be between 1% and greater than 100% (possible with double and triple degrees of ionization). Such states exist as more electrons are pulled from their orbits. The energy of the plasma jet and thus the temperature depends upon the electrical power employed to create arc plasma. A typical value of temperature obtained in a plasma jet torch is on the order of , compared to about in ordinary electric welding arc. All welding arcs are (partially ionized) plasmas, but the one in plasma arc welding is a constricted arc plasma. Just as oxy-fuel torches can be used for either welding or cutting, so too can plasma torches. Concept Plasma arc welding is an arc welding process wherein coalescence is produced by the heat obtained from a constricted arc setup between a tungsten/alloy tungsten electrode and the water-cooled (constricting) nozzle (non-transferred arc) or between a tungsten/alloy tungsten electrode and the job (transferred arc). The process employs two inert gases, one forms the arc plasma and the second shields the arc plasma. Filler metal may or may not be added. History The plasma arc welding and cutting process was invented by Robert M. Gage in 1953 and patented in 1957. The process was unique in that it could achieve precision cutting and welding on both thin and thick metals. It was also capable of spray coating hardening metals onto other metals. One example was the spray coating of the turbine blades of the Saturn V rocket. Principle of operation Plasma arc welding is an advanced form of tungsten inert gas (TIG) welding. In the case of TIG, it is an open arc shielded by argon or helium, whereas plasma uses a special torch where the nozzle is used to constrict the arc while the shielding gas is separately supplied by the torch. The arc is constricted with the help of a water-cooled small diameter nozzle which squeezes the arc, increases its pressure, temperature and heat intensely and thus improves arc stability, arc shape and heat transfer characteristics. Plasma arcs are formed using gas in two forms; laminar (low pressure and low flow) and turbulent (high pressure and high flow). The gases used are argon, helium, hydrogen or a mixture of these. In the case of plasma welding, laminar flow (low pressure and low flow of plasma gas) is employed to ensure that the molten metal is not blown out of the weld zone. The non-transferred arc (pilot arc) is employed during plasma-welding to initiate the welding process. The arc is formed between the electrode(-) and the water-cooled constricting nozzle (+). 
A non-transferred arc is initiated by using a high-frequency unit in the circuit. After the initial high-frequency start, the pilot arc (low current) is formed between the electrode and the water-cooled constricting nozzle by employing a low current. After the main arc is struck, the nozzle is neutral; in the case of welding mesh using micro plasma, there can be an option to maintain a continuous pilot arc. A transferred arc possesses high energy density and plasma jet velocity. Depending on the current used and the flow of gas, it can be employed to cut and melt metals. Microplasma uses current between 0.1 and 10 amps and is used for foils, bellows, and thin sheets. This is an autogenous process and normally does not use filler wire or powder. Medium plasma uses current between 10 and 100 amps and is used for welding thicker plates with filler wire or autogenously, and for metal deposition (hardfacing) using specialised torches and powder feeders (PTA) with metal powders. High-current plasma above 100 amps is used with filler wires, welding at high travel speeds. Other applications of plasma are plasma-cutting, heating, deposition of diamond films (Kurihara et al. 1989), material processing, metallurgy (production of metals and ceramics), plasma-spraying, and underwater cutting. Equipment The equipment needed in plasma arc welding, along with its functions, is as follows: Current and gas decay control: Required to close the keyhole properly while terminating the weld in the structure. Fixture: Required to avoid atmospheric contamination of the molten metal under the bead. Materials: Steel, aluminium, and other materials. High-frequency generator and current-limiting resistors: Used for arc ignition. The arc-starting system may be separate or built into the system. Plasma torch: Used for either transferred-arc or non-transferred-arc operation. It is hand operated or mechanized. At present, almost all applications require an automated system. The torch is water-cooled to increase the life of the nozzle and the electrode. The size and the type of nozzle tip are selected depending upon the metal to be welded, weld shapes and desired penetration depth. Power supply: A direct-current power source (generator or rectifier) having drooping characteristics and an open-circuit voltage of 70 volts or above is suitable for plasma arc welding. Rectifiers are generally preferred over DC generators. Working with helium as an inert gas needs an open-circuit voltage above 70 volts. This higher voltage can be obtained by series operation of two power sources; or the arc can be initiated with argon at normal open-circuit voltage and then helium can be switched on. Typical welding parameters for plasma arc welding are as follows: current 50 to 350 amps, voltage 27 to 31 volts, gas flow rates 2 to 40 liters/minute (lower range for orifice gas and higher range for outer shielding gas). Direct-current electrode negative (DCEN) is normally employed for plasma arc welding, except for the welding of aluminum, in which case a water-cooled electrode is preferable for reverse-polarity welding, i.e. direct-current electrode positive (DCEP). Shielding gases: Two inert gases or gas mixtures are employed. The orifice gas at lower pressure and flow rate forms the plasma arc. The pressure of the orifice gas is intentionally kept low to avoid weld metal turbulence, but this low pressure is not able to provide proper shielding of the weld pool. 
To have suitable shielding protection same or another inert gas is sent through the outer shielding ring of the torch at comparatively higher flow rates. Most of the materials can be welded with argon, helium, argon+hydrogen and argon+helium, as inert gases or gas mixtures. Argon is commonly used. Helium is preferred where a broad heat input pattern and flatter cover pass is desired without key-hole mode weld. A mixture of argon and hydrogen supplies heat energy higher than when only argon is used and thus permits keyhole mode welds in nickel-base alloys, copper-base alloys and stainless steels. For cutting purposes, a mixture of argon and hydrogen (10-30%) or that of nitrogen may be used. Hydrogen, because of its dissociation into atomic form and thereafter recombination generates temperatures above those attained by using argon or helium alone. In addition, hydrogen provides a reducing atmosphere, which helps in preventing oxidation of the weld and its vicinity. Care must be taken, as hydrogen diffusing into the metal can lead to embrittlement in some metals and steels. Voltage control: Required in contour welding. In normal key-hole welding, a variation in arc length up to 1.5 mm does not affect weld bead penetration or bead shape to any significant extent and thus a voltage control is not considered essential. Process description The technique of work-piece cleaning and filler-metal addition is similar to that in TIG welding. Filler metal is added at the leading edge of the weld pool. Filler metal is not required in making root-pass weld. Type of Joints: For welding work piece up to 25 mm thick, joints like square butt, J or V are employed. Plasma welding is used to make both key hole and non-key hole types of welds. Making a non-key-hole weld: The process can make non-key-hole welds on work pieces having thickness 2.4 mm and under. Making a keyhole welds: An outstanding characteristic of plasma arc welding, owing to exceptional penetrating power of plasma jet, is its ability to produce keyhole welds in work piece having thickness from 2.5 mm to 25 mm. A keyhole effect is achieved through right selection of current, nozzle-orifice diameter and travel speed, which create a forceful plasma jet to penetrate completely through the work piece. Plasma jet in no case should expel the molten metal from the joint. The major advantages of the keyhole technique are the ability to penetrate rapidly through relatively thick root sections and to produces a uniform under bead without mechanical backing. Also, the ratio of the depth of penetration to the width of the weld is much higher, resulting narrower weld and heat-affected zone. As the weld progresses, base metal ahead the keyhole melts, flow around the same solidifies and forms the weld bead. Key-holing aids deep penetration at faster speeds and produces high-quality bead. While welding thicker pieces, in laying others than root run, and using filler metal, the force of plasma jet is reduced by suitably controlling the amount of orifice gas. Plasma arc welding is an advancement over the GTAW process. This process uses a non-consumable tungsten electrode and an arc constricted through a fine-bore copper nozzle. PAW can be used to join all metals that are weldable with GTAW (i.e., most commercial metals and alloys). Difficult-to-weld in metals by PAW include bronze, cast iron, lead and magnesium. 
Several basic PAW process variations are possible by varying the current, plasma gas-flow rate, and the orifice diameter, including: Micro-plasma (< 15 Amperes) Melt-in mode (15–100 Amperes) Keyhole mode (>100 Amperes) Plasma arc welding has a greater energy concentration as compared to GTAW. A deep, narrow penetration is achievable, with a maximum depth of depending on the material. Greater arc stability allows a much longer arc length (stand-off), and much greater tolerance to arc-length changes. PAW requires relatively expensive and complex equipment as compared to GTAW; proper torch maintenance is critical. Welding procedures tend to be more complex and less tolerant to variations in fit-up, etc. Operator skill required is slightly greater than for GTAW. Orifice replacement is necessary. Process variables Gases At least two separate (and possibly three) flows of gas are used in PAW: Plasma gas – flows through the orifice and becomes ionized. Shielding gas – flows through the outer nozzle and shields the molten weld from the atmosphere. Back-purge and trailing gas – required for certain materials and applications. These gases can all be same, or of differing composition. Key process variables Current Type and Polarity DCEN from a CC source is standard AC square-wave is common on aluminum and magnesium Welding current and pulsing - Current can vary from 0.5 A to 1200 A; the current can be constant or pulsed at frequencies up to 20 kHz Gas-flow rate (This critical variable must be carefully controlled based upon the current, orifice diameter and shape, gas mixture, and the base material and thickness.) Other plasma arc processes Depending upon the design of the torch (e.g., orifice diameter), electrode design, gas type and velocities, and the current levels, several variations of the plasma process are achievable, including: Plasma arc cutting (PAC) Plasma arc gouging Plasma arc surfacing Plasma arc spraying Plasma arc cutting When used for cutting, the plasma gas flow is increased so that the deeply penetrating plasma jet cuts through the material and molten material is removed as cutting dross. PAC differs from oxy-fuel cutting in that the plasma process operates by using the arc to melt the metal whereas in the oxy-fuel process, the oxygen oxidizes the metal and the heat from the exothermic reaction melts the metal. Unlike oxy-fuel cutting, the PAC process can be applied to cutting metals which form refractory oxides such as stainless steel, cast iron, aluminum and other non-ferrous alloys. Since PAC was introduced by Praxair Inc. at the American Welding Society show in 1954, many process refinements, gas developments, and equipment improvements have occurred. References Bibliography Further reading American Welding Society, Welding Handbook, Volume 2 (8th Ed.) External links Plasma Arc Welding http://mewelding.com/plasma-arc-welding-paw/ Microplasma welding https://www.youtube.com/watch?v=T8g1lULZryk https://www.youtube.com/user/multiplazslovenia#p/u/6/SWbUJh4XuMQ Arc spray welding https://www.youtube.com/watch?v=BtsywbmjKIE&NR=1 https://www.youtube.com/watch?v=ibPPbQC5LeE Arc welding Plasma technology and applications
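The current-based process variations listed above can be captured in a small helper; this is an illustrative sketch only, and the thresholds simply restate the ranges quoted in the text:

```python
def paw_mode(current_amperes: float) -> str:
    """Classify the plasma arc welding variation by welding current, using the ranges
    quoted above: micro-plasma (< 15 A), melt-in mode (15-100 A), keyhole mode (> 100 A)."""
    if current_amperes < 15:
        return "micro-plasma"
    elif current_amperes <= 100:
        return "melt-in"
    else:
        return "keyhole"

for amps in (5, 50, 250):
    print(amps, "A ->", paw_mode(amps))
```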
Plasma arc welding
[ "Physics" ]
2,955
[ "Plasma technology and applications", "Plasma physics" ]
1,449,951
https://en.wikipedia.org/wiki/Mist%20net
Mist nets are nets used to capture wild birds and bats. They are used by hunters and poachers to catch and kill animals, but also by ornithologists and chiropterologists for banding and other research projects. Mist nets are typically made of nylon or polyester mesh suspended between two poles, resembling a volleyball net. When properly deployed in the correct habitat, the nets are virtually invisible. Mist nets have shelves created by horizontally strung lines that create a loose, baggy pocket. When a bird or bat hits the net, it falls into this pocket, where it becomes tangled. The mesh size of the netting varies according to the size of the species targeted for capture. Mesh sizes can be measured along one side of the edge of a single mesh square, or along the diagonal of that square. Measures given here are along the diagonal. Small passerines are typically captured with 16-30 mm mesh, while larger birds, like hawks and ducks, are captured using mesh sizes of ~127 mm. Net dimensions can vary widely depending on the proposed use. Net height for avian mist netting is typically 1.2 - 2.6 m. Net width may vary from 3 to 18 m, although longer nets may also be used. A dho-gazza is a type of mist net that can be used for larger birds, such as raptors. This net lacks shelves. The purchase and use of mist nets requires permits, which vary according to a country or state's wildlife regulations. Mist net handling requires skill for optimal placement, avoiding entangling nets in vegetation, and proper storage. Bird and bat handling requires extensive training to avoid injury to the captured animals. Bat handling may be especially difficult since bats are captured at night and may bite. A 2011 research survey found mist netting to result in low rates of injury while providing high scientific value. Usage of mist nets Mist nets have been used by Japanese hunters for nearly 300 years to capture birds. They were first introduced into use for ornithology in the United States of America by Oliver L. Austin in 1947. Mist netting is a popular and important tool for monitoring species diversity, relative abundance, population size, and demography. There are two ways in which mist nets are primarily utilized: target netting of specific species or individuals, and broadcast netting of all birds within a particular area. Targeted netting is typically used for scientific studies that examine a single species. Nets deployed in this manner often use a playback of a species' song or call, or a model of that species placed near the net to lure the targeted individuals into the net (e.g. ). Because broadcast netting captures birds indiscriminately, this technique is better suited to examining the species that occur within a specific habitat. Bird banding stations throughout the United States use this method. Typically, such stations collect a set of standard measurements from each individual, including mass, wing chord, breeding status, body fat index, sex, age, and molt status. Although setting up mist nets is time-consuming and requires certification, there are certain advantages compared to visual and aural monitoring techniques, such as sampling species that may be poorly detected in other ways. It also allows easy standardization, hands-on examination, and reduces misidentification of species. Because they allow scientists to examine species up close, mist nets are often used in mark-recapture studies over extended periods of time to detect trends in population indices. 
Some uses of data collected using mist net sampling are: Mark-recapture for population sampling Humane capture and relocation of small birds or bats Tagging and tracking Testing health of bird or bat species and for ectoparasite studies Examination of avian phenology in response to climatic and other variables Examination of patterns of molt Because there is still debate as to whether or not these techniques provide precise data, it is suggested that mist netting be used as a supplement to aural and visual methods of observation. One of the main disadvantages of mist nets is that the numbers captured may only represent a small proportion of the true population size. Mist netting is a unique method in that it provides demographic estimates throughout all seasons, and offers valuable guides to relative abundance of certain species or birds and/or bats. Example study Mist nets can be important tools for collecting data to reveal critical ecological conditions in a variety of situations. This summarized study, "Effects of forest fragmentation on Amazonian understory bird communities" by Richard O. Bierregaard and Thomas E. Lovejoy, used mist nets to analyze the effects of forest fragmentation on understory bird communities in terra firme forest of Central Amazon. Data from intensive mist netting mark-recapture programs on understory birds from isolated forest reserves were compared to pre-isolation data from the same reserves to investigate changes related to isolation from continuous forest. Birds surveyed were from a variety of ecological guilds, including nectivores, insectivores, frugivores, obligatory army ant followers, forest edge specialists and flocking species. Periodic sampling by the mist netting capture program provided the quantitative basis for this project. Reserves of varied sizes (1 and 10 hectare) within the Biological Dynamics of Forest Fragments project site were sampled with transects of tethered mist nets once every three or four weeks. Capture rates from isolated reserves were compared to pre-isolation rates to measure changes in population size and/or avian activity due to isolation. Data was analyzed in the following ways: capture rates per net hour as a function of time since isolation, percent recapture as a function of time since isolation, abundance distribution of species against the species rank by abundance, percent individuals banded according to species and feeding strategy, and finally, capture rates per net hour in isolated reserves against capture rates per net hour in continuous forests. A summary of the results and discussion as stated by Bierregaard and Lovejoy is as follows: ...changes in the understory avian community in isolated patches. Following isolation, capture rates increase significantly as birds fleeing the felled forest entered new forest fragments. Movement to and from the reserve is limited as witnessed by an increase in recapture percentages following isolation. Species of birds that are obligate army ant followers disappeared at the time the surrounding habitat was removed from 1 and 10 ha areas. The complex mixed-species of insectivorous flocks typical of Amazonian forests deteriorated within 2 years of isolation of 1 and 10 ha forest fragments. Several species of mid-story insectivores changed their foraging behavior after isolation of small forest reserves. These data were collected using mist nets. 
Data from mist netting efforts may be used to gain a greater understanding of ecological effects of factors impacting ecosystems, such human activities or environmental changes. This is just one example of the use of mist nets as a tool for ecological and biological sciences. Mist net data can also have ecosystem management implications. Disadvantages The use of mist nets has several disadvantages. Mist-netting is very time-consuming. Nets have to be set up without mistakes. An animal caught in a mist net becomes entangled, so the net must be checked often and the animal removed promptly. Disentangling an animal from a mist net can be difficult and must be done carefully by trained personnel. If an animal is heavily entangled, the mist net may need to be cut to avoid injuring the animal, damaging the material. Mist nets will not capture birds in direct proportion to their presence in the area (Remsen and Good 1996) and can miss a species completely if it is active in a different strata of vegetation, such as high in the canopy. They can, however, provide an index to population size. People using mist nets must be careful and well-trained, since the capture process can harm birds. One study found the average rate of injury for birds in mist nets is lower than any other method of studying vertebrates, between 0 and 0.59% while the average mortality rate is between 0 and 0.23%. While rare, it has been suggested (without scientific studies) that larger birds may be more prone to leg injuries and internal bleeding. Smaller birds typically have problems with tangling issues and wing injuries. Factors that affect the injury and mortality rate are human error while handling the species, time of year caught, time of day caught, predators in the area, and size/material of the mist net. Banders People who are responsible for banding netted wildlife so they can be tracked are called banders in the United States. Banders are responsible for the animals caught and thus apply their training by looking for stress cues (for birds, these include panting, tiredness, closing of eyes, and raising of feathers). Without this caution, animals can severely injure themselves. In the United States, in order to band a bird or bat, one must have a banding permit from U.S. Fish and Wildlife. The qualifications for permitting vary by species. There are different types of banding permits for birds: the Master Permit and the Sub permit. Master Permits are given to individuals who band on their own or who supervise banding operations. Sub Permits are given to individuals who will be supervised while banding by a person with a Master Permit. In order to receive a permit, one must complete an application and return it to the nearest banding office. Banders must ask for special authorization in their application to use mist nets, cannon nets, chemicals, or auxiliary markers. See also Bal-chatri traps to catch birds of prey (raptors) References Ornithological equipment and methods Fowling Nets (devices) Environmental Sampling Equipment
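The indices used in studies like the one summarized above — captures per net-hour and percent recapture — are simple ratios of counts and effort. The sketch below shows the arithmetic; all argument names and numbers are hypothetical and do not come from the study.

def capture_rate_per_net_hour(captures: int, nets: int, hours_open: float) -> float:
    """Total captures divided by netting effort (number of nets x hours open)."""
    return captures / (nets * hours_open)

def percent_recapture(recaptures: int, captures: int) -> float:
    """Share of captures that were previously banded individuals, in percent."""
    return 100.0 * recaptures / captures

print(capture_rate_per_net_hour(captures=42, nets=16, hours_open=6.0))
print(percent_recapture(recaptures=9, captures=42))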
Mist net
[ "Biology" ]
1,949
[ "Environmental Sampling Equipment" ]
1,450,059
https://en.wikipedia.org/wiki/Helepolis
Helepolis (, meaning: "Taker of Cities") is the Greek name for a movable siege tower. The most famous was that invented by Polyidus of Thessaly, and improved by Demetrius I of Macedon and Epimachus of Athens, for the Siege of Rhodes (305 BC). Descriptions of it were written by Diodorus Siculus, Vitruvius, Plutarch, and in the Athenaeus Mechanicus. Description The Helepolis was essentially a large tapered tower, with each side about high, and wide that was manually pushed into battle. It rested on eight wheels, each high and also had casters, to allow lateral movement as well as direct. The three exposed sides were rendered fireproof with iron plates, and stories divided the interior, connected by two broad flights of stairs, one for ascent and one for descent. The machine weighed , and required 3,400 men working in relays to move it, 200 turning a large capstan driving the wheels via a belt, and the rest pushing from behind. The casters permitted lateral movement, so the entire apparatus could be steered towards the desired attack point, while always keeping the siege engines inside aimed at the walls, and the protective body of the machine directly between the city walls and the men pushing behind it. The Helepolis bore a fearsome complement of heavy armaments, with two catapults, and one (classified by the weight of the projectiles they threw) on the first floor, three catapults on the second, and two on each of the next five floors. Apertures, shielded by mechanically adjustable shutters, lined with skins stuffed with wool and seaweed to render them fireproof, perforated the forward wall of the tower for firing the missile weapons. On each of the top two floors, soldiers could use two light dart throwers to easily clear the walls of defenders. Siege of Rhodes As the Helepolis was pushed towards the city, the Rhodians managed to dislodge some of the metal plates, and Demetrius ordered it withdrawn from battle to protect it from being burned. Following the failure of the siege, the Helepolis along with the other siege engines were abandoned, and the people of Rhodes melted down their metal plating and sold abandoned weapons, using the materials and money to build a statue of their patron god, Helios, the Colossus of Rhodes, which became known as one of the ancient Seven Wonders of the World. Vitruvius offers an alternative version, in which the Rhodians begged Diognetus, once the town architect of Rhodes, to find a way to capture the Helepolis. By cover of night, he had the Rhodians knock a hole through the city wall to channel large amounts of water, mud and sewage onto the area where the Helepolis was expected to attack the following day. Diognetus was successful; the tower was brought forth to the anticipated attack position and became irretrievably stuck in the mire. Once the siege was lifted, the Rhodians sold Demetrius's abandoned siege engines and used the money to erect the enormous Colossus of Rhodes. Later use Demetrius again used a similar machine in 292 BC against the Thebans in the siege of Thebes and captured the city the following year. In subsequent ages, siege engineers continued to use the name helepolis for moving towers which carried battering rams, as well as for machines for throwing spears and heavy stones. The Byzantines much later used the term helepolis to describe a very different siege engine; the traction trebuchet. 
The first recorded use of this terminology was by Theophylact Simocatta, in describing the siege of Tiflis in the Byzantine–Sassanid War of 602–628. See also List of largest machines References Connolly, Peter. Greece and Rome at War. London: Greenhill Books, 1998. Warry, John. Warfare in the Classical World. Salamanda Books. Campbell, Duncan B. Greek and Roman Siege Machinery 399 BC-AD 363. Osprey Publishing, 2003. External links Helepolis at LacusCurtius Ancient Greek war machines: The Helepolis, a fortified wheeled tower Ancient Greek military terminology Siege engines Hellenistic military engineering Colossus of Rhodes
Helepolis
[ "Engineering" ]
895
[ "Military engineering", "Siege engines" ]
1,450,110
https://en.wikipedia.org/wiki/On%20Formally%20Undecidable%20Propositions%20of%20Principia%20Mathematica%20and%20Related%20Systems
"Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I" ("On Formally Undecidable Propositions of Principia Mathematica and Related Systems I") is a paper in mathematical logic by Kurt Gödel. Submitted November 17, 1930, it was originally published in German in the 1931 volume of Monatshefte für Mathematik und Physik. Several English translations have appeared in print, and the paper has been included in two collections of classic mathematical logic papers. The paper contains Gödel's incompleteness theorems, now fundamental results in logic that have many implications for consistency proofs in mathematics. The paper is also known for introducing new techniques that Gödel invented to prove the incompleteness theorems. Outline and key results The main results established are Gödel's first and second incompleteness theorems, which have had an enormous impact on the field of mathematical logic. These appear as theorems VI and XI, respectively, in the paper. In order to prove these results, Gödel introduced a method now known as Gödel numbering. In this method, each sentence and formal proof in first-order arithmetic is assigned a particular natural number. Gödel shows that many properties of these proofs can be defined within any theory of arithmetic that is strong enough to define the primitive recursive functions. (The contemporary terminology for recursive functions and primitive recursive functions had not yet been established when the paper was published; Gödel used the word ("recursive") for what are now known as primitive recursive functions.) The method of Gödel numbering has since become common in mathematical logic. Because the method of Gödel numbering was novel, and to avoid any ambiguity, Gödel presented a list of 45 explicit formal definitions of primitive recursive functions and relations used to manipulate and test Gödel numbers. He used these to give an explicit definition of a formula that is true if and only if is the Gödel number of a sentence and there exists a natural number that is the Gödel number of a proof of . The name of this formula derives from , the German word for proof. A second new technique invented by Gödel in this paper was the use of self-referential sentences. Gödel showed that the classical paradoxes of self-reference, such as "This statement is false", can be recast as self-referential formal sentences of arithmetic. Informally, the sentence employed to prove Gödel's first incompleteness theorem says "This statement is not provable." The fact that such self-reference can be expressed within arithmetic was not known until Gödel's paper appeared; independent work of Alfred Tarski on his indefinability theorem was conducted around the same time but not published until 1936. In footnote 48a, Gödel stated that a planned second part of the paper would establish a link between consistency proofs and type theory (hence the "I" at the end of the paper's title, denoting the first part), but Gödel did not publish a second part of the paper before his death. His 1958 paper in Dialectica did, however, show how type theory can be used to give a consistency proof for arithmetic. Published English translations During his lifetime three English translations of Gödel's paper were printed, but the process was not without difficulty. 
The first English translation was by Bernard Meltzer; it was published in 1963 as a standalone work by Basic Books and has since been reprinted by Dover and reprinted by Hawking (God Created the Integers, Running Press, 2005:1097ff). The Meltzer version—described by Raymond Smullyan as a 'nice translation'—was adversely reviewed by Stefan Bauer-Mengelberg (1966). According to Dawson's biography of Gödel (Dawson 1997:216), Fortunately, the Meltzer translation was soon supplanted by a better one prepared by Elliott Mendelson for Martin Davis's anthology The Undecidable; but it too was not brought to Gödel's attention until almost the last minute, and the new translation was still not wholly to his liking ... when informed that there was not time enough to consider substituting another text, he declared that Mendelson's translation was 'on the whole very good' and agreed to its publication. [Afterward he would regret his compliance, for the published volume was marred throughout by sloppy typography and numerous misprints.] The translation by Elliott Mendelson appears in the collection The Undecidable (Davis 1965:5ff). This translation also received a harsh review by Bauer-Mengelberg (1966), who in addition to giving a detailed list of the typographical errors also described what he believed to be serious errors in the translation. A translation by Jean van Heijenoort appears in the collection From Frege to Gödel: A Source Book in Mathematical Logic (van Heijenoort 1967). A review by Alonzo Church (1972) described this as "the most careful translation that has been made" but also gave some specific criticisms of it. Dawson (1997:216) notes: The translation Gödel favored was that by Jean van Heijenoort ... In the preface to the volume van Heijenoort noted that Gödel was one of four authors who had personally read and approved the translations of his works. This approval process was laborious. Gödel introduced changes to his text of 1931, and negotiations between the men were "protracted": "Privately van Heijenoort declared that Gödel was the most doggedly fastidious individual he had ever known." Between them they "exchanged a total of seventy letters and met twice in Gödel's office in order to resolve questions concerning subtleties in the meanings and usage of German and English words." (Dawson 1997:216-217). Although not a translation of the original paper, a very useful 4th version exists that "cover[s] ground quite similar to that covered by Godel's original 1931 paper on undecidability" (Davis 1952:39), as well as Gödel's own extensions of and commentary on the topic. This appears as On Undecidable Propositions of Formal Mathematical Systems (Davis 1965:39ff) and represents the lectures as transcribed by Stephen Kleene and J. Barkley Rosser while Gödel delivered them at the Institute for Advanced Study in Princeton, New Jersey in 1934. Two pages of errata and additional corrections by Gödel were added by Davis to this version. This version is also notable because in it Gödel first describes the Herbrand suggestion that gave rise to the (general, i.e. Herbrand–Gödel) form of recursion. References Stefan Bauer-Mengelberg (1966). Review of The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable problems and Computable Functions. The Journal of Symbolic Logic, Vol. 31, No. 3. (September 1966), pp. 484–494. Alonzo Church (1972). Review of A Source Book in Mathematical Logic 1879–1931. The Journal of Symbolic Logic, Vol. 37, No. 2. (June 1972), p. 405. Martin Davis, ed. (1965). 
The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems and Computable Functions, Raven, New York. Reprint, Dover, 2004. . Martin Davis, (2000). Engines of Logic: Mathematics and the Origin of the Computer, W. W. Norton & Company, New York. pbk. Kurt Gödel (1931), "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I". Monatshefte für Mathematik und Physik 38: 173–198. . Available online via SpringerLink. Kurt Gödel (1958). "Über eine bisher noch nicht benüzte Erweiterung des finiten Standpunktes". Dialectica v. 12, pp. 280–287. Reprinted in English translation in Gödel's Collected Works, vol II, Soloman Feferman et al., eds. Oxford University Press, 1990. Jean van Heijenoort, ed. (1967). From Frege to Gödel: A Source Book on Mathematical Logic 1879–1931. Harvard University Press. Bernard Meltzer (1962). On Formally Undecidable Propositions of Principia Mathematica and Related Systems. Translation of the German original by Kurt Gödel, 1931. Basic Books, 1962. Reprinted, Dover, 1992. . Raymond Smullyan (1966). Review of On Formally Undecidable Propositions of Principia Mathematica and Related Systems. The American Mathematical Monthly, Vol. 73, No. 3. (March 1966), pp. 319–322. John W. Dawson, (1997). Logical Dilemmas: The Life and Work of Kurt Gödel, A. K. Peters, Wellesley, Massachusetts. . External links "On formally undecidable propositions of Principia Mathematica and related systems I". Translated by Martin Hirzel, November 27, 2000. "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I" on Wilhelm K. Essler's (Prof. em. of Logic, Goethe-Universität Frankfurt am Main) webpage Mathematical logic Mathematics papers 1931 in science 1931 documents Works originally published in German magazines Works originally published in science and technology magazines Logic literature Works by Kurt Gödel
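To illustrate the general idea of Gödel numbering described in the outline above, the following sketch packs a sequence of symbol codes into a single integer as a product of prime powers and recovers it again by factoring. This toy scheme is an assumption for illustration only; it is not the exact assignment, nor the 45 primitive recursive definitions, used in the 1931 paper.

def primes(k):
    """Return the first k primes by trial division (adequate for a toy example)."""
    found = []
    n = 2
    while len(found) < k:
        if all(n % p for p in found):
            found.append(n)
        n += 1
    return found

def goedel_number(codes):
    """Encode a sequence of positive integers as 2**c1 * 3**c2 * 5**c3 * ..."""
    number = 1
    for p, c in zip(primes(len(codes)), codes):
        number *= p ** c
    return number

def decode(number):
    """Recover the sequence by dividing out successive primes."""
    codes = []
    k = 1
    while number > 1:
        p = primes(k)[-1]
        exponent = 0
        while number % p == 0:
            number //= p
            exponent += 1
        codes.append(exponent)
        k += 1
    return codes

sequence = [3, 1, 4, 1, 5]          # codes standing in for the symbols of a formula
g = goedel_number(sequence)
assert decode(g) == sequence
print(g)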
On Formally Undecidable Propositions of Principia Mathematica and Related Systems
[ "Mathematics" ]
2,033
[ "Mathematical logic" ]
21,564,798
https://en.wikipedia.org/wiki/K-factor%20%28aeronautics%29
For aircraft fuel flow meters, the K-factor is the number of pulses expected for every volumetric unit of fluid passing through a given flow meter, and it is usually encountered when dealing with pulse signals. Combined with pressure and temperature sensors, the pulse output can also be used to determine mass flow. Dividing the accumulated pulses by the K-factor (or, equivalently, multiplying them by the inverse of the K-factor) provides factored totalization and rate indication. In particular, dividing the pulse rate by the K-factor gives the volumetric flow rate. References Aerodynamics
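A minimal sketch of the relationships just described, assuming an example K-factor in pulses per litre; the constant, the function names and the numbers are illustrative only.

K_FACTOR = 1200.0          # pulses per litre for this hypothetical flow meter

def total_volume(pulse_count: int) -> float:
    """Factored totalization: accumulated pulses divided by the K-factor (litres)."""
    return pulse_count / K_FACTOR

def flow_rate(pulses_per_second: float) -> float:
    """Volumetric flow rate: pulse rate divided by the K-factor (litres per second)."""
    return pulses_per_second / K_FACTOR

print(total_volume(90_000))     # litres passed so far
print(flow_rate(40.0))          # litres per second at this instant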
K-factor (aeronautics)
[ "Chemistry", "Engineering" ]
121
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
21,565,017
https://en.wikipedia.org/wiki/Speed%20limiter
A speed limiter is a governor used to limit the top speed of a vehicle. For some classes of vehicles and in some jurisdictions they are a statutory requirement, for some other vehicles the manufacturer provides a non-statutory system which may be fixed or programmable by the driver. Statutory (UK) Mopeds The legal definition of a moped in the United Kingdom was revised in 1977 to include a maximum design speed of 30 mph (48 km/h). This was further revised to 50 km/h (31 mph) in the 1990s, then 45 km/h (28 mph) in the late 2000s to fall in line with unified European Union licensing regulations. To comply with this, mopeds typically include some method of onboard speed restriction to prevent the machine exceeding the prescribed speed (on a flat road, in still air, with a rider of standard height and weight). Older models such as the Honda C50 used a simple centrifugal governor as part of the transmission, which progressively and severely advanced the ignition as speed rose past a set point, causing engine power to fall off rapidly at higher rpm and road speed, but maintaining the low- and moderate-speed hill climbing ability of the unrestricted version. Other systems achieved a similar result with simple restrictor flaps in the air intake, much like those used to restrict the power output of full-size motorcycles. Modern mopeds use electronic systems with speed sensors that can cut the ignition spark (and, where fitted, interrupt fuel injection) once measured speed reaches or exceeds the set point, maintaining full power right up to the limited speed. Early restriction methods could be defeated by simple physical modifications (e.g. cutting out the restriction plate). Modern electronic limiters at the very least require replacing the friction rollers in a scooter's CVT, or even changing wheel size and/or reprogramming the engine management system, all in an effort to fool the sensors into detecting a lower than actual road speed. Public service vehicles Public service vehicles often have a legislated top speed. Scheduled coach services in the United Kingdom (and also bus services) are limited to either 65 mph (105 km/h) or 100 km/h (62 mph) depending on their age (newer coaches have the lower speed version installed, in line with harmonised EU regulations), though for city buses the use of limiters is to satisfy regulatory requirements, as many city buses cannot achieve these speeds even on an open roadway. Heavy goods vehicles HGVs in the UK have been subject to mandatory 60 mph (96 km/h) limiters since the early 1990s, which were subsequently revised to 90 km/h (56 mph) during EU harmonization. Non-statutory (UK) Dynamic (ISA) The newest form of speed limiters currently being deployed feature the ability to dynamically limit a vehicles top speed based upon a vehicles real time location and the road speed limit. The most popular of these systems is one called VMS with SpeedIQ from Sturdy Corporation. Dynamic speed limiters are being widely adopted by emergency service fleets due to their ability to limit a vehicles top speed during normal operations and then releasing to a higher maximum top speed when en route to an emergency. Additionally, fleets that operate in mixed geographic areas benefit greatly from a limiter that will allow a vehicle to travel at highway speeds as well as limit that vehicle to more commonly traveled residential neighborhoods at significantly lower speeds. 
Programmable European Citroën, BMW, Benz-Benz, Peugeot, Renault, Tesla as well as some Ford and Nissan car and van models have driver-controlled speed limiters fitted or available as an optional accessory which can be set by the driver to any desired speed; the limiter can be overridden if required by pressing hard on the accelerator. The limiter may be considered as setting the maximum speed (with throttle kickdown to override it) easing the throttle to reduce speed, whereas cruise control sets the minimum speed (with the brake pedal to override it) pressing on the throttle to increase speed. The limiter may shift down through automatic gears to hold the maximum speed. The Bugatti Chiron also has a programmed speed limiter, although uniquely, it can be (at least partially) lifted by the owner via a key. Once the key is inserted, the car conducts a brief diagnostic before allowing the owner to drive the car up to speeds of 420 km/h, (approx. 261 mph) provided it is deemed by the car's computer that the conditions allow. This is considerably faster than the top speed of 380 km/h (236 mph) that the car is usually restricted to. Top Speed Mode, as Bugatti dubs it, reduces the overall ride height and lowers the rear spoiler to help in the reduction of drag. Fixed In European markets, General Motors Europe sometimes allow certain high-powered Opel or Vauxhall cars to exceed the mark, whereas their Cadillacs do not. The Chrysler 300C SRT8 is limited to 270 km/h. Most Japanese domestic market vehicles are limited to . The limit for kei cars is . These limits are self imposed through the Japan Automobile Manufacturers Association and is not a legal requirement. BMW, Benz and others have entered into a gentlemen's agreement to a limit of , but may 'unhook' their speed limited cars in Europe, and Benz will provide some vehicles in the U.S. without limiters for an additional price. There are also third-party companies who will re-flash vehicle computers with new software which will remove the speed limits and improve overall performance. Many small and medium-sized commercial vehicles are now routinely fitted with speed limiters as a manufacturer option, with a mind towards reducing fuel bills, maintenance costs and insurance premiums, as well as discouraging employees from abusing company vehicles, in addition to curbing speeding fines and bad publicity. These limiters are often set considerably lower than for passenger cars, typically at in the UK, with options for listed in countries where these speeds are legal. Often the fitting of a limiter is combined with a small warning sticker on the rear of the vehicle, stating its maximum speed, to discourage drivers who may themselves be delayed by having to follow it from tailgating or other aggressive driving intended to intimidate the lead driver into accelerating. Similarly, most electric cars and vans which are not inherently limited by a low power output or "short" gearing tend to implement a maximum speed cap via their power controllers, to prevent the rapid loss of battery charge and corresponding reduction in range caused by the much greater power demands of high speeds; for example, the Smart ED, Nissan Leaf, Mitsubishi MiEV, and the Citroën Berlingo EV. The limits are typically in line with those of other deliberately limited vehicles, for a balance that does not overly compromise either range or travel time; e.g. 90 km/h for the Berlingo, 100~120 km/h for the Smart (depending on version). 
The Leaf is an unusual case, being instead limited to a much higher 145 km/h (90 mph). Also some supercars have speed limiters to prevent instability. Some small economy cars have limiters, because of stability and other safety concerns (short crumple zones, etc.), and to safeguard their small engines from the prolonged overrevving required to produce the power to achieve higher speeds. The first generation Smart was limited to 135 km/h (84 mph) (later generations were unlimited), and the Mitsubishi i to 130 km/h (81 mph). Some heavy goods vehicle operators (typically big-name retailers, rather than haulage contractors) further reduce their HGV limiters from 90 km/h to a lower speed, typically 85 or 80 km/h (53 or 50 mph), in a claimed bid to reduce fuel consumption and emissions. This is again often highlighted by a warning sticker on the truck's tailgate. All Dodge Challenger and Charger models from the 2015 model year and up received a security update in June 2021, that allows you to set a security code that if you type the incorrect code, the RPM is limited idle speed (3 hp, , and of torque) to deter thieves. See also Intelligent speed adaptation Notes References Documents referenced from 'Notes' section Other references for article Mechanisms (engineering) Mechanical power control Road safety Road speed limit
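The electronic limiters described earlier in this article essentially compare a measured road speed against a set point and cut the ignition spark (and, where fitted, fuel injection) above it. The sketch below shows that comparison in the simplest possible form; the class name, method names and the 45 km/h set point are assumptions for illustration, and real engine-management logic is considerably more involved.

class SpeedLimiter:
    def __init__(self, limit_kph: float):
        self.limit_kph = limit_kph

    def ignition_enabled(self, measured_speed_kph: float) -> bool:
        """Cut spark at or above the limit; restore it once speed falls below."""
        return measured_speed_kph < self.limit_kph

limiter = SpeedLimiter(limit_kph=45.0)        # EU moped limit quoted above
for v in (30.0, 44.9, 45.0, 47.0):
    print(v, "km/h ->", "spark" if limiter.ignition_enabled(v) else "cut")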
Speed limiter
[ "Physics", "Engineering" ]
1,713
[ "Mechanics", "Mechanical power control", "Mechanical engineering", "Mechanisms (engineering)" ]
21,568,490
https://en.wikipedia.org/wiki/Hyperdeformation
In nuclear physics, hyperdeformation refers to theoretically predicted states of an atomic nucleus with an extremely elongated shape and a very high angular momentum. Less elongated superdeformed states have been well observed, but the experimental evidence for hyperdeformation is more limited. Hyperdeformed states correspond to an axis ratio of 3:1. They would be caused by a third minimum in the potential energy surface, the second minimum causing superdeformation and the first corresponding to normal deformation. Hyperdeformation is predicted to occur in 107Cd. References Nuclear physics
Hyperdeformation
[ "Physics" ]
116
[ "Nuclear and atomic physics stubs", "Nuclear physics" ]
21,568,751
https://en.wikipedia.org/wiki/Equidimensionality
In mathematics, especially in topology, equidimensionality is the property of a space that its local dimension is the same everywhere. Definition (topology) A topological space X is said to be equidimensional if, for all points p in X, the dimension at p, that is dim_p(X), is constant. Euclidean space is an example of an equidimensional space. The disjoint union of two spaces X and Y (as topological spaces) of different dimensions is an example of a non-equidimensional space. Definition (algebraic geometry) A scheme S is said to be equidimensional if every irreducible component has the same Krull dimension. For example, the affine scheme Spec k[x,y,z]/(xy,xz), which intuitively looks like a line intersecting a plane, is not equidimensional. Cohen–Macaulay ring An affine algebraic variety whose coordinate ring is a Cohen–Macaulay ring is equidimensional. References Mathematical terminology Topology
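To make the algebraic-geometry example above concrete, here is a short worked check using standard facts about Krull dimension:
\[
V(xy,\,xz) \;=\; V(x)\,\cup\,V(y,z), \qquad \dim V(x) = \dim \operatorname{Spec} k[y,z] = 2, \qquad \dim V(y,z) = \dim \operatorname{Spec} k[x] = 1 ,
\]
so the two irreducible components have different Krull dimensions and the scheme is not equidimensional, whereas Euclidean space has the same local dimension at every point.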
Equidimensionality
[ "Physics", "Mathematics" ]
226
[ "Topology stubs", "Topology", "Space", "nan", "Geometry", "Spacetime" ]
21,573,926
https://en.wikipedia.org/wiki/FIND%2C%20the%20global%20alliance%20for%20diagnostics
FIND (Foundation for Innovative New Diagnostics) is a global health non-profit based in Geneva, Switzerland. FIND functions as a product development partnership, engaging in active collaboration with over 150 partners to facilitate the development, evaluation, and implementation of diagnostic tests for poverty-related diseases. The organisation's Geneva headquarters are in Campus Biotech. Country offices are located in New Delhi, India; Cape Town, South Africa; and Hanoi, Viet Nam. History FIND was launched at the 56th World Health Assembly in 2003 in response to the critical need for innovative and affordable diagnostic tests for diseases in low- and middle-income countries. The initiative was launched by the Bill and Melinda Gates Foundation and WHO's Special Programme for Research and Training in Tropical Diseases (TDR), and its initial focus was to speed up the development and evaluation of tuberculosis tests. In 2011, FIND was recognized as an "Other International Organization" by the Swiss Government, alongside DNDi and Medicines for Malaria Venture. Priorities The organization focuses on improving diagnosis in several disease areas, including hepatitis C, HIV, malaria, neglected tropical diseases (sleeping sickness, Chagas disease, leishmaniasis, buruli ulcer), and tuberculosis. Alongside this, FIND works on diagnostic connectivity, antimicrobial resistance, acute febrile illness, and outbreak preparedness. To support this work, FIND engages in development of target product profiles, maintains clinical trial platforms, manages specimen banks, negotiates preferential product pricing for developing markets, and creates and implements trainings and lab strengthening tools. In 2020, FIND became a co-convener of the Diagnostics Pillar of the Access to COVID-19 Tools Accelerator with The Global Fund to Fight AIDS, Tuberculosis and Malaria. Together they supported the development of reliable rapid antigen tests for COVID-19, and guaranteed access to 120 million rapid tests at an affordable price to low- and middle-income countries. FIND also aims at improving the diagnostics ecosystem by working on activities such as sequencing, managing a biobank network to facilitate diagnostic development across diseases, helping countries optimize their networks of diagnostic services, and developing digital tools, such as algorithms, that can help healthcare workers provide better diagnosis. Recent achievements From 2015 to 2020, fifteen new diagnostic technologies supported by FIND received regulatory clearance, and 10 of them were in use by the end of 2020 in low- and middle-income countries. One example of such tests is Abbott's BIOLINE HAT 2.0, a rapid test for African trypanosomiasis, a disease also known as sleeping sickness. In 2021 Abbott donated 450,000 of these tests to scale up testing in low- and middle-income countries. 
Over the same period, FIND supported the development of four multi-disease diagnostic platforms: Cepheid's GeneXpert MTB/RIF for simultaneous rapid tuberculosis diagnosis and rapid antibiotic sensitivity test Eiken's LAMP platform for the detection of diseases including tuberculosis, malaria, sleeping sickness and leishmaniasis Molbio's Truenat, a point-of-care rapid molecular test for diagnosis of infectious diseases DCN's Fluoro rapid test for gonorrhoea In April 2020, the World Health Organization launched the ACT-Accelerator partnership, a global collaboration to accelerate the development, production and equitable distribution of vaccines, diagnostics and therapeutics for COVID-19. Leading the diagnostic pillar together with The Global Fund to Fight AIDS, Tuberculosis and Malaria, FIND has worked to enable access to tests by boosting research and development, Emergency Use Listing, independent assessment, and manufacturing of tests. Together with partners FIND has developed and made available online courses and training packages for healthcare workers on COVID-19 testing. FIND has also created a portal to provide an overview of the COVID-19 testing landscape, including a directory of COVID-19 diagnostics commercialized., and a tracker centralizing all the data reported by the countries on COVID-19 tests performed, incidence, deaths and positivity rate. Funding and leadership FIND receives its funding from more than thirty donors, including bilateral and multilateral organizations as well as private foundations. Members of the Board of Directors include Ilona Kickbusch, George F. Gao, David L. Heymann, Shobana Kamineni and Sheila Tlou. References Biomedical research foundations Bill & Melinda Gates Foundation Foundations based in Switzerland Organizations established in 2003 International medical and health organizations Tropical diseases Organisations based in Geneva
FIND, the global alliance for diagnostics
[ "Engineering", "Biology" ]
907
[ "Biotechnology organizations", "Biomedical research foundations" ]
24,546,104
https://en.wikipedia.org/wiki/Vorticity%20confinement
Vorticity confinement (VC), a physics-based computational fluid dynamics model analogous to shock-capturing methods, was invented by Dr. John Steinhoff, professor at the University of Tennessee Space Institute, in the late 1980s to solve vortex-dominated flows. It was first formulated to capture concentrated vortices shed from wings, and later became popular in a wide range of research areas. During the 1990s and 2000s, it became widely used in the field of engineering. The method VC bears a basic similarity to the solitary-wave approach that is extensively used in many condensed matter physics applications. The effect of VC is to capture small-scale features over as few as two grid cells as they convect through the flow. The basic idea is similar to that of a compression discontinuity in Eulerian shock-capturing methods. The internal structure is kept thin, so the details of the internal structure may not be important. Example Consider the 2D Euler equations, modified using the confinement term $\vec{F}$, which enters the momentum equation as an additional body force: $\partial_t \vec{q} + (\vec{q}\cdot\nabla)\vec{q} = -\nabla p/\rho + \vec{F}$. The discretized Euler equations with the extra term can be solved on fairly coarse grids, with simple low-order accurate numerical methods, but still yield concentrated vortices which convect without spreading. VC has different forms, one of which is VC1. It involves an added dissipation term, with coefficient $\mu$, in the partial differential equation, which, when balanced with inward convection controlled by the confinement parameter $\epsilon$, produces stable solutions. Another form is termed VC2, in which dissipation is balanced with nonlinear anti-diffusion to produce stable solitary-wave-like solutions. $\mu$: dissipation; $\epsilon$: inward convection for VC1 and nonlinear anti-diffusion for VC2. The main difference between VC1 and VC2 is that in the latter the centroid of the vortex follows the local velocity moment weighted by vorticity. This should provide greater accuracy than VC1 in cases where the convecting field is weak compared to the self-induced velocity of the vortex. One drawback is that VC2 is not as robust as VC1 because, while VC1 involves convection-like inward propagation of vorticity balanced by an outward second-order diffusion, VC2 involves a second-order inward propagation of vorticity balanced by a fourth-order outward dissipation. This approach has been further extended to solve the wave equation and is called wave confinement (WC). Immersed boundary To enforce no-slip boundary conditions on immersed surfaces, the surface is first represented implicitly by a smooth "level set" function, "f", defined at each grid point. This is the (signed) distance from each grid point to the nearest point on the surface of an object – positive outside, negative inside. Then, at each time step during the solution, velocities in the interior are set to zero. In a computation using VC, this results in a thin vortical region along the surface, which is smooth in the tangential direction, with no "staircase" effects. The important point is that no special logic is required in the "cut" cells, unlike many conventional schemes: only the same VC equations are applied, as in the rest of the grid, but with a different form for F. Also, unlike many conventional immersed surface schemes, which are inviscid because of cell size constraints, there is effectively a no-slip boundary condition, which results in a boundary layer with well-defined total vorticity and which, because of VC, remains thin, even after separation. The method is especially effective for complex configurations with separation from sharp corners.
Also, even with constant coefficients, it can approximately treat separation from smooth surfaces. This is relevant for general blunt bodies, which typically shed turbulent vorticity that induces a velocity around an upstream body; it is then inconsistent to use body-fitted grids, since the shed vorticity convects through parts of the grid that are not fitted to any body. Applications VC is used in many applications, including rotor wake computations, computation of wing-tip vortices, drag computations for vehicles, flow around urban layouts, smoke/contaminant propagation and special effects. It is also used in wave computations for communication purposes. References Numerical differential equations Computational fluid dynamics
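A rough sketch of how a VC1-type confinement force can be evaluated on a uniform 2D grid is given below, assuming the form commonly quoted in the literature, F = -eps * (n_hat x omega), with n_hat the unit vector along the gradient of |omega|. Since the article's own symbols were lost, eps, the grid spacing h, the sign convention and this exact form should all be read as assumptions rather than as Steinhoff's precise formulation.

import numpy as np

def vc1_force(omega: np.ndarray, eps: float, h: float):
    """Confinement body force (Fx, Fy) for a scalar 2D vorticity field omega."""
    mag = np.abs(omega)
    d_dy, d_dx = np.gradient(mag, h)               # gradients of |omega| on the grid
    norm = np.sqrt(d_dx**2 + d_dy**2) + 1e-12      # avoid division by zero
    nx, ny = d_dx / norm, d_dy / norm              # unit vector up the gradient of |omega|
    # With omega along z, n_hat x (omega z_hat) = omega * (ny, -nx); sign conventions vary
    Fx = -eps * omega * ny
    Fy = eps * omega * nx
    return Fx, Fy

# Example: a single Gaussian vortex blob on a coarse uniform grid
h = 0.1
x, y = np.meshgrid(np.arange(-1.0, 1.0, h), np.arange(-1.0, 1.0, h))
omega = np.exp(-(x**2 + y**2) / 0.05)
Fx, Fy = vc1_force(omega, eps=0.02, h=h)
print(Fx.shape, float(np.abs(Fx).max()))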
Vorticity confinement
[ "Physics", "Chemistry" ]
833
[ "Computational fluid dynamics", "Fluid dynamics", "Computational physics" ]
24,547,165
https://en.wikipedia.org/wiki/Spherium
The "spherium" model consists of two electrons trapped on the surface of a sphere of radius $R$. It has been used by Berry and collaborators to understand both weakly and strongly correlated systems and to suggest an "alternating" version of Hund's rule. Seidl studies this system in the context of density functional theory (DFT) to develop new correlation functionals within the adiabatic connection. Definition and solution The electronic Hamiltonian in atomic units is $\hat{H} = -\tfrac{1}{2}\left(\nabla_1^2 + \nabla_2^2\right) + \tfrac{1}{u}$, where $u$ is the interelectronic distance. For the singlet S states, it can then be shown that the wave function $\psi(u)$ satisfies a Schrödinger equation in the single variable $u$. Introducing a suitable dimensionless variable turns this into a Heun equation. Based on the known solutions of the Heun equation, wave functions are sought in the form of a power series in $u$; substitution into the previous equation yields a recurrence relation for the series coefficients, whose starting values are fixed by the Kato cusp condition $\psi'(0) = \tfrac{1}{2}\,\psi(0)$. For particular discrete combinations of the energy and the radius, the series terminates and the wave function reduces to a polynomial (whose number of roots between $u = 0$ and $u = 2R$ labels the state). The energy is then a root of a polynomial equation, and the corresponding radius is found from the previous equation; this yields the exact wave function of the $n$-th excited state of singlet S symmetry for that radius. We know from the work of Loos and Gill that the HF energy of the lowest singlet S state is $1/R$. It follows that the exact correlation energy at such a radius is much larger in magnitude than the limiting correlation energies of the helium-like ions or of Hooke's atoms. This confirms the view that electron correlation on the surface of a sphere is qualitatively different from that in three-dimensional physical space. Spherium on a 3-sphere Loos and Gill considered the case of two electrons confined to a 3-sphere repelling Coulombically. They report the ground-state energy of this system. See also List of quantum-mechanical systems with analytical solutions References Further reading Quantum chemistry Quantum models
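As a quick numerical cross-check of the Hartree-Fock value quoted above (a sketch, not taken from the article): for the lowest singlet S state both electrons occupy the constant orbital, whose kinetic energy vanishes, so the HF energy reduces to the average of 1/u over two independent points distributed uniformly on the sphere, which equals 1/R. The radius and sample size below are arbitrary.

import numpy as np

def mean_inverse_separation(R: float, samples: int = 200_000, seed: int = 0) -> float:
    """Monte Carlo estimate of <1/u> for two independent uniform points on a sphere."""
    rng = np.random.default_rng(seed)
    # Uniform points on a sphere of radius R via normalized Gaussian vectors
    a = rng.normal(size=(samples, 3))
    b = rng.normal(size=(samples, 3))
    a *= R / np.linalg.norm(a, axis=1, keepdims=True)
    b *= R / np.linalg.norm(b, axis=1, keepdims=True)
    u = np.linalg.norm(a - b, axis=1)
    return float(np.mean(1.0 / u))

R = 1.5
print(mean_inverse_separation(R), 1.0 / R)   # the two numbers should nearly agree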
Spherium
[ "Physics", "Chemistry" ]
407
[ "Quantum chemistry", "Quantum mechanics", "Quantum models", "Theoretical chemistry", " molecular", "Atomic", " and optical physics" ]
24,548,881
https://en.wikipedia.org/wiki/Nuclear%20magnetic%20resonance%20decoupling
Nuclear magnetic resonance decoupling (NMR decoupling for short) is a special method used in nuclear magnetic resonance (NMR) spectroscopy where a sample to be analyzed is irradiated at a certain frequency or frequency range to eliminate or partially the effect of coupling between certain nuclei. NMR coupling refers to the effect of nuclei on each other in atoms within a couple of bonds distance of each other in molecules. This effect causes NMR signals in a spectrum to be split into multiple peaks. Decoupling fully or partially eliminates splitting of the signal between the nuclei irradiated and other nuclei such as the nuclei being analyzed in a certain spectrum. NMR spectroscopy and sometimes decoupling can help determine structures of chemical compounds. Explanation NMR spectroscopy of a sample produces an NMR spectrum, which is essentially a graph of signal intensity on the vertical axis vs. chemical shift for a certain isotope on the horizontal axis. The signal intensity is dependent on the number of exactly equivalent nuclei in the sample at that chemical shift. NMR spectra are taken to analyze one isotope of nuclei at a time. Only certain types of isotopes of certain elements show up in NMR spectra. Only these isotopes cause NMR coupling. Nuclei of atoms having the same equivalent positions within a molecule also do not couple with each other. 1H (proton) NMR spectroscopy and 13C NMR spectroscopy analyze 1H and 13C nuclei, respectively, and are the most common types (most common analyte isotopes which show signals) of NMR spectroscopy. Homonuclear decoupling is when the nuclei being radio frequency (rf) irradiated are the same isotope as the nuclei being observed (analyzed) in the spectrum. Heteronuclear decoupling is when the nuclei being rf irradiated are of a different isotope than the nuclei being observed in the spectrum. For a given isotope, the entire range for all nuclei of that isotope can be irradiated in broad band decoupling, or only a select range for certain nuclei of that isotope can be irradiated. Practically all naturally occurring hydrogen (H) atoms have 1H nuclei, which show up in 1H NMR spectra. These 1H nuclei are often coupled with nearby non-equivalent 1H atomic nuclei within the same molecule. H atoms are most commonly bonded to carbon (C) atoms in organic compounds. About 99% of naturally occurring C atoms have 12C nuclei, which neither show up in NMR spectroscopy nor couple with other nuclei which do show signals. About 1% of naturally occurring C atoms have 13C nuclei, which do show signals in 13C NMR spectroscopy and do couple with other active nuclei such as 1H. Since the percentage of 13C is so low in natural isotopic abundance samples, the 13C coupling effects on other carbons and on 1H are usually negligible, and for all practical purposes splitting of 1H signals due to coupling with natural isotopic abundance carbon does not show up in 1H NMR spectra. In real life, however, the 13C coupling effect does show up on non-13C decoupled spectra of other magnetic nuclei, causing satellite signals. Similarly for all practical purposes, 13C signal splitting due to coupling with nearby natural isotopic abundance carbons is negligible in 13C NMR spectra. However, practically all hydrogen bonded to carbon atoms is 1H in natural isotopic abundance samples, including any 13C nuclei bonded to H atoms. In a 13C spectrum with no decoupling at all, each of the 13C signals is split according to how many H atoms that C atom is next to. 
In order to simplify the spectrum, 13C NMR spectroscopy is most often run fully proton decoupled, meaning 1H nuclei in the sample are broadly irradiated to fully decouple them from the 13C nuclei being analyzed. This full proton decoupling eliminates all coupling with H atoms and thus splitting due to H atoms in natural isotopic abundance compounds. Since coupling between other carbons in natural isotopic abundance samples is negligible, signals in fully proton decoupled 13C spectra in hydrocarbons and most signals from other organic compounds are single peaks. This way, the number of equivalent sets of carbon atoms in a chemical structure can be counted by counting singlet peaks, which in 13C spectra tend to be very narrow (thin). Other information about the carbon atoms can usually be determined from the chemical shift, such as whether the atom is part of a carbonyl group or an aromatic ring, etc. Such full proton decoupling can also help increase the intensity of 13C signals. There can also be off-resonance decoupling of 1H from 13C nuclei in 13C NMR spectroscopy, where weaker rf irradiation results in what can be thought of as partial decoupling. In such an off-resonance decoupled spectrum, only 1H atoms bonded to a carbon atom will split its 13C signal. The coupling constant, indicating a small frequency difference between split signal peaks, would be smaller than in an undecoupled spectrum. Looking at a compound's off-resonance proton-decoupled 13C spectrum can show how many hydrogens are bonded to the carbon atoms to further help elucidate the chemical structure. For most organic compounds, carbons bonded to 3 hydrogens (methyls) would appear as quartets (4-peak signals), carbons bonded to 2 equivalent hydrogens would appear as triplets (3-peak signals), carbons bonded to 1 hydrogen would be doublets (2-peak signals), and carbons not bonded directly to any hydrogens would be singlets (1-peak signals). Another decoupling method is specific proton decoupling (also called band-selective or narrowband). Here the selected "narrow" 1H frequency band of the (soft) decoupling RF pulse covers only a certain part of all 1H signals present in the spectrum. This can serve two purposes: (1) decreasing the deposited energy through additionally adjusting the RF pulse shapes/using composite pulses, (2) elucidating connectivities of NMR nuclei (applicable with both heteronuclear and homonuclear decoupling). Point 2 can be accomplished via decoupling e.g. of a single 1H signal which then leads to the collapse of the J coupling pattern of only those observed heteronuclear or non-decoupled 1H signals which are J coupled to the irradiated 1H signal. Other parts of the spectrum remain unaffected. In other words this specific decoupling method is useful for signal assignments which is a crucial step for further analyses e.g. with the aim of solving a molecular structure. Note that more complex phenomena might be observed when for example the decoupled 1H nuclei are exchanging with non-decoupled 1H nuclei in the sample with the exchange process taking place on the NMR time scale. This is exploited e.g. with chemical exchange saturation transfer (CEST) contrast agents in in vivo magnetic resonance spectroscopy. References Nuclear magnetic resonance
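The multiplet patterns listed above follow the familiar n + 1 rule: a 13C signal coupled to n equivalent attached protons splits into n + 1 lines. The sketch below spells this out; the dictionary of multiplet names is included only for readability.

MULTIPLET_NAMES = {1: "singlet", 2: "doublet", 3: "triplet", 4: "quartet"}

def carbon_multiplicity(attached_protons: int) -> str:
    """Number of peaks (n + 1 rule) for a 13C signal with n attached 1H nuclei."""
    lines = attached_protons + 1
    return MULTIPLET_NAMES.get(lines, f"{lines} lines")

for n, label in [(3, "CH3"), (2, "CH2"), (1, "CH"), (0, "quaternary C")]:
    print(label, "->", carbon_multiplicity(n))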
Nuclear magnetic resonance decoupling
[ "Physics", "Chemistry" ]
1,471
[ "Nuclear magnetic resonance", "Nuclear physics" ]
24,549,009
https://en.wikipedia.org/wiki/Kotcherlakota%20Rangadhama%20Rao%20Memorial%20Lecture%20Award
The Prof. Kotcherlakota Rangadhama Rao Memorial Lecture Award is given for outstanding contributions in the field of spectroscopy in physics. The award was established by the Indian National Science Academy, Calcutta, in 1979. The honour is awarded to Indian citizens. History The Memorial Lecture Award was established in 1979 in honour of Professor Kotcherlakota Rangadhama Rao by the students of Prof. K. Rangadhama Rao and the Indian National Science Academy, formerly the National Institute of Sciences of India, Calcutta. The lecture is awarded for outstanding contributions in the field of spectroscopy. The award carries an honorarium of Rs. 25,000/- and a citation. The list below gives the recipients of the Memorial Award since its inception in 1979. Recipients Source: Indian National Science Academy See also List of physics awards Notes References INSA Awards Physics awards Indian science and technology awards Awards established in 1979 Science lecture series Recurring events established in 1979 1979 establishments in India Indian National Science Academy Spectroscopy Physics events
Kotcherlakota Rangadhama Rao Memorial Lecture Award
[ "Physics", "Chemistry", "Technology" ]
207
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Science and technology awards", "Spectroscopy", "Physics awards" ]
24,549,947
https://en.wikipedia.org/wiki/Experimental%20Mechanics
Experimental Mechanics is a peer-reviewed scientific journal covering all areas of experimental mechanics. It is an official journal of the Society for Experimental Mechanics; it was established in 1961 and published monthly. From 1983 to 2003 it was published quarterly, increasing to 6 issues per year until 2009. Since then it has published 9 issues per year. The journal is published by Springer Science+Business Media and the editor-in-chief is Professor Alan Zehnder (Cornell University). The journal occasionally publishes special issues on focused topics. Abstracting and indexing The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.808. References External links English-language journals Engineering journals Academic journals established in 1961 Materials science journals Springer Science+Business Media academic journals 9 times per year journals
Experimental Mechanics
[ "Materials_science", "Engineering" ]
168
[ "Materials science journals", "Materials science" ]
30,665,449
https://en.wikipedia.org/wiki/Centrifugal%20extractor
A centrifugal extractor—also known as a centrifugal contactor or annular centrifugal contactor—uses the rotation of the rotor inside a centrifuge to mix two immiscible liquids outside the rotor and to separate the liquids in the field of gravity inside the rotor. This way, a centrifugal extractor generates a continuous extraction from one liquid phase into another liquid phase. A summary of contactor design principles and applications is included in a recent compilation. History The first liquid-liquid centrifugal contactor was invented by Walter Podbielniak with the patent filed in 1932, then a series of developed models which were and continue to be used for whole variety of processes including solvent extraction of minerals and the purification of vegetable oils, but notably for the production of penicillin in World War II. It has been employed in solvent extraction processes for metals valuable to the nuclear industry, for example as part of the Salt Waste Processing Facility at the Savannah River Site for implementation of the CSSX process to extract radioactive caesium from tank wastes stored there. Uses include recovery of valuable actinides in Spent Nuclear Fuel (SNF) reprocessing, specifically the recovery of fissile material. It is also used in the processing of Waste Electrical and Electronic Equipment. Monostage centrifugal extractor Two immiscible liquids of different densities are fed to the separate inlets and are rapidly mixed in the annular space between the spinning rotor and stationary housing. The mixed phases are directed toward the center of the rotor by radial vanes in the housing base. As the liquids enter the central opening of the rotor, they are accelerated toward the wall. The mixed phases are rapidly accelerated to rotor speed and separation begins as the liquids are displaced upward. A system of weirs at the top of the rotor allow each phase to exit the rotor where it lands in a collector ring and exits the stage. Flow from between stages is by gravity with no need for inter-stage pumps. The centrifugal contactors thus acts as a mixer, centrifuge and pump. Centrifugal contactors are typical referred to by the diameter of their rotor. Thus, a 5-inch centrifugal contactor is one having a 5-inch diameter rotor. Annular centrifugal contactors are relatively low revolutions-per-minute (rpm), moderate gravity enhancing (100–2000 G) machines, and can therefore be powered by a direct drive, variable speed motor. Typical RPM for small units (2 cm) is approximately 3600RPM while larger units would operate at lower RPM depending on their size (typical speed for a 5-inch [12.5 cm] contactor is ~1800RPM). The effectiveness of a centrifugal separation can be easily described as proportional to the product of the force exerted in multiples of gravity (g) and the residence time in seconds or g-seconds. Achieving a particular g-seconds value in a liquid–liquid centrifuge can be obtained in two ways: increasing the multiples of gravity or increasing the residence time. Creating higher g-force values for a specific rotor diameter is a function of rpm only. Multistage centrifugal extractor The feed solution initially containing one or more solutes (heavy phase on the cross section drawing Fig 3.), and an immiscible solvent having a different density (light phase on cross section sketches) flow counter-currently through the extractor’s rotor, designed with a stack of mechanical subassemblies representing the required number of separate stages. 
The successive mixing and separation operations performed in each mechanical stage permit the mass transfer of the solutes from the feed solution to the solvent. Each stage consists of a mixing chamber and a decantation chamber. In the mixing chamber the two phases are mixed and the transfer of the solutes to be extracted is achieved; a fixed disk allows the two phases to be mixed into an emulsion and operates as a pump to draw the two phases from the preceding stage. In the decantation chamber the two previously mixed liquids are thoroughly separated by centrifugal force; overflow weirs stabilize the separation area independently of flow rates. The interphase position depends on the diameter of the heavy-phase overflow weir, which is interchangeable and is selected according to the phase density ratio. Configurations Mix and separation As described above, the mix & separation configuration is the standard operation for centrifugal contactors used in liquid-liquid extraction processes. The two liquids (typically an aqueous phase (heavy) and an organic phase (light)) enter the annular mixing zone, where a liquid-liquid dispersion is formed and extraction occurs as solutes (e.g. dissolved metal ions) are transferred from one phase into the other. Inside the rotor, the liquids are separated into a heavy (blue) and a light (yellow) phase by their respective densities. The proportion of each phase (phase ratio), the total flow rate, the rotor speed, and the weir sizes are varied to optimize separation efficiency. The separated liquids are discharged without pressure and flow by gravity to exit the stage (note that the exit is higher than the inlet in Fig. 2). Separation by direct feed For applications requiring only separation of a pre-mixed dispersion (e.g. oil/water separation in environmental cleanup), the direct feed offers the option of feeding the mixed liquid stream at a low shear force directly into the rotor. Inside the rotor, the liquids are separated into a heavy (blue) and a light (yellow) phase. This principle is used to optimize the separation efficiency. The separated liquids are discharged without pressure. Multi-stage processing For solvent extraction processes in stage-wise equipment such as the centrifugal contactor, multiple contactors are typically arranged in series for extraction, scrubbing, and stripping (and perhaps other operations). The number of stages needed in each section of the process depends on the process design requirements (the necessary extraction factor). In the case in Fig. 6, four interconnected stages provide a continuous process in which the first stage is a decanting stage. The next two stages perform a counter-current extraction. The last stage performs a neutralization, connected as a cross-stream stage. See also Radial chromatography References External links CINC Industries Manufacturer of centrifugal contactors ROUSSELET ROBATEL Monostage centrifugal extractors ROUSSELET ROBATEL Multistage centrifugal extractors A centrifugal extractor washes, extracts and separates in a single processing stage (article in the journal Process). CINC Germany single and multi stage liquid liquid centrifugal extractors Centrifugal extractor for liquid liquid applications in Process What is an Annular Centrifugal Contactor? Computational Fluid Dynamics (CFD) modeling of centrifugal contactors Multistage centrifugal extractor Patent Laboratory equipment Liquid-liquid separation Centrifuges
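The g-force/g-seconds relationship described above for the monostage contactor can be illustrated with a short, self-contained calculation. The sketch below is only an illustration: RCF = ω²r/g is the standard expression for centrifugal acceleration in multiples of gravity, while the 5-inch rotor at ~1800 rpm and the 5-second residence time are assumed example values, not figures taken from any specific contactor.

```python
import math

G = 9.81  # standard gravity, m/s^2


def relative_centrifugal_force(rotor_diameter_m: float, rpm: float) -> float:
    """Centrifugal acceleration at the rotor wall, in multiples of gravity.

    RCF = omega^2 * r / g, where omega is the angular speed in rad/s and
    r is the rotor radius in metres.
    """
    omega = 2.0 * math.pi * rpm / 60.0
    radius = rotor_diameter_m / 2.0
    return omega ** 2 * radius / G


def g_seconds(rcf: float, residence_time_s: float) -> float:
    """Separation effectiveness figure: g-force multiplied by residence time."""
    return rcf * residence_time_s


# Illustrative values only: a 5-inch (0.127 m) rotor at ~1800 rpm,
# with an assumed 5 s liquid residence time inside the rotor.
rcf = relative_centrifugal_force(rotor_diameter_m=0.127, rpm=1800)
print(f"RCF ≈ {rcf:.0f} g")                          # ≈ 230 g, within the 100–2000 G range quoted above
print(f"g-seconds ≈ {g_seconds(rcf, 5.0):.0f} g·s")  # ≈ 1150 g·s
```

Doubling the rotor speed quadruples the g-force for a given rotor, which is why, as noted above, the attainable g-force for a specific rotor diameter is a function of rpm only.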
Centrifugal extractor
[ "Chemistry", "Engineering" ]
1,456
[ "Centrifugation", "Separation processes by phases", "Chemical equipment", "Liquid-liquid separation", "Centrifuges" ]
30,667,843
https://en.wikipedia.org/wiki/Minnesota%20functionals
Minnesota Functionals (Myz) are a group of highly parameterized approximate exchange-correlation energy functionals in density functional theory (DFT). They were developed by the group of Donald Truhlar at the University of Minnesota. The Minnesota functionals are available in a large number of popular quantum chemistry computer programs, and can be used for traditional quantum chemistry and solid-state physics calculations. These functionals are based on the meta-GGA approximation, i.e. they include terms that depend on the kinetic energy density, and are all based on complicated functional forms parametrized on high-quality benchmark databases. The Myz functionals are widely used and tested in the quantum chemistry community. Controversies Independent evaluations of the strengths and limitations of the Minnesota functionals with respect to various chemical properties have cast doubt on their accuracy. Some regard this criticism as unfair. In this view, because the Minnesota functionals aim for a balanced description of both main-group and transition-metal chemistry, studies assessing them solely on the basis of their performance on main-group databases yield biased information, as functionals that work well for main-group chemistry may fail for transition-metal chemistry. A study in 2017 highlighted what appeared to be the poor performance of Minnesota functionals on atomic densities. Others subsequently refuted this criticism, claiming that focusing only on atomic densities (including chemically unimportant, highly charged cations) is hardly relevant to real applications of density functional theory in computational chemistry. Another study found this to be the case: for Minnesota functionals, the errors in atomic densities and in energetics are indeed decoupled, and the Minnesota functionals perform better for diatomic densities than for atomic densities. The study concludes that atomic densities do not yield an accurate judgement of the performance of density functionals. Minnesota functionals have also been shown to reproduce chemically relevant Fukui functions better than they do the atomic densities. Family of functionals Minnesota 05 The first family of Minnesota functionals, published in 2005, is composed of: M05: Global hybrid functional with 28% HF exchange. M05-2X: Global hybrid functional with 56% HF exchange. In addition to the fraction of HF exchange, the M05 family of functionals includes 22 additional empirical parameters. A range-separated functional based on the M05 form, ωM05-D, which includes empirical atomic dispersion corrections, has been reported by Chai and coworkers. Minnesota 06 The '06 family represents a general improvement over the 05 family and is composed of: M06-L: Local functional, 0% HF exchange. Intended to be fast and to perform well for transition metals, inorganic and organometallic systems. revM06-L: Local functional, 0% HF exchange. M06-L revised for smoother potential energy curves and improved overall accuracy. M06: Global hybrid functional with 27% HF exchange. Intended for main group thermochemistry and non-covalent interactions, transition metal thermochemistry and organometallics. It is usually the most versatile of the 06 functionals, and because of this broad applicability it can be slightly worse than M06-2X for specific properties that require a high percentage of HF exchange, such as thermochemistry and kinetics. revM06: Global hybrid functional with 40.4% HF exchange.
Intended for a broad range of applications on main-group chemistry, transition-metal chemistry, and molecular structure prediction, to replace M06 and M06-2X. M06-2X: Global hybrid functional with 54% HF exchange. It is the top performer within the 06 functionals for main group thermochemistry, kinetics and non-covalent interactions; however, it cannot be used for cases where multi-reference species are or might be involved, such as transition metal thermochemistry and organometallics. M06-HF: Global hybrid functional with 100% HF exchange. Intended for charge transfer TD-DFT and systems where self-interaction is pathological. The M06 and M06-2X functionals introduce 35 and 32 empirically optimized parameters, respectively, into the exchange-correlation functional. A range-separated functional based on the M06 form, ωM06-D3, which includes empirical atomic dispersion corrections, has been reported by Chai and coworkers. Minnesota 08 The '08 family was created with the primary intent of improving the M06-2X functional form, retaining the performance for main group thermochemistry, kinetics and non-covalent interactions. This family is composed of two functionals with a high percentage of HF exchange, with performances similar to those of M06-2X: M08-HX: Global hybrid functional with 52.23% HF exchange. Intended for main group thermochemistry, kinetics and non-covalent interactions. M08-SO: Global hybrid functional with 56.79% HF exchange. Intended for main group thermochemistry, kinetics and non-covalent interactions. Minnesota 11 The '11 family introduces range-separation in the Minnesota functionals and modifications in the functional form and in the training databases. These modifications also cut the number of functionals in a complete family from 4 (M06-L, M06, M06-2X and M06-HF) to just 2: M11-L: Local functional (0% HF exchange) with dual-range DFT exchange. Intended to be fast, to perform well for transition metals, inorganics, organometallics and non-covalent interactions, and to improve substantially over M06-L. M11: Range-separated hybrid functional with 42.8% HF exchange in the short-range and 100% in the long-range. Intended for main group thermochemistry, kinetics and non-covalent interactions, with an intended performance comparable to that of M06-2X, and for TD-DFT applications, with an intended performance comparable to M06-HF. revM11: Range-separated hybrid functional with 22.5% HF exchange in the short-range and 100% in the long-range. Intended for good performance for electronic excitations and good predictions across the board for ground-state properties. Minnesota 12 The 12 family uses a nonseparable (N in MN) functional form, aiming to provide balanced performance for both chemistry and solid-state physics applications. It is composed of: MN12-L: A local functional, 0% HF exchange. The aim of the functional was to be very versatile and provide good computational performance and accuracy for energetic and structural problems in both chemistry and solid-state physics. MN12-SX: Screened-exchange (SX) hybrid functional with 25% HF exchange in the short-range and 0% HF exchange in the long-range. It was intended to be very versatile and provide good performance for energetic and structural problems in both chemistry and solid-state physics, at a computational cost that is intermediate between local and global hybrid functionals. Minnesota 15 The 15 functionals are the newest addition to the Minnesota family.
Like the 12 family, the functionals are based on a non-separable form, but unlike the 11 or 12 families the hybrid functional does not use range separation: MN15 is a global hybrid, as in the pre-11 families. The 15 family consists of two functionals: MN15, a global hybrid with 44% HF exchange, and MN15-L, a local functional with 0% HF exchange. Software implementations The Minnesota functionals are implemented in a large number of quantum chemistry programs, in several cases via the LibXC library. References External links The Truhlar Group Minnesota Databases for Chemistry and Physics The most recent review article on the performance of the Minnesota functionals Density functional theory Electronic structure methods University of Minnesota
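As a concrete illustration of how these functionals are invoked in practice, the sketch below runs a single-point M06-2X calculation on a small molecule with PySCF, one of the open-source packages that exposes the Minnesota functionals through LibXC. The water geometry, the def2-SVP basis, the grid level, and the exact spelling of the functional keyword are assumptions made for this example; the documentation of whichever program is actually used should be consulted, since keyword names differ between codes.

```python
# Minimal sketch: a single-point DFT calculation with a Minnesota functional.
# Assumes PySCF (with LibXC) is installed and accepts "M06-2X" as an xc keyword.
from pyscf import gto, dft

# A small test molecule (water); the geometry in Angstrom is illustrative only.
mol = gto.M(
    atom="""O  0.000  0.000  0.117
            H  0.000  0.757 -0.467
            H  0.000 -0.757 -0.467""",
    basis="def2-svp",
)

mf = dft.RKS(mol)
mf.xc = "M06-2X"      # global hybrid, 54% HF exchange (see the description above)
# Meta-GGA functionals such as the Minnesota family are sensitive to the
# quality of the numerical integration grid, so a denser grid is often used.
mf.grids.level = 5
energy = mf.kernel()  # total electronic energy in Hartree
print(f"M06-2X/def2-SVP total energy: {energy:.6f} Eh")
```

In codes that follow this pattern, switching to another member of the family (for example M06-L for a cheaper local calculation, or MN15 for the newest global hybrid) is just a matter of changing the xc keyword, provided the functional is available in the linked LibXC version.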
Minnesota functionals
[ "Physics", "Chemistry" ]
1,674
[ "Density functional theory", "Quantum chemistry", "Quantum mechanics", "Computational physics", "Electronic structure methods", "Computational chemistry" ]
30,671,368
https://en.wikipedia.org/wiki/SIMPLE%20%28dark%20matter%20experiment%29
SIMPLE (Superheated Instrument for Massive ParticLe Experiments) is an experiment searching for direct evidence of dark matter. It is located in a 61 m³ cavern at the 500 level of the Laboratoire Souterrain à Bas Bruit (LSBB) near Apt in southern France. The experiment is predominantly sensitive to spin-dependent interactions of weakly interacting massive particles (or WIMPs). SIMPLE is an international collaboration with members from Portugal, France, and the United States. Design The SIMPLE detector is based on superheated droplet detectors (SDDs), a suspension of 1–2% superheated liquid C2ClF5 droplets (~30 μm radius) in a viscoelastic 900 ml gel matrix, which undergo transitions to the gas phase upon energy deposition by incident radiation. This freon refrigerant is used as the active mass. In effect, each droplet behaves as a miniature bubble chamber. Once a nucleation has occurred, the acoustic shock wave is picked up by microphones. Each acquired signal is then fully discriminated in terms of external acoustic noise and gel-associated noise, and more recently particle discrimination has been added. The detectors are typically operated at ~200 kPa and ~280 K. Due to the construction technique, SDDs are almost insensitive to background radiation, and their sensitivity can be adjusted by controlling the temperature and pressure of each device. Results The final phase II analysis was published in Physical Review Letters in 2012. Spin-dependent cross-section limits were set for light WIMPs. References The SIMPLE Phase II dark matter search (2014) Fabrication and response of high concentration SIMPLE superheated droplet detectors with different liquids (2013) Final Analysis and Results of the Phase II SIMPLE Dark Matter Search (2012) Reply to Comment on First Results of the Phase II SIMPLE Dark Matter Search (2012) Comment on First Results of the Phase II SIMPLE Dark Matter Search (2012) First Results of the Phase II SIMPLE Dark Matter Search (2010) SIMPLE dark matter search results (2005) External links SIMPLE experiment website Experiments for dark matter search
SIMPLE (dark matter experiment)
[ "Physics" ]
423
[ "Dark matter", "Experiments for dark matter search", "Unsolved problems in physics" ]
40,016,175
https://en.wikipedia.org/wiki/4-HO-DSBT
4-HO-DsBT (4-hydroxy-N,N-di-sec-butyltryptamine) is a tryptamine derivative which acts as a serotonin receptor agonist. It was first made by Alexander Shulgin and is mentioned in his book TiHKAL, but was never tested by him. However, it has subsequently been tested in vitro, and unlike the n-butyl and isobutyl isomers, which are much weaker, the s-butyl derivative retains reasonable potency, with a 5-HT2A receptor affinity similar to that of MiPT but better selectivity over the 5-HT1A and 5-HT2B subtypes. See also 4-HO-DiPT 4-HO-DBT 4-HO-McPeT 4-HO-PiPT 5-MeO-DBT Dibutyltryptamine N-t-Butyltryptamine Robalzotan References Tryptamines Hydroxyarenes
4-HO-DSBT
[ "Chemistry" ]
208
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
40,017,873
https://en.wikipedia.org/wiki/Diabetes
Diabetes, also known as diabetes mellitus, is a group of common endocrine diseases characterized by sustained high blood sugar levels. Diabetes is due to either the pancreas not producing enough insulin, or the cells of the body becoming unresponsive to the hormone's effects. Classic symptoms include polydipsia (excessive thirst), polyuria (excessive urination), weight loss, and blurred vision. If left untreated, the disease can lead to various health complications, including disorders of the cardiovascular system, eye, kidney, and nerves. Diabetes accounts for approximately 4.2 million deaths every year, with an estimated 1.5 million caused by either untreated or poorly treated diabetes. The major types of diabetes are type 1 and type 2. The most common treatment for type 1 is insulin replacement therapy (insulin injections), while anti-diabetic medications (such as metformin and semaglutide) and lifestyle modifications can be used to manage type 2. Gestational diabetes, a form that arises during pregnancy in some women, normally resolves shortly after delivery. The number of people diagnosed as living with diabetes has increased sharply in recent decades, from 200 million in 1990 to 830 million by 2022. It affects one in seven of the adult population, with type 2 diabetes accounting for more than 95% of cases. These numbers have already risen beyond earlier projections of 783 million adults by 2045. The prevalence of the disease continues to increase, most dramatically in low- and middle-income nations. Rates are similar in women and men, with diabetes being the seventh leading cause of death globally. The global expenditure on diabetes-related healthcare is an estimated US$760 billion a year. Signs and symptoms Common symptoms of diabetes include increased thirst, frequent urination, extreme hunger, and unintended weight loss. Several other non-specific signs and symptoms may also occur, including fatigue, blurred vision, sweet-smelling urine/semen and genital itchiness due to Candida infection. About half of affected individuals may also be asymptomatic. Type 1 presents abruptly following a pre-clinical phase, while type 2 has a more insidious onset; patients may remain asymptomatic for many years. Diabetic ketoacidosis is a medical emergency that occurs most commonly in type 1, but may also occur in type 2 if it has been longstanding or if the individual has significant β-cell dysfunction. Excessive production of ketone bodies leads to signs and symptoms including nausea, vomiting, abdominal pain, the smell of acetone in the breath, deep breathing known as Kussmaul breathing, and in severe cases decreased level of consciousness. Hyperosmolar hyperglycemic state is another emergency characterized by dehydration secondary to severe hyperglycemia, with resultant hypernatremia leading to an altered mental state and possibly coma. Hypoglycemia is a recognized complication of insulin treatment used in diabetes. An acute presentation can range from mild symptoms such as sweating, trembling, and palpitations to more serious effects including impaired cognition, confusion, seizures, coma, and rarely death. Recurrent hypoglycemic episodes may lower the glycemic threshold at which symptoms occur, meaning mild symptoms may not appear before cognitive deterioration begins to occur. Long-term complications The major long-term complications of diabetes relate to damage to blood vessels at both macrovascular and microvascular levels.
Diabetes doubles the risk of cardiovascular disease, and about 75% of deaths in people with diabetes are due to coronary artery disease. Other macrovascular morbidities include stroke and peripheral artery disease. Microvascular disease affects the eyes, kidneys, and nerves. Damage to the retina, known as diabetic retinopathy, is the most common cause of blindness in people of working age. The eyes can also be affected in other ways, including development of cataract and glaucoma. It is recommended that people with diabetes visit an optometrist or ophthalmologist once a year. Diabetic nephropathy is a major cause of chronic kidney disease, accounting for over 50% of patients on dialysis in the United States. Diabetic neuropathy, damage to nerves, manifests in various ways, including sensory loss, neuropathic pain, and autonomic dysfunction (such as postural hypotension, diarrhoea, and erectile dysfunction). Loss of pain sensation predisposes to trauma that can lead to diabetic foot problems (such as ulceration), the most common cause of non-traumatic lower-limb amputation. Hearing loss is another long-term complication associated with diabetes. Based on extensive data and numerous cases of gallstone disease, it appears that a causal link might exist between type 2 diabetes and gallstones. People with diabetes are at a higher risk of developing gallstones compared to those without diabetes. There is a link between cognitive deficit and diabetes; studies have shown that diabetic individuals are at a greater risk of cognitive decline, and have a greater rate of decline compared to those without the disease. The condition also predisposes to falls in the elderly, especially those treated with insulin. Types Diabetes is classified by the World Health Organization into six categories: type 1 diabetes, type 2 diabetes, hybrid forms of diabetes (including slowly evolving, immune-mediated diabetes of adults and ketosis-prone type 2 diabetes), hyperglycemia first detected during pregnancy, "other specific types", and "unclassified diabetes". Diabetes is a more variable disease than once thought, and individuals may have a combination of forms. Type 1 Type 1 accounts for 5 to 10% of diabetes cases and is the most common type diagnosed in patients under 20 years; however, the older term "juvenile-onset diabetes" is no longer used as onset in adulthood is not unusual. The disease is characterized by loss of the insulin-producing beta cells of the pancreatic islets, leading to severe insulin deficiency, and can be further classified as immune-mediated or idiopathic (without known cause). The majority of cases are immune-mediated, in which a T cell-mediated autoimmune attack causes loss of beta cells and thus insulin deficiency. Patients often have irregular and unpredictable blood sugar levels due to very low insulin and an impaired counter-response to hypoglycemia. Type 1 diabetes is partly inherited, with multiple genes, including certain HLA genotypes, known to influence the risk of diabetes. In genetically susceptible people, the onset of diabetes can be triggered by one or more environmental factors, such as a viral infection or diet. Several viruses have been implicated, but to date there is no stringent evidence to support this hypothesis in humans. Type 1 diabetes can occur at any age, and a significant proportion is diagnosed during adulthood. 
Latent autoimmune diabetes of adults (LADA) is the diagnostic term applied when type 1 diabetes develops in adults; it has a slower onset than the same condition in children. Given this difference, some use the unofficial term "type 1.5 diabetes" for this condition. Adults with LADA are frequently initially misdiagnosed as having type 2 diabetes, based on age rather than a cause. LADA leaves adults with higher levels of insulin production than type 1 diabetes, but not enough insulin production for healthy blood sugar levels. Type 2 Type 2 diabetes is characterized by insulin resistance, which may be combined with relatively reduced insulin secretion. The defective responsiveness of body tissues to insulin is believed to involve the insulin receptor. However, the specific defects are not known. Diabetes mellitus cases due to a known defect are classified separately. Type 2 diabetes is the most common type of diabetes mellitus accounting for 95% of diabetes. Many people with type 2 diabetes have evidence of prediabetes (impaired fasting glucose and/or impaired glucose tolerance) before meeting the criteria for type 2 diabetes. The progression of prediabetes to overt type 2 diabetes can be slowed or reversed by lifestyle changes or medications that improve insulin sensitivity or reduce the liver's glucose production. Type 2 diabetes is primarily due to lifestyle factors and genetics. A number of lifestyle factors are known to be important to the development of type 2 diabetes, including obesity (defined by a body mass index of greater than 30), lack of physical activity, poor diet such as Western Pattern Diet, stress, and urbanization. Excess body fat is associated with 30% of cases in people of Chinese and Japanese descent, 60–80% of cases in those of European and African descent, and 100% of Pima Indians and Pacific Islanders. Even those who are not obese may have a high waist–hip ratio. Dietary factors such as sugar-sweetened drinks are associated with an increased risk. The type of fats in the diet is also important, with saturated fat and trans fats increasing the risk and polyunsaturated and monounsaturated fat decreasing the risk. Eating white rice excessively may increase the risk of diabetes, especially in Chinese and Japanese people. Lack of physical activity may increase the risk of diabetes in some people. Adverse childhood experiences, including abuse, neglect, and household difficulties, increase the likelihood of type 2 diabetes later in life by 32%, with neglect having the strongest effect. Antipsychotic medication side effects (specifically metabolic abnormalities, dyslipidemia and weight gain) are also potential risk factors. Gestational diabetes Gestational diabetes resembles type 2 diabetes in several respects, involving a combination of relatively inadequate insulin secretion and responsiveness. It occurs in about 2–10% of all pregnancies and may improve or disappear after delivery. It is recommended that all pregnant women get tested starting around 24–28 weeks gestation. It is most often diagnosed in the second or third trimester because of the increase in insulin-antagonist hormone levels that occurs at this time. However, after pregnancy approximately 5–10% of women with gestational diabetes are found to have another form of diabetes, most commonly type 2. Gestational diabetes is fully treatable, but requires careful medical supervision throughout the pregnancy. 
Management may include dietary changes, blood glucose monitoring, and in some cases, insulin may be required. Though it may be transient, untreated gestational diabetes can damage the health of the fetus or mother. Risks to the baby include macrosomia (high birth weight), congenital heart and central nervous system abnormalities, and skeletal muscle malformations. Increased levels of insulin in a fetus's blood may inhibit fetal surfactant production and cause infant respiratory distress syndrome. A high blood bilirubin level may result from red blood cell destruction. In severe cases, perinatal death may occur, most commonly as a result of poor placental perfusion due to vascular impairment. Labor induction may be indicated with decreased placental function. A caesarean section may be performed if there is marked fetal distress or an increased risk of injury associated with macrosomia, such as shoulder dystocia. Other types Maturity onset diabetes of the young (MODY) is a rare autosomal dominant inherited form of diabetes, due to one of several single-gene mutations causing defects in insulin production. It is significantly less common than the three main types, constituting 1–2% of all cases. The name of this disease refers to early hypotheses as to its nature. Being due to a defective gene, this disease varies in age at presentation and in severity according to the specific gene defect; thus, there are at least 13 subtypes of MODY. People with MODY often can control it without using insulin. Some cases of diabetes are caused by the body's tissue receptors not responding to insulin (even when insulin levels are normal, which is what separates it from type 2 diabetes); this form is very uncommon. Genetic mutations (autosomal or mitochondrial) can lead to defects in beta cell function. Abnormal insulin action may also have been genetically determined in some cases. Any disease that causes extensive damage to the pancreas may lead to diabetes (for example, chronic pancreatitis and cystic fibrosis). Diseases associated with excessive secretion of insulin-antagonistic hormones can cause diabetes (which is typically resolved once the hormone excess is removed). Many drugs impair insulin secretion and some toxins damage pancreatic beta cells, whereas others increase insulin resistance (especially glucocorticoids which can provoke "steroid diabetes"). The ICD-10 (1992) diagnostic entity, malnutrition-related diabetes mellitus (ICD-10 code E12), was deprecated by the World Health Organization (WHO) when the current taxonomy was introduced in 1999. Yet another form of diabetes that people may develop is double diabetes. This is when a type 1 diabetic becomes insulin resistant, the hallmark for type 2 diabetes or has a family history for type 2 diabetes. It was first discovered in 1990 or 1991. The following is a list of disorders that may increase the risk of diabetes: Genetic defects of β-cell function Maturity onset diabetes of the young Mitochondrial DNA mutations Genetic defects in insulin processing or insulin action Defects in proinsulin conversion Insulin gene mutations Insulin receptor mutations Exocrine pancreatic defects (see Type 3c diabetes, i.e. 
pancreatogenic diabetes) Chronic pancreatitis Pancreatectomy Pancreatic neoplasia Cystic fibrosis Hemochromatosis Fibrocalculous pancreatopathy Endocrinopathies Growth hormone excess (acromegaly) Cushing syndrome Hyperthyroidism Hypothyroidism Pheochromocytoma Glucagonoma Infections Cytomegalovirus infection Coxsackievirus B Drugs Glucocorticoids Thyroid hormone β-adrenergic agonists Statins Pathophysiology Insulin is the principal hormone that regulates the uptake of glucose from the blood into most cells of the body, especially liver, adipose tissue and muscle, except smooth muscle, in which insulin acts via IGF-1. Therefore, deficiency of insulin or the insensitivity of its receptors plays a central role in all forms of diabetes mellitus. The body obtains glucose from three main sources: the intestinal absorption of food; the breakdown of glycogen (glycogenolysis), the storage form of glucose found in the liver; and gluconeogenesis, the generation of glucose from non-carbohydrate substrates in the body. Insulin plays a critical role in regulating glucose levels in the body. Insulin can inhibit the breakdown of glycogen or the process of gluconeogenesis, it can stimulate the transport of glucose into fat and muscle cells, and it can stimulate the storage of glucose in the form of glycogen. Insulin is released into the blood by beta cells (β-cells), found in the islets of Langerhans in the pancreas, in response to rising levels of blood glucose, typically after eating. Insulin is used by about two-thirds of the body's cells to absorb glucose from the blood for use as fuel, for conversion to other needed molecules, or for storage. Lower glucose levels result in decreased insulin release from the beta cells and in the breakdown of glycogen to glucose. This process is mainly controlled by the hormone glucagon, which acts in the opposite manner to insulin. If the amount of insulin available is insufficient, or if cells respond poorly to the effects of insulin (insulin resistance), or if the insulin itself is defective, then glucose is not absorbed properly by the body cells that require it, and is not stored appropriately in the liver and muscles. The net effect is persistently high levels of blood glucose, poor protein synthesis, and other metabolic derangements, such as metabolic acidosis in cases of complete insulin deficiency. When there is too much glucose in the blood for a long time, the kidneys cannot reabsorb it all (the threshold of reabsorption is reached) and the extra glucose gets passed out of the body through urine (glycosuria). This increases the osmotic pressure of the urine and inhibits reabsorption of water by the kidney, resulting in increased urine production (polyuria) and increased fluid loss. Lost blood volume is replaced osmotically from water in body cells and other body compartments, causing dehydration and increased thirst (polydipsia). In addition, intracellular glucose deficiency stimulates appetite, leading to excessive food intake (polyphagia). Diagnosis Diabetes mellitus is diagnosed with a test for the glucose content of the blood, by demonstrating any one of the following: Fasting plasma glucose level ≥ 7.0 mmol/L (126 mg/dL). For this test, blood is taken after a period of fasting, i.e. in the morning before breakfast, after the patient has had sufficient time to fast overnight, or at least 8 hours before the test.
Plasma glucose ≥ 11.1 mmol/L (200 mg/dL) two hours after a 75 gram oral glucose load as in a glucose tolerance test (OGTT) Symptoms of high blood sugar and plasma glucose ≥ 11.1 mmol/L (200 mg/dL) either while fasting or not fasting Glycated hemoglobin (HbA1C) ≥ 48 mmol/mol (≥ 6.5 DCCT %). A positive result, in the absence of unequivocal high blood sugar, should be confirmed by a repeat of any of the above methods on a different day. It is preferable to measure a fasting glucose level because of the ease of measurement and the considerable time commitment of formal glucose tolerance testing, which takes two hours to complete and offers no prognostic advantage over the fasting test. According to the current definition, two fasting glucose measurements at or above 7.0 mmol/L (126 mg/dL) are considered diagnostic for diabetes mellitus. Per the WHO, people with fasting glucose levels from 6.1 to 6.9 mmol/L (110 to 125 mg/dL) are considered to have impaired fasting glucose. People with plasma glucose at or above 7.8 mmol/L (140 mg/dL), but not over 11.1 mmol/L (200 mg/dL), two hours after a 75 gram oral glucose load are considered to have impaired glucose tolerance. Of these two prediabetic states, the latter in particular is a major risk factor for progression to full-blown diabetes mellitus, as well as cardiovascular disease. The American Diabetes Association (ADA) has since 2003 used a slightly different range for impaired fasting glucose, of 5.6 to 6.9 mmol/L (100 to 125 mg/dL). Glycated hemoglobin is better than fasting glucose for determining risks of cardiovascular disease and death from any cause. Prevention There is no known preventive measure for type 1 diabetes. However, islet autoimmunity and multiple antibodies can be a strong predictor of the onset of type 1 diabetes. Type 2 diabetes, which accounts for 85–90% of all cases worldwide, can often be prevented or delayed by maintaining a normal body weight, engaging in physical activity, and eating a healthy diet. Higher levels of physical activity (more than 90 minutes per day) reduce the risk of diabetes by 28%. Dietary changes known to be effective in helping to prevent diabetes include maintaining a diet rich in whole grains and fiber, and choosing good fats, such as the polyunsaturated fats found in nuts, vegetable oils, and fish. Limiting sugary beverages and eating less red meat and other sources of saturated fat can also help prevent diabetes. Tobacco smoking is also associated with an increased risk of diabetes and its complications, so smoking cessation can be an important preventive measure as well. The relationship between type 2 diabetes and the main modifiable risk factors (excess weight, unhealthy diet, physical inactivity and tobacco use) is similar in all regions of the world. There is growing evidence that the underlying determinants of diabetes are a reflection of the major forces driving social, economic and cultural change: globalization, urbanization, population aging, and the general health policy environment. Comorbidity Diabetes patients' comorbidities have a significant impact on medical expenses and related costs. It has been demonstrated that patients with diabetes are more likely to experience respiratory, urinary tract, and skin infections, and to develop atherosclerosis, hypertension, and chronic kidney disease, putting them at increased risk of infection and of complications that require medical attention.
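The fasting-glucose thresholds summarized above (the WHO criteria, together with the slightly lower ADA cut-off for impaired fasting glucose) can be illustrated with a short sketch. The helper below is a hypothetical teaching example, not a clinical tool: it encodes only the fasting-glucose cut-offs quoted in this section and the usual conversion factor of roughly 18 between mg/dL and mmol/L for glucose (molar mass ≈ 180 g/mol).

```python
MGDL_PER_MMOL = 18.016  # 1 mmol/L of glucose ≈ 18 mg/dL (molar mass ≈ 180.16 g/mol)


def mgdl_to_mmol(mg_dl: float) -> float:
    """Convert a glucose concentration from mg/dL to mmol/L."""
    return mg_dl / MGDL_PER_MMOL


def classify_fasting_glucose(mmol_l: float, criteria: str = "WHO") -> str:
    """Classify a fasting plasma glucose value against the cut-offs quoted above.

    A single reading is not diagnostic: as noted in the text, a positive
    result should be confirmed by a repeat measurement on a different day.
    """
    if mmol_l >= 7.0:                              # >= 126 mg/dL
        return "in the diabetic range"
    lower = 5.6 if criteria == "ADA" else 6.1      # 100 mg/dL (ADA) vs 110 mg/dL (WHO)
    if mmol_l >= lower:
        return "impaired fasting glucose (prediabetes)"
    return "normal"


print(classify_fasting_glucose(mgdl_to_mmol(118)))         # WHO: impaired fasting glucose
print(classify_fasting_glucose(mgdl_to_mmol(104), "ADA"))  # ADA: impaired; WHO would call this normal
print(classify_fasting_glucose(5.0))                       # normal
```

The sketch deliberately omits the oral glucose tolerance and HbA1c criteria listed above, which involve different thresholds and test conditions.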
Patients with diabetes mellitus are more likely to experience certain infections, such as COVID-19, with prevalence rates ranging from 5.3 to 35.5%. Maintaining adequate glycemic control is the primary goal of diabetes management, since it is critical to preventing or postponing such complications. People with type 1 diabetes have higher rates of autoimmune disorders than the general population. An analysis of a type 1 diabetes registry found that 27% of the 25,000 participants had other autoimmune disorders. Between 2% and 16% of people with type 1 diabetes also have celiac disease. Management Diabetes management concentrates on keeping blood sugar levels close to normal, without causing low blood sugar. This can usually be accomplished with dietary changes, exercise, weight loss, and use of appropriate medications (insulin, oral medications). Learning about the disease and actively participating in the treatment is important, since complications are far less common and less severe in people who have well-managed blood sugar levels. The goal of treatment is an A1C level below 7%. Attention is also paid to other health problems that may accelerate the negative effects of diabetes. These include smoking, high blood pressure, metabolic syndrome, obesity, and lack of regular exercise. Specialized footwear is widely used to reduce the risk of diabetic foot ulcers by relieving the pressure on the foot. Foot examination for patients living with diabetes should be done annually and should include sensation testing and assessment of foot biomechanics, vascular integrity and foot structure. Concerning those with severe mental illness, the efficacy of type 2 diabetes self-management interventions is still poorly explored, with insufficient scientific evidence to show whether these interventions have similar results to those observed in the general population. Lifestyle People with diabetes can benefit from education about the disease and treatment, dietary changes, and exercise, with the goal of keeping both short-term and long-term blood glucose levels within acceptable bounds. In addition, given the associated higher risks of cardiovascular disease, lifestyle modifications are recommended to control blood pressure. Weight loss can prevent progression from prediabetes to diabetes type 2, decrease the risk of cardiovascular disease, or result in a partial remission in people with diabetes. No single dietary pattern is best for all people with diabetes. Healthy dietary patterns, such as the Mediterranean diet, low-carbohydrate diet, or DASH diet, are often recommended, although evidence does not support one over the others. According to the ADA, "reducing overall carbohydrate intake for individuals with diabetes has demonstrated the most evidence for improving glycemia", and for individuals with type 2 diabetes who cannot meet the glycemic targets or where reducing anti-glycemic medications is a priority, low or very-low carbohydrate diets are a viable approach. For overweight people with type 2 diabetes, any diet that achieves weight loss is effective. A 2020 Cochrane systematic review compared several non-nutritive sweeteners to sugar, placebo and a nutritive low-calorie sweetener (tagatose), but the results were unclear for effects on HbA1c, body weight and adverse events. The included studies were mainly of very low certainty and did not report on health-related quality of life, diabetes complications, all-cause mortality or socioeconomic effects.
Exercise has been shown to improve health outcomes. However, fear of hypoglycemia can negatively affect how youth who have been diagnosed with diabetes view exercise. Managing insulin, carbohydrate intake, and physical activity can become a burden that drives youth away from enjoying exercise and benefiting from it. Several studies have examined what can be done for young people diagnosed with type 1 diabetes. One study focused on the impact of exercise education on physical activity. Over the course of a 12-month program, youth and their parents participated in four education sessions covering the benefits of exercise, safe procedures, glucose control, and physical activity. In a survey conducted at the beginning, youth and parents reported their fear of hypoglycemia. At the end of the program, most of the youth and parents expressed confidence in managing and handling situations involving hypoglycemia, and in some instances youth provided feedback that a continuation of the sessions would be beneficial. Two other studies investigated how exercise affects adolescents with T1D. In one of those studies, changes in glucose with exercise were assessed according to minutes of activity per day, intensity, duration, and heart rate, and glucose was monitored during exercise, after exercise, and overnight. The other study investigated how different types of exercise affect glucose levels, comparing continuous moderate exercise with interval high-intensity exercise. Both types consisted of two sets of 10-minute work at different pedaling paces: the continuous group pedaled at 50% intensity with a 5-minute passive recovery, while the high-intensity group pedaled at 150% intensity for 15 seconds, intermixed with 30-second passive recoveries. When the data were analyzed, the studies comparing the different intensities found that insulin and carbohydrate intake did not differ significantly before or after exercise. Regarding glucose, there was a greater drop in blood glucose immediately after exercise in the high-intensity condition (−1.47 mmol/L), whereas during recovery the continuous exercise showed a greater decrease in blood glucose; overall, continuous exercise proved more favorable for managing blood glucose levels. The other study also reported that exercise had a notable impact on glucose levels: post-exercise measurements showed a low mean glucose level occurring 12 to 16 hours after exercising, although hypoglycemia rates were higher in participants who exercised for longer sessions (≥90 minutes). Overall, participants showed well-managed glucose control by taking in appropriate amounts of carbohydrate without any insulin adjustments. Finally, the study that educated youth and parents about the importance of exercise and the management of hypoglycemia found that many youths felt confident to continue exercising regularly and were able to manage their glucose levels. Together, these findings suggest that youth and parents can be shown that being physically active is possible, at specific intensities and with a proper understanding of how to handle glucose control over time. Diabetes and youth Youth dealing with diabetes face unique challenges. These include the emotional, psychological, and social implications of managing a chronic condition at such a young age.
Both forms of diabetes carry long-term risks of complications such as cardiovascular disease, kidney damage, and nerve damage. This is why early intervention and effective management are important for improving long-term health. Physical activity plays a vital role in managing diabetes, improving glycemic control, and enhancing the overall quality of life for children and adolescents. Younger children and adolescents with T1D tend to be more physically active compared to older individuals. This is possibly because of the more demanding schedules and sedentary lifestyles of older adolescents, who are often in high school or university. This age-related decrease in physical activity is a potential challenge to maintaining a healthy lifestyle. People who have had T1D for a longer time also tend to be less active. As diabetes progresses, people may face more barriers to engaging in physical activity. Examples include anxiety about experiencing hypoglycemic events during exercise or the physical challenges posed by the long-term complications that diabetes causes. Increased physical activity in youth with T1D can be associated with improved health. These outcomes can include better lipid profiles (higher HDL-C and lower triglycerides), healthier body composition (reduced waist circumference and BMI), and improved overall physical health. These benefits are especially important during childhood and adolescence because this is when proper growth and development are occurring. Younger people with type 2 diabetes tend to have lower levels of physical activity and cardiorespiratory fitness (CRF) compared to their peers without diabetes. This contributes to their poorer overall health and increases the risk of cardiovascular and metabolic complications. Despite recommendations for physical activity as part of diabetes management, many youth and young adolescents with type 2 diabetes do not meet the guidelines, hindering their ability to effectively manage blood glucose levels and improve their health. CRF is a key health indicator, and higher levels of CRF are associated with better health outcomes. This means that increasing CRF through exercise can provide important benefits for managing type 2 diabetes. There is a need for targeted interventions that promote physical activity and improve CRF in youth with type 2 diabetes to help reduce the risk of long-term complications. Resistance training has been found to have no significant effect on insulin sensitivity in children and adolescents, despite positive trends. Intervention length, training intensity, and the participants' physical maturation might explain the mixed results. Longer and higher-intensity programs showed more promising results. Future research could focus on more severe metabolic conditions such as type 2 diabetes, investigate the role of physical maturation, and consider longer intervention periods. While resistance training complements aerobic exercise, its standalone effects on insulin sensitivity remain unclear. Medications Glucose control Most medications used to treat diabetes act by lowering blood sugar levels through different mechanisms. There is broad consensus that when people with diabetes maintain tight glucose control – keeping the glucose levels in their blood within normal ranges – they experience fewer complications, such as kidney problems or eye problems.
There is, however, debate as to whether this is appropriate and cost-effective for people later in life, in whom the risk of hypoglycemia may be more significant. There are a number of different classes of anti-diabetic medications. Type 1 diabetes requires treatment with insulin, ideally using a "basal bolus" regimen that most closely matches normal insulin release: long-acting insulin for the basal rate and short-acting insulin with meals. Type 2 diabetes is generally treated with medication that is taken by mouth (e.g. metformin), although some people eventually require injectable treatment with insulin or GLP-1 agonists. Metformin is generally recommended as a first-line treatment for type 2 diabetes, as there is good evidence that it decreases mortality. It works by decreasing the liver's production of glucose and increasing the amount of glucose stored in peripheral tissue. Several other groups of drugs, mainly oral medications, may also decrease blood sugar in type 2 diabetes. These include agents that increase insulin release (sulfonylureas), agents that decrease absorption of sugar from the intestines (acarbose), agents that inhibit the enzyme dipeptidyl peptidase-4 (DPP-4) that inactivates incretins such as GLP-1 and GIP (sitagliptin), agents that make the body more sensitive to insulin (thiazolidinediones) and agents that increase the excretion of glucose in the urine (SGLT2 inhibitors). When insulin is used in type 2 diabetes, a long-acting formulation is usually added initially, while continuing oral medications. Some severe cases of type 2 diabetes may also be treated with insulin, which is increased gradually until glucose targets are reached. Blood pressure lowering Cardiovascular disease is a serious complication associated with diabetes, and many international guidelines recommend blood pressure treatment targets that are lower than 140/90 mmHg for people with diabetes. However, there is only limited evidence regarding what the lower targets should be. A 2016 systematic review found potential harm in treating to targets lower than 140 mmHg, and a subsequent systematic review in 2019 found no evidence of additional benefit from blood pressure lowering to between 130 and 140 mmHg, although there was an increased risk of adverse events. 2015 American Diabetes Association recommendations are that people with diabetes and albuminuria should receive an inhibitor of the renin-angiotensin system to reduce the risks of progression to end-stage renal disease, cardiovascular events, and death. There is some evidence that angiotensin converting enzyme inhibitors (ACEIs) are superior to other inhibitors of the renin-angiotensin system, such as angiotensin receptor blockers (ARBs) or aliskiren, in preventing cardiovascular disease, although a more recent review found similar effects of ACEIs and ARBs on major cardiovascular and renal outcomes. There is no evidence that combining ACEIs and ARBs provides additional benefits. Aspirin The use of aspirin to prevent cardiovascular disease in diabetes is controversial. Aspirin is recommended by some in people at high risk of cardiovascular disease; however, routine use of aspirin has not been found to improve outcomes in uncomplicated diabetes. 2015 American Diabetes Association recommendations for aspirin use (based on expert consensus or clinical experience) are that low-dose aspirin use is reasonable in adults with diabetes who are at intermediate risk of cardiovascular disease (10-year cardiovascular disease risk, 5–10%).
National guidelines for England and Wales by the National Institute for Health and Care Excellence (NICE) recommend against the use of aspirin in people with type 1 or type 2 diabetes who do not have confirmed cardiovascular disease. Surgery Weight loss surgery in those with obesity and type 2 diabetes is often an effective measure. Many are able to maintain normal blood sugar levels with little or no medication following surgery, and long-term mortality is decreased. There is, however, a short-term mortality risk of less than 1% from the surgery. The body mass index cutoffs for when surgery is appropriate are not yet clear. It is recommended that this option be considered in those who are unable to get both their weight and blood sugar under control. A pancreas transplant is occasionally considered for people with type 1 diabetes who have severe complications of their disease, including end stage kidney disease requiring kidney transplantation. Diabetic peripheral neuropathy (DPN) affects 30% of all diabetes patients. When nerve compression is superimposed on DPN, the condition may be treatable with multiple nerve decompressions. The theory is that DPN predisposes peripheral nerves to compression at anatomical sites of narrowing, and that the majority of DPN symptoms are actually attributable to nerve compression, a treatable condition, rather than DPN itself. The surgery is associated with lower pain scores, higher two-point discrimination (a measure of sensory improvement), lower rate of ulcerations, fewer falls (in the case of lower extremity decompression), and fewer amputations. Self-management and support In countries using a general practitioner system, such as the United Kingdom, care may take place mainly outside hospitals, with hospital-based specialist care used only in case of complications, difficult blood sugar control, or research projects. In other circumstances, general practitioners and specialists share care in a team approach. Evidence has shown that social prescribing led to slight improvements in blood sugar control for people with type 2 diabetes. Home telehealth support can be an effective management technique. The use of technology to deliver educational programs for adults with type 2 diabetes includes computer-based self-management interventions that collect data in order to provide tailored responses and facilitate self-management. There is no adequate evidence to support effects on cholesterol, blood pressure, behavioral change (such as physical activity levels and diet), depression, weight and health-related quality of life, nor on other biological, cognitive or emotional outcomes. Epidemiology An estimated 382 million people worldwide had diabetes in 2013, up from 108 million in 1980. Accounting for the shifting age structure of the global population, the prevalence of diabetes is 8.8% among adults, nearly double the rate of 4.7% in 1980. Type 2 makes up about 90% of the cases. Some data indicate rates are roughly equal in women and men, but male excess in diabetes has been found in many populations with higher type 2 incidence, possibly due to sex-related differences in insulin sensitivity, consequences of obesity and regional body fat deposition, and other contributing factors such as high blood pressure, tobacco smoking, and alcohol intake. The WHO estimates that diabetes resulted in 1.5 million deaths in 2012, making it the 8th leading cause of death.
However, another 2.2 million deaths worldwide were attributable to high blood glucose and the increased risks of cardiovascular disease and other associated complications (e.g. kidney failure), which often lead to premature death and are often listed as the underlying cause on death certificates rather than diabetes. For example, in 2017, the International Diabetes Federation (IDF) estimated that diabetes resulted in 4.0 million deaths worldwide, using modeling to estimate the total number of deaths that could be directly or indirectly attributed to diabetes. Diabetes occurs throughout the world but is more common (especially type 2) in more developed countries. The greatest increase in rates has, however, been seen in low- and middle-income countries, where more than 80% of diabetic deaths occur. The fastest prevalence increase is expected to occur in Asia and Africa, where most people with diabetes will probably live in 2030. The increase in rates in developing countries follows the trend of urbanization and lifestyle changes, including increasingly sedentary lifestyles, less physically demanding work and the global nutrition transition, marked by increased intake of foods that are high energy-dense but nutrient-poor (often high in sugar and saturated fats, sometimes referred to as the "Western-style" diet). The global number of diabetes cases might increase by 48% between 2017 and 2045. As of 2020, 38% of all US adults had prediabetes. Prediabetes is an early stage of diabetes. History Diabetes was one of the first diseases described, with an Egyptian manuscript from 1500 BCE mentioning "too great emptying of the urine." The Ebers papyrus includes a recommendation for a drink to take in such cases. The first described cases are believed to have been type 1 diabetes. Indian physicians around the same time identified the disease and classified it as madhumeha or "honey urine", noting the urine would attract ants. The term "diabetes" or "to pass through" was first used in 230 BCE by the Greek Apollonius of Memphis. The disease was considered rare during the time of the Roman empire, with Galen commenting he had only seen two cases during his career. This is possibly due to the diet and lifestyle of the ancients, or because the clinical symptoms were observed during the advanced stage of the disease. Galen named the disease "diarrhea of the urine" (diarrhea urinosa). The earliest surviving work with a detailed reference to diabetes is that of Aretaeus of Cappadocia (2nd or early 3rdcentury CE). He described the symptoms and the course of the disease, which he attributed to the moisture and coldness, reflecting the beliefs of the "Pneumatic School". He hypothesized a correlation between diabetes and other diseases, and he discussed differential diagnosis from the snakebite, which also provokes excessive thirst. His work remained unknown in the West until 1552, when the first Latin edition was published in Venice. Two types of diabetes were identified as separate conditions for the first time by the Indian physicians Sushruta and Charaka in 400–500 CE with one type being associated with youth and another type with being overweight. Effective treatment was not developed until the early part of the 20th century when Canadians Frederick Banting and Charles Best isolated and purified insulin in 1921 and 1922. This was followed by the development of the long-acting insulin NPH in the 1940s. 
Etymology The word diabetes comes from Latin diabētēs, which in turn comes from Ancient Greek διαβήτης (diabētēs), which literally means "a passer through; a siphon". The Ancient Greek physician Aretaeus of Cappadocia (fl. 1st century CE) used that word, with the intended meaning "excessive discharge of urine", as the name for the disease. Ultimately, the word comes from the Greek διαβαίνειν (diabainein), meaning "to pass through", which is composed of δια- (dia-), meaning "through", and βαίνειν (bainein), meaning "to go". The word "diabetes" is first recorded in English, in the form diabete, in a medical text written around 1425. The word mellitus comes from the classical Latin word mellītus, meaning "mellite" (i.e. sweetened with honey; honey-sweet). The Latin word comes from mel, meaning "honey", sweetness, or pleasant thing, and the suffix -ītus, whose meaning is the same as that of the English suffix "-ite". It was Thomas Willis who in 1675 added "mellitus" to the word "diabetes" as a designation for the disease, when he noticed the urine of a person with diabetes had a sweet taste (glycosuria). This sweet taste had been noticed in urine by the ancient Greeks, Chinese, Egyptians, and Indians. Society and culture The 1989 "St. Vincent Declaration" was the result of international efforts to improve the care accorded to those with diabetes. Doing so is important not only in terms of quality of life and life expectancy but also economically: expenses due to diabetes have been shown to be a major drain on health- and productivity-related resources for healthcare systems and governments. Several countries have established national diabetes programmes, with varying degrees of success, to improve treatment of the disease. Diabetes stigma Diabetes stigma describes the negative attitudes, judgment, discrimination, or prejudice against people with diabetes. Often, the stigma stems from the idea that diabetes (particularly type 2 diabetes) results from poor lifestyle and unhealthy food choices rather than from other causal factors such as genetics and social determinants of health. Manifestations of stigma can be seen throughout different cultures and contexts. Scenarios include diabetes status affecting marriage proposals, workplace employment, and social standing in communities. Stigma is also seen internally, as people with diabetes can also have negative beliefs about themselves. Often these cases of self-stigma are associated with higher diabetes-specific distress, lower self-efficacy, and poorer provider-patient interactions during diabetes care. Racial and economic inequalities Racial and ethnic minorities are disproportionately affected, with a higher prevalence of diabetes compared to non-minority individuals. While US adults overall have a 40% chance of developing type 2 diabetes, the chance for Hispanic/Latino adults is more than 50%. African Americans also are much more likely to be diagnosed with diabetes compared to White Americans. Asians have an increased risk of diabetes, as diabetes can develop at a lower BMI due to differences in visceral fat compared to other groups. For Asians, diabetes can develop at a younger age and at lower body fat compared to other groups. Additionally, diabetes is highly underreported in Asian American people, as 1 in 3 cases are undiagnosed compared to the average 1 in 5 for the nation. People with diabetes who have neuropathic symptoms such as numbness or tingling in feet or hands are twice as likely to be unemployed as those without the symptoms.
In 2010, diabetes-related emergency room (ER) visit rates in the United States were higher among people from the lowest income communities (526 per 10,000 population) than from the highest income communities (236 per 10,000 population). Approximately 9.4% of diabetes-related ER visits were for the uninsured. Naming The term "type 1 diabetes" has replaced several former terms, including childhood-onset diabetes, juvenile diabetes, and insulin-dependent diabetes mellitus. Likewise, the term "type 2 diabetes" has replaced several former terms, including adult-onset diabetes, obesity-related diabetes, and noninsulin-dependent diabetes mellitus. Beyond these two types, there is no agreed-upon standard nomenclature. Diabetes mellitus is also occasionally known as "sugar diabetes" to differentiate it from diabetes insipidus. Other animals Diabetes can occur in mammals or reptiles. Birds do not develop diabetes because of their unusually high tolerance for elevated blood glucose levels. In animals, diabetes is most commonly encountered in dogs and cats. Middle-aged animals are most commonly affected. Female dogs are twice as likely to be affected as males, while according to some sources, male cats are more prone than females. In both species, all breeds may be affected, but some small dog breeds are particularly likely to develop diabetes, such as Miniature Poodles. Feline diabetes is strikingly similar to human type 2 diabetes. The Burmese, Russian Blue, Abyssinian, and Norwegian Forest cat breeds are at higher risk than other breeds. Overweight cats are also at higher risk. The symptoms may relate to fluid loss and polyuria, but the course may also be insidious. Diabetic animals are more prone to infections. The long-term complications recognized in humans are much rarer in animals. The principles of treatment (weight loss, oral antidiabetics, subcutaneous insulin) and management of emergencies (e.g. ketoacidosis) are similar to those in humans. See also Outline of diabetes Diabetic foot Blood glucose monitoring References External links American Diabetes Association IDF Diabetes Atlas National Diabetes Education Program ADA's Standards of Medical Care in Diabetes 2019 Causes of amputation Endocrine diseases Metabolic disorders Wikipedia emergency medicine articles ready to translate Wikipedia medicine articles ready to translate
Diabetes
[ "Chemistry" ]
9,536
[ "Metabolic disorders", "Metabolism" ]
40,018,674
https://en.wikipedia.org/wiki/Stereopsis%20recovery
Stereopsis recovery, also recovery from stereoblindness, is the phenomenon of a stereoblind person gaining partial or full ability of stereo vision (stereopsis). Recovering stereo vision as far as possible has long been established as an approach to the therapeutic treatment of stereoblind patients. Treatment aims to recover stereo vision in very young children, as well as in patients who had acquired but lost their ability for stereopsis due to a medical condition. In contrast, this aim has normally not been present in the treatment of those who missed out on learning stereopsis during their first few years of life. In fact, the acquisition of binocular and stereo vision was long thought to be impossible unless the person acquired this skill during a critical period in infancy and early childhood. This hypothesis normally went unquestioned and has formed the basis for the therapeutic approaches to binocular disorders for decades. It has been put in doubt in recent years. In particular since studies on stereopsis recovery began to appear in scientific journals and it became publicly known that neuroscientist Susan R. Barry achieved stereopsis well into adulthood, that assumption is in retrospect considered to have held the status of a scientific dogma. Very recently, there has been a rise in scientific investigations into stereopsis recovery in adults and youths who have had no stereo vision before. While it has now been shown that an adult may gain stereopsis, it is currently not yet possible to predict how likely a stereoblind person is to do so, nor is there general agreement on the best therapeutic procedure. Also the possible implications for the treatment of children with infantile esotropia are still under study. Clinical management of strabismus and stereoblindness In cases of acquired strabismus with double vision (diplopia), it is long-established state of the art to aim at curing the double vision and at the same time recovering a patient's earlier ability for stereo vision. For example, a patient may have had full stereo vision but later had diplopia due to a medical condition, losing stereo vision. In this case, medical interventions, including vision therapy and strabismus surgery, may remove the double vision and recover the stereo vision which had temporarily been absent in the patient. Also when children with congenital (infantile) strabismus (e.g. infantile esotropia) receive strabismus surgery within the first few years or two of their life, this goes along with the hope that they may yet develop their full potential for binocular vision including stereopsis. In contrast, in a case where a child's eyes are straightened surgically after the age of about five or six years and the child had no opportunity to develop stereo vision in early childhood, normally the clinical expectation is that this intervention will lead to cosmetic improvements but not to stereo vision. Conventionally, no follow-up for stereopsis was performed in such cases. For instance, one author summarized the accepted scientific view of the time with the words: "Stereopsis will never be obtained unless amblyopia is treated, the eyes are aligned, and binocular fusion and function are achieved before the critical period for stereopsis ends. Clinical data suggest that this occurs before 24 months of age,[...] but we do not know exactly when it occurs, because crucial pieces of basic science information are missing." 
For purposes of illustration, reference is made to a book of doctors' handouts for patients, written for the general public and published in 2002, which summarizes the limitations in the terms in which they, at the time, were fully accepted as medical state of the art as follows: "If an adult has a childhood strabismus that was never treated, it is too late to improve any amblyopia or depth perception, so the goal may be simply cosmetic – to make the eyes appear to be properly aligned – though sometimes treatment does enlarge the extent of side vision." It has only been accepted very recently that the therapeutic approach was based on an unquestioned notion that has, since, been referred to as "myth" or "dogma". Recently, however, stereopsis recovery is known to have occurred in a number of adults. While this has in some cases occurred after visual exercises or spontaneous visual experiences, recently also the medical community's view of strabismus surgery has become more optimistic with regard to outcomes in terms of binocular function and possibly stereopsis. As one author states:The majority of adults will experience some improvement in binocular function after strabismus surgery even if the strabismus has been longstanding. Most commonly this takes the form of an expansion of binocular visual fields; however, some patients may also regain stereopsis.Scientific investigations on residual neural plasticity in adulthood now also include studies on the recovery of stereopsis. Now it is a matter of active scientific investigation under which conditions and to which degree binocular fusion and stereo vision can be acquired in adulthood, especially if the person is not known to have had any preceding experience of stereo vision, and how outcomes may depend on the patient's history of therapeutic interventions. Examples and case studies Stereopsis recovery has been reported to have occurred in a few adults as a result of either medical treatments including strabismus surgery and vision therapy, or spontaneously after a stereoscopic 3D cinema experience. Personal reports in Fixing my Gaze The most renowned case of regained stereopsis is that of neuroscientist Susan R. Barry, who had had alternating infantile esotropia with diplopia, but no amblyopia, underwent three surgical corrections in childhood without achieving binocular vision at the time, and recovered from stereoblindness in adult age after vision therapy with optometrist Theresa Ruggiero. Barry's case has been reported on by neurologist Oliver Sacks. Also David H. Hubel, winner of the 1981 Nobel Prize in Physiology or Medicine with Torsten Wiesel for their discoveries concerning information processing in the visual system, commented positively on her case. In 2009, Barry published a book Fixing My Gaze: A Scientist's Journey into Seeing in Three Dimensions, reporting on her own and several other cases of stereopsis recovery. In her book Fixing my Gaze, Susan Barry gives a detailed description of her surprise, elation and subsequent experiences when her stereo vision suddenly set in. Hubel wrote of her book: Her book includes reports of further persons who have had similar experiences with stereopsis recovery. 
Barry cites the personal experiences of several persons, including a man who was an artist and described his experience of seeing with stereopsis as "that he could see one hundred more times negative space", a woman who had been amblyopic before seeing in 3D described how empty space now "looks and feels palpable, tangible—alive!", a woman who had been strabismic since age two and saw in 3D after taking vision therapy and stated that "The coolest thing is the feeling you get being 'in the dimension, a woman who felt quite alarmed at the experience of suddenly seeing roadside trees and signs looming towards her, and two women who experienced an abrupt onset of stereo vision with a wide-angled view of the world, the first stating: "I was able to take in so much more of the room than I did before" and the second: "It was very dramatic as my peripheral vision suddenly filled in on both sides". Common to Barry and at least one person on whom she had reported is the finding that also their mental representation of space changed after having acquired stereo vision: that even with one eye closed the feeling is to see "more" than seeing with one eye closed before recovering stereopsis. Further cases in the media Apart from Barry, another formerly stereoblind adult whose acquired ability for stereopsis has received media attention is neuroscientist Bruce Bridgeman, professor of psychology and psychobiology at University of California Santa Cruz, who had grown up nearly stereoblind and acquired stereo vision spontaneously in 2012 at the age of 67, when watching the 3D movie Hugo with polarizing 3D glasses. The scene suddenly appeared to him in depth, and the ability to see the world in stereo stayed with him also after leaving the cinema. Other first person accounts Michael Thomas has described the experience of instantaneous onset of three dimensional vision at the age of 69 in a Public Facebook post. Recent scientific investigations There is a growing recent body of scientific literature on investigations into the recovery of stereopsis in adults which started to appear shortly before Oliver Sacks' The New Yorker publication drew public attention to Barry's discovery. A number of scientific publications have systematically assessed patients' post-surgical stereopsis, whereas other studies have investigated the effects of eye training procedures. Post-surgical stereopsis Certain conditions are known to be a prerequisite for stereo vision, for instance, that the amount of horizontal deviation, if any is present, needs to be small. In several studies it has been recognized that surgery to correct strabismus can have the effect of improving binocular function. One of these studies, published in 2003, explicitly concluded: "We found that improvement in binocularity, including stereopsis, can be obtained in a substantial portion of adults." That article was published together with a discussion of the results among peers in which the scientific and social implications of the medical treatment were addressed, for example concerning the long-term relevancy of stereopsis, the importance of avoiding diplopia, the necessity of predictable outcomes, and psychosocial and socioeconomic relevance. 
Among the investigations into post-surgical stereopsis is a publication of 2005 that reported on a total of 43 adults over 18 years of age who had surgical correction after having lived with constant horizontal strabismus for more than 10 years with no previous surgery or stereopsis, and with visual acuity of 20/40 or better also in the deviating eye; in this group, stereopsis was present in 80% of exotropes and 31% of esotropes, with the recovery of stereopsis and stereoacuity being uncorrelated with the number of years the deviation had persisted. A study published in 2006 included, alongside an extensive review of investigations on stereopsis recovery over the preceding decades, a re-evaluation of all those patients who had had congenital or early-onset strabismus with a large constant horizontal divergence and had undergone strabismus surgery in the years 1997–1999 in a given clinic, excluding those with a history of neurologic or systemic diseases or with organic retinal diseases. Among the resulting 36 subjects aged 6–30 years, many had regained binocular vision (56% according to an evaluation with Bagolini striated glasses, 39% with the Titmus test, 33% with the Worth 4-dot test, and 22% with the Random dot E test) and 57% had stereoacuity of 200 seconds of arc or better, leading to the conclusion that some degree of stereopsis can be achieved even in cases of infantile or early-childhood strabismus. Another study found that some chronically strabismic adults with good vision could recover fusion and stereopsis by means of surgical alignment. In contrast, in a study of a group of 17 adults and older children of at least 8 years of age, all of whom received strabismus surgery and post-operative evaluation after long-standing untreated infantile esotropia, most showed binocular fusion when tested with Bagolini lenses and an increased visual field, but none demonstrated stereo fusion or stereopsis. Stereoacuity is limited by the visual acuity of the eyes, and in particular by the visual acuity of the weaker eye. That is, the more a patient's vision in either eye is degraded relative to the 20/20 standard, the lower the prospects of improving or regaining stereo vision, unless visual acuity itself is improved by other means. Strabismus surgery itself does not improve visual acuity. Stereopsis following training procedures Orthoptic exercises have proven to be effective for reducing symptoms in patients with convergence insufficiency and decompensating exophoria by improving the near-point convergence of the eyes that is necessary for binocular fusion. Experiments on monkeys, published in 2007, revealed improvements in stereoacuity in monkeys who, after having been raised with binocular deprivation through prisms for the first two years, were exposed to extensive psychophysical training. Their stereo vision recovered in part, but remained far more limited than that of normally raised monkeys. Scientists at the University of California, Berkeley have stated that perceptual learning appears to play an important role. One investigation, published in 2011, reported on a study of human stereopsis recovery using perceptual learning which was inspired by Barry's work. In this study, a small number of subjects who had initially been stereoblind or stereoanomalous recovered stereopsis using perceptual learning exercises. 
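To give a concrete sense of what stereoacuity figures such as 200 seconds of arc mean in practice, the sketch below converts a disparity threshold into the smallest depth difference resolvable at a given viewing distance, using the standard small-angle approximation Δd ≈ η·D²/a (η = disparity threshold in radians, D = viewing distance, a = interpupillary distance). The numbers used (63 mm interpupillary distance, 1 m viewing distance, and 40 arcsec as a typical "normal" clinical threshold) are illustrative assumptions, not values taken from the studies cited above.

```python
import math

def depth_resolution(disparity_arcsec: float, distance_m: float, ipd_m: float = 0.063) -> float:
    """Smallest resolvable depth difference (metres) for a given stereoacuity
    threshold, via the small-angle approximation delta_d ~= eta * D**2 / a."""
    eta = math.radians(disparity_arcsec / 3600.0)  # arcseconds -> radians
    return eta * distance_m**2 / ipd_m

# Illustrative comparison: 200 arcsec (post-surgical figure quoted above)
# versus 40 arcsec (a commonly quoted "normal" clinical threshold).
for threshold in (200.0, 40.0):
    print(f"{threshold:5.0f} arcsec -> {depth_resolution(threshold, 1.0) * 1000:.1f} mm at 1 m")
```

At one metre this gives roughly 15 mm of resolvable depth for 200 arcsec versus about 3 mm for 40 arcsec, which is why 200 arcsec is usually described as coarse but still functionally useful stereopsis.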
Alongside the scientific assessment of the extent of recovery, also the subjective outcomes are described:After achieving stereopsis, our observers reported that the depth "popped out", which they found very helpful and joyful in their everyday life. The anisometropic observer GD noticed "a surge in depth" one day when shopping in a supermarket. While playing table tennis, she feels that she is able to track a ping-pong ball more accurately and therefore can play better. Strabismic observer AB is more confident now when walking down stairs because she can judge the depth of the steps better. Strabismics AB, DP, and LR, are able to enjoy 3D movies for the first time, and strabismic GJ finds it easier to catch a fly ball while playing baseball.In a follow-up study, the authors of this study pointed out that the stereopsis that was recovered following perceptual learning was more limited in resolution and precision compared to normal subjects' stereopsis. Dennis M. Levi was awarded the 2011 Charles F. Prentice Medal of the American Academy of Optometry for this work. There have been several attempts to make use of modern technology for enhanced binocular eye training, in particular for treating amblyopia and interocular suppression. In some cases these modern techniques have improved patients' stereoacuity. Very early technology-enhanced vision therapy efforts have included the cheiroscope, which is a haploscope in which left- and or right-eye images can be blended into view over a drawing pad, and the subject may be given a task such as to reproduce a line image presented to one eye. However, historically these approaches were not developed much further and they were not put to widespread use. Recent systems are based on dichoptic presentation of the elements of a video game or virtual reality such that each eye receives different signals of the virtual world that the player's brain must combine in order to play successfully. One of the earliest systems of this kind has been proposed by a research group in the University of Nottingham with the aim of treating amblyopia, using virtual reality masks or commercially available 3D shutter glasses. The group also has worked to develop perceptual learning training protocols that specifically target the deficit in stereo acuity to allow the recovery of normal stereo function even in adulthood. Another system of dichoptic presentation for binocular vision therapy has been proposed by researchers of the Research Institute of the McGill University Health Centre. Using a modified puzzle video game Tetris, the interocular suppression of patients with amblyopia was successfully treated with dichotomic training in which certain parameters of the training material were systematically adapted during the course of four weeks. Clinical supervision of such procedures is required to ensure that double vision does not occur. Most of the patients who underwent this treatment gained improved visual acuity of the weaker eye, and some also showed increased stereoacuity. Another study performed at the same institute showed that dichoptic training can be more effective in adults than the more conventional amblyopia treatment of an eye patch. For this investigation, 18 adults played Tetris for one hour each day, half of the group wearing eye patches and the other half playing a dichoptic version of the game. 
After two weeks, the group who played dichoptically showed a significant improvement of vision in the weaker eye and in stereopsis acuity; the eye patch group had moderate improvements, which increased substantially after they, too, were given the dichoptic training afterwards. Dichoptic-based perceptual learning therapy, presented by means of a head-mounted display, is amenable also to amblyopic children, as it improves both the amblyopic eye's visual acuity and the stereo function. The researchers at McGill University have shown that one to three weeks of playing a dichoptic video game for one to two hours on a hand-held device "can improve acuity and restore binocular function, including stereopsis in adults". Furthermore, it has been suggested that these effect can be enhanced by anodal transcranial direct current stimulation (tDCS). Together with Levi of the University of California, Berkeley, scientists at the University of Rochester have made further developments in terms of virtual reality computer games which have shown some promise in improving both monocular and binocular vision in human subjects. Game developer James Blaha, who developed his own crowd-funded version of a dichoptic VR game for the Oculus Rift together with Manish Gupta and is continuing to experiment with the game, experienced stereopsis for the first time using his game. In 2011, two cases of adults with anisometropic amblyopia were reported whose visual acuity and stereoacuity improved due to learning-based therapies. There are indications that the suppression of binocularity in amblyopic subjects is due to a suppression mechanism that prevents the amblyopic brain from learning to see. It has been suggested that desuppression and neuroplasticity may be favored by specific conditions that are commonly associated with perceptual learning tasks and video game playing such as a heightened requirement of attention, a prospect of reward, a feeling of enjoyment and a sense of flow. Health care policy matters Health insurances always review therapies in terms of clinical effectiveness in view of existing scientific literature, benefit, risk and cost. Even if individual cases of recovery exist, a treatment is only considered effective under this point of view if there is sufficient likelihood that it will predictably improve outcomes. In this context, medical coverage policy of the global health services organization Cigna "does not cover vision therapy, optometric training, eye exercises or orthoptics because they are considered experimental, investigational or unproven for any indication including the management of visual disorders and learning disabilities" based on a bibliographic review published by Cigna which concludes that "insufficient evidence exists in the published, peer-reviewed literature to conclude that vision therapy is effective for the treatment of any of the strabismic disorders except preoperative prism adaptation for acquired esotropia". Similarly, the U.S. managed health care company Aetna offers vision therapy only in contracts with supplemental coverage and limits its prescriptions to a number of conditions that are explicitly specified in a list of vision disorders. See also Recovery from blindness References Neurophysiology Neuroscience Ophthalmology Stereoscopy Visual perception Visual system
Stereopsis recovery
[ "Biology" ]
4,016
[ "Neuroscience" ]
31,651,991
https://en.wikipedia.org/wiki/Protactinium%28V%29%20oxide
Protactinium(V) oxide is a chemical compound with the formula Pa2O5. When it is reduced with hydrogen, it forms PaO2. Aristid V. Grosse was the first to prepare 2 mg of Pa2O5, in 1927. Pa2O5 does not dissolve in concentrated HNO3, but it dissolves in HF and in a HF + H2SO4 mixture, and it reacts at high temperatures with solid oxides of alkali metals and alkaline earth metals. As protactinium(V) oxide, like other protactinium compounds, is radioactive, toxic and very rare, it has very limited technological use. Mixed oxides of Nb, Mg, Ga and Mn, doped with 0.005–0.52% Pa2O5, have been used as high-temperature dielectrics (up to 1300 °C) for ceramic capacitors. References Oxides Protactinium(V) compounds
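The source does not give the stoichiometry of the hydrogen reduction; the simplest balanced equation consistent with the two oxides named above is the following sketch of the overall mass balance (no claim is made here about the actual reaction conditions):

```latex
\mathrm{Pa_2O_5 + H_2 \longrightarrow 2\,PaO_2 + H_2O}
```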
Protactinium(V) oxide
[ "Chemistry" ]
202
[ "Inorganic compounds", "Oxides", "Inorganic compound stubs", "Salts" ]
31,656,267
https://en.wikipedia.org/wiki/KIF1A
Kinesin-like protein KIF1A, also known as axonal transporter of synaptic vesicles or microtubule-based motor KIF1A, is a protein that in humans is encoded by the KIF1A gene. KIF1A is a neuron-specific member of the kinesin-3 family and is a microtubule plus end-directed motor protein involved in the anterograde, long-distance transport of vesicles and organelles. Similar to other kinesin proteins, KIF1A harnesses the chemical energy released from Adenosine Triphosphate (ATP) hydrolysis to create mechanical force, allowing it to “walk” along microtubule filaments to transport cargo from the neuron cell body to its periphery. With an important role in the brain, KIF1A function is essential for physiological processes, such as neuronal survival and higher brain function. History KIF1A was originally discovered in C. elegans as UNC-104 in 1991 as a possible novel kinesin paralog acting as a motor in the nervous system. In 1995, human KIF1A was first identified to be a monomeric, globular motor protein that was shown at the time to have the fastest anterograde motor activity. It was also found that KIF1A expressed abundantly in neurons, suggesting its role in axons as an axonal transport motor. To further elucidate the function of KIF1A, in vivo studies were conducted in mice. KIF1A knock-out mice showed deficiency in synaptic vesicle transport and early death soon after birth, suggesting KIF1A's critical role in the viability of neurons and the transport of synaptic vesicle precursors. In 1999, a new model regarding KIF1A motility, contrary to the widely accepted dimeric, two-headed “walking model,” depicted that KIF1A can move processively on microtubules as a monomer in single molecule experiments. As the debate on whether KIF1A functioned as a monomer or dimer ensued, further research in the cryo-EM field resolved the structure of KIF1A and identified the K-loop, a 12-amino acid insert at the L12 region indicated to increase KIF1A's affinity to microtubules. In other efforts to uncover the function of important KIF1A structures, it was reported that the binding of KIF1A's pleckstrin homology (PH) domain to lipids (PtdIns(4,5)P2) is necessary and sufficient for the binding and transporting of vesicles. Further investigations of how the PtdIns(4,5)P2 lipid subdomain facilitates KIF1A vesicle transport led to the idea that this membrane subdomain may cause KIF1A monomers to cluster or dimerize, which would then activate motor activity. Continuing with KIF1A's monomer vs. dimer debate, the proposition that KIF1A functioned as a monomeric motor was challenged with a mechanism similar to that found in conventional kinesin. It was then suggested that KIF1A can dimerize to operate as a two-headed motor and that motility can be regulated by motor dimerization, leading to the conclusion that KIF1A is monomeric in an inactive state, and dimeric in an active state. As to where the debate stands now, more recent research has shown that KIF1A is dimeric in both active and inactive states and that motor activity is instead regulated by autoinhibition. Function KIF1A belongs to the kinesin-3 subfamily and is characterized by its very high microtubule binding rate and its ability to travel further and faster along microtubules compared to other kinesin family groups. 
With run lengths on the order of 10 um, nearly 10 times longer than those of the well-characterized kinesin-1 motor, KIF1A carries a diverse set of cargo that must be delivered in a precise spatiotemporal manner to ensure proper neuronal function and viability. As KIF1A is predominantly expressed in neurons in the brain, with low levels observed in tissues of the heart, testes, pancreas, adrenal glands, and pituitary glands, it plays a critical role in the axonal (cell body to axon terminal) and dendritic (cell body to dendrites) transport of cargo. The main function of KIF1A is the long-distance transport of membranous cargo, such as synaptic vesicle precursors (SVPs) and dense core vesicles (DCVs), that are essential for the maintenance and viability of neurons. KIF1A is one of the many motors that helps execute the transport of organelles within the cell through axonal anterograde cargo transport and is shown to carry cargo that contain SV proteins, such as synaptophysin, synaptotagmin, and Rab3A, that are essential for SV biogenesis and membrane fusion. Another primary role of KIF1A is the axonal transport of DCVs to their appropriate subcellular sites, which are synthesized in the cell body and then transported by KIF1A to pre- and postsynaptic release sites. DCVs are important in helping with the transport, processing, and secretion of neuropeptide cargos that mediate a number of biological processes, such as neuronal development, survival, and learning and memory, making the role of KIF1A in regard to DCVs absolutely essential for normal neuronal function. In addition, KIF1A is important for sensory neuronal function and survival by transporting the TrkA neurotrophin receptor critically involved in the NGF/TrkA/Ras/PI3K signaling pathway that plays a role in pain sensation. Structure In H. sapiens, KIF1A is a motor protein composed of 1,791 amino acids in length. Similar to other kinesins, KIF1A's structure consists of a neck, a tail, and a motor domain. At the N-terminus is a motor domain that is followed by the neck coil (NC). A series of coiled coils (CCs) and a forkhead associated (FHA) domain follows, with the order being CC1, FHA domain, CC2, and CC3. The C-terminus then ends in a pleckstrin homology (PH) domain that associates with cargo. Unique to KIF1A is its K-loop, organization of its neck region, and FHA domain located in the tail. Motor domain The motor domain, composed of a globular catalytic core and a neck linker, is located at the N-terminus of the molecule and combines microtubule binding and ATPase activity to power along the plus ends of microtubules. The catalytic core contains the ATPase reaction centre and the microtubule binding surface while the neck linker functions to connect the catalytic core to the remaining molecule. Within the motor domain lies a layer of β-sheets nestled in between two layers of α­-helices. In the N-terminal half of the catalytic core is the ATP hydrolysis catalytic centre and the phosphate binding loop (P-loop) that forms a nucleotide binding pocket on top of the catalytic core. Located at the C-terminal end of the catalytic core are five structural elements (loop L11, α4 helix, loop L12, α5 helix, loop L13) that make up the region called switch II, which is responsible for forming the microtubule binding surface. The combined functions of switch II and the neck linker function together to produce mechanical work. 
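As a rough back-of-the-envelope illustration of what a 10 µm run implies mechanically, the sketch below assumes the canonical ~8 nm step size of kinesin motors, a free energy of roughly 100 pN·nm per ATP hydrolysed under cellular conditions, and a load of the order of a kinesin stall force; these are generic kinesin-family values used purely for illustration, not measurements specific to KIF1A.

```python
STEP_NM = 8.0             # canonical kinesin step size (assumed here for KIF1A)
RUN_LENGTH_UM = 10.0      # characteristic kinesin-3 run length quoted above
ATP_ENERGY_PN_NM = 100.0  # approx. free energy per ATP in vivo (assumed value)
LOAD_PN = 6.0             # illustrative load, of the order of a kinesin stall force

steps_per_run = RUN_LENGTH_UM * 1000.0 / STEP_NM  # ~1250 steps (~1250 ATP at one ATP per step)
work_per_step = LOAD_PN * STEP_NM                 # mechanical work per step against the load
efficiency = work_per_step / ATP_ENERGY_PN_NM     # fraction of one ATP's free energy

print(f"steps (and ATP) per 10 um run: ~{steps_per_run:.0f}")
print(f"work per step at {LOAD_PN} pN load: {work_per_step:.0f} pN*nm "
      f"(~{efficiency:.0%} of one ATP)")
```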
Switch I, the link between the P-loop and switch II, works to catalyze ATP hydrolysis and moves to change conformation depending on the nucleotide binding pocket's nucleotide state. Salt bridge rearrangements between switch I and switch II accompany these conformational changes, which leads to larger scale repositioning and conformational changes in switch II. Overall, switch I connects the reaction centre chemical state to the microtubule binding surface of switch II. KIF1A uses the ATP hydrolysis cycle that is coupled to the conformational changes within the motor and neck domains to convert chemical energy into mechanical work, thus allowing for forward directional movement of the motor. As a result of ATP turnover throughout the cycle, microtubule binding affinities of the motor domains change, permitting the “hand over hand” walking movement seen conserved in most kinesin motility. Tail and neck domains Within the tail region are several short coiled coils and an array of protein-lipid interaction domains that help with the binding of cargo and regulators. These coiled coils function to mediate and at times interfere with motor dimerization. In regard to the organization of the neck region, it consists of a helix and β-sheet. The neck coil, an α-helical region, has been shown to help to dimerize motor domains and can effectively dimerize on its own. The neck linker is used to connect the motor domain to cargo and to kinesin partner heads. These elements work together as the neck coil couples the motor domain's conformational changes regulated by ATP hydrolysis to the neck linker, which drives the hand-over-hand walking mechanism of KIF1A. K-Loop KIF1A also possesses a stretch of 12 lysine residues known as the K-loop located on loop 12 of the motor domain that is responsible for much of KIF1A's characteristic behavior, particularly its motility and regulation. The interaction between the positively charged lysine-rich surface and the negatively charged glutamate rich (E hook) C-terminal tail of β-tubulin has been shown to increase KIF1A's microtubule affinity. Although there is an increase in microtubule affinity, the increase in KIF1A processivity is not attributed directly to the K-loop. Rather, the increased microtubule binding rate due to the K-loop allows multiple sites of KIF1A (residues in loops L2, L7, L8, L11, L12, and α4 and α6 helices) to interact with the microtubule surface. These interactions increase affinity, which in turn increase the processivity of dimeric KIF1A. The K-loop is also required for several microtubule associated proteins (MAPs), such as septin-9 and MAP9, to exert their effects on KIF1A. Additionally, the K-loop facilitates KIF1A's interaction between the positively charged lysine-rich region and neuronal tubulin's negatively charged polyglutamylated C-terminal tails. PH and FHA Domain KIF1A's pleckstrin homology (PH) domain, located in the tail region, functions to bind cargo vesicles through interactions with phosphatidylinositol 4,5-bisphosphate (PtdIns(4,5)P2). The forkhead associated (FHA) domain, a small protein module located amongst the coiled coils in the tail domain, plays a structural role and functions to mediate specific cargo interactions via protein-protein interactions and phosphothreonine epitope recognition. Regulation KIF1A has many mechanisms in place to regulate activation, deactivation, energy conservation, and specific control of directional motor activity. 
These mechanisms include autoinhibition, cargo binding, Rab GTPases, protein interactions. Autoinhibition KIF1A exists in two forms: an extended active state and a folded inactive state. It adopts a compact shape with a folded tail in its inactive state to prevent crowding on microtubules and unnecessary energy waste, which can then be extended in its active state. Although the specifics underlying the causes and regulation of KIF1A autoinhibition needs further investigation, there are two current models that explain this process. The monomer-dimer switch model states that intramolecular interactions regarding the neck and tail regions hold kinesin-3 motors in an inactive, monomeric state. When activated, the motors would then dimerize from interactions between the tail and neck coil regions. Alternatively, in the tail block model, motors act as stable dimers and are inactivated by the tail region interacting with the motor or neck domains. It has been suggested that the autoinhibited state of KIF1A involves CC2 and the FHA domain, where CC2 folds back to interact with the FHA domain and causes a disruption to motor activity. This state of autoinhibition is reversed by cargo binding, phosphorylation, or other regulatory mechanisms. As recent studies have shown that KIF1A is dimeric in both active and inactive states, the tail block model is more readily accepted to explain the process of autoinhibition. With these proposed models, there is a better understanding of the autoinhibition mechanism; however, further investigations are needed to confirm and uncover the specifics of this process in KIF1A. Cargo binding Autoinhibited or inactive KIF1A can be activated from cargo binding directly to the motor. Often, cargo adapter proteins are used to mediate motor activation and cargo recruitment. In UNC-104, the C. elegans homolog for KIF1A, the binding of adapter proteins such as, UNC-16 (JIP3), DNC-1 (DCTN-1/Glued), and SYD-2 (Liprin-α) to UNC-104 lead to the translocation of the motor to different subcellular regions in neuronal cells. These observations suggest that adapters can recruit UNC-104/KIF1A to their cargo and navigate transport. Additionally, studies have shown that LIN-2 (CASK) and SYD-2 positively regulate UNC-104 by increasing its velocity. LIN-2 also increases run lengths and is suggested to be an activator of UNC-104. Rab GTPases Rab GTPases are known to mediate the localization of vesicles from the regulation of GEFs and GAPs that alter its nucleotide state (GTP or GDP). KIF1A is known to transport Rab3-coated vesicles in the axon. Rab3 functions as a synaptic vesicle protein that controls the exocytosis of synaptic vesicles. Studies have shown that a GEF for Rab3, DENN/MAD, binds to Rab3 and KIF1A's tail domain to mediate the motor's transport to the axon terminal. Other protein interactions Microtubule associated proteins (MAPs) mediate the assembly and disassembly kinetics of microtubules and regulate the interactions of motors with microtubules.  Several MAPs are known KIF1A-regulators. Both tau and MAP2, and MAP7 acts as a general inhibitor of KIF1A, preventing it from accessing the microtubule lattice. Three MAPs that localize within dendrites, doublecortin (DCX), doublecortin-like kinase-1 (DCLK1), and MAP9, regulate motor protein activity more broadly by differentially gating access to microtubule filaments. Specifically, DCX, DCLK1, and MAP9 permit KIF1A access to the microtubule, thereby providing a “MAP code” of kinesin regulation in neurons. 
DCLK1 is shown to mediate KIF1A's transport of DCVs binding to microtubules in dendrites. MAP9 is known to facilitate KIF1A translocation. Additionally, a microtubule associated septin (SEPT9), which localizes specifically in dendrites, has been shown to enhance kinesin-3 motility further into neuronal dendrites via the recognition of the K-Loop. Tubulin post-translational modifications Another form of KIF1A regulation is performed through tubulin post-translational modifications (PTMs), which usually occurs on the C-terminal tails of microtubule tracks. These molecular “traffic signals” include C-terminal tail polyglutamylation and help direct KIF1A motor cargo delivery via interactions between KIF1A's K-loop and microtubule's C-terminal tails. Studies have shown that polyglutamylation of the tubulin C-terminal tail regulates KIF1A by reducing KIF1A pausing as well as run lengths, suggesting a mechanism that mediates KIF1A behavior and motility. Additionally, it has been reported that α-tubulin polyglutamylation functions as a molecular traffic sign for KIF1A's cargo transport by directing the motor to its proper destination, therefore, mediating continuous synaptic transmission. Pathology KIF1A-mediated anterograde axonal transport is of critical importance for the development and maintenance of the nervous system. With KIF1A functioning to transport synaptic vesicle precursors (SVPs) and dense core vesicles (DCVs) along neurons, defects in this motor protein can lead to the improper delivery of cargo and result in the deterioration of neuronal cells that can lead to pathologies. Studies conducted with UNC-104 have shown that loss-of-function UNC-104 mutants were not able to properly transport SVPs to synapses, which resulted in an abnormal accumulation of SVs in cell bodies and dendrites. Other studies depicted that low levels of SVPs in mice from disrupted KIF1A-mediated transport were detrimental to development and survival. Mice with homozygous inactivation of KIF1A showed severe motor and sensory disturbances; most died within 24 hours of birth and all died within 72 hours. Homozygous mice also showed reduced levels of SVPs and significant neurodegeneration and death. DCVs are also necessary for proper neuronal function, as they contain proteins such as BDNF that are essential for survival. BDNF is intimately connected with KIF1A and may provide explanation for the clinical presentation of the KIF1A knockdown phenotype. Loss of KIF1A-mediated BDNF transport results in decreased synaptogenesis and learning enhancement, whereas an up-regulation of KIF1A leads to the formation of presynaptic buttons. In 2011, the first disease associated alleles of KIF1A were found to be related to Hereditary Spastic Paraplegia (HSP), a disorder characterized by abnormal gait and spasticity of lower limbs. With the usage of whole exome sequencing and homozygosity mapping, investigations discovered a causative mutation in KIF1A's motor domain that led to behavior characteristic of HSP. Additional studies found de novo missense mutations in KIF1A to affect protein function in cell culture systems, which suggests pathogenicity. These same mutations have also been reported in patients with intellectual disability and autism, which suggests that heterozygous KIF1A disruption may be involved in Nonsyndromic Intellectual Disability (NID). 
Studies regarding Hereditary Sensory and Autonomic Neuropathy type II (HSAN II), a rare autosomal-recessive disorder characterized by peripheral nerve degeneration that leads to severe distal sensory loss, found that KIF1A mutations in an alternatively spliced exon are a rare cause of HSAN II. Collectively, these investigations published in 2011 report on the relationships between KIF1A and hereditary human diseases. In contrast to the reports of KIF1A mutations resulting in loss of function behavior and reduced anterograde axonal transport, a recent study showed that some KIF1A mutations lead to hyperactivity of the KIF1A motor and increased axonal transport of SVPs, which can also be pathological. Additionally, most recent findings show that KIF1A variants, a majority of which are located in the motor domain, result in protein transport defects, such as reduced microtubule binding, reduced velocity and processivity, and increased non-motile rigor microtubule binding. Clinical significance Various diseases and disorders are associated with KIF1A, including KIF1A-Associated Neurological Disorder (KAND), Hereditary Spastic Paraplegia, and ataxia. These disorders primarily affect the nervous system and have a diverse set of clinical presentations. KIF1A-associated neurological disorder KAND is a neurodegenerative disorder caused by one or more variations (mutations) in the KIF1A gene that can lead to a spectrum of symptoms, such as neurodevelopmental delay, intellectual disability, autism, microcephaly, progressive spastic paraplegia, periphery neuropathy, optic nerve atrophy, cerebral and cerebellar atrophy, and seizures. KAND has been diagnosed in over 200 patients throughout the world with the large majority being children due to the likely reason that advancements in genetic testing were only recently made more accessible. As of current, there are 119 different variants identified, but it is likely that there are many variants to be discovered. Depending on the type of variation that occurs and where it is in the gene, KAND patients experience a spectrum of symptoms, progression, and severity of disease. KAND can be inherited in an autosomal recessive or dominant pattern and is characterized as a spectrum disorder with a range of symptoms from mild to life-threatening. Because there are many KAND-causing mutations, predominantly heterozygous missense mutations in the KIF1A motor domain, diagnosis for this disease is complicated. In efforts to expand the understanding of the phenotypic spectrum of KIF1A variants, researchers discovered novel de novo KIF1A variants in patients with Rett syndrome (RTT) and severe neurodevelopmental disorder that share clinical features that overlap with KAND. From their microtubule gliding assays and neurite tip accumulation assays, they showed that these novel KIF1A variants reduced KIF1A velocity and microtubule binding and lessened the ability of KIF1A's motor domain to accumulate along neurites. The results from this study expanded the phenotypic characteristics seen in KAND individuals with KIF1A variants in the motor domain, as common clinical features were also observed in RTT individuals. Additionally, the first disease severity score for KAND was recently developed, with disease severity strongly associated with variants that occurs in protein regions involved with ATP and microtubule binding, more specifically the P-Loop, switch I, and switch II. 
The most severe KAND presentations are observed with mutations in KIF1A's motor domain, generally arising de novo, and the less severe variations are observed in KIF1A's stalk region and are usually inherited. From recent studies, KIF1A variants are shown to exhibit defects such as reduced microtubule (MT) binding, reduced velocity and processivity, and increased non-motile rigor MT binding, all of which could contribute to the signs and symptoms seen in KAND patients. With a current natural history study in play and an established heuristic severity score for KAND, research efforts are progressing towards elucidating unknowns of the disorder and are pushing forward to find treatment. Because KAND can only be accurately diagnosed through genetic testing and there being similarities of its symptoms with Cerebral Palsy (CP), many patients are initially misdiagnosed. The overlap between CP and KAND, in conjunction with the prohibitive cost of genetic testing, leads to the belief that most KAND patients are yet to be correctly diagnosed, resulting in vastly underrepresented reported numbers of cases. Society and culture KIF1A.org, a non-profit organization, dedicated to helping those affected by KAND and funding research to find a cure, was founded by Luke Rosen and Sally Jackson. In 2020, KIF1A.org was chosen to join the Rare As One Project launched by the Chan Zuckerberg Initiative (CZI). Spearheading these pre-clinical investigation efforts to find a treatment for KAND is Dr. Wendy Chung, MD, PhD, who leads the KIF1A program at Columbia University, manages the KIF1A Natural History Study, and plays a tremendous role in supporting the KAND community and organization. On April 7, 2020, part one of The Gene: An Intimate History premiered on PBS, a Ken Burns documentary based on a book of the same name by Siddhartha Mukherjee. The documentary focuses the efforts of Rosen and Jackson, KIF1A.org, and researchers to find treatment for KAND patients. References Further reading External links KIF1A.org – KIF1A-Associated Neurological Disorder KIF1A gene - Genetics Home Reference - NIH NORD – KIF1A-Related Disorder OMIM Database for KIF1A Human proteins Motor proteins
KIF1A
[ "Chemistry" ]
5,281
[ "Molecular machines", "Motor proteins" ]
31,656,862
https://en.wikipedia.org/wiki/3did
The database of three-dimensional interacting domains (3did) is a biological database containing a catalogue of protein-protein interactions for which a high-resolution 3D structure is known. 3did collects and classifies all structural models of domain-domain interactions in the Protein Data Bank, providing molecular details for such interactions. 3did uses the Pfam database to define the position of protein domains in the protein structures. 3did was first published in 2005. The current version also includes a pipeline for the discovery and annotation of novel domain-motif interactions. For every interaction 3did identifies and groups different binding modes by clustering similar interfaces into “interaction topologies”. By maintaining a constantly updated collection of domain-based structural interaction templates, 3did is a reference source of information for the structural characterization of protein interaction networks. 3did is updated every six months and is available for bulk download and for browsing at http://3did.irbbarcelona.org. See also protein interaction three-dimensional structures References External links http://3did.irbbarcelona.org Protein databases Protein structure
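As a sketch of how the bulk-download data might be consumed programmatically, the snippet below tallies interacting Pfam domain pairs from a simple tab-separated file. The file name and the two-column layout are hypothetical placeholders used only for illustration; the actual 3did flat-file format should be checked against the documentation on the 3did website.

```python
from collections import Counter
import csv

def count_domain_pairs(path: str) -> Counter:
    """Count (domain_A, domain_B) pairs in a hypothetical tab-separated dump
    with one interacting Pfam domain pair per row."""
    pairs = Counter()
    with open(path, newline="") as fh:
        for row in csv.reader(fh, delimiter="\t"):
            if len(row) < 2 or row[0].startswith("#"):
                continue  # skip malformed lines and comments
            # store pairs in a canonical order so A-B and B-A are counted together
            pairs[tuple(sorted(row[:2]))] += 1
    return pairs

# Example usage (the file name is a placeholder, not the real 3did download):
# for (a, b), n in count_domain_pairs("3did_flat.tsv").most_common(10):
#     print(a, b, n)
```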
3did
[ "Chemistry" ]
230
[ "Protein structure", "Structural biology" ]
33,185,800
https://en.wikipedia.org/wiki/Eisenstein%20integral
In mathematical representation theory, the Eisenstein integral is an integral introduced by Harish-Chandra in the representation theory of semisimple Lie groups, analogous to Eisenstein series in the theory of automorphic forms. Harish-Chandra used Eisenstein integrals to decompose the regular representation of a semisimple Lie group into representations induced from parabolic subgroups. Trombi gave a survey of Harish-Chandra's work on this. Definition Harish-Chandra defined the Eisenstein integral by where: x is an element of a semisimple group G P = MAN is a cuspidal parabolic subgroup of G ν is an element of the complexification of a a is the Lie algebra of A in the Langlands decomposition P = MAN. K is a maximal compact subgroup of G, with G = KP. ψ is a cuspidal function on M, satisfying some extra conditions τ is a finite-dimensional unitary double representation of K HP(x) = log a where x = kman is the decomposition of x in G = KMAN. Notes References Representation theory
Eisenstein integral
[ "Mathematics" ]
227
[ "Representation theory", "Fields of abstract algebra" ]
33,188,525
https://en.wikipedia.org/wiki/Maass%E2%80%93Selberg%20relations
In mathematics, the Maass–Selberg relations are some relations describing the inner products of truncated real analytic Eisenstein series, that in some sense say that distinct Eisenstein series are orthogonal. Hans Maass introduced the Maass–Selberg relations for the case of real analytic Eisenstein series on the upper half plane. Atle Selberg extended the relations to symmetric spaces of rank 1. Harish-Chandra generalized the Maass–Selberg relations to Eisenstein series of higher rank semisimple group (and named the relations after Maass and Selberg) and found some analogous relations between Eisenstein integrals, that he also called Maass–Selberg relations. Informally, the Maass–Selberg relations say that the inner product of two distinct Eisenstein series is zero. However the integral defining the inner product does not converge, so the Eisenstein series first have to be truncated. The Maass–Selberg relations then say that the inner product of two truncated Eisenstein series is given by a finite sum of elementary factors that depend on the truncation chosen, whose finite part tends to zero as the truncation is removed. Notes References Modular forms Representation theory
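To make the statement concrete in the simplest setting, the classical rank-one relation for the modular group is often written roughly as follows, where Λ^T denotes truncation at height T and φ(s) is the constant-term (scattering) coefficient of the Eisenstein series E(z, s); normalizations and sign conventions vary between authors, so this should be read as a schematic form rather than the precise statement of any one source.

```latex
\left\langle \Lambda^{T} E(\cdot,s),\, \Lambda^{T} E(\cdot,\bar{s}') \right\rangle
  = \frac{T^{\,s+s'-1}}{s+s'-1}
  + \varphi(s)\,\varphi(s')\,\frac{T^{\,1-s-s'}}{1-s-s'}
  + \varphi(s')\,\frac{T^{\,s-s'}}{s-s'}
  + \varphi(s)\,\frac{T^{\,s'-s}}{s'-s},
  \qquad s \neq s',\ \ s+s' \neq 1.
```

Each term is elementary in T, which is the sense in which the inner product of the truncated series reduces to "a finite sum of elementary factors" depending on the truncation.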
Maass–Selberg relations
[ "Mathematics" ]
247
[ "Modular forms", "Fields of abstract algebra", "Representation theory", "Number theory" ]
33,190,076
https://en.wikipedia.org/wiki/Orbital%20angular%20momentum%20of%20light
The orbital angular momentum of light (OAM) is the component of angular momentum of a light beam that is dependent on the field spatial distribution, and not on the polarization. OAM can be split into two types. The internal OAM is an origin-independent angular momentum of a light beam that can be associated with a helical or twisted wavefront. The external OAM is the origin-dependent angular momentum that can be obtained as the cross product of the light beam position (center of the beam) and its total linear momentum. Concept A beam of light carries a linear momentum \mathbf{P}, and hence it can also be attributed an external angular momentum \mathbf{L}_{\mathrm{ext}} = \mathbf{r} \times \mathbf{P}. This external angular momentum depends on the choice of the origin of the coordinate system. If one chooses the origin at the beam axis and the beam is cylindrically symmetric (at least in its momentum distribution), the external angular momentum will vanish. The external angular momentum is a form of OAM, because it is unrelated to polarization and depends on the spatial distribution of the optical field (E). A more interesting example of OAM is the internal OAM appearing when a paraxial light beam is in a so-called "helical mode". Helical modes of the electromagnetic field are characterized by a wavefront that is shaped as a helix, with an optical vortex in the center, at the beam axis (see figure). If the phase varies around the axis of such a wave, it carries orbital angular momentum. In the figure to the right, the first column shows the beam wavefront shape. The second column is the optical phase distribution in a beam cross-section, shown in false colors. The third column is the light intensity distribution in a beam cross-section (with a dark vortex core at the center). The helical modes are characterized by an integer number \ell, positive or negative. If \ell = 0, the mode is not helical and the wavefronts are multiple disconnected surfaces, for example, a sequence of parallel planes (from which the name "plane wave"). If \ell = \pm 1, with the handedness determined by the sign of \ell, the wavefront is shaped as a single helical surface, with a step length equal to the wavelength \lambda. If |\ell| \geq 2, the wavefront is composed of |\ell| distinct but intertwined helices, with the step length of each helix surface equal to |\ell|\lambda, and a handedness given by the sign of \ell. The integer \ell is also the so-called "topological charge" of the optical vortex. Light beams that are in a helical mode carry nonzero OAM. As an example, any Laguerre-Gaussian mode with rotational mode number \ell \neq 0 has such a helical wavefront. Formulation The classical expression of the orbital angular momentum of the field is the following: \mathbf{L} = \epsilon_0 \sum_{i=x,y,z} \int E^{i} \, (\mathbf{r} \times \boldsymbol{\nabla}) \, A^{i} \, d^{3}\mathbf{r}, where \mathbf{E} and \mathbf{A} are the electric field and the vector potential, respectively, \epsilon_0 is the vacuum permittivity and we are using SI units. The i-superscripted symbols denote the cartesian components of the corresponding vectors. For a monochromatic wave this expression can be transformed into the following one: \mathbf{L} = \frac{\epsilon_0}{2 i \omega} \sum_{i=x,y,z} \int E^{i\,*} \, (\mathbf{r} \times \boldsymbol{\nabla}) \, E^{i} \, d^{3}\mathbf{r}. This expression is generally nonvanishing when the wave is not cylindrically symmetric. In particular, in a quantum theory, individual photons may have the following values of the OAM along the beam axis: L_z = \ell \hbar, where the topological charge \ell can be extracted numerically from the electric field profile of vortex beams. The corresponding wave functions (eigenfunctions of the OAM operator) have the following general expression: u(r, \phi, z) \propto e^{i \ell \phi}, where \phi is the azimuthal cylindrical coordinate. As mentioned in the Introduction, this expression corresponds to waves having a helical wavefront (see figure above), with an optical vortex in the center, at the beam axis. 
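Since the text notes that the topological charge can be extracted numerically from the field profile, here is a minimal numerical sketch: it builds a field with an e^{iℓφ} helical phase on a grid and then recovers ℓ by summing the wrapped phase differences around a closed loop encircling the axis. This is an illustrative toy calculation, not a reconstruction of any specific published algorithm.

```python
import numpy as np

def make_vortex_field(ell: int, n: int = 256) -> np.ndarray:
    """Scalar field with a helical phase exp(i*ell*phi) and a ring-shaped amplitude."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r, phi = np.hypot(x, y), np.arctan2(y, x)
    return r * np.exp(-r**2) * np.exp(1j * ell * phi)  # amplitude vanishes on the axis

def topological_charge(field: np.ndarray, radius_px: int = 60) -> int:
    """Sum wrapped phase differences along a circle around the axis; the total is 2*pi*ell."""
    n = field.shape[0]
    theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
    rows = (n // 2 + radius_px * np.sin(theta)).astype(int)
    cols = (n // 2 + radius_px * np.cos(theta)).astype(int)
    phase = np.angle(field[rows, cols])
    dphi = np.angle(np.exp(1j * np.diff(phase, append=phase[:1])))  # wrap each step to (-pi, pi]
    return int(np.round(dphi.sum() / (2.0 * np.pi)))

for ell in (-2, 0, 1, 3):
    print(ell, topological_charge(make_vortex_field(ell)))  # recovers each ell
```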
Generation Orbital angular momentum states with occur naturally. OAM states of arbitrary can be created artificially using a variety of tools, such as using spiral phase plates, spatial light modulators and q-plates. Spiral wave plates, made of plastic or glass, are plates where the thickness of the material increases in a spiral pattern in order to imprint a phase gradient on light passing through it. For a given wavelength, an OAM state of a given requires that the step height —the height between the thinnest and thickest parts of the plate— be given by where is the refractive index of the plate. Although the wave plates themselves are efficient, they are relatively expensive to produce, and are, in general, not adjustable to different wavelengths of light. Another way to modify the phase of the light is with a diffraction grating. For an state, the diffraction grating would consist of parallel lines. However, for an state, there will be a "fork" dislocation, and the number of lines above the dislocation will be one larger than below. An OAM state with can be created by increasing the difference in the number of lines above and below the dislocation. As with the spiral wave plates, these diffraction gratings are fixed for , but are not restricted to a particular wavelength. A spatial light modulator operates in a similar way to diffraction gratings, but can be controlled by computer to dynamically generate a wide range of OAM states. Recent advances Theoretical work suggests that a series of optically distinct chromophores are capable of supporting an excitonic state whose symmetry is such that in the course of the exciton relaxing, a radiation mode of non-zero topological charge is created directly. Most recently, the geometric phase concept has been adopted for OAM generation. The geometric phase is modulated to coincide with the spatial phase dependence factor, i.e., of an OAM carrying wave. In this way, geometric phase is introduced by using anisotropic scatterers. For example, a metamaterial composed of distributed linear polarizers in a rotational symmetric manner generates an OAM of order 1. To generate higher-order OAM wave, nano-antennas which can produce the spin-orbit coupling effect are designed and then arranged to form a metasurface with different topological charges. Consequently, the transmitted wave carries an OAM, and its order is twice the value of the topological charge. Usually, the conversion efficiency is not high for the transmission-type metasurface. Alternative solution to achieve high transmittance is to use complementary (Babinet-inverted) metasurface. On the other hand, it is much easier to achieve high conversion efficiency, even 100% efficiency in the reflection-type metasurface such as the composite PEC-PMC metasurface. Beside OAM generation in free space, integrated photonic approaches can also realize on-chip optical vortices carrying OAM. Representative approaches include patterned ring resonators, subwavelength holographic gratings, Non-Hermitian vortex lasers, and meta-waveguide OAM emitters. Measurement Determining the spin angular momentum (SAM) of light is simple – SAM is related to the polarization state of the light: the AM is, per photon, in a left and right circularly polarized beam respectively. Thus the SAM can be measured by transforming the circular polarization of light into a p- or s-polarized state by means of a wave plate and then using a polarizing beam splitter that will transmit or reflect the state of light. 
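Two of the devices just described, the spiral phase plate used for generation and the "fork" grating that reappears below as a holographic detection filter, can be parameterized in a few lines. The step-height relation used here, h = ℓλ/(n − 1), is the standard expression for a plate that accumulates a 2πℓ phase ramp over one turn; it is supplied as an assumption because the formula itself is missing from the text above.

```python
def spiral_plate_step_height(ell: int, wavelength_nm: float, refractive_index: float) -> float:
    """Physical step height (nm) of a spiral phase plate producing a charge-ell vortex,
    assuming the standard relation h = ell * lambda / (n - 1)."""
    return ell * wavelength_nm / (refractive_index - 1.0)

def fork_grating_lines(lines_below: int, ell: int) -> int:
    """Lines above the fork dislocation: one extra line per unit of |ell|, as described above."""
    return lines_below + abs(ell)

# Example: a charge-2 plate for 633 nm light in glass (n ~ 1.5) needs a ~2.5 um step.
print(f"{spiral_plate_step_height(2, 633.0, 1.5) / 1000:.2f} um step height")
print(fork_grating_lines(100, 2), "lines above the dislocation vs 100 below")
```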
The development of a simple and reliable method for the measurement of orbital angular momentum (OAM) of light, however, remains an important problem in the field of light manipulation. OAM (per photon) arises from the amplitude cross-section of the beam and is therefore independent of the spin angular momentum: whereas SAM has only two orthogonal states, the OAM is described by a state that can take any integer value N. As the state of OAM of light is unbounded, any integer value of l is orthogonal to (independent from) all the others. Where a beam splitter could separate the two states of SAM, no device can separate the N (if greater than 2) modes of OAM, and, clearly, the perfect detection of all N potential states is required to finally resolve the issue of measuring OAM. Nevertheless, some methods have been investigated for the measurement of OAM. Counting spiral fringes Beams carrying OAM have a helical phase structure. Interfering such a beam with a uniform plane wave reveals phase information about the input beam through analysis of the observed spiral fringes. In a Mach–Zehnder interferometer, a helically phased source beam is made to interfere with a plane-wave reference beam along a collinear path. Interference fringes will be observed in the plane of the beam waist and/or at the Rayleigh range. The path being collinear, these fringes are a pure consequence of the relative phase structure of the source beam. Each fringe in the pattern corresponds to one step through $2\pi$: counting the fringes suffices to determine the value of l. Diffractive holographic filters Computer-generated holograms can be used to generate beams containing phase singularities, and these have now become a standard tool for the generation of beams carrying OAM. This generating method can be reversed: the hologram, coupled to a single-mode fiber of set entrance aperture, becomes a filter for OAM. This approach is widely used for the detection of OAM at the single-photon level. The phase of these optical elements turns out to be the superposition of several fork holograms carrying topological charges selected in the set of values to be demultiplexed. The position of the channels in the far field can be controlled by multiplying each fork-hologram contribution by the corresponding spatial-frequency carrier. Other methods Other methods to measure the OAM of light include the rotational Doppler effect, systems based on a Dove prism interferometer, the measurement of the spin of trapped particles, the study of diffraction effects from apertures, and optical transformations. The latter use diffractive optical elements in order to unwrap the angular phase patterns of OAM modes into plane-wave phase patterns which can subsequently be resolved in Fourier space. The resolution of such schemes can be improved by spiral transformations that extend the phase range of the output strip-shaped modes by the number of spirals in the input beamwidth. Applications Potential use in telecommunications Research into OAM has suggested that light waves could carry hitherto unprecedented quantities of data through optical fibres. According to preliminary tests, data streams travelling along a beam of light split into 8 different circular polarities have demonstrated the capacity to transfer up to 2.5 terabits of data (equivalent to 66 DVDs or 320 gigabytes) per second.
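The fringe-counting idea from the Measurement section above can be mimicked numerically. The sketch below is illustrative only (the sampling density and threshold are arbitrary): it interferes an $e^{il\phi}$ phase profile with a unit-amplitude collinear reference on a ring around the beam axis and counts the azimuthal intensity maxima, one per $2\pi$ phase step, so the count equals $|l|$.

```python
import numpy as np

# Illustrative sketch: count interference fringes of a helical beam against a
# plane-wave reference around a ring; the count equals |l|.
def count_azimuthal_fringes(l, samples=3600):
    phi = np.linspace(0, 2 * np.pi, samples, endpoint=False)
    helical = np.exp(1j * l * phi)        # phase of the source beam on the ring
    reference = np.ones_like(helical)     # collinear plane-wave reference
    intensity = np.abs(helical + reference) ** 2
    bright = intensity > 2.0              # above-average intensity marks a bright fringe
    # Count rising edges of the bright regions, including the wrap-around
    edges = np.sum((~bright[:-1]) & bright[1:]) + int((not bright[-1]) and bright[0])
    return edges

for l in (1, 2, 3, 5):
    print(l, count_azimuthal_fringes(l))   # prints l fringes for each case
```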
Further research into OAM multiplexing in the radio and mm-wavelength frequencies has been shown in preliminary tests to be able to transmit 32 gigabits of data per second over the air. The fundamental communication limit of orbital-angular-momentum multiplexing is increasingly urgent for current multiple-input multiple-output (MIMO) research. The limit has been clarified in terms of independent scattering channels or the degrees of freedom (DoF) of scattered fields through angular-spectral analysis, in conjunction with a rigorous Green function method. The DoF limit is universal for arbitrary spatial-mode multiplexing launched by a planar electromagnetic device, such as an antenna or metasurface, with a predefined physical aperture. Quantum-information applications OAM states can be generated in coherent superpositions and they can be entangled, which is an integral element of schemes for quantum information protocols. Photon pairs generated by the process of parametric down-conversion are naturally entangled in OAM, and the correlations can be measured using spatial light modulators (SLMs). Using qudits (with d levels, as opposed to a qubit's 2 levels) has been shown to improve the robustness of quantum key distribution schemes. OAM states provide a suitable physical realisation of such a system, and a proof-of-principle experiment (with 7 OAM modes, from $l = -3$ to $l = 3$) has been demonstrated. Radio astronomy In 2019, a letter published in the Monthly Notices of the Royal Astronomical Society presented evidence that OAM radio signals had been received from the vicinity of the M87* black hole, over 50 million light years distant, suggesting that orbital angular momentum information can propagate over astronomical distances. See also Angular momentum Angular momentum of light Orbital angular momentum of free electrons Circular polarization Hypergeometric-Gaussian modes Laguerre-Gaussian modes Spin angular momentum of light Paraxial approximation Polarization (waves) Siae Microelettronica patent References External links Phorbitech Glasgow Optics Group Leiden Institute of Physics ICFO Università Di Napoli "Federico II" (Archived copy) Università Di Roma "La Sapienza" University of Ottawa Elementary demonstration using a laser pointer Light Angular momentum of light
Orbital angular momentum of light
[ "Physics", "Mathematics" ]
2,627
[ "Physical phenomena", "Physical quantities", "Spectrum (physical sciences)", "Quantity", "Angular momentum of light", "Electromagnetic spectrum", "Waves", "Orbital angular momentum of waves", "Light", "Angular momentum", "Moment (physics)" ]
33,193,835
https://en.wikipedia.org/wiki/Westwallbunker%20%28Pachten%29
Westwallbunker is a bunker and museum in Saarland, Germany, that was part of the Siegfried Line. The bunker was built in 1939. See also Regelbau List of surviving elements of the Siegfried Line External links www.bunker20.de Museums in Saarland Siegfried Line Buildings and structures in Saarlouis (district)
Westwallbunker (Pachten)
[ "Engineering" ]
73
[ "Military engineering", "Siegfried Line" ]
33,193,914
https://en.wikipedia.org/wiki/Integrated%20asset%20modelling
Integrated asset modelling (IAM) is the generic term used in the oil industry for computer modelling of both the subsurface and the surface elements of a field development. Historically the reservoir has always been modelled separately from the surface network and the facilities. In order to capture the interaction between those two or more standalone models, several time-consuming iterations were required. For example, a change in the water breakthrough leads to a change in the deliverability of the surface network which in turn leads to a production acceleration or deceleration in the reservoir. In order to go through this lengthy process more quickly, the industry has slowly been adopting a more integrated approach which captures the constraints imposed by the infrastructure on the network immediately. Basis As the aim of an IAM is to provide a production forecast which honours both the physical realities of the reservoir and the infrastructure it needs to contain the following elements; A pressure network A subsurface saturation model An availability model A constraint manager A production optimisation algorithm Some but not all models also contain an economics and risk model component so that the IAM can be used for economic evaluation. IAM vs. IPM The term Integrated Asset Modeling was first used by British Petroleum (BP), and this term is still maintained till date. Integrated asset modeling links individual simulators across technical disciplines, assets, computing environments, and locations. This collaborative methodology represents a shift in oil and gas field management, moving it toward a holistic management approach and away from disconnected teams working in isolation. The open framework of SLB’s Integrated Asset Modeling (IAM) software enables the coupling of a wide number of simulation software applications including reservoir simulation models (Eclipse, Intersect, MBX, IMEX, MBAL), multiphase flow simulation models (Pipesim, Olga, GAP), process and facilities simulation models (Symmetry, HYSYS, Petro-sim, UniSim) and economic domain models (Merak Peep). Historically the terms Integrated Production Modeling and Integrated Asset Modeling have been used interchangeably. The modern use of Integrated Production Modeling was coined when Petroleum Experts Ltd. joined their MBAL modeling software with their GAP and Prosper modeling software to form an Integrated Production Model. Benefits of Integrated Asset Modelling Having an IAM built of an asset or future project offers several advantages; Faster runtimes which allow scenario analysis and Monte Carlo analysis Insight in the interactions between various components of a development An answer in economic rather than recovery terms (not always available) Difficulties of Integrated Asset Modelling By its very nature an IAM requires a multi disciplinary approach. Most companies are too compartmentalised for this to be easy, as a result of this an integrated approach has the following drawbacks; More difficult to spot errors Requires constant communication between various departments, ownership is either vague or too much part of one silo. The biggest barrier to adoption of IAM is frequently the resistance of reservoir engineers to any simplification of the subsurface. This argument is sometimes valid, sometimes not, see below. Appropriate use of IAM As with any other software because of the inherent limitations in any virtual model use of an IAM is only appropriate during various stages of a project life. 
There are no hard and fast rules for this, as the software packages on the market range from very accurate modelling of a very small scope to very rough modelling of a very large scope, with everything in between. Currently the definition of IAM covers anything from daily optimisation to portfolio management. The success or failure of an IAM implementation project therefore depends on selecting the tool which is as complex as it needs to be, but no more. The following are some examples of areas where an IAM is the appropriate decision support tool: Concept Select; Debottlenecking and optimisation of very large or complex infrastructures; Life of field analysis of production optimisation scenarios. Note that for most of these areas the accuracy of the reservoir proxy is not important; the decision is made based on relative performance differences, not absolute values. Approach Several different software packages are commercially available and there is a clear difference in philosophy between some of them. Linked Existing Software Some vendors who have previously marketed standalone software for the subsurface and the surface are now marketing additional software which provides a datalink between the various packages. The obvious benefit of this approach is that there is no loss in accuracy and it does not require a remodelling exercise. However, this approach also has its drawbacks: there is no time gain, and the integration component of the entire package requires expertise which is not readily available; external specialists are frequently called upon to build and maintain the links between the components. Bespoke Software There are relatively few software packages on the market which are truly integrated; however, these can offer the benefit of shorter runtimes and lower expertise thresholds. Software as a service A number of the established service companies now offer integrated asset modelling as a service. In practice this means that existing models will be either converted or linked by specialists to form an integrated solution. This solution is expensive but frequently the preferred option if the highest accuracy is required. Comparison of IAM tools See also reservoir simulation petroleum engineering References Czwienzek, F., Barreto Perez, J. J., Salve, J., Martinez Ramirez, I., Vasquez, M. G., & Hernandez, R. A. (2009). "Integrated Production Model With Stochastic Simulation to Define Teotleco Exploitation Plan". Society of Petroleum Engineers. doi:10.2118/121801-MS. Fernando Pérez, Edwin Tillero, Ender Pérez, and Pedro Niño (PDVSA); José Rojas, Juan Araujo, Milciades Marrocchi, Marisabel Montero, and Maikely Piña (Schlumberger) (2012). "An Innovative Integrated Asset Modeling for an Offshore-Onshore Field Development. Tomoporo Field Case". Paper SPE 157556 presented at the SPE International Production and Operations Conference and Exhibition, Doha, Qatar, 14–16 May 2012. External links Defining Integrated Asset Modeling Petroleum engineering Scientific modelling
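The coupling loop that an IAM automates can be illustrated with a deliberately simple sketch. Nothing below comes from the article or from any vendor's software: the tank-type reservoir proxy, the surface deliverability rule and every number are invented for illustration, but the structure (a subsurface model and a surface constraint exchanging rates and pressures each timestep) mirrors the integration idea described above.

```python
# Purely illustrative: a toy "integrated" forecast where a surface-network
# constraint and a tank-type reservoir proxy are solved together each timestep.
def surface_deliverability(reservoir_pressure_bar, facility_limit_m3d=5000.0):
    # Toy surface model: deliverable rate falls with reservoir pressure and is
    # capped by a facility constraint (the "constraint manager" role).
    backpressure_bar = 50.0
    productivity_m3d_per_bar = 40.0
    rate = productivity_m3d_per_bar * max(reservoir_pressure_bar - backpressure_bar, 0.0)
    return min(rate, facility_limit_m3d)

def run_integrated_model(initial_pressure_bar=300.0, tank_m3_per_bar=2.0e4, years=10):
    pressure = initial_pressure_bar
    forecast = []
    for year in range(years):
        rate_m3d = surface_deliverability(pressure)       # surface model
        produced_m3 = rate_m3d * 365.0
        pressure -= produced_m3 / tank_m3_per_bar          # tank-type reservoir proxy
        forecast.append((year + 1, round(rate_m3d), round(pressure, 1)))
    return forecast

for year, rate, pressure in run_integrated_model():
    print(f"year {year}: rate {rate} m3/d, reservoir pressure {pressure} bar")
```

The facility cap holds the rate flat in the early years, after which reservoir depletion takes over; that interaction between the surface constraint and the subsurface response is exactly what sequential, uncoupled workflows need repeated iterations to capture.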
Integrated asset modelling
[ "Engineering" ]
1,242
[ "Petroleum engineering", "Energy engineering" ]
25,960,463
https://en.wikipedia.org/wiki/Pipeline%20bridge
A pipeline bridge is a bridge for running a pipeline over a river or another obstacle. Pipeline bridges for liquids and gases are, as a rule, only built when it is not possible to run the pipeline on a conventional bridge or under the river. However, as it is more common to run pipelines for centralized heating systems overhead, for this application even small pipeline bridges are common. Pipeline bridges may be made of steel, fiber reinforced polymer, reinforced concrete or similar materials. They may vary in size and style depending on the size of the pipeline being run. As there is normally a steady flow in pipelines, they can be designed as suspension bridges. They may also be added to an existing bridge. A pipeline bridge may be equipped with a walkway for maintenance purposes, but for safety and security reasons, the walkway is usually not open to the public. One of the world's longest pipeline bridges, built in 1970, is 1,040 meters long and crosses the Fuji River in Shizuoka Prefecture of Japan. The highest at 393 m is Hegigio Gorge Pipeline Bridge in Papua New Guinea. References Bridges Pipeline transport
Pipeline bridge
[ "Engineering" ]
228
[ "Structural engineering", "Bridges" ]
23,065,279
https://en.wikipedia.org/wiki/Robinson%E2%80%93Foulds%20metric
The Robinson–Foulds or symmetric difference metric, often abbreviated as the RF distance, is a simple way to calculate the distance between phylogenetic trees. It is defined as (A + B), where A is the number of partitions of data implied by the first tree but not the second tree and B is the number of partitions of data implied by the second tree but not the first tree (although some software implementations divide the RF metric by 2 and others scale the RF distance to have a maximum value of 1). The partitions are calculated for each tree by removing each branch. Thus, the number of eligible partitions for each tree is equal to the number of branches in that tree. RF distances have been criticized as biased, but they represent a relatively intuitive measure of the distances between phylogenetic trees and therefore remain widely used (the original 1981 paper describing Robinson-Foulds distances was cited more than 2700 times by 2023 based on Google Scholar). Nevertheless, the biases inherent to the RF distances suggest that researchers should consider using "Generalized" Robinson–Foulds metrics that may have better theoretical and practical performance and avoid the biases and misleading attributes of the original metric. Explanation Given two unrooted trees of n nodes and a set of labels (i.e., taxa) for each node (which could be empty, but only nodes with degree greater than or equal to three can be labeled by an empty set), the Robinson–Foulds metric finds the number of edge contraction and decontraction operations needed to convert one tree into the other. The number of operations defines their distance. Rooted trees can be examined by attaching a dummy leaf to the root node. The authors define two trees to be the same if they are isomorphic and the isomorphism preserves the labeling. The construction of the proof is based on a function that contracts an edge (combining the nodes, creating a union of their sets). Conversely, its inverse expands an edge (decontraction), where the set can be split in any fashion. The conversion procedure first contracts all edges of the first tree that are not in the second tree, and then uses decontraction to add the edges found only in the second tree, building that tree. The number of operations in each of these procedures is equivalent to the number of edges in the first tree that are not in the second plus the number of edges in the second tree that are not in the first. The sum of the operations is equivalent to a transformation from one tree to the other, or vice versa. Properties The RF distance corresponds to an equivalent similarity metric that reflects the resolution of the strict consensus of two trees, first used to compare trees in 1980. In their 1981 paper Robinson and Foulds proved that the distance is in fact a metric. Algorithms for computing the metric In 1985 Day gave an algorithm based on perfect hashing that computes this distance with only a linear complexity in the number of nodes in the trees. A randomized algorithm that uses hash tables that are not necessarily perfect has been shown to approximate the Robinson-Foulds distance with a bounded error in sublinear time. Specific applications In phylogenetics, the metric is often used to compute a distance between two trees. The treedist program in the PHYLIP suite offers this function, as does the RAxML_standard package, the DendroPy Python library (under the name "symmetric difference metric"), and the R packages TreeDist (RobinsonFoulds() function) and phangorn (treedist() function). For comparing groups of trees, the fastest implementations include HashRF and MrsRF.
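The split-based definition above translates directly into code. The sketch below is illustrative only (production users would reach for the DendroPy or TreeDist implementations mentioned above); it represents trees as nested tuples of leaf names, collects the non-trivial bipartitions induced by internal edges, and returns the symmetric-difference count A + B.

```python
from itertools import chain

# Illustrative from-scratch RF distance on small trees given as nested tuples.
def leaves(tree):
    if isinstance(tree, str):
        return frozenset([tree])
    return frozenset(chain.from_iterable(leaves(child) for child in tree))

def bipartitions(tree):
    all_leaves = leaves(tree)
    splits = set()

    def walk(node):
        if isinstance(node, str):
            return
        side = leaves(node)
        if 1 < len(side) < len(all_leaves) - 1:        # ignore trivial splits
            splits.add(frozenset([side, all_leaves - side]))
        for child in node:
            walk(child)

    walk(tree)
    return splits

def robinson_foulds(tree1, tree2):
    s1, s2 = bipartitions(tree1), bipartitions(tree2)
    return len(s1 - s2) + len(s2 - s1)                 # A + B in the definition above

t1 = ((("a", "b"), "c"), ("d", "e"))
t2 = ((("a", "c"), "b"), ("d", "e"))
print(robinson_foulds(t1, t2))   # 2: the {a,b} and {a,c} splits differ
```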
The Robinson–Foulds metric has also been used in quantitative comparative linguistics to compute distances between trees that represent how languages are related to each other. Strengths and weaknesses The RF metric remains widely used because the idea of using the number of splits that differ between a pair of trees is a relatively intuitive way to assess the differences among trees for many systematists. This is the primary strength of the RF distance and the reason for its continued use in phylogenetics. Of course, the number of splits that differ between a pair of trees depends on the number of taxa in the trees so one might argue that this unit is not meaningful. However, it is straightforward to normalize RF distances so they range between zero and one. However, the RF metric also suffers a number of theoretical and practical shortcomings: Relative to other metrics, lacks sensitivity, and is thus imprecise; it can take two fewer distinct values than there are taxa in a tree. It is rapidly saturated; very similar trees can be allocated the maximum distance value. Its value can be counterintuitive. One example is that moving a tip and its neighbour to a particular point on a tree generates a lower difference value than if just one of the two tips were moved to the same place. Its range of values can depend on tree shape: trees that contain many uneven partitions will command relatively lower distances, on average, than trees with many even partitions. It performs more poorly than many alternative measures in practical settings, based on simulated trees. Another issue to consider when using RF distances is that differences in one clade may be trivial (perhaps if the clade resolves three species within a genus differently) or may be fundamental (if the clade is deep in the tree and defines two fundamental subgroups, such as mammals and birds). However, this issue is not a problem with RF distances per se, it is a more general criticism of tree distances. Regardless of the behaviour of any specific tree distance a practicing evolutionary biologist might view some tree rearrangements as "important" and other rearrangements as "trivial". Tree distances are tools; they are most useful in the context of other information about the organisms in the trees. These issues can be addressed by using less conservative metrics. "Generalized RF distances" recognize similarity between similar, but non-identical, splits; the original Robinson Foulds distance doesn't care how similar two groupings are, if they aren't identical they are discarded. The best-performing generalized Robinson-Foulds distances have a basis in information theory, and measure the distance between trees in terms of the quantity of information that the trees' splits hold in common (measured in bits). The Clustering Information Distance (implemented in R package TreeDist) is recommended as the most suitable alternative to the Robinson-Foulds distance. An alternative approach to tree distance calculation is to use Quartet distance, rather than splits, as the basis for tree comparison. Software implementations References Further reading M. Bourque, Arbres de Steiner et reseaux dont certains sommets sont a localisation variable. PhD thesis, University de Montreal, Montreal, Quebec, 1978 http://www.worldcat.org/title/arbres-de-steiner-et-reseaux-dont-certains-sommets-sont-a-localisation-variable/oclc/053538946 William H. E. 
Day, "Optimal algorithms for comparing trees with labeled leaves", Journal of Classification, Number 1, December 1985. Makarenkov, V and Leclerc, B. Comparison of additive trees using circular orders, Journal of Computational Biology,7,5,731-744, 2000,"Mary Ann Liebert, Inc." Computational phylogenetics Bioinformatics algorithms
Robinson–Foulds metric
[ "Biology" ]
1,479
[ "Genetics techniques", "Computational phylogenetics", "Bioinformatics algorithms", "Bioinformatics", "Phylogenetics" ]
23,068,180
https://en.wikipedia.org/wiki/Rate%20ratio
In epidemiology, a rate ratio, sometimes called an incidence density ratio or incidence rate ratio, is a relative difference measure used to compare the incidence rates of events occurring at any given point in time. It is defined as: rate ratio = (incidence rate in the exposed group) / (incidence rate in the comparison group), where the incidence rate is the occurrence of an event over person-time (for example person-years): incidence rate = (number of events) / (person-time at risk). The same time intervals must be used for both incidence rates. A common application for this measure in analytic epidemiologic studies is in the search for a causal association between a certain risk factor and an outcome. See also Odds ratio Ratio Risk ratio References Biostatistics Epidemiology Rates Ratios
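A minimal worked example of the definition above, with invented event counts and person-time:

```python
# Illustrative calculation of an incidence rate ratio; the event counts and
# person-years below are invented example numbers.
def incidence_rate(events, person_years):
    return events / person_years

def rate_ratio(events_exposed, py_exposed, events_comparison, py_comparison):
    return incidence_rate(events_exposed, py_exposed) / incidence_rate(events_comparison, py_comparison)

# Example: 30 events over 1,000 person-years (exposed) versus 10 events over
# 1,200 person-years (comparison) gives a rate ratio of 3.6
print(round(rate_ratio(30, 1000, 10, 1200), 2))
```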
Rate ratio
[ "Mathematics", "Environmental_science" ]
131
[ "Epidemiology", "Arithmetic", "Environmental social science", "Ratios" ]
23,076,850
https://en.wikipedia.org/wiki/Electro-absorption%20modulator
An electro-absorption modulator (EAM) is a semiconductor device which can be used for modulating the intensity of a laser beam via an electric voltage. Its principle of operation is based on the Franz–Keldysh effect, i.e., a change in the absorption spectrum caused by an applied electric field, which changes the bandgap energy (thus the photon energy of an absorption edge) but usually does not involve the excitation of carriers by the electric field. For modulators in telecommunications, small size and modulation voltages are desired. The EAM is candidate for use in external modulation links in telecommunications. These modulators can be realized using either bulk semiconductor materials or materials with multiple quantum dots or wells. Most EAMs are made in the form of a waveguide with electrodes for applying an electric field in a direction perpendicular to the modulated light beam. For achieving a high extinction ratio, one usually exploits the Quantum-confined Stark effect (QCSE) in a quantum well structure. Compared with an Electro-optic modulator (EOM), an EAM can operate with much lower voltages (a few volts instead of ten volts or more). They can be operated at very high speed; a modulation bandwidth of tens of gigahertz can be achieved, which makes these devices useful for optical fiber communication. A convenient feature is that an EAM can be integrated with distributed feedback laser diode on a single chip to form a data transmitter in the form of a photonic integrated circuit. Compared with direct modulation of the laser diode, a higher bandwidth and reduced chirp can be obtained. Semiconductor quantum well EAM is widely used to modulate near-infrared (NIR) radiation at frequencies below 0.1 THz. Here, the NIR absorption of undoped quantum well was modulated by strong electric field with frequencies between 1.5 and 3.9 THz. The THz field coupled two excited states (excitons) of the quantum wells, as manifested by a new THz frequency-and power- dependent NIR absorption line. The THz field generated a coherent quantum superposition of an absorbing and a nonabsorbing exciton. This quantum coherence may yield new applications for quantum well modulators in optical communications. Recently, advances in crystal growth have triggered the study of self organized quantum dots. Since the EAM requires small size and low modulation voltages, possibility of obtaining quantum dots with enhanced electro-absorption coefficients makes them attractive for such application. See also Optical modulator Electro-optic modulator References S. G. Carter, Quantum Coherence in an Optical Modulator, Science 310 (2005) 651 I. B. Akca, Electro-optic and electro-absorption characterization of InAs quantum dot waveguides, Opt. Exp. 16 (2008) 3439 X. Xu, Coherent Optical Spectroscopy of a Strongly Driven Quantum Dot, Science 317 (2007) 929 Optical devices Nonlinear optics
Electro-absorption modulator
[ "Materials_science", "Engineering" ]
614
[ "Glass engineering and science", "Optical devices" ]
23,079,699
https://en.wikipedia.org/wiki/Optical%20modulators%20using%20semiconductor%20nano-structures
An optical modulator is an optical device which is used to modulate a beam of light with a perturbation device. It is a kind of transmitter to convert information to optical binary signal through optical fiber (optical waveguide) or transmission medium of optical frequency in fiber optic communication. There are several methods to manipulate this device depending on the parameter of a light beam like amplitude modulator (majority), phase modulator, polarization modulator etc. The easiest way to obtain modulation is modulation of intensity of a light by the current driving the light source (laser diode). This sort of modulation is called direct modulation, as opposed to the external modulation performed by a light modulator. For this reason, light modulators are called external light modulators. According to manipulation of the properties of material modulators are divided into two groups, absorptive modulators (absorption coefficient) and refractive modulators (refractive index of the material). Absorption coefficient can be manipulated by Franz-Keldysh effect, Quantum-Confined Stark Effect, excitonic absorption, or changes of free carrier concentration. Usually, if several such effects appear together, the modulator is called electro-absorptive modulator. Refractive modulators most often make use of electro-optic effect (amplitude & phase modulation), other modulators are made with acousto-optic effect, magneto-optic effect such as Faraday and Cotton-Mouton effects. The other case of modulators is spatial light modulator (SLM) which is modified two dimensional distribution of amplitude & phase of an optical wave. Optical modulators can be implemented using Semiconductor Nano-structures to increase the performance like high operation, high stability, high speed response, and highly compact system. Highly compact electro-optical modulators have been demonstrated in compound semiconductors. However, in silicon photonics, electro-optical modulation has been demonstrated only in large structures, and is therefore inappropriate for effective on-chip integration. Electro-optical control of light on silicon is challenging owing to its weak electro-optical properties. The large dimensions of previously demonstrated structures were necessary to achieve a significant modulation of the transmission in spite of the small change of refractive index of silicon. Liu et al. have recently demonstrated a high-speed silicon optical modulator based on a metal–oxide–semiconductor (MOS) configuration. Their work showed a high-speed optical active device on silicon—a critical milestone towards optoelectronic integration on silicon. Electro-optic modulator of nano-structures An electro-optic modulator is a device which can be used for controlling the power, phase or polarization of a laser beam with an electrical control signal. It typically contains one or two Pockels cells, and possibly additional optical elements such as polarizers. The principle of operation is based on the linear electro-optic effect (the Pockels effect, the modification of the refractive index of a nonlinear crystal by an electric field in proportion to the field strength). The crystal which is covered by electrode may be considered to be a voltage-variable wave-plate. When a voltage is applied, the retardation of laser polarization of the light would be changed while a beam passes through an ADP crystal. This variation in polarization results in intensity modulation downstream from the output polarizer. 
The output polarizer converts the phase shift into an amplitude modulation. Micrometre-scale silicon electro-optic modulator This device was fabricated as a p-i-n ring resonator on a silicon-on-insulator substrate with a 3-μm-thick buried oxide layer. Both the waveguide coupling to the ring and the waveguide forming the ring have a width of 450 nm and a height of 250 nm. The diameter of the ring is 12 μm, and the spacing between the ring and the straight waveguide is 200 nm. Acousto-optic modulator of nano-structures Acousto-optic modulators are used to vary and control laser beam intensity. A Bragg configuration gives a single first-order output beam, whose intensity is directly linked to the power of the RF control signal. The rise time of the modulator is simply set by the time the acoustic wave needs to travel across the laser beam. For the highest speeds the laser beam is focused down, forming a beam waist as it passes through the modulator. In an AOM a laser beam is made to interact with a high-frequency ultrasonic sound wave inside an optically polished block of crystal or glass (the interaction medium). By carefully orientating the laser with respect to the sound waves, the beam can be made to reflect off the acoustic wave-fronts (Bragg diffraction). Therefore, when the sound field is present the beam is deflected, and when it is absent the beam passes through undeviated. By switching the sound field on and off very rapidly, the deflected beam appears and disappears in response (digital modulation). By varying the amplitude of the acoustic waves, the intensity of the deflected beam can similarly be modulated (analogue modulation). Acoustic solitons in semiconductor nanostructures Acoustic solitons strongly influence the electron states in a semiconductor nanostructure. The amplitude of the soliton pulses is so high that the electron states in a quantum well make temporal excursions in energy of up to 10 meV. The subpicosecond duration of the solitons is less than the coherence time of the optical transition between the electron states, and a frequency modulation of the emitted light during the coherence time (chirping effect) is observed. This system allows ultrafast control of electron states in semiconductor nanostructures. Magneto-optic modulator of nano-structures A dc magnetic field Hdc is applied perpendicular to the light propagation direction to produce a single-domain, transversely directed magnetization 4πMs. The rf modulation field Hrf, applied by means of a coil along the light propagation direction, wobbles 4πMs through an angle θ and produces a time-varying magnetization component in the longitudinal direction. This component then produces an ac variation in the plane of polarization via the longitudinal Faraday effect. Conversion to amplitude modulation is accomplished by the indicated analyzer. Wideband magneto-optic modulation in a bismuth-substituted yttrium iron garnet waveguide The current transient creates a time-varying magnetic field that has a component along the direction of optical propagation. This component (underneath the microstrip line) acts to tip the magnetization, M, along the propagation direction of the optical beam. A static in-plane bias magnetic field is applied perpendicular to the light propagation direction, thus ensuring the return of M to its initial orientation after the passage of the current transient.
Depending on the component of the magnetization along the z-direction, Mz, the optical beam experiences a rotation of its polarization due to the Faraday effect. The polarization modulation is converted into an intensity modulation via a polarization analyzer, which is detected by a high-speed photodiode. Other semiconductor nanostructures of optical modulator MODULATION OF THz RADIATION BY SEMICONDUCTOR NANOSTRUCTURES As a result of increased demand for bandwidth, wireless short-range communication systems are expected to extend into the THz frequency range. Therefore, the fundamental interactions between THz radiation and semiconductors are receiving increasing attention. This new quantum structure is based on the well-established technology for producing high electron mobility transistors where an electron gas is confined at a GaAs/AlxGa1 xAs interface. The electron density at the hetero-interface can be controlled by the application of an external gate voltage, which in turn will alter the transmission/reflection characteristics of the device to an incident THz beam. Applications and Commercial products Electro-optic modulator from THORLABS 40 Gbit/s Phase Modulator The 40 Gbit/s Phase Modulator is a high performance, low drive voltage External Optical Modulator designed for customers developing next generation 40G transmission systems. The increased bandwidth allows for chirp control in high-speed data communications. Applications; Chirp Control for High-Speed Communications (SONET OC-768 Interfaces, SDH STM-256 Interfaces), Coherent communications, C & L Band Operation, Optical Sensing, All-optical frequency shifting. from Mach-40 Acousto-optic modulator of nano-structures Applications; acousto-optic modulators include laser printing, video disk recording, laser projection systems. from ELECTRO-OPTICAL PRODUCTS CORPORATION References Optical devices Optoelectronics Nanoelectronics Materials science ar:التضمين البصري باستخدام أشباه الموصلات ذات التركيب النانوي
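The acousto-optic rise-time argument earlier in this article (the modulator can only switch as fast as the acoustic wave can cross the focused beam) lends itself to a quick estimate. The beam diameter, the acoustic velocity and the 0.35/rise-time bandwidth rule of thumb below are illustrative assumptions, not values from the text.

```python
# Back-of-the-envelope estimate of AOM switching speed: rise time ~ acoustic
# transit time across the focused laser beam (all numbers invented).
def aom_rise_time_ns(beam_diameter_um, acoustic_velocity_m_per_s):
    return beam_diameter_um * 1e-6 / acoustic_velocity_m_per_s * 1e9

# Example: ~100 um beam waist in a medium with ~4200 m/s acoustic velocity
rise = aom_rise_time_ns(100, 4200)
bandwidth_mhz = 0.35 / (rise * 1e-9) / 1e6   # common 0.35/rise-time rule of thumb
print(f"rise time ~ {rise:.0f} ns, usable modulation bandwidth ~ {bandwidth_mhz:.0f} MHz")
```

Focusing the beam to a smaller waist shortens the transit time, which is why the text notes that the beam is focused down inside the modulator for the highest speeds.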
Optical modulators using semiconductor nano-structures
[ "Physics", "Materials_science", "Engineering" ]
1,832
[ "Glass engineering and science", "Applied and interdisciplinary physics", "Optical devices", "Materials science", "Nanoelectronics", "nan", "Nanotechnology" ]
23,081,873
https://en.wikipedia.org/wiki/Random%20hexamer
Random hexamers, or random hexanucleotides, are used for various PCR applications, such as rolling circle amplification, to prime the DNA. They are oligonucleotide sequences of 6 bases which are synthesised entirely randomly to give a large range of sequences that have the potential to anneal at many random points on a DNA sequence and act as a primer to commence first-strand cDNA synthesis. References Polymerase chain reaction
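A trivial sketch of what "entirely random" synthesis means in practice: each of the six positions is drawn independently from the four bases, giving 4^6 = 4096 possible hexamer sequences (the seed below is arbitrary).

```python
import random

random.seed(0)  # arbitrary seed, for reproducible example output

def random_hexamer():
    # Each of the 6 positions is drawn uniformly from the four DNA bases
    return "".join(random.choice("ACGT") for _ in range(6))

print(4 ** 6)                                   # 4096 possible hexamers
print([random_hexamer() for _ in range(5)])     # a small example primer pool
```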
Random hexamer
[ "Chemistry", "Biology" ]
91
[ "Biochemistry methods", "Genetics techniques", "Polymerase chain reaction" ]
23,081,895
https://en.wikipedia.org/wiki/Lysophosphatidylcholine
Lysophosphatidylcholines (LPC, lysoPC), also called lysolecithins, are a class of chemical compounds which are derived from phosphatidylcholines. Overview Lysophosphatidylcholines are produced within cells mainly by the enzyme phospholipase A2, which removes one of the fatty acid groups from phosphatidylcholine to produce LPC. Among other properties, they activate endothelial cells during early atherosclerosis. LPC also acts as a find-me signal, released by apoptotic cells to recruit phagocytes, which then phagocytose the apoptotic cells. Moreover, LPCs can be used in the lab to cause demyelination of brain slices and to mimic the effects of demyelinating diseases such as multiple sclerosis. LPCs are also known to stimulate phagocytosis of the myelin sheath and can change the surface properties of erythrocytes. LPC-induced demyelination is thought to occur through the actions of recruited macrophages and microglia which phagocytose nearby myelin. Invading T cells are also thought to mediate this process. Bacteria such as Legionella pneumophila utilize phospholipase A2 end-products (fatty acids and lysophospholipids) to cause host cell (macrophage) apoptosis through cytochrome C release. LPCs are present as minor phospholipids in the cell membrane (≤ 3%) and in the blood plasma (8–12%). Since LPCs are quickly metabolized by lysophospholipase and LPC-acyltransferase, they last only shortly in vivo. By replacing the acyl-group within the LPC with an alkyl-group, alkyl-lysophospholipids (ALP) were synthesized. These LPC analogues are metabolically stable, and several ALPs such as edelfosine, miltefosine and perifosine are under research and development as drugs against cancer and other diseases. Lysophosphatidylcholine processing has been discovered to be an essential component of normal human brain development: those born with genes that prevent adequate uptake suffer from lethal microcephaly. MFSD2a has been shown to transport LPC-bound polyunsaturated fatty acids, including DHA and EPA, across the blood-brain and blood-retinal barriers. LPCs occur in many foods naturally. According to the third edition of Starch: Chemistry and Technology, lysophosphatidylcholine makes up about 70% of the lipids in oat starch (p.592). Also, the anti-cancer abilities of synthetic LPC variants are special since they do not target the cell DNA but rather insert into the plasma membrane, causing apoptosis through the influencing of several signal pathways. Therefore, their effects are independent of the proliferation state of the tumor cell. Industrial Applications of Enzymes Producing Lysophosphatidylcholine FoodPro LysoMaxa Oil is an FDA approved commercialized PLA2 enzyme preparation utilized for the degumming of vegetable oils in large-scale productions to increase yield. Variants of lysophosphatidylcholine are the main products of this enzyme. Lysophosphatidylcholine has been studied as an immune activator for differentiating monocytes to mature dendritic cells. Lysophosphatidylcholine present in blood amplifies microbial TLR ligands-induced inflammatory responses from human cells like intestinal epithelial cells and macrophages/monocytes. This has an implication in sepsis induced by microbes. Composition in Foods Lysophosphatidylcholine accounts for 4.6% of phospholipids found in coconut oil, which make up 0.2% of lipids in coconut oil. In contrast, vegetable oils contain about 2-3% phospholipids. 
Lysophosphatidylcholine and Atherosclerosis Intima-media thickness, which is positively correlated with reduced blood flow, was studied in young smokers. Evidence pointed towards smoking as a major risk factor for increased levels of PLA2, due to tobacco smoke's impact on oxidation of retained LDL particles in the intima of a carotid artery, which may have a detrimental impact on overall health. See also 1-Lysophosphatidylcholine References Lipids Organophosphates
Lysophosphatidylcholine
[ "Chemistry" ]
991
[ "Organic compounds", "Biomolecules by chemical classification", "Lipids" ]
3,909,097
https://en.wikipedia.org/wiki/Projection%20%28mathematics%29
In mathematics, a projection is an idempotent mapping of a set (or other mathematical structure) into a subset (or sub-structure). In this case, idempotent means that projecting twice is the same as projecting once. The restriction to a subspace of a projection is also called a projection, even if the idempotence property is lost. An everyday example of a projection is the casting of shadows onto a plane (sheet of paper): the projection of a point is its shadow on the sheet of paper, and the projection (shadow) of a point on the sheet of paper is that point itself (idempotency). The shadow of a three-dimensional sphere is a disk. Originally, the notion of projection was introduced in Euclidean geometry to denote the projection of the three-dimensional Euclidean space onto a plane in it, like the shadow example. The two main projections of this kind are: The projection from a point onto a plane or central projection: If $C$ is a point, called the center of projection, then the projection of a point $P$ different from $C$ onto a plane that does not contain $C$ is the intersection of the line $CP$ with the plane. The points $P$ such that the line $CP$ is parallel to the plane do not have any image by the projection, but one often says that they project to a point at infinity of the plane (see Projective geometry for a formalization of this terminology). The projection of the point $C$ itself is not defined. The projection parallel to a direction $D$, onto a plane or parallel projection: The image of a point $P$ is the intersection with the plane of the line parallel to $D$ passing through $P$; this can be given an accurate definition, generalized to any dimension. The concept of projection in mathematics is a very old one, and most likely has its roots in the phenomenon of the shadows cast by real-world objects on the ground. This rudimentary idea was refined and abstracted, first in a geometric context and later in other branches of mathematics. Over time different versions of the concept developed, but today, in a sufficiently abstract setting, we can unify these variations. In cartography, a map projection is a map of a part of the surface of the Earth onto a plane, which, in some cases, but not always, is the restriction of a projection in the above meaning. The 3D projections are also at the basis of the theory of perspective. The need for unifying the two kinds of projections and of defining the image by a central projection of any point different from the center of projection is at the origin of projective geometry. Definition Generally, a mapping where the domain and codomain are the same set (or mathematical structure) is a projection if the mapping is idempotent, which means that a projection is equal to its composition with itself. A projection may also refer to a mapping which has a right inverse. Both notions are strongly related, as follows. Let $p$ be an idempotent mapping from a set $A$ into itself (thus $p \circ p = p$) and $B = p(A)$ be the image of $p$. If we denote by $\pi$ the map $p$ viewed as a map from $A$ onto $B$ and by $i$ the injection of $B$ into $A$ (so that $p = i \circ \pi$), then we have $\pi \circ i = \mathrm{Id}_B$ (so that $\pi$ has a right inverse). Conversely, if $\pi$ has a right inverse $i$, then $\pi \circ i = \mathrm{Id}_B$ implies that $i \circ \pi \circ i \circ \pi = i \circ \pi$; that is, $p = i \circ \pi$ is idempotent.
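A tiny numerical illustration of the definition just given, using the coordinate map p(x, y, z) = (x, y, 0) (the same example that appears under Applications below): p is idempotent, and it factors as p = i ∘ pi with pi ∘ i the identity on the image.

```python
# Illustrative check of the idempotency and right-inverse characterisation above.
def p(point):
    x, y, _ = point
    return (x, y, 0.0)

def pi(point):            # p viewed as a map onto its image (the z = 0 plane)
    x, y, _ = point
    return (x, y)

def i(pair):              # injection of the image back into 3-space
    x, y = pair
    return (x, y, 0.0)

sample = (1.5, -2.0, 7.0)
assert p(p(sample)) == p(sample)          # idempotency: projecting twice = once
assert i(pi(sample)) == p(sample)         # p = i composed with pi
assert pi(i((1.5, -2.0))) == (1.5, -2.0)  # pi composed with i is the identity on the image
print("all projection identities hold for", sample)
```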
Applications The original notion of projection has been extended or generalized to various mathematical situations, frequently, but not always, related to geometry, for example: In set theory: An operation typified by the $j$-th projection map, written $\mathrm{proj}_j$, that takes an element $\vec{x} = (x_1, \ldots, x_j, \ldots, x_k)$ of the Cartesian product $X_1 \times \cdots \times X_j \times \cdots \times X_k$ to the value $\mathrm{proj}_j(\vec{x}) = x_j$. This map is always surjective and, when each space has a topology, this map is also continuous and open. A mapping that takes an element to its equivalence class under a given equivalence relation is known as the canonical projection. The evaluation map sends a function $f$ to the value $f(x)$ for a fixed $x$. The space of functions $Y^X$ can be identified with the Cartesian product $\prod_{i \in X} Y$, and the evaluation map is a projection map from the Cartesian product. For relational databases and query languages, the projection is a unary operation written as $\Pi_{a_1, \ldots, a_n}(R)$ where $a_1, \ldots, a_n$ is a set of attribute names. The result of such a projection is defined as the set that is obtained when all tuples in $R$ are restricted to the set $\{a_1, \ldots, a_n\}$; $R$ is a database relation. In spherical geometry, projection of a sphere upon a plane was used by Ptolemy (~150) in his Planisphaerium. The method is called stereographic projection and uses a plane tangent to a sphere and a pole C diametrically opposite the point of tangency. Any point P on the sphere besides C determines a line CP intersecting the plane at the projected point for P. The correspondence makes the sphere a one-point compactification for the plane when a point at infinity is included to correspond to C, which otherwise has no projection on the plane. A common instance is the complex plane where the compactification corresponds to the Riemann sphere. Alternatively, a hemisphere is frequently projected onto a plane using the gnomonic projection. In linear algebra, a linear transformation that remains unchanged if applied twice: $P(P(x)) = P(x)$. In other words, an idempotent operator. For example, the mapping that takes a point $(x, y, z)$ in three dimensions to the point $(x, y, 0)$ is a projection. This type of projection naturally generalizes to any number of dimensions for the domain and for the codomain of the mapping. See Orthogonal projection, Projection (linear algebra). In the case of orthogonal projections, the space admits a decomposition as a product, and the projection operator is a projection in that sense as well. In differential topology, any fiber bundle includes a projection map as part of its definition. Locally at least this map looks like a projection map in the sense of the product topology and is therefore open and surjective. In topology, a retraction is a continuous map which restricts to the identity map on its image. This satisfies a similar idempotency condition and can be considered a generalization of the projection map. The image of a retraction is called a retract of the original space. A retraction which is homotopic to the identity is known as a deformation retraction. This term is also used in category theory to refer to any split epimorphism. The scalar projection (or resolute) of one vector onto another. In category theory, the above notion of Cartesian product of sets can be generalized to arbitrary categories. The product of some objects has a canonical projection morphism to each factor. Special cases include the projection from the Cartesian product of sets, the product topology of topological spaces (which is always surjective and open), or from the direct product of groups, etc. Although these morphisms are often epimorphisms and even surjective, they do not have to be.
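The relational-algebra projection listed above is easy to sketch in code. The relation and attribute names below are invented example data; the point is the set semantics (duplicate rows collapse) and the idempotence that makes the operation a projection.

```python
# Illustrative sketch of the relational-algebra projection: restrict every
# tuple (row) of a relation to a chosen set of attribute names.
def project(relation, attributes):
    # Set semantics: rows that become identical after restriction collapse.
    seen = {frozenset((a, row[a]) for a in attributes) for row in relation}
    return [dict(items) for items in seen]

employees = [
    {"name": "Ada",   "dept": "Research", "city": "London"},
    {"name": "Grace", "dept": "Research", "city": "New York"},
    {"name": "Alan",  "dept": "Research", "city": "London"},
]

by_city = project(employees, ["dept", "city"])
print(by_city)   # two rows: Research/London and Research/New York

# Idempotency: projecting twice onto the same attributes equals projecting once.
again = project(by_city, ["dept", "city"])
assert {frozenset(r.items()) for r in by_city} == {frozenset(r.items()) for r in again}
```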
References Further reading Thomas Craig (1882) A Treatise on Projections from University of Michigan Historical Math Collection. Mathematical terminology
Projection (mathematics)
[ "Mathematics" ]
1,401
[ "nan" ]
3,910,661
https://en.wikipedia.org/wiki/Draft%20%28boiler%29
In a water boiler, draft is the difference between atmospheric pressure and the pressure existing in the furnace or flue gas passage. Draft can also be referred to as the difference in pressure in the combustion chamber area which results in the motion of the flue gases and the air flow. Types of draft Drafts are produced by the rising combustion gases in the stack, flue, or by mechanical means. For example, a boiler can be put into four categories: natural, induced, balanced, and forced. Natural draft: When air or flue gases flow due to the difference in density of the hot flue gases and cooler ambient gases. The difference in density creates a pressure differential that moves the hotter flue gases into the cooler surroundings. Forced draft: When air or flue gases are maintained above atmospheric pressure. Normally it is done with the help of a forced draft fan. Induced draft: When air or flue gases flow under the effect of a gradually decreasing pressure below atmospheric pressure. In this case, the system is said to operate under induced draft. The stacks (or chimneys) provide sufficient natural draft to meet the low draft loss needs. In order to meet higher pressure differentials, the stacks must simultaneously operate with draft fans. Balanced draft: When the static pressure is equal to the atmospheric pressure, the system is referred to as balanced draft. Draft is said to be zero in this system. Importance/significance For the proper and the optimized heat transfer from the flue gases to the boiler tubes draft holds a relatively high amount of significance. The combustion rate of the flue gases and the amount of heat transfer to the boiler are both dependent on the movement and motion of the flue gases. A boiler equipped with a combustion chamber which has a strong current of air (draft) through the fuel bed will increase the rate of combustion (which is the efficient utilization of fuel with minimum waste of unused fuel). The stronger movement will also increase the heat transfer rate from the flue gases to the boiler (which improves efficiency and circulation). Drafting in steam locomotives Since the stack of a locomotive is too short to provide natural draft, during normal running forced draft is achieved by directing the exhaust steam from the cylinders through a cone ("blast pipe") upwards and into a skirt at the bottom of the stack. When the locomotive is stationary or in a restricted space "live" steam from the boiler is directed through an annular ring surrounding the blast pipe to produce the same effect. See also Cooling tower system Stack effect Controlling draught References Engines Engine technology Energy conversion
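The natural-draft mechanism described above can be put into rough numbers with the standard stack-effect estimate, draft = (ambient density - flue-gas density) x g x stack height, using ideal-gas densities. The relation and all values below are illustrative assumptions, not figures from the text (the flue gas is approximated with the gas constant of air).

```python
# Rough illustration of natural draft from the density difference between cool
# ambient air and hot flue gas over the stack height (all numbers invented).
G = 9.81  # m/s^2

def air_density(temperature_c, pressure_pa=101325.0, r_specific=287.0):
    # Ideal-gas density; flue gas is approximated with air's specific gas constant
    return pressure_pa / (r_specific * (temperature_c + 273.15))

def natural_draft_pa(stack_height_m, ambient_c, flue_gas_c):
    return (air_density(ambient_c) - air_density(flue_gas_c)) * G * stack_height_m

# Example: a 30 m stack, 20 C ambient air, 250 C flue gas -> roughly 150 Pa
print(f"{natural_draft_pa(30.0, 20.0, 250.0):.0f} Pa of draft")
```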
Draft (boiler)
[ "Physics", "Technology" ]
516
[ "Physical systems", "Machines", "Engine technology", "Engines" ]
3,911,673
https://en.wikipedia.org/wiki/Fluoroacetic%20acid
Fluoroacetic acid is an organofluorine compound with the chemical formula FCH2CO2H. It is a colorless solid that is noted for its relatively high toxicity. The conjugate base, fluoroacetate, occurs naturally in at least 40 plants in Australia, Brazil, and Africa. It is one of only five known organofluorine-containing natural products. Toxicity Fluoroacetic acid is a harmful metabolite of some fluorine-containing drugs (median lethal dose, LD50 = 10 mg/kg in humans). The most common metabolic sources of fluoroacetic acid are fluoroamines and fluoroethers. Fluoroacetic acid can disrupt the Krebs cycle. Its metabolite, fluorocitric acid, is very toxic because it cannot be processed by aconitase in the Krebs cycle (fluorocitrate takes the place of citrate as the substrate). The enzyme is inhibited and the cycle stops working. In contrast with fluoroacetic acid, difluoroacetic acid and trifluoroacetic acid are far less toxic. Its pKa is 2.66, in contrast to 1.24 and 0.23 for di- and trifluoroacetic acid, respectively. Uses Fluoroacetic acid is used to manufacture pesticides, especially rodenticides (see sodium fluoroacetate). See also Fluorocitric acid References Carboxylic acids Organofluorides Halogen-containing natural products Respiratory toxins Fluorine-containing natural products Aconitase inhibitors Fluorinated carboxylic acids Plant toxins
Fluoroacetic acid
[ "Chemistry" ]
369
[ "Cellular respiration", "Chemical ecology", "Respiratory toxins", "Carboxylic acids", "Functional groups", "Plant toxins" ]
3,912,709
https://en.wikipedia.org/wiki/Multiple%20%28mathematics%29
In mathematics, a multiple is the product of any quantity and an integer. In other words, for the quantities a and b, it can be said that b is a multiple of a if b = na for some integer n, which is called the multiplier. If a is not zero, this is equivalent to saying that b/a is an integer. When a and b are both integers, and b is a multiple of a, then a is called a divisor of b. One says also that a divides b. If a and b are not integers, mathematicians generally prefer to use integer multiple instead of multiple, for clarification. In fact, multiple is used for other kinds of product; for example, a polynomial p is a multiple of another polynomial q if there exists a third polynomial r such that p = qr. Examples 14, 49, −21 and 0 are multiples of 7, whereas 3 and −6 are not. This is because there are integers that 7 may be multiplied by to reach the values of 14, 49, 0 and −21, while there are no such integers for 3 and −6. Each of the products listed below, and in particular the products for 3 and −6, is the only way that the relevant number can be written as a product of 7 and another real number: 14 = 7 × 2; 49 = 7 × 7; −21 = 7 × (−3); 0 = 7 × 0; 3 = 7 × (3/7), but 3/7 is not an integer; −6 = 7 × (−6/7), but −6/7 is not an integer. Properties 0 is a multiple of every number (0 = 0 · b). The product of any integer n and any integer is a multiple of n. In particular, n, which is equal to n × 1, is a multiple of n (every integer is a multiple of itself), since 1 is an integer. If a and b are multiples of x, then a + b and a − b are also multiples of x. Submultiple In some texts, "a is a submultiple of b" has the meaning of "a being a unit fraction of b" (a = b/n) or, equivalently, "b being an integer multiple n of a" (b = na). This terminology is also used with units of measurement (for example by the BIPM and NIST), where a unit submultiple is obtained by prefixing the main unit, defined as the quotient of the main unit by an integer, mostly a power of 10³. For example, a millimetre is the 1000-fold submultiple of a metre. As another example, one inch may be considered as a 12-fold submultiple of a foot, or a 36-fold submultiple of a yard. See also Unit fraction Ideal (ring theory) Decimal and SI prefix Multiplier (linguistics) References Arithmetic Multiplication Integers
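A one-line check of the definition above: b is a multiple of a exactly when the remainder of b divided by a is zero (with the convention that only 0 is a multiple of 0).

```python
# Illustrative check that b = n*a for some integer n, i.e. b is a multiple of a.
def is_multiple(b, a):
    if a == 0:
        return b == 0      # only 0 is a multiple of 0
    return b % a == 0

print([x for x in (14, 49, -21, 0, 3, -6) if is_multiple(x, 7)])   # [14, 49, -21, 0]
print([n * 7 for n in range(1, 6)])                                 # first positive multiples of 7
```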
Multiple (mathematics)
[ "Mathematics" ]
545
[ "Mathematical objects", "Elementary mathematics", "Arithmetic", "Integers", "Numbers", "Number theory" ]
3,912,953
https://en.wikipedia.org/wiki/Rocket%20turbine%20engine
A rocket turbine engine is a combination of two types of propulsion engines: a liquid-propellant rocket and a turbine jet engine. Its power-to-weight ratio is a little higher than that of a regular jet engine, and it works at higher altitudes. See also Index of aviation articles Air turboramjet References Jet engines Rocket engines
Rocket turbine engine
[ "Astronomy", "Technology" ]
66
[ "Engines", "Rocket engines", "Rocketry stubs", "Astronomy stubs", "Jet engines" ]
3,913,136
https://en.wikipedia.org/wiki/Conjugated%20protein
A conjugated protein is a protein that functions in interaction with other (non-polypeptide) chemical groups attached by covalent bonding or weak interactions. Many proteins contain only amino acids and no other chemical groups, and they are called simple proteins. However, other kinds of proteins yield, on hydrolysis, some other chemical component in addition to amino acids, and they are called conjugated proteins. The non-amino acid part of a conjugated protein is usually called its prosthetic group. Most prosthetic groups are formed from vitamins. Conjugated proteins are classified on the basis of the chemical nature of their prosthetic groups. Examples Some examples of conjugated proteins are lipoproteins, glycoproteins, nucleoproteins, phosphoproteins, hemoproteins, flavoproteins, metalloproteins, phytochromes, cytochromes, opsins, and chromoproteins. Hemoglobin contains the prosthetic group known as heme. Each heme group contains an iron ion (Fe2+) which forms a coordinate bond with an oxygen molecule (O2), allowing hemoglobin to transport oxygen through the bloodstream. As each of the four protein subunits of hemoglobin possesses its own prosthetic heme group, each hemoglobin molecule can transport four molecules of oxygen. Glycoproteins are generally the largest and most abundant group of conjugated proteins. They range from glycoproteins in cell surface membranes that constitute the glycocalyx, to important antibodies produced by leukocytes. Chemically synthesized polysaccharide–protein conjugates have been used in the food industry, vaccines, and drug delivery systems. They are promising alternatives to PEG–protein drugs, in which non-biodegradable high-molecular-weight PEG causes health concerns. References Protein structure
Conjugated protein
[ "Chemistry" ]
413
[ "Biochemistry stubs", "Protein structure", "Protein stubs", "Structural biology" ]
3,916,819
https://en.wikipedia.org/wiki/Capillary%20pressure
In fluid statics, capillary pressure ($p_c$) is the pressure between two immiscible fluids in a thin tube (see capillary action), resulting from the interactions of forces between the fluids and solid walls of the tube. Capillary pressure can serve as either an opposing or a driving force for fluid transport and is a significant property for research and industrial purposes (namely microfluidic design and oil extraction from porous rock). It is also observed in natural phenomena. Definition Capillary pressure is defined as: $p_c = p_{\text{non-wetting}} - p_{\text{wetting}}$, where: $p_c$ is the capillary pressure, $p_{\text{non-wetting}}$ is the pressure of the non-wetting phase, and $p_{\text{wetting}}$ is the pressure of the wetting phase. The wetting phase is identified by its ability to preferentially diffuse across the capillary walls before the non-wetting phase. The "wettability" of a fluid depends on its surface tension, the forces that drive a fluid's tendency to take up the minimal amount of space possible, and it is determined by the contact angle of the fluid. A fluid's "wettability" can be controlled by varying capillary surface properties (e.g. roughness, hydrophilicity). However, in oil-water systems, water is typically the wetting phase, while for gas-oil systems, oil is typically the wetting phase. Regardless of the system, a pressure difference arises at the resulting curved interface between the two fluids. Equations Capillary pressure formulas are derived from the pressure relationship between two fluid phases in a capillary tube in equilibrium, which is that force up = force down. These forces are described as the force up, exerted by the interfacial tension acting along the wetted perimeter of the tube, and the force down, given by the weight of the raised column of fluid. These forces can be described by the interfacial tension and contact angle of the fluids, and the radius of the capillary tube. An interesting phenomenon, the capillary rise of water (as pictured to the right), provides a good example of how these properties come together to drive flow through a capillary tube and how these properties are measured in a system. There are two general equations that describe the force up and force down relationship of two fluids in equilibrium. The Young–Laplace equation is the force up description of capillary pressure, and the most commonly used variation of the capillary pressure equation: $p_c = \frac{2\gamma \cos\theta}{r}$, where: $\gamma$ is the interfacial tension, $r$ is the effective radius of the interface, and $\theta$ is the wetting angle of the liquid on the surface of the capillary. The force down formula for capillary pressure is seen as: $p_c = (\rho_{\text{wetting}} - \rho_{\text{non-wetting}})\, g\, h$, where: $h$ is the height of the capillary rise, $\rho_{\text{wetting}}$ is the density of the wetting phase, and $\rho_{\text{non-wetting}}$ is the density of the non-wetting phase. Applications Microfluidics Microfluidics is the study and design of the control or transport of small volumes of fluid flow through porous material or narrow channels for a variety of applications (e.g. mixing, separations). Capillary pressure is one of many geometry-related characteristics that can be altered in a microfluidic device to optimize a certain process. For instance, as the capillary pressure increases, a wettable surface in a channel will pull the liquid through the conduit. This eliminates the need for a pump in the system, and can make the desired process completely autonomous. Capillary pressure can also be utilized to block fluid flow in a microfluidic device.
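Combining the Young-Laplace ("force up") and hydrostatic ("force down") expressions above gives the familiar capillary-rise estimate h = 2*gamma*cos(theta) / (delta_rho * g * r) for a thin tube. The snippet below evaluates it for water in a narrow glass tube; the property values are illustrative, and the formulas above are themselves reconstructions.

```python
import math

# Illustrative capillary-rise estimate from the force balance discussed above.
G = 9.81  # m/s^2

def capillary_rise_m(gamma_n_per_m, contact_angle_deg, tube_radius_m, delta_rho_kg_m3):
    return 2.0 * gamma_n_per_m * math.cos(math.radians(contact_angle_deg)) / (delta_rho_kg_m3 * G * tube_radius_m)

# Water against air in a clean glass tube of 0.5 mm radius (gamma ~ 0.072 N/m,
# contact angle ~ 0 degrees, density difference ~ 1000 kg/m^3) -> roughly 3 cm
print(f"{capillary_rise_m(0.072, 0.0, 0.5e-3, 1000.0) * 100:.1f} cm")
```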
Applications Microfluidics Microfluidics is the study and design of the control or transport of small volumes of fluid flow through porous material or narrow channels for a variety of applications (e.g. mixing, separations). Capillary pressure is one of many geometry-related characteristics that can be altered in a microfluidic device to optimize a certain process. For instance, as the capillary pressure increases, a wettable surface in a channel will pull the liquid through the conduit. This eliminates the need for a pump in the system, and can make the desired process completely autonomous. Capillary pressure can also be utilized to block fluid flow in a microfluidic device. The capillary pressure in a microchannel of rectangular cross-section can be described as: pc = −γ [(cos θb + cos θt)/d + (cos θl + cos θr)/w] where: γ is the surface tension of the liquid, θb is the contact angle at the bottom, θt is the contact angle at the top, θl is the contact angle at the left side of the channel, θr is the contact angle at the right side of the channel, d is the depth, and w is the width. Thus, the capillary pressure can be altered by changing the surface tension of the fluid, the contact angles of the fluid, or the depth and width of the device channels. To change the surface tension, one can apply a surfactant to the capillary walls. The contact angles vary by sudden expansion or contraction within the device channels. A positive capillary pressure represents a valve on the fluid flow while a negative pressure represents the fluid being pulled into the microchannel. Measurement Methods Methods for taking physical measurements of capillary pressure in a microchannel have not been thoroughly studied, despite the need for accurate pressure measurements in microfluidics. The primary issue with measuring the pressure in microfluidic devices is that the volume of fluid is too small to be used in standard pressure measurement tools. Some studies have presented the use of microballoons, which are size-changing pressure sensors. Servo-nulling, which is historically used for measuring blood pressure, has also been demonstrated to provide pressure information in microfluidic channels with the assistance of a LabVIEW control system. Essentially, a micropipette is immersed in the microchannel fluid and is programmed to respond to changes in the fluid meniscus. A displacement in the meniscus of the fluid in the micropipette induces a voltage drop, which triggers a pump to restore the original position of the meniscus. The pressure exerted by the pump is interpreted as the pressure within the microchannel. Examples Current research in microfluidics is focused on developing point-of-care diagnostics and cell sorting techniques (see lab-on-a-chip), and understanding cell behavior (e.g. cell growth, cell aging). In the field of diagnostics, the lateral flow test is a common microfluidic device platform that utilizes capillary forces to drive fluid transport through a porous membrane. The most famous lateral flow test is the take-home pregnancy test, in which bodily fluid initially wets and then flows through the porous membrane, often cellulose or glass fiber, upon reaching a capture line to indicate a positive or negative signal. An advantage to this design, and several other microfluidic devices, is its simplicity (for example, its lack of human intervention during operation) and low cost. However, a disadvantage to these tests is that capillary action cannot be controlled after it has started, so the test time cannot be sped up or slowed down (which could pose an issue if certain time-dependent processes are to take place during the fluid flow). Another example of point-of-care work involving a capillary pressure-related design component is the separation of plasma from whole blood by filtration through a porous membrane. Efficient and high-volume separation of plasma from whole blood is often necessary for infectious disease diagnostics, like the HIV viral load test. However, this task is often performed through centrifugation, which is limited to clinical laboratory settings. An example of this point-of-care filtration device is a packed-bed filter, which has demonstrated the ability to separate plasma and whole blood by utilizing asymmetric capillary forces within the membrane pores.
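The rectangular-channel relation given above can be evaluated numerically. The following Python sketch is illustrative only; the surface tension, contact angles, and channel dimensions are assumed example values, and the sign convention follows the article (negative pressure pulls liquid into the channel, positive pressure acts as a valve).

```python
# Illustrative sketch: capillary pressure in a rectangular microchannel,
# pc = -gamma * [(cos(th_b) + cos(th_t))/d + (cos(th_l) + cos(th_r))/w],
# with assumed contact angles and channel dimensions.
import math

def microchannel_pc(gamma, th_b, th_t, th_l, th_r, depth, width):
    """Capillary pressure in Pa; negative values pull liquid into the channel."""
    cos = lambda a: math.cos(math.radians(a))
    return -gamma * ((cos(th_b) + cos(th_t)) / depth +
                     (cos(th_l) + cos(th_r)) / width)

gamma = 0.072            # N/m, assumed water/air surface tension
d, w = 50e-6, 200e-6     # m, assumed channel depth and width

print(microchannel_pc(gamma, 30, 30, 30, 30, d, w))      # hydrophilic walls -> negative, channel fills
print(microchannel_pc(gamma, 120, 120, 120, 120, d, w))  # hydrophobic walls -> positive, acts as a valve
```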
Petrochemical industry Capillary pressure plays a vital role in extracting sub-surface hydrocarbons (such as petroleum or natural gas) from underneath porous reservoir rocks. Its measurements are utilized to predict reservoir fluid saturations and cap-rock seal capacity, and for assessing relative permeability (the ability of a fluid to be transported in the presence of a second immiscible fluid) data. Additionally, capillary pressure in porous rocks has been shown to affect phase behavior of the reservoir fluids, thus influencing extraction methods and recovery. It is crucial to understand these geological properties of the reservoir for its development, production, and management (e.g. how easy it is to extract the hydrocarbons). The Deepwater Horizon oil spill is an example of why capillary pressure is significant to the petrochemical industry. It is believed that upon the Deepwater Horizon oil rig’s explosion in the Gulf of Mexico in 2010, methane gas had broken through a recently implemented seal, and expanded up and out of the rig. Although capillary pressure studies (or potentially a lack thereof) do not necessarily sit at the root of this particular oil spill, capillary pressure measurements yield crucial information for understanding reservoir properties that could have influenced the engineering decisions made in the Deepwater Horizon event. Capillary pressure, as seen in petroleum engineering, is often modeled in a laboratory where it is recorded as the pressure required to displace some wetting phase by a non-wetting phase to establish equilibrium. For reference, capillary pressures between air and brine (which is a significant system in the petrochemical industry) have been shown to range between 0.67 and 9.5 MPa. There are various ways to predict, measure, or calculate capillary pressure relationships in the oil and gas industry. These include the following: Leverett J-function The Leverett J-function serves to provide a relationship between the capillary pressure and the pore structure (see Leverett J-function). Mercury Injection This method is well suited to irregular rock samples (e.g. those found in drill cuttings) and is typically used to understand the relationship between capillary pressure and the porous structure of the sample. In this method, the pores of the sample rock are evacuated, followed by mercury filling the pores with increasing pressure. Meanwhile, the volume of mercury at each given pressure is recorded and given as a pore size distribution, or converted to relevant oil/gas data. One pitfall to this method is that it does not account for fluid-surface interactions. However, the entire process of injecting mercury and collecting data occurs rapidly in comparison to other methods. Porous Plate Method The Porous Plate Method is an accurate way to understand capillary pressure relationships in fluid-air systems. In this process, a sample saturated with water is placed on a flat plate, also saturated with water, inside a gas chamber. Gas is injected at increasing pressures, thus displacing the water through the plate. The pressure of the gas represents the capillary pressure, and the amount of water ejected from the porous plate is correlated to the water saturation of the sample. 
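For the mercury injection method described earlier in this section, intrusion pressures are commonly converted to equivalent pore-throat radii using the Washburn relation, i.e. the Young–Laplace equation applied to non-wetting mercury. The sketch below is a hedged illustration of that conversion; the mercury surface tension and contact angle are typical textbook values, not figures from the article.

```python
# Illustrative sketch: converting mercury-injection pressures to equivalent
# pore-throat radii with the Washburn relation r = -2*gamma*cos(theta)/P.
# Mercury properties are typical assumed values.
import math

GAMMA_HG = 0.485   # N/m, mercury surface tension (assumed typical value)
THETA_HG = 140.0   # degrees, mercury contact angle on most minerals (assumed)

def pore_throat_radius(pressure_pa):
    """Equivalent cylindrical pore-throat radius (m) at a given intrusion pressure."""
    return -2.0 * GAMMA_HG * math.cos(math.radians(THETA_HG)) / pressure_pa

for p_mpa in (0.1, 1.0, 10.0, 100.0):
    r = pore_throat_radius(p_mpa * 1e6)
    print(f"{p_mpa:6.1f} MPa -> pore-throat radius ~ {r*1e6:8.3f} um")
# Higher intrusion pressures probe progressively smaller pore throats.
```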
Centrifuge Method The centrifuge method relies on the following relationship between capillary pressure and gravity: pc = (ρw − ρnw) g h where: h is the height of the capillary rise, g is the gravitational acceleration, ρw is the density of the wetting phase, and ρnw is the density of the non-wetting phase. The centrifugal force essentially serves as an applied capillary pressure for small test plugs, often composed of brine and oil. During the centrifugation process, a given amount of brine is expelled from the plug at certain centrifugal rates of rotation. A glass vial measures the amount of fluid as it is being expelled, and these readings result in a curve that relates rotation speeds with drainage amounts. The rotation speed is correlated to capillary pressure by the following equation: pc = (1/2)(ρw − ρnw) ω² (r1² − r2²) where: r1 is the radius of rotation of the bottom of the core sample, r2 is the radius of rotation of the top of the core sample, and ω is the rotational speed. The primary benefits of this method are that it is rapid (producing curves in a matter of hours) and is not restricted to being performed at certain temperatures. Other methods include the Vapor Pressure Method, Gravity-Equilibrium Method, Dynamic Method, Semi-dynamic Method, and the Transient Method. Correlations In addition to measuring the capillary pressure in a laboratory setting to model that of an oil/natural gas reservoir, there exist several relationships to describe the capillary pressure given specific rock and extraction conditions. For example, R. H. Brooks and A. T. Corey developed a relationship for capillary pressure during the drainage of oil from an oil-saturated porous medium experiencing a gas invasion: pc,go = pt [(So − Sor)/(1 − Sor)]^(−1/λ) where: pc,go is the capillary pressure between the oil and gas phases, So is the oil saturation, Sor is the residual oil saturation that remains trapped in the pore at high capillary pressure, pt is the threshold pressure (the pressure at which the gas phase is allowed to flow), and λ is a parameter related to the distribution of pore sizes (λ is large for narrow distributions and small for wide distributions). Additionally, R. G. Bentsen and J. Anli developed a correlation for the capillary pressure during drainage from a porous rock sample in which an oil phase displaces saturated water: pc,ow = pcs ln(1/Swn), with Swn = (Sw − Swi)/(1 − Swi), where: pc,ow is the capillary pressure between the oil and water phases, pcs is a parameter that controls the shape of the capillary pressure function, Swn is the normalized wetting-phase saturation, Sw is the saturation of the wetting phase, and Swi is the irreducible wetting-phase saturation.
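As a worked illustration of the Brooks–Corey drainage form given above, the following Python sketch evaluates the capillary pressure over a range of oil saturations; the threshold pressure, residual saturation, and pore-size parameter are arbitrary example values rather than measured data.

```python
# Illustrative sketch: Brooks-Corey gas/oil drainage capillary pressure,
# pc = pt * Se**(-1/lam) with Se = (So - Sor)/(1 - Sor).
# Parameter values are arbitrary examples, not measured data.
def brooks_corey_pc(So, Sor=0.2, pt=5.0e3, lam=2.0):
    """Capillary pressure (Pa) at oil saturation So during gas drainage."""
    Se = (So - Sor) / (1.0 - Sor)   # effective (normalized) oil saturation
    if Se <= 0.0:
        raise ValueError("oil saturation at or below residual; pc diverges")
    return pt * Se ** (-1.0 / lam)

for So in (0.9, 0.7, 0.5, 0.3, 0.25):
    print(f"So = {So:.2f} -> pc = {brooks_corey_pc(So) / 1e3:6.2f} kPa")
# pc rises steeply as So approaches the residual saturation Sor; the
# pore-size parameter lam controls how steep that rise is.
```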
Averaging capillary pressure vs. water saturation curves It has been shown that as reservoir simulators use the primary drainage capillary pressure data for saturation-height modeling calculations, primary drainage capillary pressure data should be averaged in the same manner that water saturations are averaged. Also, as reservoir simulators use the imbibition and secondary drainage capillary pressure data for fluid displacement calculations, these capillary pressures should not be averaged like primary drainage capillary pressure data; these can instead be averaged by the Leverett J-function. The averaging equations are as follows. Averaging primary drainage capillary pressure vs. normalized saturation data: p̄c(Swn) = Σi φi Vb,i pc,i(Swn) / Σi φi Vb,i, in which n is the number of core samples (i = 1…n), φi is the effective porosity, Vb,i is the bulk volume of the sample, and pc,i is the primary drainage capillary pressure data vs. normalized water saturation. Averaging imbibition and secondary drainage capillary pressure vs. normalized saturation data: J̄(Swn) = (1/n) Σi (pc,i(Swn)/σ) √(ki/φi), in which n is the number of core samples, φi is the effective porosity, ki is the absolute permeability, σ is the interfacial tension or IFT, and pc,i is the imbibition or secondary drainage capillary pressure data vs. normalized water saturation. In nature Needle ice In addition to being manipulated for medical and energy applications, capillary pressure is the cause behind various natural phenomena as well. For example, needle ice, seen in cold soil, occurs via capillary action. The first major contributions to the study of needle ice, or simply, frost heaving were made by Stephen Taber (1929) and Gunnar Beskow (1935), who independently aimed to understand soil freezing. Taber's initial work was related to understanding how the size of pores within the ground influenced the amount of frost heave. He also discovered that frost heave is favorable for crystal growth and that a gradient of soil moisture tension drives water upward toward the freezing front near the top of the ground. In Beskow's studies, he defined this soil moisture tension as "capillary pressure" (and soil water as "capillary water"). Beskow determined that the soil type and effective stress on the soil particles influenced frost heave, where effective stress is the sum of pressure from above ground and the capillary pressure. In 1961, D.H. Everett elaborated on Taber and Beskow's studies to understand why pore spaces filled with ice continue to experience ice growth. He utilized thermodynamic equilibrium principles, a piston-cylinder model for ice growth and the following equation to understand the freezing of water in porous media (directly applicable to the formation of needle ice): ps − pl = γsl (dA/dV) where: ps is the pressure of the solid crystal, pl is the pressure in the surrounding liquid, γsl is the interfacial tension between the solid and the liquid, A is the surface area of the phase boundary, V is the volume of the crystal, and dA/dV corresponds to the mean curvature of the solid/liquid interface. With this equation and model, Everett noted the behavior of water and ice given different pressure conditions at the solid-liquid interface. Everett determined that if the pressure of the ice is equal to the pressure of the liquid underneath the surface, ice growth is unable to continue into the capillary. Thus, with additional heat loss, it is most favorable for water to travel up the capillary and freeze in the top cylinder (as needle ice continues to grow atop itself above the soil surface). As the pressure of the ice increases, a curved interface between the solid and liquid arises and the ice will either melt, or equilibrium will be reestablished so that further heat loss again leads to ice formation. Overall, Everett determined that frost heaving (analogous to the development of needle ice) occurs as a function of the pore size in the soil and the energy at the interface of ice and water. A downside to Everett's model is that he did not consider soil particle effects on the surface. Circulatory system Capillaries in the circulatory system are vital to providing nutrients and excreting waste throughout the body. There exist pressure gradients (due to hydrostatic and oncotic pressures) in the capillaries that control blood flow at the capillary level, and ultimately influence the capillary exchange processes (e.g. fluid flux). Due to limitations in technology and bodily structure, most studies of capillary activity are done in the retina, lip and skin, historically through cannulation or a servo-nulling system.
Capillaroscopy has been used to visualize capillaries in the skin in 2D, with reported capillary pressures in humans averaging in the range of 10.5 to 22.5 mmHg, and an increase in pressure among people with type 1 diabetes and hypertension. Relative to other components of the circulatory system, capillary pressure is low, so as to avoid rupture, but sufficient for facilitating capillary functions. See also Capillary action Capillary number Disjoining pressure Leverett J-function Young–Laplace equation Laplace pressure Surface tension Microfluidics Water retention curve TEM-function USBM wettability index References Fluid dynamics
Capillary pressure
[ "Chemistry", "Engineering" ]
3,626
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
3,917,813
https://en.wikipedia.org/wiki/Rutherford%20model
The Rutherford model is a name for the first model of an atom with a compact nucleus. The concept arose from Ernest Rutherford's discovery of the nucleus. Rutherford directed the Geiger–Marsden experiment in 1909, which showed much more alpha particle recoil than J. J. Thomson's plum pudding model of the atom could explain. Thomson's model had positive charge spread out in the atom. Rutherford's analysis proposed a high central charge concentrated into a very small volume in comparison to the rest of the atom and with this central volume containing most of the atom's mass. The central region would later be known as the atomic nucleus. Rutherford did not discuss the organization of electrons in the atom and did not himself propose a model for the atom. Niels Bohr joined Rutherford's lab and developed a theory for the electron motion which became known as the Bohr model. Background Throughout the 1800s, speculative ideas about atoms were discussed and published. J. J. Thomson's model was the first of these models to be based on experimentally detected subatomic particles. In the same paper in which Thomson announced his results on the "corpuscle" nature of cathode rays, an event considered the discovery of the electron, he began speculating on atomic models composed of electrons. He developed his model, now called the plum pudding model, primarily in 1904-06. He produced an elaborate mechanical model of the electrons moving in concentric rings, but the positive charge needed to balance the negative electrons was a simple sphere of uniform charge and unknown composition. Between 1904 and 1910 Thomson developed formulae for the deflection of fast beta particles from his atomic model for comparison to experiment. Similar work by Rutherford using alpha particles would eventually show Thomson's model could not be correct. Also among the early models were "planetary" or Solar System-like models. In a 1901 paper, Jean Baptiste Perrin built on Thomson's discovery to propose a Solar System-like model for atoms, with very strongly charged "positive suns" surrounded by "corpuscles, a kind of small negative planets", where the word "corpuscles" refers to what we now call electrons. Perrin discussed how this hypothesis might relate to important, then-unexplained phenomena like the photoelectric effect, emission spectra, and radioactivity. Perrin later credited Rutherford with the discovery of the nuclear model. A somewhat similar model proposed by Hantaro Nagaoka in 1904 used Saturn's rings as an analog. The rings consisted of a large number of particles that repelled each other but were attracted to a large central charge. This charge was calculated to be 10,000 times the charge of the ring particles for stability. George A. Schott showed in 1904 that Nagaoka's model could not be consistent with results of atomic spectroscopy and the model fell out of favor. Experimental basis for the model Rutherford's nuclear model of the atom grew out of a series of experiments with alpha particles, a form of radiation Rutherford discovered in 1899. These experiments demonstrated that alpha particles "scattered" or bounced off atoms in ways that Thomson's model could not predict. In 1908 and 1910, Hans Geiger and Ernest Marsden in Rutherford's lab showed that alpha particles could occasionally be reflected from gold foils. If Thomson's model were correct, the beam would go through the gold foil with only very small deflections. In the experiment most of the beam passed through the foil, but a few particles were deflected strongly.
In a May 1911 paper, Rutherford presented his own physical model for subatomic structure, as an interpretation for the unexpected experimental results. In it, the atom is made up of a central charge (this is the modern atomic nucleus, though Rutherford did not use the term "nucleus" in his paper). Rutherford only committed himself to a small central region of very high positive or negative charge in the atom. For concreteness, consider the passage of a high speed α particle through an atom having a positive central charge N e, and surrounded by a compensating charge of N electrons. Using only energetic considerations of how far particles of known speed would be able to penetrate toward a central charge of 100 e, Rutherford was able to calculate that the radius of his gold central charge would need to be less (how much less could not be told) than 3.4 × 10−14 meters. This was in a gold atom known to be 10−10 metres or so in radius—a very surprising finding, as it implied a strong central charge less than 1/3000th of the diameter of the atom. The Rutherford model served to concentrate a great deal of the atom's charge and mass to a very small core, but did not attribute any structure to the remaining electrons and remaining atomic mass. It did mention the atomic model of Hantaro Nagaoka, in which the electrons are arranged in one or more rings, with the specific metaphorical structure of the stable rings of Saturn. The plum pudding model of J. J. Thomson also had rings of orbiting electrons. The Rutherford paper suggested that the central charge of an atom might be "proportional" to its atomic mass in hydrogen mass units u (roughly 1/2 of it, in Rutherford's model). For gold, this mass number is 197 (not then known to great accuracy) and was therefore modelled by Rutherford to be possibly 196 u. However, Rutherford did not attempt to make the direct connection of central charge to atomic number, since gold's "atomic number" (at that time merely its place number in the periodic table) was 79, and Rutherford had modelled the charge to be about +100 units (he had actually suggested 98 units of positive charge, to make half of 196). Thus, Rutherford did not formally suggest the two numbers (periodic table place, 79, and nuclear charge, 98 or 100) might be exactly the same. In 1913 Antonius van den Broek suggested that the nuclear charge and atomic weight were not connected, clearing the way for the idea that atomic number and nuclear charge were the same. This idea was quickly taken up by Rutherford's team and was confirmed experimentally within two years by Henry Moseley. These are the key indicators: The atom's electron cloud does not (substantially) influence alpha particle scattering. Much of an atom's positive charge is concentrated in a relatively tiny volume at the center of the atom, known today as the nucleus. The magnitude of this charge is proportional to (up to a charge number that can be approximately half of) the atom's atomic mass—the remaining mass is now known to be mostly attributed to neutrons. This concentrated central mass and charge is responsible for deflecting both alpha and beta particles. The mass of heavy atoms such as gold is mostly concentrated in the central charge region, since calculations show it is not deflected or moved by the high speed alpha particles, which have very high momentum in comparison to electrons, but not with regard to a heavy atom as a whole. The atom itself is about 100,000 (105) times the diameter of the nucleus. 
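A modern back-of-envelope version of Rutherford's energetic argument (not his original working) can be written in a few lines: conservation of energy gives the distance of closest approach of an alpha particle to a point charge. The central charge of 100 e follows the paper's estimate quoted above; the alpha-particle kinetic energy of about 7.7 MeV is an assumed value typical of the radium sources of the time.

```python
# Back-of-envelope sketch (not Rutherford's original working): distance of
# closest approach from energy balance,
#   (1/2) m v^2 = (1/(4*pi*eps0)) * q_alpha * Q_center / d.
E_ALPHA_MEV = 7.7    # assumed alpha kinetic energy, MeV (typical radium source)
Z_CENTER = 100       # central charge in units of e, per Rutherford's estimate
Z_ALPHA = 2          # alpha particle charge in units of e
KE2 = 1.44e-15       # Coulomb constant times e^2, in MeV*m (i.e. 1.44 MeV*fm)

d_closest = Z_ALPHA * Z_CENTER * KE2 / E_ALPHA_MEV  # metres
print(f"closest approach ~ {d_closest:.2e} m")      # about 3.7e-14 m
# Comparable to Rutherford's quoted upper bound of 3.4e-14 m for gold, and
# roughly 3,000 times smaller than the ~1e-10 m radius of the atom.
```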
That size ratio can be pictured as a grain of sand placed in the middle of a football field. Contribution to modern science Rutherford's new atom model caused no reaction at first. Rutherford's paper explicitly set the electrons aside, only mentioning Hantaro Nagaoka's Saturnian model. By ignoring the electrons, Rutherford also passed over any potential implications for atomic spectroscopy and chemistry. Rutherford himself did not press the case for his atomic model in the following years: his own 1913 book on "Radioactive substances and their radiations" only mentions the atom twice; other books by other authors around this time focus on Thomson's model. The impact of Rutherford's nuclear model came after Niels Bohr arrived as a post-doctoral student in Manchester at Rutherford's invitation. Bohr dropped his work on the Thomson model in favor of Rutherford's nuclear model, developing the Rutherford–Bohr model over the next several years. Eventually Bohr incorporated early ideas of quantum mechanics into the model of the atom, allowing prediction of electronic spectra and concepts of chemistry. Following Rutherford's discovery and the gold foil experiments that motivated it, subsequent research determined the atomic structure in greater detail. Scientists eventually established that atoms have a positively charged nucleus (with an atomic number of charges) in the center, with a radius of about 1.2 × 10−15 meters times the cube root of the atomic mass number. Electrons were found to be even smaller. References External links Rutherford's Model by Raymond College Rutherford's Model by Kyushu University 1911 in science Articles containing video clips Ernest Rutherford Foundational quantum physics Obsolete theories in physics
Rutherford model
[ "Physics" ]
1,735
[ "Theoretical physics", "Foundational quantum physics", "Quantum mechanics", "Obsolete theories in physics" ]
21,574,599
https://en.wikipedia.org/wiki/Tribocorrosion
Tribocorrosion is a material degradation process due to the combined effect of corrosion and wear. The name tribocorrosion expresses the underlying disciplines of tribology and corrosion. Tribology is concerned with the study of friction, lubrication and wear (its name comes from the Greek "tribo" meaning to rub) and corrosion is concerned with the chemical and electrochemical interactions between a material, normally a metal, and its environment. As a field of research tribocorrosion is relatively new, but tribocorrosion phenomena have been around ever since machines and installations are being used. Wear is a mechanical material degradation process occurring on rubbing or impacting surfaces, while corrosion involves chemical or electrochemical reactions of the material. Corrosion may accelerate wear and wear may accelerate corrosion. One then speaks of corrosion accelerated wear or wear accelerated corrosion. Both these phenomena, as well as fretting corrosion (which results from small amplitude oscillations between contacting surfaces) fall into the broader category of tribocorrosion. Erosion-corrosion is another tribocorrosion phenomenon involving mechanical and chemical effects: impacting particles or fluids erode a solid surface by abrasion, chipping or fatigue while simultaneously the surface corrodes. Phenomena in different engineering fields Tribocorrosion occurs in many engineering fields. It reduces the life-time of pipes, valves and pumps, of waste incinerators, of mining equipment or of medical implants, and it can affect the safety of nuclear reactors or of transport systems. On the other hand, tribocorrosion phenomena can also be applied to good use, for example in the chemical-mechanical planarization of wafers in the electronics industry or in metal grinding and cutting in presence of aqueous emulsions. Keeping this in mind, we may define tribocorrosion in a more general way independently of the notion of usefulness or damage or of the particular type of mechanical interaction: Tribocorrosion concerns the irreversible transformation of materials or of their function as a result of simultaneous mechanical and chemical/electrochemical interactions between surfaces in relative motion. Biotribocorrosion Biotribocorrosion covers the science of surface transformations resulting from the interactions of mechanical loading and chemical/electrochemical reactions that occur between elements of a tribological system exposed to biological environments. It has been studied for artificial joint prostheses. It is important to understand material degradation processes for joint implants to achieve longer service life and better safety issues for such devices. Passive metals While tribocorrosion phenomena may affect many materials, they are most critical for metals, especially the normally corrosion resistant so-called passive metals. The vast majority of corrosion resistant metals and alloys used in engineering (stainless steels, titanium, aluminium etc.) fall into this category. These metals are thermodynamically unstable in the presence of oxygen or water, and they derive their corrosion resistance from the presence at the surface of a thin oxide film, called the passive film, which acts as a protective barrier between the metal and its environment. Passive films are usually just a few atomic layers thick. Nevertheless, they can provide excellent corrosion protection because if damaged accidentally they spontaneously self-heal by metal oxidation. 
However, when a metal surface is subjected to severe rubbing or to a stream of impacting particles the passive film damage can become continuous and extensive. The self-healing process may no longer be effective and in addition it requires a high rate of metal oxidation. In other words, the underlying metal will strongly corrode before the protective passive film is reformed, if at all. In such a case, the total material loss due to tribocorrosion will be much higher than the sum of wear and corrosion one would measure in experiments with the same metal where only wear or only corrosion takes place. The example illustrates the fact that the rate of tribocorrosion is not simply the addition of the rate of wear and the rate of corrosion but it is strongly affected by synergistic and antagonistic effects between mechanical and chemical mechanisms. To study such effects in the laboratory, one most often uses mechanical wear testing rigs which are equipped with an electrochemical cell. This permits one to control independently the mechanical and chemical parameters. For example, by imposing a given potential to the rubbing metal one can simulate the oxidation potential of the environment and in addition, under certain conditions, the current flow is a measure of the instantaneous corrosion rate. Volume loss due to electrochemical dissolution can be measured by Faraday's laws of electrolysis and subtracted from total volume loss in tribocorrosion so the sum of mechanical wear loss and the synergies can be calculated. For a deeper understanding tribocorrosion experiments are supplemented by detailed microscopic and analytical studies of the contacting surfaces. At high temperatures, the more rapid generation of oxide due to a combination of temperature and tribological action during sliding wear can generate potentially wear resistant oxide layers known as 'glazes'. Under such circumstances, tribocorrosion can be used potentially in a beneficial way. Erosion corrosion Erosion corrosion is a degradation of material surface due to mechanical action, often by impinging liquid, abrasion by a slurry, particles suspended in fast flowing liquid or gas, bubbles or droplets, cavitation, etc. The mechanism can be described as follows: mechanical erosion of the material, or protective (or passive) oxide layer on its surface, enhanced corrosion of the material, if the corrosion rate of the material depends on the thickness of the oxide layer. The mechanism of erosion corrosion, the materials affected by it, and the conditions when it occurs are generally different from that of flow-accelerated corrosion, although the latter is sometimes classified as a sub-type of erosion corrosion. References Tribology Engineering mechanics Corrosion
Tribocorrosion
[ "Chemistry", "Materials_science", "Engineering" ]
1,195
[ "Tribology", "Metallurgy", "Materials science", "Surface science", "Corrosion", "Electrochemistry", "Civil engineering", "Mechanical engineering", "Engineering mechanics", "Materials degradation" ]
35,868,054
https://en.wikipedia.org/wiki/Cr23C6%20crystal%20structure
Cr23C6 is the prototypical compound of a common crystal structure, discovered in 1933 as part of the chromium-carbon binary phase diagram. Over 85 known compounds adopt this structure type, which can be described as a NaCl-like packing of chromium cubes and cuboctahedra. Structure The space group of this structure is called Fm-3m (in Hermann–Mauguin notation, with the 3 carrying an inversion bar) or "225" (its number in the International Tables for Crystallography). It belongs to a cubic crystal system, with Pearson symbol cF116. The shortest interatomic distances are between carbon and chromium atoms, which is expected on the basis of atomic size. The carbon atoms are in positions that cap each face of the chromium cubes and their coordination environment can be thought of as distorted square antiprisms formed from chromium atoms of both the cubes and the cuboctahedra. The closest Cr-Cr contacts are between members of a cuboctahedron, and the third closest are between members of a cube. The members of the cube, however, are closer to the 8 chromium atoms in the unit cell that are not part of either polyhedron. The coordination environment of these other atoms can be thought of as distorted Friauf polyhedra composed of chromium atoms, if next-nearest neighbors are included. Materials Examples of compounds that form in this structure type include Cr23C6, Mn23C6, and many ternary intermetallic carbides and borides. A few phases of ternary silicides, germanides, and phosphides are also known to exist. In going from the binary to ternary systems, some of the transition metal atoms are substituted by a third element, which can be an alkali metal, alkaline earth metal, rare-earth element, main group element, or another transition metal. This leads to an empirical formula of the form A23−xBxC6. Materials of this kind continue to be studied for potentially interesting magnetic and physical properties. References External links International Crystal Structure Database Crystal structure types
Cr23C6 crystal structure
[ "Chemistry", "Materials_science" ]
452
[ "Crystallography", "Crystal structure types" ]
35,868,491
https://en.wikipedia.org/wiki/Epigenetics%20in%20stem-cell%20differentiation
Embryonic stem cells are capable of self-renewing and differentiating to the desired fate depending on their position in the body. Stem cell homeostasis is maintained through epigenetic mechanisms that are highly dynamic in regulating the chromatin structure as well as specific gene transcription programs. Epigenetics has been used to refer to changes in gene expression which are heritable through modifications not affecting the DNA sequence. The mammalian epigenome undergoes global remodeling during early stem cell development that requires commitment of cells to be restricted to the desired lineage. Multiple lines of evidence suggest that the maintenance of the lineage commitment of stem cells is controlled by epigenetic mechanisms such as DNA methylation, histone modifications and regulation of ATP-dependent remodeling of chromatin structure. Based on the histone code hypothesis, distinct covalent histone modifications can lead to functionally distinct chromatin structures that influence the cell's fate. This regulation of chromatin through epigenetic modifications is a molecular mechanism that determines whether the cell continues to differentiate into the desired fate. A research study by Lee et al. examined the effects of epigenetic modifications on the chromatin structure and the modulation of these epigenetic markers during stem cell differentiation through in vitro differentiation of murine embryonic stem (ES) cells. Experimental background Embryonic stem cells exhibit dramatic and complex alterations to both global and site-specific chromatin structures. Lee et al. performed an experiment to determine the importance of deacetylation and acetylation for stem cell differentiation by looking at global acetylation and methylation levels at specific modification sites on histone H3, namely H3K9 and H3K4. Gene expression at these histone sites, regulated by epigenetic modifications, is critical in restricting the embryonic stem cell to desired cell lineages and developing cellular memory. For mammalian cells, the maintenance of cytosine methylation is catalyzed by DNA methyltransferases, and any disruption to these methyltransferases will cause a lethal phenotype in the embryo. Cytosine methylation is examined at H3K9, which is associated with inactive heterochromatin and occurs mainly at CpG dinucleotides, while global acetylation is examined at H3K4, which is associated with active euchromatin. The mammalian zygotic genome undergoes active and passive global cytosine demethylation following fertilization that reaches a minimal point of 20% CpG methylation at the blastocyst stage, which is then followed by a wave of methylation that reprograms the chromatin structure in order to restore global levels of CpG methylation to 60%. Embryonic stem cells containing reduced or elevated levels of methylation are viable but unable to differentiate; critical regulation of cytosine methylation is therefore required for mammalian development. Effects of global histone modifications during embryonic stem cell differentiation Histone modifications in chromatin were analyzed at various time intervals (along a 6-day period) following the initiation of in vitro embryonic stem cell differentiation. The removal of leukemia inhibitory factor (LIF) triggers differentiation.
Representative data of the histone modifications at the specific sites after LIF removal, assessed using Western blotting, confirms strong deacetylation at the H3K4 and H3K9 positions on histone H3 after one day, followed by a small increase in acetylation by day two. The methylation of histone H3K4 also decreased after one day of LIF removal but showed a rebound between days 2–4 of differentiation, finally ending with a decrease in methylation on day five. These results indicate a decrease in the level of active euchromatin epigenetic marks upon initiation of embryonic stem cell differentiation which is then followed immediately by reprogramming of the epigenome. Histone modifications of H3K9 position show a decrease in di- and tri-methylation of undifferentiated embryonic stem cells and had a gradual increase in methylation during the six-day time course of in vitro differentiation, which indicated that there is a global increase of inactive heterochromatin levels at this histone mark. As the embryonic stem cell undergoes differentiation the markers for active euchromatin (histone acetylation and H3K4 methylation) are decreased after the removal of LIF showing that the cell is indeed becoming more differentiated. The slight rebound in each of these marks allows for further differentiation to occur by allowing another opportunity to decrease the markers once again, bringing the cell closer to its mature state. Since there is also an increase throughout the six-day period in H3K9me, a marker for active heterochromatin, once differentiation occurs it is concluded that the formation of heterochromatin occurs as the cell is differentiated into its desired fate making the cell inactive to prevent further differentiation. DNA methylation in differentiated versus undifferentiated cells Global levels of 5-methylcytosine were compared between undifferentiated and differentiated embryonic stem cells in vitro. The global cytosine methylation pattern appears to be established prior to the reprogramming of the histone code that occurs upon in vitro differentiation of embryonic stem cells. As the embryonic stem cell undergoes differentiation the level of DNA methylation increases. This indicates that there is an increase in inactive heterochromatin during differentiation. Supplemental effects of methylation with DNMTs In mammals, DNA methylation plays a role in regulating a key component of multipotency—the ability to rapidly self-renew. Khavari et al. discussed the fundamental mechanisms of DNA methylation and the interaction with several pathways regulating differentiation. New approaches studying the genomic status of DNA methylation in various states of differentiation have shown that methylation at CpG sites associated with putative enhancers are important in this process. DNA methylation can modulate the binding affinities of transcription factors by recruiting repressors such as MeCP2 which display binding specificity for sequences containing methylated CpG dinucleotides. DNA methylation is controlled by certain methyltransferases, DMNTs, which perform different functions depending on each one. DNMT3A and DNMT3B have both been linked to a role in the establishment of DNA methylation pattern in the early development of the stem cell, whereas DNMT1 is required to methylate a newly synthesized strand of DNA after the cell has undergone replication in order to sustain the epigenetic regulatory state. 
Numerous proteins can physically interact with DNMTs themselves, and these help target DNMT1 to hemi-methylated DNA. Several new studies point to the central role of DNA methylation interacting with the regulation of cell cycles and DNA repair pathways in order to maintain the undifferentiated state. In embryonic stem cells, DNMT1 depletion within the undifferentiated progenitor cell compartment led to cell cycle arrest, premature differentiation and a failure of tissue self-renewal. The loss of DNMT1 produced profound effects, with activation of differentiation genes and loss of genes promoting cell cycle progression, thus indicating that DNMT1 and other DNMTs continuously suppress differentiation and thereby maintain the pluripotent state. These studies point to the importance of the interaction of DNMTs in order to maintain stem cell states, allowing further differentiation and formation of heterochromatin to occur. Epigenetic modifications of regulated genes during ESC differentiation Okamoto et al. previously documented the expression level of the Oct4 gene decreasing with embryonic stem cell differentiation. Lee et al. performed a ChIP analysis of the Oct4 promoter region, a gene associated with undifferentiated cells, to examine the epigenetic modifications of regulated genes undergoing development during embryonic stem cell differentiation. This promoter region decreased at H3K4 methylation and H3K9 acetylation sites and increased at the H3K9 methylation site during differentiation. Analysis of a CpG motif of the Oct4 gene promoter revealed a progressive increase of DNA methylation; the motif was completely methylated at day 10 of differentiation, as previously reported in Gidekel and Bergman. These results indicate that there was a shift from active euchromatin to inactive heterochromatin due to the decrease of acetylation of H3K4 and an increase of H3K9me. This means that the cell is becoming differentiated at the Oct4 gene, which is coincident with the silencing of Oct4 gene expression. Another site-specific gene tested for histone modification was the Brachyury gene, a marker of mesoderm differentiation that is only slightly expressed in undifferentiated embryonic stem cells. "Brachyury" was induced at day five of differentiation and completely silenced by day 10, corresponding to the last day of differentiation. The ChIP analysis of the "Brachyury" gene promoter revealed increased mono- and di-methylation of H3K4 at days 0 and 5 of embryonic stem cell differentiation, with a loss of gene expression at day 10. H3K4 trimethylation coincides with the time of highest Brachyury gene expression since it only had gene expression on day 5. H3K4 methylations in all forms are absent at day 10 of differentiation, which correlates with the silencing of Brachyury gene expression. Mono-methylation of both histone sites was present at day 0, indicating a mark that is not a useful indicator of chromatin structure. Acetylation of H3K9 does not correlate with Brachyury gene expression since it was downregulated at the induction of differentiation. Upon examination of DNA methylation, no intermediate-sized band formed in the Southern analysis, suggesting that CpG motifs upstream of the promoter region are not methylated, i.e., an absence of cytosine methylation at this site.
It is demonstrated from these studies that both H3K9 di- and tri-methylation correlate with the DNA methylation and gene expression, while H3K4 tri-methylation is associated with the highest gene expression stage of the Brachyury gene. A previous report from Santos-Rosa is in agreement with these data, showing that active genes are associated with H3K4 tri-methylation in yeast. These data indicate the same result as for the Oct4 gene, in that heterochromatin forms as differentiation occurs, again coinciding with the silencing of Brachyury gene expression. Epigenetic crosstalk keeps ground-state ESCs from entering a primed state In a study done by Mierlo et al., data were obtained suggesting that unique patterns of H3K27me3 and PRC2 polycomb repressive complexes shield ESCs from leaving the ground state or entering a primed state. They started from the observation that ground state ESCs are maintained by LIF or 2i, and began to investigate what the ground state epigenome looked like. Using ChIP-seq profiling, they were able to determine that the baseline levels of H3K27me3 were higher in 2i ESCs as opposed to those in serum (primed). This conclusion was further supported by the finding in the same paper that 2i ESCs gained primed-like qualities after removal of a functional PRC2 complex, which abolished H3K27me3. Effect of TSA on stem cell differentiation Leukemia inhibitory factor (LIF) was removed from all the cell lines. LIF inhibits cell differentiation, and its removal allows the cell lines to go through cell differentiation. The cell lines were treated with Trichostatin A (TSA), a histone deacetylase inhibitor, for 6 days. One group of cell lines was treated with 10 nM of TSA. The Western analysis showed the lack of the initial deacetylation on Day 1 that was observed in the control for embryonic stem cell differentiation. The lack of histone deacetylase activity allowed the acetylation of H3K9 and histone H4. Embryonic stem cells were also analyzed morphologically to observe embryoid body formation as one of the measures of cell differentiation. The 10 nM TSA-treated cells failed to form the embryoid bodies by Day 6 that were observed in the control cell line. This implies that the ES cells treated with TSA lacked the deacetylation on Day 1 and failed to differentiate after the removal of LIF. The second group, '-TSA Day4', was treated with TSA for 3 days. As soon as the TSA treatment was stopped, deacetylation was observed on Day 4 and acetylation recovered on Day 5. The morphological examination showed embryoid body formation by Day 6. In addition, the embryoid body formation was faster than in the control cell line. This suggests that the '-TSA Day4' lines were responding to the removal of LIF but were unable to acquire any differentiation phenotype. They were able to acquire the differentiation phenotype after the cessation of TSA treatment, and at a rapid rate. The morphological examination of the third group, '5 nM TSA', showed an intermediate effect between the control and the 10 nM TSA group. The lower dose of TSA allowed some embryoid body formation. This experiment shows that TSA inhibits histone deacetylase and that histone deacetylase activity is required for embryonic stem cell differentiation. Without the initial deacetylation on Day 1, the ES cells cannot go through differentiation.
Alkaline phosphatase activity Alkaline phosphatases found in humans are membrane bound glycoproteins, which function to catalyze the hydrolysis of monophosphate esters. McKenna et al. (1979) found that there are at least three varieties of alkaline phosphatases, kidney, liver, and bone alkaline phosphatases, that are all coded by the same structural gene, but contain non-identical carbohydrate moieties. The alkaline phosphatase varieties, therefore, express a unique complement of in the enzymatic processes in post-translational glycosylation of proteins. In normal stem cells, the activity of alkaline phosphatase activity is lowered upon differentiation. Trichostatin A causes the cells to maintain the activity of alkaline phosphatase. Trichostatin A can cause reversible inhibition of mammalian histone deacetylase, which leads to the inhibition of cell growth Significant increase in alkaline phosphatase extinction was observed when Trichostatin A was withdrawn after three days. Alkaline phosphatase activity correlates with the morphology changes. Initial deacetylation of histone is required for embryonic stem cell differentiation. HDAC1, but not HDAC2 controls differentiation Dovery et al. (2010) used HDAC knockout mice to demonstrate whether HDAC1 or HDAC2 was important for the embryonic stem cell differentiation. Examination of global histone acetylation in the absence of HDAC 1 showed an increase in acetylation. Global histone acetylation levels were unchanged by the loss of HDAC2. In order to analyze the process of HDAC knockout mouse in detail, the knockout mice embryonic stem cells were used to generate embryoid bodies. It showed that just before or during gastrulation, embryonic stem cells lacking HDAC1 acquired visible developmental defects. The continued culture of HDAC1 knockout embryonic stem cells showed that the embryoid bodies formed became irregular and reduced in size rather than uniformly spherical as in normal mice. Embryonic stem cell proliferation was unaffected by the loss of either HDAC1 or HDAC2 but the differentiation of embryonic stem cells were affected with that lack of HDAC 1. This shows that HDAC1 is required for cell fate determination during differentiation. In a study done by Chronis et al. (2017), it was found that during the reprogramming of somatic cells into pluripotent stem cells, OSK (three of the four Yamanaka factors – Oct4, Sox2, KLF4, c-Myc) works together with other transcription factors to change the enhancer landscape, leading to the loss of differentiation and the gain of pluripotency. They were able to determine this by mapping OSKM-binding chromatin states in reprogramming stages and doing loss and gain of function experiments. They were also able to conclude that OSK silenced ME (MEF-enhancers) partially through Hdac1, which suggests that Hdac1 plays a role in the process of reprogramming somatic cells. Epidrugs Epigenetics is known to be important in the regulation of gene expression of differentiation. As such, studies done on small molecules or drugs that inhibit epigenetic mechanisms during differentiation can have great potential for clinical applications such as bone regeneration from mesenchymal stem cells. One example of such an application is the use of decitabine (a deoxycytidine analog) to deplete DNA methyl transferase 1 (DNMT1) in the context of melanomas. Alcazar et al. 
did this with the intent to cause gene expression changes (as a result of the DNMT1 depletion) that would lead to the differentiation of melanocytes, leading to inhibited tumor growth. Small molecules such as these (epidrugs) – while harboring great potential for therapeutic use – are also associated with risk. Circling back to the DNMTi (decitabine), there are known side effects such as anemia that can occur – in addition to other unexpected off-target effects. This underscores the importance of improving the specificity of the epidrug target, and of devising new and improved methods of drug delivery. Quantification of the Waddington landscape C. Waddington coined the metaphor of the “epigenetic landscape”, in which a marble rolls down a landscape, its course determined by the topography of the land, eventually working towards a lower point. At different points, these paths branch out, and the marble goes down one path. In this metaphor, the ball is a cell during development, and this metaphor can be extended to the dynamics of gene regulation that underlie cell differentiation (epigenetics). Wang et al (2011). developed a theoretical/mathematical framework for this, which is significant because it is one of the first attempts to link the Waddington landscape idea with gene regulatory networks in a data-driven way. This, in their own words, provides a “framework for studying the global nature of the binary fate decision and commitment of a multipotent cell into one of two cell fates”, which can be an extremely valuable tool for cell reprogramming or just learning more about what could drive differentiation. The future Any disturbance of a stable epigenetic regulation of gene expression mediated by DNA methylation is associated with a number of human disorders, including cancer as well as congenital diseases such as pseudohypoparathyroidism type IA, Beckwith-Wiedemann, Prader-Willi and Angelman syndromes, which are each caused by altered methylation-based imprinting at specific loci. Perturbations of both global and gene-specific patterns of cytosine methylation are commonly observed in cancer while histone deacetylation is an important feature of nuclear reprogramming in oocytes during meiosis. Recent studies have revealed that there is an array of different pathways that cooperates with one another in order to bestow proper epigenetic regulation by DNA methylation. Future studies will be needed to further clarify the certain mechanism pathways such as DNA binding proteins, DNA repair and noncoding RNAs serve in order to regulate DNA methylation to suppress differentiation and sustain self-renewal in somatic stem cells in the epidermis and other tissues. Addressing these questions will help extend insight into these recent findings for a central role in epigenetic regulators of DNA methylation in controlling embryonic stem cell differentiation. References Epigenetics Induced stem cells
Epigenetics in stem-cell differentiation
[ "Biology" ]
4,220
[ "Induced stem cells", "Stem cell research" ]
35,870,958
https://en.wikipedia.org/wiki/Order-2%20apeirogonal%20tiling
In geometry, an order-2 apeirogonal tiling, apeirogonal dihedron, or infinite dihedron is a tessellation (gap-free filling with repeated shapes) of the plane consisting of two apeirogons. It may be considered an improper regular tiling of the Euclidean plane, with Schläfli symbol {∞,2}. Two apeirogons, joined along all their edges, can completely fill the entire plane, as an apeirogon is infinite in size and has an interior angle of 180°, which is half of a full 360°. Related tilings and polyhedra Similarly to the uniform polyhedra and the uniform tilings, eight uniform tilings may be based on the regular apeirogonal tiling. The rectified and cantellated forms are duplicated, and as two times infinity is also infinity, the truncated and omnitruncated forms are also duplicated, therefore reducing the number of unique forms to four: the apeirogonal tiling, the apeirogonal hosohedron, the apeirogonal prism, and the apeirogonal antiprism. See also Order-3 apeirogonal tiling - hyperbolic tiling Order-4 apeirogonal tiling - hyperbolic tiling Notes References The Symmetries of Things 2008, John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss External links Jim McNeill: Tessellations of the Plane Apeirogonal tilings Euclidean tilings Isogonal tilings Isohedral tilings Order-2 tilings Regular tilings
Order-2 apeirogonal tiling
[ "Physics", "Mathematics" ]
318
[ "Geometry stubs", "Planes (geometry)", "Isogonal tilings", "Tessellation", "Euclidean plane geometry", "Geometry", "Euclidean tilings", "Isohedral tilings", "Symmetry" ]
2,916,856
https://en.wikipedia.org/wiki/Ion%20mobility%20spectrometry
Ion mobility spectrometry (IMS) It is a method of conducting analytical research that separates and identifies ionized molecules present in the gas phase based on the mobility of the molecules in a carrier buffer gas. Even though it is used extensively for military or security objectives, such as detecting drugs and explosives, the technology also has many applications in laboratory analysis, including studying small and big biomolecules. IMS instruments are extremely sensitive stand-alone devices, but are often coupled with mass spectrometry, gas chromatography or high-performance liquid chromatography in order to achieve a multi-dimensional separation. They come in various sizes, ranging from a few millimetres to several metres depending on the specific application, and are capable of operating under a broad range of conditions. IMS instruments such as microscale high-field asymmetric-waveform ion mobility spectrometry can be palm-portable for use in a range of applications including volatile organic compound (VOC) monitoring, biological sample analysis, medical diagnosis and food quality monitoring. Systems operated at higher pressure (i.e. atmospheric conditions, 1 atm or 1013 hPa) are often accompanied by elevated temperature (above 100 °C), while lower pressure systems (1–20 hPa) do not require heating. History IMS was first developed primarily by Earl W. McDaniel of Georgia Institute of Technology in the 1950s and 1960s when he used drift cells with low applied electric fields to study gas phase ion mobilities and reactions. In the following decades, he integrated the recently developed technology he had been working on with a magnetic-sector mass spectrometer. During this period, others also utilized his techniques in novel and original ways. Since then, IMS cells have been included in various configurations of mass spectrometers, gas chromatographs, and high-performance liquid chromatography instruments. IMS is a method used in multiple contexts, and the breadth of applications that it can support, in addition to its capabilities, is continually being expanded. Applications Perhaps ion mobility spectrometry's greatest strength is the speed at which separations occur—typically on the order of tens of milliseconds. This feature combined with its ease of use, relatively high sensitivity, and highly compact design have allowed IMS as a commercial product to be used as a routine tool for the field detection of explosives, drugs, and chemical weapons. Major manufacturers of IMS screening devices used in airports are Morpho and Smiths Detection. Smiths purchased Morpho Detection in 2017 and subsequently had to legally divest ownership of the Trace side of the business (Smiths have Trace Products) which was sold on to Rapiscan Systems in mid 2017. The products are listed under ETD Itemisers. The latest model is a non-radiation 4DX. In the pharmaceutical industry, IMS is used in cleaning validations, demonstrating that reaction vessels are sufficiently clean to proceed with the next batch of pharmaceutical product. IMS is much faster and more accurate than HPLC and total organic carbon methods previously used. IMS is also used for analyzing the composition of drugs produced, thereby finding a place in quality assurance and control. As a research tool, ion mobility is becoming more widely used in the analysis of biological materials, specifically proteomics and metabolomics. 
For example, IMS-MS using MALDI as the ionization method has helped make advances in proteomics, providing faster high-resolution separations of protein fragments during analysis. Moreover, it is a promising tool for glycomics, as rotationally averaged collision cross section (CCS) values can be obtained. CCS values are important distinguishing characteristics of ions in the gas phase, and in addition to empirical determination they can also be calculated computationally when the 3D structure of the molecule is known. This way, adding CCS values of glycans and their fragments to databases will increase structural identification confidence and accuracy. Outside the laboratory, IMS has found extensive use as a detection tool for hazardous substances. More than 10,000 IMS devices are in use worldwide in airports, and the US Army has more than 50,000 IMS devices. In industrial settings, uses of IMS include checking equipment cleanliness and detecting emission contents, such as determining the amount of hydrochloric and hydrofluoric acid in a stack gas from a process. It is also applied industrially to detect harmful substances in air. In metabolomics, IMS is used to detect lung cancer, chronic obstructive pulmonary disease, sarcoidosis, potential rejections after lung transplantation and relations to bacteria within the lung (see Breath gas analysis). Ion mobility The physical quantity ion mobility K is defined as the proportionality factor between an ion's drift velocity vd in a gas and an electric field of strength E, i.e. vd = K·E. Because the measured mobility depends on the drift gas number density, ion mobilities are often expressed as reduced mobilities K0, normalized to the standard gas number density n0, corresponding to standard temperature T0 = 273 K and standard pressure p0 = 1013 hPa: K0 = K·(p/p0)·(T0/T). The reduced ion mobility is still temperature-dependent, because this normalization accounts only for the change in gas density and not for any other temperature effects. The ion mobility K can, under a variety of assumptions, be calculated by the Mason–Schamp equation, K = (3Q)/(16n) · sqrt(2π/(μkT)) · (1/σ), where Q is the ion charge, n is the drift gas number density, μ is the reduced mass of the ion and the drift gas molecules, k is the Boltzmann constant, T is the drift gas temperature, and σ is the collision cross section between the ion and the drift gas molecules. Often, N is used instead of n for the drift gas number density and Ω instead of σ for the ion-neutral collision cross section. This relation holds approximately in the low electric field limit, where the ratio E/N is small and thus the thermal energy of the ions is much greater than the energy gained from the electric field between collisions. Because such ions have energies similar to those of the buffer gas molecules, diffusion dominates ion motion in this case. The ratio E/N is typically given in Townsends (Td) and the transition between low- and high-field conditions is typically estimated to occur between 2 Td and 10 Td. When low-field conditions no longer prevail, the ion mobility itself becomes a function of the electric field strength, which is usually described empirically through the so-called alpha function. Ionization The molecules of the sample need to be ionized, usually by corona discharge, atmospheric pressure photoionization (APPI), electrospray ionization (ESI), or a radioactive atmospheric-pressure chemical ionization (R-APCI) source, e.g. 
a small piece of 63Ni or 241Am, similar to the one used in ionization smoke detectors. ESI and MALDI techniques are commonly used when IMS is paired with mass spectrometry. Doping materials are sometimes added to the drift gas for ionization selectivity. For example, acetone can be added for chemical warfare agent detection, chlorinated solvents added for explosives, and nicotinamide added for drugs detection. Analyzers Ion mobility spectrometers exist based on various principles, optimized for different applications. A review from 2014 lists eight different ion mobility spectrometry concepts. Drift tube ion mobility spectrometry Drift tube ion mobility spectrometry (DTIMS) measures how long a given ion takes to traverse a given length in a uniform electric field through a given atmosphere. At specified intervals, a sample of the ions is let into the drift region; the gating mechanism is based on a charged electrode working in a similar way to the control grid of a triode for electrons. For precise control of the ion pulse width admitted to the drift tube, more complex gating systems such as a Bradbury–Nielsen or a field switching shutter are employed. Once in the drift tube, ions are subjected to a homogeneous electric field ranging from a few volts per centimetre up to many hundreds of volts per centimetre. This electric field then drives the ions through the drift tube, where they interact with the neutral drift molecules contained within the system and separate based on their ion mobility, arriving at the detector for measurement. Ions are recorded at the detector in order from the fastest to the slowest, generating a response signal characteristic of the chemical composition of the measured sample. The ion mobility K can then be experimentally determined from the drift time tD of an ion traversing the drift length L under the potential difference U of the homogeneous electric field, as K = L²/(U·tD). A drift tube's resolving power RP can, when diffusion is assumed to be the sole contributor to peak broadening, be calculated as RP = tD/ΔtD = sqrt(L·E·Q/(16·k·T·ln 2)), where tD is the ion drift time, ΔtD is the full width at half maximum of the ion peak, L is the tube length, E is the electric field strength, Q is the ion charge, k is the Boltzmann constant, and T is the drift gas temperature. Ambient pressure methods allow for higher resolving power and greater separation selectivity due to a higher rate of ion-molecule interactions, and are typically used for stand-alone devices, as well as for detectors for gas, liquid, and supercritical fluid chromatography. As shown above, the resolving power depends on the total voltage drop the ion traverses. Using a drift voltage of 25 kV in a 15 cm long atmospheric pressure drift tube, a resolving power above 250 is achievable even for small, singly charged ions. This is sufficient to achieve separation of some isotopologues based on their difference in reduced mass μ. Low pressure drift tube Reduced pressure drift tubes operate using the same principles as their atmospheric pressure counterparts, but at drift gas pressures of only a few torr. Due to the vastly reduced number of ion-neutral interactions, much longer drift tubes or much faster ion shutters are necessary to achieve the same resolving power. However, reduced pressure operation offers several advantages. First, it eases interfacing the IMS with mass spectrometry. Second, at lower pressures, ions can be stored for injection from an ion trap and re-focussed radially during and after the separation. 
Third, high values of E/N can be achieved, allowing for direct measurement of K(E/N) over a wide range. Travelling wave Though drift electric fields are normally uniform, non-uniform drift fields can also be used. One example is the travelling wave IMS, which is a low pressure drift tube IMS where the electric field is only applied in a small region of the drift tube. This region then moves along the drift tube, creating a wave pushing the ions towards the detector, removing the need for a high total drift voltage. A direct determination of collision cross sections (CCS) is not possible, using TWIMS. Calibrants can help circumvent this major drawback, however, these should be matched for size, charge and chemical class of the given analyte. An especially noteworthy variant is the "SUPER" IMS, which combines ion trapping by the so-called structures for lossless ion manipulations (SLIM) with several passes through the same drift region to achieve extremely high resolving powers. Trapped ion mobility spectrometry In trapped ion mobility spectrometry (TIMS), ions are held stationary (or trapped) in a flowing buffer gas by an axial electric field gradient (EFG) profile while the application of radio frequency (rf) potentials results in trapping in the radial dimension. TIMS operates in the pressure range of 2 to 5 hPa and replaces the ion funnel found in the source region of modern mass spectrometers. It can be coupled with nearly any mass analyzer through either the standard mode of operation for beam-type instruments or selective accumulation mode (SA-TIMS) when used with trapping mass spectrometry (MS) instruments. Effectively, the drift cell is prolonged by the ion motion created through the gas flow. Thus, TIMS devices do neither require large size nor high voltage in order to achieve high resolution, for instance achieving over 250 resolving power from a 4.7 cm device through the use of extended separation times. However, the resolving power strongly depends on the ion mobility and decreases for more mobile ions. In addition, TIMS can be capable of higher sensitivity than other ion mobility systems because no grids or shutters exist in the ion path, improving ion transmission both during ion mobility experiments and while operating in a transparent MS only mode. High-field asymmetric waveform ion mobility spectrometry DMS (differential mobility spectrometer) or FAIMS (field asymmetric ion mobility spectrometer) make use of the dependence of the ion mobility K on the electric field strength E at high electric fields. Ions are transported through the device by the drift gas flow and subjected to different field strengths in orthogonal direction for different amounts of time. Ions are deflected towards the walls of the analyzer based on the change of their mobility. Thereby only ions with a certain mobility dependence can pass the thus created filter Differential mobility analyzer A differential mobility analyzer (DMA) makes use of a fast gas stream perpendicular to the electric field. Thereby ions of different mobilities undergo different trajectories. This type of IMS corresponds to the sector instruments in mass spectrometry. They also work as a scannable filter. Examples include the differential mobility detector first commercialized by Varian in the CP-4900 MicroGC. Aspiration IMS operates with open-loop circulation of sampled air. 
Sample flow is passed via ionization chamber and then enters to measurement area where the ions are deflected into one or more measuring electrodes by perpendicular electric field which can be either static or varying. The output of the sensor is characteristic of the ion mobility distribution and can be used for detection and identification purposes. A DMA can separate charged aerosol particles or ions according to their mobility in an electric field prior to their detection, which can be done with several means, including electrometers or the more sophisticated mass spectrometers. Drift gas The drift gas composition is an important parameter for the IMS instrument design and resolution. Often, different drift gas compositions can allow for the separation of otherwise overlapping peaks. Elevated gas temperature assists in removing ion clusters that may distort experimental measurements. Detector Often the detector is a simple Faraday plate coupled to a transimpedance amplifier, however, more advanced ion mobility instruments are coupled with mass spectrometers in order to obtain both size and mass information simultaneously. It is noteworthy that the detector influences the optimum operating conditions for the ion mobility experiment. Combined methods IMS can be combined with other separation techniques. Gas chromatography When IMS is coupled with gas chromatography, common sample introduction is with the GC capillary column directly connected to the IMS setup, with molecules ionized as they elute from GC. A similar technique is commonly used for HPLC. A novel design for corona discharge ionization ion mobility spectrometry (CD–IMS) as a detector after capillary gas chromatography has been produced in 2012. In this design, a hollow needle was used for corona discharge creation and the effluent was entered into the ionization region on the upstream side of the corona source. In addition to the practical conveniences in coupling the capillary to IMS cell, this direct axial interfacing helps us to achieve a more efficient ionization, resulting in higher sensitivity. When used with GC, a differential mobility analyzer is often called a differential mobility detector (DMD). A DMD is often a type of microelectromechanical system, radio frequency modulated ion mobility spectrometry (MEMS RF-IMS) device. Though small, it can fit into portable units, such as transferable gas chromatographs or drug/explosives sensors. For instance, it was incorporated by Varian in its CP-4900 DMD MicroGC, and by Thermo Fisher in its EGIS Defender system, designed to detect narcotics and explosives in transportation or other security applications. Liquid chromatography Coupled with LC and MS, IMS has become widely used to analyze biomolecules, a practice heavily developed by David E. Clemmer, now at Indiana University (Bloomington). Mass spectrometry When IMS is used with mass spectrometry, ion mobility spectrometry-mass spectrometry offers many advantages, including better signal to noise, isomer separation, and charge state identification. IMS has commonly been attached to several mass spec analyzers, including quadropole, time-of-flight, and Fourier transform cyclotron resonance. Dedicated software Ion mobility mass spectrometry is a rather recently popularized gas phase ion analysis technique. As such there is not a large software offering to display and analyze ion mobility mass spectrometric data, apart from the software packages that are shipped along with the instruments. 
ProteoWizard, OpenMS, and msXpertSuite are free software according to the Open Source Initiative definition. While ProteoWizard and OpenMS have features that allow spectrum scrutiny, those software packages do not provide combination features. In contrast, msXpertSuite features the ability to combine spectra according to various criteria: retention time, m/z range, or drift time range, for example. msXpertSuite thus more closely mimics the software that usually comes bundled with the mass spectrometer. See also Electrical mobility Explosive detection Viehland–Mason theory References Bibliography External links Mass spectrometry Explosive detection
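The relationships given in the "Ion mobility" and "Drift tube ion mobility spectrometry" sections above lend themselves to a brief numerical illustration. The following Python sketch is not part of the original article; it simply evaluates the Mason–Schamp mobility, the reduced mobility, and the diffusion-limited resolving power for an illustrative singly charged ion, with all numerical inputs (ion mass, collision cross section, temperature, pressure) chosen as assumptions for demonstration only.

```python
import math

# Physical constants (SI units)
E_CHARGE = 1.602176634e-19   # elementary charge, C
K_BOLTZ = 1.380649e-23       # Boltzmann constant, J/K
AMU = 1.66053906660e-27      # atomic mass unit, kg

def mason_schamp_mobility(charge, gas_density, reduced_mass, temperature, ccs):
    """Low-field Mason-Schamp mobility K = 3Q/(16n) * sqrt(2*pi/(mu*k*T)) / sigma, in m^2/(V*s)."""
    return (3.0 * charge / (16.0 * gas_density)) * math.sqrt(
        2.0 * math.pi / (reduced_mass * K_BOLTZ * temperature)) / ccs

def reduced_mobility(mobility, pressure, temperature):
    """K0 = K * (p/p0) * (T0/T), normalized to T0 = 273 K and p0 = 1013 hPa."""
    return mobility * (pressure / 101300.0) * (273.0 / temperature)

def diffusion_limited_resolving_power(length, field, charge, temperature):
    """RP = sqrt(L*E*Q / (16*k*T*ln 2)), valid when diffusion is the only peak-broadening term."""
    return math.sqrt(length * field * charge / (16.0 * K_BOLTZ * temperature * math.log(2.0)))

# Assumed illustrative inputs: a singly charged ~150 Da ion drifting in N2 at ambient conditions.
T = 298.0                                     # drift gas temperature, K
p = 101300.0                                  # pressure, Pa
n = p / (K_BOLTZ * T)                         # drift gas number density, m^-3
mu = (150.0 * 28.0) / (150.0 + 28.0) * AMU    # reduced mass of ion and N2, kg
ccs = 130e-20                                 # assumed collision cross section, m^2 (130 square angstroms)

K = mason_schamp_mobility(E_CHARGE, n, mu, T, ccs)
print("Mobility K =", round(K * 1e4, 3), "cm^2/(V s)")
print("Reduced K0 =", round(reduced_mobility(K, p, T) * 1e4, 3), "cm^2/(V s)")

# Drift tube example echoing the text: 15 cm tube with a 25 kV drift voltage.
L, U = 0.15, 25000.0
t_drift = L**2 / (U * K)                      # drift time from K = L^2/(U*tD)
print("Drift time =", round(t_drift * 1000.0, 2), "ms")
print("Resolving power =", round(diffusion_limited_resolving_power(L, U / L, E_CHARGE, T)))
```

With these assumed inputs the sketch returns a mobility near 2 cm²/(V s), a drift time of a few milliseconds, and a resolving power close to 300, consistent in order of magnitude with the values quoted in the article.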
Ion mobility spectrometry
[ "Physics", "Chemistry" ]
3,677
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Matter" ]
2,918,338
https://en.wikipedia.org/wiki/Spring%20scale
A spring scale, spring balance or newton meter is a type of mechanical force gauge or weighing scale. It consists of a spring fixed at one end with a hook to attach an object at the other. It works in accordance with Hooke's Law, which states that the force needed to extend or compress a spring by some distance scales linearly with respect to that distance. Therefore, the scale markings on the spring balance are equally spaced. A spring balance can be calibrated for the accurate measurement of mass in the location in which they are used, but many spring balances are marked right on their face "Not Legal for Trade" or words of similar import due to the approximate nature of the theory used to mark the scale. Also, the spring in the scale can permanently stretch with repeated use. A spring scale will only read correctly in a frame of reference where the acceleration in the spring axis is constant (such as on earth, where the acceleration is due to gravity). This can be shown by taking a spring scale into an elevator, where the weight measured will change as the elevator moves up and down changing velocities. If two or more spring balances are hung one below the other in series, each of the scales will read approximately the same, the full weight of the body hung on the lower scale. The scale on top would read slightly heavier due to also supporting the weight of the lower scale itself. Spring balances come in different sizes. Generally, small scales that measure newtons will have a less firm spring (one with a smaller spring constant) than larger ones that measure tens, hundreds or thousands of newtons or even more depending on the scale of newtons used. The largest spring scale ranged in measurement from 5000 to 8000 newtons. A spring balance may be labeled in both units of force (poundals, Newtons) and mass (pounds, kilograms/grams). Strictly speaking, only the force values are correctly labeled. In order to infer that the labeled mass values are correct, an object must be hung from the spring balance at rest in an inertial reference frame, interacting with no other objects but the scale itself. Uses Main uses of spring balances are to weigh heavy loads such as trucks, storage silos, and material carried on a conveyor belt. They are also common in science education as basic accelerators. They are used when the accuracy afforded by other types of scales can be sacrificed for simplicity, cheapness, and robustness. A spring balance measures the weight of an object by opposing the force of gravity acting with the force of an extended spring. History The first spring balance in Britain was made around 1770 by Richard Salter of Bilston, near Wolverhampton. He and his nephews John & George founded the firm of George Salter & Co., still notable makers of scales and balances, who in 1838 patented the spring balance. They also applied the same spring balance principle to steam locomotive safety valves, replacing the earlier deadweight valves. See also Weighing scale References External links Force Weighing instruments British inventions
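As a brief illustration of the Hooke's-law behaviour described above, the following Python sketch (not part of the original article) converts a spring extension into a force reading and an inferred mass; the spring constant and the local gravitational acceleration are assumed example values chosen only for demonstration.

```python
# Minimal spring-scale model based on Hooke's law: F = k * x.
# The spring constant k and gravitational acceleration g are assumed example values.

SPRING_CONSTANT = 250.0   # k in N/m (assumed for this example)
GRAVITY = 9.81            # local g in m/s^2 (assumed; a calibrated scale depends on this)

def force_from_extension(extension_m: float) -> float:
    """Force indicated by the scale, in newtons, for a given spring extension in metres."""
    return SPRING_CONSTANT * extension_m

def inferred_mass(extension_m: float) -> float:
    """Mass in kilograms inferred from the force reading, valid only when the scale is at rest
    (constant acceleration equal to g along the spring axis)."""
    return force_from_extension(extension_m) / GRAVITY

# A 4 cm extension corresponds to a 10 N reading, i.e. roughly a 1 kg mass at rest on Earth.
x = 0.04
print(f"Reading: {force_from_extension(x):.1f} N, inferred mass: {inferred_mass(x):.2f} kg")
```

The linearity of F = k·x is also why the scale markings on a spring balance are equally spaced, as noted in the article.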
Spring scale
[ "Physics", "Mathematics", "Technology", "Engineering" ]
622
[ "Force", "Physical quantities", "Weighing instruments", "Quantity", "Mass", "Classical mechanics", "Measuring instruments", "Wikipedia categories named after physical quantities", "Matter" ]
2,918,518
https://en.wikipedia.org/wiki/Atom%20transfer%20radical%20polymerization
Atom transfer radical polymerization (ATRP) is an example of a reversible-deactivation radical polymerization. Like its counterpart, ATRA, or atom transfer radical addition, ATRP is a means of forming a carbon-carbon bond with a transition metal catalyst. Polymerization from this method is called atom transfer radical addition polymerization (ATRAP). As the name implies, the atom transfer step is crucial in the reaction responsible for uniform polymer chain growth. ATRP (or transition metal-mediated living radical polymerization) was independently discovered by Mitsuo Sawamoto and by Krzysztof Matyjaszewski and Jin-Shan Wang in 1995. The following scheme presents a typical ATRP reaction: Overview of ATRP ATRP usually employs a transition metal complex as the catalyst with an alkyl halide as the initiator (R-X). Various transition metal complexes, namely those of Cu, Fe, Ru, Ni, and Os, have been employed as catalysts for ATRP. In an ATRP process, the dormant species is activated by the transition metal complex to generate radicals via one electron transfer process. Simultaneously the transition metal is oxidized to higher oxidation state. This reversible process rapidly establishes an equilibrium that is predominately shifted to the side with very low radical concentrations. The number of polymer chains is determined by the number of initiators. Each growing chain has the same probability to propagate with monomers to form living/dormant polymer chains (R-Pn-X). As a result, polymers with similar molecular weights and narrow molecular weight distribution can be prepared. ATRP reactions are very robust in that they are tolerant of many functional groups like allyl, amino, epoxy, hydroxy, and vinyl groups present in either the monomer or the initiator. ATRP methods are also advantageous due to the ease of preparation, commercially available and inexpensive catalysts (copper complexes), pyridine-based ligands, and initiators (alkyl halides). Components of normal ATRP There are five important variable components of atom transfer radical polymerizations. They are the monomer, initiator, catalyst, ligand, and solvent. The following section breaks down the contributions of each component to the overall polymerization. Monomer Monomers typically used in ATRP are molecules with substituents that can stabilize the propagating radicals; for example, styrenes, (meth)acrylates, (meth)acrylamides, and acrylonitrile. ATRP is successful at leading to polymers of high number average molecular weight and low dispersity when the concentration of the propagating radical balances the rate of radical termination. Yet, the propagating rate is unique to each individual monomer. Therefore, it is important that the other components of the polymerization (initiator, catalyst, ligand, and solvent) are optimized in order for the concentration of the dormant species to be greater than that of the propagating radical while being low enough as to prevent slowing down or halting the reaction. Initiator The number of growing polymer chains is determined by the initiator. To ensure a low polydispersity and a controlled polymerization, the rate of initiation must be as fast or preferably faster than the rate of propagation Ideally, all chains will be initiated in a very short period of time and will be propagated at the same rate. Initiators are typically chosen to be alkyl halides whose frameworks are similar to that of the propagating radical. Alkyl halides such as alkyl bromides are more reactive than alkyl chlorides. 
Both offer good molecular weight control. The shape or structure of the initiator influences polymer architecture. For example, initiators with multiple alkyl halide groups on a single core can lead to a star-like polymer shape. Furthermore, α-functionalized ATRP initiators can be used to synthesize hetero-telechelic polymers with a variety of chain-end groups. Catalyst The catalyst is the most important component of ATRP because it determines the equilibrium constant between the active and dormant species. This equilibrium determines the polymerization rate. An equilibrium constant that is too small may inhibit or slow the polymerization, while an equilibrium constant that is too large leads to a wide distribution of chain lengths. There are several requirements for the metal catalyst: There need to be two accessible oxidation states that are differentiated by one electron The metal center needs to have reasonable affinity for halogens The coordination sphere of the metal needs to be expandable when it is oxidized so as to accommodate the halogen The transition metal catalyst should not lead to significant side reactions, such as irreversible coupling with the propagating radicals and catalytic radical termination The most studied catalysts are those that include copper, which has shown the most versatility, with successful polymerizations for a wide selection of monomers. Ligand One of the most important aspects of an ATRP reaction is the choice of ligand, which is used in combination with the (traditionally copper halide) catalyst to form the catalyst complex. The main function of the ligand is to solubilize the copper halide in whichever solvent is chosen and to adjust the redox potential of the copper. This changes the activity and dynamics of the halogen exchange reaction and the subsequent activation and deactivation of the polymer chains during polymerization, therefore greatly affecting the kinetics of the reaction and the degree of control over the polymerization. Different ligands should be chosen based on the activity of the monomer and the choice of metal for the catalyst. As copper halides are primarily used as the catalyst, amine-based ligands are most commonly chosen. Ligands with higher activities are being investigated as ways to potentially decrease the concentration of catalyst in the reaction, since a more active catalyst complex would lead to a higher concentration of deactivator in the reaction. However, an overly active catalyst can lead to a loss of control and increase the polydispersity of the resulting polymer. Solvents Toluene, 1,4-dioxane, xylene, anisole, DMF, DMSO, water, methanol, acetonitrile, or even the monomer itself (described as a bulk polymerization) are commonly used. Kinetics of normal ATRP Reactions in atom transfer radical polymerization include initiation and an activation–deactivation equilibrium that establishes a quasi-steady state; other chain breaking reactions should also be considered. ATRP equilibrium constant The radical concentration in normal ATRP can be calculated via the following equation: [P•] = KATRP·[RX]·[CuI]/[CuII]. It is important to know the KATRP value to adjust the radical concentration. The KATRP value depends on the homolytic cleavage energy of the alkyl halide and the redox potential of the Cu catalyst with different ligands. Given two alkyl halides (R1-X and R2-X) and two ligands (L1 and L2), there will be four combinations between different alkyl halides and ligands. Let KijATRP refer to the KATRP value for Ri-X and Lj. 
If we know three of these four combinations, the fourth one can be calculated as: The KATRP values for different alkyl halides and different Cu catalysts can be found in literature. Solvents have significant effects on the KATRP values. The KATRP value increases dramatically with the polarity of the solvent for the same alkyl halide and the same Cu catalyst. The polymerization must take place in solvent/monomer mixture, which changes to solvent/monomer/polymer mixture gradually. The KATRP values could change 10000 times by switching the reaction medium from pure methyl acrylate to pure dimethyl sulfoxide. Activation and deactivation rate coefficients Deactivation rate coefficient, kd, values must be sufficiently large to obtain low dispersity. The direct measurement of kd is difficult though not impossible. In most cases, kd may be calculated from known KATRP and ka. Cu complexes providing very low kd values are not recommended for use in ATRP reactions. Retention of chain end functionality High level retention of chain end functionality is typically desired. However, the determination of the loss of chain end functionality based on 1H NMR and mass spectroscopy methods cannot provide precise values. As a result, it is difficult to identify the contributions of different chain breaking reactions in ATRP. One simple rule in ATRP comprises the principle of halogen conservation. Halogen conservation means the total amount of halogen in the reaction systems must remain as a constant. From this rule, the level of retention of chain end functionality can be precisely determined in many cases. The precise determination of the loss of chain end functionality enabled further investigation of the chain breaking reactions in ATRP. Advantages and disadvantages of ATRP Advantages ATRP enables the polymerization of a wide variety of monomers with different chemical functionalities, proving to be more tolerant of these functionalities than ionic polymerizations. It provides increased control of molecular weight, molecular architecture and polymer composition while maintaining a low polydispersity (1.05-1.2). The halogen remaining at the end of the polymer chain after polymerization allows for facile post-polymerization chain-end modification into different reactive functional groups. The use of multi-functional initiators facilitates the synthesis of lower-arm star polymers and telechelic polymers. External visible light stimulation ATRP has a high responding speed and excellent functional group tolerance. Disadvantages The most significant drawback of ATRP is the high concentrations of catalyst required for the reaction. This catalyst standardly consists of a copper halide and an amine-based ligand. The removal of the copper from the polymer after polymerization is often tedious and expensive, limiting ATRP's use in the commercial sector. However, researchers are currently developing methods which would limit the necessity of the catalyst concentration to ppm. ATRP is also a traditionally air-sensitive reaction normally requiring freeze-pump thaw cycles. However, techniques such as Activator Generated by Electron Transfer (AGET) ATRP provide potential alternatives which are not air-sensitive. A final disadvantage is the difficulty of conducting ATRP in aqueous media. Different ATRP methods Activator regeneration ATRP methods In a normal ATRP, the concentration of radicals is determined by the KATRP value, concentration of dormant species, and the [CuI]/[CuII] ratio. 
In principle, the total amount of Cu catalyst should not influence polymerization kinetics. However, the loss of chain end functionality slowly but irreversibly converts CuI to CuII. Thus initial [CuI]/[I] ratios are typically 0.1 to 1. When very low concentrations of catalysts are used, usually at the ppm level, activator regeneration processes are generally required to compensate the loss of CEF and regenerate a sufficient amount of CuI to continue the polymerization. Several activator regeneration ATRP methods were developed, namely ICAR ATRP, ARGET ATRP, SARA ATRP, eATRP, and photoinduced ATRP. The activator regeneration process is introduced to compensate the loss of chain end functionality, thus the cumulative amount of activator regeneration should roughly equal the total amount of the loss of chain end functionality. ICAR ATRP Initiators for continuous activator regeneration (ICAR) is a technique that uses conventional radical initiators to continuously regenerate the activator, lowering its required concentration from thousands of ppm to <100 ppm; making it an industrially relevant technique. ARGET ATRP Activators regenerated by electron transfer (ARGET) employs non-radical forming reducing agents for regeneration of CuI. A good reducing agent (e.g. hydrazine, phenols, sugars, ascorbic acid) should only react with CuII and not with radicals or other reagents in the reaction mixture. SARA ATRP A typical SARA ATRP employs Cu0 as both supplemental activator and reducing agent (SARA). Cu0 can activate alkyl halide directly but slowly. Cu0 can also reduce CuII to CuI. Both processes help to regenerate CuI activator. Other zerovalent metals, such as Mg, Zn, and Fe, have also been employed for Cu-based SARA ATRP. eATRP In eATRP the activator CuI is regenerated via electrochemical process. The development of eATRP enables precise control of the reduction process and external regulation of the polymerization. In an eATRP process, the redox reaction involves two electrodes. The CuII species is reduced to CuI at the cathode. The anode compartment is typically separated from the polymerization environment by a glass frit and a conductive gel. Alternatively, a sacrificial aluminum counter electrode can be used, which is directly immersed in the reaction mixture. Photoinduced ATRP The direct photo reduction of transition metal catalysts in ATRP and/or photo assistant activation of alkyl halide is particularly interesting because such a procedure will allow performing of ATRP with ppm level of catalysts without any other additives. Other ATRP methods Reverse ATRP In reverse ATRP, the catalyst is added in its higher oxidation state. Chains are activated by conventional radical initiators (e.g. AIBN) and deactivated by the transition metal. The source of transferable halogen is the copper salt, so this must be present in concentrations comparable to the transition metal. SR&NI ATRP A mixture of radical initiator and active (lower oxidation state) catalyst allows for the creation of block copolymers (contaminated with homopolymer) which is impossible using standard reverse ATRP. This is called SR&NI (simultaneous reverse and normal initiation ATRP). AGET ATRP Activators generated by electron transfer uses a reducing agent unable to initiate new chains (instead of organic radicals) as regenerator for the low-valent metal. Examples are metallic copper, tin(II), ascorbic acid, or triethylamine. 
It allows for lower concentrations of transition metals, and may also be possible in aqueous or dispersed media. Hybrid and bimetallic systems This technique uses a variety of different metals/oxidation states, possibly on solid supports, to act as activators/deactivators, possibly with reduced toxicity or sensitivity. Iron salts can, for example, efficiently activate alkyl halides but require an efficient Cu(II) deactivator, which can be present in much lower concentrations (3–5 mol%). Metal-free ATRP Trace metal catalyst remaining in the final product has limited the application of ATRP in biomedical and electronic fields. In 2014, Craig Hawker and coworkers developed a new catalysis system involving photoredox reaction of 10-phenothiazine. The metal-free ATRP has been demonstrated to be capable of controlled polymerization of methacrylates. This technique was later expanded to polymerization of acrylonitrile by Matyjaszewski et al. Mechano/sono-ATRP Mechano/sono-ATRP uses mechanical forces, typically ultrasonic agitation, as an external stimulus to induce the (re)generation of activators in ATRP. Esser-Kahn et al. demonstrated the first example of mechanoATRP, using the piezoelectricity of barium titanate to reduce Cu(II) species. Matyjaszewski et al. later improved the technique by using nanometer-sized and/or surface-functionalized barium titanate or zinc oxide particles, achieving superior rate and control of polymerization, as well as temporal control, with ppm levels of copper catalyst. In addition to piezoelectric particles, water and carbonates were found to mediate mechano/sono-ATRP. Mechanochemically homolyzed water molecules undergo radical addition to monomers, which in turn reduces Cu(II) species. Mechanically unstable Cu(II)–carbonate complexes form in the presence of insoluble carbonates and oxidize dimethyl sulfoxide, the solvent, to generate Cu(I) species and carbon dioxide. Biocatalytic ATRP Metalloenzymes have been used for the first time as ATRP catalysts, in parallel and independently, by the research teams of Fabio Di Lena and Nico Bruns. This pioneering work has paved the way for the emerging field of biocatalytic reversible-deactivation radical polymerization. Polymers synthesized through ATRP Polystyrene Poly (methyl methacrylate) Polyacrylamide See also Heteropolymer Radical (chemistry) Reversible addition−fragmentation chain-transfer polymerization Nitroxide mediated radical polymerization External links About ATRP - Matyjaszewski Polymer Group References Polymerization reactions
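To make the kinetic relationships in the "Kinetics of normal ATRP" section above more concrete, the following Python sketch (not part of the original article) evaluates the steady-state radical concentration from the ATRP equilibrium expression and the theoretical degree of polymerization expected at a given conversion. All numerical values are assumed, illustrative inputs rather than data from the article.

```python
def atrp_radical_concentration(k_atrp, rx, cu1, cu2):
    """Steady-state radical concentration [P*] = K_ATRP * [RX] * [Cu(I)] / [Cu(II)], in mol/L."""
    return k_atrp * rx * cu1 / cu2

def theoretical_degree_of_polymerization(monomer0, initiator0, conversion):
    """DP_n = conversion * [M]0 / [RX]0, since the number of chains is set by the initiator."""
    return conversion * monomer0 / initiator0

# Assumed, illustrative conditions (not taken from the article):
K_ATRP = 1e-7            # ATRP equilibrium constant (dimensionless)
RX0 = 0.01               # alkyl halide initiator concentration, mol/L
CU1, CU2 = 0.009, 0.001  # Cu(I) activator and Cu(II) deactivator concentrations, mol/L
M0 = 2.0                 # monomer concentration, mol/L

radicals = atrp_radical_concentration(K_ATRP, RX0, CU1, CU2)
dp = theoretical_degree_of_polymerization(M0, RX0, conversion=0.8)

print(f"[P*] = {radicals:.2e} mol/L")                    # a very low radical concentration suppresses termination
print(f"Theoretical DP_n at 80% conversion = {dp:.0f}")  # 0.8 * 2.0 / 0.01 = 160
```

The low radical concentration returned by the first function illustrates why the equilibrium is "predominately shifted to the side with very low radical concentrations", while the second function reflects the statement that the number of polymer chains is determined by the number of initiators.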
Atom transfer radical polymerization
[ "Chemistry", "Materials_science" ]
3,527
[ "Polymerization reactions", "Polymer chemistry" ]
2,918,563
https://en.wikipedia.org/wiki/Reversible%20addition%E2%88%92fragmentation%20chain-transfer%20polymerization
Reversible addition−fragmentation chain-transfer or RAFT polymerization is one of several kinds of reversible-deactivation radical polymerization. It makes use of a chain-transfer agent (CTA) in the form of a thiocarbonylthio compound (or similar, from here on referred to as a RAFT agent, see Figure 1) to afford control over the generated molecular weight and polydispersity during a free-radical polymerization. Discovered at the Commonwealth Scientific and Industrial Research Organisation (CSIRO) of Australia in 1998, RAFT polymerization is one of several living or controlled radical polymerization techniques, others being atom transfer radical polymerization (ATRP) and nitroxide-mediated polymerization (NMP), etc. RAFT polymerization uses thiocarbonylthio compounds, such as dithioesters, thiocarbamates, and xanthates, to mediate the polymerization via a reversible chain-transfer process. As with other controlled radical polymerization techniques, RAFT polymerizations can be performed under conditions that favor low dispersity (narrow molecular weight distribution) and a pre-chosen molecular weight. RAFT polymerization can be used to design polymers of complex architectures, such as linear block copolymers, comb-like, star, brush polymers, dendrimers and cross-linked networks. Overview History The addition−fragmentation chain-transfer process was first reported in the early 1970s. However, the technique was irreversible, so the transfer reagents could not be used to control radical polymerization at this time. For the first few years addition−fragmentation chain-transfer was used to help synthesize end-functionalized polymers. Scientists began to realize the potential of RAFT in controlled radical polymerization in the 1980s. Macromonomers were known as reversible chain transfer agents during this time, but had limited applications on controlled radical polymerization. In 1995, a key step in the "degenerate" reversible chain transfer step for chain equilibration was brought to attention. The essential feature is that the product of chain transfer is also a chain transfer agent with similar activity to the precursor transfer agent. RAFT polymerization today is mainly carried out by thiocarbonylthio chain transfer agents. It was first reported by Rizzardo et al. in 1998. RAFT is one of the most versatile methods of controlled radical polymerization because it is tolerant of a very wide range of functionality in the monomer and solvent, including aqueous solutions. RAFT polymerization has also been effectively carried out over a wide temperature range. Important components of RAFT Typically, a RAFT polymerization system consists of: a radical source (e.g. thermochemical initiator or the interaction of gamma radiation with some reagent) monomer RAFT agent solvent (not strictly required if the monomer is a liquid) A temperature is chosen such that (a) chain growth occurs at an appropriate rate, (b) the chemical initiator (radical source) delivers radicals at an appropriate rate and (c) the central RAFT equilibrium (see later) favors the active rather than dormant state to an acceptable extent. RAFT polymerization can be performed by adding a chosen quantity of an appropriate RAFT agent to a conventional free radical polymerization. Usually the same monomers, initiators, solvents and temperatures can be used. 
Radical initiators such as azobisisobutyronitrile (AIBN) and 4,4'-azobis(4-cyanovaleric acid) (ACVA), also called 4,4'-azobis(4-cyanopentanoic acid), are widely used as the initiator in RAFT. Figure 3 provides a visual description of RAFT polymerizations of poly(methyl methacrylate) and polyacrylic acid using AIBN as the initiator and two RAFT agents. RAFT polymerization is known for its compatibility with a wide range of monomers compared to other controlled radical polymerizations. These monomers include (meth)acrylates, (meth)acrylamides, acrylonitrile, styrene and derivatives, butadiene, vinyl acetate and N-vinylpyrrolidone. The process is also suitable for use under a wide range of reaction parameters such as temperature or the level of impurities, as compared to NMP or ATRP. The Z and R group of a RAFT agent must be chosen according to a number of considerations. The Z group primarily affects the stability of the S=C bond and the stability of the adduct radical (Polymer-S-C•(Z)-S-Polymer, see section on Mechanism). These in turn affect the position of and rates of the elementary reactions in the pre- and main-equilibrium. The R group must be able to stabilize a radical such that the right hand side of the pre-equilibrium is favored, but unstable enough that it can reinitiate growth of a new polymer chain. As such, a RAFT agent must be designed with consideration of the monomer and temperature, since both these parameters also strongly influence the kinetics and thermodynamics of the RAFT equilibria. Products The desired product of a RAFT polymerization is typically linear polymer with an R-group at one end and a dithiocarbonate moiety at the other end. Figure 4 depicts the major and minor products of a RAFT polymerization. All other products arise from (a) biradical termination events or (b) reactions of chemical species that originate from initiator fragments, denoted by I in the figure. (Note that categories (a) and (b) intersect). The selectivity towards the desired product can be increased by increasing the concentration of RAFT agent relative to the quantity of free radicals delivered during the polymerization. This can be done either directly (i.e. by increasing the RAFT agent concentration) or by decreasing the rate of decomposition of or concentration of initiator. RAFT mechanism Kinetics overview RAFT is a type of living polymerization involving a conventional radical polymerization which is mediated by a RAFT agent. Monomers must be capable of radical polymerization. There are a number of steps in a RAFT polymerization: initiation, pre-equilibrium, re-initiation, main equilibrium, propagation and termination. The mechanism is now explained further with the help of Figure 5. Initiation: The reaction is started by a free-radical source which may be a decomposing radical initiator such as AIBN. In the example in Figure 5, the initiator decomposes to form two fragments (I•) which react with a single monomer molecule to yield a propagating (i.e. growing) polymeric radical of length 1, denoted P1•. Propagation: Propagating radical chains of length n in their active (radical) form, Pn•, add to monomer, M, to form longer propagating radicals, Pn+1•. RAFT pre-equilibrium: A polymeric radical with n monomer units (Pn) reacts with the RAFT agent to form a RAFT adduct radical. This may undergo a fragmentation reaction in either direction to yield either the starting species or a radical (R•) and a polymeric RAFT agent (S=C(Z)S-Pn). 
This is a reversible step in which the intermediate RAFT adduct radical is capable of losing either the R group (R•) or the polymeric species (Pn•). Re-initiation: The leaving group radical (R•) then reacts with another monomer species, starting another active polymer chain. Main RAFT equilibrium: This is the most important part in the RAFT process, in which, by a process of rapid interchange, the present radicals (and hence opportunities for polymer chain growth) are "shared" among all species that have not yet undergone termination (Pn• and S=C(Z)S-Pn). Ideally the radicals are shared equally, causing chains to have equal opportunities for growth and a narrow PDI. Termination: Chains in their active form react via a process known as bi-radical termination to form chains that cannot react further, known as dead polymer. Ideally, the RAFT adduct radical is sufficiently hindered such that it does not undergo termination reactions. A visual representation of this process can be seen in Video 1. Thermodynamics of the main RAFT equilibrium The position of the main RAFT equilibrium (Figure 5) is affected by the relative stabilities of the RAFT adduct radical (Pn-S-C•(Z)-S-Pm) and its fragmentation products, namely S=C(Z)S-Pn and polymeric radical (Pm•). If formation of the RAFT adduct radical is sufficiently thermodynamically favorable, the concentration of active species, Pm•, will be reduced to the extent that a reduction in the rate of conversion of monomer into polymer is also observed, as compared to an equivalent polymerization without RAFT agent. Such a polymerization, is referred to as a rate-retarded RAFT polymerization. The rate of a RAFT polymerization, that is, the rate of conversion of monomer into polymer, mainly depends on the rate of the Propagation reaction (Figure 5) because the rate of initiation and termination are much higher than the rate of propagation. The rate of propagation is proportional to the concentration, [P•], of the active species P•, whereas the rate of the termination reaction, being second order, is proportional to the square [P•]2. This means that during rate-retarded RAFT polymerizations, the rate of formation of termination products is suppressed to a greater extent than the rate of chain growth. In RAFT polymerizations without rate-retardation, the concentration of the active species P• is close to that in an equivalent conventional polymerization in the absence of RAFT agent. The main RAFT equilibrium and hence the rate retardation of the reaction is influenced by both temperature and chemical factors. A high temperature favors formation of the fragmentation products rather than the adduct radical Pn-S-C•(Z)-S-Pm. RAFT agents with a radical stabilising Z-group such as Phenyl group favor the adduct radical, as do propagating radicals whose monomers lack radical stabilising features, for example Vinyl acetate. Further mechanistic considerations In terms of mechanism, an ideal RAFT polymerization has several features. The pre-equilibrium and re-initiation steps are completed very early in the polymerization meaning that the major product of the reaction (the RAFT polymer chains, RAFT-Pn), all start growing at approximately the same time. The forward and reverse reactions of the main RAFT equilibrium are fast, favoring equal growth opportunities amongst the chains. 
The total number of radicals delivered to the system by the initiator during the course of the polymerization is low compared to the number of RAFT agent molecules, meaning that the R-group-initiated polymer chains from the re-initiation step form the majority of the chains in the system, rather than the initiator-fragment-bearing chains formed in the initiation step. This is important because the initiator decomposes continuously during the polymerization, not just at the start, and polymer chains arising from initiator decomposition cannot, therefore, have a narrow length distribution. These mechanistic features lead to an average chain length that increases linearly with the conversion of monomer into polymer. In contrast to other controlled radical polymerizations (for example ATRP), a RAFT polymerization does not achieve controlled evolution of molecular weight and low polydispersity by reducing bi-radical termination events (although in some systems, these events may indeed be reduced somewhat, as outlined above), but rather by ensuring that most polymer chains start growing at approximately the same time and experience equal growth during polymerization. Role of Z and R groups on RAFT agent Guidelines for the Z and R groups depend on their functions and on which types of monomers are to be polymerized. R group: It must be a good homolytic leaving group relative to Pn (shifting the main equilibrium towards the macro-CTA and the R radical) It should reinitiate polymerisation efficiently Choice of Z group affects: Rate of addition of the propagating polymer to the thiocarbonyl of the intermediate species Rate of fragmentation of the intermediate radicals Guidelines have been provided for the selection of R and Z groups based on the desired monomer to be polymerised, and these are summarised in Figures 6 and 7. Monomers can be divided into more activated and less activated, called MAM and LAM, respectively. MAMs yield less active propagating radical species, and vice versa for LAMs. Therefore, MAMs require more active RAFT agents, while LAMs require less active agents. Important ratios between reaction components During RAFT synthesis, some ratios between reaction components are important and can usually be used to control or set the desired degree of polymerization and polymer molecular weight. All the following ratios are relative to initial moles: Monomer to RAFT reagent: gives the expected degree of polymerization (that is, the number of monomer units in each polymer chain) and can be used to estimate the molecular weight of the polymer by Equation (1) (see below). RAFT reagent to initiator: determines the end groups on the polymer chains. For the α end, this ratio gives the number of chains that come from the R group (4th step in Figure 5) relative to the number of chains that come from the initiator (2nd step in Figure 5). For the ω end, it gives the proportion of dormant polymer chains (those with a thiocarbonylthio at the end) to dead chains. Monomer to initiator: similar to other radical polymerization techniques, for which the rate of propagation is proportional to the concentration of monomer and the square root of the initiator concentration. Equation (1): MWn = ((M0 − Mt)/RAFT0)·MWM + MWRAFT, where MWn is the molecular weight of the polymer, M0 and Mt are the initial and final moles of monomer, respectively, RAFT0 is the initial moles of RAFT agent, MWM is the molecular weight of the monomer and MWRAFT is the molecular weight of the RAFT agent. 
M0 - Mt can also be rewritten as M0*X (where X is conversion), so that the average molecular weight of the polymer can be estimated based on conversion. Enz-RAFT Enz-RAFT is a RAFT polymerization technique which allows for controlled oxygen-sensitive polymerization in an open vessel. Enz-RAFT uses 1–4 μM glucose oxidase to remove dissolved oxygen from the system. As the degassing is decoupled from the polymerization, initiator concentrations can be reduced, allowing for high control and end group fidelity. Enz-RAFT can be used in a number of organic solvent systems, with high activity in up to 80% tert-butanol, acetonitrile, and dioxane. With Enz-RAFT, polymerizations do not require prior degassing making this technique convenient for the preparation of most polymers by RAFT. The technique was developed at Imperial College London by Robert Chapman and Adam Gormley in the lab of Molly Stevens. Applications RAFT polymerization has been used to synthesize a wide range of polymers with controlled molecular weight and low polydispersities (between 1.05 and 1.4 for many monomers). RAFT polymerization is known for its compatibility with a wide range of monomers as compared to other controlled radical polymerizations. Some monomers capable of polymerizing by RAFT include styrenes, acrylates, acrylamides, and many vinyl monomers. Additionally, the RAFT process allows the synthesis of polymers with specific macromolecular architectures such as block, gradient, statistical, comb, brush, star, hyperbranched, and network copolymers. These properties make RAFT useful in many types of polymer synthesis. Block copolymers As with other living radical polymerization techniques, RAFT allows chain extension of a polymer of one monomer with a second type of polymer to yield a block copolymer. In such a polymerisation, there is the additional challenge that the RAFT agent for the first monomer must also be suitable for the second monomer, making block copolymerisation of monomers of highly disparate character challenging. For block copolymers, different guidelines exist for selecting the macro-R agent for polymerizing the second block (Figure 9). Multiblock copolymers have also been reported by using difunctional R groups or symmetrical trithiocarbonates with difunctional Z groups. Star, brush and comb polymers Using a compound with multiple dithio moieties (often termed a multifunctional RAFT agent) can result in the formation of star, brush and comb polymers. Taking star polymers as an example, RAFT differs from other forms of living radical polymerization techniques in that either the R- or Z-group may form the core of the star (See Figure 10). While utilizing the R-group as the core results in similar structures found using ATRP or NMP, the ability to use the Z-group as the core makes RAFT unique. When the Z-group is used, the reactive polymeric arms are detached from the star's core during growth and to undergo chain transfer, must once again react at the core. Smart materials and biological applications Due to its flexibility with respect to the choice of monomers and reaction conditions, the RAFT process competes favorably with other forms of living polymerization for the generation of bio-materials. New types of polymers are able to be constructed with unique properties, such as temperature and pH sensitivity. 
Specific materials and their applications include polymer-protein and polymer-drug conjugates, mediation of enzyme activity, molecular recognition processes and polymeric micelles which can deliver a drug to a specific site in the body. RAFT has also been used to graft polymer chains onto polymeric surfaces, for example, polymeric microspheres. RAFT compared to other controlled polymerizations Advantages Polymerization can be performed in large range of solvents (including water), within a wide temperature range, high functional group tolerance and absent of metals for polymerization. As of 2014, the range of commercially available RAFT agents covers close to all the monomer classes that can undergo radical polymerization. Disadvantages A particular RAFT agent is only suitable for a limited set of monomers and the synthesis of a RAFT agent typically requires a multistep synthetic procedure and subsequent purification. RAFT agents can be unstable over long time periods, are highly colored and can have a pungent odor due to gradual decomposition of the dithioester moiety to yield small sulfur compounds. The presence of sulfur and color in the resulting polymer may also be undesirable for some applications; however, this can, to an extent, be eliminated with further chemical and physical purification steps. See also Radical (chemistry) Copolymer Living polymerization ATRP (chemistry) NMP References Polymerization reactions
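Equation (1) from the "Important ratios between reaction components" section above can be evaluated numerically. The short Python sketch below is not part of the original article; the monomer, RAFT agent and conversion values are assumed purely for illustration (methyl methacrylate at 100.12 g/mol, and a generic RAFT agent with an assumed molecular weight of 280 g/mol).

```python
def raft_molecular_weight(monomer_mol_0, conversion, raft_mol_0, mw_monomer, mw_raft):
    """Equation (1): MW_n = (M0 - Mt)/RAFT0 * MW_M + MW_RAFT,
    with M0 - Mt rewritten as M0 * X, where X is the fractional conversion."""
    return (monomer_mol_0 * conversion / raft_mol_0) * mw_monomer + mw_raft

# Assumed illustrative recipe: 0.5 mol monomer, 2.5 mmol RAFT agent, 90% conversion.
M0 = 0.5            # initial moles of monomer
RAFT0 = 0.0025      # initial moles of RAFT agent
X = 0.90            # fractional conversion
MW_MONOMER = 100.12 # g/mol, methyl methacrylate
MW_RAFT = 280.0     # g/mol, assumed generic RAFT agent

dp = M0 * X / RAFT0  # expected degree of polymerization (monomer-to-RAFT ratio times conversion)
mw = raft_molecular_weight(M0, X, RAFT0, MW_MONOMER, MW_RAFT)
print(f"Target degree of polymerization: {dp:.0f}")
print(f"Predicted number-average molecular weight: {mw:.0f} g/mol")
```

For these assumed quantities the sketch predicts a degree of polymerization of 180 and a number-average molecular weight of roughly 18,300 g/mol, showing how the monomer-to-RAFT-agent ratio and the conversion set the target molecular weight.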
Reversible addition−fragmentation chain-transfer polymerization
[ "Chemistry", "Materials_science" ]
3,964
[ "Polymerization reactions", "Polymer chemistry" ]
2,918,988
https://en.wikipedia.org/wiki/Neurotransmission
Neurotransmission (Latin: transmissio "passage, crossing" from transmittere "send, let through") is the process by which signaling molecules called neurotransmitters are released by the axon terminal of a neuron (the presynaptic neuron), and bind to and react with the receptors on the dendrites of another neuron (the postsynaptic neuron) a short distance away. A similar process occurs in retrograde neurotransmission, where the dendrites of the postsynaptic neuron release retrograde neurotransmitters (e.g., endocannabinoids; synthesized in response to a rise in intracellular calcium levels) that signal through receptors that are located on the axon terminal of the presynaptic neuron, mainly at GABAergic and glutamatergic synapses. Neurotransmission is regulated by several different factors: the availability and rate-of-synthesis of the neurotransmitter, the release of that neurotransmitter, the baseline activity of the postsynaptic cell, the number of available postsynaptic receptors for the neurotransmitter to bind to, and the subsequent removal or deactivation of the neurotransmitter by enzymes or presynaptic reuptake. In response to a threshold action potential or graded electrical potential, a neurotransmitter is released at the presynaptic terminal. The released neurotransmitter may then move across the synapse to be detected by and bind with receptors in the postsynaptic neuron. Binding of neurotransmitters may influence the postsynaptic neuron in either an inhibitory or excitatory way. The binding of neurotransmitters to receptors in the postsynaptic neuron can trigger either short term changes, such as changes in the membrane potential called postsynaptic potentials, or longer term changes by the activation of signaling cascades. Neurons form complex biological neural networks through which nerve impulses (action potentials) travel. Neurons do not touch each other (except in the case of an electrical synapse through a gap junction); instead, neurons interact at close contact points called synapses. A neuron transports its information by way of an action potential. When the nerve impulse arrives at the synapse, it may cause the release of neurotransmitters, which influence another (postsynaptic) neuron. The postsynaptic neuron may receive inputs from many additional neurons, both excitatory and inhibitory. The excitatory and inhibitory influences are summed, and if the net effect is inhibitory, the neuron will be less likely to "fire" (i.e., generate an action potential), and if the net effect is excitatory, the neuron will be more likely to fire. How likely a neuron is to fire depends on how far its membrane potential is from the threshold potential, the voltage at which an action potential is triggered because enough voltage-dependent sodium channels are activated so that the net inward sodium current exceeds all outward currents. Excitatory inputs bring a neuron closer to threshold, while inhibitory inputs bring the neuron farther from threshold. An action potential is an "all-or-none" event; neurons whose membranes have not reached threshold will not fire, while those that do must fire. Once the action potential is initiated (traditionally at the axon hillock), it will propagate along the axon, leading to release of neurotransmitters at the synaptic bouton to pass along information to yet another adjacent neuron. Stages in neurotransmission at the synapse Synthesis of the neurotransmitter. This can take place in the cell body, in the axon, or in the axon terminal. 
Storage of the neurotransmitter in storage granules or vesicles in the axon terminal. Calcium enters the axon terminal during an action potential, causing release of the neurotransmitter into the synaptic cleft. After its release, the transmitter binds to and activates a receptor in the postsynaptic membrane. Deactivation of the neurotransmitter. The neurotransmitter is either destroyed enzymatically, or taken back into the terminal from which it came, where it can be reused, or degraded and removed. General description Neurotransmitters are spontaneously packed in vesicles and released in individual quanta-packets independently of presynaptic action potentials. This slow release is detectable and produces micro-inhibitory or micro-excitatory effects on the postsynaptic neuron. An action potential briefly amplifies this process. Neurotransmitters containing vesicles cluster around active sites, and after they have been released may be recycled by one of three proposed mechanisms. The first proposed mechanism involves partial opening and then re-closing of the vesicle. The second two involve the full fusion of the vesicle with the membrane, followed by recycling, or recycling into the endosome. Vesicular fusion is driven largely by the concentration of calcium in micro domains located near calcium channels, allowing for only microseconds of neurotransmitter release, while returning to normal calcium concentration takes a couple of hundred of microseconds. The vesicle exocytosis is thought to be driven by a protein complex called SNARE, that is the target for botulinum toxins. Once released, a neurotransmitter enters the synapse and encounters receptors. Neurotransmitter receptors can either be ionotropic or g protein coupled. Ionotropic receptors allow for ions to pass through when agonized by a ligand. The main model involves a receptor composed of multiple subunits that allow for coordination of ion preference. G protein coupled receptors, also called metabotropic receptors, when bound to by a ligand undergo conformational changes yielding in intracellular response. Termination of neurotransmitter activity is usually done by a transporter, however enzymatic deactivation is also plausible. Summation Each neuron connects with numerous other neurons, receiving numerous impulses from them. Summation is the adding together of these impulses at the axon hillock. If the neuron only gets excitatory impulses, it will generate an action potential. If instead the neuron gets as many inhibitory as excitatory impulses, the inhibition cancels out the excitation and the nerve impulse will stop there. Action potential generation is proportionate to the probability and pattern of neurotransmitter release, and to postsynaptic receptor sensitization. Spatial summation means that the effects of impulses received at different places on the neuron add up, so that the neuron may fire when such impulses are received simultaneously, even if each impulse on its own would not be sufficient to cause firing. Temporal summation means that the effects of impulses received at the same place can add up if the impulses are received in close temporal succession. Thus the neuron may fire when multiple impulses are received, even if each impulse on its own would not be sufficient to cause firing. Convergence and divergence Neurotransmission implies both a convergence and a divergence of information. First one neuron is influenced by many others, resulting in a convergence of input. 
When the neuron fires, the signal is sent to many other neurons, resulting in a divergence of output. Many other neurons are influenced by this neuron. Cotransmission Cotransmission is the release of several types of neurotransmitters from a single nerve terminal. At the nerve terminal, neurotransmitters are present within 35–50 nm membrane-encased vesicles called synaptic vesicles. To release neurotransmitters, the synaptic vesicles transiently dock and fuse at the base of specialized 10–15 nm cup-shaped lipoprotein structures at the presynaptic membrane called porosomes. The neuronal porosome proteome has been solved, providing the molecular architecture and the complete composition of the machinery. Recent studies in a myriad of systems have shown that most, if not all, neurons release several different chemical messengers. Cotransmission allows for more complex effects at postsynaptic receptors, and thus allows for more complex communication to occur between neurons. In modern neuroscience, neurons are often classified by their cotransmitter. For example, striatal "GABAergic neurons" utilize opioid peptides or substance P as their primary cotransmitter. Some neurons can release at least two neurotransmitters at the same time, the other being a cotransmitter, in order to provide the stabilizing negative feedback required for meaningful encoding, in the absence of inhibitory interneurons. Examples include: GABA–glycine co-release. Dopamine–glutamate co-release. Acetylcholine (ACh)–glutamate co-release. ACh–vasoactive intestinal peptide (VIP) co-release. ACh–calcitonin gene-related peptide (CGRP) co-release. Glutamate–dynorphin co-release (in hippocampus). Noradrenaline and ATP are sympathetic co-transmitters. It is found that the endocannabinoid anadamide and the cannabinoid WIN 55,212-2 can modify the overall response to sympathetic nerve stimulation, and indicate that prejunctional CB1 receptors mediate the sympatho-inhibitory action. Thus cannabinoids can inhibit both the noradrenergic and purinergic components of sympathetic neurotransmission. One unusual pair of co-transmitters is GABA and glutamate which are released from the same axon terminals of neurons originating from the ventral tegmental area (VTA), internal globus pallidus, and supramammillary nucleus. The former two project to the habenula whereas the projections from the supramammillary nucleus are known to target the dentate gyrus of the hippocampus. Genetic association Neurotransmission is genetically associated with other characteristics or features. For example, enrichment analyses of different signaling pathways led to the discovery of a genetic association with intracranial volume. See also Autoreceptor Biological neuron model § Synaptic transmission (Koch & Segev) Electrophysiology G protein-coupled receptor Molecular neuropharmacology Neuromuscular transmission Neuropsychopharmacology References External links Historical evolution of the neurotransmission concept Neurophysiology Molecular neuroscience
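To make the summation and threshold behaviour described above concrete, the following Python sketch is a deliberately minimal toy model, not taken from the article: the resting potential, threshold, decay factor, PSP sizes and input spike trains are all invented illustrative values, and real neurons integrate inputs continuously rather than in discrete steps.

```python
# Toy illustration of spatial and temporal summation at the axon hillock.
# Membrane potential is treated as a single number that decays toward rest
# and jumps by a fixed amount for each excitatory (+) or inhibitory (-) input.

REST = -70.0       # resting potential, mV (typical textbook value, assumed)
THRESHOLD = -55.0  # firing threshold, mV (typical textbook value, assumed)
DECAY = 0.8        # fraction of the deviation from rest kept per time step

def simulate(psp_trains):
    """psp_trains: list of lists; psp_trains[t] holds the postsynaptic
    potentials (mV, positive = excitatory, negative = inhibitory)
    arriving at time step t. Returns the time steps at which the
    summed potential crosses threshold."""
    v = REST
    spikes = []
    for t, psps in enumerate(psp_trains):
        v = REST + DECAY * (v - REST)   # leaky decay (temporal summation window)
        v += sum(psps)                  # spatial summation of simultaneous inputs
        if v >= THRESHOLD:
            spikes.append(t)
            v = REST                    # reset after the all-or-none spike
    return spikes

# A single 5 mV EPSP never reaches threshold, but four arriving together
# (spatial summation) or a rapid train (temporal summation) do:
print(simulate([[5.0], [], [5.0], []]))         # -> []
print(simulate([[5.0, 5.0, 5.0, 5.0]]))         # -> [0]
print(simulate([[6.0], [6.0], [6.0], [6.0]]))   # -> [3]
```

The reset after each spike reflects the all-or-none character described in the article; the decaying trace is what allows impulses arriving in close succession to add up.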
Neurotransmission
[ "Chemistry" ]
2,287
[ "Molecular neuroscience", "Molecular biology" ]
2,919,468
https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Nuclear%20Physics
The Max-Planck-Institut für Kernphysik ("MPI for Nuclear Physics" or MPIK for short) is a research institute in Heidelberg, Germany. The institute is one of the 80 institutes of the Max-Planck-Gesellschaft (Max Planck Society), an independent, non-profit research organization. The Max Planck Institute for Nuclear Physics was founded in 1958 under the leadership of Wolfgang Gentner. Its precursor was the Institute for Physics at the MPI for Medical Research. Today, the institute's research areas are: crossroads of particle physics and astrophysics (astroparticle physics) and many-body dynamics of atoms and molecules (quantum dynamics). The research field of Astroparticle Physics, represented by the divisions of Jim Hinton, Werner Hofmann and Manfred Lindner, combines questions related to macrocosm and microcosm. Unconventional methods of observation for gamma rays and neutrinos open new windows to the universe. What lies behind “dark matter” and “dark energy” is theoretically investigated. The research field of Quantum Dynamics is represented by the divisions of Klaus Blaum, Christoph Keitel and Thomas Pfeifer. Using reaction microscopes, simple chemical reactions can be “filmed”. Storage rings and traps allow precision experiments almost under space conditions. The interaction of intense laser light with matter is investigated using quantum-theoretical methods. Further research fields are cosmic dust, atmospheric physics as well as fullerenes and other carbon molecules. Scientists at the MPIK collaborate with other research groups in Europe and all over the world and are involved in numerous international collaborations, partly in a leading role. Particularly close connections to some large-scale facilities like GSI (Darmstadt), DESY (Hamburg), CERN (Geneva), TRIUMF (Canada), and INFN-LNGS (Assergi L‘Aquila) exist. The institute has about 390 employees, as well as many diploma students and scientific guests. In the local region, the Institute cooperates closely with the University of Heidelberg, where the directors and further members of the Institute are teaching. Three International Max Planck Research Schools (IMPRS) and a graduate school serve to foster young scientists. The institute operates a cryogenic ion storage ring (CSR) dedicated to the study of molecular ions under interstellar space conditions. Several Penning ion traps are used to measure fundamental constants of nature, such as the atomic mass of the electron and of nuclei. A facility containing several electron beam ion traps (EBIT) that produce and store highly charged ions is dedicated to fundamental atomic structure as well as astrophysical investigations. Large cameras for gamma-ray telescopes (HESS, CTA), Dark Matter (XENON1t, DARWIN), and neutrino detectors are developed and tested on-site. Structure There are five scientific divisions and several further research groups and junior groups. Scientific and technical departments as well as the administration support the researchers. Departments Stored and cooled ions (Klaus Blaum) Non-thermal astrophysics (Jim Hinton) Theoretical quantum dynamics and quantum electrodynamics (Christoph H. 
Keitel) Particle and astroparticle physics (Manfred Lindner) Quantum dynamics and control (Thomas Pfeifer) Independent research groups Cold Collisions and the Pathways toward Life in Interstellar Space (ASTROLAB)(Holger Kreckel) Astrophysical Plasma Theory (APT) (Brian Reville) Massive Neutrinos: Investigating their Theoretical Origin and Phenomenology (MANITOP) (Werner Rodejohann) Strong Interaction and Exotic Nuclei (Achim Schwenk) Theoretical Neutrino and Astroparticle Physics (Alexei Smirnov) High-energy astrophysics with H.E.S.S. and  CTA (Werner Hofmann; emeritus) Plasma astrophysics (Heinrich J. Völk; emeritus) Random matrices, chaos and disorder in many-body quantum systems (Hans A. Weidenmüller; emeritus) External links MPI for Nuclear Physics - Homepage Heidelberg Institutes associated with CERN Nuclear Physics Nuclear physics Nuclear research institutes Physics research institutes
Max Planck Institute for Nuclear Physics
[ "Physics", "Engineering" ]
842
[ "Nuclear research institutes", "Nuclear organizations", "Nuclear physics" ]
2,919,807
https://en.wikipedia.org/wiki/Spent%20fuel%20pool
Spent fuel pools (SFP) are storage pools (or "ponds" in the United Kingdom) for spent fuel from nuclear reactors. They are typically 40 or more feet (12 m) deep, with the bottom 14 feet (4.3 m) equipped with storage racks designed to hold fuel assemblies removed from reactors. A reactor's local pool is specially designed for the reactor in which the fuel was used and is situated at the reactor site. Such pools are used for short-term cooling of the fuel rods. This allows short-lived isotopes to decay and thus reduces the ionizing radiation and decay heat emanating from the rods. The water cools the fuel and provides radiological protection from its radiation. Pools also exist on sites remote from reactors, for longer-term storage such as the Independent Spent Fuel Storage Installation (ISFSI), located at the Morris Operation, or as a production buffer for 10 to 20 years before being sent for reprocessing or dry cask storage. While only about 20 feet (about 6 m) of water is needed to keep radiation levels below acceptable levels, the extra depth provides a safety margin and allows fuel assemblies to be manipulated without special shielding to protect the operators. Operation About a quarter to a third of the total fuel load of a reactor is removed from the core every 12 to 24 months and replaced with fresh fuel. Spent fuel rods generate intense heat and dangerous radiation that must be contained. Fuel is moved from the reactor and manipulated in the pool generally by automated handling systems, although some manual systems are still in use. The fuel bundles fresh from the core are normally segregated for several months for initial cooling before being sorted into other parts of the pool to wait for final disposal. Metal racks keep the fuel in controlled positions for physical protection and for ease of tracking and rearrangement. High-density racks also incorporate boron-10, often as boron carbide (Metamic, Boraflex, Boral, Tetrabor and Carborundum) or other neutron-absorbing material to ensure subcriticality. Water quality is tightly controlled to prevent the fuel or its cladding from degrading. This can include monitoring the water for contamination by actinides, which could indicate a leaking fuel rod. Current regulations in the United States permit re-arranging of the spent rods so that maximum efficiency of storage can be achieved. The maximum temperature of the spent fuel bundles decreases significantly between two and four years, and less from four to six years. The fuel pool water is continuously cooled to remove the heat produced by the spent fuel assemblies. Pumps circulate water from the spent fuel pool to heat exchangers, then back to the spent fuel pool. The water temperature in normal operating conditions is held below 50 °C (120 °F). Radiolysis, the dissociation of molecules by radiation, is of particular concern in wet storage, as water may be split by residual radiation and hydrogen gas may accumulate increasing the risk of explosions. For this reason the air in the room of the pools, as well as the water, must be continually monitored and treated. Other possible configurations Rather than manage the pool's inventory to minimize the possibility of continued fission activity, China is building a 200 MWt nuclear reactor to run on used fuel from nuclear power stations to generate process heat for district heating and desalination. 
Essentially an SFP operated as a deep swimming pool reactor; it will operate at atmospheric pressure, which will reduce the engineering requirements for safety. Other research envisions a similar low-power reactor using spent fuel where instead of limiting the production of hydrogen by radiolysis, it is encouraged by the addition of catalysts and ion scavengers to the cooling water. This hydrogen would then be removed to use as fuel. Risks The neutron absorbing materials in spent fuel pools have been observed to degrade severely over time, reducing the safety margins of maintaining subcriticality; in addition, it has been shown that the in-site measurement technique used to evaluate these neutron absorbers (Boron Areal Density Gauge for Evaluating Racks, or BADGER) has an unknown degree of uncertainty. If there is a prolonged interruption of cooling due to emergency situations, the water in the spent fuel pools may boil off, possibly resulting in radioactive elements being released into the atmosphere. In the magnitude 9 earthquake that struck the Fukushima nuclear plants in March 2011, three of the spent fuel pools were in buildings which had been damaged and were seen to be emitting water vapour. The US NRC wrongly stated that the pool at reactor 4 had boiled dry—this was denied at the time by the Government of Japan and found to be incorrect in subsequent inspection and data examination. According to nuclear plant safety specialists, the chances of criticality in a spent fuel pool are very small, usually avoided by the dispersal of the fuel assemblies, inclusion of a neutron absorber in the storage racks and overall by the fact that the spent fuel has too low an enrichment level to self-sustain a fission reaction. They also state that if the water covering the spent fuel evaporates, there is no element to enable a chain reaction by moderating neutrons. According to Dr. Kevin Crowley of the Nuclear and Radiation Studies Board, "successful terrorist attacks on spent fuel pools, though difficult, are possible. If an attack leads to a propagating zirconium cladding fire, it could result in the release of large amounts of radioactive material." After the September 11, 2001 attacks the Nuclear Regulatory Commission required American nuclear plants "to protect with high assurance" against specific threats involving certain numbers and capabilities of assailants. Plants were also required to "enhance the number of security officers" and to improve "access controls to the facilities". On August 31, 2010, a diver servicing the spent fuel pool at the Leibstadt Nuclear Power Plant (KKL) was exposed to radiation in excess of statutory annual dose limits after handling an unidentified object, which was later identified as protective tubing from a radiation monitor in the reactor core, made highly radioactive by neutron flux. The diver received a hand dose of about 1,000 mSv which is twice the statutory limit of 500 mSv. According to KKL authorities the diver has not suffered any longtime consequences from the accident. See also Deep geological repository Dry cask storage Lists of nuclear disasters and radioactive incidents Nuclear fuel cycle Radioactive waste Spent nuclear fuel shipping cask Cherenkov radiation References External links Radiological Terrorism: Sabotage of Spent Fuel Pool Storage of Spent Nuclear Fuel U.S. Nuclear Regulatory Commission (NRC) An example diagram of a Spent Fuel Pool Indian Point Energy Center "Geek Answers: Does nuclear waste really glow?" 
BY GRAHAM TEMPLETON 07.17.2014 at Geek.com Nuclear power plant components Radioactive waste Waste treatment technology
Spent fuel pool
[ "Chemistry", "Technology", "Engineering" ]
1,380
[ "Water treatment", "Environmental impact of nuclear power", "Hazardous waste", "Radioactivity", "Environmental engineering", "Waste treatment technology", "Radioactive waste" ]
2,920,337
https://en.wikipedia.org/wiki/Born%20rigidity
Born rigidity is a concept in special relativity. It is one answer to the question of what, in special relativity, corresponds to the rigid body of non-relativistic classical mechanics. The concept was introduced by Max Born (1909), who gave a detailed description of the case of constant proper acceleration which he called hyperbolic motion. When subsequent authors such as Paul Ehrenfest (1909) tried to incorporate rotational motions as well, it became clear that Born rigidity is a very restrictive sense of rigidity, leading to the Herglotz–Noether theorem, according to which there are severe restrictions on rotational Born rigid motions. It was formulated by Gustav Herglotz (1909, who classified all forms of rotational motions) and in a less general way by Fritz Noether (1909). As a result, Born (1910) and others gave alternative, less restrictive definitions of rigidity. Definition Born rigidity is satisfied if the orthogonal spacetime distance between infinitesimally separated curves or worldlines is constant, or equivalently, if the length of the rigid body in momentary co-moving inertial frames measured by standard measuring rods (i.e. the proper length) is constant and is therefore subjected to Lorentz contraction in relatively moving frames. Born rigidity is a constraint on the motion of an extended body, achieved by careful application of forces to different parts of the body. A body that could maintain its own rigidity would violate special relativity, as its speed of sound would be infinite. A classification of all possible Born rigid motions can be obtained using the Herglotz–Noether theorem. This theorem states that all irrotational Born rigid motions (class A) consist of hyperplanes rigidly moving through spacetime, while any rotational Born rigid motion (class B) must be an isometric Killing motion. This implies that a Born rigid body only has three degrees of freedom. Thus a body can be brought in a Born rigid way from rest into any translational motion, but it cannot be brought in a Born rigid way from rest into rotational motion. Stresses and Born rigidity It was shown by Herglotz (1911), that a relativistic theory of elasticity can be based on the assumption, that stresses arise when the condition of Born rigidity is broken. An example of breaking Born rigidity is the Ehrenfest paradox: Even though the state of uniform circular motion of a body is among the allowed Born rigid motions of class B, a body cannot be brought from any other state of motion into uniform circular motion without breaking the condition of Born rigidity during the phase in which the body undergoes various accelerations. But if this phase is over and the centripetal acceleration becomes constant, the body can be uniformly rotating in agreement with Born rigidity. Likewise, if it is now in uniform circular motion, this state cannot be changed without again breaking the Born rigidity of the body. Another example is Bell's spaceship paradox: If the endpoints of a body are accelerated with constant proper accelerations in rectilinear direction, then the leading endpoint must have a lower proper acceleration in order to leave the proper length constant so that Born rigidity is satisfied. It will also exhibit an increasing Lorentz contraction in an external inertial frame, that is, in the external frame the endpoints of the body are not accelerating simultaneously. 
However, if a different acceleration profile is chosen by which the endpoints of the body are simultaneously accelerated with same proper acceleration as seen in the external inertial frame, its Born rigidity will be broken, because constant length in the external frame implies increasing proper length in a comoving frame due to relativity of simultaneity. In this case, a fragile thread spanned between two rockets will experience stresses (which are called Herglotz–Dewan–Beran stresses) and will consequently break. Born rigid motions A classification of allowed, in particular rotational, Born rigid motions in flat Minkowski spacetime was given by Herglotz, which was also studied by Friedrich Kottler (1912, 1914), Georges Lemaître (1924), Adriaan Fokker (1940), George Salzmann & Abraham H. Taub (1954). Herglotz pointed out that a continuum is moving as a rigid body when the world lines of its points are equidistant curves in . The resulting worldliness can be split into two classes: Class A: Irrotational motions Herglotz defined this class in terms of equidistant curves which are the orthogonal trajectories of a family of hyperplanes, which also can be seen as solutions of a Riccati equation (this was called "plane motion" by Salzmann & Taub or "irrotational rigid motion" by Boyer). He concluded, that the motion of such a body is completely determined by the motion of one of its points. The general metric for these irrotational motions has been given by Herglotz, whose work was summarized with simplified notation by Lemaître (1924). Also the Fermi metric in the form given by Christian Møller (1952) for rigid frames with arbitrary motion of the origin was identified as the "most general metric for irrotational rigid motion in special relativity". In general, it was shown that irrotational Born motion corresponds to those Fermi congruences of which any worldline can be used as baseline (homogeneous Fermi congruence). Already Born (1909) pointed out that a rigid body in translational motion has a maximal spatial extension depending on its acceleration, given by the relation , where is the proper acceleration and is the radius of a sphere in which the body is located, thus the higher the proper acceleration, the smaller the maximal extension of the rigid body. The special case of translational motion with constant proper acceleration is known as hyperbolic motion, with the worldline Class B: Rotational isometric motions Herglotz defined this class in terms of equidistant curves which are the trajectories of a one-parameter motion group (this was called "group motion" by Salzmann & Taub and was identified with isometric Killing motion by Felix Pirani & Gareth Williams (1962)). He pointed out that they consist of worldlines whose three curvatures are constant (known as curvature, torsion and hypertorsion), forming a helix. Worldlines of constant curvatures in flat spacetime were also studied by Kottler (1912), Petrův (1964), John Lighton Synge (1967, who called them timelike helices in flat spacetime), or Letaw (1981, who called them stationary worldlines) as the solutions of the Frenet–Serret formulas. Herglotz further separated class B using four one-parameter groups of Lorentz transformations (loxodromic, elliptic, hyperbolic, parabolic) in analogy to hyperbolic motions (i.e. 
isometric automorphisms of a hyperbolic space), and pointed out that Born's hyperbolic motion (which follows from the hyperbolic group with in the notation of Herglotz and Kottler, in the notation of Lemaître, in the notation of Synge; see the following table) is the only Born rigid motion that belongs to both classes A and B. General relativity Attempts to extend the concept of Born rigidity to general relativity have been made by Salzmann & Taub (1954), C. Beresford Rayner (1959), Pirani & Williams (1962), Robert H. Boyer (1964). It was shown that the Herglotz–Noether theorem is not completely satisfied, because rigid rotating frames or congruences are possible which do not represent isometric Killing motions. Alternatives Several weaker substitutes have also been proposed as rigidity conditions, such as by Noether (1909) or Born (1910) himself. A modern alternative was given by Epp, Mann & McGrath. In contrast to the ordinary Born rigid congruence consisting of the "history of a spatial volume-filling set of points", they recover the six degrees of freedom of classical mechanics by using a quasilocal rigid frame by defining a congruence in terms of the "history of the set of points on the surface bounding a spatial volume". References Bibliography ; English translation by David Delphenich: On the mechanics of deformable bodies from the standpoint of relativity theory. In English: External links Born Rigidity, Acceleration, and Inertia at mathpages.com The Rigid Rotating Disk in Relativity in the USENET Physics FAQ Special relativity Rigid bodies Max Born
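The article text above refers to Born's hyperbolic-motion worldline and to the maximal-extension relation, but the formulas themselves were lost in extraction. As a hedged reconstruction using the standard textbook symbols (α for the constant proper acceleration, τ for proper time, c for the speed of light, R for the radius of the enclosing sphere; none of these symbol choices come from the article), the usual forms are:

```latex
% Hyperbolic motion: worldline of a point with constant proper acceleration \alpha,
% parametrized by proper time \tau (standard textbook form, reconstructed, not quoted).
ct = \frac{c^{2}}{\alpha}\sinh\!\left(\frac{\alpha\tau}{c}\right), \qquad
x  = \frac{c^{2}}{\alpha}\cosh\!\left(\frac{\alpha\tau}{c}\right), \qquad
x^{2} - c^{2}t^{2} = \left(\frac{c^{2}}{\alpha}\right)^{2}.

% Born's bound on the size of a rigidly accelerated body: the higher the proper
% acceleration, the smaller the maximal extension,
\alpha R < c^{2} \quad\Longleftrightarrow\quad R < \frac{c^{2}}{\alpha}.
```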
Born rigidity
[ "Physics" ]
1,806
[ "Special relativity", "Theory of relativity" ]
2,920,873
https://en.wikipedia.org/wiki/Uncorrelated%20asymmetry
In game theory an uncorrelated asymmetry is an arbitrary asymmetry in a game which is otherwise symmetrical. The name 'uncorrelated asymmetry' is due to John Maynard Smith, who called payoff-relevant asymmetries in games with similar roles for each player 'correlated asymmetries' (note that any game with correlated asymmetries must also have uncorrelated asymmetries). The explanation of an uncorrelated asymmetry usually makes reference to "informational asymmetry", which may confuse some readers, since games with uncorrelated asymmetries are still games of complete information. What differs between the same game with and without an uncorrelated asymmetry is whether the players know which role they have been assigned. If players in a symmetric game know whether they are Player 1, Player 2, etc. (or row vs. column player in a bimatrix game), then an uncorrelated asymmetry exists. If the players do not know which player they are, then no uncorrelated asymmetry exists. The informational asymmetry is that one player believes he is player 1 and the other believes he is player 2. Therefore, "informational asymmetry" does not refer to knowledge in the sense of an information set in an extensive form game. The concept of uncorrelated asymmetries is important in determining which Nash equilibria are evolutionarily stable strategies in discoordination games such as the game of chicken. In these games the mixed-strategy Nash equilibrium is the ESS if there is no uncorrelated asymmetry, and the pure conditional Nash equilibria are ESSes when there is an uncorrelated asymmetry. The usual applied example of an uncorrelated asymmetry is territory ownership in the hawk–dove game. Even if the two players ("owner" and "intruder") have the same payoffs (i.e., the game is payoff symmetric), the territory owner will play Hawk, and the intruder Dove, in what is known as the 'Bourgeois strategy' (the reverse is also an ESS, known as the 'anti-bourgeois strategy', but makes little biological sense). See also The section on uncorrelated asymmetries in Game of chicken The section on discoordination games in Best response. References Maynard Smith, J. (1982). Evolution and the Theory of Games. Cambridge University Press. Game theory Asymmetry
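As a concrete illustration of the point above — the mixed-strategy equilibrium is the ESS without an uncorrelated asymmetry, while the role-conditional 'Bourgeois' strategy becomes an ESS once roles are known — the following Python sketch works through a Hawk–Dove game. The payoff values V = 2 and C = 4 and the strategy names used in the code are illustrative choices, not taken from the article or from Maynard Smith's book.

```python
# Illustrative Hawk-Dove payoffs: V = value of the resource, C = cost of a fight.
V, C = 2.0, 4.0   # chosen so that C > V, i.e. the mixed ESS is interior

def payoff(me, other):
    """Row player's payoff in a single Hawk-Dove contest."""
    if me == "H" and other == "H":
        return (V - C) / 2
    if me == "H" and other == "D":
        return V
    if me == "D" and other == "H":
        return 0.0
    return V / 2      # Dove vs Dove: share the resource

# Without an uncorrelated asymmetry the only symmetric ESS is the mixed
# strategy "play Hawk with probability p = V/C".
p = V / C
print("mixed ESS plays Hawk with probability", p)

# With an uncorrelated asymmetry (owner vs intruder, roles equally likely),
# strategies can condition on role.  A strategy maps role -> action.
bourgeois      = {"owner": "H", "intruder": "D"}
anti_bourgeois = {"owner": "D", "intruder": "H"}
always_hawk    = {"owner": "H", "intruder": "H"}
always_dove    = {"owner": "D", "intruder": "D"}

def expected(me, other):
    """Expected payoff of strategy `me` against `other`, averaging over
    the two equally likely role assignments."""
    as_owner    = payoff(me["owner"], other["intruder"])
    as_intruder = payoff(me["intruder"], other["owner"])
    return 0.5 * (as_owner + as_intruder)

# Bourgeois earns strictly more against Bourgeois than any mutant does:
for name, s in [("bourgeois", bourgeois), ("anti-bourgeois", anti_bourgeois),
                ("always hawk", always_hawk), ("always dove", always_dove)]:
    print(name, expected(s, bourgeois))
```

Because Bourgeois earns strictly more against itself (1.0 here) than any unconditional mutant does, it is a strict Nash equilibrium and hence an ESS, whereas without role information only the mixed strategy p = V/C is evolutionarily stable.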
Uncorrelated asymmetry
[ "Physics", "Mathematics" ]
524
[ "Game theory", "Symmetry", "Asymmetry" ]
37,236,489
https://en.wikipedia.org/wiki/Stage%20loading
Stage loading is a measure of the load on a turbomachinery stage, be it a part of a compressor, fan or turbine. The parameter, which is non-dimensional, is defined as:

ψ = g J Δh / U²

where (Imperial units, with SI units in parentheses):
g = acceleration of gravity, ft/s² (1.0)
J = mechanical equivalent of heat, ft·lb/(s·hp) (1.0)
Δh = change in specific enthalpy over the stage, hp·s/lb (kW·s/kg)
U = peripheral blade speed, ft/s (m/s)

Average stage loading has a very similar definition, where the number of stages, n, within the compressor, fan or turbine is used to provide an average value:

ψ_avg = g J Δh / (n U²)

In this case the change in enthalpy is across the whole unit, not just a stage. Similarly, the blade speed used is a mean for the whole device. The above equation shows that if blade speed cannot be increased for, say, mechanical or aerodynamic reasons, the number of stages has to be increased to bring the average stage loading back to an acceptable level and so obtain a satisfactory efficiency. The ideal average stage loading for a turbine is about 1.8. Turbomachinery
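A short worked example shows how the average-loading relation drives the stage count once blade speed is capped. In SI units g and J are both 1, so the per-stage loading reduces to Δh/U²; all of the numbers below (enthalpy drop, blade speed) are illustrative assumptions, not data from the article.

```python
# Illustrative sizing of a turbine with a blade-speed limit.
# Numbers are invented for demonstration; in SI units g = J = 1,
# so stage loading is simply (specific work per stage) / U^2.
import math

delta_h_total = 400e3   # total specific enthalpy drop across the turbine, J/kg
U_mean        = 350.0   # mean peripheral blade speed, m/s
target_psi    = 1.8     # "ideal" average stage loading quoted in the text

# average stage loading: psi = delta_h_total / (n * U^2)  ->  solve for n
n_exact  = delta_h_total / (target_psi * U_mean**2)
n_stages = math.ceil(n_exact)

print(f"exact stage count : {n_exact:.2f}")
print(f"stages required   : {n_stages}")
print(f"resulting loading : {delta_h_total / (n_stages * U_mean**2):.2f}")
```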
Stage loading
[ "Chemistry", "Engineering" ]
245
[ "Chemical equipment", "Mechanical engineering", "Turbomachinery" ]
37,237,169
https://en.wikipedia.org/wiki/Affect%20%28education%29
In education, affect is broadly defined as the attitudes, emotions, and values present in an educational environment. The two main types of affect are professional affect and student affect. Professional affect refers to the emotions and values presented by the teacher which are picked up by the student, while student affect refers to the attitudes, interests, and values acquired in the educational environment. While there is the possibility of overlap between student and professional affect, the terms are rarely used interchangeably by educational professionals, with student affect being reserved primarily for use to describe developmental activities present in a school which are not presented by the teacher. The importance of affect in education has become a topic of increasing interest in the fields of psychology and education. It is a commonly held opinion that curriculum and emotional literacy should be interwoven. Examples of such curriculum include using English language to increase emotional vocabulary (see affect labeling), and writing about the self and history to discuss emotion in major events such as genocide. This type of curriculum is also known as therapeutic education. According to Ecclestone and Hayes, therapeutic education focuses on the emotional over the intellectual. Educator attitudes In order for such curriculum to be implemented, it is essential that educators be aware of the importance emotional literacy. Examination of educator and student attitudes towards emotional literacy is a common topic of research. Researchers have found that staff have conceptions of what constitutes emotional literacy, including being self-aware of one's own feelings, using emotional language, and being cognizant that children have feelings that need to be taken into account. In addition, staff discussed the necessity of having all educators dedicated to creating an emotionally literate school, and the detrimental effects of even one educator not supporting this initiative. School attitudes Roffey (2008) examined the influence of emotional literacy on the school as a whole using ecological analysis. It was found that positive change was gradual, and involved multiple elements. For instance, teachers who felt as though they were genuinely valued and were consulted about policy felt happier at work. In turn, these teachers felt better prepared to handle conflicts that arose inside the classroom, and when students experienced this positive approach they were more cooperative (see cooperative learning). This shows how incorporating emotional literacy into a child's education is a school-wide collaborative effort. Funding Examples of government funding of emotional literacy include Every Child Matters. Criticisms Criticisms of emotional literacy in curriculum revolves around the idea that it although well intended, it is designed to disempower students. See also Affect (psychology) References Emotion Educational psychology
Affect (education)
[ "Biology" ]
508
[ "Emotion", "Behavior", "Human behavior" ]
37,237,547
https://en.wikipedia.org/wiki/Nuclear%20transparency
Nuclear transparency is the ratio of cross-sections for exclusive processes from nuclei to those of free nucleons. If the nuclear cross-section is denoted σ_A and the free nucleon cross-section σ_N, then for a nucleus of mass number A the nuclear transparency can be defined as T = σ_A / (A σ_N), where σ_A can be parameterized in terms of A as σ_A = σ_N A^α. Therefore, transparency can be expressed as T = A^(α − 1). Here, the nucleon cross-section can be thought of as a hydrogen cross-section, and the nuclear cross-section as that measured for heavier targets. Nuclear physics
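As a numerical illustration of the power-law form above: the exponent α = 0.75 used below is an assumed, illustrative value (measured exponents depend on the reaction studied), so the printed transparencies only indicate the trend that T falls slowly with mass number A.

```python
# Transparency T = sigma_A / (A * sigma_N) = A**(alpha - 1) when sigma_A = sigma_N * A**alpha.
# alpha = 0.75 is an illustrative value, not taken from the article.
alpha = 0.75
for name, A in [("carbon", 12), ("iron", 56), ("gold", 197)]:
    print(f"{name:6s} A={A:3d}  T = {A ** (alpha - 1):.2f}")
```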
Nuclear transparency
[ "Physics" ]
103
[ "Nuclear and atomic physics stubs", "Nuclear physics" ]
37,242,178
https://en.wikipedia.org/wiki/Gab%20operon
The gab operon is responsible for the conversion of γ-aminobutyrate (GABA) to succinate. The gab operon comprises three structural genes – gabD, gabT and gabP – that encode for a succinate semialdehyde dehydrogenase, GABA transaminase and a GABA permease respectively. There is a regulatory gene csiR, downstream of the operon, that codes for a putative transcriptional repressor and is activated when nitrogen is limiting. The gab operon has been characterized in Escherichia coli and significant homologies for the enzymes have been found in organisms such as Saccharomyces cerevisiae, rats and humans. Limited nitrogen conditions activate the gab genes. The enzymes produced by these genes convert GABA to succinate, which then enters the TCA cycle, to be used as a source of energy. The gab operon is also known to contribute to polyamine homeostasis during nitrogen-limited growth and to maintain high internal glutamate concentrations under stress conditions. Structure The gab operon consists of three structural genes: gabT : encodes a GABA transaminase that produces succinic semialdehyde. gabD : encodes an NADP-dependent succinic semialdehyde dehydrogenase, which oxidizes succinic semialdehyde to succinate. gabP : encodes a GABA-specific permease. Physiological significance of the operon The gabT gene encodes for GABA transaminase, an enzyme that catalyzes the conversion of GABA and 2-oxoglutarate into succinate semialdehyde and glutamate. Succinate semialdehyde is then oxidized into succinate by succinate semialdehyde dehydrogenase which is encoded by the gabP gene, thereby entering the TCA cycle as a usable source of energy. The gab operon contributes to homeostasis of polyamines such as putrescine, during nitrogen-limited growth. It is also known to maintain high internal glutamate concentrations under stress conditions. Regulation Differential Regulation of Promoters The expression of genes in the operon is controlled by three differentially regulated promoters, two of which are controlled by RpoS encoded sigma factor σS. csiDp : is σS-dependent and is activated exclusively upon carbon starvation because cAMP-CRP acts an essential activator for σS containing RNA polymerase at the csiD promoter. gabDp1: is σS -dependent and is induced by multiple stresses. gabDp2: is σ70 dependent and is controlled by Nac (Nitrogen Assimilation Control) regulatory proteins expressed under nitrogen limitation. Mechanism of Regulation Activation The csiD promoter (csiDp) is essential for the expression of csiD(carbon starvation induced gene), ygaF and the gab genes. The csiDp is activated exclusively under carbon starvation conditions and stationary phase during which cAMP accumulates in high concentrations in the cell. The binding of cAMP to the cAMP receptor protein(CRP) causes CRP to bind tightly to a specific DNA site in the csiDp promoter, thus activating the transcription of genes downstream of the promoter. The gabDp1 exerts an additional control over the gabDTP region. The gabDp1 is activated by σS inducing conditions such as hyperosmotic and acidic shifts besides starvation and stationary phase. The gabDp2 promoter on the other hand, is σ70 dependent and is activated under nitrogen limitation. In nitrogen limiting conditions, the nitrogen regulator Nac binds to a site located just upstream of the promoter expressing the gab genes. The gab genes upon activation produce enzymes that degrade GABA to succinate. Repression The presence of nitrogen activates the csiR gene located downstream of the gabP gene. 
The csiR gene encodes a protein that acts as a transcriptional repressor for csiD-ygaF-gab operon hence shutting off the GABA degradation pathway. Eukaryotic Analogue GABA degradation pathways exists in almost all eukaryotic organisms and takes place by the action of similar enzymes. Although, GABA in E.coli is predominantly used as an alternative source of energy through GABA degradation pathways, GABA in higher eukaryotic organisms acts as an inhibitory neurotransmitter and also as regulator of muscle tone. GABA degradation pathways in eukaryotes are responsible for the inactivation of GABA. References Gene expression Operons
Gab operon
[ "Chemistry", "Biology" ]
961
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry", "Operons" ]
37,243,862
https://en.wikipedia.org/wiki/Strength%20tester%20machine
A strength tester machine is a type of amusement personality tester machine which, upon receiving credit, rates the subject's strength according to how strongly the person presses levers, squeezes a grip or punches a punching bag. In the past, strength testers could mainly be found in penny arcades and amusement parks, but they are now also common in pub-style locations as well as video arcades, bowling alleys, family entertainment centers and disco venues. Modern strength testing machines have become redemption games and use LCDs for video feedback, while some, such as Sega's K.O. Punch (1981), use a video game display for feedback. In media American Restoration features a restoration of a Punch-A-Bag strength tester machine from 1910 in the 6th episode "Knockout" and of a strength tester that had stood on the Santa Monica Pier in the 17th episode "Grippin' Mad". American Pickers features a 1920s Advance Machine Company electric shock strength tester in the 22nd episode "Laurel and Hardy". Special forms Electric shock strength testers evaluate how long someone can withstand harmless electric shocks. However, most machines in amusement parks today only use vibrations that feel somewhat like an electric shock to someone not expecting it. Personality strength testers are a type of amusement personality tester machine that tries to rate the strength of the subject's character. Such machines are for amusement purposes only and do not actually give a real result. See also Fortune teller machine High striker – an attraction used in funfairs, amusement parks, fundraisers, and carnivals Love tester machine Notes External links Mercury Athletic Scales Strength Tester coin-operated arcade machine game 1969 Midway Golden Arm strength tester coin-operated arcade game Ingo Strength Tester United Distributing coin-operated arcade game Commercial machines
Strength tester machine
[ "Physics", "Technology" ]
369
[ "Physical systems", "Commercial machines", "Machines" ]
949,189
https://en.wikipedia.org/wiki/Design%20matrix
In statistics and in particular in regression analysis, a design matrix, also known as model matrix or regressor matrix and often denoted by X, is a matrix of values of explanatory variables of a set of objects. Each row represents an individual object, with the successive columns corresponding to the variables and their specific values for that object. The design matrix is used in certain statistical models, e.g., the general linear model. It can contain indicator variables (ones and zeros) that indicate group membership in an ANOVA, or it can contain values of continuous variables. The design matrix contains data on the independent variables (also called explanatory variables), in a statistical model that is intended to explain observed data on a response variable (often called a dependent variable). The theory relating to such models uses the design matrix as input to some linear algebra : see for example linear regression. A notable feature of the concept of a design matrix is that it is able to represent a number of different experimental designs and statistical models, e.g., ANOVA, ANCOVA, and linear regression. Definition The design matrix is defined to be a matrix such that (the jth column of the ith row of ) represents the value of the jth variable associated with the ith object. A regression model may be represented via matrix multiplication as where X is the design matrix, is a vector of the model's coefficients (one for each variable), is a vector of random errors with mean zero, and y is the vector of predicted outputs for each object. Size The design matrix has dimension n-by-p, where n is the number of samples observed, and p is the number of variables (features) measured in all samples. In this representation different rows typically represent different repetitions of an experiment, while columns represent different types of data (say, the results from particular probes). For example, suppose an experiment is run where 10 people are pulled off the street and asked 4 questions. The data matrix M would be a 10×4 matrix (meaning 10 rows and 4 columns). The datum in row i and column j of this matrix would be the answer of the i th person to the j th question. Examples Arithmetic mean The design matrix for an arithmetic mean is a column vector of ones. Simple linear regression This section gives an example of simple linear regression—that is, regression with only a single explanatory variable—with seven observations. The seven data points are {yi, xi}, for i = 1, 2, …, 7. The simple linear regression model is where is the y-intercept and is the slope of the regression line. This model can be represented in matrix form as where the first column of 1s in the design matrix allows estimation of the y-intercept while the second column contains the x-values associated with the corresponding y-values. The matrix whose columns are 1's and x'''s in this example is the design matrix. Multiple regression This section contains an example of multiple regression with two covariates (explanatory variables): w and x. Again suppose that the data consist of seven observations, and that for each observed value to be predicted (), values wi and x''i of the two covariates are also observed. The model to be considered is This model can be written in matrix terms as Here the 7×3 matrix on the right side is the design matrix. One-way ANOVA (cell means model) This section contains an example with a one-way analysis of variance (ANOVA) with three groups and seven observations. 
The given data set has the first three observations belonging to the first group, the following two observations belonging to the second group and the final two observations belonging to the third group. If the model to be fit is just the mean of each group, then the model is which can be written In this model represents the mean of the th group. One-way ANOVA (offset from reference group) The ANOVA model could be equivalently written as each group parameter being an offset from some overall reference. Typically this reference point is taken to be one of the groups under consideration. This makes sense in the context of comparing multiple treatment groups to a control group and the control group is considered the "reference". In this example, group 1 was chosen to be the reference group. As such the model to be fit is with the constraint that is zero. In this model is the mean of the reference group and is the difference from group to the reference group. is not included in the matrix because its difference from the reference group (itself) is necessarily zero. See also Moment matrix Projection matrix Jacobian matrix and determinant Scatter matrix Gram matrix Vandermonde matrix References Further reading Matrices Regression analysis Design of experiments Multivariate statistics Data
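The regression and ANOVA examples above can be reproduced numerically. The Python sketch below builds the simple-linear-regression design matrix (a column of ones for the intercept plus one covariate column) and the one-way ANOVA indicator matrix, then solves both by least squares with NumPy; the seven data values are invented for illustration, since the article gives the structure but not the numbers.

```python
import numpy as np

# Seven illustrative observations (x_i, y_i); values are made up for the demo.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9, 14.2])

# Design matrix X: first column of ones estimates the intercept beta_0,
# second column holds the covariate values for the slope beta_1.
X = np.column_stack([np.ones_like(x), x])     # shape (7, 2): n = 7, p = 2

# Ordinary least squares: solve X beta ~= y.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, slope:", beta)              # close to (0, 2) for these numbers

# The same machinery covers the one-way ANOVA (cell means) example:
# one indicator column per group, with 3 + 2 + 2 observations.
G = np.array([
    [1, 0, 0], [1, 0, 0], [1, 0, 0],   # group 1
    [0, 1, 0], [0, 1, 0],              # group 2
    [0, 0, 1], [0, 0, 1],              # group 3
], dtype=float)
mu, *_ = np.linalg.lstsq(G, y, rcond=None)
print("group means:", mu)                      # equals the per-group averages of y
```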
Design matrix
[ "Mathematics", "Technology" ]
995
[ "Matrices (mathematics)", "Information technology", "Mathematical objects", "Data" ]
950,012
https://en.wikipedia.org/wiki/Coupling%20%28physics%29
In physics, two objects are said to be coupled when they are interacting with each other. In classical mechanics, coupling is a connection between two oscillating systems, such as pendulums connected by a spring. The connection affects the oscillatory pattern of both objects. In particle physics, two particles are coupled if they are connected by one of the four fundamental forces. Wave mechanics Coupled harmonic oscillator If two waves are able to transmit energy to each other, then these waves are said to be "coupled." This normally occurs when the waves share a common component. An example of this is two pendulums connected by a spring. If the pendulums are identical, then their equations of motion are given by These equations represent the simple harmonic motion of the pendulum with an added coupling factor of the spring. This behavior is also seen in certain molecules (such as CO2 and H2O), wherein two of the atoms will vibrate around a central one in a similar manner. Coupled LC circuits In LC circuits, charge oscillates between the capacitor and the inductor and can therefore be modeled as a simple harmonic oscillator. When the magnetic flux from one inductor is able to affect the inductance of an inductor in an unconnected LC circuit, the circuits are said to be coupled. The coefficient of coupling k defines how closely the two circuits are coupled and is given by the equation where M is the mutual inductance of the circuits and Lp and Ls are the inductances of the primary and secondary circuits, respectively. If the flux lines of the primary inductor thread every line of the secondary one, then the coefficient of coupling is 1 and In practice, however, there is often leakage, so most systems are not perfectly coupled. Chemistry Spin-spin coupling Spin-spin coupling occurs when the magnetic field of one atom affects the magnetic field of another nearby atom. This is very common in NMR imaging. If the atoms are not coupled, then there will be two individual peaks, known as a doublet, representing the individual atoms. If coupling is present, then there will be a triplet, one larger peak with two smaller ones to either side. This occurs due to the spins of the individual atoms oscillating in tandem. Astrophysics Objects in space which are coupled to each other are under the mutual influence of each other's gravity. For instance, the Earth is coupled to both the Sun and the Moon, as it is under the gravitational influence of both. Common in space are binary systems, two objects gravitationally coupled to each other. Examples of this are binary stars which circle each other. Multiple objects may also be coupled to each other simultaneously, such as with globular clusters and galaxy groups. When smaller particles, such as dust, which are coupled together over time accumulate into much larger objects, accretion is occurring. This is the major process by which stars and planets form. Plasma The coupling constant of a plasma is given by the ratio of its average Coulomb-interaction energy to its average kinetic energy—or how strongly the electric force of each atom holds the plasma together. Plasmas can therefore be categorized into weakly- and strongly-coupled plasmas depending upon the value of this ratio. Many of the typical classical plasmas, such as the plasma in the solar corona, are weakly coupled, while the plasma in a white dwarf star is an example of a strongly coupled plasma. 
Quantum mechanics Two coupled quantum systems can be modeled by a Hamiltonian of the form which is the addition of the two Hamiltonians in isolation with an added interaction factor. In most simple systems, and can be solved exactly while can be solved through perturbation theory. If the two systems have similar total energy, then the system may undergo Rabi oscillation. Angular momentum coupling When angular momenta from two separate sources interact with each other, they are said to be coupled. For example, two electrons orbiting around the same nucleus may have coupled angular momenta. Due to the conservation of angular momentum and the nature of the angular momentum operator, the total angular momentum is always the sum of the individual angular momenta of the electrons, or Spin-Orbit interaction (also known as spin-orbit coupling) is a special case of angular momentum coupling. Specifically, it is the interaction between the intrinsic spin of a particle, S, and its orbital angular momentum, L. As they are both forms of angular momentum, they must be conserved. Even if energy is transferred between the two, the total angular momentum, J, of the system must be constant, . Particle physics and quantum field theory Particles which interact with each other are said to be coupled. This interaction is caused by one of the fundamental forces, whose strengths are usually given by a dimensionless coupling constant. In quantum electrodynamics, this value is known as the fine-structure constant α, approximately equal to 1/137. For quantum chromodynamics, the constant changes with respect to the distance between the particles. This phenomenon is known as asymptotic freedom. Forces which have a coupling constant greater than 1 are said to be "strongly coupled" while those with constants less than 1 are said to be "weakly coupled." References Force Particle physics
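The coupled-pendulum example above can be made concrete. In the usual small-angle treatment (a standard textbook form, not quoted from the article) the equations of motion are θ1'' = −(g/L)θ1 − (k/m)(θ1 − θ2) and θ2'' = −(g/L)θ2 − (k/m)(θ2 − θ1), and for the coupled LC circuits the coefficient of coupling is conventionally k = M/√(Lp·Ls). The Python sketch below uses the exact small-angle solution to show energy beating back and forth between two spring-coupled pendulums; all parameter values are illustrative assumptions.

```python
import math

# Illustrative parameters (not from the article): two identical pendulums of
# length L and mass m, joined by a spring of stiffness k (small-angle limit).
g, L, m, k = 9.81, 1.0, 1.0, 0.5

# Normal-mode frequencies of the linearized system:
#   in-phase mode     : both swing together, the spring never stretches
#   out-of-phase mode : the spring adds an extra restoring force
w_in  = math.sqrt(g / L)
w_out = math.sqrt(g / L + 2.0 * k / m)

theta0 = 0.1   # initial angle of pendulum 1 (rad); pendulum 2 starts at rest

def angles(t):
    """Exact small-angle solution for theta1(0)=theta0, theta2(0)=0, both at rest.
    Energy slowly transfers back and forth between the two pendulums (beats)."""
    th1 = 0.5 * theta0 * (math.cos(w_in * t) + math.cos(w_out * t))
    th2 = 0.5 * theta0 * (math.cos(w_in * t) - math.cos(w_out * t))
    return th1, th2

# Around half a beat period the oscillation envelope has moved almost
# entirely from pendulum 1 to pendulum 2.
beat_period = 2.0 * math.pi / (w_out - w_in)
for t in (0.0, 0.25 * beat_period, 0.5 * beat_period):
    th1, th2 = angles(t)
    print(f"t = {t:6.2f} s   theta1 = {th1:+.3f}   theta2 = {th2:+.3f}")
```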
Coupling (physics)
[ "Physics", "Mathematics" ]
1,078
[ "Force", "Physical quantities", "Quantity", "Mass", "Classical mechanics", "Particle physics", "Wikipedia categories named after physical quantities", "Matter" ]
950,971
https://en.wikipedia.org/wiki/Mohr%E2%80%93Coulomb%20theory
Mohr–Coulomb theory is a mathematical model (see yield surface) describing the response of brittle materials such as concrete, or rubble piles, to shear stress as well as normal stress. Most of the classical engineering materials follow this rule in at least a portion of their shear failure envelope. Generally the theory applies to materials for which the compressive strength far exceeds the tensile strength. In geotechnical engineering it is used to define shear strength of soils and rocks at different effective stresses. In structural engineering it is used to determine failure load as well as the angle of fracture of a displacement fracture in concrete and similar materials. Coulomb's friction hypothesis is used to determine the combination of shear and normal stress that will cause a fracture of the material. Mohr's circle is used to determine which principal stresses will produce this combination of shear and normal stress, and the angle of the plane in which this will occur. According to the principle of normality the stress introduced at failure will be perpendicular to the line describing the fracture condition. It can be shown that a material failing according to Coulomb's friction hypothesis will show the displacement introduced at failure forming an angle to the line of fracture equal to the angle of friction. This makes the strength of the material determinable by comparing the external mechanical work introduced by the displacement and the external load with the internal mechanical work introduced by the strain and stress at the line of failure. By conservation of energy the sum of these must be zero and this will make it possible to calculate the failure load of the construction. A common improvement of this model is to combine Coulomb's friction hypothesis with Rankine's principal stress hypothesis to describe a separation fracture. An alternative view derives the Mohr-Coulomb criterion as extension failure. History of the development The Mohr–Coulomb theory is named in honour of Charles-Augustin de Coulomb and Christian Otto Mohr. Coulomb's contribution was a 1776 essay entitled "Essai sur une application des règles des maximis et minimis à quelques problèmes de statique relatifs à l'architecture" . Mohr developed a generalised form of the theory around the end of the 19th century. As the generalised form affected the interpretation of the criterion, but not the substance of it, some texts continue to refer to the criterion as simply the 'Coulomb criterion'. Mohr–Coulomb failure criterion The Mohr–Coulomb failure criterion represents the linear envelope that is obtained from a plot of the shear strength of a material versus the applied normal stress. This relation is expressed as where is the shear strength, is the normal stress, is the intercept of the failure envelope with the axis, and is the slope of the failure envelope. The quantity is often called the cohesion and the angle is called the angle of internal friction. Compression is assumed to be positive in the following discussion. If compression is assumed to be negative then should be replaced with . If , the Mohr–Coulomb criterion reduces to the Tresca criterion. On the other hand, if the Mohr–Coulomb model is equivalent to the Rankine model. Higher values of are not allowed. From Mohr's circle we have where and is the maximum principal stress and is the minimum principal stress. 
Therefore, the Mohr–Coulomb criterion may also be expressed as This form of the Mohr–Coulomb criterion is applicable to failure on a plane that is parallel to the direction. Mohr–Coulomb failure criterion in three dimensions The Mohr–Coulomb criterion in three dimensions is often expressed as The Mohr–Coulomb failure surface is a cone with a hexagonal cross section in deviatoric stress space. The expressions for and can be generalized to three dimensions by developing expressions for the normal stress and the resolved shear stress on a plane of arbitrary orientation with respect to the coordinate axes (basis vectors). If the unit normal to the plane of interest is where are three orthonormal unit basis vectors, and if the principal stresses are aligned with the basis vectors , then the expressions for are The Mohr–Coulomb failure criterion can then be evaluated using the usual expression for the six planes of maximum shear stress. {| class="toccolours collapsible collapsed" width="60%" style="text-align:left" !Derivation of normal and shear stress on a plane |- |Let the unit normal to the plane of interest be where are three orthonormal unit basis vectors. Then the traction vector on the plane is given by The magnitude of the traction vector is given by Then the magnitude of the stress normal to the plane is given by The magnitude of the resolved shear stress on the plane is given by In terms of components, we have If the principal stresses are aligned with the basis vectors , then the expressions for are |} Mohr–Coulomb failure surface in Haigh–Westergaard space The Mohr–Coulomb failure (yield) surface is often expressed in Haigh–Westergaad coordinates. For example, the function can be expressed as Alternatively, in terms of the invariants we can write where {| class="toccolours collapsible collapsed" width="80%" style="text-align:left" !Derivation of alternative forms of Mohr–Coulomb yield function |- |We can express the yield function as The Haigh–Westergaard invariants are related to the principal stresses by Plugging into the expression for the Mohr–Coulomb yield function gives us Using trigonometric identities for the sum and difference of cosines and rearrangement gives us the expression of the Mohr–Coulomb yield function in terms of . We can express the yield function in terms of by using the relations and straightforward substitution. |} Mohr–Coulomb yield and plasticity The Mohr–Coulomb yield surface is often used to model the plastic flow of geomaterials (and other cohesive-frictional materials). Many such materials show dilatational behavior under triaxial states of stress which the Mohr–Coulomb model does not include. Also, since the yield surface has corners, it may be inconvenient to use the original Mohr–Coulomb model to determine the direction of plastic flow (in the flow theory of plasticity). A common approach is to use a non-associated plastic flow potential that is smooth. An example of such a potential is the function where is a parameter, is the value of when the plastic strain is zero (also called the initial cohesion yield stress), is the angle made by the yield surface in the Rendulic plane at high values of (this angle is also called the dilation angle), and is an appropriate function that is also smooth in the deviatoric stress plane. Typical values of cohesion and angle of internal friction Cohesion (alternatively called the cohesive strength) and friction angle values for rocks and some common soils are listed in the tables below. 
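A compact numerical check of the failure criterion may be useful here. The linear envelope discussed above is conventionally written τ = c + σ·tan φ, which in terms of the extreme principal stresses (compression positive, as in the text) is equivalent to failure when (σ1 − σ3)/2 ≥ c·cos φ + ((σ1 + σ3)/2)·sin φ. The Python sketch below evaluates that inequality; the cohesion, friction angle and stress values are illustrative assumptions, not data from the article.

```python
import math

def mohr_coulomb_margin(sigma1, sigma3, c, phi_deg):
    """Return the failure function f = tau_m - (c*cos(phi) + sigma_m*sin(phi)).

    sigma1, sigma3 : max and min principal stresses (compression positive)
    c              : cohesion (same stress units)
    phi_deg        : angle of internal friction in degrees
    f < 0  -> stress state lies inside the envelope (no failure)
    f >= 0 -> the Mohr circle touches or crosses the envelope (failure)
    """
    phi = math.radians(phi_deg)
    tau_m   = 0.5 * (sigma1 - sigma3)          # radius of the Mohr circle
    sigma_m = 0.5 * (sigma1 + sigma3)          # centre of the Mohr circle
    return tau_m - (c * math.cos(phi) + sigma_m * math.sin(phi))

# Illustrative soil-like values: c = 20 kPa, phi = 30 degrees.
c, phi = 20.0, 30.0
for s1, s3 in [(150.0, 50.0), (300.0, 50.0)]:
    f = mohr_coulomb_margin(s1, s3, c, phi)
    state = "fails" if f >= 0 else "holds"
    print(f"sigma1={s1:6.1f}  sigma3={s3:5.1f}  f={f:+7.1f}  -> {state}")

# The predicted failure plane is inclined at 45 + phi/2 degrees
# to the plane on which the major principal stress acts:
print("failure plane angle:", 45.0 + phi / 2.0, "degrees")
```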
See also 3-D elasticity Hoek–Brown failure criterion Byerlee's law Lateral earth pressure von Mises stress Yield (engineering) Drucker Prager yield criterion — a smooth version of the M–C yield criterion Lode coordinates Bigoni–Piccolroaz yield criterion References https://web.archive.org/web/20061008230404/http://fbe.uwe.ac.uk/public/geocal/SoilMech/basic/soilbasi.htm http://www.civil.usyd.edu.au/courses/civl2410/earth_pressures_rankine.doc Shear strength Solid mechanics Soil mechanics Plasticity (physics) Materials science Applied mathematics Yield criteria
Mohr–Coulomb theory
[ "Physics", "Materials_science", "Mathematics", "Engineering" ]
1,630
[ "Structural engineering", "Solid mechanics", "Applied and interdisciplinary physics", "Deformation (mechanics)", "Applied mathematics", "Shear strength", "Soil mechanics", "Materials science", "Plasticity (physics)", "Mechanics", "nan", "Mechanical engineering" ]
951,727
https://en.wikipedia.org/wiki/D-dimer
D-dimer (or D dimer) is a dimer that is a fibrin degradation product (FDP), a small protein fragment present in the blood after a blood clot is degraded by fibrinolysis. It is so named because it contains two D fragments of the fibrin protein joined by a cross-link, hence forming a protein dimer. D-dimer concentration may be determined by a blood test to help diagnose thrombosis. Since its introduction in the 1990s, it has become an important test performed in people with suspected thrombotic disorders, such as venous thromboembolism. While a negative result practically rules out thrombosis, a positive result can indicate thrombosis but does not exclude other potential causes. Its main use, therefore, is to exclude thromboembolic disease where the probability is low. D-dimer levels are used as a predictive biomarker for the blood disorder disseminated intravascular coagulation and in the coagulation disorders associated with COVID-19 infection. A four-fold increase in the protein is an indicator of poor prognosis in people hospitalized with COVID-19. Principles Coagulation, the formation of a blood clot or thrombus, occurs when the proteins of the coagulation cascade are activated, either by contact with a damaged blood vessel wall and exposure to collagen in the tissue space (intrinsic pathway) or by activation of factor VII by tissue activating factors (extrinsic pathway). Both pathways lead to the generation of thrombin, an enzyme that turns the soluble blood protein fibrinogen into fibrin, which aggregates into protofibrils. Another thrombin-generated enzyme, factor XIII, then crosslinks the fibrin protofibrils at the D fragment site, leading to the formation of an insoluble gel that serves as a scaffold for blood clot formation. The circulating enzyme plasmin, the main enzyme of fibrinolysis, cleaves the fibrin gel in a number of places. The resultant fragments, "high molecular weight polymers", are digested several times more by plasmin to lead to intermediate and then to small polymers (fibrin degradation products or FDPs). The cross-link between two D fragments remains intact, however, and these are exposed on the surface when the fibrin fragments are sufficiently digested. The structure of D-dimer is either a 180 kDa or 195 kDa molecule of two D domains, or a 340 kDa molecule of two D domains and one E domain of the original fibrinogen molecule. The half-life of D-dimer in blood is approximately 6 to 8 hours. D-dimers are not normally present in human blood plasma, except when the coagulation system has been activated, for instance, because of the presence of thrombosis or disseminated intravascular coagulation. The D-dimer assay depends on the binding of a monoclonal antibody to a particular epitope on the D-dimer fragment. Several detection kits are commercially available; all of them rely on a different monoclonal antibody against D-dimer. For some of these, the area of the D-dimer to which the antibody binds is known. The binding of the antibody is then measured quantitatively by one of various laboratory methods. Indications D-dimer testing is of clinical use when there is a suspicion of deep venous thrombosis (DVTl), pulmonary embolism (PE) or disseminated intravascular coagulation (DIC). For DVT and PE, there are possible various scoring systems that are used to determine the a priori clinical probability of these diseases; the best-known is the Wells score. 
For a high score, or pretest probability, a D-dimer will make little difference and anticoagulant therapy will be initiated regardless of test results, and additional testing for DVT or pulmonary embolism may be performed. For a moderate or low score, or pretest probability: A negative D-dimer test will virtually rule out thromboembolism: the degree to which the D-dimer reduces the probability of thrombotic disease is dependent on the test properties of the specific test used in the clinical setting: most available D-dimer tests with a negative result will reduce the probability of thromboembolic disease to less than 1% if the pretest probability is less than 15-20%. Chest computed tomography (CT angiography) should not be used to evaluate pulmonary embolism for persons with negative results of a D-dimer assay. A low pretest probability is also valuable in ruling out PE. If the D-dimer reads high, then further testing (ultrasound of the leg veins or lung scintigraphy or CT scanning) is required to confirm the presence of thrombus. Anticoagulant therapy may be started at this point or withheld until further tests confirm the diagnosis, depending on the clinical situation. In some hospitals, they are measured by laboratories after a form is completed showing the probability score and only if the probability score is low or intermediate. This reduces the need for unnecessary tests in those who are high-probability. Performing the D-dimer test first can avoid a significant proportion of imaging tests and is less invasive. Since the D-dimer can exclude the need for imaging, specialty professional organizations recommend that physicians use D-dimer testing as an initial diagnostic. Interpretation Reference ranges The following are reference ranges for D-dimer: D-dimer increases with age. It has therefore been suggested to use a cutoff equal to patient’s age in years × 10 μg/L (or x 0.056 nmol/L) for patients aged over 50 years for the suspicion of venous thromboembolism (VTE), as it decreases the false positive rate without substantially increasing the false negative rate. An alternative measurement of D-dimer is in fibrinogen equivalent units (FEU). The molecular weight of the fibrinogen molecule is about twice the size of the D-dimer molecule, and therefore 1.0 mcg/mL FEU is equivalent to 0.5 mcg/mL of d-dimer. Thrombotic disease Various kits have a 93 to 95% sensitivity (true positive rate). For hospitalized patients, one study found the specificity to be about 50% (related to false positive rate) in the diagnosis of thrombotic disease. False positive readings can be due to various causes: liver disease, high rheumatoid factor, inflammation, malignancy, trauma, pregnancy, recent surgery as well as advanced age. False negative readings can occur if the sample is taken either too early after thrombus formation or if testing is delayed for several days. Additionally, the presence of anti-coagulation can render the test negative because it prevents thrombus extension. The anti-coagulation medications dabigatran and rivaroxaban decrease D-dimer levels but do not interfere with the D-dimer assay. False values may be obtained if the specimen collection tube is not sufficiently filled (false low value if underfilled and false high value if overfilled). This is due to the dilutional effect of the anticoagulant (the blood must be collected in a 9:1 blood to anticoagulant ratio). Likelihood ratios are derived from sensitivity and specificity to adjust pretest probability. 
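As a concrete illustration of the age-adjusted cutoff and the FEU conversion described above, the following minimal Python sketch may help; the function names, the 500 μg/L FEU baseline cutoff, and the example values are illustrative assumptions rather than figures taken from this article, and nothing here is clinical guidance.

def age_adjusted_cutoff_feu(age_years, baseline_ug_per_l=500.0):
    """D-dimer cutoff in ug/L FEU: for patients over 50, age x 10 ug/L FEU;
    otherwise the assay's baseline cutoff (assumed 500 ug/L FEU here)."""
    return age_years * 10.0 if age_years > 50 else baseline_ug_per_l

def feu_to_ddu(value_feu):
    """Convert fibrinogen equivalent units (FEU) to D-dimer units:
    1.0 ug/mL FEU corresponds to 0.5 ug/mL of D-dimer."""
    return 0.5 * value_feu

# Hypothetical example: a 72-year-old with a reported level of 640 ug/L FEU
print(age_adjusted_cutoff_feu(72))  # 720.0 -> the reported 640 is below the age-adjusted cutoff
print(feu_to_ddu(640))              # 320.0 ug/L expressed in D-dimer units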
When interpreting the D-dimer in patients over age 50, only a value above (patient's age) × 10 μg/L is generally considered abnormal. History D-dimer was originally identified, described and named in the 1970s (Fibrinolysis, Dr P J Gaffney) and found its diagnostic application in the 1990s. References External links D-dimer - Lab Tests Online Chemical pathology Fibrinolytic system Blood tests
D-dimer
[ "Chemistry", "Biology" ]
1,692
[ "Biochemistry", "Blood tests", "Chemical pathology" ]
951,895
https://en.wikipedia.org/wiki/List%20of%20fungicides
This is a list of fungicides. These are chemical compounds which have been registered as agricultural fungicides. The names on the list are the ISO common name for the active ingredient which is formulated into the branded product sold to end-users. The University of Hertfordshire maintains a database of the chemical and biological properties of these materials, including their brand names and the countries and dates where and when they have been introduced. The industry-sponsored Fungicide Resistance Action Committee (FRAC) advises on the use of fungicides in crop protection and classifies the available compounds according to their chemical structures and mechanism of action so as to manage the risks of pesticide resistance developing. The 2024 FRAC poster of fungicides includes the majority of chemicals listed below. (The alphabetical sections 0-9 and A-Z listing the individual fungicide entries are not reproduced in this extract.) See also References External links Pesticide use in the United Kingdom Pesticide usage statistics for the United Kingdom Prevention and treatment of mold in library collections with an emphasis on tropical climates: A RAMP study, Ch. 5.1: Fungicides Fungicides
List of fungicides
[ "Chemistry", "Biology" ]
232
[ "Fungicides", "Biocides", "Lists of chemical compounds" ]
952,858
https://en.wikipedia.org/wiki/Closed%20ecological%20system
Closed ecological systems or contained ecological systems (CES) are ecosystems that do not rely on matter exchange with any part outside the system. The term is most often used to describe small, man-made ecosystems. Such systems can potentially serve as a life-support system or space habitats. In a closed ecological system, any waste products produced by one species must be used by at least one other species. If the purpose is to maintain a life form, such as a mouse or a human, waste products such as carbon dioxide, feces and urine must eventually be converted into oxygen, food, and water. A closed ecological system must contain at least one autotrophic organism. While both chemotrophic and phototrophic organisms are plausible, almost all closed ecological systems to date are based on an autotroph such as green algae. Examples A closed ecological system for an entire planet is called an ecosphere. Man-made closed ecological systems which were created to sustain human life include Biosphere 2, MELiSSA, and the BIOS-1, BIOS-2, and BIOS-3 projects. Bottle gardens and aquarium ecospheres are partially or fully enclosed glass containers that are self-sustaining closed ecosystems that can be made or purchased. They can include tiny shrimp, algae, gravel, decorative shells, and Gorgonia. In fiction Closed ecological systems are commonly featured in fiction and particularly in science fiction. These include domed cities, space stations and habitats on foreign planets or asteroids, cylindrical habitats (e.g. O'Neill cylinders), Dyson Spheres and so on. See also References Ecological processes Systems ecology Ecosystems Artificial ecosystems
Closed ecological system
[ "Physics", "Astronomy", "Biology", "Environmental_science" ]
331
[ "Artificial ecosystems", "Physical phenomena", "Earth phenomena", "Symbiosis", "Outer space", "Systems ecology", "Astronomy stubs", "Ecological processes", "Ecosystems", "Environmental social science", "Outer space stubs" ]
952,930
https://en.wikipedia.org/wiki/SHGb02%2B14a
SHGb02+14a is an astronomical radio source and a candidate in the Search for Extra-Terrestrial Intelligence (SETI), discovered in March 2003 by SETI@home and announced in New Scientist on September 1, 2004. Observation The source was originally detected by Oliver Voelker of Logpoint in Nuremberg, Germany and Nate Collins of Farin and Associates in Wisconsin, USA using the giant Arecibo Telescope in Puerto Rico. It was observed three times (for a total of about one minute) at a frequency of about 1420 MHz, one of the frequencies in the waterhole region, which is theorized to be a good candidate for frequencies used by extraterrestrial intelligence to broadcast contact signals. There are a number of puzzling features of this candidate, which have led to a large amount of skepticism. The source is located between the constellations Pisces and Aries, a direction in which no stars are observed within 1000 light years from Earth. It is also a very weak signal. The frequency of the signal has a rapid drift, changing by between 8 and 37 hertz per second. If the cause is Doppler shift, it would indicate emission from a planet rotating nearly 40 times faster on its axis than the Earth. Each time the signal was detected, it was again at about 1420 MHz, the original frequency before any drift. There are a number of potential explanations for this signal. SETI@home has denied media reports of a likely extraterrestrial intelligence signal. It could be an artifact of random chance, cosmic noise or even a glitch in the technology. Star field The region is unusually devoid of any nearby stars. The closest star systems in the approximate region of the signal include the binary star G 73-11A and B, which are 106.1 light-years from the Sun, although the unrelated star G 73-10 is only 108.7 light-years away, less than three light-years from G 73-11A and B. All of these stars are red dwarfs much less massive than the Sun. The much nearer star, L 1159-16, which is one of the nearest 40 stars to the Sun, is near the signal's position, but its proximity is likely coincidental. See also BLC1 Wow! signal References and notes External links Signal Candidate SHGb02+14a SETI@home (classic)'s Best Gaussians SETI range calculator 2003 in science Astronomical radio sources Radio spectrum Search for extraterrestrial intelligence
SHGb02+14a
[ "Physics", "Astronomy" ]
518
[ "Astronomical radio sources", "Radio spectrum", "Spectrum (physical sciences)", "Astronomical events", "Electromagnetic spectrum", "Astronomical objects" ]
953,148
https://en.wikipedia.org/wiki/Lagrange%20reversion%20theorem
In mathematics, the Lagrange reversion theorem gives series or formal power series expansions of certain implicitly defined functions; indeed, of compositions with such functions. Let v be a function of x and y in terms of another function f such that Then for any function g, for small enough y: If g is the identity, this becomes In which case the equation can be derived using perturbation theory. In 1770, Joseph Louis Lagrange (1736–1813) published his power series solution of the implicit equation for v mentioned above. However, his solution used cumbersome series expansions of logarithms. In 1780, Pierre-Simon Laplace (1749–1827) published a simpler proof of the theorem, which was based on relations between partial derivatives with respect to the variable x and the parameter y. Charles Hermite (1822–1901) presented the most straightforward proof of the theorem by using contour integration. Lagrange's reversion theorem is used to obtain numerical solutions to Kepler's equation. Simple proof We start by writing: Writing the delta-function as an integral we have: The integral over k then gives and we have: Rearranging the sum and cancelling then gives the result: References External links Lagrange Inversion [Reversion] Theorem on MathWorld Cornish–Fisher expansion, an application of the theorem Article on equation of time contains an application to Kepler's equation. Theorems in analysis Inverse functions fr:Théorème d'inversion de Lagrange
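The displayed equations above were lost in extraction. Assuming the standard statement of the theorem, the implicit relation and the expansions presumably read as follows (a reconstruction, not the article's original typesetting):

v = x + y\, f(v) ,

g(v) = g(x) + \sum_{k=1}^{\infty} \frac{y^{k}}{k!}\, \frac{\partial^{\,k-1}}{\partial x^{\,k-1}} \left[ f(x)^{k}\, g'(x) \right] ,

and, when g is the identity,

v = x + \sum_{k=1}^{\infty} \frac{y^{k}}{k!}\, \frac{\partial^{\,k-1}}{\partial x^{\,k-1}} \left[ f(x)^{k} \right] .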
Lagrange reversion theorem
[ "Mathematics" ]
311
[ "Mathematical analysis", "Theorems in mathematical analysis", "Mathematical theorems", "Mathematical problems" ]
27,852,664
https://en.wikipedia.org/wiki/Catalyst%20support
In chemistry, a catalyst support or carrier is a material, usually a solid with a high surface area, to which a catalyst is affixed. The activity of heterogeneous catalysts is mainly promoted by atoms present at the accessible surface of the material. Consequently, great effort is made to maximize the specific surface area of a catalyst. One popular method for increasing surface area involves distributing the catalyst over the surface of the support. The support may be inert or participate in the catalytic reactions. Typical supports include various kinds of activated carbon, alumina, and silica. Applying catalysts to supports Two main methods are used to prepare supported catalysts. In the impregnation method, a suspension of the solid support is treated with a solution of a precatalyst, and the resulting material is then activated under conditions that will convert the precatalyst (often a metal salt) to a more active state, perhaps the metal itself. In such cases, the catalyst support is usually in the form of pellets. Alternatively, supported catalysts can be prepared from homogeneous solution by co-precipitation. For example, an acidic solution of aluminium salts and precatalyst are treated with base to precipitate the mixed hydroxide, which is subsequently calcined. Supports are usually thermally very stable and withstand processes required to activate precatalysts. For example, many precatalysts are activated by exposure to a stream of hydrogen at high temperatures. Similarly, catalysts become fouled after extended use, and in such cases they are sometimes re-activated by oxidation-reduction cycles, again at high temperatures. The Phillips catalyst, consisting of chromium oxide supported on silica, is activated by a stream of hot air. Spillover Supports are often viewed as inert: catalysis occurs at the catalytic "islands" and the support exists to provide high surface areas. Various experiments indicate that this model is often oversimplified. It is known for example that adsorbates, such as hydrogen and oxygen, can interact with and even migrate from island to island across the support without re-entering the gas phase. This process where adsorbates migrate to and from the support is called spillover. It is envisaged, for example, that hydrogen can "spill" onto oxidic support as hydroxy groups. Catalyst leaching A common problem in heterogeneous catalysis is leaching, a form of deactivation where active species on the surface of a solid catalyst are lost in the liquid phase. Leaching is detrimental for environmental and commercial reasons, and must be taken into consideration if a catalyst is to be used for extended periods of time. If the binding interactions between a catalyst and its support are too weak, leaching will be exacerbated, and its activity will decrease after extended use. For electrophilic catalysts, leaching may be addressed by choosing a more basic support. As this strategy may negatively affect the activity of the catalyst, a subtle balance between leaching mitigation and activity is required. Strong metal-support interaction Strong metal-support interaction is another case highlighting the oversimplification that heterogeneous catalysts are merely supported on an inert substance. The original evidence was provided by the finding that particles of platinum bind H2 with the stoichiometry PtH2 for each surface atom regardless of whether the platinum is supported or not. 
When, however, supported on titanium dioxide, Pt no longer binds with H2 with the same stoichiometry. This difference is attributed to the electronic influence of the titania on the platinum, otherwise called strong metal-support interaction. Heterogenized molecular catalysis Molecular catalysts, consisting of transition metal complexes, have been immobilized on catalyst supports. The resulting material in principle combines features of both homogeneous catalysts – well defined metal complex structures – with the advantages of heterogeneous catalysts – recoverability and ease of handling. Many modalities have been developed for attaching metal complex catalysts to a support. However, the technique has not proven commercially viable, usually because the heterogenized transition metal complexes are leached from, or deactivated by, the support. Supports for electrocatalysis Supports are used to give mechanical stability to catalyst nanoparticles or powders. Supports immobilize the particle reducing its mobility and favouring the chemical stabilization: they can be considered as solid capping agents. Supports also allow the nanoparticles to be easily recycled. One of the most promising supports is graphene for its porosity, electronic properties, thermal stability and active surface area. Examples Almost all major heterogeneous catalysts are supported as illustrated in the table hereafter. See also References
Catalyst support
[ "Chemistry" ]
979
[ "Catalysis", "Chemical kinetics" ]
29,142,685
https://en.wikipedia.org/wiki/Bubble%20column%20reactor
A bubble column reactor is a chemical reactor that belongs to the general class of multiphase reactors, which consists of three main categories: trickle bed reactor (fixed or packed bed), fluidized bed reactor, and bubble column reactor. A bubble column reactor is a very simple device consisting of a vertical vessel filled with water with a gas distributor at the inlet. Due to the ease of design and operation, which does not involve moving parts, they are widely used in the chemical, biochemical, petrochemical, and pharmaceutical industries to generate and control gas-liquid chemical reactions. Despite the simple column arrangement, the hydrodynamics of bubble columns is very complex due to the interactions between liquid and gas phases. In recent years, Computational Fluid Dynamics (CFD) has become a very popular tool to design and optimize bubble column reactors. Technology and applications In its simplest configuration, a bubble column consists of a vertically-arranged cylindrical column filled with liquid. The gas flow rate is introduced at the bottom of the column through a gas distributor. The gas is supplied in the form of bubbles to either a liquid phase or a liquid-solid suspension. In this case, the solid particle size (typically a catalyst) ranges from 5 to 100 μm. These three-phase reactors are referred to us as slurry bubble columns. The liquid flow rate may be fed co-currently or counter-currently to the rising bubbles, or it may be zero. In the latter case, the column operates in batch condition. Bubble columns offer a significant number of advantages: excellent heat and mass transfer between the phases, low operating and maintenance costs due to the absence of moving parts, solids can be handled without any erosion or plugging problems, slow reactions can be carried out due to the high liquid residence time (this is the case for gas-liquid reactions with a Hatta number Ha <0.3), reasonable control of temperature when strongly exothermic reactions take place. However, the back-mixing of the liquid phase (the result of buoyancy-driven recirculation) is a limitation for bubble columns: excessive back-mixing can limit the conversion efficiency. The reactor may be equipped with internals, baffles, or sieve plates, to overcome the back-mixing problem with an inevitable modification in the fluid dynamics. Bubble columns are extensively used in many industrial applications. They are of considerable interest in chemical processes involving reactions like oxidation, chlorination, alkylation, polymerization, and hydrogenation, as well as in the production of synthetic fuels via a gas conversion process ( Fischer-Tropsch process) in biochemical processes such as fermentation and biological wastewater treatment. Hydrodynamic concepts Due to the increasing importance of bubble column reactors in most industrial sectors, the study of their hydrodynamics acquired significant relevance in recent years. The design of bubble columns depends on the quantification of three main phenomena: (1) mixing characteristics, (2) heat and mass transfer properties, (3) chemical kinetics in case of reactants systems. As a consequence, the correct design and operation relies on the precise knowledge of the fluid dynamics phenomena on different scales: (1) molecular scale, (2) bubble scale, (3) reactor scale, and (4) industrial scale. The fluid dynamics properties in bubble columns depend on the interaction between the gas and liquid phases, which are related to the prevailing flow regime. 
The description of the hydrodynamics of bubble columns required the definition of some parameters. The superficial gas and liquid velocities are defined as the ratio between the volumetric flow rate of the gas and liquid, respectively, divided by the column cross-sectional area. Although the superficial velocity concept is based on a simple one-dimensional flow assumption, it can be used to characterize and determine the hydrodynamics in bubble columns since an increase in its value can determine a flow regime transition. Concerning global flow properties, a fundamental aspect which is helpful in describing the bubble column design process is the global gas holdup. It is defined as the ratio of the volume occupied by the gas phase and the sum of the volume occupied by the gas and liquid phases: Where: is the volume occupied by the gas phase. is the volume occupied by the liquid phase. The gas holdup provides information about the mean residence time of bubbles inside the column. Combined with bubble dimensions ( a fundamental local flow property), it determines the interfacial area for the heat and mass transfer rate between the phases. Flow regimes in bubble columns In multiphase reactors, the flow regime gives information about the behaviour of the gas phase and its interaction with the continuous liquid phase. The flow regime can vary significantly depending on several factors, including gas and liquid flow rates, geometric aspects of the column (column diameter, column height, sparger type, sparger holes diameter, and eventually, the size of the solid particles) and physical properties of the phases. In the most general case, four flow regimes can be encountered in bubble column reactors: (1) homogeneous or bubbly flow regime, (2) slug flow regime, (3) churn or heterogeneous flow regime, and (4) annular flow regime. The homogeneous flow regime takes place at very low superficial gas velocity and can be divided into mono-dispersed and poly-dispersed homogeneous flow regimes. The former is characterized by a mono-dispersed bubble size distribution, the latter by a poly-dispersed one, according to the change in sign of the lift force. Small bubbles with a positive lift coefficient move towards the column wall, and large bubbles with a negative lift coefficient move towards the column center. The heterogeneous flow regime occurs at very high gas velocity and represents a chaotic and unsteady flow pattern, with high liquid recirculation and vigorous mixing. A wide range of bubble sizes is experienced, and the average bubble size is governed by coalescence and breakup phenomena, which determine the flow properties, no longer influenced by the primary bubbles generated at the sparger. The slug and the annular flow regimes are usually observed in small-diameter bubble columns with an inner diameter of less than 0.15 m. The former is characterized by giant bubbles, named Taylor bubbles, that occupy the entire cross-sectional area of the column. The latter is characterized by a central core of gas surrounded by a thin liquid film. The annular flow regime exists only at very high gas velocity. When dealing with industrial applications, larger-diameter bubble columns are typically employed so that the slug flow regime is not usually observed due to the so-called Rayleight-Taylor instabilities. 
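The defining relations for the superficial velocities and the global gas holdup mentioned a few sentences above are missing from this extract; under the usual conventions they would be written as below (a sketch, in which the symbols Q_G and Q_L for the gas and liquid volumetric flow rates and A_c for the column cross-sectional area are introduced here for clarity):

U_G = \frac{Q_G}{A_c} , \qquad U_L = \frac{Q_L}{A_c} , \qquad \varepsilon_G = \frac{V_G}{V_G + V_L} .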
The quantification of these instabilities at the reactor-scale is obtained by comparing the dimensionless bubble diameter, , with a critical diameter, : Where is the bubble column hydraulic diameter, is the surface tension, is the acceleration due to gravity, is the liquid phase density, and is the gas phase density. For example, at ambient temperature and pressure and considering air and water as working fluids, a bubble column is classified as a large-diameter if it has a hydraulic diameter greater than 0.15 m. Due to the very high gas velocity, the annular flow regime is not usually observed in industrial bubble columns. Consequently, in a large-scale bubble column, we may have only the bubbly (or homogeneous) and the churn (or heterogenous) flow regimes. Between these flow regimes, a transition region is usually observed, in which the flow field is not as distinct and well defined as in the bubbly-homogeneous and churn-heterogeneous flow regimes. The boundaries between the flow regimes can be graphically observed in the flow regimes map. Numerical modelling Numerical modelling of bubble column reactors is a way of predicting the multiphase flow to improve the reactor design and understand the reactor fluid dynamics. The recent increase in interest in Computational Fluid Dynamics (CFD) spurred substantial research efforts in determining numerical models that can obtain reasonably accurate predictions with limited computational time, thus overcoming the limitations of traditional empirical methods. When a dispersed flow is considered, two main models have been developed to predict the complex fluid dynamics phenomena: the Eulerian-Lagrangian and the Eulerian-Eulerian Multi-Fluid models. The Eulerian-Lagrangian model couples the Eulerian description of the continuous phase with a Lagrangian scheme for tracking the individual particulates. The dynamic of the surrounding fluid (continuous phase) is solved through the governing equations, while the particulates (dispersed phase) are tracked independently through the surrounding fluid by computing their trajectory. The interactions between the phases and their impact on both the continuous and the discrete phases can be considered, but it requires a greater computational effort. Consequently, it can not be used to simulate bubble columns at the industrial scale. The Eulerian-Eulerian model considers each phase as interpenetrating continua. All the phases share a single pressure field, whereas continuity and momentum equations are solved for each phase. The coupling between the phases is achieved considering interfacial source terms. Governing equations Considering an isothermal flow without mass transfer, the Unsteady Reynolds Average Navier-Stokes equations (URANS) are: Where: is the phasic volume fraction and it represents the space occupied by the k-phase. is -phase density. is the -phase velocity. P is the pressure field shared by all the phases. is the -phase strain stress tensor. is the acceleration due to gravity. is the momentum source term. Interfacial forces To correctly solve the -phase momentum equation, a feasible set of closure relations must be considered to include all the possible interactions between the phases, expressed as a momentum transfer per unit volume at the phase interface. Interfacial momentum forces are added as a source term in the momentum equation and can be divided into drag and non-drag forces. The drag force has a dominant role and can be considered as the most important contribution in bubbly flows. 
It reflects the resistance opposing bubble motion relative to the surrounding fluid. The non drag forces are the lift, turbulent dispersion, wall lubrication and virtual mass forces: Lift force: force perpendicular to bubble motion. It results from the pressure and stress acting on the bubble surface. Experimental and numerical studies show that the lift force change sign depending on the bubble diameter. For small bubbles, the lift force acts in the direction of decreasing liquid velocity, which is, in the case of batch or co-current mode, toward the pipe wall. Conversely, when large bubbles are considered, the lift force pushes the bubbles toward the center of the column. The change in sign of the lift force occurs at a bubble diameter of approximately 5.8 mm. Turbulent dispersion: it accounts for the fluctuation in the liquid velocity that affects the dispersed phase by scattering it. The turbulent eddies redistribute the bubbles in the lateral direction from the high-concentration to the low-concentration bubble region. The turbulent dispersion force modulates the peaks of small bubbles near the wall pipes and spreads out large bubbles. Wall lubrication: force due to the surface tension. It prevents the bubbles from touching the walls, ensuring zero presence of bubbles near vertical walls (found experimentally). Virtual mass force: it arises from the relative acceleration of an immersed moving object to its surrounding fluid. Its effect is significant when the liquid phase density is much higher than the gas phase. All the interfacial forces can be added to the numerical model using suitable correlations derived from experimental studies. Dispersed phase modelling Depending on the regime under investigation, different approaches can be used to model the dispersed gas phase. The simplest is to use a fixed bubble size distribution. This approximation is suitable to simulate the homogeneous flow regime, where the interactions between the bubbles are negligible. In addition, this approach calls for the knowledge of the bubbles diameter since it is an input parameter for the simulations. However, in industrial practice, large-scale bubble columns are typically employed, equipped with gas distributors characterized by large openings, so a heterogeneous flow regime is commonly observed. Bubble coalescence and breakup phenomena are relevant and can not be neglected. In this case, the CFD model can be coupled with a Population Balance Model (PBM) to account for the changes in bubbles size. A Population Balance Model consists of a transport equation derived from the Boltzmann statistical transport equation, and it describes the particles entering or leaving a control volume via several mechanisms. The bubble number density transport equation is also known as Population Balance Equation (PBE): Where is the bubble number density function and represents the probable number density of bubbles at a given time , about a position , with bubble volume between and , and is the bubble velocity. The right and side term of the Population Balance Equation is the source/sink term due to bubbles coalescence, breakup, phase change, pressure change, mass transfer, and chemical reactions. See also Trickle bed reactor Fluidized bed reactor Multiphase flow Computational fluid dynamics References External links Chemical reactors Fluid dynamics Computational fluid dynamics
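As a rough numerical check of the large-diameter classification quoted earlier in this article (a hydraulic diameter of about 0.15 m for air and water at ambient conditions), the following Python sketch evaluates the commonly used criterion in which a dimensionless hydraulic diameter is compared against a critical value of roughly 52. The property values and the threshold constant are assumptions typical of the literature, not figures stated in this article.

import math

# Assumed ambient air-water properties (illustrative values)
sigma = 0.072    # surface tension, N/m
rho_l = 998.0    # liquid (water) density, kg/m^3
rho_g = 1.2      # gas (air) density, kg/m^3
g = 9.81         # gravitational acceleration, m/s^2

# Capillary length used to non-dimensionalize the hydraulic diameter
l_cap = math.sqrt(sigma / (g * (rho_l - rho_g)))   # about 2.7e-3 m

D_H_STAR_CRIT = 52.0   # commonly quoted critical dimensionless diameter (assumption)

def is_large_diameter(hydraulic_diameter_m):
    """Treat the column as large-diameter when D_H / l_cap exceeds the critical value."""
    return hydraulic_diameter_m / l_cap > D_H_STAR_CRIT

print(f"critical hydraulic diameter ~ {D_H_STAR_CRIT * l_cap:.3f} m")  # ~0.14 m, i.e. about 0.15 m
print(is_large_diameter(0.20))  # True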
Bubble column reactor
[ "Physics", "Chemistry", "Engineering" ]
2,673
[ "Chemical reaction engineering", "Computational fluid dynamics", "Chemical reactors", "Chemical engineering", "Chemical equipment", "Computational physics", "Piping", "Fluid dynamics" ]
29,145,096
https://en.wikipedia.org/wiki/NUPACK
The Nucleic Acid Package (NUPACK) is a growing software suite for the analysis and design of nucleic acid systems. Jobs can be run online on the NUPACK webserver or NUPACK source code can be downloaded and compiled locally for non-commercial academic use. NUPACK algorithms are formulated in terms of nucleic acid secondary structure. In most cases, pseudoknots are excluded from the structural ensemble. Secondary structure model The secondary structure of multiple interacting strands is defined by a list of base pairs. A polymer graph for a secondary structure can be constructed by ordering the strands around a circle, drawing the backbones in succession from 5’ to 3’ around the circumference with a nick between each strand, and drawing straight lines connecting paired bases. A secondary structure is pseudoknotted if every strand ordering corresponds to a polymer graph with crossing lines. A secondary structure is connected if no subset of the strands is free of the others. Algorithms are formulated in terms of ordered complexes, each corresponding to the structural ensemble of all connected polymer graphs with no crossing lines for a particular ordering of a set of strands. The free energy of an unpseudoknotted secondary structure is calculated using nearest-neighbor empirical parameters for RNA in 1M Na+ or for DNA in user-specified Na+ and Mg++ concentrations; added parameters are employed for the analysis of pseudoknots (single RNA strands only). Web server Analysis The Analysis page allows users to analyze the thermodynamic properties of a dilute solution of interacting nucleic acid strands in the absence of pseudoknots (e.g., a test tube of DNA or RNA strand species). For a dilute solution containing multiple strand species interacting to form multiple species of ordered complexes, NUPACK calculates for each ordered complex: the partition function, the minimum free energy (MFE) secondary structure, the equilibrium base-pairing probabilities, its equilibrium concentration, including rigorous treatment of distinguishability issues that arise in the multi-stranded setting. Design The Design page allows users to design sequences for one or more strands intended to adopt an unpseudoknotted target secondary structure at equilibrium. Sequence design is formulated as an optimization problem with the goal of reducing the ensemble defect below a user-specified stop condition. For a candidate sequence and a given target secondary structure, the ensemble defect is the average number of incorrectly paired over the structural ensemble of the ordered complex. For a target secondary structure with N nucleotides, the algorithm seeks to achieve an ensemble defect below N/100. Empirically, the design algorithm exhibits asymptotic optimality as N increases: for sufficiently large N, the cost of sequence design is typically only 4/3 the cost of a single evaluation of the ensemble defect. Utilities The Utilities page allows users to evaluate, display, and annotate the equilibrium properties of a complex of interacting nucleic acid strands. The page accepts as input either sequence information, structure information, or both, performing diverse functions based on the information provided, including automatic layout and rendering of secondary structures with or without ideal helical geometry. In either case, the structure layout can be edited dynamically within the web application. 
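To make the ensemble-defect objective concrete, here is a small Python sketch that evaluates it from a matrix of equilibrium base-pairing probabilities and a target secondary structure written in dot-parenthesis notation. The function name and the input conventions are illustrative assumptions; this is not NUPACK's own implementation or API.

import numpy as np

def ensemble_defect(pair_probs, target):
    """Average number of incorrectly paired nucleotides over the equilibrium ensemble.

    pair_probs: N x N symmetric array with zero diagonal; pair_probs[i, j] is the
                equilibrium probability that bases i and j are paired.
    target:     target structure such as "(((...)))" with matched parentheses.
    """
    n = len(target)
    stack, target_pairs = [], []
    for i, ch in enumerate(target):
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            target_pairs.append((stack.pop(), i))

    p_unpaired = 1.0 - pair_probs.sum(axis=1)   # probability each base is unpaired
    defect = float(n)
    for i, ch in enumerate(target):
        if ch == ".":
            defect -= p_unpaired[i]             # credit for correctly unpaired bases
    for i, j in target_pairs:
        defect -= 2.0 * pair_probs[i, j]        # both i and j correctly paired
    return defect                               # design goal: defect < n / 100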
Implementation The NUPACK web application is programmed within the Ruby on Rails framework, employing Ajax and the Dojo Toolkit to implement dynamic features and interactive graphics. Plots and graphics are generated using NumPy and matplotlib. The site is supported on current versions of the web browsers Safari, Chrome, and Firefox. The NUPACK library of analysis and design algorithms is written in the programming language C. Dynamic programs are parallelized using Message Passing Interface (MPI). Terms of use The NUPACK web server and NUPACK source code are provided for non-commercial research purposes and is with this restriction not Free and open source software. Funding NUPACK development is funded by the National Science Foundation via the Molecular Programming Project and by the Beckman Institute at the California Institute of Technology (Caltech). See also RNA RNA structure List of RNA structure prediction software Comparison of nucleic acid simulation software External links Source download page Molecular Programming Project Homepage The Beckman Institute at Caltech References Molecular modelling software DNA nanotechnology
NUPACK
[ "Chemistry", "Materials_science" ]
860
[ "Molecular modelling software", "Computational chemistry software", "DNA nanotechnology", "Molecular modelling", "Nanotechnology" ]
29,145,201
https://en.wikipedia.org/wiki/Land%C3%A9%20interval%20rule
In atomic physics, the Landé interval rule states that, due to weak angular momentum coupling (either spin-orbit or spin-spin coupling), the energy splitting between successive sub-levels are proportional to the total angular momentum quantum number (J or F) of the sub-level with the larger of their total angular momentum value (J or F). Background The rule assumes the Russell–Saunders coupling and that interactions between spin magnetic moments can be ignored. The latter is an incorrect assumption for light atoms. As a result of this, the rule is optimally followed by atoms with medium atomic numbers. The rule was first stated in 1923 by German-American physicist Alfred Landé. Derivation As an example, consider an atom with two valence electrons and their fine structures in the LS-coupling scheme. We will derive heuristically the interval rule for the LS-coupling scheme and will remark on the similarity that leads to the interval rule for the hyperfine structure. The interactions between electrons couple their orbital and spin angular momentums. Let's denote the spin and orbital angular momentum as and for each electrons. Thus, the total orbital angular momentum is and total spin momentum is . Then the coupling in the LS-scheme gives rise to a Hamiltonian: where and encode the strength of the coupling. The Hamiltonian acts as a perturbation to the state . The coupling would cause the total orbital and spin angular momentums to change directions, but the total angular momentum would remain constant. Its z-component would also remain constant, since there is no external torque acting on the system. Therefore, we shall change the state to , which is a linear combination of various . The exact linear combination, however, is unnecessary to determine the energy shift. To study this perturbation, we consider the vector model where we treat each as a vector. and precesses around the total orbital angular momentum . Consequently, the component perpendicular to averages to zero over time, and thus only the component along needs to be considered. That is, . We replace by and by the expectation value . Applying this change to all the terms in the Hamiltonian, we can rewrite it as The energy shift is then Now we can apply the substitution to write the energy as Consequently, the energy interval between adjacent sub-levels is: This is the Landé interval rule. As an example, consider a term, which has 3 sub-levels . The separation between and is , twice as the separation between and is . As for the spin-spin interaction responsible for the hyperfine structure, because the Hamiltonian of the hyperfine interaction can be written as where is the nuclear spin and is the total angular momentum, we also have an interval rule: where is the total angular momentum . The derivation is essentially the same, but with nuclear spin , angular momentum and total angular momentum . Limitations The interval rule holds when the coupling is weak. In the LS-coupling scheme, a weak coupling means the energy of spin-orbit coupling is smaller than residual electrostatic interaction: . Here the residual electrostatic interaction refers to the term including electron-electron interaction after we employ the central field approximation to the Hamiltonian of the atom. For the hyperfine structure, the interval rule for two magnetic moments can be disrupted by magnetic quadruple interaction between them, so we want . 
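The equations referenced in the derivation above were stripped from this extract. Under the usual LS-coupling treatment they are generally written as follows; this is a reconstructed sketch, with \beta_1, \beta_2 and \beta denoting the spin-orbit coupling constants and A the hyperfine constant (symbol names introduced here):

H_{SO} = \beta_1\, \mathbf{s}_1 \cdot \mathbf{l}_1 + \beta_2\, \mathbf{s}_2 \cdot \mathbf{l}_2 \;\longrightarrow\; H_{LS} \simeq \beta\, \mathbf{L} \cdot \mathbf{S} , \qquad \Delta E_J = \frac{\beta}{2} \left[ J(J+1) - L(L+1) - S(S+1) \right] ,

so that adjacent fine-structure levels are separated by E_J - E_{J-1} = \beta J. For a {}^{3}P term (J = 0, 1, 2), the separation E_2 - E_1 = 2\beta is twice the separation E_1 - E_0 = \beta. For the hyperfine structure, H_{hfs} = A\, \mathbf{I} \cdot \mathbf{J} gives analogously E_F - E_{F-1} = A F.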
For example, in helium, the spin-spin interactions and spin-other-orbit interaction have an energy comparable to that of the spin-orbit interaction. References Atomic physics
Landé interval rule
[ "Physics", "Chemistry" ]
712
[ "Quantum mechanics", "Atomic physics", " molecular", "Atomic", " and optical physics" ]
5,275,240
https://en.wikipedia.org/wiki/Photon%20gas
In physics, a photon gas is a gas-like collection of photons, which has many of the same properties of a conventional gas like hydrogen or neon – including pressure, temperature, and entropy. The most common example of a photon gas in equilibrium is the black-body radiation. Photons are part of a family of particles known as bosons, particles that follow Bose–Einstein statistics and with integer spin. A gas of bosons with only one type of particle is uniquely described by three state functions such as the temperature, volume, and the number of particles. However, for a black body, the energy distribution is established by the interaction of the photons with matter, usually the walls of the container, and the number of photons is not conserved. As a result, the chemical potential of the black-body photon gas is zero at thermodynamic equilibrium. The number of state variables needed to describe a black-body state is thus reduced from three to two (e.g. temperature and volume). Thermodynamics of a black body photon gas In a classical ideal gas with massive particles, the energy of the particles is distributed according to a Maxwell–Boltzmann distribution. This distribution is established as the particles collide with each other, exchanging energy (and momentum) in the process. In a photon gas, there will also be an equilibrium distribution, but photons do not collide with each other (except under very extreme conditions, see two-photon physics), so the equilibrium distribution must be established by other means. The most common way that an equilibrium distribution is established is by the interaction of the photons with matter. If the photons are absorbed and emitted by the walls of the system containing the photon gas, and the walls are at a particular temperature, then the equilibrium distribution for the photons will be a black-body distribution at that temperature. A very important difference between a generic Bose gas (gas of massive bosons) and a photon gas with a black-body distribution is that the number of photons in the photon gas is not conserved. A photon can be created upon thermal excitation of an atom in the wall into an upper electronic state, followed by the emission of a photon when the atom falls back to a lower energetic state. This type of photon generation is called thermal emission. The reverse process can also take place, resulting in a photon being destroyed and removed from the gas. It can be shown that, as a result of such processes there is no constraint on the number of photons in the system, and the chemical potential of the photons must be zero for black-body radiation. The thermodynamics of a black-body photon gas may be derived using quantum statistical mechanical arguments, with the radiation field being in equilibrium with the atoms in the wall. The derivation yields the spectral energy density u, which is the energy of the radiation field per unit volume per unit frequency interval, given by: . where h is the Planck constant, c  is the speed of light, ν  is the frequency, k  is the Boltzmann constant, and T  is temperature. Integrating over frequency and multiplying by the volume, V, gives the internal energy of a black-body photon gas: . The derivation also yields the (expected) number of photons N: , where is the Riemann zeta function. Note that for a particular temperature, the particle number N varies with the volume in a fixed manner, adjusting itself to have a constant density of photons. 
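The displayed formulas for u, U, and N are missing in this extract; assuming the standard black-body results, they presumably read (a reconstruction, not the article's original rendering)

u(\nu, T) = \frac{8 \pi h \nu^{3}}{c^{3}} \, \frac{1}{e^{h\nu / kT} - 1} ,

U = \frac{8 \pi^{5} k^{4}}{15\, h^{3} c^{3}} \, V\, T^{4} ,

N = 16 \pi\, \zeta(3) \left( \frac{kT}{hc} \right)^{3} V \approx 2.03 \times 10^{7} \, T^{3} V \quad \text{(photons, with } T \text{ in kelvin and } V \text{ in m}^{3}\text{)} .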
If we note that the equation of state for an ultra-relativistic quantum gas (which inherently describes photons) is given by , then we can combine the above formulas to produce an equation of state that looks much like that of an ideal gas: . The following table summarizes the thermodynamic state functions for a black-body photon gas. Notice that the pressure can be written in the form , which is independent of volume (b is a constant). Isothermal transformations As an example of a thermodynamic process involving a photon gas, consider a cylinder with a movable piston. The interior walls of the cylinder are "black" in order that the temperature of the photons can be maintained at a particular temperature. This means that the space inside the cylinder will contain a blackbody-distributed photon gas. Unlike a massive gas, this gas will exist without the photons being introduced from the outside – the walls will provide the photons for the gas. Suppose the piston is pushed all the way into the cylinder so that there is an extremely small volume. The photon gas inside the volume will press against the piston, moving it outward, and in order for the transformation to be isothermic, a counter force of almost the same value will have to be applied to the piston so that the motion of the piston is very slow. This force will be equal to the pressure times the cross sectional area () of the piston. This process can be continued at a constant temperature until the photon gas is at a volume V0. Integrating the force over the distance () traveled yields the total work done to create this photon gas at this volume , where the relationship V = Ax  has been used. Defining . The pressure is . Integrating, the work done is just . The amount of heat that must be added in order to create the gas is . where H0 is the enthalpy at the end of the transformation. It is seen that the enthalpy is the amount of energy needed to create the photon gas. Photon gases with tunable chemical potential In low-dimensional systems, for example in dye-solution filled optical microcavities with a distance between the resonator mirrors in the wavelength range where the situation becomes two-dimensional, also photon gases with tunable chemical potential can be realized. Such a photon gas in many respects behaves like a gas of material particles. One consequence of the tunable chemical potential is that at high phase space densities then Bose-Einstein condensation of photons is observed. See also Gas in a box – derivation of distribution functions for all ideal gases Bose gas Fermi gas Planck's law of black-body radiation – the distribution of photon energies as a function of frequency or wavelength Stefan–Boltzmann law – the total flux emitted by a black body Radiation pressure References Further reading Photons Thermodynamics Statistical mechanics
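Again assuming the standard results, the relations stripped from the isothermal piston example above presumably amount to the following sketch, where a = 8\pi^{5} k^{4} / (15 h^{3} c^{3}) is the radiation energy constant (so the constant b of the article's pressure form P = bT^{4} equals a/3):

P = \frac{U}{3V} = \frac{a T^{4}}{3} \quad \text{(constant at fixed } T\text{)} , \qquad W = \int_{0}^{V_0} P \, dV = P V_{0} = \frac{U_{0}}{3} , \qquad Q = U_{0} + P V_{0} = \frac{4}{3} U_{0} = H_{0} .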
Photon gas
[ "Physics", "Chemistry", "Mathematics" ]
1,303
[ "Statistical mechanics", "Thermodynamics", "Dynamical systems" ]
5,278,372
https://en.wikipedia.org/wiki/Thermodynamicist
In thermodynamics, a thermodynamicist is someone who studies thermodynamic processes and phenomena, i.e. the physics that deals with mechanical action and the relations of heat. Well-known thermodynamicists include Sadi Carnot, Rudolf Clausius, Willard Gibbs, Hermann von Helmholtz, and Max Planck. History of term Although most consider the French physicist Nicolas Sadi Carnot to be the first true thermodynamicist, the term thermodynamics itself was not coined until 1849, by Lord Kelvin in his publication An Account of Carnot's Theory of the Motive Power of Heat. The first thermodynamics textbook was written in 1859 by William Rankine, a civil and mechanical engineering professor at the University of Glasgow. See also References Thermodynamics
Thermodynamicist
[ "Physics", "Chemistry", "Mathematics" ]
181
[ "Dynamical systems", "Thermodynamics", "Thermodynamicists" ]
34,692,689
https://en.wikipedia.org/wiki/Gcov
Gcov is a source code coverage analysis and statement-by-statement profiling tool. Gcov generates exact counts of the number of times each statement in a program is executed and annotates source code to add instrumentation. Gcov comes as a standard utility with the GNU Compiler Collection (GCC) suite. The gcov utility gives information on how often a program executes segments of code. It produces a copy of the source file, annotated with execution frequencies. The gcov utility does not produce any time-based data and works only on code compiled with the GCC suite. The manual claims it is not compatible with any other profiling or test coverage mechanism, but it works with llvm-generated files too. Description gcov produces a test coverage analysis of a specially instrumented program. The options -fprofile-arcs -ftest-coverage should be used to compile the program for coverage analysis (first option to record branch statistics and second to save line execution count); -fprofile-arcs should also be used to link the program. After running such program will create several files with ".bb" ".bbg" and ".da" extensions (suffixes), which can be analysed by gcov. It takes source files as command-line arguments and produces an annotated source listing. Each line of source code is prefixed with the number of times it has been executed; lines that have not been executed are prefixed with "#####". gcov creates a logfile called sourcefile.gcov which indicates how many times each line of a source file sourcefile.c has executed. This annotated source file can be used with gprof, another profiling tool, to extract timing information about the program. Example The following program, written in C, loops over the integers 1 to 9 and tests their divisibility with the modulus (%) operator. #include <stdio.h> int main (void) { int i; for (i = 1; i < 10; i++) { if (i % 3 == 0) printf ("%d is divisible by 3\n", i); if (i % 11 == 0) printf ("%d is divisible by 11\n", i); } return 0; } To enable coverage testing the program must be compiled with the following options: $ gcc -Wall -fprofile-arcs -ftest-coverage cov.c where cov.c is the name of the program file. This creates an instrumented executable which contains additional instructions that record the number of times each line of the program is executed. The option adds instructions for counting the number of times individual lines are executed, while incorporates instrumentation code for each branch of the program. Branch instrumentation records how frequently different paths are taken through ‘if’ statements and other conditionals. The executable can then be run to analyze the code and create the coverage data. $ ./a.out The data from the run is written to several coverage data files with the extensions ‘.bb’ ‘.bbg’ and ‘.da’ respectively in the current directory. If the program execution varies based on the input parameters or data, it can be run multiple times and the results will accumulate in the coverage data files for overall analysis. 
This data can be analyzed using the gcov command and the name of a source file: $ gcov cov.c 88.89% of 9 source lines executed in file cov.c Creating cov.c.gcov The gcov command produces an annotated version of the original source file, with the file extension ‘.gcov’, containing counts of the number of times each line was executed: #include <stdio.h> int main (void) { 1 int i; 10 for (i = 1; i < 10; i++) { 9 if (i % 3 == 0) 3 printf ("%d is divisible by 3\n", i); 9 if (i % 11 == 0) ###### printf ("%d is divisible by 11\n", i); 9 } 1 return 0; 1 } The line counts can be seen in the first column of the output. Lines which were not executed are marked with hashes ‘######’. Command-line options Gcov command line utility supports following options while generating annotated files from profile data: -h (--help): Display help about using gcov (on the standard output), and exit without doing any further processing. -v (--version): Display the gcov version number (on the standard output), and exit without doing any further processing. -a (--all-blocks): Write individual execution counts for every basic block. Normally gcov outputs execution counts only for the main blocks of a line. With this option you can determine if blocks within a single line are not being executed. -b (--branch-probabilities): Write branch frequencies to the output file, and write branch summary info to the standard output. This option allows you to see how often each branch in your program was taken. Unconditional branches will not be shown, unless the -u option is given. -c (--branch-counts): Write branch frequencies as the number of branches taken, rather than the percentage of branches taken. -n (--no-output): Do not create the gcov output file. -l (--long-file-names): Create long file names for included source files. For example, if the header file x.h contains code, and was included in the file a.c, then running gcov on the file a.c will produce an output file called a.c##x.h.gcov instead of x.h.gcov. This can be useful if x.h is included in multiple source files and you want to see the individual contributions. If you use the `-p' option, both the including and included file names will be complete path names. -p (--preserve-paths): Preserve complete path information in the names of generated .gcov files. Without this option, just the filename component is used. With this option, all directories are used, with `/' characters translated to `#' characters, . directory components removed and unremoveable .. components renamed to `^'. This is useful if sourcefiles are in several different directories. -r (--relative-only): Only output information about source files with a relative pathname (after source prefix elision). Absolute paths are usually system header files and coverage of any inline functions therein is normally uninteresting. -f (--function-summaries): Output summaries for each function in addition to the file level summary. -o directory|file (--object-directory directory or --object-file file): Specify either the directory containing the gcov data files, or the object path name. The .gcno, and .gcda data files are searched for using this option. If a directory is specified, the data files are in that directory and named after the input file name, without its extension. If a file is specified here, the data files are named after that file, without its extension. -s directory (--source-prefix directory): A prefix for source file names to remove when generating the output coverage files. 
This option is useful when building in a separate directory, and the pathname to the source directory is not wanted when determining the output file names. Note that this prefix detection is applied before determining whether the source file is absolute. -u (--unconditional-branches): When branch probabilities are given, include those of unconditional branches. Unconditional branches are normally not interesting. -d (--display-progress): Display the progress on the standard output. Coverage summaries Lcov is a graphical front-end for gcov. It collects gcov data for multiple source files and creates HTML pages containing the source code annotated with coverage information. It also adds overview pages for easy navigation within the file structure. Lcov supports statement, function, and branch coverage measurement. There is also a Windows version. Gcovr provides a utility for managing the use of gcov and generating summarized code coverage results. This command is inspired by the Python coverage.py package, which provides a similar utility in Python. Gcovr produces either compact human-readable summary reports, machine readable XML reports or a graphical HTML summary. The XML reports generated by gcovr can be used by Jenkins to provide graphical code coverage summaries. Gcovr supports statement and branch coverage measurement SCov is a utility that processes the intermediate text format generated by gcov (using gcov -i) to generate reports on code coverage. These reports can be a simple text report, or HTML pages with more detailed reports. See also Tcov – code coverage tool for Solaris provided in Sun Studio suite Trucov - intended to improve on Gcov with machine readable output References Software metrics Software testing tools
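As an illustration of the Lcov and Gcovr front-ends described under Coverage summaries above, a typical invocation on a project already compiled with -fprofile-arcs -ftest-coverage and executed at least once might look like the following sketch; the option names reflect recent releases of both tools and should be checked against their manuals:
$ lcov --capture --directory . --output-file coverage.info   # collect gcov data for the whole tree
$ genhtml coverage.info --output-directory coverage-html     # generate browsable HTML pages
$ gcovr -r . --html --html-details -o coverage.html          # alternative one-step HTML summary with gcovr
Both front-ends invoke gcov internally, so no separate gcov step is required.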
Gcov
[ "Mathematics", "Engineering" ]
1,984
[ "Software engineering", "Quantity", "Metrics", "Software metrics" ]
34,693,927
https://en.wikipedia.org/wiki/Cyclohexylthiophthalimide
Cyclohexylthiophthalimide (abbreviated CTP) is an organosulfur compound that is used in production of rubber. It is a white solid, although commercial samples often appear yellow. It features the sulfenamide functional group, being a derivative of phthalimide and cyclohexanethiol. In the production of synthetic rubber, CTP impedes the onset of sulfur vulcanization. References Reagents for organic chemistry Phthalimides Sulfenamides
Cyclohexylthiophthalimide
[ "Chemistry" ]
106
[ "Reagents for organic chemistry" ]
34,694,497
https://en.wikipedia.org/wiki/Bar%20screen
A bar screen is a mechanical filter used to remove large objects, such as rags and plastics, from wastewater. It is part of the primary filtration flow and typically is the first, or preliminary, level of filtration, being installed at the influent to a wastewater treatment plant. They typically consist of a series of vertical steel bars spaced between 1 and 3 inches apart. Bar screens come in many designs. Some employ automatic cleaning mechanisms using electric motors and chains, some must be cleaned manually by means of a heavy rake. Items removed from the influent are called screenings and are collected in dumpsters and disposed of in landfills. As a bar screen collects objects, the water level will rise, and so they must be cleared regularly to prevent overflow. References Water treatment
Bar screen
[ "Chemistry", "Engineering", "Environmental_science" ]
163
[ "Water treatment", "Water pollution", "Water technology", "Environmental engineering" ]
34,694,574
https://en.wikipedia.org/wiki/Witten%20conjecture
In algebraic geometry, the Witten conjecture is a conjecture about intersection numbers of stable classes on the moduli space of curves, introduced by Edward Witten in 1991 and generalized by him in a later paper. Witten's original conjecture was proved by Maxim Kontsevich in 1992. Witten's motivation for the conjecture was that two different models of 2-dimensional quantum gravity should have the same partition function. The partition function for one of these models can be described in terms of intersection numbers on the moduli stack of algebraic curves, and the partition function for the other is the logarithm of the τ-function of the KdV hierarchy. Identifying these partition functions gives Witten's conjecture that a certain generating function formed from intersection numbers should satisfy the differential equations of the KdV hierarchy. Statement Suppose that Mg,n is the moduli stack of compact Riemann surfaces of genus g with n distinct marked points x1, ..., xn, and M̄g,n is its Deligne–Mumford compactification. There are n line bundles Li on M̄g,n, whose fiber at a point of the moduli stack is given by the cotangent space of the Riemann surface at the marked point xi. The intersection index 〈τd1, ..., τdn〉 is the intersection index of Π c1(Li)^di on M̄g,n where Σdi = dim M̄g,n = 3g − 3 + n, and 0 if no such g exists; here c1 is the first Chern class of a line bundle. Witten's generating function F(t0, t1, ...) encodes all the intersection indices as its coefficients. Witten's conjecture states that the partition function Z = exp F is a τ-function for the KdV hierarchy, in other words it satisfies a certain series of partial differential equations corresponding to a basis of the Virasoro algebra. Proof Kontsevich used a combinatorial description of the moduli spaces in terms of ribbon graphs to express the intersection indices, packaged into a generating function in auxiliary variables λ1, ..., λn attached to the marked points, as a sum over ribbon graphs. The sum on the right-hand side of this identity is over the set Gg,n of ribbon graphs X of compact Riemann surfaces of genus g with n marked points. The set of vertices of X is denoted by X0 and the set of edges e by X1. The function λ is thought of as a function from the marked points to the reals, and extended to edges of the ribbon graph by setting λ of an edge equal to the sum of λ at the two marked points corresponding to each side of the edge. By Feynman diagram techniques, this implies that F(t0, ...) is an asymptotic expansion, as Λ tends to infinity, of a certain matrix integral, where Λ and X are positive definite N by N hermitian matrices, the variables ti are given as functions of Λ, and the probability measure μ on the positive definite hermitian matrices has normalizing constant cΛ. This measure has a moment property which implies that its expansion in terms of Feynman diagrams is the expression for F in terms of ribbon graphs. From this he deduced that exp F is a τ-function for the KdV hierarchy, thus proving Witten's conjecture. Generalizations The Witten conjecture is a special case of a more general relation between integrable systems of Hamiltonian PDEs and the geometry of certain families of 2D topological field theories (axiomatized in the form of the so-called cohomological field theories by Kontsevich and Manin), which was explored and studied systematically by B. Dubrovin and Y. Zhang, A. Givental, C. Teleman and others. The Virasoro conjecture is a generalization of the Witten conjecture. References Moduli theory Algebraic geometry Conjectures that have been proved
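For reference, the generating function that appears in the statement of the conjecture is usually written in the literature along the following lines; this is one common normalization, and conventions for the variables t_i differ between sources:
F(t_0, t_1, \ldots) \;=\; \sum_{n \ge 0} \frac{1}{n!} \sum_{d_1, \ldots, d_n \ge 0} \langle \tau_{d_1} \cdots \tau_{d_n} \rangle \, t_{d_1} \cdots t_{d_n}, \qquad Z = \exp F .
The simplest of the constraints that Z is asserted to satisfy, the string equation, then reads \partial F/\partial t_0 = t_0^2/2 + \sum_{i \ge 0} t_{i+1}\, \partial F/\partial t_i in this normalization.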
Witten conjecture
[ "Mathematics" ]
766
[ "Conjectures that have been proved", "Fields of abstract algebra", "Algebraic geometry", "Mathematical problems", "Mathematical theorems" ]
34,698,401
https://en.wikipedia.org/wiki/A%20Universe%20from%20Nothing
A Universe from Nothing: Why There Is Something Rather than Nothing is a non-fiction book by the physicist Lawrence M. Krauss, initially published on January 10, 2012, by Free Press. It discusses modern cosmogony and its implications for the debate about the existence of God. The main theme of the book is the claim that "we have discovered that all signs suggest a universe that could and plausibly did arise from a deeper nothing—involving the absence of space itself and—which may one day return to nothing via processes that may not only be comprehensible but also processes that do not require any external control or direction." Publication The book ends with an afterword by Richard Dawkins in which he compares the book to On the Origin of Species — a comparison that Krauss himself called "pretentious". Christopher Hitchens had agreed to write a foreword for the book prior to his death but was too ill to complete it. To write the book, Krauss expanded material from a lecture on the cosmological implications of a flat expanding universe he gave to the Richard Dawkins Foundation at the 2009 Atheist Alliance International conference. The book appeared on The New York Times bestseller list on January 29, 2012. Reception Praise Caleb Scharf, writing in Nature, said that "it would be easy for this remarkable story to revel in self-congratulation, but Krauss steers it soberly and with grace". Ray Jayawardhana, Canada Research Chair in observational astrophysics at the University of Toronto, wrote for The Globe and Mail that Krauss "delivers a spirited, fast-paced romp through modern cosmology and its strong underpinnings in astronomical observations and particle physics theory" and that he "makes a persuasive case that the ultimate question of cosmic origin – how something, namely the universe, could arise from nothing – belongs in the realm of science rather than theology or philosophy". In New Scientist, Michael Brooks wrote, "Krauss will be preaching only to the converted. That said, we should be happy to be preached to so intelligently. The same can't be said about the Dawkins afterword, which is both superfluous and silly." Critique George Ellis, in an interview in Scientific American, said that "Krauss does not address why the laws of physics exist, why they have the form they have, or in what kind of manifestation they existed before the universe existed (which he must believe if he believes they brought the universe into existence). Who or what dreamt up symmetry principles, Lagrangians, specific symmetry groups, gauge theories, and so on? He does not begin to answer these questions." He criticized the philosophical viewpoint of the book, saying "It's very ironic when he says philosophy is bunk and then himself engages in this kind of attempt at philosophy." But, as it was highlighted in the beginning, the book was neither based on theology nor on philosophy, but on science. 
Regarding the questions on the laws of physics, they have been addressed before the book was written, and afterwards, Regarding their existence, if Krauss' thesis in the book is correct, there have proposed diverse possibilities for our universe, after the popped-up proposed by Krauss due to a quantum fluctuation, the interaction between two primordial elements such as supermassive primordial blackholes (whether tachyonic or not, as unique two elements in the nothingness) could have led to the emergence of an everlasting (indeterminate) expanding universe, and baryonic spacetime region as observable universe in a shared coordinate region-like. The question of the nothingness remains in the field of philosophy, but, indeed, the fundamental concept meaning the absence of anything or the opposite of something (or everything) paradoxically implies a rhetorical oxymoron to the subject matter. Yet, it might be argued that it can be a goal and a requirement for science as well as for the field in order to explain a theory of everything. As a pleonasm and a contronym, aside from the enantiodromia Jung's principle applied as a natural equilibrium (everything coming from nothing as a natural running-course), if that nothingness is considered part of everything, should that nothing have a volume to harbor primitive elements and primordial events, infinite volumes have been described and proposed but it also claimed that the dynamics of these infinite volumes were unknown, being estimated in the consideration of multiple or numerous infinite state-spaces. And in that work, a (sic) "medium concise is provided, starting from an example ―not exactly solvable". Even though it was not the purpose, both theories Krauss' and the aforementioned in the conformal cyclic cosmology together with Big Bang theory, might well make the existence of a nothingness unnecesary. In The New York Times, philosopher of science and physicist David Albert said the book failed to live up to its title; he said Krauss dismissed concerns about what Albert calls his misuse of the term nothing, since if matter comes from relativistic quantum fields, the question becomes where did those fields come from, which Krauss does not discuss. In that regard, one may refer to the aforementioned regarding that topic. These are questions that have already been asked regarding the bouncing cosmology. To direct them science has proposed conformal cyclic cosmology, within a universe can appear after the other by Roger Penrose, or one Big Bang after the other within the same universe, which, still less radical, is compatible with Roger Penrose's, keeping open the possibility of a primordial Big Bang nonetheless. Also, to the very beginning, a dual foamy structure for nothing and virtual quantum fluctuations happening at scales within and under Planck scale (with or without a box, since a box to constrain them would also be an oxymoron) called quantum foam has been proposed. Quantum foam (or spacetime foam, or spacetime bubble) is a theoretical quantum fluctuation of spacetime on very small scales due to quantum mechanics. The theory predicts that at this small scale, particles of matter and antimatter are constantly created and destroyed. These subatomic objects are called virtual particles. 
Since there is no definitive reason that spacetime needs to be fundamentally smooth, it would be possible that instead, in an early stage of a protospace or before the existence of a protospace, a virtual spacetime would consist of many small, ever-changing regions in which space, time, and nothingness would be not definite, but fluctuating in a foam-like manner. The idea was devised by John Wheeler in 1955. Commenting on the philosophical debate sparked by the book, the physicist Sean M. Carroll asked: "Do advances in modern physics and cosmology help us address these underlying questions, of why there is something called the universe at all, and why there are things called 'the laws of physics,' and why those laws seem to take the form of quantum mechanics, and why some particular wave function and Hamiltonian? In a word: no. I don't see how they could. Sometimes physicists pretend that they are addressing these questions, which is too bad, because they are just being lazy and not thinking carefully about the problem. You might hear, for example, claims to the effect that our laws of physics could turn out to be the only conceivable laws, or the simplest possible laws. But that seems manifestly false. Just within the framework of quantum mechanics, there are an infinite number of possible Hilbert spaces, and an infinite number of possible Hamiltonians, each of which defines a perfectly legitimate set of physical laws. And only one of them can be right, so it's absurd to claim that our laws might be the only possible ones. "Invocations of "simplicity" are likewise of no help here. The universe could be just a single point, not evolving in time. Or it could be a single oscillator, rocking back and forth in perpetuity. Those would be very simple. There might turn out to be some definition of "simplicity" under which our laws are the simplest, but there will always be others in which they are not. And in any case, we would then have the question of why the laws are supposed to be simple? "Likewise, appeals of the form "maybe all possible laws are real somewhere" fail to address the question. Why are all possible laws real? And sometimes, on the other hand, modern cosmologists talk about different laws of physics in the context of a multiverse, and suggest that we see one set of laws rather than some other set for fundamentally anthropic reasons. But again, that's just being sloppy. We're talking here about the low-energy manifestation of the underlying laws, but those underlying laws are exactly the same everywhere throughout the multiverse. "We are still left with the question of there are those deep-down laws that create a multiverse in the first place." Of course, that would imply the existence of a multiverse in the first place, something that scientists and physicists are starting to question as well as their basis, since their provability may be beyond the scope of science power and the scope of physics. Indeed, there are people in science asking those questions, as well as directing them. As aforementioned, the laws of physics might also (have been or) being changing and evolving over time, including cosmological constants. It is no surprise that within that very foamy region of the early dual quantum foam, that the interaction between virtual events, virtual subatomic particles emerging from quantum fluctuations with the very nothingness and in between them, the very laws of physics may change and acquiring oscillating foamy character. 
Nonetheless it is the interaction what has been proposed as the most fundamental. So, in that scenery, it is absurd to claim that our laws might be the only possible ones, as Sean Carroll pointed out. Not only our universe could be just a single point, not evolving in time, or a single oscillator, rocking back and forth in perpetuity, but it has been proposed that it could be both, filling the gap for entropy one (the second), and feeding the universe regarding mass and energy or virtual particles and events (the first). So, it is of no help the invocations of "simplicity", as it may well not be of any hope or help the split between two possibilities that are not necessary incompatible, or that are presented as incompatible. So, it might be that not only one of them can be right. Regarding these questions, there is a proposal tackling on the question of reality since Einstein let it in a "jail" that he might have created for the philosopher and the next century in his work. These underlying laws might well be the very interaction proposed by Krauss in this book and cycles proposed by Dirac, Penrose and other cyclic conformal cosmology proposals. Regarding the question of the multiverse, and the Everett interpretation, a mild proposal for the evolution since the beginning, tackling therefore the question of the emergence from the nothingness and the virtual spacetime as not necessarily a closed or constraining box, has been presented regarding the topic of the multiverse and this book, called quantum darwinism, in which it is the running-course of the emergence of planets (as a random fluctuation between appearance and dissappearance of planets, producing a adaption-like and longer continuity, permanence, or persistence of planets that present certain qualities and may harbor life or be key to others, where some may harbor life), in a fast run-track that led to a current-state in which, at least, one planet called Earth can harbor human life, fulfilling the anthropic principle with no need for a multiverse. Therefore, the book may also be considered as fulfilling its main objectives, although a second edition tacking into account a few extensions and improvements could be of great help for humanity, and add some aid to the field and science at the same time that may extend the aims of the first edition. Dawkins afterword polemic Dawkings' afterword of the book have been criticized: "Why should Krauss drop the afterword written by Richard Dawkins? Simple: Dawkins is better than this. Whether you agree with him or not, one has to admit he is a very fine and capable writer. And this afterword is some of the worst writing he has ever erected. And I am a believer you should never want to read someone's worst work but only their finest. And this is far from his best of quill. And so because he is so good and this is not representative of his excellent writing ability, it is better to let this afterword go.* *And get rid of this sentence: "Now, a century later we scientist can feel smug for having discovered the underlying expansion of the universe, the cosmic microwave background, dark matter, and dark energy." It is a mistake to use dark matter and dark energy as an accomplishment of how much science knows because scientists don't know what dark matter and dark energy are." 
Perhaps, a second edition of this book that has been also portrayed and recommended as a delicacy, could open a "virtual quantum window" or opportunity for a better Dawkins' afterword of the new edition of the book: "And while I said I would definitely recommend the consumption of this delicacy, I find this meal could have been made even better if only a few more ingredients had been added to it. Here are several things Krauss might think about changing if he ever writes a second edition of A Universe from Nothing". See also Problem of why there is anything at all Quantum fluctuation Equivocation A fluctuation theory of cosmology Debate regarding primordial quantum fluctuations and nothing Creation ex nihilo vs ex quantum nothingness References External links 2012 non-fiction books Popular physics books Astronomy books Physical cosmology Cosmology books Books by Lawrence M. Krauss Cosmogony Free Press (publisher) books
A Universe from Nothing
[ "Physics", "Astronomy" ]
2,946
[ "Cosmogony", "Astronomical sub-disciplines", "Astronomy books", "Theoretical physics", "Works about astronomy", "Astrophysics", "Physical cosmology" ]
38,669,910
https://en.wikipedia.org/wiki/DNase%20I%20hypersensitive%20site
In genetics, DNase I hypersensitive sites (DHSs) are regions of chromatin that are sensitive to cleavage by the DNase I enzyme. In these specific regions of the genome, chromatin has lost its condensed structure, exposing the DNA and making it accessible. This raises the availability of DNA to degradation by enzymes, such as DNase I. These accessible chromatin zones are functionally related to transcriptional activity, since this remodeled state is necessary for the binding of proteins such as transcription factors. Since the discovery of DHSs 30 years ago, they have been used as markers of regulatory DNA regions. These regions have been shown to map many types of cis-regulatory elements including promoters, enhancers, insulators, silencers and locus control regions. A high-throughput measure of these regions is available through DNase-Seq. Massive analysis The ENCODE project proposes to map all of the DHSs in the human genome with the intention of cataloging human regulatory DNA. DHSs mark transcriptionally active regions of the genome, where there will be cellular selectivity. So, they used 125 different human cell types. This way, using the massive sequencing technique, they obtained the DHSs profiles of every cellular type. Through an analysis of the data, they identified almost 2.9 million distinct DHSs. 34% were specific to each cell type, and only a small minority (3,692) were detected in all cell types. Also, it was confirmed that only 5% of DHSs were found in TSS (Transcriptional Start Site) regions. The remaining 95% represented distal DHSs, divided in a uniform way between intronic and intergenic regions. The data gives an idea of the great complexity regulating the genetic expression in the human genome and the quantity of elements that control this regulation. The high-resolution mapping of DHSs in the model plant Arabidopsis thaliana has been reported. Total 38,290 and 41,193 DHSs in leaf and flower tissues have been identified, respectively. Regulatory DNA tools The study of DHS profiles combined with other techniques allows analysis of regulatory DNA in humans: Transcription factor: Using the ChIP-Seq technique, the binding sites to DNA in certain transcription factor groups are determined, and the DHS profiles are compared. The results confirm a high correlation, which show that the coordinated union of certain factors is implicated in the remodeling and accessibility of chromatin. DNA methylation patterns: CpG methylation has been closely linked with transcriptional silencing. This methylation causes a rearrangement of the chromatin, condensing and inactivating it transcriptionally. Methylated CpG falling within DHSs impedes the association of transcription factor to DNA, inhibiting the accessibility of chromatin. Data argue that methylation patterning paralleling cell-selective chromatin accessibility results from passive deposition after the vacation of transcription factors from regulatory DNA. Promoter chromatin signature: The H3K4me3 modification is related with transcriptional activity. This modification takes place in adjacent nucleosome to the transcription start site (TSS), relaxing the chromatin structure. This histone modification is used as a marker of promoters, using it to map these elements in the human genome. Promoter/enhancer connections: distal cis-regulatory elements, such as enhancers are in charge of modulating the activity of the promoters. 
In this way, distal cis-regulatory elements are actively synchronized with their promoter in the cell lines in which expression of the gene they control is active. Using the DHS profiles, correlations between DHSs were sought in order to identify promoter/enhancer connections, making it possible to create a map of candidate enhancers controlling specific genes. The data obtained were validated with the chromosome conformation capture carbon copy (5C) technique. This technique is based on the physical association that exists between the promoter and its enhancers, determining the regions of chromatin that come into contact in the promoter/enhancer connections. It was confirmed that the majority of promoters were related to more than one enhancer, which indicates the existence of a complicated regulatory network for the immense majority of genes. Surprisingly, approximately half of the enhancers were also found to be associated with more than one promoter. This discovery shows that the human cis-regulatory system is much more complicated than initially thought. The number of distal cis-regulatory elements connected to a promoter provides a quantitative measure of the regulatory complexity of a gene. In this way, it was determined that the human genes with the most interactions with distal DHSs, and therefore the most complex regulation, corresponded to genes with functions in the immune system. This indicates that the complexity of the cellular and environmental signals processed by the immune system is directly encoded in the cis-regulatory architecture of its constituent genes. Database ENCODE Project: Regulatory Elements DB Plant DHSs: PlantDHS References Genetics Molecular biology
DNase I hypersensitive site
[ "Chemistry", "Biology" ]
1,015
[ "Biochemistry", "Genetics", "Molecular biology" ]
38,674,471
https://en.wikipedia.org/wiki/Enhancer%20RNA
Enhancer RNAs (eRNAs) represent a class of relatively long non-coding RNA molecules (50-2000 nucleotides) transcribed from the DNA sequence of enhancer regions. They were first detected in 2010 through the use of genome-wide techniques such as RNA-seq and ChIP-seq. eRNAs can be subdivided into two main classes: 1D eRNAs and 2D eRNAs, which differ primarily in terms of their size, polyadenylation state, and transcriptional directionality. The expression of a given eRNA correlates with the activity of its corresponding enhancer in target genes. Increasing evidence suggests that eRNAs actively play a role in transcriptional regulation in cis and in trans, and while their mechanisms of action remain unclear, a few models have been proposed. Discovery Enhancers as sites of extragenic transcription were initially discovered in genome-wide studies that identified enhancers as common regions of RNA polymerase II (RNA pol II) binding and non-coding RNA transcription. The level of RNA pol II–enhancer interaction and RNA transcript formation were found to be highly variable among these initial studies. Using explicit chromatin signature peaks, a significant proportion (~70%) of extragenic RNA Pol II transcription start sites were found to overlap enhancer sites in murine macrophages. Out of 12,000 neuronal enhancers in the mouse genome, almost 25% of the sites were found to bind RNA Pol II and generate transcripts. In parallel studies, 4,588 high confidence extragenic RNA Pol II binding sites were identified in murine macrophages stimulated with the inflammatory mediater lipopolysaccharide to induce transcription. These eRNAs, unlike messenger RNAs (mRNAs), lacked modification by polyadenylation, were generally short and non-coding, and were bidirectionally transcribed. Later studies revealed the transcription of another type of eRNAs, generated through unidirectional transcription, that were longer and contained a poly A tail. Furthermore, eRNA levels were correlated with mRNA levels of nearby genes, suggesting the potential regulatory and functional role of these non-coding enhancer RNA molecules. Biogenesis Summary eRNAs are transcribed from DNA sequences upstream and downstream of extragenic enhancer regions. Previously, several model enhancers have demonstrated the capability to directly recruit RNA Pol II and general transcription factors and form the pre-initiation complex (PIC) prior to the transcription start site at the promoter of genes. In certain cell types, activated enhancers have demonstrated the ability to both recruit RNA Pol II and also provide a template for active transcription of their local sequences. Depending on the directionality of transcription, enhancer regions generate two different types of non-coding transcripts, 1D-eRNAs and 2D-eRNAs. The nature of the pre-initiation complex and specific transcription factors recruited to the enhancer may control the type of eRNAs generated. After transcription, the majority of eRNAs remain in the nucleus. In general, eRNAs are very unstable and actively degraded by the nuclear exosome. Not all enhancers are transcribed, with non-transcribed enhancers greatly outnumbering the transcribed ones in the order of magnitude of dozens of thousands in every given cell type. 1D eRNAs In most cases, unidirectional transcription of enhancer regions generates long (>4kb) and polyadenylated eRNAs. Enhancers that generate polyA+ eRNAs have a lower H3K4me1/me3 ratio in their chromatin signature than 2D-eRNAs. 
PolyA+ eRNAs are distinct from long multiexonic poly transcripts (meRNAs) that are generated by transcription initiation at intragenic enhancers. These long non-coding RNAs, which accurately reflect the host gene's structure except for the alternative first exon, display poor coding potential. As a result, polyA+ 1D-eRNAs may represent a mixed group of true enhancer-templated RNAs and multiexonic RNAs. 2D eRNAs Bidirectional transcription at enhancer sites generates comparatively shorter (0.5-2kb) and non-polyadenylated eRNAs. Enhancers that generate polyA- eRNAs have a chromatin signature with a higher H3K4me1/me3 ratio than 1D-eRNAs. In general, enhancer transcription and production of bidirectional eRNAs demonstrate a strong correlation of enhancer activity on gene transcription. Frequency and timing of eRNA expression Arner et al. identified 65,423 transcribed enhancers (producing eRNA) among 33 different cell types under different conditions and different timings of stimulation. The transcription of enhancers generally preceded transcription of transcription factors which, in turn, generally preceded messenger RNA(mRNA) transcription of genes. Carullo et al. examined one particular cell type, neurons (from primary neuron cultures). They exhibited 28,492 putative enhancers generating eRNAs. These eRNAs were often transcribed from both strands of the enhancer DNA in opposite directions. Carullo et al. used these cultured neurons to examine the timing of specific enhancer eRNAs compared to the mRNAs of their target genes. The cultured neurons were activated and RNA was isolated from those neurons at 0, 3.75, 5, 7.5, 15, 30, and 60 minutes after activation. In these experimental conditions, they found that 2 of the 5 enhancers of the immediate early gene (IEG) FOS, that is FOS enhancer 1 and FOS enhancer 3, became activated and initiated transcription of their eRNAs (eRNA1 and eRNA3). FOS eRNA1 and eRNA3 were significantly up-regulated within 7.5 minutes, whereas FOS mRNA was only upregulated 15 minutes after stimulation. Similar patterns occurred at IEGs FOSb and NR4A1, indicating that for many IEGs, eRNA induction precedes mRNA induction in response to neuronal activation. While some enhancers can activate their target promoters at their target genes without transcribing eRNA, most active enhancers do transcribe eRNA during activation of their target promoters. Functions of eRNA found in the period 2013 to 2021 The functions for eRNA described below have been reported in diverse biological systems, often demonstrated with a small number of specific enhancer-target gene pairs. It is not clear to what extent the functions of eRNA described here can be generalized to most eRNAs. eRNAs in loop formation The chromosome loops shown in the figure, bringing an enhancer to the promoter of its target gene, may be directed and formed by the eRNA transcribed from the enhancer after the enhancer is activated. A transcribed enhancer RNA (eRNA) interacting with the complex of Mediator proteins (see Figure), especially Mediator subunit 12 (MED12), appears to be essential in forming the chromosome loop that brings the enhancer into close association with the promoter of the target gene of the enhancer in the case of five genes studied by Lai et al. Hou and Kraus, describe two other studies reporting similar results. Arnold et al. review another 5 instances where eRNA is active in forming the enhancer-promoter loop. 
eRNAs interact with proteins to affect transcription One well-studied eRNA is the eRNA of the enhancer that interacts with the promoter of the prostate specific antigen (PSA) gene. The PSA eRNA is strongly up-regulated by the androgen receptor. High PSA eRNA then has a domino effect. PSA eRNA binds to and activates the positive transcription elongation factor P-TEFb protein complex which can then phosphorylate RNA polymerase II (RNAP II), initiating its activity in producing mRNA. P-TEFb can also phosphorylate the negative elongation factor NELF (which pauses RNAP II within 60 nucleotides after mRNA initiation begins). Phosphorylated NELF is released from RNAP II, then allowing RNAP II to have productive mRNA progression (see Figure). Up-regulated PSA eRNA thereby increases expression of 586 androgen receptor-responsive genes. Knockdown of PSA eRNA or deleting a set of nucleotides from PSA eRNA causes decreased presence of phosphorylated (active) RNAP II at these genes causing their reduced transcription. The negative elongation factor NELF protein can also be released from its interaction with RNAP II by direct interaction with some eRNAs. Schaukowitch et al. showed that the eRNAs of two immediate early genes (IEGs) directly interacted with the NELF protein to release NELF from the RNAP II paused at the promoters of these two genes, allowing these two genes to then be expressed. In addition, eRNAs appear to interact with as many as 30 other proteins. Proposed mechanisms of function up until 2013 The notions that not all enhancers are transcribed at the same time and that eRNA transcription correlates with enhancer-specific activity support the idea that individual eRNAs carry distinct and relevant biological functions. However, there is still no consensus on the functional significance of eRNAs. Furthermore, eRNAs can easily be degraded through exosomes and nonsense-mediated decay, which limits their potential as important transcriptional regulators. To date, four main models of eRNA function have been proposed, each supported by different lines of experimental evidence. Transcriptional Noise Since multiple studies have shown that RNA Pol II can be found at a very large number of extragenic regions, it is possible that eRNAs simply represent the product of random “leaky” transcription and carry no functional significance. The non-specific activity of RNA Pol II would therefore allow extragenic transcriptional noise at sites where chromatin is already in an open and transcriptionally competent state. This would explain even tissue-specific eRNA expression as open sites are tissue-specific as well. Transcription-dependent effects RNA Pol II-mediated gene transcription induces a local opening of chromatin state through the recruitment of histone acetyltransferases and other histone modifiers that promote euchromatin formation. It was proposed that the presence of these enzymes could also induce an opening of chromatin at enhancer regions, which are usually present at distant locations but can be recruited to target genes through looping of DNA. In this model, eRNAs are therefore expressed in response to RNA Pol II transcription and therefore carry no biological function. Functional activity in cis While the two previous models implied that eRNAs were not functionally relevant, this mechanism states that eRNAs are functional molecules that exhibit cis activity. In this model, eRNAs can locally recruit regulatory proteins at their own site of synthesis. 
Supporting this hypothesis, transcripts originating from enhancers upstream of the Cyclin D1 gene are thought to serve as adaptors for the recruitment of histone acetyltransferases. It was found that depletion of these eRNAs led to Cyclin D1 transcriptional silencing. Functional activity in trans The last model involves transcriptional regulation by eRNAs at distant chromosomal locations. Through the differential recruitment of protein complexes, eRNAs can affect the transcriptional competency of specific loci. Evf-2 represents a good example of such trans regulatory eRNA as it can induce the expression of Dlx2, which in turn can increase the activity of the Dlx5 and Dlx6 enhancers. Trans-acting eRNAs might also be working in cis, and vice versa. Experimental detection The detection of eRNAs is fairly recent (2010) and has been made possible through the use of genome-wide investigation techniques such as RNA sequencing (RNA-seq) and chromatin immunoprecipitation-sequencing (ChIP-seq). RNA-seq permits the direct identification of eRNAs by matching the detected transcript to the corresponding enhancer sequence through bioinformatic analyses. ChIP-seq represents a less direct way to assess enhancer transcription but can also provide crucial information as specific chromatin marks are associated with active enhancers. Although some data remain controversial, the consensus in the literature is that the best combination of histone post-translational modifications at active enhancers is made of H2AZ, H3K27ac, and a high ratio of H3K4me1 over H3K4me3. ChIP experiments can also be conducted with antibodies that recognize RNA Pol II, which can be found at sites of active transcription. The experimental detection of eRNAs is complicated by their low endogenous stability conferred by exosome degradation and nonsense-mediated decay. A comparative study showed that assays enriching for capped and nascent RNAs (with strategies like nuclei run-on and size selection) could capture more eRNAs compared to canonical RNA-seq. These assays include Global/Precision Run-on with cap-selection (GRO/PRO-cap), capped-small RNA-seq (csRNA-seq), Native Elongating Transcript-Cap Analysis of Gene Expression (NET-CAGE), and Precision Run-On sequencing (PRO-seq). Nonetheless, the fact that eRNAs tend to be expressed from active enhancers might make their detection a useful tool to distinguish between active and inactive enhancers. Implications in development and disease Evidence that eRNAs cause downstream effects on the efficiency of enhancer activation and gene transcription suggests its functional capabilities and potential importance. The transcription factor p53 has been demonstrated to bind enhancer regions and generate eRNAs in a p53-dependent manner. In cancer, p53 plays a central role in tumor suppression as mutations of the gene are shown to appear in 50% of tumors. These p53-bound enhancer regions (p53BERs) are shown to interact with multiple local and distal gene targets involved in cell proliferation and survival. Furthermore, eRNAs generated by the activation of p53BERs are shown to be required for efficient transcription of the p53 target genes, indicating the likely important regulatory role of eRNAs in tumor suppression and cancer. Generally, mutations in eRNA have been shown to demonstrate similar phenotypic behavior in oncogenesis as compared to protein-coding RNA. 
Variations in enhancers have been implicated in human disease but a therapeutic approach to manipulate enhancer activity is currently not available. With the emergence of eRNAs as important components in enhancer activity, powerful therapeutic tools such as RNAi may provide promising routes to target disruption of gene expression. References External links Vista Enhancer Database Mouse ENCODE Project ENCODE Project at UCSC PEDB RNA Gene expression Protein biosynthesis Molecular genetics Spliceosome RNA splicing Non-coding RNA
Enhancer RNA
[ "Chemistry", "Biology" ]
3,039
[ "Protein biosynthesis", "Gene expression", "Molecular genetics", "Biosynthesis", "Cellular processes", "Molecular biology", "Biochemistry" ]
38,676,345
https://en.wikipedia.org/wiki/Frederick%20Mason%20Brewer
Frederick Mason Brewer CBE FRIC (1903 – 11 February 1963) was an English chemist. He was Head of the Inorganic Chemistry Laboratory at the University of Oxford and Mayor of Oxford during 1959–60. Frederick Brewer was born in Kensal Rise (aka Kensal Green), Middlesex, England. He was the son of Frederick Charles Brewer and Ellen Maria Owen, both school teachers. Brewer studied chemistry at Lincoln College, Oxford, from 1920, having received an open scholarship, and subsequently gained a first class degree. After his undergraduate studies, Brewer undertook research with Prof. Frederick Soddy. Between 1925–7, Brewer was a Commonwealth Fund Fellow at Cornell University in the United States. During 1927–8, he was a lecturer in physical chemistry at the University of Reading. In 1928, he became a demonstrator and lecturer at the University of Oxford Inorganic Chemistry Laboratory. He stayed in Oxford for the remainder of his life. He became attached to St Catherine's Society in the 1930s. In 1955, he was appointed Reader in Inorganic Chemistry. When St Catherine's Society became St Catherine's College in 1962, he was appointed a Fellow of the College. In 1944, Brewer was elected as a university member on Oxford City Council. In 1959, he was elected Mayor of Oxford for 1959–60. In 1961, he was appointed an Alderman of the council. Brewer lived at 6 Moreton Road in North Oxford. He was a Fellow of the Royal Institute of Chemistry and was awarded the honour of Commander of the Order of the British Empire (CBE) in 1963. However, a week after collecting his CBE at Buckingham Palace, at the age of 60, he died at the Radcliffe Infirmary in Oxford. He was married with a son and a daughter. References 1903 births 1963 deaths People from Kensal Green Alumni of Lincoln College, Oxford Cornell University fellows Academics of the University of Reading Fellows of St Catherine's College, Oxford English chemists Inorganic chemists Mayors of Oxford Commanders of the Order of the British Empire Fellows of the Royal Institute of Chemistry
Frederick Mason Brewer
[ "Chemistry" ]
412
[ "British inorganic chemists", "Inorganic chemists" ]
38,676,534
https://en.wikipedia.org/wiki/Dershowitz%E2%80%93Manna%20ordering
In mathematics, the Dershowitz–Manna ordering is a well-founded ordering on multisets named after Nachum Dershowitz and Zohar Manna. It is often used in the context of termination of programs or term rewriting systems. Suppose that (S, <) is a well-founded partial order and let ℳ(S) be the set of all finite multisets on S. For multisets M, N ∈ ℳ(S) we define the Dershowitz–Manna ordering, written M <DM N, as follows: M <DM N whenever there exist two multisets X, Y ∈ ℳ(S) with the following properties: X is non-empty, X is a sub-multiset of N, M = (N − X) + Y, and X dominates Y, that is, for all y in Y, there is some x in X such that y < x. Informally, M is obtained from N by replacing a non-empty collection of elements by finitely many (possibly zero) strictly smaller elements; for example, over the natural numbers {2, 2, 3} <DM {4}, taking X = {4} and Y = {2, 2, 3}. An equivalent definition was given by Huet and Oppen as follows: M <DM N if and only if M ≠ N, and for all y in S, if M(y) > N(y) then there is some x in S such that y < x and N(x) > M(x), where M(y) denotes the multiplicity of y in M. References Dershowitz, Nachum; Manna, Zohar (1979), "Proving termination with multiset orderings", Communications of the ACM, 22 (8): 465–476. (Also in Proceedings of the International Colloquium on Automata, Languages and Programming, Graz, Lecture Notes in Computer Science 71, Springer-Verlag, pp. 188–202 [July 1979].) Formal languages Logic in computer science Rewriting systems
Dershowitz–Manna ordering
[ "Mathematics" ]
213
[ "Formal languages", "Mathematical logic", "Logic in computer science" ]
38,676,571
https://en.wikipedia.org/wiki/Neurosporene
Neurosporene is a carotenoid pigment. It is an intermediate in the biosynthesis of lycopene and a variety of bacterial carotenoids. References Carotenoids
Neurosporene
[ "Chemistry", "Biology" ]
45
[ "Biomarkers", "Carotenoids", "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
38,677,808
https://en.wikipedia.org/wiki/Voltage%20sensitive%20phosphatase
Voltage sensitive phosphatases or voltage sensor-containing phosphatases, commonly abbreviated VSPs, are a protein family found in many species, including humans, mice, zebrafish, frogs, and sea squirt. Discovery The first voltage sensitive phosphatase was discovered as a result of a genome-wide search in the sea squirt Ciona intestinalis. The search was designed to identify proteins which contained a sequence of amino acids called a voltage sensor, because this sequence of amino acids confers voltage sensitivity to voltage-gated ion channels. Although the initial genomic analysis was primarily concerned with the evolution of voltage-gated ion channels, one of the results of the work was the discovery of the VSP protein in sea squirt, termed Ci-VSP. The homologues to Ci-VSP in mammals are called Transmembrane phosphatases with tensin homology, or TPTEs. TPTE (now also called hVSP2) and the closely related TPIP (also called TPTE2 or hVSP1) were identified before the discovery of Ci-VSP, however no voltage-dependent activity was described in the initial reports of these proteins. Subsequently, computational methods were used to suggest that these proteins may be voltage sensitive, however Ci-VSP is still widely regarded as the first-identified VSP. Species and tissue distribution VSPs are found across animals and choanoflagellates, though lost from nematodes and insects. Humans contain two members, TPTE and TPTE2, which result from a primate-specific duplication . Most reports indicate that VSPs are found primarily in reproductive tissue, especially the testis. Other VSPs discovered include: Dr-VSP (zebrafish Danio rerio, 2008, 2022), Gg-VSP (chicken Gallus gallus domesticus, 2014), Xl-VSP1, Xl-VSP2, and Xt-VSP (frogs: X. laevis and X. tropicalis, 2011), TPTE (mouse), etc. Following the discovery of Ci-VSP, the nomenclature used for naming these proteins consists of two letters corresponding to the initials of the species name, followed by the acronym VSP. For the human VSPs, it has been suggested the adoption of the names Hs-VSP1 and Hs-VSP2 when referring to TPIP and TPTE, respectively. Structure and function VSPs are made up of two protein domains: a voltage sensor domain, and a phosphatase domain coupled to a lipid-binding C2 domain. The voltage sensor The voltage sensor domain contains four transmembrane helices, named S1 through S4. The S4 transmembrane helix contains a number of positively charged arginine and lysine amino acid residues. Voltage sensitivity in VSPs is generated primarily by these charges in the S4, in much the same way that voltage-gated ion channels are gated by voltage. When positive charge builds up on one side of a membrane containing such voltage sensors, it generates an electric force pressing the S4 in the opposite direction. Changes in membrane potential therefore move the S4 back and forth through the membrane, allowing the voltage sensor to act like a switch. Activation of the voltage sensor occurs at depolarized potentials, i.e.: when the membrane collects more positive charge on the inner leaflet. Conversely, deactivation of the voltage sensor takes place at hyperpolarized potentials, when the membrane collects more negative charge on the inner leaflet. Activation of the voltage sensor increases the activity of the phosphatase domain, while deactivation of the voltage sensor decreases phosphatase activity. 
The phosphatase The phosphatase domain in VSPs is highly homologous to the tumor suppressor PTEN, and acts to remove phosphate groups from phospholipids in the membrane containing the VSP. Phospholipids such as inositol phosphates are signaling molecules which exert different effects depending on the pattern in which they are phosphorylated and dephosphorylated. Therefore, the action of VSPs is to indirectly regulate processes dependent on phospholipids. The main substrate that has been characterized so far for VSPs (including hVSP1 but not hVSP2/TPTE, which shows no phosphatase activity) is phosphatidylinositol (4,5)-bisphosphate, which VSPs dephosphorylate at the 5' position. However, VSP activity has been reported against other phosphoinositides as well, including phosphatidylinositol (3,4,5)-trisphosphate, which is also dephosphorylated at the 5' position. Activity against the 3-phosphate of PI(3,4)P2 has also been demonstrated; this activity seems to become apparent at high membrane potentials, at lower potentials the 5'-phosphatase activity is predominant. X-ray crystal structures X-ray crystallography has been used to generate high-resolution images of the two domains of Ci-VSP, separate from one another. By introducing small mutations in the protein, researchers have produced crystal structures of both the voltage sensing domain and the phosphatase domain from Ci-VSP in what are thought to be the "on" and "off" states. These structures have led to a model of VSP activation where movement of the voltage sensor affects a conformational change in a "gating loop," moving a glutamate residue in the gating loop away from the catalytic pocket of the phosphatase domain to increase phosphatase activity. Uses in research and in biology VSPs have been used as a tool to manipulate phospholipids in experimental settings. Because membrane potential can be controlled using patch clamp techniques, placing VSPs in a membrane allows for experimenters to rapidly dephosphorylate substrates of VSPs. VSPs' voltage sensors have also been used to engineer various types of genetically encoded voltage indicator (GEVI). These probes allow experimenters to visualize voltage in membranes using fluorescence. However, the normal role which VSPs play in the body is still not well understood. See also Gating (electrophysiology) Genetically encoded voltage indicator Ion channel Phosphatase References Human proteins Protein structure Membrane proteins Protein families Articles containing video clips
Voltage sensitive phosphatase
[ "Chemistry", "Biology" ]
1,374
[ "Protein classification", "Structural biology", "Membrane proteins", "Protein families", "Protein structure" ]
38,679,494
https://en.wikipedia.org/wiki/Masao%20Doi
(born 29 March 1948) is a Professor Emeritus at Nagoya University and The University of Tokyo. He is a Fellow of the Toyota Physical and Chemical Research Institute. In 1976, he introduced a second quantised formalism for studying reaction-diffusion systems. In 1978 and 1979 he wrote a series of papers with Sir Sam Edwards expanding on the concept of reptation introduced by Pierre-Gilles de Gennes in 1971. In 1996 he authored the textbook Introduction to Polymer Physics. In 2001 the American Physical Society awarded Doi the Polymer Physics Prize for "pioneering contributions to the theory of dynamics and rheology of entangled polymers and complex fluids." He was also awarded the Bingham Medal in 2001 by the Society of Rheology. In 2016, he was elected a member of the National Academy of Engineering for contributions to the rheology of polymeric liquids, especially the entanglement effect in concentrated solutions and melts. References Living people Academic staff of Nagoya University University of Tokyo alumni Academic staff of the University of Tokyo Japanese scientists Polymer scientists and engineers 1948 births
Masao Doi
[ "Chemistry", "Materials_science" ]
213
[ "Polymer scientists and engineers", "Physical chemists", "Polymer chemistry" ]
38,680,298
https://en.wikipedia.org/wiki/Implantable%20loop%20recorder
An implantable loop recorder (ILR), also known as an insertable cardiac monitor (ICM), is a small device that is implanted under the skin of the chest for cardiac monitoring, to record the heart's electrical activity for an extended period. Operation The ILR monitors the electrical activity of the heart, continuously storing information in its circular memory (hence the name "loop" recorder) as electrocardiograms (ECGs). Abnormal electrical activity - arrhythmia is recorded by "freezing" a segment of the memory for later review. Limited number of episodes of abnormal activity can be stored, with the most recent episode replacing the oldest. Recording can be activated in two ways. First, recording may be activated automatically according to heart rate ranges previously defined and set in the ILR by the physician. If the heart rate drops below, or rises above, the set rates, the ILR will record without the patient's knowledge. The second way the ILR records is through a hand-held "patient activator" whereby the patient triggers a recording by pushing a button when they notice symptoms such as skipped beats, lightheadedness or dizziness. The ILR records by "freezing" the electrical information preceding, during and after the symptoms in the format of an electrocardiogram. The technician or physician can download and review the recorded events during an office visit using a special programmer or via online data transmission. Uses The ILR is a useful diagnostic tool to investigate patients who experience symptoms such as syncope (fainting), seizures, recurrent palpitations, lightheadedness, or dizziness not often enough to be captured by a 24-hour or 30-day external monitor. Because of the ILR's long battery life (up to 3 years), the heart can be monitored for an extended period. New devices are able to store a total of 60 minutes of recordings on their memory. Thirty minutes is reserved for automatic storage of arrhythmias according to preprogrammed criteria. The remaining 30 minutes can be divided into a selectable number of slots for storage of manually triggered retrograde recordings as an answer to symptoms (fainting, palpitations etc.) which may be caused by an arrhythmia. Recent studies have underscored the diagnostic effectiveness and cost-efficiency of implantable loop recorders (ILRs) in specific patient populations. In patients with unexplained palpitations, especially those with infrequent symptoms, ILRs have shown a significantly higher diagnostic yield compared to conventional methods. One study reported a diagnosis in 73% of subjects using ILRs, against 21% with conventional strategies, while also proving to be more cost-effective despite higher initial costs. Another study focused on ILR use for detecting asymptomatic atrial fibrillation in individuals aged 70-90 years with additional stroke risk factors. This trial found that ILR screening led to a threefold increase in atrial fibrillation detection and anticoagulation initiation. However, it did not show a significant reduction in the risk of stroke or systemic arterial embolism, indicating that not all screen-detected atrial fibrillation might warrant anticoagulation treatment. Insertion The ILR is implanted by an electrophysiologist under local anesthesia. A small incision (about 3–4 cm or 1.5 inches) is made just lateral to the sternum below the nipple line, usually on the patient's left side. A pocket is created under the skin, and the ILR is placed in the pocket. 
Patients can go home the day of the procedure with few restrictions on activities. Bruising and discomfort in the implant area may persist for several weeks. Patients are instructed in use of the activator, and advised to schedule an appointment with their physician after using it so that information stored in the ILR can be retrieved for diagnosis. See also Holter monitor References External links Should you remove an implantable loop recorder after the diagnosis is made?, National Institutes of Health Contemporary Reviews in Cardiovascular Medicine: Ambulatory Arrhythmia Monitoring, Circulation Biomedical engineering Cardiac electrophysiology Diagnostic cardiology Implants (medicine) Medical devices Medical testing equipment
Implantable loop recorder
[ "Engineering", "Biology" ]
871
[ "Biological engineering", "Medical technology", "Medical devices", "Biomedical engineering" ]
38,680,789
https://en.wikipedia.org/wiki/Wild%20animal%20suffering
Wild animal suffering is suffering experienced by non-human animals living in the wild, outside of direct human control, due to natural processes. Its sources include disease, injury, parasitism, starvation, malnutrition, dehydration, weather conditions, natural disasters, killings by other animals, and psychological stress. Some estimates indicate that these individual animals make up the vast majority of animals in existence. An extensive amount of natural suffering has been described as an unavoidable consequence of Darwinian evolution. In addition, the pervasiveness of reproductive strategies which favor producing large numbers of offspring with little parental care, of which only a small number survive to adulthood while the rest die in painful ways, has led some to argue that suffering dominates happiness in nature. The topic has historically been discussed in the context of the philosophy of religion as an instance of the problem of evil. More recently, starting in the 19th century, a number of writers have considered the subject from a secular standpoint as a general moral issue that humans may be able to help prevent. There is considerable disagreement about taking such action: many believe that human interventions in nature should not take place, whether on grounds of practicality, because they value ecological preservation over the well-being and interests of individual animals, because they consider any obligation to reduce wild animal suffering implied by animal rights to be absurd, or because they view nature as an idyllic place where happiness is widespread. Some argue that such interventions would be an example of human hubris, or playing God, and point to examples of how human interventions, undertaken for other reasons, have unintentionally caused harm. Others, including animal rights writers, have defended variants of a laissez-faire position, which argues that humans should not harm wild animals, but also that humans should not intervene to reduce the natural harms that they experience. Advocates of such interventions argue that animal rights and welfare positions imply an obligation to help animals suffering in the wild due to natural processes. Some assert that refusing to help animals in situations where humans would consider it wrong not to help humans is an example of speciesism. Others argue that humans intervene in nature constantly—sometimes in very substantial ways—for their own interests and to further environmentalist goals. Human responsibility for enhancing existing natural harms has also been cited as a reason for intervention. Some advocates argue that humans already successfully help animals in the wild, such as by vaccinating and healing injured and sick animals, rescuing animals in fires and other natural disasters, feeding hungry animals, providing thirsty animals with water, and caring for orphaned animals. They also assert that although wide-scale interventions may not be possible with our current level of understanding, they could become feasible in the future with improved knowledge and technologies. For these reasons, they argue that it is important to raise awareness about the issue of wild animal suffering, to spread the idea that humans should help animals suffering in these situations, and to encourage research into effective measures that can be taken in the future to reduce the suffering of these individuals without causing greater harms.
Extent of suffering in nature Sources of harm Disease Animals in the wild may suffer from diseases which circulate similarly to human colds and flus, as well as epizootics, which are analogous to human epidemics; epizootics are relatively understudied in the scientific literature. Some well-studied examples include chronic wasting disease in elk and deer, white-nose syndrome in bats, devil facial tumour disease in Tasmanian devils and Newcastle disease in birds. Examples of other diseases include myxomatosis and viral haemorrhagic disease in rabbits, ringworm and cutaneous fibroma in deer, and chytridiomycosis in amphibians. Diseases, combined with parasitism, "may induce listlessness, shivering, ulcers, pneumonia, starvation, violent behavior, or other gruesome symptoms over the course of days or weeks leading up to death." Poor health may predispose wild animals to an increased risk of infection, which in turn reduces the health of the animal, further increasing the risk of infection. The terminal investment hypothesis holds that infection can lead some animals to focus their limited remaining resources on increasing the number of offspring they produce. Injury Wild animals can experience injury from a variety of causes such as predation; intraspecific competition; accidents, which can cause fractures, crushing injuries, eye injuries and wing tears; self-amputation; molting, a common source of injury for arthropods; extreme weather conditions, such as storms, extreme heat or cold weather; and natural disasters. Such injuries may be extremely painful, which can lead to behaviors which further negatively affect the well-being of the injured animal. Injuries can also make animals susceptible to diseases and other injuries, as well as parasitic infections. Additionally, the affected animal may find it harder to eat and drink and may struggle to escape from predators and attacks from other members of their species. Parasitism Many wild animals, particularly larger ones, have been found to be infected with at least one parasite. Parasites can negatively affect the well-being of their hosts by redirecting their host's resources to themselves, destroying their host's tissue and increasing their host's susceptibility to predation. As a result, parasites may reduce the movement, reproduction and survival of their hosts. Parasites can alter the phenotype of their hosts; limb malformations in amphibians caused by Ribeiroia ondatrae are one example. Some parasites have the capacity to manipulate the cognitive function of their hosts, such as worms which drive crickets to drown themselves in water so that the worm can reproduce in an aquatic environment, and caterpillars which use dopamine-containing secretions to manipulate ants into acting as bodyguards to protect the caterpillar from parasites. It is rare for parasites to directly cause the death of their host; rather, they may increase the chances of their host's death by other means: one meta-study found that mortality was 2.65 times higher in animals affected by parasites than in those that were not. Unlike parasites, parasitoids—which include species of worms, wasps, beetles and flies—kill their hosts, who are generally other invertebrates. Parasitoids specialize in attacking one particular species.
Parasitoids use different methods to infect their hosts: laying their eggs on plants which are frequently visited by their host; laying their eggs on or close to the host's eggs or young; or stinging adult hosts so that they are paralyzed and then laying their eggs near or on them. The larvae of parasitoids grow by feeding on the internal organs and bodily fluids of their hosts, which eventually leads to the death of the host when its organs have ceased to function or it has lost all of its bodily fluids. Superparasitism is a phenomenon where multiple different parasitoid species simultaneously infect the same host. Parasitoid wasps have been described as having the largest number of species of any group of animals. Starvation and malnutrition Starvation and malnutrition particularly affect young, old, sick and weak animals, and can be caused by injury, disease, poor teeth and environmental conditions, with winter being particularly associated with an increased risk. Food availability limits the size of wild animal populations, meaning that a huge number of individuals die as a result of starvation; such deaths are described as prolonged and marked by extreme distress as the animal's bodily functions shut down. Within days of hatching, fish larvae may experience hydrodynamic starvation, whereby the motion of fluids in their environment limits their ability to feed; this can lead to mortality greater than 99%. Dehydration Dehydration is associated with high mortality in wild animals. Drought can cause many animals in larger populations to die of thirst. Thirst can also expose animals to an increased risk of being preyed upon; they may remain hidden in safe spaces to avoid this. However, their need for water may eventually force them to leave these spaces and, being in a weakened state, they become easier targets for predatory animals. Animals who remain hidden may become unable to move due to dehydration and may end up dying of thirst. When dehydration is combined with starvation, the process of dehydration can be accelerated. Diseases, such as chytridiomycosis, can also increase the risk of dehydration. Weather conditions Weather has a strong influence on the health and survival of wild animals. Weather phenomena such as heavy snow, flooding and droughts can directly harm animals and indirectly harm them by increasing the risks of other forms of suffering, such as starvation and disease. Extreme weather can cause the deaths of animals by destroying their habitats and directly killing animals; hailstorms are known to kill thousands of birds. Certain weather conditions may maintain large numbers of individuals over many generations; such conditions, while conducive to survival, may still cause suffering for animals. Humidity, or the lack thereof, can be beneficial or harmful depending on an individual animal's needs. Deaths of large numbers of animals—particularly cold-blooded ones such as amphibians, reptiles, fishes and invertebrates—can take place as a result of temperature fluctuations, with young animals being particularly susceptible. Temperature may not be a problem for parts of the year, but can be a problem in especially hot summers or cold winters. Extreme heat and lack of rainfall are also associated with suffering and increased mortality, by increasing susceptibility to disease and causing the vegetation that insects and other animals rely upon to dry out; this drying out can also make animals who rely on plants as hiding places more susceptible to predation.
Amphibians who rely on moisture to breathe and stay cool may die when water sources dry up. Hot temperatures can cause fish to die by making it hard for them to breathe. Climate change, and its associated warming and drying, is making certain habitats intolerable for some animals through heat stress and the reduction of available water sources. Mass mortality is particularly linked with winter weather, due to low temperatures, lack of food, and the freezing over of the bodies of water where animals such as frogs live; a study on cottontail rabbits indicates that only 32% of them survive the winter. Fluctuating environmental conditions in the winter months are also associated with increased mortality. Natural disasters Fires, volcanic eruptions, earthquakes, tsunamis, hurricanes, storms, floods and other natural disasters are sources of extensive short- and long-term harm for wild animals, causing death, injury, illness and malnutrition, as well as poisoning by contaminating food and water sources. Such disasters can also alter the physical environment of individual animals in ways which are harmful to them; fires and large volcanic eruptions can affect the weather, and marine animals may die due to disasters affecting water temperature and salinity. Killing by other animals Predation has been described as the act of one animal capturing and killing another animal to consume part or all of their body. Jeff McMahan, a moral philosopher, asserts: "Wherever there is animal life, predators are stalking, chasing, capturing, killing, and devouring their prey. Agonized suffering and violent death are ubiquitous and continuous." Preyed-upon animals die in a variety of ways; the time it takes them to die, which can be lengthy, depends on the method that the predatory animal uses to kill them, and some animals are swallowed and digested while still alive. Other preyed-upon animals are paralysed with venom before being eaten; venom can also be used to start digesting the animal. Animals may also be killed by members of their own species, due to territorial disputes, competition for mates and social status, as well as cannibalism, infanticide, and siblicide. Psychological stress It has been argued that animals in the wild do not appear to be happier than domestic animals, based on findings that these individuals have greater levels of cortisol and elevated stress responses relative to domestic animals; additionally, unlike domestic animals, wild animals do not have their needs provided for them by human caretakers. Sources of stress for these individuals include illness and infection, predation avoidance, nutritional stress and social interactions; these stressors can begin before birth and continue as the individual develops. A framework known as the ecology of fear conceptualises the psychological impact that the fear of predatory animals can have on the individuals that they predate, such as altering their behavior and reducing their survival chances. Fear-inducing interactions with predators may cause lasting effects on behavior and PTSD-like changes in the brains of animals in the wild. These interactions can also cause a spike in stress hormones, such as cortisol, which can increase the risk of death for both the individual and their offspring. Number of affected individuals The number of individual animals in the wild is relatively unexplored in the scientific literature, and estimates vary considerably.
An analysis, undertaken in 2018, estimates (not including wild mammals) that there are 10 fish, 10 wild birds, 10 terrestrial arthropods and 10 marine arthropods, 10 annelids, 10 molluscs and 10 cnidarians, for a total of 10 wild animals. It has been estimated that there are 2.25 times more wild mammals than wild birds in Britain, but the authors of this estimate assert that this calculation would likely be a severe underestimate when applied to the number of individual wild mammals in other continents. A 2022 study estimated that there are 20 quadrillion individual ants across the world. Based on some of these estimates, it has been argued that the number of individual wild animals in existence is considerably higher, by an order of magnitude, than the number of animals humans kill for food each year, with individuals in the wild making up over 99% of all sentient beings in existence. Natural selection In his autobiography, the naturalist and biologist Charles Darwin acknowledged that the existence of extensive suffering in nature was fully compatible with the workings of natural selection, yet maintained that pleasure was the main driver of fitness-increasing behavior in organisms. Evolutionary biologist Richard Dawkins challenges Darwin's claim in his book River Out of Eden, wherein he argues that wild animal suffering must be extensive due to the interplay of the following evolutionary mechanisms: Selfish genes – genes are wholly indifferent to the well-being of individual organisms as long as DNA is passed on. The struggle for existence – competition over limited resources results in the majority of organisms dying before passing on their genes. Malthusian checks – even bountiful periods within a given ecosystem eventually lead to overpopulation and subsequent population crashes. From this, Dawkins concludes that the natural world must necessarily contain enormous amounts of animal suffering as an inevitable consequence of Darwinian evolution. To illustrate this, he writes: The total amount of suffering per year in the natural world is beyond all decent contemplation. During the minute that it takes me to compose this sentence, thousands of animals are being eaten alive, many others are running for their lives, whimpering with fear, others are slowly being devoured from within by rasping parasites, thousands of all kinds are dying of starvation, thirst, and disease. It must be so. If there ever is a time of plenty, this very fact will automatically lead to an increase in the population until the natural state of starvation and misery is restored. Reproductive strategies and population dynamics Some writers argue that the prevalence of r-selected animals in the wild—who produce large numbers of offspring, with a low amount of parental care, and of which only a small number, in a stable population, will survive to adulthood—indicates that the average life of these individuals is likely to be very short and end in a painful death. The pathologist Keith Simpson describes this as follows: In the wild, plagues of excess population are a rarity. The seas are not crowded with sunfish; the ponds are not brimming with toads; elephants do not stand shoulder to shoulder over the land. With few exceptions, animal populations are remarkably stable. On average, of each pair's offspring, only sufficient survive to replace the parents when they die. Surplus young die, and birth rates are balanced by death rates. In the case of spawners and egg layers, some young are killed before hatching. 
Almost half of all blackbird eggs are taken by jays, but even so, each pair usually manages to fledge about four young. By the end of summer, however, an average of under two are still alive. Since one parent will probably die or be killed during the winter, only one of the young will survive to breed the following summer. The high mortality rate among young animals is an inevitable consequence of high fecundity. Of the millions of fry produced by a pair of sunfish, only one or two escape starvation, disease or predators. Half the young of house mice living on the Welsh island of Skokholm are lost before weaning. Even in large mammals, the lives of the young can be pathetically brief and the killing wholesale. During the calving season, many young wildebeeste, still wet, feeble and bewildered, are seized and torn apart by jackals, hyenas and lions within minutes of emerging from their mothers' bellies. Three out of every four die violently within six months. According to this view, the lives of the majority of animals in the wild likely contain more suffering than happiness, since a painful death would outweigh any short-lived moments of happiness experienced in their short lives. Welfare economist Yew-Kwang Ng argues that evolutionary dynamics can lead to welfare outcomes that are worse than necessary for a given population equilibrium. A 2019 follow-up paper, by Ng and Zach Groff, challenges the conclusions of Ng's original paper, asserting that subsequent analysis reveals an error in Ng's model, resulting in ambiguity over whether there is more suffering than enjoyment in nature; the paper concludes that the rate of failure to reproduce can either enhance or detract from average welfare depending on additional characteristics of a species and implies that for organisms with more intense conscious experiences, the balance between enjoyment and suffering may tend more towards suffering. History of concern for wild animals Religious views Christianity 2nd-century church fathers, particularly Irenaeus of Lyons and Theophilus of Antioch, hold that animals are originally created as peaceful vegetarians, only becoming carnivorous as a result of human sin and the Fall. They believe that in the future, God restores this harmony, returning animals to their original diet. Irenaeus interprets Isaiah's prophecies as literal, expecting lions to become herbivores once more in the restored creation. Theophilus echoes this view, stating that no animals are created evil or violent, but that sin corrupts their nature. The idea that suffering is common in nature has been observed by several writers historically who engaged with the problem of evil. In his notebooks (written between 1487 and 1505), Italian polymath Leonardo da Vinci describes the suffering experienced by animals in the wild due to predation and reproduction, questioning: "Why did nature not ordain that one animal should not live by the death of another?" In his 1779 posthumous work Dialogues Concerning Natural Religion, the philosopher David Hume describes the antagonism inflicted by animals upon each other and the psychological impact experienced by the victims, observing: "The stronger prey upon the weaker, and keep them in perpetual terror and anxiety." In Natural Theology, published in 1802, Christian philosopher William Paley argues that animals in the wild die as a result of violence, decay, disease, starvation, and malnutrition, and that they exist in a state of suffering and misery; their suffering unaided by their fellow animals. 
He compares this to humans, who even when they cannot relieve the suffering of their fellow humans, at least provide them with necessities. Paley also engages with the reader of his book, asking whether, based on these observations, "you would alter the present system of pursuit and prey?" Additionally, he argues that "the subject ... of animals devouring one another, forms the chief, if not the only instance, in the works of the Deity ... in which the character of utility can be called in question." He defends predation as being a part of God's design by asserting that it was a solution to the problem of superfecundity; animals producing more offspring than can possibly survive. Paley also contends that venom is a merciful way for poisonous animals to kill the animals that they predate. The problem of evil has also been extended to include the suffering of animals in the context of evolution. In Phytologia, or the Philosophy of Agriculture and Gardening, published in 1800, Erasmus Darwin, a physician and the grandfather of Charles Darwin, aims to vindicate the goodness of God allowing the consumption of "lower" animals by "higher" ones, by asserting that "more pleasurable sensation exists in the world, as the organic matter is taken from a state of less irritability and less sensibility and converted into a greater"; he states that this process secures the greatest happiness for sentient beings. Writing in response in 1894, Edward Payson Evans, a linguist and early advocate for animal rights, argues that the theory of evolution, which regards the antagonism between animals purely as events within the context of a "universal struggle for existence", has disregarded this kind of theodicy and ended "teleological attempts to infer from the nature and operations of creation the moral character of the Creator". In an 1856 letter to Joseph Dalton Hooker, Charles Darwin remarks sarcastically on the cruelty and wastefulness of nature, describing it as something that a "Devil's chaplain" could write about. Writing in 1860, to Asa Gray, Darwin asserts that he could not reconcile an omnibenevolent and omnipotent God with the intentional existence of the Ichneumonidae, a parasitoid wasp family, the larvae of which feed internally on the living bodies of caterpillars. In his autobiography, published in 1887, Darwin described a feeling of revolt at the idea that God's benevolence is limited, stating that "for what advantage can there be in the sufferings of millions of the lower animals throughout almost endless time?" Islam Various solutions for animal suffering have been presented in Islamic philosophy and theology. One proposed solution to address this issue, suggested by Shia theologians, asserts that two conditions together can justify animal suffering: (1) the existence of some basic benefits in animal suffering, such as strengthening courage and sympathy among animals; and (2) compensating for the suffering of animals after death. According to this theodicy, the justification for animal suffering lies in the presence of certain benefits derived from such experiences. Additionally, the theory posits that the pain endured by animals will be compensated on the Day of Judgment. On that day, animals will attain heavenly blessings as a form of recompense for their previous sufferings, morally justifying overall animal suffering. This theodicy embraces the notion of an afterlife for animals. 
Eastern religions Philosopher Ole Martin Moen argues that, unlike Western and Judeo-Christian views, Eastern religions, such as Jainism, Buddhism, and Hinduism, "all hold that the natural world is filled with suffering, that suffering is bad for all who endure it, and that our ultimate aim should be to bring suffering to an end." Buddhism In Buddhist doctrine, rebirth as an animal is regarded as evil because of the different forms of suffering that animals experience due to humans and natural processes. Buddhists may also regard the suffering experienced by animals in nature as evidence for the truth of dukkha. The Buddhist scripture Aṅguttara Nikāya describes the lives of wild animals as "so cruel, so harsh, so painful". The Indian Buddhist sutra, Saddharmasmṛtyupasthānasūtra, written in the first half of the first millennium, categorises the different forms of suffering experienced by the animals living in the water, on the earth and in the sky and draws attention to certain animals who can be liberated from their suffering through consciousness. It states: "There are those [animals] who—[though] fearful of predation, of threats, beatings, cold, heat, and bad weather—if capable, disregard their trembling and, just for a moment, arouse a mind of faith towards the Buddha, the Dharma, and the Saṅgha." Around 700 AD, the Indian Buddhist monk and scholar Shantideva writes in his Bodhisattvacaryāvatāra: And may the stooping animals be freed From fear of being preyed upon, each other's food. Patrul Rinpoche, a 19th-century Tibetan Buddhist teacher, describes animals in the ocean as experiencing "immense suffering", as a result of predation, as well as parasites burrowing inside them and eating them alive. He also describes animals on land as existing in a state of continuous fear and of killing and being killed. Calvin Baker argues that Buddhist perspectives on wild animal suffering present significant ethical complexities. From a traditional Buddhist standpoint, the cycle of rebirth (samsara) makes it difficult to prioritize animal welfare, as alleviating temporary suffering does not address the deeper issue of suffering inherent in samsaric existence. However, in a naturalized Buddhist view, which rejects the concept of rebirth, Baker contends that sentience alone is insufficient for moral patienthood, as not all sentient beings experience suffering in the way that Buddhist ethics emphasizes. Furthermore, he suggests that if wild animals live predominantly negative lives, their extinction could be morally preferable, as it would represent an end to suffering rather than a tragic loss, challenging conventional conservationist approaches. Hinduism Hindu literature has been described as holding the lives and welfare of wild animals as equal with that of humans. Morris and Thornhill argue that Hinduism provides a framework for addressing wild animal suffering through spiritual advancement and non-violence. They highlight how Hindu beliefs, particularly ahimsa and the transformative power of moral growth, suggest that human sanctity can lead to peace even among hostile species, as reflected in Patanjali's Yoga Sutras. Additionally, they point to the Srimad Bhagavatam, where carnivores coexist peacefully without predation, as an idealized vision of nature free from suffering. For Morris and Thornhill, Hinduism offers a hopeful perspective that spiritual development can mitigate non-anthropogenic suffering, aligning religious values with the protection and care of wild animals. 
18th century Georges-Louis Leclerc, Comte de Buffon In Histoire Naturelle, published in 1753, the naturalist Georges-Louis Leclerc, Comte de Buffon, describes wild animals as suffering much want in the winter, focusing specifically on the plight of stags who are exhausted by the rutting season, which in turn leads to the breeding of parasites under their skin, further adding to their misery. Later in the book, he describes predation as necessary to prevent the superabundance of animals who produce vast numbers of offspring, who if not killed would have their fecundity diminished due to a lack of food and would die as a result of disease and starvation. Buffon concludes that "violent deaths seem to be equally as necessary as natural ones; they are both modes of destruction and renovation; the one serves to preserve nature in a perpetual spring, and the other maintains the order of her productions, and limits the number of each species." Johann Gottfried Herder Johann Gottfried Herder, a philosopher and theologian, in Ideen zur Philosophie der Geschichte der Menschheit, published between 1784 and 1791, argues that animals exist in a state of constant striving, needing to provide for their own subsistence and to defend their lives. He contends that nature ensured peace in creation by creating an equilibrium of animals with different instincts and belonging to different species who live opposed to each other. 19th century Lewis Gompertz In 1824, Lewis Gompertz, an early vegan and animal rights activist, published Moral Inquiries on the Situation of Man and of Brutes, in which he advocates for an egalitarian view towards animals and for aiding animals suffering in the wild. Gompertz asserts that humans and animals in their natural state both suffer similarly: "[B]oth of them being miserably subject to almost every evil, destitute of the means of palliating them; living in the continual apprehension of immediate starvation, of destruction by their enemies, which swarm around them; of receiving dreadful injuries from the revengeful and malicious feelings of their associates, uncontrolled by laws or by education, and acting as their strength alone dictates; without proper shelter from the inclemencies of the weather; without proper attention and medical or surgical aid in sickness; destitute frequently of fire, of candle-light, and (in man) also of clothing; without amusements or occupations, excepting a few, the chief of which are immediately necessary for their existence, and subject to all the ill consequences arising from the want of them." Gompertz argues that as much as animals suffer in the wild, they suffer much more at the hands of humans because, in their natural state, they also have the capacity to experience periods of much enjoyment. Additionally, he contends that if he were to encounter a situation where one animal was eating another, he would intervene to help the animal being attacked, even if "this might probably be wrong". In his 1852 book Fragments in Defence of Animals, and Essays on Morals, Soul, and Future State, Gompertz compares the suffering of animals in the wild to the suffering inflicted by humans, stating: "Much as animals suffer in a natural state, much more do they seem to suffer when under the dominion of the generality of men. What suffering in the former can be supposed to equal the constant torture of a hackney-coach horse?"
Pessimist philosophers Philosophers Giacomo Leopardi and Arthur Schopenhauer cite the suffering of animals in the wild as evidence to support their pessimistic worldviews. In his 1824 work "Dialogue between Nature and an Icelander" from Operette morali, Leopardi uses images of animal predation, which he dismisses as having inherent value, to symbolize nature's cycles of creation and destruction. Writing in his notebooks, Zibaldone di pensieri, published posthumously in 1898, Leopardi asserts that predation is a leading example of the evil design of nature. In 1851, Schopenhauer commented on the vast amount of suffering in nature, drawing attention to the asymmetry between the pleasure experienced by a carnivorous animal and the suffering of the animal that they are consuming, stating: "Whoever wants summarily to test the assertion that the pleasure in the world outweighs the pain, or at any rate that the two balance each other, should compare the feelings of an animal that is devouring another with those of that other." John Stuart Mill In the 1874 posthumous essay "Nature", utilitarian philosopher John Stuart Mill writes about suffering in nature and the imperative of struggling against it: In sober truth, nearly all the things which men are hanged or imprisoned for doing to one another, are nature's every day performances. ... The phrases which ascribe perfection to the course of nature can only be considered as the exaggerations of poetic or devotional feeling, not intended to stand the test of a sober examination. No one, either religious or irreligious, believes that the hurtful agencies of nature, considered as a whole, promote good purposes, in any other way than by inciting human rational creatures to rise up and struggle against them. ... Whatsoever, in nature, gives indication of beneficent design proves this beneficence to be armed only with limited power; and the duty of man is to cooperate with the beneficent powers, not by imitating, but by perpetually striving to amend, the course of nature—and bringing that part of it over which we can exercise control more nearly into conformity with a high standard of justice and goodness. Henry Stephens Salt In his 1892 book Animals' Rights: Considered in Relation to Social Progress, the writer and early activist for animal rights Henry Stephens Salt focuses an entire chapter on the plight of wild animals. Salt emphasizes the moral obligation to respect the autonomy and right to life of animals, drawing parallels between the treatment of wild animals and uncivilized human tribes. He argues that animals, like humans, have a right to live unmolested and uninjured unless their existence directly threatens human welfare. While humans are justified in self-defense or safeguarding against the overpopulation of certain species that could disrupt human dominance, they are not justified in unnecessarily killing or torturing harmless creatures. Salt acknowledges the difficulty in defining the ethical limits of interfering with the autonomy of others, whether animals or human tribes, but stresses that unnecessary harm is morally indefensible. 20th century J. Howard Moore In Better-World Philosophy, published in 1899, zoologist and philosopher J. Howard Moore critiques the cruelty of natural selection and the suffering animals experience in the wild, emphasizing the relentless predation and struggle for survival that defines much of nature. 
He argues that the principle of natural selection is "irrational and barbarous", leading to a world filled with unnecessary suffering, and he calls for its replacement with conscious, ethical principles driven by human intervention. Moore sees humanity as uniquely positioned to alleviate this suffering due to its intellectual and moral capacities, proposing that humans take on the role of reforming and regenerating the universe, including improving the relationships among all living beings. He envisions an ideal future where humanity strives to repair the "clumsy natures" of other animals and to reduce the misery imposed by nature's processes, advocating for a compassionate stewardship of life on Earth. Moore expands on these ideas in his 1906 book, The Universal Kinship: "Inhumanity is everywhere. The whole planet is steeped in it. Every creature faces an inhospitable universeful, and every life is a campaign. It has all come about as a result of the mindless and inhuman manner in which life has been developed on the earth ... one cannot help thinking sometimes, when, in his more daring and vivid moments, he comes to comprehend the real character and condition of the world ... and cannot help wondering whether an ordinary human being with only common-sense and insight and an average concern for the welfare of the world would not make a great improvement in terrestrial affairs if he only had the opportunity for a while." In Ethics and Education, published in 1912, Moore critiques the human conception of animals in the wild. He writes: "Many of these non-human beings are so remote from human beings in language, appearance, interests, and ways of life, as to be nothing but 'wild animals.' These 'wild things' have, of course, no rights whatever in the eyes of men." Later in the book, he describes them as independent beings who suffer and enjoy in the same way humans do and who have their "own ends and justifications of life". Alexander Skutch In his 1952 article "Which Shall We Protect? Thoughts on the Ethics of the Treatment of Free Life", Alexander Skutch, a naturalist and writer, explores five ethical principles that humans could follow when considering their relationship with animals in the wild: the principle of only considering human interests; the laissez-faire, or "hands-off", principle; the do-no-harm, or ahimsa, principle; the principle of favoring the "higher animals", which are most similar to ourselves; and the principle of "harmonious association", whereby humans and animals in the wild could live symbiotically, with each providing benefits to the other and with individuals who disrupt this harmony, such as predators, being removed. Skutch endorses a combination of the laissez-faire, ahimsa, and harmonious association approaches as the way to create the ultimate harmony between humans and animals in the wild. Perspectives from animal and environmental ethicists In 1973, moral philosopher Peter Singer responded to a question on whether humans have a moral obligation to prevent predation, arguing that intervening in this way may cause more suffering in the long term, but asserting that he would support such actions if the long-term outcome was positive. In 1979, the animal rights philosopher Stephen R. L. Clark published "The Rights of Wild Things", in which he argues that humans should protect animals in the wild from particularly large dangers, but that humans do not have an obligation to regulate all of their relationships. The following year, J.
Baird Callicott, an environmental ethicist, published "Animal Liberation: A Triangular Affair", in which he compares the ethical underpinnings of the animal liberation movement, asserting that it is based on Benthamite principles, and Aldo Leopold's land ethic, which he uses as a model for environmental ethics. Callicott concludes that intractable differences exist between the two ethical positions when it comes to the issue of wild animal suffering. In his 1987 book, Morals, Reason, and Animals, animal rights philosopher Steve F. Sapontzis argues that from an antispeciesist perspective, humans should aid animals suffering in the wild, as long as a greater harm is not inflicted overall. In 1991, the environmental philosopher Arne Næss critiques what he termed the "cult of nature" of contemporary and historical attitudes of indifference towards suffering in nature. He argues that humans should confront the reality of the wilderness, including disturbing natural processes—when feasible—to relieve suffering. In his 1993 article "Pourquoi je ne suis pas écologiste" ("Why I Am Not An Environmentalist"), published in the antispeciesist journal Cahiers antispécistes, the animal rights philosopher David Olivier argues that he is opposed to environmentalists because they consider predation to be good because of the preservation of species and "natural balance", while Olivier gives consideration to the suffering of the individual animal being predated. He also asserts that if the environmentalists were themselves at risk of being predated, they wouldn't follow the "order of nature". Olivier concludes: "I don't want to turn the universe into a planned, man-made world. Synthetic food for foxes, contraception for hares, I only half like that. I have a problem that I do not know how to solve, and I am unlikely to find a solution, even theoretical, as long as I am (almost) alone looking for one." 21st century Publications In 2009, essayist Brian Tomasik authored "The Importance of Wild-Animal Suffering", where he argues that the number of wild animals far exceeds the number of non-human animals under human control. Tomasik posits that animal advocates should promote concern for the suffering of animals in their natural habitats. He also highlights the potential for human descendants to vastly increase wild animal suffering if they chose to multiply rather than mitigate it. A revised version of the essay was published in the 2015 journal Relations. Beyond Anthropocentrism, as part of a special issue titled "Wild Animal Suffering and Intervention in Nature", which featured various contributions on the topic. A follow-up issue on the topic was released in 2022. Jeff McMahan's 2010 essay, "The Meat Eaters", published by The New York Times, advocates for reducing wild animal suffering, particularly through the reduction of predation. Following criticism, McMahan responded with another essay, "Predators: A Response". Vox has also explored this topic, publishing Jacy Reese Anthis's 2015 article "Wild animals endure illness, injury, and starvation. We should help". In his 2018 book, The End of Animal Farming, Anthis discusses broadening human moral concern to include invertebrates and wild animals. Vox continued this discussion in 2021 with Dylan Matthews's article "The wild frontier of animal welfare", which examines the perspectives of various philosophers and scientists. 
Aeon has featured essays on wild animal suffering, including Steven Nadler's 2018 piece "We have an ethical obligation to relieve individual animal suffering" and Jeff Sebo's 2020 article "All we owe to animals". In 2016, philosopher Catia Faria defended her Ph.D. thesis, Animal Ethics Goes Wild: The Problem of Wild Animal Suffering and Intervention in Nature, the first thesis of its kind to argue that humans have an obligation to help animals in the wild. She expanded on this topic in her 2022 book, Animal Ethics in the Wild: Wild Animal Suffering and Intervention in Nature. Philosopher Kyle Johannsen's 2020 book, Wild Animal Ethics: The Moral and Political Problem of Wild Animal Suffering, contends that wild animal suffering is a significant moral issue requiring human intervention. A symposium at Queen's University discussed Johannsen's book the same year. In 2022, animal rights activist and philosopher Oscar Horta included a chapter titled "In defense of animals!" in his book Making a Stand for Animals, arguing for moral consideration and assistance for animals suffering from natural processes. Johannsen has edited Positive Duties to Wild Animals, a collection of essays from various scholars aimed at advancing interventionist approaches to wild animal suffering through diverse theoretical frameworks. Organizations and institutions In response to arguments for the moral and political importance of wild animal suffering, a number of organizations have been created to research and address the issue. Two of these, Utility Farm and Wild-Animal Suffering Research, merged in 2019 to form Wild Animal Initiative. The nonprofit organization Animal Ethics also researches wild animal suffering and advocates on behalf of wild animals, among other populations. Rethink Priorities is a research organization which, among other topics, has conducted research on wild animal suffering, particularly around invertebrate sentience and invertebrate welfare. The Wildlife Disaster Network was founded in 2020 with the intention of helping wild animals suffering in natural disasters. Jamie Payton, who works for the network, challenges the view that wild animals in disaster situations manage best when left alone, stating: "Without human interference, these animals will suffer and succumb, due not only to their injuries but also to the loss of food, water and habitat. It is our obligation to provide the missing link for the wildlife that share our home." In September 2022, New York University launched a Wild Animal Welfare Program to research and host events exploring how human activity and environmental changes impact wild animal welfare. The program aims to improve understanding of how humans can improve their interactions with wild animals and includes research in the natural sciences, social sciences, and humanities. The team conducts outreach to academics, advocates, policymakers and the public. The program is led by Becca Franks and Jeff Sebo, and also includes Arthur Caplan and Danielle Spiegel-Feld. Philosophical status Predation as a moral problem Predation has been considered a moral problem by some philosophers, who argue that humans have an obligation to prevent it, while others argue that intervention is not ethically required. Others argue that humans should not do anything about it right now, because there is a chance it may unwittingly cause serious harm, but that with better information and technology, it could be possible to take meaningful action in the future.
An obligation to prevent predation has been considered untenable or absurd by some writers, who have used the position as a reductio ad absurdum to reject the concept of animal rights altogether. Others argue that attempting to reduce it would be environmentally harmful. Arguments for intervention Animal rights and welfare perspectives Some theorists have reflected on whether the harms animals suffer in the wild should be accepted or if something should be done to mitigate them. The moral basis for interventions aimed at reducing wild animal suffering can be rights or welfare based. Advocates of such interventions argue that non-intervention is inconsistent with either of these approaches. From a rights-based perspective, if animals have a moral right to life or bodily integrity, intervention may be required to prevent such rights from being violated by other animals. Animal rights philosopher Tom Regan was critical of this view; he argues that because animals are not moral agents, in the sense of being morally responsible for their actions, they cannot violate each other's rights. Based on this, he concludes that humans do not need to concern themselves with preventing suffering of this kind, unless such interactions were strongly influenced by humans. Oscar Horta argues that it is a mistaken perception that the animal rights position implies a respect for natural processes because of the assumption that animals in the wild live easy and happy lives, when in reality, they live short and painful lives full of suffering. It has also been argued that a non-speciesist legal system would mean animals in the wild would be entitled to positive rights—similar to what humans are entitled to by their species-membership—which would give them the legal right to food, shelter, healthcare and protection. From a welfare-based perspective, a requirement to intervene may arise insofar as it is possible to prevent some of the suffering experienced by wild animals without causing even more suffering. Katie McShane argues that biodiversity is not a good proxy for wild animal welfare. She states: "A region with high biodiversity is full of lots of different kinds of individuals. They might be suffering; their lives might be barely worth living. But if they are alive, they count positively toward biodiversity." Non-intervention as a form of speciesism Some writers argue that humans refusing to aid animals suffering in the wild, when they would help humans suffering in a similar situation, is an example of speciesism; the differential treatment or moral consideration of individuals based on their species membership. Jamie Mayerfeld contends that a duty to relieve suffering which is blind to species membership implies an obligation to relieve the suffering of animals due to natural processes. Stijn Bruers argues that even long-term animal rights activists sometimes hold speciesist views when it comes to this specific topic, which he calls a "moral blind spot". His view is echoed by Eze Paez, who asserts that advocates who disregard the interests of animals purely because they live in the wild are responsible for the same form of discrimination used by those who justify the exploitation of animals by humans. Oscar Horta argues that spreading awareness of speciesism will in turn increase concern for the plight of animals in the wild. 
Humans already intervene to further human interests Oscar Horta asserts that humans are constantly intervening in nature, in significant ways, to further human interests, such as the furthering of environmentalist ideals. He criticizes how interventions are considered to be realistic, safe, or acceptable when their aims favor humans, but not when they focus on helping wild animals. He argues that humans should shift the aim of these interventions to consider the interests of sentient beings, not just humans. Human responsibility for enhancing natural harms Philosopher Martha Nussbaum asserts that humans continually "affect the habitats of animals, determining opportunities for nutrition, free movement, and other aspects of flourishing", and contends that this pervasive human involvement in natural processes means that humans have a moral responsibility to help individuals affected by our actions. She also argues that humans may have the capacity to help animals suffering due to entirely natural processes, such as diseases and natural disasters, and asserts that we may have duties to provide care in these cases. Philosopher Jeff Sebo argues that animals in the wild suffer as a result of natural processes, as well as human-caused harms. He asserts that climate change is making existing harms more severe and creating new harms for these individuals. From this, he concludes that there are two reasons to help individual animals in the wild, arguing that "they are suffering and dying, and we are either partly or wholly responsible". Similarly, philosopher Steven Nadler argues that climate change means that "the scope of actions that are proscribed – and, especially, prescribed – by a consideration of animal suffering should be broadened". Nadler goes further, asserting that humans have a moral obligation to help individual animals suffering in the wild regardless of human responsibility. Gender-based perspectives Catia Faria argues that gender identity deeply influences how humans perceive and respond to wild animals, with a male-centered worldview playing a key role in fostering harm and indifference. Anthropogenic harms, or those caused by human activities, are frequently overlooked because gendered assumptions prioritize human-centered and androcentric views. These cultural norms downplay the significance of the suffering animals experience at the hands of humans, reinforcing a tendency to ignore or minimize the ethical implications of such harms. Faria also critiques the widespread indifference toward naturogenic harms—those inflicted by natural processes—arguing that this indifference stems from a gendered view that idealizes nature's autonomy. This male-biased perspective focuses on ecosystems as interconnected wholes, dismissing the suffering of individual animals in favor of an idealized natural order. Faria advocates for a rethinking of these attitudes, calling for more ethical, less gendered views that prioritize compassion for individual animals over abstract ecological concepts. Arguments against intervention Practicality of intervening in nature A common objection to intervening in nature is that it would be impractical, either because of the amount of work involved or because the complexity of ecosystems would make it difficult to know whether or not an intervention would be net beneficial on balance.
Aaron Simmons argues that humans should not intervene to save animals in nature because doing so would result in unintended consequences, such as damaging ecosystems, interfering with human projects, or resulting in more animal deaths overall. Nicolas Delon and Duncan Purves argue that the "nature of ecosystems leaves us with no reason to predict that interventions would reduce, rather than exacerbate, suffering". Peter Singer argues that intervention in nature would be justified if one could be reasonably confident that this would greatly reduce wild animal suffering and death in the long run. In practice, Singer cautions against interfering with ecosystems because he fears that doing so would cause more harm than good. Other authors dispute Singer's empirical claim about the likely consequences of intervening in the natural world and argue that some types of intervention can be expected to produce good consequences overall. Economist Tyler Cowen cites examples of animal species whose extinction is not generally regarded as having been on balance bad for the world. Cowen also observes that insofar as humans are already intervening in nature, the relevant practical question is not whether there should be intervention but what particular forms of intervention should be favored. Oscar Horta similarly writes that there are already many cases in which humans intervene in nature for other reasons, such as for human interest in nature and environmental preservation as something valuable in their own rights. Horta has also proposed that courses of action aiming at helping wild animals should be carried out and adequately monitored first in urban, suburban, industrial, or agricultural areas. Likewise, Jeff McMahan argues that since humans "are already causing massive, precipitate changes in the natural world", humans should favor those changes that would promote the survival "of herbivorous rather than carnivorous species". Philosopher Peter Vallentyne suggests that while humans should not eliminate predators in nature, they can intervene to help prey in more limited ways. In the same way that humans help humans in need when the cost is small, humans might help some wild animals at least in limited circumstances. Potential conflict between animal rights and environmentalism It has been argued that the environmentalist goal of preserving certain abstract entities, such as species and ecosystems, and a policy of non-interference in regard to natural processes is incompatible with animal rights views, which place the welfare and interests of individual animals at the center of concern. Examples include environmentalists supporting hunting for species population control, while animal rights advocates oppose it; animal rights advocates arguing for the extinction or reengineering of carnivores or r-strategist species, while deep ecologists defend their right to be and flourish as they are; and animal rights advocates defending the reduction of wildlife habitats or arguing against their expansion out of concern that most animal suffering takes place within them, while environmentalists want to safeguard and expand them. Oscar Horta argues that there are instances where environmentalists and animal rights advocates may both support approaches that would consequently reduce wild animal suffering. 
Intrinsic value of ecological processes, wilderness and wildness Some writers, such as the environmental ethicist Holmes Rolston III, argue that natural animal suffering is valuable because it serves an ecological purpose, and that only animal suffering due to non-natural processes is morally bad; thus, humans do not have a duty to intervene in cases of suffering caused by natural processes. Rolston celebrates carnivores in nature because of the significant ecological role they play. Others argue that the reason that humans have a duty to protect other humans from predation, but not wild animals, is that humans are part of the cultural world rather than the natural world, and so different rules apply to them in these situations. Some writers assert that animals who are preyed upon are fulfilling their natural function, and are thus flourishing when they are preyed upon or otherwise die, since this allows natural selection to work. Yves Bonnardel, an animal rights philosopher, criticizes this view, as well as the concept of nature, which he describes as an "ideological tool" that places humans in a superior position above other animals, who are taken to exist only to perform certain ecosystem functions, such as a rabbit being food for a wolf. Bonnardel compares this with the religious idea that slaves exist for their masters, or that woman exists for the sake of man. He argues that animals as individuals all have an interest in living. Wilderness advocates argue that wilderness is intrinsically valuable; the biologist E. O. Wilson wrote that "wilderness has virtue unto itself and needs no extraneous justification". Joshua Duclos describes the moral argument against preserving wilderness because of the suffering experienced by the wild animals who live in it as the "objection from welfare". Jack Walker argues that the "intrinsic value of wildness cannot be used to oppose large-scale interventions to reduce [wild animal suffering]". Joshua Duclos observes that wilderness is given intrinsic value from a narrow anthropocentric perspective, with a religio-spiritual dimension. Nature as idyllic The idyllic view of nature is described as the widely held view that happiness in nature is widespread. Oscar Horta argues that even though many people are aware of the harms that animals in the wild experience, such as predation, starvation and disease, and recognize that these animals may suffer as a result of these harms, they do not conclude from this that wild animals have lives bad enough to imply that nature is not a happy place. Horta also contends that a romantic conception of nature has significant implications for attitudes people have towards animals in the wild, as holders of the view may oppose interventions to reduce suffering. Bob Fischer argues that many wild animals may have net negative lives (experiencing more pain than pleasure) even in the absence of human activity. Fischer argues that if many animals have net negative lives, then what is good for the animal, as an individual, may not be good for its species, other species, the climate, or the preservation of biodiversity; for example, some animals may have to have their populations massively reduced and controlled, and some species, such as parasites or predators, eliminated. Intervention as hubris Some writers argue that interventions to reduce wild animal suffering would be an example of arrogance, hubris, or playing God, as such interventions could potentially have disastrous unforeseen consequences.
They are also sceptical of the competence of humans to make correct moral judgements, pointing to human fallibility. Additionally, they contend that the moral stance of humans and moral agency can lead to the imposition of anthropocentric or paternalistic values on others. To support these claims, they point to the history of human negative impacts on nature, including species extinctions, the loss of wilderness, and resource depletion, as well as climate change. From this, they conclude that the best way that humans can help animals in the wild is through the preservation of larger wilderness areas and by reducing the human sphere of influence on nature. Critics of this position, such as Beril Sözmen, argue that human negative impacts are not inevitable and that, until recently, interventions were not undertaken with the goal of improving the well-being of individual animals in the wild. Furthermore, she contends that such examples of anthropogenic harms are not the consequence of misguided human intervention gone awry but are in fact the result of human agriculture and industry, which do not consider, or do not care, about their impact on nature and animals in the wild. Sözmen also asserts that holders of this position may view nature as existing in a delicate state of balance and have an overly romantic view of the lives of animals in the wild, and she contends that the wild contains vast amounts of suffering. Martha Nussbaum argues that because humans are constantly intervening in nature, the central question should be what form these interventions should take rather than whether interventions should take place, arguing that "intelligently respectful paternalism is vastly superior to neglect". Laissez-faire A laissez-faire view, which holds that humans should not harm animals in the wild but do not have an obligation to aid these individuals when in need, has been defended by Tom Regan, Elisa Aaltola, Clare Palmer, and Ned Hettinger. Regan argues that the suffering animals inflict on each other should not be a concern of ethically motivated wildlife management, and that wildlife managers should instead focus on letting animals in the wild exist as they are, free from human predation, and let them "carve out their own destiny". Aaltola similarly argues that predators should be left to flourish despite the suffering that they cause to the animals that they prey upon. Palmer endorses a variant of this position, which argues that humans may have an obligation to assist wild animals if humans are responsible for their situation. Hettinger argues for laissez-faire based on the environmental value of "Respect for an Independent Nature". Catia Faria argues that following the principle that humans should only help individuals when they are being harmed by humans, rather than by natural processes, would also mean refusing to help humans and companion animals when they suffer due to natural processes; this implication does not seem acceptable to most people, and she asserts that there are strong reasons to help these individuals when humans have the capacity to do so. Faria argues that there is an obligation to help animals in the wild suffering in similar situations, and thus the laissez-faire view does not hold up.
Similarly, Steven Nadler argues that it is morally wrong to refuse help to animals in the wild regardless of whether humans are indirectly or directly responsible for their suffering, as declining on the same grounds to aid humans suffering from natural harms, such as famine, a tsunami, or pneumonia, would be considered immoral. He concludes that if the only thing that is morally relevant is an individual's capacity to suffer, there is no relevant moral difference between humans and other animals suffering in these situations. In the same vein, Steve F. Sapontzis asserts: "When our interests or the interests of those we care for will be hurt, we do not recognize a moral obligation to 'let nature take its course'." Wild animal sovereignty Some writers, such as the animal rights philosophers Sue Donaldson and Will Kymlicka in Zoopolis, argue that humans should not perform large interventions to help animals in the wild. They assert that these interventions would take away the animals' sovereignty by removing their ability to govern themselves. Christiane Bailey asserts that certain wild animals, especially prosocial animals, meet sufficient criteria to be considered moral agents, that is to say, individuals capable of making moral judgments and who have responsibilities. She argues that aiding them would reduce wild animals to beings incapable of making decisions for themselves. Oscar Horta emphasizes the fact that although some individuals may form sovereign groups, the vast majority of wild animals are either solitary or r-strategists, whose population size varies greatly from year to year. He contends that most of their interactions would be amensalism, commensalism, antagonism, or competition. Horta concludes that the majority of animals in the wild would not form sovereign communities if the criteria established by Donaldson and Kymlicka were applied. Analogy with colonialism Estiva Reus asserts that a comparison exists, from a certain perspective, between the spirit which animated the defenders of colonialism, who saw it as necessary human progress for "backward peoples", and the idea which inspires writers who argue for reforming nature in the interest of wild animals: the proponents of the two positions consider that they have the right and the duty, because of their superior skills, to model the existence of beings unable to remedy by their own means the evils which overwhelm them. Thomas Lepeltier, a historian and writer on animal ethics, argues that "if colonization is to be criticized, it is because, beyond the rhetoric, it was an enterprise of spoliation and exaction exercised with great cruelty". He also contends that writers who advocate for helping wild animals do not do so for their own benefit, because they would have nothing to gain by helping these individuals. Lepeltier goes on to assert that the advocates for reducing wild animal suffering would be aware of their doubts about how best to help these individuals and that they would not act by considering them as rudimentary, simple-to-understand beings, contrary to the vision that the former colonizers had of colonized populations.
Intervention in practice Existing forms of assistance Existing ways that individual animals suffering in the wild are aided include providing medical care to sick and injured animals, vaccinating animals to prevent disease, taking care of orphaned animals, rescuing animals who are trapped, or in natural disasters, taking care of the needs of animals who are starving or thirsty, sheltering animals who are suffering due to weather conditions, and using contraception to regulate population sizes. History of interventions Providing aid The Bishnoi, a Hindu sect founded in the 15th century, have a tradition of feeding wild animals. Some Bishnoi temples also act as rescue centres, where priests take care of injured animals; a few of these individuals are returned to the wild, while others remain, roaming freely in the temple compounds. The Borana Oromo people leave out water overnight for wild animals to drink because they believe that the animals have a right to drinking water. Culling In 2002, the Australian government authorized the killing of 15,000, out of 100,000, kangaroos who were trapped in a fenced-in national military base and suffering in a state of illness, misery and starvation. In 2016, 350 starving hippos and buffaloes at Kruger National Park were killed by park rangers; one of the motives for the action was to prevent them from suffering as they died. Rescues Rescues of multiple animals in the wild have taken place. In 1988, the United States and Soviet governments collaborated in Operation Breakthrough to free three gray whales who were trapped in pack ice off the coast of Alaska. In 2018, a team of BBC filmmakers dug a ramp in the snow to allow a group of penguins to escape a ravine in Antarctica. In 2019, 2,000 baby flamingos were rescued during a drought in South Africa. During the 2019–20 Australian bushfire season, a number of fire-threatened wild animals were rescued. In 2020, 120 pilot whales, who were beached, were rescued in Sri Lanka. In 2021, 1,700 Cape cormorant chicks, who had been abandoned by their parents, were rescued in South Africa. In the same year, nearly 5,000 cold-stunned sea turtles were rescued in Texas. Vaccination and contraception programs Vaccination programs have been successfully implemented to prevent rabies and tuberculosis in wild animals. Wildlife contraception has been used to reduce and stabilize populations of wild horses, white-tailed deer, American bison, and African elephants. Future developments Proposed interventions Technological It has been argued that in the future, based on research, feasibility and whether interventions could be carried out without increasing suffering overall, existing forms of assistance for wild animals could be employed on a larger scale to reduce suffering. Technological proposals include gene drives and CRISPR to reduce the suffering of members of r-strategist species, and using biotechnology to eradicate suffering in wild animals. Preventing predation When it comes to reducing suffering as a result of predation, propositions include removing predators from wild areas, refraining from reintroducing predators into areas where they have previously gone extinct, arranging the gradual extinction of carnivorous species, and "reprogramming" them to become herbivores using germline engineering. 
With regard to predation by cats and dogs, it has been recommended that these companion animals should always be sterilized to prevent the existence of feral animals, and that cats should be kept indoors and dogs kept on a leash, unless in designated areas. Habitat reduction Some writers, like Brian Tomasik, argue from a consequentialist perspective that, since most wild animals lead lives filled with suffering, habitat loss should be encouraged rather than opposed. Tyler M. John and Jeff Sebo criticize this position, terming it the "Logic of the Logger", based on the concept of the "Logic of the Larder". Welfare biology Welfare biology is a proposed research field for studying the welfare of animals, with a particular focus on their relationship with natural ecosystems. It was first advanced in 1995 by Yew-Kwang Ng, who defined it as "the study of living things and their environment with respect to their welfare (defined as net happiness, or enjoyment minus suffering)". Such research is intended to promote concern for animal suffering in the wild and to establish effective actions that can be undertaken to help these individuals in the future. The organizations Animal Ethics and Wild Animal Initiative promote the establishment of welfare biology as a field of research. Impact of climate change It has been argued that climate change may have a large direct impact on a number of animals, with the largest effect on individuals belonging to specialist species adapted to the environments most affected by climate change; this could then lead to their replacement by individuals belonging to more generalist species. It has also been asserted that the indirect impact of climate change on wild animal suffering will depend on whether it leads to an increase or decrease in the number of individuals born into lives in which they suffer and die shortly after coming into existence, with a large number of factors needing to be taken into consideration and requiring further study to assess this. Risks Spreading wild animal suffering beyond Earth Several researchers and non-profit organizations have raised concern that human civilization may cause wild animal suffering outside Earth. For example, wild habitats may be created, or allowed to emerge, on extraterrestrial colonies such as terraformed planets. Another example of a potential realization of the risk is directed panspermia, in which the initial microbial population eventually evolves into sentient organisms. Spreading sentient wild animals beyond Earth may constitute a suffering risk, as this could potentially lead to an immense increase in the amount of wild animal suffering in existence. Cultural depictions Wildlife documentaries Criticism of portrayals of wild animal suffering It has been argued that much of people's knowledge about wild animals comes from wildlife documentaries, which have been described as non-representative of the reality of wild animal suffering because they underrepresent uncharismatic animals who may have the capacity to suffer, such as animals who are preyed upon, as well as small animals and invertebrates.
In addition, it is argued that such documentaries focus on adult animals, while the majority of animals, who likely suffer the most, die before reaching adulthood; that wildlife documentaries do not generally show animals suffering from parasitism; that such documentaries can leave viewers with the false impression that animals who have been attacked by predators and suffered serious injury survived and thrived afterwards; and that many of the particularly violent incidents of predation are not included. In an interview, the documentary broadcaster David Attenborough stated: "People who accuse us of putting in too much violence, [should see] what we leave on the cutting-room floor." It is contended that wildlife documentaries present nature as a spectacle to be passively consumed by viewers, as well as a sacred and unique place that needs protection. Additionally, attention is drawn to how hardships experienced by animals are portrayed in a way that gives the impression that wild animals, through adaptive processes, are able to overcome these sources of harm. The development of such adaptive traits takes place over a number of generations of individuals who will likely experience much suffering and hardship in their lives, while passing down their genes. David Pearce, a transhumanist and advocate for technological solutions for reducing the suffering of wild animals, is highly critical of how wildlife documentaries, which he refers to as "animal snuff-movies", represent wild animal suffering: Nature documentaries are mostly travesties of real life. They entertain and edify us with evocative mood-music and travelogue-style voice-overs. They impose significance and narrative structure on life's messiness. Wildlife shows have their sad moments, for sure. Yet suffering never lasts very long. It is always offset by homely platitudes about the balance of Nature, the good of the herd, and a sort of poor-man's secular theodicy on behalf of Mother Nature which reassures us that it's not so bad after all. ... That's a convenient lie. ... Lions kill their targets primarily by suffocation; which will last minutes. The wolf pack may start eating their prey while the victim is still conscious, though hamstrung. Sharks and the orca basically eat their prey alive; but in sections for the larger prey, notably seals. Pearce argues, through analogy, that the idea of intelligent aliens creating stylised portrayals of human deaths for popular entertainment would be considered abhorrent; he asserts that, in reality, this is the role that humans play when creating wildlife documentaries. Clare Palmer asserts that even when wildlife documentaries contain vivid images of wild animal suffering, they do not motivate a moral or practical response in the way that companion animals, such as dogs or cats, suffering in similar situations would, and most people instinctively adopt the position of laissez-faire: allowing suffering to take its course, without intervention. Non-intervention as a filmmaking rule The question of whether wildlife documentary filmmakers should intervene to help animals is a topic of much debate. It has been described as a "golden rule" of such filmmaking to observe animals but not intervene. The rule is occasionally broken, with BBC documentary crews rescuing some stranded baby turtles in 2016 and rescuing a group of penguins trapped in a ravine in 2018; the latter decision was defended by other wildlife documentary filmmakers.
Filmmakers following the rule have been criticized for filming dying animals, such as an elephant dying of thirst, without helping them. In fiction 19th century Herman Melville, in Moby-Dick, published in 1851, describes the sea as a place of "universal cannibalism", where "creatures prey upon each other, carrying on eternal war since the world began"; this is illustrated by a later scene depicting sharks consuming their own entrails. The fairy tales of Hans Christian Andersen contain depictions of the suffering of animals due to natural processes and their rescues by humans. The titular character in "Thumbelina" encounters a seemingly dead frozen swallow. Thumbelina feels sorry for the bird and her companion the mole states: "What a wretched thing it is to be born a little bird. Thank goodness none of my children can be a bird, who has nothing but his 'chirp, chirp', and must starve to death when winter comes along." Thumbelina discovers that the swallow is not actually dead and manages to nurse them back to health. In "The Ugly Duckling", the bitter winter cold causes the duckling to become frozen in an icy pond; the duckling is rescued by a farmer who breaks the ice and takes the duckling to his home to be resuscitated. 20th century In the 1923 book Bambi, a Life in the Woods, Felix Salten portrays a world where predation and death are continuous: a sick young hare is killed by crows, a pheasant and a duck are killed by foxes, a mouse is killed by an owl and a squirrel describes how their family members were killed by predators. The 1942 Disney adaptation of Bambi has been criticized for inaccurately portraying a world where predation and death are no longer emphasized, creating a "fantasy of nature cleansed of the traumas and difficulties that may trouble children and that adults prefer to avoid". The film version has also been criticized for unrealistically portraying nature undisturbed by humans as an idyllic place, made up of interspecies friendships, with Bambi's life undisturbed by many of the harms routinely experienced by his real-life counterparts, such as starvation, predation, bovine tuberculosis, and chronic wasting disease. John Wyndham's character Zelby, in the 1957 book The Midwich Cuckoos, describes nature as "ruthless, hideous, and cruel beyond belief" and observes that the lives of insects are "sustained only by intricate processes of fantastic horror". In Watership Down, published in 1972, Richard Adams compares the hardship experienced by animals in winter to the suffering experienced by poor humans, stating: "For birds and animals, as for poor men, winter is another matter. Rabbits, like most wild animals, suffer hardship." Adams also describes rabbits as being more susceptible to disease in the winter. In the philosopher Nick Bostrom's 1994 short story "Golden", the main character Albert, an uplifted golden retriever, observes that humans observe nature from an ecologically aesthetic perspective which disregards the suffering of the individuals who inhabit "healthy" ecosystems. Albert also asserts that it is a taboo in the animal rights movement that the majority of the suffering experienced by animals is due to natural processes and that "[a]ny proposal for remedying this situation is bound to sound utopian, but my dream is that one day the sun will rise on Earth and all sentient creatures will greet the new day with joy". 
21st century The character Lord Vetinari, in Terry Pratchett's Unseen Academicals, in a speech, tells how he once observed a salmon being consumed alive by a mother otter and her children feeding on the salmon's eggs. He sarcastically describes "[m]other and children dining upon mother and children" as one of "nature's wonders", using it as an example of how evil is "built into the very nature of the universe". This depiction of evil has been described as non-traditional because it expresses horror at the idea that evil has been designed as a feature of the universe. In non-fiction Annie Dillard's views on nature, as expressed in Pilgrim at Tinker Creek and Holy the Firm, deviate from the traditional portrayal of the natural world as peaceful and balanced. Instead, she presents nature as a realm marked by inherent brutality and violence, using vivid imagery to depict scenes of predation, parasitism, and death. Dillard explores the idea that the divine is not separate from this violence but is intertwined with it, proposing an immanent God who is present within the chaos and suffering of the natural world. This perspective challenges the concept of a benevolent deity existing independently of nature's harsh realities, inviting readers to consider the possibility of a divine presence within an indifferent universe. Through this approach, Dillard's work contributes a distinct perspective to American nature writing, blending theological inquiry with reflections on the violence in nature. In poetry Ancient Homer, in the Iliad, employs the simile of a stag who, as a victim, is wounded by a human hunter and is then devoured by jackals, who themselves are frightened away by a scavenging lion. In the epigram "The Swallow and the Grasshopper", attributed to Euenus, the poet writes of a swallow feeding a grasshopper to its young, remarking that "wilt not quickly cast it loose? for it is not right nor just that singers should perish by singers' mouths." Medieval Al-Ma'arri wrote of the kindness of giving water to birds and speculated whether there was a future existence where innocent animals would experience happiness to remedy the suffering they experience in this world. In the Luzūmiyyāt, he included a poem addressed to the wolf, who "if he were conscious of his bloodguiltiness, would rather have remained unborn." 18th century In "On Poetry: A Rhapsody", written in 1733, Jonathan Swift argues that Hobbes proved that all creatures exist in a state of eternal war and uses predation by different animals as evidence of this. He wrote: "A Whale of moderate Size will draw / A Shole of Herrings down his Maw. / A Fox with Geese his Belly crams; / A Wolf destroys a thousand Lambs." Voltaire makes similar descriptions of predation in his "Poem on the Lisbon Disaster", published in 1756, arguing: "Elements, animals, humans, everything is at war." Voltaire also asserts that "all animals [are] condemned to live, / All sentient things, born by the same stern law, / Suffer like me, and like me also die." In William Blake's Vala, or The Four Zoas, the character Enion laments the cruelty of nature, observing how ravens cry out but do not receive pity, and how sparrows and robins starve to death in the winter. Enion also mourns how wolves and lions reproduce in a state of love, then abandon their young to the wilds and how a spider labours to create a web, awaiting a fly, but then is consumed by a bird. 
19th century Erasmus Darwin in The Temple of Nature, published posthumously in 1803, observes the struggle for existence, describing how different animals feed upon each other. He wrote "The towering eagle, darting from above, / Unfeeling rends the inoffensive dove ... Nor spares, enamour'd of his radiant form, / The hungry nightingale the glowing worm", and how parasitic animals, like botflies, reproduce, their young feeding inside the living bodies of other animals, stating: "Fell Oestrus buries in her rapid course / Her countless brood in stag, or bull, or horse; / Whose hungry larva eats its living way, / Hatch'd by the warmth, and issues into day." He also refers to the world as "one great Slaughter-house". In a footnote, he speculates whether humans could someday create a food source for predatory animals based on sugar, asserting that, as a result, "food for animals would then become as plentiful as water, and they might live upon the earth without preying on each other, as thick as blades of grass, with no restraint to their numbers but the want of local room". The poem has been used as an example of how Erasmus Darwin predicted evolutionary theory. Isaac Gompertz, the brother of Lewis Gompertz, in his 1813 poem "To the Thoughtless", criticizes the assertion that human consumption of other animals is justified because it is designed that way by nature, inviting the reader to imagine themselves being predated by an animal and to consider whether they would want to have their life saved, in the same way an animal being preyed upon—such as a fly attacked by a spider—would, despite predation being part of nature-given law. In the 1818 poem "Epistle to John Hamilton Reynolds", John Keats retells to John Hamilton Reynolds how one evening he was by the ocean, when he saw "Too far into the sea; where every maw / The greater on the less feeds evermore", and observes that there exists an "eternal fierce destruction" at the core of the world: "The Shark at savage prey — the hawk at pounce, — / The gentle Robin, like a Pard or Ounce, / Ravening a worm." The poem has been cited as an example of Erasmus Darwin's writings on Keats. In 1850, Alfred Tennyson published the poem "In Memoriam A.H.H.", which contained the expression "Nature, red in tooth and claw"; this phrase has since become commonly used as a shorthand to refer to the extent of suffering in nature. In his 1855 poem "Maud", Tennyson described nature as irredeemable because of the theft and predation it intrinsically contains: "For nature is one with rapine, a harm no preacher can heal; / The Mayfly is torn by the swallow, the sparrow spear'd by the shrike, / And the whole little wood where I sit is a world of plunder and prey." Edwin Arnold in The Light of Asia, a narrative poem published in 1879 about the life of Prince Gautama Buddha, describes how originally the prince saw the "peace and plenty" of nature but upon closer inspection observed: "Life living upon death. So the fair show / Veiled one vast, savage, grim conspiracy / Of mutual murder, from the worm to man." It has been asserted that the Darwinian struggle depicted in the poem comes more from Arnold than Buddhist tradition. 20th century American poet Robinson Jeffers' poems contain depictions of violence in nature, such as "The Bloody Sire": "What but the wolf's tooth whittled so fine / The fleet limbs of the antelope? / What but fear winged the birds, and hunger / Jewelled with such eyes the great goshawk's head? / Violence has been the sire of all the world's values." 
In his poem "Hurt Hawks", the narrator describes watching a once-strong and vigorous hawk that has been injured and now faces the grim fate of dying from starvation. See also Animal consciousness Antinatalism Emotion in animals God's utility function Natural evil Pain in animals Pain in amphibians Pain in cephalopods Pain in crustaceans Pain in invertebrates Pain in fish The Problem of Pain Speciesism Suffering-focused ethics Suffering risks Veganism References Further reading External links Wild Animal Initiative Wild Animal Suffering – Animal Ethics Wild animal suffering video course – Animal Ethics Timeline of wild-animal suffering WildAnimalSuffering.org Issues in animal ethics Issues in environmental ethics suffering
Wild animal suffering
[ "Biology", "Environmental_science" ]
16,990
[ "Animals", "Issues in environmental ethics", "Environmental ethics", "Wildlife" ]
24,559,450
https://en.wikipedia.org/wiki/Usage%20of%20personal%20protective%20equipment
The use of personal protective equipment (PPE) is inherent in the theory of universal precaution, which requires specialized clothing or equipment for the protection of individuals from hazard. The term is defined by the Occupational Safety and Health Administration (OSHA), which is responsible for PPE regulation, as the "equipment that protects employees from serious injury or illness resulting from contact with chemical, radiological, physical, electrical, mechanical, or other hazards." While there are common forms of PPE, such as gloves, eye shields, and respirators, the standard set in the OSHA definition indicates a wide coverage. This means that PPE involves a sizable range of equipment. There are several ways to classify this equipment, for example by whether the gear addresses physiological or environmental hazards. The following list, however, sorts personal protective equipment according to function and body area. PPE by usage Combat The modern PPE used in combat has been increasingly designed to address the emergent dangers posed by the increasing mix of conventional and unconventional conflicts demonstrated in the American experience in Iraq and Afghanistan. Combat protective equipment today is often typified by flame resistance, improved body armor, and reduced weight, among other advances. The gear is shown in the following list, which includes PPE for defense against ballistic weapons, commonly worn by military and law enforcement personnel. Shield A shield is held in the hand or on the arm. Its purpose is to intercept attacks, either by stopping projectiles such as arrows or by glancing a blow to the side of the shield-user. Shields vary greatly in size, ranging from large shields that protect the user's entire body to small shields that are mostly for use in hand-to-hand combat. Shields also vary a great deal in thickness; whereas some shields were made of thick wooden planking, to protect soldiers from spears and crossbow bolts, other shields were thinner and designed mainly for glancing blows away (such as a sword blow). In prehistory, shields were made of wood, animal hide, or wicker. In antiquity and in the Middle Ages, shields were used by foot soldiers and mounted soldiers. Even after the invention of gunpowder and firearms, shields continued to be used. In the 18th century, Scottish clans continued to use small shields, and in the 19th century, some non-industrialized peoples continued to use shields. In the 20th and 21st centuries, shields are used by military and police units that specialize in anti-terrorist action, hostage rescue, and siege-breaching. Torso A ballistic vest helps absorb the impact from firearm-fired projectiles and shrapnel from explosions, and is worn on the torso. Soft vests are made from many layers of woven or laminated fibers and can be capable of protecting the wearer from small caliber handgun and shotgun projectiles, and small fragments from explosives such as hand grenades. Metal or ceramic plates can be used with a soft vest, providing additional protection from rifle rounds, and metallic components or tightly-woven fiber layers can give soft armor resistance to stab and slash attacks from a knife. Soft vests are commonly worn by police forces, private citizens and private security guards or bodyguards, whereas hard-plate reinforced vests are mainly worn by combat soldiers, police tactical units and hostage rescue teams. Modern body armor may combine a ballistic vest with other items of protective clothing, such as a combat helmet.
Vests intended for police and military use may also include ballistic shoulder and side protection armor components, and bomb disposal officers wear heavy armor and helmets with face visors and spine protection. Head Combat helmets are among the oldest forms of personal protective equipment, and are known to have been worn by the Assyrians around 900 BC, followed by the ancient Greeks and Romans, throughout the Middle Ages, and up to the end of the 1600s by many combatants. Their materials and construction became more advanced as weapons became more and more powerful. Initially constructed from leather and brass, and then bronze and iron during the Bronze and Iron Ages, they soon came to be made entirely from forged steel in many societies after about 950 AD. At that time, they were purely military equipment, protecting the head from cutting blows with swords, flying arrows, and low-velocity musketry. Today's militaries often use high-quality helmets made of ballistic materials such as Kevlar and aramid, which have excellent bullet and fragmentation stopping power. Some helmets also have good non-ballistic protective qualities, though many do not. Non-ballistic injuries may be caused by many things, such as concussive shockwaves from explosions, physical attacks, motor vehicle accidents, or falls. A ballistic face mask is designed to protect the wearer from ballistic threats. Ballistic face masks are usually made of Kevlar or other bullet-resistant materials, and the inside of the mask may be padded for shock absorption, depending on the design. Due to weight restrictions, protection levels range only up to NIJ Level IIIA. Respirator A gas mask is worn over the face to protect the wearer from inhaling "airborne pollutants" and toxic gases. The mask forms a sealed cover over the nose and mouth, but may also cover the eyes and other vulnerable soft tissues of the face. Airborne toxic materials may be gaseous or particulate. Many gas masks include protection from both types. During riots where tear gas or CS gas is employed by riot police, gas masks are commonly used by police and rioters alike. Limbs Protection of limbs from bombs is provided by a bombsuit. Sports Limbs Gloves are frequently used to keep the hands warm, a function that is particularly necessary when cycling in cold weather. The hands are also relatively inactive, and do not have a great deal of muscle mass, which also contributes to the possibility of chill. Gloves are therefore vital for insulating the hands from cold, wind, and evaporative cooling. Putting a hand out to break a fall is a natural reaction; however, the hands are among the more difficult parts of the body to repair. There is little or no spare skin, and immobilising the hands sufficiently to promote healing involves significant inconvenience to the patient. Fingerless gloves have a lightly padded palm of leather (natural or synthetic), gel or other material. Full-finger gloves are useful in winter, when warmth is a real issue. These are also generally waterproof but will become soggy in heavy rain. Construction Head A hard hat is a type of helmet predominantly used in workplace environments, such as construction sites, to protect the head from injury by falling objects, impact with other objects, debris, bad weather and electric shock. Inside the helmet is a suspension that spreads the helmet's weight over the top of the head.
It also provides a space of approximately 3 cm (1.2 inches) between the helmet's shell and the wearer's head so that if an object strikes the shell, the impact is less likely to be transmitted directly to the skull. Rigid plastic has been the most common material. Respiratory system A respirator is designed to protect the wearer from inhaling harmful dusts, fumes, vapors, and/or gases. Respirators come in a wide range of types and sizes used by the military, private industry, and the public. Respirators range from cheaper, single-use, disposable masks to reusable models with replaceable cartridges. There are two main categories: the air-purifying respirator, which forces contaminated air through a filtering element, and the air-supplied respirator, in which an alternate supply of fresh air is delivered. Within each category, different techniques are employed to reduce or eliminate noxious airborne contents. The term respirator in the hospital setting refers to the N95 filtering facepiece masks that are commonly used to care for patients with tuberculosis. There was much controversy over the use of these masks during the H1N1 outbreak of 2009. PPE by body area Protective headgear Masks Some masks are made of hard material, such as those used by goaltenders in ice hockey (a goalie mask) and catchers in baseball, as protection against being struck in the face. For gas masks and similar, see #Respiratory protection. See Mask (disambiguation) Helmets See Helmet#Types of helmet Eye protection See Eye protection. Hearing protection Earplug Earmuffs Earpads/earflaps Other head/neck protection Throat guard Headguard (Head guard) Boxing headgear Mouthguard Armored/insulated hood Association football headgear Arm/shoulder protection Shoulder pads (sport) Forearm guard Fist guard Knuckle guard Wrist guard Elbow guard Elbow pad Hand/Wrist Wraps Hand protection Gloves are available to protect against: Chemicals, contamination and infection (e.g. disposable latex/vinyl/nitrile gloves) Electricity, when voltage is too high Extremes of temperature (e.g. oven gloves, welder's gloves) Mechanical hazards (e.g. rigger gloves, chainmail gloves) The prime concern of mechanic gloves is to protect the hands in mechanical applications, where the harsh elements of mechanical work bear directly on the hands, which must be secured against higher or lower levels of risk depending upon the working environment; this protection is normally measured in terms of rating standards that specify the class of glove. Lacerations and other wounds from sharp objects Baseball glove Belay gloves Cycling gloves Falconry gloves Gymnastics grips Hand guards Hand/wrist wraps Hockey glove Wicket-keeper's gloves Body protection Athletic supporter with cup pocket and protective cup, also called Abdomen guard or cricket box Chestguard (Chest guard, Hogu) Rib guard Foot/Leg/hip protection Foot guard Hip pads (Hip pad) Knee pads Instep guard/instep protector Shin guard (shin guards) Combined knee-shin guards Padded shorts Bouldering mat Chaps are individual pant leggings made of leather and worn by farriers, cowboys, and rodeo contestants to protect the legs from contact with hooves, thorny undergrowth, and other such work hazards. They may also be made of other materials for leg protection against other hazards, such as "rain chaps" of waterproof materials, or "saw chaps" of Kevlar for chainsaw workers. Safety footwear (protective footwear) is footwear that comes with a protective toe cap.
Full protective garments Protective suit is an umbrella term for any suit or clothing which protects the wearer. Any specific design of suit may offer protection against biological and chemical agents, particle radiation (alpha) and/or radiation (beta and gamma), and may offer flash protection in the case of bomb disposal suits. Most forms of industrial clothing are protective clothing. Personal protective equipment includes: Complete suits The word "chemsuit" is sometimes used to mean a chemical-protection suit, whether real or fictional. Boilersuit NBC suit Hazmat suit Bombsuits Fire proximity suit Riding suits (abrasion-proof: made of leather, Kevlar, ballistic nylon, cordura, etc., and waterproof) Spacesuit Splash suit, to protect against splashing chemicals Wetsuit and Drysuit Immersion suit Other garments Apron (protects the body and other clothing from dirt; also worn as a mark of distinction by waiters) Nappy ("diaper" in American English) Motorcycle armor Protective vest Safety harness Sun protective clothing References
Usage of personal protective equipment
[ "Engineering", "Environmental_science" ]
2,315
[ "Safety engineering", "Personal protective equipment", "Environmental social science" ]
24,560,170
https://en.wikipedia.org/wiki/Castelnuovo%E2%80%93Mumford%20regularity
In algebraic geometry, the Castelnuovo–Mumford regularity of a coherent sheaf F over projective space P^n is the smallest integer r such that it is r-regular, meaning that H^i(P^n, F(r − i)) = 0 whenever i > 0. The regularity of a subscheme is defined to be the regularity of its sheaf of ideals. The regularity controls when the Hilbert function of the sheaf becomes a polynomial; more precisely, dim H^0(P^n, F(m)) is a polynomial in m when m is at least the regularity. The concept of r-regularity was introduced by David Mumford, who attributed the following results to Guido Castelnuovo: An r-regular sheaf is s-regular for any s ≥ r. If a coherent sheaf is r-regular then F(r) is generated by its global sections. Graded modules A related idea exists in commutative algebra. Suppose R = k[x_0, ..., x_n] is a polynomial ring over a field k and M is a finitely generated graded R-module. Suppose M has a minimal graded free resolution ... → F_j → ... → F_0 → M → 0, and let b_j be the maximum of the degrees of the generators of F_j. If r is an integer such that b_j − j ≤ r for all j, then M is said to be r-regular. The regularity of M is the smallest such r. These two notions of regularity coincide when F is a coherent sheaf such that Ass(F) contains no closed points. Then the graded module ⊕_d H^0(P^n, F(d)) is finitely generated and has the same regularity as F. See also Hilbert scheme Quot scheme References Algebraic geometry
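As a small illustration of the graded-module definition (a worked example added here for concreteness, not drawn from the references), one may take R = k[x, y] and M = R/(x^2, xy, y^2). A minimal graded free resolution is
\[
0 \longrightarrow R(-3)^{2} \longrightarrow R(-2)^{3} \longrightarrow R \longrightarrow M \longrightarrow 0,
\]
so the maximal generator degrees are b_0 = 0, b_1 = 2 and b_2 = 3, and the regularity is \max_j (b_j - j) = \max(0, 1, 1) = 1.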
Castelnuovo–Mumford regularity
[ "Mathematics" ]
279
[ "Fields of abstract algebra", "Algebraic geometry" ]
24,561,439
https://en.wikipedia.org/wiki/Diffraction%20in%20time
In quantum physics, diffraction in time is a phenomenon associated with the quantum dynamics of suddenly released matter waves initially confined in a region of space. It was introduced in 1952 by the Ukrainian-Mexican physicist Marcos Moshinsky with the shutter problem. A matter-wave beam stopped by an absorbing shutter exhibits an oscillatory density profile during its propagation after removal of the shutter. Whenever this propagation is accurately described by the time-dependent Schrödinger equation, the transient wave functions resemble the solutions that appear for the intensity of light subject to Fresnel diffraction by a straight edge. For this reason, the transient phenomenon was dubbed diffraction in time and has since been recognised as ubiquitous in quantum dynamics. The experimental confirmation of this phenomenon was only achieved about half a century later, in the ultracold-atom group directed by Jean Dalibard. References Diffraction
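A minimal numerical sketch of this Fresnel-type transient (added for illustration; it assumes natural units ħ = m = 1, illustrative values of the wave number k and the observation time t, and the closed form of the shutter density written through the Fresnel integrals C and S) could look as follows in Python:

# Sketch of the Moshinsky shutter density after sudden release of a beam with
# wave number k at t = 0, using the Fresnel-integral form
#   |psi(x, t)|^2 = (1/2) * [ (1/2 + C(u))^2 + (1/2 + S(u))^2 ],
#   u = sqrt(m / (pi * hbar * t)) * (hbar * k * t / m - x),
# which has the same shape as Fresnel diffraction of light by a straight edge.
import numpy as np
from scipy.special import fresnel

def shutter_density(x, t, k=1.0, m=1.0, hbar=1.0):
    u = np.sqrt(m / (np.pi * hbar * t)) * (hbar * k * t / m - x)
    S, C = fresnel(u)  # SciPy returns (S(u), C(u)) with the pi/2 convention
    return 0.5 * ((0.5 + C) ** 2 + (0.5 + S) ** 2)

x = np.linspace(-20.0, 60.0, 2000)
rho = shutter_density(x, t=30.0)
# rho tends to 1 far behind the classical front x = hbar*k*t/m, to 0 far ahead
# of it, and oscillates near the front: the "diffraction in time" fringes.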
Diffraction in time
[ "Physics", "Chemistry", "Materials_science" ]
180
[ "Crystallography", "Diffraction", "Spectroscopy", "Spectrum (physical sciences)" ]
24,561,779
https://en.wikipedia.org/wiki/School%20of%20Convergence
9.9 School of Convergence, also known as SoC, is a small media school in New Delhi, India. The school has Pramath Raj Sinha, the founding dean of the Indian School of Business, as its dean. History The School of Convergence (SoC) was set up in October 2001 by Kaleidoscope Entertainment, headed by Bobby Bedi. The school started with a Two Year Post Graduate Diploma Course in Content Creation and Management (PGDCCM) combining print, radio, television, cinema and the Internet. This course offered knowledge and skills in all streams of media and management, combining the curricula of a journalism school, a film school and a management school. The Media School licensed its brand to 9.9 Mediaworx Pvt. Ltd. to start journalism courses as 9.9 School of Convergence, and it started its first batch in September 2009 with just one course, an eleven-month diploma in Applied Journalism. Events 2001: SoC set up in May 2001 at the International Management Institute (IMI), New Delhi. The flagship program was a first-of-its-kind two-year PG Diploma in Content Creation and Management, combining the curricula of a journalism school, film school and management school 2003: SoC ties up with St. Stephen's College to operate a 'Centre of Media Studies' for their students 2004: SoC ranked India's fifth-best school in Media and Mass Communication by Outlook magazine 2004: SoC extends college tie-up to Hindu College, Jesus & Mary College, Gargi College, and Sri Venkateswara College 2005: SoC ties up with the prestigious Indian Institute of Foreign Trade (IIFT), New Delhi, Indian Institute of Management Calcutta, and Film and Television Institute of India, Pune for specialized media courses 2006: SoC launches a Post Graduate Diploma in Advertising and Public Relations 2005: As an indication of the strong industry buy-in of the SoC curriculum and students, India's leading media companies - ABP Group, CyberMedia and Kaleidoscope Entertainment - offered guaranteed placements to students scoring an 'A' grade 2009: New leadership team revives SoC after a hiatus Programme 9.9 School of Convergence offers a Diploma in Applied Journalism. The programme equips aspiring journalists with the core skills required for success in their profession – high-quality and consistent reporting, writing and editing skills. The course is primarily taught by practising journalists, who have excelled at their craft and gained enough experience to teach the dos and don'ts of the profession. In fact, the weekly modules have been received with enthusiasm by media practitioners approached by 9.9 SoC because of the high degree of practicality in the syllabus. Faculty The school has an extensive list of faculty members, including Graham Watts, a journalist and trainer; BV Rao, consulting editor with MoneyLife, a fortnightly magazine, and a columnist on Indian media; Rasheeda Bhagat, associate editor of Hindu Business Line; Edward Henning, a teacher-turned-journalist; and Vanita Kohli-Khandekar, independent media consultant and writer. Other faculty members are Savyasaachi Jain, Mala Bhargava, Jacob Cherian, Radha Hegde, Vinay Kamat, Pooja Kothari, Ranbir Majumdar, Eric Saranovitz and Pramath Raj Sinha himself.
Advisory board Prof Sanjeev Chatterjee, vice dean, School of Communication, and executive director, Knight Center for International Media, University of Miami, Florida Clive Crook, Chief Washington commentator for Financial Times; senior editor, The Atlantic Monthly; columnist, National Journal (formerly deputy editor of The Economist) Don Durfee, Hong Kong bureau chief of Thomson Reuters (formerly editor of CFO Asia and CFO China, and with the Economist Intelligence Unit) Indrajit Gupta, editor, Forbes India (formerly resident editor, Economic Times, Mumbai; national business editor, Times of India; deputy editor, Businessworld) Prof Radha Hegde, Dept of Media, Culture and Communication & the Steinhardt School of Culture, Education and Human Development, New York University Tony Joseph, CEO, Mindworks Global Media Services (formerly editor, Businessworld; associate editor, Business Standard; features editor, Economic Times) Vinay Kamat, editor, DNA Bangalore (formerly editor of Business Times, the business supplement of The Times of India, editor of indiatimes.com and associate editor of Business Today) Surya Mantha, CEO, Web 18 (formerly with Sify India, RealNetworks, PRTM, and Xerox Corp, all in the US) Graham Watts, consultant, New Paragraph Company, Bangkok, Thailand (formerly spent 20 years with the Financial Times, including as a journalism trainer) Campus The 9.9 SoC has tie-ups with TV and radio studios for practical sessions. Address: Sri Aurobindo Society, New Mehrauli Road, Adchini, New Delhi - 110 017 About 9.9 Media 9.9 Media is a diversified media company started by former ABP CEO Dr. Pramath Raj Sinha along with four of his colleagues. It targets consumer, business and professional communities through magazines, websites, events, and peer groups. Other than SoC, 9.9 Media publishes several other magazines, manages professional institutes and hosts online platforms. See also CFO India CTO Forum Digit Digit Channel Connect Digit TV Edu Fast Track Inc. India Industry 2.0 Logistics 2.0 Skoar Pramath Raj Sinha External links 9.9 School of Convergence Official Website References 9.9 Media Products Mass media technology Media studies 2001 establishments in Delhi Mass media in Delhi Educational institutions established in 2001
School of Convergence
[ "Technology" ]
1,164
[ "Information and communications technology", "Mass media technology" ]