| id (int64, 39–79M) | url (string, 31–227 chars) | text (string, 6–334k chars) | source (string, 1–150 chars, nullable) | categories (list, 1–6 items) | token_count (int64, 3–71.8k) | subcategories (list, 0–30 items) |
|---|---|---|---|---|---|---|
11,693,946 | https://en.wikipedia.org/wiki/Uromyces%20betae | Uromyces betae is a fungal species and plant pathogen infecting beet (Beta vulgaris).
It was originally published as Uredo betae before it was transferred to the Uromyces genus.
Sugar beet rust was first described in Canada in 1935 (Newton and Peturson 1943) and was later reported in Europe in 1988 (O'Sullivan).
It is a rust which affects only beet, causing brown-orange spotting of the plant's leaves with rusty pustules of urediniospores at the centre of the spots. The rust can overwinter on seed crops or persist as teliospores that contaminate stored seed.
Severe rust attacks can cause yield losses of about 15% of root weight and 1% of sugar content, or up to 10% in the United Kingdom.
Other hosts of the fungus include sugar beet, beetroot, spinach beet, mangolds and wild beet (Beta vulgaris subsp. vulgaris, Beta vulgaris subsp. maritima), as well as Beta vulgaris, Beta cycla and Beta rapa.
It is found in Africa (Algeria, the Canary Islands, Libya, Madeira, Morocco and South Africa); Asia (Israel, Iran and the U.S.S.R.); Australasia (Australia, New Zealand and Tasmania); Europe (Austria, Belgium, Bulgaria, the Channel Islands, Czechoslovakia, Cyprus, Denmark, Finland, France, Germany, Greece, Great Britain, Holland, Hungary, Ireland, Italy, Latvia, Malta, Norway, Poland, Portugal, Romania, Sardinia, Spain, Sweden, Switzerland, Turkey and Yugoslavia); North America (Canada, Mexico and the U.S.A.); and South America (Argentina, Bolivia, Chile and Uruguay).
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Food plant pathogens and diseases
betae
Fungi described in 1801
Fungus species | Uromyces betae | [
"Biology"
] | 416 | [
"Fungi",
"Fungus species"
] |
11,694,119 | https://en.wikipedia.org/wiki/Inverse%20second | The inverse second or reciprocal second (s−1), also called per second, is a unit defined as the multiplicative inverse of the second (a unit of time). It is applicable for physical quantities of dimension reciprocal time, such as frequency and strain rate.
It is dimensionally equivalent to:
hertz (Hz), historically known as cycles per second – the SI unit for frequency and rotational frequency
becquerel (Bq) – the SI unit for the rate of occurrence of aperiodic or stochastic radionuclide events
baud (Bd) – the unit for symbol rate over a communication link
bit per second (bit/s) – the unit of bit rate
However, the special names and symbols above for s−1 are recommended for clarity.
Reciprocal second should not be confused with radian per second (rad⋅s−1), the SI unit for angular frequency and angular velocity. As the radian is a dimensionless unit, radian per second is dimensionally consistent with reciprocal second. However, they are used for different kinds of quantity, frequency and angular frequency, whose numerical values differ by a factor of 2π.
The inverse minute or reciprocal minute (min−1), also called per minute, is 60−1 s−1, as 1 min = 60 s; it is used in quantities of type "counts per minute", such as:
Actions per minute
Beats per minute
Counts per minute
Revolutions per minute (rpm)
Words per minute
Inverse square second (s−2) is involved in the units of linear acceleration, angular acceleration, and rotational acceleration.
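These unit relationships are simple scale factors. The short Python sketch below (illustrative only; the helper names are not from any standard library) converts an ordinary frequency to an angular frequency using the factor 2π discussed above, and a per-minute rate to a per-second rate using the factor 1/60.

import math

def hz_to_rad_per_s(frequency_hz):
    # Angular frequency in rad/s corresponding to an ordinary frequency in Hz
    return 2 * math.pi * frequency_hz

def per_minute_to_per_second(rate_per_minute):
    # Convert a "counts per minute" rate to a per-second rate (1 min = 60 s)
    return rate_per_minute / 60.0

print(hz_to_rad_per_s(50))             # a 50 Hz signal -> about 314.16 rad/s
print(per_minute_to_per_second(3000))  # 3000 rpm -> 50 revolutions per second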
See also
Aperiodic frequency
Inverse metre
Reciprocal length
Unit of time
Notes
References
Units of frequency | Inverse second | [
"Mathematics"
] | 339 | [
"Quantity",
"Units of frequency",
"Units of measurement"
] |
11,694,216 | https://en.wikipedia.org/wiki/Opus%20vittatum | Opus vittatum ("banded work"), also called opus listatum, was an ancient Roman construction technique introduced at the beginning of the fourth century, made by parallel horizontal courses of tuff blocks alternated with bricks.
This technique was adopted during the whole 4th century, and is typical of the works of Maxentius and Constantine.
See also
References
Sources
Ancient Roman construction techniques | Opus vittatum | [
"Engineering"
] | 80 | [
"Architecture stubs",
"Architecture"
] |
11,694,222 | https://en.wikipedia.org/wiki/Opus%20craticum | Opus craticum or craticii is an ancient Roman construction technique described by Vitruvius in his books De architectura as wattlework which is plastered over. It is often employed to construct partition walls and floors. Vitruvius disparaged this building technique as a grave fire risk, likely to have cracked plaster, and not durable. Surviving examples were found in the archaeological excavations at Pompeii and more so at Herculaneum, buried by the eruption of Mount Vesuvius in 79 AD and excavated beginning in 1929.
Scholarly confusion exists because the term opus craticium is also used for a Roman building technique very similar to, but not identified as being directly related to, half-timbering: a timber framework with a wall infill of stones in mortar called opus incertum. An example of this technique is the House of Opus Craticum in Herculaneum. This building, which was constructed some time in the first century or earlier, was reconstructed at Herculaneum's Insula III, nos. 13, 14, and 15.
Opus craticum was not a Roman invention, as variations of the technique are also found elsewhere in the ancient Mediterranean. Before the Romans, the Minoans, Etruscans, and Greeks are known to have used similar building techniques. At least since the 13th century, this type of construction, common in Europe, has been called half-timbered in English, Fachwerk (framework) in German, entramado de madera in Spanish, and colombage in French.
References
Ancient Roman construction techniques
Timber framing | Opus craticum | [
"Technology",
"Engineering"
] | 328 | [
"Structural system",
"Architecture stubs",
"Timber framing",
"Architecture"
] |
11,694,245 | https://en.wikipedia.org/wiki/Opus%20incertum | Opus incertum ("irregular work") was an ancient Roman construction technique, using irregularly shaped and randomly placed uncut stones or fist-sized tuff blocks inserted in a core of opus caementicium.
Initially it consisted of more careful placement of the caementa (rock fragments and small stones mixed with concrete), making the external surface as plain as possible. Later the external surface became plainer still by reducing the amount of concrete and choosing more regular small stones. When the amount of concrete between stones is particularly reduced, it is defined as opus quasi reticulatum. Used from the beginning of the 2nd century BC until the mid-1st century BC, it was later largely superseded by opus reticulatum.
Vitruvius, in De architectura (Ten Books on Architecture), favours opus incertum, deriding opus reticulatum as more expensive and structurally inferior, since cracks propagate more easily.
See also
Ancient Roman construction techniques | Opus incertum | [
"Engineering"
] | 203 | [
"Architecture stubs",
"Architecture"
] |
11,694,313 | https://en.wikipedia.org/wiki/Opus%20mixtum | Opus mixtum (Latin: "mixed work"), or opus vagecum and opus compositum, was an ancient Roman construction technique. It can consist in a mix of opus reticulatum and at the angles and the sides of opus latericium. It can also consist of opus vittatum and opus testaceum. Opus mixtum was also used from the 4th to 6th centuries AD.
References
See also
Jublains archeological site - the forum there is an example
Ancient Roman construction techniques | Opus mixtum | [
"Engineering"
] | 106 | [
"Architecture stubs",
"Architecture"
] |
11,694,506 | https://en.wikipedia.org/wiki/Population%20and%20Environment | Population and Environment is a quarterly peer-reviewed academic journal covering research on the reciprocal links between population, natural resources, and the natural environment. The journal was established in 1978 as the Journal of Population, obtaining its current title in 1980. The editor-in-chief is Brian Thiede (Penn State University). Vaida Thompson was the founding editor-in-chief (1977-1984). Former editors-in-chief of the journal include Elizabeth Fussell (Brown University), Lori Hunter (University of Colorado Boulder), and Landis MacKellar (Vienna Institute of Demography). According to the Journal Citation Reports, the journal has a 2021 impact factor of 4.283.
Past editors
The following persons have been editor-in-chief:
2007-2017 Lori Hunter (University of Colorado Boulder)
2004-2007 Landis MacKellar (Vienna Institute of Demography)
1999-2004 Kevin MacDonald (California State University, Long Beach)
1988-1999 Virginia Abernethy (Vanderbilt University)
1984-1988 Burton Mindick (Cornell University) and Ralph Taylor (Johns Hopkins University)
1977-1984 Vaida D. Thompson (University of North Carolina, Chapel Hill)
References
External links
Environmental social science journals
Springer Science+Business Media academic journals
Academic journals established in 1978
English-language journals
Quarterly journals
Demography journals | Population and Environment | [
"Environmental_science"
] | 269 | [
"Environmental social science journals",
"Environmental science journals",
"Environmental social science stubs",
"Environmental science journal stubs",
"Environmental social science"
] |
11,694,610 | https://en.wikipedia.org/wiki/Two-body%20problem%20in%20general%20relativity | The two-body problem in general relativity (or relativistic two-body problem) is the determination of the motion and gravitational field of two bodies as described by the field equations of general relativity. Solving the Kepler problem is essential to calculate the bending of light by gravity and the motion of a planet orbiting its sun. Solutions are also used to describe the motion of binary stars around each other, and estimate their gradual loss of energy through gravitational radiation.
General relativity describes the gravitational field by curved space-time; the field equations governing this curvature are nonlinear and therefore difficult to solve in a closed form. No exact solutions of the Kepler problem have been found, but an approximate solution has: the Schwarzschild solution. This solution pertains when the mass M of one body is overwhelmingly greater than the mass m of the other. If so, the larger mass may be taken as stationary and the sole contributor to the gravitational field. This is a good approximation for a photon passing a star and for a planet orbiting its sun. The motion of the lighter body (called the "particle" below) can then be determined from the Schwarzschild solution; the motion is a geodesic ("shortest path between two points") in the curved space-time. Such geodesic solutions account for the anomalous precession of the planet Mercury, which is a key piece of evidence supporting the theory of general relativity. They also describe the bending of light in a gravitational field, another prediction famously used as evidence for general relativity.
If both masses are considered to contribute to the gravitational field, as in binary stars, the Kepler problem can be solved only approximately. The earliest approximation method to be developed was the post-Newtonian expansion, an iterative method in which an initial solution is gradually corrected. More recently, it has become possible to solve Einstein's field equation using a computer instead of mathematical formulae. As the two bodies orbit each other, they will emit gravitational radiation; this causes them to lose energy and angular momentum gradually, as illustrated by the binary pulsar PSR B1913+16.
For binary black holes, the numerical solution of the two-body problem was achieved after four decades of research in 2005 when three groups devised breakthrough techniques.
Historical context
Classical Kepler problem
The Kepler problem derives its name from Johannes Kepler, who worked as an assistant to the Danish astronomer Tycho Brahe. Brahe took extraordinarily accurate measurements of the motion of the planets of the Solar System. From these measurements, Kepler was able to formulate Kepler's laws, the first modern description of planetary motion:
The orbit of every planet is an ellipse with the Sun at one of the two foci.
A line joining a planet and the Sun sweeps out equal areas during equal intervals of time.
The square of the orbital period of a planet is directly proportional to the cube of the semi-major axis of its orbit.
Kepler published the first two laws in 1609 and the third law in 1619. They supplanted earlier models of the Solar System, such as those of Ptolemy and Copernicus. Kepler's laws apply only in the limited case of the two-body problem. Voltaire and Émilie du Châtelet were the first to call them "Kepler's laws".
Nearly a century later, Isaac Newton had formulated his three laws of motion. In particular, Newton's second law states that a force F applied to a mass m produces an acceleration a given by the equation F=ma. Newton then posed the question: what must the force be that produces the elliptical orbits seen by Kepler? His answer came in his law of universal gravitation, which states that the force between a mass M and another mass m is given by the formula

F = GMm/r²
where r is the distance between the masses and G is the gravitational constant. Given this force law and his equations of motion, Newton was able to show that two point masses attracting each other would each follow perfectly elliptical orbits. The ratio of sizes of these ellipses is m/M, with the larger mass moving on a smaller ellipse. If M is much larger than m, then the larger mass will appear to be stationary at the focus of the elliptical orbit of the lighter mass m. This model can be applied approximately to the Solar System. Since the mass of the Sun is much larger than those of the planets, the force acting on each planet is principally due to the Sun; the gravity of the planets for each other can be neglected to first approximation.
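As a small numerical illustration of this size ratio, the following Python sketch uses rounded values for the Sun and Jupiter (chosen here as an example rather than taken from the text) to compute the sizes of the two ellipses about the common centre of mass.

M_sun = 1.989e30    # mass of the Sun, kg
m_jup = 1.898e27    # mass of Jupiter, kg
a_rel = 7.785e11    # Jupiter's semi-major axis relative to the Sun, m

# Each body orbits the common centre of mass; the sizes of the two ellipses
# are in the ratio m/M, with the larger mass moving on the smaller ellipse.
a_sun = a_rel * m_jup / (M_sun + m_jup)
a_jup = a_rel * M_sun / (M_sun + m_jup)

print(a_sun)            # about 7.4e8 m, roughly one solar radius
print(a_jup)            # about 7.8e11 m
print(a_sun / a_jup)    # about 9.5e-4, equal to m_jup / M_sun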
Apsidal precession
If the potential energy between the two bodies is not exactly the 1/r potential of Newton's gravitational law but differs only slightly, then the ellipse of the orbit gradually rotates (among other possible effects). This apsidal precession is observed for all the planets orbiting the Sun, primarily due to the oblateness of the Sun (it is not perfectly spherical) and the attractions of the other planets to one another. The apsides are the two points of closest and furthest distance of the orbit (the periapsis and apoapsis, respectively); apsidal precession corresponds to the rotation of the line joining the apsides. It also corresponds to the rotation of the Laplace–Runge–Lenz vector, which points along the line of apsides.
Newton's law of gravitation soon became accepted because it gave very accurate predictions of the motion of all the planets. These calculations were carried out initially by Pierre-Simon Laplace in the late 18th century, and refined by Félix Tisserand in the later 19th century. Conversely, if Newton's law of gravitation did not predict the apsidal precessions of the planets accurately, it would have to be discarded as a theory of gravitation. Such an anomalous precession was observed in the second half of the 19th century.
Anomalous precession of Mercury
In 1859, Urbain Le Verrier discovered that the orbital precession of the planet Mercury was not quite what it should be; the ellipse of its orbit was rotating (precessing) slightly faster than predicted by the traditional theory of Newtonian gravity, even after all the effects of the other planets had been accounted for. The effect is small (roughly 43 arcseconds of rotation per century), but well above the measurement error (roughly 0.1 arcseconds per century). Le Verrier realized the importance of his discovery immediately, and challenged astronomers and physicists alike to account for it. Several classical explanations were proposed, such as interplanetary dust, unobserved oblateness of the Sun, an undetected moon of Mercury, or a new planet named Vulcan. After these explanations were discounted, some physicists were driven to the more radical hypothesis that Newton's inverse-square law of gravitation was incorrect. For example, some physicists proposed a power law with an exponent that was slightly different from 2.
Others argued that Newton's law should be supplemented with a velocity-dependent potential. However, this implied a conflict with Newtonian celestial dynamics. In his treatise on celestial mechanics, Laplace had shown that if the gravitational influence does not act instantaneously, then the motions of the planets themselves will not exactly conserve momentum (and consequently some of the momentum would have to be ascribed to the mediator of the gravitational interaction, analogous to ascribing momentum to the mediator of the electromagnetic interaction.) As seen from a Newtonian point of view, if gravitational influence does propagate at a finite speed, then at all points in time a planet is attracted to a point where the Sun was some time before, and not towards the instantaneous position of the Sun. On the assumption of the classical fundamentals, Laplace had shown that if gravity would propagate at a velocity on the order of the speed of light then the solar system would be unstable, and would not exist for a long time. The observation that the solar system is old enough allowed him to put a lower limit on the speed of gravity that turned out to be many orders of magnitude faster than the speed of light.
Laplace's estimate for the speed of gravity is not correct in a field theory which respects the principle of relativity. Since electric and magnetic fields combine, the attraction of a point charge which is moving at a constant velocity is towards the extrapolated instantaneous position, not to the apparent position it seems to occupy when looked at. To avoid those problems, between 1870 and 1900 many scientists used the electrodynamic laws of Wilhelm Eduard Weber, Carl Friedrich Gauss, Bernhard Riemann to produce stable orbits and to explain the perihelion shift of Mercury's orbit. In 1890, Maurice Lévy succeeded in doing so by combining the laws of Weber and Riemann, whereby the speed of gravity is equal to the speed of light in his theory. And in another attempt Paul Gerber (1898) even succeeded in deriving the correct formula for the perihelion shift (which was identical to that formula later used by Einstein). However, because the basic laws of Weber and others were wrong (for example, Weber's law was superseded by Maxwell's theory), those hypotheses were rejected. Another attempt by Hendrik Lorentz (1900), who already used Maxwell's theory, produced a perihelion shift which was too low.
Einstein's theory of general relativity
Around 1904–1905, the works of Hendrik Lorentz, Henri Poincaré and finally Albert Einstein's special theory of relativity excluded the possibility of propagation of any effects faster than the speed of light. It followed that Newton's law of gravitation would have to be replaced with another law, compatible with the principle of relativity, while still obtaining the Newtonian limit for circumstances where relativistic effects are negligible. Such attempts were made by Henri Poincaré (1905), Hermann Minkowski (1907) and Arnold Sommerfeld (1910). In 1907 Einstein came to the conclusion that to achieve this a successor to special relativity was needed. From 1907 to 1915, Einstein worked towards a new theory, using his equivalence principle as a key concept to guide his way. According to this principle, a uniform gravitational field acts equally on everything within it and, therefore, cannot be detected by a free-falling observer. Conversely, all local gravitational effects should be reproducible in a linearly accelerating reference frame, and vice versa. Thus, gravity acts like a fictitious force such as the centrifugal force or the Coriolis force, which result from being in an accelerated reference frame; all fictitious forces are proportional to the inertial mass, just as gravity is. To effect the reconciliation of gravity and special relativity and to incorporate the equivalence principle, something had to be sacrificed; that something was the long-held classical assumption that our space obeys the laws of Euclidean geometry, e.g., that the Pythagorean theorem is true experimentally. Einstein used a more general geometry, pseudo-Riemannian geometry, to allow for the curvature of space and time that was necessary for the reconciliation; after eight years of work (1907–1915), he succeeded in discovering the precise way in which space-time should be curved in order to reproduce the physical laws observed in Nature, particularly gravitation. Gravity is distinct from the fictitious centrifugal and Coriolis forces in the sense that the curvature of spacetime is regarded as physically real, whereas the fictitious forces are not regarded as forces. The very first solutions of his field equations explained the anomalous precession of Mercury and predicted an unusual bending of light, which was confirmed after his theory was published. These solutions are explained below.
General relativity, special relativity and geometry
In the normal Euclidean geometry, triangles obey the Pythagorean theorem, which states that the squared distance ds² between two points in space is the sum of the squares of its perpendicular components

ds² = dx² + dy² + dz²
where dx, dy and dz represent the infinitesimal differences between the x, y and z coordinates of two points in a Cartesian coordinate system. Now imagine a world in which this is not quite true; a world where the distance is instead given by

ds² = F(x, y, z) dx² + G(x, y, z) dy² + H(x, y, z) dz²
where F, G and H are arbitrary functions of position. It is not hard to imagine such a world; we live on one. The surface of the earth is curved, which is why it is impossible to make a perfectly accurate flat map of the earth. Non-Cartesian coordinate systems illustrate this well; for example, in the spherical coordinates (r, θ, φ), the Euclidean distance can be written

ds² = dr² + r² dθ² + r² sin²θ dφ²
Another illustration would be a world in which the rulers used to measure length were untrustworthy, rulers that changed their length with their position and even their orientation. In the most general case, one must allow for cross-terms when calculating the distance ds

ds² = gxx dx² + gxy dx dy + gxz dx dz + gyx dy dx + gyy dy² + gyz dy dz + gzx dz dx + gzy dz dy + gzz dz²
where the nine functions gxx, gxy, ..., gzz constitute the metric tensor, which defines the geometry of the space in Riemannian geometry. In the spherical-coordinates example above, there are no cross-terms; the only nonzero metric tensor components are grr = 1, gθθ = r² and gφφ = r² sin²θ.
In his special theory of relativity, Albert Einstein showed that the distance ds between two spatial points is not constant, but depends on the motion of the observer. However, there is a measure of separation between two points in space-time — called "proper time" and denoted with the symbol dτ — that is invariant; in other words, it does not depend on the motion of the observer.
which may be written in spherical coordinates as
This formula is the natural extension of the Pythagorean theorem and similarly holds only when there is no curvature in space-time. In general relativity, however, space and time may have curvature, so this distance formula must be modified to a more general form
just as we generalized the formula to measure distance on the surface of the Earth. The exact form of the metric gμν depends on the gravitating mass, momentum and energy, as described by the Einstein field equations. Einstein developed those field equations to match the then known laws of Nature; however, they predicted never-before-seen phenomena (such as the bending of light by gravity) that were confirmed later.
Geodesic equation
According to Einstein's theory of general relativity, particles of negligible mass travel along geodesics in the space-time. In uncurved space-time, far from a source of gravity, these geodesics correspond to straight lines; however, they may deviate from straight lines when the space-time is curved. The equation for the geodesic lines is
where Γ represents the Christoffel symbol and the variable q parametrizes the particle's path through space-time, its so-called world line. The Christoffel symbol depends only on the metric tensor gμν, or rather on how it changes with position. The variable q is a constant multiple of the proper time τ for timelike orbits (which are traveled by massive particles), and is usually taken to be equal to it. For lightlike (or null) orbits (which are traveled by massless particles such as the photon), the proper time is zero and, strictly speaking, cannot be used as the variable q. Nevertheless, lightlike orbits can be derived as the ultrarelativistic limit of timelike orbits, that is, the limit as the particle mass m goes to zero while holding its total energy fixed.
Schwarzschild solution
An exact solution to the Einstein field equations is the Schwarzschild metric, which corresponds to the external gravitational field of a stationary, uncharged, non-rotating, spherically symmetric body of mass M. It is characterized by a length scale rs, known as the Schwarzschild radius, which is defined by the formula

rs = 2GM/c²
where G is the gravitational constant. The classical Newtonian theory of gravity is recovered in the limit as the ratio rs/r goes to zero. In that limit, the metric returns to that defined by special relativity.
In practice, this ratio is almost always extremely small. For example, the Schwarzschild radius rs of the Earth is roughly 9 mm (about 3/8 inch); at the surface of the Earth, the corrections to Newtonian gravity are only one part in a billion. The Schwarzschild radius of the Sun is much larger, roughly 2953 meters, but at its surface, the ratio rs/r is roughly 4 parts in a million. A white dwarf star is much denser, but even here the ratio at its surface is roughly 250 parts in a million. The ratio only becomes large close to ultra-dense objects such as neutron stars (where the ratio is roughly 50%) and black holes.
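As a quick numerical check of these figures, the Python sketch below evaluates rs = 2GM/c² for the Earth and the Sun using rounded constants; the values are approximate and chosen for illustration.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg):
    # r_s = 2GM/c^2, in metres
    return 2 * G * mass_kg / c**2

bodies = {
    "Earth": (5.972e24, 6.371e6),   # mass in kg, mean radius in m
    "Sun":   (1.989e30, 6.957e8),
}

for name, (mass, radius) in bodies.items():
    rs = schwarzschild_radius(mass)
    print(name, rs, rs / radius)
# Earth: r_s ~ 8.9e-3 m (about 9 mm), r_s/r ~ 1.4e-9 (one part in a billion)
# Sun:   r_s ~ 2.95e3 m,              r_s/r ~ 4.2e-6 (about 4 parts in a million)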
Orbits about the central mass
The orbit of a test particle of infinitesimal mass about the central mass is given by the equation of motion
where h is the specific relative angular momentum and μ is the reduced mass. This can be converted into an equation for the orbit
where, for brevity, two length-scales, a and b, have been introduced. They are constants of the motion and depend on the initial conditions (position and velocity) of the test particle. Hence, the solution of the orbit equation is
Effective radial potential energy
The equation of motion for the particle derived above
can be rewritten using the definition of the Schwarzschild radius rs as
which is equivalent to a particle moving in a one-dimensional effective potential
The first two terms are well-known classical energies, the first being the attractive Newtonian gravitational potential energy and the second corresponding to the repulsive "centrifugal" potential energy; however, the third term is an attractive energy unique to general relativity. As shown below and elsewhere, this inverse-cubic energy causes elliptical orbits to precess gradually by an angle δφ per revolution

δφ ≈ 6πGM / [c²A(1 − e²)] = 3πrs / [A(1 − e²)]
where A is the semi-major axis and e is the eccentricity. Here δφ is not the change in the φ-coordinate in (t, r, θ, φ) coordinates but the change in the argument of periapsis of the classical closed orbit.
The third term is attractive and dominates at small r values, giving a critical inner radius rinner at which a particle is drawn inexorably inwards to r = 0; this inner radius is a function of the particle's angular momentum per unit mass or, equivalently, the length-scale a defined above.
Circular orbits and their stability
The effective potential V can be re-written in terms of the length a = h/c:
Circular orbits are possible when the effective force is zero:
i.e., when the two attractive forces—Newtonian gravity (first term) and the attraction unique to general relativity (third term)—are exactly balanced by the repulsive centrifugal force (second term). There are two radii at which this balancing can occur, denoted here as rinner and router:

rinner = (a²/rs) [1 − √(1 − 3rs²/a²)]
router = (a²/rs) [1 + √(1 − 3rs²/a²)]
which are obtained using the quadratic formula. The inner radius rinner is unstable, because the attractive third force strengthens much faster than the other two forces when r becomes small; if the particle slips slightly inwards from rinner (where all three forces are in balance), the third force dominates the other two and draws the particle inexorably inwards to r = 0. At the outer radius, however, the circular orbits are stable; the third term is less important and the system behaves more like the non-relativistic Kepler problem.
When a is much greater than rs (the classical case), these formulae become approximately
Substituting the definitions of a and rs into router yields the classical formula for a particle of mass m orbiting a body of mass M.
The following equation

router³ = GM/ωφ²
where ωφ is the orbital angular speed of the particle, is obtained in non-relativistic mechanics by setting the centrifugal force equal to the Newtonian gravitational force:
where μ is the reduced mass.
In our notation, the classical orbital angular speed equals
At the other extreme, when a² approaches 3rs² from above, the two radii converge to a single value

rinner = router = 3rs
The quadratic solutions above ensure that router is always greater than 3rs, whereas rinner lies between 3/2 rs and 3rs. Circular orbits smaller than 3/2 rs are not possible. For massless particles, a goes to infinity, implying that there is a circular orbit for photons at rinner = 3/2 rs. The sphere of this radius is sometimes known as the photon sphere.
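A brief numerical sketch of these circular-orbit radii is given below, using the expressions for rinner and router quoted above; the masses and angular momenta are arbitrary illustrative values.

import math

G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s

def circular_orbit_radii(M, h):
    # Inner (unstable) and outer (stable) circular-orbit radii for a particle
    # with specific angular momentum h about a mass M, from
    # r = (a^2/r_s) * (1 -/+ sqrt(1 - 3 r_s^2 / a^2)) with a = h/c.
    rs = 2 * G * M / c**2
    a = h / c
    disc = 1 - 3 * rs**2 / a**2
    if disc < 0:
        return None, None           # no circular orbits exist
    root = math.sqrt(disc)
    return (a**2 / rs) * (1 - root), (a**2 / rs) * (1 + root)

M_sun = 1.989e30
rs = 2 * G * M_sun / c**2

# Marginal case a^2 = 3 r_s^2: both radii merge at 3 r_s (the innermost stable orbit)
print(circular_orbit_radii(M_sun, math.sqrt(3) * rs * c))

# Large angular momentum (a = 100 r_s): r_inner ~ 1.5 r_s, r_outer ~ 2 a^2 / r_s
r_in, r_out = circular_orbit_radii(M_sun, 100 * rs * c)
print(r_in / rs, r_out / rs)        # about 1.5 and about 20000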
Precession of elliptical orbits
The orbital precession rate may be derived using this radial effective potential V. A small radial deviation from a circular orbit of radius router will oscillate in a stable manner with an angular frequency
which equals
Taking the square root of both sides and expanding using the binomial theorem yields the formula
Multiplying by the period T of one revolution gives the precession of the orbit per revolution
where we have used ωφT = 2π and the definition of the length-scale a. Substituting the definition of the Schwarzschild radius rs gives
This may be simplified using the elliptical orbit's semi-major axis A and eccentricity e related by the formula
to give the precession angle
Since the closed classical orbit is an ellipse in general, the quantity A(1 − e2) is the semi-latus rectum l of the ellipse.
Hence, the final formula of angular apsidal precession for a unit complete revolution is

δφ = 3πrs / l = 6πGM / [c²A(1 − e²)]
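Evaluating this result for Mercury with rounded orbital parameters (taken as illustrative inputs, not from this article) reproduces the roughly 43 arcseconds per century quoted earlier; a short Python sketch:

import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

A = 5.791e10         # Mercury's semi-major axis, m
e = 0.2056           # Mercury's orbital eccentricity
T_orbit = 87.969     # Mercury's orbital period, days

# Relativistic apsidal advance per revolution: 6*pi*G*M / (c^2 * A * (1 - e^2))
dphi = 6 * math.pi * G * M_sun / (c**2 * A * (1 - e**2))

orbits_per_century = 36525 / T_orbit
arcsec_per_century = math.degrees(dphi * orbits_per_century) * 3600
print(dphi, arcsec_per_century)   # about 5.0e-7 rad per orbit, about 43 arcsec per century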
Beyond the Schwarzschild solution
Post-Newtonian expansion
In the Schwarzschild solution, it is assumed that the larger mass M is stationary and it alone determines the gravitational field (i.e., the geometry of space-time) and, hence, the lesser mass m follows a geodesic path through that fixed space-time. This is a reasonable approximation for photons and the orbit of Mercury, which is roughly 6 million times lighter than the Sun. However, it is inadequate for binary stars, in which the masses may be of similar magnitude.
The metric for the case of two comparable masses cannot be solved in closed form and therefore one has to resort to approximation techniques such as the post-Newtonian approximation or numerical approximations. In passing, we mention one particular exception in lower dimensions (see R = T model for details). In (1+1) dimensions, i.e. a space made of one spatial dimension and one time dimension, the metric for two bodies of equal masses can be solved analytically in terms of the Lambert W function. However, the gravitational energy between the two bodies is exchanged via dilatons rather than gravitons which require three-space in which to propagate.
The post-Newtonian expansion is a calculational method that provides a series of ever more accurate solutions to a given problem. The method is iterative; an initial solution for particle motions is used to calculate the gravitational fields; from these derived fields, new particle motions can be calculated, from which even more accurate estimates of the fields can be computed, and so on. This approach is called "post-Newtonian" because the Newtonian solution for the particle orbits is often used as the initial solution.
The theory can be divided into two parts: first one finds the two-body effective potential that captures the GR corrections to the Newtonian potential. Secondly, one should solve the resulting equations of motion.
Modern computational approaches
Einstein's equations can also be solved on a computer using sophisticated numerical methods. Given sufficient computer power, such solutions can be more accurate than post-Newtonian solutions. However, such calculations are demanding because the equations must generally be solved in a four-dimensional space. Nevertheless, beginning in the late 1990s, it became possible to solve difficult problems such as the merger of two black holes, which is a very difficult version of the Kepler problem in general relativity.
Gravitational radiation
If there is no incoming gravitational radiation, according to general relativity, two bodies orbiting one another will emit gravitational radiation, causing the orbits to gradually lose energy.
The formulae describing the loss of energy and angular momentum due to gravitational radiation from the two bodies of the Kepler problem have been calculated. The rate of losing energy (averaged over a complete orbit) is given by
where e is the orbital eccentricity and a is the semimajor axis of the elliptical orbit. The angular brackets on the left-hand side of the equation represent the averaging over a single orbit. Similarly, the average rate of losing angular momentum equals
The rate of period decrease is given by
where Pb is orbital period.
The losses in energy and angular momentum increase significantly as the eccentricity approaches one, i.e., as the ellipse of the orbit becomes ever more elongated. The radiation losses also increase significantly with a decreasing size a of the orbit.
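The explicit loss formulae are lengthy; as an illustrative check, the Python sketch below applies the standard quadrupole (Peters) expression for the orbital period decay to rounded published parameters of PSR B1913+16. Both the formula and the parameter values are standard results assumed here rather than taken from this article.

import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

# Rounded parameters of the binary pulsar PSR B1913+16
m1 = 1.441 * M_sun   # pulsar mass
m2 = 1.387 * M_sun   # companion mass
Pb = 27906.98        # orbital period, s (about 7.75 hours)
e = 0.6171           # orbital eccentricity

# Eccentricity enhancement factor of the quadrupole formula
f_e = (1 + (73 / 24) * e**2 + (37 / 96) * e**4) / (1 - e**2)**3.5

# Averaged orbital period decay dPb/dt (dimensionless, seconds per second)
dPb_dt = (-(192 * math.pi / 5) * G**(5 / 3) / c**5
          * (Pb / (2 * math.pi))**(-5 / 3) * f_e
          * m1 * m2 / (m1 + m2)**(1 / 3))

print(dPb_dt)   # about -2.4e-12, close to the decay observed for PSR B1913+16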
See also
Binet equation
Center of mass (relativistic)
Gravitational two-body problem
Kepler problem
Newton's theorem of revolving orbits
Schwarzschild geodesics
Notes
References
Bibliography
(See Gravitation (book).)
External links
Animation showing relativistic precession of stars around the Milky Way supermassive black hole
Excerpt from Reflections on Relativity by Kevin Brown.
Exact solutions in general relativity | Two-body problem in general relativity | [
"Mathematics"
] | 5,207 | [
"Exact solutions in general relativity",
"Mathematical objects",
"Equations"
] |
11,694,732 | https://en.wikipedia.org/wiki/Phragmosis | Phragmosis is any method by which an animal defends itself in its burrow, by using its own body as a barrier. This term was originally coined by W.M. Wheeler (1927), while describing the defensive technique exhibited by insects. Wheeler observed the positioning of specially modified body structures to block nest entrances, as exhibited in various insect species. The term phragmosis has since been further extended beyond just insects.
Examples of phragmosis are found in the order Anura (frogs and toads). Some species, such as Pternohyla fodiens and Corythomantis greeningi, have evolved a peculiarly casqued head adapted to protect the animal as it backs down a hole. Another example is the head-plug defense used by the aphid Astegopteryx sp., in which a banana-bunch shaped gall consisting of several subgalls is used as a barrier. Arguably, the most commonly observed phragmotic behaviour is within the ant family. The behaviour is displayed in numerous taxa such as Camponotus, Colobostruma, Crematogaster, Pheidole, Blepharidatta, Cephalotes pusillus, Carebara elmenteitae, Stenamma expolitum, in which the soldiers have unusually large, disc-shaped heads, which are used to block nest entrances against intruders.
In Anura
Corythomantis greeningi
Anurans comprise a diverse group of largely carnivorous, short-bodied, tailless amphibians. Within this group, some frogs are characterized by a peculiar casqued head, with the skin co-ossified with the underlying bones. This type of skull is generally associated with phragmotic behaviour, in which the animal enters a hole and blocks the entrance with its head.
Recent studies of Corythomantis greeningi, a casque-headed tree frog from semi-arid areas, have provided substantial information regarding the water economy associated with co-ossification of the head. Due to the arid environments of most casque-headed anurans, it has been proposed that head co-ossification, together with phragmotic behaviour confer protection against water loss. Upon further investigation, it has been found that cranial co-ossification contributes little to conservation of water, but instead has a primary role of defence. This type of skull morphology primarily acts to protect the animal against predators, and in doing so, leads to an indirect enhancement of water balance within the body.
In the study conducted by Jared et al. (1999) and Navas, Jared & Antoniazzi (2002), C. greeningi demonstrated the ability to enter test tubes backwards and close the entrance with their heads, a behaviour termed 'experimental phragmosis'. The study found that while phragmotic behaviour does not provide a significant reduction in water evaporation, it is important for preventing desiccation. It was concluded that in C. greeningi, the co-ossified head likely evolved originally as a protective lid for phragmotic individuals, but does aid in reducing water permeability through the head.
Pternohyla fodiens
The Mexican hylid casque-headed frog, Pternohyla fodiens, uses the head casque to close the entrance of its refuge in a tree cavity by deflecting the head. Because it frequently forages on the ground, this species often makes use of vertical burrows already extant in the ground layer as well. Upon the arrival of an intruder, P. fodiens assumes an immobile position – the head tipped back, with the entire body held in a gentle arch. The eyes close tightly, the fore-legs are brought forward and upward, and the hind-legs are flexed upward. By exhibiting this phragmotic habit during the interaction, the frog is more likely to avoid predation.
In gall-forming aphids
The aphid Astegopteryx sp. exhibits a head-plugging defense by forming a banana-bunch shaped gall, consisting of several subgalls, on Styrax benzoin. The soldier aphids of Astegopteryx are characterized by their sclerotic, protruded heads, covered in many spine-like setae. Several soldiers cooperate with one another to plug the ostiole of the subgall, utilizing their specialized morphology.
In a study by Kurosu et al. (2005), 90.8% of the 173 ostioles examined were plugged, with no space among the guarding soldiers. At several of the plugged ostioles, male intruders were found outside the phragmotic plug, attempting to enter. All intruders were blocked by the guarding soldiers, and it was nearly impossible for them to enter the subgall.
Astegopteryx soldiers effectively defend their subgall by plugging the ostiole nearly completely with their sclerotic, spiny heads, which are very likely to have evolved for that purpose.
In ants
Phragmotic-headed ants prevent intruders from entering nests by blocking the entrances, or by pushing them out of entrance galleries.
Within the Neotropical species, Blepharidatta conops, queens are characterized by shield-like heads, and appear to secrete fibrous material. The material acts as a coating and eventually accumulates into a dense tangle of material, creating a disk over the head. When nests are visited, or inhabited by predators (especially beetles), the entrance is quickly blocked by the peculiar phragmotic disk of the queen. This modification of the body enables the queen to act as a living gate to the brood chamber.
Phragmosis in ants has evolved independently in the diverse ant genera Camponotus Mayr (Hypercolobopsis), Colobopsis Mayr, Cephalotes Latreille, Colobostruma Wheeler (C. leae), Crematogaster Lund (Colobocrema) and Pheidole Westwood (P. colobopsis, P. lamia), but also in other genera, such as Blepharidatta Smith (B. conops), Tetraponera Smith (T. phragmotica) and Carebara Westwood. The behaviour is most developed in the genus Cephalotes, where all castes (both queens and workers) have highly adapted head morphologies. The shield-like armor which characterizes this behaviour enables plugging of nest entrances without exposing the eyes, antennae or mandibles to any potential intruders.
In spiders
The trapdoor spider Cyclocosmia has an abdomen ending in a hardened disc that it uses to plug the entrance to its burrow.
References
Ethology | Phragmosis | [
"Biology"
] | 1,409 | [
"Behavioural sciences",
"Ethology",
"Behavior"
] |
11,694,800 | https://en.wikipedia.org/wiki/Layer%20element | Layers were the core of a method of dynamic HTML programming specific to Netscape 4. Each layer was treated as a separate document object in JavaScript. The content could be included in the same file within the non-standard element (or any other element with the positioning set to "absolute" via CSS) or loaded from a separate file with or . It could also be generated via JavaScript with the constructor. The content would then be inserted into the layer with .
But in modern browsers, the functionality of layers is provided by an absolutely positioned <div> or, for loading the content from an external file, an <iframe>.
At the height of the Browser Wars, Netscape 4 and Internet Explorer had significantly different JavaScript implementations. Thus, layers could be used for browser detection. A JavaScript program would very often need to run different blocks of code, depending on the browser. To decide which blocks of code to run, a JavaScript program could test for support for layers, regardless of whether the program involved layers at all. Namely,
if (document.layers) {
// ...code that would be executed only by Netscape browsers...
} else {
// ...code that would be executed only by Internet Explorer...
}
References
Netscape: Dynamic HTML in Netscape Communicator (On the Internet Archive)
HTML tags
Web 1.0 | Layer element | [
"Technology"
] | 284 | [
"Computing stubs",
"World Wide Web stubs"
] |
11,694,965 | https://en.wikipedia.org/wiki/Australian%20Academy%20of%20Technology%20and%20Engineering | The Australian Academy of Technology and Engineering (ATSE) is an independent learned academy that helps Australians understand and use technology to solve complex problems.
History
The Australian Academy of Technological Sciences was founded by Ian McLennan in 1975 in Melbourne.
In 1987 the name was lengthened to include engineering, as the Australian Academy of Technological Sciences and Engineering. In 2015, the Academy adopted a new business name, the Australian Academy of Technology and Engineering, reserving the Australian Academy of Technological Sciences and Engineering as its company name.
Organisation
ATSE operates as an independent, non-government, not-for-profit, chartered organisation.
It is composed of nearly 900 fellows, bringing together Australia’s leading experts in applied science, technology, and engineering to provide impartial, practical and evidence-based advice on how to achieve sustainable solutions and advance prosperity.
The academy's governance structure consists of a board, an assembly (strategic advisory body), a number of board committees, policy-generating forums, state- and territory-based divisions, and a professional secretariat.
List of presidents
Sir Ian McLennan : 1975–1983
Sir David Zeidler : 1984–1988
Sir Rupert Myers : 1989–1994
Sir Arvi Parbo : 1995–1997
Mr M A (Tim) Besley : 1998–2002
Professor John Zillman : 2003–2006
Professor Robin Batterham : 2007–2012
Dr Alan Finkel : 2013–2015
Professor Peter Gray : 2015–2016
Professor Hugh Bradlow : 2016–2022
Dr Katherine Woodthorpe : 2022–present
Fellowship
Royal Fellow
The academy inducted its Royal Fellow, Prince Philip, Duke of Edinburgh KG KT OM GBE AK PC FRS FAA FTSE, in 1977.
Foundation fellows
Foundation fellows include:
Graeme Bird
John Christian
Bob Durie
Keith Farrer
John Gladstones
Antoni Karbowiak
Philip Law
Alec Lazenby
Ian McLennan – Foundation president
Robert Muncey
Mark Oliphant
June Olley
David Solomon
James Vernon
Bob Ward
Prof Howard Worner
Honorary fellows
Honorary fellows include:
Elizabeth Broderick, a former Sex Discrimination Commissioner
Dame Marie Bashir, the former Governor of NSW
Tim Andrew Fischer, the former Deputy Prime Minister
John Landy, the former Olympian and Governor of Victoria
David Hurley, Governor-General of Australia and former Governor of NSW
John Anderson, the former Deputy Prime Minister
Fellows
Clunies-Ross Award
Founded in 1959 to perpetuate the memory of Sir Ian Clunies Ross, the Ian Clunies Ross Memorial Foundation promoted the development of science and technology for the benefit of Australia.
In November 2002, the Foundation was brought under the Academy's umbrella, securing the long-term future of the Awards. It became known as the Clunies Ross Foundation.
The Foundation established the Clunies Ross National Science & Technology Award in 1991. The Foundation was disbanded in 2004 and the Awards are now administered by the Academy in three categories.
See also
Office of the Chief Scientist (Australia)
References
External links
Address by the Governor-General at the ATSE Clunies Ross Awards ceremony, 19 May 2011
Scientific organisations based in Australia
Technological Sciences and Engineering
Australian National Academies
1987 establishments in Australia
National academies of engineering | Australian Academy of Technology and Engineering | [
"Engineering"
] | 646 | [
"National academies of engineering"
] |
11,695,358 | https://en.wikipedia.org/wiki/On%20the%20Equilibrium%20of%20Heterogeneous%20Substances | In the history of thermodynamics, "On the Equilibrium of Heterogeneous Substances" is a 300-page paper written by American chemical physicist Willard Gibbs. It is one of the founding papers in thermodynamics, along with German physicist Hermann von Helmholtz's 1882 paper "Thermodynamik chemischer Vorgänge." Together they form the foundation of chemical thermodynamics as well as a large part of physical chemistry.
Gibbs's paper marked the beginning of chemical thermodynamics by integrating chemical, physical, electrical, and electromagnetic phenomena into a coherent system. It introduced concepts such as chemical potential, phase rule, and more, which form the basis for modern physical chemistry. American writer Bill Bryson describes Gibbs's paper as "the Principia of thermodynamics".
"On the Equilibrium of Heterogeneous Substances", was originally published in a relatively obscure American journal, the Transactions of the Connecticut Academy of Arts and Sciences, in several parts, during the years 1875 to 1878 (although most cite "1876" as the key year). It remained largely unknown until translated into German by Wilhelm Ostwald and into French by Henry Louis Le Châtelier.
Overview
Gibbs first contributed to mathematical physics with two papers published in 1873 in the Transactions of the Connecticut Academy of Arts and Sciences on "Graphical Methods in the Thermodynamics of Fluids," and "Method of Geometrical Representation of the Thermodynamic Properties of Substances by means of Surfaces." His subsequent and most important publication was "On the Equilibrium of Heterogeneous Substances" (in two parts, 1876 and 1878). In this monumental, densely woven, 300-page treatise, the first law of thermodynamics, the second law of thermodynamics, and the fundamental thermodynamic relation are applied to the prediction and quantification of thermodynamic reaction tendencies in any thermodynamic system in a visual, three-dimensional graphical language of Lagrangian mechanics and phase transitions, among others. As stated by Le Chatelier, it "founded a new department of chemical science that is becoming comparable in importance to that created by [Antoine] Lavoisier." This work was translated into German by Ostwald (who styled its author the "founder of chemical energetics") in 1891 and into French by Le Châtelier in 1899.
Gibbs's Equilibrium paper is considered one of the greatest achievements in physical science in the 19th century and one of the foundations of the science of physical chemistry. In these papers Gibbs applied thermodynamics to the interpretation of physicochemical phenomena and showed the explanation and interrelationship of what had been known only as isolated, inexplicable facts.
Gibbs' papers on heterogeneous equilibria included:
Some chemical potential concepts
Some free energy concepts
The Gibbsian ensemble idea (the basis of the field of statistical mechanics)
the phase rule
References
External links
At the Internet Archive, Part 1 and Part 2 in various file formats.
Thermodynamics literature
1870s in science
1876 in science
Works originally published in American magazines
1876 non-fiction books
Works originally published in science and technology magazines
Physics papers | On the Equilibrium of Heterogeneous Substances | [
"Physics",
"Chemistry"
] | 674 | [
"Thermodynamics literature",
"Thermodynamics"
] |
11,695,384 | https://en.wikipedia.org/wiki/Abundisporus%20quercicola | Abundisporus quercicola is a species of bracket fungus that grows on living oaks in temperate forests in the foothills of the Himalaya (People's Republic of China). The fruit bodies are perennial, grey to black above with concentric markings and white below. The fungus grows to 7 cm wide and 5 cm thick, projecting up to 5 cm from the substrate. The basidiospores are yellow.
References
Polyporaceae
Fungi described in 2002
Fungi of China
Taxa named by Yu-Cheng Dai
Fungus species | Abundisporus quercicola | [
"Biology"
] | 109 | [
"Fungi",
"Fungus species"
] |
11,695,492 | https://en.wikipedia.org/wiki/Acetone%E2%80%93butanol%E2%80%93ethanol%20fermentation | Acetone–butanol–ethanol (ABE) fermentation, also known as the Weizmann process, is a process that uses bacterial fermentation to produce acetone, n-butanol, and ethanol from carbohydrates such as starch and glucose. It was developed by chemist Chaim Weizmann and was the primary process used to produce acetone, which was needed to make cordite, a substance essential for the British war industry during World War I.
Process
The process may be likened to how yeast ferments sugars to produce ethanol for wine, beer, or fuel, but the organisms that carry out the ABE fermentation are strictly anaerobic (obligate anaerobes). The ABE fermentation produces solvents in a ratio of 3 parts acetone to 6 parts butanol to 1 part ethanol. It usually uses a strain of bacteria from the class Clostridia (family Clostridiaceae). Clostridium acetobutylicum is the most well-studied and widely used. Although less effective, Clostridium beijerinckii and Clostridium saccharobutylicum bacterial strains have shown good results as well.
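As a simple illustration of this 3:6:1 split, the Python sketch below divides a hypothetical total solvent titre among the three products; the 20 g/L figure is an arbitrary example, not a value from the article.

def abe_split(total_g_per_l, ratio=(3, 6, 1)):
    # Split a total ABE solvent titre into acetone, butanol and ethanol
    # according to the classic 3:6:1 mass ratio.
    names = ("acetone", "butanol", "ethanol")
    total_parts = sum(ratio)
    return {name: total_g_per_l * parts / total_parts
            for name, parts in zip(names, ratio)}

print(abe_split(20.0))   # {'acetone': 6.0, 'butanol': 12.0, 'ethanol': 2.0} in g/L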
The ABE fermentation pathway generally proceeds in two phases. In the initial acidogenesis phase, the cells grow exponentially and accumulate acetate and butyrate. The low pH along with other factors then trigger a metabolic shift to the solventogenesis phase, in which acetate and butyrate are used to produce the solvents.
For gas stripping, the most common gases used are the off-gases from the fermentation itself, a mixture of carbon dioxide and hydrogen gas.
History
The production of butanol by biological means was first performed by Louis Pasteur in 1861. In 1905, Austrian biochemist Franz Schardinger found that acetone could similarly be produced. In 1910 Auguste Fernbach (1860–1939) developed a bacterial fermentation process using potato starch as a feedstock in the production of butanol.
Industrial exploitation of ABE fermentation started in 1916, during World War I, with Chaim Weizmann's isolation of Clostridium acetobutylicum, as described in U.S. patent 1315585.
The Weizmann process was operated by Commercial Solvents Corporation from about 1920 to 1964 with plants in the US (Terre Haute, IN, and Peoria, IL), and Liverpool, England. The Peoria plant was the largest of the three. It used molasses as feedstock and had 96 fermenters with a volume of 96,000 gallons each.
After World War II, ABE fermentation became generally non-profitable, compared to the production of the same three solvents (acetone, butanol, ethanol) from petroleum. During the 1950s and 1960s, ABE fermentation was replaced by petroleum chemical plants. Due to different raw material costs, ABE fermentation was viable in South Africa until the early 1980s, with the last plant closing in 1983. Green Biologics Ltd operated the last attempt to resurrect the process at scale but the plant closed in Minnesota in June 2019.
A new ABE biorefinery has been developed in Scotland by Celtic Renewables Ltd and will begin production in early 2022. The key difference in the process is the use of low value spent materials or residues from other processes removing the variable costs of raw feedstock crops and materials.
Improvement attempts
The most critical aspect of biomass fermentation processes is their productivity. The ABE fermentation via Clostridium beijerinckii or Clostridium acetobutylicum, for instance, is characterized by product inhibition. This means that there is a product concentration threshold that cannot be overcome, resulting in a product stream highly diluted in water.
For this reason, in order to have a comparable productivity and profitability with respect to the petrochemical processes, cost and energy effective solutions for the product purification sections are required to provide a significant product recovery at the desired purity.
The main solutions adopted during the last decades have been as follows:
The employment of less expensive raw materials, and in particular lignocellulosic waste or algae;
The microorganisms modifications or the research of new strains less sensitive to the butanol concentration poisoning to increase productivity and selectivity towards the butanol species;
The fermentation reactor optimization aimed at increasing the productivity;
The reduction of the energy costs of the separation and purification downstream processing and, in particular, to carry out the separation in-situ in the reactor;
The use of side products such as hydrogen and carbon dioxide, solid wastes and discharged microorganisms and carry out less expensive process wastewater treatments.
In the second half of the 20th century, these technologies allowed an increase in the final product concentration in the broth from 15 to 30 g/L, an increase in the final productivity from 0.46 to 4.6 g/(L*h) and an increase in the yield from 15 to 42%.
From a compound purification perspective, the main criticalities in the ABE/W product recovery are due to the water–alcohol mixture's non-ideal interactions leading to homogeneous and heterogeneous azeotropic species, as shown by the ternary equilibrium diagram.
This causes the separation by standard distillation to be particularly impractical but, on the other hand, allows the exploitation of the liquid–liquid demixing region both for analogous and alternative separation processes.
Therefore, in order to enhance the ABE fermentation yield, mainly in situ product recovery systems have been developed. These include gas stripping, pervaporation, liquid–liquid extraction, distillation via Dividing Wall Column, membrane distillation, membrane separation, adsorption, and reverse osmosis. Green Biologics Ltd. implemented many of these technologies at an industrial scale.
Moreover, unlike crude oil feedstocks, the nature of biomass fluctuates over the seasons of the year and with geographical location. For these reasons, biorefinery operations need to be not only effective but also flexible, able to switch between operating conditions rather quickly.
Current perspectives
ABE fermentation is attracting renewed interest with a focus on butanol as a renewable biofuel.
Sustainability has been a topic of major concern in recent years. The energy challenge is central to the environmentally friendly policies adopted by the most developed and industrialized countries worldwide. For this purpose Horizon 2020, the biggest EU Research and Innovation programme, was funded by the European Union over the 2014–2020 period.
The International Energy Agency defines renewables as the centre of the transition to a less carbon-intensive and more sustainable energy system. Biofuels are believed to represent around 30% of energy consumption in transport by 2060. Their role is particularly important in sectors which are difficult to decarbonise, such as aviation, shipping and other long-haul transport. That is why several bioprocesses have seen a renewed interest in recent years, both from a research and an industrial perspective.
For this reason, the ABE fermentation process has been reconsidered from a different perspective. Although it was originally conceived to produce acetone, it is now regarded as a suitable production pathway for biobutanol, which has become the product of major interest.
Biogenic butanol is a possible substitute for bioethanol, or even an improvement on it, and is already employed both as a fuel additive and as a pure fuel in place of standard gasoline because, unlike ethanol, it can be used directly and efficiently in gasoline engines. Moreover, it has the advantage that it can be shipped and distributed through existing pipelines and filling stations.
Finally, owing to its higher energy density, lower volatility, and lower hygroscopicity, biobutanol is widely used as a direct solvent for paints, coatings, varnishes, resins, dyes, camphor, vegetable oils, fats, waxes, shellac, rubbers and alkaloids. It can be produced from different kinds of cellulosic biomass and can be used for further processing of advanced biofuels such as butyl levulinate as well.
The application of n-butanol in the production of butyl acrylate has a wide scope for its expansion, which in turn would help in increasing the consumption of n-butanol globally. Butyl acrylate was the biggest n-butanol application in 2014 and is projected to be worth US$3.9 billion by 2020.
References
Fermentation | Acetone–butanol–ethanol fermentation | [
"Chemistry",
"Biology"
] | 1,783 | [
"Biochemistry",
"Cellular respiration",
"Fermentation"
] |
11,698,208 | https://en.wikipedia.org/wiki/Diphthamide | Diphthamide is a post-translationally modified histidine amino acid found in archaeal and eukaryotic elongation factor 2 (eEF-2).
Diphthamide is named after the toxin produced by the bacterium Corynebacterium diphtheriae, which targets diphthamide. Besides this toxin, it is also targeted by exotoxin A from Pseudomonas aeruginosa. It is the only target of these toxins.
Structure and biosynthesis
Diphthamide is proposed to be a 2-[3-carboxyamido-3-(trimethylammonio)propyl]histidine. Though this structure has been confirmed by X-ray crystallography, its stereochemistry is uncertain.
Diphthamide is biosynthesized from histidine and S-adenosyl methionine (SAM). The side chain bound to imidazole group and all methyl groups come from SAM. The whole synthesis takes place in three steps:
transfer of 3-amino-3-carboxypropyl group from SAM
transfer of three methyl groups from SAM – synthesis of diphtine
amidation – synthesis of diphthamide
In eukaryotes, this biosynthetic pathway contains a total of 7 genes (Dph1-7).
Biological function
Diphthamide ensures translation fidelity.
The presence or absence of diphthamide is known to affect NF-κB or death receptor pathways.
References
Amino acids
Imidazoles
Quaternary ammonium compounds
Post-translational modification
Zwitterions | Diphthamide | [
"Physics",
"Chemistry"
] | 337 | [
"Biomolecules by chemical classification",
"Matter",
"Gene expression",
"Biochemical reactions",
"Amino acids",
"Post-translational modification",
"Zwitterions",
"Ions"
] |
11,699,110 | https://en.wikipedia.org/wiki/Primakoff%20effect | In particle physics, the Primakoff effect, named after Henry Primakoff, is the resonant production of neutral pseudoscalar mesons by high-energy photons interacting with an atomic nucleus. It can be viewed as the reverse process of the decay of the meson into two photons and has been used for the measurement of the decay width of neutral mesons.
It could also take place in stars and be a production mechanism of certain hypothetical particles, such as the axion. More precisely, the Primakoff effect is the conversion of axions into photons in the presence of a very strong electromagnetic field.
The effect is predicted to lead to optical properties of the vacuum state in the presence of a strong magnetic field.
See also
Two-photon physics
References
Particle physics | Primakoff effect | [
"Physics"
] | 156 | [
"Particle physics stubs",
"Particle physics"
] |
11,699,678 | https://en.wikipedia.org/wiki/Vehicle%20frame | A vehicle frame, also historically known as its chassis, is the main supporting structure of a motor vehicle to which all other components are attached, comparable to the skeleton of an organism.
Until the 1930s, virtually every car had a structural frame separate from its body. This construction design is known as body-on-frame. By the 1960s, unibody construction in passenger cars had become common, and the trend to unibody for passenger cars continued over the ensuing decades.
Nearly all trucks, buses, and most pickups continue to use a separate frame as their chassis.
Functions
The main functions of a frame in a motor vehicle are:
To support the vehicle's mechanical components and body
To deal with static and dynamic loads without undue deflection or distortion
These include:
Weight of the body, passengers, and cargo loads.
Vertical and torsional twisting transmitted by going over uneven surfaces
Transverse lateral forces caused by road conditions, side wind, and steering of the vehicle
Torque from the engine and transmission
Longitudinal tensile forces from starting and acceleration, as well as compression from braking
Sudden impacts from collisions
Frame rails
Typically, the material used to construct vehicle chassis and frames include carbon steel for strength or aluminum alloys to achieve a more lightweight construction. In the case of a separate chassis, the frame is made up of structural elements called the rails or beams. These are ordinarily made of steel channel sections by folding, rolling, or pressing steel plate.
There are three main designs for these. If the material is folded twice, an open-ended cross-section, either C-shaped or hat-shaped (U-shaped), results.
"Boxed" frames contain closed chassis rails, either by welding them up or by using premanufactured metal tubing.
C-Shaped
By far the most common, the C-channel rail has been used on nearly every type of vehicle at one time or another. It is made by taking a flat piece of steel (usually ranging in thickness from 1/8" to 3/16", but up to 1/2" or more in some heavy-duty trucks) and rolling both sides over to form a C-shaped beam running the length of the vehicle. C-channel is typically more flexible than (fully) boxed of the same gauge.
Hat
Hat frames resemble a "U" and may be either right-side-up or inverted, with the open area facing down. They are not commonly used due to weakness and a propensity to rust. However, they can be found on 1936–1954 Chevrolet cars and some Studebakers.
Abandoned for a while, the hat frame regained popularity when companies started welding it to the bottom of unibody cars, effectively creating a boxed frame.
Boxed
Originally, boxed frames were made by welding two matching C-rails together to form a rectangular tube. Modern techniques, however, use a process similar to making C-rails in that a piece of steel is bent into four sides and then welded where both ends meet.
In the 1960s, the boxed frames of conventional American cars were spot-welded in multiple places down the seam; when turned into NASCAR "stock car" racers, the box was continuously welded from end to end for extra strength.
Design features
While appearing at first glance as a simple form made of metal, frames encounter significant stress and are built accordingly. The first issue addressed is "beam height", or the height of the vertical side of a frame. The taller the frame, the better it can resist vertical flex when force is applied to the top of the frame. This is why semi-trucks have taller frame rails than other vehicles, rather than simply thicker ones.
As looks, ride quality, and handling became more important to consumers, new shapes were incorporated into frames. The most visible of these are arches and kick-ups. Instead of running straight over both axles, arched frames sit lower—roughly level with their axles—and curve up over the axles and then back down on the other side for bumper placement. Kick-ups do the same thing without curving down on the other side and are more common on the front ends.
Another feature are the tapered rails that narrow vertically or horizontally in front of a vehicle's cabin. This is done mainly on trucks to save weight and slightly increase room for the engine since the front of the vehicle does not bear as much load as the back. Design developments include frames that use multiple shapes in the same frame rail. For example, some pickup trucks have a boxed frame in front of the cab, shorter, narrower rails underneath the cab, and regular C-rails under the bed.
On perimeter frames, the areas where the rails connect from front to center and center to rear are weak compared to regular frames, so that section is boxed in, creating what are called "torque boxes".
Types
Full under-body frames
Ladder frame
Named for its resemblance to a ladder, the ladder frame is one of the oldest, simplest, and most frequently used under-body, separate chassis/frame designs. It consists of two symmetrical beams, rails, or channels, running the length of the vehicle, connected by several transverse cross-members. Initially seen on almost all vehicles, the ladder frame was gradually phased out on cars in favor of perimeter frames and unitized body construction. It is now seen mainly on large trucks. This design offers good beam resistance because of its continuous rails from front to rear, but poor resistance to torsion or warping if simple, perpendicular cross-members are used. The vehicle's overall height will be greater due to the floor pan sitting above the frame instead of inside it.
Backbone tube
A backbone chassis is a type of automotive construction with chassis that is similar to the body-on-frame design. Instead of a relatively flat, ladder-like structure with two longitudinal, parallel frame rails, it consists of a central, strong tubular backbone (usually rectangular in cross-section) that carries the power-train and connects the front and rear suspension attachment structures. Although the backbone is frequently drawn upward into, and mostly above the floor of the vehicle, the body is still placed on or over (sometimes straddling) this structure from above.
X-frame
This is the design used for the full-size American models of General Motors in the late 1950s and early 1960s in which the rails from alongside the engine seemed to cross in the passenger compartment, each continuing to the opposite end of the crossmember at the extreme rear of the vehicle. It was specifically chosen to decrease the overall height of the vehicles regardless of the increase in the size of the transmission and propeller shaft humps since each row had to cover frame rails as well. Several models had the differential located not by the customary bar between axle and frame, but by a ball joint atop the differential connected to a socket in a wishbone hinged onto a crossmember of the frame.
The X-frame was claimed to improve on previous designs, but it lacked side rails and thus did not provide adequate side impact and collision protection. Perimeter frames replaced this design.
Perimeter frame
Similar to a ladder frame, but the middle sections of the frame rails sit outboard of the front and rear rails, routed around the passenger footwells, inside the rocker and sill panels. This allowed the floor pan to be lowered, especially the passenger footwells, lowering the passengers' seating height and thereby reducing both the roof-line and overall vehicle height, as well as the center of gravity, thus improving handling and road-holding in passenger cars.
This became the prevalent design for body-on-frame cars in the United States, but not in the rest of the world, until the unibody gained popularity. For example, Hudson introduced this construction on their 3rd generation Commodore models in 1948. This frame type allowed for annual model changes, and lower cars, introduced in the 1950s to increase sales – without costly structural changes.
The Ford Panther platform, discontinued in 2011, was one of the last perimeter frame passenger car platforms in the United States. The fourth to seventh generation Chevrolet Corvette used a perimeter frame integrated with an internal skeleton that serves as a clamshell.
In addition to a lowered roof, the perimeter frame allows lower seating positions when that is desirable, and offers better safety in the event of a side impact. However, the design lacks stiffness because the transition areas from front to center and center to rear reduce beam and torsional resistance and is used in combination with torque boxes and soft suspension settings.
Platform frame
This is a modification of the perimeter frame, or of the backbone frame, in which the passenger compartment floor, and sometimes the luggage compartment floor, have been integrated into the frame as loadbearing parts for strength and rigidity. The sheet metal used to assemble the components needs to be stamped with ridges and hollows to give it strength.
Platform chassis were used on several successful European cars, most notably the Volkswagen Beetle, where it was called "body-on-pan" construction. Another German example are the Mercedes-Benz "Ponton" cars of the 1950s and 1960s, where it was called a "frame floor" in English-language advertisements.
The French Renault 4, of which over eight million were made, also used a platform frame. The frame of the Citroën 2CV used a minimal interpretation of a platform chassis under its body.
Space frame
In a (tubular) spaceframe chassis, the suspension, engine, and body panels are attached to a three-dimensional skeletal frame of tubes, and the body panels have limited or no structural function. To maximize rigidity and minimize weight, the design frequently makes maximum use of triangles, and all the forces in each strut are either tensile or compressive, never bending, so they can be kept as thin as possible.
The first true spaceframe chassis were produced in the 1930s by Buckminster Fuller and William Bushnell Stout (the Dymaxion and the Stout Scarab) who understood the theory of the true spaceframe from either architecture or aircraft design.
The 1951 Jaguar C-Type racing sports car utilized a lightweight, multi-tubular, triangulated frame over which an aerodynamic aluminum body was crafted.
In 1994, the Audi A8 was the first mass-market car with an aluminium chassis, made feasible by integrating an aluminium space-frame into the bodywork. Audi A8 models have since used this construction method co-developed with Alcoa, and marketed as the Audi Space Frame.
The Italian term Superleggera (meaning 'super-light') was trademarked by Carrozzeria Touring for lightweight sports-car body construction that only resembles a space-frame chassis. Using a three-dimensional frame that consists of a cage of narrow tubes that, besides being under the body, run up the fenders and over the radiator, cowl, and roof, and under the rear window, it resembles a geodesic structure. A skin is attached to the outside of the frame, often made of aluminum. This body construction is, however, not stress-bearing and still requires the addition of a chassis.
Unibody
The terms "unibody" and "unit-body" are short for "unitized body", "unitary construction", or alternatively (fully) integrated body and frame/chassis. It is defined as:
Vehicle structure has shifted from the traditional body-on-frame architecture to the lighter unitized/integrated body structure that is now used for most cars.
Integral frame and body construction requires more than simply welding an unstressed body to a conventional frame. In a fully integrated body structure, the entire car is a load-carrying unit that handles all the loads experienced by the vehicle – forces from driving and cargo loads. Integral-type bodies for wheeled vehicles are typically manufactured by welding preformed metal panels and other components together, by forming or casting whole sections as one piece, or by combining these techniques. Although this is sometimes also referred to as a monocoque structure, because the car's outer skin and panels are made load-bearing, there are still ribs, bulkheads, and box sections to reinforce the body, making the description semi-monocoque more appropriate.
The first attempt to develop such a design technique was on the 1922 Lancia Lambda to provide structural stiffness and a lower body height for its torpedo car body. The Lambda had an open layout with unstressed roof, which made it less of a monocoque shell and more like a bowl. One thousand were produced.
A key role in developing the unitary body was played by the American firm the Budd Company, now ThyssenKrupp Budd. Budd supplied pressed-steel bodywork, fitted to separate frames, to automakers Dodge, Ford, Buick, and the French company, Citroën.
In 1930, Joseph Ledwinka, an engineer with Budd, designed an automobile prototype with a full unitary construction.
Citroën purchased this fully unitary body design for the Citroën Traction Avant. This high-volume, mass-production car was introduced in 1934 and sold 760,000 units over the next 23 years of production. This application was the first iteration of the modern structural integration of body and chassis, using spot welded deeply stamped steel sheets into a structural cage, including sills, pillars, and roof beams. In addition to a unitary body with no separate frame, the Traction Avant also featured other innovations such as front-wheel drive. The result was a low-slung vehicle with an open, flat-floored interior.
For the Chrysler Airflow (1934–1937), Budd supplied a variation – three main sections from the Airflow's body were welded into what Chrysler called a bridge-truss construction. Unfortunately, this method was not ideal because the panel fits were poor. To convince a skeptical public of the strength of unibody, both Citroën and Chrysler created advertising films showing cars surviving after being pushed off a cliff.
Opel was the second European and the first German car manufacturer to produce a car with a unibody structure – production of the compact Olympia started in 1935. A larger Kapitän went into production in 1938, although its front longitudinal beams were stamped separately and then attached to the main body. It was so successful that the Soviet post-war mass produced GAZ-M20 Pobeda of 1946 copied unibody structure from the Opel Kapitän. Later Soviet limousine GAZ-12 ZIM of 1950 introduced unibody design to automobiles with a wheelbase as long as 3.2 m (126 in).
The streamlined 1936 Lincoln-Zephyr with conventional front-engine, rear-wheel-drive layout utilized a unibody structure. By 1941, unit construction was no longer a new idea for cars, "but it was unheard of in the [American] low-price field [and] Nash wanted a bigger share of that market." The single unit-body construction of the Nash 600 provided weight savings and Nash's Chairman and CEO, George W. Mason was convinced "that unibody was the wave of the future."
Since then, more cars were redesigned to the unibody structure, which is now "considered standard in the industry". By 1960, the unitized body design was used by Detroit's Big Three on their compact cars (Ford Falcon, Plymouth Valiant, and Chevrolet Corvair). After Nash merged with Hudson Motors to form American Motors Corporation, its Rambler-badged automobiles continued exclusively building variations of the unibody.
Although the 1934 Chrysler Airflow had a weaker-than-usual frame and body framework welded to the chassis to provide stiffness, in 1960, Chrysler moved from body-on-frame construction to a unit-body design for most of its cars.
Most of the American-manufactured unibody automobiles used torque boxes in their vehicle design to reduce vibrations and chassis flex, except for the Chevy II, which had a bolt-on front apron (erroneously referred to as a subframe).
The unibody is now the preferred construction for mass-market automobiles. This design provides weight savings, improved space utilization, and ease of manufacture. Acceptance grew dramatically in the wake of the two energy crises of the 1970s and that of the 2000s in which compact SUVs using a truck platform (primarily the USA market) were subjected to CAFE standards after 2005 (by the late 2000s truck-based compact SUVs were phased out and replaced with crossovers). An additional advantage of a strong-bodied car lies in the improved crash protection for its passengers.
Uniframe
American Motors (with its partner Renault) during the late 1970s incorporated unibody construction when designing the Jeep Cherokee (XJ) platform using the manufacturing principles (unisides, floorplan with integrated frame rails and crumple zones, and roof panel) used in its passenger cars, such as the Hornets and all-wheel-drive Eagles for a new type of frame called the "Uniframe [...] a robust stamped steel frame welded to a strong unit-body structure, giving the strength of a conventional heavy frame with the weight advantages of Unibody construction." This design was also used with the XJC concept developed by American Motors before its absorption by Chrysler, which later became the Jeep Grand Cherokee (ZJ). The design is still used in modern-day sport utility vehicles such as the Jeep Grand Cherokee and Land Rover Defender. This design is also used in large vans such as Ford Transit, VW Crafter and Mercedes Sprinter.
Partial frames
Subframe
A subframe is a distinct structural frame component, to reinforce or complement a particular section of a vehicle's structure. Typically attached to a unibody or a monocoque, the rigid subframe can handle great forces from the engine and drive train. It can transfer them evenly to a wide area of relatively thin sheet metal of a unitized body shell. Subframes are often found at the front or rear end of cars and are used to attach the suspension to the vehicle. A subframe may also contain the engine and transmission. It normally has pressed or box steel construction but may be tubular and/or other material.
Examples of passenger car use include the 1967–1981 GM F platform, the numerous years and models built on the GM X platform (1962), GM's M/L platform vans (Chevrolet Astro/GMC Safari, which included an all-wheel drive variant), and the unibody AMC Pacer that incorporated a front subframe to isolate the passenger compartment from the engine, suspension, and steering loads.
See also
Bicycle frame
Body-on-frame
Chassis
Coachbuilder
Locomotive frame
Monocoque
Motorcycle frame
C-channel
References
External links
What Is the A-Frame on a Car?
What Is Car frame?
Automotive chassis types
Automotive technologies
Structural engineering
Structural system
Vehicle parts | Vehicle frame | [
"Technology",
"Engineering"
] | 3,856 | [
"Structural engineering",
"Building engineering",
"Structural system",
"Construction",
"Civil engineering",
"Vehicle parts",
"Components"
] |
11,699,827 | https://en.wikipedia.org/wiki/Rothschild%27s%20lobe-billed%20bird-of-paradise | Rothschild's lobe-billed bird-of-paradise (Loborhamphus nobilis), also known as the noble lobe-bill, is one of six enigmatic species of bird-of-paradise collected in Papua New Guinea for zoologist Walter Rothschild, 2nd Baron Rothschild. It is only known from the holotype.
In 1930, it, along with the five other collected species, was considered by Erwin Stresemann to be a hybrid between the long-tailed paradigalla and the superb bird-of-paradise, though doubts have been raised about the parentage. However, a DNA analysis confirmed the hybrid identity.
Notes
References
Hybrid birds of paradise
Birds of Papua New Guinea
Rothschild's lobe-billed bird-of-paradise
Taxa named by Walter Rothschild
Intergeneric hybrids
Endemic birds of New Guinea
Endemic birds of Papua New Guinea | Rothschild's lobe-billed bird-of-paradise | [
"Biology"
] | 173 | [
"Intergeneric hybrids",
"Hybrid organisms"
] |
11,700,262 | https://en.wikipedia.org/wiki/TRAU | A Transcoder and Rate Adaptation Unit (TRAU) performs the transcoding function for speech channels and RA (Rate Adaptation) for data channels in the GSM network. The Transcoder/Rate Adaptation Unit (TRAU) is the data rate conversion unit. The PSTN/ISDN switch is a switch for 64 kbit/s voice. Current technology permits decreasing the bit-rate (on the GSM radio interface it is 16 kbit/s for full rate and 8 kbit/s for half rate). Since the MSC is basically a PSTN/ISDN switch, its bit-rate is still 64 kbit/s. That is why rate conversion is required between the BSC and the MSC.
Transcoding is the compression of speech data from 64 kbit/s to 13/12.2/6.5 kbit/s in the case of FR/EFR/HR speech coding, respectively.
Rate adaptation without transcoding allows Tandem Free Operation (TFO), allowing the original encoded speech data to be carried in a 64 kbit/s channel. TFO offers benefits because transcoding can lead to a degradation of speech quality and requires computational resources.
TRAU was also the term used for the frame format used in transport of the compressed bits from these speech coders.
Brief explanation
For an MS-to-MS call, the transmission path covers the radio access network (RAN) as well as the core network (CN). Since the transmission modes and coding standards are different for RAN and CN, speech data is converted/transcoded at the transition points from RAN to CN. This conversion is performed in the TRAU network element which connects RAN and CN.
For FR (Full Rate), the net speech data rate is 13 kbit/s; channel coding adds 9.8 kbit/s of redundancy, giving a gross data rate of 22.8 kbit/s. For EFR (Enhanced Full Rate), the net rate is 12.2 kbit/s, which after adding redundancy also results in a gross data rate of 22.8 kbit/s.
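The rate bookkeeping above can be illustrated with a minimal sketch. The frame size and duration used below are illustrative assumptions consistent with a 16 kbit/s sub-channel, not figures quoted from the 3GPP specifications:

```python
# Minimal sketch of the BSC<->MSC rate conversion described above.
PCM_TIMESLOT_KBPS = 64           # PSTN/ISDN voice channel handled by the MSC
FULL_RATE_SUBCHANNEL_KBPS = 16   # full-rate sub-channel towards the BSC

# One 64 kbit/s timeslot can carry four 16 kbit/s full-rate sub-channels.
print(PCM_TIMESLOT_KBPS // FULL_RATE_SUBCHANNEL_KBPS)   # 4

# A 16 kbit/s sub-channel delivers 320 bits every 20 ms (illustrative assumption).
frame_bits, frame_seconds = 320, 0.020
print(frame_bits / frame_seconds / 1000)                 # 16.0 kbit/s
```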
See also
GSM Full Rate
GSM Half Rate
GSM Enhanced Full Rate
GSM Adaptive Multi-Rate
References
Audio format converters | TRAU | [
"Technology"
] | 455 | [
"Computing stubs"
] |
11,700,306 | https://en.wikipedia.org/wiki/The%20Electrician | The Electrician, published in London from 1861–1863 and 1878–1952, was one of the earliest and foremost electrical engineering periodicals and scientific journals. It was published in two series: The original Electrician was published for three years from 1861–1863. After a fifteen year gap, a new series of the Electrician was in print for 72 years from 1878–1952. The Electrician is currently remembered as the publisher of Oliver Heaviside's works, in particular the first publication of the telegrapher's equations, still in wide use for radio engineering.
After the periodical ceased publication in 1952, The Electrician's publishing company continued its book-publishing business, printing works on physics and electrical engineering, until 1959.
Publication history
The Electrician was originally established in 1861, it was discontinued after about three years. In 1878 a new journal with the same title was launched and thereafter published weekly.
The Electrician billed itself in the early 1860s as "a weekly journal of Telegraphy, Electricity, and Applied Chemistry" and was published by Thomas Piper.
The new Electrician that appeared in the late 1870s was published by James Gray on behalf of the proprietors, John Pender and James Anderson of the Eastern Telegraph Company, the biggest cable firm of the day and had a somewhat different focus. It described itself as "a weekly illustrated journal of electrical engineering, industry and science" and also featured more theoretical aspects of electrical engineering such as electromagnetism.
In the late nineteenth century, The Electrician Printing and Publication Company Limited was established and began publishing shorter electrical engineering texts including well-known early electrical engineering titles such as Oliver Heaviside's Electromagnetic Theory (1893-1912), Oliver Lodge's The Work of Hertz and Some of His Successors (1894), and many others. Some of these publications were based on papers presented elsewhere and published in full in The Electrician.
The new series of The Electrician quickly established itself in the field of electrical engineering and was regularly quoted and cited in Nature, Scientific American, and elsewhere.
Other Electrician magazines
Between 1889 and 1895 an American journal also called The Electrician was published in New York by Williams & Co. Often referred to as the American Electrician, it was merged into another electrical engineering periodical, Electrical World.
References
Electrical and electronic engineering journals
Academic journals established in 1861
Publications disestablished in 1952
Defunct journals of the United Kingdom
English-language journals
Weekly journals
1861 establishments in England
1862 disestablishments in England | The Electrician | [
"Engineering"
] | 501 | [
"Electrical engineering",
"Electronic engineering",
"Electrical and electronic engineering journals"
] |
11,700,418 | https://en.wikipedia.org/wiki/Bifolium | A bifolium is a quartic plane curve with equation in Cartesian coordinates: (x² + y²)² = ax²y.
Construction and equations
Given a circle C through a point O, and line L tangent to the circle at point O: for each point Q on C, define the point P such that PQ is parallel to the tangent line L, and PQ = OQ. The collection of points P forms the bifolium.
In polar coordinates, the bifolium's equation is r = a sin θ cos² θ.
The total included area is πa²/32; for a = 1, this is approximately 0.098.
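A short numerical check of this area, assuming the polar form given above; the helper name is illustrative and only the Python standard library is used:

```python
import math

def bifolium_area(a=1.0, samples=200_000):
    """Area enclosed by the bifolium r = a*sin(theta)*cos(theta)**2,
    computed as (1/2) * integral of r(theta)**2 dtheta over [0, pi] (midpoint rule)."""
    dtheta = math.pi / samples
    total = 0.0
    for i in range(samples):
        theta = (i + 0.5) * dtheta
        r = a * math.sin(theta) * math.cos(theta) ** 2
        total += 0.5 * r * r * dtheta
    return total

print(bifolium_area())   # ~0.0982
print(math.pi / 32)      # exact value pi*a^2/32 for a = 1
```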
References
External links
Bifolium at MathWorld
Plane curves
Algebraic curves | Bifolium | [
"Mathematics"
] | 124 | [
"Planes (geometry)",
"Euclidean plane geometry",
"Plane curves"
] |
11,702,062 | https://en.wikipedia.org/wiki/Problems%20involving%20arithmetic%20progressions | Problems involving arithmetic progressions are of interest in number theory, combinatorics, and computer science, both from theoretical and applied points of view.
Largest progression-free subsets
Find the cardinality (denoted by Ak(m)) of the largest subset of {1, 2, ..., m} which contains no progression of k distinct terms. The elements of the forbidden progressions are not required to be consecutive. For example, A4(10) = 8, because {1, 2, 3, 5, 6, 8, 9, 10} has no arithmetic progressions of length 4, while all 9-element subsets of {1, 2, ..., 10} have one.
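A brute-force check of the example above; the function names are illustrative, and the search is exponential in m, so it is only suitable for very small cases:

```python
from itertools import combinations

def has_k_term_ap(elements, k):
    """True if `elements` contains an arithmetic progression of k distinct terms."""
    s = set(elements)
    hi = max(s)
    for start in s:
        for d in range(1, hi):
            if all(start + i * d in s for i in range(k)):
                return True
    return False

def largest_ap_free(k, m):
    """A_k(m): size of the largest subset of {1, ..., m} with no k-term progression."""
    for size in range(m, 0, -1):
        if any(not has_k_term_ap(c, k) for c in combinations(range(1, m + 1), size)):
            return size
    return 0

print(largest_ap_free(4, 10))  # 8, matching the example above
```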
In 1936, Paul Erdős and Pál Turán posed a question related to this number and Erdős set a $1000 prize for an answer to it. The prize was collected by Endre Szemerédi for a solution published in 1975, in what has become known as Szemerédi's theorem.
Arithmetic progressions from prime numbers
Szemerédi's theorem states that a set of natural numbers of non-zero upper asymptotic density contains finite arithmetic progressions, of any arbitrary length k.
Erdős made a more general conjecture from which it would follow that
The sequence of prime numbers contains arithmetic progressions of any length.
This result was proven by Ben Green and Terence Tao in 2004 and is now known as the Green–Tao theorem.
See also Dirichlet's theorem on arithmetic progressions.
The longest known arithmetic progression of primes has length 27:
224584605939537911 + 81292139·23#·n, for n = 0 to 26. (23# = 223092870)
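This record progression can be verified directly from the figures quoted above; the sketch below assumes the SymPy library is available for primality testing:

```python
from sympy import isprime

primorial_23 = 223092870                  # 23#, as given above
start = 224584605939537911
step = 81292139 * primorial_23

terms = [start + n * step for n in range(27)]
print(all(isprime(t) for t in terms))     # expect True if the record above is transcribed correctly
```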
As of 2011, the longest known arithmetic progression of consecutive primes has length 10. It was found in 1998. The progression starts with a 93-digit number
100 99697 24697 14247 63778 66555 87969 84032 95093 24689 19004 18036 03417 75890 43417 03348 88215 90672 29719
and has the common difference 210.
Primes in arithmetic progressions
The prime number theorem for arithmetic progressions deals with the asymptotic distribution of prime numbers in an arithmetic progression.
Covering by and partitioning into arithmetic progressions
Find the minimal l_n such that any set of n residues modulo p can be covered by an arithmetic progression of length l_n.
For a given set S of integers find the minimal number of arithmetic progressions that cover S
For a given set S of integers find the minimal number of nonoverlapping arithmetic progressions that cover S
Find the number of ways to partition {1, ..., n} into arithmetic progressions.
Find the number of ways to partition {1, ..., n} into arithmetic progressions of length at least 2 with the same period.
See also Covering system
See also
Arithmetic combinatorics
PrimeGrid
Notes
Mathematical series
Unsolved problems in number theory | Problems involving arithmetic progressions | [
"Mathematics"
] | 639 | [
"Sequences and series",
"Unsolved problems in mathematics",
"Mathematical structures",
"Series (mathematics)",
"Calculus",
"Unsolved problems in number theory",
"Mathematical problems",
"Number theory"
] |
11,703,135 | https://en.wikipedia.org/wiki/Roebling%20Medal | The Roebling Medal is the highest award of the Mineralogical Society of America for scientific eminence as represented primarily by scientific publication of outstanding original research in mineralogy. The award is named for Colonel Washington A. Roebling (1837–1926), who was an engineer, bridge builder, mineral collector, and significant friend of the Mineralogical Society of America. The recipient receives an engraved medal and is made a Life Fellow of the Mineralogical Society.
Roebling Medal Recipients
The recipients of the medal are:
1937 – Charles Palache
1938 – Waldemar T. Schaller
1940 – Leonard James Spencer
1941 – Esper S. Larsen Jr.
1945 – Edward Henry Kraus
1946 – Clarence S. Ross
1947 – Paul Niggli
1948 – William Lawrence Bragg
1949 – Herbert E. Merwin
1950 – Norman L. Bowen
1952 – Frederick E. Wright
1953 – William F. Foshag
1954 – Cecil Edgar Tilley
1955 – Alexander N. Winchell
1956 – Arthur F. Buddington
1957 – Walter F. Hunt
1958 – Martin J. Buerger
1959 – Felix Machatschki
1960 – Tom F. W. Barth
1961 – Paul Ramdohr
1962 – John W. Gruner
1963 – John Frank Schairer
1964 – Clifford Frondel
1965 – Adolf Pabst
1966 – Max H. Hey
1967 – Linus Pauling
1968 – Tei-ichi Ito
1969 – Fritz Laves
1970 – George W. Brindley
1971 – J. D. H. Donnay
1972 – Elburt F. Osborn
1973 – George Tunell
1974 – Ralph E. Grim
1975 – Michael Fleischer
1975 – O. Frank Tuttle
1976 – Carl W. Correns
1977 – Raimond Castaing
1978 – James B. Thompson Jr.
1979 – W. H. Taylor
1980 –
1981 – Robert M. Garrels
1982 – Joseph V. Smith
1983 – Hans P. Eugster
1984 – Paul B. Barton Jr.
1985 – Francis John Turner
1986 – Edwin Roedder
1987 – Gerald V. Gibbs
1988 – Julian R. Goldsmith
1989 – Helen D. Megaw
1990 – Sturges W. Bailey
1991 – E-An Zen
1992 – Hatten S. Yoder Jr.
1993 – Brian Harold Mason
1994 – William A. Bassett
1995 – William S. Fyfe
1996 – Donald H. Lindsley
1997 – Ian S. E. Carmichael
1998 – C. Wayne Burnham
1999 – Ikuo Kushiro
2000 – Robert C. Reynolds Jr.
2001 – Peter J. Wyllie
2002 – Werner F. Schreyer
2003 – Charles T. Prewitt
2004 – Francis R. Boyd
2005 – Ho-kwang Mao
2006 – W. Gary Ernst
2007 – Gordon E. Brown Jr.
2008 – Bernard W. Evans
2009 – Alexandra Navrotsky
2010 – Robert C. Newton
2011 – Juhn G. Liou
2012 – Harry W. Green, II
2013 – Frank C. Hawthorne
2014 – Bernard J. Wood
2015 – Rodney C. Ewing
2016 – Robert M. Hazen
2017 - Edward M. Stolper
2018 – E. Bruce Watson
2019 – Peter R. Buseck
2020 – Andrew Putnis
2021 – George R. Rossman
2022 – John W. Valley
2023 – Georges Calas
2024 – Nancy L. Ross
2025 – Melinda Darby Dyar
See also
List of geology awards
Ewald Prize
References
Mineralogy
Geology awards
American science and technology awards
Awards established in 1937
Earth sciences awards | Roebling Medal | [
"Technology"
] | 729 | [
"Science and technology awards",
"Earth sciences awards"
] |
11,703,336 | https://en.wikipedia.org/wiki/Comparison%20of%20Texas%20Instruments%20graphing%20calculators | A graphing calculator is a class of hand-held calculator that is capable of plotting graphs and solving complex functions. There are several companies that manufacture models of graphing calculators. Texas Instruments is a major manufacturer.
The following table compares general and technical information for a selection of common and uncommon Texas Instruments graphing calculators. Many of the calculators in this list have region-specific models that are not individually listed here, such as the TI-84 Plus CE-T, a TI-84 Plus CE designed for non-French European markets. These region-specific models are usually functionally identical to each other, aside from minor cosmetic differences and circuit board hardware revisions. See the individual calculators' articles for further information.
Programming language support
See also
Comparison of HP graphing calculators
References
Texas Instruments calculators
Graphing calculators | Comparison of Texas Instruments graphing calculators | [
"Technology"
] | 180 | [
"Computing comparisons"
] |
11,703,731 | https://en.wikipedia.org/wiki/Games%20and%20Economic%20Behavior | Games and Economic Behavior (GEB) is a journal of game theory published by Elsevier. Founded in 1989, the journal's stated objective is to communicate game-theoretic ideas across theory and applications. It is considered to be the leading journal of game theory and one of the top journals in economics, and it is one of the two official journals of the Game Theory Society. Apart from game theory and economics, the research areas of the journal also include applications of game theory in political science, biology, computer science, mathematics and psychology.
The founding editor-in-chief of GEB is Ehud Kalai. The current editor-in-chief is Hervé Moulin (since January 1, 2021). Each paper is initially assigned by GEB's chief editor to one of the seven editors (including himself). The chief editor has final decision authority.
Impact Factor
According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.287 and a 5-year impact factor of 1.511.
References
Economics journals
Media related to game theory
English-language journals
Elsevier academic journals
Academic journals associated with learned and professional societies
Academic journals established in 1989
Bimonthly journals
Game theory journals | Games and Economic Behavior | [
"Mathematics"
] | 247 | [
"Game theory",
"Media related to game theory"
] |
562,475 | https://en.wikipedia.org/wiki/128%20%28number%29 | 128 (one hundred [and] twenty-eight) is the natural number following 127 and preceding 129.
In mathematics
128 is the seventh power of 2. It is the largest number which cannot be expressed as the sum of any number of distinct squares. However, it is divisible by the total number of its divisors, making it a refactorable number.
The sum of Euler's totient function φ(n) over the first twenty integers is 128.
128 can be expressed by a combination of its digits with mathematical operators, thus 128 = 2^(8 − 1), making it a Friedman number in base 10.
A hepteract has 128 vertices.
128 is the only 3-digit number that is a 7th power (2^7).
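Two of these arithmetic claims are easy to verify directly; this is a minimal sketch using only the standard library, with an illustrative helper name:

```python
from math import gcd

def totient(n):
    """Euler's totient: how many integers in 1..n are coprime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

print(sum(totient(n) for n in range(1, 21)))  # 128: sum of phi(n) over the first twenty integers
print(2 ** (8 - 1))                           # 128: the Friedman expression 2^(8 - 1) uses the digits 1, 2, 8
print(2 ** 7)                                 # 128: the seventh power of 2
```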
In computing
128-bit key size encryption for secure communications over the Internet
IPv6 uses 128-bit (16-byte) addresses
A quantity of bits with a binary prefix equals 128 bytes of the next-lower binary prefix; for example, 1 gibibit is 128 mebibytes
128-bit integers, memory addresses, or other data units are those that are at most 128 bits (16 octets) wide
A 128-bit integer can represent up to 3.40282366...e+38 values (2^128 = 340,282,366,920,938,463,463,374,607,431,768,211,456).
CAST-128 is a block cipher used in a number of products, notably as the default cipher in some versions of GPG and PGP.
In other fields
The number of US fluid ounces in a US gallon.
Notes
References
Wells, D. The Penguin Dictionary of Curious and Interesting Numbers London: Penguin Group. (1987): 138
External links
Code 128 specification at OpenBarcode.org
Integers | 128 (number) | [
"Mathematics"
] | 376 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
562,521 | https://en.wikipedia.org/wiki/175%20%28number%29 | 175 (one hundred [and] seventy-five) is the natural number following 174 and preceding 176.
In mathematics
Raising the decimal digits of 175 to the powers of successive integers produces 175 back again: 1^1 + 7^2 + 5^3 = 1 + 49 + 125 = 175.
175 is a figurate number for a rhombic dodecahedron, being the difference of two consecutive fourth powers: 4^4 − 3^4 = 256 − 81 = 175. It is also a decagonal number and a decagonal pyramid number, the smallest number after 1 that has both properties.
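Both identities can be checked in a couple of lines:

```python
digits = [1, 7, 5]
print(sum(d ** i for i, d in enumerate(digits, start=1)))  # 1^1 + 7^2 + 5^3 = 175
print(4 ** 4 - 3 ** 4)                                     # 256 - 81 = 175
```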
See also
References
Integers | 175 (number) | [
"Mathematics"
] | 96 | [
"Mathematical objects",
"Number stubs",
"Elementary mathematics",
"Integers",
"Numbers"
] |
562,523 | https://en.wikipedia.org/wiki/260%20%28number%29 | 260 (two hundred [and] sixty) is the natural number following 259 and preceding 261.
In mathematics
260 is:
an abundant number
an Ulam number
in the Moser-de Bruijn sequence
the magic constant of the normal magic square of order 8
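Two of the properties above follow from simple arithmetic, as the short check below illustrates (the magic-constant formula n(n² + 1)/2 is the standard one for a normal magic square of order n):

```python
n = 8
print(n * (n * n + 1) // 2)  # 260: magic constant of the normal magic square of order 8
print(4 ** 4 + 4 ** 1)       # 260: a sum of distinct powers of 4, hence in the Moser-de Bruijn sequence
```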
In other fields
Pre-Columbian Mesoamericans used 260-day calendars.
References
Integers | 260 (number) | [
"Mathematics"
] | 74 | [
"Mathematical objects",
"Number stubs",
"Elementary mathematics",
"Integers",
"Numbers"
] |
562,526 | https://en.wikipedia.org/wiki/Sugar%20alcohol | Sugar alcohols (also called polyhydric alcohols, polyalcohols, alditols or glycitols) are organic compounds, typically derived from sugars, containing one hydroxyl group attached to each carbon atom. They are white, water-soluble solids that can occur naturally or be produced industrially by hydrogenating sugars. Since they contain multiple hydroxyl (−OH) groups, they are classified as polyols.
Sugar alcohols are used widely in the food industry as thickeners and sweeteners. In commercial foodstuffs, sugar alcohols are commonly used in place of table sugar (sucrose), often in combination with high-intensity artificial sweeteners, in order to offset their low sweetness. Xylitol and sorbitol are popular sugar alcohols in commercial foods.
Structure
Sugar alcohols have the general formula HOCH2(CHOH)nCH2OH. In contrast, sugars have two fewer hydrogen atoms, for example HOCH2(CHOH)nCHO or HOCH2(CHOH)n−1C(O)CH2OH. Like their parent sugars, sugar alcohols exist in diverse chain lengths. Most have five- or six-carbon chains, because they are derived respectively from pentoses (five-carbon sugars) and hexoses (six-carbon sugars), which are the more common sugars. They have one −OH group attached to each carbon. They are further differentiated by the relative orientation (stereochemistry) of these −OH groups. Unlike sugars, which tend to exist as rings, sugar alcohols do not, although they can be dehydrated to give cyclic ethers (e.g. sorbitan can be dehydrated to isosorbide).
Production
Sugar alcohols can be, and often are, produced from renewable resources. Particular feedstocks are starch, cellulose and hemicellulose; the main conversion technologies use hydrogen (H2) as the reagent: hydrogenolysis, i.e. the cleavage of single bonds, converting polymers to smaller molecules, and hydrogenation of double bonds, converting sugars to sugar alcohols.
Sorbitol and mannitol
Mannitol is no longer obtained from natural sources; currently, sorbitol and mannitol are obtained by hydrogenation of sugars, using Raney nickel catalysts. The conversion of glucose and mannose to sorbitol and mannitol is given as HOCH2(CHOH)4CHO + H2 → HOCH2(CHOH)4CH2OH.
Erythritol
Erythritol is obtained by the fermentation of glucose and sucrose.
Health effects
Sugar alcohols do not contribute to tooth decay; in fact, xylitol deters tooth decay.
Sugar alcohols are absorbed at 50% of the rate of sugars, resulting in less of an effect on blood sugar levels as measured by comparing their effect to sucrose using the glycemic index.
Common sugar alcohols
Ethylene glycol (2-carbon)
Glycerol (3-carbon)
Erythritol (4-carbon)
Threitol (4-carbon)
Arabitol (5-carbon)
Xylitol (5-carbon)
Ribitol (5-carbon)
Mannitol (6-carbon)
Sorbitol (6-carbon)
Galactitol (6-carbon)
Fucitol (6-carbon)
Iditol (6-carbon)
Inositol (6-carbon; a cyclic sugar alcohol)
Volemitol (7-carbon)
Isomalt (12-carbon)
Maltitol (12-carbon)
Lactitol (12-carbon)
Maltotriitol (18-carbon)
Maltotetraitol (24-carbon)
Polyglycitol
Both disaccharides and monosaccharides can form sugar alcohols; however, sugar alcohols derived from disaccharides (e.g. maltitol and lactitol) are not entirely hydrogenated because only one aldehyde group is available for reduction.
Sugar alcohols as food additives
This table presents the relative sweetness and food energy of the most widely used sugar alcohols. Despite the variance in food energy content of sugar alcohols, the European Union's labeling requirements assign a blanket value of 2.4 kcal/g to all sugar alcohols.
Characteristics
As a group, sugar alcohols are not as sweet as sucrose, and they have slightly less food energy than sucrose. Their flavor is similar to sucrose, and they can be used to mask the unpleasant aftertastes of some high-intensity sweeteners.
Sugar alcohols are not metabolized by oral bacteria, and so they do not contribute to tooth decay. They do not brown or caramelize when heated.
In addition to their sweetness, some sugar alcohols can produce a noticeable cooling sensation in the mouth when highly concentrated, for instance in sugar-free hard candy or chewing gum. This happens, for example, with the crystalline phase of sorbitol, erythritol, xylitol, mannitol, lactitol and maltitol. The cooling sensation is due to the dissolution of the sugar alcohol being an endothermic (heat-absorbing) reaction, one with a strong heat of solution.
Absorption from the small intestine
Sugar alcohols are usually incompletely absorbed into the blood stream from the small intestine which generally results in a smaller change in blood glucose than "regular" sugar (sucrose). This property makes them popular sweeteners among diabetics and people on low-carbohydrate diets. As an exception, erythritol is actually absorbed in the small intestine and excreted unchanged through urine, so it contributes no calories even though it is rather sweet.
Side effects
Like many other incompletely digestible substances, overconsumption of sugar alcohols can lead to bloating, diarrhea and flatulence because they are not fully absorbed in the small intestine. Some individuals experience such symptoms even in a single-serving quantity. With continued use, most people develop a degree of tolerance to sugar alcohols and no longer experience these symptoms.
References
Sugar substitutes
fr:Polyol | Sugar alcohol | [
"Chemistry"
] | 1,277 | [
"Carbohydrates",
"Sugar alcohols"
] |
562,574 | https://en.wikipedia.org/wiki/Cryptococcus | Cryptococcus is a genus of fungi in the family Cryptococcaceae that includes both yeasts and filamentous species. The filamentous, sexual forms or teleomorphs were formerly classified in the genus Filobasidiella, while Cryptococcus was reserved for the yeasts. Most yeast species formerly referred to Cryptococcus have now been placed in different genera. The name Cryptococcus comes from the Greek for "hidden sphere" (literally "hidden berry"). Some Cryptococcus species cause a disease called cryptococcosis.
Taxonomy
The genus was described by French mycologist Jean Paul Vuillemin in 1901, when he failed to find ascospores characteristic of the genus Saccharomyces in the yeast previously known as Saccharomyces neoformans. Over 300 additional names were subsequently added to the genus, almost all of which were later removed following molecular research based on cladistic analysis of DNA sequences. As a result, some ten species are currently recognized in Cryptococcus.
The teleomorph was first described in 1975 by K.J. Kwon-Chung, who obtained cultures of the type species, Filobasidiella neoformans, by crossing strains of the yeast Cryptococcus neoformans. She was able to observe basidia similar to those of the genus Filobasidium, hence the name Filobasidiella for the new genus. Following changes to the International Code of Nomenclature for algae, fungi, and plants, the practice of giving different names to teleomorph and anamorph forms of the same fungus was discontinued, meaning that Filobasidiella became a synonym of the earlier name Cryptococcus.
General characteristics
The cells of species that produce yeasts are covered in a thin layer of glycoprotein capsular material that has a gelatin-like consistency, and that among other functions, serves to help extract nutrients from the soil. The C. neoformans capsule consists of several polysaccharides, of which the major one is the immunomodulatory polysaccharide called glucuronoxylomannan (GXM). GXM is made up of the monosaccharides glucuronic acid, xylose and mannose and can also contain O-acetyl groups. The capsule functions as the major virulence factor in cryptococcal infection and disease.
Some Cryptococcus species have a huge diversity at the infraspecific level with different molecular types based on their genetic differences, mainly due to their geographical distribution, molecular characteristics, and ecological niches.
Cryptococcus species are not known to produce distinct, visible fruitbodies. All teleomorph forms appear to be parasites of other fungi. In teleomorphs the hyphae are colourless, are clamped or unclamped, and bear haustorial cells with filaments that attach to the hyphae of host fungi. The basidia are club-shaped and highly elongated. Spores arise in succession from four loci at the apex (which is sometimes partly septate). These spores are passively released and may remain on the basidium in chains, unless disturbed. In the type species, the spores germinate to form yeast cells, but yeast states are not known for all species.
Habitat, distribution and species
Cryptococcus neoformans is cosmopolitan and is the most prominent medically important species. It is best known for causing a severe form of meningitis and meningoencephalitis in people with HIV/AIDS. It may also infect organ-transplant recipients and people receiving certain cancer treatments. In its yeast state C. neoformans is found in the droppings of wild birds, often pigeons; when dust of the droppings is stirred up, it can infect humans or pets that inhale the dust. Infected humans and animals do not transmit their infection to others. The taxonomy of C. neoformans has been reviewed: it has now been divided into two species: Cryptococcus neoformans sensu stricto and Cryptococcus deneoformans.
Cryptococcus gattii (formerly C. neoformans var. gattii) is endemic to tropical parts of the continent of Africa and Australia. It is capable of causing disease in non-immunocompromised people. In its yeast state it has been isolated from eucalyptus trees in Australia. The taxonomy of C. gattii has been reviewed; it has now been divided into five species: C. gattii sensu stricto, C. bacillisporus, C. deuterogattii, C. tetragattii, and C. decagattii.
Cryptococcus depauperatus is parasitic on Lecanicillium lecanii, an entomopathogenic fungus, and is known from Sri Lanka, England, the Netherlands, the Czech Republic, and Canada. It is not known to produce a yeast state. This species grows as long, branching filaments and is self-fertile, i.e. it is homothallic. It can reproduce sexually with itself throughout its life cycle.
Cryptococcus luteus is parasitic on Granulobasidium vellereum, a corticioid fungus, and is known from England and Italy. It too is not known to produce a yeast state.
Cryptococcus amylolentus was originally isolated as a yeast from beetle tunnels in South African trees. It forms a basidia-bearing teleomorph in culture.
References
Tremellomycetes
Basidiomycota genera
Yeasts | Cryptococcus | [
"Biology"
] | 1,180 | [
"Yeasts",
"Fungi"
] |
562,589 | https://en.wikipedia.org/wiki/Cryptococcus%20neoformans | Cryptococcus neoformans is an encapsulated basidiomycetous yeast belonging to the class Tremellomycetes and an obligate aerobe that can live in both plants and animals. Its teleomorph is a filamentous fungus, formerly referred to Filobasidiella neoformans. In its yeast state, it is often found in bird excrement. It has remarkable genomic plasticity and genetic variability between its strains, making treatment of the disease it causes difficult. Cryptococcus neoformans causes disease primarily in immunocompromised hosts, such as HIV or cancer patients. In addition it has been shown to cause disease in apparently immunocompetent hosts, especially in developed countries.
Classification
Cryptococcus neoformans has undergone numerous nomenclature revisions since its first description in 1895. It formerly contained two varieties: C. neoformans var. neoformans and C. neoformans var. grubii. A third variety, C. neoformans var. gattii, was later defined as a distinct species, Cryptococcus gattii. The most recent classification system divides these varieties into seven species. C. neoformans refers to C. neoformans var. grubii. A new species name, Cryptococcus deneoformans, is used for the former C. neoformans var. neoformans. C. gattii is divided into five species.
The teleomorph was first described in 1975 by K.J. Kwon-Chung, who obtained cultures of Filobasidiella neoformans by crossing strains of the yeast C. neoformans. She was able to observe basidia similar to those of the genus Filobasidium, hence the name Filobasidiella for the new genus. Following changes to the International Code of Nomenclature for algae, fungi, and plants, the practice of giving different names to teleomorph and anamorph forms of the same fungus was discontinued, meaning that Filobasidiella neoformans became a synonym of the earlier name Cryptococcus neoformans.
Characteristics
Cryptococcus neoformans typically grows as a yeast (unicellular) and replicates by budding. It makes hyphae during mating, and eventually creates basidiospores at the end of the hyphae before producing spores. Under host-relevant conditions, including low glucose, serum, 5% carbon dioxide, and low iron, among others, the cells produce a characteristic polysaccharide capsule. The recognition of C. neoformans in Gram-stained smears of purulent exudates may be hampered by the presence of the large gelatinous capsule which apparently prevents definitive staining of the yeast-like cells. In such stained preparations, it may appear either as round cells with Gram-positive granular inclusions impressed upon a pale lavender cytoplasmic background or as Gram-negative lipoid bodies. When grown as a yeast, C. neoformans has a prominent capsule composed mostly of polysaccharides. Under the microscope, the India ink stain is used for easy visualization of the capsule in cerebral spinal fluid. The particles of ink pigment do not enter the capsule that surrounds the spherical yeast cell, resulting in a zone of clearance or "halo" around the cells. This allows for quick and easy identification of C. neoformans. Unusual morphological forms are rarely seen. For identification in tissue, mucicarmine stain provides specific staining of polysaccharide cell wall in C. neoformans. Cryptococcal antigen from cerebrospinal fluid is thought to be the best test for diagnosis of cryptococcal meningitis in terms of sensitivity, though it might be unreliable in HIV-positive patients.
The first genome sequence for a strain of C. neoformans (var. neoformans; now C. deneoformans) was published in 2005.
Studies suggest that colonies of C. neoformans and related fungi growing within the ruins of the Chernobyl Nuclear Power Plant may be able to metabolize ionizing radiation.
Pathology
Infection with C. neoformans is termed cryptococcosis. Most infections with C. neoformans occur in the lungs, as the fungus enters its host through the respiratory route. However, fungal meningitis and encephalitis, especially as a secondary infection for AIDS patients, are often caused by C. neoformans, making it a particularly dangerous fungus. Infections with this fungus were thought to be rare in people with fully functioning immune systems, hence C. neoformans is often referred to as an opportunistic pathogen. However, a study from 2024 done in Australia and New Zealand showed the vast majority of recorded infections to be in non-HIV patients. The fungus is a facultative intracellular pathogen that can utilize host phagocytes to spread within the body. C. neoformans was the first intracellular pathogen for which the non-lytic escape process termed vomocytosis was observed. It has been speculated that this ability to manipulate host cells results from environmental selective pressure by amoebae, a hypothesis first proposed by Arturo Casadevall under the term "accidental virulence".
In human infection, C. neoformans is spread by inhalation of aerosolized basidiospores or dehydrated fungal cells, and can disseminate to the central nervous system, where it can cause meningoencephalitis. In the lungs, C. neoformans cells are phagocytosed by alveolar macrophages. Macrophages produce oxidative and nitrosative agents, creating a hostile environment, to kill invading pathogens. However, some C. neoformans cells can survive intracellularly in macrophages because of the protective nature of the polysaccharide capsule as well as its ability to produce melanin. Intracellular survival appears to be one of the factors contributing to latency, disseminated disease, and resistance to eradication by antifungal agents. One mechanism by which C. neoformans survives the hostile intracellular environment of the macrophage involves upregulation of expression of genes involved in responses to oxidative stress.
Traversal of the blood–brain barrier by C. neoformans plays a key role in meningitis pathogenesis. However, precise mechanisms by which it passes the blood-brain barrier are still unknown; a 2014 study in rats suggested an important role of secreted serine proteases. The metalloprotease Mpr1 has been demonstrated to be critical in blood-brain barrier penetration.
Meiosis (sexual reproduction), another possible survival factor for intracellular C. neoformans
The vast majority of environmental and clinical isolates of C. neoformans are mating type alpha. Filaments of mating type alpha have haploid nuclei ordinarily, but these can undergo a process of diploidization (perhaps by endoduplication or stimulated nuclear fusion) to form diploid cells termed blastospores. The diploid nuclei of blastospores are able to undergo meiosis, including recombination, to form haploid basidiospores that can then be dispersed. This process is referred to as monokaryotic fruiting. Required for this process is a gene designated dmc1, a conserved homologue of genes recA in bacteria, and rad51 in eukaryotes (see articles recA and rad51). Dmc1 mediates homologous chromosome pairing during meiosis and repair of double-strand breaks in DNA. One benefit of meiosis in C. neoformans could be to promote DNA repair in the DNA-damaging environment caused by the oxidative and nitrosative agents produced in macrophages. Thus, C. neoformans can undergo a meiotic process, monokaryotic fruiting, that may promote recombinational repair in the oxidative, DNA-damaging environment of the host macrophage, and this may contribute to its virulence.
Serious complications of human infection
Infection begins in the lungs, and from there the fungus can disseminate to the brain and other body parts via macrophages. An infection of the brain caused by C. neoformans is referred to as cryptococcal meningitis, which is most often fatal when left untreated. Cryptococcal meningitis causes more than 180,000 deaths annually. CNS (central nervous system) infections may also present as brain abscesses known as cryptococcomas, subdural effusions, dementia, isolated cranial nerve lesions, spinal cord lesions, and ischemic stroke. The estimated one-year mortality among people with HIV who receive treatment for cryptococcal meningitis is 70% in low-income countries versus 20–30% in high-income countries. Symptoms include headache, fever, neck stiffness, nausea and vomiting, and photophobia. Diagnostic methods include a serum cryptococcal antigen test and lumbar puncture with cerebrospinal fluid (CSF) examination to detect C. neoformans.
Treatment
Cryptococcosis that does not affect the central nervous system can be treated with fluconazole alone.
It was recommended in 2000 that cryptococcal meningitis be treated for two weeks with intravenous amphotericin B 0.7–1.0 mg/kg per day and oral flucytosine 100 mg/kg per day (or intravenous flucytosine 75 mg/kg per day if the patient is unable to swallow), followed by oral fluconazole 400–800 mg daily for ten weeks and then 200 mg daily for at least one year and until the patient's CD4 count is above 200 cells/mcl. Flucytosine is a generic, off-patent medicine, but the cost of two weeks of flucytosine therapy is about US$10,000, so flucytosine has been unavailable in low- and middle-income countries. In 1970, flucytosine was available in Africa. A dose of 200 mg/kg per day of flucytosine is associated with more side effects but is not more effective.
A single high dose of liposomal amphotericin B with 14 days of flucytosine and fluconazole is recommended by the newest WHO guideline for cryptococcal meningitis. A new study found that brain glucose can trigger amphotericin B (AmB) tolerance of C. neoformans during meningitis, which means a longer treatment time is needed to kill the fungal cells. The study found that brain glucose induced AmB tolerance of C. neoformans via the glucose repression activator Mig1. Mig1 inhibits the production of ergosterol, the target of AmB, and promotes the production of inositol phosphoryl ceramide (IPC), which competes with AmB for ergosterol to limit AmB efficacy in mouse brain and human CSF. Strikingly, results of this study indicated that the IPC synthase inhibitor aureobasidin A (AbA) can enhance the anti-cryptococcal activity of AmB. AbA+AmB had an even better therapeutic effect in a mouse model of cryptococcal meningitis than AmB+flucytosine, which may bring new hope for the treatment of cryptococcal meningitis.
In Africa, oral fluconazole at a rate of 200 mg daily is often used. However, this does not result in cure, because it merely suppresses the fungus and does not kill it; viable fungus can continue to be grown from the cerebrospinal fluid of patients who have taken fluconazole for many months. An increased dose of 400 mg daily does not improve outcomes, but prospective studies from Uganda and Malawi reported that higher doses of 1200 mg per day have more fungicidal activity. In a recent systematic review, outcomes with fluconazole monotherapy showed 30% worse survival than with amphotericin-based therapies.
The current treatment options for cryptococcosis are not optimal. AmB is highly toxic to humans, and both fluconazole and flucytosine have been shown to cause development of drug resistance in C. neoformans. A recent study from 2024 suggested brilacidin as an alternative treatment option. Brilacidin was shown to be non-toxic, and it caused no drug resistance development in C. neoformans while still being effective at causing fungal mortality. Brilacidin enhances permeability of the cell wall and membrane by binding to ergosterol and disrupting its distribution. It also affects the cell wall integrity pathway and disrupts calcium metabolism. Through these mechanisms it not only causes cell mortality on its own, but also enables more effective use of other antifungal agents such as AmB against C. neoformans.
References
External links
A good overview of Cryptococcus neoformans biology from the Science Creative Quarterly
Cryptococcus neoformans biology, general information, life cycle image at MetaPathogen
The outcome of Cryptococcus neoformans intracellular pathogenesis in human monocytes
Tremellomycetes
Fungal pathogens of humans
Animal fungal diseases
Fungal plant pathogens and diseases
Yeasts
Bird diseases
Fungi and humans
Zoonoses
Fungus species | Cryptococcus neoformans | [
"Biology"
] | 2,816 | [
"Fungi",
"Fungus species",
"Yeasts",
"Fungi and humans",
"Humans and other species"
] |
562,695 | https://en.wikipedia.org/wiki/Simulink | Simulink is a MATLAB-based graphical programming environment for modeling, simulating and analyzing multidomain dynamical systems. Its primary interface is a graphical block diagramming tool and a customizable set of block libraries. It offers tight integration with the rest of the MATLAB environment and can either drive MATLAB or be scripted from it. Simulink is widely used in automatic control and digital signal processing for multidomain simulation and model-based design.
Add-on products
MathWorks and other third-party hardware and software products can be used with Simulink. For example, Stateflow extends Simulink with a design environment for developing state machines and flow charts.
MathWorks claims that, coupled with another of their products, Simulink can automatically generate C source code for real-time implementation of systems. As the efficiency and flexibility of the code improves, this is becoming more widely adopted for production systems, in addition to being a tool for embedded system design work because of its flexibility and capacity for quick iteration. Embedded Coder creates code efficient enough for use in embedded systems.
Simulink Real-Time (formerly known as xPC Target), together with x86-based real-time systems, is an environment for simulating and testing Simulink and Stateflow models in real-time on the physical system. Another MathWorks product also supports specific embedded targets. When used with other generic products, Simulink and Stateflow can automatically generate synthesizable VHDL and Verilog.
Simulink Verification and Validation enables systematic verification and validation of models through modeling style checking, requirements traceability and model coverage analysis. Simulink Design Verifier uses formal methods to identify design errors like integer overflow, division by zero and dead logic, and generates test case scenarios for model checking within the Simulink environment.
SimEvents is used to add a library of graphical building blocks for modeling queuing systems to the Simulink environment, and to add an event-based simulation engine to the time-based simulation engine in Simulink.
Release history
References
External links
Cross-platform software
Linux software
Mathematical modeling
Numerical software
Simulation programming languages
Simulation software
Visual programming languages | Simulink | [
"Mathematics"
] | 448 | [
"Applied mathematics",
"Mathematical modeling",
"Numerical software",
"Mathematical software"
] |
562,782 | https://en.wikipedia.org/wiki/Vertex%20cover | In graph theory, a vertex cover (sometimes node cover) of a graph is a set of vertices that includes at least one endpoint of every edge of the graph.
In computer science, the problem of finding a minimum vertex cover is a classical optimization problem. It is NP-hard, so it cannot be solved by a polynomial-time algorithm if P ≠ NP. Moreover, it is hard to approximate – it cannot be approximated up to a factor smaller than 2 if the unique games conjecture is true. On the other hand, it has several simple 2-factor approximations. It is a typical example of an NP-hard optimization problem that has an approximation algorithm. Its decision version, the vertex cover problem, was one of Karp's 21 NP-complete problems and is therefore a classical NP-complete problem in computational complexity theory. Furthermore, the vertex cover problem is fixed-parameter tractable and a central problem in parameterized complexity theory.
The minimum vertex cover problem can be formulated as a half-integral, linear program whose dual linear program is the maximum matching problem.
Vertex cover problems have been generalized to hypergraphs, see Vertex cover in hypergraphs.
Definition
Formally, a vertex cover V′ of an undirected graph G = (V, E) is a subset of V such that for every edge {u, v} ∈ E, either u ∈ V′ or v ∈ V′; that is to say, it is a set of vertices where every edge has at least one endpoint in the vertex cover V′. Such a set is said to cover the edges of G. The upper figure shows two examples of vertex covers, with some vertex cover marked in red.
A minimum vertex cover is a vertex cover of smallest possible size. The vertex cover number τ is the size of a minimum vertex cover, i.e. τ = |V′| for a minimum vertex cover V′. The lower figure shows examples of minimum vertex covers in the previous graphs.
Examples
The set of all vertices is a vertex cover.
The endpoints of any maximal matching form a vertex cover.
The complete bipartite graph K_{m,n} has a minimum vertex cover of size min(m, n).
Properties
A set of vertices is a vertex cover if and only if its complement is an independent set.
Consequently, the number of vertices of a graph is equal to its minimum vertex cover number plus the size of a maximum independent set.
Computational problem
The minimum vertex cover problem is the optimization problem of finding a smallest vertex cover in a given graph.
INSTANCE: Graph G
OUTPUT: Smallest number k such that G has a vertex cover of size k.
If the problem is stated as a decision problem, it is called the vertex cover problem:
INSTANCE: Graph G and positive integer k.
QUESTION: Does G have a vertex cover of size at most k?
The vertex cover problem is an NP-complete problem: it was one of Karp's 21 NP-complete problems. It is often used in computational complexity theory as a starting point for NP-hardness proofs.
ILP formulation
Assume that every vertex v has an associated cost of c(v).
The (weighted) minimum vertex cover problem can be formulated as the following integer linear program (ILP).
minimize    Σ_{v ∈ V} c(v)·x_v                      (minimize the total cost)
subject to  x_u + x_v ≥ 1   for all {u, v} ∈ E      (cover every edge of the graph),
            x_v ∈ {0, 1}    for all v ∈ V.          (every vertex is either in the vertex cover or not)
This ILP belongs to the more general class of ILPs for covering problems.
The integrality gap of this ILP is 2, so its relaxation (allowing each variable to be in the interval from 0 to 1, rather than requiring the variables to be only 0 or 1) gives a factor-2 approximation algorithm for the minimum vertex cover problem.
Furthermore, the linear programming relaxation of that ILP is half-integral, that is, there exists an optimal solution for which each entry is either 0, 1/2, or 1. A 2-approximate vertex cover can be obtained from this fractional solution by selecting the subset of vertices whose variables are nonzero.
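As an illustration of this rounding step, the following Python sketch solves the relaxation with SciPy's linprog and keeps every vertex whose variable reaches 1/2 (valid for any optimal fractional solution, since each edge constraint forces at least one endpoint to 1/2 or more); the graph, costs, and function name are illustrative assumptions rather than part of the article.

# Sketch: solve the LP relaxation of the vertex cover ILP, then round
# every variable that is at least 1/2 up to 1, giving a 2-approximate cover.
from scipy.optimize import linprog

def lp_rounding_vertex_cover(vertices, edges, cost):
    idx = {v: i for i, v in enumerate(vertices)}
    c = [cost[v] for v in vertices]              # objective: minimize total cost
    # each edge {u, v} gives x_u + x_v >= 1, written as -x_u - x_v <= -1
    A_ub, b_ub = [], []
    for u, v in edges:
        row = [0.0] * len(vertices)
        row[idx[u]] = row[idx[v]] = -1.0
        A_ub.append(row)
        b_ub.append(-1.0)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * len(vertices))
    return {v for v in vertices if res.x[idx[v]] >= 0.5}

# example: a path a-b-c with unit costs; the optimum cover is {b}
print(lp_rounding_vertex_cover(["a", "b", "c"],
                               [("a", "b"), ("b", "c")],
                               {"a": 1, "b": 1, "c": 1}))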
Exact evaluation
The decision variant of the vertex cover problem is NP-complete, which means it is unlikely that there is an efficient algorithm to solve it exactly for arbitrary graphs. NP-completeness can be proven by reduction from 3-satisfiability or, as Karp did, by reduction from the clique problem. Vertex cover remains NP-complete even in cubic graphs and even in planar graphs of degree at most 3.
For bipartite graphs, the equivalence between vertex cover and maximum matching described by Kőnig's theorem allows the bipartite vertex cover problem to be solved in polynomial time.
For tree graphs, an algorithm finds a minimal vertex cover in polynomial time by finding the first leaf in the tree and adding its parent to the minimal vertex cover, then deleting the leaf and parent and all associated edges and continuing repeatedly until no edges remain in the tree.
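The following Python sketch illustrates that leaf-and-parent procedure for a tree given as an edge list; the representation and names are illustrative assumptions.

# Sketch: minimum vertex cover on a tree by repeatedly taking the parent
# of a leaf, then deleting the leaf, the parent, and all incident edges.
def tree_vertex_cover(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    cover = set()
    while any(adj[v] for v in adj):                      # edges remain
        leaf = next(v for v in adj if len(adj[v]) == 1)  # find a leaf
        parent = next(iter(adj[leaf]))
        cover.add(parent)                                # parent joins the cover
        for w in list(adj[parent]):                      # delete the parent's edges
            adj[w].discard(parent)
        adj[parent].clear()
    return cover

# example: a star with centre c is covered by {c} alone
print(tree_vertex_cover([("c", "a"), ("c", "b"), ("c", "d")]))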
Fixed-parameter tractability
An exhaustive search algorithm can solve the problem in time 2^k·n^O(1), where k is the size of the vertex cover. Vertex cover is therefore fixed-parameter tractable, and if we are only interested in small k, we can solve the problem in polynomial time. One algorithmic technique that works here is called bounded search tree algorithm, and its idea is to repeatedly choose some vertex and recursively branch, with two cases at each step: place either the current vertex or all its neighbours into the vertex cover.
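As a rough illustration, the Python sketch below answers the parameterized question "does the graph have a vertex cover of size at most k?" by branching on the two endpoints of an arbitrary edge, which gives the 2^k·n^O(1) bound mentioned above (a simpler rule than the vertex-versus-neighbours branching just described); the edge representation and names are illustrative assumptions.

# Sketch: bounded search tree for "is there a vertex cover of size at most k?"
def has_cover(edges, k):
    if not edges:
        return True                      # nothing left to cover
    if k == 0:
        return False                     # edges remain but the budget is spent
    u, v = edges[0]                      # pick any remaining edge
    # branch 1: put u in the cover; branch 2: put v in the cover
    return (has_cover([e for e in edges if u not in e], k - 1)
            or has_cover([e for e in edges if v not in e], k - 1))

# example: a triangle needs two vertices
print(has_cover([("a", "b"), ("b", "c"), ("a", "c")], 1))  # False
print(has_cover([("a", "b"), ("b", "c"), ("a", "c")], 2))  # True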
The algorithm for solving vertex cover that achieves the best asymptotic dependence on the parameter runs in time O(1.2738^k + kn). The klam value of this time bound (an estimate for the largest parameter value that could be solved in a reasonable amount of time) is approximately 190. That is, unless additional algorithmic improvements can be found, this algorithm is suitable only for instances whose vertex cover number is 190 or less. Under reasonable complexity-theoretic assumptions, namely the exponential time hypothesis, the running time cannot be improved to 2^o(k), even when n is O(k).
However, for planar graphs, and more generally, for graphs excluding some fixed graph as a minor, a vertex cover of size k can be found in time 2^O(√k)·n^O(1), i.e., the problem is subexponential fixed-parameter tractable. This algorithm is again optimal, in the sense that, under the exponential time hypothesis, no algorithm can solve vertex cover on planar graphs in time 2^o(√k)·n^O(1).
Approximate evaluation
One can find a factor-2 approximation by repeatedly taking both endpoints of an edge into the vertex cover, then removing them from the graph. Put otherwise, we find a maximal matching M with a greedy algorithm and construct a vertex cover C that consists of all endpoints of the edges in M. In the following figure, a maximal matching M is marked with red, and the vertex cover C is marked with blue.
The set C constructed this way is a vertex cover: suppose that an edge e is not covered by C; then M ∪ {e} is a matching and e ∉ M, which is a contradiction with the assumption that M is maximal. Furthermore, if e = {u, v} ∈ M, then any vertex cover – including an optimal vertex cover – must contain u or v (or both); otherwise the edge e is not covered. That is, an optimal cover contains at least one endpoint of each edge in M; in total, the set C is at most 2 times as large as the optimal vertex cover.
This simple algorithm was discovered independently by Fanica Gavril and Mihalis Yannakakis.
More involved techniques show that there are approximation algorithms with a slightly better approximation factor. For example, an approximation algorithm with an approximation factor of is known. The problem can be approximated with an approximation factor in - dense graphs.
Inapproximability
No better constant-factor approximation algorithm than the above one is known.
The minimum vertex cover problem is APX-complete, that is, it cannot be approximated arbitrarily well unless P = NP.
Using techniques from the PCP theorem, Dinur and Safra proved in 2005 that minimum vertex cover cannot be approximated within a factor of 1.3606 for any sufficiently large vertex degree unless P = NP.
Later, the factor was improved to √2 − ε for any ε > 0.
Moreover, if the unique games conjecture is true then minimum vertex cover cannot be approximated within any constant factor better than 2.
Although finding the minimum-size vertex cover is equivalent to finding the maximum-size independent set, as described above, the two problems are not equivalent in an approximation-preserving way: The Independent Set problem has no constant-factor approximation unless P = NP.
Pseudocode
APPROXIMATION-VERTEX-COVER(G)
C = ∅
E'= G.E
while E' ≠ ∅:
let (u, v) be an arbitrary edge of E'
C = C ∪ {u, v}
remove from E' every edge incident on either u or v
return C
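A runnable Python counterpart of this pseudocode might look as follows; the edge representation is an illustrative assumption.

# Sketch: maximal-matching 2-approximation, following the pseudocode above.
def approx_vertex_cover(edges):
    cover = set()
    remaining = {frozenset(e) for e in edges}
    while remaining:
        u, v = next(iter(remaining))              # an arbitrary edge (u, v)
        cover.update((u, v))                      # C = C ∪ {u, v}
        remaining = {e for e in remaining         # drop edges incident on u or v
                     if u not in e and v not in e}
    return cover

# example: a 4-cycle; the result is at most twice the optimum size of 2
print(approx_vertex_cover([(1, 2), (2, 3), (3, 4), (4, 1)]))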
Applications
Vertex cover optimization serves as a model for many real-world and theoretical problems. For example, a commercial establishment interested in installing the fewest possible closed circuit cameras covering all hallways (edges) connecting all rooms (nodes) on a floor might model the objective as a vertex cover minimization problem. The problem has also been used to model the elimination of repetitive DNA sequences for synthetic biology and metabolic engineering applications.
See also
Dominating set
Notes
References
A1.1: GT1, pg.190.
External links
River Crossings (and Alcuin Numbers) – Numberphile
Computational problems in graph theory
NP-complete problems
Covering problems | Vertex cover | [
"Mathematics"
] | 1,917 | [
"Computational problems in graph theory",
"Computational mathematics",
"Graph theory",
"Computational problems",
"Mathematical relations",
"Mathematical problems",
"NP-complete problems"
] |
562,788 | https://en.wikipedia.org/wiki/Basal%20metabolic%20rate | Basal metabolic rate (BMR) is the rate of energy expenditure per unit time by endothermic animals at rest. It is reported in energy units per unit time ranging from watt (joule/second) to ml O2/min or joule per hour per kg body mass J/(h·kg). Proper measurement requires a strict set of criteria to be met. These criteria include being in a physically and psychologically undisturbed state and being in a thermally neutral environment while in the post-absorptive state (i.e., not actively digesting food). In bradymetabolic animals, such as fish and reptiles, the equivalent term standard metabolic rate (SMR) applies. It follows the same criteria as BMR, but requires the documentation of the temperature at which the metabolic rate was measured. This makes BMR a variant of standard metabolic rate measurement that excludes the temperature data, a practice that has led to problems in defining "standard" rates of metabolism for many mammals.
Metabolism comprises the processes that the body needs to function. Basal metabolic rate is the amount of energy per unit of time that a person needs to keep the body functioning at rest. Some of those processes are breathing, blood circulation, controlling body temperature, cell growth, brain and nerve function, and contraction of muscles. Basal metabolic rate affects the rate that a person burns calories and ultimately whether that individual maintains, gains, or loses weight. The basal metabolic rate accounts for about 70% of the daily calorie expenditure by individuals. It is influenced by several factors. In humans, BMR typically declines by 1–2% per decade after age 20, mostly due to loss of fat-free mass, although the variability between individuals is high.
Description
The body's generation of heat is known as thermogenesis and it can be measured to determine the amount of energy expended. BMR generally decreases with age, and with the decrease in lean body mass (as may happen with aging). Increasing muscle mass has the effect of increasing BMR. Aerobic (resistance) fitness level, a product of cardiovascular exercise, while previously thought to have effect on BMR, has been shown in the 1990s not to correlate with BMR when adjusted for fat-free body mass. But anaerobic exercise does increase resting energy consumption (see "aerobic vs. anaerobic exercise"). Illness, previously consumed food and beverages, environmental temperature, and stress levels can affect one's overall energy expenditure as well as one's BMR.
BMR is measured under very restrictive circumstances when a person is awake. An accurate BMR measurement requires that the person's sympathetic nervous system not be stimulated, a condition which requires complete rest. A more common measurement, which uses less strict criteria, is resting metabolic rate (RMR).
BMR may be measured by gas analysis through either direct or indirect calorimetry, though a rough estimation can be acquired through an equation using age, sex, height, and weight. Studies of energy metabolism using both methods provide convincing evidence for the validity of the respiratory quotient (RQ), which measures the inherent composition and utilization of carbohydrates, fats and proteins as they are converted to energy substrate units that can be used by the body as energy.
Phenotypic flexibility
BMR is a flexible trait (it can be reversibly adjusted within individuals), with, for example, lower temperatures generally resulting in higher basal metabolic rates for both birds and rodents. There are two models to explain how BMR changes in response to temperature: the variable maximum model (VMM) and variable fraction model (VFM). The VMM states that the summit metabolism (or the maximum metabolic rate in response to the cold) increases during the winter, and that the sustained metabolism (or the metabolic rate that can be indefinitely sustained) remains a constant fraction of the former. The VFM says that the summit metabolism does not change, but that the sustained metabolism is a larger fraction of it. The VMM is supported in mammals, and, when using whole-body rates, passerine birds. The VFM is supported in studies of passerine birds using mass-specific metabolic rates (or metabolic rates per unit of mass). This latter measurement has been criticized by Eric Liknes, Sarah Scott, and David Swanson, who say that mass-specific metabolic rates are inconsistent seasonally.
In addition to adjusting to temperature, BMR also may adjust before annual migration cycles. The red knot (ssp. islandica) increases its BMR by about 40% before migrating northward. This is because of the energetic demand of long-distance flights. The increase is likely primarily due to increased mass in organs related to flight. The end destination of migrants affects their BMR: yellow-rumped warblers migrating northward were found to have a 31% higher BMR than those migrating southward.
In humans, BMR is directly proportional to a person's lean body mass. In other words, the more lean body mass a person has, the higher their BMR; but BMR is also affected by acute illnesses and increases with conditions like burns, fractures, infections, fevers, etc. In menstruating females, BMR varies to some extent with the phases of their menstrual cycle. Due to the increase in progesterone, BMR rises at the start of the luteal phase and stays at its highest until this phase ends. Research findings differ on how much of an increase usually occurs. Small-sample early studies found various figures, such as a 6% higher postovulatory sleep metabolism, a 7% to 15% higher 24-hour expenditure following ovulation, and a luteal-phase BMR increase of up to 12%. A study by the American Society of Clinical Nutrition found that an experimental group of female volunteers had an 11.5% average increase in 24-hour energy expenditure in the two weeks following ovulation, with a range of 8% to 16%. This group was measured via simultaneous direct and indirect calorimetry and had standardized daily meals and a sedentary schedule in order to prevent the increase from being manipulated by changes in food intake or activity level. A 2011 study conducted by the Mandya Institute of Medical Sciences found no significant difference in BMR between a woman's follicular and menstrual phases; however, calories burned per hour were significantly higher, by up to 18%, during the luteal phase. Increased state anxiety (stress level) also temporarily increased BMR.
Physiology
The early work of the scientists J. Arthur Harris and Francis G. Benedict showed that approximate values for BMR could be derived using body surface area (computed from height and weight), age, and sex, along with the oxygen and carbon dioxide measures taken from calorimetry. Studies also showed that by eliminating the sex differences that occur with the accumulation of adipose tissue by expressing metabolic rate per unit of "fat-free" or lean body mass, the values between sexes for basal metabolism are essentially the same. Exercise physiology textbooks have tables to show the conversion of height and body surface area as they relate to weight and basal metabolic values.
The primary organ responsible for regulating metabolism is the hypothalamus. The hypothalamus is located on the diencephalon and forms the floor and part of the lateral walls of the third ventricle of the cerebrum. The chief functions of the hypothalamus are:
control and integration of activities of the autonomic nervous system (ANS)
The ANS regulates contraction of smooth muscle and cardiac muscle, along with secretions of many endocrine organs such as the thyroid gland (associated with many metabolic disorders).
Through the ANS, the hypothalamus is the main regulator of visceral activities, such as heart rate, movement of food through the gastrointestinal tract, and contraction of the urinary bladder.
production and regulation of feelings of rage and aggression
regulation of body temperature
regulation of food intake, through two centers:
The feeding center or hunger center is responsible for the sensations that cause us to seek food. When sufficient food or substrates have been received and leptin is high, then the satiety center is stimulated and sends impulses that inhibit the feeding center. When insufficient food is present in the stomach and ghrelin levels are high, receptors in the hypothalamus initiate the sense of hunger.
The thirst center operates similarly when certain cells in the hypothalamus are stimulated by the rising osmotic pressure of the extracellular fluid. If thirst is satisfied, osmotic pressure decreases.
All of these functions taken together form a survival mechanism that causes us to sustain the body processes that BMR measures.
BMR estimation formulas
Several equations to predict the number of calories required by humans have been published from the early 20th–21st centuries. In each of the formulas below:
P is total heat production at complete rest,
m is mass (kg),
h is height (cm),
a is age (years).
The original Harris–Benedict equation
Historically, the most notable formula was the Harris–Benedict equation, which was published in 1919:
for men,
for women,
The difference in BMR for men and women is mainly due to differences in body mass. For example, a 55-year-old woman weighing and tall would have a BMR of per day.
The revised Harris–Benedict equation
In 1984, the original Harris–Benedict equations were revised using new data. In comparisons with actual expenditure, the revised equations were found to be more accurate:
for men,
for women,
It was the best prediction equation until 1990, when Mifflin et al. introduced the equation:
The Mifflin St Jeor equation
where s is +5 for males and −161 for females.
According to this formula, the woman in the example above has a BMR of per day.
During the last 100 years, lifestyles have changed, and Frankenfield et al. showed it to be about 5% more accurate.
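As an illustration, the sketch below evaluates the Mifflin St Jeor estimate using the commonly published coefficients 10, 6.25 and 5 together with the sex constant s described above; since the equation itself is not reproduced in this text, those coefficients should be read as an outside assumption.

# Sketch of the Mifflin St Jeor estimate, P in kcal/day.
# The coefficients 10.0, 6.25 and 5.0 are the commonly published values
# (an assumption here); s = +5 for males and -161 for females, as stated above.
def mifflin_st_jeor(mass_kg, height_cm, age_years, male=True):
    s = 5 if male else -161
    return 10.0 * mass_kg + 6.25 * height_cm - 5.0 * age_years + s

print(round(mifflin_st_jeor(70, 165, 55, male=False)))  # illustrative values only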
These formulas are based on body mass, which does not take into account the difference in metabolic activity between lean body mass and body fat. Other formulas exist which take into account lean body mass, two of which are the Katch–McArdle formula and Cunningham formula.
The Katch–McArdle formula (resting daily energy expenditure)
The Katch–McArdle formula is used to predict resting daily energy expenditure (RDEE).
The Cunningham formula is commonly cited to predict RMR instead of BMR; however, the formulas provided by Katch–McArdle and Cunningham are the same.
where ℓ is the lean body mass (LBM in kg):
where f is the body fat percentage.
According to this formula, if the woman in the example has a body fat percentage of 30%, her resting daily energy expenditure (the authors use the term of basal and resting metabolism interchangeably) would be 1262 kcal per day.
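The following sketch illustrates the Katch–McArdle estimate; the constants 370 and 21.6 are the commonly published values and are an outside assumption rather than a quotation of this article.

# Sketch of the Katch–McArdle resting daily energy expenditure (kcal/day).
# The constants 370 and 21.6 are the commonly published values (an assumption);
# lean body mass is derived from the body fat percentage f, as stated above.
def katch_mcardle(mass_kg, body_fat_percent):
    lean_body_mass_kg = mass_kg * (100 - body_fat_percent) / 100
    return 370 + 21.6 * lean_body_mass_kg

print(round(katch_mcardle(70, 30)))  # illustrative values only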
Research on individual differences in BMR
The basic metabolic rate varies between individuals. One study of 150 adults representative of the population in Scotland reported basal metabolic rates from as low as per day to as high as , with a mean BMR of per day. Statistically, the researchers calculated that 62% of this variation was explained by differences in fat free mass. Other factors explaining the variation included fat mass (7%), age (2%), and experimental error including within-subject difference (2%). The rest of the variation (27%) was unexplained. This remaining difference was not explained by sex nor by differing tissue size of highly energetic organs such as the brain.
A cross-sectional study of more than 1400 subjects in Europe and the US showed that once adjusted for differences in body composition (lean and fat mass) and age, BMR has fallen over the past 35 years. The decline was also observed in a meta-analysis of more than 150 studies dating back to the early 1920s, translating into a decline in total energy expenditure of about 6%.
Biochemistry
About 70% of a human's total energy expenditure is due to the basal life processes taking place in the organs of the body (see table). About 20% of one's energy expenditure comes from physical activity and another 10% from thermogenesis, or digestion of food (postprandial thermogenesis). All of these processes require an intake of oxygen along with coenzymes to provide energy for survival (usually from macronutrients like carbohydrates, fats, and proteins) and expel carbon dioxide, due to processing by the Krebs cycle.
For the BMR, most of the energy is consumed in maintaining fluid levels in tissues through osmoregulation, and only about one-tenth is consumed for mechanical work, such as digestion, heartbeat, and breathing.
What enables the Krebs cycle to perform metabolic changes to fats, carbohydrates, and proteins is energy, which can be defined as the ability or capacity to do work. The breakdown of large molecules into smaller molecules—associated with release of energy—is catabolism. The building up process is termed anabolism. The breakdown of proteins into amino acids is an example of catabolism, while the formation of proteins from amino acids is an anabolic process.
Exergonic reactions are energy-releasing reactions and are generally catabolic. Endergonic reactions require energy and include anabolic reactions and the contraction of muscle. Metabolism is the total of all catabolic, exergonic, anabolic, and endergonic reactions.
Adenosine triphosphate (ATP) is the intermediate molecule that drives the exergonic transfer of energy to switch to endergonic anabolic reactions used in muscle contraction. This is what causes muscles to work which can require a breakdown, and also to build in the rest period, which occurs during the strengthening phase associated with muscular contraction. ATP is composed of adenine, a nitrogen containing base, ribose, a five carbon sugar (collectively called adenosine), and three phosphate groups. ATP is a high energy molecule because it stores large amounts of energy in the chemical bonds of the two terminal phosphate groups. The breaking of these chemical bonds in the Krebs Cycle provides the energy needed for muscular contraction.
Glucose
Because the ratio of hydrogen to oxygen atoms in all carbohydrates is always the same as that in water—that is, 2 to 1—all of the oxygen consumed by the cells is used to oxidize the carbon in the carbohydrate molecule to form carbon dioxide. Consequently, during the complete oxidation of a glucose molecule, six molecules of carbon dioxide and six molecules of water are produced and six molecules of oxygen are consumed.
The overall equation for this reaction is
C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O
(30–32 ATP molecules produced depending on type of mitochondrial shuttle, 5–5.33 ATP molecules per molecule of oxygen.)
Because the gas exchange in this reaction is equal, the respiratory quotient (R.Q.) for carbohydrate is unity or 1.0: R.Q. = 6 CO2 / 6 O2 = 1.0
Fats
The chemical composition for fats differs from that of carbohydrates in that fats contain considerably fewer oxygen atoms in proportion to atoms of carbon and hydrogen. When listed on nutritional information tables, fats are generally divided into six categories: total fats, saturated fatty acid, polyunsaturated fatty acid, monounsaturated fatty acid, dietary cholesterol, and trans fatty acid. From a basal metabolic or resting metabolic perspective, more energy is needed to burn a saturated fatty acid than an unsaturated fatty acid. The fatty acid molecule is broken down and categorized based on the number of carbon atoms in its molecular structure. The chemical equation for metabolism of the twelve to sixteen carbon atoms in a saturated fatty acid molecule shows the difference between metabolism of carbohydrates and fatty acids. Palmitic acid is a commonly studied example of the saturated fatty acid molecule.
The overall equation for the substrate utilization of palmitic acid is
C16H32O2 + 23 O2 -> 16 CO2 + 16 H2O
(106 ATP molecules produced, 4.61 ATP molecules per molecule of oxygen.)
Thus the R.Q. for palmitic acid is 0.696: R.Q. = 16 CO2 / 23 O2 ≈ 0.696
Proteins
Proteins are composed of carbon, hydrogen, oxygen, and nitrogen arranged in a variety of ways to form a large combination of amino acids. Unlike fat the body has no storage deposits of protein. All of it is contained in the body as important parts of tissues, blood hormones, and enzymes. The structural components of the body that contain these amino acids are continually undergoing a process of breakdown and replacement. The respiratory quotient for protein metabolism can be demonstrated by the chemical equation for oxidation of albumin:
C72H112N18O22S + 77 O2 -> 63 CO2 + 38 H2O + SO3 + 9 CO(NH2)2
The R.Q. for albumin is 0.818: R.Q. = 63 CO2 / 77 O2 ≈ 0.818
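The respiratory quotients above follow directly from the reaction stoichiometries; the short Python sketch below simply recomputes them (the function name is illustrative).

# Sketch: respiratory quotient = molecules of CO2 produced / molecules of O2 consumed,
# using the reaction stoichiometries quoted above.
def respiratory_quotient(co2_produced, o2_consumed):
    return co2_produced / o2_consumed

print(respiratory_quotient(6, 6))    # glucose        -> 1.0
print(respiratory_quotient(16, 23))  # palmitic acid  -> ~0.696
print(respiratory_quotient(63, 77))  # albumin        -> ~0.818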
The reason this is important in the process of understanding protein metabolism is that the body can blend the three macronutrients and based on the mitochondrial density, a preferred ratio can be established which determines how much fuel is utilized in which packets for work accomplished by the muscles. Protein catabolism (breakdown) has been estimated to supply 10% to 15% of the total energy requirement during a two-hour aerobic training session. This process could severely degrade the protein structures needed to maintain survival such as contractile properties of proteins in the heart, cellular mitochondria, myoglobin storage, and metabolic enzymes within muscles.
The oxidative system (aerobic) is the primary source of ATP supplied to the body at rest and during low intensity activities and uses primarily carbohydrates and fats as substrates. Protein is not normally metabolized significantly, except during long term starvation and long bouts of exercise (greater than 90 minutes.) At rest approximately 70% of the ATP produced is derived from fats and 30% from carbohydrates. Following the onset of activity, as the intensity of the exercise increases, there is a shift in substrate preference from fats to carbohydrates. During high intensity aerobic exercise, almost 100% of the energy is derived from carbohydrates, if an adequate supply is available.
Aerobic vs. anaerobic exercise
Studies published in 1992 and 1997 indicate that the level of aerobic fitness of an individual does not have any correlation with the level of resting metabolism. Both studies find that aerobic fitness levels do not improve the predictive power of fat free mass for resting metabolic rate.
However, recent research from the Journal of Applied Physiology, published in 2012, compared resistance training and aerobic training on body mass and fat mass in overweight adults (STRRIDE AT/RT). When time commitment is evaluated against health benefit, aerobic training is the optimal mode of exercise for reducing fat mass and body mass as a primary consideration, and resistance training is good as a secondary factor when aging and lean mass are a concern. Resistance training causes injuries at a much higher rate than aerobic training. Compared to resistance training, it was found that aerobic training resulted in a significantly more pronounced reduction of body weight by enhancing the cardiovascular system, which is the principal factor in metabolic utilization of fat substrates. Resistance training, if time is available, is also helpful in post-exercise metabolism, but it is an adjunctive factor because the body needs to heal sufficiently between resistance training episodes, whereas the body can accept aerobic training every day. RMR and BMR are measurements of daily consumption of calories. The majority of studies that are published on this topic look at aerobic exercise because of its efficacy for health and weight management.
Anaerobic exercise, such as weight lifting, builds additional muscle mass. Muscle contributes to the fat-free mass of an individual and therefore effective results from anaerobic exercise will increase BMR. However, the actual effect on BMR is controversial and difficult to enumerate. Various studies suggest that the resting metabolic rate of trained muscle is around 55 kJ/kg per day; it then follows that even a substantial increase in muscle mass would make only a minor impact on BMR.
Longevity
In 1926, Raymond Pearl proposed that longevity varies inversely with basal metabolic rate (the "rate of living hypothesis"). Support for this hypothesis comes from the fact that mammals with larger body size have longer maximum life spans (large animals do have higher total metabolic rates, but the metabolic rate at the cellular level is much lower, and the breathing rate and heartbeat are slower in larger animals) and the fact that the longevity of fruit flies varies inversely with ambient temperature. Additionally, the life span of houseflies can be extended by preventing physical activity. This theory has been bolstered by several new studies linking lower basal metabolic rate to increased life expectancy, across the animal kingdom—including humans. Calorie restriction and reduced thyroid hormone levels, both of which decrease the metabolic rate, have been associated with higher longevity in animals.
However, the ratio of total daily energy expenditure to resting metabolic rate can vary between 1.6 and 8.0 between species of mammals. Animals also vary in the degree of coupling between oxidative phosphorylation and ATP production, the amount of saturated fat in mitochondrial membranes, the amount of DNA repair, and many other factors that affect maximum life span.
One problem with understanding the associations of lifespan and metabolism is that changes in metabolism are often confounded by other factors that may affect lifespan. For example under calorie restriction whole body metabolic rate goes down with increasing levels of restriction, but body temperature also follows the same pattern. By manipulating the ambient temperature and exposure to wind it was shown in mice and hamsters that body temperature is a more important modulator of lifespan than metabolic rate.
Medical considerations
A person's metabolism varies with their physical condition and activity. Weight training can have a longer impact on metabolism than aerobic training, but there are no known mathematical formulas that can exactly predict the length and duration of a raised metabolism from trophic changes with anabolic neuromuscular training.
A decrease in food intake will typically lower the metabolic rate as the body tries to conserve energy. Researcher Gary Foster estimates that a very low calorie diet of fewer than 800 calories a day would reduce the metabolic rate by more than 10 percent.
The metabolic rate can be affected by some drugs: antithyroid agents (drugs used to treat hyperthyroidism) such as propylthiouracil and methimazole bring the metabolic rate down to normal, restoring euthyroidism. Some research has focused on developing antiobesity drugs to raise the metabolic rate, such as drugs to stimulate thermogenesis in skeletal muscle.
The metabolic rate may be elevated in stress, illness, and diabetes. Menopause may also affect metabolism.
See also
Abnormal basal metabolic rate
Field metabolic rate
Harris–Benedict equation
Human-body emission
Hypothyroidism
Metabolic age
Metabolic syndrome
Schofield equation
Thermic effect of food
References
Further reading
Republished as:
Exercise physiology
Metabolism
Nutritional physiology
Temporal rates | Basal metabolic rate | [
"Physics",
"Chemistry",
"Biology"
] | 4,852 | [
"Temporal quantities",
"Physical quantities",
"Temporal rates",
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
562,827 | https://en.wikipedia.org/wiki/POP-11 | POP-11 is a reflective, incrementally compiled programming language with many of the features of an interpreted language. It is the core language of the Poplog programming environment developed originally by the University of Sussex, and recently in the School of Computer Science at the University of Birmingham, which hosts the main Poplog website.
POP-11 is an evolution of the language POP-2, developed in Edinburgh University, and features an open stack model (like Forth, among others). It is mainly procedural, but supports declarative language constructs, including a pattern matcher, and is mostly used for research and teaching in artificial intelligence, although it has features sufficient for many other classes of problems. It is often used to introduce symbolic programming techniques to programmers of more conventional languages like Pascal, who find POP syntax more familiar than that of Lisp. One of POP-11's features is that it supports first-class functions.
POP-11 is the core language of the Poplog system. The availability of the compiler and compiler subroutines at run-time (a requirement for incremental compiling) gives it the ability to support a far wider range of extensions (including run-time extensions, such as adding new data-types) than would be possible using only a macro facility. This made it possible for (optional) incremental compilers to be added for Prolog, Common Lisp and Standard ML, which could be added as required to support either mixed language development or development in the second language without using any POP-11 constructs. This made it possible for Poplog to be used by teachers, researchers, and developers who were interested in only one of the languages. The most successful product developed in POP-11 was the Clementine data mining system, developed by ISL. After SPSS bought ISL, they renamed Clementine to SPSS Modeler and decided to port it to C++ and Java, and eventually succeeded with great effort, and perhaps some loss of the flexibility provided by the use of an AI language.
POP-11 was for a time available only as part of an expensive commercial package (Poplog), but since about 1999 it has been freely available as part of the open-source software version of Poplog, including various added packages and teaching libraries. An online version of ELIZA using POP-11 is available at Birmingham.
At the University of Sussex, David Young used POP-11 in combination with C and Fortran to develop a suite of teaching and interactive development tools for image processing and vision, and has made them available in the Popvision extension to Poplog.
Simple code examples
Here is an example of a simple POP-11 program:
define Double(Source) -> Result;
Source*2 -> Result;
enddefine;
Double(123) =>
That prints out:
** 246
This one includes some list processing:
define RemoveElementsMatching(Element, Source) -> Result;
lvars Index;
[%
for Index in Source do
unless Index = Element or Index matches Element then
Index;
endunless;
endfor;
%] -> Result;
enddefine;
RemoveElementsMatching("the", [[the cat sat on the mat]]) => ;;; outputs [[cat sat on mat]]
RemoveElementsMatching("the", [[the cat] [sat on] the mat]) => ;;; outputs [[the cat] [sat on] mat]
RemoveElementsMatching([= cat], [[the cat] is a [big cat]]) => ;;; outputs [is a]
Examples using the POP-11 pattern matcher, which makes it relatively easy for students to learn to develop sophisticated list-processing programs without having to treat patterns as tree structures accessed by 'head' and 'tail' functions (CAR and CDR in Lisp), can be found in the online introductory tutorial. The matcher is at the heart of
the SimAgent (sim_agent) toolkit. Some of the powerful features of the toolkit, such as linking pattern variables to inline code variables, would have been very difficult to implement without the incremental compiler facilities.
See also
COWSEL (aka POP-1) programming language
References
R. Burstall, A. Collins and R. Popplestone, Programming in Pop-2 University Press, Edinburgh, 1968
D.J.M. Davies, POP-10 Users' Manual, Computer Science Report #25, University of Western Ontario, 1976
S. Hardy and C. Mellish, 'Integrating Prolog in the Poplog environment', in Implementations of Prolog, Ed., J.A. Campbell, Wiley, New York, 1983, pp 147–162
R. Barrett, A, Ramsay and A. Sloman, POP-11: a Practical Language for Artificial Intelligence, Ellis Horwood, Chicester, 1985
M. Burton and N. Shadbolt, POP-11 Programming for Artificial Intelligence, Addison-Wesley, 1987
J. Laventhol, Programming in POP-11, Blackwell Scientific Publications Ltd., 1987
R. Barrett and A. Ramsay, Artificial Intelligence in Practice:Examples in Pop-11, Ellis Horwood, Chicester, 1987.
M. Sharples et al., Computers and Thought, MIT Press, 1987. (An introduction to Cognitive Science using Pop-11. Online version referenced above.)
James Anderson, Ed., Pop-11 Comes of Age: The Advancement of an AI Programming Language, Ellis Horwood, Chichester, 1989
G. Gazdar and C. Mellish, Natural Language Processing in Pop11/Prolog/Lisp, Addison Wesley, 1989. (read online)
R. Smith, A. Sloman and J. Gibson, POPLOG's two-level virtual machine support for interactive languages, in Research Directions in Cognitive Science Volume 5: Artificial Intelligence, Eds. D. Sleeman and N. Bernsen, Lawrence Erlbaum Associates, pp. 203–231, 1992. (Available as Cognitive Science Research Report 153, School of Informatics, University of Sussex).
Chris Thornton and Benedict du Boulay, Artificial Intelligence Through Search, Kluwer Academic (Paperback version Intellect Books) Dordrecht Netherlands & Norwell, MA USA (Intellect at Oxford) 1992.
A. Sloman, Pop-11 Primer, 1999 (Third edition)
External links
, Free Poplog Portal
Information about POP-11 teaching materials
The Poplog.org website (including partial mirror of Free poplog web site) (currently defunct: see its more recent copy (Jun 17, 2008) @ Internet Archive Wayback Machine)
An Overview of POP-11 (Primer for experienced programmers) (alt. PDF)
Waldek Hebisch produced a small collection of programming examples in Pop-11, showing how it can be used for symbol manipulation, numerical calculation, logic and mathematics.
Computers and Thought: A practical Introduction to Artificial Intelligence on-line book introducing Cognitive Science through Pop-11.
The SimAgent (sim_agent) Toolkit
Pop-11 Eliza in the poplog system. Tutorial on Eliza
History of AI teaching in Pop-11 since about 1976.
2-D (X) graphics in Pop-11
Objectclass the object oriented programming extension to Pop-11 (modelled partly on CLOS and supporting multiple inheritance).
Tutorial introduction to object oriented programming in Pop-11.
Further references
Online documentation on Pop-11 and Poplog
Online system documentation, including porting information
Entry for Pop-11 at HOPL (History of Programming Languages) web site
Lisp programming language family
Artificial intelligence
History of computing in the United Kingdom
Science and technology in East Sussex
University of Sussex | POP-11 | [
"Technology"
] | 1,607 | [
"History of computing",
"History of computing in the United Kingdom"
] |
562,879 | https://en.wikipedia.org/wiki/Glasgow%20Haskell%20Compiler | The Glasgow Haskell Compiler (GHC) is a native or machine code compiler for the functional programming language Haskell. It provides a cross-platform software environment for writing and testing Haskell code and supports many extensions, libraries, and optimisations that streamline the process of generating and executing code. GHC is the most commonly used Haskell compiler. It is free and open-source software released under a BSD license.
History
GHC originally began in 1989 as a prototype, written in Lazy ML (LML) by Kevin Hammond at the University of Glasgow. Later that year, the prototype was completely rewritten in Haskell, except for its parser, by Cordelia Hall, Will Partain, and Simon Peyton Jones. Its first beta release was on 1 April 1991. Later releases added a strictness analyzer and language extensions such as monadic I/O, mutable arrays, unboxed data types, concurrent and parallel programming models (such as software transactional memory and data parallelism) and a profiler.
Peyton Jones and Simon Marlow later moved to Microsoft Research in Cambridge, where they continued to be primarily responsible for developing GHC. GHC also contains code from more than three hundred other contributors.
From 2009 to about 2014, third-party contributions to GHC were funded by the Industrial Haskell Group.
GHC names
Since early releases the official website has referred to GHC as The Glasgow Haskell Compiler, whereas in the executable version command it is identified as The Glorious Glasgow Haskell Compilation System. This has been reflected in the documentation. Initially, it had the internal name of The Glamorous Glasgow Haskell Compiler.
Architecture
GHC is written in Haskell, but the runtime system for Haskell, essential to run programs, is written in C and C--.
GHC's front end, incorporating the lexer, parser and typechecker, is designed to preserve as much information about the source language as possible until after type inference is complete, toward the goal of providing clear error messages to users. After type checking, the Haskell code is desugared into a typed intermediate language known as "Core" (based on System F, extended with let and case expressions). Core has been extended to support generalized algebraic datatypes in its type system, and is now based on an extension to System F known as System FC.
In the tradition of type-directed compiling, GHC's simplifier, or "middle end", where most of the optimizations implemented in GHC are performed, is structured as a series of source-to-source transformations on Core code. The analyses and transformations performed in this compiler stage include demand analysis (a generalization of strictness analysis), application of user-defined rewrite rules (including a set of rules included in GHC's standard libraries that performs foldr/build fusion), unfolding (called "inlining" in more traditional compilers), let-floating, an analysis that determines which function arguments can be unboxed, constructed product result analysis, specialization of overloaded functions, and a set of simpler local transformations such as constant folding and beta reduction.
The back end of the compiler transforms Core code into an internal representation of C--, via an intermediate language STG (short for "Spineless Tagless G-machine"). The C-- code can then take one of three routes: it is either printed as C code for compilation with GCC, converted directly into native machine code (the traditional "code generation" phase), or converted to LLVM IR for compilation with LLVM. In all three cases, the resultant native code is finally linked against the GHC runtime system to produce an executable.
Language
GHC complies with the language standards, both Haskell 98 and Haskell 2010.
It also supports many optional extensions to the Haskell standard: for example, the software transactional memory (STM) library, which allows for Composable Memory Transactions.
Extensions to Haskell
Many extensions to Haskell have been proposed. These provide features not described in the language specification, or they redefine existing constructs. As such, each extension may not be supported by all Haskell implementations. There is an ongoing effort to describe extensions and select those which will be included in future versions of the language specification.
The extensions supported by the Glasgow Haskell Compiler include:
Unboxed types and operations. These represent the primitive datatypes of the underlying hardware, without the indirection of a pointer to the heap or the possibility of deferred evaluation. Numerically intensive code can be significantly faster when coded using these types.
The ability to specify strict evaluation for a value, pattern binding, or datatype field.
More convenient syntax for working with modules, patterns, list comprehensions, operators, records, and tuples.
Syntactic sugar for computing with arrows and recursively-defined monadic values. Both of these concepts extend the monadic do-notation provided in standard Haskell.
A significantly more powerful system of types and typeclasses, described below.
Template Haskell, a system for compile-time metaprogramming. Expressions can be written to produce Haskell code in the form of an abstract syntax tree. These expressions are typechecked and evaluated at compile time; the generated code is then included as if it were part of the original code. Together with the ability to reflect on definitions, this provides a powerful tool for further extensions to the language.
Quasi-quotation, which allows the user to define new concrete syntax for expressions and patterns. Quasi-quotation is useful when a metaprogram written in Haskell manipulates code written in a language other than Haskell.
Generic typeclasses, which specify functions solely in terms of the algebraic structure of the types they operate on.
Parallel evaluation of expressions using multiple CPU cores. This does not require explicitly spawning threads. The distribution of work happens implicitly, based on annotations provided in the program.
Compiler pragmas for directing optimizations such as inline expansion and specializing functions for particular types.
Customizable rewrite rules are rules describing how to replace one expression with an equivalent, but more efficiently evaluated expression. These are used within core data structure libraries to improve performance throughout application-level code.
Record dot syntax. Provides syntactic sugar for accessing the fields of a (potentially nested) record which is similar to the syntax of many other programming languages.
Type system extensions
An expressive static type system is one of the major defining features of Haskell. Accordingly, much of the work in extending the language has been directed towards data types and type classes.
The Glasgow Haskell Compiler supports an extended type system based on the theoretical System FC. Major extensions to the type system include:
Arbitrary-rank and impredicative polymorphism. Essentially, a polymorphic function or datatype constructor may require that one of its arguments is also polymorphic.
Generalized algebraic data types. Each constructor of a polymorphic datatype can encode information into the resulting type. A function which pattern-matches on this type can use the per-constructor type information to perform more specific operations on data.
Existential types. These can be used to "bundle" some data together with operations on that data, in such a way that the operations can be used without exposing the specific type of the underlying data. Such a value is very similar to an object as found in object-oriented programming languages.
Data types that do not actually contain any values. These can be useful to represent data in type-level metaprogramming.
Type families: user-defined functions from types to types. Whereas parametric polymorphism provides the same structure for every type instantiation, type families provide ad hoc polymorphism with implementations that can differ between instantiations. Use cases include content-aware optimizing containers and type-level metaprogramming.
Implicit function parameters that have dynamic scope. These are represented in types in much the same way as type class constraints.
Linear types (GHC 9.0)
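As an illustration of generalized algebraic data types from the list above, the following sketch uses invented names (Expr, eval); each constructor fixes the result type of the expression it builds, and pattern matching recovers that information, so an ill-typed expression such as adding a boolean to an integer is rejected at compile time.
{-# LANGUAGE GADTs #-}
module GadtSketch where
-- A tiny typed expression language (illustrative only).
data Expr a where
  IntLit  :: Int  -> Expr Int
  BoolLit :: Bool -> Expr Bool
  Add     :: Expr Int  -> Expr Int -> Expr Int
  If      :: Expr Bool -> Expr a   -> Expr a -> Expr a
-- Matching on a constructor reveals its per-constructor type
-- information, so different branches may return different concrete types.
eval :: Expr a -> a
eval (IntLit n)  = n
eval (BoolLit b) = b
eval (Add x y)   = eval x + eval y
eval (If c t e)  = if eval c then eval t else eval e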
Extensions relating to type classes include:
A type class may be parametrized on more than one type. Thus a type class can describe not only a set of types, but an n-ary relation on types.
Functional dependencies, which constrain parts of that relation to be a mathematical function on types. That is, the constraint specifies that some type class parameter is completely determined once some other set of parameters is fixed. This guides the process of type inference in situations where otherwise there would be ambiguity (a sketch follows this list).
Significantly relaxed rules regarding the allowable shape of type class instances. When these are enabled in full, the type class system becomes a Turing-complete language for logic programming at compile time.
Type families, as described above, may also be associated with a type class.
The automatic generation of certain type class instances is extended in several ways. New type classes for generic programming and common recursion patterns are supported. Also, when a new type is declared as isomorphic to an existing type, any type class instance declared for the underlying type may be lifted to the new type "for free".
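The multi-parameter type class and functional dependency features described above can be sketched as follows; the Container class and its list instance are invented for illustration. The dependency c -> e says that the container type determines its element type, which lets the type checker infer e as soon as c is known.
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}
module ContainerSketch where
-- A two-parameter relation between a container type and its element type.
class Container c e | c -> e where
  empty  :: c
  insert :: e -> c -> c
  toList :: c -> [e]
-- Ordinary lists form one instance of the relation.
instance Container [a] a where
  empty  = []
  insert = (:)
  toList = id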
Portability
Versions of GHC are available for several operating systems and computing platforms, including Windows and most varieties of Unix (such as Linux, FreeBSD, OpenBSD, and macOS). GHC has also been ported to several different processor architectures.
See also
Hugs (interpreter)
Yhc
Haskell Platform
References
External links
Cross-platform free software
Free and open source compilers
Free Haskell implementations
History of computing in the United Kingdom
Software using the BSD license
University of Glasgow | Glasgow Haskell Compiler | [
"Technology"
] | 1,978 | [
"History of computing",
"History of computing in the United Kingdom"
] |
562,883 | https://en.wikipedia.org/wiki/Higher-order%20logic | In mathematics and logic, a higher-order logic (abbreviated HOL) is a form of logic that is distinguished from first-order logic by additional quantifiers and, sometimes, stronger semantics. Higher-order logics with their standard semantics are more expressive, but their model-theoretic properties are less well-behaved than those of first-order logic.
The term "higher-order logic" is commonly used to mean higher-order simple predicate logic. Here "simple" indicates that the underlying type theory is the theory of simple types, also called the simple theory of types. Leon Chwistek and Frank P. Ramsey proposed this as a simplification of the complicated and clumsy ramified theory of types specified in the Principia Mathematica by Alfred North Whitehead and Bertrand Russell. Simple types is sometimes also meant to exclude polymorphic and dependent types.
Quantification scope
First-order logic quantifies only variables that range over individuals; second-order logic also quantifies over sets; third-order logic also quantifies over sets of sets, and so on.
Higher-order logic is the union of first-, second-, third-, ..., nth-order logic; i.e., higher-order logic admits quantification over sets that are nested arbitrarily deeply.
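For example, the principle of mathematical induction for the natural numbers can be written as a single second-order axiom, because the predicate variable P is itself quantified; the formula below is purely illustrative and is not drawn from the works cited in this article:
\forall P \, \bigl( ( P(0) \land \forall n \, ( P(n) \rightarrow P(n+1) ) ) \rightarrow \forall n \, P(n) \bigr)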
Semantics
There are two possible semantics for higher-order logic.
In the standard or full semantics, quantifiers over higher-type objects range over all possible objects of that type. For example, a quantifier over sets of individuals ranges over the entire powerset of the set of individuals. Thus, in standard semantics, once the set of individuals is specified, this is enough to specify all the quantifiers. HOL with standard semantics is more expressive than first-order logic. For example, HOL admits categorical axiomatizations of the natural numbers, and of the real numbers, which are impossible with first-order logic. However, by a result of Kurt Gödel, HOL with standard semantics does not admit an effective, sound, and complete proof calculus. The model-theoretic properties of HOL with standard semantics are also more complex than those of first-order logic. For example, the Löwenheim number of second-order logic is already larger than the first measurable cardinal, if such a cardinal exists. The Löwenheim number of first-order logic, in contrast, is ℵ0, the smallest infinite cardinal.
In Henkin semantics, a separate domain is included in each interpretation for each higher-order type. Thus, for example, quantifiers over sets of individuals may range over only a subset of the powerset of the set of individuals. HOL with these semantics is equivalent to many-sorted first-order logic, rather than being stronger than first-order logic. In particular, HOL with Henkin semantics has all the model-theoretic properties of first-order logic, and has a complete, sound, effective proof system inherited from first-order logic.
Properties
Higher-order logics include the offshoots of Church's simple theory of types and the various forms of intuitionistic type theory. Gérard Huet has shown that unifiability is undecidable in a type-theoretic flavor of third-order logic, that is, there can be no algorithm to decide whether an arbitrary equation between second-order (let alone arbitrary higher-order) terms has a solution.
Up to a certain notion of isomorphism, the powerset operation is definable in second-order logic. Using this observation, Jaakko Hintikka established in 1955 that second-order logic can simulate higher-order logics in the sense that for every formula of a higher-order logic, one can find an equisatisfiable formula for it in second-order logic.
The term "higher-order logic" is assumed in some context to refer to classical higher-order logic. However, modal higher-order logic has been studied as well. According to several logicians, Gödel's ontological proof is best studied (from a technical perspective) in such a context.
See also
Zeroth-order logic (propositional logic)
First-order logic
Second-order logic
Type theory
Higher-order grammar
Higher-order logic programming
HOL (proof assistant)
Many-sorted logic
Typed lambda calculus
Modal logic
Notes
References
Andrews, Peter B. (2002). An Introduction to Mathematical Logic and Type Theory: To Truth Through Proof, 2nd ed, Kluwer Academic Publishers,
Stewart Shapiro, 1991, "Foundations Without Foundationalism: A Case for Second-Order Logic". Oxford University Press.,
Stewart Shapiro, 2001, "Classical Logic II: Higher Order Logic," in Lou Goble, ed., The Blackwell Guide to Philosophical Logic. Blackwell,
Lambek, J. and Scott, P. J., 1986. Introduction to Higher Order Categorical Logic, Cambridge University Press,
External links
Andrews, Peter B, Church's Type Theory in Stanford Encyclopedia of Philosophy.
Miller, Dale, 1991, "Logic: Higher-order," Encyclopedia of Artificial Intelligence, 2nd ed.
Herbert B. Enderton, Second-order and Higher-order Logic in Stanford Encyclopedia of Philosophy, published Dec 20, 2007; substantive revision Mar 4, 2009.
Predicate logic
Systems of formal logic | Higher-order logic | [
"Mathematics"
] | 1,114 | [
"Mathematical logic",
"Predicate logic",
"Basic concepts in set theory"
] |
562,904 | https://en.wikipedia.org/wiki/Dc%20%28computer%20program%29 | dc (desk calculator) is a cross-platform reverse-Polish calculator which supports arbitrary-precision arithmetic. It was written by Lorinda Cherry and Robert Morris at Bell Labs. It is one of the oldest Unix utilities, preceding even the invention of the C programming language. Like other utilities of that vintage, it has a powerful set of features but terse syntax.
Traditionally, the bc calculator program (with infix notation) was implemented on top of dc.
This article provides some examples in an attempt to give a general flavour of the language; for a complete list of commands and syntax, one should consult the man page for one's specific implementation.
History
dc is the oldest surviving Unix language program. When its home Bell Labs received a PDP-11, dc, written in B, was the first language to run on the new computer, even before an assembler. Ken Thompson has opined that dc was the very first program written on the machine.
Basic operations
To multiply four and five in dc (note that most of the whitespace is optional):
$ cat << EOF > cal.txt
4 5 *
p
EOF
$ dc cal.txt
20
$
The results are also available from the commands:
$ echo "4 5 * p" | dc
or
$ dc -
4 5*pq
20
$ dc
4 5 *
p
20
q
$ dc -e '4 5 * p'
This translates into "push four and five onto the stack, then, with the multiplication operator, pop two elements from the stack, multiply them and push the result onto the stack." Then the p command is used to examine (print out to the screen) the top element on the stack. The q command quits the invoked instance of dc. Note that numbers must be spaced from each other even as some operators need not be.
The arithmetic precision is changed with the command k, which sets the number of fractional digits (the number of digits following the point) to be used for arithmetic operations. Since the default precision is zero, this sequence of commands produces 0 as a result:
2 3 / p
By adjusting the precision with k, an arbitrary number of decimal places can be produced. This command sequence outputs .66666.
5 k
2 3 / p
To evaluate √((12 + (−3)^4) / 11) − 22 (v computes the square root of the top of the stack and _ is used to input a negative number):
12 _3 4 ^ + 11 / v 22 -
p
To swap the top two elements of the stack, use the r command. To duplicate the top element, use the d command.
Input/output
To read a line from stdin, use the ? command. This evaluates the line as if it were a dc command, and so it is necessary that it be syntactically correct and presents a potential security problem because the ! dc command enables arbitrary command execution.
As mentioned above, p prints the top of the stack with a newline after it. n pops the top of the stack and prints it without a trailing newline. f prints the entire stack with one entry per line.
dc also supports arbitrary input and output radices. The i command pops the top of the stack and uses it for the input base. Hex digits must be in upper case to avoid collisions with dc commands and are limited to A-F. The o command does the same for the output base, but keep in mind that the input base affects the parsing of every numeric value afterwards so it is usually advisable to set the output base first. Therefore 10o sets the output radix to the current input radix, but generally not to 10 (ten). Nevertheless Ao resets the output base to 10 (ten), regardless of the input base. To read the values, the K, I and O commands push the current precision, input radix and output radix on to the top of the stack.
As an example, to convert from hex to binary:
$ echo 16i2o DEADBEEFp | dc
11011110101011011011111011101111
Language features
Registers
In addition to these basic arithmetic and stack operations, dc includes support for macros, conditionals and storing of results for later retrieval.
The mechanism underlying macros and conditionals is the register, which in dc is a storage location with a single character name which can be stored to and retrieved from: sc pops the top of the stack and stores it in register c, and lc pushes the value of register c onto the stack. For example:
3 sc 4 lc * p
Registers can also be treated as secondary stacks, so values can be pushed and popped between them and the main stack using the S and L commands.
Strings
String values are enclosed in [ and ] characters and may be pushed onto the stack and stored in registers. The a command converts the low order byte of the numeric value into an ASCII character, or if the top of the stack is a string it replaces it with the first character of the string. There are no ways to build up strings or perform string manipulation other than executing it with the x command, or printing it with the P command.
The # character begins a comment to the end of the line.
Macros
Macros are then implemented by allowing registers and stack entries to be strings as well as numbers. A string can be printed, but it can also be executed (i.e. processed as a sequence of dc commands). So for instance we can store a macro to add one and then multiply by 2 into register m:
[1 + 2 *] sm
and then (using the x command which executes the top of the stack) we can use it like this:
3 lm x p
Conditionals
Finally, we can use this macro mechanism to provide conditionals. The command =r pops two values from the stack, and executes the macro stored in register r only if they are equal. So this prints the string equal only if the top two values on the stack are of equal value:
[[equal]p] sm 5 5 =m
Other conditionals are >, !>, <, !<, !=, which execute the specified macro if the top two values on the stack are greater, less than or equal to ("not greater"), less than, greater than or equal to ("not less than"), and not equals, respectively. Note that the order of the operands in inequality comparisons is the opposite of the order for arithmetic; 5 3 - evaluates to 2, but 5 3 <x runs the contents of the x register because the comparison tests whether 3 is less than 5.
Loops
Looping is then possible by defining a macro which (conditionally) reinvokes itself. A simple factorial of the top of the stack might be implemented as:
# F(x): return x!
# if x-1 > 1
# return x * F(x-1)
# otherwise
# return x
[d1-d1<F*]dsFxp
The 1Q command exits from a macro, allowing an early return. q quits from two levels of macros (and dc itself if there are fewer than two levels on the call stack). z pushes the current stack depth onto the stack (the depth as it was before the z operation itself).
Examples
Summing the entire stack
This is implemented with a macro stored in register a which conditionally calls itself, performing an addition each time, until only one value remains on the stack. The z operator is used to push the number of entries in the stack onto the stack. The comparison operator > pops two values off the stack in making the comparison.
dc -e "1 2 4 8 16 100 0d[+z1<a]dsaxp"
And the result is 131.
Summing all dc expressions as lines from file
A bare number is a valid dc expression, so this can be used to sum a file where each line contains a single number.
This is again implemented with a macro stored in register a which conditionally calls itself, performing an addition each time, until only one value remains on the stack.
dc -e "0d[?+z1<a]dsaxp" < file
The ? operator reads another command from the input stream. If the input line contains a decimal number, that value is added to the stack. When the input file reaches end of file, the command is null, and no value is added to the stack.
{ echo "5"; echo "7"; } | dc -e "0d[?+z1<a]dsaxp"
And the result is 12.
The input lines can also be complex dc commands.
{ echo "3 5 *"; echo "4 3 *"; echo "5dd++"; } | dc -e "0d[?+z1<a]dsaxp"
And the result is 42.
Note that since dc supports arbitrary precision, there is no concern about numeric overflow or loss of precision, no matter how many lines the input stream contains, unlike a similarly concise solution in AWK.
Downsides of this solution are: the loop stops on encountering a blank line in the input stream (technically, any input line which does not add at least one numeric value to the stack); and, for handling negative numbers, leading instances of '-' used to denote a negative sign must be changed to '_' in the input stream, because of dc's nonstandard negative sign. The ? operator in dc does not provide a clean way to discern reading a blank line from reading end of file.
Unit conversion
As an example of a relatively simple program in dc, this command (in 1 line):
dc -e '[[Enter a number (metres), or 0 to exit]PAP]sh[q]sz[lhx?d0=zAk.0254/.5+0kC~1/rn[ feet ]Pn[ inches]PAPdx]dx'
converts distances from metres to feet and inches; the bulk of it is concerned with prompting for input, printing output in a suitable format and looping around to convert another number.
Greatest common divisor
As an example, here is an implementation of the Euclidean algorithm to find the GCD:
dc -e '??[dSarLa%d0<a]dsax+p' # shortest
dc -e '[a=]P?[b=]P?[dSarLa%d0<a]dsax+[GCD:]Pp' # easier-to-read version
Factorial
Computing the factorial of an input value,
dc -e '?[q]sQ[d1=Qd1-lFx*]dsFxp'
Quines in dc
There exist also quines in the programming language dc; programs that produce its source code as output.
dc -e '[91Pn[dx]93Pn]dx'
dc -e '[91PP93P[dx]P]dx'
Printing all prime numbers
dc -e '2p3p[dl!d2+s!%0=@l!l^!<#]s#[s/0ds^]s@[p]s&[ddvs^3s!l#x0<&2+l.x]ds.x'
This program was written by Michel Charpentier.
It outputs the sequence of prime numbers.
Note that a shorter implementation is possible, which needs fourteen fewer symbols.
dc -e '2p3p[pq]s$[l!2+ds!l^<$dl!%0<#]s#[+dvs^1s!l#x2l.x]ds.x'
Integer factorization
dc -e '[n=]P?[p]s2[lip/dli%0=1dvsr]s12sid2%0=13sidvsr[dli%0=1lrli2+dsi!>.]ds.xd1<2'
This program was also written by Michel Charpentier.
There is a shorter
dc -e "[n=]P?[lfp/dlf%0=Fdvsr]sF[dsf]sJdvsr2sf[dlf%0=Flfdd2%+1+sflr<Jd1<M]dsMx"
and a faster solution (try with the 200-bit number (input 2 200^1-)
dc -e "[n=]P?[lfp/dlf% 0=Fdvsr]sFdvsr2sfd2%0=F3sfd3%0=F5sf[dlf%0=Flfd4+sflr>M]sN[dlf%0=Flfd2+sflr>N]dsMx[p]sMd1<M"
Note that the latter can be sped up even more, if the access to a constant is replaced by a register access.
dc -e "[n=]P?[lfp/dlf%l0=Fdvsr]sF2s2dvsr2sf4s4d2%0=F3sfd3%0=F5sf[dlf%l0=Flfdl4+sflr>M]sN[dlf%l0=Flfdl2+sflr>N]dsMx[p]sMd1<M"
Calculating Pi
An implementation of the Chudnovsky algorithm in the programming language dc. The program will print better and better approximations as it runs. But as pi is a transcendental number, the program will continue until interrupted or resource exhaustion of the machine it is run on.
dc -e '_640320[0ksslk3^16lkd12+sk*-lm*lhd1+sh3^/smlxlj*sxll545140134+dsllm*lxlnk/ls+dls!=P]sP3^sj7sn[6sk1ddshsxsm13591409dsllPx10005v426880*ls/K3-k1/pcln14+snlMx]dsMx'
A fast divide-and-conquer implementation of the same formula that doubles in size each iteration. It evaluates a finite number of sums as an exact rational number and only performs one large division and square root per iteration. It is fast, but will still quickly slow down as the size of the fraction increases.
dc -e '1Sk1SR13591409dSBSP426880dSQ4/3^9*SC[0r-]s-[lkE*1-k10005vlQ*lP/nAan0k]dSox[Lkd1+Skdd1+Sk3^lC*SQ2*1-d3*d*4-*dSR545140134LB+dSB*lk2%0=-SP]dszx[LRLRdLP*LPLQdLQ*SQ*+SP*SR]sc[d1-d0<yd0<yd0=z0=zlcx]sy0[lcxlox1+lyxllx]dslx'
Diffie–Hellman key exchange
A more complex example of dc use embedded in a Perl script performs a Diffie–Hellman key exchange. This was popular as a signature block among cypherpunks during the ITAR debates, where the short script could be run with only Perl and dc, ubiquitous programs on Unix-like operating systems:
#!/usr/bin/perl -- -export-a-crypto-system-sig Diffie-Hellman-2-lines
($g, $e, $m) = @ARGV, $m || die "$0 gen exp mod\n";
print `echo "16dio1[d2%Sa2/d0<X+d*La1=z\U$m%0]SX$e"[$g*]\EszlXx+p | dc`
A commented version is slightly easier to understand and shows how to use loops, conditionals, and the q command to return from a macro. With the GNU version of dc, the | command can be used to do arbitrary precision modular exponentiation without needing to write the X function.
#!/usr/bin/perl
my ($g, $e, $m) = map { "\U$_" } @ARGV;
die "$0 gen exp mod\n" unless $m;
print `echo $g $e $m | dc -e '
# Hex input and output
16dio
# Read m, e and g from stdin on one line
?SmSeSg
# Function z: return g * top of stack
[lg*]sz
# Function Q: remove the top of the stack and return 1
[sb1q]sQ
# Function X(e): recursively compute g^e % m
# It is the same as Sm^Lm%, but handles arbitrarily large exponents.
# Stack at entry: e
# Stack at exit: g^e % m
# Since e may be very large, this uses the property that g^e % m ==
# if( e == 0 )
# return 1
# x = (g^(e/2)) ^ 2
# if( e % 2 == 1 )
# x *= g
# return x %
[
d 0=Q # return 1 if e==0 (otherwise, stack: e)
d 2% Sa # Store e%2 in a (stack: e)
2/ # compute e/2
lXx # call X(e/2)
d* # compute X(e/2)^2
La1=z # multiply by g if e%2==1
lm % # compute (g^e) % m
] SX
le # Load e from the register
lXx # compute g^e % m
p # Print the result
'`;
Environment variables
If the environment variable DC_LINE_LENGTH exists and contains an integer that is greater than 1 and less than , the output of number digits (according to the output base) will be restricted to this value, inserting thereafter backslashes and newlines. The default line length is 70. The special value of 0 disables line breaks.
See also
bc (programming language)
Calculator input methods
HP calculators
Stack machine
Reverse Polish notation
References
External links
Package dc in Debian GNU/Linux repositories
Native Windows port of bc, which includes dc.
Cross-platform software
Unix software
Software calculators
Free mathematics software
Numerical programming languages
Stack-oriented programming languages
Plan 9 commands | Dc (computer program) | [
"Mathematics",
"Technology"
] | 3,974 | [
"Software calculators",
"Free mathematics software",
"Computing commands",
"Plan 9 commands",
"Mathematical software"
] |
562,964 | https://en.wikipedia.org/wiki/Esrange | [
{ "type": "ExternalData", "service": "geoshape", "ids": "Q1368518", "properties": { "title": "main buildings", "stroke": "#52b483", "fill": "#52b483" } },
{ "type": "ExternalData", "service": "geoshape", "ids": "Q127446463", "properties": { "title": "safety area A", "stroke": "#f99991", "fill": "#f99991" } },
{ "type": "ExternalData", "service": "geoshape", "ids": "Q127448345", "properties": { "title": "safety area B", "stroke": "#f99991", "fill": "#f99991" } },
{ "type": "ExternalData", "service": "geoshape", "ids": "Q127450009", "properties": { "title": "safety area C", "stroke": "#f99991", "fill": "#f99991" } },
{ "type": "Feature", "geometry": { "type": "Point", "coordinates": [21.10722, 67.89331] }, "properties": { "title": "suborbital launch site", "marker-symbol": "rocket", "marker-color": "#B22222" } },
{ "type": "Feature", "geometry": { "type": "Point", "coordinates": [21.16233, 67.87808] }, "properties": { "title": "orbital launch site", "marker-symbol": "rocket", "marker-color": "#B22222" } },
{ "type": "Feature", "geometry": { "type": "Point", "coordinates": [21.08384, 67.88741] }, "properties": { "title": "balloon launch site", "marker-symbol": "circle-stroked", "marker-color": "#B22222" } },
{ "type": "Feature", "geometry": { "type": "Point", "coordinates": [21.06169, 67.87896] }, "properties": { "title": "satellite ground station", "marker-symbol": "communications-tower", "marker-color": "#B22222" } },
{ "type": "Feature", "geometry": { "type": "Point", "coordinates": [21.081389, 67.891667] }, "properties": { "title": "main building complex", "marker-symbol": "home", "marker-color": "#228B22" } },
]
Esrange Space Center is a rocket range and research centre located about 40 kilometers east of the town of Kiruna in northern Sweden. It is a base for scientific research with high-altitude balloons, investigation of the aurora borealis, sounding rocket launches, and satellite tracking, among other things. Located 200 km north of the Arctic Circle and surrounded by a vast wilderness, its geographic location is ideal for many of these purposes.
Esrange was built in 1964 by ESRO, the European Space Research Organisation, which later became European Space Agency by merging with ELDO, the European Launcher Development Organisation. The first rocket launch from Esrange occurred on 19 November 1966. In 1972, ownership was transferred to the newly started Swedish Space Corporation.
History
In the 1960s, Esrange was established as an ESRO sounding rocket launching range located in Kiruna. This location was chosen because it was generally agreed that it was important to carry out a sounding rocket programme in the auroral zone, and for this reason it was essential that ESRO equip itself with a suitable range in the northern latitudes. Access to Kiruna was good by air, road and rail, and the launching range was relatively close to the town of Kiruna. Finally and perhaps decisively, Esrange could be located near Kiruna Geophysical Observatory (subsequently renamed to Swedish Institute of Space Physics). In 1972 ownership and operations of the range was transferred to the Swedish Space Corporation.
Name
The name of the facility was originally ESRANGE, which was an abbreviation for ESRO Sounding Rocket Launching Range.
When Swedish Space Corporation took over the range, its name became Esrange (with capital 'E' only).
Esrange Space Center is the name that is currently used for the facility.
Other ways to interpret the name over the years has been European Space and Sounding Rocket Range, and European Space Range.
Rocket activities
There had been Swedish rocket activities previously, mainly at Kronogård (18 launches in the period 1961–1964). However, the rocket activity in Sweden did not gain thrust until after ESRO established Esrange in 1964.
During the period 1966–1972 ESRO launched more than 150 rockets from Esrange. Most of these were Centaure, Nike Apache, and Skua rockets reaching 100–220 km altitude. They supported many branches of European research, but the emphasis was on atmospheric and ionospheric research.
In 1972 the management of Esrange was transferred to the Swedish Space Corporation (SSC). Gradually the smaller rockets were complemented by larger rockets reaching higher altitudes, achieving weightlessness for a few minutes when the rocket is above the parts of the atmosphere giving an appreciable friction. Three main programmes, Texus, Maser, and Maxus currently dominate the rocket activities at Esrange and support microgravity research for ESA and DLR:
SSC, jointly with DLR, introduced a new launch service with the Suborbital Express programme launched in 2019. Suborbital Express is now integrating the Maser microgravity programme.
More than 500 rockets have been launched from Esrange since 1966. For information on individual rockets, see the List of rockets launched from Esrange.
Esrange has six launchers:
MAXUS launcher (used for the CASTOR 4B rocket)
MAN launcher (owned by DLR)
MRL Launcher (used for the Orion, Nike-Orion, Taurus-Orion, Nike-Black Brant V, Terrier-Black Brant rockets)
Skylark launch tower (now used for the VSB-30 rocket)
FFAR launcher (used for Folding-Fin Aerial Rockets)
SULO/VIPER launcher (used for Super Loki and VIPER rockets)
Balloon activities
Since 1974, more than 500 high-altitude balloons have been launched from Esrange for research purposes. The launch pad can handle balloons with volumes exceeding 1 million cubic meters.
Satellite services
The arctic latitude of Esrange makes it very suitable for communication with satellites in polar orbits. Esrange Satellite Station is part of a global network with stations in Canada, Alaska, Hawaii, Chile and Australia. This global network is managed from Esrange.
Esrange Space Center satellite station focuses on data acquisition and processing for remote sensing and scientific missions as well as TT&C support. The station is often used in combination with SSC's Inuvik Satellite Station in northern Canada, to increase coverage opportunities for polar orbiting missions.
Esrange Space Center satellite station includes six independent Telemetry Tracking & Command (TT&C) systems in S-Band (one with receive capability also in the UHF-Band), six multi-frequency receive antenna systems in S/X-Band and an operational building which houses reception system electronics and data processing equipment. Satellite services at Esrange began in 1978.
Satellite control services
A number of telecommunication satellites have been controlled through Esrange:
Tele-X (1989–1998)
Sirius-1 (1995–2003)
Sirius-2 (1997–2009)
Sirius-3 (1998–2015)
Sirius-4 (2008–)
Most research satellites of the Swedish space programme have received control commands through Esrange:
Viking (1986–1987)
Freja (1992–1996)
Astrid-1 (1995)
Odin (2001–)
The exception was controlled from SSC's laboratories in Solna outside Stockholm:
Astrid-2 (1998–1999)
Ground station services
Data have been received at Esrange from more than 50 satellites, including SPOT 1–5, Landsat 2–7, ERS-1–2 and Envisat.
Satellite launch capability
Ideas to use Esrange Space Center for orbital launches have existed since the inauguration of the base in 1966, then in the vision of ESRO. As new smaller launcher projects started to emerge at the beginning of the new millennium, SSC started to form new ideas to use these to obtain an orbital capability.
On October 14, 2020, Matilda Ernkrans, the Swedish Space Minister, announced the decision of the Swedish government to establish capability to launch small satellites from Esrange Space Center in northern Sweden.
The orbital launch site, LC-3, was inaugurated on 13 January 2023 as the ribbon was cut by the Swedish king Carl XVI Gustaf and prime minister Ulf Kristersson, together with European Commission President Ursula von der Leyen. There are currently plans for an orbital launch in 2024.
Impact
The area of the site is traditional land of the Sami people, particularly for reindeer herding. Shelters have been established for people in the surrounding area to take cover during launches.
Increased industrial, military and aeronautic activity in the region has been viewed critically by Sami people.
See also
List of rockets launched from Esrange
Swedish Space Corporation
Swedish National Space Agency
Swedish Institute of Space Physics
North European Aerospace Test range
List of rocket launch sites
Rexus/Bexus
SaxaVord Spaceport
References
Footnotes
Sources
The History of Sounding Rockets and Their Contribution to European Space Research, Günther Seibert, ESA HSR-38, November 2006, .
External links
Esrange Space Center
List of stratospheric balloons launched from Esrange
Swedish Space Corporation - Official site
European Space Agency
Spaceflight
Spaceports in Europe
Rocket launch sites in Sweden
Science and technology in Sweden
Space programme of Sweden
Kiruna
Buildings and structures in Norrbotten County
1966 establishments in Sweden | Esrange | [
"Astronomy"
] | 2,242 | [
"Spaceflight",
"Outer space"
] |
562,998 | https://en.wikipedia.org/wiki/3000%20%28number%29 | 3000 (three thousand) is the natural number following 2999 and preceding 3001. It is the smallest number requiring thirteen letters in English (when "and" is required from 101 forward).
Selected numbers in the range 3001–3999
3001 to 3099
3001 – super-prime; divides the Euclid number 2999# + 1
3003 – triangular number, only number known to appear eight times in Pascal's triangle (the eight appearances are written out as binomial coefficients after this list); no number is known to appear more than eight times other than 1. (see Singmaster's conjecture)
3019 – super-prime, happy prime
3023 – 84th Sophie Germain prime, 51st safe prime
3025 = 55^2, sum of the cubes of the first ten integers, centered octagonal number, dodecagonal number
3037 – star number, cousin prime with 3041
3045 – sum of the integers 196 to 210 and sum of the integers 211 to 224
3046 – centered heptagonal number
3052 – decagonal number
3059 – centered cube number
3061 – prime of the form 2p-1
3063 – perfect totient number
3067 – super-prime
3071 – Thabit number
3072 – 3-smooth number (2^10 × 3)
3075 – nonagonal number
3078 – 18th pentagonal pyramidal number
3080 – pronic number
3081 – triangular number, 497th sphenic number
3087 – sum of first 40 primes
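The eight appearances of 3003 in Pascal's triangle mentioned above can be written out explicitly as binomial coefficients:
3003 = \binom{3003}{1} = \binom{3003}{3002} = \binom{78}{2} = \binom{78}{76} = \binom{15}{5} = \binom{15}{10} = \binom{14}{6} = \binom{14}{8}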
3100 to 3199
3109 – super-prime
3119 – safe prime
3121 – centered square number, emirp, largest minimal prime in quinary.
3125 = 5^5, a solution to the expression x^x with x = 5
3136 = 56^2, palindromic in ternary (11022011 in base 3), tribonacci number
3137 – Proth prime, both a left- and right-truncatable prime
3149 – highly cototient number
3150 = 15^3 - 15^2
3155 – member of the Mian–Chowla sequence
3159 = number of trees with 14 unlabeled nodes
3160 – triangular number
3167 – safe prime
3169 – super-prime, Cuban prime of the form (x^3 − y^3)/(x − y) with x = y + 1
3192 – pronic number
3200 to 3299
3203 – safe prime
3207 – number of compositions of 14 whose run-lengths are either weakly increasing or weakly decreasing
3229 – super-prime
3240 – triangular number
3248 – member of a Ruth-Aaron pair with 3249 under second definition, largest number whose factorial is less than 10^10000 – hence its factorial is the largest certain advanced computer programs can handle.
3249 = 57^2, palindromic in base 7 (12321 in base 7), centered octagonal number, member of a Ruth–Aaron pair with 3248 under second definition
3253 – sum of eleven consecutive primes (269 + 271 + 277 + 281 + 283 + 293 + 307 + 311 + 313 + 317 + 331)
3256 – centered heptagonal number
3259 – super-prime, completes the ninth prime quadruplet set
3264 – solution to Steiner's conic problem: number of smooth conics tangent to 5 given conics in general position
3266 – sum of first 41 primes, 523rd sphenic number
3276 – tetrahedral number
3277 – 5th super-Poulet number, decagonal number
3279 – first composite Wieferich number
3281 – octahedral number, centered square number
3286 – nonagonal number
3299 – 85th Sophie Germain prime, super-prime
3300 to 3399
3306 – pronic number
3307 – balanced prime
3313 – balanced prime, star number
3319 – super-prime, happy number
3321 – triangular number
3329 – 86th Sophie Germain prime, Proth prime, member of the Padovan sequence
3354 – member of the Mian–Chowla sequence
3358 – sum of the squares of the first eleven primes
3359 – 87th Sophie Germain prime, highly cototient number
3360 – largely composite number
3363/2378 ≈ √2
3364 = 58^2
3367 = 15^3 - 2^3 = 16^3 - 9^3 = 34^3 - 33^3
3375 = 15^3, palindromic in base 14 (1331 in base 14), 15th cube
3389 – 88th Sophie Germain prime
3400 to 3499
3403 – triangular number
3407 – super-prime
3413 – 89th Sophie Germain prime, sum of the first 5 n^n: 3413 = 1^1 + 2^2 + 3^3 + 4^4 + 5^5
3422 – pronic number, 553rd sphenic number, melting point of tungsten in degrees Celsius
3435 – a perfect digit-to-digit invariant, equal to the sum of its digits to their own powers (3^3 + 4^4 + 3^3 + 5^5 = 3435)
3439 – magic constant of n×n normal magic square and n-queens problem for n = 19.
3445 – centered square number
3447 – sum of first 42 primes
3449 – 90th Sophie Germain prime
3456 – 3-smooth number (2^7 × 3^3)
3457 – Proth prime
3463 – happy number
3467 – safe prime
3469 – super-prime, Cuban prime of the form (x^3 − y^3)/(x − y) with x = y + 2, completes the tenth prime quadruplet set
3473 – centered heptagonal number
3481 = 59^2, centered octagonal number
3486 – triangular number
3491 – 91st Sophie Germain prime
3500 to 3599
3504 – nonagonal number
3510 – decagonal number
3511 – largest known Wieferich prime
3512 – number of primes below 2^15.
3517 – super-prime, sum of nine consecutive primes (367 + 373 + 379 + 383 + 389 + 397 + 401 + 409 + 419)
3539 – 92nd Sophie Germain prime
3540 – pronic number
3559 – super-prime
3569 – highly cototient number
3570 – triangular number
3571 – 500th prime, Cuban prime of the form (x^3 − y^3)/(x − y) with x = y + 1, 17th Lucas number, 4th balanced prime of order 4.
3591 – member of the Mian–Chowla sequence
3593 – 93rd Sophie Germain prime, super-prime
3600 to 3699
3600 = 60^2, number of seconds in an hour, called šār or šāru in the sexagesimal system of Ancient Mesopotamia (cf. Saros), 1201-gonal number
3601 – star number
3610 – 19th pentagonal pyramidal number
3613 – centered square number
3617 – sum of eleven consecutive primes (293 + 307 + 311 + 313 + 317 + 331 + 337 + 347 + 349 + 353 + 359)
3623 – 94th Sophie Germain prime, safe prime
3637 – balanced prime, super-prime
3638 – sum of first 43 primes, 599th sphenic number
3643 – happy number, sum of seven consecutive primes (499 + 503 + 509 + 521 + 523 + 541 + 547)
3654 – tetrahedral number
3655 – triangular number, 601st sphenic number
3660 – pronic number
3684 – 13th Keith number
3697 – centered heptagonal number
3700 to 3799
3721 = 61^2, centered octagonal number
3729 – nonagonal number
3733 – balanced prime, super-prime
3741 – triangular number, 618th sphenic number
3751 – decagonal number
3761 – 95th Sophie Germain prime, super-prime
3779 – 96th Sophie Germain prime, safe prime
3780 – largely composite number
3782 – pronic number, 623rd sphenic number
3785 – centered square number
3797 – member of the Mian–Chowla sequence, both a left- and right- truncatable prime
3800 to 3899
3803 – 97th Sophie Germain prime, safe prime, the largest prime factor of 123,456,789
3821 – 98th Sophie Germain prime
3828 – triangular number
3831 – sum of first 44 primes
3840 = 16^3 - 16^2, double factorial of 10
3844 = 62^2
3851 – 99th Sophie Germain prime
3856 – number of 17-bead binary necklaces with beads of 2 colors where the colors may be swapped but turning over is not allowed
3863 – 100th Sophie Germain prime
3865 – greater of third pair of Smith brothers
3888 – longest number when expressed in Roman numerals I, V, X, L, C, D, and M (MMMDCCCLXXXVIII), 3-smooth number (2^4 × 3^5)
3889 – Cuban prime of the form (x^3 − y^3)/(x − y) with x = y + 2
3894 – octahedral number
3900 to 3999
3901 – star number
3906 – pronic number
3911 – 101st Sophie Germain prime, super-prime
3914 – number of 18-bead necklaces (turning over is allowed) where complements are equivalent
3916 – triangular number
3925 – centered cube number
3926 – 12th open meandric number, 654th sphenic number
3928 – centered heptagonal number
3937 – product of distinct Mersenne primes, repeated sum of divisors is prime, denominator of conversion factor from meter to US survey foot
3940 – there are 3940 distinct ways to arrange the 12 flat pentacubes (or 3-D pentominoes) into a 3x4x5 box (not counting rotations and reflections)
3943 – super-prime
3947 – safe prime
3960 – largely composite number
3961 – nonagonal number, centered square number
3969 = 63^2, centered octagonal number
3989 – highly cototient number
3998 – member of the Mian–Chowla sequence
3999 – largest number properly expressible using Roman numerals I, V, X, L, C, D, and M (MMMCMXCIX), ignoring vinculum
Prime numbers
There are 120 prime numbers between 3000 and 4000:
3001, 3011, 3019, 3023, 3037, 3041, 3049, 3061, 3067, 3079, 3083, 3089, 3109, 3119, 3121, 3137, 3163, 3167, 3169, 3181, 3187, 3191, 3203, 3209, 3217, 3221, 3229, 3251, 3253, 3257, 3259, 3271, 3299, 3301, 3307, 3313, 3319, 3323, 3329, 3331, 3343, 3347, 3359, 3361, 3371, 3373, 3389, 3391, 3407, 3413, 3433, 3449, 3457, 3461, 3463, 3467, 3469, 3491, 3499, 3511, 3517, 3527, 3529, 3533, 3539, 3541, 3547, 3557, 3559, 3571, 3581, 3583, 3593, 3607, 3613, 3617, 3623, 3631, 3637, 3643, 3659, 3671, 3673, 3677, 3691, 3697, 3701, 3709, 3719, 3727, 3733, 3739, 3761, 3767, 3769, 3779, 3793, 3797, 3803, 3821, 3823, 3833, 3847, 3851, 3853, 3863, 3877, 3881, 3889, 3907, 3911, 3917, 3919, 3923, 3929, 3931, 3943, 3947, 3967, 3989
References
Integers | 3000 (number) | [
"Mathematics"
] | 2,498 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
562,999 | https://en.wikipedia.org/wiki/4000%20%28number%29 | 4000 (four thousand) is the natural number following 3999 and preceding 4001. It is a decagonal number.
Selected numbers in the range 4001–4999
4001 to 4099
4005 – triangular number
4007 – safe prime
4010 – magic constant of n × n normal magic square and n-queens problem for n = 20
4013 – balanced prime
4019 – Sophie Germain prime
4021 – prime of the form 2p-1
4027 – super-prime
4028 – sum of the first 45 primes
4030 – third weird number
4031 – sum of the cubes of the first six primes
4032 – pronic number
4033 – sixth super-Poulet number; strong pseudoprime in base 2
4057 – prime of the form 2p-1
4060 – tetrahedral number
4073 – Sophie Germain prime
4079 – safe prime
4091 – super-prime
4095 – triangular number and odd abundant number; number of divisors in the sum of the fifth and largest known unitary perfect number, largest Ramanujan–Nagell number of the form 2^b − 1
4096 = 64^2 = 16^3 = 8^4 = 4^6 = 2^12, smallest number with exactly 13 factors, a superperfect number
4100 to 4199
4104 = 2^3 + 16^3 = 9^3 + 15^3
4127 – safe prime
4133 – super-prime
4139 – safe prime
4140 – Bell number
4141 – centered square number
4147 – smallest cyclic number in duodecimal, written 2497 in base-12 notation: 2 × 4147 = 4972 in base 12, 3 × 4147 = 7249 in base 12, and 4 × 4147 = 9724 in base 12, each a cyclic permutation of 2497
4153 – super-prime
4160 – pronic number
4166 – centered heptagonal number
4167 = 7! − 6! − 5! − 4! − 3! − 2! − 1!, number of planar partitions of 14
4169 – a number of points of norm <= 10 in cubic lattice
4177 – prime of the form 2p-1
4181 – Fibonacci number, Markov number
4186 – triangular number
4187 – factor of R13, the record number of wickets taken in first-class cricket by Wilfred Rhodes
4199 – highly cototient number, product of three consecutive primes
4200 to 4299
4200 – nonagonal number, pentagonal pyramidal number, largely composite number
4210 – 11th semi-meandric number
4211 – Sophie Germain prime
4213 – Riordan number
4217 – super-prime, happy number
4219 – cuban prime of the form (x^3 − y^3)/(x − y) with x = y + 1, centered hexagonal number
4225 = 652, centered octagonal number
4227 – sum of the first 46 primes
4240 – Leyland number
4257 – decagonal number
4259 – safe prime
4261 – prime of the form 2p-1
4271 – Sophie Germain prime
4273 – super-prime, number of non-isomorphic set-systems of weight 11
4278 – triangular number
4279 – little Schroeder number
4283 – safe prime
4289 – highly cototient number
4290 – pronic number
4300 to 4399
4320 – largely composite number
4324 – 23rd square pyramidal number
4325 – centered square number
4339 – super-prime, twin prime
4349 – Sophie Germain prime
4356 = 66^2, sum of the cubes of the first eleven integers
4357 – prime of the form 2p-1
4359 – perfect totient number
4369 – seventh super-Poulet number
4371 – triangular number
4373 – Sophie Germain prime
4374 – The largest number such that both it and the next number (4375) are 7-smooth
4375 – perfect totient number (the smallest not divisible by 3)
4391 – Sophie Germain prime
4397 – Year of Comet Hale–Bopp's return, super-prime
4400 to 4499
4400 – the number of missing persons in the sci-fi show The 4400
4409 – Sophie Germain prime, highly cototient number, balanced prime, 600th prime number
4410 – member of the Padovan sequence
4411 – centered heptagonal number
4421 – super-prime, alternating factorial
4422 – pronic number
4425 = 1^5 + 2^5 + 3^5 + 4^5 + 5^5
4438 – sum of the first 47 primes
4444 - repdigit
4446 – nonagonal number
4447 – cuban prime of the form (x^3 − y^3)/(x − y) with x = y + 1
4457 – balanced prime
4463 – super-prime
4465 – triangular number
4481 – Sophie Germain prime
4489 = 67^2, centered octagonal number
4495 – tetrahedral number
4500 to 4599
4503 – largest number not the sum of four or fewer squares of composites
4505 – fifth Zeisel number
4513 – centered square number
4516 – centered pentagonal number
4517 – super-prime, happy number
4522 – decagonal number
4547 – safe prime
4549 – super-prime
4556 – pronic number
4560 – triangular number
4567 – super-prime
4579 – octahedral number
4597 – balanced prime
4600 to 4699
4604 – sum of the only two known Wieferich primes, 1093 and 3511
4607 – Woodall number
4608 – 3-smooth number (2^9 × 3^2)
4619 – highly cototient number
4620 – largely composite number
4621 – prime of the form 2p-1
4624 = 68^2, 17^3 – 17^2
4641 – magic constant of n × n normal magic square and n-queens problem for n = 21
4655 – number of free decominoes
4656 – triangular number
4657 – balanced prime
4661 – sum of the first 48 primes
4663 – super-prime, centered heptagonal number
4679 – safe prime
4680 – largely composite number
4681 – eighth super-Poulet number
4688 – 2-automorphic number
4689 – sum of divisors and number of divisors are both triangular numbers
4691 – balanced prime
4692 – pronic number
4699 – nonagonal number
4700 to 4799
4703 – safe prime
4705 = 48^2 + 49^2 = 17^2 + 18^2 + … + 26^2, centered square number
4727 – sum of the squares of the first twelve primes
4731 – centered pentagonal number
4733 – Sophie Germain prime
4753 – triangular number
4759 – super-prime
4761 = 69^2, centered octagonal number
4769 = number of square (0,1)-matrices without zero rows and with exactly 5 entries equal to 1
4787 – safe prime, super-prime
4788 – 14th Keith number
4793 – Sophie Germain prime
4795 – decagonal number
4799 – safe prime
4800 to 4899
4801 – super-prime, cuban prime of the form (x^3 − y^3)/(x − y) with x = y + 2, smallest prime with a composite sum of digits in base 7
4830 – pronic number
4840 - square yards in an acre
4851 – triangular number, pentagonal pyramidal number
4862 – Catalan number
4871 – Sophie Germain prime
4877 – super-prime
4879 – 11th Kaprekar number
4888 – sum of the first 49 primes
4900 to 4999
4900 = 70^2, the only square pyramidal number other than 1 that is also a perfect square (the cannonball problem)
4901 – centered square number
4913 = 17^3
4919 – Sophie Germain prime, safe prime
4922 – centered heptagonal number
4933 – super-prime
4941 – centered cube number
4943 – Sophie Germain prime, super-prime
4950 – triangular number, 12th Kaprekar number
4951 – centered pentagonal number
4957 – sum of three and five consecutive primes (1637 + 1657 + 1663, 977 + 983 + 991 + 997 + 1009)
4959 – nonagonal number
4960 – tetrahedral number; greater of fourth pair of Smith brothers
4970 – pronic number
4973 – the 666th prime
4991 – Lucas–Carmichael number
4993 – balanced prime
4999 – prime of the form
Prime numbers
There are 119 prime numbers between 4000 and 5000:
4001, 4003, 4007, 4013, 4019, 4021, 4027, 4049, 4051, 4057, 4073, 4079, 4091, 4093, 4099, 4111, 4127, 4129, 4133, 4139, 4153, 4157, 4159, 4177, 4201, 4211, 4217, 4219, 4229, 4231, 4241, 4243, 4253, 4259, 4261, 4271, 4273, 4283, 4289, 4297, 4327, 4337, 4339, 4349, 4357, 4363, 4373, 4391, 4397, 4409, 4421, 4423, 4441, 4447, 4451, 4457, 4463, 4481, 4483, 4493, 4507, 4513, 4517, 4519, 4523, 4547, 4549, 4561, 4567, 4583, 4591, 4597, 4603, 4621, 4637, 4639, 4643, 4649, 4651, 4657, 4663, 4673, 4679, 4691, 4703, 4721, 4723, 4729, 4733, 4751, 4759, 4783, 4787, 4789, 4793, 4799, 4801, 4813, 4817, 4831, 4861, 4871, 4877, 4889, 4903, 4909, 4919, 4931, 4933, 4937, 4943, 4951, 4957, 4967, 4969, 4973, 4987, 4993, 4999
References
Integers | 4000 (number) | [
"Mathematics"
] | 2,162 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
563,000 | https://en.wikipedia.org/wiki/5000%20%28number%29 | 5000 (five thousand) is the natural number following 4999 and preceding 5001. Five thousand is, at the same time, the largest isogrammic numeral, and the smallest number that contains every one of the five vowels (a, e, i, o, u) in the English language.
Selected numbers in the range 5001–5999
5001 to 5099
5003 – Sophie Germain prime
5020 – amicable number with 5564
5021 – super-prime, twin prime with 5023
5023 – twin prime with 5021
5039 – factorial prime, Sophie Germain prime
5040 = 7!, superior highly composite number
5041 = 71^2, centered octagonal number
5050 – triangular number, Kaprekar number, sum of first 100 integers
5051 – Sophie Germain prime
5059 – super-prime
5076 – decagonal number
5077 – prime of the form 2p-1
5081 – Sophie Germain prime
5087 – safe prime
5099 – safe prime
5100 to 5199
5101 – prime of the form 2p-1
5107 – super-prime, balanced prime
5113 – balanced prime, prime of the form 2p-1
5117 – sum of the first 50 primes
5151 – triangular number
5167 – Leonardo prime, cuban prime of the form (x^3 − y^3)/(x − y) with x = y + 1
5171 – Sophie Germain prime
5184 = 72^2
5186 – φ(5186) = 2592
5187 – φ(5187) = 2592
5188 – φ(5188) = 2592, centered heptagonal number
5189 – super-prime
5200 to 5299
5209 - largest minimal prime in base 6
5226 – nonagonal number
5231 – Sophie Germain prime
5233 – prime of the form 2p-1
5244 = 22^2 + 23^2 + … + 29^2 = 20^2 + 21^2 + … + 28^2
5249 – highly cototient number
5253 – triangular number
5279 – Sophie Germain prime, twin prime with 5281, 700th prime number
5280 is the number of feet in a mile. It is divisible by three, yielding 1760 yards per mile, and by 16.5, yielding 320 rods per mile. Also, 5280 is connected with both Klein's J-invariant and the Heegner numbers; specifically, e^(π√67) ≈ 5280^3 + 744.
5281 – super-prime, twin prime with 5279
5282 - used in various paintings by Thomas Kinkade
5292 – Kaprekar number
5300 to 5399
5303 – Sophie Germain prime, balanced prime
5329 = 73^2, centered octagonal number
5333 – Sophie Germain prime
5335 – magic constant of n × n normal magic square and n-queens problem for n = 22.
5340 – octahedral number
5350 - sum of the first 51 primes
5356 – triangular number
5365 – decagonal number
5381 – super-prime
5387 – safe prime, balanced prime
5392 – Leyland number
5393 – balanced prime
5399 – Sophie Germain prime, safe prime
5400 to 5499
5402 – number of non-equivalent ways of expressing 1,000,000 as the sum of two prime numbers
5405 – member of a Ruth–Aaron pair with 5406 (either definition)
5406 – member of a Ruth–Aaron pair with 5405 (either definition)
5413 – prime of the form 2p-1
5419 – Cuban prime of the form (x^3 − y^3)/(x − y) with x = y + 1
5437 – prime of the form 2p-1
5441 – Sophie Germain prime, super-prime
5456 – tetrahedral number
5459 – highly cototient number
5460 – triangular number
5461 – super-Poulet number, centered heptagonal number
5476 = 74^2
5483 – safe prime
5500 to 5599
5500 – nonagonal number
5501 – Sophie Germain prime, twin prime with 5503
5503 – super-prime, twin prime with 5501, cousin prime with 5507
5507 – safe prime, cousin prime with 5503
5508 = 18^3 – 18^2
5525 – square pyramidal number
5527 – happy prime
5536 – tetranacci number
5555 – repdigit
5557 – super-prime
5563 – balanced prime
5564 – amicable number with 5020
5565 – triangular number
5566 – pentagonal pyramidal number
5569 – happy prime
5571 – perfect totient number
5581 – prime of the form 2p-1
5589 - sum of the first 52 primes
5600 to 5699
5623 – super-prime
5625 = 75^2, centered octagonal number
5631 – number of compositions of 15 whose run-lengths are either weakly increasing or weakly decreasing
5639 – Sophie Germain prime, safe prime
5651 – super-prime
5659 – happy prime, completes the eleventh prime quadruplet set
5662 – decagonal number
5671 – triangular number
5700 to 5799
5701 – super-prime, prime of the form 2p-1
5711 – Sophie Germain prime
5719 – Zeisel number, Lucas–Carmichael number
5741 – Sophie Germain prime, Pell prime, Markov prime, centered heptagonal number
5743 = number of signed trees with 9 nodes
5749 – super-prime
5768 – tribonacci number
5776 = 76^2
5777 – smallest counterexample to the conjecture that all odd numbers are of the form p + 2a^2
5778 – triangular number
5781 – nonagonal number
5798 – Motzkin number
5800 to 5899
5801 – super-prime
5807 – safe prime, balanced prime
5830 - sum of the first 53 primes
5832 = 18^3
5842 – member of the Padovan sequence
5849 – Sophie Germain prime
5869 – super-prime
5879 – safe prime, highly cototient number
5886 – triangular number
5900 to 5999
5903 – Sophie Germain prime
5913 – sum of the first seven factorials
5927 – safe prime
5929 = 77^2, centered octagonal number
5939 – safe prime
5967 – decagonal number
5971 – first composite Wilson number
5984 – tetrahedral number
5995 – triangular number
Prime numbers
There are 114 prime numbers between 5000 and 6000:
5003, 5009, 5011, 5021, 5023, 5039, 5051, 5059, 5077, 5081, 5087, 5099, 5101, 5107, 5113, 5119, 5147, 5153, 5167, 5171, 5179, 5189, 5197, 5209, 5227, 5231, 5233, 5237, 5261, 5273, 5279, 5281, 5297, 5303, 5309, 5323, 5333, 5347, 5351, 5381, 5387, 5393, 5399, 5407, 5413, 5417, 5419, 5431, 5437, 5441, 5443, 5449, 5471, 5477, 5479, 5483, 5501, 5503, 5507, 5519, 5521, 5527, 5531, 5557, 5563, 5569, 5573, 5581, 5591, 5623, 5639, 5641, 5647, 5651, 5653, 5657, 5659, 5669, 5683, 5689, 5693, 5701, 5711, 5717, 5737, 5741, 5743, 5749, 5779, 5783, 5791, 5801, 5807, 5813, 5821, 5827, 5839, 5843, 5849, 5851, 5857, 5861, 5867, 5869, 5879, 5881, 5897, 5903, 5923, 5927, 5939, 5953, 5981, 5987
References
Integers | 5000 (number) | [
"Mathematics"
] | 1,727 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
563,001 | https://en.wikipedia.org/wiki/6000%20%28number%29 | 6000 (six thousand) is the natural number following 5999 and preceding 6001.
Selected numbers in the range 6001–6999
6001 to 6099
6025 – Stage name of rhythm guitarist of the Dead Kennedys from June 1978 to March 1979. Full name is Carlos Cadona.
6028 – centered heptagonal number
6037 – super-prime, prime of the form 2p-1
6042 – 6042 Cheshirecat is a Mars-crossing asteroid.
6047 – safe prime
6053 – Sophie Germain prime
6069 – nonagonal number
6073 – balanced prime
6079 – The serial number Winston Smith is referred to as in the George Orwell novel Nineteen Eighty-Four
6084 = 78^2, sum of the cubes of the first twelve integers
6081 - sum of the first 54 primes
6089 – highly cototient number
6095 – magic constant of n × n normal magic square and n-Queens Problem for n = 23.
6100 to 6199
6101 – Sophie Germain prime
6105 – triangular number
6113 – Sophie Germain prime, super-prime
6121 – prime of the form 2p-1
6131 – Sophie Germain prime, twin prime with 6133
6133 – 800th prime number, twin prime with 6131
6143 – Thabit number
6144 – 3-smooth number (2^11 × 3)
6173 – Sophie Germain prime
6174 – Kaprekar's constant (a worked example of the Kaprekar routine follows this list)
6181 – octahedral number
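As a worked illustration of Kaprekar's constant mentioned above, take an arbitrary four-digit starting value with at least two distinct digits (3524 is used here only as an example), sort its digits into descending and ascending order, and subtract; the routine reaches 6174 after a few steps and then repeats it forever:
\begin{align*}
5432 - 2345 &= 3087 \\
8730 - 0378 &= 8352 \\
8532 - 2358 &= 6174 \\
7641 - 1467 &= 6174
\end{align*}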
6200 to 6299
6200 – harmonic divisor number
6201 – square pyramidal number
6216 – triangular number
6217 – super-prime, prime of the form 2p-1
6229 – super-prime
6232 – amicable number with 6368
6236 – most widely accepted figure for the number of verses in the Qur'an
6241 = 79^2, centered octagonal number
6250 – Leyland number
6263 – Sophie Germain prime, balanced prime
6269 – Sophie Germain prime
6280 – decagonal number
6300 to 6399
6311 – super-prime
6317 – balanced prime
6322 – centered heptagonal number
6323 – Sophie Germain prime, balanced prime, super-prime
6328 – triangular number
6329 – Sophie Germain prime
6337 - star prime
6338 - sum of the first 55 primes
6346 – number of verses in the Qur'an according to the sect founded by Rashad Khalifa.
6348 – pentagonal pyramidal number
6361 – prime of the form 2p-1, twin prime
6364 – nonagonal number
6367 – balanced prime
6368 – amicable number with 6232
6373 – balanced prime, sum of three and seven consecutive primes (2113 + 2129 + 2131 and 883 + 887 + 907 + 911 + 919 + 929 + 937)
6397 – sum of three consecutive primes (2129 + 2131 + 2137)
6399 – smallest integer that cannot be expressed as a sum of fewer than 279 eighth powers
6400 to 6499
6400 = 80^2
6408 – sum of the squares of the first thirteen primes
6441 – triangular number
6449 – Sophie Germain prime
6466 – Markov number
6480 – smallest number with exactly 50 factors
6491 – Sophie Germain prime
6500 to 6599
6502 – model number of the MOS Technology 6502 which equipped early computers such as the Apple I and II, Commodore PET, Atari and others.
6509 – highly cototient number
6521 – Sophie Germain prime
6542 – number of primes below 2^16.
6545 – tetrahedral number
6551 – Sophie Germain prime
6555 – triangular number
6556 – member of a Ruth-Aaron pair with 6557 (first definition)
6557 – member of a Ruth-Aaron pair with 6556 (first definition)
6561 = 81^2 = 9^4 = 3^8, perfect totient number
6563 – Sophie Germain prime
6581 – Sophie Germain prime
6599 – safe prime
6600 to 6699
6601 - Carmichael number, decagonal number, sum of the first 56 primes
6623 – centered heptagonal number
6659 – safe prime
6666 – forty-fourth nonagonal number, and the 11th third-convolution of Fibonacci numbers. In Christian demonology it represents the number of demons in a legion of demons.
6670 – triangular number, centered nonagonal number, centered 19-gonal number,
6700 to 6799
6719 – safe prime, highly cototient number
6724 = 82^2
6733 - star prime
6728 – number of domino tilings of a 6×6 checkerboard
6761 – Sophie Germain prime
6765 – 20th Fibonacci number
6779 – safe prime
6786 – triangular number
6800 to 6899
6811 – member of a Ruth-Aaron pair with 6812 (first definition)
6812 – member of a Ruth-Aaron pair with 6811 (first definition)
6827 – safe prime
6841 - largest right-truncatable prime in base 7
6842 – number of parallelogram polyominoes with 12 cells
6859 = 19^3
6863 – balanced prime
6870 - sum of the first 57 primes
6879 – number of planar partitions of 15
6880 – vampire number
6889 = 83^2, centered octagonal number
6899 – Sophie Germain prime, safe prime
6900 to 6999
6903 – triangular number
6912 – 3-smooth number (2^8 × 3^3)
6924 – magic constant of n × n normal magic square and n-Queens Problem for n = 24.
6929 – highly cototient number
6930 – decagonal number, square pyramidal number
6931 – centered heptagonal number
6969 – 2015 comedic progressive rock song by the band Ninja Sex Party
6975 – nonagonal number
6977 – balanced prime
6983 – Sophie Germain prime, safe prime
6997 – 900th prime number
Prime numbers
There are 117 prime numbers between 6000 and 7000:
6007, 6011, 6029, 6037, 6043, 6047, 6053, 6067, 6073, 6079, 6089, 6091, 6101, 6113, 6121, 6131, 6133, 6143, 6151, 6163, 6173, 6197, 6199, 6203, 6211, 6217, 6221, 6229, 6247, 6257, 6263, 6269, 6271, 6277, 6287, 6299, 6301, 6311, 6317, 6323, 6329, 6337, 6343, 6353, 6359, 6361, 6367, 6373, 6379, 6389, 6397, 6421, 6427, 6449, 6451, 6469, 6473, 6481, 6491, 6521, 6529, 6547, 6551, 6553, 6563, 6569, 6571, 6577, 6581, 6599, 6607, 6619, 6637, 6653, 6659, 6661, 6673, 6679, 6689, 6691, 6701, 6703, 6709, 6719, 6733, 6737, 6761, 6763, 6779, 6781, 6791, 6793, 6803, 6823, 6827, 6829, 6833, 6841, 6857, 6863, 6869, 6871, 6883, 6899, 6907, 6911, 6917, 6947, 6949, 6959, 6961, 6967, 6971, 6977, 6983, 6991, 6997
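The count can be verified with a simple sieve of Eratosthenes; for example, a minimal Python sketch:

```python
def primes_between(lo, hi):
    """Return the primes strictly between lo and hi using a sieve of Eratosthenes."""
    sieve = bytearray([1]) * hi                 # sieve[n] == 1 means "n is still possibly prime"
    sieve[0:2] = b"\x00\x00"                    # 0 and 1 are not prime
    for p in range(2, int(hi ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, hi, p)))  # strike out multiples of p
    return [n for n in range(lo + 1, hi) if sieve[n]]

print(len(primes_between(6000, 7000)))          # prints 117
```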
See also
Year 6000
References
Integers | 6000 (number) | [
"Mathematics"
] | 1,685 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
563,056 | https://en.wikipedia.org/wiki/List%20of%20software%20patents | This is a list of software patents, which contains notable patents and patent applications involving computer programs (also known as a software patent). Software patents cover a wide range of topics and there is therefore important debate about whether such subject-matter should be excluded from patent protection. However, there is no official way of identifying software patents and different researchers have devised their own ways of doing so.
This article lists patents relating to software which have been the subject of litigation or have achieved notoriety in other ways. Notable patent applications are also listed and comparisons made between corresponding patents and patent applications in different countries. The patents and patent applications are categorised according to the subject matter of the patent or the particular field in which the patent had an effect that brought it into the public view.
Business methods
Data compression
Data compression in general
(Main article: Stac Electronics)
also granted as - now expired
Stac Electronics sued Microsoft for patent infringement when Microsoft introduced the DoubleSpace data compression scheme into MS-DOS. Stac was awarded $120 million by a jury in 1994 and Microsoft was ordered to recall versions of MS-DOS with the infringing technology.
Audio compression
(Main article: MP3)
One of several patents covering the MP3 format owned by the Fraunhofer Society which led to the development of the Ogg Vorbis format as an alternative to MP3.
(Main article: Alcatel-Lucent v. Microsoft)
Two patents owned by Alcatel-Lucent relating to MP3 technology under which they sued Microsoft for $1.5 billion. Microsoft thought they had already licensed the technology from Fraunhofer, and this case illustrates one of the basic principles of patents that a license does not necessarily permit the licensee to work the technology, but merely prevents the licensee from being sued by the licensor.
Image compression
(Main article: GIF)
Unisys's patent on LZW compression, a fundamental part of the widely used GIF graphics format.
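For illustration, a minimal Python sketch of basic LZW encoding conveys the kind of dictionary-based compression the patent covered; it omits GIF-specific details such as variable-width codes, the clear and end-of-information codes, and bit packing, and it is not the patented implementation itself:

```python
def lzw_compress(data: bytes) -> list[int]:
    """Basic LZW: emit one code per longest already-seen string, growing the dictionary as it goes."""
    table = {bytes([i]): i for i in range(256)}  # start with all single-byte strings
    next_code = 256
    w = b""
    codes = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                        # keep extending the current match
        else:
            codes.append(table[w])        # output the code for the longest match
            table[wc] = next_code         # add the new string to the dictionary
            next_code += 1
            w = bytes([byte])
    if w:
        codes.append(table[w])
    return codes

print(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT"))
```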
and its EP equivalent
(Main article: Forgent Networks)
Forgent Networks claimed this patent, granted in 1987, covered the JPEG image compression format. The broadest claims of the US patent were found to be invalid in 2005 following re-examination by the US Patent and Trademark Office.
This patent, owned by Lizardtech, Inc., was the subject of infringement proceedings against companies including Earth Resource Mapping, Inc. However, Lizardtech lost the trial on the grounds that an important part of their invention was the step of "maintaining updated sums of discrete wavelet transform coefficients from the discrete tile image to form a seamless discrete wavelet transform of the image". Claim 21 of the patent lacked this feature and was therefore obvious. The remaining claims contained this feature, but were not infringed by ERM. Internet buzz suggested the patent covered the JPEG 2000 image compression format but the additional feature of the valid claims appears not to be a JPEG 2000 requirement.
Video compression
Data encryption
Gaming systems
(Main article: Menashe v. William Hill)
A patent for a gaming system that has particular importance regarding Internet usage. A server running the game was located outside the UK but could be used within the UK. The Court of Appeal of England and Wales judged that the patent was being infringed by virtue of the sale of CDs in the UK containing software intended to put the invention into effect in the UK.
Image processing
also granted as - (Main article: Photographic mosaic)
Robert Silver's patent on his photographic mosaicing technique. The UK part of the European patent is currently undergoing revocation proceedings, the results of which may be important in comparing the practice of the UK Patent Office with that of the European Patent Office.
(Main article: Shadow volume)
A patent covering the technique commonly known as Carmack's Reverse
Internet tools
Fair division
- (Main article: Adjusted winner procedure)
An algorithm to divide n divisible goods between two parties as fairly as possible.
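A rough Python sketch of the two-party procedure is given below; it assumes each party spreads 100 points over the goods and that every valuation is positive, and it is an illustrative simplification rather than the formal specification:

```python
def adjusted_winner(a_vals, b_vals):
    """Return A's fractional share of each good; B receives the remainder.

    a_vals, b_vals: each party's point valuations of the same goods
    (conventionally each list sums to 100, all entries positive).
    """
    n = len(a_vals)
    # Step 1: give each good to the party that values it more (ties to A here).
    a_share = [1.0 if a_vals[i] >= b_vals[i] else 0.0 for i in range(n)]
    total_a = sum(a_vals[i] for i in range(n) if a_share[i] == 1.0)
    total_b = sum(b_vals[i] for i in range(n) if a_share[i] == 0.0)

    # Step 2: the richer party hands goods to the poorer one, cheapest first
    # (lowest ratio of the richer party's value to the poorer party's value),
    # splitting at most one good, until both point totals are equal.
    a_is_rich = total_a >= total_b
    rich_vals, poor_vals = (a_vals, b_vals) if a_is_rich else (b_vals, a_vals)
    rich_total, poor_total = max(total_a, total_b), min(total_a, total_b)
    rich_goods = [i for i in range(n) if (a_share[i] == 1.0) == a_is_rich]

    for i in sorted(rich_goods, key=lambda i: rich_vals[i] / poor_vals[i]):
        if rich_total - poor_total <= 1e-9:
            break
        # fraction of good i to transfer so the two totals meet (capped at the whole good)
        f = min(1.0, (rich_total - poor_total) / (rich_vals[i] + poor_vals[i]))
        rich_total -= f * rich_vals[i]
        poor_total += f * poor_vals[i]
        a_share[i] += -f if a_is_rich else f
    return a_share

# Example: A values the goods (60, 30, 10); B values them (20, 30, 50).
print(adjusted_winner([60, 30, 10], [20, 30, 50]))
```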
Search engines
(Main article: Yahoo! Search Marketing)
A patent relating to pay-per-click Internet search engine advertising. Originally filed by Goto.com, Inc. (renamed Overture Services, Inc.), Google and FindWhat were both sued for infringement prior to Overture's acquisition by Yahoo!
Telecommunications
Washington Research Foundation asserted this patent in December 2006 against Matsushita (owners of the Panasonic brand), Nokia and Samsung. Granted in October 2006 (originating from a 1996 filing) it relates to dynamically varying the passband bandwidth of a tuner. If the claims had been upheld, CSR plc (previously known as Cambridge Silicon Radio), who supply the defendants with Bluetooth chips, could have lost market share to Broadcom who already had a license under the patent.
One of three patents granted in respect of Karmarkar's algorithm, which relates to linear programming problems. Claim 1 of this patent suggests the algorithm should be applied to the allocation of telecommunication transmission facilities among subscribers.
User interfaces
and related to
Immersion Corporation sued Sony under these US patents in 2002. They relate to force-feedback technology such as that used in PlayStation 2 DualShock controllers. Sony lost the case and Immersion were awarded $90.7 million, an injunction (stayed pending appeal), and a compulsory license. The claims of the related European patent application require the device to be attached to a body part and were, in any event, refused by the examining division of the European Patent Office for lacking an inventive step.
The patent relates to a progress bar. Filed in 1989, it was highlighted in 2005 by Richard Stallman in New Scientist and The Guardian as an example of a software patent granted by the European Patent Office, that would impede software development and would be dangerous. The claims as granted describe a process of breaking down a task to be performed by a computer into a number of equal task units and updating a display each time a unit is completed and therefore does not cover progress bars which operate in different ways.
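For illustration only, a generic text progress bar following the general idea of equal task units with a display update after each completed unit (and not a reproduction of the claimed method) could look like:

```python
import sys
import time

def run_with_progress(task_units, width=40):
    """Run equally sized work units, redrawing a text progress bar after each one completes."""
    total = len(task_units)
    for done, unit in enumerate(task_units, start=1):
        unit()                                        # perform one unit of the task
        filled = width * done // total
        bar = "#" * filled + "-" * (width - filled)
        sys.stdout.write(f"\r[{bar}] {done}/{total}")
        sys.stdout.flush()
    sys.stdout.write("\n")

# Forty dummy work units, each sleeping briefly.
run_with_progress([lambda: time.sleep(0.01)] * 40)
```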
Miscellaneous
Notable due to proprietor hyperbole
Owned at various times by Encyclopædia Britannica, Inc. and Compton's NewMedia, Inc. this patent was granted in August 1993. Just a few months later, in November 1993, Compton's announced that "Everything that is now multimedia and computer-based utilizes this invention" and tried to use the patent to ensure that everyone licensed their software. Although a cursory review of the granted claims showed this statement to be mere hyperbole, there was nonetheless an outcry from the industry and the patent was revoked following re-examination.
and
Patents owned by Scientigo and claimed by them to cover the markup language XML, a notion rejected by patent attorneys and other commentators including Microsoft.
Notable due to misconception
Emoticon keyboard button patent application.
Early in 2006, rumours circulated on the Internet that Cingular Wireless had patented the emoticon and, in particular, had patented the concept of using emoticons on mobile phones. This resulted in a great deal of anger directed at the US Patent Office that such patents should never have been granted. Ultimately, it was pointed out that it was only a published patent application, not a granted patent, and that the claims of the patent application actually related to a mobile phone with a dedicated button for inserting emoticons.
This patent application is currently being examined by the US patent office. All of the originally filed claims were rejected on 22 February 2007 as being known or obvious, although the rejection was not final. Examination of the corresponding European patent application also suggested that the claims lacked an inventive step, and the application lapsed in 2010.
This design patent was granted to Google on 1 September 2009 for the simple and clean appearance of their homepage from five years earlier. Referred to in the media as a patent, it received criticism for not being as original as Google's web search technology and was held up as evidence that the US patent system was broken. The New York Post said that Google now had the right to sue anyone who used a similarly no-frills website. However, a "design patent" is not the same as a "patent" (sometimes referred to as a "utility patent") since it provides only limited protection for ornamental appearance. Design patents are typically hard to infringe and even Google's own homepage at the time the design patent was granted was almost certainly different enough from the design patent that it did not infringe it.
References
Software patent law
Software
Software patents | List of software patents | [
"Technology"
] | 1,730 | [
"Computing-related lists"
] |
563,071 | https://en.wikipedia.org/wiki/Granulocyte%20colony-stimulating%20factor | Granulocyte colony-stimulating factor (G-CSF or GCSF), also known as colony-stimulating factor 3 (CSF 3), is a glycoprotein that stimulates the bone marrow to produce granulocytes and stem cells and release them into the bloodstream.
Functionally, it is a cytokine and hormone, a type of colony-stimulating factor, and is produced by a number of different tissues. The pharmaceutical analogs of naturally occurring G-CSF are called filgrastim and lenograstim.
G-CSF also stimulates the survival, proliferation, differentiation, and function of neutrophil precursors and mature neutrophils.
Biological function
G-CSF is produced by endothelium, macrophages, and a number of other immune cells. The natural human glycoprotein exists in two forms: 174- and 177-amino-acid-long proteins of molecular weight 19,600 grams per mole. The more-abundant and more-active 174-amino acid form has been used in the development of pharmaceutical products by recombinant DNA (rDNA) technology.
White blood cells
The G-CSF receptor is present on precursor cells in the bone marrow and, in response to stimulation by G-CSF, initiates proliferation and differentiation into mature granulocytes. G-CSF stimulates the survival, proliferation, differentiation, and function of neutrophil precursors and mature neutrophils. G-CSF regulates them through the Janus kinase (JAK)/signal transducer and activator of transcription (STAT), Ras/mitogen-activated protein kinase (MAPK), and phosphatidylinositol 3-kinase (PI3K)/protein kinase B (Akt) signal transduction pathways.
Hematopoietic system
G-CSF is also a potent inducer of hematopoietic stem cell (HSC) mobilization from the bone marrow into the bloodstream, although it has been shown that it does not directly affect the hematopoietic progenitors that are mobilized.
Neurons
G-CSF can also act on neuronal cells as a neurotrophic factor. Indeed, its receptor is expressed by neurons in the brain and spinal cord. The action of G-CSF in the central nervous system is to induce neurogenesis, to increase neuroplasticity and to counteract apoptosis. These properties are currently under investigation for the development of treatments for neurological diseases such as cerebral ischemia.
Genetics
The gene for G-CSF is located on chromosome 17, locus q11.2-q12. Nagata et al. found that the GCSF gene has four introns, and that two different polypeptides are synthesized from the same gene by differential splicing of mRNA.
The two polypeptides differ by the presence or absence of three amino acids. Expression studies indicate that both have authentic GCSF activity.
It is thought that stability of the G-CSF mRNA is regulated by an RNA element called the G-CSF factor stem-loop destabilising element.
Medical use
Chemotherapy-induced neutropenia
Chemotherapy can cause myelosuppression and unacceptably low levels of white blood cells (leukopenia), making patients susceptible to infections and sepsis. G-CSF stimulates the production of granulocytes, a type of white blood cell. In oncology and hematology, a recombinant form of G-CSF is used with certain cancer patients to accelerate recovery and reduce mortality from neutropenia after chemotherapy, allowing higher-intensity treatment regimens. It is administered to oncology patients via subcutaneous or intravenous routes. A QSP model of neutrophil production and a PK/PD model of a cytotoxic chemotherapeutic drug (Zalypsis) have been developed to optimize the use of G-CSF in chemotherapy regimens with the aim to prevent mild-neutropenia.
G-CSF was first trialled as a therapy for neutropenia induced by chemotherapy in 1988. The treatment was well tolerated and a dose-dependent rise in circulating neutrophils was noted.
A study in mice has shown that G-CSF may decrease bone mineral density.
G-CSF administration has been shown to attenuate the telomere loss associated with chemotherapy.
Use in drug-induced neutropenia
Neutropenia can be a severe side effect of clozapine, an antipsychotic medication in the treatment of schizophrenia. G-CSF can restore neutrophil count. Following a return to baseline after stopping the drug, it may sometimes be safely rechallenged with the added use of G-CSF.
Before blood donation
G-CSF is also used to increase the number of hematopoietic stem cells in the blood of the donor before collection by leukapheresis for use in hematopoietic stem cell transplantation. For this purpose, G-CSF appears to be safe in pregnancy during implantation as well as during the second and third trimesters. Breastfeeding should be withheld for three days after CSF administration to allow for clearance of it from the milk. People who have been administered colony-stimulating factors do not have a higher risk of leukemia than people who have not.
Stem cell transplants
G-CSF may also be given to the receiver in hematopoietic stem cell transplantation, to compensate for conditioning regimens.
Side effect
The skin disease Sweet's syndrome is a known side effect of using this drug.
History
Two research teams independently identified mouse granulocyte-colony stimulating factor (G-CSF) in the 1960s: Ray Bradley at the University of Melbourne and Donald Metcalf at the Walter and Eliza Hall Institute, from Australia, and Yasuo Ichikawa, Dov Pluznik and Leo Sachs at the Weizmann Institute of Science, Israel.
In 1983, Donald Metcalf's research team, led by Nicos Nicola, isolated the murine cytokine from medium conditioned with lung tissue obtained from endotoxin-treated mice.
In 1985, Karl Welte, Erich Platzer, Janice Gabrilove, Roland Mertelsmann and Malcolm Moore at the Memorial Sloan Kettering Cancer Center (MSK) purified human G-CSF produced by bladder cancer cell line 5637 from conditioned medium.
In 1986, Karl Welte's team at MSK patented the method of producing and using human G-CSF under the name "human hematopoietic pluripotent colony stimulating factor" or "human pluripotent colony stimulating factor" (P-CSF). Also in 1986, two independent research groups working with pharmaceutical companies cloned the G-CSF gene that made possible large-scale production and its clinical use: Shigekazu Nagata's team in collaboration with Chugai Pharmaceutical Co. from Japan, and Lawrence Souza's team at Amgen in collaboration with Karl Welte's research team members from Germany and the USA.
Pharmaceutical variants
The recombinant human G-CSF (rhG-CSF) synthesised in an E. coli expression system is called filgrastim. The structure of filgrastim differs slightly from the structure of the natural glycoprotein. Most published studies have used filgrastim.
The Food and Drug Administration (FDA) first approved filgrastim on February 20, 1991; it was marketed by Amgen under the brand name Neupogen. It was initially approved to reduce the risk of infection in patients with non-myeloid malignancies who are taking myelosuppressive anti-cancer drugs associated with febrile neutropenia.
Several bio-generic versions are now also available in markets such as Europe and Australia. Filgrastim (Neupogen) and PEG-filgrastim (Neulasta), a pegylated form of filgrastim, are two commercially available forms of rhG-CSF. The pegylated form has a much longer half-life, reducing the necessity of daily injections.
The FDA approved the first biosimilar of Neulasta in June 2018. It is made by Mylan and sold as Fulphila.
Another form of rhG-CSF called lenograstim is synthesised in Chinese hamster ovary cells (CHO cells). As this is a mammalian cell expression system, lenograstim is indistinguishable from the 174-amino acid natural human G-CSF. No clinical or therapeutic consequences of the differences between filgrastim and lenograstim have yet been identified, but there are no formal comparative studies.
In 2015, filgrastim was included on the WHO Model List of Essential Medicines, a list containing the medications considered to be most effective and safe to meet the most important needs in a health system.
Research
G-CSF when given early after exposure to radiation may improve white blood cell counts, and is stockpiled for use in radiation incidents.
Mesoblast planned in 2004 to use G-CSF to treat heart degeneration by injecting it into the blood-stream, plus SDF (stromal cell-derived factor) directly to the heart.
G-CSF has been shown to reduce inflammation, reduce amyloid beta burden, and reverse cognitive impairment in a mouse model of Alzheimer's disease.
Due to its neuroprotective properties, G-CSF is currently under investigation for cerebral ischemia in a clinical phase IIb trial, and several clinical pilot studies have been published for other neurological diseases such as amyotrophic lateral sclerosis. A combination of human G-CSF and cord blood cells has been shown to reduce impairment from chronic traumatic brain injury in rats.
See also
PEGylation
References
Further reading
External links
Growth factors
Peptide hormones
Amgen
Cytokines
Drugs acting on the blood and blood forming organs | Granulocyte colony-stimulating factor | [
"Chemistry"
] | 2,140 | [
"Cytokines",
"Growth factors",
"Signal transduction"
] |
563,074 | https://en.wikipedia.org/wiki/Sundaland | Sundaland (also called Sundaica or the Sundaic region) is a biogeographical region of Southeast Asia corresponding to a larger landmass that was exposed throughout the last 2.6 million years during periods when sea levels were lower. It includes Bali, Borneo, Java, and Sumatra in Indonesia, and their surrounding small islands, as well as the Malay Peninsula on the Asian mainland.
Extent
The area of Sundaland encompasses the Sunda Shelf, a tectonically stable extension of Southeast Asia's continental shelf that was exposed during glacial periods of the last 2 million years.
The extent of the Sunda Shelf is approximately equal to the 120-meter isobath. In addition to the Malay Peninsula and the islands of Borneo, Java, and Sumatra, it includes the Java Sea, the Gulf of Thailand, and portions of the South China Sea. In total, the area of Sundaland is approximately 1,800,000 km². The area of exposed land in Sundaland has fluctuated considerably during the past 2 million years; the modern land area is approximately half of its maximum extent.
The western and southern borders of Sundaland are clearly marked by the deeper waters of the Sunda Trench – some of the deepest in the world – and the Indian Ocean. The eastern boundary of Sundaland is the Wallace Line, identified by Alfred Russel Wallace as the eastern boundary of the range of Asia's land mammal fauna, and thus the boundary of the Indomalayan and Australasian realms. The islands east of the Wallace line are known as Wallacea, a separate biogeographical region that is considered part of Australasia. The Wallace Line corresponds to a deep-water channel that has never been crossed by any land bridges. The northern border of Sundaland is more difficult to define in bathymetric terms; a phytogeographic transition at approximately 9°N is considered to be the northern boundary.
Greater portions of Sundaland were most recently exposed during the last glacial period from approximately 110,000 to 12,000 years ago. When the sea level was decreased by 30–40 meters or more, land bridges connected the islands of Borneo, Java, and Sumatra to the Malay Peninsula and mainland Asia. Because the sea level was 30 meters or more lower throughout much of the last 800,000 years, the current status of Borneo, Java, and Sumatra as islands has been a relatively rare occurrence throughout the Pleistocene. In contrast, the sea level was higher during the late Pliocene, and the exposed area of Sundaland was smaller than what is observed at present. Sundaland was partially submerged starting around 18,000 years ago and continuing until about 5000 BC. During the Last Glacial Maximum the sea level fell by approximately 120 meters, and the entire Sunda Shelf was exposed.
Modern climate
All of Sundaland is within the tropics; the equator runs through central Sumatra and Borneo. Like elsewhere in the tropics, rainfall, rather than temperature, is the major determinant of regional variation. Most of Sundaland is classified as perhumid, or everwet, with over 2,000 millimeters of rain annually; rainfall exceeds evapotranspiration throughout the year and there are no predictable dry seasons like elsewhere in Southeast Asia.
The warm and shallow seas of the Sunda Shelf (averaging 28 °C or more) are part of the Indo-Pacific Warm Pool/Western Pacific Warm Pool and an important driver of the Hadley circulation and the El Niño-Southern Oscillation (ENSO), particularly in January when it is a major heat source to the atmosphere. ENSO also has a major influence on the climate of Sundaland; strong positive ENSO events result in droughts throughout Sundaland and tropical Asia.
Modern ecology
The high rainfall supports closed canopy evergreen forests throughout the islands of Sundaland, transitioning to deciduous forest and savanna woodland with increasing latitude. The remaining primary (unlogged) lowland forest is known for giant dipterocarp trees and orangutans; after logging, forest structure and community composition change to be dominated by shade intolerant trees and shrubs. Dipterocarps are notable for mast fruiting events, where tree fruiting is synchronized at unpredictable intervals resulting in predator satiation. Higher elevation forests are shorter and dominated by trees in the oak family. Botanists often include Sundaland, the adjacent Philippines, Wallacea and New Guinea in a single floristic province of Malesia, based on similarities in their flora, which is predominantly of Asian origin.
During the last glacial period, sea levels were lower and all of Sundaland was an extension of the Asian continent. As a result, the modern islands of Sundaland are home to many Asian mammals including elephants, monkeys, apes, tigers, tapirs, and rhinoceros. The flooding of Sundaland separated species that had once shared the same environment. One example is the river threadfin (Polydactylus macrophthalmus, Bleeker 1858), which once thrived in a river system now called "North Sunda River" or "Molengraaff river". The fish is now found in the Kapuas River on the island of Borneo, and in the Musi and Batanghari rivers in Sumatra. Selective pressure (in some cases resulting in extinction) has operated differently on each of the islands of Sundaland, and as a consequence, a different assemblage of mammals is found on each island. However, the current species assemblage on each island is not simply a subset of a universal Sundaland or Asian fauna, as the species that inhabited Sundaland before flooding did not all have ranges encompassing the entire Sunda Shelf. Island area and number of terrestrial mammal species are related, with the largest islands of Sundaland (Borneo and Sumatra) having the highest diversity.
Ecoregions
Tropical and subtropical moist broadleaf forests
Eastern Java–Bali rain forests (Java, Bali)
Eastern Java–Bali montane rain forests (Java, Bali).
Western Java montane rain forests (Java)
Western Java rain forests (Java)
Borneo lowland rain forests (Borneo)
Borneo montane rain forests (Borneo)
Borneo peat swamp forests (Borneo)
Mentawai Islands rain forests (Mentawai Islands)
Peninsular Malaysian montane rain forests (Malay Peninsula)
Peninsular Malaysian peat swamp forests (Malay Peninsula)
Peninsular Malaysian rain forests (Anambas Islands, Malay Peninsula)
Southwest Borneo freshwater swamp forests (Borneo)
Sumatran freshwater swamp forests (Sumatra)
Sumatran lowland rain forests (Sumatra, Nias, Bangka Island)
Sumatran montane rain forests (Sumatra)
Sumatran peat swamp forests (Sumatra)
Sundaland heath forests (Indonesia)
Tropical and subtropical coniferous forests
Sumatran tropical pine forests (Sumatra)
Montane grasslands and shrublands
Kinabalu montane alpine meadows (Borneo)
Mangroves
Sunda Shelf mangroves (Borneo, Sumatra, Riau Islands)
History
Early research
The name "Sunda" goes back to antiquity, appearing in Ptolemy's Geography, written around 150 AD. In an 1852 publication, English navigator George Windsor Earl advanced the idea of a "Great Asiatic Bank", based in part on common features of mammals found in Java, Borneo and Sumatra.
Explorers and scientists began measuring and mapping the seas of Southeast Asia in the 1870s, primarily using depth sounding. In 1921 Gustaaf Molengraaff, a Dutch geologist, postulated that the nearly uniform sea depths of the shelf indicated an ancient peneplain that was the result of repeated flooding events as ice caps melted, with the peneplain becoming more perfect with each successive flooding event. Molengraaff also identified ancient, now submerged, drainage systems that drained the area during periods of lower sea levels.
The name "Sundaland" for the peninsular shelf was first proposed by Reinout Willem van Bemmelen in his Geography of Indonesia in 1949, based on his research during World War II. The ancient drainage systems described by Molengraaff were verified and mapped by Tjia in 1980 and described in greater detail by Emmel and Curray in 1982 complete with river deltas, floodplains and backswamps.
Data types
The climate and ecology of Sundaland throughout the Quaternary have been investigated by analyzing foraminiferal δ18O and pollen from cores drilled into the ocean bed, δ18O in speleothems from caves, and δ13C and δ15N in bat guano from caves, as well as species distribution models, phylogenetic analysis, and community structure and species richness analysis.
Climate
Perhumid climate has existed in Sundaland since the early Miocene; though there is evidence for several periods of drier conditions, a perhumid core persisted in Borneo. The presence of fossil coral reefs dating to the late Miocene and early Pliocene suggests that, as the Indian monsoon grew more intense, seasonality increased in some portions of Sundaland during these epochs. Palynological evidence from Sumatra suggests that temperatures were cooler during the late Pleistocene; mean annual temperatures at high elevation sites may have been as much as 5 °C cooler than present.
Most recent research agrees that Indo-Pacific sea surface temperatures were at most 2–3 °C lower during the Last Glacial Maximum. The snow line was much lower than at present (approximately 1,000 meters lower) and there is evidence that glaciers existed on Borneo and Sumatra around 10,000 years before present. However, debate continues on how precipitation regimes changed throughout the Quaternary. Some authors argue that rainfall decreased with the area of ocean available for evaporation as sea levels fell with ice sheet expansion. Others posit that changes in precipitation have been minimal and an increase in land area in the Sunda Shelf alone (due to lowered sea level) is not enough to decrease precipitation in the region.
One possible explanation for the lack of agreement on hydrologic change throughout the Quaternary is that there was significant heterogeneity in climate during the Last Glacial Maximum throughout Indonesia. Alternatively, the physical and chemical processes that underlie the method of inferring precipitation from δ18O records may have operated differently in the past. Some authors working primarily with pollen records have also noted the difficulties of using vegetation records to detect changes in precipitation regimes in such a humid environment, as water is not a limiting factor in community assemblage.
Ecology
Sundaland, and in particular Borneo, has been an evolutionary hotspot for biodiversity since the early Miocene due to repeated immigration and vicariance events. The modern islands of Borneo, Java, and Sumatra have served as refugia for the flora and fauna of Sundaland during multiple glacial periods in the last million years, and are serving the same role at present.
Savanna corridor theory
Dipterocarp trees characteristic of modern Southeast Asian tropical rainforest have been present in Sundaland since before the Last Glacial Maximum. There is also evidence for savanna vegetation, particularly in now submerged areas of Sundaland, throughout the last glacial period. However, researchers disagree on the spatial extent of savanna that was present in Sundaland. There are two opposing theories about the vegetation of Sundaland, particularly during the last glacial period: (1) that there was a continuous savanna corridor connecting modern mainland Asia to the islands of Java and Borneo, and (2) that the vegetation of Sundaland was instead dominated by tropical rainforest, with only small, discontinuous patches of savanna vegetation.
The presence of a savanna corridor—even if fragmented—would have allowed for savanna-dwelling fauna (as well as early humans) to disperse between Sundaland and the Indochinese biogeographic region; emergence of a savanna corridor during glacial periods and subsequent disappearance during interglacial periods would have facilitated speciation through both vicariance (allopatric speciation) and geodispersal. Morley and Flenley (1987) and Heaney (1991) were the first to postulate the existence of a continuous corridor of savanna vegetation through the center of Sundaland (from the modern Malay Peninsula to Borneo) during the last glacial period, based on palynological evidence. Using the modern distribution of primates, termites, rodents, and other species, other researchers infer that the extent of tropical forest contracted—replaced by savanna and open forest —during the last glacial period. Vegetation models using data from climate simulations show varying degrees of forest contraction; Bird et al. (2005) noted that although no single model predicts a continuous savanna corridor through Sundaland, many do predict open vegetation between modern Java and southern Borneo. Combined with other evidence, they suggest that a 50–150 kilometer wide savanna corridor ran down the Malay Peninsula, through Sumatra and Java, and across to Borneo. Additionally, Wurster et al. (2010) analyzed stable carbon isotope composition in bat guano deposits in Sundaland and found strong evidence for the expansion of savanna in Sundaland. Similarly, stable isotope composition of fossil mammal teeth supports the existence of the savanna corridor.
In contrast, other authors argue that Sundaland was primarily covered by tropical rainforest. Using species distribution models, Raes et al. (2014) suggest that Dipterocarp rainforest persisted throughout the last glacial period. Others have observed that the submerged rivers of the Sunda Shelf have obvious, incised meanders, which would have been maintained by trees on river banks. Pollen records from sediment cores around Sundaland are contradictory; for example, cores from highland sites suggest that forest cover persisted throughout the last glacial period, but other cores from the region show pollen from savanna-woodland species increasing through glacial periods. And in contrast to previous findings, Wurster et al. (2017) again used stable carbon isotope analysis of bat guano, but found that at some sites rainforest cover was maintained through much of the last glacial period. Soil type, rather than long-term existence of a savanna corridor, has also been posited as an explanation for species distribution differences within Sundaland; Slik et al. (2011) suggest that the sandy soils of the now submerged seabed are a more likely dispersal barrier.
Paleofauna
Before Sundaland emerged during the late Pliocene and early Pleistocene (~2.4 million years ago), there were no mammals on Java. As sea level lowered, species such as the dwarf elephantoid Sinomastodon bumiajuensis colonized Sundaland from mainland Asia. Later fauna included tigers, Sumatran rhinoceros, and Indian elephant, which were found throughout Sundaland; smaller animals were also able to disperse across the region.
Human migrations
According to the most widely accepted theory, the ancestors of the modern-day Austronesian populations of Maritime Southeast Asia and adjacent regions are believed to have migrated southward, from the East Asia mainland to Taiwan, and then to the rest of Maritime Southeast Asia. An alternative theory points to the now-submerged Sundaland as the possible cradle of Austronesian languages: thus the "Out of Sundaland" theory. However, this is an extreme minority view among professional archaeologists, linguists, and geneticists. The Out of Taiwan model (though not necessarily the Express Train Out of Taiwan model) is accepted by the vast majority of professional researchers.
A study from Leeds University, published in Molecular Biology and Evolution and examining mitochondrial DNA lineages, suggested that the shared ancestry between Taiwanese and Southeast Asian populations resulted from earlier migrations. Population dispersals seem to have occurred at the same time as sea levels rose, which may have resulted in migrations from the Philippine Islands to as far north as Taiwan within the last 10,000 years.
The population migrations were most likely to have been driven by climate change — the effects of the drowning of an ancient continent. Rising sea levels in three massive pulses may have caused flooding and the submerging of the Sunda continent, creating the Java and South China Seas and the thousands of islands that make up Indonesia and the Philippines today. The changing sea levels would have caused these humans to move away from their coastal homes and culture, and farther inland throughout southeast Asia. This forced migration would have caused these humans to adapt to the new forest and mountainous environments, developing farms and domestication, and becoming the predecessors to future human populations in these regions.
Genetic similarities were found between populations throughout Asia and an increase in genetic diversity from northern to southern latitudes. Although the Chinese population is very large, it has less variation than the smaller number of individuals living in Southeast Asia, because the Chinese expansion occurred fairly recently, from the mid to late-Holocene.
Oppenheimer locates the origin of the Austronesians in Sundaland and its upper regions. From the standpoint of historical linguistics, the home of the Austronesian languages is the main island of Taiwan, also known by its unofficial Portuguese name of Formosa; on this island the deepest divisions in Austronesian are found, among the families of the native Formosan languages.
See also
Austronesian languages
Banda Arc
Biogeography
Father Tongue hypothesis
List of islands of Indonesia
Oceania
Australasia
Australia (continent)
Oceanic trench
Plate tectonics
Sunda Arc
Sundadonty, named after Sunda
Sunda Islands
Greater Sunda Islands
Lesser Sunda Islands
Sunda Shelf
Sunda Trench
References
External links
Review of Oppenheimer's Eden in the East, about Sundaland
Historical continents
Biogeography
Continental shelves
Historical geology
Indomalayan realm
Malesia
Prehistoric Indonesia | Sundaland | [
"Biology"
] | 3,745 | [
"Biogeography"
] |
563,093 | https://en.wikipedia.org/wiki/Paracrine%20signaling | In cellular biology, paracrine signaling is a form of cell signaling, a type of cellular communication in which a cell produces a signal to induce changes in nearby cells, altering the behaviour of those cells. Signaling molecules known as paracrine factors diffuse over a relatively short distance (local action), as opposed to cell signaling by endocrine factors, hormones which travel considerably longer distances via the circulatory system; juxtacrine interactions; and autocrine signaling. Cells that produce paracrine factors secrete them into the immediate extracellular environment. Factors then travel to nearby cells in which the gradient of factor received determines the outcome. However, the exact distance that paracrine factors can travel is not certain.
Although paracrine signaling elicits a diverse array of responses in the induced cells, most paracrine factors utilize a relatively streamlined set of receptors and pathways. In fact, different organs in the body – even between different species – are known to utilize similar sets of paracrine factors in differential development. The highly conserved receptors and pathways can be organized into four major families based on similar structures: fibroblast growth factor (FGF) family, Hedgehog family, Wnt family, and TGF-β superfamily. Binding of a paracrine factor to its respective receptor initiates signal transduction cascades, eliciting different responses.
Paracrine factors induce competent responders
In order for paracrine factors to successfully induce a response in the receiving cell, that cell must have the appropriate receptors available on the cell membrane to receive the signals, also known as being competent. Additionally, the responding cell must also have the ability to be mechanistically induced.
Fibroblast growth factor (FGF) family
Although the FGF family of paracrine factors has a broad range of functions, major findings support the idea that they primarily stimulate proliferation and differentiation. To fulfill many diverse functions, FGFs can be alternatively spliced or even have different initiation codons to create hundreds of different FGF isoforms.
One of the most important functions of the FGF receptors (FGFR) is in limb development. This signaling involves nine different alternatively spliced isoforms of the receptor. Fgf8 and Fgf10 are two of the critical players in limb development. In forelimb initiation and limb growth in mice, axial (lengthwise) cues from the intermediate mesoderm produce Tbx5, which subsequently signals to the same mesoderm to produce Fgf10. Fgf10 then signals to the ectoderm to begin production of Fgf8, which also stimulates the production of Fgf10. Deletion of Fgf10 results in limbless mice.
Additionally, paracrine signaling of Fgf is essential in the developing eye of chicks. The fgf8 mRNA becomes localized in what differentiates into the neural retina of the optic cup. These cells are in contact with the outer ectoderm cells, which will eventually become the lens.
Phenotype and survival of mice after knockout of some FGFR genes:
Receptor tyrosine kinase (RTK) pathway
Paracrine signaling through fibroblast growth factors and its respective receptors utilizes the receptor tyrosine pathway. This signaling pathway has been highly studied, using Drosophila eyes and human cancers.
Binding of FGF to FGFR phosphorylates the idle kinase and activates the RTK pathway. This pathway begins at the cell membrane surface, where a ligand binds to its specific receptor. Ligands that bind to RTKs include fibroblast growth factors, epidermal growth factors, platelet-derived growth factors, and stem cell factor. This dimerizes the transmembrane receptor to another RTK receptor, which causes the autophosphorylation and subsequent conformational change of the homodimerized receptor. This conformational change activates the dormant kinase of each RTK on the tyrosine residue. Due to the fact that the receptor spans across the membrane from the extracellular environment, through the lipid bilayer, and into the cytoplasm, the binding of the receptor to the ligand also causes the trans phosphorylation of the cytoplasmic domain of the receptor.
An adaptor protein (such as SOS) recognizes the phosphorylated tyrosine on the receptor. This protein functions as a bridge which connects the RTK to an intermediate protein (such as GNRP), starting the intracellular signaling cascade. In turn, the intermediate protein stimulates GDP-bound Ras to the activated GTP-bound Ras. GAP eventually returns Ras to its inactive state. Activation of Ras has the potential to initiate three signaling pathways downstream of Ras: Ras→Raf→MAP kinase pathway, PI3 kinase pathway, and Ral pathway. Each pathway leads to the activation of transcription factors which enter the nucleus to alter gene expression.
RTK receptor and cancer
Paracrine signaling of growth factors between nearby cells has been shown to exacerbate carcinogenesis. In fact, mutant forms of a single RTK may play a causal role in very different types of cancer. The Kit proto-oncogene encodes a tyrosine kinase receptor whose ligand is a paracrine protein called stem cell factor (SCF), which is important in hematopoiesis (formation of cells in blood). The Kit receptor and related tyrosine kinase receptors actually are inhibitory and effectively suppress receptor firing. Mutant forms of the Kit receptor, which fire constitutively in a ligand-independent fashion, are found in a diverse array of cancerous malignancies.
RTK pathway and cancer
Research on thyroid cancer has elucidated the theory that paracrine signaling may aid in creating tumor microenvironments. Chemokine transcription is upregulated when Ras is in the GTP-bound state. The chemokines are then released from the cell, free to bind to another nearby cell. Paracrine signaling between neighboring cells creates this positive feedback loop. Thus, the constitutive transcription of upregulated proteins forms ideal environments for tumors to arise. Effectively, multiple bindings of ligands to the RTK receptors overstimulate the Ras-Raf-MAPK pathway, which enhances the mitogenic and invasive capacity of cells.
JAK-STAT pathway
In addition to the RTK pathway, fibroblast growth factors can also activate the JAK-STAT signaling pathway. Instead of carrying covalently associated tyrosine kinase domains, Jak-STAT receptors form noncovalent complexes with tyrosine kinases of the Jak (Janus kinase) class. These receptors include those for erythropoietin (important for erythropoiesis), thrombopoietin (important for platelet formation), and interferon (important for mediating immune cell function).
After dimerization of the cytokine receptors following ligand binding, the JAKs transphosphorylate each other. The resulting phosphotyrosines attract STAT proteins. The STAT proteins dimerize and enter the nucleus to act as transcription factors to alter gene expression. In particular, the STATs transcribe genes that aid in cell proliferation and survival – such as myc.
Phenotype and survival of mice after knockout of some JAK or STAT genes:
Aberrant JAK-STAT pathway and bone mutations
The JAK-STAT signaling pathway is instrumental in the development of limbs, specifically in its ability to regulate bone growth through paracrine signaling of cytokines. However, mutations in this pathway have been implicated in severe forms of dwarfism: thanatophoric dysplasia (lethal) and achondroplastic dwarfism (viable). This is due to a mutation in a fibroblast growth factor receptor gene (FGFR3), causing premature and constitutive activation of the Stat1 transcription factor. Cell division in the chondrocytes of the rib and limb bone growth plates is prematurely terminated, resulting in lethal dwarfism. Thus, the inability of the rib cage to expand prevents the newborn's breathing.
JAK-STAT pathway and cancer
Research on paracrine signaling through the JAK-STAT pathway revealed its potential in activating invasive behavior of ovarian epithelial cells. This epithelial to mesenchymal transition is highly evident in metastasis. Paracrine signaling through the JAK-STAT pathway is necessary in the transition from stationary epithelial cells to mobile mesenchymal cells, which are capable of invading surrounding tissue. Only the JAK-STAT pathway has been found to induce migratory cells.
Hedgehog family
The Hedgehog protein family is involved in the induction of cell types and the creation of tissue boundaries and patterning, and is found in all bilateral organisms. Hedgehog proteins were first discovered and studied in Drosophila. Hedgehog proteins produce key signals for the establishment of the limb and body plan of fruit flies as well as homeostasis of adult tissues, involved in late embryogenesis and metamorphosis. At least three "Drosophila" hedgehog homologs have been found in vertebrates: sonic hedgehog, desert hedgehog, and Indian hedgehog. Sonic hedgehog (SHH) has various roles in vertebrate development, mediating signaling and regulating the organization of the central nervous system, limb, and somite polarity. Desert hedgehog (DHH) is expressed in the Sertoli cells involved in spermatogenesis. Indian hedgehog (IHH) is expressed in the gut and cartilage, important in postnatal bone growth.
Hedgehog signaling pathway
Members of the Hedgehog protein family act by binding to a transmembrane "Patched" receptor, which is bound to the "Smoothened" protein, by which the Hedgehog signal can be transduced. In the absence of Hedgehog, the Patched receptor inhibits Smoothened action. Inhibition of Smoothened causes the Cubitus interruptus (Ci), Fused, and Cos protein complex attached to microtubules to remain intact. In this conformation, the Ci protein is cleaved so that a portion of the protein is allowed to enter the nucleus and act as a transcriptional repressor. In the presence of Hedgehog, Patched no longer inhibits Smoothened. Then active Smoothened protein is able to inhibit PKA and Slimb, so that the Ci protein is not cleaved. This intact Ci protein can enter the nucleus, associate with the CBP protein and act as a transcriptional activator, inducing the expression of Hedgehog-response genes.
Hedgehog signaling pathway and cancer
The Hedgehog signaling pathway is critical in proper tissue patterning and orientation during normal development of most animals. Hedgehog proteins induce cell proliferation in certain cells and differentiation in others. Aberrant activation of the Hedgehog pathway has been implicated in several types of cancers, basal cell carcinoma in particular. This uncontrolled activation of the Hedgehog proteins can be caused by mutations in the signaling pathway, which would be ligand independent, or by a mutation that causes overexpression of the Hedgehog protein, which would be ligand dependent. In addition, therapy-induced Hedgehog pathway activation has been shown to be necessary for progression of prostate cancer tumors after androgen deprivation therapy. This connection between the Hedgehog signaling pathway and human cancers may provide a possible target for therapeutic intervention in such cancers. The Hedgehog signaling pathway is also involved in the normal regulation of stem-cell populations and is required for normal growth and regeneration of damaged organs. This may provide another possible route for tumorigenesis via the Hedgehog pathway.
Wnt family
The Wnt protein family includes a large number of cysteine-rich glycoproteins. The Wnt proteins activate signal transduction cascades via three different pathways: the canonical Wnt pathway, the noncanonical planar cell polarity (PCP) pathway, and the noncanonical Wnt/Ca2+ pathway. Wnt proteins appear to control a wide range of developmental processes and have been seen as necessary for control of spindle orientation, cell polarity, cadherin-mediated adhesion, and early development of embryos in many different organisms. Current research has indicated that deregulation of Wnt signaling plays a role in tumor formation, because at a cellular level, Wnt proteins often regulate cell proliferation, cell morphology, cell motility, and cell fate.
The canonical Wnt signaling pathway
In the canonical pathway, a Wnt protein binds to its transmembrane receptor of the Frizzled family of proteins. The binding of Wnt to a Frizzled protein activates the Dishevelled protein. In its active state, the Dishevelled protein inhibits the activity of the glycogen synthase kinase 3 (GSK3) enzyme. Normally, active GSK3 prevents the dissociation of β-catenin from the APC protein, which results in β-catenin degradation. Thus, inhibition of GSK3 allows β-catenin to dissociate from APC, accumulate, and travel to the nucleus. In the nucleus, β-catenin associates with the Lef/Tcf transcription factor, which is already bound to DNA as a repressor, inhibiting the transcription of the genes it binds. Binding of β-catenin converts Lef/Tcf into a transcriptional activator, activating the transcription of Wnt-responsive genes.
The noncanonical Wnt signaling pathways
The noncanonical Wnt pathways provide a signal transduction pathway for Wnt that does not involve β-catenin. In the noncanonical pathways, Wnt affects the actin and microtubular cytoskeleton as well as gene transcription.
The noncanonical planar cell polarity (PCP) pathway
The noncanonical PCP pathway regulates cell morphology, division, and movement. Once again, Wnt binds to and activates Frizzled so that Frizzled activates a Dishevelled protein that is tethered to the plasma membrane through a Prickle protein and the transmembrane Stbm protein. The active Dishevelled activates the RhoA GTPase through the Dishevelled-associated activator of morphogenesis 1 (Daam1) and the Rac protein. Active RhoA is able to induce cytoskeleton changes by activating Rho-associated kinase (ROCK) and to affect gene transcription directly. Active Rac can directly induce cytoskeleton changes and affect gene transcription through activation of JNK.
The noncanonical Wnt/Ca2+ pathway
The noncanonical Wnt/Ca2+ pathway regulates intracellular calcium levels. Again, Wnt binds to and activates Frizzled. In this case, however, activated Frizzled causes a coupled G-protein to activate phospholipase C (PLC), which interacts with and splits PIP2 into DAG and IP3. IP3 can then bind to a receptor on the endoplasmic reticulum to release intracellular calcium stores and induce calcium-dependent gene expression.
Wnt signaling pathways and cancer
The Wnt signaling pathways are critical in cell-cell signaling during normal development and embryogenesis and required for maintenance of adult tissue, therefore it is not difficult to understand why disruption in Wnt signaling pathways can promote human degenerative disease and cancer.
The Wnt signaling pathways are complex, involving many different elements, and therefore have many targets for misregulation. Mutations that cause constitutive activation of the Wnt signaling pathway lead to tumor formation and cancer. Aberrant activation of the Wnt pathway can lead to increased cell proliferation. Current research is focused on the action of the Wnt signaling pathway in the regulation of stem cells' choice to proliferate and self-renew. This action of Wnt signaling in the possible control and maintenance of stem cells may provide a possible treatment for cancers exhibiting aberrant Wnt signaling.
TGF-β superfamily
"TGF" (Transforming Growth Factor) is a family of proteins that includes 33 members that encode dimeric, secreted polypeptides that regulate development. Many developmental processes are under its control including gastrulation, axis symmetry of the body, organ morphogenesis, and tissue homeostasis in adults. All TGF-β ligands bind to either Type I or Type II receptors, to create heterotetramic complexes.
TGF-β pathway
The TGF-β pathway regulates many cellular processes in developing embryo and adult organisms, including cell growth, differentiation, apoptosis, and homeostasis. There are five kinds of type II receptors and seven types of type I receptors in humans and other mammals. These receptors are known as "dual-specificity kinases" because their cytoplasmic kinase domain has weak tyrosine kinase activity but strong serine/threonine kinase activity. When a TGF-β superfamily ligand binds to the type II receptor, it recruits a type I receptor and activates it by phosphorylating the serine or threonine residues of its "GS" box. This forms an activation complex that can then phosphorylate SMAD proteins.
SMAD pathway
There are three classes of SMADs:
Receptor-regulated SMAD (R-SMAD)
Common-mediator SMAD (Co-SMAD)
Inhibitory SMAD (I-SMAD)
Examples of SMADs in each class: SMAD1, SMAD2, SMAD3, SMAD5, and SMAD8/9 are R-SMADs; SMAD4 is the only Co-SMAD; and SMAD6 and SMAD7 are I-SMADs.
The TGF-β superfamily activates members of the SMAD family, which function as transcription factors. Specifically, the type I receptor, activated by the type II receptor, phosphorylates R-SMADs that then bind to the co-SMAD, SMAD4. The R-SMAD/Co-SMAD complex associates with importin and enters the nucleus, where it acts as a transcription factor and either up-regulates or down-regulates the expression of a target gene.
Specific TGF-β ligands will result in the activation of either the SMAD2/3 or the SMAD1/5 R-SMADs. For instance, when activin, Nodal, or TGF-β ligand binds to the receptors, the phosphorylated receptor complex can activate SMAD2 and SMAD3 through phosphorylation. However, when a BMP ligand binds to the receptors, the phosphorylated receptor complex activates SMAD1 and SMAD5. Then, the Smad2/3 or the Smad1/5 complexes form a dimer complex with SMAD4 and become transcription factors. Though there are many R-SMADs involved in the pathway, there is only one co-SMAD, SMAD4.
Non-SMAD pathway
Non-Smad signaling proteins contribute to the responses of the TGF-β pathway in three ways. First, non-Smad signaling pathways phosphorylate the Smads. Second, Smads directly signal to other pathways by communicating directly with other signaling proteins, such as kinases. Finally, the TGF-β receptors directly phosphorylate non-Smad proteins.
Members of TGF-β superfamily
1. TGF-β family
This family includes TGF-β1, TGF-β2, TGF-β3, and TGF-β5. They are involved in positive and negative regulation of cell division, the formation of the extracellular matrix between cells, apoptosis, and embryogenesis. They bind to the TGF-β type II receptor (TGFBRII).
TGF-β1 stimulates the synthesis of collagen and fibronectin and inhibits the degradation of the extracellular matrix. Ultimately, it increases the production of extracellular matrix by epithelial cells.
TGF-β proteins regulate epithelia by controlling where and when they branch to form kidney, lung, and salivary gland ducts.
2. Bone morphogenetic protein (BMPs) family
Members of the BMP family were originally found to induce bone formation, as their name suggests. However, BMPs are very multifunctional and can also regulate apoptosis, cell migration, cell division, and differentiation. They also specify the anterior/posterior axis, induce growth, and regulate homeostasis.
The BMPs bind to the bone morphogenetic protein receptor type II (BMPR2). Some of the proteins of the BMP family are BMP4 and BMP7. BMP4 promotes bone formation, causes cell death, or signals the formation of epidermis, depending on the tissue it is acting on. BMP7 is crucial for kidney development, sperm synthesis, and neural tube polarization. Both BMP4 and BMP7 regulate mature ligand stability and processing, including degrading ligands in lysosomes. BMPs act by diffusing from the cells that create them.
Other members of TGF-β superfamily
Vg1 Family
Activin Family
Involved in embryogenesis and osteogenesis
Regulate insulin and pituitary, gonadal, and hypothalamic hormones
Nerve cell survival factors
3 Activins: Activin A, Activin B and Activin AB.
Glial-Derived Neurotrophic Factor (GDNF)
Needed for kidney and enteric neuron differentiation
Müllerian Inhibitory Factor
Involved in mammalian sex determination
Nodal
Binds to Activin A Type 2B receptor
Forms receptor complex with Activin A Type 1B receptor or with Activin A Type 1C receptor.
Growth and differentiation factors (GDFs)
Summary table of TGF-β signaling pathway
Examples
Growth factors and clotting factors are paracrine signaling agents. The local action of growth factor signaling plays an especially important role in the development of tissues. Also, retinoic acid, the active form of vitamin A, functions in a paracrine fashion to regulate gene expression during embryonic development in higher animals.
In insects, Allatostatin controls growth through paracrine action on the corpora allata.
In mature organisms, paracrine signaling is involved in responses to allergens, tissue repair, the formation of scar tissue, and blood clotting. Histamine is a paracrine that is released by immune cells in the bronchial tree. Histamine causes the smooth muscle cells of the bronchi to constrict, narrowing the airways.
See also
cAMP dependent pathway
Crosstalk (biology)
Lipid signaling
Local hormone – either a paracrine hormone, or a hormone acting in both a paracrine and an endocrine fashion
MAPK signaling pathway
Netpath – A curated resource of signal transduction pathways in humans
Paracrine regulator
References
External links
Signal transduction | Paracrine signaling | [
"Chemistry",
"Biology"
] | 4,736 | [
"Biochemistry",
"Neurochemistry",
"Signal transduction"
] |
563,105 | https://en.wikipedia.org/wiki/Approximation%20algorithm | In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to optimization problems (in particular NP-hard problems) with provable guarantees on the distance of the returned solution to the optimal one. Approximation algorithms naturally arise in the field of theoretical computer science as a consequence of the widely believed P ≠ NP conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in polynomial time. The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. In an overwhelming majority of the cases, the guarantee of such algorithms is a multiplicative one expressed as an approximation ratio or approximation factor i.e., the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution. However, there are also many approximation algorithms that provide an additive guarantee on the quality of the returned solution. A notable example of an approximation algorithm that provides both is the classic approximation algorithm of Lenstra, Shmoys and Tardos for scheduling on unrelated parallel machines.
The design and analysis of approximation algorithms crucially involves a mathematical proof certifying the quality of the returned solutions in the worst case. This distinguishes them from heuristics such as simulated annealing or genetic algorithms, which find reasonably good solutions on some inputs, but provide no clear indication at the outset on when they may succeed or fail.
There is widespread interest in theoretical computer science to better understand the limits to which we can approximate certain famous optimization problems. For example, one of the long-standing open questions in computer science is to determine whether there is an algorithm that outperforms the 2-approximation for the Steiner Forest problem by Agrawal et al. The desire to understand hard optimization problems from the perspective of approximability is motivated by the discovery of surprising mathematical connections and broadly applicable techniques to design algorithms for hard optimization problems. One well-known example of the former is the Goemans–Williamson algorithm for maximum cut, which solves a graph theoretic problem using high dimensional geometry.
Introduction
A simple example of an approximation algorithm is one for the minimum vertex cover problem, where the goal is to choose the smallest set of vertices such that every edge in the input graph contains at least one chosen vertex. One way to find a vertex cover is to repeat the following process: find an uncovered edge, add both its endpoints to the cover, and remove all edges incident to either vertex from the graph. As any vertex cover of the input graph must use a distinct vertex to cover each edge that was considered in the process (since it forms a matching), the vertex cover produced, therefore, is at most twice as large as the optimal one. In other words, this is a constant-factor approximation algorithm with an approximation factor of 2. Under the recent unique games conjecture, this factor is even the best possible one.
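A minimal Python sketch of the matching-based procedure just described (the function name and the example graph are illustrative, not taken from any particular library):

```python
def greedy_vertex_cover(edges):
    """2-approximation: repeatedly take an uncovered edge and add both of its endpoints."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge is still uncovered
            cover.add(u)
            cover.add(v)                        # the chosen edges form a matching
    return cover

# Example: on the path 0-1-2-3 the optimal cover {1, 2} has size 2;
# the algorithm returns a cover of size at most twice that.
print(greedy_vertex_cover([(0, 1), (1, 2), (2, 3)]))   # e.g. {0, 1, 2, 3}
```

Because the edges selected in each step form a matching, any vertex cover must contain at least one endpoint of each of them, which is exactly where the factor of 2 comes from.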
NP-hard problems vary greatly in their approximability; some, such as the knapsack problem, can be approximated within a multiplicative factor 1 + ε, for any fixed ε > 0, and therefore produce solutions arbitrarily close to the optimum (such a family of approximation algorithms is called a polynomial-time approximation scheme or PTAS). Others are impossible to approximate within any constant, or even polynomial, factor unless P = NP, as in the case of the maximum clique problem. Therefore, an important benefit of studying approximation algorithms is a fine-grained classification of the difficulty of various NP-hard problems beyond the one afforded by the theory of NP-completeness. In other words, although NP-complete problems may be equivalent (under polynomial-time reductions) to each other from the perspective of exact solutions, the corresponding optimization problems behave very differently from the perspective of approximate solutions.
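As an illustration of such a scheme, the following Python sketch implements the standard value-scaling dynamic program for the 0/1 knapsack problem (in fact an FPTAS); the function and variable names are illustrative, and the guarantee assumes eps > 0:

```python
def knapsack_fptas(values, weights, capacity, eps):
    """Classic FPTAS sketch: round item values down, then solve the scaled instance exactly."""
    items = [(v, w) for v, w in zip(values, weights) if w <= capacity]
    if not items:
        return 0.0
    n = len(items)
    v_max = max(v for v, _ in items)
    scale = eps * v_max / n                   # total rounding loss is at most eps * v_max <= eps * OPT
    scaled = [int(v / scale) for v, _ in items]

    max_scaled = sum(scaled)
    INF = float("inf")
    min_weight = [0.0] + [INF] * max_scaled   # lightest subset achieving each scaled value
    for i in range(n):
        for s in range(max_scaled, scaled[i] - 1, -1):
            candidate = min_weight[s - scaled[i]] + items[i][1]
            if candidate < min_weight[s]:
                min_weight[s] = candidate

    best = max(s for s in range(max_scaled + 1) if min_weight[s] <= capacity)
    return best * scale                       # value estimate, at least (1 - eps) * OPT

print(knapsack_fptas([60, 100, 120], [10, 20, 30], capacity=50, eps=0.1))  # exact optimum is 220
```

The running time grows polynomially in both n and 1/eps, which is what makes the scheme "fully" polynomial.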
Algorithm design techniques
By now there are several established techniques to design approximation algorithms. These include the following ones.
Greedy algorithm
Local search
Enumeration and dynamic programming (which is also often used for parameterized approximations)
Solving a convex programming relaxation to get a fractional solution. Then converting this fractional solution into a feasible solution by some appropriate rounding. The popular relaxations include the following.
Linear programming relaxations
Semidefinite programming relaxations
Primal-dual methods
Dual fitting
Embedding the problem in some metric and then solving the problem on the metric. This is also known as metric embedding.
Random sampling and the use of randomness in general in conjunction with the methods above.
A posteriori guarantees
While approximation algorithms always provide an a priori worst case guarantee (be it additive or multiplicative), in some cases they also provide an a posteriori guarantee that is often much better. This is often the case for algorithms that work by solving a convex relaxation of the optimization problem on the given input. For example, there is a different approximation algorithm for minimum vertex cover that solves a linear programming relaxation to find a vertex cover that is at most twice the value of the relaxation. Since the value of the relaxation is never larger than the size of the optimal vertex cover, this yields another 2-approximation algorithm. While this is similar to the a priori guarantee of the previous approximation algorithm, the guarantee of the latter can be much better (indeed when the value of the LP relaxation is far from the size of the optimal vertex cover).
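A hedged sketch of this LP-based approach for vertex cover, assuming SciPy's linprog is available; the threshold rounding at 1/2 gives the a priori factor-2 guarantee, while the LP value itself serves as the a posteriori lower bound:

```python
import numpy as np
from scipy.optimize import linprog

def lp_rounding_vertex_cover(num_vertices, edges):
    """Solve the LP relaxation of vertex cover, then round variables at threshold 1/2."""
    c = np.ones(num_vertices)                       # minimize the sum of x_v
    A_ub = np.zeros((len(edges), num_vertices))     # x_u + x_v >= 1  becomes  -x_u - x_v <= -1
    for i, (u, v) in enumerate(edges):
        A_ub[i, u] = -1.0
        A_ub[i, v] = -1.0
    b_ub = -np.ones(len(edges))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * num_vertices)
    cover = {v for v in range(num_vertices) if res.x[v] >= 0.5}
    return cover, res.fun                           # res.fun lower-bounds the optimal cover size

cover, lp_value = lp_rounding_vertex_cover(4, [(0, 1), (1, 2), (2, 3)])
print(f"cover of size {len(cover)}; LP lower bound on the optimum is {lp_value:.2f}")
```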
Hardness of approximation
Approximation algorithms as a research area is closely related to and informed by inapproximability theory where the non-existence of efficient algorithms with certain approximation ratios is proved (conditioned on widely believed hypotheses such as the P ≠ NP conjecture) by means of reductions. In the case of the metric traveling salesman problem, the best known inapproximability result rules out algorithms with an approximation ratio less than 123/122 ≈ 1.008196 unless P = NP (Karpinski, Lampis, and Schmied). Coupled with the knowledge of the existence of Christofides' 1.5-approximation algorithm, this tells us that the threshold of approximability for metric traveling salesman (if it exists) is somewhere between 123/122 and 1.5.
While inapproximability results have been proved since the 1970s, such results were obtained by ad hoc means and no systematic understanding was available at the time. It is only since the 1990 result of Feige, Goldwasser, Lovász, Safra and Szegedy on the inapproximability of Independent Set and the famous PCP theorem, that modern tools for proving inapproximability results were uncovered. The PCP theorem, for example, shows that Johnson's 1974 approximation algorithms for Max SAT, set cover, independent set and coloring all achieve the optimal approximation ratio, assuming P ≠ NP.
Practicality
Not all approximation algorithms are suitable for direct practical applications. Some involve solving non-trivial linear programming/semidefinite relaxations (which may themselves invoke the ellipsoid algorithm), complex data structures, or sophisticated algorithmic techniques, leading to difficult implementation issues or improved running time performance (over exact algorithms) only on impractically large inputs. Implementation and running time issues aside, the guarantees provided by approximation algorithms may themselves not be strong enough to justify their consideration in practice. Despite their inability to be used "out of the box" in practical applications, the ideas and insights behind the design of such algorithms can often be incorporated in other ways in practical algorithms. In this way, the study of even very expensive algorithms is not a completely theoretical pursuit as they can yield valuable insights.
In other cases, even if the initial results are of purely theoretical interest, over time, with an improved understanding, the algorithms may be refined to become more practical. One such example is the initial PTAS for Euclidean TSP by Sanjeev Arora (and independently by Joseph Mitchell), which had a prohibitive running time of n^(O(1/ε)) for a (1 + ε)-approximation. Yet, within a year these ideas were incorporated into a near-linear time algorithm for any constant ε > 0.
Structure of approximation algorithms
Given an optimization problem

Π = (I, S),

where Π is an approximation problem, I the set of inputs and S the set of solutions, we can define the cost function

c : S → R≥0

and, for each input x ∈ I, the set of feasible solutions S(x) ⊆ S. Finding the best solution s* for a maximization or a minimization problem then means

c(s*) = max { c(s) : s ∈ S(x) }  or  c(s*) = min { c(s) : s ∈ S(x) }, respectively.

Given a feasible solution s', with cost c(s'), we would want a guarantee of the quality of the solution, which is a performance to be guaranteed (approximation factor).

Specifically, writing OPT = c(s*), the algorithm has an approximation factor (or approximation ratio) of ρ(n) if, for every input x of size n, we have:

for a minimization problem: c(s') / OPT ≤ ρ(n), which in turn means the solution found by the algorithm divided by the optimal solution achieves a ratio of at most ρ(n);

for a maximization problem: OPT / c(s') ≤ ρ(n), which in turn means the optimal solution divided by the solution found by the algorithm achieves a ratio of at most ρ(n).
The approximation can be proven tight (tight approximation) by demonstrating that there exist instances where the algorithm performs at the approximation limit, indicating the tightness of the bound. In this case, it's enough to construct an input instance designed to force the algorithm into a worst-case scenario.
Performance guarantees
For some approximation algorithms it is possible to prove certain properties about the approximation of the optimum result. For example, a ρ-approximation algorithm A is defined to be an algorithm for which it has been proven that the value/cost, f(x), of the approximate solution A(x) to an instance x will not be more (or less, depending on the situation) than a factor ρ times the value, OPT, of an optimum solution.
The factor ρ is called the relative performance guarantee. An approximation algorithm has an absolute performance guarantee or bounded error c, if it has been proven for every instance x that

|f(x) − OPT| ≤ c.
Similarly, the performance guarantee, R(x,y), of a solution y to an instance x is defined as

R(x,y) = max( OPT / f(y), f(y) / OPT ),
where f(y) is the value/cost of the solution y for the instance x. Clearly, the performance guarantee is greater than or equal to 1 and equal to 1 if and only if y is an optimal solution. If an algorithm A guarantees to return solutions with a performance guarantee of at most r(n), then A is said to be an r(n)-approximation algorithm and has an approximation ratio of r(n). Likewise, a problem with an r(n)-approximation algorithm is said to be r(n)-approximable or have an approximation ratio of r(n).
For minimization problems, the two different guarantees provide the same result, while for maximization problems a relative performance guarantee of ρ is equivalent to a performance guarantee of r = 1/ρ. In the literature, both definitions are common, but it is clear which definition is used since, for maximization problems, ρ ≤ 1 while r ≥ 1.
The absolute performance guarantee P_A of some approximation algorithm A, where x refers to an instance of a problem, and where R_A(x) is the performance guarantee of A on x (i.e. ρ for problem instance x), is:

P_A = inf { r ≥ 1 : R_A(x) ≤ r for all instances x }.

That is to say that P_A is the largest bound on the approximation ratio, r, that one sees over all possible instances of the problem. Likewise, the asymptotic performance ratio R_A^∞ is:

R_A^∞ = inf { r ≥ 1 : there exists some n ∈ Z+ such that R_A(x) ≤ r for all instances x with |x| ≥ n }.
That is to say that it is the same as the absolute performance ratio, with a lower bound n on the size of problem instances. These two types of ratios are used because there exist algorithms where the difference between these two is significant.
Epsilon terms
In the literature, an approximation ratio for a maximization (minimization) problem of c − ϵ (min: c + ϵ) means that the algorithm has an approximation ratio of c ∓ ϵ for arbitrary ϵ > 0 but that the ratio has not been (or cannot be) shown for ϵ = 0. An example of this is the optimal inapproximability ratio (that is, the non-existence of an approximation algorithm achieving a better ratio) of 7/8 + ϵ for satisfiable MAX-3SAT instances due to Johan Håstad. As mentioned previously, when c = 1, the problem is said to have a polynomial-time approximation scheme.
An ϵ-term may appear when an approximation algorithm introduces a multiplicative error and a constant error while the minimum optimum of instances of size n goes to infinity as n does. In this case, the approximation ratio is c ∓ k / OPT = c ∓ o(1) for some constants c and k. Given arbitrary ϵ > 0, one can choose a large enough N such that the term k / OPT < ϵ for every n ≥ N. For every fixed ϵ, instances of size n < N can be solved by brute force, thereby showing an approximation ratio — existence of approximation algorithms with a guarantee — of c ∓ ϵ for every ϵ > 0.
See also
Domination analysis considers guarantees in terms of the rank of the computed solution.
PTAS - a type of approximation algorithm that takes the approximation ratio as a parameter
Parameterized approximation algorithm - a type of approximation algorithm that runs in FPT time
APX is the class of problems with some constant-factor approximation algorithm
Approximation-preserving reduction
Exact algorithm
Citations
References
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. . Chapter 35: Approximation Algorithms, pp. 1022–1056.
Dorit S. Hochbaum, ed. Approximation Algorithms for NP-Hard problems, PWS Publishing Company, 1997. . Chapter 9: Various Notions of Approximations: Good, Better, Best, and More
External links
Pierluigi Crescenzi, Viggo Kann, Magnús Halldórsson, Marek Karpinski and Gerhard Woeginger, A compendium of NP optimization problems.
Computational complexity theory | Approximation algorithm | [
"Mathematics"
] | 2,806 | [
"Mathematical relations",
"Approximations",
"Approximation algorithms"
] |
563,120 | https://en.wikipedia.org/wiki/Biological%20rhythm | Biological rhythms are repetitive biological processes. Some types of biological rhythms have been described as biological clocks. They can range in frequency from microseconds to less than one repetitive event per decade. Biological rhythms are studied by chronobiology. In the biochemical context biological rhythms are called biochemical oscillations.
The variations of the timing and duration of biological activity in living organisms occur for many essential biological processes. These occur (a) in animals (eating, sleeping, mating, hibernating, migration, cellular regeneration, etc.), (b) in plants (leaf movements, photosynthetic reactions, etc.), and (c) in microbial organisms such as fungi and protozoa. They have even been found in bacteria, especially among the cyanobacteria (aka blue-green algae, see bacterial circadian rhythms).
Circadian rhythm
The best studied rhythm in chronobiology is the circadian rhythm, a roughly 24-hour cycle shown by physiological processes in all these organisms. The term circadian comes from the Latin circa, meaning "around" and dies, "day", meaning "approximately a day." It is regulated by circadian clocks.
The circadian rhythm can further be broken down into routine cycles during the 24-hour day:
Diurnal, which describes organisms active during daytime
Nocturnal, which describes organisms active in the night
Crepuscular, which describes animals primarily active during the dawn and dusk hours (ex: white-tailed deer, some bats)
While circadian rhythms are defined as regulated by endogenous processes, other biological cycles may be regulated by exogenous signals. In some cases, multi-trophic systems may exhibit rhythms driven by the circadian clock of one of the members (which may also be influenced or reset by external factors). For example, the endogenous cycles of a plant may regulate the activity of an associated bacterium by controlling the availability of plant-produced photosynthate.
Other cycles
Many other important cycles are also studied, including:
Infradian rhythms, which are cycles longer than a day. Examples include circannual or annual cycles that govern migration or reproduction cycles in many plants and animals, or the human menstrual cycle.
Ultradian rhythms, which are cycles shorter than 24 hours, such as the 90-minute REM cycle, the 4-hour nasal cycle, or the 3-hour cycle of growth hormone production.
Tidal rhythms, commonly observed in marine life, which follow the roughly 12.4-hour transition from high to low tide and back.
Lunar rhythms, which follow the lunar month (29.5 days). They are relevant e.g. for marine life, as the level of the tides is modulated across the lunar cycle.
Gene oscillations – some genes are expressed more during certain hours of the day than during other hours.
Within each cycle, the time period during which the process is more active is called the acrophase. When the process is less active, the cycle is in its bathyphase or trough phase. The particular moment of highest activity is the peak or maximum; the lowest point is the nadir. How high (or low) the process gets is measured by the amplitude.
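As a small illustration of these terms, the following Python sketch (assuming NumPy; the cosine model and the numerical values are illustrative, not measurements) samples an idealized 24-hour rhythm and reads off its peak, nadir, and amplitude:

```python
import numpy as np

hours = np.arange(0, 24, 0.1)
mesor, true_amplitude, acrophase_hr = 36.8, 0.5, 16.0   # hypothetical body-temperature rhythm
signal = mesor + true_amplitude * np.cos(2 * np.pi * (hours - acrophase_hr) / 24)

peak_time = hours[np.argmax(signal)]        # acrophase: time of the maximum
nadir_time = hours[np.argmin(signal)]       # trough phase: time of the minimum
amplitude = (signal.max() - signal.min()) / 2

print(f"peak at {peak_time:.1f} h, nadir at {nadir_time:.1f} h, amplitude {amplitude:.2f} °C")
```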
Biochemical basis of biological rhythms
Goldbeter's book provides a thorough analysis of the biochemical mechanisms and their kinetic properties that underlie biological rhythms.
References
External links
Society for Research on Biological Rhythms
Biological rhythm in Encyclopedia Britannica
Biological processes | Biological rhythm | [
"Biology"
] | 707 | [
"nan",
"Chronobiology"
] |
563,133 | https://en.wikipedia.org/wiki/Wallacea | Wallacea is a biogeographical designation for a group of mainly Indonesian islands separated by deep-water straits from the Asian and Australian continental shelves. Wallacea includes Sulawesi, the largest island in the group, as well as Lombok, Sumbawa, Flores, Sumba, Timor, Halmahera, Buru, Seram, and many smaller islands. The islands of Wallacea lie between the Sunda Shelf (the Malay Peninsula, Sumatra, Borneo, Java, and Bali) to the west, and the Sahul Shelf including Australia and New Guinea to the south and east. The total land area of Wallacea is .
Geography
Wallacea is defined as the series of islands stretching between the two continental shelves of Sunda and Sahul, but excluding the Philippines. Its eastern border (separating Wallacea from Sahul) is represented by a zoogeographical boundary known as Lydekker's Line, while the Wallace Line (separating Wallacea from Sunda) defines its western border.
The Weber Line is the midpoint, at which Asian and Australian fauna and flora are approximately equally represented. It follows the deepest straits traversing the Indonesian Archipelago.
The Wallace Line is named after the British naturalist Alfred Russel Wallace, who recorded the differences between mammal and bird fauna between the islands on either side of the line. The islands of Sundaland to the west of the line, including Sumatra, Java, Bali, and Borneo, share a mammal fauna similar to that of East Asia, which includes tigers, rhinoceros, and apes; whereas the mammal fauna of Lombok and areas extending eastwards are mostly populated by marsupials and birds similar to those in Australasia. Sulawesi shows signs of both.
During the ice ages, sea levels were lower, exposing the Sunda shelf that links the islands of Sundaland to one another and to Asia and allowing Asian land animals to inhabit these islands.
The islands of Wallacea have few land mammals, land birds, or freshwater fish of continental origin, which find it difficult to cross open ocean. Many species of birds, reptiles, and insects were better able to cross the straits, and many such species of Australian and Asian origin are found there. Wallacea's plants are predominantly of Asian origin, and botanists include Sundaland, Wallacea, and New Guinea as the floristic province of Malesia.
Similarly, Australia and New Guinea to the east are linked by a shallow continental shelf, and were linked by a land bridge during the ice ages, forming a single continent that scientists variously call Australia-New Guinea, Meganesia, Papualand, or Sahul. Consequently, Australia, New Guinea, and the Aru Islands share many marsupial mammals, land birds, and freshwater fish that are not found in Wallacea.
Biota and conservation issues
Although the distant ancestors of Wallacea's flora and fauna may have been from Asia or Australia-New Guinea, Wallacea is home to many endemic species. There is extensive autochthonous speciation and proportionately large numbers of endemics; the area is an important contributor to the overall mega-biodiversity of the Indonesian Archipelago.
Fauna includes the lowland and mountain anoa, or dwarf buffalo (Bubalus sp.), and the babirusa, or "deer-pig" (Babyrousa sp.), both found on Sulawesi, among other islands. Maluku shares a number of similar species with Sulawesi, albeit with fewer total, given the differences in size between the two islands—Sulawesi has at least 4,000 recorded terrestrial plant and animal species, while Maluku has just over 1,000, by comparison. Sulawesi is home to over 2,000 invertebrate species (with over 1,000 known species of arthropod, not including nearly 900 lepidopterans), 100 species of reptiles and amphibians, and 288 bird species. Maluku has around 70 reptile and amphibian, 250 avian, and over 550 invertebrate species. Seram Island is particularly noted for its butterflies and birds, including the Moluccan king parrot. Smaller mammals, including some carnivorans (such as civets), marsupials (such as the cuscus), primates and rodents are common throughout the region.
A large portion of the waters surrounding Wallacea are part of the Coral Triangle, considered to be the richest coral reef and marine ecosystems on earth, with the highest number of species, adding to the total biodiversity of the region.
Wallacea was originally almost completely forested, mostly tropical moist broadleaf forests, with some areas of tropical dry broadleaf forest. The higher mountains are home to montane and subalpine forests, and mangroves are common in coastal areas. According to Conservation International, Wallacea is home to over 10,000 plant species, of which approximately 1,500 (15%) are endemic.
Endemism is higher among terrestrial vertebrate species; out of 1,142 species described there, almost half (529) were endemic. 45% of the region retains some sort of forest cover, though only 52,017 km2 (15%) is in a pristine state. Of Wallacea's total 347,000 km2-area, about 20,000 km2 are protected.
Ecoregions
Tropical and subtropical moist broadleaf forests:
Banda Sea Islands moist deciduous forests (Kai Islands, Tanimbar Islands, Babar Islands, Leti Islands, eastern Barat Daya Islands)
Buru rain forests (Buru)
Halmahera rain forests (Halmahera, Morotai, Obi Islands, Bacan Island)
Seram rain forests (Seram, Banda Islands, Ambon Island, Saparua, Gorong archipelago)
Sulawesi lowland rain forests (Sulawesi, Banggai Islands, Sula Islands, Sangihe Islands, Talaud Islands)
Sulawesi montane rain forests (Sulawesi)
Tropical and subtropical dry broadleaf forests:
Lesser Sundas deciduous forests (Lombok, Sumbawa, Komodo, Flores, Alor)
Sumba deciduous forests (Sumba)
Timor and Wetar deciduous forests (Timor, Wetar)
Distribution between Asia and Australia
Australia may be isolated by sea, but zoologically it can be extended through Wallacea. Australian Early-Middle Pliocene rodent fossils have been found at Chinchilla Sands and Bluff Downs in Queensland, but a mix of ancestral and derived traits suggests that murid rodents reached Australia earlier, perhaps in the Miocene, over a forested archipelago, i.e. Wallacea, and then evolved in isolation in Australia.
Australia's rodents make up much of the continent's placental mammal fauna and include various species from stick-nest rats to hopping mice. Other mammals invaded from the east. Two species of cuscus, the Sulawesi bear cuscus and the Sulawesi dwarf cuscus, are the westernmost representatives of the Australasian marsupials.
The tectonic uplift of Wallacea during the collision between Australia and Asia 23 million years ago allowed the global dispersal of passerine birds from Australia across the Indonesian islands. Bustards and megapodes must have somehow colonized Australia. Cockatoos similar to those from Australia inhabit Komodo Island in Wallacea.
A few species of Eucalyptus, a predominant genus of trees in Australia, are found in Wallacea: Eucalyptus deglupta on Sulawesi, and E. urophylla and E. alba in East Nusa Tenggara. For land snails Wallacea and Wallace's Line do not form a barrier for dispersal.
See also
References
External links
Conservation International: Wallacea
George Gaylord Simpson (Apr. 29, 1977), "Too Many Lines; The Limits of the Oriental and Australian Zoogeographic Regions", Proceedings of the American Philosophical Society, Vol. 121, No. 2, pp. 107–120.
Wallacea Research Group
Australasian realm
Biogeography
Ecoregions of Asia
Indomalayan realm
Maritime Southeast Asia
Regions of Indonesia
Regions of Southeast Asia | Wallacea | [
"Biology"
] | 1,649 | [
"Biogeography"
] |
563,161 | https://en.wikipedia.org/wiki/Membrane%20potential | Membrane potential (also transmembrane potential or membrane voltage) is the difference in electric potential between the interior and the exterior of a biological cell. It equals the interior potential minus the exterior potential. This is the energy (i.e. work) per charge which is required to move a (very small) positive charge at constant velocity across the cell membrane from the exterior to the interior. (If the charge is allowed to change velocity, the change of kinetic energy and production of radiation must be taken into account.)
Typical values of membrane potential, normally given in units of millivolts and denoted as mV, range from –80 mV to –40 mV. For such typical negative membrane potentials, positive work is required to move a positive charge from the interior to the exterior. However, thermal kinetic energy allows ions to overcome the potential difference. For a selectively permeable membrane, this permits a net flow against the gradient. This is a kind of osmosis.
Description
All animal cells are surrounded by a membrane composed of a lipid bilayer with proteins embedded in it. The membrane serves as both an insulator and a diffusion barrier to the movement of ions. Transmembrane proteins, also known as ion transporter or ion pump proteins, actively push ions across the membrane and establish concentration gradients across the membrane, and ion channels allow ions to move across the membrane down those concentration gradients. Ion pumps and ion channels are electrically equivalent to a set of batteries and resistors inserted in the membrane, and therefore create a voltage between the two sides of the membrane.
All plasma membranes have an electrical potential across them, with the inside usually negative with respect to the outside. The membrane potential has two basic functions. First, it allows a cell to function as a battery, providing power to operate a variety of "molecular devices" embedded in the membrane. Second, in electrically excitable cells such as neurons and muscle cells, it is used for transmitting signals between different parts of a cell.
Signals in neurons and muscle cells
Signals are generated in excitable cells by opening or closing of ion channels at one point in the membrane, producing a local change in the membrane potential. This change in the electric field can be quickly sensed by either adjacent or more distant ion channels in the membrane. Those ion channels can then open or close as a result of the potential change, reproducing the signal.
In non-excitable cells, and in excitable cells in their baseline states, the membrane potential is held at a relatively stable value, called the resting potential. For neurons, resting potential is defined as ranging from –80 to –70 millivolts; that is, the interior of a cell has a negative baseline voltage of a bit less than one-tenth of a volt. The opening and closing of ion channels can induce a departure from the resting potential. This is called a depolarization if the interior voltage becomes less negative (say from –70 mV to –60 mV), or a hyperpolarization if the interior voltage becomes more negative (say from –70 mV to –80 mV). In excitable cells, a sufficiently large depolarization can evoke an action potential, in which the membrane potential changes rapidly and significantly for a short time (on the order of 1 to 100 milliseconds), often reversing its polarity. Action potentials are generated by the activation of certain voltage-gated ion channels.
In neurons, the factors that influence the membrane potential are diverse. They include numerous types of ion channels, some of which are chemically gated and some of which are voltage-gated. Because voltage-gated ion channels are controlled by the membrane potential, while the membrane potential itself is influenced by these same ion channels, feedback loops that allow for complex temporal dynamics arise, including oscillations and regenerative events such as action potentials.
Ion concentration gradients
Differences in the concentrations of ions on opposite sides of a cellular membrane lead to a voltage called the membrane potential.
Many ions have a concentration gradient across the membrane, including potassium (K+), which is at a high concentration inside and a low concentration outside the membrane. Sodium (Na+) and chloride (Cl−) ions are at high concentrations in the extracellular region, and low concentrations in the intracellular regions. These concentration gradients provide the potential energy to drive the formation of the membrane potential. This voltage is established when the membrane has permeability to one or more ions.
In the simplest case, illustrated in the top diagram ("Ion concentration gradients"), if the membrane is selectively permeable to potassium, these positively charged ions can diffuse down the concentration gradient to the outside of the cell, leaving behind uncompensated negative charges. This separation of charges is what causes the membrane potential.
The system as a whole is electro-neutral. The uncompensated positive charges outside the cell, and the uncompensated negative charges inside the cell, physically line up on the membrane surface and attract each other across the lipid bilayer. Thus, the membrane potential is physically located only in the immediate vicinity of the membrane. It is the separation of these charges across the membrane that is the basis of the membrane voltage.
The top diagram is only an approximation of the ionic contributions to the membrane potential. Other ions including sodium, chloride, calcium, and others play a more minor role, even though they have strong concentration gradients, because they have more limited permeability than potassium.
Physical basis
The membrane potential in a cell derives ultimately from two factors: electrical force and diffusion. Electrical force arises from the mutual attraction between particles with opposite electrical charges (positive and negative) and the mutual repulsion between particles with the same type of charge (both positive or both negative). Diffusion arises from the statistical tendency of particles to redistribute from regions where they are highly concentrated to regions where the concentration is low.
Voltage
Voltage, which is synonymous with difference in electrical potential, is the ability to drive an electric current across a resistance. Indeed, the simplest definition of a voltage is given by Ohm's law: V=IR, where V is voltage, I is current and R is resistance. If a voltage source such as a battery is placed in an electrical circuit, the higher the voltage of the source the greater the amount of current that it will drive across the available resistance. The functional significance of voltage lies only in potential differences between two points in a circuit. The idea of a voltage at a single point is meaningless. It is conventional in electronics to assign a voltage of zero to some arbitrarily chosen element of the circuit, and then assign voltages for other elements measured relative to that zero point. There is no significance in which element is chosen as the zero point—the function of a circuit depends only on the differences not on voltages per se. However, in most cases and by convention, the zero level is most often assigned to the portion of a circuit that is in contact with ground.
The same principle applies to voltage in cell biology. In electrically active tissue, the potential difference between any two points can be measured by inserting an electrode at each point, for example one inside and one outside the cell, and connecting both electrodes to the leads of what is in essence a specialized voltmeter. By convention, the zero potential value is assigned to the outside of the cell and the sign of the potential difference between the outside and the inside is determined by the potential of the inside relative to the outside zero.
In mathematical terms, the definition of voltage begins with the concept of an electric field E, a vector field assigning a magnitude and direction to each point in space. In many situations, the electric field is a conservative field, which means that it can be expressed as the gradient of a scalar function V, that is, E = −∇V. This scalar field is referred to as the voltage distribution. The definition allows for an arbitrary constant of integration—this is why absolute values of voltage are not meaningful. In general, electric fields can be treated as conservative only if magnetic fields do not significantly influence them, but this condition usually applies well to biological tissue.
Because the electric field is the gradient of the voltage distribution, rapid changes in voltage within a small region imply a strong electric field; on the converse, if the voltage remains approximately the same over a large region, the electric fields in that region must be weak. A strong electric field, equivalent to a strong voltage gradient, implies that a strong force is exerted on any charged particles that lie within the region.
Ions and the forces driving their motion
Electrical signals within biological organisms are, in general, driven by ions. The most important cations for the action potential are sodium (Na+) and potassium (K+). Both of these are monovalent cations that carry a single positive charge. Action potentials can also involve calcium (Ca2+), which is a divalent cation that carries a double positive charge. The chloride anion (Cl−) plays a major role in the action potentials of some algae, but plays a negligible role in the action potentials of most animals.
Ions cross the cell membrane under two influences: diffusion and electric fields. A simple example wherein two solutions—A and B—are separated by a porous barrier illustrates that diffusion will ensure that they will eventually mix into equal solutions. This mixing occurs because of the difference in their concentrations. The region with high concentration will diffuse out toward the region with low concentration. To extend the example, let solution A have 30 sodium ions and 30 chloride ions. Also, let solution B have only 20 sodium ions and 20 chloride ions. Assuming the barrier allows both types of ions to travel through it, then a steady state will be reached whereby both solutions have 25 sodium ions and 25 chloride ions. If, however, the porous barrier is selective to which ions are let through, then diffusion alone will not determine the resulting solution. Returning to the previous example, suppose now that the barrier is permeable only to sodium ions. Now, only sodium is allowed to diffuse across the barrier from its higher concentration in solution A to the lower concentration in solution B. This will result in a greater accumulation of sodium ions than chloride ions in solution B and a lesser number of sodium ions than chloride ions in solution A.
This means that there is a net positive charge in solution B from the higher concentration of positively charged sodium ions than negatively charged chloride ions. Likewise, there is a net negative charge in solution A from the greater concentration of negative chloride ions than positive sodium ions. Since opposite charges attract and like charges repel, the ions are now also influenced by electrical fields as well as forces of diffusion. Therefore, positive sodium ions will be less likely to travel to the now-more-positive B solution and remain in the now-more-negative A solution. The point at which the forces of the electric fields completely counteract the force due to diffusion is called the equilibrium potential. At this point, the net flow of the specific ion (in this case sodium) is zero.
Plasma membranes
Every cell is enclosed in a plasma membrane, which has the structure of a lipid bilayer with many types of large molecules embedded in it. Because it is made of lipid molecules, the plasma membrane intrinsically has a high electrical resistivity, in other words a low intrinsic permeability to ions. However, some of the molecules embedded in the membrane are capable either of actively transporting ions from one side of the membrane to the other or of providing channels through which they can move.
In electrical terminology, the plasma membrane functions as a combined resistor and capacitor. Resistance arises from the fact that the membrane impedes the movement of charges across it. Capacitance arises from the fact that the lipid bilayer is so thin that an accumulation of charged particles on one side gives rise to an electrical force that pulls oppositely charged particles toward the other side. The capacitance of the membrane is relatively unaffected by the molecules that are embedded in it, so it has a more or less invariant value estimated at 2 μF/cm2 (the total capacitance of a patch of membrane is proportional to its area). The conductance of a pure lipid bilayer is so low, on the other hand, that in biological situations it is always dominated by the conductance of alternative pathways provided by embedded molecules. Thus, the capacitance of the membrane is more or less fixed, but the resistance is highly variable.
The thickness of a plasma membrane is estimated to be about 7-8 nanometers. Because the membrane is so thin, it does not take a very large transmembrane voltage to create a strong electric field within it. Typical membrane potentials in animal cells are on the order of 100 millivolts (that is, one tenth of a volt), but calculations show that this generates an electric field close to the maximum that the membrane can sustain—it has been calculated that a voltage difference much larger than 200 millivolts could cause dielectric breakdown, that is, arcing across the membrane.
Facilitated diffusion and transport
The resistance of a pure lipid bilayer to the passage of ions across it is very high, but structures embedded in the membrane can greatly enhance ion movement, either actively or passively, via mechanisms called facilitated transport and facilitated diffusion. The two types of structure that play the largest roles are ion channels and ion pumps, both usually formed from assemblages of protein molecules. Ion channels provide passageways through which ions can move. In most cases, an ion channel is permeable only to specific types of ions (for example, sodium and potassium but not chloride or calcium), and sometimes the permeability varies depending on the direction of ion movement. Ion pumps, also known as ion transporters or carrier proteins, actively transport specific types of ions from one side of the membrane to the other, sometimes using energy derived from metabolic processes to do so.
Ion pumps
Ion pumps are integral membrane proteins that carry out active transport, i.e., use cellular energy (ATP) to "pump" the ions against their concentration gradient. Such ion pumps take in ions from one side of the membrane (decreasing its concentration there) and release them on the other side (increasing its concentration there).
The ion pump most relevant to the action potential is the sodium–potassium pump, which transports three sodium ions out of the cell and two potassium ions in. As a consequence, the concentration of potassium ions K+ inside the neuron is roughly 30-fold larger than the outside concentration, whereas the sodium concentration outside is roughly five-fold larger than inside. In a similar manner, other ions have different concentrations inside and outside the neuron, such as calcium, chloride and magnesium.
If the numbers of each type of ion were equal, the sodium–potassium pump would be electrically neutral, but, because of the three-for-two exchange, it gives a net movement of one positive charge from intracellular to extracellular for each cycle, thereby contributing to a positive voltage difference. The pump has three effects: (1) it makes the sodium concentration high in the extracellular space and low in the intracellular space; (2) it makes the potassium concentration high in the intracellular space and low in the extracellular space; (3) it gives the intracellular space a negative voltage with respect to the extracellular space.
The sodium-potassium pump is relatively slow in operation. If a cell were initialized with equal concentrations of sodium and potassium everywhere, it would take hours for the pump to establish equilibrium. The pump operates constantly, but becomes progressively less efficient as the concentrations of sodium and potassium available for pumping are reduced.
Ion pumps influence the action potential only by establishing the relative ratio of intracellular and extracellular ion concentrations. The action potential involves mainly the opening and closing of ion channels not ion pumps. If the ion pumps are turned off by removing their energy source, or by adding an inhibitor such as ouabain, the axon can still fire hundreds of thousands of action potentials before their amplitudes begin to decay significantly. In particular, ion pumps play no significant role in the repolarization of the membrane after an action potential.
Another functionally important ion pump is the sodium-calcium exchanger. This pump operates in a conceptually similar way to the sodium-potassium pump, except that in each cycle it exchanges three Na+ from the extracellular space for one Ca++ from the intracellular space. Because the net flow of charge is inward, this pump runs "downhill", in effect, and therefore does not require any energy source except the membrane voltage. Its most important effect is to pump calcium outward—it also allows an inward flow of sodium, thereby counteracting the sodium-potassium pump, but, because overall sodium and potassium concentrations are much higher than calcium concentrations, this effect is relatively unimportant. The net result of the sodium-calcium exchanger is that in the resting state, intracellular calcium concentrations become very low.
Ion channels
Ion channels are integral membrane proteins with a pore through which ions can travel between extracellular space and cell interior. Most channels are specific (selective) for one ion; for example, most potassium channels are characterized by 1000:1 selectivity ratio for potassium over sodium, though potassium and sodium ions have the same charge and differ only slightly in their radius. The channel pore is typically so small that ions must pass through it in single-file order. Channel pores can be either open or closed for ion passage, although a number of channels demonstrate various sub-conductance levels. When a channel is open, ions permeate through the channel pore down the transmembrane concentration gradient for that particular ion. Rate of ionic flow through the channel, i.e. single-channel current amplitude, is determined by the maximum channel conductance and electrochemical driving force for that ion, which is the difference between the instantaneous value of the membrane potential and the value of the reversal potential.
A channel may have several different states (corresponding to different conformations of the protein), but each such state is either open or closed. In general, closed states correspond either to a contraction of the pore—making it impassable to the ion—or to a separate part of the protein, stoppering the pore. For example, the voltage-dependent sodium channel undergoes inactivation, in which a portion of the protein swings into the pore, sealing it. This inactivation shuts off the sodium current and plays a critical role in the action potential.
Ion channels can be classified by how they respond to their environment. For example, the ion channels involved in the action potential are voltage-sensitive channels; they open and close in response to the voltage across the membrane. Ligand-gated channels form another important class; these ion channels open and close in response to the binding of a ligand molecule, such as a neurotransmitter. Other ion channels open and close with mechanical forces. Still other ion channels—such as those of sensory neurons—open and close in response to other stimuli, such as light, temperature or pressure.
Leakage channels
Leakage channels are the simplest type of ion channel, in that their permeability is more or less constant. The types of leakage channels that have the greatest significance in neurons are potassium and chloride channels. Even these are not perfectly constant in their properties: First, most of them are voltage-dependent in the sense that they conduct better in one direction than the other (in other words, they are rectifiers); second, some of them are capable of being shut off by chemical ligands even though they do not require ligands in order to operate.
Ligand-gated channels
Ligand-gated ion channels are channels whose permeability is greatly increased when some type of chemical ligand binds to the protein structure. Animal cells contain hundreds, if not thousands, of types of these. A large subset function as neurotransmitter receptors—they occur at postsynaptic sites, and the chemical ligand that gates them is released by the presynaptic axon terminal. One example of this type is the AMPA receptor, a receptor for the neurotransmitter glutamate that when activated allows passage of sodium and potassium ions. Another example is the GABAA receptor, a receptor for the neurotransmitter GABA that when activated allows passage of chloride ions.
Neurotransmitter receptors are activated by ligands that appear in the extracellular area, but there are other types of ligand-gated channels that are controlled by interactions on the intracellular side.
Voltage-dependent channels
Voltage-gated ion channels, also known as voltage dependent ion channels, are channels whose permeability is influenced by the membrane potential. They form another very large group, with each member having a particular ion selectivity and a particular voltage dependence. Many are also time-dependent—in other words, they do not respond immediately to a voltage change but only after a delay.
One of the most important members of this group is a type of voltage-gated sodium channel that underlies action potentials—these are sometimes called Hodgkin-Huxley sodium channels because they were initially characterized by Alan Lloyd Hodgkin and Andrew Huxley in their Nobel Prize-winning studies of the physiology of the action potential. The channel is closed at the resting voltage level, but opens abruptly when the voltage exceeds a certain threshold, allowing a large influx of sodium ions that produces a very rapid change in the membrane potential. Recovery from an action potential is partly dependent on a type of voltage-gated potassium channel that is closed at the resting voltage level but opens as a consequence of the large voltage change produced during the action potential.
Reversal potential
The reversal potential (or equilibrium potential) of an ion is the value of transmembrane voltage at which diffusive and electrical forces counterbalance, so that there is no net ion flow across the membrane. This means that the transmembrane voltage exactly opposes the force of diffusion of the ion, such that the net current of the ion across the membrane is zero and unchanging. The reversal potential is important because it gives the voltage that acts on channels permeable to that ion—in other words, it gives the voltage that the ion concentration gradient generates when it acts as a battery.
The equilibrium potential of a particular ion is usually designated by the notation Eion. The equilibrium potential for any ion can be calculated using the Nernst equation. For example, the reversal potential for potassium ions is as follows:

Eeq,K+ = (RT / zF) ln([K+]o / [K+]i)
where
Eeq,K+ = equilibrium potential for potassium, measured in volts
R = universal gas constant, equal to 8.314 joules·K−1·mol−1
T = absolute temperature, measured in kelvins (= K = degrees Celsius + 273.15)
z = number of elementary charges of the ion in question involved in the reaction
F = Faraday constant, equal to 96,485 coulombs·mol−1 or J·V−1·mol−1
[K+]o= extracellular concentration of potassium, measured in mol·m−3 or mmol·l−1
[K+]i = intracellular concentration of potassium
Even if two different ions have the same charge (i.e., K+ and Na+), they can still have very different equilibrium potentials, provided their outside and/or inside concentrations differ. Take, for example, the equilibrium potentials of potassium and sodium in neurons. The potassium equilibrium potential EK is −84 mV with 5 mM potassium outside and 140 mM inside. On the other hand, the sodium equilibrium potential, ENa, is approximately +66 mV with approximately 12 mM sodium inside and 140 mM outside.
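The following Python sketch evaluates the Nernst equation with the concentrations quoted above (the function name is illustrative; the exact millivolt values depend on the temperature assumed, which is why they may differ slightly from the rounded figures in the text):

```python
import math

def nernst_potential(conc_out, conc_in, z=1, temperature_k=310.0):
    """Equilibrium (reversal) potential in volts for an ion of valence z."""
    R = 8.314       # gas constant, J K^-1 mol^-1
    F = 96485.0     # Faraday constant, C mol^-1
    return (R * temperature_k) / (z * F) * math.log(conc_out / conc_in)

E_K  = nernst_potential(5.0, 140.0)     # potassium: 5 mM outside, 140 mM inside
E_Na = nernst_potential(140.0, 12.0)    # sodium: 140 mM outside, 12 mM inside
print(f"E_K  ~ {E_K * 1000:.0f} mV")    # about -89 mV at 37 degrees C
print(f"E_Na ~ {E_Na * 1000:.0f} mV")   # about +66 mV at 37 degrees C
```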
Changes to membrane potential during development
A neuron's resting membrane potential actually changes during the development of an organism. In order for a neuron to eventually adopt its full adult function, its potential must be tightly regulated during development. As an organism progresses through development the resting membrane potential becomes more negative. Glial cells are also differentiating and proliferating as development progresses in the brain. The addition of these glial cells increases the organism's ability to regulate extracellular potassium. The drop in extracellular potassium can lead to a decrease in membrane potential of 35 mV.
Cell excitability
Cell excitability is the change in membrane potential that is necessary for cellular responses in various tissues. Cell excitability is a property that is induced during early embryogenesis. Excitability of a cell has also been defined as the ease with which a response may be triggered. The resting and threshold potentials form the basis of cell excitability, and these processes are fundamental for the generation of graded and action potentials.
The most important regulators of cell excitability are the extracellular electrolyte concentrations (i.e. Na+, K+, Ca2+, Cl−, Mg2+) and associated proteins. Important proteins that regulate cell excitability are voltage-gated ion channels, ion transporters (e.g. Na+/K+-ATPase, magnesium transporters, acid–base transporters), membrane receptors and hyperpolarization-activated cyclic-nucleotide-gated channels. For example, potassium channels and calcium-sensing receptors are important regulators of excitability in neurons, cardiac myocytes and many other excitable cells like astrocytes. Calcium ion is also the most important second messenger in excitable cell signaling. Activation of synaptic receptors initiates long-lasting changes in neuronal excitability. Thyroid, adrenal and other hormones also regulate cell excitability, for example, progesterone and estrogen modulate myometrial smooth muscle cell excitability.
Many cell types are considered to have an excitable membrane. Excitable cells are neurons, muscle (cardiac, skeletal, smooth), vascular endothelial cells, pericytes, juxtaglomerular cells, interstitial cells of Cajal, many types of epithelial cells (e.g. beta cells, alpha cells, delta cells, enteroendocrine cells, pulmonary neuroendocrine cells, pinealocytes), glial cells (e.g. astrocytes), mechanoreceptor cells (e.g. hair cells and Merkel cells), chemoreceptor cells (e.g. glomus cells, taste receptors), some plant cells and possibly immune cells. Astrocytes display a form of non-electrical excitability based on intracellular calcium variations related to the expression of several receptors through which they can detect the synaptic signal. In neurons, there are different membrane properties in some portions of the cell, for example, dendritic excitability endows neurons with the capacity for coincidence detection of spatially separated inputs.
Equivalent circuit
Electrophysiologists model the effects of ionic concentration differences, ion channels, and membrane capacitance in terms of an equivalent circuit, which is intended to represent the electrical properties of a small patch of membrane. The equivalent circuit consists of a capacitor in parallel with four pathways each consisting of a battery in series with a variable conductance. The capacitance is determined by the properties of the lipid bilayer, and is taken to be fixed. Each of the four parallel pathways comes from one of the principal ions, sodium, potassium, chloride, and calcium. The voltage of each ionic pathway is determined by the concentrations of the ion on each side of the membrane; see the Reversal potential section above. The conductance of each ionic pathway at any point in time is determined by the states of all the ion channels that are potentially permeable to that ion, including leakage channels, ligand-gated channels, and voltage-gated ion channels.
For fixed ion concentrations and fixed values of ion channel conductance, the equivalent circuit can be further reduced, using the Goldman equation as described below, to a circuit containing a capacitance in parallel with a battery and conductance. In electrical terms, this is a type of RC circuit (resistance-capacitance circuit), and its electrical properties are very simple. Starting from any initial state, the current flowing across either the conductance or the capacitance decays with an exponential time course, with a time constant of τ = RC, where C is the capacitance of the membrane patch and R is the net resistance. For realistic situations, the time constant usually lies in the 1–100 millisecond range. In most cases, changes in the conductance of ion channels occur on a faster time scale, so an RC circuit is not a good approximation; however, the differential equation used to model a membrane patch is commonly a modified version of the RC circuit equation.
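A minimal numerical sketch of this RC behaviour (assuming NumPy; the specific membrane resistance chosen here is an illustrative assumption, not a value from the text):

```python
import numpy as np

c_m = 2e-6            # specific capacitance, F/cm^2 (figure cited earlier in the text)
r_m = 5e3             # specific membrane resistance, Ohm cm^2 (assumed for illustration)
tau = r_m * c_m       # membrane time constant in seconds; the patch area cancels out

t = np.linspace(0.0, 5 * tau, 200)
v0 = 10e-3                       # 10 mV initial deviation from the resting value
v = v0 * np.exp(-t / tau)        # the deviation decays exponentially back toward rest

print(f"tau = {tau * 1e3:.0f} ms; after one time constant the 10 mV deviation "
      f"has decayed to {v0 * np.exp(-1) * 1e3:.1f} mV")
```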
Resting potential
When the membrane potential of a cell goes for a long period of time without changing significantly, it is referred to as a resting potential or resting voltage. This term is used for the membrane potential of non-excitable cells, but also for the membrane potential of excitable cells in the absence of excitation. In excitable cells, the other possible states are graded membrane potentials (of variable amplitude), and action potentials, which are large, all-or-nothing rises in membrane potential that usually follow a fixed time course. Excitable cells include neurons, muscle cells, and some secretory cells in glands. Even in other types of cells, however, the membrane voltage can undergo changes in response to environmental or intracellular stimuli. For example, depolarization of the plasma membrane appears to be an important step in programmed cell death.
The interactions that generate the resting potential are modeled by the Goldman equation. This is similar in form to the Nernst equation shown above, in that it is based on the charges of the ions in question, as well as the difference between their inside and outside concentrations. However, it also takes into consideration the relative permeability of the plasma membrane to each ion in question.
The three ions that appear in this equation are potassium (K+), sodium (Na+), and chloride (Cl−). Calcium is omitted, but can be added to deal with situations in which it plays a significant role. Being an anion, the chloride terms are treated differently from the cation terms; the intracellular concentration is in the numerator, and the extracellular concentration in the denominator, which is reversed from the cation terms. Pi stands for the relative permeability of the ion type i.
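Written out explicitly, in the standard form of the Goldman–Hodgkin–Katz voltage equation, the relationship described above is

E_m = \frac{RT}{F} \ln\left( \frac{P_{\mathrm{K}}[\mathrm{K}^+]_{\mathrm{out}} + P_{\mathrm{Na}}[\mathrm{Na}^+]_{\mathrm{out}} + P_{\mathrm{Cl}}[\mathrm{Cl}^-]_{\mathrm{in}}}{P_{\mathrm{K}}[\mathrm{K}^+]_{\mathrm{in}} + P_{\mathrm{Na}}[\mathrm{Na}^+]_{\mathrm{in}} + P_{\mathrm{Cl}}[\mathrm{Cl}^-]_{\mathrm{out}}} \right)

where R is the gas constant, T the absolute temperature and F the Faraday constant; the chloride concentrations appear with inside and outside swapped because chloride is an anion.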
In essence, the Goldman formula expresses the membrane potential as a weighted average of the reversal potentials for the individual ion types, weighted by permeability. (Although the membrane potential changes about 100 mV during an action potential, the concentrations of ions inside and outside the cell do not change significantly. They remain close to their respective concentrations when the membrane is at resting potential.) In most animal cells, the permeability to potassium is much higher in the resting state than the permeability to sodium. As a consequence, the resting potential is usually close to the potassium reversal potential. The permeability to chloride can be high enough to be significant, but, unlike the other ions, chloride is not actively pumped, and therefore equilibrates at a reversal potential very close to the resting potential determined by the other ions.
Values of resting membrane potential in most animal cells usually vary between the potassium reversal potential (usually around -80 mV) and around -40 mV. The resting potential in excitable cells (capable of producing action potentials) is usually near -60 mV—more depolarized voltages would lead to spontaneous generation of action potentials. Immature or undifferentiated cells show highly variable values of resting voltage, usually significantly more positive than in differentiated cells. In such cells, the resting potential value correlates with the degree of differentiation: undifferentiated cells in some cases may not show any transmembrane voltage difference at all.
Maintenance of the resting potential can be metabolically costly for a cell because of its requirement for active pumping of ions to counteract losses due to leakage channels. The cost is highest when the cell function requires an especially depolarized value of membrane voltage. For example, the resting potential in daylight-adapted blowfly (Calliphora vicina) photoreceptors can be as high as -30 mV. This elevated membrane potential allows the cells to respond very rapidly to visual inputs; the cost is that maintenance of the resting potential may consume more than 20% of overall cellular ATP.
On the other hand, the high resting potential in undifferentiated cells does not necessarily incur a high metabolic cost. This apparent paradox is resolved by examination of the origin of that resting potential. Little-differentiated cells are characterized by extremely high input resistance, which implies that few leakage channels are present at this stage of cell life. As an apparent result, potassium permeability becomes similar to that for sodium ions, which places the resting potential in between the reversal potentials for sodium and potassium as discussed above. The reduced leakage currents also mean there is little need for active pumping in order to compensate, and therefore a low metabolic cost.
Graded potentials
As explained above, the potential at any point in a cell's membrane is determined by the ion concentration differences between the intracellular and extracellular areas, and by the permeability of the membrane to each type of ion. The ion concentrations do not normally change very quickly (with the exception of Ca2+, where the baseline intracellular concentration is so low that even a small influx may increase it by orders of magnitude), but the permeabilities of the ions can change in a fraction of a millisecond, as a result of activation of ligand-gated ion channels. The change in membrane potential can be either large or small, depending on how many ion channels are activated and what type they are, and can be either long or short, depending on the lengths of time that the channels remain open. Changes of this type are referred to as graded potentials, in contrast to action potentials, which have a fixed amplitude and time course.
As can be derived from the Goldman equation shown above, the effect of increasing the permeability of a membrane to a particular type of ion shifts the membrane potential toward the reversal potential for that ion. Thus, opening Na+ channels shifts the membrane potential toward the Na+ reversal potential, which is usually around +60 mV. Likewise, opening K+ channels shifts the membrane potential toward about –90 mV, and opening Cl− channels shifts it toward about –70 mV (resting potential of most membranes). Thus, Na+ channels shift the membrane potential in a positive direction, K+ channels shift it in a negative direction (except when the membrane is hyperpolarized to a value more negative than the K+ reversal potential), and Cl− channels tend to shift it towards the resting potential.
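The reversal potentials quoted here follow from the Nernst equation mentioned earlier. The short sketch below computes them for one illustrative set of mammalian ion concentrations; the concentration values are assumptions chosen only to show the method, and real cells vary considerably.

# Nernst reversal potentials for an illustrative set of ion concentrations (mM).
# The concentrations are assumptions for demonstration, not measurements.
import math

R, T, F = 8.314, 310.0, 96485.0   # J/(mol K), kelvin (37 degrees C), C/mol

def nernst_mV(z, c_out, c_in):
    """Reversal potential in millivolts for an ion of valence z."""
    return 1000.0 * (R * T) / (z * F) * math.log(c_out / c_in)

print(f"E_K  = {nernst_mV(+1, 4.0, 140.0):6.1f} mV")   # about -95 mV
print(f"E_Na = {nernst_mV(+1, 145.0, 12.0):6.1f} mV")  # about +67 mV
print(f"E_Cl = {nernst_mV(-1, 110.0, 10.0):6.1f} mV")  # about -64 mV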
Graded membrane potentials are particularly important in neurons, where they are produced by synapses—a temporary change in membrane potential produced by activation of a synapse by a single graded or action potential is called a postsynaptic potential. Neurotransmitters that act to open Na+ channels typically cause the membrane potential to become more positive, while neurotransmitters that activate K+ channels typically cause it to become more negative; those that inhibit these channels tend to have the opposite effect.
Whether a postsynaptic potential is considered excitatory or inhibitory depends on the reversal potential for the ions of that current, and the threshold for the cell to fire an action potential (around –50 mV). A postsynaptic current with a reversal potential above threshold, such as a typical Na+ current, is considered excitatory. A current with a reversal potential below threshold, such as a typical K+ current, is considered inhibitory. A current with a reversal potential above the resting potential, but below threshold, will not by itself elicit action potentials, but will produce subthreshold membrane potential oscillations. Thus, neurotransmitters that act to open Na+ channels produce excitatory postsynaptic potentials, or EPSPs, whereas neurotransmitters that act to open K+ or Cl− channels typically produce inhibitory postsynaptic potentials, or IPSPs. When multiple types of channels are open within the same time period, their postsynaptic potentials summate (are added together).
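The classification rule in this paragraph can be stated compactly as a comparison between a current's reversal potential, the firing threshold and the resting potential. The sketch below uses the approximate figures already given in the article (–50 mV threshold, –73 mV resting potential) purely for illustration.

# Classify a postsynaptic current from its reversal potential, as described above.
# Threshold and resting values are the approximate figures used in the article.
def classify_current(e_rev_mV, threshold_mV=-50.0, resting_mV=-73.0):
    if e_rev_mV > threshold_mV:
        return "excitatory (can drive the membrane past threshold)"
    if e_rev_mV > resting_mV:
        return "depolarizing but subthreshold on its own"
    return "inhibitory (pulls the membrane away from threshold)"

for name, e_rev in [("typical Na+ current", 60.0),
                    ("typical K+ current", -90.0),
                    ("current with E_rev = -60 mV", -60.0)]:
    print(f"{name}: {classify_current(e_rev)}")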
Other values
From the viewpoint of biophysics, the resting membrane potential is merely the membrane potential that results from the membrane permeabilities that predominate when the cell is resting. The above equation of weighted averages always applies, but the following approach may be more easily visualized.
At any given moment, there are two factors for an ion that determine how much influence that ion will have over the membrane potential of a cell:
That ion's driving force
That ion's permeability
If the driving force is high, then the ion is being "pushed" across the membrane. If the permeability is high, it will be easier for the ion to diffuse across the membrane.
Driving force is the net electrical force available to move that ion across the membrane. It is calculated as the difference between the voltage that the ion "wants" to be at (its equilibrium potential) and the actual membrane potential (Em). So, in formal terms, the driving force for an ion = Em - Eion
For example, at our earlier calculated resting potential of −73 mV, the driving force on potassium is 7 mV: (−73 mV) − (−80 mV) = 7 mV. The driving force on sodium would be (−73 mV) − (60 mV) = −133 mV.
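The same arithmetic, written as a small helper using the values from this example:

# Driving force = Em - Eion, with the example values above (all in mV).
def driving_force(e_m, e_ion):
    return e_m - e_ion

e_m = -73.0
print(driving_force(e_m, -80.0))   # potassium:    7 mV
print(driving_force(e_m, 60.0))    # sodium:    -133 mV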
Permeability is a measure of how easily an ion can cross the membrane. It is normally measured as the (electrical) conductance and the unit, siemens, corresponds to 1 C·s−1·V−1, that is one coulomb per second per volt of potential.
So, in a resting membrane, while the driving force for potassium is low, its permeability is very high. Sodium has a huge driving force but almost no resting permeability. In this case, potassium carries about 20 times more current than sodium, and thus has 20 times more influence over Em than does sodium.
However, consider another case: the peak of the action potential. Here, permeability to Na+ is high and K+ permeability is relatively low. Thus, the membrane moves to near ENa and far from EK.
The more ions that are permeant, the more complicated it becomes to predict the membrane potential. However, this can be done using the Goldman–Hodgkin–Katz equation or the weighted means equation. By plugging in the concentration gradients and the permeabilities of the ions at any instant in time, one can determine the membrane potential at that moment. What the GHK equation means is that, at any time, the value of the membrane potential will be a weighted average of the equilibrium potentials of all permeant ions. The "weighting" is each ion's relative permeability across the membrane.
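A minimal sketch of this calculation is given below. The permeability ratios and concentrations are illustrative assumptions (roughly resting-like values in the spirit of the classic squid-axon ratios), not measurements; the function simply evaluates the Goldman–Hodgkin–Katz equation shown earlier.

# Goldman-Hodgkin-Katz voltage for three permeant ions; all input values are illustrative.
import math

R, T, F = 8.314, 310.0, 96485.0

def ghk_voltage_mV(p_k, p_na, p_cl, k_out, k_in, na_out, na_in, cl_out, cl_in):
    """Membrane potential in millivolts; the chloride terms are inverted because it is an anion."""
    num = p_k * k_out + p_na * na_out + p_cl * cl_in
    den = p_k * k_in + p_na * na_in + p_cl * cl_out
    return 1000.0 * (R * T / F) * math.log(num / den)

# Resting-like relative permeabilities P_K : P_Na : P_Cl = 1 : 0.05 : 0.45, concentrations in mM.
v_rest = ghk_voltage_mV(1.0, 0.05, 0.45, 4.0, 140.0, 145.0, 12.0, 110.0, 10.0)
print(f"{v_rest:.1f} mV")   # about -67 mV with these values, a figure in the typical resting range

Raising p_na relative to p_k in the same call drives the result toward the sodium reversal potential, as at the peak of an action potential.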
Effects and implications
While cells expend energy to transport ions and establish a transmembrane potential, they use this potential in turn to transport other ions and metabolites such as sugar. The transmembrane potential of the mitochondria drives the production of ATP, which is the common currency of biological energy.
Cells may draw on the energy they store in the resting potential to drive action potentials or other forms of excitation. These changes in the membrane potential enable communication with other cells (as with action potentials) or initiate changes inside the cell, which happens in an egg when it is fertilized by a sperm.
Changes in the dielectric properties of the plasma membrane may act as a hallmark of underlying conditions such as diabetes and dyslipidemia.
In neuronal cells, an action potential begins with a rush of sodium ions into the cell through sodium channels, resulting in depolarization, while recovery involves an outward rush of potassium through potassium channels. Both of these fluxes occur by passive diffusion.
A dose of salt may trigger the still-working neurons of a fresh cut of meat into firing, causing muscle spasms.
See also
Bioelectrochemistry
Chemiosmotic potential
Electrochemical potential
Goldman equation
Membrane biophysics
Microelectrode array
Saltatory conduction
Surface potential
Gibbs–Donnan effect
Synaptic potential
Notes
References
Further reading
Alberts et al. Molecular Biology of the Cell. Garland Publishing; 4th Bk&Cdr edition (March 2002). Undergraduate level.
Guyton, Arthur C., John E. Hall. Textbook of Medical Physiology. W.B. Saunders Company; 10th edition (August 15, 2000). Undergraduate level.
Hille, B. Ionic Channels of Excitable Membranes. Sinauer Associates, Sunderland, MA, USA; 1st edition, 1984.
Nicholls, J.G., Martin, A.R. and Wallace, B.G. From Neuron to Brain. Sinauer Associates, Inc., Sunderland, MA, USA; 3rd edition, 1992.
Ove-Sten Knudsen. Biological Membranes: Theory of Transport, Potentials and Electric Impulses. Cambridge University Press (September 26, 2002). Graduate level.
National Medical Series for Independent Study. Physiology. Lippincott Williams & Wilkins, Philadelphia, PA, USA; 4th edition, 2001.
External links
Functions of the Cell Membrane
Nernst/Goldman Equation Simulator
Nernst Equation Calculator
Goldman-Hodgkin-Katz Equation Calculator
Electrochemical Driving Force Calculator
The Origin of the Resting Membrane Potential - Online interactive tutorial (Flash)
Cell communication
Cell signaling
Cellular processes
Cellular neuroscience
Electrochemical concepts
Electrophysiology
Membrane biology | Membrane potential | [
"Chemistry",
"Biology"
] | 8,658 | [
"Cell communication",
"Membrane biology",
"Electrochemical concepts",
"Electrochemistry",
"Cellular processes",
"Molecular biology"
] |
563,219 | https://en.wikipedia.org/wiki/369%20%28number%29 | 369 (three hundred [and] sixty-nine) is the natural number following 368 and preceding 370.
In mathematics
369 is the magic constant of the 9 × 9 magic square and the n-Queens Problem for n = 9.
There are 369 free octominoes (polyominoes of order 8).
369 forms a Ruth–Aaron pair with 370: the sums of their distinct prime factors are equal (3 + 41 = 2 + 5 + 37 = 44).
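Both statements are easy to check directly; the short script below (Python) recomputes the 9 × 9 magic constant from n(n² + 1)/2 and compares the sums of the distinct prime factors of 369 and 370.

# Verify the magic constant for n = 9 and the Ruth-Aaron property of (369, 370).
def magic_constant(n):
    return n * (n * n + 1) // 2

def distinct_prime_factors(n):
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

print(magic_constant(9))                           # 369
print(sorted(distinct_prime_factors(369)))         # [3, 41]    -> sum 44
print(sorted(distinct_prime_factors(370)))         # [2, 5, 37] -> sum 44
print(sum(distinct_prime_factors(369)) == sum(distinct_prime_factors(370)))  # True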
References
Integers | 369 (number) | [
"Mathematics"
] | 92 | [
"Mathematical objects",
"Number stubs",
"Elementary mathematics",
"Integers",
"Numbers"
] |
563,239 | https://en.wikipedia.org/wiki/Biogenic%20substance | A biogenic substance is a product made by or of life forms. While the term originally was specific to metabolite compounds that had toxic effects on other organisms, it has developed to encompass any constituents, secretions, and metabolites of plants or animals. In context of molecular biology, biogenic substances are referred to as biomolecules. They are generally isolated and measured through the use of chromatography and mass spectrometry techniques. Additionally, the transformation and exchange of biogenic substances can by modelled in the environment, particularly their transport in waterways.
The observation and measurement of biogenic substances is notably important in the fields of geology and biochemistry. A large proportion of isoprenoids and fatty acids in geological sediments are derived from plants and chlorophyll, and can be found in samples extending back to the Precambrian. These biogenic substances are capable of withstanding the diagenesis process in sediment, but may also be transformed into other materials. This makes them useful as biomarkers for geologists to verify the age, origin and degradation processes of different rocks.
Biogenic substances have been studied as part of marine biochemistry since the 1960s, which has involved investigating their production, transport, and transformation in the water, and how they may be used in industrial applications. A large fraction of biogenic compounds in the marine environment are produced by micro and macro algae, including cyanobacteria. Due to their antimicrobial properties they are currently the subject of research in both industrial projects, such as for anti-fouling paints, or in medicine.
History of discovery and classification
During a meeting of the New York Academy of Sciences' Section of Geology and Mineralogy in 1903, geologist Amadeus William Grabau proposed a new rock classification system in his paper 'Discussion of and Suggestions Regarding a New Classification of Rocks'. Within the primary subdivision of "Endogenetic rocks" – rocks formed through chemical processes – was a category termed "Biogenic rocks", which was used synonymously with "Organic rocks". Other secondary categories were "Igneous" and "Hydrogenic" rocks.
In the 1930s German chemist Alfred E. Treibs first detected biogenic substances in petroleum as part of his studies of porphyrins. Based on this research, there was a later increase in the 1970s in the investigation of biogenic substances in sedimentary rocks as part of the study of geology. This was facilitated by the development of more advanced analytical methods, and led to greater collaboration between geologists and organic chemists in order to research the biogenic compounds in sediments.
Researchers additionally began to investigate the production of compounds by microorganisms in the marine environment during the early 1960s. By 1975, different research areas had developed in the study of marine biochemistry. These were "marine toxins, marine bioproducts and marine chemical ecology". Following this in 1994, Teuscher and Lindequist defined biogenic substances as "chemical compounds which are synthesised by living organisms and which, if they exceed certain concentrations, cause temporary or permanent damage or even death of other organisms by chemical or physicochemical effects" in their book, Biogene Gifte. This emphasis in research and classification on the toxicity of biogenic substances was partly due to the cytotoxicity-directed screening assays that were used to detect the biologically active compounds. The diversity of biogenic products has since been expanded from cytotoxic substances through the use of alternative pharmaceutical and industrial assays.
In the environment
Hydroecology
Through studying the transport of biogenic substances in the Tatar Strait in the Sea of Japan, a Russian team noted that biogenic substances can enter the marine environment due to input from either external sources, transport inside the water masses, or development by metabolic processes within the water. They can likewise be expended due to biotransformation processes, or biomass formation by microorganisms. In this study the biogenic substance concentrations, transformation frequency, and turnover were all highest in the upper layer of the water. Additionally, in different regions of the strait the biogenic substances with the highest annual transfer were constant. These were O2, DOC, and DISi, which are normally found in large concentrations in natural water. The biogenic substances that tend to have lower input through the external boundaries of the strait and therefore least transfer were mineral and detrital components of N and P. These same substances take active part in biotransformation processes in the marine environment and have lower annual output as well.
Geological sites
Organic geochemists also have an interest in studying the diagenesis of biogenic substances in petroleum and how they are transformed in sediment and fossils. While 90% of this organic material is insoluble in common organic solvents – called kerogen – 10% is in a form that is soluble and can be extracted, from where biogenic compounds can then be isolated. Saturated linear fatty acids and pigments have the most stable chemical structures and are therefore suited to withstanding degradation from the diagenesis process and being detected in their original forms. However, macromolecules have also been found in protected geological regions. Typical sedimentation conditions involve enzymatic, microbial and physicochemical processes as well as increased temperature and pressure, which lead to transformations of biogenic substances. For example, pigments that arise from dehydrogenation of chlorophyll or hemin can be found in many sediments as nickel or vanadyl complexes. A large proportion of the isoprenoids in sediments are also derived from chlorophyll. Similarly, linear saturated fatty acids discovered in the Messel oil shale of the Messel Pit in Germany arise from organic material of vascular plants.
Additionally, alkanes and isoprenoids are found in soluble extracts of Precambrian rock, indicating the probable existence of biological material more than three billion years ago. However, there is the potential that these organic compounds are abiogenic in nature, especially in Precambrian sediments. While Studier et al.'s (1968) simulations of the synthesis of isoprenoids in abiogenic conditions did not produce the long-chain isoprenoids used as biomarkers in fossils and sediments, traces of C9-C14 isoprenoids were detected. It is also possible for polyisoprenoid chains to be stereoselectively synthesised using catalysts such as Al(C2H5)3 – VCl3. However, the probability of these compounds being available in the natural environment is unlikely.
Measurement
The different biomolecules that make up a plant's biogenic substances – particularly those in seed exudates – can be identified by using different varieties of chromatography in a lab environment. For metabolite profiling, gas chromatography-mass spectrometry is used to find flavonoids such as quercetin. Compounds can then be further differentiated using reversed-phase high-performance liquid chromatography-mass spectrometry.
When it comes to measuring biogenic substances in a natural environment such as a body of water, a hydroecological CNPSi model can be used to calculate the spatial transport of biogenic substances, in both the horizontal and vertical dimensions. This model takes into account the water exchange and flow rate, and yields the values of biogenic substance rates for any area or layer of the water for any month. There are two main evaluation methods involved: measuring per unit water volume (mg/m3 year) and measuring substances per entire water volume of layer (t of element/year). The former is mostly used to observe biogenic substance dynamics and individual pathways for flux and transformations, and is useful when comparing individual regions of the strait or waterway. The second method is used for monthly substance fluxes and must take into account that there are monthly variations in the water volume in the layers.
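The relationship between the two reporting conventions is a simple unit conversion once the water volume of a layer is known. The sketch below uses entirely hypothetical numbers (both the rate and the layer volume are invented for illustration) just to show the arithmetic; in the model itself the layer volumes vary month to month, so the conversion would be applied per month.

# Convert a biogenic-substance rate per unit volume (mg / m^3 per year)
# into a total for a whole water layer (tonnes of element per year).
# Both input values below are hypothetical, for illustration only.
def layer_total_tonnes_per_year(rate_mg_per_m3_year, layer_volume_m3):
    return rate_mg_per_m3_year * layer_volume_m3 / 1e9   # 1 tonne = 1e9 mg

rate = 25.0        # mg of N per m^3 per year (hypothetical)
volume = 4.0e11    # m^3 of water in the layer (hypothetical)
print(layer_total_tonnes_per_year(rate, volume))   # 10000.0 tonnes of N per year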
In the study of geochemistry, biogenic substances can be isolated from fossils and sediments through a process of scraping and crushing the target rock sample, then washing with 40% hydrofluoric acid, water, and benzene/methanol in the ratio 3:1. Following this, the rock pieces are ground and centrifuged to produce a residue. Chemical compounds are then derived through various chromatography and mass spectrometry separations. However, extraction should be accompanied by rigorous precautions to ensure there is no amino acid contaminants from fingerprints, or silicone contaminants from other analytical treatment methods.
Applications
Anti-fouling paints
Metabolites produced by marine algae have been found to have many antimicrobial properties. This is because they are produced by the marine organisms as chemical deterrents and as such contain bioactive compounds. The principal classes of marine algae that produce these types of secondary metabolites are Cyanophyceae, Chlorophyceae and Rhodophyceae. Observed biogenic products include polyketides, amides, alkaloids, fatty acids, indoles and lipopeptides. For example, over 10% of compounds isolated from Lyngbya majuscula, which is one of the most abundant cyanobacteria, have antifungal and antimicrobial properties. Additionally, a study by Ren et al. (2002) tested halogenated furanones produced by Delisea pulchra from the Rhodophyceae class against the growth of Bacillus subtilis. When applied at a 40 μg/mL concentration, the furanone inhibited the formation of a biofilm by the bacteria and reduced the biofilm's thickness by 25% and the number of live cells by 63%.
These characteristics then have the potential to be utilised in man-made materials, such as making anti-fouling paints without the environment-damaging chemicals. Environmentally safe alternatives are needed to TBT (tin-based antifouling agent) which releases toxic compounds into water and the environment and has been banned in several countries. A class of biogenic compounds that has had a sizeable effect against the bacteria and microalgae that cause fouling are acetylene sesquiterpenoid esters produced by Caulerpa prolifera (from the Chlorophyceae class), which Smyrniotopoulos et al. (2003) observed inhibiting bacterial growth with up to 83% of the efficacy of TBT oxide.
Current research also aims to produce these biogenic substances on a commercial level using metabolic engineering techniques. By pairing these techniques with biochemical engineering design, algae and their biogenic substances can be produced on a large scale using photobioreactors. Different system types can be used to yield different biogenic products.
Paleochemotaxonomy
In the field of paleochemotaxonomy the presence of biogenic substances in geological sediments is useful for comparing old and modern biological samples and species. These biological markers can be used to verify the biological origin of fossils and serve as paleo-ecological markers. For example, the presence of pristane indicates that the petroleum or sediment is of marine origin, while biogenic material of non-marine origin tends to be in the form of polycyclic compounds or phytane. The biological markers also provide valuable information about the degradation reactions of biological material in geological environments. Comparing the organic material between geologically old and recent rocks shows the conservation of different biochemical processes.
Metallic nanoparticle production
Another application of biogenic substances is in the synthesis of metallic nanoparticles. The current chemical and physical production methods for nanoparticles used are costly and produce toxic waste and pollutants in the environment. Additionally, the nanoparticles that are produced can be unstable and unfit for use in the body. Using plant-derived biogenic substances aims to create an environmentally-friendly and cost-effective production method. The biogenic phytochemicals used for these reduction reactions can be derived from plants in numerous ways, including a boiled leaf broth, biomass powder, whole plant immersion in solution, or fruit and vegetable juice extracts. C. annuum juices have been shown to produce Ag nanoparticles at room temperature when treated with silver ions and additionally deliver essential vitamins and amino acids when consumed, making them a potential nanomaterials agent. Another procedure is through the use of a different biogenic substance: the exudate of germinating seeds. When seeds are soaked, they passively release phytochemicals into the surrounding water, which after reaching equilibrium can be mixed with metal ions to synthesise metallic nanoparticles. M. sativa exudate in particular has had success in effectively producing Ag metallic particles, while L. culinaris is an effective reactant for manufacturing Au nanoparticles. This process can also be further adjusted by manipulating factors such as pH, temperature, exudate dilution and plant origin to produce different shapes of nanoparticles, including triangles, spheres, rods, and spirals. These biogenic metallic nanoparticles then have applications as catalysts, glass window coatings to insulate heat, in biomedicine, and in biosensor devices.
Examples
Coal and oil are possible examples of constituents which may have undergone changes over geologic time periods.
Chalk and limestone are examples of secretions (marine animal shells) which are of geologic age.
Grass and wood are biogenic constituents of contemporary origin.
Pearls, silk and ambergris are examples of secretions of contemporary origin.
Biogenic neurotransmitters.
Table of isolated biogenic compounds
Abiogenic (opposite)
An abiogenic substance or process does not result from the present or past activity of living organisms. Abiogenic products may, e.g., be minerals, other inorganic compounds, as well as simple organic compounds (e.g. extraterrestrial methane, see also abiogenesis).
See also
Biogenic minerals
Natural product
Microalgae
Phytochemical
References
Biosphere
Geological processes
Natural materials
Organic compounds
Phycology
Paleobiology | Biogenic substance | [
"Physics",
"Chemistry",
"Biology"
] | 2,900 | [
"Algae",
"Natural materials",
"Phycology",
"Organic compounds",
"Materials",
"Paleobiology",
"Matter"
] |
563,245 | https://en.wikipedia.org/wiki/Biogenic%20amine | A biogenic amine is a biogenic substance with one or more amine groups. They are basic nitrogenous compounds formed mainly by decarboxylation of amino acids or by amination and transamination of aldehydes and ketones. Biogenic amines are organic bases with low molecular weight and are synthesized by microbial, vegetable and animal metabolisms. In food and beverages they are formed by the enzymes of raw material or are generated by microbial decarboxylation of amino acids.
List of notable biogenic amines
Monoamines
Some prominent examples of biogenic monoamines include:
Monoamine neurotransmitters
Imidazoleamines
Histamine – a substance derived from the amino acid histidine that acts as a neurotransmitter mediating arousal and attention, as well as a pro-inflammatory signal released from mast cells in response to allergic reactions or tissue damage. Histamine is also an important stimulant of HCl secretion by the stomach through histamine H2 receptors.
Indolamines
Serotonin – a central nervous system neurotransmitter derived from the amino acid tryptophan involved in regulating mood, sleep, appetite, and sexuality.
The three catecholamine neurotransmitters:
Norepinephrine (noradrenaline) – a neurotransmitter involved in sleep and wakefulness, attention, and feeding behavior, as well as a stress hormone released by the adrenal glands that regulates the sympathetic nervous system.
Epinephrine (adrenaline) – an adrenal stress hormone, as well as a neurotransmitter present at lower levels in the brain.
Dopamine – a neurotransmitter involved in motivation, reward, addiction, behavioral reinforcement, and coordination of bodily movement.
Trace amines (endogenous amines that activate the human TAAR1 receptor)
Tryptamines
N-Methyltryptamine
N,N-Dimethyltryptamine
Other biogenic monoamines
Trimethylamine
Trimethylamine N-oxide
Indoleamines
Melatonin
6-Hydroxymelatonin
N-Acetylserotonin
Polyamines
Examples of notable biogenic polyamines include:
Agmatine
Cadaverine
Putrescine
Spermine
Spermidine
Physiological importance
There is a distinction between endogenous and exogenous biogenic amines. Endogenous amines are produced in many different tissues (for example: adrenaline in adrenal medulla or histamine in mast cells and liver). Serotonin, an endogenous amine, is a neurotransmitter derived from the amino acid tryptophan. Serotonin is involved in regulating mood, sleep, appetite, and sexuality. The amines are transmitted locally or via the blood system. The exogenous amines are directly absorbed from food in the intestine. Alcohol can increase the absorption rate. Monoamine oxidase (MAO) breaks down biogenic amines and prevents excessive resorption. MAO inhibitors (MAOIs) are also used as medications for the treatment of depression to prevent MAO from breaking down amines important for positive mood.
Importance in food
Biogenic amines can be found in all foods containing proteins or free amino acids and are found in a wide range of food products including fish products, meat products, dairy products, wine, beer, vegetables, fruits, nuts and chocolate. In non-fermented foods the presence of biogenic amines is mostly undesired and can be used as indication for microbial spoilage. In fermented foods, one can expect the presence of many kinds of microorganisms, some of them being capable of producing biogenic amines.
Some lactic acid bacteria isolated from commercial bottled yoghurt have been shown to produce biogenic amines.
They play an important role as a source of nitrogen and a precursor for the synthesis of hormones, alkaloids, nucleic acids, proteins, amines and food aroma components. However, food containing high amounts of biogenic amines may have toxicological effects.
Determination of biogenic amines in wines
Biogenic amines are naturally present in grapes or can arise during the vinification and aging processes, essentially due to microbial activity. When present in wines in high amounts, biogenic amines may cause not only organoleptic defects but also adverse effects in sensitive individuals, mainly due to the toxicity of histamine, tyramine and putrescine. Even though there are no legal limits for the concentration of biogenic amines in wines, some European countries recommend maximum limits for histamine only. For these reasons, biogenic amines in wines have been widely studied. The determination of amines in wines is commonly achieved by liquid chromatography, using derivatization reagents to promote their separation and detection. As an alternative, other promising methodologies have been developed using capillary electrophoresis or biosensors, offering lower costs and faster results without the need for a derivatization step. It remains a challenge to develop faster and less expensive techniques and methodologies that can be applied in the wine industry.
See also
Monoamine neurotransmitter
Trace amine
References
External links
The Biogenic Amines – Neuroscience 2nd edition, Dale Purves et al. | Biogenic amine | [
"Chemistry"
] | 1,117 | [
"Biomolecules by chemical classification",
"Biogenic amines"
] |
563,299 | https://en.wikipedia.org/wiki/Human%20behavior | Human behavior is the potential and expressed capacity (mentally, physically, and socially) of human individuals or groups to respond to internal and external stimuli throughout their life. Behavior is driven by genetic and environmental factors that affect an individual. Behavior is also driven, in part, by thoughts and feelings, which provide insight into individual psyche, revealing such things as attitudes and values. Human behavior is shaped by psychological traits, as personality types vary from person to person, producing different actions and behavior.
Social behavior accounts for actions directed at others. It is concerned with the considerable influence of social interaction and culture, as well as ethics, interpersonal relationships, politics, and conflict. Some behaviors are common while others are unusual. The acceptability of behavior depends upon social norms and is regulated by various means of social control. Social norms also condition behavior, whereby humans are pressured into following certain rules and displaying certain behaviors that are deemed acceptable or unacceptable depending on the given society or culture.
Cognitive behavior accounts for actions of obtaining and using knowledge. It is concerned with how information is learned and passed on, as well as creative application of knowledge and personal beliefs such as religion. Physiological behavior accounts for actions to maintain the body. It is concerned with basic bodily functions as well as measures taken to maintain health. Economic behavior accounts for actions regarding the development, organization, and use of materials as well as other forms of work. Ecological behavior accounts for actions involving the ecosystem. It is concerned with how humans interact with other organisms and how the environment shapes human behavior.
Study
Human behavior is studied by the social sciences, which include psychology, sociology, ethology, and their various branches and schools of thought. There are many different facets of human behavior, and no one definition or field study encompasses it in its entirety. The nature versus nurture debate is one of the fundamental divisions in the study of human behavior; this debate considers whether behavior is predominantly affected by genetic or environmental factors. The study of human behavior sometimes receives public attention due to its intersection with cultural issues, including crime, sexuality, and social inequality.
Some natural sciences also place emphasis on human behavior. Neurology and evolutionary biology, study how behavior is controlled by the nervous system and how the human mind evolved, respectively. In other fields, human behavior may be a secondary subject of study when considering how it affects another subject. Outside of formal scientific inquiry, human behavior and the human condition is also a major focus of philosophy and literature. Philosophy of mind considers aspects such as free will, the mind–body problem, and malleability of human behavior.
Human behavior may be evaluated through questionnaires, interviews, and experimental methods. Animal testing may also be used to test behaviors that can then be compared to human behavior. Twin studies are a common method by which human behavior is studied. Twins with identical genomes can be compared to isolate genetic and environmental factors in behavior. Lifestyle, susceptibility to disease, and unhealthy behaviors have been identified to have both genetic and environmental indicators through twin studies.
Social behavior
Human social behavior is the behavior that considers other humans, including communication and cooperation. It is highly complex and structured, based on advanced theory of mind that allows humans to attribute thoughts and actions to one another. Through social behavior, humans have developed society and culture distinct from other animals. Human social behavior is governed by a combination of biological factors that affect all humans and cultural factors that change depending on upbringing and societal norms. Human communication is based heavily on language, typically through speech or writing. Nonverbal communication and paralanguage can modify the meaning of communications by demonstrating ideas and intent through physical and vocal behaviors.
Social norms
Human behavior in a society is governed by social norms. Social norms are unwritten expectations that members of society have for one another. These norms are ingrained in the particular culture that they emerge from, and humans often follow them unconsciously or without deliberation. These norms affect every aspect of life in human society, including decorum, social responsibility, property rights, contractual agreement, morality, and justice. Many norms facilitate coordination between members of society and prove mutually beneficial, such as norms regarding communication and agreements. Norms are enforced by social pressure, and individuals that violate social norms risk social exclusion.
Systems of ethics are used to guide human behavior to determine what is moral. Humans are distinct from other animals in the use of ethical systems to determine behavior. Ethical behavior is human behavior that takes into consideration how actions will affect others and whether behaviors will be optimal for others. What constitutes ethical behavior is determined by the individual value judgments of the person and the collective social norms regarding right and wrong. Value judgments are intrinsic to people of all cultures, though the specific systems used to evaluate them may vary. These systems may be derived from divine law, natural law, civil authority, reason, or a combination of these and other principles. Altruism is an associated behavior in which humans consider the welfare of others equally or preferentially to their own. While other animals engage in biological altruism, ethical altruism is unique to humans.
Deviance is behavior that violates social norms. As social norms vary between individuals and cultures, the nature and severity of a deviant act is subjective. What is considered deviant by a society may also change over time as new social norms are developed. Deviance is punished by other individuals through social stigma, censure, or violence. Many deviant actions are recognized as crimes and punished through a system of criminal justice. Deviant actions may be punished to prevent harm to others, to maintain a particular worldview and way of life, or to enforce principles of morality and decency. Cultures also attribute positive or negative value to certain physical traits, causing individuals that do not have desirable traits to be seen as deviant.
Interpersonal relationships
Interpersonal relationships can be evaluated by the specific choices and emotions between two individuals, or they can be evaluated by the broader societal context of how such a relationship is expected to function. Relationships are developed through communication, which creates intimacy, expresses emotions, and develops identity. An individual's interpersonal relationships form a social group in which individuals all communicate and socialize with one another, and these social groups are connected by additional relationships. Human social behavior is affected not only by individual relationships, but also by how behaviors in one relationship may affect others. Individuals that actively seek out social interactions are extraverts, and those that do not are introverts.
Romantic love is a significant interpersonal attraction toward another. Its nature varies by culture, but it is often contingent on gender, occurring in conjunction with sexual attraction and being either heterosexual or homosexual. It takes different forms and is associated with many individual emotions. Many cultures place a higher emphasis on romantic love than other forms of interpersonal attraction. Marriage is a union between two people, though whether it is associated with romantic love is dependent on the culture. Individuals that are closely related by consanguinity form a family. There are many variations on family structures that may include parents and children as well as stepchildren or extended relatives. Family units with children emphasize parenting, in which parents engage in a high level of parental investment to protect and instruct children as they develop over a period of time longer than that of most other mammals.
Politics and conflict
When humans make decisions as a group, they engage in politics. Humans have evolved to engage in behaviors of self-interest, but this also includes behaviors that facilitate cooperation rather than conflict in collective settings. Individuals will often form in-group and out-group perceptions, through which individuals cooperate with the in-group and compete with the out-group. This causes behaviors such as unconsciously conforming, passively obeying authority, taking pleasure in the misfortune of opponents, initiating hostility toward out-group members, artificially creating out-groups when none exist, and punishing those that do not comply with the standards of the in-group. These behaviors lead to the creation of political systems that enforce in-group standards and norms.
When humans oppose one another, it creates conflict. It may occur when the involved parties have a disagreement of opinion, when one party obstructs the goals of another, or when parties experience negative emotions such as anger toward one another. Conflicts purely of disagreement are often resolved through communication or negotiation, but incorporation of emotional or obstructive aspects can escalate conflict. Interpersonal conflict is that between specific individuals or groups of individuals. Social conflict is that between different social groups or demographics. This form of conflict often takes place when groups in society are marginalized, do not have the resources they desire, wish to instigate social change, or wish to resist social change. Significant social conflict can cause civil disorder. International conflict is that between nations or governments. It may be solved through diplomacy or war.
Cognitive behavior
Human cognition is distinct from that of other animals. This is derived from biological traits of human cognition, but also from shared knowledge and development passed down culturally. Humans are able to learn from one another due to advanced theory of mind that allows knowledge to be obtained through education. The use of language allows humans to directly pass knowledge to one another. The human brain has neuroplasticity, allowing it to modify its features in response to new experiences. This facilitates learning in humans and leads to behaviors of practice, allowing the development of new skills in individual humans. Behavior carried out over time can be ingrained as a habit, where humans will continue to regularly engage in the behavior without consciously deciding to do so.
Humans engage in reason to make inferences with a limited amount of information. Most human reasoning is done automatically without conscious effort on the part of the individual. Reasoning is carried out by making generalizations from past experiences and applying them to new circumstances. Learned knowledge is acquired to make more accurate inferences about the subject. Deductive reasoning infers conclusions that are true based on logical premises, while inductive reasoning infers what conclusions are likely to be true based on context.
Emotion is a cognitive experience innate to humans. Basic emotions such as joy, distress, anger, fear, surprise, and disgust are common to all cultures, though social norms regarding the expression of emotion may vary. Other emotions come from higher cognition, such as love, guilt, shame, embarrassment, pride, envy, and jealousy. These emotions develop over time rather than instantly and are more strongly influenced by cultural factors. Emotions are influenced by sensory information, such as color and music, and moods of happiness and sadness. Humans typically maintain a standard level of happiness or sadness determined by health and social relationships, though positive and negative events have short-term influences on mood. Humans often seek to improve the moods of one another through consolation, entertainment, and venting. Humans can also self-regulate mood through exercise and meditation.
Creativity is the use of previous ideas or resources to produce something original. It allows for innovation, adaptation to change, learning new information, and novel problem solving. Expression of creativity also supports quality of life. Creativity includes personal creativity, in which a person presents new ideas authentically, but it can also be expanded to social creativity, in which a community or society produces and recognizes ideas collectively. Creativity is applied in typical human life to solve problems as they occur. It also leads humans to carry out art and science. Individuals engaging in advanced creative work typically have specialized knowledge in that field, and humans draw on this knowledge to develop novel ideas. In art, creativity is used to develop new artistic works, such as visual art or music. In science, those with knowledge in a particular scientific field can use trial and error to develop theories that more accurately explain phenomena.
Religious behavior is a set of traditions that are followed based on the teachings of a religious belief system. The nature of religious behavior varies depending on the specific religious traditions. Most religious traditions involve variations of telling myths, practicing rituals, making certain things taboo, adopting symbolism, determining morality, experiencing altered states of consciousness, and believing in supernatural beings. Religious behavior is often demanding and has high time, energy, and material costs, and it conflicts with rational choice models of human behavior, though it does provide community-related benefits. Anthropologists offer competing theories as to why humans adopted religious behavior. Religious behavior is heavily influenced by social factors, and group involvement is significant in the development of an individual's religious behavior. Social structures such as religious organizations or family units allow the sharing and coordination of religious behavior. These social connections reinforce the cognitive behaviors associated with religion, encouraging orthodoxy and commitment. According to a Pew Research Center report, 54% of adults around the world state that religion is very important in their lives as of 2018.
Physiological behavior
Humans undergo many behaviors common to animals to support the processes of the human body. Humans eat food to obtain nutrition. These foods may be chosen for their nutritional value, but they may also be eaten for pleasure. Eating often follows a food preparation process to make it more enjoyable. Humans dispose of waste through urination and defecation. Excrement is often treated as taboo, particularly in developed and urban communities where sanitation is more widely available and excrement has no value as fertilizer. Humans also regularly engage in sleep, based on homeostatic and circadian factors. The circadian rhythm causes humans to require sleep in a regular pattern and is typically calibrated to the day-night cycle and sleep-wake habits. Homeostasis is also maintained, causing longer sleep after periods of sleep deprivation. The human sleep cycle takes place over 90 minutes, and it repeats 3–5 times during normal sleep.
There are also unique behaviors that humans undergo to maintain physical health. Humans have developed medicine to prevent and treat illnesses. In industrialized nations, eating habits that favor better nutrition, hygienic behaviors that promote sanitation, medical treatment to eradicate diseases, and the use of birth control significantly improve human health. Humans can also engage in exercise beyond that required for survival to maintain health. Humans engage in hygiene to limit exposure to dirt and pathogens. Some of these behaviors are adaptive while others are learned. Basic behaviors of disgust evolved as an adaptation to prevent contact with sources of pathogens, resulting in a biological aversion to feces, body fluids, rotten food, and animals that are commonly disease vectors. Personal grooming, disposal of human corpses, use of sewerage, and use of cleaning agents are hygienic behaviors common to most human societies.
Humans reproduce sexually, engaging in sexual intercourse for both reproduction and sexual pleasure. Human reproduction is closely associated with human sexuality and an instinctive desire to procreate, though humans are unique in that they intentionally control the number of offspring that they produce. Humans engage in a large variety of reproductive behaviors relative to other animals, with various mating structures that include forms of monogamy, polygyny, and polyandry. How humans engage in mating behavior is heavily influenced by cultural norms and customs. Unlike most mammals, human women ovulate spontaneously rather than seasonally, with a menstrual cycle that typically lasts 25–35 days.
Humans are bipedal and move by walking. Human walking corresponds to the bipedal gait cycle, which involves alternating heel contact and toe off with the ground and slight elevation and rotation of the pelvis. Balance while walking is learned during the first 7–9 years of life, and individual humans develop unique gaits while learning to displace weight, adjust center of mass, and coordinate neural control with movement. Humans can achieve higher speed by running. The endurance running hypothesis proposes that humans can outpace most other animals over long distances through running, though human running causes a higher rate of energy exertion. The human body self-regulates through perspiration during periods of exertion, allowing humans more endurance than other animals. The human hand is prehensile and capable of grasping objects and applying force with control over the hand's dexterity and grip strength. This allows the use of complex tools by humans.
Economic behavior
Humans engage in predictable behaviors when considering economic decisions, and these behaviors may or may not be rational. Humans make basic decisions through cost–benefit analysis and the acceptable rate of return at the minimum risk. Human economic decision making is often reference dependent, in which options are weighed in reference to the status quo rather than absolute gains and losses. Humans are also loss averse, fearing loss rather than seeking gain. Advanced economic behavior developed in humans after the Neolithic Revolution and the development of agriculture. These developments led to a sustainable supply of resources that allowed specialization in more complex societies.
Work
The nature of human work is defined by the complexity of society. The simplest societies are tribes that work primarily for sustenance as hunter-gatherers. In this sense, work is not a distinct activity but a constant that makes up all parts of life, as all members of the society must work consistently to stay alive.
More advanced societies developed after the Neolithic Revolution, emphasizing work in agricultural and pastoral settings. In these societies, production is increased, ending the need for constant work and allowing some individuals to specialize and work in areas outside of food-production. This also created non-laborious work, as increasing occupational complexity required some individuals to specialize in technical knowledge and administration. Laborious work in these societies has variously been carried out by slaves, serfs, peasants, and guild craftsmen.
The nature of work changed significantly during the Industrial Revolution in which the factory system was developed for use by industrializing nations. In addition to further increasing general quality of life, this development changed the dynamic of work. Under the factory system, workers increasingly collaborate with others, employers serve as authority figures during work hours, and forced labor is largely eradicated. Further changes occur in post-industrial societies where technological advance makes industries obsolete, replacing them with mass production and service industries.
Humans approach work differently based on both physical and personal attributes, and some work with more effectiveness and commitment than others. Some find work to contribute to personal fulfillment, while others work only out of necessity. Work can also serve as an identity, with individuals identifying themselves based on their occupation. Work motivation is complex, both contributing to and subtracting from various human needs. The primary motivation for work is for material gain, which takes the form of money in modern societies. It may also serve to create self-esteem and personal worth, provide activity, gain respect, and express creativity. Modern work is typically categorized as laborious or blue-collar work and non-laborious or white-collar work.
Leisure
Leisure is activity or lack of activity that takes place outside of work. It provides relaxation, entertainment, and improved quality of life for individuals. Engaging in leisure can be beneficial for physical and mental health. It may be used to seek temporary relief from psychological stress, to produce positive emotions, or to facilitate social interaction. However, leisure can also facilitate health risks and negative emotions caused by boredom, substance abuse, or high-risk behavior.
Leisure may be defined as serious or casual. Serious leisure behaviors involve non-professional pursuit of arts and sciences, the development of hobbies, or career volunteering in an area of expertise. Casual leisure behaviors provide short-term gratification, but they do not provide long-term gratification or personal identity. These include play, relaxation, casual social interaction, volunteering, passive entertainment, active entertainment, and sensory stimulation. Passive entertainment is typically derived from mass media, which may include written works or digital media. Active entertainment involves games in which individuals participate. Sensory stimulation is immediate gratification from behaviors such as eating or sexual intercourse.
Consumption
Humans operate as consumers that obtain and use goods. All production is ultimately designed for consumption, and consumers adapt their behavior based on the availability of production. Mass consumption began during the Industrial Revolution, caused by the development of new technologies that allowed for increased production. Many factors affect a consumer's decision to purchase goods through trade. They may consider the nature of the product, its associated cost, the convenience of purchase, and the nature of advertising around the product. Cultural factors may influence this decision, as different cultures value different things, and subcultures may have different priorities when it comes to purchasing decisions. Social class, including wealth, education, and occupation may affect one's purchasing behavior. A consumer's interpersonal relationships and reference groups may also influence purchasing behavior.
Ecological behavior
Like all living things, humans live in ecosystems and interact with other organisms. Human behavior is affected by the environment in which a human lives, and environments are affected by human habitation. Humans have also developed man-made ecosystems such as urban areas and agricultural land. Geography and landscape ecology determine how humans are distributed within an ecosystem, both naturally and through planned urban morphology.
Humans exercise control over the animals that live within their environment. Domesticated animals are trained and cared for by humans. Humans can develop social and emotional bonds with animals in their care. Pets are kept for companionship within human homes, including dogs and cats that have been bred for domestication over many centuries. Livestock animals, such as cattle, sheep, goats, and poultry, are kept on agricultural land to produce animal products. Domesticated animals are also kept in laboratories for animal testing. Non-domesticated animals are sometimes kept in nature reserves and zoos for tourism and conservation.
Causes and factors
Human behavior is influenced by biological and cultural elements. The structure and agency debate considers whether human behavior is predominantly led by individual human impulses or by external structural forces. Behavioral genetics considers how human behavior is affected by inherited traits. Though genes do not guarantee particular behaviors, certain traits can be inherited that make individuals more likely to engage in certain behaviors or express certain personalities. An individual's environment can also affect behavior, often in conjunction with genetic factors. An individual's personality and attitudes affect how behaviors are expressed, and are themselves shaped jointly by genetic and environmental factors.
Age
Infants
Infants are limited in their ability to interpret their surroundings shortly after birth. Object permanence and understanding of motion typically develop within the first six months of an infant's life, though the specific cognitive processes are not understood. The ability to mentally categorize different concepts and objects that they perceive also develops within the first year. Infants are quickly able to discern their body from their surroundings and often take interest in their own limbs or actions they cause by two months of age.
Infants practice imitation of other individuals to engage socially and learn new behaviors. In young infants, this involves imitating facial expressions, and imitation of tool use takes place within the first year. Communication develops over the first year, and infants begin using gestures to communicate intention around nine to ten months of age. Verbal communication develops more gradually, taking form during the second year of age.
Children
Children develop fine motor skills shortly after infancy, in the range of three to six years of age, allowing them to engage in behaviors that use the hands and eye–hand coordination and to perform basic activities of self-sufficiency. Children begin expressing more complex emotions in the three- to six-year-old range, including humor, empathy, and altruism, as well as engaging in creativity and inquiry. Aggressive behaviors also become varied at this age as children engage in increased physical aggression before learning to favor diplomacy over aggression. Children at this age can express themselves using language with basic grammar.
As children grow older, they develop emotional intelligence. Young children engage in basic social behaviors with peers, typically forming friendships centered on play with individuals of the same age and gender. Behaviors of young children are centered around play, which allows them to practice physical, cognitive, and social behaviors. Basic self-concept first develops as children grow, particularly centered around traits such as gender and ethnicity, and behavior is heavily affected by peers for the first time.
Adolescents
Adolescents undergo changes in behavior caused by puberty and the associated changes in hormone production. Production of testosterone increases sensation seeking and sensitivity to rewards in adolescents as well as aggression and risk-taking in adolescent boys. Production of estradiol causes similar risk-taking behavior among adolescent girls. The new hormones cause changes in emotional processing that allow for close friendships, stronger motivations and intentions, and adolescent sexuality.
Adolescents undergo social changes on a large scale, developing a full self-concept and making autonomous decisions independently of adults. They typically become more aware of social norms and social cues than children, causing an increase in self-consciousness and adolescent egocentrism that guides behavior in social settings throughout adolescence.
Culture and environment
Human brains, as with those of all mammals, are neuroplastic. This means that the structure of the brain changes over time as neural pathways are altered in response to the environment. Many behaviors are learned through interaction with others during early development of the brain. Human behavior is distinct from the behavior of other animals in that it is heavily influenced by culture and language. Social learning allows humans to develop new behaviors by following the example of others. Culture is also the guiding influence that defines social norms.
Physiology
Neurotransmitters, hormones, and metabolism are all recognized as biological factors in human behavior.
Physical disabilities can prevent individuals from engaging in typical human behavior or necessitate alternative behaviors. Accommodations and accessibility are often made available for individuals with physical disabilities in developed nations, including health care, assistive technology, and vocational services. Severe disabilities are associated with increased leisure time but also with a lower satisfaction in the quality of leisure time. Productivity and health both commonly undergo long term decline following the onset of a severe disability. Mental disabilities are those that directly affect cognitive and social behavior. Common mental disorders include mood disorders, anxiety disorders, personality disorders, and substance dependence.
See also
Behavioral modernity
Behaviorism
Cultural ecology
Human behavioral ecology
References
Bibliography
Further reading
Ardrey, Robert. 1970. The Social Contract: A Personal Inquiry into the Evolutionary Sources of Order and Disorder. Atheneum.
Tissot, S. A. D. (1768), An essay on diseases incidental to literary and sedentary persons.
External links
Culture
Main topic articles | Human behavior | [
"Biology"
] | 5,281 | [
"Behavior",
"Human behavior"
] |
563,334 | https://en.wikipedia.org/wiki/DISCiPLE | The DISCiPLE is a floppy disk interface for the ZX Spectrum home computer. Designed by Miles Gordon Technology, it was marketed by Rockfort Products and launched in 1986.
Like Sinclair's own ZX Interface 1, the DISCiPLE was a wedge-shaped unit fitting underneath the Spectrum. It was designed as a super-interface, providing all the facilities a Spectrum owner could need. In addition to the floppy-disk interface, a parallel printer port and a "magic button" (see Non-maskable interrupt), it also offered twin joystick ports, Sinclair ZX Net-compatible network ports, and an inhibit button for disabling the device.
At the rear of the unit was a pass-through port for connecting further devices, although the complexity of the DISCiPLE meant that many would not work, or only if the DISCiPLE was "turned off" using the inhibit button.
The DISCiPLE was a considerable success but its sophistication (the device included 8kB of ROM) meant that it was expensive and the plastic casing, located beneath the computer itself, was sometimes prone to overheating. These factors led to the development of MGT's later +D interface.
The DISCiPLE's DOS was named GDOS. MGT's later DOSs (G+DOS for the +D, and SAM DOS for the SAM Coupé) were backwards-compatible with GDOS. In later years a complete new system called UNI-DOS was developed by SD Software for the DISCiPLE and +D interfaces. In October 1993 "The Complete DISCiPLE Disassembly" was published in book form, documenting the "GDOS system 3d" version.
The popularity of the DISCiPLE led to the formation of a user group and magazine, INDUG, which later became Format Publications. Usergroups like INDUG/Format in the UK or DISCiPLE-Nieuwsbrief in the Netherlands produced enhancements such as extended printer support.
See also
Beta Disk Interface
References
Microcomputers
Home computers
ZX Spectrum
Computer storage devices
Computer-related introductions in 1986 | DISCiPLE | [
"Technology"
] | 414 | [
"Computer storage devices",
"Computing stubs",
"Recording devices",
"Computer hardware stubs"
] |
563,367 | https://en.wikipedia.org/wiki/John%20Pope%20%28general%29 | John Pope (March 16, 1822 – September 23, 1892) was a career United States Army officer and Union general in the American Civil War. He had a brief stint in the Western Theater, but he is best known for his defeat at the Second Battle of Bull Run (Second Manassas) in the East.
Pope was a graduate of the United States Military Academy in 1842. He served in the Mexican–American War and had numerous assignments as a topographical engineer and surveyor in Florida, New Mexico, and Minnesota. He spent much of the last decade before the Civil War surveying possible southern routes for the proposed first transcontinental railroad. He was an early appointee as a Union brigadier general of volunteers and served initially under Maj. Gen. John C. Frémont. He achieved initial success against Brig. Gen. Sterling Price in Missouri, then led a successful campaign that captured Island No. 10 on the Mississippi River. This inspired the Lincoln administration to bring him to the Eastern Theater to lead the newly formed Army of Virginia.
He initially distanced himself from many of his officers and men by publicly denigrating their record in comparison to his Western command. He launched an offensive against the Confederate army of General Robert E. Lee, in which he fell prey to a strategic turning movement into his rear areas by Maj. Gen. Stonewall Jackson. At Second Bull Run, he concentrated his attention on attacking Jackson while the other Confederate corps led by Maj. Gen. James Longstreet attacked his flank and routed his army.
Following Manassas, Pope was banished far from the Eastern Theater to the Department of the Northwest in Minnesota, where he commanded U.S. Forces in the Dakota War of 1862. He was appointed to command the Department of the Missouri in 1865 and was a prominent and activist commander during Reconstruction in Atlanta. For the rest of his military career, he fought in the Indian Wars, particularly against the Apache and Sioux.
Early life
Pope was born in Louisville, Kentucky, the son of Nathaniel Pope, a prominent Federal judge in early Illinois Territory and a friend of lawyer Abraham Lincoln. He was the brother-in-law of Manning Force, and a distant cousin married the sister of Mary Todd Lincoln. He graduated from the United States Military Academy, 17th in a class of 56, in 1842, and was commissioned a brevet second lieutenant in the Corps of Topographical Engineers.
He served in Florida and then helped survey the northeastern border between the United States and Canada. He fought under Zachary Taylor in the Battle of Monterrey and Battle of Buena Vista during the Mexican–American War, for which he was appointed a brevet first lieutenant and captain, respectively. After the war Pope worked as a surveyor in Minnesota. In 1850 he demonstrated the navigability of the Red River. He served as the chief engineer of the Department of New Mexico from 1851 to 1853 and spent the remainder of the years preceding the Civil War surveying a route for the Pacific Railroad.
Civil War
Pope was serving on lighthouse duty when Abraham Lincoln was elected and he was one of four officers selected to escort the president-elect to Washington, D.C. He offered to serve Lincoln as an aide, but on June 14, 1861, he was appointed brigadier general of volunteers (date of rank effective May 17, 1861) and was ordered to Illinois to recruit volunteers.
In the Department of the West under Maj. Gen. John C. Frémont, Pope assumed command of the District of North and Central Missouri in July, with operational control along a portion of the Mississippi River. He had an uncomfortable relationship with Frémont and politicked behind the scenes to get him removed from command. Frémont was convinced that Pope had treacherous intentions toward him, demonstrated by his lack of action in following Frémont's offensive plans in Missouri. Historian Allan Nevins wrote, "Actually, incompetence and timidity offer a better explanation of Pope than treachery, though he certainly showed an insubordinate spirit."
Pope eventually forced the Confederates under Sterling Price to retreat southward, taking 1,200 prisoners in a minor action at Blackwater, Missouri, on December 18. Pope, who established a reputation as a braggart early in the war, was able to generate significant press interest in his minor victory, which brought him to the attention of Frémont's replacement, Maj. Gen. Henry W. Halleck.
Halleck appointed Pope to command the Army of the Mississippi (and the District of the Mississippi, Department of the Missouri) on February 23, 1862. Given 25,000 men, he was ordered to clear Confederate obstacles on the Mississippi River. He made a surprise march on New Madrid, Missouri, and captured it on March 14. He then orchestrated a campaign to capture Island No. 10, a strongly fortified post garrisoned by 12,000 men and 58 guns. Pope's engineers cut a channel that allowed him to bypass the island. Assisted by the gunboats of Captain Andrew H. Foote, he landed his men on the opposite shore, which isolated the defenders. The island garrison surrendered on April 7, 1862, freeing Union navigation of the Mississippi as far south as Memphis.
Pope's outstanding performance on the Mississippi earned him a promotion to major general, dated as of March 21, 1862. During the Siege of Corinth, he commanded the left wing of Halleck's army, but he was soon summoned to the East by Lincoln. After the collapse of Maj. Gen. George B. McClellan's Peninsula Campaign, Pope was appointed to command the Army of Virginia, assembled from scattered forces in the Shenandoah Valley and Northern Virginia. This promotion infuriated Frémont, who resigned his commission.
Pope brought an attitude of self-assurance that was offensive to the eastern soldiers under his command. He issued an astonishing message to his new army on July 14, 1862, that included the following:
Despite this bravado, and despite receiving units from McClellan's Army of the Potomac that swelled the Army of Virginia to 70,000 men, Pope's aggressiveness exceeded his strategic capabilities, particularly since he was now facing Confederate General Robert E. Lee. Lee, sensing that Pope was indecisive, split his smaller (55,000-man) army, sending Maj. Gen. Thomas J. "Stonewall" Jackson with 24,000 men as a diversion to Cedar Mountain, where Jackson defeated Pope's subordinate, Nathaniel Banks.
As Lee advanced on Pope with the remainder of his army, Jackson swung around to the north and captured Pope's main supply base at Manassas Station. Confused and unable to locate the main Confederate force, Pope walked into a trap in the Second Battle of Bull Run. His men withstood a combined attack by Jackson and Lee on August 29, 1862, but on the following day, reluctantly obeying Pope's orders, Maj. Gen. Fitz John Porter swung to attack Jackson, exposing his (and by extension the whole Union army's) flank. Maj. Gen. James Longstreet launched a surprise flanking attack, and the Union Army was soundly defeated and forced to retreat. Pope compounded his unpopularity with the Army by blaming his defeat on disobedience by Maj. Gen. Porter, who was found guilty by court-martial and disgraced.
Brigadier General Alpheus S. Williams, who served briefly under Pope, held the general in particularly low esteem. In a letter to his daughter, he wrote:
Pope himself was relieved of command on September 12, 1862, and his army was merged into the Army of the Potomac under McClellan. He spent the remainder of the war in the Department of the Northwest in Minnesota, dealing with the Dakota War of 1862. His months campaigning in the West paid career dividends because he was assigned to command the Military Division of the Missouri on January 30, 1865, and received a brevet promotion to major general in the regular army on March 13, 1865, for his service at Island No. 10.
Postwar years
In April 1867, Pope was named governor of the Reconstruction Third Military District and made his headquarters in Atlanta, issuing orders that allowed African Americans to serve on juries, ordering Mayor James Williams to remain in office another year, postponing elections, and banning city advertising in newspapers that did not favor Reconstruction. President Andrew Johnson removed him from command December 28, 1867, replacing him with George G. Meade. Following this, Pope was appointed head of the Department of the Lakes (based in Detroit, Michigan) from January 13, 1868, to April 30, 1870.
Pope returned to the West as commander of the Department of the Missouri (the nation's second-largest geographical command) during the Grant presidency, and held that command through 1883. He served with distinction in the Apache Wars, including the Red River War relocating Southern Plains tribes to reservations in Oklahoma. General Pope made political enemies in Washington when he recommended that the reservation system would be better administered by the military than the corrupt Indian Bureau. He also engendered controversy by calling for better and more humane treatment of Native Americans, but author Walter Donald Kennedy notes that he also said "It is my purpose to utterly exterminate the Sioux" and planned to make a "final settlement with all these Indians".
Pope's reputation suffered a serious blow in 1879 when a late-convened Board of Inquiry called by President Rutherford B. Hayes and led by Maj. Gen. John Schofield (Pope's immediate predecessor in the Department of the Missouri and then head of the Department of the Pacific) concluded that Major General Fitz John Porter had been unfairly convicted of cowardice and disobedience at the Second Battle of Bull Run. The Schofield report used evidence of former Confederate commanders and concluded that Pope himself bore most of the responsibility for the Union loss. The report characterized Pope as reckless and dangerously uninformed about events during the battle, also criticized General Irvin McDowell (whom Pope detested), and credited Porter's perceived disobedience with saving the Union army from complete ruin.
Pope was promoted to major general in the Regular Army in 1882 and was assigned to command of the Military Division of the Pacific in 1883 where he served until his retirement.
Death and legacy
Pope retired as a major general in the Regular Army on March 16, 1886, and his wife, Clara Pope, died two years later. The National Tribune serialized his memoirs, publishing them between February 1887 and March 1891. General Pope died on September 23, 1892, at the Ohio Soldiers' Home near Sandusky, Ohio. He is buried beside his wife in Bellefontaine Cemetery, St. Louis, Missouri.
See also
List of American Civil War generals (Union)
The Court-martial of Fitz John Porter
Notes
References
Eicher, John H., and David J. Eicher. Civil War High Commands. Stanford, CA: Stanford University Press, 2001.
Frederiksen, John C. "John Pope." In Encyclopedia of the American Civil War: A Political, Social, and Military History, edited by David S. Heidler and Jeanne T. Heidler. New York: W. W. Norton & Company, 2000.
Hennessy, John J. Return to Bull Run: The Campaign and Battle of Second Manassas. Norman, OK: University of Oklahoma Press, 1993.
Nevins, Allan. The War for the Union. Vol. 1: The Improvised War 1861–1862. New York: Charles Scribner's Sons, 1959.
U.S. War Department. The War of the Rebellion: A Compilation of the Official Records of the Union and Confederate Armies. 128 vols. Washington, DC: U.S. Government Printing Office, 1880–1901.
Warner, Ezra J. Generals in Blue: Lives of the Union Commanders. Baton Rouge: Louisiana State University Press, 1964.
Winters, John D. The Civil War in Louisiana. Baton Rouge: Louisiana State University Press, 1963.
Further reading
Cooling, Benjamin Franklin. Counter-Thrust: From the Peninsula to the Antietam. Lincoln: University of Nebraska Press, 2007.
Cozzens, Peter. General John Pope: A Life for the Nation. Urbana: University of Illinois Press, 2000.
Ellis, Richard M. General Pope and U.S. Indian Policy. Albuquerque: University of New Mexico Press, 1970.
McPherson, James M. Battle Cry of Freedom. Volume 2. Oxford University Press, 1988.
Foote, Shelby. The Civil War: A Narrative. Vol. 1, Fort Sumter to Perryville. New York: Random House, 1958.
Pope, John, Peter Cozzens, and Robert I. Girardi. The Military Memoirs of General John Pope. Civil War America. Chapel Hill: University of North Carolina Press, 1998.
Ropes, John Codman. The Army in the Civil War. Vol. 4, The Army under Pope. New York: Charles Scribner's Sons, 1881.
Strother, David Hunter. A Virginia Yankee in the Civil War: The Diaries of David Hunter Strother. Edited by Cecil D. Elby. Chapel Hill: University of North Carolina Press, 1998. First published 1961.
External links
John Pope in Encyclopedia Virginia
John Pope (1822–1892)
John Pope at Spartacus.net
Harper's Weekly, September 13, 1862
Photograph of John Pope from the Maine Memory Network
1822 births
1892 deaths
American white supremacists
Burials at Bellefontaine Cemetery
Military personnel from Louisville, Kentucky
People from Sandusky, Ohio
People of Kentucky in the American Civil War
Union army generals
United States Army Corps of Topographical Engineers
United States Military Academy alumni | John Pope (general) | [
"Engineering"
] | 2,786 | [
"United States Army Corps of Topographical Engineers",
"Civil engineering organizations"
] |
563,386 | https://en.wikipedia.org/wiki/Select%20Society%20of%20Sanitary%20Sludge%20Shovelers | The Select Society of Sanitary Sludge Shovelers (5S) is used by water environment associations (i.e., those working with sewage and sewage treatment) to honour those who have made a particular contribution to the industry.
Pennsylvania started the High Hat Society in 1937 and used the words "Sludge Shovelers Society" in its initiation ceremony. Later, this became known as the Ted Moses Sludge Shovelers Society. The second Chapter of the Five S Society was formed in Arizona in October 1940, the idea being conceived by A.W. "Dusty" Miller and F. Carlyle Roberts, Jr. There are chapters in the United States and in Canada, as well as the United Kingdom, Australia and New Zealand.
5S chapters do not accept applications, but select potential members. Each inductee receives a badge in the form of a gold tie bar in the shape of a round-nosed shovel.
References
Sewerage
Professional associations | Select Society of Sanitary Sludge Shovelers | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 191 | [
"Sewerage",
"Environmental engineering",
"Water pollution"
] |
563,387 | https://en.wikipedia.org/wiki/Receptor%20potential | A receptor potential, also known as a generator potential, a type of graded potential, is the transmembrane potential difference produced by activation of a sensory receptor.
A receptor potential is often produced by sensory transduction. It is generally a depolarizing event resulting from inward current flow. The influx of current will often bring the membrane potential of the sensory receptor towards the threshold for triggering an action potential. Receptor potential can work to trigger an action potential either within the same neuron or on an adjacent cell. Within the same neuron, a receptor potential can cause local current to flow to a region capable of generating an action potential by opening voltage-gated ion channels. A receptor potential can also cause the release of neurotransmitters from one cell that will act on another cell, generating an action potential in the second cell. The magnitude of the receptor potential determines the frequency with which action potentials are generated and is controlled by adaptation, stimulus strength, and temporal summation of successive receptor potentials. Receptor potential relies on receptor sensitivity which can adapt slowly, resulting in a slowly decaying receptor potential or rapidly, resulting in a quickly generated but shorter-lasting receptor potential.
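The relationship described above between receptor-potential magnitude and action-potential frequency can be illustrated with a toy rate model. The Python sketch below uses a simple threshold-linear mapping; the threshold, gain, and saturation values are arbitrary assumptions for illustration, not measured properties of any real receptor.

def firing_rate(receptor_potential_mv, threshold_mv=10.0, gain_hz_per_mv=5.0, max_rate_hz=200.0):
    """Toy model: sub-threshold depolarizations produce no spikes; above
    threshold the firing rate grows linearly until it saturates."""
    drive = receptor_potential_mv - threshold_mv
    if drive <= 0:
        return 0.0
    return min(gain_hz_per_mv * drive, max_rate_hz)

for amplitude_mv in (5.0, 15.0, 30.0, 80.0):
    print(amplitude_mv, firing_rate(amplitude_mv))  # 0, 25, 100, 200 Hz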
An example of a receptor potential is in a taste bud, where taste is converted into an electrical signal sent to the brain. When stimulated, the taste bud triggers the release of neurotransmitters through exocytosis of synaptic vesicles from the presynaptic membrane. The neurotransmitter molecules diffuse across the synaptic cleft to the postsynaptic membrane of the primary sensory neuron, where they elicit an action potential.
See also
Resting potential
Action potential
References
Receptors
Electrophysiology
Graded potentials | Receptor potential | [
"Chemistry"
] | 354 | [
"Receptors",
"Signal transduction"
] |
563,439 | https://en.wikipedia.org/wiki/Controlled%20natural%20language | Controlled natural languages (CNLs) are subsets of natural languages that are obtained by restricting the grammar and vocabulary in order to reduce or eliminate ambiguity and complexity. Traditionally, controlled languages fall into two major types: those that improve readability for human readers (e.g. non-native speakers),
and those that enable reliable automatic semantic analysis of the language.
The first type of languages (often called "simplified" or "technical" languages), for example ASD Simplified Technical English, Caterpillar Technical English, and IBM's Easy English, are used in industry to increase the quality of technical documentation, and possibly to simplify the semi-automatic translation of that documentation. These languages restrict the writer through general rules such as "Keep sentences short", "Avoid the use of pronouns", "Only use dictionary-approved words", and "Use only the active voice".
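How rules of this kind can be checked mechanically is illustrated by the following minimal Python sketch, which flags sentences that are too long or that use words outside an approved list. The word list, the 20-word limit, and the function name are assumptions made up for this example and do not come from any particular controlled language.

APPROVED_WORDS = {"the", "pump", "is", "not", "working", "replace", "it", "check", "before", "use"}
MAX_WORDS = 20  # stands in for a "keep sentences short" rule

def check_sentence(sentence):
    """Return a list of rule violations for one sentence (an empty list means it passes)."""
    words = [w.strip(".,").lower() for w in sentence.split()]
    problems = []
    if len(words) > MAX_WORDS:
        problems.append(f"sentence has {len(words)} words (limit {MAX_WORDS})")
    unapproved = sorted(set(words) - APPROVED_WORDS)
    if unapproved:
        problems.append("unapproved words: " + ", ".join(unapproved))
    return problems

print(check_sentence("Replace the pump before use."))           # []
print(check_sentence("Kindly endeavour to rectify the pump."))  # flags the unapproved words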
The second type of language has a formal syntax and formal semantics, and can be mapped to an existing formal language, such as first-order logic. Such languages can thus be used as knowledge representation languages, and writing in them is supported by fully automatic consistency and redundancy checks, query answering, etc.
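How a sentence in such a language can be mapped to first-order logic is sketched below in Python for a single "Every X is a Y" pattern. The grammar and the to_fol function are toy assumptions for illustration; real systems such as Attempto Controlled English use far richer, formally specified grammars.

import re

def to_fol(sentence):
    """Translate the controlled pattern "Every <noun> is a <noun>." into a
    universally quantified first-order formula (a deliberately tiny fragment)."""
    match = re.fullmatch(r"Every (\w+) is an? (\w+)\.", sentence.strip())
    if not match:
        raise ValueError("sentence is outside the controlled fragment")
    subject, predicate = (w.capitalize() for w in match.groups())
    return f"forall x ({subject}(x) -> {predicate}(x))"

print(to_fol("Every customer is a person."))
# forall x (Customer(x) -> Person(x))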
Languages
Existing controlled natural languages include:
ASD Simplified Technical English
Attempto Controlled English
Aviation English
Basic English
ClearTalk
Common Logic Controlled English
Distributed Language Translation Esperanto
Easy Japanese
E-Prime
Français fondamental
Gellish Formal English
Interlingua-IL sive Latino sine flexione (Giuseppe Peano)
Logical English
ModeLang
Newspeak (fictional)
Processable English (PENG)
Seaspeak
Semantics of Business Vocabulary and Business Rules
Special English
Encoding
The IETF has reserved a BCP 47 variant subtag for simplified versions of languages.
See also
Constructed language
Knowledge representation and reasoning
Natural language processing
Controlled vocabulary
Controlled language in machine translation
Structured English
Word-sense disambiguation
Simple English Wikipedia
References
External links
Controlled Natural Languages
Natural language processing | Controlled natural language | [
"Technology"
] | 395 | [
"Natural language processing",
"Natural language and computing"
] |
563,456 | https://en.wikipedia.org/wiki/Methanogen | Methanogens are anaerobic archaea that produce methane as a byproduct of their energy metabolism, i.e., catabolism. Methane production, or methanogenesis, is the only biochemical pathway for ATP generation in methanogens. All known methanogens belong exclusively to the domain Archaea, although some bacteria, plants, and animal cells are also known to produce methane. However, the biochemical pathway for methane production in these organisms differs from that in methanogens and does not contribute to ATP formation. Methanogens belong to various phyla within the domain Archaea. Previous studies placed all known methanogens into the superphylum Euryarchaeota. However, recent phylogenomic data have led to their reclassification into several different phyla. Methanogens are common in various anoxic environments, such as marine and freshwater sediments, wetlands, the digestive tracts of animals, wastewater treatment plants, rice paddy soil, and landfills. While some methanogens are extremophiles, such as Methanopyrus kandleri, which grows between 84 and 110°C, or Methanonatronarchaeum thermophilum, which grows at a pH range of 8.2 to 10.2 and a concentration of 3 to 4.8 M, most of the isolates are mesophilic and grow around neutral pH.
Physical description
Methanogens are usually cocci (spherical) or rods (cylindrical) in shape, but long filaments (Methanobrevibacter filiformis, Methanospirillum hungatei) and curved forms (Methanobrevibacter curvatus, Methanobrevibacter cuticularis) also occur. There are over 150 described species of methanogens, which do not form a monophyletic group in the phylum Euryarchaeota (see Taxonomy). They are exclusively anaerobic organisms that cannot function under aerobic conditions due to the extreme oxygen sensitivity of methanogenesis enzymes and FeS clusters involved in ATP production. However, the degree of oxygen sensitivity varies, as methanogenesis has often been detected in temporarily oxygenated environments such as rice paddy soil, and various molecular mechanisms potentially involved in oxygen and reactive oxygen species (ROS) detoxification have been proposed. For instance, the recently identified species Candidatus Methanothrix paradoxum, common in wetlands and soil, can function in anoxic microsites within aerobic environments, but it is sensitive to the presence of oxygen even at trace levels and cannot usually sustain oxygen stress for a prolonged time. However, Methanosarcina barkeri from the sister family Methanosarcinaceae is exceptional in possessing a superoxide dismutase (SOD) enzyme, and may survive longer than the others in the presence of O2.
As is the case for other archaea, methanogens lack peptidoglycan, a polymer that is found in the cell walls of bacteria. Instead, some methanogens have a cell wall formed by pseudopeptidoglycan (also known as pseudomurein). Other methanogens have a paracrystalline protein array (S-layer) that fits together like a jigsaw puzzle. In some lineages there are less common types of cell envelope such as the proteinaceous sheath of Methanospirillum or the methanochondroitin of Methanosarcina aggregated cells.
Ecology
In anaerobic environments, methanogens play a vital ecological role, removing excess hydrogen and fermentation products that have been produced by other forms of anaerobic respiration. Methanogens typically thrive in environments in which all electron acceptors other than CO2 (such as oxygen, nitrate, ferric iron (Fe(III)), and sulfate) have been depleted. Such environments include wetlands and rice paddy soil, the digestive tracts of various animals (ruminants, arthropods, humans), wastewater treatment plants and landfills, deep-water oceanic sediments, and hydrothermal vents. Most of these environments are not categorized as extreme, and thus the methanogens inhabiting them are also not considered extremophiles. However, many well-studied methanogens are thermophiles such as Methanopyrus kandleri, Methanothermobacter marburgensis, Methanocaldococcus jannaschii. On the other hand, gut methanogens such as Methanobrevibacter smithii common in humans or Methanobrevibacter ruminantium omnipresent in ruminants are mesophiles.
Methanogens in extreme environments
In deep basaltic rocks near the mid-ocean ridges, methanogens can obtain their hydrogen from the serpentinization reaction of olivine as observed in the hydrothermal field of Lost City. The thermal breakdown of water and water radiolysis are other possible sources of hydrogen. Methanogens are key agents of remineralization of organic carbon in continental margin sediments and other aquatic sediments with high rates of sedimentation and high sediment organic matter. Under the correct conditions of pressure and temperature, biogenic methane can accumulate in massive deposits of methane clathrates that account for a significant fraction of organic carbon in continental margin sediments and represent a key reservoir of a potent greenhouse gas.
Methanogens have been found in several extreme environments on Earth – buried under kilometres of ice in Greenland and living in hot, dry desert soil. They are known to be the most common archaea in deep subterranean habitats. Live microbes making methane were found in a glacial ice core sample retrieved from about three kilometres under Greenland by researchers from the University of California, Berkeley. They also found a constant metabolism able to repair macromolecular damage, at temperatures of 145 to –40 °C.
Another study has also discovered methanogens in a harsh environment on Earth. Researchers studied dozens of soil and vapour samples from five different desert environments in Utah, Idaho and California in the United States, and in Canada and Chile. Of these, five soil samples and three vapour samples from the vicinity of the Mars Desert Research Station in Utah were found to have signs of viable methanogens.
Some scientists have proposed that the presence of methane in the Martian atmosphere may be indicative of native methanogens on that planet. In June 2019, NASA's Curiosity rover detected methane, a gas commonly generated by underground microbes such as methanogens, which signals the possibility of life on Mars.
Closely related to the methanogens are the anaerobic methane oxidizers, which utilize methane as a substrate in conjunction with the reduction of sulfate and nitrate. Most methanogens are autotrophic producers, but those that oxidize CH3COO− are classed as chemotrophs instead.
Methanogens in the digestive tract of animals
The digestive tract of animals is characterized by a nutrient-rich and predominantly anaerobic environment, making it an ideal habitat for many microbes, including methanogens. Despite this, methanogens and archaea, in general, were largely overlooked as part of the gut microbiota until recently. However, they play a crucial role in maintaining gut balance by utilizing end products of bacterial fermentation, such as H2, acetate, methanol, and methylamines.
Recent extensive surveys of archaea presence in the animal gut, based on 16S rRNA analysis, have provided a comprehensive view of archaea diversity and abundance. These studies revealed that only a few archaeal lineages are present, with the majority being methanogens, while non-methanogenic archaea are rare and not abundant. Taxonomic classification of archaeal diversity identified that representatives of only three phyla are present in the digestive tracts of animals: Methanobacteriota (order Methanobacteriales), Thermoplasmatota (order Methanomassiliicoccales), and Halobacteriota (orders Methanomicrobiales and Methanosarcinales). However, not all families and genera within these orders were detected in animal guts, but only a few genera, suggesting their specific adaptations to the gut environment.
Comparative genomics and molecular signatures
Comparative proteomic analysis has led to the identification of 31 signature proteins which are specific for methanogens (also known as Methanoarchaeota). Most of these proteins are related to methanogenesis, and they could serve as potential molecular markers for methanogens. Additionally, 10 proteins found in all methanogens, which are shared by Archaeoglobus, suggest that these two groups are related. In phylogenetic trees, methanogens are not monophyletic and they are generally split into three clades. Hence, the unique shared presence of large numbers of proteins by all methanogens could be due to lateral gene transfers. Additionally, more recent novel proteins associated with sulfide trafficking have been linked to methanogen archaea. More proteomic analysis is needed to further differentiate specific genera within the methanogen class and reveal novel pathways for methanogenic metabolism.
Modern DNA and RNA sequencing approaches have elucidated several genomic markers specific to particular groups of methanogens. One such study isolated nine methanogens from the genus Methanoculleus and found at least two trehalose synthase genes present in all nine genomes. Thus far, the gene has been observed only in this genus, so it can be used as a marker to identify members of Methanoculleus. As sequencing techniques progress and databases become populated with an abundance of genomic data, a greater number of strains and traits can be identified, but many genera remain understudied. For example, halophilic methanogens are potentially important microbes for carbon cycling in coastal wetland ecosystems but seem to be greatly understudied. One recent publication isolated a novel strain from the genus Methanohalophilus which resides in sulfide-rich seawater. Interestingly, several portions of this strain's genome differ from those of other isolated strains of this genus (Methanohalophilus mahii, Methanohalophilus halophilus, Methanohalophilus portucalensis, Methanohalophilus euhalbius). Some differences include a highly conserved genome, sulfur and glycogen metabolisms, and viral resistance. Genomic markers consistent with a microbe's environment have been observed in many other cases. One such study found that methane-producing archaea in hydraulic fracturing zones had genomes that varied with vertical depth. Subsurface and surface genomes varied along with the constraints found in individual depth zones, though fine-scale diversity was also found in this study. Genomic markers pointing at environmentally relevant factors are often non-exclusive. A survey of methanogenic Thermoplasmata found these organisms in human and animal intestinal tracts. This novel species was also found in other methanogenic environments such as wetland soils, though the group isolated in the wetlands tended to have a larger number of genes encoding anti-oxidation enzymes that were not present in the same group isolated from the human and animal intestinal tract. A common issue in identifying and discovering novel species of methanogens is that sometimes the genomic differences can be quite small, yet the research groups decide they are different enough to separate into individual species. One study took a group of Methanocellales and ran a comparative genomic study. The three strains were originally considered identical, but a detailed approach to genomic isolation showed differences among their previously considered identical genomes. Differences were seen in gene copy number, and there was also metabolic diversity associated with the genomic information.
Genomic signatures not only allow one to mark unique methanogens and genes relevant to environmental conditions; they have also led to a better understanding of the evolution of these archaea. Some methanogens must actively mitigate against oxic environments. Functional genes involved with the production of antioxidants have been found in methanogens, and some specific groups tend to have an enrichment of this genomic feature. Methanogens containing a genome with enriched antioxidant properties may provide evidence that this genomic addition may have occurred during the Great Oxygenation Event. In another study, three strains from the lineage Thermoplasmatales isolated from animal gastro-intestinal tracts revealed evolutionary differences. The eukaryotic-like histone gene which is present in most methanogen genomes was not present, suggesting that an ancestral branch was lost within Thermoplasmatales and related lineages. Furthermore, the group Methanomassiliicoccus has a genome which appears to have lost many common genes coding for the first several steps of methanogenesis. These genes appear to have been replaced by genes coding for a novel methylated methanogenic pathway. This pathway has been reported in several types of environments, pointing to evolution that is not environment-specific, and may point to an ancestral deviation.
Metabolism
Methane production
Methanogens are known to produce methane from substrates such as H2/CO2, acetate, formate, methanol and methylamines in a process called methanogenesis. Different methanogenic reactions are catalyzed by unique sets of enzymes and coenzymes. While reaction mechanism and energetics vary between one reaction and another, all of these reactions contribute to net positive energy production by creating ion concentration gradients that are used to drive ATP synthesis. The overall reaction for H2/CO2 methanogenesis is:
CO2 + 4 H2 -> CH4 + 2 H2O (∆G˚’ = -134 kJ/mol CH4)
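The −134 kJ/mol figure is the standard free energy change; in natural habitats, hydrogen partial pressures are typically far below standard, so the energy actually available to the cell is much smaller. A minimal Python sketch of this correction, using ∆G = ∆G˚' + RT ln Q, is shown below; the chosen partial pressures are illustrative assumptions, not measurements from any particular habitat.

import math

R = 8.314e-3       # gas constant, kJ mol^-1 K^-1
T = 298.15         # temperature, K
DELTA_G0 = -134.0  # standard free energy of CO2 + 4 H2 -> CH4 + 2 H2O, kJ per mol CH4

def delta_g(h2_atm, co2_atm, ch4_atm):
    """Free energy of hydrogenotrophic methanogenesis at non-standard gas
    pressures (atm), with Q = p(CH4) / (p(CO2) * p(H2)^4) and water activity 1."""
    q = ch4_atm / (co2_atm * h2_atm ** 4)
    return DELTA_G0 + R * T * math.log(q)

# Assumed sediment-like conditions: very low H2, some CO2 and CH4.
print(round(delta_g(h2_atm=1e-4, co2_atm=0.1, ch4_atm=0.5), 1))  # about -38.7 kJ per mol CH4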
Well-studied organisms that produce methane via H2/CO2 methanogenesis include Methanosarcina barkeri, Methanobacterium thermoautotrophicum, and Methanobacterium wolfei. These organisms are typically found in anaerobic environments.
In the earliest stage of H2/CO2 methanogenesis, CO2 binds to methanofuran (MF) and is reduced to formyl-MF. This endergonic reductive process (∆G˚’= +16 kJ/mol) is dependent on the availability of H2 and is catalyzed by the enzyme formyl-MF dehydrogenase.
CO2 + H2 + MF -> HCO-MF + H2O
The formyl constituent of formyl-MF is then transferred to the coenzyme tetrahydromethanopterin (H4MPT) and is catalyzed by a soluble enzyme known as formyltransferase. This results in the formation of formyl-H4MPT.
HCO-MF + H4MPT -> HCO-H4MPT + MF
Formyl-H4MPT is subsequently reduced to methenyl-H4MPT. Methenyl-H4MPT then undergoes a one-step hydrolysis followed by a two-step reduction to methyl-H4MPT. The two-step reversible reduction is assisted by coenzyme F420 whose hydride acceptor spontaneously oxidizes. Once oxidized, F420’s electron supply is replenished by accepting electrons from H2. This step is catalyzed by methylene H4MPT dehydrogenase.
HCO-H4MPT + H+ -> CH-H4MPT+ + H2O (Formyl-H4MPT reduction)
CH-H4MPT+ + F420H2 -> CH2=H4MPT + F420 + H+ (Methenyl-H4MPT hydrolysis)
CH2=H4MPT + H2 -> CH3-H4MPT + H+ (H4MPT reduction)
Next, the methyl group of methyl-M4MPT is transferred to coenzyme M via a methyltransferase-catalyzed reaction.
CH3-H4MPT + HS-CoM -> CH3-S-CoM + H4MPT
The final step of H2/CO2 methanogenic involves methyl-coenzyme M reductase and two coenzymes: N-7 mercaptoheptanoylthreonine phosphate (HS-HTP) and coenzyme F430. HS-HTP donates electrons to methyl-coenzyme M allowing the formation of methane and mixed disulfide of HS-CoM. F430, on the other hand, serves as a prosthetic group to the reductase. H2 donates electrons to the mixed disulfide of HS-CoM and regenerates coenzyme M.
CH3-S-CoM + HS-HTP -> CH4 + CoM-S-S-HTP (Formation of methane)
CoM-S-S-HTP + H2 -> HS-CoM + HS-HTP (Regeneration of coenzyme M)
Biotechnological application
Wastewater treatment
Methanogens are widely used in anaerobic digestors to treat wastewater as well as aqueous organic pollutants. Industries have selected methanogens for their ability to perform biomethanation during wastewater decomposition thereby rendering the process sustainable and cost-effective.
Bio-decomposition in the anaerobic digester involves a four-staged cooperative action performed by different microorganisms. The first stage is the hydrolysis of insoluble polymerized organic matter by anaerobes such as Streptococcus and Enterobacterium. In the second stage, acidogens break down dissolved organic pollutants in wastewater to fatty acids. In the third stage, acetogens convert fatty acids to acetates. In the final stage, methanogens metabolize acetates to gaseous methane. The byproduct methane leaves the aqueous layer and serves as an energy source to power wastewater-processing within the digestor, thus generating a self-sustaining mechanism.
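The methane yield of this four-stage process can be bounded with simple stoichiometry: oxidizing one mole of CH4 (16 g) requires two moles of O2 (64 g), so each gram of chemical oxygen demand (COD) that is fully converted corresponds to at most about 0.35 L of methane at standard temperature and pressure. The Python sketch below computes this theoretical ceiling; it ignores biomass growth and incomplete conversion, and the function name and example figure are assumptions for illustration, not values from any design standard.

MOLAR_VOLUME_STP = 22.414  # litres per mole of ideal gas at 0 °C, 1 atm
COD_PER_MOL_CH4 = 64.0     # grams of COD per mole of CH4 (from CH4 + 2 O2 -> CO2 + 2 H2O)

def max_methane_litres(cod_removed_g):
    """Upper bound on CH4 volume (litres at STP) obtainable from fully digested COD."""
    return cod_removed_g / COD_PER_MOL_CH4 * MOLAR_VOLUME_STP

print(round(max_methane_litres(1000.0)))  # ~350 L of CH4 per kg of COD removed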
Methanogens also effectively decrease the concentration of organic matter in wastewater run-off. For instance, agricultural wastewater, highly rich in organic material, has been a major cause of aquatic ecosystem degradation. The chemical imbalances can lead to severe ramifications such as eutrophication. Through anaerobic digestion, the purification of wastewater can prevent unexpected blooms in water systems as well as trap methanogenesis within digesters. This allocates biomethane for energy production and prevents a potent greenhouse gas, methane, from being released into the atmosphere.
The organic components of wastewater vary vastly. Chemical structures of the organic matter select for specific methanogens to perform anaerobic digestion. An example is that members of the genus Methanosaeta dominate the digestion of palm oil mill effluent (POME) and brewery waste. Modernizing wastewater treatment systems to incorporate a higher diversity of microorganisms to decrease organic content is an area of active research in the fields of microbiological and chemical engineering. Current new generations of Staged Multi-Phase Anaerobic reactors and Upflow Sludge Bed reactor systems are designed with innovative features to counter high-loading wastewater input, extreme temperature conditions, and possible inhibitory compounds.
Taxonomy
Initially, methanogens were considered to be bacteria, as it was not possible to distinguish archaea and bacteria before the introduction of molecular techniques such as DNA sequencing and PCR. Since the introduction of the domain Archaea by Carl Woese in 1977, methanogens were for a prolonged period considered a monophyletic group, later named Euryarchaeota (super)phylum. However, intensive studies of various environments have proved that there are more and more non-methanogenic lineages among methanogenic ones.
The development of genome sequencing directly from environmental samples (metagenomics) allowed the discovery of the first methanogens outside the Euryarchaeota superphylum. The first such putative methanogenic lineage was Bathyarchaeia, a class within the Thermoproteota phylum. Later, it was shown that this lineage is not methanogenic but alkane-oxidizing, utilizing the highly divergent enzyme Acr, similar to the hallmark gene of methanogenesis, methyl-CoM reductase (McrABG). The first isolate, Bathyarchaeum tardum, obtained from the sediment of a coastal lake in Russia, showed that it metabolizes aromatic compounds and proteins, as previously predicted on the basis of metagenomic studies. However, more new putative methanogens outside of Euryarchaeota have been discovered based on the presence of McrABG.
For instance, methanogens were found in the phyla Thermoproteota (orders Methanomethyliales, Korarchaeales, Methanohydrogenales, Nezhaarchaeales) and Methanobacteriota_B (order Methanofastidiosales). Additionally, some new lineages of methanogens were isolated in pure culture, which allowed the discovery of a new type of methanogenesis: H2-dependent methyl-reducing methanogenesis, which is independent of the Wood-Ljungdahl pathway. For example, in 2012, the order Methanoplasmatales from the phylum Thermoplasmatota was described as a seventh order of methanogens. Later, the order was renamed Methanomassiliicoccales after Methanomassiliicoccus luminyensis, which was isolated from the human gut.
Another new lineage in the Halobacteriota phylum, order Methanonatronarchaeales, was discovered in alkaline saline lakes in Siberia in 2017. It also employs H2-dependent methyl-reducing methanogenesis but intriguingly harbors almost the full Wood-Ljungdahl pathway. However, it is disconnected from McrABG as no MtrA-H complex was detected.
The taxonomy of methanogens reflects the evolution of these archaea, with some studies suggesting that the Last Archaeal Common Ancestor was methanogenic. If correct, this suggests that many archaeal lineages lost the ability to produce methane and switched to other types of metabolism. Currently, most of the isolated methanogens belong to one of three archaeal phyla (classification GTDB release 220): Halobacteriota, Methanobacteriota, and Thermoplasmatota. Under the International Code of Nomenclature for Prokaryotes, all three phyla belong to the same kingdom, Methanobacteriati. In total, more than 150 methanogen species are known in culture, with some represented by more than one strain.
Phylum Halobacteriota
Class Methanocellia
Order Methanocellales
Family Methanocellaceae
Genus Methanocella Sakai et al. 2008
Methanocella paludicola Sakai et al. 2008 (type species)
Methanocella arvoryzae Sakai et al. 2010
Methanocella conradii Lü and Lu 2012
Class Methanomicrobia
Order Methanomicrobiales
Family Methanocalculaceae Zhilina et al. 2014
Family Methanocorpusculaceae Zellner et al. 1989
Genus Methanocorpusculum Zellner et al. 1988
Methanocorpusculum parvum Zellner et al. 1988 (type species)
Methanocorpusculum bavaricum Zellner et al. 1989
Methanocorpusculum labreanum
Methanocorpusculum sinense Zellner et al. 1989
Family Methanomicrobiaceae Balch and Wolfe 1981
Genus Methanomicrobium Balch and Wolfe 1981
Methanomicrobium mobile (Paynter and Hungate 1968) Balch and Wolfe 1981 (type species)
Methanomicrobium antiquum Mochimaru et al. 2016
Genus Methanoculleus Maestrojuán et al. 1990
Methanoculleus bourgensis corrig. (Ollivier et al. 1986) Maestrojuán et al. 1990 (type species)
Methanoculleus chikugoensis Dianou et al. 2001
Methanoculleus horonobensis Shimizu et al. 2013
Methanoculleus hydrogenitrophicus Tian et al. 2010
Methanoculleus marisnigri
Methanoculleus palmolei Zellner et al. 1998
Methanoculleus receptaculi Cheng et al. 2008
Methanoculleus sediminis Chen et al. 2015
Methanoculleus submarinus Mikucki et al. 2003
Methanoculleus taiwanensis Weng et al. 2015
Methanoculleus thermophilus corrig. (Rivard and Smith 1982) Maestrojuán et al. 1990
Genus Methanogenium Romesser et al. 1981
Methanogenium cariaci Romesser et al. 1981 (type species)
Methanogenium frigidum
Methanogenium marinum Chong et al. 2003
Methanogenium organophilum
Genus Methanofollis Zellner et al. 1999
Methanofollis tationis (Zabel et al. 1986) Zellner et al. 1999 (type species)
Methanofollis aquaemaris Lai and Chen 2001
Methanofollis ethanolicus Imachi et al. 2009
Methanofollis fontis Chen et al. 2020
Methanofollis formosanus Wu et al. 2005
Methanofollis liminatans (Zellner et al. 1990) Zellner et al. 1999
Family Methanoregulaceae Sakai et al. 2012
Genus Methanoregula Bräuer et al. 2011
Methanoregula boonei Bräuer et al. 2011 (type species)
Methanoregula formicica Yashiro et al. 2011
Family Methanospirillaceae Boone et al. 2002
Genus Methanospirillum Ferry et al. 1974
Methanospirillum hungatei corrig. Ferry et al. 1974 (type species)
Methanospirillum lacunae Iino et al. 2010
Methanospirillum psychrodurum Zhou et al. 2014
Methanospirillum stamsii Parshina et al. 2014
Class Methanonatronarchaeia
Order Methanonatronarchaeales
Family Methanonatronarchaeaceae Sorokin et al. 2018
Genus Methanonatronarchaeum Sorokin et al. 2018
Methanonatronarchaeum thermophilum Sorokin et al. 2018 (type species)
Class Methanosarcinia
Order Methanosarcinales
Family Methanosarcinaceae
Genus Methanosarcina Kluyver and van Niel 1936
Methanosarcina barkeri Schnellen 1947 (type species)
Methanosarcina acetivorans
Methanosarcina baltica von Klein et al. 2002
Methanosarcina flavescens Kern et al. 2016
Methanosarcina horonobensis Shimizu et al. 2011
Methanosarcina lacustris Simankova et al. 2002
Methanosarcina mazei (Barker 1936) Mah and Kuhn 1984
Methanosarcina semesiae Lyimo et al. 2000
Methanosarcina siciliae (Stetter and König 1989) Ni et al. 1994
Methanosarcina soligelidi Wagner et al. 2013
Methanosarcina spelaei Ganzert et al. 2014
Methanosarcina subterranea Shimizu et al. 2015
Methanosarcina thermophila Zinder et al. 1985
Methanosarcina vacuolata Zhilina and Zavarzin 1987
Genus Methanimicrococcus corrig. Sprenger et al. 2000
Methanimicrococcus blatticola corrig. Sprenger et al. 2000
Genus Methanococcoides Sowers and Ferry 1985
Methanococcoides methylutens Sowers and Ferry 1985 (type species)
Methanococcoides alaskense Singh et al. 2005
Methanococcoides burtonii Franzmann et al. 1993
Methanococcoides orientis Liang et al. 2022
Methanococcoides vulcani L'Haridon et al. 2014
Genus Methanohalobium Zhilina and Zavarzin 1988
Methanohalobium evestigatum corrig. Zhilina and Zavarzin 1988 (type species)
Genus Methanohalophilus Paterek and Smith 1988
Methanohalophilus mahii Paterek and Smith 1988 (type species)
Methanohalophilus halophilus (Zhilina 1984) Wilharm et al. 1991
Methanohalophilus levihalophilus Katayama et al. 2014
Methanohalophilus portucalensis Boone et al. 1993
Methanohalophilus profundi L'Haridon et al. 2021
Genus Methanolobus König and Stetter 1983
Methanolobus tindarius König and Stetter 1983 (type species)
Methanolobus bombayensis Kadam et al. 1994
Methanolobus chelungpuianus Wu and Lai 2015
Methanolobus halotolerans Shen et al. 2020
Methanolobus mangrovi Zhou et al. 2023
Methanolobus oregonensis (Liu et al. 1990) Boone 2002
Methanolobus profundi Mochimaru et al. 2009
Methanolobus psychrotolerans Chen et al. 2018
Methanolobus sediminis Zhou et al. 2023
Methanolobus taylorii Oremland and Boone 1994
Methanolobus vulcani Stetter et al. 1989
Methanolobus zinderi Doerfert et al. 2009
Genus Methanomethylovorans Lomans et al. 2004
Methanomethylovorans hollandica Lomans et al. 2004 (type species)
Methanomethylovorans thermophila Jiang et al. 2005
Methanomethylovorans uponensis Cha et al. 2014
Genus Methanosalsum Boone and Baker 2002
Methanosalsum zhilinae (Mathrani et al. 1988) Boone and Baker 2002 (type species)
Methanosalsum natronophilum Sorokin et al. 2015
Family Methanotrichaceae
Genus Methanothrix Huser et al. 1983
Methanothrix soehngenii Huser et al. 1983 (type species)
Methanothrix harundinacea (Ma et al. 2006) Akinyemi et al. 2021
Methanothrix thermoacetophila corrig. Nozhevnikova and Chudina 1988
"Candidatus Methanothrix paradoxa" corrig. Angle et al. 2017
Family Methermicoccaceae
Genus Methermicoccus Cheng et al. 2007
Methermicoccus shengliensis Cheng et al. 2007 (type species)
Phylum Methanobacteriota
Class Methanobacteria
Order Methanobacteriales
Family Methanobacteriaceae
Genus Methanobacterium Kluyver and van Niel 1936
Methanobacterium formicicum Schnellen 1947 (type species)
Methanobacterium bryantii
Genus Methanobrevibacter Balch and Wolfe 1981
Methanobrevibacter ruminantium (Smith and Hungate 1958) Balch and Wolfe 1981 (type species)
Methanobrevibacter acididurans Savant et al. 2002
Methanobrevibacter arboriphilus corrig. (Zeikus and Henning 1975) Balch and Wolfe 1981
Methanobrevibacter boviskoreani Lee et al. 2013
Methanobrevibacter curvatus Leadbetter and Breznak 1997
Methanobrevibacter cuticularis Leadbetter and Breznak 1997
Methanobrevibacter filiformis Leadbetter et al. 1998
Methanobrevibacter gottschalkii Miller and Lin 2002
Methanobrevibacter millerae Rea et al. 2007
Methanobrevibacter olleyae Rea et al. 2007
Methanobrevibacter oralis Ferrari et al. 1995
Methanobrevibacter smithii Balch and Wolfe 1981
Methanobrevibacter thaueri Miller and Lin 2002
Methanobrevibacter woesei Miller and Lin 2002
Methanobrevibacter wolinii Miller and Lin 2002
"Methanobrevibacter massiliense" Huynh et al. 2015
"Candidatus Methanobrevibacter intestini" Chibani et al. 2022
Genus Methanosphaera Miller and Wolin 1985
Methanosphaera stadtmanae corrig. Miller and Wolin 1985 (type species)
Methanosphaera cuniculi Biavati et al. 1990
Genus Methanothermobacter Wasserfallen et al. 2000
Methanothermobacter thermautotrophicus corrig. (Zeikus and Wolfe 1972) Wasserfallen et al. 2000 (type species)
Methanothermobacter crinale Cheng et al. 2012
Methanothermobacter defluvii (Kotelnikova et al. 1994) Boone 2002
Methanothermobacter marburgensis Wasserfallen et al. 2000
Methanothermobacter tenebrarum Nakamura et al. 2013
Methanothermobacter thermoflexus (Kotelnikova et al. 1994) Boone 2002
Methanothermobacter thermophilus (Laurinavichus et al. 1990) Boone 2002
Methanothermobacter wolfei corrig. (Winter et al. 1985) Wasserfallen et al. 2000
Family Methanothermaceae
Genus Methanothermus Stetter 1982
Methanothermus fervidus Stetter 1982 (type species)
Class Methanopyri
Order Methanopyrales
Family Methanopyraceae
Genus Methanopyrus Kurr et al. 1992
Methanopyrus kandleri Kurr et al. 1992 (type species)
Class Methanococci
Order Methanococcales
Family Methanococcaceae Balch and Wolfe 1981
Genus Methanococcus Kluyver and van Niel 1936
Methanococcus vannielii Stadtman and Barker 1951 (type species)
Methanococcus aeolicus
Methanococcus burtonii
Methanococcus chunghsingensis
Methanococcus deltae
Methanococcus jannaschii
Methanococcus maripaludis
Genus Methanofervidicoccus
Methanofervidicoccus abyssi Sakai et al. 2019 (type species)
Genus Methanothermococcus
Methanothermococcus thermolithotrophicus (Huber et al. 1984) Whitman 2002 (type species)
Family Methanocaldococcaceae
Genus Methanocaldococcus
Methanocaldococcus jannaschii (Jones et al. 1984) Whitman 2002 (type species)
Genus Methanotorris
Methanotorris igneus (Burggraf et al. 1990) Whitman 2002 (type species)
Phylum Thermoplasmatota
Class Thermoplasmata
Order Methanomassiliicoccales
Family Methanomassiliicoccaceae
Genus Methanomassiliicoccus Dridi et al. 2012
Methanomassiliicoccus luminyensis Dridi et al. 2012 (type species)
Family Methanomethylophilaceae
Genus Methanomethylophilus Borrel et al. 2024
Methanomethylophilus alvi Borrel et al. 2024 (type species)
See also
Extremophile
Hydrogen cycle
Kraken Mare
List of Archaea genera
Methane clathrate
Methanogens in digestive tract of ruminants
Methanopyrus
Methanotroph
References
Anaerobic digestion
Gen
Archaea biology
Environmental microbiology | Methanogen | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 7,731 | [
"Archaea",
"Methane",
"Archaea biology",
"Anaerobic digestion",
"Water technology",
"Environmental engineering",
"Greenhouse gases",
"Environmental microbiology"
] |
563,466 | https://en.wikipedia.org/wiki/Perimeter%20Institute%20for%20Theoretical%20Physics | Perimeter Institute for Theoretical Physics (PI, Perimeter, PITP) is an independent research centre in foundational theoretical physics located in Waterloo, Ontario, Canada. It was founded in 1999. The institute's founding and major benefactor is Canadian entrepreneur and philanthropist Mike Lazaridis.
The original building, designed by Saucier + Perrotte, opened in 2004 and was awarded a Governor General's Medal for Architecture in 2006. The Stephen Hawking Centre, designed by Teeple Architects, was opened in 2011 and was LEED Silver certified in 2015.
In addition to research, Perimeter also provides scientific training and educational outreach activities to the general public. This is done in part through Perimeter's Educational Outreach team.
History
In 1999, Howard Burton—who had a PhD in theoretical physics from the University of Waterloo—emailed Mike Lazaridis along with 20 CEOs in an attempt to leave his Wall Street job. Lazaridis then pitched the idea of the Perimeter Institute to Burton as he wanted to use his BlackBerry wealth for a philanthropic endeavour. Lazaridis' initial donation of $100 million was announced on October 23, 2000, believed to be the biggest private donation in Canadian history to that point. Jim Balsillie and Doug Fregin each donated $10 million. The city of Waterloo offered four sites of land for free; Lazaridis chose the former site of the Waterloo Memorial Arena (near Uptown Waterloo).
Research operations began in 2001, in a temporary site in a nearby post office. Burton became the Institute's founding director. The permanent building's construction finished in 2004. The Ontario budget, announced in March 2006, included a commitment to provide $50 million in funding to PI from the Ministry of Research and Innovation.
In May 2008, Dr. Neil Turok, a cosmologist, was appointed as Perimeter Institute's second director replacing Howard Burton. Lazaridis donated a subsequent $50 million on June 4, 2008. In November 2008, it was announced that physicist Stephen Hawking would take the position of Distinguished Visiting Research Chair, a visiting position, at the institute.
Designed by Teeple Architects, a new expansion, the Stephen Hawking Centre at Perimeter Institute, was completed in September 2011. The centre's grand opening that month included a video greeting from Hawking, who rarely traveled due to disability. It was the first-ever Gold Seal-managed project in Ontario and attained LEED Silver certification in 2015.
On February 28, 2019, Dr. Robert Myers was appointed as the third director of the Perimeter Institute. On November 4, 2024, Marcela Carena was announced as the fourth director.
Design
The Institute was designed by Montréal-based architectural firm Saucier + Perrotte. A concrete stairwell in the building's atrium was designed by Blackwell Engineers. The Institute's front aluminum wall is black with small windows to represent a blackboard. There are wooden fireplaces and blackboards throughout the building. Writing for The Globe and Mail, architecture critic Lisa Rochon praised the Institute's "seamless connections" between the building's interior and exterior and said the building is about "the flow of light and the directions we can take". Rochon described the building as modernism, and cited Tadao Ando as an influence.
Research
Perimeter's research encompasses nine fields:
Cosmology
Mathematical physics
Particle physics
Quantum fields and strings
Quantum foundations
Quantum gravity
Quantum information
Quantum matter
Strong gravity
Programs
Perimeter Institute Recorded Seminar Archive (PIRSA)
An extensive, up-to-date archive of the institute's varied research activities is readily available to the public via the internet. The Perimeter Institute Recorded Seminar Archive (PIRSA), is a permanent, free, searchable, and citable archive of recorded seminars, conferences, workshops and outreach events. Seminars with video and timed presentation materials can be accessed on-demand in Windows and Flash formats together with MP3 audio files and PDFs of the supporting materials. The PIRSA project is enlarged by the creation of SciTalks (See below).
SciTalks
After more than 13,000 talks uploaded to PIRSA, Perimeter Institute created in 2020 a new public video archive called SciTalks with the support of the Simons Foundation. It is a meta-repository search tool of scientific talks beyond what is produced at PI and aims to "revolutionize the world of scholarly communication in the way that the arXiv has done for print scientific papers". Apart from PIRSA's existing content, SciTalks indexes videos from other institutions, such as CERN, Simons Institute for the Theory of Computing, ICTP and ICTP-SAIFR. As a meta-repository, SciTalks only stores metadata and links related to the videos, which are stored in other databases.
Educational outreach
Perimeter's educational outreach team's activities include a monthly public lecture series, a two-week summer camp for the world's top science students, a series of in-class resources, week-long professional development workshops for science teachers, cultural activities with local and international artists, an online archive of educational resources, an extensive network of science teachers to share content across Canada, and many other special events and science festivals contributing to physics outreach. Perimeter Institute operates an international outreach program.
The annual EinsteinPlus summer school for high school physics teachers is held for one week each summer. The International Summer School for Young Physicists (ISSYP) is a physics camp for high school students. It brings approximately 20 Canadian students and 20 International students aged 16 – 18 to Perimeter for two weeks each year.
Public lectures series
Perimeter Institute has welcomed a number of very prominent scientists to deliver lectures on a wide variety of subjects. Lecturers have included: Freeman Dyson, Gerard 't Hooft, Jay Ingram, Seth Lloyd, Jay Melosh, Sir Roger Penrose, Michael Peskin, Leonard Susskind, Frank Wilczek and Anton Zeilinger.
BrainSTEM: Your Future is Now
This festival connected technological innovations to the scientific breakthroughs that make them possible. The festival, held September 30 to October 6, 2013, featured science-centre styled exhibits, special presentations, public lectures, Science in the Club events and insider-tours of the Perimeter Institute. Webcast Public Lectures featured James Grime, Ray Laflamme and Lucy Hawking.
Quantum to Cosmos: Ideas for the Future festival
Held in October, 2009, the Quantum to Cosmos: Ideas for the Future festival (Q2C Festival) was a science outreach event held in Canada. The festival included events and activities spanning: lectures, panel discussions, pub talks, cultural activities, a PI documentary premiere (The Quantum Tamers: Revealing Our Weird and Wired Future), sci-fi film festival, an art exhibit and the hugely popular Physica Phantastica exhibit centre, a space filled with demonstrations, hands-on activities, experiments and an immersive 3D tour of the universe narrated by Stephen Hawking.
The Q2C Festival attracted some 40,000 attendees (including over 6,000 in the secondary school program that brought students from Ontario and New York State) and nearly one million viewers through online streaming, video-on-demand services and special television broadcasts. Special editions of TVO's "The Agenda with Steve Paikin", filmed live in PI's Atrium in Waterloo, attracted hundreds of thousands of viewers from across Canada with just five broadcasts.
Training
Joint masters-level program
In partnership with the University of Waterloo, PI conducts Perimeter Scholars International (PSI), a master's level course in theoretical physics. The 10-month course was inaugurated in August 2009, and admits around 30 scholars per year. Students admitted (on average 3% of all applicants) receive full scholarships and living expenses. The master's degree itself is issued by the University of Waterloo.
Doctoral studies
Perimeter Institute for Theoretical Physics also hosts PhD students wishing to pursue full-time graduate studies under the supervision of a PI faculty member. PhD students receive their doctoral degrees from a university partner, such as the University of Waterloo.
Courses
Perimeter Institute offers a number of planned courses each year, including cross-listed programs with universities and mini-courses given by PI faculty, associate faculty and visiting researchers. The courses are made available to all students enrolled in surrounding universities. The popular courses are attended by students from University of Waterloo, University of Western Ontario, McMaster University, University of Guelph, University of Toronto, York University, and other centres.
See also
Institute for Theoretical Physics (disambiguation)
Center for Theoretical Physics (disambiguation)
References
External links
PIRSA – Perimeter Institute Recorded Seminar Archive
Perimeter Scholars International (PSI)
Perimeter Institute for Theoretical Physics
Research institutes in Canada
Higher education in Ontario
Physics research institutes
Buildings and structures in Waterloo, Ontario
Organizations established in 1999
Theoretical physics institutes
1999 establishments in Ontario | Perimeter Institute for Theoretical Physics | [
"Physics"
] | 1,804 | [
"Theoretical physics",
"Theoretical physics institutes"
] |
563,532 | https://en.wikipedia.org/wiki/Barrel%20of%20oil%20equivalent | The barrel of oil equivalent (BOE) is a unit of energy based on the approximate energy released by burning one barrel (, or about ) of crude oil. The BOE is used by oil and gas companies in their financial statements as a way of combining oil and natural gas reserves and production into a single measure, although this energy equivalence does not take into account the lower financial value of energy in the form of gas.
The U.S. Energy Information Administration defines the barrel of oil equivalent as about 6.1 GJ (5.8 million Btu). The value is necessarily approximate as various grades of oil and gas have slightly different heating values. If one considers the lower heating value instead of the higher heating value, the value for one BOE would be approximately 5.4 GJ (see tonne of oil equivalent). Typically about 5,800 cubic feet of natural gas is equivalent to one BOE. The United States Geological Survey gives a figure of 6,000 cubic feet (170 cubic metres) of typical natural gas.
Due to the risk of confusion, the Society of Petroleum Engineers recommends in their style guide that abbreviations or prefixes M or MM are not used for barrels of oil or barrels of oil equivalent, but rather that thousands, millions or billions are spelled out. Common prefixes for readers familiar with the metric system are k for thousand, M for million and G for billion, while other readers might be more familiar with M for thousand, MM for million and B for billion. All those multiples are commonly combined with barrels of oil equivalent, from the daily output of individual production units up to the level of petroleum reserves.
Metric regions commonly use the tonne of oil equivalent (toe), or more often million toe (Mtoe). Since this is a measurement of mass, any conversion to barrels of oil equivalent depends on the density of the oil in question, as well as the energy content. Typically 1 tonne of oil has a volume of . The United States EIA suggests 1 toe has an average energy value of .
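As a small worked example of how such equivalences are combined in practice, the following sketch converts a mixed oil-and-gas production figure into BOE and relates toe to BOE by energy content. The conversion factors are assumed round numbers (6.1 GJ per BOE, 41.868 GJ per toe, 6,000 cubic feet of gas per BOE); actual reporting conventions vary between companies and agencies.

# Assumed, commonly used round-number factors; reporting conventions differ.
GJ_PER_BOE = 6.1          # approximate energy content of one barrel of oil
GJ_PER_TOE = 41.868       # IEA definition of the tonne of oil equivalent
CF_GAS_PER_BOE = 6000.0   # cubic feet of natural gas per BOE (a common convention)

def production_in_boe(oil_barrels, gas_cubic_feet):
    """Combine oil and gas production volumes into a single BOE figure."""
    return oil_barrels + gas_cubic_feet / CF_GAS_PER_BOE

def toe_to_boe(tonnes_oil_equivalent):
    """Convert tonnes of oil equivalent to BOE by energy content."""
    return tonnes_oil_equivalent * GJ_PER_TOE / GJ_PER_BOE

print(production_in_boe(1000, 12_000_000))  # 1,000 bbl oil + 12 MMcf gas = 3000.0 BOE
print(round(toe_to_boe(1.0), 1))            # 1 toe is roughly 6.9 BOE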
See also
References
Petroleum economics
Units of energy
Equivalent units | Barrel of oil equivalent | [
"Mathematics"
] | 393 | [
"Equivalent quantities",
"Quantity",
"Units of energy",
"Equivalent units",
"Units of measurement"
] |
563,580 | https://en.wikipedia.org/wiki/Hormone%20receptor | A hormone receptor is a receptor molecule that binds to a specific hormone. Hormone receptors are a wide family of proteins made up of receptors for thyroid and steroid hormones, retinoids and Vitamin D, and a variety of other receptors for various ligands, such as fatty acids and prostaglandins. Hormone receptors are of mainly two classes. Receptors for peptide hormones tend to be cell surface receptors built into the plasma membrane of cells and are thus referred to as trans membrane receptors. An example of this is Actrapid. Receptors for steroid hormones are usually found within the protoplasm and are referred to as intracellular or nuclear receptors, such as testosterone. Upon hormone binding, the receptor can initiate multiple signaling pathways, which ultimately leads to changes in the behavior of the target cells.
Hormonal therapy and hormone receptors play a very large part in breast cancer treatment (therapy is not limited to only breast cancer). By influencing hormone signaling, the growth and function of the target cells can be changed; blocking or altering these signals can prevent the cancer cells from surviving in the human body.
General ligand binding
Hormone receptor proteins bind to a hormone as a result of an accumulation of weak interactions. Because of the relatively large size of enzymes and receptors, the large amount of surface area provides the basis for these weak interactions to occur. This binding is actually highly specific because of the complementarity of these interactions between polar, non-polar, charged, neutral, hydrophilic, or hydrophobic residues. Upon binding, the receptor often undergoes a conformational change and may bind further, signaling ligands to activate a signaling pathway. Because of these highly specific and high affinity interactions between hormones and their receptors, very low concentrations of hormone can produce significant cellular response. Receptors can have various different structures depending on the function of the hormone and the structure of its ligand. Therefore, hormone binding to its receptor is a complex process that can be mediated by cooperative binding, reversible and irreversible interactions, and multiple binding sites.
Functions
Transmission of signal
The presence of hormone or multiple hormones enables a response in the receptor, which begins a cascade of signaling. The hormone receptor interacts with different molecules to induce a variety of changes, such as an increase or decrease of nutrient sources, growth, and other metabolic functions. These signaling pathways are complex mechanisms mediated by feedback loops where different signals activate and inhibit other signals. If a signaling pathway ends with the increase in production of a nutrient, that nutrient is then a signal back to the receptor that acts as a competitive inhibitor to prevent further production. Signaling pathways regulate cells through activating or inactivating gene expression, transport of metabolites, and controlling enzymatic activity to manage growth and functions of metabolism.
Intracellular receptors
Intracellular and nuclear receptors are a direct way for the cell to respond to internal changes and signals. Intracellular receptors are activated by hydrophobic ligands that pass through the cellular membrane. All nuclear receptors are very similar in structure, and are described with intrinsic transcriptional activity. Intrinsic transcriptional activity involves the following three domains: transcription-activating, DNA-binding, and ligand-binding. These domains and ligands are hydrophobic and are able to travel through the membrane. The movement of macromolecules and ligand molecules into the cell enables a complex transport system of intracellular signal transfers through different cellular environments until response is enabled. Nuclear receptors are a special class of intracellular receptor that specifically aid the needs of the cell to express certain genes. Nuclear receptors often bind directly to DNA by targeting specific DNA sequences in order to express or repress transcription of nearby genes.
Cell surface receptors
The extracellular environment is able to induce changes within the cell. Hormones, or other extracellular signals, are able to induce changes within the cell by binding to cell surface receptors also known as transmembrane receptors. This interaction allows the hormone receptor to produce second messengers within the cell to aid response. Second messengers may also be sent to interact with intracellular receptors in order to enter the complex signal transport system that eventually changes cellular function.
G-protein-coupled membrane receptors (GPCRs) are a major class of transmembrane receptors. The features of G proteins include GDP/GTP binding, GTP hydrolysis and guanosine nucleotide exchange. When a ligand binds to a GPCR the receptor changes conformation, which makes the intracellular loops between the different membrane domains of the receptor interact with G proteins. This interaction causes the exchange of GDP for GTP, which triggers structural changes within the alpha subunit of the G protein. These changes interrupt the interaction of the alpha subunit with the beta–gamma complex, resulting in a single GTP-bound alpha subunit and a beta–gamma dimer. The GTP–alpha monomer interacts with a variety of cellular targets. The beta–gamma dimer can also stimulate enzymes within the cell, for example adenylate cyclase, but it does not have as many targets as the GTP–alpha complex.
Aiding gene expression
Hormone receptors can behave as transcription factors by interacting directly with DNA or by cross-talking with signaling pathways. This process is mediated through co-regulators. In the absence of ligand, receptor molecules bind corepressors to repress gene expression, compacting chromatin through histone deacetylatase. When a ligand is present, nuclear receptors undergo a conformational change to recruit various coactivators. These molecules work to remodel chromatin. Hormone receptors have highly specific motifs that can interact with coregulator complexes. This is the mechanism through which receptors can induce regulation of gene expression depending on both the extracellular environment and the immediate cellular composition. Steroid hormones and their regulation by receptors are the most potent molecule interactions in aiding gene expression.
Problems with nuclear receptor binding as a result of shortages of ligand or receptors can have drastic effects on the cell. The dependency on the ligand is the most important part in being able to regulate gene expression, so the absence of ligand is drastic to this process. For example, estrogen deficiency is a cause of osteoporosis and the inability to undergo a proper signaling cascade prevents bone growth and strengthening. Deficiencies in nuclear receptor-mediated pathways play a key role in the development of disease, like osteoporosis.
When a ligand binds to a nuclear receptor, the receptor undergoes a conformational change that activates it, which in turn determines the extent to which gene expression is regulated.
Classification
Receptors for water-soluble hormones
Water-soluble hormones include glycoproteins, catecholamines, and peptide hormones composed of polypeptides, e.g. thyroid-stimulating hormone, follicle-stimulating hormone, luteinizing hormone and insulin. These molecules are not lipid-soluble and therefore cannot diffuse through cell membranes. Consequently, receptors for peptide hormones are located on the plasma membrane, where the hormone binds without entering the cell.
Water-soluble hormones come from amino acids and are located and stored in endocrine cells until actually needed.
The two main types of transmembrane hormone receptor are the G-protein-coupled receptors and the enzyme-linked receptors. These receptors generally function via intracellular second messengers, including cyclic AMP (cAMP), cyclic GMP (cGMP), inositol 1,4,5-trisphosphate (IP3) and the calcium (Ca2+)–calmodulin system.
Receptors for lipid-soluble hormones
Steroid hormone receptors and related receptors are generally soluble proteins that function through gene activation. Lipid-soluble hormones target specific sequences of DNA by diffusing into the cell. When they have diffused into the cell, they bind to receptors (intracellular), and migrate into the nucleus. Their response elements are DNA sequences (promoters) that are bound by the complex of the steroid bound to its receptor. The receptors themselves are zinc-finger proteins. These receptors include those for glucocorticoids (glucocorticoid receptors), estrogens (estrogen receptors), androgens (androgen receptors), thyroid hormone (T3) (thyroid hormone receptors), calcitriol (the active form of vitamin D) (calcitriol receptors), and the retinoids (vitamin A) (retinoid receptors). Receptor-protein interactions induce the uptake and destruction of their respective hormones in order to regulate their concentration in the body. This is especially important for steroid hormones because many body systems are entirely steroid dependent.
List of hormone receptors
For some of these classes, in any given species (such as, for example, humans), there is a single molecule encoded by a single gene; in other cases, there are several molecules in the class.
Androgen receptors
Calcitriol receptors
Corticotropin-releasing hormone receptor 1
Corticotropin releasing hormone receptor 2
Estrogen receptors
Follicle-stimulating hormone receptors
Glucagon receptors
Gonadotropin receptors
Gonadotropin-releasing hormone receptors
Growth hormone receptors
Insulin receptor
Luteinizing hormone receptors
Progesterone receptors
Retinoid receptors
Somatostatin receptors
Thyroid hormone receptors
Thyrotropin receptors
References
Receptors
Integral membrane proteins | Hormone receptor | [
"Chemistry"
] | 1,890 | [
"Receptors",
"Signal transduction"
] |
563,620 | https://en.wikipedia.org/wiki/Septentrional | Septentrional, meaning "of the north", is a Latinate adjective sometimes used in English. It is a form of the Latin noun septentriones, which refers to the seven stars of the Plough (Big Dipper), occasionally called the Septentrion.
In the 18th century, septentrional languages was a recognised term for the Germanic languages.
Etymology and background
The Oxford English Dictionary gives the etymology of septentrional as:
"Septentrional" is more or less synonymous with the term "boreal", derived from Boreas, a Greek god of the North Wind. The constellation Ursa Major, containing the Big Dipper, or Plough, dominates the skies of the North. The usual antonym for septentrional is the term meridional, which refers to the noonday sun.
Usage
The term septentrional is found on maps, mostly those made before 1700. Early maps of North America often refer to the northern- and northwesternmost unexplored areas of the continent as at the "Septentrional" and as "America Septentrionalis", sometimes with slightly varying spellings. Sometimes abbreviated to "Sep.", it was used in historical astronomy to indicate the northern direction on the celestial globe, together with Meridional ("Mer.") for southern, Oriental ("Ori.") for eastern and Occidental ("Occ.") for western.
The linguistic usage in the 17th and 18th centuries was as an umbrella term. It described "the Germanic languages, usually with particular emphasis on Anglo-Saxon, Old Norse and Gothic." Writing of Johann Georg Keyßler in 1758, Thomas Gray distinguished between "Celtic" and "septentrional" antiquities. Thomas Percy actively criticised the blurring of the Celtic and the Germanic in the name of the "septentrional", while at the same time Ossianism favoured it. James Ingram in his inaugural lecture of 1807 called George Hickes "the first of septentrional scholars" for his pioneering lexicographical work on Anglo-Saxon. In current usage, "septentrional fiction" may refer to a setting in the Canadian North.
In France, the term septentrional refers to the Northern stretch of the Côtes du Rhône AOC winemaking region. The Northern Rhône, or septentrional, runs along the Rhône river from Vienne in the north, to Montélimar in the south. It includes the eight crus: Côte Rôtie, Condrieu, Château-Grillet, Hermitage, Saint-Joseph, Crozes-Hermitage, Cornas and Saint-Péray. The Southern Rhône is referred to as the meridional (Rhône méridionale), and extends from Montélimar in the north, to Avignon in the south.
See also
Septentrionalist
Oriental
Occidental
Boreal
Austral
Myotis septentrionalis, the Northern Long-eared Bat
Notes
References
Geography of the Arctic
Orientation (geometry) | Septentrional | [
"Physics",
"Mathematics"
] | 632 | [
"Topology",
"Space",
"Geometry",
"Spacetime",
"Orientation (geometry)"
] |
563,628 | https://en.wikipedia.org/wiki/Clock%20tower | Clock towers are a specific type of structure that house a turret clock and have one or more clock faces on the upper exterior walls. Many clock towers are freestanding structures but they can also adjoin or be located on top of another building. Some other buildings also have clock faces on their exterior but these structures serve other main functions.
Clock towers are a common sight in many parts of the world with some being iconic buildings. One example is the Elizabeth Tower in London (usually called "Big Ben", although strictly this name belongs only to the bell inside the tower).
Definition
There are many structures that may have clocks or clock faces attached to them and some structures have had clocks added to an existing structure. According to the Council on Tall Buildings and Urban Habitat, a structure is defined as a building if at least fifty percent of its height is made up of floor plates containing habitable floor area. Structures that do not meet this criterion are defined as towers. A clock tower historically fits this definition of a tower and therefore can be defined as any tower specifically built with one or more (often four) clock faces and that can be either freestanding or part of a church or municipal building such as a town hall. Not all clocks on buildings therefore make the building into a clock tower.
The mechanism inside the tower is known as a turret clock. It often marks the hour (and sometimes segments of an hour) by sounding large bells or chimes, sometimes playing simple musical phrases or tunes. Some clock towers were previously built as Bell towers and then had clocks added to them. As these structures fulfil the definition of a tower they can be considered to be clock towers.
History
Although clock towers are today mostly admired for their aesthetics, they once served an important purpose. Before the middle of the twentieth century, most people did not have watches, and prior to the 18th century even home clocks were rare. The first clocks did not have faces, but were solely striking clocks, which sounded bells to call the surrounding community to work or to prayer. They were therefore placed in towers so the bells would be audible for a long distance. Clock towers were placed near the centres of towns and were often the tallest structures there. As clock towers became more common, the designers realized that a dial on the outside of the tower would allow the townspeople to read the time whenever they wanted.
The use of clock towers dates back to antiquity. The earliest clock tower was the Tower of the Winds in Athens, which featured eight sundials and was created in the 1st century BC during the period of Roman Greece. In its interior, there was also a water clock (or clepsydra), driven by water coming down from the Acropolis.
In Song dynasty China, an astronomical clock tower was designed by Su Song and erected at Kaifeng in 1088, featuring a liquid escapement mechanism. In England, a clock was put up in a clock tower, the medieval precursor to Big Ben, at Westminster, in 1288; and in 1292 a clock was put up in Canterbury Cathedral. The oldest surviving turret clock formerly part of a clock tower in Europe is the Salisbury Cathedral clock, completed in about 1386. A clock put up at St. Albans, in 1326, 'showed various astronomical phenomena'.
Al-Jazari of the Artuqid dynasty in Upper Mesopotamia constructed an elaborate clock called the "castle clock" and described it in his Book of Knowledge of Ingenious Mechanical Devices in 1206. It was about high, and had multiple functions alongside timekeeping. It included a display of the zodiac and the solar and lunar paths, and a pointer in the shape of the crescent moon that travelled across the top of a gateway, moved by a hidden cart and causing automatic doors to open, each revealing a mannequin, every hour. It was possible to re-program the length of day and night daily in order to account for the changing lengths of day and night throughout the year, and it also featured five robotic musicians who automatically play music when moved by levers operated by a hidden camshaft attached to a water wheel.
Line (mains) synchronous tower clocks were introduced in the United States in the 1920s.
Landmarks
Some clock towers have become famous landmarks. Prominent examples include Elizabeth Tower built in 1859, which houses the Great Bell (generally known as Big Ben) in London, the tower of Philadelphia City Hall, the Rajabai Tower in Mumbai, the Spasskaya Tower of the Moscow Kremlin, the Torre dell'Orologio in the Piazza San Marco in Venice, Italy, the Peace Tower of the Parliament of Canada in Ottawa, and the Zytglogge clock tower in the Old City of Bern, Switzerland.
Records
The tallest freestanding clock tower in the world is the Joseph Chamberlain Memorial Clock Tower (Old Joe) at the University of Birmingham in Birmingham, United Kingdom. The tower stands at tall and was completed in 1908. The clock tower of Philadelphia City Hall was part of the tallest building in the world from 1894, when the tower was topped out and the building partially occupied, until 1908.
Taller buildings have had clock faces added to their existing structure such as the Palace of Culture and Science in Warsaw, with a clock added in 2000. The building has a roof height of , and an antenna height of . The NTT Docomo Yoyogi Building in Tokyo, with a clock added in 2002, has a roof height of , and an antenna height of .
The Abraj Al Bait, a hotel complex in Mecca constructed in 2012, has the largest and highest clock face on a building in the world, with its Makkah Royal Clock Tower having an occupied height of , and a tip height of . The tower has four clock faces, two of which are in diameter, at about high.
See also
List of clock towers
Bell tower
Minaret
Street clock
Thirteenth stroke of the clock
References
External links
Towerclocks.org - Tower clocks database
Railway Station Clock Towers Architecture of time
Hellenistic engineering
Ancient inventions
Tower
Ancient Greek technology
Greek inventions | Clock tower | [
"Physics",
"Technology",
"Engineering"
] | 1,220 | [
"Physical systems",
"Machines",
"Clocks",
"Measuring instruments"
] |
563,662 | https://en.wikipedia.org/wiki/Stooge%20sort | Stooge sort is a recursive sorting algorithm. It is notable for its exceptionally bad time complexity of =
The algorithm's running time is thus slower compared to reasonable sorting algorithms, and is slower than bubble sort, a canonical example of a fairly inefficient sort. It is, however, more efficient than Slowsort. The name comes from The Three Stooges.
The algorithm is defined as follows:
If the value at the start is larger than the value at the end, swap them.
If there are three or more elements in the list, then:
Stooge sort the initial 2/3 of the list
Stooge sort the final 2/3 of the list
Stooge sort the initial 2/3 of the list again
It is important to get the integer sort size used in the recursive calls by rounding the 2/3 upwards, e.g. rounding 2/3 of 5 should give 4 rather than 3, as otherwise the sort can fail on certain data.
Implementation
Pseudocode
function stoogesort(array L, i = 0, j = length(L)-1){
if L[i] > L[j] then // If the leftmost element is larger than the rightmost element
swap(L[i],L[j]) // Then swap them
if (j - i + 1) > 2 then // If there are at least 3 elements in the array
t = floor((j - i + 1) / 3)
stoogesort(L, i, j-t) // Sort the first 2/3 of the array
stoogesort(L, i+t, j) // Sort the last 2/3 of the array
stoogesort(L, i, j-t) // Sort the first 2/3 of the array again
return L
}
Haskell
-- Not the best but equal to above
stoogesort :: (Ord a) => [a] -> [a]
stoogesort [] = []
stoogesort src = innerStoogesort src 0 ((length src) - 1)
innerStoogesort :: (Ord a) => [a] -> Int -> Int -> [a]
innerStoogesort src i j
| (j - i + 1) > 2 = src''''
| otherwise = src'
where
src' = swap src i j -- need every call
t = floor (fromIntegral (j - i + 1) / 3.0)
src'' = innerStoogesort src' i (j - t)
src''' = innerStoogesort src'' (i + t) j
src'''' = innerStoogesort src''' i (j - t)
swap :: (Ord a) => [a] -> Int -> Int -> [a]
swap src i j
| a > b = replaceAt (replaceAt src j a) i b
| otherwise = src
where
a = src !! i
b = src !! j
replaceAt :: [a] -> Int -> a -> [a]
replaceAt (x:xs) index value
| index == 0 = value : xs
| otherwise = x : replaceAt xs (index - 1) value
References
Sources
External links
Sorting Algorithms (including Stooge sort)
Stooge sort – implementation and comparison
Comparison sorts
Articles with example pseudocode | Stooge sort | [
"Technology"
] | 759 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
563,694 | https://en.wikipedia.org/wiki/Gell-Mann%20matrices | The Gell-Mann matrices, developed by Murray Gell-Mann, are a set of eight linearly independent 3×3 traceless Hermitian matrices used in the study of the strong interaction in particle physics.
They span the Lie algebra of the SU(3) group in the defining representation.
Matrices
\lambda_1 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad
\lambda_2 = \begin{pmatrix} 0 & -i & 0 \\ i & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad
\lambda_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix},

\lambda_4 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}, \qquad
\lambda_5 = \begin{pmatrix} 0 & 0 & -i \\ 0 & 0 & 0 \\ i & 0 & 0 \end{pmatrix}, \qquad
\lambda_6 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix},

\lambda_7 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -i \\ 0 & i & 0 \end{pmatrix}, \qquad
\lambda_8 = \frac{1}{\sqrt{3}} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -2 \end{pmatrix}.
Properties
These matrices are traceless, Hermitian, and obey the extra trace orthonormality relation, so they can generate unitary matrix group elements of SU(3) through exponentiation. These properties were chosen by Gell-Mann because they then naturally generalize the Pauli matrices for SU(2) to SU(3), which formed the basis for Gell-Mann's quark model. Gell-Mann's generalization further extends to general SU(n). For their connection to the standard basis of Lie algebras, see the Weyl–Cartan basis.
Trace orthonormality
In mathematics, orthonormality typically implies a norm which has a value of unity (1). Gell-Mann matrices, however, are normalized to a value of 2. Thus, the trace of the pairwise product results in the ortho-normalization condition
\operatorname{tr}(\lambda_a \lambda_b) = 2\,\delta_{ab},
where \delta_{ab} is the Kronecker delta.
This is so the embedded Pauli matrices corresponding to the three embedded subalgebras of SU(2) are conventionally normalized. In this three-dimensional matrix representation, the Cartan subalgebra is the set of linear combinations (with real coefficients) of the two matrices \lambda_3 and \lambda_8, which commute with each other.
There are three significant SU(2) subalgebras:
\{\lambda_1, \lambda_2, \lambda_3\}, \qquad \{\lambda_4, \lambda_5, x\} \qquad \text{and} \qquad \{\lambda_6, \lambda_7, y\},
where the x and y are linear combinations of \lambda_3 and \lambda_8. The SU(2) Casimirs of these subalgebras mutually commute.
However, any unitary similarity transformation of these subalgebras will yield SU(2) subalgebras. There is an uncountable number of such transformations.
Commutation relations
The 8 generators of SU(3) satisfy the commutation and anti-commutation relations
[\lambda_a, \lambda_b] = 2 i f^{abc} \lambda_c, \qquad \{\lambda_a, \lambda_b\} = \tfrac{4}{3}\,\delta_{ab} I + 2 d^{abc} \lambda_c,
with the structure constants f^{abc} and d^{abc}.
The structure constants d^{abc} are completely symmetric in the three indices. The structure constants f^{abc} are completely antisymmetric in the three indices, generalizing the antisymmetry of the Levi-Civita symbol \varepsilon_{abc} of \mathfrak{su}(2). For the present order of Gell-Mann matrices they take the values
f^{123} = 1, \qquad f^{147} = -f^{156} = f^{246} = f^{257} = f^{345} = -f^{367} = \tfrac{1}{2}, \qquad f^{458} = f^{678} = \tfrac{\sqrt{3}}{2}.
In general, they evaluate to zero, unless they contain an odd count of indices from the set {2,5,7}, corresponding to the antisymmetric (imaginary) \lambda's.
Using these commutation relations, the product of Gell-Mann matrices can be written as
\lambda_a \lambda_b = \tfrac{2}{3}\,\delta_{ab} I + d^{abc}\lambda_c + i f^{abc}\lambda_c,
where I is the identity matrix.
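These algebraic relations are easy to verify numerically. As a minimal sketch, the eight matrices listed above can be entered directly in NumPy to check the trace orthonormality condition and to recover individual structure constants from f^abc = tr([\lambda_a, \lambda_b] \lambda_c)/(4i); the variable names below are illustrative only.

import numpy as np

# The eight Gell-Mann matrices, indexed 0..7 for lambda_1..lambda_8.
l = np.zeros((8, 3, 3), dtype=complex)
l[0] = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
l[1] = [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]]
l[2] = [[1, 0, 0], [0, -1, 0], [0, 0, 0]]
l[3] = [[0, 0, 1], [0, 0, 0], [1, 0, 0]]
l[4] = [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]]
l[5] = [[0, 0, 0], [0, 0, 1], [0, 1, 0]]
l[6] = [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]]
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)

# Trace orthonormality: tr(lambda_a lambda_b) = 2 delta_ab
gram = np.array([[np.trace(l[a] @ l[b]).real for b in range(8)] for a in range(8)])
assert np.allclose(gram, 2 * np.eye(8))

# Structure constants from f_abc = tr([lambda_a, lambda_b] lambda_c) / (4i)
def f(a, b, c):
    comm = l[a] @ l[b] - l[b] @ l[a]
    return (np.trace(comm @ l[c]) / 4j).real

print(f(0, 1, 2))  # f^123 = 1.0
print(f(3, 4, 7))  # f^458 = sqrt(3)/2, approximately 0.866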
Fierz completeness relations
Since the eight matrices and the identity are a complete trace-orthogonal set spanning all 3×3 matrices, it is straightforward to find two Fierz completeness relations, (Li & Cheng, 4.134), analogous to that satisfied by the Pauli matrices. Namely, using the dot to sum over the eight matrices and using Greek indices for their row/column indices, the following identities hold,
\delta_{\alpha\beta}\,\delta_{\gamma\delta} = \tfrac{1}{3}\,\delta_{\alpha\delta}\,\delta_{\gamma\beta} + \tfrac{1}{2}\,\lambda_{\alpha\delta}\cdot\lambda_{\gamma\beta}
and
\lambda_{\alpha\beta}\cdot\lambda_{\gamma\delta} = 2\,\delta_{\alpha\delta}\,\delta_{\gamma\beta} - \tfrac{2}{3}\,\delta_{\alpha\beta}\,\delta_{\gamma\delta}.
One may prefer the recast version, resulting from a linear combination of the above,
Representation theory
A particular choice of matrices is called a group representation, because any element of SU(3) can be written in the form \exp(i\,\theta^j \lambda_j), using the Einstein notation, where the eight \theta^j are real numbers and a sum over the index j is implied. Given one representation, an equivalent one may be obtained by an arbitrary unitary similarity transformation, since that leaves the commutator unchanged.
The matrices can be realized as a representation of the infinitesimal generators of the special unitary group called SU(3). The Lie algebra of this group (a real Lie algebra in fact) has dimension eight and therefore it has some set of eight linearly independent generators, which can be written as g_i, with i taking values from 1 to 8.
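As a concrete sketch of this exponential map, one can draw eight arbitrary real parameters, exponentiate the corresponding combination of Gell-Mann matrices with SciPy, and confirm that the result is unitary with unit determinant. The array l of matrices is assumed to be the one built in the previous snippet.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
theta = rng.normal(size=8)  # eight real group parameters

# U = exp(i * sum_j theta_j * lambda_j), with l the (8, 3, 3) array defined above
U = expm(1j * np.tensordot(theta, l, axes=1))

assert np.allclose(U.conj().T @ U, np.eye(3))  # unitary
assert np.isclose(np.linalg.det(U), 1.0)       # determinant 1, hence an SU(3) element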
Casimir operators and invariants
The squared sum of the Gell-Mann matrices gives the quadratic Casimir operator, a group invariant,
C = \sum_{a=1}^{8} \lambda_a \lambda_a = \tfrac{16}{3}\, I,
where I is the 3×3 identity matrix. There is another, independent, cubic Casimir operator, as well.
Application to quantum chromodynamics
These matrices serve to study the internal (color) rotations of the gluon fields associated with the coloured quarks of quantum chromodynamics (cf. colours of the gluon). A gauge colour rotation is a spacetime-dependent SU(3) group element
U = \exp\!\left( i\, \theta^k(\mathbf{r}, t)\, \lambda_k / 2 \right),
where summation over the eight indices k is implied.
See also
Casimir element
Clebsch–Gordan coefficients for SU(3)
Generalizations of Pauli matrices
Group representations
Killing form
Pauli matrices
Qutrit
SU(3)
References
Matrices
Quantum chromodynamics
Mathematical physics
Lie algebras
Representation theory of Lie algebras | Gell-Mann matrices | [
"Physics",
"Mathematics"
] | 1,018 | [
"Applied mathematics",
"Theoretical physics",
"Mathematical objects",
"Matrices (mathematics)",
"Mathematical physics"
] |
563,728 | https://en.wikipedia.org/wiki/Crystalloluminescence | Crystalloluminescence is the effect of luminescence produced during crystallization. The phenomenon was first reported in the 1800s from the rapid crystallization of potassium sulfate from an aqueous solution.
References
Luminescence
Light sources | Crystalloluminescence | [
"Chemistry"
] | 50 | [
"Luminescence",
"Molecular physics",
"Physical chemistry stubs"
] |
563,847 | https://en.wikipedia.org/wiki/Electromigration | Electromigration is the transport of material caused by the gradual movement of the ions in a conductor due to the momentum transfer between conducting electrons and diffusing metal atoms. The effect is important in applications where high direct current densities are used, such as in microelectronics and related structures. As the structure size in electronics such as integrated circuits (ICs) decreases, the practical significance of this effect increases.
History
The phenomenon of electromigration has been known for over 100 years, having been discovered by the French scientist Gerardin. The topic first became of practical interest during the late 1960s when packaged ICs first appeared. The earliest commercially available ICs failed in a mere three weeks of use from runaway electromigration, which led to a major industry effort to correct this problem. The first observation of electromigration in thin films was made by I. Blech. Research in this field was pioneered by a number of investigators throughout the fledgling semiconductor industry. One of the most important engineering studies was performed by Jim Black of Motorola, after whom Black's equation is named. At the time, the metal interconnects in ICs were still about 10 micrometres wide. Currently interconnects are only hundreds to tens of nanometers in width, making research in electromigration increasingly important.
Practical implications of electromigration
Electromigration decreases the reliability of integrated circuits (ICs). It can cause the eventual loss of connections or failure of a circuit. Since reliability is critically important for space travel, military purposes, anti-lock braking systems, medical equipment like Automated External Defibrillators and is even important for personal computers or home entertainment systems, the reliability of chips (ICs) is a major focus of research efforts.
Due to the difficulty of testing under real-world conditions, Black's equation is used to predict the life span of integrated circuits.
To use Black's equation, the component is put through high temperature operating life (HTOL) testing. The component's expected life span under real conditions is extrapolated from data gathered during this testing.
Although damage from electromigration ultimately results in the failure of the affected IC, the first symptoms are intermittent glitches, which are quite challenging to diagnose. As some interconnects fail before others, the circuit exhibits seemingly random errors, which may be indistinguishable from other failure mechanisms (such as electrostatic discharge damage). In a laboratory setting, electromigration failure is readily imaged with an electron microscope, as interconnect erosion leaves telltale visual markers on the metal layers of the IC.
With increasing miniaturization, the probability of failure due to electromigration increases in VLSI and ULSI circuits because both the power density and the current density increase. Specifically, line widths will continue to decrease over time, as will wire cross-sectional areas. Currents are also reduced due to lower supply voltages and shrinking gate capacitances. However, as current reduction is constrained by increasing frequencies, the more marked decrease in cross-sectional areas (compared to current reduction) will give rise to increased current densities in ICs going forward.
In advanced semiconductor manufacturing processes, copper has replaced aluminium as the interconnect material of choice. Despite its greater fragility in the fabrication process, copper is preferred for its superior conductivity. It is also intrinsically less susceptible to electromigration. However, electromigration (EM) continues to be an ever-present challenge to device fabrication, and therefore the EM research for copper interconnects is ongoing (though a relatively new field).
In modern consumer electronic devices, ICs rarely fail due to electromigration effects. This is because proper semiconductor design practices incorporate the effects of electromigration into the IC's layout. Nearly all IC design houses use automated EDA tools to check and correct electromigration problems at the transistor layout-level. When operated within the manufacturer's specified temperature and voltage range, a properly designed IC device is more likely to fail from other (environmental) causes, such as cumulative damage from gamma-ray bombardment.
Nevertheless, there have been documented cases of product failures due to electromigration. In the late 1980s, one line of Western Digital's desktop drives suffered widespread, predictable failure after 12–18 months of field usage. Using forensic analysis of the returned bad units, engineers identified improper design-rules in a third-party supplier's IC controller. By replacing the bad component with that of a different supplier, WD was able to correct the flaw, but not before significant damage was done to the company's reputation.
Electromigration can be a cause of degradation in some power semiconductor devices such as low voltage power MOSFETs, in which the lateral current through the source contact metallisation (often aluminium) can reach the critical current densities during overload conditions. The degradation of the aluminium layer causes an increase in on-state resistance, and can eventually lead to complete failure.
Fundamentals
The material properties of the metal interconnects have a strong influence on their life span. The characteristics are predominantly the composition of the metal alloy and the dimensions of the conductor. The shape of the conductor, the crystallographic orientation of the grains in the metal, procedures for the layer deposition, heat treatment or annealing, characteristics of the passivation, and the interface to other materials also affect the durability of the interconnects. There are also important differences with time dependent current: direct current or different alternating current waveforms cause different effects.
Forces on ions in an electrical field
Two forces affect ionized atoms in a conductor: 1) The direct electrostatic force Fe, as a result of the electric field , which has the same direction as the electric field, and 2) The force from the exchange of momentum with other charge carriers Fp, toward the flow of charge carriers, is in the opposite direction of the electric field. In metallic conductors Fp is caused by a so-called "electron wind" or "ion wind".
The resulting force F_res on an activated ion in the electrical field can be written as
F_{res} = F_e + F_p = q\,(Z_{el} + Z_{wind})\,E = q\, Z^{*} \rho\, j,
where q is the electric charge of the ions, Z_{el} and Z_{wind} the valences corresponding to the electrostatic and wind force respectively, Z^{*} = Z_{el} + Z_{wind} the so-called effective valence of the material, j the current density, and \rho the resistivity of the material.
Electromigration occurs when some of the momentum of a moving electron is transferred to a nearby activated ion. This causes the ion to move from its original position. Over time this force knocks a significant number of atoms far from their original positions. A break or gap can develop in the conducting material, preventing the flow of electricity. In narrow interconnect conductors, such as those linking transistors and other components in integrated circuits, this is known as a void or internal failure (open circuit). Electromigration can also cause the atoms of a conductor to pile up and drift toward other nearby conductors, creating an unintended electrical connection known as a hillock failure or whisker failure (short circuit). Both of these situations can lead to a malfunction of the circuit.
Failure mechanisms
Diffusion mechanisms
In a homogeneous crystalline structure, because of the uniform lattice structure of the metal ions, there is hardly any momentum transfer between the conduction electrons and the metal ions. However, this symmetry does not exist at the grain boundaries and material interfaces, and so here momentum is transferred much more vigorously. Since the metal ions in these regions are bonded more weakly than in a regular crystal lattice, once the electron wind has reached a certain strength, atoms become separated from the grain boundaries and are transported in the direction of the current. This direction is also influenced by the grain boundary itself, because atoms tend to move along grain boundaries.
Diffusion processes caused by electromigration can be divided into grain boundary diffusion, bulk diffusion and surface diffusion. In general, grain boundary diffusion is the major electromigration process in aluminum wires, whereas surface diffusion is dominant in copper interconnects.
Thermal effects
In an ideal conductor, where atoms are arranged in a perfect lattice structure, the electrons moving through it would experience no collisions and electromigration would not occur. In real conductors, defects in the lattice structure and the random thermal vibration of the atoms about their positions causes electrons to collide with the atoms and scatter, which is the source of electrical resistance (at least in metals; see electrical conduction). Normally, the amount of momentum imparted by the relatively low-mass electrons is not enough to permanently displace the atoms. However, in high-power situations (such as with the increasing current draw and decreasing wire sizes in modern VLSI microprocessors), if many electrons bombard the atoms with enough force to become significant, this will accelerate the process of electromigration by causing the atoms of the conductor to vibrate further from their ideal lattice positions, increasing the amount of electron scattering. High current density increases the number of electrons scattering against the atoms of the conductor, and hence the rate at which those atoms are displaced.
In integrated circuits, electromigration does not occur in semiconductors directly, but in the metal interconnects deposited onto them (see semiconductor device fabrication).
Electromigration is exacerbated by high current densities and the Joule heating of the conductor (see electrical resistance), and can lead to eventual failure of electrical components. Localized increase of current density is known as current crowding.
Balance of atom concentration
A governing equation which describes the atom concentration evolution throughout some interconnect segment is the conventional mass balance (continuity) equation
\frac{\partial N}{\partial t} + \nabla \cdot \vec{J} = 0,
where N(\vec{r}, t) is the atom concentration at the point with coordinates \vec{r} at the moment of time t, and \vec{J} is the total atomic flux at this location. The total atomic flux \vec{J} is a combination of the fluxes caused by the different atom migration forces. The major forces are induced by the electric current, and by the gradients of temperature, mechanical stress and concentration:
\vec{J} = \vec{J}_E + \vec{J}_T + \vec{J}_S + \vec{J}_C.
To define the fluxes mentioned above:
\vec{J}_E = \frac{N D}{k T}\, e Z \rho\, \vec{j}.
Here e is the electron charge, Z is the effective charge of the migrating atom, \rho the resistivity of the conductor where atom migration takes place, \vec{j} is the local current density, k is the Boltzmann constant, T is the absolute temperature. D is the time and position dependent atom diffusivity.
\vec{J}_T = -\frac{N D\, Q}{k T^{2}}\, \nabla T. We use Q, the heat of thermal diffusion.
\vec{J}_S = \frac{N D\, \Omega}{k T}\, \nabla \sigma,
here \Omega is the atomic volume and N_0 is the initial atomic concentration, \sigma = (\sigma_{11} + \sigma_{22} + \sigma_{33})/3 is the hydrostatic stress and \sigma_{11}, \sigma_{22}, \sigma_{33} are the components of principal stress.
\vec{J}_C = -D\, \nabla N.
Assuming a vacancy mechanism for atom diffusion we can express D as a function of the hydrostatic stress, D = D_0 \exp\!\left(\tfrac{\Omega\sigma - E_a}{k T}\right), where E_a is the effective activation energy of the thermal diffusion of metal atoms. The vacancy concentration represents availability of empty lattice sites, which might be occupied by a migrating atom.
Electromigration-aware design
Electromigration reliability of a wire (Black's equation)
At the end of the 1960s J. R. Black developed an empirical model to estimate the MTTF (mean time to failure) of a wire, taking electromigration into consideration. Since then, the formula has gained popularity in the semiconductor industry:
\mathrm{MTTF} = \frac{A}{J^{n}} \exp\!\left(\frac{E_a}{k T}\right).
Here A is a constant based on the cross-sectional area of the interconnect, J is the current density, E_a is the activation energy (e.g. 0.7 eV for grain boundary diffusion in aluminum), k is the Boltzmann constant, T is the temperature in kelvins, and n a scaling factor (usually set to 2 according to Black). The temperature of the conductor appears in the exponent, i.e. it strongly affects the MTTF of the interconnect. For an interconnect of a given construction to remain reliable as the temperature rises, the current density within the conductor must be reduced. However, as interconnect technology advances at the nanometer scale, the validity of Black's equation becomes increasingly questionable.
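As a rough illustration of how the equation is applied, the sketch below compares the relative MTTF of the same interconnect at two junction temperatures; the prefactor cancels in the ratio. The numbers used (activation energy 0.7 eV, current density 10^6 A/cm^2, arbitrary prefactor) are assumed example values, not figures from any particular process.

import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def black_mttf(a, j, ea_ev, temp_k, n=2):
    """Black's equation: MTTF = A / J^n * exp(Ea / (k T))."""
    return a / j**n * math.exp(ea_ev / (K_BOLTZMANN_EV * temp_k))

# Assumed example values: Ea = 0.7 eV, J = 1e6 A/cm^2, prefactor A = 1.0.
mttf_105c = black_mttf(1.0, 1e6, 0.7, 105 + 273.15)
mttf_125c = black_mttf(1.0, 1e6, 0.7, 125 + 273.15)

print(f"MTTF ratio 105 C / 125 C: {mttf_105c / mttf_125c:.1f}x")
# Roughly a threefold lifetime gain for a 20 K reduction, illustrating the
# strong exponential temperature dependence noted above.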
Wire material
Historically, aluminium has been used as conductor in integrated circuits, due to its good adherence to substrate, good conductivity, and ability to form ohmic contacts with silicon. However, pure aluminium is susceptible to electromigration. Research shows that adding 2-4% of copper to aluminium increases resistance to electromigration about 50 times. The effect is attributed to the grain boundary segregation of copper, which greatly inhibits the diffusion of aluminium atoms across grain boundaries.
Pure copper wires can withstand approximately five times more current density than aluminum wires while maintaining similar reliability requirements. This is mainly due to the higher electromigration activation energy levels of copper, caused by its superior electrical and thermal conductivity as well as its higher melting point. Further improvements can be achieved by alloying copper with about 1% palladium which inhibits diffusion of copper atoms along grain boundaries in the same way as the addition of copper to aluminium interconnect.
Bamboo structure and metal slotting
A wider wire results in smaller current density and, hence, less likelihood of electromigration. Also, the metal grain size has influence; the smaller the grains, the more grain boundaries and the higher the likelihood of electromigration effects. However, if the wire width is reduced below the average grain size of the wire material, grain boundaries become "crosswise", more or less perpendicular to the length of the wire. The resulting structure resembles the joints in a stalk of bamboo. With such a structure, the resistance to electromigration increases, despite an increase in current density. This apparent contradiction is caused by the perpendicular position of the grain boundaries; the boundary diffusion factor is excluded, and material transport is correspondingly reduced.
However, the maximum wire width possible for a bamboo structure is usually too narrow for signal lines of large-magnitude currents in analog circuits or for power supply lines. In these circumstances, slotted wires are often used, whereby rectangular holes are carved in the wires. Here, the widths of the individual metal structures in between the slots lie within the area of a bamboo structure, while the resulting total width of all the metal structures meets power requirements.
Blech length
There is a lower limit for the length of the interconnect that will allow higher current carrying capability. It is known as "Blech length". Any wire that has a length below this limit will have a stretched limit for electromigration. Here, a mechanical stress buildup causes an atom back flow process which reduces or even compensates the effective material flow towards the anode. The Blech length must be considered when designing test structures to evaluate electromigration. This minimum length is typically some tens of microns for chip traces, and interconnections shorter than this are sometimes referred to as 'electromigration immortal'.
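A minimal sketch of the corresponding design check follows: a segment is treated as "electromigration immortal" when the product of current density and length stays below a critical value. The threshold used here (3000 A/cm) is an assumed, illustrative number; the real limit depends on the metallization and process.

def is_blech_immortal(current_density_a_per_cm2, length_cm, jl_crit_a_per_cm=3000.0):
    """Return True if the segment is below the assumed critical j*L (Blech) product."""
    return current_density_a_per_cm2 * length_cm < jl_crit_a_per_cm

# A 30 um (30e-4 cm) segment at 5e5 A/cm^2 gives j*L = 1500 A/cm, below the threshold.
print(is_blech_immortal(5e5, 30e-4))  # True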
Via arrangements and corner bends
Particular attention must be paid to vias and contact holes. The current carrying capacity of a via is much less than a metallic wire of same length. Hence multiple vias are often used, whereby the geometry of the via array is very significant: multiple vias must be organized such that the resulting current is distributed as evenly as possible through all the vias.
Attention must also be paid to bends in interconnects. In particular, 90-degree corner bends must be avoided, since the current density in such bends is significantly higher than that in oblique angles (e.g., 135 degrees).
Electromigration in solder joints
The typical current density at which electromigration occurs in Cu or Al interconnects is 10^6 to 10^7 A/cm^2. For solder joints (SnPb or SnAgCu lead-free) used in IC chips, however, electromigration occurs at much lower current densities, e.g. 10^4 A/cm^2.
It causes a net atom transport along the direction of electron flow. The atoms accumulate at the anode, while voids are generated at the cathode and back stress is induced during electromigration. The typical failure of a solder joint due to electromigration will occur at the cathode side. Due to the current crowding effect, voids form first at the corners of the solder joint. Then the voids extend and join to cause a failure. Electromigration also influences formation of intermetallic compounds, as the migration rates are a function of atomic mass.
Electromigration and technology computer aided design
The complete mathematical model describing electromigration consists of several partial differential equations (PDEs) which need to be solved for three-dimensional geometrical domains representing segments of an interconnect structure. Such a mathematical model forms the basis for simulation of electromigration in modern technology computer aided design (TCAD) tools.
Use of TCAD tools for detailed investigations of electromigration induced interconnect degradation is gaining importance. Results of TCAD studies in combination with reliability tests lead to modification of design rules improving the interconnect resistance to electromigration.
Electromigration due to IR drop noise of the on-chip power grid network/interconnect
The electromigration degradation of the on-chip power grid network/interconnect depends on the IR drop noise of the power grid interconnect: the electromigration-aware lifetime of the power grid interconnects, and hence of the chip, decreases if the chip suffers from a high value of IR drop noise.
Machine learning model for electromigration-aware MTTF prediction
Recent work demonstrates MTTF prediction using a machine learning model. The work uses a neural network-based supervised learning approach with current density, interconnect length, and interconnect temperature as input features to the model.
Electromigrated nanogaps
Electromigrated nanogaps are gaps formed in metallic bridges by the process of electromigration. A nanosized contact formed by electromigration acts like a waveguide for electrons. The nanocontact essentially acts like a one-dimensional wire with a conductance of 2e²/h. The current in such a wire is the velocity of the electrons multiplied by the charge and the number of electrons per unit length, I = nev. This gives a conductance of 2e²/h per conducting channel. In nanoscale bridges the conductance therefore falls in discrete steps of multiples of the quantum conductance G₀ = 2e²/h.
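As a rough numerical illustration of these conductance steps, the quantum of conductance can be computed directly from the elementary charge and the Planck constant (a minimal Python sketch; the constants are standard CODATA values and the loop bound is arbitrary):
e = 1.602176634e-19   # elementary charge in coulombs
h = 6.62607015e-34    # Planck constant in joule-seconds
G0 = 2 * e**2 / h     # quantum of conductance, about 7.75e-5 siemens
for k in range(1, 4):
    print(k, "channel(s):", k * G0, "S")   # conductance changes in steps of G0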
Electromigrated Nanogaps have shown great promise as electrodes in use in molecular scale electronics. Researchers have used feedback controlled electromigration to investigate the magnetoresistance of a quantum spin valve.
Reference standards
EIA/JEDEC Standard EIA/JESD61: Isothermal Electromigration Test Procedure.
EIA/JEDEC Standard EIA/JESD63: Standard method for calculating the electromigration model parameters for current density and temperature.
Fundamentals of electromigration, Chapter 2
See also
Kirkendall effect
Sealing current
References
Further reading
Ghate, P. B.: Electromigration-Induced Failures in VLSI Interconnects, IEEE Conf. Publication, Vol. 20:p 292 299, March 1982.
Lienig, J.: , (Download paper) Proc. of the Int. Symposium on Physical Design (ISPD) 2006, pp. 39–46, April 2006.
Lienig, J., Thiele, M.: , (Download paper), Proc. of the Int. Symposium on Physical Design (ISPD) 2018, pp. 144–151, March 2018.
Louie Liu, H.C., Murarka, S.: "Modeling of Temperature Increase Due to Joule Heating During Elektromigration Measurements. Center for Integrated Electronics and Electronics Manufacturing", Materials Research Society Symposium Proceedings Vol. 427:p. 113 119.
Books
External links
What is Electromigration?, Computer Simulation Laboratory, Middle East Technical University.
Electromigration for Designers: An Introduction for the Non-Specialist, J.R. Lloyd, EETimes.
Semiconductor electromigration in-depth at DWPG.Com
Modeling of electromigration process with void formation at UniPro R&D site
DoITPoMS Teaching and Learning Package- "Electromigration"
Electric and magnetic fields in matter
Electronic design automation
Semiconductor device defects
Transport phenomena
Electrochemistry | Electromigration | [
"Physics",
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 4,096 | [
"Transport phenomena",
"Physical phenomena",
"Chemical engineering",
"Technological failures",
"Semiconductor device defects",
"Electric and magnetic fields in matter",
"Materials science",
"Electrochemistry",
"Condensed matter physics"
] |
563,928 | https://en.wikipedia.org/wiki/Baby-step%20giant-step | In group theory, a branch of mathematics, the baby-step giant-step is a meet-in-the-middle algorithm for computing the discrete logarithm or order of an element in a finite abelian group by Daniel Shanks. The discrete log problem is of fundamental importance to the area of public key cryptography.
Many of the most commonly used cryptography systems are based on the assumption that the discrete log is extremely difficult to compute; the more difficult it is, the more security it provides a data transfer. One way to increase the difficulty of the discrete log problem is to base the cryptosystem on a larger group.
Theory
The algorithm is based on a space–time tradeoff. It is a fairly simple modification of trial multiplication, the naive method of finding discrete logarithms.
Given a cyclic group G of order n, a generator α of the group and a group element β, the problem is to find an integer x such that
α^x = β.
The baby-step giant-step algorithm is based on rewriting x as x = im + j, with m = ⌈√n⌉, 0 ≤ i < m, and 0 ≤ j < m.
Therefore, we have:
α^j = β(α^(−m))^i.
The algorithm precomputes α^j for several values of j. Then it fixes an m and tries values of i in the right-hand side of the congruence above, in the manner of trial multiplication. It tests to see if the congruence is satisfied for any value of j, using the precomputed values of α^j.
The algorithm
Input: A cyclic group G of order n, having a generator α and an element β.
Output: A value x satisfying α^x = β.
m ← Ceiling(√n)
For all j where 0 ≤ j < m:
Compute α^j and store the pair (j, α^j) in a table. (See the "In practice" section below.)
Compute α^(−m).
γ ← β. (set γ = β)
For all i where 0 ≤ i < m:
Check to see if γ is the second component (α^j) of any pair in the table.
If so, return im + j.
If not, γ ← γ • α^(−m).
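A compact Python sketch of the procedure follows (an illustrative implementation, not taken from any particular source; the example group, the function name and the use of a dictionary as the lookup table are all illustrative choices):
from math import isqrt

def bsgs(alpha, beta, n, p):
    # Solve alpha^x = beta (mod p), where n is the order of alpha (or an upper bound on it).
    m = isqrt(n - 1) + 1                  # m = ceil(sqrt(n))
    table = {}                            # baby steps: alpha^j -> j
    aj = 1
    for j in range(m):
        table.setdefault(aj, j)
        aj = aj * alpha % p
    factor = pow(alpha, -m, p)            # alpha^(-m) mod p (Python 3.8+)
    gamma = beta % p
    for i in range(m):                    # giant steps
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * factor % p
    return None                           # no solution found

print(bsgs(2, 5, 28, 29))   # 22, since 2^22 = 5 (mod 29)
The dictionary plays the role of the hash table discussed in the next section, giving constant-time lookups during the giant steps.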
In practice
The best way to speed up the baby-step giant-step algorithm is to use an efficient table lookup scheme. The best in this case is a hash table. The hashing is done on the second component, and to perform the check in step 1 of the main loop, γ is hashed and the resulting memory address checked. Since hash tables can retrieve and add elements in O(1) time (constant time), this does not slow down the overall baby-step giant-step algorithm.
The space complexity of the algorithm is O(√n), while the time complexity of the algorithm is O(√n). This running time is better than the O(n) running time of the naive brute force calculation.
The baby-step giant-step algorithm could be used by an eavesdropper to derive the private key generated in the Diffie–Hellman key exchange, when the modulus is a prime number that is not too large. If the modulus is not prime, the Pohlig–Hellman algorithm has a smaller algorithmic complexity, and potentially solves the same problem.
Notes
The baby-step giant-step algorithm is a generic algorithm. It works for every finite cyclic group.
It is not necessary to know the exact order of the group G in advance. The algorithm still works if n is merely an upper bound on the group order.
Usually the baby-step giant-step algorithm is used for groups whose order is prime. If the order of the group is composite then the Pohlig–Hellman algorithm is more efficient.
The algorithm requires O(m) memory. It is possible to use less memory by choosing a smaller m in the first step of the algorithm. Doing so increases the running time, which then is O(n/m). Alternatively one can use Pollard's rho algorithm for logarithms, which has about the same running time as the baby-step giant-step algorithm, but only a small memory requirement.
While this algorithm is credited to Daniel Shanks, who published the 1971 paper in which it first appears, a 1994 paper by Nechaev states that it was known to Gelfond in 1962.
There exist optimized versions of the original algorithm, such as those using collision-free truncated lookup tables, or negation maps combined with Montgomery's simultaneous modular inversion.
Further reading
H. Cohen, A course in computational algebraic number theory, Springer, 1996.
D. Shanks, Class number, a theory of factorization and genera. In Proc. Symp. Pure Math. 20, pages 415—440. AMS, Providence, R.I., 1971.
A. Stein and E. Teske, Optimized baby step-giant step methods, Journal of the Ramanujan Mathematical Society 20 (2005), no. 1, 1–32.
A. V. Sutherland, Order computations in generic groups, PhD thesis, M.I.T., 2007.
D. C. Terr, A modification of Shanks’ baby-step giant-step algorithm, Mathematics of Computation 69 (2000), 767–773.
References
External links
Baby step-Giant step – example C source code
Group theory
Number theoretic algorithms
Articles with example C++ code | Baby-step giant-step | [
"Mathematics"
] | 1,059 | [
"Group theory",
"Fields of abstract algebra"
] |
563,950 | https://en.wikipedia.org/wiki/Coesite | Coesite () is a form (polymorph) of silicon dioxide (SiO2) that is formed when very high pressure (2–3 gigapascals), and moderately high temperature (), are applied to quartz. Coesite was first synthesized by Loring Coes, Jr., a chemist at the Norton Company, in 1953.
Occurrences
In 1960, a natural occurrence of coesite was reported by Edward C. T. Chao, in collaboration with Eugene Shoemaker, from Barringer Crater, in Arizona, US, which was evidence that the crater must have been formed by an impact. After this report, the presence of coesite in unmetamorphosed rocks was taken as evidence of a meteorite impact event or of an atomic bomb explosion. It was not expected that coesite would survive in high pressure metamorphic rocks.
In metamorphic rocks, coesite was initially described in eclogite xenoliths from the mantle of the Earth that were carried up by ascending magmas; kimberlite is the most common host of such xenoliths. In metamorphic rocks, coesite is now recognized as one of the best mineral indicators of metamorphism at very high pressures (UHP, or ultrahigh-pressure metamorphism). Such UHP metamorphic rocks record subduction or continental collisions in which crustal rocks are carried to depths of about 70 km or more. Coesite is formed at pressures above about 2.5 GPa (25 kbar) and temperature above about 700 °C. This corresponds to a depth of about 70 km in the Earth. It can be preserved as mineral inclusions in other phases because as it partially reverts to quartz, the quartz rim exerts pressure on the core of the grain, preserving the metastable grain as tectonic forces uplift and expose these rocks at the surface. As a result, the grains have a characteristic texture of a polycrystalline quartz rim.
Coesite has been identified in UHP metamorphic rocks around the world, including the western Alps of Italy at Dora Maira, the Ore Mountains of Germany, the Lanterman Range of Antarctica, in the Kokchetav Massif of Kazakhstan, in the Western Gneiss region of Norway, the Dabie-Shan Range in Eastern China, the Himalayas of Eastern Pakistan, and in the Appalachian Mountains of Vermont.
Crystal structure
Coesite is a tectosilicate with each silicon atom surrounded by four oxygen atoms in a tetrahedron. Each oxygen atom is then bonded to two Si atoms to form a framework. There are two crystallographically distinct Si atoms and five different oxygen positions in the unit cell. Although the unit cell is close to being hexagonal in shape ("a" and "c" are nearly equal and β nearly 120°), it is inherently monoclinic and cannot be hexagonal. The crystal structure of coesite is similar to that of feldspar and consists of four silicon dioxide tetrahedra arranged in Si4O8 and Si8O16 rings. The rings are further arranged into chains. This structure is metastable within the stability field of quartz: coesite will eventually decay back into quartz with a consequent volume increase, although the metamorphic reaction is very slow at the low temperatures of the Earth's surface. The crystal symmetry is monoclinic C2/c, No.15, Pearson symbol mS48.
See also
Seifertite, forming at higher pressure than stishovite
Stishovite, a higher-pressure polymorph
References
External links
Coesite page
Barringer Meteor Crater science education page
Impact event minerals
Silica polymorphs
Monoclinic minerals
Minerals in space group 15
Silicon dioxide | Coesite | [
"Materials_science"
] | 787 | [
"Silica polymorphs",
"Polymorphism (materials science)"
] |
563,960 | https://en.wikipedia.org/wiki/OpenAL | OpenAL (Open Audio Library) is a cross-platform audio application programming interface (API). It is designed for efficient rendering of multichannel three-dimensional positional audio. Its API style and conventions deliberately resemble those of OpenGL. OpenAL is an environmental 3D audio library, which can add realism to a game by simulating attenuation (degradation of sound over distance), the Doppler effect (change in frequency as a result of motion), and material densities.
OpenAL originally aimed to be an open standard and open-source replacement for proprietary (and generally incompatible with one another) 3D audio APIs such as DirectSound and Core Audio, though in practice it has largely been implemented on various platforms as a wrapper around said proprietary APIs or as a proprietary and vendor-specific fork. While the reference implementation later became proprietary and unmaintained, there are open source implementations such as OpenAL Soft available.
History
OpenAL was originally developed in 2000 by Loki Software to help them in their business of porting Windows games to Linux. After the demise of Loki, the project was maintained for a time by the free software/open source community, and implemented on NVIDIA nForce sound cards and motherboards. It was hosted (and largely developed) by Creative Technology until circa 2012.
Since 1.1 (2009), the sample implementation by Creative has turned proprietary, with the last releases in free licenses still accessible through the project's Subversion source code repository. However, OpenAL Soft is a widely used open source alternative and remains actively maintained and extended.
While the OpenAL charter says that there will be an "Architecture Review Board" (ARB) modeled on the OpenGL ARB, no such organization has ever been formed and the OpenAL specification is generally handled and discussed via email on its public mailing list.
The original mailing list, openal-devel hosted by Creative, ran from March 2003 to circa August 2012. Ryan C. Gordon, a Loki veteran who went on to develop Simple DirectMedia Layer, started a new mailing list and website at OpenAL.org in January 2014. As of February 2023, the list remains in use.
API structure and functionality
The general functionality of OpenAL is encoded in source objects, audio buffers and a single listener. A source object contains a pointer to a buffer, the velocity, position and direction of the sound, and the intensity of the sound. The listener object contains the velocity, position and direction of the listener, and the general gain applied to all sound. Buffers contain audio data in PCM format, either 8- or 16-bit, in either monaural or stereo format. The rendering engine performs all necessary calculations for distance attenuation, Doppler effect, etc.
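The following Python sketch is not the OpenAL API; it only illustrates, under simplified assumptions, the kind of per-source computation such a rendering engine performs (an inverse-distance attenuation model and a simple Doppler factor; the function and parameter names are invented for illustration):
import math

def spatialize(src_pos, src_vel, lst_pos, lst_vel,
               gain=1.0, ref_dist=1.0, rolloff=1.0, speed_of_sound=343.3):
    # Distance attenuation: gain falls off with distance beyond ref_dist.
    d = math.dist(src_pos, lst_pos)
    attenuation = gain * ref_dist / (ref_dist + rolloff * max(d - ref_dist, 0.0))
    # Doppler: project both velocities onto the source-to-listener direction.
    if d > 0:
        u = [(l - s) / d for s, l in zip(src_pos, lst_pos)]
        v_ls = sum(v * c for v, c in zip(lst_vel, u))   # listener speed along u
        v_ss = sum(v * c for v, c in zip(src_vel, u))   # source speed along u
    else:
        v_ls = v_ss = 0.0
    doppler = (speed_of_sound - v_ls) / (speed_of_sound - v_ss)
    return attenuation, doppler

# A source 10 m away, approaching the listener at 20 m/s: quieter, pitch shifted up.
print(spatialize((10, 0, 0), (-20, 0, 0), (0, 0, 0), (0, 0, 0)))
In the actual library these inputs are supplied through source and listener properties, and both the distance model and the Doppler behaviour are configurable.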
The net result of all of this for the end user is that in a properly written OpenAL application, sounds behave quite naturally as the user moves through the three-dimensional space of the virtual world. From a programmer's perspective, very little additional work is required to make this happen in an existing OpenGL-based 3D graphical application.
Unlike the OpenGL specification, the OpenAL specification includes two subsections of the API: the core consisting of the actual OpenAL function calls, and the ALC (Audio Library Context) API which is used to manage rendering contexts, resource usage and locking in a cross platform manner. There is also an 'ALUT' (Audio Library Utility Toolkit) library that provides higher level 'convenience' functions — exactly analogous to OpenGL's 'GLUT'.
In order to provide additional functionality in the future, OpenAL utilizes an extension mechanism. Individual vendors are thereby able to include their own extensions into distributions of OpenAL, commonly for the purpose of exposing additional functionality on their proprietary hardware. Extensions can be promoted to ARB (Architecture Review Board) status, indicating a standard extension which will be maintained for backwards compatibility. ARB extensions have the prospect of being added to the core API after a period of time.
For advanced digital signal processing and hardware-accelerated sound effects, the EFX (Effects Extension) or environmental audio extensions (EAX) can be used.
Limitations
The single listener model in OpenAL is tailored to a single human user and is not fit for artificial intelligence or robotic simulations or multiple human participants as in collaborative musical performances.
In these cases a multiple listener model is required. OpenAL also fails to take into account sound propagation delays (the speed of sound is used for the Doppler effect only). The distance to a sound source only translates into an amplitude effect (attenuation) and not a delay. Hence OpenAL cannot be used for time difference of arrival calculations unless that functionality is added in separately.
In order to take full speed advantage of OpenAL, a vendor/hardware specific implementation is needed and these are seldom released as open source. Many supported platforms in fact implement OpenAL as a wrapper which simply translates calls to the platform's native, and often proprietary, audio API. On Windows, if a vendor specific implementation is not detected it will fall back to the wrap_oal.dll wrapper library that translates OpenAL into DirectSound (Generic Software) or DirectSound3D (Generic Hardware); the removal of the latter from Windows Vista onward has effectively broken generic hardware acceleration on modern versions of Windows.
Supported platforms
The API is available on the following platforms and operating systems: Android (supports OpenSL ES), AmigaOS 3.x and 4.x, Bada, BlackBerry 10, BlackBerry PlayBook, BSD, iOS (supports Core Audio), IRIX, Linux (supports ALSA, OSS, PortAudio and PulseAudio), Mac OS 8, Mac OS 9 and Mac OS X (Core Audio), Microsoft Windows (supports DirectSound, Windows Multimedia API and Windows Multimedia Device (MMDevice) API), MorphOS, OpenBSD, Solaris, QNX, and AROS.
Supported gaming devices are for instance: GameCube, PlayStation 2, PlayStation 3, Xbox, Xbox 360, Wii, and PlayStation Portable.
Applications
Games
The following video games are known to use OpenAL:
0 A.D.
Alpha Protocol
America's Army: Operations
American Truck Simulator
Amnesia: The Dark Descent
Armed Assault
Baldur's Gate: Enhanced Edition
Battlefield 2
Battlefield 2142
BioShock
Bit.Trip
Colin McRae: DiRT
Doom 3
Euro Truck Simulator 2
FlightGear
ioquake3
Jedi Knight II: Jedi Outcast
Jedi Knight: Jedi Academy
Mari0
Mass Effect (video game)
Minecraft (through LWJGL)
OpenArena
Orbz
Penumbra: Overture
Postal 2
Prey
Psychonauts
Quake 4
Race Driver: Grid
Regnum Online
Running With Rifles
S.T.A.L.K.E.R.
System Shock 2
The Dark Mod
Tremulous
Unreal II: The Awakening
Unreal Tournament 2003
Unreal Tournament 2004
Unreal Tournament 3
War§ow
Wurm Online
Other applications
Blender – 3D modelling and rendering tool uses OpenAL for its built-in game engine
3DMark06 – Gamer's benchmarking tool
Dolphin (emulator) – GameCube and Wii emulator
Vanda Engine – uses OpenAL 1.1 to simulate 2D and 3D sounds
Croquet Project
Bino - Video player software that has support for stereoscopic 3D video and multi-display video
Implementations
OpenAL SI
The OpenAL Sample Implementation is the original implementation, from Loki, and is not currently maintained.
OpenAL Soft
OpenAL Soft is an LGPL-licensed, cross-platform, software implementation. The library is meant as a free compatible update/replacement to the now-deprecated and proprietary OpenAL Sample Implementation. OpenAL Soft supports mono, stereo (including HRTF and UHJ), 4-channel, 5.1, 6.1, 7.1, and B-Format output. Ambisonic assets are supported.
AeonWave-OpenAL
AeonWave-OpenAL is an LGPL-licensed OpenAL emulation layer that takes advantage of the hardware acceleration provided by the non-free but low cost AeonWave 4D-audio library for Linux and Windows made by Adalin B.V. The author claims that AeonWave-OpenAL implementation renders 3D audio five (on an AMD Athlon 64 X2) to seven (on an Intel Atom N270) times faster than either OpenAL SI or OpenAL Soft under the same conditions. By using the AeonWave library this implementation supports HRTF as well as spatialised surround sound for up to eight speakers.
Rapture3D OpenAL Driver
The Rapture3D OpenAL Driver is a non-free, commercial, Windows only, software implementation made by Blue Ripple Sound. The library is intended as a high performance drop-in replacement for other implementations. It features:
32bit floating point audio path.
High quality sample rate conversion (used for various purposes including Doppler shift).
High quality effects and filters.
Support for multi-channel sound sources (including assets encoded using Ambisonics).
The only limit on the number of sources or effects is CPU power, can render hundreds of sound sources and multiple effects on relatively old hardware.
Higher-order Ambisonics (HOA) bus running at up to fourth order.
Apple OpenAL
Apple ships an implementation of OpenAL in macOS and iOS. It is a very thin layer over the 3D Mixer (kAudioUnitSubType_3DMixer) feature in the operating system. This implementation was originally written by Ryan C. Gordon for Altivec Mac OS X systems.
MojoAL
Tiny (single-file), full OpenAL 1.1 implementation built on top of SDL2 by Ryan C. Gordon.
See also
OpenCL
OpenML
OpenMAX AL
FMOD
Java OpenAL
irrKlang
Lightweight Java Game Library
Web Audio – defines an API similar in some ways to OpenAL
References
External links
OpenAL official website
Implementations:
OpenAL Soft
AeonWave-OpenAL
Rapture3D advanced OpenAL 1.1 driver
Developer resources:
DevMaster.net OpenAL Tutorials (Note: these tutorials are showing their age slightly by, for instance, using deprecated functions such as alutLoadWAVFile)
OpenAL extension repository (maintained by Raulshc as of 2023; with table of supported extensions per implementation)
OpenAL package in Conan, a C++ package manager
Application programming interfaces
Audio libraries
Computer libraries
Cross-platform software
Formerly open-source or free software
Linux APIs
Video game engines | OpenAL | [
"Technology"
] | 2,168 | [
"IT infrastructure",
"Computer libraries"
] |
563,977 | https://en.wikipedia.org/wiki/Filgrastim | Filgrastim, sold under the brand name Neupogen among others, is a medication used to treat low neutrophil count. Low neutrophil counts may occur with HIV/AIDS, following chemotherapy or radiation poisoning, or be of an unknown cause. It may also be used to increase white blood cells for gathering during leukapheresis. It is given either by injection into a vein or under the skin. Filgrastim is a leukocyte growth factor.
Common side effects include fever, cough, chest pain, joint pain, vomiting, and hair loss. Severe side effects include splenic rupture and allergic reactions. It is unclear if use in pregnancy is safe for the baby. Filgrastim is a recombinant form of the naturally occurring granulocyte colony-stimulating factor (G-CSF). It works by stimulating the body to increase neutrophil production.
Filgrastim was approved for medical use in the United States in 1991. It is on the World Health Organization's List of Essential Medicines. Filgrastim biosimilar medications are available.
Medical uses
Filgrastim is used to treat neutropenia; acute myeloid leukemia; nonmyeloid malignancies; leukapheresis; congenital neutropenia, cyclic neutropenia, or idiopathic neutropenia; and myelosuppressive doses of radiation.
Tbo-filgrastim (Granix) is indicated for reduction in the duration of severe neutropenia in people with non-myeloid malignancies receiving myelosuppressive anti-cancer drugs associated with a clinically significant incidence of febrile neutropenia.
Adverse effects
The most commonly observed adverse effect is mild bone pain after repeated administration, and local skin reactions at the site of injection. Other observed adverse effects include serious allergic reactions (including a rash over the whole body, shortness of breath, wheezing, dizziness, swelling around the mouth or eyes, fast pulse, and sweating), ruptured spleen (sometimes resulting in death), alveolar hemorrhage, acute respiratory distress syndrome, and hemoptysis. Severe sickle cell crises, in some cases resulting in death, have been associated with the use of filgrastim in people with sickle cell disorders.
Interactions
Increased hematopoietic activity of the bone marrow in response to growth factor therapy has been associated with transient positive bone imaging changes; this should be considered when interpreting bone-imaging results.
Mechanism of action
G-CSF is a colony stimulating factor which has been shown to have minimal direct in vivo or in vitro effects on the production of other haematopoietic cell types. Neupogen (filgrastim) is the name for recombinant methionyl human granulocyte colony stimulating factor (r-metHuG-CSF).
Society and culture
Biosimilars
In 2015, Sandoz's filgrastim-sndz (Zarxio), obtained the approval of the US Food and Drug Administration (FDA) as a biosimilar. This was the first product to be passed under the Biologics Price Competition and Innovation Act of 2009 (BPCI Act), as part of the Affordable Care Act. Zarxio was approved as a biosimilar, not as an interchangeable product, the FDA notes. And under the BPCI Act, only a biologic that has been approved as an "interchangeable" may be substituted for the reference product without the intervention of the health care provider who prescribed the reference product. The FDA said its approval of Zarxio is based on review of evidence that included structural and functional characterization, animal study data, human pharmacokinetic and pharmacodynamics data, clinical immunogenicity data and other clinical safety and effectiveness data that demonstrates Zarxio is biosimilar to Neupogen.
In 2018, filgrastim-aafi (Nivestym) was approved for use in the United States.
In September 2008, Ratiograstim, Tevagrastim, Biograstim, and Filgrastim ratiopharm were approved for use in the European Union. Filgrastim ratiopharm was withdrawn in July 2011 and Biograstim was withdrawn in December 2016.
In February 2009, Filgrastim Hexal and Zarzio were approved for use in the European Union.
In June 2010, Nivestim was approved for use in the European Union.
In October 2013, Grastofil was approved for use in the European Union.
In September 2014, Accofil was approved for use in the European Union.
In 2016, Fraven was approved for use by the Republic of Turkey Ministry of Health.
Nivestym was approved for medical use in Canada in April 2020.
In October 2021, Nypozi was approved for medical use in Canada.
In February 2022, filgrastim-ayow (Releuko) was approved for medical use in the United States.
In June 2024, filgrastim-txid (Nypozi) was approved for medical use in the United States.
In December 2024, the Committee for Medicinal Products for Human Use of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Zefylti, intended for the treatment of neutropenia and the mobilization of peripheral blood progenitor cells. The applicant for this medicinal product is CuraTeQ Biologics s.r.o. Zefylti is a biosimilar medicinal product. It is highly similar to the reference product Neupogen (filgrastim), which has been authorized in various EU countries.
Economics
Shortly after it was introduced, analyses of whether filgrastim is a cost-effective way of preventing febrile neutropenia depended upon the clinical situation and the financial model used to pay for treatment. The longer-acting pegfilgrastim may in some cases be more cost-effective.
References
Further reading
Amgen
Drugs developed by Hoffmann-La Roche
Drugs developed by Novartis
Drugs developed by AbbVie
Drugs developed by Pfizer
Drugs acting on the blood and blood forming organs
Growth factors
Immunostimulants
Recombinant proteins
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Filgrastim | [
"Chemistry",
"Biology"
] | 1,358 | [
"Growth factors",
"Recombinant proteins",
"Biotechnology products",
"Signal transduction"
] |
563,980 | https://en.wikipedia.org/wiki/Look-and-say%20sequence | In mathematics, the look-and-say sequence is the sequence of integers beginning as follows:
1, 11, 21, 1211, 111221, 312211, 13112221, 1113213211, 31131211131221, ... .
To generate a member of the sequence from the previous member, read off the digits of the previous member, counting the number of digits in groups of the same digit. For example:
1 is read off as "one 1" or 11.
11 is read off as "two 1s" or 21.
21 is read off as "one 2, one 1" or 1211.
1211 is read off as "one 1, one 2, two 1s" or 111221.
111221 is read off as "three 1s, two 2s, one 1" or 312211.
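The read-off rule lends itself to a very short program; the following Python sketch (an illustrative implementation, not from any particular source) generates successive terms by grouping runs of equal digits:
from itertools import groupby

def look_and_say(term):
    # Read off each run of equal digits as "<run length><digit>".
    return "".join(str(len(list(run))) + digit for digit, run in groupby(term))

seq = ["1"]
for _ in range(7):
    seq.append(look_and_say(seq[-1]))
print(seq)   # ['1', '11', '21', '1211', '111221', '312211', '13112221', '1113213211']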
The look-and-say sequence was analyzed by John Conway
after he was introduced to it by one of his students at a party.
The idea of the look-and-say sequence is similar to that of run-length encoding.
If started with any digit d from 0 to 9 then d will remain indefinitely as the last digit of the sequence. For any d other than 1, the sequence starts as follows:
d, 1d, 111d, 311d, 13211d, 111312211d, 31131122211d, …
Ilan Vardi has called this sequence, starting with d = 3, the Conway sequence.
Basic properties
Growth
The sequence grows indefinitely. In fact, any variant defined by starting with a different integer seed number will (eventually) also grow indefinitely, except for the degenerate sequence: 22, 22, 22, 22, ... which remains the same size.
Digits presence limitation
No digits other than 1, 2, and 3 appear in the sequence, unless the seed number contains such a digit or a run of more than three of the same digit.
Cosmological decay
Conway's cosmological theorem asserts that every sequence eventually splits ("decays") into a sequence of "atomic elements", which are finite subsequences that never again interact with their neighbors. There are 92 elements containing the digits 1, 2, and 3 only, which John Conway named after the 92 naturally-occurring chemical elements up to uranium, calling the sequence audioactive. There are also two "transuranic" elements (Np and Pu) for each digit other than 1, 2, and 3.
Growth in length
The terms eventually grow in length by about 30% per generation. In particular, if Ln denotes the number of digits of the n-th member of the sequence, then the limit of the ratio Ln+1/Ln exists and is given by
lim (n→∞) Ln+1/Ln = λ,
where λ = 1.303577269034... is an algebraic number of degree 71. This fact was proven by Conway, and the constant λ is known as Conway's constant. The same result also holds for every variant of the sequence starting with any seed other than 22.
Conway's constant as a polynomial root
Conway's constant is the unique positive real root of a particular polynomial of degree 71 with integer coefficients.
This polynomial was correctly given in Conway's original Eureka article,
but in the reprinted version in the book edited by Cover and Gopinath one of its terms was incorrectly printed with a minus sign in front.
Popularization
The look-and-say sequence is also popularly known as the Morris Number Sequence, after cryptographer Robert Morris, and the puzzle "What is the next number in the sequence 1, 11, 21, 1211, 111221?" is sometimes referred to as the Cuckoo's Egg, from a description of Morris in Clifford Stoll's book The Cuckoo's Egg.
Variations
There are many possible variations on the rule used to generate the look-and-say sequence. For example, to form the "pea pattern" one reads the previous term and counts all instances of each digit, listed in order of their first appearance, not just those occurring in a consecutive block. So beginning with the seed 1, the pea pattern proceeds 1, 11 ("one 1"), 21 ("two 1s"), 1211 ("one 2 and one 1"), 3112 ("three 1s and one 2"), 132112 ("one 3, two 1s and one 2"), 311322 ("three 1s, one 3 and two 2s"), etc. This version of the pea pattern eventually forms a cycle with the two "atomic" terms 23322114 and 32232114.
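The pea-pattern rule differs from the look-and-say rule only in how the digits are counted; a minimal Python sketch (illustrative; it relies on dictionaries preserving insertion order, i.e. Python 3.7+) is:
def pea_next(term):
    counts = {}                               # digit -> total count, in order of first appearance
    for d in term:
        counts[d] = counts.get(d, 0) + 1
    return "".join(str(c) + d for d, c in counts.items())

t = "1"
for _ in range(6):
    t = pea_next(t)
    print(t)                                  # 11, 21, 1211, 3112, 132112, 311322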
Other versions of the pea pattern are also possible; for example, instead of reading the digits as they first appear, one could read them in ascending order instead . In this case, the term following 21 would be 1112 ("one 1, one 2") and the term following 3112 would be 211213 ("two 1s, one 2 and one 3"). This variation ultimately ends up repeating the number 21322314 ("two 1s, three 2s, two 3s and one 4").
These sequences differ in several notable ways from the look-and-say sequence. Notably, unlike the Conway sequences, a given term of the pea pattern does not uniquely define the preceding term. Moreover, for any seed the pea pattern produces terms of bounded length: the bound will not typically exceed 22 digits for decimal radix, and terms may only exceed this (up to 30 digits for decimal radix) in length for long, degenerate initial seeds (a sequence of "100 ones", etc.). For these extreme cases, individual elements of decimal sequences immediately settle into a fixed pattern: a permutation of count–digit pairs in which the counts are taken from the preceding sequence element.
Since the sequence is infinite, and the length of each element is bounded, it must eventually repeat, due to the pigeonhole principle. As a consequence, pea pattern sequences are always eventually periodic.
See also
Gijswijt's sequence
Kolakoski sequence
Autogram
Notes
References
External links
Conway speaking about this sequence and telling that it took him some explanations to understand the sequence.
Implementations in many programming languages on Rosetta Code
Look and Say sequence generator p
A Derivation of Conway’s Degree-71 “Look-and-Say” Polynomial
Base-dependent integer sequences
Algebraic numbers
Mathematical constants
John Horton Conway | Look-and-say sequence | [
"Mathematics"
] | 1,336 | [
"Mathematical objects",
"Algebraic numbers",
"nan",
"Mathematical constants",
"Numbers"
] |
564,004 | https://en.wikipedia.org/wiki/Graph%20reduction | In computer science, graph reduction implements an efficient version of non-strict evaluation, an evaluation strategy where the arguments to a function are not immediately evaluated. This form of non-strict evaluation is also known as lazy evaluation and used in functional programming languages. The technique was first developed by Chris Wadsworth in 1971.
Motivation
A simple example of evaluating an arithmetic expression follows, using the expression ((2 + 2) + (2 + 2)) + (3 + 3):
((2 + 2) + (2 + 2)) + (3 + 3)
= (4 + (2 + 2)) + (3 + 3)
= (4 + 4) + (3 + 3)
= 8 + (3 + 3)
= 8 + 6
= 14
The above reduction sequence employs a strategy known as outermost tree reduction. The same expression can be evaluated using innermost tree reduction, yielding the reduction sequence:
((2 + 2) + (2 + 2)) + (3 + 3)
= ((2 + 2) + (2 + 2)) + 6
= ((2 + 2) + 4) + 6
= (4 + 4) + 6
= 8 + 6
= 14
Notice that the reduction order is made explicit by the addition of parentheses. This expression could also have been simply evaluated right to left, because addition is an associative operation.
Represented as a tree, the expression has the final addition at the root, with the two copies of (2 + 2) and the single (3 + 3) as subtrees below it. This is where the term tree reduction comes from. When represented as a tree, we can think of innermost reduction as working from the bottom up, while outermost works from the top down.
The expression can also be represented as a directed acyclic graph, allowing sub-expressions to be shared: the two occurrences of (2 + 2) become a single node referenced twice.
As for trees, outermost and innermost reduction also applies to graphs. Hence we have graph reduction.
Now evaluation with outermost graph reduction can proceed as follows, with the shared sub-expression reduced only once:
((2 + 2) + (2 + 2)) + (3 + 3)
= (4 + 4) + (3 + 3)
= 8 + (3 + 3)
= 8 + 6
= 14
Notice that evaluation now only requires four steps. Outermost graph reduction is referred to as lazy evaluation and innermost graph reduction is referred to as eager evaluation.
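The effect of sharing can be sketched in a few lines of Python (an illustrative model only, not how real graph-reduction machines are implemented: sharing is modelled by Python object identity, and each addition performed counts as one reduction step):
def plus(a, b):
    return ("+", a, b)          # build a fresh node; plain integers are leaves

def reduce_node(node, cache, steps):
    if isinstance(node, int):
        return node
    if id(node) in cache:       # a shared node is only reduced once
        return cache[id(node)]
    _, left, right = node
    value = reduce_node(left, cache, steps) + reduce_node(right, cache, steps)
    steps.append(value)         # one reduction step performed
    cache[id(node)] = value
    return value

tree = plus(plus(plus(2, 2), plus(2, 2)), plus(3, 3))     # no sharing
shared = plus(2, 2)
graph = plus(plus(shared, shared), plus(3, 3))            # (2 + 2) shared

for expr in (tree, graph):
    steps = []
    print(reduce_node(expr, {}, steps), "in", len(steps), "steps")
# prints: 14 in 5 steps, then 14 in 4 steps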
Combinator graph reduction
Combinator graph reduction is a fundamental implementation technique for functional programming languages, in which a program is converted into a combinator representation which is mapped to a directed graph data structure in computer memory, and program execution then consists of rewriting parts of this graph ("reducing" it) so as to move towards useful results.
History
The concept of a graph reduction that allows evaluated values to be shared was first developed by Chris Wadsworth in his 1971 Ph.D. dissertation. This dissertation was cited by Peter Henderson and James H. Morris Jr. in their 1976 paper, "A lazy evaluator", which introduced the notion of lazy evaluation. In 1976 David Turner incorporated lazy evaluation into SASL using combinators.
SASL was an early functional programming language first developed by Turner in 1972.
See also
Graph reduction machine
SECD machine
Notes
References
Further reading
Implementation of functional programming languages
Graph algorithms
Graph rewriting | Graph reduction | [
"Mathematics"
] | 465 | [
"Mathematical relations",
"Graph theory",
"Graph rewriting"
] |
564,276 | https://en.wikipedia.org/wiki/Delimiter | A delimiter is a sequence of one or more characters for specifying the boundary between separate, independent regions in plain text, mathematical expressions or other data streams. An example of a delimiter is the comma character, which acts as a field delimiter in a sequence of comma-separated values. Another example of a delimiter is the time gap used to separate letters and words in the transmission of Morse code.
In mathematics, delimiters are often used to specify the scope of an operation, and can occur both as isolated symbols (e.g., the colon in set-builder notation such as { x : x > 0 }) and as a pair of opposing-looking symbols (e.g., the angle brackets in ⟨x, y⟩).
Delimiters represent one of various means of specifying boundaries in a data stream. Declarative notation, for example, is an alternate method (without the use of delimiters) that uses a length field at the start of a data stream to specify the number of characters that the data stream contains.
Overview
Delimiters may be characterized as field and record delimiters, or as bracket delimiters.
Field and record delimiters
Field delimiters separate data fields. Record delimiters separate groups of fields.
For example, the CSV format uses a comma as the delimiter between fields, and an end-of-line indicator as the delimiter between records:
fname,lname,age,salary
nancy,davolio,33,$30000
erin,borakova,28,$25250
tony,raphael,35,$28700
This specifies a simple flat-file database table using the CSV file format.
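In Python, for example, the standard csv module handles both the field delimiter and the record delimiter, including the quoting needed when a field itself contains a comma (a minimal sketch; the sample data mirrors the table above):
import csv, io

data = "fname,lname,age,salary\nnancy,davolio,33,$30000\n"
rows = list(csv.reader(io.StringIO(data)))     # comma = field delimiter, newline = record delimiter
print(rows[1])                                 # ['nancy', 'davolio', '33', '$30000']

out = io.StringIO()
csv.writer(out).writerow(["nancy", "davolio", 33, "$30,000"])
print(out.getvalue())                          # nancy,davolio,33,"$30,000" -- the comma-bearing field is quoted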
Bracket delimiters
Bracket delimiters, also called block delimiters, region delimiters, or balanced delimiters, mark both the start and end of a region of text.
Common examples of bracket delimiters include parentheses ( ), square brackets [ ], curly braces { }, angle brackets < >, and single or double quotation marks.
Conventions
Historically, computing platforms have used certain delimiters by convention.
Programming languages
(See also, Comparison of programming languages (syntax)).
Field and Record delimiters (See also, ASCII, Control character).
Delimiter collision
Delimiter collision is a problem that occurs when an author or programmer introduces delimiters into text without actually intending them to be interpreted as boundaries between separate regions. In the case of XML, for example, this can occur whenever an author attempts to specify an angle bracket character.
In most file types there is both a field delimiter and a record delimiter, both of which are subject to collision. In the case of comma-separated values files, for example, field collision can occur whenever an author attempts to include a comma as part of a field value (e.g., salary = "$30,000"), and record delimiter collision would occur whenever a field contained multiple lines. Both record and field delimiter collision occur frequently in text files.
In some contexts, a malicious user or attacker may seek to exploit this problem intentionally. Consequently, delimiter collision can be the source of security vulnerabilities and exploits. Malicious users can take advantage of delimiter collision in languages such as SQL and HTML to deploy such well-known attacks as SQL injection and cross-site scripting, respectively.
Solutions
Because delimiter collision is a very common problem, various methods for avoiding it have been invented. Some authors may attempt to avoid the problem by choosing a delimiter character (or sequence of characters) that is not likely to appear in the data stream itself. This ad hoc approach may be suitable, but it necessarily depends on a correct guess of what will appear in the data stream, and offers no security against malicious collisions. Other, more formal conventions are therefore applied as well.
ASCII delimited text
The ASCII and Unicode character sets were designed to solve this problem by the provision of non-printing characters that can be used as delimiters. These are the range from ASCII 28 to 31.
The use of ASCII 31 Unit separator as a field separator and ASCII 30 Record separator solves the problem of both field and record delimiters that appear in a text data stream.
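A small Python sketch of this scheme follows (the field values are illustrative):
US, RS = "\x1f", "\x1e"    # ASCII unit separator (31) and record separator (30)

records = [["nancy", "davolio", "$30,000"], ["erin", "borakova", "$25,250"]]
encoded = RS.join(US.join(fields) for fields in records)

decoded = [rec.split(US) for rec in encoded.split(RS)]
print(decoded == records)  # True; commas and newlines in the data cause no collision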
Escape character
One method for avoiding delimiter collision is to use escape characters. From a language design standpoint, these are adequate, but they have drawbacks:
text can be rendered unreadable when littered with numerous escape characters, a problem referred to as leaning toothpick syndrome (due to use of \ to escape / in Perl regular expressions, leading to sequences such as "\/\/");
text becomes difficult to parse with regular expressions;
they require a mechanism to "escape the escapes" when not intended as escape characters; and
although easy to type, they can be cryptic to someone unfamiliar with the language.
they do not protect against injection attacks
Escape sequence
Escape sequences are similar to escape characters, except they usually consist of some kind of mnemonic instead of just a single character. One use is in string literals that include a doublequote (") character. For example in Perl, the code:
print "Nancy said \x22Hello World!\x22 to the crowd."; ### use \x22
produces the same output as:
print "Nancy said \"Hello World!\" to the crowd."; ### use escape char
One drawback of escape sequences, when used by people, is the need to memorize the codes that represent individual characters (see also: character entity reference, numeric character reference).
Dual quoting delimiters
In contrast to escape sequences and escape characters, dual delimiters provide yet another way to avoid delimiter collision. Some languages, for example, allow the use of either a single quote (') or a double quote (") to specify a string literal. For example, in Perl:
print 'Nancy said "Hello World!" to the crowd.';
produces the desired output without requiring escapes. This approach, however, only works when the string does not contain both types of quotation marks.
Padding quoting delimiters
In contrast to escape sequences and escape characters, padding delimiters provide yet another way to avoid delimiter collision. Visual Basic, for example, uses double quotes as delimiters. This is similar to escaping the delimiter.
print "Nancy said ""Hello World!"" to the crowd."
produces the desired output without requiring escapes. Like regular escaping it can, however, become confusing when many quotes are used.
The code to print the above source code would look more confusing:
print "print ""Nancy said """"Hello World!"""" to the crowd."""
Configurable alternative quoting delimiters
In contrast to dual delimiters, multiple delimiters are even more flexible for avoiding delimiter collision.
For example, in Perl:
print qq^Nancy doesn't want to say "Hello World!" anymore.^;
print qq@Nancy doesn't want to say "Hello World!" anymore.@;
print qq(Nancy doesn't want to say "Hello World!" anymore.);
all produce the desired output through use of quote operators, which allow any convenient character to act as a delimiter. Although this method is more flexible, few languages support it. Perl and Ruby are two that do.
Content boundary
A content boundary is a special type of delimiter that is specifically designed to resist delimiter collision. It works by allowing the author to specify a sequence of characters that is guaranteed to always indicate a boundary between parts in a multi-part message, with no other possible interpretation.
The delimiter is frequently generated from a random sequence of characters that is statistically improbable to occur in the content. This may be followed by an identifying mark such as a UUID, a timestamp, or some other distinguishing mark. Alternatively, the content may be scanned to guarantee that a delimiter does not appear in the text. This may allow the delimiter to be shorter or simpler, and increase the human readability of the document. (See e.g., MIME, Here documents).
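A minimal Python sketch of generating such a boundary (illustrative only; real MIME libraries have their own boundary-generation logic, and the boundary prefix used here is arbitrary):
import uuid

def make_boundary(content):
    # Keep generating random boundaries until one is found that does not occur in the content.
    while True:
        boundary = "----=_Part_" + uuid.uuid4().hex
        if boundary not in content:
            return boundary

body = 'Nancy said "Hello World!" to the crowd.'
print(make_boundary(body))   # e.g. ----=_Part_3f2c...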
Whitespace or indentation
Some programming and computer languages allow the use of whitespace delimiters or indentation as a means of specifying boundaries between independent regions in text.
Regular expression syntax
In specifying a regular expression, alternate delimiters may also be used to simplify the syntax for match and substitution operations in Perl.
For example, a simple match operation may be specified in Perl with the following syntax:
$string1 = 'Nancy said "Hello World!" to the crowd.'; # specify a target string
print $string1 =~ m/[aeiou]+/; # match one or more vowels
The syntax is flexible enough to specify match operations with alternate delimiters, making it easy to avoid delimiter collision:
$string1 = 'Nancy said "http://Hello/World.htm" is not a valid address.'; # target string
print $string1 =~ m@http://@; # match using alternate regular expression delimiter
print $string1 =~ m{http://}; # same as previous, but different delimiter
print $string1 =~ m!http://!; # same as previous, but different delimiter.
Here document
A here document allows the inclusion of arbitrary content by describing a special end sequence. Many languages support this, including PHP, Bash scripts, Ruby, and Perl. A here document starts by describing what the end sequence will be and continues until that sequence is seen at the start of a new line.
Here is an example in perl:
print <<ENDOFHEREDOC;
It's very hard to encode a string with "certain characters".
Newlines, commas, and other characters can cause delimiter collisions.
ENDOFHEREDOC
This code would print:
It's very hard to encode a string with "certain characters".
Newlines, commas, and other characters can cause delimiter collisions.
By using a special end sequence all manner of characters are allowed in the string.
ASCII armor
Although principally used as a mechanism for text encoding of binary data,
ASCII armoring is a programming and systems administration technique that also helps to avoid delimiter collision in some circumstances. This technique is contrasted from the other approaches described above because it is more complicated, and therefore not suitable for small applications and simple data storage formats. The technique employs a special encoding scheme, such as base64, to ensure that delimiter or other significant characters do not appear in transmitted data. The purpose is to prevent multilayered escaping, i.e. for doublequotes.
This technique is used, for example, in Microsoft's ASP.NET web development technology, and is closely associated with the "VIEWSTATE" component of that system.
Example
The following simplified example demonstrates how this technique works in practice.
The first code fragment shows a simple HTML tag in which the VIEWSTATE value contains characters that are incompatible with the delimiters of the HTML tag itself:
<input type="hidden" name="__VIEWSTATE" value="BookTitle:Nancy doesn't say "Hello World!" anymore." />
This first code fragment is not well-formed, and would therefore not work properly in a "real world" deployed system.
To store arbitrary text in an HTML attribute, HTML entities can be used. In this case """ stands in for the double-quote:
<input type="hidden" name="__VIEWSTATE" value="BookTitle:Nancy doesn't say "Hello World!" anymore." />
Alternatively, any encoding could be used that doesn't include characters that have special meaning in the context, such as base64:
<input type="hidden" name="__VIEWSTATE" value="Qm9va1RpdGxlOk5hbmN5IGRvZXNuJ3Qgc2F5ICJIZWxsbyBXb3JsZCEiIGFueW1vcmUu" />
Or percent-encoding:
<input type="hidden" name="__VIEWSTATE" value="BookTitle:Nancy%20doesn%27t%20say%20%22Hello%20World!%22%20anymore." />
This prevents delimiter collision and ensures that incompatible characters will not appear inside the HTML code, regardless of what characters appear in the original (decoded) text.
See also
CDATA
Decimal separator
Delimiter-separated values
Escape sequence
String literal
Tab-separated values
References
External links
Data File Metaformats from The Art of Unix Programming by Eric Steven Raymond
Markup languages
Pattern matching
Programming constructs
String (computer science) | Delimiter | [
"Mathematics",
"Technology"
] | 2,712 | [
"Sequences and series",
"Computer science",
"Mathematical structures",
"String (computer science)"
] |
564,332 | https://en.wikipedia.org/wiki/San%20Juan%20Island | San Juan Island is the second-largest and most populous of the San Juan Islands in northwestern Washington, United States. It has a land area of 142.59 km2 (55.053 sq mi) and a population of 8,632 as of the 2020 census.
Washington State Ferries serves Friday Harbor, which is San Juan Island's major population center, the San Juan County seat, and the only incorporated town in the islands.
History
The name "San Juan" originates from the 1791 expedition of Francisco de Eliza, who named the archipelago Isla y Archipiélago de San Juan to honor his patron sponsor, Juan Vicente de Güemes Padilla Horcasitas y Aguayo, 2nd Count of Revillagigedo. One of the officers under Eliza's command, Gonzalo López de Haro, was the first European to discover San Juan Island. During the Wilkes Expedition, American explorer Charles Wilkes renamed the island Rodgers Island; the Spanish name remained on British nautical charts and over time became the island's official name.
The island saw seasonal use for salmon fishing. The island was also occupied by Native Americans, many of whom arrived seasonally for fishing. The Hudson's Bay Company (HBC) established the first permanent, non-native settlement on the island on December 13, 1853, in order to create a sheep farm. The Belle Vue Sheep Farm, set up by Chief Factor and Governor of the Colony of Vancouver Island, James Douglas, was intended to assert British sovereignty over the disputed San Juan Islands.
Both the British and Americans asserted control of the island. A small force of American soldiers was sent to the island over concern for this issue and with Native American raids on American settlers. The territorial dispute over this island and the rest of the San Juan Islands heightened when an American settler shot an HBC pig, starting the Pig War in 1859. By November 1859, an agreement was reached for a joint British–American control of the island until the matter was resolved by negotiation. In 1861, the HBC decided to give up the sheep farm due to disruption of HBC sheep by Americans, the subsequent demoralization of HBC employees, and lack of British government support. In 1862, Chief Trader Charles John Griffin left the island and leased the Belle Vue Sheep Farm to Robert Firth, a shepherd, from 1864 to 1873.
The dispute was finally resolved in favor of the Americans in 1872.
The 1862 Pacific Northwest smallpox epidemic swept through the region, killing large numbers of indigenous people. Smallpox Bay, on the west side of San Juan Island, was named for victims of this epidemic.
Island life
San Juan Island is considered a "small town" community, in that it offers relatively quiet rural living with few distractions or incidents aside from tourism. One notable resident is Lisa "Ivory" Moretti, a retired female professional wrestler of World Wrestling Entertainment fame. In addition, many Hollywood stars and celebrities spend time on the island to avoid publicity and to seek some peace and quiet.
Historical sites
A pair of landmarks, the old English and American Camps, are at opposite ends of the island, which together comprise the San Juan Island National Historical Park, which commemorates the 1859 Pig War. Interpretive centers and reconstructed buildings, formal gardens, etc. recall the history of early European settlement in the area.
Infrastructure
The Island has a hospital, the Peace Health Peace Island Medical Center.
Transportation to the Island is by boat, Washington State Ferries, seaplane, or by conventional aircraft. If traveling by seaplane, Friday Harbor is serviced by Northwest Seaplanes and Kenmore Air, both longtime operators in the area. The Friday Harbor Airport terminal is 1.3 miles from the Ferry Landing. Outside of Friday Harbor, the only major commercial establishment resort is the village of Roche Harbor, located on the northwest side of the island.
Media
San Juan Island has a number of weekly newspapers and two online daily news sites: the San Juan Islander and the Island Guardian.
Tourism
The Island is dotted with numerous farms, and is a tourist-driven economy. The island hosts two substantial marinas, one in Friday Harbor, the other in Roche Harbor. Both count tall ships and large yachts as frequent visitors.
It has several attractions including The Whale Museum; a contemporary Art Museum building completed in 2015; the San Juan Community Theatre; the Sculpture Park (near Roche Harbor); the San Juan Historical Museum; and Lime Kiln Point State Park where visitors can watch orca pods swim by.
Schools
Public schools are operated by the San Juan Island School District #149. It operates four schools: Friday Harbor Elementary School, Friday Harbor Middle School, Friday Harbor High School, Griffin Bay Schools (alternative high school, parent-partner home school program, online courses, and virtual school), and Stuart Island School (K-8). There are also two privately operated schools.
The University of Washington runs Friday Harbor Laboratories, a marine research lab and campus outside Friday Harbor. The campus has been extant since 1909 and has dormitories, a food service, and classrooms for holding lectures.
Ecology
The waters surrounding San Juan Island are home to a variety of species including red sea urchins and pinto abalone. Though no commercial fishing of abalone has ever been allowed in this area, recreational fishing of abalone was outlawed in 1994. The National Marine Fisheries Service listed pinto abalone as a Species of Concern in 2004.
In 2015, San Juan Island became a protected location under The Antiquities Act. On March 25 of that year, former President Barack Obama included San Juan Island and approximately 75 other sites located in the Salish Sea into the San Juan Islands National Monument.
Parks and recreation
Lime Kiln Park is so named because it housed a lime kiln and is home to the historic Lime Kiln Light. Camping is also available around the island.
There are a few small, family-run aquaculture farms in the San Juan Islands including Westcott Bay Shellfish Co, where visitors can buy oysters, clams, and mussels and see shellfish farming operations. Whale watching and night-time bioluminescence tours depart from Friday Harbor.
Notable people
Guthrie Burnett-Tison, performing artist
Singer Jake Shears grew up partly on the island.
References
External links
San Juan Islander - daily news site
San Juan Island Chamber of Commerce
San Juan Island Heritage Historical collections from the San Juan Island Library District and local partners.
American Biography A New Cyclopedia VOL 5 Page 28 San Juan Island and Northwest Boundary Survey by Archibald Campbell led to it being in the USA instead of Canada
San Juan Islands
Lime kilns in the United States
Places with bioluminescence | San Juan Island | [
"Chemistry",
"Biology"
] | 1,341 | [
"Places with bioluminescence",
"Bioluminescence"
] |
564,361 | https://en.wikipedia.org/wiki/Site-directed%20mutagenesis | Site-directed mutagenesis is a molecular biology method that is used to make specific and intentional mutating changes to the DNA sequence of a gene and any gene products. Also called site-specific mutagenesis or oligonucleotide-directed mutagenesis, it is used for investigating the structure and biological activity of DNA, RNA, and protein molecules, and for protein engineering.
Site-directed mutagenesis is one of the most important laboratory techniques for creating DNA libraries by introducing mutations into DNA sequences. There are numerous methods for achieving site-directed mutagenesis, but with decreasing costs of oligonucleotide synthesis, artificial gene synthesis is now occasionally used as an alternative to site-directed mutagenesis. Since 2013, the development of the CRISPR/Cas9 technology, based on a prokaryotic viral defense system, has also allowed for the editing of the genome, and mutagenesis may be performed in vivo with relative ease.
History
Early attempts at mutagenesis using radiation or chemical mutagens were non-site-specific, generating random mutations. Analogs of nucleotides and other chemicals were later used to generate localized point mutations, examples of such chemicals are aminopurine, nitrosoguanidine, and bisulfite. Site-directed mutagenesis was achieved in 1974 in the laboratory of Charles Weissmann using a nucleotide analogue N4-hydroxycytidine, which induces transition of GC to AT. These methods of mutagenesis, however, are limited by the kind of mutation they can achieve, and they are not as specific as later site-directed mutagenesis methods.
In 1971, Clyde Hutchison and Marshall Edgell showed that it is possible to produce mutants with small fragments of phage ϕX174 and restriction nucleases. In 1978, Hutchison and his collaborator Michael Smith developed a more flexible approach to site-directed mutagenesis by using oligonucleotides in a primer extension method with DNA polymerase. For his part in the development of this process, Michael Smith later shared the Nobel Prize in Chemistry in October 1993 with Kary B. Mullis, who invented the polymerase chain reaction.
Basic mechanism
The basic procedure requires the synthesis of a short DNA primer. This synthetic primer contains the desired mutation and is complementary to the template DNA around the mutation site so it can hybridize with the DNA in the gene of interest. The mutation may be a single base change (a point mutation), multiple base changes, deletion, or insertion. The single-strand primer is then extended using a DNA polymerase, which copies the rest of the gene. The gene thus copied contains the mutated site, and is then introduced into a host cell in a vector and cloned. Finally, mutants are selected by DNA sequencing to check that they contain the desired mutation.
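As an illustration of the primer-design step, the following sketch builds a forward mutagenic primer carrying a single-base substitution together with its reverse-complement partner. It is a minimal toy example: the template sequence, the mutation position and the flank length are made-up values, not part of any published protocol.

```python
# Minimal sketch of mutagenic primer design for a single-base substitution.
# Template, position and flank length below are illustrative assumptions only.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    """Return the reverse complement of a DNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def design_point_mutation_primers(template, position, new_base, flank=15):
    """Build a forward primer with the desired mutation at its centre and
    the complementary (reverse) primer used to copy the other strand."""
    mutated = template[:position] + new_base + template[position + 1:]
    start = max(0, position - flank)
    end = min(len(mutated), position + flank + 1)
    forward = mutated[start:end]
    return forward, reverse_complement(forward)

# Toy usage: introduce a G at index 20 of a made-up template sequence.
template = "ATGGCTAGCAAGGAGAAACTCACTGTTGCAGCTGGTGCTAAA"
forward_primer, reverse_primer = design_point_mutation_primers(template, 20, "G")
print(forward_primer)
print(reverse_primer)
```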
Approaches
The original method using single-primer extension was inefficient due to a low yield of mutants. The resulting mixture contains both the original unmutated template and the mutant strand, producing a mixed population of mutant and non-mutant progeny. Furthermore, the template used is methylated while the mutant strand is unmethylated, and the mutants may be counter-selected due to the presence of a mismatch repair system that favors the methylated template DNA, resulting in fewer mutants. Many approaches have since been developed to improve the efficiency of mutagenesis.
A large number of methods are available to effect site-directed mutagenesis, although most of them have rarely been used in laboratories since the early 2000s, as newer techniques allow for simpler and easier ways of introducing site-specific mutation into genes.
Kunkel's method
In 1985, Thomas Kunkel introduced a technique that reduces the need to select for the mutants. The DNA fragment to be mutated is inserted into a phagemid such as M13mp18/19 and is then transformed into an E. coli strain deficient in two enzymes, dUTPase (dut) and uracil deglycosidase (udg). Both enzymes are part of a DNA repair pathway that protects the bacterial chromosome from mutations by the spontaneous deamination of dCTP to dUTP. The dUTPase deficiency prevents the breakdown of dUTP, resulting in a high level of dUTP in the cell. The uracil deglycosidase deficiency prevents the removal of uracil from newly synthesized DNA. As the double-mutant E. coli replicates the phage DNA, its enzymatic machinery may, therefore, misincorporate dUTP instead of dTTP, resulting in single-strand DNA that contains some uracils (ssUDNA). The ssUDNA is extracted from the bacteriophage that is released into the medium, and then used as template for mutagenesis. An oligonucleotide containing the desired mutation is used for primer extension. The heteroduplex DNA, that forms, consists of one parental non-mutated strand containing dUTP and a mutated strand containing dTTP. The DNA is then transformed into an E. coli strain carrying the wildtype dut and udg genes. Here, the uracil-containing parental DNA strand is degraded, so that nearly all of the resulting DNA consists of the mutated strand.
Cassette mutagenesis
Unlike other methods, cassette mutagenesis need not involve primer extension using DNA polymerase. In this method, a fragment of DNA is synthesized, and then inserted into a plasmid. It involves the cleavage by a restriction enzyme at a site in the plasmid and subsequent ligation of a pair of complementary oligonucleotides containing the mutation in the gene of interest to the plasmid. Usually, the restriction enzymes that cut at the plasmid and the oligonucleotide are the same, permitting sticky ends of the plasmid and insert to ligate to one another. This method can generate mutants at close to 100% efficiency, but is limited by the availability of suitable restriction sites flanking the site that is to be mutated.
PCR site-directed mutagenesis
The limitation of restriction sites in cassette mutagenesis may be overcome using polymerase chain reaction with oligonucleotide "primers", such that a larger fragment may be generated, covering two convenient restriction sites. The exponential amplification in PCR produces a fragment containing the desired mutation in sufficient quantity to be separated from the original, unmutated plasmid by gel electrophoresis, which may then be inserted in the original context using standard recombinant molecular biology techniques. There are many variations of the same technique. The simplest method places the mutation site toward one of the ends of the fragment whereby one of two oligonucleotides used for generating the fragment contains the mutation. This involves a single step of PCR, but still has the inherent problem of requiring a suitable restriction site near the mutation site unless a very long primer is used. Other variations, therefore, employ three or four oligonucleotides, two of which may be non-mutagenic oligonucleotides that cover two convenient restriction sites and generate a fragment that can be digested and ligated into a plasmid, whereas the mutagenic oligonucleotide may be complementary to a location within that fragment well away from any convenient restriction site. These methods require multiple steps of PCR so that the final fragment to be ligated can contain the desired mutation. The design process for generating a fragment with the desired mutation and relevant restriction sites can be cumbersome. Software tools like SDM-Assist can simplify the process.
Whole plasmid mutagenesis
For plasmid manipulations, other site-directed mutagenesis techniques have been supplanted largely by techniques that are highly efficient but relatively simple, easy to use, and commercially available as a kit. An example of these techniques is the "Quikchange" method, wherein a pair of complementary mutagenic primers are used to amplify the entire plasmid in a thermocycling reaction using a high-fidelity non-strand-displacing DNA polymerase such as Pfu polymerase. The reaction generates a nicked, circular DNA. The template DNA must be eliminated by enzymatic digestion with a restriction enzyme such as DpnI, which is specific for methylated DNA. All DNA produced from most Escherichia coli strains would be methylated; the template plasmid that is biosynthesized in E. coli will, therefore, be digested, while the mutated plasmid, which is generated in vitro and is therefore unmethylated, would be left undigested. Note that, in these double-strand plasmid mutagenesis methods, while the thermocycling reaction may be used, the DNA is not exponentially amplified if the two primers are designed such that they bind symmetrically to the same region around the mutagenesis site, as described in the original protocol. In this case the amplification is linear, and it is therefore inaccurate to describe the procedure as a PCR, since there is no chain reaction. However, if the primers are designed to bind in an offset manner such that the mutagenesis site is close to the 5' end of both primers, the 3' region of the primers can also bind to the amplified products and thus exponential product formation is observed. The name "Quikchange" originates from the registered trademark "QuikChange mutagenesis" of Stratagene, now Agilent Technologies, for site-directed mutagenesis kits. The method was developed by scientists working at Stratagene.
Note that Pfu polymerase can become strand-displacing at higher extension temperature (≥70 °C) which can result in the failure of the experiment, therefore the extension reaction should be performed at the recommended temperature of 68 °C. In some applications, this method has been observed to lead to insertion of multiple copies of primers. A variation of this method, called SPRINP, prevents this artifact and has been used in different types of site directed mutagenesis.
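Primer design for such whole-plasmid methods usually also involves checking the primer melting temperature. The sketch below uses the simple %GC/length formula commonly quoted for mutagenic primers (Tm ≈ 81.5 + 0.41·%GC − 675/N − %mismatch); the exact formula and the thresholds recommended by any particular kit may differ, so the numbers are purely illustrative.

```python
# Rough melting-temperature estimate for a mutagenic primer using the
# commonly quoted %GC/length formula; individual kit manuals may differ.

def primer_tm(primer, mismatches=1):
    """Estimate Tm in degrees C as 81.5 + 0.41*%GC - 675/N - %mismatch."""
    primer = primer.upper()
    n = len(primer)
    gc_percent = 100.0 * (primer.count("G") + primer.count("C")) / n
    mismatch_percent = 100.0 * mismatches / n
    return 81.5 + 0.41 * gc_percent - 675.0 / n - mismatch_percent

primer = "GCTAGCAAGGAGAAACTCACTGTTGCAGCT"  # made-up 30-mer carrying one mismatch
print(round(primer_tm(primer), 1))
```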
Other techniques such as scanning mutagenesis of oligo-directed targets (SMOOT) can semi-randomly combine mutagenic oligonucleotides in plasmid mutagenesis. This technique can create plasmid mutagenesis libraries ranging from single mutations to comprehensive codon mutagenesis across an entire gene.
In vivo site-directed mutagenesis methods
Delitto perfetto
Transplacement "pop-in pop-out"
Direct gene deletion and site-specific mutagenesis with PCR and one recyclable marker
Direct gene deletion and site-specific mutagenesis with PCR and one recyclable marker using long homologous regions
In vivo site-directed mutagenesis with synthetic oligonucleotides
CRISPR
Since 2013, the development of CRISPR-Cas9 technology has allowed for the efficient introduction of various mutations into the genome of a wide variety of organisms. The method does not require a transposon insertion site, leaves no marker, and its efficiency and simplicity has made it the preferred method for genome editing.
Applications
Site-directed mutagenesis is used to generate mutations that may produce a rationally designed protein that has improved or special properties (i.e. protein engineering).
Investigative tools – specific mutations in DNA allow the function and properties of a DNA sequence or a protein to be investigated in a rational approach. Furthermore, single amino-acid changes by site-directed mutagenesis in proteins can help understand the importance of post-translational modifications. For instance, changing a particular serine (phosphoacceptor) to an alanine (phospho-non-acceptor) in a substrate protein blocks the attachment of a phosphate group, thereby allowing the phosphorylation to be investigated. This approach has been used to uncover the phosphorylation of the protein CBP by the kinase HIPK2. Another comprehensive approach is site saturation mutagenesis, where one codon or a set of codons may be substituted with all possible amino acids at the specific positions.
Commercial applications – Proteins may be engineered to produce mutant forms that are tailored for a specific application. For example, commonly used laundry detergents may contain subtilisin, whose wild-type form has a methionine that can be oxidized by bleach, significantly reducing the activity of the protein in the process. This methionine may be replaced by alanine or other residues, making it resistant to oxidation and thereby keeping the protein active in the presence of bleach.
Gene synthesis
As the cost of DNA oligonucleotide synthesis falls, artificial synthesis of a complete gene is now a viable method for introducing mutations into a gene. This method allows for extensive mutagenesis over multiple sites, including the complete redesign of the codon usage of a gene to optimise it for a particular organism.
See also
Directed mutagenesis
Phi value analysis
References
External links
Nobel Lecture on Invention of Site-Directed Mutagenesis
OpenWetWare
Diagram summarizing site-directed mutagenesis
Genetics techniques
Molecular genetics
Mutagenesis
Protein engineering | Site-directed mutagenesis | [
"Chemistry",
"Engineering",
"Biology"
] | 2,808 | [
"Genetics techniques",
"Molecular genetics",
"Genetic engineering",
"Molecular biology"
] |
564,380 | https://en.wikipedia.org/wiki/Expression%20vector | An expression vector, otherwise known as an expression construct, is usually a plasmid or virus designed for gene expression in cells. The vector is used to introduce a specific gene into a target cell, and can commandeer the cell's mechanism for protein synthesis to produce the protein encoded by the gene. Expression vectors are the basic tools in biotechnology for the production of proteins.
The vector is engineered to contain regulatory sequences that act as enhancer and promoter regions and lead to efficient transcription of the gene carried on the expression vector. The goal of a well-designed expression vector is the efficient production of protein, and this may be achieved by the production of a significant amount of stable messenger RNA, which can then be translated into protein. The expression of a protein may be tightly controlled, and the protein is only produced in significant quantity when necessary through the use of an inducer. In some systems, however, the protein may be expressed constitutively. Escherichia coli is commonly used as the host for protein production, but other cell types may also be used. An example of the use of expression vectors is the production of insulin, which is used for medical treatments of diabetes.
Elements
An expression vector has features that any vector may have, such as an origin of replication, a selectable marker, and a suitable site for the insertion of a gene like the multiple cloning site. The cloned gene may be transferred from a specialized cloning vector to an expression vector, although it is possible to clone directly into an expression vector. The cloning process is normally performed in Escherichia coli. Vectors used for protein production in organisms other than E. coli may have, in addition to a suitable origin of replication for their propagation in E. coli, elements that allow them to be maintained in another organism, and these vectors are called shuttle vectors.
Elements for expression
An expression vector must have elements necessary for gene expression. These may include a promoter, the correct translation initiation sequence such as a ribosomal binding site and start codon, a termination codon, and a transcription termination sequence. There are differences in the machinery for protein synthesis between prokaryotes and eukaryotes; therefore, the expression vectors must have the elements for expression that are appropriate for the chosen host. For example, prokaryotic expression vectors would have a Shine-Dalgarno sequence at the translation initiation site for the binding of ribosomes, while eukaryotic expression vectors would contain the Kozak consensus sequence.
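As a toy illustration of why such elements matter when designing or checking a vector sequence, the sketch below scans a sequence for an AGGAGG-like Shine-Dalgarno motif a short distance upstream of an ATG start codon. The consensus, spacing window and match threshold are simplified assumptions for the example, not a validated ribosome-binding-site predictor.

```python
# Toy scan for a Shine-Dalgarno-like motif upstream of ATG start codons.
# The consensus (AGGAGG), spacing window and scoring are simplified
# assumptions for illustration, not a validated prediction method.

SD_CONSENSUS = "AGGAGG"

def sd_like_starts(seq, min_spacing=4, max_spacing=12, min_match=4):
    """Return (start_index, upstream_window) for ATG codons preceded by a
    region matching at least `min_match` bases of the SD consensus."""
    seq = seq.upper()
    hits = []
    for i in range(len(seq) - 2):
        if seq[i:i + 3] != "ATG":
            continue
        window = seq[max(0, i - max_spacing - len(SD_CONSENSUS)):max(0, i - min_spacing)]
        best = 0
        for j in range(len(window) - len(SD_CONSENSUS) + 1):
            matches = sum(a == b for a, b in zip(window[j:j + len(SD_CONSENSUS)], SD_CONSENSUS))
            best = max(best, matches)
        if best >= min_match:
            hits.append((i, window))
    return hits

example = "TTTAGGAGGTTAACATGGCTAGCAAAGGT"  # made-up sequence with an SD-like site
print(sd_like_starts(example))
```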
The promoter initiates the transcription and is therefore the point of control for the expression of the cloned gene. The promoters used in expression vectors are normally inducible, meaning that protein synthesis is only initiated when required by the introduction of an inducer such as IPTG. Gene expression however may also be constitutive (i.e. protein is constantly expressed) in some expression vectors. Low levels of constitutive protein synthesis may occur even in expression vectors with tightly controlled promoters.
Protein tags
After the expression of the gene product, it may be necessary to purify the expressed protein; however, separating the protein of interest from the great majority of proteins of the host cell can be a protracted process. To make this purification process easier, a purification tag may be added to the cloned gene. This tag could be a histidine (His) tag, another marker peptide, or a fusion partner such as glutathione S-transferase or maltose-binding protein. Some of these fusion partners may also help to increase the solubility of some expressed proteins. Other fusion proteins such as green fluorescent protein may act as a reporter gene for the identification of successfully cloned genes, or they may be used to study protein expression in cellular imaging.
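At the DNA level, adding a tag simply means extending the cloned reading frame with the codons of the tag. The sketch below appends a hexahistidine (His6) tag immediately before the stop codon of a toy open reading frame; the codon chosen for histidine (CAC) and the example ORF are arbitrary assumptions.

```python
# Toy example: add a C-terminal His6 tag to an open reading frame by
# inserting six histidine codons (CAC, an arbitrary choice) before the stop.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def add_c_terminal_his6(orf, his_codon="CAC", n=6):
    """Return the ORF with n histidine codons inserted before its stop codon."""
    if len(orf) % 3 != 0:
        raise ValueError("ORF length must be a multiple of 3")
    if orf[-3:] not in STOP_CODONS:
        raise ValueError("ORF does not end in a stop codon")
    return orf[:-3] + his_codon * n + orf[-3:]

toy_orf = "ATGGCTAGCAAAGGTGAACTGTAA"  # made-up short ORF ending in TAA
print(add_c_terminal_his6(toy_orf))
```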
Other Elements
The expression vector is transformed or transfected into the host cell for protein synthesis. Some expression vectors may have elements for transformation or the insertion of DNA into the host chromosome, for example the vir genes for plant transformation, and integrase sites for chromosomal integration.
Some vectors may include a targeting sequence that directs the expressed protein to a specific location such as the periplasmic space of bacteria.
Expression/Production systems
Different organisms may be used to express a gene's target protein, and the expression vector used will therefore have elements specific for use in the particular organism. The most commonly used organism for protein production is the bacterium Escherichia coli. However, not all proteins can be successfully expressed in E. coli, or be expressed with the correct form of post-translational modifications such as glycosylations, and other systems may therefore be used.
Bacterial
The expression host of choice for the expression of many proteins is Escherichia coli as the production of heterologous protein in E. coli is relatively simple and convenient, as well as being rapid and cheap. A large number of E. coli expression plasmids are also available for a wide variety of needs. Other bacteria used for protein production include Bacillus subtilis.
Most heterologous proteins are expressed in the cytoplasm of E. coli. However, not all proteins formed may be soluble in the cytoplasm, and incorrectly folded proteins formed in cytoplasm can form insoluble aggregates called inclusion bodies. Such insoluble proteins will require refolding, which can be an involved process and may not necessarily produce high yield. Proteins which have disulphide bonds are often not able to fold correctly due to the reducing environment in the cytoplasm which prevents such bond formation, and a possible solution is to target the protein to the periplasmic space by the use of an N-terminal signal sequence. Another possibility is to manipulate the redox environment of the cytoplasm. Other more sophisticated systems are also being developed; such systems may allow for the expression of proteins previously thought impossible in E. coli, such as glycosylated proteins.
The promoters used for these vectors are usually based on the promoter of the lac operon or the T7 promoter, and they are normally regulated by the lac operator. These promoters may also be hybrids of different promoters, for example, the Tac-Promoter is a hybrid of trp and lac promoters. Note that most commonly used lac or lac-derived promoters are based on the lacUV5 mutant which is insensitive to catabolite repression. This mutant allows for expression of protein under the control of the lac promoter when the growth medium contains glucose, since glucose would inhibit gene expression if the wild-type lac promoter were used. Presence of glucose nevertheless may still be used to reduce background expression through residual inhibition in some systems.
Examples of E. coli expression vectors are the pGEX series of vectors where glutathione S-transferase is used as a fusion partner and gene expression is under the control of the tac promoter, and the pET series of vectors which uses a T7 promoter.
It is possible to simultaneously express two or more different proteins in E. coli using different plasmids. However, when 2 or more plasmids are used, each plasmid needs to use a different antibiotic selection as well as a different origin of replication, otherwise one of the plasmids may not be stably maintained. Many commonly used plasmids are based on the ColE1 replicon and are therefore incompatible with each other; in order for a ColE1-based plasmid to coexist with another in the same cell, the other would need to be of a different replicon, e.g. a p15A replicon-based plasmid such as the pACYC series of plasmids. Another approach would be to use a single two-cistron vector or design the coding sequences in tandem as a bi- or poly-cistronic construct.
Yeast
A yeast commonly used for protein production is Pichia pastoris. Examples of yeast expression vectors in Pichia are the pPIC series of vectors, and these vectors use the AOX1 promoter, which is inducible with methanol. The plasmids may contain elements for insertion of foreign DNA into the yeast genome and a signal sequence for the secretion of the expressed protein. Proteins with disulphide bonds and glycosylation can be efficiently produced in yeast. Another yeast used for protein production is Kluyveromyces lactis, in which the gene is expressed from a variant of the strong lactase LAC4 promoter.
Saccharomyces cerevisiae is particularly widely used for gene expression studies in yeast, for example in yeast two-hybrid system for the study of protein-protein interaction. The vectors used in yeast two-hybrid system contain fusion partners for two cloned genes that allow the transcription of a reporter gene when there is interaction between the two proteins expressed from the cloned genes.
Baculovirus
Baculovirus, a rod-shaped virus which infects insect cells, is used as the expression vector in this system. Insect cell lines derived from Lepidopterans (moths and butterflies), such as Spodoptera frugiperda, are used as host. A cell line derived from the cabbage looper is of particular interest, as it has been developed to grow fast and without the expensive serum normally needed to boost cell growth. The shuttle vector is called bacmid, and gene expression is under the control of a strong promoter pPolh. Baculovirus has also been used with mammalian cell lines in the BacMam system.
Baculovirus is normally used for production of glycoproteins, although the glycosylations may be different from those found in vertebrates. In general, it is safer to use than mammalian virus as it has a limited host range and does not infect vertebrates without modifications.
Plant
Many plant expression vectors are based on the Ti plasmid of Agrobacterium tumefaciens. In these expression vectors, the DNA to be inserted into the plant is cloned into the T-DNA, a stretch of DNA flanked by a 25-bp direct repeat sequence at either end, which can integrate into the plant genome. The T-DNA also contains the selectable marker. The Agrobacterium provides a mechanism for transformation and integration into the plant genome, and the promoters of its vir genes may also be used for the cloned genes. Concerns over the transfer of bacterial or viral genetic material into the plant, however, have led to the development of vectors called intragenic vectors, whereby functional equivalents of the plant genome are used so that there is no transfer of genetic material from an alien species into the plant.
Plant viruses may be used as vectors since the Agrobacterium method does not work for all plants. Examples of plant viruses used are the tobacco mosaic virus (TMV), potato virus X, and cowpea mosaic virus. The protein may be expressed as a fusion to the coat protein of the virus and is displayed on the surface of assembled viral particles, or as an unfused protein that accumulates within the plant. Expression in plants using plant vectors is often constitutive, and a commonly used constitutive promoter in plant expression vectors is the cauliflower mosaic virus (CaMV) 35S promoter.
Mammalian
Mammalian expression vectors offer considerable advantages for the expression of mammalian proteins over bacterial expression systems: proper folding, post-translational modifications, and relevant enzymatic activity. They may also be more desirable than other eukaryotic non-mammalian systems, in which the proteins expressed may not carry the correct glycosylations. Mammalian systems are of particular use in producing membrane-associated proteins that require chaperones for proper folding and stability and that carry numerous post-translational modifications. The downside, however, is the low yield of product in comparison to prokaryotic vectors, as well as the costly nature of the techniques involved. The complicated technology and the potential for contamination with animal viruses have also placed a constraint on the use of mammalian cell expression in large-scale industrial production.
Cultured mammalian cell lines such as Chinese hamster ovary (CHO) and COS cells, as well as human cell lines such as HEK and HeLa, may be used to produce protein. Vectors are transfected into the cells, and the DNA may be integrated into the genome by homologous recombination in the case of stable transfection, or the cells may be transiently transfected. Examples of mammalian expression vectors include the adenoviral vectors, the pSV and the pCMV series of plasmid vectors, vaccinia and retroviral vectors, as well as baculovirus. The promoters for cytomegalovirus (CMV) and SV40 are commonly used in mammalian expression vectors to drive gene expression. Non-viral promoters, such as the elongation factor (EF)-1 promoter, are also used.
Cell-free systems
E. coli cell lysates containing the cellular components required for transcription and translation are used in this in vitro method of protein production. The advantage of such a system is that protein may be produced much faster than in vivo, since it does not require time to culture the cells, but it is also more expensive. Vectors used for E. coli expression can be used in this system, although specifically designed vectors for this system are also available. Eukaryotic cell extracts may also be used in other cell-free systems, for example the wheat germ cell-free expression system. Mammalian cell-free systems have also been produced.
Applications
Laboratory use
Expression vectors in an expression host are now the usual method used in laboratories to produce proteins for research. Most proteins are produced in E. coli, but for glycosylated proteins and those with disulphide bonds, yeast, baculovirus and mammalian systems may be used.
Production of peptide and protein pharmaceuticals
Most protein pharmaceuticals are now produced through recombinant DNA technology using expression vectors. These peptide and protein pharmaceuticals may be hormones, vaccines, antibiotics, antibodies, and enzymes. The first human recombinant protein used for disease management, insulin, was introduced in 1982. Biotechnology allows these peptide and protein pharmaceuticals, some of which were previously rare or difficult to obtain, to be produced in large quantity. It also reduces the risks of contaminants such as host viruses, toxins and prions. Examples from the past include prion contamination in growth hormone extracted from pituitary glands harvested from human cadavers, which caused Creutzfeldt–Jakob disease in patients receiving treatment for dwarfism, and viral contaminants in clotting factor VIII isolated from human blood that resulted in the transmission of viral diseases such as hepatitis and AIDS. Such risk is reduced or removed completely when the proteins are produced in non-human host cells.
Transgenic plant and animals
In recent years, expression vectors have been used to introduce specific genes into plants and animals to produce transgenic organisms, for example transgenic crop plants in agriculture. Expression vectors have been used to introduce a vitamin A precursor, beta-carotene, into rice plants; this product is called golden rice. This process has also been used to introduce into plants a gene producing an insecticide, called Bacillus thuringiensis toxin or Bt toxin, which reduces the need for farmers to apply insecticides since it is produced by the modified plant. In addition, expression vectors are used to extend the time that tomatoes stay ripe by altering the plant so that it produces less of the chemical that causes the tomatoes to rot. There have been controversies over using expression vectors to modify crops because of possible unknown health risks, the possibility of companies patenting certain genetically modified food crops, and ethical concerns. Nevertheless, this technique is still being used and heavily researched.
Transgenic animals have also been produced to study animal biochemical processes and human diseases, or used to produce pharmaceuticals and other proteins. They may also be engineered to have advantageous or useful traits. Green fluorescent protein is sometimes used as a tag, which results in an animal that can fluoresce, and this has been exploited commercially to produce the fluorescent GloFish.
Gene therapy
Gene therapy is a promising treatment for a number of diseases where a "normal" gene carried by the vector is inserted into the genome, to replace an "abnormal" gene or supplement the expression of a particular gene. Viral vectors are generally used, but other nonviral methods of delivery are being developed. The treatment is still a risky option due to the viral vector used, which can cause ill effects, for example giving rise to insertional mutations that can result in cancer. However, there have been promising results.
See also
Cloning vector
Host cell protein
References
External links
GST Gene Fusion System Handbook
Genetics techniques
Molecular biology
Biotechnology | Expression vector | [
"Chemistry",
"Engineering",
"Biology"
] | 3,454 | [
"Genetics techniques",
"Genetic engineering",
"Biotechnology",
"nan",
"Molecular biology",
"Biochemistry"
] |
564,384 | https://en.wikipedia.org/wiki/Clock%20face | A clock face is the part of an analog clock (or watch) that displays time through the use of a flat dial with reference marks, and revolving pointers turning on concentric shafts at the center, called hands. In its most basic, globally recognized form, the periphery of the dial is numbered 1 through 12 indicating the hours in a 12-hour cycle, and a short hour hand makes two revolutions in a day. A long minute hand makes one revolution every hour. The face may also include a second hand, which makes one revolution per minute. The term is less commonly used for the time display on digital clocks and watches.
A second type of clock face is the 24-hour analog dial, widely used in military and other organizations that use 24-hour time. This is similar to the 12-hour dial above, except it has hours numbered 1–24 (or 0–23) around the outside, and the hour hand makes only one revolution per day. Some special-purpose clocks, such as timers and sporting event clocks, are designed for measuring periods less than one hour. Clocks can indicate the hour with Roman numerals or Hindu–Arabic numerals, or with non-numeric indicator marks. The two numbering systems have also been used in combination, with the former indicating the hour and the latter the minute. Longcase clocks (grandfather clocks) typically use Roman numerals for the hours. Clocks using only Arabic numerals first began to appear in the mid-18th century.
The clock face is so familiar that the numbers are often omitted and replaced with unlabeled graduations (marks), particularly in the case of watches. Occasionally, markings of any sort are dispensed with, and the time is read by the angles of the hands.
Reading a modern clock face
Most modern clocks have the numbers 1 through 12 printed at equally spaced intervals around the periphery of the face with the 12 at the top, indicating the hour, and on many models, sixty dots or lines evenly spaced in a ring around the outside of the dial, indicating minutes and seconds. The time is read by observing the placement of several "hands", which emanate from the centre of the dial:
A short, thick "hour" hand;
A long, thinner "minute" hand;
On some models, a very thin "second" or "sweep" hand
All three hands continuously rotate around the dial in a clockwise direction – in the direction of increasing numbers.
The second, or sweep, hand moves relatively quickly, taking a full minute (sixty seconds) to make a complete rotation from 12 to 12. For every rotation of the second hand, the minute hand will move from one minute mark to the next.
The minute hand rotates more slowly around the dial. It takes one hour (sixty minutes) to make a complete rotation from 12 to 12. For every rotation of the minute hand, the hour hand will move from one hour mark to the next.
The hour hand moves slowest of all, taking half a day (twelve hours) to make a complete rotation. It starts from "12" at midnight, makes one rotation until it is pointing at "12" again at noon, and then makes another rotation until it is pointing at "12" again at midnight of the next morning.
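The positions described above can be expressed as angles measured clockwise from the 12 o'clock mark, which is a convenient way to check one's reading of a dial. The short example below computes the three hand angles for a given time; it is ordinary arithmetic rather than any standard.

```python
# Compute the clockwise angles (in degrees from the 12 o'clock position)
# of the hour, minute and second hands for a given time.

def hand_angles(hours, minutes, seconds):
    second_angle = seconds * 6.0                              # 360 deg / 60 s
    minute_angle = minutes * 6.0 + seconds * 0.1              # 360 deg / 60 min, plus drift
    hour_angle = (hours % 12) * 30.0 + minutes * 0.5 + seconds * (0.5 / 60)
    return hour_angle, minute_angle, second_angle

# Example: at 3:30:00 the hour hand sits halfway between the 3 and the 4.
print(hand_angles(3, 30, 0))   # (105.0, 180.0, 0.0)
```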
Historical development
The word clock derives from the medieval Latin word for "bell", clocca, and has cognates in many European languages. Clocks spread to England from the Low Countries, so the English word came from the Middle Low German and Middle Dutch Klocke. The first mechanical clocks, built in 13th-century Europe, were striking clocks: their purpose was to ring bells upon the canonical hours, to call the local community to prayer. These were tower clocks installed in bell towers in public places, to ensure that the bells were audible over a wide area. Soon after these first mechanical clocks were in place, clockmakers realized that their wheels could be used to drive an indicator on a dial on the outside of the tower, where it could be widely seen, so the local population could tell the time between the hourly strikes.
Before the late 14th century, a fixed hand (often a carving literally shaped like a hand) indicated the hour by pointing to numbers on a rotating dial; after this time, the current convention of a rotating hand on a fixed dial was adopted. Minute hands (so named because they indicated the small, or minute, divisions of the hour) only came into regular use around 1690, after the invention of the pendulum and anchor escapement increased the precision of time-telling enough to justify it. In some precision clocks, a third hand, which rotated once a minute, was added in a separate subdial. This was called the "second-minute" hand (because it measured the secondary minute divisions of the hour), which was shortened to "second" hand. The convention of the hands moving clockwise evolved in imitation of the sundial. In the Northern hemisphere, where the clock face originated, the shadow of the gnomon on a horizontal sundial moves clockwise during the day.
French decimal time
During the French Revolution in 1793, in connection with its Republican calendar, France attempted to introduce a decimal time system. This had 10 decimal hours in the day, 100 decimal minutes per hour, and 100 decimal seconds per minute. Therefore, the decimal hour was more than twice as long (144 min) as the present hour, the decimal minute was slightly longer than the present minute (86.4 seconds) and the decimal second was slightly shorter (0.864 sec) than the present second. Clocks were manufactured with this alternate face, usually combined with traditional hour markings. However, it did not catch on, and France discontinued the mandatory use of decimal time on 7 April 1795, although some French cities used decimal time until 1801.
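The conversion between the two systems is straightforward arithmetic: a day contains 86,400 standard seconds and 100,000 decimal seconds, so a time of day maps to a fraction of the day and back. The sketch below illustrates the calculation.

```python
# Convert an ordinary time of day to French Revolutionary decimal time.
# A day = 10 decimal hours = 1,000 decimal minutes = 100,000 decimal seconds.

def to_decimal_time(hours, minutes, seconds):
    fraction_of_day = (hours * 3600 + minutes * 60 + seconds) / 86400.0
    total_decimal_seconds = round(fraction_of_day * 100000)
    dh, rem = divmod(total_decimal_seconds, 10000)
    dm, ds = divmod(rem, 100)
    return dh, dm, ds

# Examples: 06:00:00 is decimal 2:50:00, and noon is decimal 5:00:00.
print(to_decimal_time(6, 0, 0))    # (2, 50, 0)
print(to_decimal_time(12, 0, 0))   # (5, 0, 0)
```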
Stylistic development
Until the last quarter of the 17th century, hour markings were etched into metal faces and the recesses filled with black wax. Subsequently, higher contrast and improved readability were achieved with white enamel plaques painted with black numbers. Initially, the numbers were printed on small, individual plaques mounted on a brass substructure. This was not a stylistic decision; rather, enamel production technology had not yet achieved the ability to create large pieces of enamel. The "13-piece face" was an early attempt to create an entirely white enamel face. As the name suggests, it was composed of 13 enamel plaques: 12 numbered wedges fitted around a circle. The first single-piece enamel faces, not unlike those in production today, began to appear.
It is customary for modern advertisements to display clocks and watches set to approximately 10:10 or 1:50, as this V-shaped arrangement roughly makes a smile, imitates a human figure with raised arms, and leaves the watch company's logo unobscured by the hands.
In the 1970s, German designer Tian Harlan invented the Chromachron, a wristwatch with a clock face that has no dials but a disc with pie-shaped pattern rotating by the minute over color patterns representing both hours and minutes.
Technological obsolescence
In the 2010s, some United Kingdom schools started replacing analogue clocks in examination halls with digital clocks because an increasing number of pupils were unable to read analogue clocks. Smartphone and computer clocks are often digital rather than analogue, and proponents of replacing analogue clock faces argue that they have become technologically obsolete. However, reading analogue clocks is still part of American elementary school curricula; proponents of analogue clocks argue that their inclusion in the curriculum reinforces basic mathematical concepts that are taught in elementary school.
See also
List of largest clock faces
Clock position
Roman numerals
Footnotes
Timekeeping components | Clock face | [
"Technology"
] | 1,558 | [
"Timekeeping components",
"Components"
] |
564,527 | https://en.wikipedia.org/wiki/Density%20matrix%20renormalization%20group | The density matrix renormalization group (DMRG) is a numerical variational technique devised to obtain the low-energy physics of quantum many-body systems with high accuracy. As a variational method, DMRG is an efficient algorithm that attempts to find the lowest-energy matrix product state wavefunction of a Hamiltonian. It was invented in 1992 by Steven R. White and it is nowadays the most efficient method for 1-dimensional systems.
History
The first application of the DMRG, by Steven R. White and Reinhard Noack, was a toy model: to find the spectrum of a spin 0 particle in a 1D box. This model had been proposed by Kenneth G. Wilson as a test for any new renormalization group method, because they all happened to fail with this simple problem. The DMRG overcame the problems of previous renormalization group methods by connecting two blocks with the two sites in the middle rather than just adding a single site to a block at each step as well as by using the density matrix to identify the most important states to be kept at the end of each step. After succeeding with the toy model, the DMRG method was tried with success on the quantum Heisenberg model.
Principle
The main problem of quantum many-body physics is the fact that the Hilbert space grows exponentially with size. In other words, if one considers a lattice with some Hilbert space of dimension d on each site of the lattice, then the total Hilbert space would have dimension d^N, where N is the number of sites on the lattice. For example, a spin-1/2 chain of length L has 2^L degrees of freedom. The DMRG is an iterative, variational method that reduces the effective degrees of freedom to those most important for a target state. The state one is most often interested in is the ground state.
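To make the exponential growth concrete, the short sketch below prints the dimension of the many-body Hilbert space of a spin-1/2 chain and the memory a dense Hamiltonian matrix would require, assuming complex double-precision entries; the figures are simple arithmetic for illustration, not the output of a DMRG code.

```python
# Illustrate the exponential growth of the Hilbert space of a spin-1/2 chain:
# dimension 2**L, and memory for a dense Hamiltonian with complex128 entries.

for L in (10, 20, 30, 40):
    dim = 2 ** L
    dense_bytes = dim * dim * 16           # 16 bytes per complex double entry
    print(f"L = {L:2d}: dimension = {dim:,d}, dense H ~ {dense_bytes:.2e} bytes")
```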
After a warmup cycle, the method splits the system into two subsystems, or blocks, which need not have equal sizes, and two sites in between. A set of representative states has been chosen for the block during the warmup. This set of left blocks + two sites + right blocks is known as the superblock. Now a candidate for the ground state of the superblock, which is a reduced version of the full system, may be found. It may have a rather poor accuracy, but the method is iterative and improves with the steps below.
The candidate ground state that has been found is projected into the Hilbert subspace for each block using a density matrix, hence the name. Thus, the relevant states for each block are updated.
Now one of the blocks grows at the expense of the other and the procedure is repeated. When the growing block reaches maximum size, the other starts to grow in its place. Each time we return to the original (equal sizes) situation, we say that a sweep has been completed. Normally, a few sweeps are enough to get a precision of a part in 10^10 for a 1D lattice.
Implementation guide
A practical implementation of the DMRG algorithm is a lengthy work. A few of the main computational tricks are these:
Since the size of the renormalized Hamiltonian is usually on the order of a few thousand to tens of thousands, while the sought eigenstate is just the ground state, the ground state for the superblock is obtained via an iterative algorithm such as the Lanczos algorithm of matrix diagonalization (a short illustrative call is shown after this list). Another choice is the Arnoldi method, especially when dealing with non-hermitian matrices.
The Lanczos algorithm usually starts with the best guess of the solution. If no guess is available a random vector is chosen. In DMRG, the ground state obtained in a certain DMRG step, suitably transformed, is a reasonable guess and thus works significantly better than a random starting vector at the next DMRG step.
In systems with symmetries, we may have conserved quantum numbers, such as total spin in a Heisenberg model. It is convenient to find the ground state within each of the sectors into which the Hilbert space is divided.
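As a small illustration of the first two points, the call below uses SciPy's Lanczos-based eigsh routine to find the lowest eigenpair of a stand-in Hamiltonian, seeding the iteration with a vector playing the role of the previous step's transformed ground state; both the matrix and the starting vector here are random placeholders.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

# Illustrative only: H stands in for the superblock Hamiltonian and psi_prev
# for the suitably transformed ground state of the previous DMRG step.
rng = np.random.default_rng(42)
H = rng.normal(size=(200, 200))
H = (H + H.T) / 2                      # symmetrize the random stand-in matrix
psi_prev = rng.normal(size=200)        # stand-in for the previous ground state

# Lowest eigenvalue/eigenvector, seeded with the previous ground state.
energy, ground = eigsh(H, k=1, which="SA", v0=psi_prev)
print(energy[0])
```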
Applications
The DMRG has been successfully applied to get the low energy properties of spin chains: Ising model in a transverse field, Heisenberg model, etc., fermionic systems, such as the Hubbard model, problems with impurities such as the Kondo effect, boson systems, and the physics of quantum dots joined with quantum wires. It has been also extended to work on tree graphs, and has found applications in the study of dendrimers. For 2D systems with one of the dimensions much larger than the other DMRG is also accurate, and has proved useful in the study of ladders.
The method has been extended to study equilibrium statistical physics in 2D, and to analyze non-equilibrium phenomena in 1D.
The DMRG has also been applied to the field of quantum chemistry to study strongly correlated systems.
Example: Quantum Heisenberg model
Let us consider an "infinite" DMRG algorithm for the antiferromagnetic quantum Heisenberg chain. The recipe can be applied for every translationally invariant one-dimensional lattice.
DMRG is a renormalization-group technique because it offers an efficient truncation of the Hilbert space of one-dimensional quantum systems.
Starting point
To simulate an infinite chain, start with four sites. The first is the block site, the last is the universe-block site, and the remaining two are the added sites: the right one is added to the universe-block site and the left one to the block site.
The Hilbert space for the single site is the three-dimensional space of a spin 1, with basis {|+⟩, |0⟩, |−⟩}. With this basis the spin operators for the single site are S_z, S_+ and S_−. Every part of the chain (the two blocks and the two sites) has its own Hilbert space, its own basis and its own operators:
block: Hilbert space ℋ_B with basis {|w_i^B⟩} and operators H_B, S_z^B, S_+^B, S_−^B
left-site: Hilbert space ℋ_l with basis {|+⟩, |0⟩, |−⟩} and operators S_z^l, S_+^l, S_−^l
right-site: Hilbert space ℋ_r with basis {|+⟩, |0⟩, |−⟩} and operators S_z^r, S_+^r, S_−^r
universe: Hilbert space ℋ_U with basis {|w_i^U⟩} and operators H_U, S_z^U, S_+^U, S_−^U
At the starting point all four Hilbert spaces are equivalent to the single-site space, all spin operators are equivalent to the single-site S_z, S_+ and S_−, and the block and universe-block Hamiltonians are zero. In the following iterations, this is only true for the left and right sites.
Step 1: Form the Hamiltonian matrix for the superblock
The ingredients are the four block operators and the four universe-block operators, which at the first iteration are 3×3 matrices, and the three left-site spin operators and the three right-site spin operators, which are always 3×3 matrices. The Hamiltonian matrix of the superblock (the chain), which at the first iteration has only four sites, is formed by these operators. In the Heisenberg antiferromagnetic S=1 model the Hamiltonian of the chain is the sum of nearest-neighbour terms of the form S_z^i S_z^{i+1} + 1/2 (S_+^i S_-^{i+1} + S_-^i S_+^{i+1}).
These operators live in the superblock state space, the tensor product of the four Hilbert spaces; each basis state of the superblock is the product of one basis state from each of the four parts, and each operator is extended to the superblock space by taking its tensor product with identity operators on the other three factors.
The Hamiltonian in the DMRG form (with the coupling set to 1) is the sum of the block and universe-block Hamiltonians and of the three nearest-neighbour exchange terms connecting the block with the left site, the left site with the right site, and the right site with the universe block.
At the first iteration the superblock operators are 81×81 matrices (3^4 = 81); for example, the block operator S_z^B extended to the superblock is S_z ⊗ 1 ⊗ 1 ⊗ 1.
Step 2: Diagonalize the superblock Hamiltonian
At this point you must choose the eigenstate of the Hamiltonian for which some observables will be calculated; this is the target state. At the beginning you can choose the ground state and use some advanced algorithm to find it; one of these is described in:
The Iterative Calculation of a Few of the Lowest Eigenvalues and Corresponding Eigenvectors of Large Real-Symmetric Matrices, Ernest R. Davidson; Journal of Computational Physics 17, 87-94 (1975)
This step is the most time-consuming part of the algorithm.
Once the target state has been found, the expectation values of various operators can be measured at this point using the target state.
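A hedged numerical sketch of steps 1 and 2 for the very first iteration is given below: it builds the spin-1 operators, assembles the four-site Heisenberg superblock Hamiltonian with Kronecker products, and obtains its ground state with a sparse eigensolver. It reproduces only the starting point of the algorithm (no truncation and no sweeps), and the use of NumPy/SciPy is an implementation choice rather than part of the method.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

# Spin-1 operators (3x3) in the basis {|+>, |0>, |->}.
sz = np.diag([1.0, 0.0, -1.0])
sp = np.sqrt(2.0) * np.diag([1.0, 1.0], k=1)    # S+
sm = sp.T                                       # S-
one = np.eye(3)

def site_op(op, site, n_sites=4):
    """Embed a single-site operator at position `site` of an n-site chain."""
    ops = [one] * n_sites
    ops[site] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

# Four-site antiferromagnetic Heisenberg Hamiltonian (coupling 1), 81x81.
H = np.zeros((3 ** 4, 3 ** 4))
for i in range(3):
    H += site_op(sz, i) @ site_op(sz, i + 1)
    H += 0.5 * (site_op(sp, i) @ site_op(sm, i + 1)
                + site_op(sm, i) @ site_op(sp, i + 1))

# Ground state of the superblock (the target state of step 2).
e0, psi0 = eigsh(H, k=1, which="SA")
print("superblock dimension:", H.shape[0], " ground-state energy:", e0[0])
```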
Step 3: Reduce density matrix
Form the reduced density matrix for the system composed of the first two parts, the block and the left site. By definition it is the matrix obtained by tracing the target state over the right-site and universe-block degrees of freedom: ρ_{i,i'} = Σ_j ψ_{i,j} ψ*_{i',j}, where ψ_{i,j} are the coefficients of the target state, with i labelling the block + left-site basis states and j labelling the right-site + universe-block basis states.
Diagonalize ρ and form the matrix T, whose rows are the eigenvectors associated with the m largest eigenvalues of ρ. So T is formed by the most significant eigenstates of the reduced density matrix. The value of m is chosen by looking at the retained weight, the sum of the m kept eigenvalues, which should be as close to 1 as is practical.
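Step 3 can be sketched in the same spirit: the target state is reshaped into a matrix whose rows are labelled by block + left-site states and whose columns by the rest, the reduced density matrix is formed and diagonalized, and the eigenvectors with the largest eigenvalues are kept. The resulting matrix T is what the next step uses to change basis (for instance O_new = T O T†). The snippet below continues the toy NumPy sketch above and uses made-up dimensions.

```python
import numpy as np

def truncation_matrix(psi, dim_sys, dim_env, m):
    """Return (T, retained_weight): T has m rows, the eigenvectors of the
    system reduced density matrix with the largest eigenvalues."""
    psi_matrix = psi.reshape(dim_sys, dim_env)        # system index, environment index
    rho = psi_matrix @ psi_matrix.conj().T            # trace out the environment
    evals, evecs = np.linalg.eigh(rho)                # eigenvalues in ascending order
    keep = evecs[:, -m:]                              # m most significant eigenstates
    T = keep.conj().T
    return T, evals[-m:].sum()

# Toy usage: a random normalized "target state" with block+site dimension 9
# on each half (as in the first iteration of the spin-1 example), keeping m = 5.
rng = np.random.default_rng(0)
psi = rng.normal(size=81)
psi /= np.linalg.norm(psi)
T, weight = truncation_matrix(psi, 9, 9, 5)
print(T.shape, round(weight, 4))   # (5, 9) and the retained weight
```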
Step 4: New block and universe-block operators
Form the matrix representations of the operators for the composite system of the block and the left site, and for the composite system of the right site and the universe block; for example, the Hamiltonian of the enlarged block is built from the old block Hamiltonian, the left-site terms and the interaction between the block and the left site, each written as a tensor product with the appropriate identity operators.
Now, form the matrix representations of the new block and universe-block operators: a new block is formed by changing basis with the transformation T, for example H_B,new = T H_{B,l} T†, and similarly for the boundary spin operators. At this point the iteration is ended and the algorithm goes back to step 1.
The algorithm stops successfully when the observable converges to some value.
Matrix product ansatz
The success of the DMRG for 1D systems is related to the fact that it is a variational method within the space of matrix product states (MPS). These are states of the form
|Ψ⟩ = Σ_{s_1...s_N} Tr(A^{s_1} A^{s_2} ⋯ A^{s_N}) |s_1 s_2 ... s_N⟩
where the s_i are the values of, e.g., the z-component of the spin in a spin chain, and the A^{s_i} are matrices of arbitrary dimension m. As m → ∞, the representation becomes exact. This theory was presented by S. Rommer and S. Ostlund.
In quantum chemistry applications, the local index s_i stands for the four possible occupations of a single orbital by a spin-up and a spin-down electron: neither present, only the spin-up electron, only the spin-down electron, or both. In quantum chemistry, A^{s_1} (for a given s_1) and A^{s_N} (for a given s_N) are traditionally chosen to be row and column matrices, respectively; this way, the result of the matrix product is a scalar value and the trace operation is unnecessary. N is the number of sites (the orbitals, basically) used in the simulation.
The matrices in the MPS ansatz are not unique: one can, for instance, insert an identity written as B B^{-1} between two neighbouring matrices, absorb B into the matrix on the left and B^{-1} into the matrix on the right, and the state will stay unchanged. Such gauge freedom is employed to transform the matrices into a canonical form. Three types of canonical form exist: (1) left-normalized form, when Σ_{s_i} (A^{s_i})† A^{s_i} = 1 for all i, (2) right-normalized form, when Σ_{s_i} A^{s_i} (A^{s_i})† = 1 for all i, and (3) mixed-canonical form, when both left- and right-normalized matrices exist among the matrices in the above MPS ansatz.
The goal of the DMRG calculation is then to solve for the elements of each of the matrices. The so-called one-site and two-site algorithms have been devised for this purpose. In the one-site algorithm, only one matrix (one site) is solved for at a time. In the two-site algorithm, two matrices are first contracted (multiplied) into a single matrix, and its elements are then solved for. The two-site algorithm is proposed because the one-site algorithm is much more prone to getting trapped at a local minimum. Having the MPS in one of the above canonical forms has the advantage of making the computation more favorable: it leads to an ordinary eigenvalue problem. Without canonicalization, one would be dealing with a generalized eigenvalue problem.
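As a small illustration of the canonical forms, the sketch below left-normalizes a list of random MPS tensors by sweeping QR factorizations from left to right and then verifies the left-normalization condition Σ_s (A^s)† A^s = 1 at every site. The shapes and bond dimension are made up, and the code is a toy rather than a production MPS library.

```python
import numpy as np

def left_normalize(tensors):
    """Left-normalize a list of MPS tensors of shape (d, m_left, m_right)
    by sweeping QR factorizations from left to right."""
    tensors = [t.copy() for t in tensors]
    for i in range(len(tensors)):
        d, ml, mr = tensors[i].shape
        mat = tensors[i].reshape(d * ml, mr)        # group (s, left) as rows
        q, r = np.linalg.qr(mat)
        k = q.shape[1]
        tensors[i] = q.reshape(d, ml, k)
        if i + 1 < len(tensors):
            tensors[i + 1] = np.einsum("ab,sbc->sac", r, tensors[i + 1])
        # at the last site r is 1x1 and holds the overall norm of the state
    return tensors

# Toy chain: 4 sites of spin-1/2 (d = 2) with bond dimension 3, open boundaries.
rng = np.random.default_rng(1)
dims = [(2, 1, 3), (2, 3, 3), (2, 3, 3), (2, 3, 1)]
mps = [rng.normal(size=shape) for shape in dims]
left = left_normalize(mps)
for A in left:
    check = sum(A[s].T @ A[s] for s in range(A.shape[0]))
    print(np.allclose(check, np.eye(check.shape[0])))
```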
Extensions
In 2004 the time-evolving block decimation method was developed to implement real-time evolution of matrix product states. The idea is based on the classical simulation of a quantum computer. Subsequently, a new method was devised to compute real-time evolution within the DMRG formalism; see the paper by A. Feiguin and S. R. White.
In recent years, some proposals to extend the method to 2D and 3D have been put forward, extending the definition of the matrix product states. See the paper by F. Verstraete and I. Cirac.
Further reading
The original paper by S. R. White.
A textbook on DMRG and its origins: https://www.springer.com/gp/book/9783540661290
A broad review by Karen Hallberg.
Two reviews by Ulrich Schollwöck, one discussing the original formulation, and another in terms of matrix product states.
The Ph.D. thesis of Javier Rodríguez Laguna.
An introduction to DMRG and its time-dependent extension.
A list of DMRG e-prints on arxiv.org.
A review article on DMRG for ab initio quantum chemistry.
An introduction video on DMRG for ab initio quantum chemistry.
Related software
The Matrix Product Toolkit: A free GPL set of tools for manipulating finite and infinite matrix product states written in C++
Uni10: a library implementing numerous tensor network algorithms (DMRG, TEBD, MERA, PEPS ...) in C++
Powder with Power: a free distribution of time-dependent DMRG code written in Fortran
The ALPS Project: a free distribution of time-independent DMRG code and Quantum Monte Carlo codes written in C++
DMRG++: a free implementation of DMRG written in C++
The ITensor (Intelligent Tensor) Library: a free library for performing tensor and matrix-product state based DMRG calculations written in C++
OpenMPS: an open source DMRG implementation based on Matrix Product States written in Python/Fortran2003.
Snake DMRG program: open source DMRG, tDMRG and finite temperature DMRG program written in C++
CheMPS2: open source (GPL) spin-adapted DMRG code for ab initio quantum chemistry written in C++
Block: open source DMRG framework for quantum chemistry and model Hamiltonians. Supports SU(2) and general non-Abelian symmetries. Written in C++.
Block2: An efficient parallel implementation of DMRG, dynamical DMRG, tdDMRG, and finite temperature DMRG for quantum chemistry and models. Written in Python/C++.
See also
Quantum Monte Carlo
Time-evolving block decimation
Configuration interaction
References
Theoretical physics
Computational physics
Statistical mechanics | Density matrix renormalization group | [
"Physics"
] | 2,829 | [
"Statistical mechanics",
"Theoretical physics",
"Computational physics"
] |
564,578 | https://en.wikipedia.org/wiki/Protease%20inhibitor%20%28biology%29 | In biology and biochemistry, protease inhibitors, or antiproteases, are molecules that inhibit the function of proteases (enzymes that aid the breakdown of proteins). Many naturally occurring protease inhibitors are proteins.
In medicine, protease inhibitor is often used interchangeably with alpha 1-antitrypsin (A1AT, which is abbreviated PI for this reason). A1AT is indeed the protease inhibitor most often involved in disease, namely in alpha-1 antitrypsin deficiency.
Classification
Protease inhibitors may be classified either by the type of protease they inhibit, or by their mechanism of action. In 2004 Rawlings and colleagues introduced a classification of protease inhibitors based on similarities detectable at the level of amino acid sequence. This classification initially identified 48 families of inhibitors that could be grouped into 26 related superfamilies (or clans) by their structure. According to the MEROPS database there are now 81 families of inhibitors. These families are named with an I followed by a number, for example, I14 contains hirudin-like inhibitors.
By protease
Classes of proteases are:
Aspartic protease inhibitors
Cysteine protease inhibitors
Metalloprotease inhibitors
Serine protease inhibitors
Threonine protease inhibitors
Trypsin inhibitors
Kunitz STI protease inhibitor
By mechanism
Classes of inhibitor mechanisms of action are:
Suicide inhibitor
Transition state inhibitor
Protein protease inhibitor (see serpins)
Chelating agents
Families
Inhibitor I4
This is a family of protease suicide inhibitors called the serpins. It contains inhibitors of multiple cysteine and serine protease families. Their mechanism of action relies on undergoing a large conformational change which inactivates their target's catalytic triad.
Inhibitor I9
Proteinase propeptide inhibitors (sometimes referred to as activation peptides) are responsible for the modulation of folding and activity of the peptidase pro-enzyme or zymogen. The pro-segment docks into the enzyme, shielding the substrate binding site, thereby promoting inhibition of the enzyme. Several such propeptides share a similar topology, despite often low sequence identities. The propeptide region has an open-sandwich antiparallel-alpha/antiparallel-beta fold, with two alpha-helices and four beta-strands with a (beta/alpha/beta)x2 topology.
The peptidase inhibitor I9 family contains the propeptide domain at the N-terminus of peptidases belonging to MEROPS family S8A, subtilisins. The propeptide is removed by proteolytic cleavage; removal activating the enzyme.
Inhibitor I10
This family includes both microviridins and marinostatins. It seems likely that in both cases it is the C-terminus which becomes the active inhibitor after post-translational modifications of the full-length pre-peptide. It is the ester linkages within the key 12-residue region that circularise the molecule, giving it its inhibitory conformation.
Inhibitor I24
This family includes PinA, which inhibits the endopeptidase La. It binds to the La homotetramer but does not interfere with the ATP binding site or the active site of La.
Inhibitor I29
The inhibitor I29 domain, which belongs to MEROPS peptidase inhibitor family I29, is found at the N-terminus of a variety of peptidase precursors that belong to MEROPS peptidase subfamily C1A; these include cathepsin L, papain, and procaricain. It forms an alpha-helical domain that runs through the substrate-binding site, preventing access. Removal of this region by proteolytic cleavage results in activation of the enzyme. This domain is also found, in one or more copies, in a variety of cysteine peptidase inhibitors such as salarin.
Inhibitor I34
The saccharopepsin inhibitor I34 is highly specific for the aspartic peptidase saccharopepsin. In the absence of saccharopepsin it is largely unstructured, but in its presence, the inhibitor undergoes a conformational change forming an almost perfect alpha-helix from Asn2 to Met32 in the active site cleft of the peptidase.
Inhibitor I36
The peptidase inhibitor family I36 domain is only found in a small number of proteins restricted to Streptomyces species. All have four conserved cysteines that probably form two disulphide bonds. One of these proteins from Streptomyces nigrescens, is the well characterised metalloproteinase inhibitor SMPI.
The structure of SMPI has been determined. It has 102 amino acid residues with two disulphide bridges and specifically inhibits metalloproteinases such as thermolysin, which belongs to MEROPS peptidase family M4. SMPI is composed of two beta-sheets, each consisting of four antiparallel beta-strands. The structure can be considered as two Greek key motifs with 2-fold internal symmetry, a Greek key beta-barrel. One unique structural feature found in SMPI is in its extension between the first and second strands of the second Greek key motif which is known to be involved in the inhibitory activity of SMPI. In the absence of sequence similarity, the SMPI structure shows clear similarity to both domains of the eye lens crystallins, both domains of the calcium sensor protein-S, as well as the single-domain yeast killer toxin. The yeast killer toxin structure was thought to be a precursor of the two-domain beta gamma-crystallin proteins, because of its structural similarity to each domain of the beta gamma-crystallins. SMPI thus provides another example of a single-domain protein structure that corresponds to the ancestral fold from which the two-domain proteins in the beta gamma-crystallin superfamily are believed to have evolved.
Inhibitor I42
Inhibitor family I42 includes chagasin, a reversible inhibitor of papain-like cysteine proteases. Chagasin has a beta-barrel structure, which is a unique variant of the immunoglobulin fold with homology to human CD8alpha.
Inhibitor I48
Inhibitor family I48 includes clitocypin, which binds and inhibits cysteine proteinases. It has no similarity to any other known cysteine proteinase inhibitors but bears some similarity to a lectin-like family of proteins from mushrooms.
Inhibitor I53
Members of this family are the peptidase inhibitor madanin proteins. These proteins were isolated from tick saliva.
Inhibitor I67
Bromelain inhibitor VI, in the Inhibitor I67 family, is a double-chain inhibitor consisting of an 11-residue and a 41-residue chain.
Inhibitor I68
The Carboxypeptidase inhibitor I68 family represents a family of carboxypeptidase inhibitors found in ticks.
Inhibitor I78
The peptidase inhibitor I78 family includes Aspergillus elastase inhibitor.
Compounds
Aprotinin
Bestatin
Calpain inhibitor I and II
Chymostatin
E-64
Leupeptin (N-acetyl-L-leucyl-L-leucyl-L-argininal)
alpha-2-Macroglobulin
Pefabloc SC
Pepstatin
PMSF (phenylmethanesulfonyl fluoride)
TLCK
Trypsin inhibitors
See also
Kunitz domain
Pacifastin
Proteinase inhibitors in plants
References
External links
Sigma-Aldrich protease inhibitor overview
Protein families | Protease inhibitor (biology) | [
"Biology"
] | 1,595 | [
"Protein families",
"Protein classification"
] |
564,590 | https://en.wikipedia.org/wiki/Serpin | Serpins are a superfamily of proteins with similar structures that were first identified for their protease inhibition activity and are found in all kingdoms of life. The acronym serpin was originally coined because the first serpins to be identified act on chymotrypsin-like serine proteases (serine protease inhibitors). They are notable for their unusual mechanism of action, in which they irreversibly inhibit their target protease by undergoing a large conformational change to disrupt the target's active site. This contrasts with the more common competitive mechanism for protease inhibitors that bind to and block access to the protease active site.
Protease inhibition by serpins controls an array of biological processes, including coagulation and inflammation, and consequently these proteins are the target of medical research. Their unique conformational change also makes them of interest to the structural biology and protein folding research communities. The conformational-change mechanism confers certain advantages, but it also has drawbacks: serpins are vulnerable to mutations that can result in serpinopathies such as protein misfolding and the formation of inactive long-chain polymers. Serpin polymerisation not only reduces the amount of active inhibitor, but also leads to accumulation of the polymers, causing cell death and organ failure.
Although most serpins control proteolytic cascades, some proteins with a serpin structure are not enzyme inhibitors, but instead perform diverse functions such as storage (as in egg white—ovalbumin), transport as in hormone carriage proteins (thyroxine-binding globulin, cortisol-binding globulin) and molecular chaperoning (HSP47). The term serpin is used to describe these members as well, despite their non-inhibitory function, since they are evolutionarily related.
History
Protease inhibitory activity in blood plasma was first reported in the late 1800s, but it was not until the 1950s that the serpins antithrombin and alpha 1-antitrypsin were isolated, with the subsequent recognition of their close family homology in 1979. That they belonged to a new protein family became apparent on their further alignment with the non-inhibitory egg-white protein ovalbumin, to give what was initially called the alpha1-antitrypsin-antithrombin III-ovalbumin superfamily of serine proteinase inhibitors, but was subsequently succinctly renamed as the Serpins. The initial characterisation of the new family centred on alpha1-antitrypsin, a serpin present in high concentration in blood plasma, the common genetic disorder of which was shown to cause a predisposition to the lung disease emphysema and to liver cirrhosis. The identification of the S and Z mutations responsible for the genetic deficiency and the subsequent sequence alignments of alpha1-antitrypsin and antithrombin in 1982 led to the recognition of the close homologies of the active sites of the two proteins, centred on a methionine in alpha1-antitrypsin as an inhibitor of tissue elastase and on arginine in antithrombin as an inhibitor of thrombin.
The critical role of the active centre residue in determining the specificity of inhibition of serpins was unequivocally confirmed by the finding that a natural mutation of the active centre methionine in alpha1-antitrypsin to an arginine, as in antithrombin, resulted in a severe bleeding disorder. This active-centre specificity of inhibition was also evident in the many other families of protease inhibitors but the serpins differed from them in being much larger proteins and also in possessing what was soon apparent as an inherent ability to undergo a change in shape. The nature of this conformational change was revealed with the determination in 1984 of the first crystal structure of a serpin, that of post-cleavage alpha1-antitrypsin. This together with the subsequent solving of the structure of native (uncleaved) ovalbumin indicated that the inhibitory mechanism of the serpins involved a remarkable conformational shift, with the movement of the exposed peptide loop containing the reactive site and its incorporation as a middle strand in the main beta-pleated sheet that characterises the serpin molecule. Early evidence of the essential role of this loop movement in the inhibitory mechanism came from the finding that even minor aberrations in the amino acid residues that form the hinge of the movement in antithrombin resulted in thrombotic disease. Ultimate confirmation of the linked displacement of the target protease by this loop movement was provided in 2000 by the structure of the post-inhibitory complex of alpha1-antitrypsin with trypsin, showing how the displacement results in the deformation and inactivation of the attached protease. Subsequent structural studies have revealed an additional advantage of the conformational mechanism in allowing the subtle modulation of inhibitory activity, as notably seen at tissue level with the functionally diverse serpins in human plasma.
Over 1000 serpins have now been identified, including 36 human proteins, as well as molecules in all kingdoms of life—animals, plants, fungi, bacteria, and archaea—and some viruses. The central feature of all is a tightly conserved framework, which allows the precise alignment of their key structural and functional components based on the template structure of alpha1-antitrypsin. In the 2000s, a systematic nomenclature was introduced in order to categorise members of the serpin superfamily based on their evolutionary relationships. Serpins are therefore the largest and most diverse superfamily of protease inhibitors.
Activity
Most serpins are protease inhibitors, targeting extracellular, chymotrypsin-like serine proteases. These proteases possess a nucleophilic serine residue in a catalytic triad in their active site. Examples include thrombin, trypsin, and human neutrophil elastase. Serpins act as irreversible, suicide inhibitors by trapping an intermediate of the protease's catalytic mechanism.
Some serpins inhibit other protease classes, typically cysteine proteases, and are termed "cross-class inhibitors". These enzymes differ from serine proteases in that they use a nucleophilic cysteine residue, rather than a serine, in their active site. Nonetheless, the enzymatic chemistry is similar, and the mechanism of inhibition by serpins is the same for both classes of protease. Examples of cross-class inhibitory serpins include serpin B4, a squamous cell carcinoma antigen 1 (SCCA-1), and the avian serpin myeloid and erythroid nuclear termination stage-specific protein (MENT), which both inhibit papain-like cysteine proteases.
Biological function and localization
Protease inhibition
Approximately two-thirds of human serpins perform extracellular roles, inhibiting proteases in the bloodstream in order to modulate their activities. For example, extracellular serpins regulate the proteolytic cascades central to blood clotting (antithrombin), the inflammatory and immune responses (antitrypsin, antichymotrypsin, and C1-inhibitor) and tissue remodelling (PAI-1). By inhibiting signalling cascade proteases, they can also affect development. The table of human serpins (below) provides examples of the range of functions performed by human serpins, as well as some of the diseases that result from serpin deficiency.
The protease targets of intracellular inhibitory serpins have been difficult to identify, since many of these molecules appear to perform overlapping roles. Further, many human serpins lack precise functional equivalents in model organisms such as the mouse. Nevertheless, an important function of intracellular serpins may be to protect against the inappropriate activity of proteases inside the cell. For example, one of the best-characterised human intracellular serpins is Serpin B9, which inhibits the cytotoxic granule protease granzyme B. In doing so, Serpin B9 may protect against inadvertent release of granzyme B and premature or unwanted activation of cell death pathways.
Some viruses use serpins to disrupt protease functions in their host. The cowpox viral serpin CrmA (cytokine response modifier A) is used in order to avoid inflammatory and apoptotic responses of infected host cells. CrmA increases infectivity by suppressing its host's inflammatory response through inhibition of IL-1 and IL-18 processing by the cysteine protease caspase-1. In eukaryotes, a plant serpin inhibits both metacaspases and a papain-like cysteine protease.
Non-inhibitory roles
Non-inhibitory extracellular serpins also perform a wide array of important roles. Thyroxine-binding globulin and transcortin transport the hormones thyroxine and cortisol, respectively. The non-inhibitory serpin ovalbumin is the most abundant protein in egg white. Its exact function is unknown, but it is thought to be a storage protein for the developing foetus. Heat shock serpin 47 is a chaperone, essential for proper folding of collagen. It acts by stabilising collagen's triple helix whilst it is being processed in the endoplasmic reticulum.
Some serpins are both protease inhibitors and perform additional roles. For example, in birds the nuclear cysteine protease inhibitor MENT also acts as a chromatin remodelling molecule in red blood cells.
Structure
All serpins share a common structure (or fold), despite their varied functions. All typically have three β-sheets (named A, B and C) and eight or nine α-helices (named hA–hI). The most significant regions to serpin function are the A-sheet and the reactive centre loop (RCL). The A-sheet includes two β-strands that are in a parallel orientation with a region between them called the 'shutter', and upper region called the 'breach'. The RCL forms the initial interaction with the target protease in inhibitory molecules. Structures have been solved showing the RCL either fully exposed or partially inserted into the A-sheet, and serpins are thought to be in dynamic equilibrium between these two states. The RCL also only makes temporary interactions with the rest of the structure, and is therefore highly flexible and exposed to the solvent.
The serpin structures that have been determined cover several different conformations, which has been necessary for the understanding of their multiple-step mechanism of action. Structural biology has therefore played a central role in the understanding of serpin function and biology.
Conformational change and inhibitory mechanism
Inhibitory serpins do not inhibit their target proteases by the typical competitive (lock-and-key) mechanism used by most small protease inhibitors (e.g. Kunitz-type inhibitors). Instead, serpins use an unusual conformational change, which disrupts the structure of the protease and prevents it from completing catalysis. The conformational change involves the RCL moving to the opposite end of the protein and inserting into β-sheet A, forming an extra antiparallel β-strand. This converts the serpin from a stressed state, to a lower-energy relaxed state (S to R transition).
Serine and cysteine proteases catalyse peptide bond cleavage by a two-step process. Initially, the catalytic residue of the active site triad performs a nucleophilic attack on the peptide bond of the substrate. This releases the new N-terminus and forms a covalent ester-bond between the enzyme and the substrate. This covalent complex between enzyme and substrate is called an acyl-enzyme intermediate. For standard substrates, the ester bond is hydrolysed and the new C-terminus is released to complete catalysis. However, when a serpin is cleaved by a protease, it rapidly undergoes the S to R transition before the acyl-enzyme intermediate is hydrolysed. The efficiency of inhibition depends on the fact that the relative kinetic rate of the conformational change is several orders of magnitude faster than hydrolysis by the protease.
Since the RCL is still covalently attached to the protease via the ester bond, the S to R transition pulls the protease from the top to the bottom of the serpin and distorts the catalytic triad. The distorted protease can only hydrolyse the acyl enzyme intermediate extremely slowly and so the protease remains covalently attached for days to weeks. Serpins are classed as irreversible inhibitors and as suicide inhibitors since each serpin protein permanently inactivates a single protease, and can only function once.
Allosteric activation
The conformational mobility of serpins provides a key advantage over static lock-and-key protease inhibitors. In particular, the function of inhibitory serpins can be regulated by allosteric interactions with specific cofactors. The X-ray crystal structures of antithrombin, heparin cofactor II, MENT and murine antichymotrypsin reveal that these serpins adopt a conformation wherein the first two amino acids of the RCL are inserted into the top of the A β-sheet. The partially inserted conformation is important because co-factors are able to conformationally switch certain partially inserted serpins into a fully expelled form. This conformational rearrangement makes the serpin a more effective inhibitor.
The archetypal example of this situation is antithrombin, which circulates in plasma in a partially inserted relatively inactive state. The primary specificity determining residue (the P1 arginine) points toward the body of the serpin and is unavailable to the protease. Upon binding a high-affinity pentasaccharide sequence within long-chain heparin, antithrombin undergoes a conformational change, RCL expulsion, and exposure of the P1 arginine. The heparin pentasaccharide-bound form of antithrombin is, thus, a more effective inhibitor of thrombin and factor Xa. Furthermore, both of these coagulation proteases also contain binding sites (called exosites) for heparin. Heparin, therefore, also acts as a template for binding of both protease and serpin, further dramatically accelerating the interaction between the two parties. After the initial interaction, the final serpin complex is formed and the heparin moiety is released. This interaction is physiologically important. For example, after injury to the blood vessel wall, heparin is exposed, and antithrombin is activated to control the clotting response. Understanding of the molecular basis of this interaction enabled the development of Fondaparinux, a synthetic form of Heparin pentasaccharide used as an anti-clotting drug.
Latent conformation
Certain serpins spontaneously undergo the S to R transition without having been cleaved by a protease, to form a conformation termed the latent state. Latent serpins are unable to interact with proteases and so are no longer protease inhibitors. The conformational change to latency is not exactly the same as the S to R transition of a cleaved serpin. Since the RCL is still intact, the first strand of the C-sheet has to peel off to allow full RCL insertion.
Regulation of the latency transition can act as a control mechanism in some serpins, such as PAI-1. Although PAI-1 is produced in the inhibitory S conformation, it "auto-inactivates" by changing to the latent state unless it is bound to the cofactor vitronectin. Similarly, antithrombin can also spontaneously convert to the latent state, as an additional modulation mechanism to its allosteric activation by heparin. Finally, the N-terminus of , a serpin from Thermoanaerobacter tengcongensis, is required to lock the molecule in the native inhibitory state. Disruption of interactions made by the N-terminal region results in spontaneous conformational change of this serpin to the latent conformation.
Conformational change in non-inhibitory functions
Certain non-inhibitory serpins also use the serpin conformational change as part of their function. For example, the native (S) form of thyroxine-binding globulin has high affinity for thyroxine, whereas the cleaved (R) form has low affinity. Similarly, transcortin has higher affinity for cortisol when in its native (S) state, than its cleaved (R) state. Thus, in these serpins, RCL cleavage and the S to R transition has been commandeered to allow for ligand release, rather than protease inhibition.
In some serpins, the S to R transition can activate cell signalling events. In these cases, a serpin that has formed a complex with its target protease, is then recognised by a receptor. The binding event then leads to downstream signalling by the receptor. The S to R transition is therefore used to alert cells to the presence of protease activity. This differs from the usual mechanism whereby serpins affect signalling simply by inhibiting proteases involved in a signalling cascade.
Degradation
When a serpin inhibits a target protease, it forms a permanent complex, which needs to be disposed of. For extracellular serpins, the final serpin-enzyme complexes are rapidly cleared from circulation. One mechanism by which this occurs in mammals is via the low-density lipoprotein receptor-related protein (LRP), which binds to inhibitory complexes made by antithrombin, PAI-1, and neuroserpin, causing cellular uptake. Similarly, the Drosophila necrotic serpin is degraded in the lysosome after being trafficked into the cell by the Lipophorin Receptor-1 (homologous to the mammalian LDL receptor family).
Disease and serpinopathies
Serpins are involved in a wide array of physiological functions, and so mutations in genes encoding them can cause a range of diseases. Mutations that change the activity, specificity or aggregation properties of serpins all affect how they function. The majority of serpin-related diseases are the result of serpin polymerisation into aggregates, though several other types of disease-linked mutations also occur. The disorder alpha-1 antitrypsin deficiency is one of the most common hereditary diseases.
Inactivity or absence
Since the stressed serpin fold is high-energy, mutations can cause them to incorrectly change into their lower-energy conformations (e.g. relaxed or latent) before they have correctly performed their inhibitory role.
Mutations that affect the rate or the extent of RCL insertion into the A-sheet can cause the serpin to undergo its S to R conformational change before having engaged a protease. Since a serpin can only make this conformational change once, the resulting misfired serpin is inactive and unable to properly control its target protease. Similarly, mutations that promote inappropriate transition to the monomeric latent state cause disease by reducing the amount of active inhibitory serpin. For example, the disease-linked antithrombin variants wibble and wobble both promote formation of the latent state.
The structure of the disease-linked mutant of antichymotrypsin (L55P) revealed another, inactive "δ-conformation". In the δ-conformation, four residues of the RCL are inserted into the top of β-sheet A. The bottom half of the sheet is filled as a result of one of the α-helices (the F-helix) partially switching to a β-strand conformation, completing the β-sheet hydrogen bonding. It is unclear whether other serpins can adopt this conformer, and whether this conformation has a functional role, but it is speculated that the δ-conformation may be adopted by Thyroxine-binding globulin during thyroxine release. The non-inhibitory proteins related to serpins can also cause diseases when mutated. For example, mutations in SERPINF1 cause osteogenesis imperfecta type VI in humans.
In the absence of a required serpin, the protease that it normally would regulate is over-active, leading to pathologies. Consequently, simple deficiency of a serpin (e.g. a null mutation) can result in disease. Gene knockouts, particularly in mice, are used experimentally to determine the normal functions of serpins by the effect of their absence.
Specificity change
In some rare cases, a single amino acid change in a serpin's RCL alters its specificity to target the wrong protease. For example, the Antitrypsin-Pittsburgh mutation (M358R) causes the α1-antitrypsin serpin to inhibit thrombin, causing a bleeding disorder.
Polymerisation and aggregation
The majority of serpin diseases are due to protein aggregation and are termed "serpinopathies". Serpins are vulnerable to disease-causing mutations that promote formation of misfolded polymers due to their inherently unstable structures. Well-characterised serpinopathies include α1-antitrypsin deficiency (alpha-1), which may cause familial emphysema, and sometimes liver cirrhosis, certain familial forms of thrombosis related to antithrombin deficiency, types 1 and 2 hereditary angioedema (HAE) related to deficiency of C1-inhibitor, and familial encephalopathy with neuroserpin inclusion bodies (FENIB; a rare type of dementia caused by neuroserpin polymerisation).
Each monomer of the serpin aggregate exists in the inactive, relaxed conformation (with the RCL inserted into the A-sheet). The polymers are therefore hyperstable to temperature and unable to inhibit proteases. Serpinopathies therefore cause pathologies similarly to other proteopathies (e.g. prion diseases) via two main mechanisms. First, the lack of active serpin results in uncontrolled protease activity and tissue destruction. Second, the hyperstable polymers themselves clog up the endoplasmic reticulum of cells that synthesize serpins, eventually resulting in cell death and tissue damage. In the case of antitrypsin deficiency, antitrypsin polymers cause the death of liver cells, sometimes resulting in liver damage and cirrhosis. Within the cell, serpin polymers are slowly removed via degradation in the endoplasmic reticulum. However, the details of how serpin polymers cause cell death remains to be fully understood.
Physiological serpin polymers are thought to form via domain swapping events, where a segment of one serpin protein inserts into another. Domain-swaps occur when mutations or environmental factors interfere with the final stages of serpin folding to the native state, causing high-energy intermediates to misfold. Both dimer and trimer domain-swap structures have been solved. In the dimer (of antithrombin), the RCL and part of the A-sheet incorporates into the A-sheet of another serpin molecule. The domain-swapped trimer (of antitrypsin) forms via the exchange of an entirely different region of the structure, the B-sheet (with each molecule's RCL inserted into its own A-sheet). It has also been proposed that serpins may form domain-swaps by inserting the RCL of one protein into the A-sheet of another (A-sheet polymerisation). These domain-swapped dimer and trimer structures are thought to be the building blocks of the disease-causing polymer aggregates, but the exact mechanism is still unclear.
Therapeutic strategies
Several therapeutic approaches are in use or under investigation to treat the most common serpinopathy: antitrypsin deficiency. Antitrypsin augmentation therapy is approved for severe antitrypsin deficiency-related emphysema. In this therapy, antitrypsin is purified from the plasma of blood donors and administered intravenously (first marketed as Prolastin). To treat severe antitrypsin deficiency-related disease, lung and liver transplantation has proven effective. In animal models, gene targeting in induced pluripotent stem cells has been successfully used to correct an antitrypsin polymerisation defect and to restore the ability of the mammalian liver to secrete active antitrypsin. Small molecules have also been developed that block antitrypsin polymerisation in vitro.
Evolution
Serpins are the most widely distributed and largest superfamily of protease inhibitors. They were initially believed to be restricted to eukaryote organisms, but have since been found in bacteria, archaea and some viruses. It remains unclear whether prokaryote genes are the descendants of an ancestral prokaryotic serpin or the product of horizontal gene transfer from eukaryotes. Most intracellular serpins belong to a single phylogenetic clade, whether they come from plants or animals, indicating that the intracellular and extracellular serpins may have diverged before the plants and animals. Exceptions include the intracellular heat shock serpin HSP47, which is a chaperone essential for proper folding of collagen, and cycles between the cis-Golgi and the endoplasmic reticulum.
Protease-inhibition is thought to be the ancestral function, with non-inhibitory members the results of evolutionary neofunctionalisation of the structure. The S to R conformational change has also been adapted by some binding serpins to regulate affinity for their targets.
Distribution
Animal
Human
The human genome encodes 16 serpin clades, termed clades A through P, including 29 inhibitory and 7 non-inhibitory serpin proteins. The human serpin naming system is based upon a phylogenetic analysis of approximately 500 serpins from 2001, with proteins named SERPINXy (for example, SERPINA1), where X is the clade of the protein and y the number of the protein within that clade. The functions of human serpins have been determined by a combination of biochemical studies, human genetic disorders, and knockout mouse models.
Specialised mammalian serpins
Many mammalian serpins have been identified that share no obvious orthology with a human serpin counterpart. Examples include numerous rodent serpins (particularly some of the murine intracellular serpins) as well as the uterine serpins. The term uterine serpin refers to members of the serpin A clade that are encoded by the SERPINA14 gene. Uterine serpins are produced by the endometrium of a restricted group of mammals in the Laurasiatheria clade under the influence of progesterone or estrogen. They are probably not functional proteinase inhibitors and may function during pregnancy to inhibit maternal immune responses against the conceptus or to participate in transplacental transport.
Insect
The Drosophila melanogaster genome contains 29 serpin encoding genes. Amino acid sequence analysis has placed 14 of these serpins in serpin clade Q and three in serpin clade K with the remaining twelve classified as orphan serpins not belonging to any clade. The clade classification system is difficult to use for Drosophila serpins and instead a nomenclature system has been adopted that is based on the position of serpin genes on the Drosophila chromosomes. Thirteen of the Drosophila serpins occur as isolated genes in the genome (including Serpin-27A, see below), with the remaining 16 organised into five gene clusters that occur at chromosome positions 28D (2 serpins), 42D (5 serpins), 43A (4 serpins), 77B (3 serpins) and 88E (2 serpins).
Studies on Drosophila serpins reveal that Serpin-27A inhibits the Easter protease (the final protease in the Nudel, Gastrulation Defective, Snake and Easter proteolytic cascade) and thus controls dorsoventral patterning. Easter functions to cleave Spätzle (a chemokine-type ligand), which results in toll-mediated signaling. As well as its central role in embryonic patterning, toll signaling is also important for the innate immune response in insects. Accordingly, serpin-27A also functions to control the insect immune response. In Tenebrio molitor (a large beetle), a protein (SPN93) comprising two discrete tandem serpin domains functions to regulate the toll proteolytic cascade.
Nematode
The genome of the nematode worm C. elegans contains 9 serpins, all of which lack signal sequences and so are likely intracellular. However, only 5 of these serpins appear to function as protease inhibitors. One, SRP-6, performs a protective function and guards against stress-induced calpain-associated lysosomal disruption. Further, SRP-6 inhibits lysosomal cysteine proteases released after lysosomal rupture. Accordingly, worms lacking SRP-6 are sensitive to stress. Most notably, SRP-6 knockout worms die when placed in water (the hypo-osmotic stress lethal phenotype or Osl). It has therefore been suggested that lysosomes play a general and controllable role in determining cell fate.
Plant
Plant serpins were amongst the first members of the superfamily to be identified. The barley serpin protein Z is highly abundant in barley grain and is one of the major protein components in beer. The genome of the model plant Arabidopsis thaliana contains 18 serpin-like genes, although only 8 of these are full-length serpin sequences.
Plant serpins are potent inhibitors of mammalian chymotrypsin-like serine proteases in vitro, the best-studied example being barley serpin Zx (BSZx), which is able to inhibit trypsin and chymotrypsin as well as several blood coagulation factors. However, close relatives of chymotrypsin-like serine proteases are absent in plants. The RCL of several serpins from wheat grain and rye contain poly-Q repeat sequences similar to those present in the prolamin storage proteins of the endosperm. It has therefore been suggested that plant serpins may function to inhibit proteases from insects or microbes that would otherwise digest grain storage proteins. In support of this hypothesis, specific plant serpins have been identified in the phloem sap of pumpkin (CmPS-1) and cucumber plants. Although an inverse correlation between up-regulation of CmPS-1 expression and aphid survival was observed, in vitro feeding experiments revealed that recombinant CmPS-1 did not appear to affect insect survival.
Alternative roles and protease targets for plant serpins have been proposed. The Arabidopsis serpin AtSerpin1 (At1g47710) mediates set-point control over programmed cell death by targeting the 'Responsive to Desiccation-21' (RD21) papain-like cysteine protease. AtSerpin1 also inhibits metacaspase-like proteases in vitro. Two other Arabidopsis serpins, AtSRP2 (At2g14540) and AtSRP3 (At1g64030), appear to be involved in responses to DNA damage.
Fungal
A single fungal serpin has been characterized to date: from Piromyces spp. strain E2. Piromyces is a genus of anaerobic fungi found in the gut of ruminants and is important for digesting plant material. is predicted to be inhibitory and contains two N-terminal dockerin domains in addition to its serpin domain. Dockerins are commonly found in proteins that localise to the fungal cellulosome, a large extracellular multiprotein complex that breaks down cellulose. It is therefore suggested that may protect the cellulosome against plant proteases. Certain bacterial serpins similarly localize to the cellulosome.
Prokaryotic
Predicted serpin genes are sporadically distributed in prokaryotes. In vitro studies on some of these molecules have revealed that they are able to inhibit proteases, and it is suggested that they function as inhibitors in vivo. Several prokaryote serpins are found in extremophiles. Accordingly, and in contrast to mammalian serpins, these molecules possess elevated resistance to heat denaturation. The precise role of most bacterial serpins remains obscure, although Clostridium thermocellum serpin localises to the cellulosome. It is suggested that the role of cellulosome-associated serpins may be to prevent unwanted protease activity against the cellulosome.
Viral
Serpins are also expressed by viruses as a way to evade the host's immune defense. In particular, serpins expressed by pox viruses, including cow pox (vaccinia) and rabbit pox (myxoma), are of interest because of their potential use as novel therapeutics for immune and inflammatory disorders as well as transplant therapy. Serp1 suppresses the TLR-mediated innate immune response and allows indefinite cardiac allograft survival in rats. CrmA and Serp2 are both cross-class inhibitors and target both serine (granzyme B; albeit weakly) and cysteine proteases (caspase 1 and caspase 8). In comparison to their mammalian counterparts, viral serpins contain significant deletions of elements of secondary structure. Specifically, CrmA lacks the D-helix as well as significant portions of the A- and E-helices.
References
External links
MEROPS protease inhibitor classification (Family I4)
James Whisstock laboratory at Monash University
Jim Huntington laboratory at University of Cambridge
Frank Church laboratory at University of North Carolina at Chapel Hill
Paul Declerck laboratory at Katholieke Universiteit Leuven
Tom Roberts laboratory at University of Sydney
Robert Fluhr laboratory at Weizmann Institute of Science
Peter Gettins laboratory at University of Illinois at Chicago
Protein families | Serpin | [
"Biology"
] | 7,200 | [
"Protein families",
"Protein classification"
] |
564,641 | https://en.wikipedia.org/wiki/Virtual%20ground | In electronics, a virtual ground (or virtual earth) is a node of a circuit that is maintained at a steady reference potential, without being connected directly to the reference potential. In some cases the reference potential is considered to be that of the surface of the earth, and the reference node is called "ground" or "earth" as a consequence.
The virtual ground concept aids circuit analysis in operational amplifiers and other circuits and provides useful practical circuit effects that would be difficult to achieve in other ways.
In circuit theory, a node may have any value of current or voltage but physical implementations of a virtual ground will have limitations in terms of current handling ability and a non-zero impedance which may have practical side effects.
Construction
A voltage divider, using two resistors, can be used to create a virtual ground node. If two voltage sources are connected in series with two resistors, it can be shown that the midpoint becomes a virtual ground if the divider is balanced, that is, if V1/R1 = V2/R2 (equivalently, V1·R2 = V2·R1), where V1 and V2 are the two source voltages and R1 and R2 are the resistors adjacent to them.
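As a rough numerical illustration (the component values below are arbitrary assumptions, not taken from this article), the following sketch computes the midpoint voltage and Thevenin impedance of such a resistive divider. It shows that the node sits at the reference potential only when the divider is balanced, and that its relatively high output impedance is what motivates the active op-amp circuits described next.

```python
# Illustrative sketch: midpoint of a resistive rail splitter built from two
# supplies (V1, V2) and two resistors (R1, R2). Example values are arbitrary.

def midpoint_voltage(V1, V2, R1, R2):
    """Voltage of the R1/R2 junction, measured against the junction of the
    two supplies, obtained by superposition of the two sources."""
    return (V1 * R2 - V2 * R1) / (R1 + R2)

def output_impedance(R1, R2):
    """Thevenin impedance seen at the midpoint: R1 in parallel with R2."""
    return R1 * R2 / (R1 + R2)

# Balanced case: V1/R1 == V2/R2, so the midpoint sits at 0 V.
print(midpoint_voltage(9.0, 9.0, 10e3, 10e3))   # 0.0
# Any load current shifts the node by I_load * (R1 || R2), which is why an
# op-amp buffer ("rail splitter") is preferred when a stiff ground is needed.
print(output_impedance(10e3, 10e3))             # 5000.0 ohms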
An active virtual ground circuit is sometimes called a rail splitter. Such a circuit uses an op-amp or some other circuit element that has gain. Since an operational amplifier has very high open-loop gain, the potential difference between its inputs tends to zero when a feedback network is implemented.
This means that the output supplies the inverting input (via the feedback network) with enough voltage to reduce the potential difference between the inputs to microvolts. More precisely, the output voltage settles at whatever value is needed to hold the inverting input at the reference potential applied to the non-inverting input.
Thus, as long as the amplifier is working in its linear region (output not saturated, frequencies within the bandwidth of the op-amp), the voltage at the inverting input terminal remains constant with respect to the real ground, and independent of the loads to which the output may be connected.
This property is what characterizes the node as a "virtual ground".
Applications
Voltage is a differential quantity, which appears between two points. In order to deal only with a voltage (an electrical potential) of a single point, the second point has to be connected to a reference point (ground). Usually, the power supply terminals serve as steady grounds; when the internal points of compound power sources are accessible, they can also serve as real grounds.
If there are no accessible source internal points, external circuit points with steady voltage relative to the source terminals can serve as artificial virtual grounds. Such a point has to have steady potential, which does not vary when a load is attached.
See also
Voltage-to-current converter and Current-to-voltage converter show some typical virtual ground applications
Miller theorem applications
References
External links
Create a Virtual Ground with the LT1118-2.5 Sink/Source Voltage Regulator
Rail Splitter, from Abraham Lincoln to Virtual Ground Application note on creating an artificial virtual ground as a reference voltage.
Creating a Virtual Power Supply Ground
Inverting configuration shows the application of the virtual ground concept in an inverting amplifier (Archived)
Electrical circuits
Electricity concepts | Virtual ground | [
"Engineering"
] | 597 | [
"Electrical engineering",
"Electronic engineering",
"Electrical circuits"
] |
564,661 | https://en.wikipedia.org/wiki/Video%20game%20producer | A video game producer is the top person in charge of overseeing development of a video game.
History
The earliest documented use of the term producer in games was by Trip Hawkins, who established the position when he founded Electronic Arts in 1982:
Sierra On-Line's 1982 computer game Time Zone may be the first to list credits for "Producer" and "Executive Producer". As of late 1983 Electronic Arts had five producers: a product marketer and two others from Hawkins' former employer Apple ("good at working with engineering people"), one former IBM salesman and executive recruiter, and one product marketer from Automated Simulations; the company popularized the use of the title in the industry. Hawkins' vision—influenced by his relationship with Jerry Moss—was that producers would manage artists and repertoire in the same way as in the music business, and Hawkins brought in record producers from A&M Records to help train those first producers. Activision made Brad Fregger their first producer in April 1983.
Although the term is an industry standard today, it was dismissed as "imitation Hollywood" by many game executives and press members at the time. Over its entire history, the role of the video game producer has been defined in a wide range of ways by different companies and different teams, and there are a variety of positions within the industry referred to as producer.
There are relatively few superstars of game production that parallel those in film, in part because top producers are usually employed by publishers who choose to play down publicizing their contributions. Unlike many of their counterparts in film or music, these producers do not run their own independent companies.
Types of producers
Most video and computer games are developed by third-party developers. In these cases, there may be external and internal producers. External producers may act as "executive producers" and are employed by the game's publisher. Internal producers work for the developer itself and have more of a hands-on role. Some game developers may have no internal producers, however, and may rely solely on the publisher's producer.
For an internal producer, associate producers tend to specialize in an area of expertise depending on the team they are producing for and what skills they have a background in. These specializations include but are not limited to: programming, design, art, sound, and quality assurance. A normal producer is usually the project manager and is in charge of delivering the product to the publisher on time and on budget. An executive producer will be managing all of the products in the company and making sure that the games are on track to meet their goals and stay within the company's goals and direction.
For an external producer, their job responsibilities may focus mainly on overseeing several projects being worked on by a number of developers. While keeping updated on the progress of the games being developed externally, they inform the upper management of the publisher of the status of the pending projects and any problems they may be experiencing. If a publisher's producer is overseeing a game being developed internally, their role is more akin to that of an internal producer and will generally only work on one game or a few small games.
As games have grown larger and more expensive, line producers have become part of some teams. Based on filmmaking traditions, line producers focus on project scheduling and costing to ensure titles are completed on time and on budget.
Responsibilities
An internal producer is heavily involved in the development of, usually, a single game. Responsibilities for this position vary from company to company, but in general, the person in this position has the following duties:
Negotiating contracts, including licensing deals
Acting as a liaison between the development staff and the upper stakeholders (publisher or executive staff)
Developing and maintaining schedules and budgets
Overseeing creative (art and design) and technical development (game programming) of the game
Ensuring timely delivery of deliverables (such as milestones)
Scheduling timely quality assurance (testing)
Arranging for beta testing and focus groups, if applicable
Arranging for localization
Pitching game ideas to publishers
In short, the internal producer is ultimately responsible for timely delivery and final quality of the game.
For small games, the producer may interact directly with the programming and creative staff. For larger games, the producer will seek the assistance of the lead programmer, art lead, game designer and testing lead. While it is customary for the producer to meet with the entire development staff from time to time, for larger games, they will only meet with the leads on a regular basis to keep updated on the development status. In smaller studios, a producer may fill any slack in the production team by doing the odd job of writing the game manual or producing game assets.
For most games, the producer does not have a large role but does have some influence on the development of the video game design. While not a game designer, the producer has to weave the wishes of the publisher or upper management into the design. They usually seek the assistance of the game designer in this effort. The final game design is therefore the result of the designer's effort combined with some influence from the producer.
Compensation
In general, video game producers earn the third most out of game development positions, behind business (management) and programmers.
According to an annual survey of salaries in the industry, producers earn an average of US$75,000 annually. A video game producer with less than 3 years of experience makes, on average, around $55,000 annually. A video game producer with more than 6 years of experience makes, on average, over $125,000 annually. Salaries vary depending on the region and the studio.
Education
Most video game producers complete a bachelor's degree program in game design, computer science, digital media or business. Popular computer programming languages for video game development include C, C++, Assembly, C# and Java. Some common courses are communications, mathematics, accounting, art, digital modeling and animation.
Employers typically require three plus years of experience, since a producer has to have gone through the development cycle several times to really understand how unpredictable the business is. The most common path to becoming a video game producer begins by first working as a game tester, then moving up the quality assurance ladder, and then eventually on to production. This is easier to accomplish if one stays with the same studio, reaping the benefits of having built relationships with the production department.
See also
List of video game producers
References
External links
Producer at Eurocom
Justyn McLean - Game Boy at Mirror
Video game producer
Video game industry occupations | Video game producer | [
"Technology"
] | 1,307 | [
"Video game industry occupations",
"Computer occupations"
] |
564,685 | https://en.wikipedia.org/wiki/Digital%20control | Digital control is a branch of control theory that uses digital computers to act as system controllers.
Depending on the requirements, a digital control system can range from a microcontroller to an ASIC to a standard desktop computer.
Since a digital computer is a discrete system, the Laplace transform is replaced with the Z-transform. Since a digital computer has finite precision (See quantization), extra care is needed to ensure the error in coefficients, analog-to-digital conversion, digital-to-analog conversion, etc. are not producing undesired or unplanned effects.
Since the creation of the first digital computer in the early 1940s, the price of digital computers has dropped considerably, which has made them key components of control systems: they are easy to configure and reconfigure through software, they can scale to the limits of the available memory or storage space without extra cost, their program parameters can change over time (See adaptive control), and they are much less sensitive to environmental conditions than analog components such as capacitors and inductors.
Digital controller implementation
A digital controller is usually cascaded with the plant in a feedback system. The rest of the system can either be digital or analog.
Typically, a digital controller requires:
Analog-to-digital conversion to convert analog inputs to machine-readable (digital) format
Digital-to-analog conversion to convert digital outputs to a form that can be input to a plant (analog)
A program that relates the outputs to the inputs
Output program
Outputs from the digital controller are functions of current and past input samples, as well as past output samples - this can be implemented by storing relevant values of input and output in registers. The output can then be formed by a weighted sum of these stored values.
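A minimal sketch of such a weighted-sum implementation is shown below. The structure is generic; the example coefficients are assumptions chosen to give a simple first-order low-pass filter, one of the program functions listed next.

```python
# Sketch (illustrative only; coefficients are assumed, not from the article) of
# a digital controller output formed as a weighted sum of stored input and
# output samples, i.e. a difference equation realised with shift registers.
from collections import deque

class DifferenceEquation:
    """y[k] = b[0]*u[k] + b[1]*u[k-1] + ... - (a[0]*y[k-1] + a[1]*y[k-2] + ...)"""
    def __init__(self, b, a):
        self.b, self.a = list(b), list(a)
        self.u = deque([0.0] * len(self.b), maxlen=len(self.b))  # input registers
        self.y = deque([0.0] * len(self.a), maxlen=len(self.a))  # output registers

    def update(self, u_k):
        self.u.appendleft(u_k)
        y_k = sum(b * u for b, u in zip(self.b, self.u)) \
              - sum(a * y for a, y in zip(self.a, self.y))
        self.y.appendleft(y_k)
        return y_k

# Example: a first-order low-pass filter y[k] = 0.1*u[k] + 0.9*y[k-1],
# i.e. b = [0.1] and a = [-0.9] in the sign convention above.
lpf = DifferenceEquation(b=[0.1], a=[-0.9])
for sample in [1.0, 1.0, 1.0, 1.0]:
    print(lpf.update(sample))   # output rises toward 1.0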
The programs can take numerous forms and perform many functions
A digital filter for low-pass filtering
A state space model of a system to act as a state observer
A telemetry system
Stability
Although a controller may be stable when implemented as an analog controller, it could be unstable when implemented as a digital controller due to a large sampling interval. Sampling introduces aliasing, which modifies the effective cutoff parameters. The sample rate therefore characterizes the transient response and stability of the compensated system, and the controller must update the values at its input often enough so as not to cause instability.
When substituting the frequency into the z operator, regular stability criteria still apply to discrete control systems. Nyquist criteria apply to z-domain transfer functions as well as being general for complex valued functions. Bode stability criteria apply similarly.
The Jury criterion determines the stability of a discrete system from its characteristic polynomial.
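The following sketch illustrates these points numerically; the plant G(s) = 1/(s² + s), the feedback gain and the sample times are assumed examples, not taken from the article. It checks pole magnitudes directly, which is the condition the Jury criterion establishes algebraically: the same gain that is stable with fast sampling can destabilize the loop when the sampling interval is large.

```python
# Sketch (assumed plant and gain): discrete closed-loop stability is tested by
# checking whether all roots of the characteristic polynomial lie inside the
# unit circle -- the condition the Jury criterion verifies algebraically.
import numpy as np
from scipy.signal import cont2discrete

def closed_loop_stable(K, T):
    """True if all closed-loop poles lie inside the unit circle."""
    # Plant G(s) = 1 / (s^2 + s), discretised with a zero-order hold at T.
    numd, dend, _ = cont2discrete(([1.0], [1.0, 1.0, 0.0]), T, method='zoh')
    numd = np.squeeze(numd)
    # Unity negative feedback with proportional gain K:
    # characteristic polynomial = den(z) + K * num(z).
    char_poly = np.polyadd(dend, K * numd)
    return bool(np.all(np.abs(np.roots(char_poly)) < 1.0))

print(closed_loop_stable(K=5.0, T=0.01))   # fast sampling: expected True (stable)
print(closed_loop_stable(K=5.0, T=1.0))    # slow sampling: expected False (unstable)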
Design of digital controller in s-domain
The digital controller can also be designed in the s-domain (continuous). The Tustin transformation can transform the continuous compensator to the respective digital compensator. The digital compensator will achieve an output that approaches the output of its respective analog controller as the sampling interval is decreased.
Tustin transformation deduction
Tustin is the Padé(1,1) approximation of the exponential function z = e^(sT):
z = e^(sT) ≈ (1 + sT/2) / (1 − sT/2)
And its inverse
s ≈ (2/T) · (z − 1) / (z + 1)
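As an illustrative sketch (the compensator C(s) = (s + 1)/(s + 10) and the 10 ms sample time are assumed example values, not taken from the article), the Tustin (bilinear) mapping can be applied numerically with SciPy:

```python
# Sketch: discretising an assumed continuous-time lead compensator with the
# Tustin (bilinear) transformation at a 10 ms sample time.
from scipy.signal import cont2discrete

num, den = [1.0, 1.0], [1.0, 10.0]        # C(s) = (s + 1) / (s + 10)
T = 0.01                                   # sample time in seconds

numd, dend, dt = cont2discrete((num, den), T, method='bilinear')
print(numd, dend, dt)
# The resulting C(z) can be realised as a difference equation in the controller;
# as T is decreased, its response approaches that of the analog compensator.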
Digital control theory is the technique of designing control strategies in discrete time (and/or with quantized amplitude, and/or in binary coded form) to be implemented in computer systems (microcontrollers, microprocessors) that will control the analog (continuous in time and amplitude) dynamics of analog systems. From this consideration, many errors of classical digital control were identified and solved, and new methods were proposed:
Marcelo Tredinnick and Marcelo Souza and their new type of analog-digital mapping
Yutaka Yamamoto and his "lifting function space model"
Alexander Sesekin and his studies about impulsive systems.
M.U. Akhmetov and his studies about impulsive and pulse control
Design of digital controller in z-domain
The digital controller can also be designed in the z-domain (discrete). The Pulse Transfer Function (PTF) represents the digital viewpoint of the continuous process when interfaced with appropriate ADC and DAC, and for a specified sample time T is obtained as:
G(z) = (1 − z^(−1)) · Z{ G(s) / s }
where Z{·} denotes the z-transform evaluated for the chosen sample time T. There are many ways to directly design a digital controller to achieve a given specification. For a type-0 system under unity negative feedback control, Michael Short and colleagues have shown that a relatively simple but effective method to synthesize a controller for a given (monic) closed-loop denominator polynomial and preserve the (scaled) zeros of the PTF numerator is to use the design equation:
Where the scalar term ensures the controller exhibits integral action, and a steady-state gain of unity is achieved in the closed-loop. The resulting closed-loop discrete transfer function from the z-Transform of reference input to the z-Transform of process output is then given by:
Since process time delay manifests as leading co-efficient(s) of zero in the process PTF numerator , the synthesis method above inherently yields a predictive controller if any such delay is present in the continuous plant.
See also
Sampled data systems
Adaptive control
Analog control
Control theory
Digital
Feedback, Negative feedback, Positive feedback
Laplace transform
Real-time control
Z-transform
References
FRANKLIN, G.F.; POWELL, J.D., Emami-Naeini, A., Digital Control of Dynamical Systems, 3rd Ed (1998). Ellis-Kagle Press, Half Moon Bay, CA
KATZ, P. Digital control using microprocessors. Englewood Cliffs: Prentice-Hall, 293p. 1981.
OGATA, K. Discrete-time control systems. Englewood Cliffs: Prentice-Hall,984p. 1987.
PHILLIPS, C.L.; NAGLE, H. T. Digital control system analysis and design. Englewood Cliffs, New Jersey: Prentice Hall International. 1995.
M. Sami Fadali, Antonio Visioli, (2009) "Digital Control Engineering", Academic Press, .
JURY, E.I. Sampled-data control systems. New-York: John Wiley. 1958.
Control theory
de:Digitaler Regler | Digital control | [
"Mathematics"
] | 1,251 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
564,695 | https://en.wikipedia.org/wiki/Discrete%20system | In theoretical computer science, a discrete system is a system with a countable number of states. Discrete systems may be contrasted with continuous systems, which may also be called analog systems. A final discrete system is often modeled with a directed graph and is analyzed for correctness and complexity according to computational theory. Because discrete systems have a countable number of states, they may be described in precise mathematical models.
A computer is a finite-state machine that may be viewed as a discrete system. Because computers are often used to model not only other discrete systems but continuous systems as well, methods have been developed to represent real-world continuous systems as discrete systems. One such method involves sampling a continuous signal at discrete time intervals.
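For illustration (the signal frequency and sampling interval below are assumed example values), sampling reduces a continuous-time signal to a countable sequence of values that a discrete system can process:

```python
# Illustrative sketch: representing a continuous signal in a discrete system by
# sampling it at fixed intervals.
import math

f_signal = 5.0          # hertz, frequency of the continuous signal
T = 0.01                # seconds, sampling interval (100 samples per second)

samples = [math.sin(2 * math.pi * f_signal * n * T) for n in range(100)]
# The discrete system now works only with this countable sequence of values;
# by the sampling theorem the rate must exceed 2 * f_signal to avoid aliasing.
print(samples[:5])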
See also
Digital control
Finite-state machine
Frequency spectrum
Mathematical model
Sample and hold
Sample rate
Sample time
Z-transform
References
Automata (computation)
Models of computation
Signal processing | Discrete system | [
"Technology",
"Engineering"
] | 178 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
564,719 | https://en.wikipedia.org/wiki/Hybrid%20system | A hybrid system is a dynamical system that exhibits both continuous and discrete dynamic behavior – a system that can both flow (described by a differential equation) and jump (described by a state machine, automaton, or a difference equation). Often, the term "hybrid dynamical system" is used instead of "hybrid system", to distinguish from other usages of "hybrid system", such as the combination neural nets and fuzzy logic, or of electrical and mechanical drivelines. A hybrid system has the benefit of encompassing a larger class of systems within its structure, allowing for more flexibility in modeling dynamic phenomena.
In general, the state of a hybrid system is defined by the values of the continuous variables and a discrete mode. The state changes either continuously, according to a flow condition, or discretely according to a control graph. Continuous flow is permitted as long as so-called invariants hold, while discrete transitions can occur as soon as given jump conditions are satisfied. Discrete transitions may be associated with events.
Examples
Hybrid systems have been used to model several cyber-physical systems, including physical systems with impact, logic-dynamic controllers, and even Internet congestion.
Bouncing ball
A canonical example of a hybrid system is the bouncing ball, a physical system with impact. Here, the ball (thought of as a point-mass) is dropped from an initial height and bounces off the ground, dissipating its energy with each bounce. The ball exhibits continuous dynamics between each bounce; however, as the ball impacts the ground, its velocity undergoes a discrete change modeled after an inelastic collision. A mathematical description of the bouncing ball follows. Let x1 be the height of the ball and x2 be the velocity of the ball. A hybrid system describing the ball is as follows:
When x1 > 0, flow is governed by
dx1/dt = x2, dx2/dt = −g,
where g is the acceleration due to gravity. These equations state that when the ball is above ground, it is being drawn to the ground by gravity.
When x1 = 0, jumps are governed by
x2 ← −γ·x2,
where γ (0 < γ < 1) is a dissipation factor. This is saying that when the height of the ball is zero (it has impacted the ground), its velocity is reversed and decreased by a factor of γ. Effectively, this describes the nature of the inelastic collision.
The bouncing ball is an especially interesting hybrid system, as it exhibits Zeno behavior. Zeno behavior has a strict mathematical definition, but can be described informally as the system making an infinite number of jumps in a finite amount of time. In this example, each time the ball bounces it loses energy, making the subsequent jumps (impacts with the ground) closer and closer together in time.
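A simple way to simulate this hybrid system numerically is to integrate the flow until the impact event and then apply the discrete jump, as in the following sketch (the drop height, dissipation factor and number of bounces are assumed example values):

```python
# Sketch of the bouncing-ball hybrid system: continuous flow under gravity,
# with a discrete jump that reverses and scales the velocity at each impact.
import numpy as np
from scipy.integrate import solve_ivp

g, gamma = 9.81, 0.8          # gravity, dissipation factor (assumed values)

def flow(t, x):               # x = [height, velocity]
    return [x[1], -g]

def hit_ground(t, x):         # event: height reaches zero while falling
    return x[0]
hit_ground.terminal = True
hit_ground.direction = -1

t, x = 0.0, np.array([1.0, 0.0])           # dropped from 1 m at rest
for bounce in range(8):
    sol = solve_ivp(flow, (t, t + 10.0), x, events=hit_ground, max_step=0.01)
    t = sol.t[-1]                          # time of impact
    x = sol.y[:, -1].copy()
    x[0] = 0.0
    x[1] = -gamma * x[1]                   # discrete jump (inelastic collision)
    print(f"bounce {bounce + 1}: t = {t:.3f} s, rebound speed = {x[1]:.3f} m/s")
# The intervals between impacts shrink geometrically, illustrating the Zeno
# behaviour described above.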
It is noteworthy that the dynamical model is complete if and only if one adds the contact force between the ground and the ball. Indeed, without forces, one cannot properly define the bouncing ball and the model is, from a mechanical point of view, meaningless. The simplest contact model that represents the interactions between the ball and the ground is the complementarity relation between the force and the distance (the gap) between the ball and the ground. Denoting the contact force by λ, this is written as
0 ≤ x1 ⊥ λ ≥ 0.
Such a contact model does not incorporate magnetic forces, nor gluing effects. Once the complementarity relation is included, one can continue to integrate the system after the impacts have accumulated and vanished: the equilibrium of the system is well-defined as the static equilibrium of the ball on the ground, under the action of gravity compensated by the contact force λ. One also notices from basic convex analysis that the complementarity relation can equivalently be rewritten as the inclusion into a normal cone, so that the bouncing ball dynamics is a differential inclusion into a normal cone to a convex set. See Chapters 1, 2 and 3 in Acary-Brogliato's book cited below (Springer LNACM 35, 2008). See also the other references on non-smooth mechanics.
Hybrid systems verification
There are approaches to automatically proving properties of hybrid systems (e.g., some of the tools mentioned below). Common techniques for proving safety of hybrid systems are computation of reachable sets, abstraction refinement, and barrier certificates.
Most verification tasks are undecidable, making general verification algorithms impossible. Instead, tools are analyzed for their capabilities on benchmark problems. One possible theoretical characterization of this is that algorithms which succeed at hybrid systems verification in all robust cases imply that many problems for hybrid systems, while undecidable, are at least quasi-decidable.
Other modeling approaches
Two basic hybrid system modeling approaches can be classified, an implicit and an explicit one. The explicit approach is often represented by a hybrid automaton, a hybrid program or a hybrid Petri net. The implicit approach is often represented by guarded equations to result in systems of differential algebraic equations (DAEs) where the active equations may change, for example by means of a hybrid bond graph.
As a unified simulation approach for hybrid system analysis, there is a method based on the DEVS formalism in which integrators for differential equations are quantized into atomic DEVS models. These methods generate traces of system behaviors in a discrete-event manner, which differs from discrete-time systems. Details of this approach can be found in the references [Kofman2004] [CF2006] [Nutaro2010] and in the software tool PowerDEVS.
Software Tools
Simulation
HyEQ Toolbox: Hybrid system solver for MATLAB and Simulink
PowerDEVS: General-purpose tool for DEVS (Discrete Event System) modeling and simulation oriented to the simulation of hybrid systems
Reachability
Ariadne: C++ library for (numerically rigorous) reachability analysis of nonlinear hybrid systems
CORA: A MATLAB Toolbox for reachability analysis of cyber-physical systems, including hybrid systems
Flow*: A tool for reachability analysis of nonlinear hybrid systems
HyCreate: A tool for overapproximating reachability of hybrid automata
HyPro: C++ library for state set representations for hybrid systems reachability analysis
JuliaReach: A toolbox for set-based reachability
Temporal logic and other verification
C2E2: Nonlinear hybrid system verifier
HyTech: Model checker for hybrid systems
HSolver: Verification tool for hybrid systems
KeYmaera: Theorem prover for hybrid systems
PHAVer: Polyhedral hybrid automaton verifier
S-TaLiRo: MATLAB toolbox for verification of hybrid systems with respect to temporal logic specifications
Other
SCOTS: Tool for the synthesis of correct-by-construction controllers for hybrid systems
SpaceEx: State-space explorer
See also
Hybrid automaton
Sliding mode control
Variable structure system
Variable structure control
Joint spectral radius
Cyber-physical system
Behavior trees (artificial intelligence, robotics and control)
Jump process (in the context of probability), an example of a (stochastic) hybrid system with zero flow component
Piecewise-deterministic Markov process (PDMP), an example of a (stochastic) hybrid system and a generalization of the jump process
Jump diffusion, an example of a (stochastic) hybrid system and a generalization of the PDMP
Further reading
[Kofman2004]
[CF2006]
[Nutaro2010]
External links
IEEE CSS Committee on Hybrid Systems
References
Systems theory
Differential equations
Dynamical systems
Control theory | Hybrid system | [
"Physics",
"Mathematics"
] | 1,484 | [
"Applied mathematics",
"Control theory",
"Mathematical objects",
"Differential equations",
"Equations",
"Mechanics",
"Dynamical systems"
] |
564,746 | https://en.wikipedia.org/wiki/Closed-loop%20controller | A closed-loop controller or feedback controller is a control loop which incorporates feedback, in contrast to an open-loop controller or non-feedback controller.
A closed-loop controller uses feedback to control states or outputs of a dynamical system. Its name comes from the information path in the system: process inputs (e.g., voltage applied to an electric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which is measured with sensors and processed by the controller; the result (the control signal) is "fed back" as input to the process, closing the loop.
In the case of linear feedback systems, a control loop including sensors, control algorithms, and actuators is arranged in an attempt to regulate a variable at a setpoint (SP). An everyday example is the cruise control on a road vehicle, where external influences such as hills would cause speed changes, and the driver also has the ability to alter the desired set speed. The PID algorithm in the controller restores the actual speed to the desired speed in an optimum way, with minimal delay or overshoot, by controlling the power output of the vehicle's engine.
Control systems that include some sensing of the results they are trying to achieve are making use of feedback and can adapt to varying circumstances to some extent. Open-loop control systems do not make use of feedback, and run only in pre-arranged ways.
Closed-loop controllers have the following advantages over open-loop controllers:
disturbance rejection (such as hills in the cruise control example above)
guaranteed performance even with model uncertainties, when the model structure does not match perfectly the real process and the model parameters are not exact
unstable processes can be stabilized
reduced sensitivity to parameter variations
improved reference tracking performance
improved rectification of random fluctuations
In some systems, closed-loop and open-loop control are used simultaneously. In such systems, the open-loop control is termed feedforward and serves to further improve reference tracking performance.
A common closed-loop controller architecture is the PID controller.
Open-loop and closed-loop
Closed-loop transfer function
The output of the system y(t) is fed back through a sensor measurement F to a comparison with the reference value r(t). The controller C then takes the error e (difference) between the reference and the output to change the inputs u to the system under control P. This is shown in the figure. This kind of controller is a closed-loop controller or feedback controller.
This is called a single-input-single-output (SISO) control system; MIMO (i.e., Multi-Input-Multi-Output) systems, with more than one input/output, are common. In such cases variables are represented through vectors instead of simple scalar values. For some distributed parameter systems the vectors may be infinite-dimensional (typically functions).
If we assume the controller C, the plant P, and the sensor F are linear and time-invariant (i.e., elements of their transfer functions C(s), P(s), and F(s) do not depend on time), the system above can be analysed using the Laplace transform on the variables. This gives the following relations:

Y(s) = P(s)U(s)
U(s) = C(s)E(s)
E(s) = R(s) − F(s)Y(s)
Solving for Y(s) in terms of R(s) gives

Y(s) = [P(s)C(s) / (1 + F(s)P(s)C(s))] R(s) = H(s)R(s).
The expression H(s) = P(s)C(s) / (1 + F(s)P(s)C(s)) is referred to as the closed-loop transfer function of the system. The numerator is the forward (open-loop) gain from r to y, and the denominator is one plus the gain in going around the feedback loop, the so-called loop gain. If |P(s)C(s)| ≫ 1, i.e., it has a large norm for each value of s, and if |F(s)| ≈ 1, then Y(s) is approximately equal to R(s) and the output closely tracks the reference input.
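A quick numerical check of the tracking argument above: with a first-order plant, unity sensor, and a proportional controller (all assumed purely for illustration), the magnitude of the closed-loop transfer function approaches 1 as the loop gain grows.

```python
# Numerical illustration of the closed-loop transfer function H = PC / (1 + PCF):
# with a large loop gain, H(jw) approaches 1 and the output tracks the reference.
# Assumptions: P(s) = 1/(s+1), F(s) = 1, and a proportional controller C(s) = k.

def H(s, k):
    P = 1 / (s + 1)
    C = k
    F = 1
    return (P * C) / (1 + P * C * F)

for k in (1, 10, 100):
    val = H(1j * 0.5, k)          # evaluate at s = j*0.5 rad/s
    print(f"k = {k:>3}: |H(j0.5)| = {abs(val):.4f}")
```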
PID feedback control
A proportional–integral–derivative controller (PID controller) is a control loop feedback mechanism control technique widely used in control systems.
A PID controller continuously calculates an error value as the difference between a desired setpoint and a measured process variable and applies a correction based on proportional, integral, and derivative terms. PID is an initialism for Proportional-Integral-Derivative, referring to the three terms operating on the error signal to produce a control signal.
The theoretical understanding and application of PID control date from the 1920s, and it is implemented in nearly all analogue control systems: originally in mechanical controllers, then using discrete electronics, and later in industrial process computers.
The PID controller is probably the most-used feedback control design.
If u(t) is the control signal sent to the system, y(t) is the measured output, r(t) is the desired output, and e(t) = r(t) − y(t) is the tracking error, a PID controller has the general form

u(t) = K_P e(t) + K_I ∫ e(τ) dτ + K_D de(t)/dt.
The desired closed-loop dynamics is obtained by adjusting the three parameters K_P, K_I and K_D, often iteratively by "tuning" and without specific knowledge of a plant model. Stability can often be ensured using only the proportional term. The integral term permits the rejection of a step disturbance (often a striking specification in process control). The derivative term is used to provide damping or shaping of the response. PID controllers are the most well-established class of control systems; however, they cannot be used in several more complicated cases, especially if MIMO systems are considered.
Applying the Laplace transformation results in the transformed PID controller equation

U(s) = (K_P + K_I/s + K_D s) E(s)

with the PID controller transfer function

C(s) = K_P + K_I/s + K_D s.
As an example of tuning a PID controller in the closed-loop system H(s), consider a first-order plant given by

P(s) = A / (1 + sT_P)

where A and T_P are some constants. The plant output is fed back through

F(s) = 1 / (1 + sT_F)

where T_F is also a constant. Now if we set K_P = K(1 + T_D/T_I), K_I = K/T_I, and K_D = KT_D, we can express the PID controller transfer function in series form as

C(s) = K (1 + 1/(sT_I)) (1 + sT_D).

Plugging P(s), F(s), and C(s) into the closed-loop transfer function H(s), we find that by setting

K = 1/A, T_I = T_F, T_D = T_P

we get H(s) = 1 identically. With this tuning in this example, the system output follows the reference input exactly.
However, in practice, a pure differentiator is neither physically realizable nor desirable due to amplification of noise and resonant modes in the system. Therefore, a phase-lead compensator type approach or a differentiator with low-pass roll-off is used instead.
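The sketch below shows a discrete-time PID loop with a first-order low-pass filter on the derivative term, in line with the remark above about avoiding a pure differentiator. The plant model, gains, sample time, and filter coefficient are illustrative assumptions, not recommended tunings.

```python
# Discrete-time PID sketch with a low-pass-filtered derivative, driving an
# assumed first-order plant dy/dt = (-y + u) / tau with forward-Euler steps.

dt, tau = 0.01, 1.0
Kp, Ki, Kd, alpha = 4.0, 2.0, 0.5, 0.1     # alpha: derivative smoothing factor (assumed)

y, integral, d_filt, prev_err = 0.0, 0.0, 0.0, 0.0
r = 1.0                                    # setpoint (unit step reference)

for k in range(int(5.0 / dt)):
    err = r - y
    integral += err * dt
    raw_d = (err - prev_err) / dt
    d_filt += alpha * (raw_d - d_filt)     # first-order filter on the derivative term
    u = Kp * err + Ki * integral + Kd * d_filt
    y += dt * (-y + u) / tau               # forward-Euler step of the plant
    prev_err = err

print(f"output after 5 s: y = {y:.4f} (reference r = {r})")
```

Because of the integral term, the simulated output settles essentially at the setpoint for this constant reference.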
References
Control theory | Closed-loop controller | [
"Mathematics"
] | 1,260 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
564,779 | https://en.wikipedia.org/wiki/Cell%20growth | Cell growth refers to an increase in the total mass of a cell, including both cytoplasmic, nuclear and organelle volume. Cell growth occurs when the overall rate of cellular biosynthesis (production of biomolecules or anabolism) is greater than the overall rate of cellular degradation (the destruction of biomolecules via the proteasome, lysosome or autophagy, or catabolism).
Cell growth is not to be confused with cell division or the cell cycle, which are distinct processes that can occur alongside cell growth during the process of cell proliferation, where a cell, known as the mother cell, grows and divides to produce two daughter cells. Importantly, cell growth and cell division can also occur independently of one another. During early embryonic development (cleavage of the zygote to form a morula and blastoderm), cell divisions occur repeatedly without cell growth. Conversely, some cells can grow without cell division or without any progression of the cell cycle, such as growth of neurons during axonal pathfinding in nervous system development.
In multicellular organisms, tissue growth rarely occurs solely through cell growth without cell division, but most often occurs through cell proliferation. This is because a single cell with only one copy of the genome in the cell nucleus can perform biosynthesis and thus undergo cell growth at only half the rate of two cells. Hence, two cells grow (accumulate mass) at twice the rate of a single cell, and four cells grow at 4-times the rate of a single cell. This principle leads to an exponential increase of tissue growth rate (mass accumulation) during cell proliferation, owing to the exponential increase in cell number.
Cell size depends on both cell growth and cell division, with a disproportionate increase in the rate of cell growth leading to production of larger cells and a disproportionate increase in the rate of cell division leading to production of many smaller cells. Cell proliferation typically involves balanced cell growth and cell division rates that maintain a roughly constant cell size in the exponentially proliferating population of cells.
Some special cells can grow to very large sizes via an unusual endoreplication cell cycle in which the genome is replicated during S-phase but there is no subsequent mitosis (M-phase) or cell division (cytokinesis). These large endoreplicating cells have many copies of the genome, so are highly polyploid.
Oocytes can be unusually large cells in species for which embryonic development takes place away from the mother's body within an egg that is laid externally. The large size of some eggs can be achieved either by pumping in cytosolic components from adjacent cells through cytoplasmic bridges named ring canals (Drosophila) or by internalisation of nutrient storage granules (yolk granules) by endocytosis (frogs).
Mechanisms of cell growth control
Cells can grow by increasing the overall rate of cellular biosynthesis such that production of biomolecules exceeds the overall rate of cellular degradation of biomolecules via the proteasome, lysosome or autophagy.
Biosynthesis of biomolecules is initiated by expression of genes which encode RNAs and/or proteins, including enzymes that catalyse synthesis of lipids and carbohydrates.
Individual genes are generally expressed via transcription into messenger RNA (mRNA) and translation into proteins, and the expression of each gene occurs to various different levels in a cell-type specific fashion (in response to gene regulatory networks).
To drive cell growth, the global rate of gene expression can be increased by enhancing the overall rate of transcription by RNA polymerase II (for active genes) or the overall rate of mRNA translation into protein by increasing the abundance of ribosomes and tRNA, whose biogenesis depends on RNA polymerase I and RNA polymerase III. The Myc transcription factor is an example of a regulatory protein that can induce the overall activity of RNA polymerase I, RNA polymerase II and RNA polymerase III to drive global transcription and translation and thereby cell growth.
In addition, the activity of individual ribosomes can be increased to boost the global efficiency of mRNA translation via regulation of translation initiation factors, including the 'translational elongation initiation factor 4E' (eIF4E) complex, which binds to and caps the 5' end of mRNAs. The protein TOR, part of the TORC1 complex, is an important upstream regulator of translation initiation as well as ribosome biogenesis. TOR is a serine/threonine kinase that can directly phosphorylate and inactivate a general inhibitor of eIF4E, named 4E-binding protein (4E-BP), to promote translation efficiency. TOR also directly phosphorylates and activates the ribosomal protein S6-kinase (S6K), which promotes ribosome biogenesis.
To inhibit cell growth, the global rate of gene expression can be decreased or the global rate of biomolecular degradation can be increased by increasing the rate of autophagy. TOR normally directly inhibits the function of the autophagy inducing kinase Atg1/ULK1. Thus, reducing TOR activity both reduces the global rate of translation and increases the extent of autophagy to reduce cell growth.
Cell growth regulation in animals
Many of the signal molecules that control cellular growth are called growth factors, many of which induce signal transduction via the PI3K/AKT/mTOR pathway. This pathway includes the upstream lipid kinase PI3K and the downstream serine/threonine protein kinase Akt, which is able to activate another protein kinase, TOR, which promotes translation and inhibits autophagy to drive cell growth.
Nutrient availability influences production of growth factors of the Insulin/IGF-1 family, which circulate as hormones in animals to activate the PI3K/AKT/mTOR pathway in cells to promote TOR activity so that when animals are well fed they will grow rapidly and when they are not able to receive sufficient nutrients they will reduce their growth rate. Recently it has been also demonstrated that cellular bicarbonate metabolism, which is responsible for cell growth, can be regulated by mTORC1 signaling.
In addition, the availability of amino acids to individual cells also directly promotes TOR activity, although this mode of regulation is more important in single-celled organisms than in multicellular organisms such as animals that always maintain an abundance of amino acids in circulation.
One disputed theory proposes that many different mammalian cells undergo size-dependent transitions during the cell cycle. These transitions are controlled by the cyclin-dependent kinase Cdk1. Though the proteins that control Cdk1 are well understood, their connection to mechanisms monitoring cell size remains elusive.
A postulated model for mammalian size control situates mass as the driving force of the cell cycle. A cell is unable to grow to an abnormally large size because at a certain cell size or cell mass, the S phase is initiated. The S phase starts the sequence of events leading to mitosis and cytokinesis. A cell is unable to get too small because the later cell cycle events, such as S, G2, and M, are delayed until mass increases sufficiently to begin S phase.
Cell populations
Cell populations go through a particular type of exponential growth called doubling or cell proliferation. Thus, each generation of cells should be twice as numerous as the previous generation. However, the number of generations only gives a maximum figure, as not all cells survive in each generation. Cells reproduce through mitosis, in which a cell doubles its contents and splits into two genetically identical cells.
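The doubling arithmetic can be illustrated in a few lines of Python; the 90% per-generation survival fraction is an arbitrary assumption, used only to show why the generation count gives a maximum figure.

```python
# Ideal population doubling versus a population with imperfect survival.
# The survival fraction of 0.9 per generation is an illustrative assumption.

ideal, real = 1, 1.0
for gen in range(1, 11):
    ideal *= 2
    real = real * 2 * 0.9
    print(f"generation {gen:>2}: ideal = {ideal:>5}, with 90% survival ~ {real:8.1f}")
```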
Cell size
Cell size is highly variable among organisms, with some algae such as Caulerpa taxifolia being a single cell several meters in length. Plant cells are much larger than animal cells, and protists such as Paramecium can be 330 μm long, while a typical human cell might be 10 μm. How these cells "decide" how big they should be before dividing is an open question. Chemical gradients are known to be partly responsible, and it is hypothesized that mechanical stress detection by cytoskeletal structures is involved. Work on the topic generally requires an organism whose cell cycle is well-characterized.
Yeast cell size regulation
The relationship between cell size and cell division has been extensively studied in yeast. For some cells, there is a mechanism by which cell division is not initiated until a cell has reached a certain size. If the nutrient supply is restricted (after time t = 2 in the diagram, below), and the rate of increase in cell size is slowed, the time period between cell divisions is increased. Yeast cell-size mutants were isolated that begin cell division before reaching a normal/regular size (wee mutants).
Wee1 protein is a tyrosine kinase that normally phosphorylates the Cdc2 cell cycle regulatory protein (the homolog of CDK1 in humans), a cyclin-dependent kinase, on a tyrosine residue. Cdc2 drives entry into mitosis by phosphorylating a wide range of targets. This covalent modification of the molecular structure of Cdc2 inhibits the enzymatic activity of Cdc2 and prevents cell division. Wee1 acts to keep Cdc2 inactive during early G2 when cells are still small. When cells have reached sufficient size during G2, the phosphatase Cdc25 removes the inhibitory phosphorylation, and thus activates Cdc2 to allow mitotic entry. A balance of Wee1 and Cdc25 activity with changes in cell size is coordinated by the mitotic entry control system. It has been shown in Wee1 mutants, cells with weakened Wee1 activity, that Cdc2 becomes active when the cell is smaller. Thus, mitosis occurs before the yeast reach their normal size. This suggests that cell division may be regulated in part by dilution of Wee1 protein in cells as they grow larger.
Linking Cdr2 to Wee1
The protein kinase Cdr2 (which negatively regulates Wee1) and the Cdr2-related kinase Cdr1 (which directly phosphorylates and inhibits Wee1 in vitro) are localized to a band of cortical nodes in the middle of interphase cells. After entry into mitosis, cytokinesis factors such as myosin II are recruited to similar nodes; these nodes eventually condense to form the cytokinetic ring. A previously uncharacterized protein, Blt1, was found to colocalize with Cdr2 in the medial interphase nodes. Blt1 knockout cells had increased length at division, which is consistent with a delay in mitotic entry. This finding connects a physical location, a band of cortical nodes, with factors that have been shown to directly regulate mitotic entry, namely Cdr1, Cdr2, and Blt1.
Further experimentation with GFP-tagged proteins and mutant proteins indicates that the medial cortical nodes are formed by the ordered, Cdr2-dependent assembly of multiple interacting proteins during interphase. Cdr2 is at the top of this hierarchy and works upstream of Cdr1 and Blt1. Mitosis is promoted by the negative regulation of Wee1 by Cdr2. It has also been shown that Cdr2 recruits Wee1 to the medial cortical node. The mechanism of this recruitment has yet to be discovered. A Cdr2 kinase mutant, which is able to localize properly despite a loss of function in phosphorylation, disrupts the recruitment of Wee1 to the medial cortex and delays entry into mitosis. Thus, Wee1 localizes with its inhibitory network, which demonstrates that mitosis is controlled through Cdr2-dependent negative regulation of Wee1 at the medial cortical nodes.
Cell polarity factors
Cell polarity factors positioned at the cell tips provide spatial cues to limit Cdr2 distribution to the cell middle. In the fission yeast Schizosaccharomyces pombe (S. pombe), cells divide at a defined, reproducible size during mitosis because of the regulated activity of Cdk1. The cell polarity protein kinase Pom1, a member of the dual-specificity tyrosine-phosphorylation regulated kinase (DYRK) family of kinases, localizes to cell ends. In Pom1 knockout cells, Cdr2 was no longer restricted to the cell middle, but was seen diffusely through half of the cell. From these data it becomes apparent that Pom1 provides inhibitory signals that confine Cdr2 to the middle of the cell. It has been further shown that Pom1-dependent signals lead to the phosphorylation of Cdr2. Pom1 knockout cells were also shown to divide at a smaller size than wild-type, which indicates a premature entry into mitosis.
Pom1 forms polar gradients that peak at cell ends, which shows a direct link between size control factors and a specific physical location in the cell. As a cell grows in size, a gradient in Pom1 grows. When cells are small, Pom1 is spread diffusely throughout the cell body. As the cell increases in size, Pom1 concentration decreases in the middle and becomes concentrated at cell ends. Small cells in early G2 which contain sufficient levels of Pom1 in the entirety of the cell have inactive Cdr2 and cannot enter mitosis. It is not until the cells grow into late G2, when Pom1 is confined to the cell ends that Cdr2 in the medial cortical nodes is activated and able to start the inhibition of Wee1. This finding shows how cell size plays a direct role in regulating the start of mitosis. In this model, Pom1 acts as a molecular link between cell growth and mitotic entry through a Cdr2-Cdr1-Wee1-Cdk1 pathway. The Pom1 polar gradient successfully relays information about cell size and geometry to the Cdk1 regulatory system. Through this gradient, the cell ensures it has reached a defined, sufficient size to enter mitosis.
Other experimental systems for the study of cell size regulation
One common means to produce very large cells is by cell fusion to form syncytia. For example, very long (several inches) skeletal muscle cells are formed by fusion of thousands of myocytes. Genetic studies of the fruit fly Drosophila have revealed several genes that are required for the formation of multinucleated muscle cells by fusion of myoblasts. Some of the key proteins are important for cell adhesion between myocytes and some are involved in adhesion-dependent cell-to-cell signal transduction that allows for a cascade of cell fusion events.
Increases in the size of plant cells are complicated by the fact that almost all plant cells are inside of a solid cell wall. Under the influence of certain plant hormones the cell wall can be remodeled, allowing for increases in cell size that are important for the growth of some plant tissues.
Most unicellular organisms are microscopic in size, but there are some giant bacteria and protozoa that are visible to the naked eye. (See Table of cell sizes—Dense populations of a giant sulfur bacterium in Namibian shelf sediments—Large protists of the genus Chaos, closely related to the genus Amoeba.)
In the rod-shaped bacteria E. coli, Caulobacter crescentus and B. subtilis, cell size is controlled by a simple mechanism in which cell division occurs after a constant volume has been added since the previous division. By always growing by the same amount, cells born smaller or larger than average naturally converge to an average size equivalent to the amount added during each generation.
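The convergence produced by this constant-increment ("adder") rule can be seen in a small simulation; the added volume and the starting sizes below are arbitrary illustrative values.

```python
# Sketch of the "adder" size-control rule described above: each cell adds a fixed
# volume delta between birth and division, then divides in half. Lineages born
# too large or too small converge toward the same average birth size.

delta = 1.0                                  # constant volume added each generation (assumed)

def lineage(birth_size, generations=10):
    sizes = [birth_size]
    for _ in range(generations):
        division_size = sizes[-1] + delta    # grow by a fixed increment
        sizes.append(division_size / 2.0)    # symmetric division
    return sizes

for start in (0.2, 1.0, 3.0):
    trail = lineage(start)
    print(f"born at {start}: " + " -> ".join(f"{s:.2f}" for s in trail[:6]) +
          f" ... -> {trail[-1]:.3f}")
```

Each lineage converges to a birth size equal to the added increment, regardless of its starting size.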
Cell division
Cell reproduction is asexual. For most of the constituents of the cell, growth is a steady, continuous process, interrupted only briefly at M phase when the nucleus and then the cell divide in two.
The process of cell division, called cell cycle, has four major parts called phases. The first part, called G1 phase is marked by synthesis of various enzymes that are required for DNA replication.
The second part of the cell cycle is the S phase, where DNA replication produces two identical sets of chromosomes. The third part is the G2 phase in which a significant protein synthesis occurs, mainly involving the production of microtubules that are required during the process of division, called mitosis.
The fourth phase, M phase, consists of nuclear division (karyokinesis) and cytoplasmic division (cytokinesis), accompanied by the formation of a new cell membrane. This is the physical division of mother and daughter cells. The M phase has been broken down into several distinct phases, sequentially known as prophase, prometaphase, metaphase, anaphase and telophase leading to cytokinesis.
Cell division is more complex in eukaryotes than in other organisms. Prokaryotic cells such as bacterial cells reproduce by binary fission, a process that includes DNA replication, chromosome segregation, and cytokinesis. Eukaryotic cell division either involves mitosis or a more complex process called meiosis. Mitosis and meiosis are sometimes called the two nuclear division processes. Binary fission is similar to eukaryote cell reproduction that involves mitosis. Both lead to the production of two daughter cells with the same number of chromosomes as the parental cell. Meiosis is used for a special cell reproduction process of diploid organisms. It produces four special daughter cells (gametes) which have half the normal cellular amount of DNA. A male and a female gamete can then combine to produce a zygote, a cell which again has the normal amount of chromosomes.
The rest of this article is a comparison of the main features of the three types of cell reproduction that either involve binary fission, mitosis, or meiosis. The diagram below depicts the similarities and differences of these three types of cell reproduction.
Comparison of the three types of cell division
The DNA content of a cell is duplicated at the start of the cell reproduction process. Prior to DNA replication, the DNA content of a cell can be represented as the amount Z (the cell has Z chromosomes). After the DNA replication process, the amount of DNA in the cell is 2Z (multiplication: 2 x Z = 2Z). During Binary fission and mitosis the duplicated DNA content of the reproducing parental cell is separated into two equal halves that are destined to end up in the two daughter cells. The final part of the cell reproduction process is cell division, when daughter cells physically split apart from a parental cell. During meiosis, there are two cell division steps that together produce the four daughter cells.
After the completion of binary fission or cell reproduction involving mitosis, each daughter cell has the same amount of DNA (Z) as what the parental cell had before it replicated its DNA. These two types of cell reproduction produced two daughter cells that have the same number of chromosomes as the parental cell. Chromosomes duplicate prior to cell division when forming new skin cells for reproduction. After meiotic cell reproduction the four daughter cells have half the number of chromosomes that the parental cell originally had. This is the haploid amount of DNA, often symbolized as N. Meiosis is used by diploid organisms to produce haploid gametes. In a diploid organism such as the human organism, most cells of the body have the diploid amount of DNA, 2N. Using this notation for counting chromosomes we say that human somatic cells have 46 chromosomes (2N = 46) while human sperm and eggs have 23 chromosomes (N = 23). Humans have 23 distinct types of chromosomes, the 22 autosomes and the special category of sex chromosomes. There are two distinct sex chromosomes, the X chromosome and the Y chromosome. A diploid human cell has 23 chromosomes from that person's father and 23 from the mother. That is, your body has two copies of human chromosome number 2, one from each of your parents.
Immediately after DNA replication a human cell will have 46 "double chromosomes". In each double chromosome there are two copies of that chromosome's DNA molecule. During mitosis the double chromosomes are split to produce 92 "single chromosomes", half of which go into each daughter cell. During meiosis, there are two chromosome separation steps which assure that each of the four daughter cells gets one copy of each of the 23 types of chromosome.
Sexual reproduction
Though cell reproduction that uses mitosis can reproduce eukaryotic cells, eukaryotes bother with the more complicated process of meiosis because sexual reproduction such as meiosis confers a selective advantage. Notice that when meiosis starts, the two copies of sister chromatids number 2 are adjacent to each other. During this time, there can be genetic recombination events. Information from the chromosome 2 DNA gained from one parent (red) will transfer over to the chromosome 2 DNA molecule that was received from the other parent (green). Notice that in mitosis the two copies of chromosome number 2 do not interact. Recombination of genetic information between homologous chromosomes during meiosis is a process for repairing DNA damages. This process can also produce new combinations of genes, some of which may be adaptively beneficial and influence the course of evolution. However, in organisms with more than one set of chromosomes at the main life cycle stage, sex may also provide an advantage because, under random mating, it produces homozygotes and heterozygotes according to the Hardy–Weinberg ratio.
Disorders
A series of growth disorders can occur at the cellular level and these consequently underpin much of the subsequent course in cancer, in which a group of cells display uncontrolled growth and division beyond the normal limits, invasion (intrusion on and destruction of adjacent tissues), and sometimes metastasis (spread to other locations in the body via lymph or blood). Several key determinants of cell growth, like ploidy and the regulation of cellular metabolism, are commonly disrupted in tumors. Therefore, heterogenous cell growth and pleomorphism is one of the earliest hallmarks of cancer progression. Despite the prevalence of pleomorphism in human pathology, its role in disease progression is unclear. In epithelial tissues, misregulation of cellular size can induce packing defects and disperse aberrant cells. But the consequence of atypical cell growth in other animal tissues is unknown.
Measurement methods
The cell growth can be detected by a variety of methods.
The cell size growth can be visualized by microscopy, using suitable stains. But the increase of cells number is usually more significant. It can be measured by manual counting of cells under microscopy observation, using the dye exclusion method (i.e. trypan blue) to count only viable cells. Less fastidious, scalable, methods include the use of cytometers, while flow cytometry allows combining cell counts ('events') with other specific parameters: fluorescent probes for membranes, cytoplasm or nuclei allow distinguishing dead/viable cells, cell types, cell differentiation, expression of a biomarker such as Ki67. The total mass of a cell, which comprises the mass of all its components including its water content, is a dynamic magnitude and it can be measured in real-time and tracked over hours or even days using an inertial picobalance. A cell's buoyant mass, which corresponds to the total mass of the cell minus that of the fluid it displaces, can be measured using suspended microchannel resonators.
Beside the increasing number of cells, one can be assessed regarding the metabolic activity growth, that is, the CFDA and calcein-AM measure (fluorimetrically) not only the membrane functionality (dye retention), but also the functionality of cytoplasmic enzymes (esterases). The MTT assays (colorimetric) and the resazurin assay (fluorimetric) dose the mitochondrial redox potential.
All these assays may correlate well, or not, depending on cell growth conditions and desired aspects (activity, proliferation). The task is even more complicated with populations of different cells, furthermore when combining cell growth interferences or toxicity.
See also
Bacterial growth
References
Books
External links
A comparison of generational and exponential models of cell population growth
Local Growth in an Array of Disks Wolfram Demonstrations Project
Cell cycle
Cellular processes | Cell growth | [
"Biology"
] | 5,061 | [
"Cell cycle",
"Cellular processes"
] |
564,821 | https://en.wikipedia.org/wiki/Numero%20sign |
The numero sign or numero symbol (also represented as Nº, No̱, №, No., or no.) is a typographic abbreviation of the word number(s) indicating ordinal numeration, especially in names and titles. For example, using the numero sign, the written long-form of the address "Number 29 Acacia Road" is shortened to "№ 29 Acacia Rd", yet both forms are spoken long.
Typographically, the numero sign combines as a single ligature the uppercase Latin letter N with a usually superscript lowercase letter o, sometimes underlined, resembling the masculine ordinal indicator º. The ligature has a code point in Unicode as a precomposed character, U+2116 №.
The Oxford English Dictionary derives the numero sign from Latin numero, the ablative form of numerus ("number", with the ablative denotations of "by the number, with the number"). In Romance languages, the numero sign is understood as an abbreviation of the word for "number", e.g. Italian numero, French numéro, and Portuguese and Spanish número.
This article describes other typographical abbreviations for "number" in different languages, in addition to the numero sign proper.
Usages
The numero sign's non-ligature substitution by the two separate letters N and o is common. A capital or lower-case "n" may be used, followed by "o.", a superscript "o", the ordinal indicator, or the degree sign; this will be understood in most languages.
Bulgarian
In Bulgarian the numero sign is often used, and it is present in three widely used keyboard layouts: BDS, prBDS, and the Phonetic layout.
English
In English, the non-ligature form is typical and is often used to abbreviate the word "number". In North America, the number sign, #, is more prevalent. The ligature form does not appear on British or American QWERTY keyboards.
French
The numero symbol is not in common use in France and does not appear on a standard AZERTY keyboard. Instead, the French Imprimerie nationale recommends the use of the form "no" (an "n" followed by a superscript lowercase "o"). The plural form "nos" can also be used. In practice, the "o" is often replaced by the degree symbol (°), which is visually similar to the superscript "o" and is easily accessible on an AZERTY keyboard.
Indonesian and Malaysian
"Nomor" in Indonesian and "nombor" in Malaysian; therefore "No." is commonly used as an abbreviation with standard spelling and full stop.
Italian
The sign is usually replaced with the abbreviations "n." or "nº", the latter using a masculine ordinal indicator, rather than a superscript "O".
Philippine languages
Because of more than three centuries of Spanish colonisation, the word número is found in almost all Philippine languages. "No." is its common notation in local languages as well as English.
Portuguese
In Portugal, the similar-looking notation n.º is often used. In Brazil, where Portuguese is the official language, nº is often used on official documents. In both cases, the symbol used (º) is the masculine ordinal indicator. However, the Brazilian National Standards Organization (ABNT) determines that the word "número" should be abbreviated "n." only.
Russian
Although the letter is not in the Cyrillic alphabet, the numero sign is typeset in Russian publishing, and is available on Russian computer and typewriter keyboards.
The numero sign is very widely used in Russia and other post-Soviet states in many official and casual contexts. Examples include usage for law and other official documents numbering, names of institutions (hospitals, kindergartens, schools, libraries, organization departments and so on), numbering of periodical publications (such as newspapers and magazines), numbering of public transport routes, etc.
(, "sequential number") is universally used as a table header to denote a column containing the table row number.
The sign is sometimes used in Russian medical prescriptions (which according to the law must be written in Latin language) as an abbreviation for the Latin word numero to indicate the number of prescribed dosages (for example, tablets or capsules), and on the price tags in drugstores and pharmacy websites to indicate number of unit doses in drug packages, although the standard abbreviation for use in prescriptions is the Latin
Spanish
The numero sign is not typically used in Iberian Spanish, and it is not present on standard keyboard layouts. According to the Real Academia Española and the Fundéu BBVA, the word número (number) is abbreviated per the Spanish typographic convention of letras voladas ("flying letters"). The first letter(s) of the word to be abbreviated are followed by a period; then, the final letter(s) of the word are written as lowercase superscripts. This gives the abbreviations n.o (singular) and n.os (plural). The abbreviation "no." is not used (it might be mistaken for the Spanish negative word no). The abbreviations nro. and núm. are also acceptable. The numero sign, either as a one-character symbol or composed of the letter N plus superscript "o" (sometimes underlined or substituted by the ordinal indicator, ), is common in Latin America, where the interpolated period is sometimes not used in abbreviations.
Nr.
In some languages, Nr., nr., nr or NR is used instead, reflecting the abbreviation of the language's word for "number". In German, which capitalises all nouns and abbreviations of nouns, the word is abbreviated as Nr. Lithuanian uses this spelling as well, and it is usually capitalised in bureaucratic contexts, especially with the meaning "reference number" (such as , "contract No.") but in other contexts it follows the usual sentence capitalisation (such as tel. nr., abbreviation for '', "telephone number"). It is commonly lowercase in other languages, such as Dutch, Danish, Norwegian, Polish, Romanian, Estonian and Swedish. Some languages, such as Polish, omit the dot in abbreviations if the abbreviation ends with the last letter of the original word.
Typing the symbol
The sign is encoded in Unicode as U+2116 № NUMERO SIGN, and many platforms and languages have methods to enter it. See Unicode input and the relevant keyboard articles for further details.
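For instance, in Python the precomposed character can be produced directly from its code point; the snippet below relies only on the standard library.

```python
# Produce the numero sign from its Unicode code point U+2116 and confirm its name.
import unicodedata

ch = "\u2116"
print(ch, hex(ord(ch)), unicodedata.name(ch))   # № 0x2116 NUMERO SIGN
```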
See also
Superior letter
References
External links
Unicode Letterlike Symbols code chart
Typographical symbols
Numbers | Numero sign | [
"Mathematics"
] | 1,437 | [
"Symbols",
"Mathematical objects",
"Arithmetic",
"Typographical symbols",
"Numbers"
] |
564,838 | https://en.wikipedia.org/wiki/M%C4%83r%C8%9Bi%C8%99or | Mărțișor () is a tradition celebrated at the beginning of Spring in March, involving an object made from two intertwined red and white strings with hanging tassel in Romania and Moldova, very similar to Martenitsa tradition in Bulgaria and Martinka in North Macedonia and traditions of other populations from Southeastern Europe.
The word Mărțișor is the diminutive of marț, the old folk name for March (martie, in modern Romanian), and literally means "little March".
Modern tradition involves wearing the small object on the chest like a brooch or a lapel pin, during the first part of the month, starting from 1 March. Some older traditions held that it should be worn from the first new moon of March until the next significant holiday for the local community, which could be anywhere between 9 March and 1 May, or until the first tree flowers blossomed, depending on the area. It was also more commonly worn tied around the wrist or like a necklace.
The object
Nowadays a Mărțișor is made from silk strings, almost exclusively red and white. Before the 19th century various other colors were used: black and white in Mehedinți and in Aromanian communities, red only in Vâlcea, Romanați, Argeș, Neamț, and Vaslui, black and red in Brăila, white and blue in Vrancea, or even multiple colours in areas of southern Transylvania and Moldova. Likewise, the material used could have been wool, linen, cotton, or silk.
Charms were attached to the strings, mostly coins, usually silver, or cross pendants. Later these ornaments were shaped to resemble various images such as a four-leaf clover, a ladybug, or a snowdrop. Bulgarian Martenitsa models the tassel into small dolls called Pizho and Penda. In Moldova the pendant started being made in the shape of ethnographic objects in the later part of the 20th century.
General explanations have been given by the observers of the tradition for the object's appearance: the strings are believed to represent "funia anului" - the year's "rope", intertwining summer and winter, the pendant symbolized fortune and wealth, or, like a talisman, brought and preserved good health and beauty to the wearer.
The tradition
The custom of gifting and wearing the Mărțișor is a nationwide tradition among Romanians, Moldovans, and Aromanians. Similar customs include the Martenitsa, celebrated by Bulgarians, and Martinka by Macedonians, while other communities such as Albanians, Turks from the Ohrid region, Greeks from Northern Greece, the isles of Rhodes, Dodecanese and Karpathos, the Gagauz people, and the Diaspora of these populations also practice local variations of the custom.
The object was worn primarily by children and women, less so by men, and rarely by old people. Almost each region had a different time frame for how long it should be kept, varying from 2–3 days in the Iași region of Moldavia, up to 2–3 months in the Vâlcea region of Oltenia. Very often the end of this period was associated with signs of spring in the natural world: the return of migratory birds such as swallows and white storks, the flowering of fruit trees (apple tree, cherry tree), the blossoming of roses, or with the next significant holiday in the calendar.
When the object is removed, it is customary to tie it to a branch of a tree or place it on a fence as a gift for migratory birds returning from the south. Less commonly north of the Danube, but often recorded in Dobruja, was the practice of leaving the Mărțișor under a rock, with the type of insects found on the spot being interpreted as omens, throwing it into a spring or river (Gorj), or even burning it. In modern times they are often kept as souvenirs.
The tradition is placed along with other spring celebrations marking the year's cycle: agricultural communities associated it with the end of winter and start of spring. In particular it is connected to the days of "Baba Dochia", a mythological figure in Romanian folklore, and March, which in antiquity was the start of the year.
See also
Dragobete - another Romanian spring/fertility holiday
Martenitsa
Literature
Despina Leonhard: Das Märzchen: Brauch und Legende / Mărțişorul: Obicei şi Legendă. Ganderkesee 2016.
References
External links
Romania Welcomes Spring with Martisor Day. History and Traditions - Info in English by the native students of Romania
Traditii si obiceiuri on Travelworld.ro
March observances
Romanian traditions
Moldovan traditions
Spring traditions
Amulets
Talismans
Religious objects
Folklore
Mediterranean
Objects believed to protect from evil
Superstitions
Magic items | Mărțișor | [
"Physics"
] | 994 | [
"Magic items",
"Religious objects",
"Physical objects",
"Matter"
] |
564,886 | https://en.wikipedia.org/wiki/Nslookup | nslookup (from name server lookup) is a network administration command-line tool for querying the Domain Name System (DNS) to obtain the mapping between domain name and IP address, or other DNS records.
Overview
nslookup is a member of the BIND name server software. Andrew Cherenson created nslookup as a class project at UC Berkeley in 1986, and it first shipped in 4.3BSD-Tahoe.
In the development of BIND 9, the Internet Systems Consortium planned to deprecate nslookup in favor of host and dig. This decision was reversed in 2004 with the release of BIND 9.3 and nslookup has been fully supported since then.
Unlike dig, nslookup does not use the operating system's local Domain Name System resolver library to perform its queries, and thus may behave differently. Additionally, vendor-provided versions may include the output of other sources of name information, such as host files, and Network Information Service. Some behaviors of nslookup may be modified by the contents of resolv.conf.
The Linux version of nslookup is the original BSD version written by Andrew Cherenson.
The ReactOS version was developed by Lucas Suggs and is licensed under the GPL.
Usage
nslookup operates in interactive or non-interactive mode. When used interactively by invoking it without arguments or when the first argument is - (minus sign) and the second argument is a hostname or Internet address of a name server, the user issues parameter configurations or requests when presented with the nslookup prompt (>). When no arguments are given, then the command queries the default server. The - (minus sign) invokes subcommands which are specified on the command line and should precede nslookup commands. In non-interactive mode, i.e. when the first argument is a name or Internet address of the host being searched, parameters and the query are specified as command line arguments in the invocation of the program. The non interactive mode searches the information for a specified host using the default name server.
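As a small illustration, non-interactive mode can be driven from a script; the host name queried below is only an example, and the exact output format depends on the platform and resolver configuration.

```python
# Minimal sketch of invoking nslookup's non-interactive mode from Python,
# per the usage described above (a host name as the first argument).
import subprocess

result = subprocess.run(
    ["nslookup", "example.com"],          # non-interactive lookup of a host name
    capture_output=True, text=True, timeout=10,
)
print(result.stdout)

# Subcommands prefixed with "-" may precede the name, e.g. a query-type option:
# subprocess.run(["nslookup", "-type=MX", "example.com"])
```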
See also
dig, a utility that interrogates DNS servers directly for troubleshooting and system administration purposes.
host is a simple utility for performing Domain Name System lookups.
List of DNS record types - possible types of records stored and queried within DNS
Root name server - top-level name servers providing top level domain name resolution
whois
BIND name server
References
Further reading
External links
Microsoft Windows
nslookup – Microsoft TechNet library
Using NSlookup.exe, Microsoft Knowledge Base
Unix-like OSs
nslookup source code in ISC Gitlab repository (Mozilla Public License)
DNS software
Internet Protocol based network software
OS/2 commands
Unix network-related software
Windows communication and services
Windows administration | Nslookup | [
"Technology"
] | 596 | [
"Windows commands",
"Computing commands",
"OS/2 commands"
] |
564,948 | https://en.wikipedia.org/wiki/Bernard%20Fr%C3%A9nicle%20de%20Bessy | Bernard Frénicle de Bessy (c. 1604 – 1674), was a French mathematician born in Paris, who wrote numerous mathematical papers, mainly in number theory and combinatorics. He is best remembered for , a treatise on magic squares published posthumously in 1693, in which he described all 880 essentially different normal magic squares of order 4. The Frénicle standard form, a standard representation of magic squares, is named after him. He solved many problems created by Fermat and also discovered the cube property of the number 1729 (Ramanujan number), later referred to as a taxicab number. He is also remembered for his treatise Traité des triangles rectangles en nombres published (posthumously) in 1676 and reprinted in 1729.
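The cube property of 1729 mentioned above is easy to verify by brute force; the following short Python check is illustrative only.

```python
# Find all ways of writing 1729 as a sum of two positive cubes.
from itertools import combinations_with_replacement

def cube_pairs(n):
    limit = round(n ** (1 / 3)) + 2
    return [(a, b) for a, b in combinations_with_replacement(range(1, limit), 2)
            if a ** 3 + b ** 3 == n]

print(cube_pairs(1729))   # [(1, 12), (9, 10)]
```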
Bessy was a member of many of the scientific circles of his day, including the French Academy of Sciences, and corresponded with many prominent mathematicians, such as Mersenne and Pascal. Bessy was also particularly close to Fermat, Descartes and Wallis, and was best known for his insights into number theory.
In 1661 he proposed to John Wallis a problem of what amounted to the following system of equations in integers,
x² + y² = z², x² = u² + v², x − y = u − v > 0.
A solution was given by Théophile Pépin in 1880.
La Méthode des exclusions
Frénicle's La Méthode des exclusions was published (posthumously) in 1693, which appeared in the fifth volume of (1729, Paris), though the work appears to have been written around 1640. The book contains a short introduction followed by ten rules, intended to serve as a "method" or general rules one should apply in order to solve mathematical problems. During the Renaissance, "method" was primarily used for educational purposes, rather than for professional mathematicians (or natural philosophers). However, Frénicle's rules imply slight methodological preferences which suggests a turn towards explorational purposes.
Frénicle's text provided a number of examples on how his rules ought to be applied. He proposed the problem of determining whether or not a given integer can be the hypotenuse of a right-angled triangle (it is not clear if Frénicle initially intended the other two sides of the triangle to have integral length). He considers the case where the integer is 221 and promptly applies his second rule, which states that "if you do not know, even generally, what is proposed, find its properties by systematically constructing similar numbers." He then goes on and exploits the Pythagorean Theorem. Next, the third rule is applied, which states that "in order not to omit any necessary number, establish the order of investigation as simple as possible." Frénicle then takes increasing sums of perfect squares. He produces tables of computations and is able to reduce computations by rules four to six, which all deal with simplifying matters. He eventually arrives at the conclusion that it is possible for 221 to satisfy the property under certain conditions and checks his assertion by experimentation.
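Frénicle's sample question about 221 can also be settled by a direct search, which is essentially a mechanized version of his systematic tabulation; the snippet below is an illustration, not a reconstruction of his actual tables.

```python
# Can 221 be the hypotenuse of a right triangle with integer legs?
def legs_for_hypotenuse(c):
    return [(a, b) for a in range(1, c) for b in range(a, c) if a * a + b * b == c * c]

print(legs_for_hypotenuse(221))   # e.g. (21, 220), (85, 204), (104, 195), (140, 171)
```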
Experimental approach
The example in La Méthode des exclusions represents an experimental approach to mathematics. This is in contrast with the standard Euclidean approach of the time, which emphasized axioms and deductive reasoning. Frénicle instead relied on structured and careful observations to find interesting patterns and constructions rather than producing proofs in the axiomatic Euclidean sense. He himself even said that "this research is mainly useful for possible questions, using for most of them no proof other than construction."
References
1600s births
1675 deaths
Combinatorialists
French number theorists
Magic squares
Members of the French Academy of Sciences
17th-century French mathematicians | Bernard Frénicle de Bessy | [
"Mathematics"
] | 763 | [
"Combinatorialists",
"Combinatorics"
] |
564,961 | https://en.wikipedia.org/wiki/Dilaton | In particle physics, the hypothetical dilaton particle is a particle of a scalar field that appears in theories with extra dimensions when the volume of the compactified dimensions varies. It appears as a radion in Kaluza–Klein theory's compactifications of extra dimensions. In Brans–Dicke theory of gravity, Newton's constant is not presumed to be constant but instead 1/G is replaced by a scalar field and the associated particle is the dilaton.
Exposition
In Kaluza–Klein theories, after dimensional reduction, the effective Planck mass varies as some power of the volume of the compactified space. This is why the volume can appear as a dilaton in the lower-dimensional effective theory.
Although string theory naturally incorporates Kaluza–Klein theory that first introduced the dilaton, perturbative string theories such as type I string theory, type II string theory, and heterotic string theory already contain the dilaton in the maximal number of 10 dimensions. However, M-theory in 11 dimensions does not include the dilaton in its spectrum unless compactified. The dilaton in type IIA string theory parallels the radion of M-theory compactified over a circle, and the dilaton in the E8 × E8 heterotic string theory parallels the radion of the Hořava–Witten model. (For more on the M-theory origin of the dilaton, see Berman & Perry (2006).)
In string theory, there is also a dilaton in the worldsheet CFT – the two-dimensional conformal field theory. The exponential of its vacuum expectation value determines the coupling constant g, and the Euler characteristic χ = 2 − 2g of compact worldsheets follows from the Gauss–Bonnet theorem, where the genus g counts the number of handles and thus the number of loops or string interactions described by a specific worldsheet.
Therefore, the coupling constant is a dynamical variable in string theory, in contrast to quantum field theory, where it is constant. As long as supersymmetry is unbroken, such scalar fields can take arbitrary values (they are moduli). However, supersymmetry breaking usually creates a potential energy for the scalar fields, and the scalar fields localize near a minimum whose position should in principle be calculable in string theory.
The dilaton acts like a Brans–Dicke scalar, with the effective Planck scale depending upon both the string scale and the dilaton field.
In supersymmetric theories, the superpartner of the dilaton, called the dilatino, combines with the axion to form a complex scalar field.
The dilaton in quantum gravity
The dilaton made its first appearance in Kaluza–Klein theory, a five-dimensional theory that combined gravitation and electromagnetism. It appears in string theory. However, it has become central to the lower-dimensional many-bodied gravity problem based on the field theoretic approach of Roman Jackiw. The impetus arose from the fact that complete analytical solutions for the metric of a covariant N-body system have proven elusive in general relativity. To simplify the problem, the number of dimensions was lowered to 1 + 1 – one spatial dimension and one temporal dimension. This model problem, known as R = T theory, as opposed to the general G = T theory, was amenable to exact solutions in terms of a generalization of the Lambert W function. Also, the field equation governing the dilaton, derived from differential geometry, takes the form of the Schrödinger equation and could therefore be amenable to quantization.
This combines gravity, quantization, and even the electromagnetic interaction, promising ingredients of a fundamental physical theory. This outcome revealed a previously unknown and already existing natural link between general relativity and quantum mechanics. The generalization of this theory to 3 + 1 dimensions remains unclear. However, a recent derivation in 3 + 1 dimensions under the right coordinate conditions yields a formulation similar to the earlier 1 + 1 case: a dilaton field governed by the logarithmic Schrödinger equation that is seen in condensed matter physics and superfluids. The field equations are amenable to such a generalization, as shown with the inclusion of a one-graviton process, and yield the correct Newtonian limit in d dimensions, but only with a dilaton. Furthermore, some have speculated on an apparent resemblance between the dilaton and the Higgs boson. However, more experimentation is needed to resolve the relationship between these two particles. Finally, since this theory can combine gravitational, electromagnetic, and quantum effects, their coupling could potentially lead to a means of testing the theory through cosmology and experimentation.
Dilaton action
The dilaton-gravity action takes the Brans–Dicke form supplemented by a potential,

S = ∫ d^D x √(−g) [ Φ R − (ω/Φ) g^{μν} ∂_μΦ ∂_νΦ − V(Φ) ].

This is more general than Brans–Dicke in vacuum in that we have a dilaton potential V(Φ).
See also
CGHS model
R = T model
Quantum gravity
List of hypothetical particles
Citations
References
Hypothetical elementary particles
String theory
Supersymmetry | Dilaton | [
"Physics",
"Astronomy"
] | 1,000 | [
"Astronomical hypotheses",
"Unsolved problems in physics",
"Hypothetical elementary particles",
"String theory",
"Supersymmetry",
"Physics beyond the Standard Model",
"Symmetry"
] |
564,987 | https://en.wikipedia.org/wiki/Robert%20Woodhouse | Robert Woodhouse (28 April 177323 December 1827) was a British mathematician and astronomer.
Biography
Early life and education
Robert Woodhouse was born on 28 April 1773 in Norwich, Norfolk, the son of Robert Woodhouse, linen draper, and Judith Alderson, the daughter of a Unitarian minister from Lowestoft. Robert junior was baptised at St George's Church, Colegate, Norwich, on 19 May, 1773. A younger son, John Thomas Woodhouse, was born in 1780. The brothers were educated at the Paston School in North Walsham, north of Norwich.
In May 1790 Woodhouse was admitted to Gonville and Caius College, Cambridge, the college where Paston pupils were traditionally sent. In 1795 he graduated as the Senior Wrangler (ranked first among the mathematics undergraduates at the university), and took the First Smith's Prize. He obtained his Master's degree at Cambridge in 1798.
Marriage and career at Cambridge
Woodhouse was a fellow of the college from 1798 to 1823, after which he resigned so as to be able to marry Harriet, the daughter of William Wilkin, a Norwich architect. They were married on 20 February 1823; the marriage produced a son, also named Robert. Harriet Woodhouse died at Cambridge on 31 March 1826.
Woodhouse was elected a Fellow of the Royal Society on 16 December 1802. His earliest work, entitled the Principles of Analytical Calculation, was published at Cambridge in 1803. In this he explained the differential notation and strongly pressed the employment of it; but he severely criticised the methods used by continental writers, and their constant assumption of non-evident principles.
In 1809 Woodhouse published a textbook covering planar trigonometry and spherical trigonometry, and the next year a historical treatise on the calculus of variations and isoperimetrical problems. He next produced a treatise on astronomy, of which the first book (usually bound in two volumes), on practical and descriptive astronomy, was issued in 1812, and the second book, containing an account of the treatment of physical astronomy by Pierre-Simon Laplace and other continental writers, was issued in 1818.
Woodhouse became the Lucasian Professor of Mathematics in 1820, but the small income caused him to resign the professorship in 1822 and instead accept the better-paid post of Plumian Professor in the university. As Plumian Professor he was responsible for installing and adjusting the transit instruments and clocks at the Cambridge Observatory.
Woodhouse did not exercise much influence on the majority of his contemporaries, and the movement toward analytical methods might have died away for the time being had it not been for the advocacy of George Peacock, Charles Babbage, and John Herschel, who formed the Analytical Society with the object of promoting the general use in the university of analytical methods and of the differential notation. Woodhouse was the first director of the newly built observatory at Cambridge, a post he held until his death in 1827.
On his death in Cambridge he was buried in Caius College chapel.
Notes
References
Sources
Further reading
External links
Facsimile of Woodhouse's certificate of election to the Royal Society
Works
1803: Principles of Analytical Calculation
1809: A Treatise on Plane and Spherical Trigonometry (5th edition 1827)
1810: A Treatise on Isoperimetric Problems and the Calculus of Variations
1818: An Elementary Treatise on Physical Astronomy, volume 1
1818: An Elementary Treatise on Astronomy, volume 2
1821: A Treatise on Astronomy, Theoretical and Practical
1773 births
1827 deaths
Burials in Cambridgeshire
People from Norwich
19th-century English mathematicians
Lucasian Professors of Mathematics
Mathematical analysts
Senior Wranglers
Fellows of the Royal Society
Alumni of Gonville and Caius College, Cambridge
Fellows of Gonville and Caius College, Cambridge
Plumian Professors of Astronomy and Experimental Philosophy | Robert Woodhouse | [
"Mathematics"
] | 757 | [
"Mathematical analysis",
"Mathematical analysts"
] |
565,005 | https://en.wikipedia.org/wiki/Antimonite | In chemistry, antimonite refers to a salt of antimony(III), such as NaSb(OH)4 and NaSbO2 (meta-antimonite), which can be prepared by reacting alkali with antimony trioxide, Sb2O3. These are formally salts of antimonous acid, Sb(OH)3, whose existence in solution is dubious. Attempts to isolate it generally form Sb2O3·xH2O, antimony(III) oxide hydrate, which slowly transforms into Sb2O3.
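Illustrative idealised equations for these preparations (the stoichiometries shown here are schematic sketches rather than a prescribed synthesis) are:

Sb2O3 + 2 NaOH + 3 H2O → 2 NaSb(OH)4
Sb2O3 + 2 NaOH → 2 NaSbO2 + H2O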
In geology, the mineral stibnite, Sb2S3, is sometimes called antimonite.
Antimonites can be compared to antimonates, which contain antimony in the +5 oxidation state.
References
Antimony(III) compounds
Pnictogen oxyanions
"Chemistry"
] | 176 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
565,031 | https://en.wikipedia.org/wiki/Particle%20displacement | Particle displacement or displacement amplitude is a measurement of distance of the movement of a sound particle from its equilibrium position in a medium as it transmits a sound wave.
The SI unit of particle displacement is the metre (m). In most cases this is a longitudinal wave of pressure (such as sound), but it can also be a transverse wave, such as the vibration of a taut string. In the case of a sound wave travelling through air, the particle displacement is evident in the oscillations of air molecules with, and against, the direction in which the sound wave is travelling.
A particle of the medium undergoes displacement according to the particle velocity of the sound wave traveling through the medium, while the sound wave itself moves at the speed of sound, equal to about 343 m/s in air at 20 °C.
Mathematical definition
Particle displacement, denoted δ, is given by

δ = ∫ v dt,

where v is the particle velocity.
Progressive sine waves
The particle displacement of a progressive sine wave is given by

δ(r, t) = δ_m cos(k · r − ωt + φ_δ),

where
δ_m is the amplitude of the particle displacement;
φ_δ is the phase shift of the particle displacement;
k is the angular wavevector;
ω is the angular frequency.
It follows that the particle velocity and the sound pressure along the direction of propagation of the sound wave x are given by

v(r, t) = ∂δ/∂t (r, t) = v_m cos(k · r − ωt + φ_v),
p(r, t) = p_m cos(k · r − ωt + φ_p),

where
v_m is the amplitude of the particle velocity;
φ_v is the phase shift of the particle velocity;
p_m is the amplitude of the acoustic pressure;
φ_p is the phase shift of the acoustic pressure.
Taking the Laplace transforms of v and p with respect to time yields

v̂(r, s) = v_m [s cos(k · r + φ_v) + ω sin(k · r + φ_v)] / (s² + ω²),
p̂(r, s) = p_m [s cos(k · r + φ_p) + ω sin(k · r + φ_p)] / (s² + ω²).
Since the particle velocity and the sound pressure of a progressive plane wave are in phase (φ_v = φ_p), the amplitude of the specific acoustic impedance is given by

z_m(r, s) = |z(r, s)| = |p̂(r, s) / v̂(r, s)| = p_m / v_m.
Consequently, the amplitude of the particle displacement is related to those of the particle velocity and the sound pressure by

δ_m = v_m / ω,
δ_m = p_m / (ω z_m(r, s)).
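As a rough numerical illustration of these relations (a sketch rather than part of the treatment above: the impedance value of about 413 Pa·s/m for air at 20 °C and the 1 kHz, 1 Pa RMS example tone are assumptions), the displacement amplitude of an everyday sound in air comes out well below a micrometre:

import math

rho_c = 413.0               # assumed specific acoustic impedance of air at 20 °C, in Pa·s/m
f = 1000.0                  # assumed tone frequency, in Hz
p_rms = 1.0                 # assumed sound pressure, in Pa RMS (roughly 94 dB SPL)

omega = 2 * math.pi * f     # angular frequency, in rad/s
p_m = math.sqrt(2) * p_rms  # pressure amplitude, in Pa
v_m = p_m / rho_c           # particle-velocity amplitude: for a plane wave z_m = rho*c
delta_m = v_m / omega       # particle-displacement amplitude, delta_m = v_m / omega

print(delta_m)              # about 5.4e-7 m, i.e. roughly half a micrometre

For a plane progressive wave the specific acoustic impedance z_m reduces to the characteristic impedance ρc of the medium, which is what this sketch uses.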
See also
Sound
Sound particle
Particle velocity
Particle acceleration
References and notes
Related Reading:
External links
Acoustic Particle-Image Velocimetry. Development and Applications
Ohm's Law as Acoustic Equivalent. Calculations
Relationships of Acoustic Quantities Associated with a Plane Progressive Acoustic Sound Wave
Acoustics
Sound
Sound measurements
Physical quantities | Particle displacement | [
"Physics",
"Mathematics"
] | 388 | [
"Physical phenomena",
"Sound measurements",
"Physical quantities",
"Quantity",
"Classical mechanics",
"Acoustics",
"Physical properties"
] |
565,214 | https://en.wikipedia.org/wiki/7000%20%28number%29 | 7000 (seven thousand) is the natural number following 6999 and preceding 7001.
Selected numbers in the range 7001–7999
7001 to 7099
7021 – triangular number
7043 – Sophie Germain prime
7056 = 84²
7057 – cuban prime of the form x = y + 1, super-prime
7073 – Leyland number
7079 – Sophie Germain prime, safe prime
7100 to 7199
7103 – Sophie Germain prime, sexy prime with 7109
7106 – octahedral number
7109 – super-prime, sexy prime with 7103
7121 – Sophie Germain prime
7140 – triangular number; also a pronic number (7140 = 84 × 85), and hence its half, 3570, is also a triangular number; tetrahedral number
7141 – sum of the first 58 primes, star number
7151 – Sophie Germain prime
7155 – number of 19-bead necklaces (turning over is allowed) where complements are equivalent
7187 – safe prime
7192 – weird number
7193 – Sophie Germain prime, super-prime
7200 to 7299
7200 – pentagonal pyramidal number
7211 – Sophie Germain prime
7225 = 85², centered octagonal number
7230 = 36² + 37² + 38² + 39² + 40² = 41² + 42² + 43² + 44²
7246 – centered heptagonal number
7247 – safe prime
7260 – triangular number
7267 – decagonal number
7272 – Kaprekar number
7283 – super-prime
7291 – nonagonal number
7300 to 7399
7310 – pronic number
7316 – number of 18-bead binary necklaces with beads of 2 colors where the colors may be swapped but turning over is not allowed
7338 – Fine number.
7349 – Sophie Germain prime
7351 – super-prime, cuban prime of the form x = y + 1
7381 – triangular number
7385 – Keith number
7396 = 86²
7400 to 7499
7417 – super-prime
7418 – sum of the first 59 primes
7433 – Sophie Germain prime
7471 – centered cube number
7481 – super-prime, cousin prime
7482 – pronic number
7500 to 7599
7503 – triangular number
7523 – balanced prime, safe prime, super-prime
7537 – prime of the form 2p-1
7541 – Sophie Germain prime
7559 – safe prime
7560 – the 20th highly composite number
7561 – Markov prime, star prime
7568 – centered heptagonal number
7569 = 87², centered octagonal number
7583 – balanced prime
7600 to 7699
7607 – safe prime, super-prime
7612 – decagonal number
7614 – nonagonal number
7626 – triangular number
7643 – Sophie Germain prime, safe prime
7647 – Keith number
7649 – Sophie Germain prime, super-prime
7656 – pronic number
7691 – Sophie Germain prime
7699 – super-prime, emirp, sum of the first 60 primes, first prime above 281 to be the sum of the first k primes for some k
7700 to 7799
7703 – safe prime
7710 = number of primitive polynomials of degree 17 over GF(2)
7714 – square pyramidal number
7727 – safe prime
7739 – member of the Padovan sequence
7741 = number of trees with 15 unlabeled nodes
7744 = 88², square palindrome not ending in 0
7750 – triangular number
7753 – super-prime
7770 – tetrahedral number
7776 = 6⁵, number of primitive polynomials of degree 18 over GF(2)
7777 – Kaprekar number, repdigit
7800 to 7899
7810 – ISO/IEC 7810 is the ISO's standard for physical characteristics of identification cards
7821 – n=6 value of
7823 – Sophie Germain prime, safe prime, balanced prime
7825 – magic constant of n × n normal magic square and n-Queens Problem for n = 25. Also the first counterexample in the Boolean Pythagorean triples problem.
7832 – pronic number
7841 – Sophie Germain prime, balanced prime, super-prime
7875 – triangular number
7883 – Sophie Germain prime, super-prime
7897 – centered heptagonal number
7900 to 7999
7901 – Sophie Germain prime
7909 – Keith number
7912 – weird number
7919 – 1000th prime number
7920 – the order of the Mathieu group M11, the smallest sporadic simple group
7921 = 89², centered octagonal number
7944 – nonagonal number
7957 – super-Poulet number
7965 – decagonal number
7979 – highly cototient number
7982 – sum of the first 61 primes
7993 – star prime, reverse superstar prime
Prime numbers
There are 107 prime numbers between 7000 and 8000:
7001, 7013, 7019, 7027, 7039, 7043, 7057, 7069, 7079, 7103, 7109, 7121, 7127, 7129, 7151, 7159, 7177, 7187, 7193, 7207, 7211, 7213, 7219, 7229, 7237, 7243, 7247, 7253, 7283, 7297, 7307, 7309, 7321, 7331, 7333, 7349, 7351, 7369, 7393, 7411, 7417, 7433, 7451, 7457, 7459, 7477, 7481, 7487, 7489, 7499, 7507, 7517, 7523, 7529, 7537, 7541, 7547, 7549, 7559, 7561, 7573, 7577, 7583, 7589, 7591, 7603, 7607, 7621, 7639, 7643, 7649, 7669, 7673, 7681, 7687, 7691, 7699, 7703, 7717, 7723, 7727, 7741, 7753, 7757, 7759, 7789, 7793, 7817, 7823, 7829, 7841, 7853, 7867, 7873, 7877, 7879, 7883, 7901, 7907, 7919, 7927, 7933, 7937, 7949, 7951, 7963, 7993
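This count can be reproduced with a short sieve (a minimal sketch; it simply counts the primes p with 7000 < p < 8000):

import math

def primes_between(lo, hi):
    """Return the primes p with lo < p < hi, using a sieve of Eratosthenes."""
    sieve = bytearray([1]) * hi
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(math.isqrt(hi)) + 1):
        if sieve[i]:
            sieve[i * i:hi:i] = bytearray(len(range(i * i, hi, i)))
    return [p for p in range(lo + 1, hi) if sieve[p]]

ps = primes_between(7000, 8000)
print(len(ps), ps[0], ps[-1])   # 107 7001 7993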
References
Integers | 7000 (number) | [
"Mathematics"
] | 1,398 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |