| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
55,212 | https://en.wikipedia.org/wiki/Newton%27s%20laws%20of%20motion | Newton's laws of motion are three physical laws that describe the relationship between the motion of an object and the forces acting on it. These laws, which provide the basis for Newtonian mechanics, can be paraphrased as follows:
A body remains at rest, or in motion at a constant speed in a straight line, except insofar as it is acted upon by a force.
At any instant of time, the net force on a body is equal to the body's acceleration multiplied by its mass or, equivalently, the rate at which the body's momentum is changing with time.
If two bodies exert forces on each other, these forces have the same magnitude but opposite directions.
The three laws of motion were first stated by Isaac Newton in his Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), originally published in 1687. Newton used them to investigate and explain the motion of many physical objects and systems. In the time since Newton, new insights, especially around the concept of energy, built the field of classical mechanics on his foundations. Limitations to Newton's laws have also been discovered; new theories are necessary when objects move at very high speeds (special relativity), are very massive (general relativity), or are very small (quantum mechanics).
Prerequisites
Newton's laws are often stated in terms of point or particle masses, that is, bodies whose volume is negligible. This is a reasonable approximation for real bodies when the motion of internal parts can be neglected, and when the separation between bodies is much larger than the size of each. For instance, the Earth and the Sun can both be approximated as pointlike when considering the orbit of the former around the latter, but the Earth is not pointlike when considering activities on its surface.
The mathematical description of motion, or kinematics, is based on the idea of specifying positions using numerical coordinates. Movement is represented by these numbers changing over time: a body's trajectory is represented by a function that assigns to each value of a time variable the values of all the position coordinates. The simplest case is one-dimensional, that is, when a body is constrained to move only along a straight line. Its position can then be given by a single number, indicating where it is relative to some chosen reference point. For example, a body might be free to slide along a track that runs left to right, and so its location can be specified by its distance from a convenient zero point, or origin, with negative numbers indicating positions to the left and positive numbers indicating positions to the right. If the body's location as a function of time is $s(t)$, then its average velocity over the time interval from $t_0$ to $t_1$ is $$\langle v \rangle = \frac{\Delta s}{\Delta t} = \frac{s(t_1) - s(t_0)}{t_1 - t_0}.$$ Here, the Greek letter $\Delta$ (delta) is used, per tradition, to mean "change in". A positive average velocity means that the position coordinate increases over the interval in question, a negative average velocity indicates a net decrease over that interval, and an average velocity of zero means that the body ends the time interval in the same place as it began. Calculus gives the means to define an instantaneous velocity, a measure of a body's speed and direction of movement at a single moment of time, rather than over an interval. One notation for the instantaneous velocity is to replace $\Delta$ with the symbol $d$, for example, $$v = \frac{ds}{dt}.$$ This denotes that the instantaneous velocity is the derivative of the position with respect to time. It can roughly be thought of as the ratio between an infinitesimally small change in position to the infinitesimally small time interval over which it occurs. More carefully, the velocity and all other derivatives can be defined using the concept of a limit. A function $f(t)$ has a limit of $L$ at a given input value $t_0$ if the difference between $f(t)$ and $L$ can be made arbitrarily small by choosing an input sufficiently close to $t_0$. One writes, $$\lim_{t \to t_0} f(t) = L.$$ Instantaneous velocity can be defined as the limit of the average velocity as the time interval shrinks to zero: $$v = \lim_{\Delta t \to 0} \frac{\Delta s}{\Delta t}.$$ Acceleration is to velocity as velocity is to position: it is the derivative of the velocity with respect to time. Acceleration can likewise be defined as a limit: $$a = \lim_{\Delta t \to 0} \frac{\Delta v}{\Delta t}.$$ Consequently, the acceleration is the second derivative of position, often written $\frac{d^2 s}{dt^2}$.
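To make the limit concrete, here is a small numerical illustration (added here, not from the original text; the position function $s(t) = 5t^2$ is an arbitrary example):

```python
def s(t):
    """Example position function s(t) = 5 t^2 (an arbitrary choice)."""
    return 5.0 * t**2

def average_velocity(t0, t1):
    """Delta-s over Delta-t across the interval [t0, t1]."""
    return (s(t1) - s(t0)) / (t1 - t0)

# Shrinking the interval around t = 1 approaches the instantaneous
# velocity ds/dt = 10 t = 10 at that moment.
for dt in (1.0, 0.1, 0.01, 0.001):
    print(dt, average_velocity(1.0, 1.0 + dt))
# Printed values approach 10 (up to floating-point rounding):
# 15.0, ~10.5, ~10.05, ~10.005
```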
Position, when thought of as a displacement from an origin point, is a vector: a quantity with both magnitude and direction. Velocity and acceleration are vector quantities as well. The mathematical tools of vector algebra provide the means to describe motion in two, three or more dimensions. Vectors are often denoted with an arrow, as in $\vec{s}$, or in bold typeface, such as $\mathbf{s}$. Often, vectors are represented visually as arrows, with the direction of the vector being the direction of the arrow, and the magnitude of the vector indicated by the length of the arrow. Numerically, a vector can be represented as a list; for example, a body's velocity vector might be $\mathbf{v} = (3~\mathrm{m/s}, 4~\mathrm{m/s})$, indicating that it is moving at 3 metres per second along the horizontal axis and 4 metres per second along the vertical axis. The same motion described in a different coordinate system will be represented by different numbers, and vector algebra can be used to translate between these alternatives.
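For instance, the magnitude of the velocity vector just mentioned follows from the Pythagorean theorem (a worked example added for illustration): $$|\mathbf{v}| = \sqrt{(3~\mathrm{m/s})^2 + (4~\mathrm{m/s})^2} = 5~\mathrm{m/s}.$$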
The study of mechanics is complicated by the fact that household words like energy are used with a technical meaning. Moreover, words which are synonymous in everyday speech are not so in physics: force is not the same as power or pressure, for example, and mass has a different meaning than weight. The physics concept of force makes quantitative the everyday idea of a push or a pull. Forces in Newtonian mechanics are often due to strings and ropes, friction, muscle effort, gravity, and so forth. Like displacement, velocity, and acceleration, force is a vector quantity.
Laws
First law
Translated from Latin, Newton's first law reads,
Every object perseveres in its state of rest, or of uniform motion in a right line, except insofar as it is compelled to change that state by forces impressed thereon.
Newton's first law expresses the principle of inertia: the natural behavior of a body is to move in a straight line at constant speed. A body's motion preserves the status quo, but external forces can perturb this.
The modern understanding of Newton's first law is that no inertial observer is privileged over any other. The concept of an inertial observer makes quantitative the everyday idea of feeling no effects of motion. For example, a person standing on the ground watching a train go past is an inertial observer. If the observer on the ground sees the train moving smoothly in a straight line at a constant speed, then a passenger sitting on the train will also be an inertial observer: the train passenger feels no motion. The principle expressed by Newton's first law is that there is no way to say which inertial observer is "really" moving and which is "really" standing still. One observer's state of rest is another observer's state of uniform motion in a straight line, and no experiment can deem either point of view to be correct or incorrect. There is no absolute standard of rest. Newton himself believed that absolute space and time existed, but that the only measures of space or time accessible to experiment are relative.
Second law
The change of motion of an object is proportional to the force impressed; and is made in the direction of the straight line in which the force is impressed.
By "motion", Newton meant the quantity now called momentum, which depends upon the amount of matter contained in a body, the speed at which that body is moving, and the direction in which it is moving. In modern notation, the momentum of a body is the product of its mass and its velocity: $$\mathbf{p} = m\mathbf{v},$$
where all three quantities can change over time.
Newton's second law, in modern form, states that the time derivative of the momentum is the force: $$\mathbf{F} = \frac{d\mathbf{p}}{dt}.$$
If the mass does not change with time, then the derivative acts only upon the velocity, and so the force equals the product of the mass and the time derivative of the velocity, which is the acceleration: $$\mathbf{F} = m\frac{d\mathbf{v}}{dt} = m\mathbf{a}.$$
As the acceleration is the second derivative of position with respect to time, this can also be written $$\mathbf{F} = m\frac{d^2\mathbf{s}}{dt^2}.$$
The forces acting on a body add as vectors, and so the total force on a body depends upon both the magnitudes and the directions of the individual forces. When the net force on a body is equal to zero, then by Newton's second law, the body does not accelerate, and it is said to be in mechanical equilibrium. A state of mechanical equilibrium is stable if, when the position of the body is changed slightly, the body remains near that equilibrium. Otherwise, the equilibrium is unstable.
A common visual representation of forces acting in concert is the free body diagram, which schematically portrays a body of interest and the forces applied to it by outside influences. For example, a free body diagram of a block sitting upon an inclined plane can illustrate the combination of gravitational force, "normal" force, friction, and string tension.
Newton's second law is sometimes presented as a definition of force, i.e., a force is that which exists when an inertial observer sees a body accelerating. In order for this to be more than a tautology — acceleration implies force, force implies acceleration — some other statement about force must also be made. For example, an equation detailing the force might be specified, like Newton's law of universal gravitation. By inserting such an expression for $\mathbf{F}$ into Newton's second law, an equation with predictive power can be written. Newton's second law has also been regarded as setting out a research program for physics, establishing that important goals of the subject are to identify the forces present in nature and to catalogue the constituents of matter.
Third law
To every action, there is always opposed an equal reaction; or, the mutual actions of two bodies upon each other are always equal, and directed to contrary parts.
Overly brief paraphrases of the third law, like "action equals reaction", might have caused confusion among generations of students: the "action" and "reaction" apply to different bodies. For example, consider a book at rest on a table. The Earth's gravity pulls down upon the book. The "reaction" to that "action" is not the support force from the table holding up the book, but the gravitational pull of the book acting on the Earth.
Newton's third law relates to a more fundamental principle, the conservation of momentum. The latter remains true even in cases where Newton's statement does not, for instance when force fields as well as material bodies carry momentum, and when momentum is defined properly, in quantum mechanics as well. In Newtonian mechanics, if two bodies have momenta $\mathbf{p}_1$ and $\mathbf{p}_2$ respectively, then the total momentum of the pair is $\mathbf{p} = \mathbf{p}_1 + \mathbf{p}_2$, and the rate of change of $\mathbf{p}$ is $$\frac{d\mathbf{p}}{dt} = \frac{d\mathbf{p}_1}{dt} + \frac{d\mathbf{p}_2}{dt}.$$ By Newton's second law, the first term is the total force upon the first body, and the second term is the total force upon the second body. If the two bodies are isolated from outside influences, the only force upon the first body can be that from the second, and vice versa. By Newton's third law, these forces have equal magnitude but opposite direction, so they cancel when added, and $\mathbf{p}$ is constant. Alternatively, if $\mathbf{p}$ is known to be constant, it follows that the forces have equal magnitude and opposite direction.
Candidates for additional laws
Various sources have proposed elevating other ideas used in classical mechanics to the status of Newton's laws. For example, in Newtonian mechanics, the total mass of a body made by bringing together two smaller bodies is the sum of their individual masses. Frank Wilczek has suggested calling attention to this assumption by designating it "Newton's Zeroth Law". Another candidate for a "zeroth law" is the fact that at any instant, a body reacts to the forces applied to it at that instant. Likewise, the idea that forces add like vectors (or in other words obey the superposition principle), and the idea that forces change the energy of a body, have both been described as a "fourth law".
Moreover, some texts organize the basic ideas of Newtonian mechanics into different postulates, other than the three laws as commonly phrased, with the goal of being more clear about what is empirically observed and what is true by definition.
Examples
The study of the behavior of massive bodies using Newton's laws is known as Newtonian mechanics. Some example problems in Newtonian mechanics are particularly noteworthy for conceptual or historical reasons.
Uniformly accelerated motion
If a body falls from rest near the surface of the Earth, then in the absence of air resistance, it will accelerate at a constant rate. This is known as free fall. The speed attained during free fall is proportional to the elapsed time, and the distance traveled is proportional to the square of the elapsed time. Importantly, the acceleration is the same for all bodies, independently of their mass. This follows from combining Newton's second law of motion with his law of universal gravitation. The latter states that the magnitude of the gravitational force from the Earth upon the body is $$F = \frac{GMm}{r^2},$$
where $m$ is the mass of the falling body, $M$ is the mass of the Earth, $G$ is Newton's constant, and $r$ is the distance from the center of the Earth to the body's location, which is very nearly the radius of the Earth. Setting this equal to $ma$, the body's mass $m$ cancels from both sides of the equation, leaving an acceleration that depends upon $G$, $M$, and $r$, and can be taken to be constant. This particular value of acceleration is typically denoted $g$: $$g = \frac{GM}{r^2}.$$
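Inserting standard measured values (a numerical check added for illustration, not from the original text: $G \approx 6.674\times10^{-11}~\mathrm{m^3\,kg^{-1}\,s^{-2}}$, $M \approx 5.972\times10^{24}~\mathrm{kg}$, $r \approx 6.371\times10^{6}~\mathrm{m}$) recovers the familiar figure: $$g \approx \frac{3.986\times10^{14}~\mathrm{m^3/s^2}}{4.059\times10^{13}~\mathrm{m^2}} \approx 9.8~\mathrm{m/s^2}.$$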
If the body is not released from rest but instead launched upwards and/or horizontally with nonzero velocity, then free fall becomes projectile motion. When air resistance can be neglected, projectiles follow parabola-shaped trajectories, because gravity affects the body's vertical motion and not its horizontal. At the peak of the projectile's trajectory, its vertical velocity is zero, but its acceleration is downwards, as it is at all times. Setting the wrong vector equal to zero is a common confusion among physics students.
Uniform circular motion
When a body is in uniform circular motion, the force on it changes the direction of its motion but not its speed. For a body moving in a circle of radius $r$ at a constant speed $v$, its acceleration has a magnitude $$a = \frac{v^2}{r}$$ and is directed toward the center of the circle. The force required to sustain this acceleration, called the centripetal force, is therefore also directed toward the center of the circle and has magnitude $\frac{mv^2}{r}$. Many orbits, such as that of the Moon around the Earth, can be approximated by uniform circular motion. In such cases, the centripetal force is gravity, and by Newton's law of universal gravitation has magnitude $\frac{GMm}{r^2}$, where $M$ is the mass of the larger body being orbited. Therefore, the mass of a body can be calculated from observations of another body orbiting around it.
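As a worked illustration (numerical values added here, not from the original text): equating the two magnitudes above gives $M = v^2 r/G$, or, writing the orbital speed through the period $T$ as $v = 2\pi r/T$, $$M = \frac{4\pi^2 r^3}{G T^2} \approx \frac{4\pi^2\,(3.84\times10^{8}~\mathrm{m})^3}{(6.674\times10^{-11}~\mathrm{m^3\,kg^{-1}\,s^{-2}})\,(2.36\times10^{6}~\mathrm{s})^2} \approx 6.0\times10^{24}~\mathrm{kg},$$ which uses the Moon's orbital radius and period and lands close to the accepted mass of the Earth.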
Newton's cannonball is a thought experiment that interpolates between projectile motion and uniform circular motion. A cannonball that is lobbed weakly off the edge of a tall cliff will hit the ground in the same amount of time as if it were dropped from rest, because the force of gravity only affects the cannonball's momentum in the downward direction, and its effect is not diminished by horizontal movement. If the cannonball is launched with a greater initial horizontal velocity, then it will travel farther before it hits the ground, but it will still hit the ground in the same amount of time. However, if the cannonball is launched with an even larger initial velocity, then the curvature of the Earth becomes significant: the ground itself will curve away from the falling cannonball. A very fast cannonball will fall away from the inertial straight-line trajectory at the same rate that the Earth curves away beneath it; in other words, it will be in orbit (imagining that it is not slowed by air resistance or obstacles).
Harmonic motion
Consider a body of mass $m$ able to move along the $x$ axis, and suppose an equilibrium point exists at the position $x = 0$. That is, at $x = 0$, the net force upon the body is the zero vector, and by Newton's second law, the body will not accelerate. If the force upon the body is proportional to the displacement from the equilibrium point, and directed to the equilibrium point, then the body will perform simple harmonic motion. Writing the force as $F = -kx$, Newton's second law becomes $$m\frac{d^2x}{dt^2} = -kx.$$
This differential equation has the solution $$x(t) = A\cos\omega t + B\sin\omega t,$$
where the frequency $\omega$ is equal to $\sqrt{k/m}$, and the constants $A$ and $B$ can be calculated knowing, for example, the position and velocity the body has at a given time, like $t = 0$.
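For example (a standard exercise added for illustration): if the position and velocity at $t = 0$ are $x_0$ and $v_0$, then setting $t = 0$ in the solution and in its time derivative $\dot{x}(t) = -A\omega\sin\omega t + B\omega\cos\omega t$ gives $$A = x_0, \qquad B = \frac{v_0}{\omega}.$$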
One reason that the harmonic oscillator is a conceptually important example is that it is a good approximation for many systems near a stable mechanical equilibrium. For example, a pendulum has a stable equilibrium in the vertical position: if motionless there, it will remain there, and if pushed slightly, it will swing back and forth. Neglecting air resistance and friction in the pivot, the force upon the pendulum is gravity, and Newton's second law becomes $$mL\frac{d^2\theta}{dt^2} = -mg\sin\theta,$$ where $L$ is the length of the pendulum and $\theta$ is its angle from the vertical. When the angle $\theta$ is small, the sine of $\theta$ is nearly equal to $\theta$ (see small-angle approximation), and so this expression simplifies to the equation for a simple harmonic oscillator with frequency $\omega = \sqrt{g/L}$.
A harmonic oscillator can be damped, often by friction or viscous drag, in which case energy bleeds out of the oscillator and the amplitude of the oscillations decreases over time. Also, a harmonic oscillator can be driven by an applied force, which can lead to the phenomenon of resonance.
Objects with variable mass
Newtonian physics treats matter as being neither created nor destroyed, though it may be rearranged. It can be the case that an object of interest gains or loses mass because matter is added to or removed from it. In such a situation, Newton's laws can be applied to the individual pieces of matter, keeping track of which pieces belong to the object of interest over time. For instance, if a rocket of mass $M(t)$, moving at velocity $\mathbf{v}(t)$, ejects matter at a velocity $\mathbf{u}$ relative to the rocket, then $$\mathbf{F} = M\frac{d\mathbf{v}}{dt} - \mathbf{u}\frac{dM}{dt},$$
where $\mathbf{F}$ is the net external force (e.g., a planet's gravitational pull).
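In the force-free case $\mathbf{F} = \mathbf{0}$, integrating this equation along the direction of motion yields the well-known Tsiolkovsky rocket equation (stated here as an added illustration): $$\Delta v = v_e \ln\frac{M_0}{M_1},$$ where $v_e$ is the exhaust speed relative to the rocket and $M_0$ and $M_1$ are the initial and final masses.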
Work and energy
The concept of energy was developed after Newton's time, but it has become an inseparable part of what is considered "Newtonian" physics. Energy can broadly be classified into kinetic, due to a body's motion, and potential, due to a body's position relative to others. Thermal energy, the energy carried by heat flow, is a type of kinetic energy not associated with the macroscopic motion of objects but instead with the movements of the atoms and molecules of which they are made. According to the work-energy theorem, when a force acts upon a body while that body moves along the line of the force, the force does work upon the body, and the amount of work done is equal to the change in the body's kinetic energy. In many cases of interest, the net work done by a force when a body moves in a closed loop — starting at a point, moving along some trajectory, and returning to the initial point — is zero. If this is the case, then the force can be written in terms of the gradient of a function called a scalar potential: $$\mathbf{F} = -\nabla U.$$
This is true for many forces including that of gravity, but not for friction; indeed, almost any problem in a mechanics textbook that does not involve friction can be expressed in this way. The fact that the force can be written in this way can be understood from the conservation of energy. Without friction to dissipate a body's energy into heat, the body's energy will trade between potential and (non-thermal) kinetic forms while the total amount remains constant. Any gain of kinetic energy, which occurs when the net force on the body accelerates it to a higher speed, must be accompanied by a loss of potential energy. So, the net force upon the body is determined by the manner in which the potential energy decreases.
Rigid-body motion and rotation
A rigid body is an object whose size is too large to neglect and which maintains the same shape over time. In Newtonian mechanics, the motion of a rigid body is often understood by separating it into movement of the body's center of mass and movement around the center of mass.
Center of mass
Significant aspects of the motion of an extended body can be understood by imagining the mass of that body concentrated to a single point, known as the center of mass. The location of a body's center of mass depends upon how that body's material is distributed. For a collection of pointlike objects with masses $m_1, \ldots, m_N$ at positions $\mathbf{r}_1, \ldots, \mathbf{r}_N$, the center of mass is located at $$\mathbf{R} = \sum_{i=1}^{N} \frac{m_i \mathbf{r}_i}{M},$$ where $M = \sum_{i=1}^{N} m_i$ is the total mass of the collection. In the absence of a net external force, the center of mass moves at a constant speed in a straight line. This applies, for example, to a collision between two bodies. If the total external force is not zero, then the center of mass changes velocity as though it were a point body of mass $M$. This follows from the fact that the internal forces within the collection, the forces that the objects exert upon each other, occur in balanced pairs by Newton's third law. In a system of two bodies with one much more massive than the other, the center of mass will approximately coincide with the location of the more massive body.
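For instance (standard values inserted for illustration, not from the original text), the center of mass of the Earth–Moon system lies a distance $$\frac{m_{\text{Moon}}}{M_{\text{Earth}} + m_{\text{Moon}}}\, d \approx \frac{7.35\times10^{22}}{6.05\times10^{24}} \times 3.84\times10^{8}~\mathrm{m} \approx 4.7\times10^{6}~\mathrm{m}$$ from the Earth's center, less than the Earth's radius of about $6.4\times10^{6}$ m, so the barycenter lies inside the Earth, consistent with the final remark above.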
Rotational analogues of Newton's laws
When Newton's laws are applied to rotating extended bodies, they lead to new quantities that are analogous to those invoked in the original laws. The analogue of mass is the moment of inertia, the counterpart of momentum is angular momentum, and the counterpart of force is torque.
Angular momentum is calculated with respect to a reference point. If the displacement vector from a reference point to a body is $\mathbf{r}$ and the body has momentum $\mathbf{p}$, then the body's angular momentum with respect to that point is, using the vector cross product, $$\mathbf{L} = \mathbf{r} \times \mathbf{p}.$$ Taking the time derivative of the angular momentum gives $$\frac{d\mathbf{L}}{dt} = \frac{d\mathbf{r}}{dt} \times \mathbf{p} + \mathbf{r} \times \frac{d\mathbf{p}}{dt}.$$ The first term vanishes because $\frac{d\mathbf{r}}{dt} = \mathbf{v}$ and $\mathbf{p} = m\mathbf{v}$ point in the same direction. The remaining term is the torque, $$\boldsymbol{\tau} = \mathbf{r} \times \mathbf{F}.$$ When the torque is zero, the angular momentum is constant, just as when the force is zero, the momentum is constant. The torque can vanish even when the force is non-zero, if the body is located at the reference point ($\mathbf{r} = \mathbf{0}$) or if the force $\mathbf{F}$ and the displacement vector $\mathbf{r}$ are directed along the same line.
The angular momentum of a collection of point masses, and thus of an extended body, is found by adding the contributions from each of the points. This provides a means to characterize a body's rotation about an axis, by adding up the angular momenta of its individual pieces. The result depends on the chosen axis, the shape of the body, and the rate of rotation.
Multi-body gravitational system
Newton's law of universal gravitation states that any body attracts any other body along the straight line connecting them. The size of the attracting force is proportional to the product of their masses, and inversely proportional to the square of the distance between them. Finding the shape of the orbits that an inverse-square force law will produce is known as the Kepler problem. The Kepler problem can be solved in multiple ways, including by demonstrating that the Laplace–Runge–Lenz vector is constant, or by applying a duality transformation to a 2-dimensional harmonic oscillator. However it is solved, the result is that orbits will be conic sections, that is, ellipses (including circles), parabolas, or hyperbolas. The eccentricity of the orbit, and thus the type of conic section, is determined by the energy and the angular momentum of the orbiting body. Planets do not have sufficient energy to escape the Sun, and so their orbits are ellipses, to a good approximation; because the planets pull on one another, actual orbits are not exactly conic sections.
If a third mass is added, the Kepler problem becomes the three-body problem, which in general has no exact solution in closed form. That is, there is no way to start from the differential equations implied by Newton's laws and, after a finite sequence of standard mathematical operations, obtain equations that express the three bodies' motions over time. Numerical methods can be applied to obtain useful, albeit approximate, results for the three-body problem. The positions and velocities of the bodies can be stored in variables within a computer's memory; Newton's laws are used to calculate how the velocities will change over a short interval of time, and knowing the velocities, the changes of position over that time interval can be computed. This process is looped to calculate, approximately, the bodies' trajectories. Generally speaking, the shorter the time interval, the more accurate the approximation.
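The loop just described can be sketched in a few lines of Python with NumPy (an illustrative sketch added here, not from the original text; the masses, initial conditions, and step size are arbitrary assumptions, and a semi-implicit Euler update is used as the simplest reasonable scheme):

```python
import numpy as np

G = 6.674e-11  # gravitational constant in m^3 kg^-1 s^-2

# Arbitrary illustrative data: three bodies in a plane,
# with masses (kg), positions (m), and velocities (m/s).
masses = np.array([5.0e24, 7.0e22, 1.0e3])
pos = np.array([[0.0, 0.0], [3.84e8, 0.0], [1.0e8, 2.0e8]])
vel = np.array([[0.0, 0.0], [0.0, 1.0e3], [5.0e2, 0.0]])

def accelerations(pos):
    """Sum Newton's inverse-square gravity over all pairs of bodies."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]  # vector from body i to body j
                d = np.linalg.norm(r)
                acc[i] += G * masses[j] * r / d**3  # |a| = G m_j / d^2, toward j
    return acc

dt = 10.0  # time step in seconds; shorter steps give a better approximation
for _ in range(100_000):
    vel += accelerations(pos) * dt  # the second law updates the velocities...
    pos += vel * dt                 # ...and the velocities update the positions

print(pos)  # approximate positions after 100_000 * dt seconds
```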
Chaos and unpredictability
Nonlinear dynamics
Newton's laws of motion allow the possibility of chaos. That is, qualitatively speaking, physical systems obeying Newton's laws can exhibit sensitive dependence upon their initial conditions: a slight change of the position or velocity of one part of a system can lead to the whole system behaving in a radically different way within a short time. Noteworthy examples include the three-body problem, the double pendulum, dynamical billiards, and the Fermi–Pasta–Ulam–Tsingou problem.
Newton's laws can be applied to fluids by considering a fluid as composed of infinitesimal pieces, each exerting forces upon neighboring pieces. The Euler momentum equation is an expression of Newton's second law adapted to fluid dynamics. A fluid is described by a velocity field, i.e., a function that assigns a velocity vector to each point in space and time. A small object being carried along by the fluid flow can change velocity for two reasons: first, because the velocity field at its position is changing over time, and second, because it moves to a new location where the velocity field has a different value. Consequently, when Newton's second law is applied to an infinitesimal portion of fluid, the acceleration has two terms, a combination known as a total or material derivative. The mass of an infinitesimal portion depends upon the fluid density, and there is a net force upon it if the fluid pressure varies from one side of it to another. Accordingly, $\mathbf{F} = m\mathbf{a}$ becomes $$\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla)\mathbf{v} = -\frac{1}{\rho}\nabla P + \mathbf{f},$$
where $\rho$ is the density, $P$ is the pressure, and $\mathbf{f}$ stands for an external influence like a gravitational pull. Incorporating the effect of viscosity turns the Euler equation into a Navier–Stokes equation: $$\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla)\mathbf{v} = -\frac{1}{\rho}\nabla P + \nu\nabla^2\mathbf{v} + \mathbf{f},$$
where $\nu$ is the kinematic viscosity.
Singularities
It is mathematically possible for a collection of point masses, moving in accord with Newton's laws, to launch some of themselves away so forcefully that they fly off to infinity in a finite time. This unphysical behavior, known as a "noncollision singularity", depends upon the masses being pointlike and able to approach one another arbitrarily closely, as well as the lack of a relativistic speed limit in Newtonian physics.
It is not yet known whether or not the Euler and Navier–Stokes equations exhibit the analogous behavior of initially smooth solutions "blowing up" in finite time. The question of existence and smoothness of Navier–Stokes solutions is one of the Millennium Prize Problems.
Relation to other formulations of classical physics
Classical mechanics can be mathematically formulated in multiple different ways, other than the "Newtonian" description (which itself, of course, incorporates contributions from others both before and after Newton). The physical content of these different formulations is the same as the Newtonian, but they provide different insights and facilitate different types of calculations. For example, Lagrangian mechanics helps make apparent the connection between symmetries and conservation laws, and it is useful when calculating the motion of constrained bodies, like a mass restricted to move along a curving track or on the surface of a sphere. Hamiltonian mechanics is convenient for statistical physics, leads to further insight about symmetry, and can be developed into sophisticated techniques for perturbation theory. Due to the breadth of these topics, the discussion here will be confined to concise treatments of how they reformulate Newton's laws of motion.
Lagrangian
Lagrangian mechanics differs from the Newtonian formulation by considering entire trajectories at once rather than predicting a body's motion at a single instant. It is traditional in Lagrangian mechanics to denote position with $q$ and velocity with $\dot{q}$. The simplest example is a massive point particle, the Lagrangian for which can be written as the difference between its kinetic and potential energies: $$L(q, \dot{q}) = T - V,$$
where the kinetic energy is $$T = \frac{1}{2}m\dot{q}^2$$
and the potential energy is some function of the position, $V(q)$. The physical path that the particle will take between an initial point and a final point is the path for which the integral of the Lagrangian is "stationary". That is, the physical path has the property that small perturbations of it will, to a first approximation, not change the integral of the Lagrangian. Calculus of variations provides the mathematical tools for finding this path. Applying the calculus of variations to the task of finding the path yields the Euler–Lagrange equation for the particle, $$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}}\right) = \frac{\partial L}{\partial q}.$$
Evaluating the partial derivatives of the Lagrangian gives $$\frac{d}{dt}(m\dot{q}) = -\frac{dV}{dq},$$
which is a restatement of Newton's second law. The left-hand side is the time derivative of the momentum, and the right-hand side is the force, represented in terms of the potential energy.
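This derivation can also be checked mechanically; the sketch below (added for illustration, not part of the original text) uses SymPy's euler_equations helper with an example potential $V(q) = \frac{1}{2}kq^2$, for which the Euler–Lagrange equation reduces to the harmonic-oscillator equation seen earlier:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')

# Lagrangian L = T - V with the example potential V(q) = k q^2 / 2
L = sp.Rational(1, 2) * m * q(t).diff(t)**2 - sp.Rational(1, 2) * k * q(t)**2

# euler_equations applies d/dt(dL/d(qdot)) = dL/dq symbolically
print(euler_equations(L, q(t), t))
# Expected output (up to ordering): [Eq(-k*q(t) - m*Derivative(q(t), (t, 2)), 0)],
# i.e. m q'' = -k q, the harmonic-oscillator equation.
```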
Landau and Lifshitz argue that the Lagrangian formulation makes the conceptual content of classical mechanics more clear than starting with Newton's laws. Lagrangian mechanics provides a convenient framework in which to prove Noether's theorem, which relates symmetries and conservation laws. The conservation of momentum can be derived by applying Noether's theorem to a Lagrangian for a multi-particle system, and so, Newton's third law is a theorem rather than an assumption.
Hamiltonian
In Hamiltonian mechanics, the dynamics of a system are represented by a function called the Hamiltonian, which in many cases of interest is equal to the total energy of the system. The Hamiltonian is a function of the positions and the momenta of all the bodies making up the system, and it may also depend explicitly upon time. The time derivatives of the position and momentum variables are given by partial derivatives of the Hamiltonian, via Hamilton's equations. The simplest example is a point mass $m$ constrained to move in a straight line, under the effect of a potential. Writing $q$ for the position coordinate and $p$ for the body's momentum, the Hamiltonian is $$H(p, q) = \frac{p^2}{2m} + V(q).$$
In this example, Hamilton's equations are $$\frac{dq}{dt} = \frac{\partial H}{\partial p}$$
and $$\frac{dp}{dt} = -\frac{\partial H}{\partial q}.$$
Evaluating these partial derivatives, the former equation becomes $$\frac{dq}{dt} = \frac{p}{m},$$
which reproduces the familiar statement that a body's momentum is the product of its mass and velocity. The time derivative of the momentum is $$\frac{dp}{dt} = -\frac{dV}{dq},$$
which, upon identifying the negative derivative of the potential with the force, is just Newton's second law once again.
As in the Lagrangian formulation, in Hamiltonian mechanics the conservation of momentum can be derived using Noether's theorem, making Newton's third law an idea that is deduced rather than assumed.
Among the proposals to reform the standard introductory-physics curriculum is one that teaches the concept of energy before that of force, essentially "introductory Hamiltonian mechanics".
Hamilton–Jacobi
The Hamilton–Jacobi equation provides yet another formulation of classical mechanics, one which makes it mathematically analogous to wave optics. This formulation also uses Hamiltonian functions, but in a different way than the formulation described above. The paths taken by bodies or collections of bodies are deduced from a function $S(\mathbf{q}, t)$ of positions $\mathbf{q}$ and time $t$. The Hamiltonian is incorporated into the Hamilton–Jacobi equation, a differential equation for $S$. Bodies move over time in such a way that their trajectories are perpendicular to the surfaces of constant $S$, analogously to how a light ray propagates in the direction perpendicular to its wavefront. This is simplest to express for the case of a single point mass, in which $S$ is a function $S(\mathbf{q}, t)$, and the point mass moves in the direction along which $S$ changes most steeply. In other words, the momentum of the point mass is the gradient of $S$: $$\mathbf{p} = \nabla S.$$
The Hamilton–Jacobi equation for a point mass is $$-\frac{\partial S}{\partial t} = H(\mathbf{q}, \nabla S, t).$$
The relation to Newton's laws can be seen by considering a point mass moving in a time-independent potential $V(\mathbf{q})$, in which case the Hamilton–Jacobi equation becomes $$-\frac{\partial S}{\partial t} = \frac{1}{2m}(\nabla S)^2 + V(\mathbf{q}).$$
Taking the gradient of both sides, this becomes $$-\nabla\frac{\partial S}{\partial t} = \frac{1}{2m}\nabla(\nabla S)^2 + \nabla V.$$
Interchanging the order of the partial derivatives on the left-hand side, and using the power and chain rules on the first term on the right-hand side, $$-\frac{\partial}{\partial t}\nabla S = \frac{1}{m}\left(\nabla S \cdot \nabla\right)\nabla S + \nabla V.$$
Gathering together the terms that depend upon the gradient of $S$, $$\left[\frac{\partial}{\partial t} + \frac{1}{m}\nabla S \cdot \nabla\right]\nabla S = -\nabla V.$$
This is another re-expression of Newton's second law. The expression in brackets is a total or material derivative as mentioned above, in which the first term indicates how the function being differentiated changes over time at a fixed location, and the second term captures how a moving particle will see different values of that function as it travels from place to place: $$\frac{d}{dt} = \frac{\partial}{\partial t} + \mathbf{v} \cdot \nabla.$$
Relation to other physical theories
Thermodynamics and statistical physics
In statistical physics, the kinetic theory of gases applies Newton's laws of motion to large numbers (typically on the order of the Avogadro number) of particles. Kinetic theory can explain, for example, the pressure that a gas exerts upon the container holding it as the aggregate of many impacts of atoms, each imparting a tiny amount of momentum.
The Langevin equation is a special case of Newton's second law, adapted for the case of describing a small object bombarded stochastically by even smaller ones. It can be written $$m\frac{d\mathbf{v}}{dt} = -\gamma\mathbf{v} + \boldsymbol{\xi}(t),$$ where $\gamma$ is a drag coefficient and $\boldsymbol{\xi}$ is a force that varies randomly from instant to instant, representing the net effect of collisions with the surrounding particles. This is used to model Brownian motion.
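A minimal simulation sketch of this equation in Python (added for illustration; the mass, drag coefficient, noise strength, and step size are arbitrary assumptions, and the random force is discretized with the standard Euler–Maruyama $\sqrt{dt}$ scaling):

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative parameters (not physical constants from the text)
m, gamma, noise_strength = 1.0, 0.5, 0.2
dt, n_steps = 1e-3, 10_000

v = np.zeros(2)  # velocity of the bombarded particle (2D)
x = np.zeros(2)  # its position
path = np.empty((n_steps, 2))

for i in range(n_steps):
    # Random force: Gaussian kicks scaled by 1/sqrt(dt), so that multiplying
    # by dt below reproduces the sqrt(dt) scaling of the Euler-Maruyama method
    xi = noise_strength * rng.standard_normal(2) / np.sqrt(dt)
    v += (-gamma * v + xi) / m * dt  # Langevin equation: m dv/dt = -gamma v + xi
    x += v * dt
    path[i] = x

print(path[-1])  # endpoint of a Brownian-motion-like wander
```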
Electromagnetism
Newton's three laws can be applied to phenomena involving electricity and magnetism, though subtleties and caveats exist.
Coulomb's law for the electric force between two stationary, electrically charged bodies has much the same mathematical form as Newton's law of universal gravitation: the force is proportional to the product of the charges, inversely proportional to the square of the distance between them, and directed along the straight line between them. The Coulomb force that a charge $q_1$ exerts upon a charge $q_2$ is equal in magnitude to the force that $q_2$ exerts upon $q_1$, and it points in the exact opposite direction. Coulomb's law is thus consistent with Newton's third law.
Electromagnetism treats forces as produced by fields acting upon charges. The Lorentz force law provides an expression for the force upon a charged body that can be plugged into Newton's second law in order to calculate its acceleration. According to the Lorentz force law, a charged body in an electric field experiences a force in the direction of that field, a force proportional to its charge and to the strength of the electric field. In addition, a moving charged body in a magnetic field experiences a force that is also proportional to its charge, in a direction perpendicular to both the field and the body's direction of motion. Using the vector cross product, $$\mathbf{F} = q\mathbf{E} + q\mathbf{v} \times \mathbf{B}.$$
If the electric field vanishes ($\mathbf{E} = \mathbf{0}$), then the force will be perpendicular to the charge's motion, just as in the case of uniform circular motion studied above, and the charge will circle (or more generally move in a helix) around the magnetic field lines at the cyclotron frequency $\omega = qB/m$. Mass spectrometry works by applying electric and/or magnetic fields to moving charges and measuring the resulting acceleration, which by the Lorentz force law yields the mass-to-charge ratio.
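For instance (standard values inserted for illustration, not from the original text), an electron in a 1 tesla magnetic field circles at $$\omega = \frac{eB}{m_e} = \frac{(1.602\times10^{-19}~\mathrm{C})(1~\mathrm{T})}{9.109\times10^{-31}~\mathrm{kg}} \approx 1.76\times10^{11}~\mathrm{rad/s}.$$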
Collections of charged bodies do not always obey Newton's third law: there can be a change of one body's momentum without a compensatory change in the momentum of another. The discrepancy is accounted for by momentum carried by the electromagnetic field itself. The momentum per unit volume of the electromagnetic field is proportional to the Poynting vector.
There is subtle conceptual conflict between electromagnetism and Newton's first law: Maxwell's theory of electromagnetism predicts that electromagnetic waves will travel through empty space at a constant, definite speed. Thus, some inertial observers seemingly have a privileged status over the others, namely those who measure the speed of light and find it to be the value predicted by the Maxwell equations. In other words, light provides an absolute standard for speed, yet the principle of inertia holds that there should be no such standard. This tension is resolved in the theory of special relativity, which revises the notions of space and time in such a way that all inertial observers will agree upon the speed of light in vacuum.
Special relativity
In special relativity, the rule that Wilczek called "Newton's Zeroth Law" breaks down: the mass of a composite object is not merely the sum of the masses of the individual pieces. Newton's first law, inertial motion, remains true. A form of Newton's second law, that force is the rate of change of momentum, also holds, as does the conservation of momentum. However, the definition of momentum is modified. Among the consequences of this is the fact that the more quickly a body moves, the harder it is to accelerate, and so, no matter how much force is applied, a body cannot be accelerated to the speed of light. Depending on the problem at hand, momentum in special relativity can be represented as a three-dimensional vector, $\mathbf{p} = m\gamma\mathbf{v}$, where $m$ is the body's rest mass and $\gamma$ is the Lorentz factor, which depends upon the body's speed. Alternatively, momentum and force can be represented as four-vectors.
Newton's third law must be modified in special relativity. The third law refers to the forces between two bodies at the same moment in time, and a key feature of special relativity is that simultaneity is relative. Events that happen at the same time relative to one observer can happen at different times relative to another. So, in a given observer's frame of reference, action and reaction may not be exactly opposite, and the total momentum of interacting bodies may not be conserved. The conservation of momentum is restored by including the momentum stored in the field that describes the bodies' interaction.
Newtonian mechanics is a good approximation to special relativity when the speeds involved are small compared to that of light.
General relativity
General relativity is a theory of gravity that advances beyond that of Newton. In general relativity, the gravitational force of Newtonian mechanics is reimagined as curvature of spacetime. A curved path like an orbit, attributed to a gravitational force in Newtonian mechanics, is not the result of a force deflecting a body from an ideal straight-line path, but rather the body's attempt to fall freely through a background that is itself curved by the presence of other masses. A remark by John Archibald Wheeler that has become proverbial among physicists summarizes the theory: "Spacetime tells matter how to move; matter tells spacetime how to curve." Wheeler himself thought of this reciprocal relationship as a modern, generalized form of Newton's third law. The relation between matter distribution and spacetime curvature is given by the Einstein field equations, which require tensor calculus to express.
The Newtonian theory of gravity is a good approximation to the predictions of general relativity when gravitational effects are weak and objects are moving slowly compared to the speed of light.
Quantum mechanics
Quantum mechanics is a theory of physics originally developed in order to understand microscopic phenomena: behavior at the scale of molecules, atoms or subatomic particles. Generally and loosely speaking, the smaller a system is, the more an adequate mathematical model will require understanding quantum effects. The conceptual underpinning of quantum physics is very different from that of classical physics. Instead of thinking about quantities like position, momentum, and energy as properties that an object has, one considers what result might appear when a measurement of a chosen type is performed. Quantum mechanics allows the physicist to calculate the probability that a chosen measurement will elicit a particular result. The expectation value for a measurement is the average of the possible results it might yield, weighted by their probabilities of occurrence.
The Ehrenfest theorem provides a connection between quantum expectation values and Newton's second law, a connection that is necessarily inexact, as quantum physics is fundamentally different from classical. In quantum physics, position and momentum are represented by mathematical entities known as Hermitian operators, and the Born rule is used to calculate the expectation values of a position measurement or a momentum measurement. These expectation values will generally change over time; that is, depending on the time at which (for example) a position measurement is performed, the probabilities for its different possible outcomes will vary. The Ehrenfest theorem says, roughly speaking, that the equations describing how these expectation values change over time have a form reminiscent of Newton's second law. However, the more pronounced quantum effects are in a given situation, the more difficult it is to derive meaningful conclusions from this resemblance.
History
The concepts invoked in Newton's laws of motion — mass, velocity, momentum, force — have predecessors in earlier work, and the content of Newtonian physics was further developed after Newton's time. Newton combined knowledge of celestial motions with the study of events on Earth and showed that one theory of mechanics could encompass both.
As noted by scholar I. Bernard Cohen, Newton's work was more than a mere synthesis of previous results, as he selected certain ideas and further transformed them, each into a new form that was useful to him, while at the same time refuting certain basic or fundamental principles of scientists such as Galileo Galilei, Johannes Kepler, René Descartes, and Nicolaus Copernicus. He approached natural philosophy mathematically in a completely novel way: instead of starting from a preconceived natural philosophy, his style was to begin with a mathematical construct and build from there, comparing it to the real world to show that his system accurately accounted for it.
Antiquity and medieval background
Aristotle and "violent" motion
The subject of physics is often traced back to Aristotle, but the history of the concepts involved is obscured by multiple factors. An exact correspondence between Aristotelian and modern concepts is not simple to establish: Aristotle did not clearly distinguish what we would call speed and force, used the same term for density and viscosity, and conceived of motion as always through a medium, rather than through space. In addition, some concepts often termed "Aristotelian" might better be attributed to his followers and commentators upon him. These commentators found that Aristotelian physics had difficulty explaining projectile motion. Aristotle divided motion into two types: "natural" and "violent". The "natural" motion of terrestrial solid matter was to fall downwards, whereas a "violent" motion could push a body sideways. Moreover, in Aristotelian physics, a "violent" motion requires an immediate cause; separated from the cause of its "violent" motion, a body would revert to its "natural" behavior. Yet, a javelin continues moving after it leaves the thrower's hand. Aristotle concluded that the air around the javelin must be imparted with the ability to move the javelin forward.
Philoponus and impetus
John Philoponus, a Byzantine Greek thinker active during the sixth century, found this absurd: the same medium, air, was somehow responsible both for sustaining motion and for impeding it. If Aristotle's idea were true, Philoponus said, armies would launch weapons by blowing upon them with bellows. Philoponus argued that setting a body into motion imparted a quality, impetus, that would be contained within the body itself. As long as its impetus was sustained, the body would continue to move. In the following centuries, versions of impetus theory were advanced by individuals including Nur ad-Din al-Bitruji, Avicenna, Abu'l-Barakāt al-Baghdādī, John Buridan, and Albert of Saxony. In retrospect, the idea of impetus can be seen as a forerunner of the modern concept of momentum. The intuition that objects move according to some kind of impetus persists in many students of introductory physics.
Inertia and the first law
The French philosopher René Descartes introduced the concept of inertia by way of his "laws of nature" in The World (Traité du monde et de la lumière), written 1629–33. However, The World presented a heliocentric worldview, and in 1633 that view had given rise to a great conflict between Galileo Galilei and the Roman Catholic Inquisition. Descartes knew about this controversy and did not wish to get involved. The World was not published until 1664, ten years after his death.
The modern concept of inertia is credited to Galileo. Based on his experiments, Galileo concluded that the "natural" behavior of a moving body was to keep moving, until something else interfered with it; he argued for this conclusion in Two New Sciences (1638). Galileo recognized that in projectile motion, the Earth's gravity affects vertical but not horizontal motion. However, Galileo's idea of inertia was not exactly the one that would be codified into Newton's first law. Galileo thought that a body moving a long distance inertially would follow the curve of the Earth. This idea was corrected by Isaac Beeckman, Descartes, and Pierre Gassendi, who recognized that inertial motion should be motion in a straight line. Descartes published his laws of nature (laws of motion) with this correction in Principles of Philosophy (Principia Philosophiae) in 1644, with the heliocentric part toned down.
According to American philosopher Richard J. Blackwell, Dutch scientist Christiaan Huygens had worked out his own, concise version of the law in 1656. It was not published until 1703, eight years after his death, in the opening paragraph of De Motu Corporum ex Percussione.
According to Huygens, this law was already known by Galileo and Descartes among others.
Force and the second law
Christiaan Huygens, in his Horologium Oscillatorium (1673), put forth the hypothesis that "By the action of gravity, whatever its sources, it happens that bodies are moved by a motion composed both of a uniform motion in one direction or another and of a motion downward due to gravity." Newton's second law generalized this hypothesis from gravity to all forces.
One important characteristic of Newtonian physics is that forces can act at a distance without requiring physical contact. For example, the Sun and the Earth pull on each other gravitationally, despite being separated by millions of kilometres. This contrasts with the idea, championed by Descartes among others, that the Sun's gravity held planets in orbit by swirling them in a vortex of transparent matter, aether. Newton considered aetherial explanations of force but ultimately rejected them. The study of magnetism by William Gilbert and others created a precedent for thinking of immaterial forces, and unable to find a quantitatively satisfactory explanation of his law of gravity in terms of an aetherial model, Newton eventually declared, "I feign no hypotheses": whether or not a model like Descartes's vortices could be found to underlie the Principia's theories of motion and gravity, the first grounds for judging them must be the successful predictions they made. And indeed, since Newton's time every attempt at such a model has failed.
Momentum conservation and the third law
Johannes Kepler suggested that gravitational attractions were reciprocal — that, for example, the Moon pulls on the Earth while the Earth pulls on the Moon — but he did not argue that such pairs are equal and opposite. In his Principles of Philosophy (1644), Descartes introduced the idea that during a collision between bodies, a "quantity of motion" remains unchanged. Descartes defined this quantity somewhat imprecisely by adding up the products of the speed and "size" of each body, where "size" for him incorporated both volume and surface area. Moreover, Descartes thought of the universe as a plenum, that is, filled with matter, so all motion required a body to displace a medium as it moved.
During the 1650s, Huygens studied collisions between hard spheres and deduced a principle that is now identified as the conservation of momentum. Christopher Wren would later deduce the same rules for elastic collisions that Huygens had, and John Wallis would apply momentum conservation to study inelastic collisions. Newton cited the work of Huygens, Wren, and Wallis to support the validity of his third law.
Newton arrived at his set of three laws incrementally. In a 1684 manuscript written to Huygens, he listed four laws: the principle of inertia, the change of motion by force, a statement about relative motion that would today be called Galilean invariance, and the rule that interactions between bodies do not change the motion of their center of mass. In a later manuscript, Newton added a law of action and reaction, while saying that this law and the law regarding the center of mass implied one another. Newton probably settled on the presentation in the Principia, with three primary laws and then other statements reduced to corollaries, during 1685.
After the Principia
Newton expressed his second law by saying that the force on a body is proportional to its change of motion, or momentum. By the time he wrote the Principia, he had already developed calculus (which he called "the science of fluxions"), but in the Principia he made no explicit use of it, perhaps because he believed geometrical arguments in the tradition of Euclid to be more rigorous. Consequently, the Principia does not express acceleration as the second derivative of position, and so it does not give the second law as $\mathbf{F} = m\mathbf{a}$. This form of the second law was written (for the special case of constant force) at least as early as 1716, by Jakob Hermann; Leonhard Euler would employ it as a basic premise in the 1740s. Euler pioneered the study of rigid bodies and established the basic theory of fluid dynamics. Pierre-Simon Laplace's five-volume Traité de mécanique céleste (1798–1825) forsook geometry and developed mechanics purely through algebraic expressions, while resolving questions that the Principia had left open, like a full theory of the tides.
The concept of energy became a key part of Newtonian mechanics in the post-Newton period. Huygens' solution of the collision of hard spheres showed that in that case, not only is momentum conserved, but kinetic energy is as well (or, rather, a quantity that in retrospect we can identify as one-half the total kinetic energy). The question of what is conserved during all other processes, like inelastic collisions and motion slowed by friction, was not resolved until the 19th century. Debates on this topic overlapped with philosophical disputes between the metaphysical views of Newton and Leibniz, and variants of the term "force" were sometimes used to denote what we would call types of energy. For example, in 1742, Émilie du Châtelet wrote, "Dead force consists of a simple tendency to motion: such is that of a spring ready to relax; living force is that which a body has when it is in actual motion." In modern terminology, "dead force" and "living force" correspond to potential energy and kinetic energy respectively. Conservation of energy was not established as a universal principle until it was understood that the energy of mechanical work can be dissipated into heat. With the concept of energy given a solid grounding, Newton's laws could then be derived within formulations of classical mechanics that put energy first, as in the Lagrangian and Hamiltonian formulations described above.
Modern presentations of Newton's laws use the mathematics of vectors, a topic that was not developed until the late 19th and early 20th centuries. Vector algebra, pioneered by Josiah Willard Gibbs and Oliver Heaviside, stemmed from and largely supplanted the earlier system of quaternions invented by William Rowan Hamilton.
See also
Euler's laws of motion
History of classical mechanics
List of eponymous laws
List of equations in classical mechanics
List of scientific laws named after people
List of textbooks on classical mechanics and quantum mechanics
Norton's dome
Notes
References
Further reading
Newton’s Laws of Dynamics - The Feynman Lectures on Physics
Classical mechanics
Isaac Newton
Texts in Latin
Equations of physics
Scientific observation
Experimental physics
Copernican Revolution
Articles containing video clips
Scientific laws
Eponymous laws of physics | Newton's laws of motion | [
"Physics",
"Astronomy",
"Mathematics"
] | 10,810 | [
"Equations of physics",
"History of astronomy",
"Mathematical objects",
"Classical mechanics",
"Equations",
"Scientific laws",
"Mechanics",
"Experimental physics",
"Copernican Revolution"
] |
55,227 | https://en.wikipedia.org/wiki/Baire%20category%20theorem | The Baire category theorem (BCT) is an important result in general topology and functional analysis. The theorem has two forms, each of which gives sufficient conditions for a topological space to be a Baire space (a topological space such that the intersection of countably many dense open sets is still dense). It is used in the proof of results in many areas of analysis and geometry, including some of the fundamental theorems of functional analysis.
Versions of the Baire category theorem were first proved independently in 1897 by Osgood for the real line and in 1899 by Baire for Euclidean space $\mathbb{R}^n$. The more general statement for completely metrizable spaces was first shown by Hausdorff in 1914.
Statement
A Baire space is a topological space $X$ in which every countable intersection of open dense sets is dense in $X$. See the corresponding article for a list of equivalent characterizations, as some are more useful than others depending on the application.
(BCT1) Every complete pseudometric space is a Baire space. In particular, every completely metrizable topological space is a Baire space.
(BCT2) Every locally compact regular space is a Baire space. In particular, every locally compact Hausdorff space is a Baire space.
Neither of these statements directly implies the other, since there are complete metric spaces that are not locally compact (the irrational numbers with the metric defined below; also, any Banach space of infinite dimension), and there are locally compact Hausdorff spaces that are not metrizable (for instance, any uncountable product of non-trivial compact Hausdorff spaces; also, several function spaces used in functional analysis; the uncountable Fort space).
See Steen and Seebach in the references below.
Relation to the axiom of choice
The proof of BCT1 for arbitrary complete metric spaces requires some form of the axiom of choice; and in fact BCT1 is equivalent over ZF to the axiom of dependent choice, a weak form of the axiom of choice.
A restricted form of the Baire category theorem, in which the complete metric space is also assumed to be separable, is provable in ZF with no additional choice principles.
This restricted form applies in particular to the real line, the Baire space $\mathbb{N}^{\mathbb{N}}$, the Cantor space $2^{\mathbb{N}}$, and a separable Hilbert space such as the $L^p$-space $L^2(\mathbb{R}^n)$.
Uses
In functional analysis, BCT1 can be used to prove the open mapping theorem, the closed graph theorem and the uniform boundedness principle.
BCT1 also shows that every nonempty complete metric space with no isolated point is uncountable. (If $X$ is a nonempty countable metric space with no isolated point, then each singleton $\{x\}$ in $X$ is nowhere dense, and so $X$ is meagre in itself.) In particular, this proves that the set of all real numbers is uncountable.
BCT1 shows that each of the following is a Baire space:
The space of real numbers
The irrational numbers, with the metric defined by $d(x, y) = \frac{1}{n+1}$, where $n$ is the first index for which the continued fraction expansions of $x$ and $y$ differ (this is a complete metric space)
The Cantor set
By BCT2, every finite-dimensional Hausdorff manifold is a Baire space, since it is locally compact and Hausdorff. This is so even for non-paracompact (hence nonmetrizable) manifolds such as the long line.
BCT is used to prove Hartogs's theorem, a fundamental result in the theory of several complex variables.
BCT1 is used to prove that a Banach space cannot have countably infinite dimension.
Proof
(BCT1) The following is a standard proof that a complete pseudometric space is a Baire space.
Let $W_1, W_2, \ldots$ be a countable collection of open dense subsets. We want to show that the intersection $W_1 \cap W_2 \cap \ldots$ is dense.
A subset $A \subseteq X$ is dense if and only if every nonempty open subset intersects it. Thus to show that the intersection is dense, it suffices to show that any nonempty open subset $U$ of $X$ has some point $x$ in common with all of the $W_n$.
Because $W_1$ is dense, $U$ intersects $W_1$; consequently, there exists a point $x_1$ and a number $0 < r_1 < 1$ such that:
$\bar{B}(x_1, r_1) \subseteq U \cap W_1,$
where $B(x, r)$ and $\bar{B}(x, r)$ denote an open and closed ball, respectively, centered at $x$ with radius $r$.
Since each $W_n$ is dense, this construction can be continued recursively to find a pair of sequences $x_n$ and $0 < r_n < \tfrac{1}{n}$ such that:
$\bar{B}(x_n, r_n) \subseteq B(x_{n-1}, r_{n-1}) \cap W_n.$
(This step relies on the axiom of choice and the fact that a finite intersection of open sets is open and hence an open ball can be found inside it centered at $x_n$.)
The sequence $(x_n)$ is Cauchy because $x_n \in B(x_m, r_m)$ whenever $n > m$, and hence $(x_n)$ converges to some limit $x$ by completeness.
If $n$ is a positive integer then $x \in \bar{B}(x_n, r_n)$ (because this set is closed).
Thus $x \in U$ and $x \in W_n$ for all $n.$
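As a toy illustration (not from the source) of the nested-ball construction, the sketch below runs the argument on the real line with the dense open sets $W_n = \mathbb{R} \setminus \{q_n\}$, using closed intervals as the closed balls; here the choices are made explicitly, whereas the general proof needs the axiom of (dependent) choice.

    def point_in_intersection(avoid, a=0.0, b=1.0):
        # Repeatedly replace [a, b] by a closed quarter-length subinterval
        # missing the next forbidden point; the intervals are nested and
        # shrink geometrically, so their endpoints form a Cauchy sequence.
        for q in avoid:
            mid = (a + b) / 2.0
            if q <= mid:
                a = (mid + b) / 2.0   # keep the right quarter, which misses q
            else:
                b = (a + mid) / 2.0   # keep the left quarter, which misses q
        return (a + b) / 2.0

    # Avoid a few rationals in (0, 1); the result lies in every W_n used.
    avoid = [0.5, 1/3, 2/3, 0.25, 0.75, 0.2, 0.4, 0.6, 0.8]
    x = point_in_intersection(avoid)
    assert all(x != q for q in avoid)
    print(x)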
There is an alternative proof using Choquet's game.
(BCT2) The proof that a locally compact regular space is a Baire space is similar. It uses the facts that (1) in such a space every point has a local base of closed compact neighborhoods; and (2) in a compact space any collection of closed sets with the finite intersection property has nonempty intersection. The result for locally compact Hausdorff spaces is a special case, as such spaces are regular.
Notes
References
Steen, Lynn Arthur; Seebach, J. Arthur Jr., Counterexamples in Topology. Reprinted by Dover Publications, New York, 1995. (Dover edition).
External links
Encyclopaedia of Mathematics article on Baire theorem
Articles containing proofs
Functional analysis
General topology
Theorems in topology | Baire category theorem | [
"Mathematics"
] | 1,104 | [
"General topology",
"Functions and mappings",
"Mathematical theorems",
"Functional analysis",
"Mathematical objects",
"Theorems in topology",
"Topology",
"Mathematical relations",
"Articles containing proofs",
"Mathematical problems"
] |
55,233 | https://en.wikipedia.org/wiki/High-energy%20astronomy | High-energy astronomy is the study of astronomical objects that release electromagnetic radiation of highly energetic wavelengths. It includes X-ray astronomy, gamma-ray astronomy, extreme UV astronomy, neutrino astronomy, and studies of cosmic rays. The physical study of these phenomena is referred to as high-energy astrophysics.
Astronomical objects commonly studied in this field may include black holes, neutron stars, active galactic nuclei, supernovae, kilonovae, supernova remnants, and gamma-ray bursts.
Missions
Some space- and ground-based telescopes that have studied high-energy astronomy include the following:
AGILE
AMS-02
AUGER
CALET
Chandra
Fermi
HAWC
H.E.S.S.
IceCube
INTEGRAL
MAGIC
NuSTAR
Proton
Swift
TA
XMM-Newton
VERITAS
References
External links
NASA's High Energy Astrophysics Science Archive Research Center
Observational astronomy | High-energy astronomy | [
"Astronomy"
] | 176 | [
"Observational astronomy",
"Astronomy stubs",
"Astronomical sub-disciplines"
] |
55,236 | https://en.wikipedia.org/wiki/Compton%20scattering | Compton scattering (or the Compton effect) is the quantum theory of high frequency photons scattering following an interaction with a charged particle, usually an electron. Specifically, when the photon hits electrons, it releases loosely bound electrons from the outer valence shells of atoms or molecules.
The effect was discovered in 1923 by Arthur Holly Compton while researching the scattering of X-rays by light elements, and earned him the Nobel Prize for Physics in 1927. The Compton effect significantly deviated from dominating classical theories, using both special relativity and quantum mechanics to explain the interaction between high frequency photons and charged particles.
Photons can interact with matter at the atomic level (e.g. photoelectric effect and Rayleigh scattering), at the nucleus, or with just an electron. Pair production and the Compton effect occur at the level of the electron. When a high frequency photon scatters due to an interaction with a charged particle, there is a decrease in the energy of the photon and thus, an increase in its wavelength. This tradeoff between wavelength and energy in response to the collision is the Compton effect. Because of conservation of energy, the lost energy from the photon is transferred to the recoiling particle (such an electron would be called a "Compton Recoil electron").
This implies that if the recoiling particle initially carried more energy than the photon, the reverse would occur. This is known as inverse Compton scattering, in which the scattered photon increases in energy.
Introduction
In Compton's original experiment (see Fig. 1), the energy of the X-ray photon (≈ 17 keV) was significantly larger than the binding energy of the atomic electron, so the electrons could be treated as being free after scattering. The amount by which the light's wavelength changes is called the Compton shift. Although nuclear Compton scattering exists, Compton scattering usually refers to the interaction involving only the electrons of an atom. The Compton effect was observed by Arthur Holly Compton in 1923 at Washington University in St. Louis and further verified by his graduate student Y. H. Woo in the years following. Compton was awarded the 1927 Nobel Prize in Physics for the discovery.
The effect is significant because it demonstrates that light cannot be explained purely as a wave phenomenon. Thomson scattering, the classical theory of an electromagnetic wave scattered by charged particles, cannot explain shifts in wavelength at low intensity: classically, light of sufficient intensity for the electric field to accelerate a charged particle to a relativistic speed will cause radiation-pressure recoil and an associated Doppler shift of the scattered light, but the effect would become arbitrarily small at sufficiently low light intensities regardless of wavelength. Thus, if we are to explain low-intensity Compton scattering, light must behave as if it consists of particles. Or the assumption that the electron can be treated as free is invalid resulting in the effectively infinite electron mass equal to the nuclear mass (see e.g. the comment below on elastic scattering of X-rays being from that effect). Compton's experiment convinced physicists that light can be treated as a stream of particle-like objects (quanta called photons), whose energy is proportional to the light wave's frequency.
As shown in Fig. 2, the interaction between an electron and a photon results in the electron being given part of the energy (making it recoil), and a photon of the remaining energy being emitted in a different direction from the original, so that the overall momentum of the system is also conserved. If the scattered photon still has enough energy, the process may be repeated. In this scenario, the electron is treated as free or loosely bound. Experimental verification of momentum conservation in individual Compton scattering processes by Bothe and Geiger as well as by Compton and Simon has been important in disproving the BKS theory.
Compton scattering is commonly described as inelastic scattering. This is because, unlike the more common Thomson scattering that happens at the low-energy limit, the energy in the scattered photon in Compton scattering is less than the energy of the incident photon. As the electron is typically weakly bound to the atom, the scattering can be viewed from either the perspective of an electron in a potential well, or as an atom with a small ionization energy. In the former perspective, energy of the incident photon is transferred to the recoil particle, but only as kinetic energy. The electron gains no internal energy, respective masses remain the same, the mark of an elastic collision. From this perspective, Compton scattering could be considered elastic because the internal state of the electron does not change during the scattering process. In the latter perspective, the atom's state is changed, constituting an inelastic collision. Whether Compton scattering is considered elastic or inelastic depends on which perspective is being used, as well as the context.
Compton scattering is one of four competing processes when photons interact with matter. At energies of a few eV to a few keV, corresponding to visible light through soft X-rays, a photon can be completely absorbed and its energy can eject an electron from its host atom, a process known as the photoelectric effect. High-energy photons of 1.022 MeV and above may bombard the nucleus and cause an electron and a positron to be formed, a process called pair production; even-higher-energy photons (beyond a threshold energy of at least a few MeV, depending on the nuclei involved) can eject a nucleon or alpha particle from the nucleus in a process called photodisintegration. Compton scattering is the most important interaction in the intervening energy region, at photon energies greater than those typical of the photoelectric effect but less than the pair-production threshold.
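As a rough schematic of these energy regions (the photoelectric/Compton crossover shifts strongly with the absorber's atomic number, so the band edges below are illustrative assumptions; only the pair-production threshold of $2 m_e c^2$ is a fixed constant):

    def dominant_photon_interaction(energy_mev):
        # Schematic bands for a low-Z absorber; real crossovers vary.
        PAIR_THRESHOLD_MEV = 1.022      # 2 x electron rest energy
        PE_CROSSOVER_MEV = 0.05         # illustrative assumption
        if energy_mev < PE_CROSSOVER_MEV:
            return "photoelectric effect dominates"
        if energy_mev < PAIR_THRESHOLD_MEV:
            return "Compton scattering dominates"
        return "pair production possible (photodisintegration at higher energies)"

    for e in (0.001, 0.5, 30.0):
        print(e, "MeV ->", dominant_photon_interaction(e))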
Description of the phenomenon
By the early 20th century, research into the interaction of X-rays with matter was well under way. It was observed that when X-rays of a known wavelength interact with atoms, the X-rays are scattered through an angle $\theta$ and emerge at a different wavelength related to $\theta$. Although classical electromagnetism predicted that the wavelength of scattered rays should be equal to the initial wavelength, multiple experiments had found that the wavelength of the scattered rays was longer (corresponding to lower energy) than the initial wavelength.
In 1923, Compton published a paper in the Physical Review that explained the X-ray shift by attributing particle-like momentum to light quanta (Albert Einstein had proposed light quanta in 1905 in explaining the photo-electric effect, but Compton did not build on Einstein's work). The energy of light quanta depends only on the frequency of the light. In his paper, Compton derived the mathematical relationship between the shift in wavelength and the scattering angle of the X-rays by assuming that each scattered X-ray photon interacted with only one electron. His paper concludes by reporting on experiments which verified his derived relation:
$\lambda' - \lambda = \frac{h}{m_e c}(1 - \cos\theta),$
where
$\lambda$ is the initial wavelength,
$\lambda'$ is the wavelength after scattering,
$h$ is the Planck constant,
$m_e$ is the electron rest mass,
$c$ is the speed of light, and
$\theta$ is the scattering angle.
The quantity $\tfrac{h}{m_e c}$ is known as the Compton wavelength of the electron; it is equal to $2.43 \times 10^{-12}$ m. The wavelength shift $\lambda' - \lambda$ is at least zero (for $\theta = 0^\circ$) and at most twice the Compton wavelength of the electron (for $\theta = 180^\circ$).
Compton found that some X-rays experienced no wavelength shift despite being scattered through large angles; in each of these cases the photon failed to eject an electron. Thus the magnitude of the shift is related not to the Compton wavelength of the electron, but to the Compton wavelength of the entire atom, which can be upwards of 10000 times smaller. This is known as "coherent" scattering off the entire atom since the atom remains intact, gaining no internal excitation.
In Compton's original experiments the wavelength shift given above was the directly measurable observable. In modern experiments it is conventional to measure the energies, not the wavelengths, of the scattered photons. For a given incident energy $E_\gamma = hc/\lambda$, the outgoing final-state photon energy, $E_{\gamma'}$, is given by
$E_{\gamma'} = \frac{E_\gamma}{1 + (E_\gamma / m_e c^2)(1 - \cos\theta)}.$
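A minimal numeric sketch of the two relations above (constant and function names are illustrative, not from the source):

    import math

    H = 6.62607015e-34          # Planck constant, J s
    M_E = 9.1093837015e-31      # electron rest mass, kg
    C = 2.99792458e8            # speed of light, m/s
    COMPTON_WL = H / (M_E * C)  # ~2.43e-12 m

    def compton_shift(theta_rad):
        # Wavelength shift lambda' - lambda for scattering angle theta.
        return COMPTON_WL * (1.0 - math.cos(theta_rad))

    def scattered_energy(e_gamma_kev, theta_rad):
        # Outgoing photon energy for an incident photon of e_gamma_kev.
        m_e_c2_kev = 511.0
        return e_gamma_kev / (
            1.0 + (e_gamma_kev / m_e_c2_kev) * (1.0 - math.cos(theta_rad)))

    # Compton's ~17 keV X-rays scattered through 90 degrees:
    print(compton_shift(math.pi / 2))           # ~2.43e-12 m
    print(scattered_energy(17.0, math.pi / 2))  # ~16.4 keV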
Derivation of the scattering formula
A photon $\gamma$ with wavelength $\lambda$ collides with an electron $e$ in an atom, which is treated as being at rest. The collision causes the electron to recoil, and a new photon $\gamma'$ with wavelength $\lambda'$ emerges at angle $\theta$ from the photon's incoming path. Let $e'$ denote the electron after the collision. Compton allowed for the possibility that the interaction would sometimes accelerate the electron to speeds sufficiently close to the velocity of light as to require the application of Einstein's special relativity theory to properly describe its energy and momentum.
At the conclusion of Compton's 1923 paper, he reported results of experiments confirming the predictions of his scattering formula, thus supporting the assumption that photons carry momentum as well as quantized energy. At the start of his derivation, he had postulated an expression for the momentum of a photon from equating Einstein's already established mass-energy relationship of $E = mc^2$ to the quantized photon energies of $hf$, which Einstein had separately postulated. If $mc^2 = hf$, the equivalent photon mass must be $hf/c^2$. The photon's momentum is then simply this effective mass times the photon's frame-invariant velocity $c$. For a photon, its momentum $p = hf/c$, and thus $hf$ can be substituted for $pc$ for all photon momentum terms which arise in course of the derivation below. The derivation which appears in Compton's paper is more terse, but follows the same logic in the same sequence as the following derivation.
The conservation of energy merely equates the sum of energies before and after scattering:
$E_\gamma + E_e = E_{\gamma'} + E_{e'}.$
Compton postulated that photons carry momentum; thus from the conservation of momentum, the momenta of the particles should be similarly related by
$\mathbf{p}_\gamma = \mathbf{p}_{\gamma'} + \mathbf{p}_{e'},$
in which the initial electron momentum ($\mathbf{p}_e$) is omitted on the assumption it is effectively zero.
The photon energies are related to the frequencies by
$E_\gamma = hf, \qquad E_{\gamma'} = hf',$
where $h$ is the Planck constant.
Before the scattering event, the electron is treated as sufficiently close to being at rest that its total energy consists entirely of the mass-energy equivalence of its (rest) mass $m_e$:
$E_e = m_e c^2.$
After scattering, the possibility that the electron might be accelerated to a significant fraction of the speed of light requires that its total energy be represented using the relativistic energy–momentum relation
$E_{e'} = \sqrt{(p_{e'} c)^2 + (m_e c^2)^2}.$
Substituting these quantities into the expression for the conservation of energy gives
$hf + m_e c^2 = hf' + \sqrt{(p_{e'} c)^2 + (m_e c^2)^2}.$
This expression can be used to find the magnitude of the momentum of the scattered electron,
$p_{e'}^{\,2} c^2 = (hf - hf' + m_e c^2)^2 - (m_e c^2)^2. \quad (1)$
Note that this magnitude of the momentum gained by the electron (formerly zero) exceeds the energy/$c$ lost by the photon,
$\frac{1}{c}\sqrt{(hf - hf' + m_e c^2)^2 - (m_e c^2)^2} > \frac{hf - hf'}{c}.$
Equation (1) relates the various energies associated with the collision. The electron's momentum change involves a relativistic change in the energy of the electron, so it is not simply related to the change in energy occurring in classical physics. The change of the magnitude of the momentum of the photon is not just related to the change of its energy; it also involves a change in direction.
Solving the conservation of momentum expression for the scattered electron's momentum gives
$\mathbf{p}_{e'} = \mathbf{p}_\gamma - \mathbf{p}_{\gamma'}.$
Making use of the scalar product yields the square of its magnitude,
$p_{e'}^{\,2} = p_\gamma^{\,2} + p_{\gamma'}^{\,2} - 2\, p_\gamma p_{\gamma'} \cos\theta.$
In anticipation of $p_\gamma c$ being replaced with $hf$, multiply both sides by $c^2$,
$p_{e'}^{\,2} c^2 = p_\gamma^{\,2} c^2 + p_{\gamma'}^{\,2} c^2 - 2 c^2 p_\gamma p_{\gamma'} \cos\theta.$
After replacing the photon momentum terms with $hf/c$, we get a second expression for the magnitude of the momentum of the scattered electron,
$p_{e'}^{\,2} c^2 = (hf)^2 + (hf')^2 - 2 (hf)(hf') \cos\theta. \quad (2)$
Equating the alternate expressions for this momentum gives
$(hf - hf' + m_e c^2)^2 - (m_e c^2)^2 = (hf)^2 + (hf')^2 - 2 (hf)(hf') \cos\theta,$
which, after evaluating the square and canceling and rearranging terms, further yields
$2 h f m_e c^2 - 2 h f' m_e c^2 = 2 h^2 f f' (1 - \cos\theta).$
Dividing both sides by $2 h f f' m_e c$ yields
$\frac{c}{f'} - \frac{c}{f} = \frac{h}{m_e c} (1 - \cos\theta).$
Finally, since $f \lambda = f' \lambda' = c$,
$\lambda' - \lambda = \frac{h}{m_e c} (1 - \cos\theta).$
It can further be seen that the angle $\phi$ of the outgoing electron with the direction of the incoming photon is specified by
$\cot\phi = \left(1 + \frac{hf}{m_e c^2}\right) \tan(\theta/2).$
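The derivation can be checked numerically; this sketch (illustrative, not from the source) computes the recoil angle from the relation above and verifies that the photon and electron momentum components balance:

    import math

    M_E_C2 = 511.0  # electron rest energy, keV

    def recoil_angle(e_gamma_kev, theta_rad):
        # phi from cot(phi) = (1 + E/(m_e c^2)) * tan(theta/2).
        return math.atan(
            1.0 / ((1.0 + e_gamma_kev / M_E_C2) * math.tan(theta_rad / 2.0)))

    def momentum_residual(e_gamma_kev, theta_rad):
        # Momentum balance in units of keV/c; both values should be ~0.
        e_out = e_gamma_kev / (
            1.0 + (e_gamma_kev / M_E_C2) * (1.0 - math.cos(theta_rad)))
        e_kin = e_gamma_kev - e_out                       # electron KE
        p_e = math.sqrt((e_kin + M_E_C2)**2 - M_E_C2**2)  # |p_e'| c
        phi = recoil_angle(e_gamma_kev, theta_rad)
        px = e_out * math.cos(theta_rad) + p_e * math.cos(phi)
        py = e_out * math.sin(theta_rad) - p_e * math.sin(phi)
        return px - e_gamma_kev, py

    print(momentum_residual(662.0, math.radians(60)))  # ~(0.0, 0.0)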
Applications
Compton scattering
Compton scattering is of prime importance to radiobiology, as it is the most probable interaction of gamma rays and high energy X-rays with atoms in living beings and is applied in radiation therapy.
Compton scattering is an important effect in gamma spectroscopy which gives rise to the Compton edge, as it is possible for the gamma rays to scatter out of the detectors used. Compton suppression is used to detect stray scatter gamma rays to counteract this effect.
Magnetic Compton scattering
Magnetic Compton scattering is an extension of the previously mentioned technique which involves the magnetisation of a crystal sample hit with high energy, circularly polarised photons. By measuring the scattered photons' energy and reversing the magnetisation of the sample, two different Compton profiles are generated (one for spin-up momenta and one for spin-down momenta). Taking the difference between these two profiles gives the magnetic Compton profile (MCP), given by
$J_{\text{mag}}(p_z) = \frac{1}{\mu} \iint \left( n_\uparrow(\mathbf{p}) - n_\downarrow(\mathbf{p}) \right) dp_x\, dp_y$
– a one-dimensional projection of the electron spin density,
where $\mu$ is the number of spin-unpaired electrons in the system, and $n_\uparrow(\mathbf{p})$ and $n_\downarrow(\mathbf{p})$ are the three-dimensional electron momentum distributions for the majority-spin and minority-spin electrons respectively.
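A minimal sketch of how such a profile could be computed from sampled spin-resolved momentum densities, assuming NumPy and a regular grid (the Gaussian toy densities are illustrative, not measured data):

    import numpy as np

    def magnetic_compton_profile(n_up, n_down, dpx, dpy, mu=1.0):
        # J_mag(p_z): integrate the spin-difference momentum density over
        # p_x and p_y (axes 0 and 1), leaving a 1-D profile along p_z.
        return np.sum(n_up - n_down, axis=(0, 1)) * dpx * dpy / mu

    p = np.linspace(-5.0, 5.0, 64)
    PX, PY, PZ = np.meshgrid(p, p, p, indexing="ij")
    r2 = PX**2 + PY**2 + PZ**2
    n_up, n_down = 1.2 * np.exp(-r2), 1.0 * np.exp(-r2 / 0.8)
    dp = p[1] - p[0]
    mcp = magnetic_compton_profile(n_up, n_down, dp, dp)
    print(np.sum(mcp) * dp)  # area ~ net spin moment of the toy system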
Since this scattering process is incoherent (there is no phase relationship between the scattered photons), the MCP is representative of the bulk properties of the sample and is a probe of the ground state. This means that the MCP is ideal for comparison with theoretical techniques such as density functional theory.
The area under the MCP is directly proportional to the spin moment of the system and so, when combined with total moment measurements methods (such as SQUID magnetometry), can be used to isolate both the spin and orbital contributions to the total moment of a system.
The shape of the MCP also yields insight into the origin of the magnetism in the system.
Inverse Compton scattering
Inverse Compton scattering is important in astrophysics. In X-ray astronomy, the accretion disk surrounding a black hole is presumed to produce a thermal spectrum. The lower energy photons produced from this spectrum are scattered to higher energies by relativistic electrons in the surrounding corona. This is surmised to cause the power law component in the X-ray spectra (0.2–10 keV) of accreting black holes.
The effect is also observed when photons from the cosmic microwave background (CMB) move through the hot gas surrounding a galaxy cluster. The CMB photons are scattered to higher energies by the electrons in this gas, resulting in the Sunyaev–Zel'dovich effect. Observations of the Sunyaev–Zel'dovich effect provide a nearly redshift-independent means of detecting galaxy clusters.
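For a sense of scale, a standard rule of thumb (not stated in this article) is that a single inverse-Compton scattering off an isotropic photon field boosts the photon energy by roughly $(4/3)\gamma^2$ in the Thomson limit:

    def inverse_compton_boost(e_photon_ev, gamma):
        # Mean scattered-photon energy for an electron of Lorentz factor
        # gamma (Thomson limit, isotropic photon field; rule of thumb).
        return (4.0 / 3.0) * gamma**2 * e_photon_ev

    # A typical CMB photon (~6e-4 eV) hit by a gamma = 1000 electron:
    print(inverse_compton_boost(6e-4, 1.0e3))  # ~800 eV, i.e. soft X-ray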
Some synchrotron radiation facilities scatter laser light off the stored electron beam.
This Compton backscattering produces high energy photons in the MeV to GeV range subsequently used for nuclear physics experiments.
Non-linear inverse Compton scattering
Non-linear inverse Compton scattering (NICS) is the scattering of multiple low-energy photons, given by an intense electromagnetic field, in a high-energy photon (X-ray or gamma ray) during the interaction with a charged particle, such as an electron. It is also called non-linear Compton scattering and multiphoton Compton scattering. It is the non-linear version of inverse Compton scattering in which the conditions for multiphoton absorption by the charged particle are reached due to a very intense electromagnetic field, for example the one produced by a laser.
Non-linear inverse Compton scattering is an interesting phenomenon for all applications requiring high-energy photons since NICS is capable of producing photons with energy comparable to the charged particle rest energy and higher. As a consequence NICS photons can be used to trigger other phenomena such as pair production, Compton scattering, nuclear reactions, and can be used to probe non-linear quantum effects and non-linear QED.
See also
References
Further reading
(the original 1923 paper on the APS website)
Stuewer, Roger H. (1975), The Compton Effect: Turning Point in Physics (New York: Science History Publications)
External links
Compton Scattering – Georgia State University
Compton Scattering Data – Georgia State University
Derivation of Compton shift equation
Astrophysics
Observational astronomy
Atomic physics
Foundational quantum physics
Quantum electrodynamics
X-ray scattering | Compton scattering | [
"Physics",
"Chemistry",
"Astronomy"
] | 3,167 | [
"X-ray scattering",
"Foundational quantum physics",
"Observational astronomy",
"Quantum mechanics",
"Astrophysics",
"Scattering",
"Atomic physics",
"Atomic, molecular, and optical physics",
"Astronomical sub-disciplines"
] |
55,244 | https://en.wikipedia.org/wiki/Hypersonic%20speed | In aerodynamics, a hypersonic speed is one that exceeds five times the speed of sound, often stated as starting at speeds of Mach 5 and above.
The precise Mach number at which a craft can be said to be flying at hypersonic speed varies, since individual physical changes in the airflow (like molecular dissociation and ionization) occur at different speeds; these effects collectively become important around Mach 5–10. The hypersonic regime can alternatively be defined as speeds where the specific heat capacity changes with the temperature of the flow as the kinetic energy of the moving object is converted into heat.
Characteristics of flow
While the definition of hypersonic flow can be quite vague and is generally debatable (especially because of the absence of discontinuity between supersonic and hypersonic flows), a hypersonic flow may be characterized by certain physical phenomena that can no longer be analytically discounted as in supersonic flow. The peculiarities in hypersonic flows are as follows:
Shock layer
Aerodynamic heating
Entropy layer
Real gas effects
Low density effects
Independence of aerodynamic coefficients from Mach number
Small shock stand-off distance
As a body's Mach number increases, the density behind a bow shock generated by the body also increases, which corresponds to a decrease in volume behind the shock due to conservation of mass. Consequently, the distance between the bow shock and the body decreases at higher Mach numbers.
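The density jump follows from the Rankine–Hugoniot relation for a normal shock; the short sketch below shows the ratio saturating near $(\gamma + 1)/(\gamma - 1) = 6$ for air, which is why the bow shock sits ever closer to the body at high Mach numbers:

    def shock_density_ratio(mach, gamma=1.4):
        # Rankine-Hugoniot density ratio rho2/rho1 across a normal shock.
        m2 = mach * mach
        return ((gamma + 1.0) * m2) / ((gamma - 1.0) * m2 + 2.0)

    for m in (2.0, 5.0, 10.0, 25.0):
        print(m, round(shock_density_ratio(m), 3))
    # 2 -> 2.667, 5 -> 5.0, 10 -> 5.714, 25 -> 5.952 (limit: 6 for air)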
Entropy layer
As Mach numbers increase, the entropy change across the shock also increases, which results in a strong entropy gradient and highly vortical flow that mixes with the boundary layer.
Viscous interaction
A portion of the large kinetic energy associated with flow at high Mach numbers transforms into internal energy in the fluid due to viscous effects. The increase in internal energy is realized as an increase in temperature. Since the pressure gradient normal to the flow within a boundary layer is approximately zero for low to moderate hypersonic Mach numbers, the increase of temperature through the boundary layer coincides with a decrease in density. This causes the bottom of the boundary layer to expand, so that the boundary layer over the body grows thicker and can often merge with the shock wave near the body leading edge.
High-temperature flow
High temperatures, a manifestation of viscous dissipation, cause non-equilibrium chemical flow properties such as vibrational excitation and the dissociation and ionization of molecules, resulting in convective and radiative heat flux.
Classification of Mach regimes
Although "subsonic" and "supersonic" usually refer to speeds below and above the local speed of sound respectively, aerodynamicists often use these terms to refer to particular ranges of Mach values. When an aircraft approaches transonic speeds (around Mach 1), it enters a special regime. The usual approximations based on the Navier–Stokes equations, which work well for subsonic designs, start to break down because, even in the freestream, some parts of the flow locally exceed Mach 1. So, more sophisticated methods are needed to handle this complex behavior.
The "supersonic regime" usually refers to the set of Mach numbers for which linearised theory may be used; for example, where the (air) flow is not chemically reacting and where heat transfer between air and vehicle may be reasonably neglected in calculations. Generally, NASA defines "high" hypersonic as any Mach number from 10 to 25, and re-entry speeds as anything greater than Mach 25. Among the spacecraft operating in these regimes are returning Soyuz and Dragon space capsules; the previously-operated Space Shuttle; various reusable spacecraft in development such as SpaceX Starship and Rocket Lab Electron; and (theoretical) spaceplanes.
In the following classification, the "regimes" or "ranges of Mach values" are referenced instead of the usual meanings of "subsonic" and "supersonic". The conventional boundaries are approximately: subsonic below Mach 0.8, transonic from Mach 0.8 to 1.2, supersonic from Mach 1.2 to 5, hypersonic from Mach 5 to 10, high-hypersonic from Mach 10 to 25, and re-entry speeds above Mach 25.
Similarity parameters
The categorization of airflow relies on a number of similarity parameters, which allow the simplification of a nearly infinite number of test cases into groups of similarity. For transonic and compressible flow, the Mach and Reynolds numbers alone allow good categorization of many flow cases.
Hypersonic flows, however, require other similarity parameters. First, the analytic equations for the oblique shock angle become nearly independent of Mach number at high (~>10) Mach numbers. Second, the formation of strong shocks around aerodynamic bodies means that the freestream Reynolds number is less useful as an estimate of the behavior of the boundary layer over a body (although it is still important). Finally, the increased temperature of hypersonic flow mean that real gas effects become important. Research in hypersonics is therefore often called aerothermodynamics, rather than aerodynamics.
The introduction of real gas effects means that more variables are required to describe the full state of a gas. Whereas a stationary gas can be described by three variables (pressure, temperature, adiabatic index), and a moving gas by four (flow velocity), a hot gas in chemical equilibrium also requires state equations for the chemical components of the gas, and a gas in nonequilibrium solves those state equations using time as an extra variable. This means that for nonequilibrium flow, something between 10 and 100 variables may be required to describe the state of the gas at any given time. Additionally, rarefied hypersonic flows (usually defined as those with a Knudsen number above 0.1) do not follow the Navier–Stokes equations.
Hypersonic flows are typically categorized by their total energy, expressed as total enthalpy (MJ/kg), total pressure (kPa-MPa), stagnation pressure (kPa-MPa), stagnation temperature (K), or flow velocity (km/s).
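As a rough illustration of the total-enthalpy measure (a calorically perfect estimate with assumed values; in a real hypersonic flow $c_p$ itself varies with temperature, as noted above):

    def total_enthalpy_mj_per_kg(velocity_m_s, static_temp_k=250.0, cp=1005.0):
        # h0 = cp*T + V^2/2, treating cp for air as constant.
        return (cp * static_temp_k + 0.5 * velocity_m_s**2) / 1.0e6

    for v in (1000.0, 3000.0, 7500.0):   # m/s; ~7.5 km/s is orbital entry
        print(v, round(total_enthalpy_mj_per_kg(v), 2), "MJ/kg")
    # The kinetic term dominates: ~0.75, 4.75, 28.38 MJ/kg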
Wallace D. Hayes developed a similarity parameter, similar to the Whitcomb area rule, which allowed similar configurations to be compared. In the study of hypersonic flow over slender bodies, the product of the freestream Mach number $M_\infty$ and the flow deflection angle $\theta$, known as the hypersonic similarity parameter
$K = M_\infty \theta,$
is considered to be an important governing parameter. The slenderness ratio of a vehicle, $\tau = d/l$, where $d$ is the diameter and $l$ is the length, is often substituted for $\theta$.
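A trivial numeric illustration of the similarity parameter (example values assumed): two slender configurations with equal $K$ are expected to show similar hypersonic pressure distributions.

    import math

    def hypersonic_similarity_parameter(mach, deflection_rad):
        # K = M * theta for a slender body.
        return mach * deflection_rad

    # A 5-degree wedge at Mach 8 vs. a 10-degree wedge at Mach 4:
    k1 = hypersonic_similarity_parameter(8.0, math.radians(5.0))
    k2 = hypersonic_similarity_parameter(4.0, math.radians(10.0))
    print(round(k1, 3), round(k2, 3))  # both ~0.698 -> similar flows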
Regimes
Hypersonic flow can be approximately separated into a number of regimes. The selection of these regimes is rough, due to the blurring of the boundaries where a particular effect can be found.
Perfect gas
In this regime, the gas can be regarded as an ideal gas. Flow in this regime is still Mach number dependent. Simulations start to depend on the use of a constant-temperature wall, rather than the adiabatic wall typically used at lower speeds. The lower border of this region is around Mach 5, where ramjets become inefficient, and the upper border around Mach 10–12.
Two-temperature ideal gas
This is a subset of the perfect gas regime, where the gas can be considered chemically perfect, but the rotational and vibrational temperatures of the gas must be considered separately, leading to two temperature models. See particularly the modeling of supersonic nozzles, where vibrational freezing becomes important.
Dissociated gas
In this regime, diatomic or polyatomic gases (the gases found in most atmospheres) begin to dissociate as they come into contact with the bow shock generated by the body. Surface catalysis plays a role in the calculation of surface heating, meaning that the type of surface material also has an effect on the flow. The lower border of this regime is where any component of a gas mixture first begins to dissociate in the stagnation point of a flow (which for nitrogen is around 2000 K). At the upper border of this regime, the effects of ionization start to have an effect on the flow.
Ionized gas
In this regime the ionized electron population of the stagnated flow becomes significant, and the electrons must be modeled separately. Often the electron temperature is handled separately from the temperature of the remaining gas components. This region occurs for freestream flow velocities around 3–4 km/s. Gases in this region are modeled as non-radiating plasmas.
Radiation-dominated regime
Above around 12 km/s, the heat transfer to a vehicle changes from being conductively dominated to radiatively dominated. The modeling of gases in this regime is split into two classes:
Optically thin: where the gas does not re-absorb radiation emitted from other parts of the gas
Optically thick: where the radiation must be considered a separate source of energy.
The modeling of optically thick gases is extremely difficult, since, due to the calculation of the radiation at each point, the computation load theoretically expands exponentially as the number of points considered increases.
See also
Hypersonic glide vehicle
Supersonic transport
Lifting body
Atmospheric entry
Hypersonic flight
DARPA Falcon Project
Reaction Engines Skylon (design study)
Reaction Engines A2 (design study)
HyperSoar (concept)
Boeing X-51 Waverider
X-20 Dyna-Soar (cancelled)
Rockwell X-30 (cancelled)
Avatar RLV (2001 Indian concept study)
Hypersonic Technology Demonstrator Vehicle (Indian project)
Ayaks (Russian wave rider project from the 1990s)
Avangard (Russian hypersonic glide vehicle, in service)
DF-ZF (Chinese hypersonic glide vehicle, operational)
Lockheed Martin SR-72 (planned)
WZ-8 Chinese Hypersonic surveillance UAV (In Service)
MD-22 Chinese Hypersonic Unmanned combat aerial vehicle (In development)
Engines
Rocket engine
Ramjet
Scramjet
Reaction Engines SABRE, LAPCAT (design studies)
Missiles
3M22 Zircon Anti-ship hypersonic cruise missile (in production)
BrahMos-II Cruise Missile – (Under Development)
Other flow regimes
Subsonic flight
Transonic
Supersonic speed
References
External links
NASA's Guide to Hypersonics
Hypersonics Group at Imperial College
University of Queensland Centre for Hypersonics
High Speed Flow Group at University of New South Wales
Hypersonics Group at the University of Oxford
Aerodynamics
Aerospace engineering
Airspeed
Spacecraft propulsion | Hypersonic speed | [
"Physics",
"Chemistry",
"Engineering"
] | 2,046 | [
"Physical quantities",
"Aerodynamics",
"Airspeed",
"Aerospace engineering",
"Wikipedia categories named after physical quantities",
"Fluid dynamics"
] |
55,249 | https://en.wikipedia.org/wiki/Skunk%20Works | Skunk Works is an official pseudonym for Lockheed Martin's Advanced Development Programs (ADP), formerly called Lockheed Advanced Development Projects. It is responsible for a number of aircraft designs, highly classified research and development programs, and exotic aircraft platforms. Known locations include United States Air Force Plant 42 (Palmdale, California), United States Air Force Plant 4 (Fort Worth, Texas), and Marietta, Georgia.
Skunk Works' history started with the P-38 Lightning in 1939 and the P-80 Shooting Star in 1943. Skunk Works engineers subsequently developed the U-2, SR-71 Blackbird, F-117 Nighthawk, F-22 Raptor, and F-35 Lightning II, the latter being used in the air forces of several countries.
The Skunk Works name was taken from the "Skonk Oil" factory in the comic strip Li'l Abner. Derived from the Lockheed use of the term, the designation "skunk works" or "skunkworks" is now widely used in business, engineering, and technical fields to describe a group within an organization given a high degree of autonomy and unhampered by bureaucracy, with the task of working on advanced or secret projects.
History
There are conflicting observations about the birth of Skunk Works.
Engineer Ben Rich sets the origin as June 1943 in Burbank, California. Kelly Johnson has made contradictory statements, some agreeing with Rich, and others putting the origin earlier, in 1939. The official Lockheed Skunk Works story states:
Warren M. Bodie, journalist, historian, and Skunk Works engineer from 1977 to 1984, wrote that engineering independence, elitism and secrecy of the Skunk Works variety were demonstrated earlier when Lockheed was asked by Lieutenant Benjamin S. Kelsey (later air force brigadier general) to build for the United States Army Air Corps a high speed, high altitude fighter to compete with German aircraft. In July 1938, while the rest of Lockheed was busy tooling up to build Hudson reconnaissance bombers to fill a British contract, a small group of engineers was assigned to fabricate the first prototype of what would become the P-38 Lightning. Kelly Johnson set them apart from the rest of the factory in a walled-off section of one building, off limits to all but those involved directly. Secretly, a number of advanced features were being incorporated into the new fighter including a significant structural revolution in which the aluminum skin of the aircraft was joggled, fitted and flush-riveted, a design innovation not called for in the army's specification but one that would yield less aerodynamic drag and give greater strength with lower mass.
As a result, the XP-38 was the first 400-mph fighter in the world. The Lightning team was temporarily moved to the 3G Distillery, a smelly former bourbon works where the first YP-38 (constructor's number 2202) was built. Moving from the distillery to a larger building, the stench from a nearby plastic factory was so vile that Irv Culver, one of the engineers, began answering the intra-Lockheed "house" phone "Skonk Works, inside man Culver speaking!"
In Al Capp's comic strip Li'l Abner, Big Barnsmell's Skonk Works — spelled with an "o" — was where Kickapoo Joy Juice was brewed from skunks, old shoes, kerosene, anvils, and other strange ingredients. When the name leaked out, Lockheed ordered it changed to "Skunk Works" to avoid potential legal trouble over use of a copyrighted term. The term rapidly circulated throughout the aerospace community, and became a common nickname for research and development offices. The once informal nickname is now a registered trademark of Lockheed Martin.
In November 1941, Kelsey gave the unofficial nod to Johnson and the P-38 team to engineer a drop tank system to extend range for the fighter, and they completed the initial research and development without a contract. When the Army Air Forces officially asked for a range extension solution it was ready. The range modifications were performed in Lockheed's Building 304, starting with 100 P-38F models on April 15, 1942. Some of the group of independent-minded engineers were later involved with the XP-80 project, the prototype of the P-80 Shooting Star.
Mary G. Ross, the first Native American female engineer, began working at Lockheed in 1942 on the mathematics of compressibility in high-speed flight—a problem first seriously encountered in the P-38. In 1952, she was invited to join the Skunk Works team.
1950s to 1990s
In 1955, the Skunk Works received a contract from the CIA to build a spyplane known as the U-2 with the intention of flying over the Soviet Union and photographing sites of strategic interest. The U-2 was tested at Groom Lake in the Nevada desert, and the flight test engineer in charge was Joseph F. Ware, Jr. The first overflight took place on July 4, 1956. The U-2 ceased overflights when Francis Gary Powers was shot down during a mission on May 1, 1960, while over the Soviet Union.
The Skunk Works had predicted that the U-2 would have a limited operational life over the Soviet Union. The CIA agreed. In late 1959, Skunk Works received a contract to build five A-12 aircraft at a cost of $96 million. Building a Mach 3.0+ aircraft out of titanium posed enormous difficulties, and the first flight did not occur until 1962. (Titanium supply was largely dominated by the Soviet Union, so the CIA used several shell corporations to acquire source material.) Several years later, the U.S. Air Force became interested in the design, and it ordered the SR-71 Blackbird, a two-seater version of the A-12. This aircraft first flew in 1966 and remained in service until 1998.
The D-21 drone, similar in design to the Blackbird, was built to overfly the Lop Nur nuclear test facility in China. This drone was launched from the back of a specially modified A-12, known as M-21, of which there were two built. After a fatal mid-air collision on the fourth launch, the drones were re-built as D-21Bs, and launched with a rocket booster from B-52s. Four operational missions were conducted over China, but the camera packages were never successfully recovered. Kelly Johnson headed the Skunk Works until 1975. He was succeeded by Ben Rich.
In 1976, the Skunk Works began production on a pair of stealth technology demonstrators for the U.S. Air Force named Have Blue in Building 82 at Burbank. These scaled-down demonstrators, built in only 18 months, were a revolutionary step forward in aviation technology because of their extremely small radar cross-section. After a series of successful test flights beginning in 1977, the Air Force awarded Skunk Works the contract to build the F-117 stealth fighter on November 1, 1978.
During the entirety of the Cold War, the Skunk Works was located in Burbank, California, on the eastern side of Burbank-Glendale-Pasadena Airport. After 1989, Lockheed reorganized its operations and relocated the Skunk Works to Site 10 at U.S. Air Force Plant 42 in Palmdale, California, where it remains in operation today. Most of the old Skunk Works buildings in Burbank were demolished in the late 1990s to make room for parking lots. One main building still remains at 2777 Ontario Street in Burbank (near San Fernando Road), now used as an office building for digital film post-production and sound mixing. In the late 1990s, while designing Pixar's building, Edwin Catmull and Steve Jobs visited a Skunk Works building, which influenced Jobs's design. In 2009, the Skunk Works was inducted into the International Air & Space Hall of Fame at the San Diego Air & Space Museum.
Projects
2015 projects
Next generation optionally-manned U-2 aircraft. During September 2015 the proposed aircraft was deemed to have developed into more of a tactical reconnaissance aircraft, instead of strategic reconnaissance.
Aircraft
Lockheed P-38 Lightning (unofficial)
Lockheed P-80 Shooting Star
Lockheed XF-90
Lockheed F-104 Starfighter
Lockheed U-2
Lockheed X-26 Frigate
Lockheed YO-3
Lockheed A-12
Lockheed SR-71 Blackbird
Lockheed D-21
Lockheed XST (Have Blue) and Lockheed F-117 Nighthawk
Lockheed YF-22 and Lockheed Martin F-22 Raptor
Lockheed Martin X-33
Lockheed Martin X-35 and Lockheed Martin F-35 Lightning II
Lockheed X-27
Lockheed Martin Polecat
Quiet Supersonic Transport
Lockheed Martin Cormorant
Lockheed Martin Desert Hawk
Lockheed Martin RQ-170 Sentinel
Lockheed Martin X-55
Lockheed Martin SR-72
Lockheed Martin X-59 QueSST
Other
High beta fusion reactor
Sea Shadow
Term origin
The term "Skunk Works" came from Al Capp's satirical, hillbilly comic strip Li’l Abner, which was immensely popular from 1935 through the 1950s. In the comic, the “Skonk Works" was a dilapidated factory located on the remote outskirts of Dogpatch, in the backwoods of Kentucky. According to the strip, scores of locals were done in yearly by the toxic fumes of the concentrated "skonk oil", which was brewed and barreled daily by "Big Barnsmell" (known as the lonely "inside man" at the Skonk Works), by grinding dead skunks and worn shoes into a smoldering still, for some mysterious, unspecified purpose.
In mid-1939 when Lockheed was expanding rapidly, the YP-38 project was moved a few blocks away to the newly purchased 3G Distillery, also known as Three G or GGG Distillery. Lockheed took over the building but the sour smell of bourbon mash lingered, partly because the group of buildings continued to store barrels of aging whiskey. The first YP-38 was built there before the team moved back to Lockheed's main factory a year later. In 1964, Johnson told Look magazine that the bourbon distillery was the first of five Lockheed skunk works locations.
During the development of the P-80 Shooting Star, Johnson's engineering team was located adjacent to a malodorous plastics factory. According to Ben Rich’s memoir, an engineer jokingly showed up to work one day wearing a Civil Defense gas mask. To comment on the smell and the secrecy the project entailed, another engineer, Irv Culver, referred to the facility as "Skonk Works". As the development was very secret, the employees were told to be careful even with how they answered phone calls. One day, when the Department of the Navy was trying to reach the Lockheed management for the P-80 project, the call was accidentally transferred to Culver’s desk. Culver answered the phone in his trademark fashion of the time, by picking up the phone and stating "Skonk Works, inside man Culver". "What?" replied the voice at the other end. "Skonk Works", Culver repeated. The name stuck. Culver later said at an interview conducted in 1993 that "when Kelly Johnson heard about the incident, he promptly fired me. It didn’t really matter, since he was firing me about twice a day anyways."
At the request of the comic strip copyright holders, Lockheed changed the name of the advanced development company to "Skunk Works" in the 1960s. The name "Skunk Works" and the skunk design are now registered trademarks of the Lockheed Martin Corporation. The company also holds several registrations of it with the United States Patent and Trademark Office. They have filed several challenges against registrants of domain names containing variations on the term under anti-cybersquatting policies, and have lost a case under the .uk domain name dispute resolution service against a company selling cannabis seeds and paraphernalia, which used the word "skunkworks" in its domain name (referring to "Skunk", the pungent smell of the cannabis flower). Lockheed Martin claimed the company registered the domain in order to disrupt its business and that consumer confusion might result. The respondent company argued that Lockheed "used its size, resources and financial position to employ 'bullyboy' tactics against... a very small company."
In Australia, the trademark for use of the name "Skunkworks" is held by Perth-based television accessory manufacturer The Novita Group Pty Ltd. Lockheed Martin formally registered opposition to the application in 2006; however, the Australian government's intellectual property authority, IP Australia, rejected the opposition, awarding Novita the trademark in 2008.
See also
Advanced Propulsion Physics Laboratory
Area 51
Boeing Phantom Works
Swamp Works
References
Bibliography
External links
Aerospace engineering organizations
Lockheed Martin-associated military facilities
Military industrial facilities
Research organizations in the United States
Research and development in the United States
American military aviation
History of aviation
United States government secrecy
Buildings and structures in Burbank, California
Buildings and structures in Palmdale, California
Military facilities in the Mojave Desert
Military in Greater Los Angeles
National Medal of Technology recipients
Organizations established in 1943
Think tanks established in 1943
1943 establishments in California
1989 establishments in California
Science and technology in Greater Los Angeles | Skunk Works | [
"Engineering"
] | 2,711 | [
"Aeronautics organizations",
"Aerospace engineering organizations",
"Aerospace engineering"
] |
55,265 | https://en.wikipedia.org/wiki/Road%20train | A road train, also known as a land train or long combination vehicle (LCV) is a semi-trailer used to move road freight more efficiently than single-trailer semi-trailers. It consists of one semi-trailer or more connected together with or without a prime-mover. It typically has to be at least three trailers and one prime-mover. Road trains are often used in areas where other forms of heavy transport (freight train, cargo aircraft, container ship) are not feasible or practical.
History
Early road trains consisted of traction engines pulling multiple wagons. The first identified road trains operated into South Australia's Flinders Ranges from the Port Augusta area in the mid-19th century. They displaced bullock teams for the carriage of minerals to port and were, in turn, superseded by railways.
During the Crimean War, a traction engine was used to pull multiple open trucks. By 1898 steam traction engine trains with up to four wagons were employed in military manoeuvres in England.
In 1900, John Fowler & Co. provided armoured road trains for use by the British Armed Forces in the Second Boer War. Lord Kitchener stated that he had around 45 steam road trains at his disposal.
A road train devised by Captain Charles Renard of the French Engineering Corps was displayed at the 1903 Paris Salon. After his death, Daimler, which had acquired the rights, attempted to market it in the United Kingdom. Four of these vehicles were successfully delivered to Queensland, Australia, before the company ceased production upon the start of World War I.
In the 1930s/40s, the government of Australia operated an AEC Roadtrain to transport freight and supplies into the Northern Territory, replacing the Afghan camel trains that had been trekking through the deserts since the late 19th century. This truck pulled two or three Dyson four-axle self-tracking trailers. The AEC was grossly underpowered by today's standards, and drivers and offsiders (a partner or assistant) routinely froze in winter and sweltered in summer due to the truck's open cab design and the position of the engine radiator, with its cooling fan, behind the seats.
Australian Kurt Johannsen, a bush mechanic, is recognised as the inventor of the modern road train. After transporting stud bulls to an outback property, Johannsen was challenged to build a truck to carry 100 head of cattle instead of the original load of 20. Provided with financing of about 2000 pounds and inspired by the tracking abilities of the Government roadtrain, Johannsen began construction. Two years later his first road train was running.
Johannsen's first road train consisted of a United States Army World War II surplus Diamond-T tank carrier, nicknamed "Bertha", and two home-built self-tracking trailers. Both wheel sets on each trailer could steer, and therefore could negotiate the tight and narrow tracks and creek crossings that existed throughout Central Australia in the earlier part of the 20th century. Freighter Trailers in Australia viewed this improved invention and went on to build self-tracking trailers for Kurt and other customers, and went on to become innovators in transport machinery for Australia.
This first example of the modern road train, along with the AEC Government Roadtrain, forms part of the huge collection at the National Road Transport Hall of Fame in Alice Springs, Northern Territory.
In 2023, Janus launched the first BEV triple road train with a 620 kWh battery, also the world's heaviest street-legal BEV truck at 170 tonnes gross weight.
Usage
Australia
The term road train is used in Australia and typically means a prime mover hauling two or more trailers, other than a B-double. In contrast with a more common semi-trailer towing one trailer or semi-trailer, the diesel prime mover of a road train hauls two or more trailers or semi-trailers. Australia has the longest and heaviest road-legal road trains in the world.
Double (two-trailer) road train combinations are allowed on some roads in most states of Australia, including specified approaches to the ports and industrial areas of Adelaide, South Australia and Perth, Western Australia. An A-double road train should not be confused with a B-double, which is allowed access to most of the country and in all major cities.
In South Australia, B-triples and two-trailer road trains were only permitted to travel on a small number of approved routes in the north and west of the state, including access to Adelaide's north-western suburban industrial and export areas such as Port Adelaide, Gillman and Outer Harbour via Salisbury Highway, Port Wakefield Road and Augusta Highway, before 2017. A project named Improving Road Transport for the Agriculture Industry added key routes permitted to operate these longer combinations in 2015–2018.
Triple (three-trailer) road trains operate in western New South Wales, western Queensland, South Australia, Western Australia and the Northern Territory, with the last three states also allowing AB-quads (a B-double with two additional trailers coupled behind). Darwin is the only capital city in the world where triples and quads are allowed close to the central business district (CBD).
Strict regulations regarding licensing, registration, weights, and experience apply to all operators of road trains throughout Australia.
Road trains are used for transporting all manner of materials: common examples are livestock, fuel, mineral ores, and general freight. Their cost-effective transport has played a significant part in the economic development of remote areas; some communities are totally reliant on regular service.
When road trains get close to populated areas, the multiple dog-trailers are unhooked and the dollies removed; the trailers are then connected individually to multiple trucks at "assembly" yards.
When the flat-top trailers of a road train need to be transported empty, it is common practice to stack them. This is commonly referred to as "doubled-up" or "doubling-up". Sometimes, if many trailers are required to be moved at one time, they will be triple-stacked, or "tripled-up".
Higher Mass Limits (HML) schemes now exist in all jurisdictions in Australia, allowing trucks to carry additional weight beyond general mass limits. Some roads in some states regularly allow up to four trailers. On private property such as mines, highway restrictions on trailer length, weight and count may not apply. Some of the heaviest road trains carrying ore are multiple-unit combinations with a diesel engine in each trailer, controlled from the tractor.
Diesel sales in Australia are around 32 billion litres per year, some of which is used by road trains. To reduce emissions and running costs, trials are being made with battery-powered road trains.
United States
In the United States, trucks on public roads are generally limited to two trailers (two trailers connected by a dolly, subject to an overall end-to-end length limit). Some states allow three trailers, although triples are usually restricted to less populous states such as Idaho, Oregon, and Montana, plus the Ohio Turnpike and Indiana East–West Toll Road. Triples are used for long-distance less-than-truckload freight hauling (in which case the trailers are shorter than a typical single-unit trailer) or resource hauling in the interior west (such as ore or aggregate). Triples are sometimes marked with "LONG LOAD" banners both front and rear. "Turnpike doubles"—tractors towing two full-length trailers—are allowed on the New York Thruway and Massachusetts Turnpike (Interstate 90), Florida's Turnpike, Kansas Turnpike (Kansas City – Wichita route) as well as the Ohio and Indiana toll roads. Colorado allows what are known as "Rocky Mountain Doubles", which combine one full-length trailer with an additional shorter trailer. The term "road train" is not commonly used in the United States; "turnpike train" has been used, generally in a pejorative sense.
In the western United States LCVs are allowed on many Interstate highways. The only LCVs allowed nationwide are STAA doubles.
On private property like farms, highway restrictions on trailer length and count do not apply. Bales of straw, for example, are sometimes moved in wagon trains of up to 20 trailers an eighth of a mile long (carrying a total of 3,600 bales).
Europe
In Finland, Sweden, Germany, the Netherlands, Denmark, Belgium, and some roads in Norway, trucks with trailers are allowed to be up to 25.25 m long. In Finland, a length of 34.5 m has been allowed since January 2019. In Sweden, this length has been allowed on several major roads, including all of E4, since August 2023. A length of 34.5 metres allows two 40-foot containers to be carried.
Elsewhere in the European Union, the limit is 18.75 m (in Norway, 19.5 m). The trucks are of a cab-over-engine design, with a flat front and a high floor well above the ground. The Scandinavian countries are less densely populated than the other EU countries, and distances, especially in Finland and Sweden, are long. Until the late 1960s, vehicle length was unlimited, giving rise to long vehicles that could cost-effectively handle goods. As traffic increased, truck lengths became more of a concern, and they were limited, albeit at a more generous level than in the rest of Europe.
In the United Kingdom in 2009, a two-year desk study of Longer Heavier Vehicles (LHVs), including combinations of up to 11 axles, ruled out all road-train-type vehicles for the foreseeable future.
In 2010, Sweden was performing tests on extra-long, extra-heavy log-hauling trucks and on haulers for two 40 ft containers. In 2015, a pilot began in Finland to test a 104-tonne, 13-axle timber lorry. Testing of the special lorry was limited to a predefined route in northern Finland.
Since 2015, Spain has permitted B-doubles with a length of up to 25.25 m and weighing up to 60 tonnes to travel on certain routes. In July 2024, after 5 years of testing, HCTs have been permitted on Spanish territory, with lengths of up to 32 metres (105 ft) and gross weights of up to 70 tonnes.
Since 2016, Eoin Gavin Transport, Shannon and Dennison Trailers, Kildare have been trialling B-doubles on the Irish motorways. In February 2024, The Pallet Network announced four B-doubles to operate between Dublin, Cork and Galway.
In 2020, a small number of road trains were operating between Belgium and the Netherlands.
Mexico
In Mexico, road trains exist in a limited capacity due to the size of roads in its larger cities, and they are only allowed to pull two trailers joined by a dolly created for this purpose. Regulations have recently become stricter to avoid overloading and accidents and to adhere to federal transportation rules. Truck drivers must obtain a certificate confirming that they are capable of handling and driving this type of vehicle.
All the tractor vehicles that perform road-train-type transport in the country (along with the normal security requirements) need to have visual warnings such as:
a "Warning Double Semi-Trailer" alert located on the front fenders of the tractor and on the rear of each trailer,
yellow turn and warning lights to be more visible to other drivers,
a seal for the entire vehicle approving the use as double semi trailer,
federal license plates in every trailer, dolly, and tractor unit.
Some major cargo enterprises in the country use this form of transport to cut the cost of carrying all types of goods in regions where, due to the country's difficult geography, other forms of transportation are too expensive.
The Mexican road train is equivalent to the Australian A-double configuration; the difference is that Mexican road trains can be hauled by a long-distance tractor unit.
Zimbabwe
In Zimbabwe, road trains are only used on one highway, the Ngezi–Makwiro road, which carries 42 m long road trains pulling three trailers.
Trailer arrangements
A-double
An A-double consists of a prime mover towing a normal lead trailer with a towing hitch such as a Ringfeder coupling affixed to it at the rear. A fifth-wheel dolly is then affixed to the hitch, allowing another standard trailer to be attached. Eleven-axle coal tipping sets hauling coal to Port Kembla, Australia are described as A-doubles; a shield at the front of the second trailer directs coal tipped from the first trailer downwards.
Pros include the ability to use standard semi-trailers and the potential for very large loads. Cons mainly include very tricky reversing due to the multiple articulation points across two different types of coupling.
B-double
A B-double consists of a prime mover towing a specialised lead trailer that has a fifth wheel mounted on the rear towing another semi-trailer, resulting in two articulation points. It may also be known as a B-train, an interlink in South Africa, a B-double in Australia, or a tandem tractor-trailer, tandem rig or double in North America. They may typically be up to 26 m long. The fifth wheel coupling is located at the rear of the lead (first) trailer and is mounted on a "tail" section commonly located immediately above the lead trailer axles. In North America this area of the lead trailer is often referred to as the "bridge". The twin-trailer assembly is hooked up to a tractor unit via the tractor unit's fifth wheel in the customary manner.
An advantage of the B-train configuration is its inherent stability when compared to most other twin trailer combinations, the turntable mounted on the forward trailer results in the B-train not requiring a converter dolly as with all other road train configurations. It is this feature above all else that has ensured its continued development and global acceptance. Reversing is simpler as all articulation points are on fifth wheel couplings.
B-train trailers are used to transport many types of load and examples include tanks for liquid and dry-bulk, flat-beds and curtain-siders for deck-loads, bulkers for aggregates and wood residuals, refrigerated trailers for chilled and frozen goods, vans for dry goods, logging trailers for forestry work and cattle liners for livestock.
In Australia, standard semi-trailers are permitted on almost any road. B-doubles are more heavily regulated, but routes are made available by state governments for almost anywhere that significant road freight movement is required.
Around container ports in Australia exists what is known as a super B-double: a B-double with an extra axle (a total of four) on the lead trailer and either a three- or four-axle set on the rear trailer. This allows the super B-double to carry two 40-foot containers, four 20-foot containers, or a combination of one 40-foot container and two 20-foot containers. However, because of their length and low accessibility into narrow streets, these vehicles are restricted in where they can go and are generally used for terminal-to-terminal work, i.e. wharf to container holding park or wharf-to-wharf. The rear axle on each trailer can also pivot slightly while turning to prevent scrubbing out the edges of the tyres due to the heavy loads placed on them.
B-triple
Same as a B-double, but with an additional lead trailer behind the prime mover. The B-train principle has been exploited in Australia, where configurations such as B-triples, double-B-doubles and 2AB quads are permitted on some routes. These are run in most states of Australia where double road trains are allowed. Australia's National Transport Commission proposed a national framework for B-triple operations that includes basic vehicle specifications and operating conditions that the commission anticipates will replace the current state-by-state approach, which largely discourages the use of B-triples for interstate operation. In South Australia, B-triples up to and two-trailer road trains to are generally only permitted on specified routes, including access to industrial and export areas near Port Adelaide from the north.
B quad
In 2018, B-quads were also permitted in the states of Victoria, New South Wales and Queensland, enabling more economical transport.
AB triple
An AB triple consists of a standard trailer with a B-Double behind it using a converter dolly, with a trailer order of Standard, Dolly, B-Train, Standard. The final trailer may be either a B-Train with no trailer attached to it or a standard trailer. Alternatively, a BA triple sees this configuration reversed, consisting of a B-double with a converter dolly and standard trailer behind it.
A-triple
In South Australia, larger road trains up to (three full trailers) are only permitted on certain routes in the Far North.
BAB quad
A BAB quad consists of two B-double units linked with a converter dolly, with trailer order of Prime Mover, B-Train, Dolly, B-Train.
ABB quad
An ABB quad consists of one standard trailer and a B-triple linked with a converter dolly.
AAB quad
An AAB quad consists of A-double and B-double units linked with a converter dolly. Alternatively, a BAA quad sees this configuration reversed: first the B-double, then the A-double.
A quad
In some parts of Australia, 'super quad' road trains up to are permitted, consisting of four standard trailers connected via three converter dollies.
C-train
A C-train is a semi-trailer attached to a turntable on a C-dolly. Unlike in an A-train, the C-dolly is connected to the tractor or another trailer in front of it with two drawbars, thus eliminating the drawbar connection as an articulation point. One of the axles on a C-dolly is self-steerable to prevent tyre scrubbing. C-dollies are not permitted in Australia, due to the lack of articulation.
Dog-trailer (dog trailer)
A dog-trailer (also called a pup) is a short trailer with a permanent dolly, with a single A-frame drawbar that fits into the Ringfeder or pintle hook on the rear of the truck or trailer in front, giving the whole unit two or more articulation points and very little roll stiffness. These are commonly used in Australia, particularly for end-tipper applications. They are normally limited to a single dog trailer behind a short-bodied (independently load carrying) truck, with a standard length limit of 19 metres (20 under design permits). A quad dog trailer in combination with a bodied truck is able to carry more weight than a truck and single semi-trailer of the same length limit and access restrictions, as well as carrying two different materials as separate loads, such as with tipper bodies and fluid tankers.
Interstate road transport registration in Australia
In 1991, at a special Premiers' Conference, Australian heads of government signed an inter-governmental agreement to establish a national heavy vehicle registration, regulation and charging scheme: the Federal Interstate Registration Scheme (FIRS). Its requirements are as follows:
Due to the "eastern" and "western" mass limits in Australia, two different categories of registration were enacted. The second digit of the registration plate showed what mass limit was allowed for that vehicle. If a vehicle had a 'V' as the second letter, its mass limits were in line with the eastern states mass limits, which were:
Steer axle, 1 axle, 2 tyres:
Steer axle, 2 axles, 2 tyres per axle:
Non-load-sharing suspension:
Load-sharing suspension:
Single axle, dual tyres:
Tandem axle, dual tyres:
Tri-axle, dual tyres or 'super single' tyres:
Gross combination mass on a 6-axle vehicle not to exceed
If a vehicle had an 'X' as the second character, its mass limits were in line with the western states' mass limits, which were:
Steer axle, 1 axle, 2 tyres:
Steer axle, 2 axles, 2 tyres per axle:
Non-load-sharing suspension:
Load-sharing suspension:
Single axle, dual tyres:
Tandem axle, dual tyres:
Tri-axle, dual tyres or "super single" tyres:
Gross combination mass on a 6-axle vehicle not to exceed
A 'T' as the second character of the registration designates a trailer.
One of the main conditions of the registration is that intrastate operation is not permitted. The load has to come from one state or territory and be delivered to another. Many grain carriers were reported and prosecuted for cartage from the paddock to the silos. However, if the load went to a port silo, they were given the benefit of the doubt, as that grain was more than likely going overseas.
Signage
Australian road trains have horizontal signs front and back with high black uppercase letters on a reflective yellow background reading "ROAD TRAIN". The sign(s) must have a black border and be at least long and high and be placed between and above the ground on the fore or rearmost surface of the unit.
In the case of B-triples in Western Australia, they are signed front and rear with "ROAD TRAIN" until they cross the WA/SA border where they are then signed with "LONG VEHICLE" in the front and rear.
Converter dollies must have a sign affixed horizontally to the rearmost point, complying with the same conditions, reading "LONG VEHICLE". This is required for when a dolly is towed behind a trailer.
Combination lengths
B-double max. Western Australia, max.
B-triple up to max.
NTC modular B-triple max. (uses 2× conventional B-double lead trailers)
Pocket road train max. (Western Australia only) This configuration is classed as a "Long Vehicle".
Double road train or AB road train max.
Triple and ABB or BAB-quad road trains max.
Operating weights
Operational weights are based on axle group masses, as follows:
Single axle (steer tyre)
Single axle (steer axle with 'super single' tyres)
Single axle (dual tyres)
Tandem axle grouping
Tri-axle grouping
Therefore,
A B-double (single axle steering, tandem drive, and two tri-axle groups) would have an operational weight of .
A double road train (single axle steering, tandem drive, tri-axle, tandem, tri-axle) would have an operational weight of .
A triple is .
Quads weigh in at .
Concessional weight limits, which increase allowable weight to accredited operators can see (for example) a quad weighing up to .
If a tri-drive prime mover is utilised, along with tri-axle dollies, weights can reach nearly .
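Because the numeric limits above have been elided, the following Python sketch uses purely illustrative axle-group masses (hypothetical figures, not the regulated Australian values) just to show how a combination's operational weight is obtained by summing its axle-group limits:

# Hypothetical axle-group mass limits in tonnes (for illustration only).
AXLE_GROUP_LIMIT = {
    "steer": 6.0,
    "tandem": 16.5,
    "tri": 20.0,
}

def operational_weight(groups):
    """Sum the mass limits of every axle group in the combination."""
    return sum(AXLE_GROUP_LIMIT[g] for g in groups)

# A B-double: single steer axle, tandem drive, and two tri-axle groups.
b_double = ["steer", "tandem", "tri", "tri"]
print(operational_weight(b_double))   # 62.5 with these hypothetical figures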
Speed limits
The Australian national heavy vehicle speed limit is , except in New South Wales and Queensland, where the speed limit for any road train is . B-triple road trains have a speed limit of 100 km/h (62 mph) in Queensland.
In Canada, there is no difference between the speed limits for cars and road trains; limits range from on two-lane roads to on three-lane roads.
In Europe, the speed limit for heavy goods trucks is usually . A law requiring speed limiters makes it impossible to drive heavy trucks faster than . These limits normally apply to road trains as well. Trucks are not encouraged to overtake slightly slower trucks on motorways because doing so obstructs the left lane, although it is common anyway, e.g. when heavy road trains lose speed uphill.
World's longest road trains
Below is a list of the longest road trains driven in the world. Most of these had no practical use, as they were put together and driven across relatively short distances for the express purpose of record-breaking.
In 1989, a trucker named "Buddo" tugged 12 trailers down the main street of Winton.
In 1993, "Plugger" Bowden took the record with a Mack CLR pulling 16 trailers.
A few months later this effort was surpassed by Darwin driver Malcolm Chisholm with a , 21-trailer rig extending .
In April 1994, Bob Hayward and Andrew Aichison organised another attempt using a 1988 Mack Super-Liner 500 hp V8 belonging to Plugger Bowden, who drove 29 stock trailers measuring 439.169 metres a distance of 4.5 km into Bourke. The record was published in the next Guinness Book of Records.
Then the record went back to Winton with 34 trailers.
On 3 April 1999, the town of Merredin officially made it into the Guinness Book of Records, when Marleys Transport made a successful attempt on the record for the world's longest road train. The record was created when 45 trailers, weighing and measuring , driven by Greg Marley, were pulled by a Kenworth 10×6 K100G for .
On 19 October 2000, Doug Gould set the first of his records in Kalgoorlie, when a road train made up of 79 trailers, measuring and weighing , was pulled by a Kenworth C501T driven by Steven Matthews a distance of .
On 29 March 2003, the record was surpassed near Mungindi, by a road train consisting of 87 trailers and a single prime mover (measuring in length).
The record returned to Kalgoorlie, on 17 October 2004, when Doug Gould assembled 117 trailers for a total length of . The attempt nearly failed, as the first prime mover's main driveshaft broke when taking off. A second truck was quickly made available, and pulled the train a distance of .
In 2004, the record was again broken by a group from Clifton, Queensland which used a standard Mack truck to pull 120 trailers a distance of about .
On 18 February 2006, an Australian-built Mack truck with 113 semi-trailers, and long, pulled the load to recapture the record for the longest road train (multiple loaded trailers) ever pulled with a single prime mover. It was on the main road of Clifton, Queensland, that 70-year-old John Atkinson claimed a new record, pulled by a tri-drive Mack Titan.
Outside Australia
On 12 April 2016 in Gothenburg, Sweden, a Volvo FH16 750 pulled 20 trailers with double-stacked containers, with a total length of 300 metres (984 ft) and a total weight of 750 tonnes.
See also
Air brake (road vehicle)
Articulated bus
Brake
B-train
Containerization
Container on barge
Container ship
Dolly (trailer)
Federal Bridge Weight Formula
Fifth wheel coupling
Gladhand connector
Intermodal freight transport
Jackknifing
Longer Heavier Vehicle
National Network – highway and interstate system
Overland train
Ringfeder coupling devices
Road transport in Australia
Rolling highway – freight trucks by rail
Semi-trailer truck – large trucks such as road trains and articulated lorries
Shipping container
Top intermodal container companies list
Trackless train
Transport
References
External links
Australian Road Train Association
Australian National Heavy Vehicles Accreditation Scheme.
Northern Territory Road Train road safety TV commercials.
South Australian Roads road train gazette
NSW Roads and Traffic Authority road train operators gazette
NSW Roads and Traffic Authority Restricted Access Vehicles route map index
NSW Roads and Traffic Authority Reflective sign standards
U.S. Department of Transportation, Federal Highway Administration, Chapter VII, Safety.
The U.S. Department of Transportation's Western Uniformity Scenario Analysis.
British Columbia Government Licensing Bulletin 6
British Columbia Government Licensing Bulletin 41
Roadmap of technologies able to halve energy use per passenger mile includes the dynamically coupled, heterogeneous type of roadtrain
Road trains and electrification of transport
Combination Vehicles for Commercial Drivers License
Trucks
Trains
Train
Articulated vehicles
Trailers | Road train | [
"Technology"
] | 5,595 | [
"Trains",
"Transport systems",
"Road hazards"
] |
55,275 | https://en.wikipedia.org/wiki/Denotational%20semantics | In computer science, denotational semantics (initially known as mathematical semantics or Scott–Strachey semantics) is an approach of formalizing the meanings of programming languages by constructing mathematical objects (called denotations) that describe the meanings of expressions from the languages. Other approaches providing formal semantics of programming languages include axiomatic semantics and operational semantics.
Broadly speaking, denotational semantics is concerned with finding mathematical objects called domains that represent what programs do. For example, programs (or program phrases) might be represented by partial functions or by games between the environment and the system.
An important tenet of denotational semantics is that semantics should be compositional: the denotation of a program phrase should be built out of the denotations of its subphrases.
Historical development
Denotational semantics originated in the work of Christopher Strachey and Dana Scott published in the early 1970s. As originally developed by Strachey and Scott, denotational semantics provided the meaning of a computer program as a function that mapped input into output. To give meanings to recursively defined programs, Scott proposed working with continuous functions between domains, specifically complete partial orders. As described below, work has continued in investigating appropriate denotational semantics for aspects of programming languages such as sequentiality, concurrency, non-determinism and local state.
Denotational semantics has been developed for modern programming languages that use capabilities like concurrency and exceptions, e.g., Concurrent ML, CSP, and Haskell. The semantics of these languages is compositional in that the meaning of a phrase depends on the meanings of its subphrases. For example, the meaning of the applicative expression f(E1,E2) is defined in terms of semantics of its subphrases f, E1 and E2. In a modern programming language, E1 and E2 can be evaluated concurrently and the execution of one of them might affect the other by interacting through shared objects causing their meanings to be defined in terms of each other. Also, E1 or E2 might throw an exception which could terminate the execution of the other one. The sections below describe special cases of the semantics of these modern programming languages.
Meanings of recursive programs
The denotational semantics of a program phrase is a function from an environment (holding the current values of its free variables) to its denotation. For example, a phrase consisting of the product of two free variables produces a denotation when provided with an environment that has a binding for both variables: if one variable has the value 3 in the environment and the other has the value 5, then the denotation is 15.
A function can be represented as a set of ordered pairs of argument and corresponding result values. For example, the set {(0,1), (4,3)} denotes a function with result 1 for argument 0, result 3 for the argument 4, and undefined otherwise.
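As a small illustration (a Python sketch, not part of the original article), such a partial function can be represented as a dictionary of argument–result pairs:

# The partial function {(0,1), (4,3)} as a Python dict.
f = {0: 1, 4: 3}

def apply_partial(f, x):
    """Return f(x) where defined, and None (standing for 'undefined') elsewhere."""
    return f.get(x)   # dict.get yields None when the key is absent

assert apply_partial(f, 0) == 1
assert apply_partial(f, 4) == 3
assert apply_partial(f, 2) is None   # undefined for any other argument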
Consider for example the factorial function, which might be defined recursively as:
int factorial(int n) { if (n == 0) return 1; else return n * factorial(n - 1); }
To provide a meaning for this recursive definition, the denotation is built up as the limit of approximations, where each approximation limits the number of calls to factorial. At the beginning, we start with no calls, hence nothing is defined. In the next approximation, we can add the ordered pair (0,1), because this doesn't require calling factorial again. Similarly we can add (1,1), (2,2), etc., adding one pair in each successive approximation because computing factorial(n) requires n+1 calls. In the limit we get a total function from the natural numbers to the natural numbers, defined everywhere in its domain.
Formally, we model each approximation as a partial function on the natural numbers. The approximations are built by repeatedly applying a function F implementing "make a more defined partial factorial function", starting with the empty function (empty set). F could be defined in code as follows (using Map<int,int> to represent the partial functions):
int factorial_nonrecursive(Map<int,int> factorial_less_defined, int n)
{
    if (n == 0) return 1;                      // the base case needs no recursive call
    else if ((fprev = lookup(factorial_less_defined, n - 1)) != NOT_DEFINED)
        return n * fprev;                      // defined via the previous approximation
    else
        return NOT_DEFINED;                    // not yet defined at this argument
}

Map<int,int> F(Map<int,int> factorial_less_defined)
{
    Map<int,int> new_factorial = Map.empty();
    for (int n in all<int>()) {
        int f = factorial_nonrecursive(factorial_less_defined, n);
        if (f != NOT_DEFINED)                  // keep only the defined pairs
            new_factorial.put(n, f);
    }
    return new_factorial;
}
Then we can introduce the notation Fn to indicate F applied n times.
F0({}) is the totally undefined partial function, represented as the set {};
F1({}) is the partial function represented as the set {(0,1)}: it is defined at 0, to be 1, and undefined elsewhere;
F5({}) is the partial function represented as the set {(0,1), (1,1), (2,2), (3,6), (4,24)}: it is defined for arguments 0,1,2,3,4.
This iterative process builds a sequence of partial functions from the natural numbers to the natural numbers. Partial functions form a chain-complete partial order using ⊆ as the ordering. Furthermore, this iterative process of better approximations of the factorial function forms an expansive (also called progressive) mapping, because each approximation is contained in the next (each f satisfies f ⊆ F(f)), using ⊆ as the ordering. So by a fixed-point theorem (specifically the Bourbaki–Witt theorem), there exists a fixed point for this iterative process.
In this case, the fixed point is the least upper bound of this chain, which is the full factorial function, expressible as the union of all the approximations Fn({}).
The fixed point we found is the least fixed point of F, because our iteration started with the smallest element in the domain (the empty set). To prove this we need a more complex fixed point theorem such as the Knaster–Tarski theorem.
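To make the iteration above concrete, the following Python sketch (an illustration only, bounding "all integers" to a finite range) computes F5({}) by applying one step of F five times:

def F(f_prev):
    """One step of 'make a more defined partial factorial', on dicts."""
    f_next = {}
    for n in range(100):             # a bounded stand-in for all natural numbers
        if n == 0:
            f_next[n] = 1            # the pair (0,1) never needs a recursive call
        elif (n - 1) in f_prev:
            f_next[n] = n * f_prev[n - 1]   # defined via the previous approximation
    return f_next

approx = {}                          # F0({}): the totally undefined function
for _ in range(5):
    approx = F(approx)               # approx is now F5({})

assert approx == {0: 1, 1: 1, 2: 2, 3: 6, 4: 24}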
Denotational semantics of non-deterministic programs
The concept of power domains has been developed to give a denotational semantics to non-deterministic sequential programs. Writing P for a power-domain constructor, the domain P(D) is the domain of non-deterministic computations of type denoted by D.
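As a loose illustration of the idea (a Python sketch in which plain sets of outcomes stand in for a power domain; this is a simplification, not the domain-theoretic construction), a non-deterministic choice can be denoted by the union of the outcome sets of its branches:

# Denotations map an environment to the set of possible results.
def den_value(k):
    return lambda env: {k}

def den_choice(d1, d2):
    """Denotation of a non-deterministic choice between two programs."""
    return lambda env: d1(env) | d2(env)   # union of the possible outcomes

either = den_choice(den_value(7), den_value(4))
assert either({}) == {7, 4}   # the computation may yield either result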
There are difficulties with fairness and unboundedness in domain-theoretic models of non-determinism.
Denotational semantics of concurrency
Many researchers have argued that the domain-theoretic models given above do not suffice for the more general case of concurrent computation. For this reason various new models have been introduced. In the early 1980s, people began using the style of denotational semantics to give semantics for concurrent languages. Examples include Will Clinger's work with the actor model; Glynn Winskel's work with event structures and Petri nets; and the work by Francez, Hoare, Lehmann, and de Roever (1979) on trace semantics for CSP. All these lines of inquiry remain under investigation (see e.g. the various denotational models for CSP).
Recently, Winskel and others have proposed the category of profunctors as a domain theory for concurrency.
Denotational semantics of state
State (such as a heap) and simple imperative features can be straightforwardly modeled in the denotational semantics described above. The key idea is to consider a command as a partial function on some domain of states. The meaning of an assignment command is then the function that takes a state to the updated state in which the assigned variable holds the new value. The sequencing operator ";" is denoted by composition of functions. Fixed-point constructions are then used to give a semantics to looping constructs such as while-loops.
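For illustration (a minimal Python sketch under simplified assumptions, with states modelled as dictionaries from variable names to values; not the article's own formalism):

# A command denotes a function on states; assignment updates one variable.
def assign(var, expr):
    """Denotation of an assignment: maps a state to the updated state."""
    return lambda state: {**state, var: expr(state)}

def seq(c1, c2):
    """Denotation of sequencing: composition of the two state transformers."""
    return lambda state: c2(c1(state))

# Example program: x := 1 ; y := x + 2
prog = seq(assign('x', lambda s: 1),
           assign('y', lambda s: s['x'] + 2))
assert prog({}) == {'x': 1, 'y': 3}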
Things become more difficult in modelling programs with local variables. One approach is to no longer work with domains, but instead to interpret types as functors from some category of worlds to a category of domains. Programs are then denoted by natural continuous functions between these functors.
Denotations of data types
Many programming languages allow users to define recursive data types. For example, the type of lists of numbers can be specified by
datatype list = Cons of nat * list | Empty
This section deals only with functional data structures that cannot change. Conventional imperative programming languages would typically allow the elements of such a recursive list to be changed.
For another example: the type of denotations of the untyped lambda calculus is
datatype D = D of (D → D)
The problem of solving domain equations is concerned with finding domains that model these kinds of datatypes. One approach, roughly speaking, is to consider the collection of all domains as a domain itself, and then solve the recursive definition there.
Polymorphic data types are data types that are defined with a parameter. For example, the type of α lists is defined by
datatype α list = Cons of α * α list | Empty
Lists of natural numbers, then, are of type nat list, while lists of strings are of type string list.
Some researchers have developed domain theoretic models of polymorphism. Other researchers have also modeled parametric polymorphism within constructive set theories.
A recent research area has involved denotational semantics for object and class based programming languages.
Denotational semantics for programs of restricted complexity
Following the development of programming languages based on linear logic, denotational semantics have been given to languages for linear usage (see e.g. proof nets, coherence spaces) and also polynomial time complexity.
Denotational semantics of sequentiality
The problem of full abstraction for the sequential programming language PCF was, for a long time, a big open question in denotational semantics. The difficulty with PCF is that it is a very sequential language. For example, there is no way to define the parallel-or function in PCF. It is for this reason that the approach using domains, as introduced above, yields a denotational semantics that is not fully abstract.
This open question was mostly resolved in the 1990s with the development of game semantics and also with techniques involving logical relations. For more details, see the page on PCF.
Denotational semantics as source-to-source translation
It is often useful to translate one programming language into another. For example, a concurrent programming language might be translated into a process calculus; a high-level programming language might be translated into byte-code. (Indeed, conventional denotational semantics can be seen as the interpretation of programming languages into the internal language of the category of domains.)
In this context, notions from denotational semantics, such as full abstraction, help to satisfy security concerns.
Abstraction
It is often considered important to connect denotational semantics with operational semantics. This is especially important when the denotational semantics is rather mathematical and abstract, and the operational semantics is more concrete or closer to the computational intuitions. The following properties of a denotational semantics are often of interest.
Syntax independence: The denotations of programs should not involve the syntax of the source language.
Adequacy (or soundness): All observably distinct programs have distinct denotations.
Full abstraction: All observationally equivalent programs have equal denotations.
For semantics in the traditional style, adequacy and full abstraction may be understood roughly as the requirement that "operational equivalence coincides with denotational equality". For denotational semantics in more intensional models, such as the actor model and process calculi, there are different notions of equivalence within each model, and so the concepts of adequacy and of full abstraction are a matter of debate, and harder to pin down. Also the mathematical structure of operational semantics and denotational semantics can become very close.
Additional desirable properties we may wish to hold between operational and denotational semantics are:
Constructivism: Constructivism is concerned with whether domain elements can be shown to exist by constructive methods.
Independence of denotational and operational semantics: The denotational semantics should be formalized using mathematical structures that are independent of the operational semantics of a programming language. However, the underlying concepts can be closely related. See the section on Compositionality below.
Full completeness or definability: Every morphism of the semantic model should be the denotation of a program.
Compositionality
An important aspect of denotational semantics of programming languages is compositionality, by which the denotation of a program is constructed from denotations of its parts. For example, consider the expression "7 + 4". Compositionality in this case is to provide a meaning for "7 + 4" in terms of the meanings of "7", "4" and "+".
A basic denotational semantics in domain theory is compositional because it is given as follows. We start by considering program fragments, i.e. programs with free variables. A typing context assigns a type to each free variable. For instance, the expression (x + y) might be considered in a typing context (x:nat, y:nat). We now give a denotational semantics to program fragments, using the following scheme.
We begin by describing the meaning of the types of our language: the meaning of each type must be a domain. We write 〚τ〛 for the domain denoting the type τ. For instance, the meaning of the type nat should be the domain of natural numbers: 〚nat〛 = ℕ⊥, the naturals with a bottom element added.
From the meaning of types we derive a meaning for typing contexts. We set 〚x1:τ1, ..., xn:τn〛 = 〚τ1〛 × ... × 〚τn〛. For instance, 〚x:nat, y:nat〛 = ℕ⊥ × ℕ⊥. As a special case, the meaning of the empty typing context, with no variables, is the domain with one element, denoted 1.
Finally, we must give a meaning to each program-fragment-in-typing-context. Suppose that P is a program fragment of type σ, in typing context Γ, often written Γ⊢P:σ. Then the meaning of this program-in-typing-context must be a continuous function 〚Γ⊢P:σ〛:〚Γ〛→〚σ〛. For instance, 〚⊢7:nat〛 : 1 → ℕ⊥ is the constantly "7" function, while 〚x:nat, y:nat ⊢ x+y : nat〛 : ℕ⊥ × ℕ⊥ → ℕ⊥ is the function that adds two numbers.
Now, the meaning of the compound expression (7+4) is determined by composing the three functions 〚⊢7:nat〛 : 1 → ℕ⊥, 〚⊢4:nat〛 : 1 → ℕ⊥, and 〚x:nat, y:nat ⊢ x+y : nat〛 : ℕ⊥ × ℕ⊥ → ℕ⊥.
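As a rough illustration of this scheme (a Python sketch under simplified assumptions, ignoring domains, bottom elements and continuity; it is not the categorical construction itself), the meaning of "7 + 4" can be computed by composing the meanings of its parts:

# Denotations of constants: functions from the one-element (empty) environment.
def den_const(k):
    return lambda env: k

# Denotation of '+': combines two denotations into the denotation of their sum.
def den_plus(d1, d2):
    return lambda env: d1(env) + d2(env)

# 〚7 + 4〛 is built compositionally from 〚7〛, 〚4〛 and 〚+〛:
meaning = den_plus(den_const(7), den_const(4))
assert meaning({}) == 11   # the denotation of '7 + 4' in the empty environment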
In fact, this is a general scheme for compositional denotational semantics. There is nothing specific about domains and continuous functions here. One can work with a different category instead. For example, in game semantics, the category of games has games as objects and strategies as morphisms: we can interpret types as games, and programs as strategies. For a simple language without general recursion, we can make do with the category of sets and functions. For a language with side-effects, we can work in the Kleisli category for a monad. For a language with state, we can work in a functor category. Milner has advocated modelling location and interaction by working in a category with interfaces as objects and bigraphs as morphisms.
Semantics versus implementation
According to Dana Scott (1980):
It is not necessary for the semantics to determine an implementation, but it should provide criteria for showing that an implementation is correct.
According to Clinger (1981):
Usually, however, the formal semantics of a conventional sequential programming language may itself be interpreted to provide an (inefficient) implementation of the language. A formal semantics need not always provide such an implementation, though, and to believe that semantics must provide an implementation leads to confusion about the formal semantics of concurrent languages. Such confusion is painfully evident when the presence of unbounded nondeterminism in a programming language's semantics is said to imply that the programming language cannot be implemented.
Connections to other areas of computer science
Some work in denotational semantics has interpreted types as domains in the sense of domain theory, which can be seen as a branch of model theory, leading to connections with type theory and category theory. Within computer science, there are connections with abstract interpretation, program verification, and model checking.
References
Further reading
Textbooks
Lecture notes
Other references
External links
Denotational Semantics. Overview of book by Lloyd Allison
1970 in computing
Logic in computer science
Models of computation
Formal specification languages
Programming language semantics
| Denotational semantics | [
"Mathematics"
] | 3,555 | [
"Mathematical logic",
"Logic in computer science"
] |
55,285 | https://en.wikipedia.org/wiki/Ernst%20Mach | Ernst Waldfried Josef Wenzel Mach ( ; ; 18 February 1838 – 19 February 1916) was an Austrian physicist and philosopher, who contributed to the physics of shock waves. The ratio of the speed of a flow or object to that of sound is named the Mach number in his honour. As a philosopher of science, he was a major influence on logical positivism and American pragmatism. Through his criticism of Isaac Newton's theories of space and time, he foreshadowed Albert Einstein's theory of relativity.
Biography
Early life
Mach was born in Chrlice, Moravia, Austrian Empire (now part of Brno in the Czech Republic). His father Jan Nepomuk Mach, who had graduated from Charles-Ferdinand University in Prague, acted as tutor to the noble Brethon family in Zlín in eastern Moravia. His grandfather, Wenzl Lanhaus, an administrator of the Chirlitz estate, was also master builder of the streets there. His activities in that field later influenced Ernst Mach's theoretical work. Some sources give Mach's birthplace as Tuřany (also part of Brno), the site of the Chirlitz registry office. It was there that Mach was baptised by Peregrin Weiss. Mach later became a socialist and an atheist, but his theory and life were sometimes compared to Buddhism. Heinrich Gomperz called Mach the "Buddha of Science" because of his phenomenalist approach to the "Ego" in his Analysis of Sensations.
Up to the age of 14, Mach was educated at home by his parents. He then entered a gymnasium in Kroměříž, where he studied for three years. In 1855 he became a student at the University of Vienna, where he studied physics and for one semester medical physiology, receiving his doctorate in physics in 1860 under Andreas von Ettingshausen with the thesis Über elektrische Ladungen und Induktion ("On electric charges and induction"), and his habilitation the following year. His early work focused on the Doppler effect in optics and acoustics.
Professional research
In 1864, Mach became professor of mathematics at the University of Graz after having declined a chair in surgery at the University of Salzburg. In 1866 he was appointed professor of physics. During this period, Mach continued his work in psycho-physics and in sensory perception. In 1867, he took the chair of experimental physics at the Charles-Ferdinand University, where he stayed for 28 years before returning to Vienna. In 1871 he was elected a member of the Royal Bohemian Society of Sciences.
Mach's main contribution to physics involved his description and photographs of spark shock-waves and then ballistic shock-waves. He described how when a bullet or shell moved faster than the speed of sound, it created a compression of air in front of it. Using schlieren photography, he and his son Ludwig photographed the shadows of the invisible shock waves. During the early 1890s Ludwig invented a modification of the Jamin interferometer that allowed for much clearer photographs. But Mach also made many contributions to psychology and physiology, including his anticipation of gestalt phenomena, his discovery of the oblique effect and of Mach bands, an inhibition-influenced type of visual illusion, and especially his discovery of a non-acoustic function of the inner ear that helps control human balance.
One of the best-known of Mach's ideas is the so-called Mach principle, the physical origin of inertia. This was never written down by Mach, but was given a graphic verbal form, attributed by Philipp Frank to Mach: "When the subway jerks, it's the fixed stars that throw you down."
In 1900 Mach became the godfather of the physicist Wolfgang Ernst Pauli, who was also named after him.
Mach was also well known for his philosophy, developed in close interplay with his science. He defended a type of phenomenalism, recognizing only sensations as real. That position seemed incompatible with the view of atoms and molecules as external, mind-independent things. After an 1897 lecture by Ludwig Boltzmann at the Imperial Academy of Science in Vienna, Mach said, "I don't believe that atoms exist!"
In 1898, Mach survived a paralytic stroke, and in 1901, he retired from the University of Vienna and was appointed to the upper chamber of the Austrian Parliament. On leaving Vienna in 1913, he moved to his son's home in Vaterstetten, near Munich, where he continued writing and corresponding until his death in 1916, one day after his 78th birthday.
Politics
Born to a liberal family, Mach lamented that a "very reactionary-clerical" period followed the 1848 revolutions, prompting him to plan to emigrate to America.
In 1901, Mach accepted an appointment to the Austrian House of Lords but declined a nobility because he thought it inappropriate for a scientist to accept such a thing. He was on good personal terms with the Social Democrat politician Viktor Adler and left money in his will to the Social Democrat newspaper Arbeiter-Zeitung.
Mach was critical of the European powers' colonial conquests, saying that they "will constitute...the most distasteful chapter of history for coming generations".
Physics
Most of Mach's initial studies in experimental physics concentrated on the interference, diffraction, polarization and refraction of light in different media under external influences. From there followed explorations in supersonic fluid mechanics. Mach and physicist-photographer Peter Salcher presented their paper on this subject in 1887; it correctly describes the sound effects observed during the supersonic motion of a projectile. They deduced and experimentally confirmed the existence of a shock wave of conical shape, with the projectile at the apex. The ratio of the speed of a fluid to the local speed of sound vp/vs is called the Mach number after him. It is a critical parameter in the description of high-speed fluid movement in aerodynamics and hydrodynamics. Mach also contributed to cosmology the hypothesis known as Mach's principle.
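As a trivial worked example (not from the article): a projectile travelling at 680 m/s through air in which sound propagates at 340 m/s has a Mach number of 680/340 = 2. In a one-line Python sketch:

def mach_number(v_object, v_sound):
    """Mach number: ratio of the object's (or flow's) speed to the local speed of sound."""
    return v_object / v_sound

assert mach_number(680.0, 340.0) == 2.0   # M > 1 means supersonic motion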
Philosophy of science
Empirio-criticism
From 1895 to 1901, Mach held a newly created chair for "the history and philosophy of the inductive sciences" at the University of Vienna. In his historico-philosophical studies, Mach developed a phenomenalistic philosophy of science that became influential in the 19th and 20th centuries. He originally saw scientific laws as summaries of experimental events, constructed for the purpose of making complex data comprehensible, but later emphasized mathematical functions as a more useful way to describe sensory appearances. Thus, scientific laws, while somewhat idealized, have more to do with describing sensations than with reality as it exists beyond sensations.
Mach's positivism influenced many Russian Marxists, such as Alexander Bogdanov. In 1908, Lenin wrote a philosophical work, Materialism and Empirio-criticism, in which he criticized Machism and the views of "Russian Machists". His main criticisms were that Mach's philosophy led to solipsism and to the absurd conclusion that nature did not exist before humans.
Empirio-criticism is the term for the rigorously positivist and radically empiricist philosophy established by the German philosopher Richard Avenarius and further developed by Mach, Joseph Petzoldt, and others, that claims that all we can know is our sensations and that knowledge should be confined to pure experience.
In accordance with empirio-critical philosophy, Mach opposed Boltzmann and others who proposed an atomic theory of physics. Since one cannot observe things as small as atoms directly, and since no atomic model at the time was consistent, the atomic hypothesis seemed unwarranted to Mach, and perhaps not sufficiently "economical". Mach had a direct influence on the Vienna Circle philosophers and logical positivism in general.
Several principles are attributed to Mach that distill his ideal of physical theorization, called "Machian physics":
It should be based entirely on directly observable phenomena (in line with his positivistic leanings)
It should completely eschew absolute space and time in favour of relative motion
Any phenomena that seem attributable to absolute space and time (e.g., inertia and centrifugal force) should instead be seen as emerging from the distribution of matter in the universe.
The last is singled out, particularly by Einstein, as "the" Mach's principle. Einstein cited it as one of the three principles underlying general relativity. In 1930, he wrote, "it is justified to consider Mach as the precursor of the general theory of relativity", though Mach, before his death, apparently rejected Einstein's theory. Einstein knew that his theories did not fulfill all Mach's principles, and neither has any subsequent theory, despite considerable effort.
Phenomenological constructivism
According to Alexander Riegler, Mach's work was a precursor to the influential perspective known as constructivism. Constructivism holds that all knowledge is constructed rather than received by the learner. He took an exceptionally non-dualist, phenomenological position. The founder of radical constructivism, Ernst von Glasersfeld, gave a nod to Mach as an ally.
On the other hand, there is also a reasonable case for viewing Mach simply as an empiricist and a precursor of the logical empiricists and the Vienna Circle. On this view, the purpose of science is to detail functional relationships between observations: "The goal which it (physical science) has set itself is the simplest and most economical abstract expression of facts."
Influence
Friedrich Hayek wrote that, when he attended the University of Vienna from 1918 to 1921, "as far as philosophical discussion went it essentially revolved around Mach's ideas". Mach's work has also been cited as an influence on the Vienna Circle, being described as a "major precursor of logical positivism".
Mach's work was a "forerunner" of Gestalt psychology.
Physiology
In 1873, independently of each other, Mach and the physiologist and physician Josef Breuer discovered how the sense of balance (i.e., the perception of the head's imbalance) functions, tracing its management by information the brain receives from the movement of a fluid in the semicircular canals of the inner ear. That the sense of balance depends on the three semicircular canals was discovered in 1870 by the physiologist Friedrich Goltz, but Goltz did not discover how the balance-sensing apparatus functions. Mach devised a swivel chair to test his theories, and Floyd Ratliff has suggested that this experiment may have paved the way to Mach's critique of a physical conception of absolute space and motion.
Psychology
In the area of sensory perception, psychologists remember Mach for the optical illusion called Mach bands. The effect exaggerates the contrast between edges of the slightly differing shades of gray as soon as they separate, by triggering edge-detection in the human visual system.
More clearly than anyone before or since, Mach made the distinction between what he called physiological (specifically visual) and geometrical spaces.
Mach's views on mediating structures inspired B. F. Skinner's strongly inductive position, which paralleled Mach's in the field of psychology.
Eponyms
In homage, his name was given to:
3949 Mach, an asteroid
Mach, a lunar crater
Mach bands, an optical illusion
Mach diamonds, seen in supersonic exhausts
Mach Five, the car used by Speed Racer
Mach number, the unit for speed relative to the speed of sound
Bibliography
(Later editions were published under the title Analyse der Empfindungen und das Verhältnis des Physischen zum Psychischen)
Mach's principal works in English:
with Peter Salcher
Popular Scientific Lectures (1895); Revised & enlarged 3rd edition (1898)
with S.J.B. Sugden
History and Root of the Principle of the Conservation of Energy (1911)
The Principles of Physical Optics (1926)
Knowledge and Error (1976)
Principles of the Theory of Heat (1986)
Fundamentals of the Theory of Movement Perception (2001)
See also
Energeticism
Mach (kernel)
Mach bands
Mach disk
Mach reflection
Mach's principle
Mach–Zehnder interferometer
Stereokinetic stimulus
Visual space
References
Notes
Citations
Sources
Further reading
Erik C. Banks: Ernst Mach's World Elements. A Study in Natural Philosophy. Dordrecht: Kluwer (now Springer), 2013.
John Blackmore and Klaus Hentschel (eds.): Ernst Mach als Außenseiter. Vienna: Braumüller, 1985 (with select correspondence).
John T. Blackmore, Ryoichi Itagaki and Setsuko Tanaka (eds.): Ernst Mach's Science. Kanagawa: Tokai University Press, 2006.
John T. Blackmore, Ryoichi Itagaki and Setsuko Tanaka: Ernst Mach's Influence Spreads. Bethesda: Sentinel Open Press, 2009.
John T. Blackmore, Ryoichi Itagaki and Setsuko Tanaka: Ernst Mach's Graz (1864–1867), where much science and philosophy were developed. Bethesda: Sentinel Open Press, 2010.
John T. Blackmore: Ernst Mach's Prague 1867–1895 as a human adventure, Bethesda: Sentinel Open Press, 2010.
External links
Ernst Mach bibliography of all of his papers and books from 1860 to 1916, compiled by Vienna lecturer Dr. Peter Mahr in 2016
Various Ernst Mach links, compiled by Greg C Elvers
Klaus Hentschel: Mach, Ernst, in: Neue Deutsche Biographie 15 (1987), pp. 605–609.
Short biography and bibliography in the Virtual Laboratory of the Max Planck Institute for the History of Science
Ernst Mach: The Analysis of Sensations (1897) [translation of Beiträge zur Analyse der Empfindungen (1886)]
"The critical positivism of Mach and Avenarius": entry in the Britannica Online Encyclopedia
1838 births
1916 deaths
19th-century Austrian writers
19th-century Austrian philosophers
19th-century Czech philosophers
20th-century Austrian male writers
20th-century Austrian philosophers
19th-century Austrian physicists
Historians of physics
Austrian atheists
Atheist philosophers
Austrian people of Moravian-German descent
Austrian socialists
Ballistics experts
Academic staff of Charles University
Empiricists
Experimental physicists
Fluid dynamicists
Historians of science
Optical physicists
People from the Margraviate of Moravia
Philosophers of science
Positivists
Scientists from Brno | Ernst Mach | [
"Physics",
"Chemistry"
] | 2,965 | [
"Fluid dynamicists",
"Experimental physics",
"Experimental physicists",
"Fluid dynamics"
] |
55,309 | https://en.wikipedia.org/wiki/Blood%20type | A blood type (also known as a blood group) is a classification of blood, based on the presence and absence of antibodies and inherited antigenic substances on the surface of red blood cells (RBCs). These antigens may be proteins, carbohydrates, glycoproteins, or glycolipids, depending on the blood group system. Some of these antigens are also present on the surface of other types of cells of various tissues. Several of these red blood cell surface antigens can stem from one allele (or an alternative version of a gene) and collectively form a blood group system.
Blood types are inherited and represent contributions from both parents of an individual. A total of 45 human blood group systems are recognized by the International Society of Blood Transfusion (ISBT). The two most important blood group systems are ABO and Rh; they determine someone's blood type (A, B, AB, and O, with + or − denoting RhD status) for suitability in blood transfusion.
Blood group systems
A complete blood type would describe each of the 45 blood groups, and an individual's blood type is one of many possible combinations of blood-group antigens. Almost always, an individual has the same blood group for life, but very rarely an individual's blood type changes through addition or suppression of an antigen in infection, malignancy, or autoimmune disease. Another more common cause of blood type change is a bone marrow transplant. Bone-marrow transplants are performed for many leukemias and lymphomas, among other diseases. If a person receives bone marrow from someone of a different ABO type (e.g., a type O patient receives a type A bone marrow), the patient's blood type should eventually become the donor's type, as the patient's hematopoietic stem cells (HSCs) are destroyed, either by ablation of the bone marrow or by the donor's T-cells. Once all the patient's original red blood cells have died, they will have been fully replaced by new cells derived from the donor HSCs. Provided the donor had a different ABO type, the new cells' surface antigens will be different from those on the surface of the patient's original red blood cells.
Some blood types are associated with inheritance of other diseases; for example, the Kell antigen is sometimes associated with McLeod syndrome. For another example, Von Willebrand disease may be more severe or apparent in people with blood type O. Certain blood types may affect susceptibility to infections. For example, people with blood type O may be less susceptible to pro-thrombotic events induced by COVID-19 or long COVID. Another example is the resistance to specific malaria species seen in individuals lacking the Duffy antigen. The Duffy antigen, presumably as a result of natural selection, is less common in population groups from areas having a high incidence of malaria.
ABO blood group system
The ABO blood group system involves two antigens and two antibodies found in human blood. The two antigens are antigen A and antigen B. The two antibodies are antibody A and antibody B. The antigens are present on the red blood cells and the antibodies in the serum. Regarding the antigen property of the blood all human beings can be classified into four groups, those with antigen A (group A), those with antigen B (group B), those with both antigen A and B (group AB) and those with neither antigen (group O). The antibodies present together with the antigens are found as follows:
Antigen A with antibody B
Antigen B with antibody A
Antigen AB with neither antibody A nor B
Antigen null (group O) with both antibody A and B
There is an agglutination reaction between similar antigen and antibody (for example, antigen A agglutinates the antibody A and antigen B agglutinates the antibody B). Thus, transfusion can be considered safe as long as the serum of the recipient does not contain antibodies for the blood cell antigens of the donor.
The ABO system is the most important blood-group system in human-blood transfusion. The associated anti-A and anti-B antibodies are usually immunoglobulin M, abbreviated IgM, antibodies. It has been hypothesized that ABO IgM antibodies are produced in the first years of life by sensitization to environmental substances such as food, bacteria, and viruses, although blood group compatibility rules are applied to newborn and infants as a matter of practice. The original terminology used by Karl Landsteiner in 1901 for the classification was A/B/C; in later publications "C" became "O". Type O is often called 0 (zero, or null) in other languages.
Rh blood group system
The Rh system (Rh meaning Rhesus) is the second most significant blood-group system in human-blood transfusion with currently 50 antigens. The most significant Rh antigen is the D antigen, because it is the most likely to provoke an immune system response of the five main Rh antigens. It is common for D-negative individuals not to have any anti-D IgG or IgM antibodies, because anti-D antibodies are not usually produced by sensitization against environmental substances. However, D-negative individuals can produce IgG anti-D antibodies following a sensitizing event: possibly a fetomaternal transfusion of blood from a fetus in pregnancy or occasionally a blood transfusion with D positive RBCs. Rh disease can develop in these cases. Rh negative blood types are much less common in Asian populations (0.3%) than they are in European populations (15%).
The presence or absence of the Rh(D) antigen is signified by the + or − sign, so that, for example, the A− group is ABO type A and does not have the Rh (D) antigen.
ABO and Rh distribution by country
As with many other genetic traits, the distribution of ABO and Rh blood groups varies significantly between populations. While theories are still debated in the scientific community as to why blood types vary geographically and why they emerged in the first place, evidence suggests that the evolution of blood types may be driven by genetic selection for those types whose antigens confer resistance to particular diseases in certain regions – such as the prevalence of blood type O in malaria-endemic countries where individuals of blood type O exhibit the highest rates of survival.
Other blood group systems
A further 43 blood-group systems have been identified by the International Society of Blood Transfusion in addition to the ABO and Rh systems. Thus, in addition to the ABO antigens and Rh antigens, many other antigens are expressed on the RBC surface membrane. For example, an individual can be AB, D positive, and at the same time M and N positive (MNS system), K positive (Kell system), Lea or Leb negative (Lewis system), and so on, being positive or negative for each blood group system antigen. Many of the blood group systems were named after the patients in whom the corresponding antibodies were initially encountered. Blood group systems other than ABO and Rh pose a potential, yet relatively low, risk of complications upon mixing of blood from different people.
Following is a comparison of clinically relevant characteristics of antibodies against the main human blood group systems:
Clinical significance
Blood transfusion
Transfusion medicine is a specialized branch of hematology that is concerned with the study of blood groups, along with the work of a blood bank to provide a transfusion service for blood and other blood products. Across the world, blood products must be prescribed by a medical doctor (licensed physician or surgeon) in a similar way as medicines.
Much of the routine work of a blood bank involves testing blood from both donors and recipients to ensure that every individual recipient is given blood that is compatible and as safe as possible. If a unit of incompatible blood is transfused between a donor and recipient, a severe acute hemolytic reaction with hemolysis (RBC destruction), kidney failure and shock is likely to occur, and death is a possibility. Antibodies can be highly active and can attack RBCs and bind components of the complement system to cause massive hemolysis of the transfused blood.
Patients should ideally receive their own blood or type-specific blood products to minimize the chance of a transfusion reaction. It is also possible to use the patient's own blood for transfusion. This is called autotransfusion, which is always compatible with the patient. The procedure of washing a patient's own red blood cells goes as follows: The patient's lost blood is collected and washed with a saline solution. The washing procedure yields concentrated washed red blood cells. The last step is reinfusing the packed red blood cells into the patient. There are multiple ways to wash red blood cells. The two main ways are centrifugation and filtration methods. This procedure can be performed with microfiltration devices like the Hemoclear filter. Risks can be further reduced by cross-matching blood, but this may be skipped when blood is required for an emergency. Cross-matching involves mixing a sample of the recipient's serum with a sample of the donor's red blood cells and checking if the mixture agglutinates, or forms clumps. If agglutination is not obvious by direct vision, blood bank technologist usually check for agglutination with a microscope. If agglutination occurs, that particular donor's blood cannot be transfused to that particular recipient. In a blood bank it is vital that all blood specimens are correctly identified, so labelling has been standardized using a barcode system known as ISBT 128.
The blood group may be included on identification tags or on tattoos worn by military personnel, in case they should need an emergency blood transfusion. Frontline German Waffen-SS had blood group tattoos during World War II.
Rare blood types can cause supply problems for blood banks and hospitals. For example, Duffy-negative blood occurs much more frequently in people of African origin, and the rarity of this blood type in the rest of the population can result in a shortage of Duffy-negative blood for these patients. Similarly, for RhD negative people there is a risk associated with travelling to parts of the world where supplies of RhD negative blood are rare, particularly East Asia, where blood services may endeavor to encourage Westerners to donate blood.
Hemolytic disease of the newborn (HDN)
A pregnant woman may carry a fetus with a blood type which is different from her own. Typically, this is an issue if an Rh− mother has a child with an Rh+ father, and the fetus ends up being Rh+ like the father. In those cases, the mother can make IgG blood group antibodies. This can happen if some of the fetus' blood cells pass into the mother's blood circulation (e.g. a small fetomaternal hemorrhage at the time of childbirth or obstetric intervention), or sometimes after a therapeutic blood transfusion. This can cause Rh disease or other forms of hemolytic disease of the newborn (HDN) in the current pregnancy and/or subsequent pregnancies. Sometimes this is lethal for the fetus; in these cases it is called hydrops fetalis. If a pregnant woman is known to have anti-D antibodies, the Rh blood type of a fetus can be tested by analysis of fetal DNA in maternal plasma to assess the risk to the fetus of Rh disease. One of the major advances of twentieth century medicine was to prevent this disease by stopping the formation of anti-D antibodies by D-negative mothers with an injectable medication called Rho(D) immune globulin. Antibodies associated with some blood groups can cause severe HDN, others can only cause mild HDN and others are not known to cause HDN.
Blood products
To provide maximum benefit from each blood donation and to extend shelf-life, blood banks fractionate some whole blood into several products. The most common of these products are packed RBCs, plasma, platelets, cryoprecipitate, and fresh frozen plasma (FFP). FFP is quick-frozen to retain the labile clotting factors V and VIII, which are usually administered to patients who have a potentially fatal clotting problem caused by a condition such as advanced liver disease, overdose of anticoagulant, or disseminated intravascular coagulation (DIC).
Units of packed red cells are made by removing as much of the plasma as possible from whole blood units.
Clotting factors synthesized by modern recombinant methods are now in routine clinical use for hemophilia, as the risks of infection transmission that occur with pooled blood products are avoided.
Red blood cell compatibility
Blood group AB individuals have both A and B antigens on the surface of their RBCs, and their blood plasma does not contain any antibodies against either A or B antigen. Therefore, an individual with type AB blood can receive blood from any group (with AB being preferable), but cannot donate blood to any group other than AB. They are known as universal recipients.
Blood group A individuals have the A antigen on the surface of their RBCs, and blood serum containing IgM antibodies against the B antigen. Therefore, a group A individual can receive blood only from individuals of groups A or O (with A being preferable), and can donate blood to individuals with type A or AB.
Blood group B individuals have the B antigen on the surface of their RBCs, and blood serum containing IgM antibodies against the A antigen. Therefore, a group B individual can receive blood only from individuals of groups B or O (with B being preferable), and can donate blood to individuals with type B or AB.
Blood group O (or blood group zero in some countries) individuals do not have either A or B antigens on the surface of their RBCs, and their blood serum contains IgM anti-A and anti-B antibodies. Therefore, a group O individual can receive blood only from a group O individual, but can donate blood to individuals of any ABO blood group (i.e., A, B, O or AB). If a patient needs an urgent blood transfusion, and if the time taken to process the recipient's blood would cause a detrimental delay, O negative blood can be used. Because it is compatible with anyone, O negative blood is often overused, and there are concerns that it is consequently always in short supply. According to the American Association of Blood Banks and the British Chief Medical Officer's National Blood Transfusion Committee, the use of group O RhD negative red cells should be restricted to persons with O negative blood, women who might be pregnant, and emergency cases in which blood-group testing is genuinely impracticable.
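These ABO and Rh D rules amount to a simple containment check: a red-cell unit is acceptable when the donor's cells carry no antigen against which the recipient has antibodies. The sketch below is a minimal Python illustration; the function names are hypothetical, and it deliberately ignores the other blood group systems and the atypical antibodies noted below:

    # Minimal sketch of ABO/Rh D red-cell compatibility (hypothetical helpers).
    def antigens(blood_type: str) -> set:
        """Antigens on red cells, e.g. "AB+" -> {"A", "B", "D"}."""
        abo, rh = blood_type[:-1], blood_type[-1]
        ags = set(abo) if abo != "O" else set()
        if rh == "+":
            ags.add("D")
        return ags

    def red_cells_compatible(donor: str, recipient: str) -> bool:
        # The recipient has antibodies against every ABO antigen they lack,
        # and a D-negative recipient should not be exposed to D-positive cells.
        return antigens(donor) <= antigens(recipient)

    assert red_cells_compatible("O-", "AB+")      # universal red-cell donor
    assert not red_cells_compatible("AB+", "O-")  # universal recipient, not donor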
Note: the compatibility rules above assume the absence of atypical antibodies that would cause an incompatibility between donor and recipient blood, as is usual for blood selected by cross-matching.
An Rh D-negative patient who does not have any anti-D antibodies (never having been previously sensitized to D-positive RBCs) can receive a transfusion of D-positive blood once, but this would cause sensitization to the D antigen, and a female patient would be at risk for hemolytic disease of the newborn in future pregnancies. If a D-negative patient has developed anti-D antibodies, a subsequent exposure to D-positive blood would lead to a potentially dangerous transfusion reaction. Rh D-positive blood should never be given to D-negative women of child-bearing age or to patients with D antibodies, so blood banks must conserve Rh-negative blood for these patients. In extreme circumstances, such as a major bleed when stocks of D-negative blood units are very low at the blood bank, D-positive blood might be given to D-negative females above child-bearing age or to Rh-negative males, provided that they do not have anti-D antibodies, to conserve the D-negative blood stock. The converse is not true: Rh D-positive patients do not react to D-negative blood.
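The stock-conservation policy just described can be summarized as a short decision procedure. The following sketch is a deliberate simplification with hypothetical names; real blood-bank policy involves additional factors and clinical judgment:

    def may_issue_d_positive(recipient_is_d_negative: bool,
                             has_anti_d_antibodies: bool,
                             female_of_childbearing_age: bool,
                             extreme_shortage_of_d_negative: bool) -> bool:
        # D-positive patients do not react to the D antigen.
        if not recipient_is_d_negative:
            return True
        # Never risk a reaction, or sensitization of a potential mother.
        if has_anti_d_antibodies or female_of_childbearing_age:
            return False
        # Otherwise, issue D-positive units to a D-negative patient only in
        # extreme shortage, accepting the risk of sensitization.
        return extreme_shortage_of_d_negative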
This same matching is done for other antigens of the Rh system, such as C, c, E, and e, and for other blood group systems with a known risk of immunization, such as the Kell system, in particular for females of child-bearing age or patients with a known need for many transfusions.
Plasma compatibility
Blood plasma compatibility is the inverse of red blood cell compatibility. Type AB plasma carries neither anti-A nor anti-B antibodies and can be transfused to individuals of any blood group, but type AB patients can only receive type AB plasma. Type O plasma carries both antibodies, so individuals of blood group O can receive plasma from any blood group, but type O plasma can be used only by type O recipients.
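Because plasma compatibility inverts the red-cell rule, it can be expressed by swapping the roles of the donor's and recipient's antigens in the ABO check. This sketch reuses the hypothetical antigens helper from the earlier example and, as the note below indicates, ignores atypical antibodies; D status is irrelevant here, as discussed further down:

    def plasma_compatible(donor: str, recipient: str) -> bool:
        # Donor plasma carries antibodies against every ABO antigen the donor
        # lacks; those antibodies must not meet a matching antigen on the
        # recipient's red cells. Rh D status is ignored for plasma.
        donor_abo = {a for a in antigens(donor) if a in ("A", "B")}
        recipient_abo = {a for a in antigens(recipient) if a in ("A", "B")}
        return recipient_abo <= donor_abo

    assert plasma_compatible("AB+", "O-")      # AB plasma suits any recipient
    assert not plasma_compatible("O-", "AB+")  # O plasma only for O recipients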
Note: assumes the absence of strong atypical antibodies in donor plasma.
Rh D antibodies are uncommon, so generally neither D-negative nor D-positive blood contains anti-D antibodies. If a potential donor is found to have anti-D antibodies, or any strong atypical blood group antibody, by antibody screening in the blood bank, they would not be accepted as a donor (or, in some blood banks, the blood would be drawn but the product would need to be appropriately labeled). Donor blood plasma issued by a blood bank can therefore be selected to be free of D antibodies and of other atypical antibodies, and such plasma is suitable for a recipient who may be D-positive or D-negative, as long as the plasma and the recipient are ABO compatible.
Universal donors and universal recipients
In transfusions of packed red blood cells, individuals with type O Rh D-negative blood are often called universal donors, and those with type AB Rh D-positive blood are called universal recipients. However, these terms are only generally true with respect to possible reactions of the recipient's anti-A and anti-B antibodies to transfused red blood cells, and to possible sensitization to the Rh D antigen. One exception is individuals with the hh antigen system (also known as the Bombay phenotype), who can only receive blood safely from other hh donors, because they form antibodies against the H antigen present on all red blood cells.
Blood donors with exceptionally strong anti-A, anti-B or any atypical blood group antibody may be excluded from blood donation. In general, while the plasma fraction of a blood transfusion may carry donor antibodies not found in the recipient, a significant reaction is unlikely because of dilution.
Additionally, red blood cell surface antigens other than A, B, and Rh D might cause adverse reactions and sensitization, if they can bind to the corresponding antibodies to generate an immune response. Transfusions are further complicated because platelets and white blood cells (WBCs) have their own systems of surface antigens, and sensitization to platelet or WBC antigens can occur as a result of transfusion.
For transfusions of plasma, this situation is reversed. Type O plasma, containing both anti-A and anti-B antibodies, can only be given to O recipients. The antibodies will attack the antigens on any other blood type. Conversely, AB plasma can be given to patients of any ABO blood group, because it does not contain any anti-A or anti-B antibodies.
Blood typing
Typically, blood type tests are performed through addition of a blood sample to a solution containing antibodies corresponding to each antigen. The presence of an antigen on the surface of the blood cells is indicated by agglutination.
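The pattern of agglutination in this typing step maps directly onto the ABO/Rh D phenotype. A minimal sketch (hypothetical names, covering only anti-A, anti-B, and anti-D typing reagents) might read:

    def forward_type(agglutinates_anti_a: bool,
                     agglutinates_anti_b: bool,
                     agglutinates_anti_d: bool) -> str:
        # Agglutination with a reagent means the matching antigen is present.
        abo = ("A" if agglutinates_anti_a else "") + \
              ("B" if agglutinates_anti_b else "")
        return (abo or "O") + ("+" if agglutinates_anti_d else "-")

    assert forward_type(True, False, True) == "A+"
    assert forward_type(False, False, False) == "O-"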
Blood group genotyping
In addition to the current practice of serologic testing of blood types, progress in molecular diagnostics allows the increasing use of blood group genotyping. In contrast to serologic tests, which report a direct blood type phenotype, genotyping allows the prediction of a phenotype based on knowledge of the molecular basis of the currently known antigens. This allows a more detailed determination of the blood type, and therefore a better match for transfusion, which can be crucial in particular for patients who need many transfusions, in order to prevent allo-immunization.
History
Blood types were first discovered by an Austrian physician, Karl Landsteiner, working at the Pathological-Anatomical Institute of the University of Vienna (now Medical University of Vienna). In 1900, he found that blood sera from different persons would clump together (agglutinate) when mixed in test tubes, and moreover that some human blood also agglutinated with animal blood. He recorded the observation in a two-sentence footnote.
This was the first evidence that blood variation exists in humans. The next year, in 1901, he made the definitive observation that the blood serum of an individual would agglutinate with that of only certain other individuals. Based on this he classified human blood into three groups, namely group A, group B, and group C. He found that group A blood agglutinates with group B, but never with its own type; similarly, group B blood agglutinates with group A. Group C blood is different in that it agglutinates with both A and B. This was the discovery of blood groups, for which Landsteiner was awarded the Nobel Prize in Physiology or Medicine in 1930. (C was later renamed O after the German ohne, meaning "without", or zero, or null.) Another group (later named AB) was discovered a year later by Landsteiner's students Adriano Sturli and Alfred von Decastello, who did not designate a name for it (simply referring to it as "no particular type"). Thus, after Landsteiner, three blood types were initially recognised, namely A, B, and C.
Czech serologist Jan Janský was the first to recognise and designate four blood types, in 1907, in a publication in a local journal, using the Roman numerals I, II, III, and IV (corresponding to modern O, A, B, and AB respectively). Unknown to Janský, American physician William L. Moss introduced an almost identical classification in 1910, but with Moss's I and IV corresponding to Janský's IV and I. The existence of two systems immediately created confusion and potential danger in medical practice. Moss's system was adopted in Britain, France, and the US, while Janský's was preferred in most other European countries and some parts of the US. It was reported that "The practically universal use of the Moss classification at that time was completely and purposely cast aside. Therefore in place of bringing order out of chaos, chaos was increased in the larger cities." To resolve the confusion, the American Association of Immunologists, the Society of American Bacteriologists, and the Association of Pathologists and Bacteriologists made a joint recommendation in 1921 that the Janský classification be adopted based on priority, but it was not followed, particularly where Moss's system had been used.
In 1927, Landsteiner, who had moved to the Rockefeller Institute for Medical Research in New York, suggested, as a member of a National Research Council committee concerned with blood grouping, substituting Janský's and Moss's systems with the letters O, A, B, and AB, first introduced by Polish physician Ludwik Hirszfeld and German physician Emil von Dungern. There was further confusion over the use of O, which had been introduced in 1910: it was never clear whether it was meant to be the figure 0, the German null for zero, or the upper-case letter O for ohne, meaning "without"; Landsteiner chose the latter.
In 1928 the Permanent Commission on Biological Standardization adopted Landsteiner's proposal. This classification became widely accepted, and after the early 1950s it was universally followed.
Hirszfeld and von Dungern discovered the inheritance of blood types as Mendelian genetics in 1910, and the existence of sub-types of A in 1911. In 1927, Landsteiner, with Philip Levine, discovered the MN blood group system and the P system. Development of the Coombs test in 1945, the advent of transfusion medicine, and the understanding of ABO hemolytic disease of the newborn led to the discovery of more blood groups. The International Society of Blood Transfusion (ISBT) currently recognizes 47 blood groups.
Society and culture
A popular pseudoscientific belief in East Asian countries (especially in Japan and South Korea), known as 血液型 ketsuekigata / hyeoraekhyeong, is that a person's ABO blood type is predictive of their personality, character, and compatibility with others. Researchers have established that no scientific basis exists for blood type personality categorization, and studies have found no "significant relationship between personality and blood type, rendering the theory 'obsolete' and concluding that no basis exists to assume that personality is anything more than randomly associated with blood type."
See also
Blood type (non-human)
Human leukocyte antigen
hh blood group
References
Further reading
External links
BGMUT Blood Group Antigen Gene Mutation Database at NCBI, NIH has details of genes and proteins, and variations thereof, that are responsible for blood types
Type
Genetics
Hematology
Transfusion medicine
Antigens | Blood type | [
"Chemistry",
"Biology"
] | 5,242 | [
"Antigens",
"Genetics",
"Biomolecules"
] |
55,311 | https://en.wikipedia.org/wiki/Seattle%20Wireless | Seattle Wireless was an American non-profit project created by Matt Westervelt and Ken Caruso in June 2000. It sought to develop a free, locally owned wireless community network providing broadband Internet access using widely available, license-free wireless technology. It was a metropolitan area network. As of 2016, Seattle Wireless is no longer operational.
Seattle Wireless was one of the first community wireless networks and one of the first project-focused wikis. It also had a short-lived (seven-episode) online television show called Seattle Wireless TV, created by Peter Yorke and Michael Pierce, which ran from July 2003 to June 2004. SWTV was an early adopter of BitTorrent for distributing its shows.
References
External links
Official website
Matt's Blog
Seattle Wireless Blog Planet
Peter Yorke's Blog
Kahney, Leander, "Home-grown Wireless Networks", TheFeature, 2001-05-07
Fleishman, Glenn, "The revolution may be wireless", Seattle Weekly, 2001-07-18
O'Shea, Dan, "Peace, Love & Wi-Fi", Telephony Online, 2002-05-18
Kharif, Olga, "Footing the Bill for Free Wi-Fi", BusinessWeek, 2002-09-17
"The Insider: Lucky few are going locomotive over Wi-Fi access", Seattle PI, 2005-03-28
Wireless network organizations | Seattle Wireless | [
"Technology"
] | 285 | [
"Wireless networking",
"Wireless network organizations"
] |
55,313 | https://en.wikipedia.org/wiki/Allergy | Allergies, also known as allergic diseases, are various conditions caused by hypersensitivity of the immune system to typically harmless substances in the environment. These diseases include hay fever, food allergies, atopic dermatitis, allergic asthma, and anaphylaxis. Symptoms may include red eyes, an itchy rash, sneezing, coughing, a runny nose, shortness of breath, or swelling. Note that food intolerances and food poisoning are separate conditions.
Common allergens include pollen and certain foods. Metals and other substances may also cause such problems. Food, insect stings, and medications are common causes of severe reactions. Their development is due to both genetic and environmental factors. The underlying mechanism involves immunoglobulin E antibodies (IgE), part of the body's immune system, binding to an allergen and then to a receptor on mast cells or basophils where it triggers the release of inflammatory chemicals such as histamine. Diagnosis is typically based on a person's medical history. Further testing of the skin or blood may be useful in certain cases. Positive tests, however, may not necessarily mean there is a significant allergy to the substance in question.
Early exposure of children to potential allergens may be protective. Treatments for allergies include avoidance of known allergens and the use of medications such as steroids and antihistamines. In severe reactions, injectable adrenaline (epinephrine) is recommended. Allergen immunotherapy, which gradually exposes people to larger and larger amounts of allergen, is useful for some types of allergies such as hay fever and reactions to insect bites. Its use in food allergies is unclear.
Allergies are common. In the developed world, about 20% of people are affected by allergic rhinitis, food allergy affects 10% of adults and 8% of children, and about 20% have or have had atopic dermatitis at some point in time. Depending on the country, about 1–18% of people have asthma. Anaphylaxis occurs in between 0.05–2% of people. Rates of many allergic diseases appear to be increasing. The word "allergy" was first used by Clemens von Pirquet in 1906.
Signs and symptoms
Many allergens such as dust or pollen are airborne particles. In these cases, symptoms arise in areas in contact with air, such as the eyes, nose, and lungs. For instance, allergic rhinitis, also known as hay fever, causes irritation of the nose, sneezing, itching, and redness of the eyes. Inhaled allergens can also lead to increased production of mucus in the lungs, shortness of breath, coughing, and wheezing.
Aside from these ambient allergens, allergic reactions can result from foods, insect stings, and reactions to medications like aspirin and antibiotics such as penicillin. Symptoms of food allergy include abdominal pain, bloating, vomiting, diarrhea, itchy skin, and hives. Food allergies rarely cause respiratory (asthmatic) reactions, or rhinitis. Insect stings, food, antibiotics, and certain medicines may produce a systemic allergic response that is also called anaphylaxis; multiple organ systems can be affected, including the digestive system, the respiratory system, and the circulatory system. Depending on the severity, anaphylaxis can include skin reactions, bronchoconstriction, swelling, low blood pressure, coma, and death. This type of reaction can be triggered suddenly, or the onset can be delayed. The nature of anaphylaxis is such that the reaction can seem to be subsiding but may recur throughout a period of time.
Skin
Substances that come into contact with the skin, such as latex, are also common causes of allergic reactions, known as contact dermatitis or eczema. Skin allergies frequently cause rashes, or swelling and inflammation within the skin, in what is known as a "weal and flare" reaction characteristic of hives and angioedema.
With insect stings, a large local reaction may occur in the form of an area of skin redness greater than 10 cm in size that can last one to two days. This reaction may also occur after immunotherapy.
At the molecular level, the skin handles allergens much as the body handles other foreign invaders. The skin forms an effective barrier to the entry of most allergens, but the barrier can be breached, for example by an insect sting that injects allergen into the affected spot. When an allergen enters the epidermis or dermis, it triggers a localized allergic reaction that activates the mast cells in the skin, resulting in an immediate increase in vascular permeability and leading to fluid leakage and swelling in the affected area. Mast-cell activation also produces a skin lesion called the wheal-and-flare reaction: chemicals released from local nerve endings by a nerve axon reflex cause vasodilation of the surrounding cutaneous blood vessels, which produces the redness of the surrounding skin.
In some individuals, the allergic response also includes a secondary, more widespread and sustained edematous response, usually occurring about 8 hours after the allergen first comes into contact with the skin. When an allergen is ingested, a dispersed form of the wheal-and-flare reaction, known as urticaria or hives, appears when the allergen enters the bloodstream and eventually reaches the skin. The skin's characteristic reactions to different allergens allow allergists to test for allergies by injecting a very small amount of an allergen into the skin. Even though these injections are small and localized, they still pose a risk of causing systemic anaphylaxis.
Cause
Risk factors for allergies can be placed in two broad categories, namely host and environmental factors. Host factors include heredity, sex, race, and age, with heredity being by far the most significant. However, there has been a recent increase in the incidence of allergic disorders that cannot be explained by genetic factors alone. Four major environmental candidates are alterations in exposure to infectious diseases during early childhood, environmental pollution, allergen levels, and dietary changes.
Dust mites
Dust mite allergy, also known as house dust allergy, is a sensitization and allergic reaction to the droppings of house dust mites. The allergy is common and can trigger allergic reactions such as asthma, eczema, or itching. The mite's gut contains potent digestive enzymes (notably peptidase 1) that persist in their feces and are major inducers of allergic reactions such as wheezing. The mite's exoskeleton can also contribute to allergic reactions. Unlike scabies mites or skin follicle mites, house dust mites do not burrow under the skin and are not parasitic.
Foods
A wide variety of foods can cause allergic reactions, but 90% of allergic responses to foods are caused by cow's milk, soy, eggs, wheat, peanuts, tree nuts, fish, and shellfish. Other food allergies, affecting less than 1 person per 10,000 population, may be considered "rare". The most common food allergy in the US population is a sensitivity to crustacea. Although peanut allergies are notorious for their severity, peanut allergies are not the most common food allergy in adults or children. Severe or life-threatening reactions may be triggered by other allergens and are more common when combined with asthma.
Rates of allergies differ between adults and children. Children can sometimes outgrow peanut allergies. Egg allergies affect one to two percent of children but are outgrown by about two-thirds of children by the age of 5. The sensitivity is usually to proteins in the white, rather than the yolk.
Milk-protein allergies—distinct from lactose intolerance—are most common in children. Approximately 60% of milk-protein reactions are immunoglobulin E–mediated, with the remaining usually attributable to inflammation of the colon. Some people are unable to tolerate milk from goats or sheep as well as from cows, and many are also unable to tolerate dairy products such as cheese. Roughly 10% of children with a milk allergy will have a reaction to beef. Lactose intolerance, a common reaction to milk, is not a form of allergy at all, but due to the absence of an enzyme in the digestive tract.
Those with tree nut allergies may be allergic to one or many tree nuts, including pecans, pistachios, and walnuts. In addition, seeds, including sesame seeds and poppy seeds, contain oils in which protein is present, which may elicit an allergic reaction.
Allergens can be transferred from one food to another through genetic engineering; however, genetic modification can also remove allergens. Little research has been done on the natural variation of allergen concentrations in unmodified crops.
Latex
Latex can trigger an IgE-mediated cutaneous, respiratory, and systemic reaction. The prevalence of latex allergy in the general population is believed to be less than one percent. In a hospital study, 1 in 800 surgical patients (0.125 percent) reported latex sensitivity, although the sensitivity among healthcare workers is higher, between seven and ten percent. Researchers attribute this higher level to the exposure of healthcare workers to areas with significant airborne latex allergens, such as operating rooms, intensive-care units, and dental suites. These latex-rich environments may sensitize healthcare workers who regularly inhale allergenic proteins.
The most prevalent response to latex is an allergic contact dermatitis, a delayed hypersensitive reaction appearing as dry, crusted lesions. This reaction usually lasts 48–96 hours. Sweating or rubbing the area under the glove aggravates the lesions, possibly leading to ulcerations. Anaphylactic reactions occur most often in sensitive patients who have been exposed to a surgeon's latex gloves during abdominal surgery, but other mucosal exposures, such as dental procedures, can also produce systemic reactions.
Latex and banana sensitivity may cross-react. Furthermore, those with latex allergy may also have sensitivities to avocado, kiwifruit, and chestnut. These people often have perioral itching and local urticaria. Only occasionally have these food-induced allergies induced systemic responses. Researchers suspect that the cross-reactivity of latex with banana, avocado, kiwifruit, and chestnut occurs because latex proteins are structurally homologous with some other plant proteins.
Medications
About 10% of people report that they are allergic to penicillin; however, of that 10%, 90% turn out not to be, so only about 1% of people are truly allergic. Serious allergic reactions occur in only about 0.03%.
Insect stings
One of the main sources of human allergies is insects. An allergy to insects can be brought on by bites, stings, ingestion, and inhalation.
Toxins interacting with proteins
Another non-food protein reaction, urushiol-induced contact dermatitis, originates after contact with poison ivy, eastern poison oak, western poison oak, or poison sumac. Urushiol, which is not itself a protein, acts as a hapten and chemically reacts with, binds to, and changes the shape of integral membrane proteins on exposed skin cells. The immune system does not recognize the affected cells as normal parts of the body, causing a T-cell-mediated immune response.
Of these poisonous plants, sumac is the most virulent. The resulting dermatological response to the reaction between urushiol and membrane proteins includes redness, swelling, papules, vesicles, blisters, and streaking.
Estimates vary on the fraction of the population that will have an immune system response. Approximately 25% of the population will have a strong allergic response to urushiol. In general, approximately 80–90% of adults will develop a rash if they are exposed to a sufficient dose of purified urushiol, but some people are so sensitive that only a molecular trace on the skin is needed to initiate an allergic reaction.
Genetics
Allergic diseases are strongly familial; identical twins are likely to have the same allergic diseases about 70% of the time; the same allergy occurs about 40% of the time in non-identical twins. Allergic parents are more likely to have allergic children and those children's allergies are likely to be more severe than those in children of non-allergic parents. Some allergies, however, are not consistent along genealogies; parents who are allergic to peanuts may have children who are allergic to ragweed. The likelihood of developing allergies is inherited and related to an irregularity in the immune system, but the specific allergen is not.
The risk of allergic sensitization and the development of allergies varies with age, with young children most at risk. Several studies have shown that IgE levels are highest in childhood and fall rapidly between the ages of 10 and 30 years. The peak prevalence of hay fever is highest in children and young adults and the incidence of asthma is highest in children under 10.
Ethnicity may play a role in some allergies; however, racial factors have been difficult to separate from environmental influences and changes due to migration. It has been suggested that different genetic loci are responsible for asthma, to be specific, in people of European, Hispanic, Asian, and African origins.
Just as individuals differ in appearance, they differ at the molecular level in how the immune system recognizes and responds to foreign bodies. This variation arises from genetic makeup: DNA comprising genes that encode particular molecules or whole molecular complexes. Because responses vary so widely between individuals and the diseases manifest so differently, a clear genetic basis for the predisposition to, and severity of, allergic diseases has not yet been fully established. Since allergy reflects an excessive reaction to the environment, many of the implicated genes are involved in the regulation of immune and inflammatory molecules.
Researchers have worked to characterize genes involved in inflammation and the maintenance of mucosal integrity. The identified genes associated with allergic disease severity, progression, and development primarily function in four areas: regulating inflammatory responses (IFN-α, TLR-1, IL-13, IL-4, IL-5, HLA-G, iNOS), maintaining vascular endothelium and mucosal lining (FLG, PLAUR, CTNNA3, PDCH1, COL29A1), mediating immune cell function (PHF11, H1R, HDC, TSLP, STAT6, RERE, PPP2R3C), and influencing susceptibility to allergic sensitization (e.g., ORMDL3, CHI3L1).
Multiple studies have investigated the genetic profiles of individuals with predispositions to and experiences of allergic diseases, revealing a complex polygenic architecture. Specific genetic loci, such as MIIP, CXCR4, SCML4, CYP1B1, ICOS, and LINC00824, have been directly associated with allergic disorders. Additionally, some loci show pleiotropic effects, linking them to both autoimmune and allergic conditions, including PRDM2, G3BP1, HBS1L, and POU2AF1. These genes engage in shared inflammatory pathways across various epithelial tissues—such as the skin, esophagus, vagina, and lung—highlighting common genetic factors that contribute to the pathogenesis of asthma and other allergic diseases.
In atopic patients, transcriptome studies have identified IL-13-related pathways as key to eosinophilic airway inflammation and remodeling, which drive the airflow limitation characteristic of allergic asthma. Gene expression was quite variable: genes associated with inflammation were found almost exclusively in superficial airways, while genes related to airway remodeling were mainly present in endobronchial biopsy specimens. This gene profile was similar across multiple sample types – nasal brushing, sputum, endobronchial brushing – demonstrating the importance of eosinophilic inflammation, mast cell degranulation, and group 3 innate lymphoid cells in severe adult-onset asthma. IL-13 is an immunoregulatory cytokine made mostly by activated T-helper 2 (Th2) cells. It is important for many steps in B-cell maturation and differentiation, since it increases CD23 and MHC class II expression and aids B-cell isotype switching to IgE. IL-13 also suppresses macrophage function by reducing the release of pro-inflammatory cytokines and chemokines. More strikingly, IL-13 is a prime mover in allergen-induced asthma via pathways that are independent of IgE and eosinophils.
Hygiene hypothesis
Allergic diseases are caused by inappropriate immunological responses to harmless antigens driven by a TH2-mediated immune response. Many bacteria and viruses elicit a TH1-mediated immune response, which down-regulates TH2 responses. The first proposed mechanism of action of the hygiene hypothesis was that insufficient stimulation of the TH1 arm of the immune system leads to an overactive TH2 arm, which in turn leads to allergic disease. In other words, individuals living in too sterile an environment are not exposed to enough pathogens to keep the immune system busy. Since our bodies evolved to deal with a certain level of such pathogens, when they are not exposed to this level, the immune system will attack harmless antigens, and thus normally benign microbial objects—like pollen—will trigger an immune response.
The hygiene hypothesis was developed to explain the observation that hay fever and eczema, both allergic diseases, were less common in children from larger families, which were, it is presumed, exposed to more infectious agents through their siblings, than in children from families with only one child. It is used to explain the increase in allergic diseases that have been seen since industrialization, and the higher incidence of allergic diseases in more developed countries. The hygiene hypothesis has now expanded to include exposure to symbiotic bacteria and parasites as important modulators of immune system development, along with infectious agents.
Epidemiological data support the hygiene hypothesis. Studies have shown that various immunological and autoimmune diseases are much less common in the developing world than the industrialized world, and that immigrants to the industrialized world from the developing world increasingly develop immunological disorders in relation to the length of time since arrival in the industrialized world. Longitudinal studies in the third world demonstrate an increase in immunological disorders as a country grows more affluent and, it is presumed, cleaner. The use of antibiotics in the first year of life has been linked to asthma and other allergic diseases. The use of antibacterial cleaning products has also been associated with higher incidence of asthma, as has birth by caesarean section rather than vaginal birth.
Stress
Chronic stress can aggravate allergic conditions. This has been attributed to a T helper 2 (TH2)-predominant response driven by suppression of interleukin 12 by both the autonomic nervous system and the hypothalamic–pituitary–adrenal axis. Stress management in highly susceptible individuals may improve symptoms.
Other environmental factors
Allergic diseases are more common in industrialized countries than in countries that are more traditional or agricultural, and there is a higher rate of allergic disease in urban populations versus rural populations, although these differences are becoming less defined. Historically, the trees planted in urban areas were predominantly male to prevent litter from seeds and fruits, but the high ratio of male trees causes high pollen counts, a phenomenon that horticulturist Tom Ogren has called "botanical sexism".
Alterations in exposure to microorganisms is another plausible explanation, at present, for the increase in atopic allergy. Endotoxin exposure reduces release of inflammatory cytokines such as TNF-α, IFNγ, interleukin-10, and interleukin-12 from white blood cells (leukocytes) that circulate in the blood. Certain microbe-sensing proteins, known as Toll-like receptors, found on the surface of cells in the body are also thought to be involved in these processes.
Parasitic worms and similar parasites are present in untreated drinking water in developing countries, and were present in the water of developed countries until the routine chlorination and purification of drinking water supplies. Recent research has shown that some common parasites, such as intestinal worms (e.g., hookworms), secrete chemicals into the gut wall (and, hence, the bloodstream) that suppress the immune system and prevent the body from attacking the parasite. This gives rise to a new slant on the hygiene hypothesis theory—that co-evolution of humans and parasites has led to an immune system that functions correctly only in the presence of the parasites. Without them, the immune system becomes unbalanced and oversensitive.
In particular, research suggests that allergies may coincide with the delayed establishment of gut flora in infants. However, the research to support this theory is conflicting, with some studies performed in China and Ethiopia showing an increase in allergy in people infected with intestinal worms. Clinical trials have been initiated to test the effectiveness of certain worms in treating some allergies. It may be that the term 'parasite' could turn out to be inappropriate, and in fact a hitherto unsuspected symbiosis is at work. For more information on this topic, see Helminthic therapy.
Pathophysiology
Acute response
In the initial stages of allergy, a type I hypersensitivity reaction against an allergen encountered for the first time and presented by a professional antigen-presenting cell causes a response in a type of immune cell called a TH2 lymphocyte, a subset of T cells that produce a cytokine called interleukin-4 (IL-4). These TH2 cells interact with other lymphocytes called B cells, whose role is production of antibodies. Coupled with signals provided by IL-4, this interaction stimulates the B cell to begin production of a large amount of a particular type of antibody known as IgE. Secreted IgE circulates in the blood and binds to an IgE-specific receptor (a kind of Fc receptor called FcεRI) on the surface of other kinds of immune cells called mast cells and basophils, which are both involved in the acute inflammatory response. The IgE-coated cells, at this stage, are sensitized to the allergen.
If later exposure to the same allergen occurs, the allergen can bind to the IgE molecules held on the surface of the mast cells or basophils. Cross-linking of the IgE and Fc receptors occurs when more than one IgE-receptor complex interacts with the same allergenic molecule and activates the sensitized cell. Activated mast cells and basophils undergo a process called degranulation, during which they release histamine and other inflammatory chemical mediators (cytokines, interleukins, leukotrienes, and prostaglandins) from their granules into the surrounding tissue causing several systemic effects, such as vasodilation, mucous secretion, nerve stimulation, and smooth muscle contraction.
This results in rhinorrhea, itchiness, dyspnea, and anaphylaxis. Depending on the individual, allergen, and mode of introduction, the symptoms can be system-wide (classical anaphylaxis) or localized to specific body systems. Asthma is localized to the respiratory system and eczema is localized to the dermis.
Late-phase response
After the chemical mediators of the acute response subside, late-phase responses can often occur. This is due to the migration of other leukocytes such as neutrophils, lymphocytes, eosinophils, and macrophages to the initial site. The reaction is usually seen 2–24 hours after the original reaction. Cytokines from mast cells may play a role in the persistence of long-term effects. Late-phase responses seen in asthma are slightly different from those seen in other allergic responses, although they are still caused by release of mediators from eosinophils and are still dependent on activity of TH2 cells.
Allergic contact dermatitis
Although allergic contact dermatitis is termed an "allergic" reaction (which usually refers to type I hypersensitivity), its pathophysiology involves a reaction that more correctly corresponds to a type IV hypersensitivity reaction. In type IV hypersensitivity, there is activation of certain types of T cells (CD8+) that destroy target cells on contact, as well as activated macrophages that produce hydrolytic enzymes.
Diagnosis
Effective management of allergic diseases relies on the ability to make an accurate diagnosis. Allergy testing can help confirm or rule out allergies. Correct diagnosis, counseling, and avoidance advice based on valid allergy test results reduce the incidence of symptoms and need for medications, and improve quality of life. To assess the presence of allergen-specific IgE antibodies, two different methods can be used: a skin prick test, or an allergy blood test. Both methods are recommended, and they have similar diagnostic value.
Skin prick tests and blood tests are equally cost-effective, and health economic evidence shows that both tests were cost-effective compared with no test. Early and more accurate diagnoses save cost due to reduced consultations, referrals to secondary care, misdiagnosis, and emergency admissions.
Allergy undergoes dynamic changes over time. Regular allergy testing of relevant allergens provides information on if and how patient management can be changed to improve health and quality of life. Annual testing is often the practice for determining whether allergy to milk, egg, soy, and wheat have been outgrown, and the testing interval is extended to 2–3 years for allergy to peanut, tree nuts, fish, and crustacean shellfish. Results of follow-up testing can guide decision-making regarding whether and when it is safe to introduce or re-introduce allergenic food into the diet.
Skin prick testing
Skin testing is also known as "puncture testing" and "prick testing" due to the series of tiny punctures or pricks made into the patient's skin. Tiny amounts of suspected allergens and/or their extracts (e.g., pollen, grass, mite proteins, peanut extract) are introduced to sites on the skin marked with pen or dye (the ink/dye should be carefully selected, lest it cause an allergic response itself). A negative and a positive control are also included for comparison (e.g., saline or glycerin as the negative control and histamine as the positive control). A small plastic or metal device is used to puncture or prick the skin. Sometimes, the allergens are injected "intradermally" into the patient's skin, with a needle and syringe. Common areas for testing include the inside forearm and the back.
If the patient is allergic to the substance, then a visible inflammatory reaction will usually occur within 30 minutes. This response will range from slight reddening of the skin to a full-blown hive (called "wheal and flare") in more sensitive patients similar to a mosquito bite. Interpretation of the results of the skin prick test is normally done by allergists on a scale of severity, with +/− meaning borderline reactivity, and 4+ being a large reaction. Increasingly, allergists are measuring and recording the diameter of the wheal and flare reaction. Interpretation by well-trained allergists is often guided by relevant literature.
In general, a positive response is interpreted when the wheal produced by an antigen is ≥ 3 mm larger than the wheal of the negative control (e.g., saline or glycerin). Some patients may believe they have determined their own allergic sensitivity from observation, but a skin test has been shown to be much better than patient observation at detecting allergy.
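As a rough illustration, the ≥ 3 mm criterion can be written as a small function (hypothetical names and validity thresholds; real interpretation also weighs the positive control, relevant literature, and clinical history):

    def spt_positive(wheal_mm: float,
                     negative_control_mm: float,
                     positive_control_mm: float) -> bool:
        # Assumed validity check for illustration: the histamine control must
        # react and the saline/glycerin control must not.
        controls_valid = positive_control_mm >= 3 and negative_control_mm < 3
        # Positive when the test wheal exceeds the negative control by >= 3 mm.
        return controls_valid and (wheal_mm - negative_control_mm) >= 3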
If a serious life-threatening anaphylactic reaction has brought a patient in for evaluation, some allergists will prefer an initial blood test prior to performing the skin prick test. Skin tests may not be an option if the patient has widespread skin disease or has taken antihistamines in the last several days.
Patch testing
Patch testing is a method used to determine if a specific substance causes allergic inflammation of the skin. It tests for delayed reactions. It is used to help ascertain the cause of skin contact allergy or contact dermatitis. Adhesive patches, usually treated with several common allergic chemicals or skin sensitizers, are applied to the back. The skin is then examined for possible local reactions at least twice, usually at 48 hours after application of the patch, and again two or three days later.
Blood testing
An allergy blood test is quick and simple and can be ordered by a licensed health care provider (e.g., an allergy specialist) or general practitioner. Unlike skin-prick testing, a blood test can be performed irrespective of age, skin condition, medication, symptom, disease activity, and pregnancy. Adults and children of any age can get an allergy blood test. For babies and very young children, a single needle stick for allergy blood testing is often gentler than several skin pricks.
An allergy blood test is available through most laboratories. A sample of the patient's blood is sent to a laboratory for analysis, and the results are sent back a few days later. Multiple allergens can be detected with a single blood sample. Allergy blood tests are very safe since the person is not exposed to any allergens during the testing procedure. After the onset of anaphylaxis or a severe allergic reaction, guidelines recommend emergency departments obtain a time-sensitive blood test to determine blood tryptase levels and assess for mast cell activation.
The test measures the concentration of specific IgE antibodies in the blood. Quantitative IgE test results increase the possibility of ranking how different substances may affect symptoms. A rule of thumb is that the higher the IgE antibody value, the greater the likelihood of symptoms. Allergens found at low levels that today do not result in symptoms cannot help predict future symptom development. The quantitative allergy blood result can help determine what a patient is allergic to, help predict and follow the disease development, estimate the risk of a severe reaction, and explain cross-reactivity.
A low total IgE level is not adequate to rule out sensitization to commonly inhaled allergens. Statistical methods, such as ROC curves, predictive value calculations, and likelihood ratios, have been used to examine the relationship of various testing methods to each other. These methods have shown that patients with a high total IgE have a high probability of allergic sensitization, but further investigation with allergy tests for specific IgE antibodies against a carefully chosen set of allergens is often warranted.
Laboratory methods to measure specific IgE antibodies for allergy testing include enzyme-linked immunosorbent assay (ELISA, or EIA), radioallergosorbent test (RAST), fluorescent enzyme immunoassay (FEIA), and chemiluminescence immunoassay (CLIA).
Other testing
Challenge testing: Challenge testing is when tiny amounts of a suspected allergen are introduced to the body orally, through inhalation, or via other routes. Except for testing food and medication allergies, challenges are rarely performed. When this type of testing is chosen, it must be closely supervised by an allergist.
Elimination/challenge tests: This testing method is used most often with foods or medicines. A patient with a suspected allergen is instructed to modify their diet to totally avoid that allergen for a set time. If the patient experiences significant improvement, they may then be "challenged" by reintroducing the allergen to see if symptoms are reproduced.
Unreliable tests: There are other types of allergy testing methods that are unreliable, including applied kinesiology (allergy testing through muscle relaxation), cytotoxicity testing, urine autoinjection, skin titration (Rinkel method), and provocative and neutralization (subcutaneous) testing or sublingual provocation.
Differential diagnosis
Before a diagnosis of allergic disease can be confirmed, other plausible causes of the presenting symptoms must be considered. Vasomotor rhinitis, for example, is one of many illnesses that share symptoms with allergic rhinitis, underscoring the need for professional differential diagnosis. Once a diagnosis of asthma, rhinitis, anaphylaxis, or other allergic disease has been made, there are several methods for discovering the causative agent of that allergy.
Prevention
Giving peanut products early in childhood may decrease the risk of allergies, and only breastfeeding during at least the first few months of life may decrease the risk of allergic dermatitis. There is little evidence that a mother's diet during pregnancy or breastfeeding affects the risk of allergies, although there has been some research to show that irregular cow's milk exposure might increase the risk of cow's milk allergy. There is some evidence that delayed introduction of certain foods is useful, and that early exposure to potential allergens may actually be protective.
Fish oil supplementation during pregnancy is associated with a lower risk of food sensitivities. Probiotic supplements during pregnancy or infancy may help to prevent atopic dermatitis.
Management
Management of allergies typically involves avoiding the allergy trigger and taking medications to improve the symptoms. Allergen immunotherapy may be useful for some types of allergies.
Medication
Several medications may be used to block the action of allergic mediators or to prevent activation of cells and degranulation processes. These include antihistamines, glucocorticoids, epinephrine (adrenaline), mast cell stabilizers, and antileukotriene agents, which are common treatments for allergic diseases. Anticholinergics, decongestants, and other compounds thought to impair eosinophil chemotaxis are also commonly used. Although anaphylaxis is rare, its severity often requires epinephrine injection; where medical care is unavailable, a device known as an epinephrine autoinjector may be used.
Immunotherapy
Allergen immunotherapy is useful for environmental allergies, allergies to insect bites, and asthma. Its benefit for food allergies is unclear and thus not recommended. Immunotherapy involves exposing people to larger and larger amounts of allergen in an effort to change the immune system's response.
Meta-analyses have found that injections of allergens under the skin are effective in the treatment of allergic rhinitis in children and of asthma. The benefits may last for years after treatment is stopped. It is generally safe and effective for allergic rhinitis and conjunctivitis, allergic forms of asthma, and stinging insect allergy.
To a lesser extent, the evidence also supports the use of sublingual immunotherapy for rhinitis and asthma. For seasonal allergies the benefit is small. In this form the allergen is given under the tongue and people often prefer it to injections. Immunotherapy is not recommended as a stand-alone treatment for asthma.
Alternative medicine
An experimental treatment, enzyme potentiated desensitization (EPD), has been tried for decades but is not generally accepted as effective. EPD uses dilutions of allergen and an enzyme, beta-glucuronidase, to which T-regulatory lymphocytes are supposed to respond by favoring desensitization, or down-regulation, rather than sensitization. EPD has also been tried for the treatment of autoimmune diseases, but evidence does not show effectiveness.
A review found no effectiveness of homeopathic treatments and no difference compared with placebo. The authors concluded that based on rigorous clinical trials of all types of homeopathy for childhood and adolescence ailments, there is no convincing evidence that supports the use of homeopathic treatments.
According to the National Center for Complementary and Integrative Health, U.S., the evidence is relatively strong that saline nasal irrigation and butterbur are effective, when compared to other alternative medicine treatments, for which the scientific evidence is weak, negative, or nonexistent, such as honey, acupuncture, omega 3's, probiotics, astragalus, capsaicin, grape seed extract, Pycnogenol, quercetin, spirulina, stinging nettle, tinospora, or guduchi.
Epidemiology
The allergic diseases—hay fever and asthma—have increased in the Western world over the past 2–3 decades. Increases in allergic asthma and other atopic disorders in industrialized nations, it is estimated, began in the 1960s and 1970s, with further increases occurring during the 1980s and 1990s, although some suggest that a steady rise in sensitization has been occurring since the 1920s. The number of new cases per year of atopy in developing countries has, in general, remained much lower.
Changing frequency
Although genetic factors govern susceptibility to atopic disease, increases in atopy have occurred within too short a period to be explained by a genetic change in the population, thus pointing to environmental or lifestyle changes. Several hypotheses have been identified to explain this increased rate. Increased exposure to perennial allergens may be due to housing changes and increased time spent indoors, and a decreased activation of a common immune control mechanism may be caused by changes in cleanliness or hygiene, and exacerbated by dietary changes, obesity, and decline in physical exercise. The hygiene hypothesis maintains that high living standards and hygienic conditions exposes children to fewer infections. It is thought that reduced bacterial and viral infections early in life direct the maturing immune system away from TH1 type responses, leading to unrestrained TH2 responses that allow for an increase in allergy.
Changes in rates and types of infection alone, however, have been unable to explain the observed increase in allergic disease, and recent evidence has focused attention on the importance of the gastrointestinal microbial environment. Evidence has shown that exposure to food and fecal-oral pathogens, such as hepatitis A, Toxoplasma gondii, and Helicobacter pylori (which also tend to be more prevalent in developing countries), can reduce the overall risk of atopy by more than 60%, and an increased rate of parasitic infections has been associated with a decreased prevalence of asthma. It is speculated that these infections exert their effect by critically altering TH1/TH2 regulation. Important elements of newer hygiene hypotheses also include exposure to endotoxins, exposure to pets and growing up on a farm.
History
Some symptoms attributable to allergic diseases are mentioned in ancient sources. Particularly, three members of the Roman Julio-Claudian dynasty (Augustus, Claudius and Britannicus) are suspected to have a family history of atopy. The concept of "allergy" was originally introduced in 1906 by the Viennese pediatrician Clemens von Pirquet, after he noticed that patients who had received injections of horse serum or smallpox vaccine usually had quicker, more severe reactions to second injections. Pirquet called this phenomenon "allergy" from the Ancient Greek words ἄλλος allos meaning "other" and ἔργον ergon meaning "work".
All forms of hypersensitivity used to be classified as allergies, and all were thought to be caused by an improper activation of the immune system. Later, it became clear that several different disease mechanisms were implicated, with a common link to a disordered activation of the immune system. In 1963, a new classification scheme was designed by Philip Gell and Robin Coombs that described four types of hypersensitivity reactions, known as Type I to Type IV hypersensitivity.
With this new classification, the word allergy, sometimes clarified as a true allergy, was restricted to type I hypersensitivities (also called immediate hypersensitivity), which are characterized as rapidly developing reactions involving IgE antibodies.
A major breakthrough in understanding the mechanisms of allergy was the discovery of the antibody class labeled immunoglobulin E (IgE). IgE was simultaneously discovered in 1966–67 by two independent groups: Ishizaka's team at the Children's Asthma Research Institute and Hospital in Denver, USA, and by Gunnar Johansson and Hans Bennich in Uppsala, Sweden. Their joint paper was published in April 1969.
Diagnosis
Radiometric assays include the radioallergosorbent test (RAST test) method, which uses IgE-binding (anti-IgE) antibodies labeled with radioactive isotopes for quantifying the levels of IgE antibody in the blood.
The RAST methodology was invented and marketed in 1974 by Pharmacia Diagnostics AB, Uppsala, Sweden, and the acronym RAST is actually a brand name. In 1989, Pharmacia Diagnostics AB replaced it with a superior test named the ImmunoCAP Specific IgE blood test, which uses the newer fluorescence-labeled technology.
The American College of Allergy, Asthma and Immunology (ACAAI) and the American Academy of Allergy, Asthma and Immunology (AAAAI) issued the Joint Task Force Report "Pearls and pitfalls of allergy diagnostic testing" in 2008, which is firm in its statement that the term RAST is now obsolete.
The updated version, the ImmunoCAP Specific IgE blood test, is the only specific IgE assay to receive Food and Drug Administration approval to quantitatively report to its detection limit of 0.1 kU/L.
Medical specialty
The medical speciality that studies, diagnoses and treats diseases caused by allergies is called allergology.
An allergist is a physician specially trained to manage and treat allergies, asthma, and the other allergic diseases. In the United States physicians holding certification by the American Board of Allergy and Immunology (ABAI) have successfully completed an accredited educational program and evaluation process, including a proctored examination to demonstrate knowledge, skills, and experience in patient care in allergy and immunology. Becoming an allergist/immunologist requires completion of at least nine years of training.
After completing medical school and graduating with a medical degree, a physician will undergo three years of training in internal medicine (to become an internist) or pediatrics (to become a pediatrician). Once physicians have finished training in one of these specialties, they must pass the exam of either the American Board of Pediatrics (ABP), the American Osteopathic Board of Pediatrics (AOBP), the American Board of Internal Medicine (ABIM), or the American Osteopathic Board of Internal Medicine (AOBIM). Internists or pediatricians wishing to focus on the sub-specialty of allergy-immunology then complete at least an additional two years of study, called a fellowship, in an allergy/immunology training program. Allergist/immunologists listed as ABAI-certified have successfully passed the certifying examination of the ABAI following their fellowship.
In the United Kingdom, allergy is a subspecialty of general medicine or pediatrics. After obtaining postgraduate exams (MRCP or MRCPCH), a doctor works for several years as a specialist registrar before qualifying for the General Medical Council specialist register. Allergy services may also be delivered by immunologists. A 2003 Royal College of Physicians report presented a case for improvement of what were felt to be inadequate allergy services in the UK.
In 2006, the House of Lords convened a subcommittee. It concluded likewise in 2007 that allergy services were insufficient to deal with what the Lords referred to as an "allergy epidemic" and its social cost; it made several recommendations.
Research
Low-allergen foods are being developed, as are improvements in skin prick test predictions. Other areas of research include evaluation of the atopy patch test, prediction of wasp sting outcomes, a rapidly disintegrating epinephrine tablet, and anti-IL-5 for eosinophilic diseases.
See also
Allergic shiner
GWAS in allergy
Histamine intolerance
List of allergens
Oral allergy syndrome
References
External links
Effects of external causes
Immunology
Respiratory diseases
Immune system
Immune system disorders | Allergy | [
"Biology"
] | 9,506 | [
"Organ systems",
"Immunology",
"Immune system"
] |
55,335 | https://en.wikipedia.org/wiki/Discounting | In finance, discounting is a mechanism in which a debtor obtains the right to delay payments to a creditor, for a defined period of time, in exchange for a charge or fee. Essentially, the party that owes money in the present purchases the right to delay the payment until some future date. This transaction is based on the fact that most people prefer current interest to delayed interest because of mortality effects, impatience effects, and salience effects. The discount, or charge, is the difference between the original amount owed in the present and the amount that has to be paid in the future to settle the debt.
The discount is usually associated with a discount rate, which is also called the discount yield. The discount yield is the proportional share of the initial amount owed (initial liability) that must be paid to delay payment for 1 year.
Since a person can earn a return on money invested over some period of time, most economic and financial models assume the discount yield is the same as the rate of return the person could receive by investing this money elsewhere (in assets of similar risk) over the given period of time covered by the delay in payment. The concept is associated with the opportunity cost of not having use of the money for the period of time covered by the delay in payment. The relationship between the discount yield and the rate of return on other financial assets is usually discussed in economic and financial theories involving the inter-relation between various market prices, and the achievement of Pareto optimality through the operations in the capitalistic price mechanism, as well as in the discussion of the efficient (financial) market hypothesis. The person delaying the payment of the current liability is essentially compensating the person to whom he/she owes money for the lost revenue that could be earned from an investment during the time period covered by the delay in payment. Accordingly, it is the relevant "discount yield" that determines the "discount", and not the other way around.
As indicated, the rate of return is usually calculated in accordance with an annual return on investment. Since an investor earns a return on the original principal amount of the investment as well as on any prior period investment income, investment earnings are "compounded" as time advances. Therefore, considering the fact that the "discount" must match the benefits obtained from a similar investment asset, the "discount yield" must be used within the same compounding mechanism to negotiate an increase in the size of the "discount" whenever the time period of the payment is delayed or extended. The "discount rate" is the rate at which the "discount" must grow as the delay in payment is extended. This fact is directly tied into the time value of money and its calculations.
The "time value of money" indicates there is a difference between the "future value" of a payment and the "present value" of the same payment. The rate of return on investment should be the dominant factor in evaluating the market's assessment of the difference between the future value and the present value of a payment; and it is the market's assessment that counts the most. Therefore, the "discount yield", which is predetermined by a related return on investment that is found in the different markets in the financial sector, is what is used within the time-value-of-money calculations to determine the "discount" required to delay payment of a financial liability for a given period of time.
Basic calculation
If we consider the value of the original payment presently due to be P, and the debtor wants to delay the payment for t years, then a market rate of return denoted r on a similar investment asset means the future value of P is P(1 + r)^t, and the discount can be calculated as
discount = P(1 + r)^t − P
We wish to calculate the present value, also known as the "discounted value", of a payment. Note that a payment made in the future is worth less than the same payment made today, which could immediately be deposited into a bank account to earn interest or be invested in other assets. Hence we must discount future payments. For a payment F that is to be made t years in the future, we calculate the present value as
PV = F / (1 + r)^t
Suppose that we wanted to find the present value, denoted PV, of $100 that will be received in five years' time. If the interest rate r is 12% per year, then
PV = $100 / (1.12)^5 ≈ $56.74
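These calculations can be checked with a minimal Python sketch; the function names are our own, and the figures mirror the example above:

```python
def future_value(p, r, t):
    # Future value of a present amount p compounded annually at rate r for t years.
    return p * (1 + r) ** t

def present_value(f, r, t):
    # Present (discounted) value of a future amount f due in t years at rate r.
    return f / (1 + r) ** t

# The example from the text: $100 received in five years, discounted at 12% per year.
print(round(present_value(100, 0.12, 5), 2))        # 56.74
# The corresponding discount on delaying a $100 payment by five years:
print(round(future_value(100, 0.12, 5) - 100, 2))   # 76.23
```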
Discount rate
The discount rate which is used in financial calculations is usually chosen to be equal to the cost of capital. The cost of capital, in a financial market equilibrium, will be the same as the market rate of return on the financial asset mixture the firm uses to finance capital investment. Some adjustment may be made to the discount rate to take account of risks associated with uncertain cash flows, with other developments.
The discount rates typically applied to different types of companies show significant differences:
Start-ups seeking money: 50–100%
Early start-ups: 40–60%
Late start-ups: 30–50%
Mature companies: 10–25%
The higher discount rate for start-ups reflects the various disadvantages they face, compared to established companies:
Reduced marketability of ownership stakes, because the stock is not traded publicly
Small number of investors willing to invest
High risks associated with start-ups
Overly optimistic forecasts by enthusiastic founders
One method that looks into a correct discount rate is the capital asset pricing model. This model takes into account three variables that make up the discount rate:
1. Risk free rate: The percentage of return generated by investing in risk free securities such as government bonds.
2. Beta: The measurement of how a company's stock price reacts to a change in the market. A beta higher than 1 means that a change in share price is exaggerated compared to the rest of shares in the same market. A beta less than 1 means that the share is stable and not very responsive to changes in the market. Less than 0 means that a share is moving in the opposite direction from the rest of the shares in the same market.
3. Equity market risk premium: The return on investment that investors require above the risk free rate.
Discount rate = (risk free rate) + beta * (equity market risk premium)
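As a quick illustration, the CAPM formula above translates directly into code; the input figures here (a 3% risk-free rate, a beta of 1.2, and a 5% equity market risk premium) are assumptions for the example, not values from the source:

```python
def capm_discount_rate(risk_free_rate, beta, equity_market_risk_premium):
    # Discount rate per the capital asset pricing model (CAPM).
    return risk_free_rate + beta * equity_market_risk_premium

rate = capm_discount_rate(0.03, 1.2, 0.05)
print(f"{rate:.1%}")   # 9.0%
```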
Discount factor
The discount factor, DF(T), is the factor by which a future cash flow must be multiplied in order to obtain the present value. For a zero-rate (also called spot rate) r, taken from a yield curve, and a time to cash flow T (in years), the discount factor is:
DF(T) = 1 / (1 + rT)
In the case where the only discount rate one has is not a zero-rate (neither taken from a zero-coupon bond nor converted from a swap rate to a zero-rate through bootstrapping) but an annually-compounded rate (for example if the benchmark is a US Treasury bond with annual coupons) and one only has its yield to maturity, one would use an annually-compounded discount factor:
DF(T) = 1 / (1 + r)^T
However, when operating in a bank, where the amount the bank can lend (and therefore get interest) is linked to the value of its assets (including accrued interest), traders usually use daily compounding to discount cash flows. Indeed, even if the interest of the bonds it holds (for example) is paid semi-annually, the value of its book of bonds will increase daily, thanks to accrued interest being accounted for, and therefore the bank will be able to re-invest this daily accrued interest (by lending additional money or buying more financial products). In that case, the discount factor is then (if the usual money market day count convention for the currency is ACT/360, in the case of currencies such as United States dollar, euro, Japanese yen), with r the zero-rate and T the time to cash flow in years:
DF(T) = 1 / (1 + r/360)^(360T)
or, in case the market convention for the currency being discounted is ACT/365 (AUD, CAD, GBP):
DF(T) = 1 / (1 + r/365)^(365T)
Sometimes, for manual calculation, the continuously-compounded hypothesis is a close-enough approximation of the daily-compounding hypothesis, and makes calculation easier (even though its application is limited to instruments such as financial derivatives). In that case, the discount factor is:
DF(T) = e^(−rT)
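The conventions above can be compared side by side in a short Python sketch; it assumes the rate r is quoted per annum under each respective convention, and the function names are our own:

```python
import math

def df_annual(r, t):
    # Annually-compounded discount factor.
    return 1.0 / (1.0 + r) ** t

def df_daily_act360(r, t):
    # Daily compounding under the ACT/360 money-market convention (USD, EUR, JPY).
    return 1.0 / (1.0 + r / 360.0) ** (360.0 * t)

def df_daily_act365(r, t):
    # Daily compounding under the ACT/365 convention (AUD, CAD, GBP).
    return 1.0 / (1.0 + r / 365.0) ** (365.0 * t)

def df_continuous(r, t):
    # Continuously-compounded approximation of daily compounding.
    return math.exp(-r * t)

r, t = 0.05, 2.0
for f in (df_annual, df_daily_act360, df_daily_act365, df_continuous):
    print(f.__name__, round(f(r, t), 6))
```

The daily-compounded and continuous factors agree to several decimal places, which is why the continuous form is an acceptable shortcut for manual work.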
Other discounts
For discounts in marketing, see discounts and allowances, sales promotion, and pricing. The article on discounted cash flow provides an example about discounting and risks in real estate investments.
See also
Coupon
Coupon (bond)
High-low pricing
Hyperbolic discounting
References
Notes
External links
Actuarial science
Loans | Discounting | [
"Mathematics"
] | 1,716 | [
"Applied mathematics",
"Actuarial science"
] |
55,345 | https://en.wikipedia.org/wiki/Net%20present%20value | The net present value (NPV) or net present worth (NPW) is a way of measuring the value of an asset that has cashflow by adding up the present value of all the future cash flows that asset will generate. The present value of a cash flow depends on the interval of time between now and the cash flow because of the Time value of money (which includes the annual effective discount rate). It provides a method for evaluating and comparing capital projects or financial products with cash flows spread over time, as in loans, investments, payouts from insurance contracts plus many other applications.
Time value of money dictates that time affects the value of cash flows. For example, a lender may offer 99 cents for the promise of receiving $1.00 a month from now, but the promise to receive that same dollar 20 years in the future would be worth much less today to that same person (lender), even if the payback in both cases was equally certain. This decrease in the current value of future cash flows is based on a chosen rate of return (or discount rate). If for example there exists a time series of identical cash flows, the cash flow in the present is the most valuable, with each future cash flow becoming less valuable than the previous cash flow. A cash flow today is more valuable than an identical cash flow in the future because a present flow can be invested immediately and begin earning returns, while a future flow cannot.
NPV is determined by calculating the costs (negative cash flows) and benefits (positive cash flows) for each period of an investment. After the cash flow for each period is calculated, the present value (PV) of each one is achieved by discounting its future value (see Formula) at a periodic rate of return (the rate of return dictated by the market). NPV is the sum of all the discounted future cash flows.
Because of its simplicity, NPV is a useful tool to determine whether a project or investment will result in a net profit or a loss. A positive NPV results in profit, while a negative NPV results in a loss. The NPV measures the excess or shortfall of cash flows, in present value terms, above the cost of funds. In a theoretical situation of unlimited capital budgeting, a company should pursue every investment with a positive NPV. However, in practical terms a company's capital constraints limit investments to projects with the highest NPV whose cost cash flows, or initial cash investment, do not exceed the company's capital. NPV is a central tool in discounted cash flow (DCF) analysis and is a standard method for using the time value of money to appraise long-term projects. It is widely used throughout economics, financial analysis, and financial accounting.
In the case when all future cash flows are positive, or incoming (such as the principal and coupon payment of a bond) the only outflow of cash is the purchase price, the NPV is simply the PV of future cash flows minus the purchase price (which is its own PV). NPV can be described as the "difference amount" between the sums of discounted cash inflows and cash outflows. It compares the present value of money today to the present value of money in the future, taking inflation and returns into account.
The NPV of a sequence of cash flows takes as input the cash flows and a discount rate or discount curve and outputs a present value, which is the current fair price. The converse process in discounted cash flow (DCF) analysis takes a sequence of cash flows and a price as input and as output the discount rate, or internal rate of return (IRR) which would yield the given price as NPV. This rate, called the yield, is widely used in bond trading.
Formula
Each cash inflow/outflow is discounted back to its present value (PV). Then all are summed such that NPV is the sum of all terms:
NPV(i, N) = Σ (from t = 0 to N) R_t / (1 + i)^t
where:
t is the time of the cash flow
i is the discount rate, i.e. the return that could be earned per unit of time on an investment with similar risk
R_t is the net cash flow i.e. cash inflow – cash outflow, at time t. For educational purposes, R_0 is commonly placed to the left of the sum to emphasize its role as (minus) the investment.
1 / (1 + i)^t is the discount factor, also known as the present value factor.
Where the cash flows are equal in amount, the discount factor can be multiplied by the annual net cash inflows and reduced by the initial cash outlay to give the present value; in cases where the cash flows are not equal in amount, the previous formula is used to determine the present value of each cash flow separately. Any cash flow within the first 12 months is not discounted for NPV purposes; nevertheless, the usual initial investments during the first year, R_0, are summed up as a negative cash flow.
The NPV can also be thought of as the difference between the discounted benefits and costs over time. As such, the NPV can also be written as:
NPV = Σ B_t / (1 + i)^t − Σ C_t / (1 + i)^t
where:
B_t are the benefits or cash inflows
C_t are the costs or cash outflows
Given the (period, cash inflows, cash outflows) shown by (t, B_t, C_t), where N is the total number of periods, the net present value is given by:
NPV(i, N) = Σ (from t = 0 to N) (B_t − C_t) / (1 + i)^t
where:
B_t are the benefits or cash inflows at time t.
C_t are the costs or cash outflows at time t.
The NPV can be rewritten using the net cash flow R_t = B_t − C_t in each time period as:
NPV(i, N) = Σ (from t = 0 to N) R_t / (1 + i)^t
By convention, the initial period occurs at time t = 0, where cash flows in successive periods are then discounted from t = 1, t = 2, and so on. Furthermore, all future cash flows during a period are assumed to be at the end of each period. For a constant cash flow R, the net present value is a finite geometric series and is given by:
NPV(i, N, R) = R × (1 − (1 + i)^−(N+1)) / (1 − (1 + i)^−1)
Inclusion of the R_0 term is important in the above formulae. A typical capital project involves a large negative cashflow (the initial investment) with positive future cashflows (the return on the investment). A key assessment is whether, for a given discount rate, the NPV is positive (profitable) or negative (loss-making). The IRR is the discount rate for which the NPV is exactly 0.
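A minimal Python sketch of the summation above; the cash-flow figures are illustrative assumptions:

```python
def npv(rate, cashflows):
    # Net present value of cashflows[t] occurring at t = 0, 1, 2, ... periods.
    return sum(r_t / (1 + rate) ** t for t, r_t in enumerate(cashflows))

# An initial outlay of 1,000 followed by three end-of-period inflows of 400,
# discounted at 10% per period:
print(round(npv(0.10, [-1000, 400, 400, 400]), 2))   # -5.26
```

Because the NPV is negative, the project returns slightly less than 10% per period; its IRR is the rate at which this sum would be exactly zero.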
Capital efficiency
The NPV method can be slightly adjusted to calculate how much money is contributed to a project's NPV per dollar invested. This is known as the capital efficiency ratio. The formula for the net present value per dollar invested (NPVI) is given below:
NPVI = [Σ R_t / (1 + i)^t] / [Σ C_t / (1 + i)^t]
where:
R_t is the net cash flow, i.e. cash inflow – cash outflow, at time t.
C_t are the net cash outflows at time t.
Example
If the discounted benefits across the life of a project are and the discounted net costs across the life of a project are then the NPVI is:
That is for every dollar invested in the project, a contribution of is made to the project's NPV.
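A sketch of this calculation, assuming the definition above (NPV divided by the discounted net cash outflows); the figures are illustrative:

```python
def npvi(rate, inflows, outflows):
    # Capital efficiency: NPV per unit of discounted investment.
    pv_in = sum(b / (1 + rate) ** t for t, b in enumerate(inflows))
    pv_out = sum(c / (1 + rate) ** t for t, c in enumerate(outflows))
    return (pv_in - pv_out) / pv_out

# A 1,000 outlay at t = 0 returning 600 at the end of each of three periods:
print(round(npvi(0.10, [0, 600, 600, 600], [1000, 0, 0, 0]), 3))   # 0.492
```

Here every dollar invested contributes about 49 cents to the project's NPV.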
Alternative discounting frequencies
The NPV formula assumes that the benefits and costs occur at the end of each period, resulting in a more conservative NPV. However, it may be that the cash inflows and outflows occur at the beginning of the period or in the middle of the period.
The NPV formula for mid-period discounting is given by:
NPV(i, N) = R_0 + Σ (from t = 1 to N) R_t / (1 + i)^(t − 0.5)
Over a project's lifecycle, cash flows are typically spread across each period (for example spread across each year), and as such the middle of the year represents the average point in time in which these cash flows occur. Hence mid period discounting typically provides a more accurate, although less conservative NPV.
The NPV formula using beginning-of-period discounting is given by:
NPV(i, N) = R_0 + Σ (from t = 1 to N) R_t / (1 + i)^(t − 1)
This results in the least conservative NPV.
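The three conventions can be compared in a short sketch, where an exponent offset of 0, 0.5, or 1 period encodes end-, mid-, and beginning-of-period discounting respectively; the cash flows are illustrative:

```python
def npv_timing(rate, cashflows, timing="end"):
    # NPV with end-, mid-, or beginning-of-period discounting.
    # cashflows[0] is the undiscounted flow at t = 0.
    offset = {"end": 0.0, "mid": 0.5, "beginning": 1.0}[timing]
    return cashflows[0] + sum(
        r_t / (1 + rate) ** (t - offset)
        for t, r_t in enumerate(cashflows) if t > 0
    )

flows = [-1000, 400, 400, 400]
for timing in ("end", "mid", "beginning"):
    print(timing, round(npv_timing(0.10, flows, timing), 2))
# end -5.26, mid 43.33, beginning 94.21 (most to least conservative)
```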
The discount rate
The rate used to discount future cash flows to the present value is a key variable of this process.
A firm's weighted average cost of capital (after tax) is often used, but many people believe that it is appropriate to use higher discount rates to adjust for risk, opportunity cost, or other factors. A variable discount rate with higher rates applied to cash flows occurring further along the time span might be used to reflect the yield curve premium for long-term debt.
Another approach to choosing the discount rate factor is to decide the rate which the capital needed for the project could return if invested in an alternative venture. If, for example, the capital required for Project A can earn 5% elsewhere, use this discount rate in the NPV calculation to allow a direct comparison to be made between Project A and the alternative. Related to this concept is to use the firm's reinvestment rate. Re-investment rate can be defined as the rate of return for the firm's investments on average. When analyzing projects in a capital constrained environment, it may be appropriate to use the reinvestment rate rather than the firm's weighted average cost of capital as the discount factor. It reflects opportunity cost of investment, rather than the possibly lower cost of capital.
An NPV calculated using variable discount rates (if they are known for the duration of the investment) may better reflect the situation than one calculated from a constant discount rate for the entire investment duration. Refer to the tutorial article written by Samuel Baker for more detailed relationship between the NPV and the discount rate.
For some professional investors, their investment funds are committed to target a specified rate of return. In such cases, that rate of return should be selected as the discount rate for the NPV calculation. In this way, a direct comparison can be made between the profitability of the project and the desired rate of return.
To some extent, the selection of the discount rate is dependent on the use to which it will be put. If the intent is simply to determine whether a project will add value to the company, using the firm's weighted average cost of capital may be appropriate. If trying to decide between alternative investments in order to maximize the value of the firm, the corporate reinvestment rate would probably be a better choice.
Risk-adjusted net present value (rNPV)
Using variable rates over time, or discounting "guaranteed" cash flows differently from "at risk" cash flows, may be a superior methodology but is seldom used in practice. Using the discount rate to adjust for risk is often difficult to do in practice (especially internationally) and is difficult to do well.
An alternative to using discount factor to adjust for risk is to explicitly correct the cash flows for the risk elements using risk-adjusted net present value (rNPV) or a similar method, then discount at the firm's rate.
Use in decision making
NPV is an indicator of how much value an investment or project adds to the firm. With a particular project, if R_t is a positive value, the project is in the status of positive cash inflow at time t. If R_t is a negative value, the project is in the status of discounted cash outflow at time t. Appropriately risked projects with a positive NPV could be accepted. This does not necessarily mean that they should be undertaken since NPV at the cost of capital may not account for opportunity cost, i.e., comparison with other available investments. In financial theory, if there is a choice between two mutually exclusive alternatives, the one yielding the higher NPV should be selected. A positive net present value indicates that the projected earnings generated by a project or investment (in present dollars) exceed the anticipated costs (also in present dollars). This concept is the basis for the Net Present Value Rule, which dictates that the only investments that should be made are those with positive NPVs.
An investment with a positive NPV is profitable, but one with a negative NPV will not necessarily result in a net loss: it is just that the internal rate of return of the project falls below the required rate of return.
Advantages and disadvantages of using Net Present Value
NPV is an indicator for project investments, and has several advantages and disadvantages for decision-making.
Advantages
The NPV includes all relevant time and cash flows for the project by considering the time value of money, which is consistent with the goal of wealth maximization by creating the highest wealth for shareholders.
The NPV formula accounts for cash flow timing patterns and size differences for each project, and provides an easy, unambiguous dollar value comparison of different investment options.
The NPV can be easily calculated using modern spreadsheets, under the assumption that the discount rate and future cash flows are known. For a firm considering investing in multiple projects, the NPV has the benefit of being additive. That is, the NPVs of different projects may be aggregated to calculate the highest wealth creation, based on the available capital that can be invested by a firm.
Disadvantages
The NPV method has several disadvantages.
The NPV approach does not consider hidden costs and project size. Thus, investment decisions on projects with substantial hidden costs may not be accurate.
Relies on input parameters such as knowledge of future cash flows
The NPV is heavily dependent on knowledge of future cash flows, their timing, the length of a project, the initial investment required, and the discount rate. Hence, it can only be accurate if these input parameters are correct, although sensitivity analyses can be undertaken to examine how the NPV changes as the input variables are changed, thus reducing the uncertainty of the NPV.
Relies on choice of discount rate and discount factor
The accuracy of the NPV method relies heavily on the choice of a discount rate and hence discount factor, representing an investment's true risk premium. The discount rate is assumed to be constant over the life of an investment; however, discount rates can change over time. For example, discount rates can change as the cost of capital changes. There are other drawbacks to the NPV method, such as the fact that it displays a lack of consideration for a project’s size and the cost of capital.
Lack of consideration of non-financial metrics
The NPV calculation is purely financial and thus does not consider non-financial metrics that may be relevant to an investment decision.
Difficulty in comparing mutually exclusive projects
Comparing mutually exclusive projects with different investment horizons can be difficult. If unequal projects are assumed to repeat so that they share a common investment horizon, the NPV approach can be used to compare them over that optimal duration.
Interpretation as integral transform
The time-discrete formula of the net present value
NPV(i, N) = Σ (from t = 0 to N) R_t / (1 + i)^t
can also be written in a continuous variation
NPV(i) = ∫ (from t = 0 to ∞) r(t) · (1 + i)^(−t) dt
where
r(t) is the rate of flowing cash given in money per time, and r(t) = 0 when the investment is over.
Net present value can be regarded as the Laplace- (respectively, Z-) transformed cash flow with the integral operator including the complex number s, which resembles the interest rate i from the real number space, or more precisely s = ln(1 + i).
From this follow simplifications known from cybernetics, control theory and system dynamics. Imaginary parts of the complex number s describe the oscillating behaviour (compare with the pork cycle, cobweb theorem, and phase shift between commodity price and supply offer) whereas real parts are responsible for representing the effect of compound interest (compare with damping).
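A numerical sketch of the continuous formulation, assuming the integral form above with s = ln(1 + i); the constant cash-flow rate and horizon are illustrative:

```python
from math import exp, log

i = 0.10
s = log(1 + i)   # the transform variable from the text

def r(t):
    # Cash flowing at a constant 100 per year for 12 years, then stopping.
    return 100.0 if t < 12 else 0.0

# Riemann-sum approximation of the integral of r(t) * exp(-s * t).
dt = 0.001
npv = sum(r(k * dt) * exp(-s * k * dt) * dt for k in range(int(12 / dt)))
print(round(npv, 2))   # close to the closed-form value of about 714.9
```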
Example
A corporation must decide whether to introduce a new product line. The company will have immediate costs of 100,000 at t = 0. Recall, a cost is a negative for outgoing cash flow, thus this cash flow is represented as −100,000. The company assumes the product will provide equal benefits of 10,000 for each of 12 years beginning at t = 1. For simplicity, assume the company will have no outgoing cash flows after the initial 100,000 cost. This also makes the simplifying assumption that the net cash received or paid is lumped into a single transaction occurring on the last day of each year. At the end of the 12 years the product no longer provides any cash flow and is discontinued without any additional costs. Assume that the effective annual discount rate is 10%.
The present value (value at t = 0) can be calculated for each year:
The total present value of the incoming cash flows is 68,136.91. The total present value of the outgoing cash flows is simply the 100,000 at time t = 0.
Thus:
NPV = 68,136.91 − 100,000 = −31,863.09
In this example, observe that as t increases, the present value of each cash flow at t decreases. For example, the final incoming cash flow has a future value of 10,000 at t = 12 but has a present value (at t = 0) of 3,186.31. The opposite of discounting is compounding. Taking the example in reverse, it is the equivalent of investing 3,186.31 at t = 0 (the present value) at an interest rate of 10% compounded for 12 years, which results in a cash flow of 10,000 at t = 12 (the future value).
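The example's figures can be reproduced with a few lines of Python (a verification sketch, not part of the original):

```python
rate = 0.10
outflow = -100_000                 # at t = 0
inflows = [10_000] * 12            # at t = 1 .. 12

pv_inflows = sum(cf / (1 + rate) ** t for t, cf in enumerate(inflows, start=1))
print(round(pv_inflows, 2))            # 68136.91
print(round(outflow + pv_inflows, 2))  # -31863.09
```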
The importance of NPV becomes clear in this instance. Although the incoming cash flows () appear to exceed the outgoing cash flow (100,000), the future cash flows are not adjusted using the discount rate. Thus, the project appears misleadingly profitable. When the cash flows are discounted however, it indicates the project would result in a net loss of 31,863.09. Thus, the NPV calculation indicates that this project should be disregarded because investing in this project is the equivalent of a loss of 31,863.09 at . The concept of time value of money indicates that cash flows in different periods of time cannot be accurately compared unless they have been adjusted to reflect their value at the same period of time (in this instance, ). It is the present value of each future cash flow that must be determined in order to provide any meaningful comparison between cash flows at different periods of time. There are a few inherent assumptions in this type of analysis:
The investment horizon of all possible investment projects considered are equally acceptable to the investor (e.g. a 3-year project is not necessarily preferable vs. a 20-year project.)
The 10% discount rate is the appropriate (and stable) rate to discount the expected cash flows from each project being considered. Each project is assumed equally speculative.
The shareholders cannot get above a 10% return on their money if they were to directly assume an equivalent level of risk. (If the investor could do better elsewhere, no projects should be undertaken by the firm, and the excess capital should be turned over to the shareholder through dividends and stock repurchases.)
More realistic problems would also need to consider other factors, generally including: smaller time buckets, the calculation of taxes (including the cash flow timing), inflation, currency exchange fluctuations, hedged or unhedged commodity costs, risks of technical obsolescence, potential future competitive factors, uneven or unpredictable cash flows, and a more realistic salvage value assumption, as well as many others.
A simpler example of the net present value of incoming cash flow over a set period of time would be winning a Powerball lottery of . If one does not select the "CASH" option, they will be paid per year for 20 years, a total of ; however, if one does select the "CASH" option, they will receive a one-time lump sum payment of approximately , the NPV of paid over time. See "other factors" above that could affect the payment amount. Both scenarios are before taxes.
Common pitfalls
If, for example, the R_t are generally negative late in the project (e.g., an industrial or mining project might have clean-up and restoration costs), then at that stage the company owes money, so a high discount rate is not cautious but too optimistic. Some people see this as a problem with NPV. A way to avoid this problem is to include explicit provision for financing any losses after the initial investment, that is, explicitly calculate the cost of financing such losses.
Another common pitfall is to adjust for risk by adding a premium to the discount rate. Whilst a bank might charge a higher rate of interest for a risky project, that does not mean that this is a valid approach to adjusting a net present value for risk, although it can be a reasonable approximation in some specific cases. One reason such an approach may not work well can be seen from the following: if some risk is incurred resulting in some losses, then a discount rate in the NPV will reduce the effect of such losses below their true financial cost. A rigorous approach to risk requires identifying and valuing risks explicitly, e.g., by actuarial or Monte Carlo techniques, and explicitly calculating the cost of financing any losses incurred.
Yet another issue can result from the compounding of the risk premium. R is a composite of the risk free rate and the risk premium. As a result, future cash flows are discounted by both the risk-free rate as well as the risk premium and this effect is compounded by each subsequent cash flow. This compounding results in a much lower NPV than might be otherwise calculated. The certainty equivalent model can be used to account for the risk premium without compounding its effect on present value.
Another issue with relying on NPV is that it does not provide an overall picture of the gain or loss of executing a certain project. To see a percentage gain relative to the investments for the project, usually, Internal rate of return or other efficiency measures are used as a complement to NPV.
Non-specialist users frequently make the error of computing NPV based on cash flows after interest. This is wrong because it double counts the time value of money. Free cash flow should be used as the basis for NPV computations.
When using Microsoft Excel, the "=NPV(...)" formula makes two assumptions that result in an incorrect solution. The first is that the amount of time between each item in the input array is constant and equidistant (e.g., 30 days of time between item 1 and item 2), which may not always be correct based on the cash flow that is being discounted. The second is that the function assumes the item in the first position of the array is period 1, not period zero. This then results in incorrectly discounting all array items by one extra period. The easiest fix to both of these errors is to use the "=XNPV(...)" formula.
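The second pitfall can be illustrated without Excel. The sketch below mimics the behaviour described (treating the first array item as period 1) and shows the usual workaround of keeping the period-0 flow outside the call; the figures are illustrative:

```python
def excel_style_npv(rate, values):
    # Mimics the described =NPV() behaviour: the first value is discounted
    # as period 1, not period 0.
    return sum(v / (1 + rate) ** t for t, v in enumerate(values, start=1))

flows = [-1000, 400, 400, 400]
wrong = excel_style_npv(0.10, flows)                  # whole series shifted by one period
right = flows[0] + excel_style_npv(0.10, flows[1:])   # keep the t = 0 flow undiscounted
print(round(wrong, 2), round(right, 2))   # -4.78 -5.26
```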
Software support
Many computer-based spreadsheet programs have built-in formulae for PV and NPV.
History
Net present value as a valuation methodology dates at least to the 19th century. Karl Marx refers to NPV as fictitious capital, and the calculation as "capitalising," writing:
In mainstream neo-classical economics, NPV was formalized and popularized by Irving Fisher, in his 1907 The Rate of Interest and became included in textbooks from the 1950s onwards, starting in finance texts.
Alternative capital budgeting methods
Adjusted present value (APV): adjusted present value, is the net present value of a project if financed solely by ownership equity plus the present value of all the benefits of financing.
Accounting rate of return (ARR): a ratio similar to IRR and MIRR
Cost-benefit analysis: which includes issues other than cash, such as time savings.
Internal rate of return (IRR): which calculates the rate of return of a project while disregarding the absolute amount of money to be gained.
Modified internal rate of return (MIRR): similar to IRR, but it makes explicit assumptions about the reinvestment of the cash flows. Sometimes it is called Growth Rate of Return.
Payback period: which measures the time required for the cash inflows to equal the original outlay. It measures risk, not return.
Real option: which attempts to value managerial flexibility that is assumed away in NPV.
Equivalent annual cost (EAC): a capital budgeting technique that is useful in comparing two or more projects with different lifespans.
See also
Profitability index
References
Mathematical finance
Investment
Engineering economics
Management accounting
Capital budgeting
Valuation (finance) | Net present value | [
"Mathematics",
"Engineering"
] | 4,912 | [
"Applied mathematics",
"Engineering economics",
"Mathematical finance"
] |
55,348 | https://en.wikipedia.org/wiki/Elevator%20music | Elevator music (also known as Muzak, piped music, or lift music) is a type of background music played in elevators, in rooms where many people come together for reasons other than listening to music, and during telephone calls when placed on hold. Before the emergence of the Internet, such music was often "piped" to businesses and homes through telephone lines, private networks or targeted radio broadcasting (as in the BBC's Music While You Work, where powerful speakers were set up in factories to make the broadcast audible).
There is no specific sound associated with elevator music, but it usually involves simple instrumental themes from "soft" popular music, or "light" classical music being performed by slow strings. This type of music was produced, for instance, by the Mantovani Orchestra, and conductors such as Franck Pourcel and James Last, peaking in popularity around the 1970s.
More recent types of elevator music may be computer-generated, with the actual score being composed entirely algorithmically.
A typical example of elevator music is the work of the Hungarian band Djabe, which released all of its records in audiophile quality.
Other uses
The term can also be used for kinds of easy listening, lounge, piano solo, jazz or middle of the road music, or what are known as "beautiful music" radio stations.
This style of music is sometimes used to comedic effect in mass media such as film, where intense or dramatic scenes may be interrupted or interspersed with such anodyne music while characters use an elevator. Some video games have used music similarly, such as Metal Gear Solid 4, where a few elevator music-themed tracks are accessible on the in-game iPod, as well as System Shock, Rise of the Triad: Dark War, GoldenEye 007, Mass Effect, and Earthworm Jim.
References
Elevators
Easy listening music
Industrial music services | Elevator music | [
"Engineering"
] | 368 | [
"Building engineering",
"Elevators"
] |
55,349 | https://en.wikipedia.org/wiki/RAM%20drive | A RAM drive (also called a RAM disk) is a block of random-access memory (primary storage or volatile memory) that a computer's software is treating as if the memory were a disk drive (secondary storage). RAM drives provide high-performance temporary storage for demanding tasks and protect non-volatile storage devices from wearing down, since RAM is not prone to wear from writing, unlike non-volatile flash memory.
It is sometimes referred to as a virtual RAM drive or software RAM drive to distinguish it from a hardware RAM drive that uses separate hardware containing RAM, which is a type of battery-backed solid-state drive.
Historically, mass storage devices based on primary storage were conceived to bridge the performance gap between internal memory and secondary storage devices. With the advent of solid-state drives, this advantage lost most of its appeal. However, solid-state devices suffer wear from frequent writing, while writes to primary memory cause no wear, or far less. RAM drives therefore still offer an advantage for storing frequently changing data, such as temporary or cached information.
Performance
The performance of a RAM drive is generally orders of magnitude faster than other forms of digital storage, such as SSD, tape, optical, hard disk, and floppy drives. This performance gain is due to multiple factors, including access time, maximum throughput, and file system characteristics.
File access time is greatly reduced since a RAM drive is solid state (no moving parts). A physical hard drive, optical drive (e.g., CD-ROM, DVD, and Blu-ray) or other media (e.g., magnetic bubble memory, acoustic storage, magnetic tape) must move the read/write mechanism to a particular position before reading or writing can occur. RAM drives can access data with only the address, eliminating this latency.
Second, the maximum throughput of a RAM drive is limited by the speed of the RAM, the data bus, and the CPU of the computer. Other forms of storage media are further limited by the speed of the storage bus, such as IDE (PATA), SATA, USB or FireWire. Compounding this limitation is the speed of the actual mechanics of the drive motors, heads, or eyes.
Third, the file system in use, such as NTFS, HFS, UFS, ext2, etc., uses extra accesses, reads and writes to the drive, which although small, can add up quickly, especially in the event of many small files vs. few larger files (temporary internet folders, web caches, etc.).
Because the storage is in RAM, it is volatile memory, which means it will be lost in the event of power loss, whether intentional (computer reboot or shutdown) or accidental (power failure or system crash). This is, in general, a weakness (the data must periodically be backed up to a persistent-storage medium to avoid loss), but is sometimes desirable: for example, when working with a decrypted copy of an encrypted file, or using the RAM drive to store the system's temporary files.
In many cases, the data stored on the RAM drive is created from data permanently stored elsewhere, for faster access, and is re-created on the RAM drive when the system reboots.
Apart from the risk of data loss, the major limitation of RAM drives is capacity, which is constrained by the amount of installed RAM. Multi-terabyte SSD storage has become common, but RAM is still measured in gigabytes.
RAM drives use normal system memory as if it were a partition on a physical hard drive rather than accessing the data bus normally used for secondary storage. Though RAM drives can often be supported directly in the operating system via special mechanisms in the OS kernel, it is generally simpler to access a RAM drive through a virtual device driver. This makes the non-disk nature of RAM drives invisible to both the OS and applications.
Usually no battery backup is needed due to the temporary nature of the information stored in the RAM drive, but an uninterruptible power supply can keep the system running during a short power outage.
Some RAM drives use a compressed file system such as cramfs to allow compressed data to be accessed on the fly, without decompressing it first. This is convenient because RAM drives are often small due to the higher price per byte than conventional hard drive storage.
History and operating system specifics
The first software RAM drive for microcomputers was invented and written by Jerry Karlin in the UK in 1979/80. The software, known as the Silicon Disk System, was further developed into a commercial product and marketed by JK Systems Research, which became Microcosm Research Ltd when the company was joined by Peter Cheesewright of Microcosm Ltd. The idea was to enable the early microcomputers to use more RAM than the CPU could directly address. Making bank-switched RAM behave like a disk drive was much faster than physical disk drives, especially before hard drives were readily available on such machines. The Silicon Disk was launched in 1980, initially for the CP/M operating system and later for MS-DOS.
The 128 kB Atari 130XE (with DOS 2.5) and Commodore 128 natively support RAM drives, as does ProDOS for the Apple II. On systems with 128 kB or more of RAM, ProDOS automatically creates a RAM drive named /RAM.
IBM added a RAM drive named VDISK.SYS to PC DOS (version 3.0) in August 1984, which was the first DOS component to use extended memory. VDISK.SYS was not available in Microsoft's MS-DOS as it, unlike most components of early versions of PC DOS, was written by IBM. Microsoft included the similar program RAMDRIVE.SYS in MS-DOS 3.2 (released in 1986), which could also use expanded memory. It was discontinued in Windows 7. DR-DOS and the DR family of multi-user operating systems also came with a RAM disk named VDISK.SYS. In Multiuser DOS, the RAM disk defaults to the drive letter M: (for memory drive). AmigaOS has had a built in RAM drive since the release of version 1.1 in 1985 and still has it in AmigaOS 4.1 (2010). Apple Computer added the functionality to the Apple Macintosh with System 7's Memory control panel in 1991, and kept the feature through the life of Mac OS 9. Mac OS X users can use the hdid, newfs (or newfs hfs) and mount utilities to create, format and mount a RAM drive.
A RAM drive innovation introduced in 1986 but made generally available in 1987 by Perry Kivolowitz for AmigaOS was the ability of the RAM drive to survive most crashes and reboots. Called the ASDG Recoverable Ram Disk, the device survived reboots by allocating memory dynamically in the reverse order of default memory allocation (a feature supported by the underlying OS) so as to reduce memory fragmentation. A "super-block" was written with a unique signature which could be located in memory upon reboot. The super-block, and all other RRD disk "blocks" maintained check sums to enable the invalidation of the disk if corruption was detected. At first, the ASDG RRD was locked to ASDG memory boards and used as a selling feature. Later, the ASDG RRD was made available as shareware carrying a suggested donation of 10 dollars. The shareware version appeared on Fred Fish Disks 58 and 241. AmigaOS itself would gain a Recoverable Ram Disk (called "RAD") in version 1.3.
Many Unix and Unix-like systems provide some form of RAM drive functionality, such as tmpfs on Linux, or md(4) on FreeBSD. RAM drives are particularly useful in high-performance, low-resource applications for which Unix-like operating systems are sometimes configured. There are also a few specialized "ultra-lightweight" Linux distributions which are designed to boot from removable media and run from a ramdisk for the entire session.
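On a typical Linux system, the tmpfs mount at /dev/shm behaves as a RAM-backed file system, so the performance difference is easy to observe with a short timing sketch (this assumes a Linux machine with /dev/shm mounted as tmpfs; the paths and sizes are illustrative):

```python
import os
import time

def time_write(path, size_mb=64):
    # Write size_mb of data to path, fsync it, and return elapsed seconds.
    block = os.urandom(1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(path)
    return elapsed

print("tmpfs:", time_write("/dev/shm/ramdrive_test.bin"))
print("disk: ", time_write("./ramdrive_test.bin"))
```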
Dedicated hardware RAM drives
There have been RAM drives which use DRAM memory that is exclusively dedicated to function as an extremely low latency storage device. This memory is isolated from the processor and not directly accessible in the same manner as normal system memory. Some of the first dedicated RAM drives were released in 1983-1985.
An early example of a hardware RAM drive was introduced by Assimilation Process in 1986 for the Macintosh. Called the "Excalibur", it was an external 2MB RAM drive, and retailed for between $599 and $699 US. With the RAM capacity expandable in 1MB increments, its internal battery was said to be effective for between 6 and 8 hours, and, unusual for the time, it was connected via the Macintosh floppy disk port.
In 2002, Cenatek produced the Rocket Drive, which had four DIMM slots for PC133 memory, supporting up to a maximum of four gigabytes of storage. At the time, common desktop computers used 64 to 128 megabytes of PC100 or PC133 memory. The one gigabyte PC133 modules (the largest available at the time) cost approximately $1,300. A fully outfitted Rocket Drive with four GB of storage would have cost $5,600.
In 2005, Gigabyte Technology produced the i-RAM, which functioned essentially identically to the Rocket Drive, except upgraded to use the newer DDR memory technology, though likewise limited to a maximum of 4 GB capacity.
For both of these devices, the dynamic RAM requires continuous power to retain data; when power is lost, the data fades away. For the Rocket Drive, there was a connector for an external power supply separate from the computer, and the option for an external battery to retain data during a power failure. The i-RAM included a small battery directly on the expansion board, for 10-16 hours of protection.
Both devices used the SATA 1.0 interface to transfer data from the dedicated RAM drive to the system. The SATA interface was a slow bottleneck that limited the maximum performance of both RAM drives, but these drives still provided exceptionally low data access latency and high sustained transfer speeds, compared to mechanical hard drives.
In 2006, Gigabyte Technology produced the GC-RAMDISK, the second-generation successor to the i-RAM. It has a maximum of 8 GB capacity, twice that of the i-RAM, and it uses the SATA-II port, again twice the interface speed of the i-RAM. One of its best selling points is that it can be used as a boot device.
In 2007, ACard Technology produced the ANS-9010 Serial ATA RAM disk, max 64 GB. Quote from the tech report: The ANS-9010 "which has eight DDR2 DIMM slots and support for up to 8 GB of memory per slot. The ANS-9010 also features a pair of Serial ATA ports, allowing it to function as a single drive or masquerade as a pair of drives that can easily be split into an even faster RAID 0 array."
In 2009, Acard Technology produced the ACARD ANS-9010BA 5.25'' Dynamic SSD SATA-II RAM Disk, max 64GB. It uses a single SATA-II port.
Both variants are equipped with one or more CompactFlash card interfaces located in the front panel, allowing data stored on the RAM drive to be copied to the CompactFlash card in case of power failure and low backup battery. Two pushbuttons located on the front panel allow the user to manually back up or restore data on the RAM drive. The CompactFlash card itself is not accessible to the user by normal means, as the card is solely intended for RAM backup and restoration. The CompactFlash card's capacity has to meet or exceed the RAM modules' total capacity in order to work as a reliable backup.
In 2009, DDRdrive, LLC produced the DDRdrive X1, which was claimed to be the fastest solid state drive in the world. The drive is a primary 4 GB DDR dedicated RAM drive for regular use, which can back up to and recall from a 4 GB SLC NAND drive. The intended market is keeping and recording log files. If there is a power loss, the data can be saved to the internal 4 GB SSD in 60 seconds via the use of a battery backup, and restored to RAM once power is returned: a host power loss triggers the DDRdrive X1 to back up volatile data to on-board non-volatile storage.
See also
Cache (computing), an area to store transient copies of data being written to, or repeatedly read from, a slower device
List of RAM drive software
References
External links
An extensive test report of several Windows RAM Disks
Solid-state computer storage media
File systems supported by the Linux kernel
AmigaOS | RAM drive | [
"Technology"
] | 2,689 | [
"AmigaOS",
"Computing platforms"
] |
55,359 | https://en.wikipedia.org/wiki/Single%20instruction%2C%20multiple%20data | Single instruction, multiple data (SIMD) is a type of parallel processing in Flynn's taxonomy. SIMD can be internal (part of the hardware design) and it can be directly accessible through an instruction set architecture (ISA), but it should not be confused with an ISA. SIMD describes computers with multiple processing elements that perform the same operation on multiple data points simultaneously.
Such machines exploit data level parallelism, but not concurrency: there are simultaneous (parallel) computations, but each unit performs exactly the same instruction at any given moment (just with different data). SIMD is particularly applicable to common tasks such as adjusting the contrast in a digital image or adjusting the volume of digital audio. Most modern CPU designs include SIMD instructions to improve the performance of multimedia use. SIMD has three different subcategories in Flynn's 1972 Taxonomy, one of which is SIMT. SIMT should not be confused with software threads or hardware threads, both of which are task time-sharing (time-slicing). SIMT is true simultaneous parallel hardware-level execution.
Each hardware element (PU) working on an individual data item is sometimes also referred to as a SIMD lane or channel. Modern graphics processing units (GPUs) are often wide SIMD implementations (typically more than 16 data lanes or channels).
History
The first use of SIMD instructions was in the ILLIAC IV, which was completed in 1966.
SIMD was the basis for vector supercomputers of the early 1970s such as the CDC Star-100 and the Texas Instruments ASC, which could operate on a "vector" of data with a single instruction. Vector processing was especially popularized by Cray in the 1970s and 1980s. Vector processing architectures are now considered separate from SIMD computers: Duncan's Taxonomy includes them whereas Flynn's Taxonomy does not, due to Flynn's work (1966, 1972) pre-dating the Cray-1 (1977).
The first era of modern SIMD computers was characterized by massively parallel processing-style supercomputers such as the Thinking Machines CM-1 and CM-2. These computers had many limited-functionality processors that would work in parallel. For example, each of 65,536 single-bit processors in a Thinking Machines CM-2 would execute the same instruction at the same time, allowing it, for instance, to logically combine 65,536 pairs of bits at a time, using a hypercube-connected network or processor-dedicated RAM to find its operands. Supercomputing moved away from the SIMD approach when inexpensive scalar MIMD approaches based on commodity processors such as the Intel i860 XP became more powerful, and interest in SIMD waned.
The current era of SIMD processors grew out of the desktop-computer market rather than the supercomputer market. As desktop processors became powerful enough to support real-time gaming and audio/video processing during the 1990s, demand grew for this particular type of computing power, and microprocessor vendors turned to SIMD to meet the demand. Hewlett-Packard introduced MAX instructions into PA-RISC 1.1 desktops in 1994 to accelerate MPEG decoding. Sun Microsystems introduced SIMD integer instructions in its "VIS" instruction set extensions in 1995, in its UltraSPARC I microprocessor. MIPS followed suit with their similar MDMX system.
The first widely deployed desktop SIMD was with Intel's MMX extensions to the x86 architecture in 1996. This sparked the introduction of the much more powerful AltiVec system in the Motorola PowerPC and IBM's POWER systems. Intel responded in 1999 by introducing the all-new SSE system. Since then, there have been several extensions to the SIMD instruction sets for both architectures. Advanced vector extensions AVX, AVX2 and AVX-512 are developed by Intel. AMD supports AVX, AVX2, and AVX-512 in their current products.
All of these developments have been oriented toward support for real-time graphics, and are therefore oriented toward processing in two, three, or four dimensions, usually with vector lengths of between two and sixteen words, depending on data type and architecture. When new SIMD architectures need to be distinguished from older ones, the newer architectures are then considered "short-vector" architectures, as earlier SIMD and vector supercomputers had vector lengths from 64 to 64,000. A modern supercomputer is almost always a cluster of MIMD computers, each of which implements (short-vector) SIMD instructions.
Advantages
An application that may take advantage of SIMD is one where the same value is being added to (or subtracted from) a large number of data points, a common operation in many multimedia applications. One example would be changing the brightness of an image. Each pixel of an image consists of three values for the brightness of the red (R), green (G) and blue (B) portions of the color. To change the brightness, the R, G and B values are read from memory, a value is added to (or subtracted from) them, and the resulting values are written back out to memory. Audio DSPs would likewise, for volume control, multiply both Left and Right channels simultaneously.
With a SIMD processor there are two improvements to this process. For one the data is understood to be in blocks, and a number of values can be loaded all at once. Instead of a series of instructions saying "retrieve this pixel, now retrieve the next pixel", a SIMD processor will have a single instruction that effectively says "retrieve n pixels" (where n is a number that varies from design to design). For a variety of reasons, this can take much less time than retrieving each pixel individually, as with a traditional CPU design.
Another advantage is that the instruction operates on all loaded data in a single operation. In other words, if the SIMD system works by loading up eight data points at once, the add operation being applied to the data will happen to all eight values at the same time. This parallelism is separate from the parallelism provided by a superscalar processor; the eight values are processed in parallel even on a non-superscalar processor, and a superscalar processor may be able to perform multiple SIMD operations in parallel.
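The brightness example maps naturally onto NumPy, whose element-wise array operations are implemented as compiled loops that modern CPUs can execute with SIMD instructions (whether SIMD is actually used depends on the CPU and the NumPy build; the image dimensions and brightness step here are illustrative):

```python
import numpy as np

# A mock 1080p RGB image: about 6.2 million 8-bit channel values.
image = np.random.randint(0, 256, size=(1080, 1920, 3), dtype=np.uint8)

# One vectorized expression adjusts every value; widening to int16 first
# avoids uint8 wrap-around before the result is clipped back to 0..255.
brighter = np.clip(image.astype(np.int16) + 40, 0, 255).astype(np.uint8)
```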
Disadvantages
Not all algorithms can be vectorized easily. For example, a flow-control-heavy task like code parsing may not easily benefit from SIMD; however, it is theoretically possible to vectorize comparisons and "batch flow" to target maximal cache optimality, though this technique will require more intermediate state. Note: Batch-pipeline systems (example: GPUs or software rasterization pipelines) are most advantageous for cache control when implemented with SIMD intrinsics, but they are not exclusive to SIMD features. Further complexity may be apparent to avoid dependence within series such as code strings; while independence is required for vectorization.
Large register files, which increase power consumption and required chip area.
Currently, implementing an algorithm with SIMD instructions usually requires human labor; most compilers do not generate SIMD instructions from a typical C program, for instance. Automatic vectorization in compilers is an active area of computer science research. (Compare vector processing.)
Programming with particular SIMD instruction sets can involve numerous low-level challenges.
SIMD may have restrictions on data alignment; programmers familiar with one particular architecture may not expect this. Worse: the alignment may change from one revision or "compatible" processor to another.
Gathering data into SIMD registers and scattering it to the correct destination locations is tricky (sometimes requiring permute operations) and can be inefficient.
Specific instructions like rotations or three-operand addition are not available in some SIMD instruction sets.
Instruction sets are architecture-specific: some processors lack SIMD instructions entirely, so programmers must provide non-vectorized implementations (or different vectorized implementations) for them.
Different architectures provide different register sizes (e.g. 64, 128, 256 and 512 bits) and instruction sets, meaning that programmers must provide multiple implementations of vectorized code to operate optimally on any given CPU. In addition, the possible set of SIMD instructions grows with each new register size. Unfortunately, for legacy support reasons, the older versions cannot be retired.
The early MMX instruction set shared a register file with the floating-point stack, which caused inefficiencies when mixing floating-point and MMX code. However, SSE2 corrects this.
To remedy problems 1 and 5, RISC-V's vector extension uses an alternative approach: instead of exposing the sub-register-level details to the programmer, the instruction set abstracts them out as a few "vector registers" that use the same interfaces across all CPUs with this instruction set. The hardware handles all alignment issues and "strip-mining" of loops. Machines with different vector sizes would be able to run the same code. LLVM calls this vector type "vscale".
Compared to equivalent scalar or vector code, an order-of-magnitude increase in code size is not uncommon for SIMD, while an order of magnitude or greater effectiveness (work done per instruction) is achievable with vector ISAs.
ARM's Scalable Vector Extension takes another approach, known in Flynn's Taxonomy as "Associative Processing", more commonly known today as "Predicated" (masked) SIMD. This approach is not as compact as Vector processing but is still far better than non-predicated SIMD. Detailed comparative examples are given in the Vector processing page.
Chronology
Hardware
Small-scale (64 or 128 bits) SIMD became popular on general-purpose CPUs in the early 1990s and continued through 1997 and later with Motion Video Instructions (MVI) for Alpha. SIMD instructions can be found, to one degree or another, on most CPUs, including IBM's AltiVec and SPE for PowerPC, HP's PA-RISC Multimedia Acceleration eXtensions (MAX), Intel's MMX and iwMMXt, SSE, SSE2, SSE3, SSSE3 and SSE4.x, AMD's 3DNow!, ARC's ARC Video subsystem, SPARC's VIS and VIS2, Sun's MAJC, ARM's Neon technology, MIPS' MDMX (MaDMaX) and MIPS-3D. The instruction set of the SPUs in the Cell processor, co-developed by IBM, Sony and Toshiba, is heavily SIMD-based. Philips, now NXP, developed several SIMD processors named Xetal. The Xetal has 320 16-bit processor elements especially designed for vision tasks.
Intel's AVX-512 SIMD instructions process 512 bits of data at once.
Software
SIMD instructions are widely used to process 3D graphics, although modern graphics cards with embedded SIMD have largely taken over this task from the CPU. Some systems also include permute functions that re-pack elements inside vectors, making them particularly useful for data processing and compression. They are also used in cryptography. The trend of general-purpose computing on GPUs (GPGPU) may lead to wider use of SIMD in the future.
Adoption of SIMD systems in personal computer software was at first slow, due to a number of problems. One was that many of the early SIMD instruction sets tended to slow overall performance of the system due to the re-use of existing floating point registers. Other systems, like MMX and 3DNow!, offered support for data types that were not interesting to a wide audience and had expensive context switching instructions to switch between using the FPU and MMX registers. Compilers also often lacked support, requiring programmers to resort to assembly language coding.
SIMD on x86 had a slow start. The introduction of 3DNow! by AMD and SSE by Intel confused matters somewhat, but today the system seems to have settled down (after AMD adopted SSE) and newer compilers should result in more SIMD-enabled software. Intel and AMD now both provide optimized math libraries that use SIMD instructions, and open source alternatives like libSIMD, SIMDx86 and SLEEF have started to appear (see also libm).
Apple Computer had somewhat more success, even though it entered the SIMD market later than the rest. AltiVec offered a rich system and could be programmed using increasingly sophisticated compilers from Motorola, IBM and GNU, so assembly language programming was rarely needed. Additionally, many of the systems that would benefit from SIMD were supplied by Apple itself, for example iTunes and QuickTime. However, in 2006, Apple computers moved to Intel x86 processors. Apple's APIs and development tools (Xcode) were modified to support SSE2 and SSE3 as well as AltiVec. Apple was the dominant purchaser of PowerPC chips from IBM and Freescale Semiconductor. Even though Apple has stopped using PowerPC processors in its products, further development of AltiVec is continued in several PowerPC and Power ISA designs from Freescale and IBM.
SIMD within a register, or SWAR, is a range of techniques and tricks used for performing SIMD in general-purpose registers on hardware that does not provide any direct support for SIMD instructions. This can be used to exploit parallelism in certain algorithms even on hardware that does not support SIMD directly.
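A classic SWAR trick, shown here as a hedged C sketch (the function name is hypothetical): four bytes packed in an ordinary 32-bit register are added lane-wise by masking off each byte's high bit so that carries cannot ripple into the next lane, then restoring the high bits with an exclusive OR.
#include <stdint.h>
/* Lane-wise addition of four packed 8-bit values, with no SIMD hardware. */
uint32_t swar_add_bytes(uint32_t a, uint32_t b)
{
    uint32_t low  = (a & 0x7F7F7F7Fu) + (b & 0x7F7F7F7Fu); /* add low 7 bits per lane */
    uint32_t high = (a ^ b) & 0x80808080u;                 /* lane high bits, carry-free */
    return low ^ high; /* carries never cross a byte boundary */
}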
Programmer interface
It is common for publishers of the SIMD instruction sets to make their own C/C++ language extensions with intrinsic functions or special datatypes (with operator overloading) guaranteeing the generation of vector code. Intel, AltiVec, and ARM NEON provide extensions widely adopted by the compilers targeting their CPUs. (More complex operations are the task of vector math libraries.)
The GNU C Compiler takes the extensions a step further by abstracting them into a universal interface that can be used on any platform by providing a way of defining SIMD datatypes. The LLVM Clang compiler also implements the feature, with an analogous interface defined in the IR. Rust's crate (and the experimental ) uses this interface, and so does Swift 2.0+.
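A minimal sketch of this universal interface, using GCC's documented vector_size attribute (the type and function names are the programmer's choice; Clang accepts the same syntax):
/* A portable 4-lane float vector; the compiler maps operations onto
   whatever SIMD instructions the target provides, or scalar code. */
typedef float v4sf __attribute__ ((vector_size (16)));

v4sf saxpy4(float a, v4sf x, v4sf y)
{
    v4sf va = {a, a, a, a}; /* broadcast the scalar into all lanes */
    return va * x + y;      /* ordinary operators act element-wise */
}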
C++ has an experimental interface that works similarly to the GCC extension. LLVM's libcxx seems to implement it. For GCC and libstdc++, a wrapper library that builds on top of the GCC extension is available.
Microsoft added SIMD to .NET in RyuJIT. The package, available on NuGet, implements SIMD datatypes. Java also has a proposed API for SIMD instructions, available in OpenJDK 17 in an incubator module, with a safe fallback to simple loops on unsupported CPUs.
Instead of providing an SIMD datatype, compilers can also be hinted to auto-vectorize some loops, potentially taking some assertions about the lack of data dependency. This is not as flexible as manipulating SIMD variables directly, but is easier to use. OpenMP 4.0+ has a hint. This OpenMP interface has replaced a wide set of nonstandard extensions, including Cilk's , GCC's , and many more.
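As a hedged example of such a hint, OpenMP 4.0's simd directive asserts that the loop iterations are independent so the compiler may vectorize them (the function and array names are illustrative):
/* Compiled with e.g. -fopenmp (GCC/Clang); without OpenMP support the
   pragma is ignored and the loop still runs as plain scalar code. */
void scale(float *x, float s, int n)
{
    #pragma omp simd
    for (int i = 0; i < n; i++)
        x[i] *= s;
}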
SIMD multi-versioning
Consumer software is typically expected to work on a range of CPUs covering multiple generations, which could limit the programmer's ability to use new SIMD instructions to improve the computational performance of a program. The solution is to include multiple versions of the same code that uses either older or newer SIMD technologies, and pick one that best fits the user's CPU at run-time (dynamic dispatch). There are two main camps of solutions:
Function multi-versioning (FMV): a subroutine in the program or a library is duplicated and compiled for many instruction set extensions, and the program decides which one to use at run-time.
Library multi-versioning (LMV): the entire programming library is duplicated for many instruction set extensions, and the operating system or the program decides which one to load at run-time.
FMV, manually coded in assembly language, is quite commonly used in a number of performance-critical libraries such as glibc and libjpeg-turbo. Intel C++ Compiler, GNU Compiler Collection since GCC 6, and Clang since clang 7 allow for a simplified approach, with the compiler taking care of function duplication and selection. GCC and Clang require explicit labels in the code to "clone" functions, while ICC does so automatically (under the command-line option ). The Rust programming language also supports FMV; the setup is similar to GCC and Clang in that the code defines which instruction sets to compile for, but cloning is done manually via inlining.
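A hedged sketch of the compiler-assisted approach, using the target_clones attribute documented for GCC 6+ and recent Clang (the function itself is illustrative):
/* The compiler emits one clone of this function per listed target and
   dispatches among them at run-time; "default" covers older CPUs. */
__attribute__ ((target_clones ("avx2", "sse4.2", "default")))
float dot(const float *a, const float *b, int n)
{
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}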
As using FMV requires code modification on GCC and Clang, vendors more commonly use library multi-versioning: this is easier to achieve as only compiler switches need to be changed. Glibc supports LMV and this functionality is adopted by the Intel-backed Clear Linux project.
SIMD on the web
In 2013 John McCutchan announced that he had created a high-performance interface to SIMD instruction sets for the Dart programming language, bringing the benefits of SIMD to web programs for the first time. The interface consists of two types:
Float32x4, 4 single precision floating point values.
Int32x4, 4 32-bit integer values.
Instances of these types are immutable and in optimized code are mapped directly to SIMD registers. Operations expressed in Dart typically are compiled into a single instruction without any overhead. This is similar to C and C++ intrinsics. Benchmarks for 4×4 matrix multiplication, 3D vertex transformation, and Mandelbrot set visualization show near 400% speedup compared to scalar code written in Dart.
McCutchan's work on Dart, now called SIMD.js, has been adopted by ECMAScript and Intel announced at IDF 2013 that they are implementing McCutchan's specification for both V8 and SpiderMonkey. However, by 2017, SIMD.js had been taken out of the ECMAScript standard queue in favor of pursuing a similar interface in WebAssembly. As of August 2020, the WebAssembly interface remains unfinished, but its portable 128-bit SIMD feature has already seen some use in many engines.
Emscripten, Mozilla's C/C++-to-JavaScript compiler, with extensions can enable compilation of C++ programs that make use of SIMD intrinsics or GCC-style vector code to the SIMD API of JavaScript, resulting in equivalent speedups compared to scalar code. It also supports (and now prefers) the WebAssembly 128-bit SIMD proposal.
Commercial applications
It has generally proven difficult to find sustainable commercial applications for SIMD-only processors.
One that has had some measure of success is the GAPP, which was developed by Lockheed Martin and taken to the commercial sector by their spin-off Teranex. The GAPP's recent incarnations have become a powerful tool in real-time video processing applications like conversion between various video standards and frame rates (NTSC to/from PAL, NTSC to/from HDTV formats, etc.), deinterlacing, image noise reduction, adaptive video compression, and image enhancement.
A more ubiquitous application for SIMD is found in video games: nearly every modern video game console since 1998 has incorporated a SIMD processor somewhere in its architecture. The PlayStation 2 was unusual in that one of its vector-float units could function as an autonomous DSP executing its own instruction stream, or as a coprocessor driven by ordinary CPU instructions. 3D graphics applications tend to lend themselves well to SIMD processing as they rely heavily on operations with 4-dimensional vectors. Microsoft's Direct3D 9.0 now chooses at runtime processor-specific implementations of its own math operations, including the use of SIMD-capable instructions.
A later processor that used vector processing is the Cell processor used in the PlayStation 3, which was developed by IBM in cooperation with Toshiba and Sony. It uses a number of SIMD processors (in a NUMA architecture, each with an independent local store, controlled by a general-purpose CPU) and is geared towards the huge datasets required by 3D and video processing applications. It differs from traditional ISAs by being SIMD from the ground up with no separate scalar registers.
ZiiLabs produced an SIMD-type processor for use on mobile devices, such as media players and mobile phones.
Larger scale commercial SIMD processors are available from ClearSpeed Technology, Ltd. and Stream Processors, Inc. ClearSpeed's CSX600 (2004) has 96 cores each with two double-precision floating point units while the CSX700 (2008) has 192. Stream Processors is headed by computer architect Bill Dally. Their Storm-1 processor (2007) contains 80 SIMD cores controlled by a MIPS CPU.
See also
Streaming SIMD Extensions, MMX, SSE2, SSE3, Advanced Vector Extensions, AVX-512
Instruction set architecture
Flynn's taxonomy
SIMD within a register (SWAR)
Single Program, Multiple Data (SPMD)
OpenCL
External links
SIMD architectures (2000)
Cracking Open The Pentium 3 (1999)
Short Vector Extensions in Commercial Microprocessor
Article about Optimizing the Rendering Pipeline of Animated Models Using the Intel Streaming SIMD Extensions
"Yeppp!": cross-platform, open-source SIMD library from Georgia Tech
Introduction to Parallel Computing from Lawrence Livermore National Laboratory (LLNL)
: A portable implementation of platform-specific intrinsics for other platforms (e.g. SSE intrinsics for ARM NEON), using C/C++ headers
History of operating systems
Computer operating systems (OSes) provide a set of functions needed and used by most application programs on a computer, and the links needed to control and synchronize computer hardware. On the first computers, with no operating system, every program needed the full hardware specification to run correctly and perform standard tasks, and its own drivers for peripheral devices like printers and punched paper card readers. The growing complexity of hardware and application programs eventually made operating systems a necessity for everyday use.
Background
Early computers lacked any form of operating system. Instead, the user, also called the operator, had sole use of the machine for a scheduled period of time. The operator would arrive at the computer with program and data which needed to be loaded into the machine before the program could be run. Loading of program and data was accomplished in various ways including toggle switches, punched paper cards and magnetic or paper tape. Once loaded, the machine would be set to execute the single program until that program completed or crashed. Programs could generally be debugged via a control panel using dials, toggle switches and panel lights, making it a very manual and error-prone process.
Symbolic languages, assemblers, and compilers were developed for programmers to translate symbolic program code into machine code that previously would have been hand-encoded. Later machines came with libraries of support code on punched cards or magnetic tape, which would be linked to the user's program to assist in operations such as input and output. This was the genesis of the modern-day operating system; however, machines still ran a single program or job at a time. At Cambridge University in England the job queue was at one time a string from which tapes attached to corresponding job tickets were hung with stationery pegs.
As machines became more powerful the time to run programs diminished, and the time to hand off the equipment to the next user became large by comparison. Accounting for and paying for machine usage moved on from checking the wall clock to automatic logging by the computer. Run queues evolved from a literal queue of people at the door, to a heap of media on a jobs-waiting table, or batches of punched cards stacked one on top of the other in the reader, until the machine itself was able to select and sequence which magnetic tape drives processed which tapes. Where program developers had originally had access to run their own jobs on the machine, they were supplanted by dedicated machine operators who looked after the machine and were less and less concerned with implementing tasks manually. When commercially available computer centers were faced with the implications of data lost through tampering or operational errors, equipment vendors were put under pressure to enhance the runtime libraries to prevent misuse of system resources. Automated monitoring was needed not just for CPU usage but for counting pages printed, cards punched, cards read, disk storage used and for signaling when operator intervention was required by jobs such as changing magnetic tapes and paper forms. Security features were added to operating systems to record audit trails of which programs were accessing which files and to prevent access to a production payroll file by an engineering program, for example.
All these features were building up towards the repertoire of a fully capable operating system. Eventually the runtime libraries became an amalgamated program that was started before the first customer job and could read in the customer job, control its execution, record its usage, reassign hardware resources after the job ended, and immediately go on to process the next job. These resident background programs, capable of managing multi-step processes, were often called monitors or monitor-programs before the term "operating system" established itself.
An underlying program offering basic hardware management, software scheduling and resource monitoring may seem a remote ancestor to the user-oriented OSes of the personal computing era. But there has been a shift in the meaning of OS. Just as early automobiles lacked speedometers, radios, and air conditioners which later became standard, more and more optional software features became standard in every OS package. This has led to the perception of an OS as a complete user system with an integrated graphical user interface, utilities, and some applications such as file managers, text editors, and configuration tools.
The true descendant of the early operating systems is what is now called the "kernel". In technical and development circles the old restricted sense of an OS persists because of the continued active development of embedded operating systems for all kinds of devices with a data-processing component, from hand-held gadgets up to industrial robots and real-time control systems, which do not run user applications at the front end. An embedded OS in a device today is not so far removed as one might think from its ancestor of the 1950s.
The broader categories of systems and application software are discussed in the computer software article.
Mainframes
The first operating system used for real work was GM-NAA I/O, produced in 1956 by General Motors' Research division for its IBM 704. Most other early operating systems for IBM mainframes were also produced by customers.
Early operating systems were very diverse, with each vendor or customer producing one or more operating systems specific to their particular mainframe computer. Every operating system, even from the same vendor, could have radically different models of commands, operating procedures, and such facilities as debugging aids. Typically, each time the manufacturer brought out a new machine, there would be a new operating system, and most applications would have to be manually adjusted, recompiled, and retested.
Systems on IBM hardware
This state of affairs continued until the 1960s when IBM, already a leading hardware vendor, stopped work on existing systems and put all its effort into developing the System/360 series of machines, all of which used the same instruction and input/output architecture. IBM intended to develop a single operating system for the new hardware, the OS/360. The problems encountered in the development of the OS/360 are legendary, and are described by Fred Brooks in The Mythical Man-Month—a book that has become a classic of software engineering. Because of performance differences across the hardware range and delays with software development, a whole family of operating systems was introduced instead of a single OS/360.
IBM wound up releasing a series of stop-gaps followed by two longer-lived operating systems:
OS/360 for mid-range and large systems. This was available in three system generation options:
PCP for early users and for those without the resources for multiprogramming.
MFT for mid-range systems, replaced by MFT-II in OS/360 Release 15/16. This had one successor, OS/VS1, which was discontinued in the 1980s.
MVT for large systems. This was similar in most ways to PCP and MFT (most programs could be ported among the three without being re-compiled), but had more sophisticated memory management and a time-sharing facility, TSO. MVT had several successors including the current z/OS.
DOS/360 for small System/360 models had several successors including the current z/VSE. It was significantly different from OS/360.
IBM maintained full compatibility with the past, so that programs developed in the sixties can still run under z/VSE (if developed for DOS/360) or z/OS (if developed for MFT or MVT) with no change.
IBM also developed TSS/360, a time-sharing system for the System/360 Model 67. Overcompensating for the perceived importance of developing a time-sharing system, they set hundreds of developers to work on the project. Early releases of TSS were slow and unreliable; by the time TSS had acceptable performance and reliability, IBM wanted its TSS users to migrate to OS/360 and OS/VS2; while IBM offered a TSS/370 PRPQ, they dropped it after three releases.
Several operating systems for the IBM S/360 and S/370 architectures were developed by third parties, including the Michigan Terminal System (MTS) and MUSIC/SP.
Other mainframe operating systems
Control Data Corporation developed the SCOPE operating systems in the 1960s for batch processing, and later developed the MACE operating system for time sharing, which was the basis for the later Kronos. In cooperation with the University of Minnesota, the Kronos and later the NOS operating systems were developed during the 1970s, which supported simultaneous batch and time-sharing use. Like many commercial time-sharing systems, its interface was an extension of the DTSS time-sharing system, one of the pioneering efforts in timesharing and programming languages.
In the late 1970s, Control Data and the University of Illinois developed the PLATO system, which used plasma panel displays and long-distance time sharing networks. PLATO was remarkably innovative for its time; the shared memory model of PLATO's TUTOR programming language allowed applications such as real-time chat and multi-user graphical games.
For the UNIVAC 1107, UNIVAC, the first commercial computer manufacturer, produced the EXEC I operating system, and Computer Sciences Corporation developed the EXEC II operating system and delivered it to UNIVAC. EXEC II was ported to the UNIVAC 1108. Later, UNIVAC developed the EXEC 8 operating system for the 1108; it was the basis for operating systems for later members of the family. Like all early mainframe systems, EXEC I and EXEC II were batch-oriented systems that managed magnetic drums, disks, card readers and line printers; EXEC 8 supported both batch processing and on-line transaction processing. In the 1970s, UNIVAC produced the Real-Time Basic (RTB) system to support large-scale time sharing, also patterned after the Dartmouth BASIC system.
Burroughs Corporation introduced the B5000 in 1961 with the MCP (Master Control Program) operating system. The B5000 was a stack machine designed to exclusively support high-level languages, with no software, not even at the lowest level of the operating system, being written directly in machine language or assembly language; the MCP was the first OS to be written entirely in a high-level language - ESPOL, a dialect of ALGOL 60 - although ESPOL had specialized statements for each "syllable" in the B5000 instruction set. MCP also introduced many other ground-breaking innovations, such as being one of the first commercial implementations of virtual memory. The rewrite of MCP for the B6500 is now marketed as the Unisys ClearPath/MCP.
GE introduced the GE-600 series with the General Electric Comprehensive Operating Supervisor (GECOS) operating system in 1962. After Honeywell acquired GE's computer business, it was renamed to General Comprehensive Operating System (GCOS). Honeywell expanded the use of the GCOS name to cover all its operating systems in the 1970s, though many of its computers had nothing in common with the earlier GE 600 series and their operating systems were not derived from the original GECOS.
Project MAC at MIT, working with GE and Bell Labs, developed Multics, which introduced the concept of ringed security privilege levels.
Digital Equipment Corporation developed TOPS-10 for its PDP-10 line of 36-bit computers in 1967. Before the widespread use of Unix, TOPS-10 was a particularly popular system in universities, and in the early ARPANET community. Bolt, Beranek, and Newman developed TENEX for a modified PDP-10 that supported demand paging; this was another popular system in the research and ARPANET communities, and was later developed by DEC into TOPS-20.
Scientific Data Systems/Xerox Data Systems developed several operating systems for the Sigma series of computers, such as the Basic Control Monitor (BCM), Batch Processing Monitor (BPM), and Basic Time-Sharing Monitor (BTM). Later, BPM and BTM were succeeded by the Universal Time-Sharing System (UTS); it was designed to provide multi-programming services for online (interactive) user programs in addition to batch-mode production jobs. It was succeeded by the CP-V operating system, which combined UTS with the heavily batch-oriented Xerox Operating System.
Minicomputers
Digital Equipment Corporation created several operating systems for its 16-bit PDP-11 machines, including the simple RT-11 system, the time-sharing RSTS operating systems, and the RSX-11 family of real-time operating systems, as well as the VMS system for the 32-bit VAX machines.
Several competitors of Digital Equipment Corporation, such as Data General, Hewlett-Packard, and Computer Automation, created their own operating systems. One such, "MAX III", was developed for Modular Computer Systems Modcomp II and Modcomp III computers. Its target market was the industrial control market. The Fortran libraries included one that enabled access to measurement and control devices.
IBM's key innovation in operating systems in this class (which they call "mid-range"), was their "CPF" for the System/38. This had capability-based addressing, used a machine interface architecture to isolate the application software and most of the operating system from hardware dependencies (including even such details as address size and register size) and included an integrated RDBMS. The succeeding OS/400 (now known as IBM i) for the IBM AS/400 and later IBM Power Systems has no files, only objects of different types and these objects persist in very large, flat virtual memory, called a single-level store.
The Unix operating system was developed at AT&T Bell Laboratories in the late 1960s, originally for the PDP-7, and later for the PDP-11. Because it was essentially free in early editions, easily obtainable, and easily modified, it achieved wide acceptance. It also became a requirement within the Bell systems operating companies. Since it was written in the C language, when that language was ported to a new machine architecture, Unix was also able to be ported. This portability permitted it to become the choice for a second generation of minicomputers and the first generation of workstations, and its use became widespread. Unix exemplified the idea of an operating system that was conceptually the same across various hardware platforms. Because of its utility, it inspired many and later became one of the roots of the free software movement and open-source software. Numerous operating systems were based upon it, including Minix, GNU/Linux, and the Berkeley Software Distribution. Apple's macOS is also based on Unix via NeXTSTEP and FreeBSD.
The Pick operating system was another operating system available on a wide variety of hardware brands. Commercially released in 1973, its core was a BASIC-like language called Data/BASIC and a SQL-style database manipulation language called ENGLISH. Licensed to a large variety of manufacturers and vendors, by the early 1980s observers saw the Pick operating system as a strong competitor to Unix.
Microcomputers
Beginning in the mid-1970s, a new class of small computers came onto the marketplace. Featuring 8-bit processors, typically the MOS Technology 6502, Intel 8080, Motorola 6800 or the Zilog Z80, along with rudimentary input and output interfaces and as much RAM as practical, these systems started out as kit-based hobbyist computers but soon evolved into an essential business tool.
Home computers
While many eight-bit home computers of the 1980s, such as the BBC Micro, Commodore 64, Apple II, Atari 8-bit computers, Amstrad CPC, ZX Spectrum series and others could load a third-party disk-loading operating system, such as CP/M or GEOS, they were generally used without one. Their built-in operating systems were designed in an era when floppy disk drives were very expensive and not expected to be used by most users, so the standard storage device on most was a tape drive using standard compact cassettes. Most, if not all, of these computers shipped with a built-in BASIC interpreter on ROM, which also served as a crude command-line interface, allowing the user to load a separate disk operating system to perform file management commands and load and save to disk. The most popular home computer, the Commodore 64, was a notable exception, as its DOS was on ROM in the disk drive hardware, and the drive was addressed identically to printers, modems, and other external devices.
Furthermore, those systems shipped with minimal amounts of computer memory—4-8 kilobytes was standard on early home computers—as well as 8-bit processors without specialized support circuitry like an MMU or even a dedicated real-time clock. On this hardware, a complex operating system's overhead supporting multiple tasks and users would likely compromise the performance of the machine without really being needed. As those systems were largely sold complete, with a fixed hardware configuration, there was also no need for an operating system to provide drivers for a wide range of hardware to abstract away differences.
Video games and even the available spreadsheet, database and word processors for home computers were mostly self-contained programs that took over the machine completely. Although integrated software existed for these computers, they usually lacked features compared to their standalone equivalents, largely due to memory limitations. Data exchange was mostly performed through standard formats like ASCII text or CSV, or through specialized file conversion programs.
Operating systems in video games and consoles
Since virtually all video game consoles and arcade cabinets designed and built after 1980 were true digital machines based on microprocessors (unlike the earlier Pong clones and derivatives), some of them carried a minimal form of BIOS or built-in game, such as the ColecoVision, the Sega Master System and the SNK Neo Geo.
Modern-day game consoles and video games, starting with the PC-Engine, all have a minimal BIOS that also provides some interactive utilities such as memory card management, audio or video CD playback, and copy protection, and sometimes carry libraries for developers to use. Few of these cases, however, would qualify as a true operating system.
The most notable exceptions are probably the Dreamcast game console, which includes a minimal BIOS like the PlayStation but can load the Windows CE operating system from the game disk, allowing easy porting of games from the PC world, and the Xbox game console, which is little more than a disguised Intel-based PC running a secret, modified version of Microsoft Windows in the background. Furthermore, there are Linux versions that will run on a Dreamcast and later game consoles as well.
Long before that, Sony had released a kind of development kit called the Net Yaroze for its first PlayStation platform, which provided a series of programming and developing tools to be used with a normal PC and a specially modified "Black PlayStation" that could be interfaced with a PC and download programs from it. These operations require in general a functional OS on both platforms involved.
In general, it can be said that videogame consoles and arcade coin-operated machines used at most a built-in BIOS during the 1970s, 1980s and most of the 1990s, while from the PlayStation era and beyond they started getting more and more sophisticated, to the point of requiring a generic or custom-built OS for aiding in development and expandability.
Personal computer era
The development of microprocessors made inexpensive computing available for the small business and hobbyist, which in turn led to the widespread use of interchangeable hardware components using a common interconnection (such as the S-100, SS-50, Apple II, ISA, and PCI buses), and an increasing need for "standard" operating systems to control them. The most important of the early OSes on these machines was Digital Research's CP/M-80 for the 8080 / 8085 / Z-80 CPUs. It was based on several Digital Equipment Corporation operating systems, mostly for the PDP-11 architecture. Microsoft's first operating system, MDOS/MIDAS, was designed along the lines of many PDP-11 features, but for microprocessor-based systems. MS-DOS, or PC DOS when supplied by IBM, was designed to be similar to CP/M-80. Each of these machines had a small boot program in ROM which loaded the OS itself from disk. The BIOS on the IBM-PC class machines was an extension of this idea and has accreted more features and functions in the 20 years since the first IBM-PC was introduced in 1981.
The decreasing cost of display equipment and processors made it practical to provide graphical user interfaces for many operating systems, such as the generic X Window System that is provided with many Unix systems, or other graphical systems such as Apple's classic Mac OS and macOS, the Radio Shack Color Computer's OS-9 Level II/Multi-Vue, Commodore's AmigaOS, Atari TOS, IBM's OS/2, and Microsoft Windows. The original GUI was developed on the Xerox Alto computer system at Xerox Palo Alto Research Center in the early 1970s and commercialized by many vendors throughout the 1980s and 1990s.
Since the late 1990s, there have been three operating systems in widespread use on personal computers: Apple Inc.'s macOS, the open source Linux, and Microsoft Windows. Since 2005 and the Mac transition to Intel processors, all have been developed mainly on the x86 platform, although macOS retained PowerPC support until 2009 and Linux remains ported to a multitude of architectures including ones such as 68k, PA-RISC, and DEC Alpha, which have been long superseded and out of production, and SPARC and MIPS, which are used in servers or embedded systems but no longer for desktop computers. Other operating systems such as AmigaOS and OS/2 remain in use, if at all, mainly by retrocomputing enthusiasts or for specialized embedded applications.
Mobile operating systems
In the early 1990s, Psion released the Psion Series 3 PDA, a small mobile computing device. It supported user-written applications running on an operating system called EPOC. Later versions of EPOC became Symbian, an operating system used for mobile phones from Nokia, Ericsson, Sony Ericsson, Motorola, Samsung and phones developed for NTT Docomo by Sharp, Fujitsu & Mitsubishi. Symbian was the world's most widely used smartphone operating system until 2010 with a peak market share of 74% in 2006. In 1996, Palm Computing released the Pilot 1000 and Pilot 5000, running Palm OS. Microsoft Windows CE was the base for Pocket PC 2000, renamed Windows Mobile in 2003, which at its peak in 2007 was the most common operating system for smartphones in the U.S.
In 2007, Apple introduced the iPhone and its operating system, known as simply iPhone OS (until the release of iOS 4), which, like Mac OS X, is based on the Unix-like Darwin. In addition to these underpinnings, it also introduced a powerful and innovative graphic user interface that was later also used on the tablet computer iPad. A year later, Android, with its own graphical user interface, was introduced, based on a modified Linux kernel, and Microsoft re-entered the mobile operating system market with Windows Phone in 2010, which was replaced by Windows 10 Mobile in 2015.
In addition to these, a wide range of other mobile operating systems are contending in this area.
Rise of virtualization
Operating systems originally ran directly on the hardware itself and provided services to applications, but with virtualization, the operating system itself runs under the control of a hypervisor, instead of being in direct control of the hardware.
On mainframes IBM introduced the notion of a virtual machine in 1968 with CP/CMS on the IBM System/360 Model 67, and extended this later in 1972 with Virtual Machine Facility/370 (VM/370) on System/370.
On x86-based personal computers, VMware popularized this technology with their 1999 product, VMware Workstation, and their 2001 VMware GSX Server and VMware ESX Server products. Later, a wide range of products from others, including Xen, KVM and Hyper-V meant that by 2010 it was reported that more than 80 percent of enterprises had a virtualization program or project in place, and that 25 percent of all server workloads would be in a virtual machine.
Over time, the line between virtual machines, monitors, and operating systems was blurred:
Hypervisors grew more complex, gaining their own application programming interface, memory management or file system.
Virtualization became a key feature of operating systems, as exemplified by KVM and LXC in Linux, Hyper-V in Windows Server 2008 or HP Integrity Virtual Machines in HP-UX.
In some systems, such as POWER5 and later POWER servers from IBM, the hypervisor is no longer optional.
Radically simplified operating systems, such as CoreOS, have been designed to run only on virtual systems.
Applications have been re-designed to run directly on a virtual machine monitor.
In many ways, virtual machine software today plays the role formerly held by the operating system, including managing the hardware resources (processor, memory, I/O devices), applying scheduling policies, or allowing system administrators to manage the system.
See also
Charles Babbage Institute
IT History Society
List of operating systems
Timeline of operating systems
History of computer icons
Drive letter assignment
In computer data storage, drive letter assignment is the process of assigning alphabetical identifiers to volumes. Unlike the concept of UNIX mount points, where volumes are named and located arbitrarily in a single hierarchical namespace, drive letter assignment allows multiple highest-level namespaces. Drive letter assignment is thus a process of using letters to name the roots of the "forest" representing the file system; each volume holds an independent "tree" (or, for non-hierarchical file systems, an independent list of files).
Origin
The concept of drive letters, as used today, presumably owes its origins to IBM's VM family of operating systems, dating back to CP/CMS in 1967 (and its research predecessor CP-40), by way of Digital Research's (DRI) CP/M. The concept evolved through several steps:
CP/CMS uses drive letters to identify minidisks attached to a user session. A full file reference (pathname in today's parlance) consists of a filename, a filetype, and a disk letter called a filemode (e.g. A or B). Minidisks can correspond to physical disk drives, but more typically refer to logical drives, which are mapped automatically onto shared devices by the operating system as sets of virtual cylinders.
CP/CMS inspired numerous other operating systems, including the CP/M microcomputer operating system, which uses a drive letter to specify a physical storage device. Early versions of CP/M (and other microcomputer operating systems) implemented a flat file system on each disk drive, where a complete file reference consists of a drive letter, a colon, a filename (up to eight characters), a dot, and a filetype (three characters); for instance A:README.TXT. (This was the era of 8-inch floppy disks, where such small namespaces did not impose practical constraints.) This usage was influenced by the device prefixes used in Digital Equipment Corporation's (DEC) TOPS-10 operating system.
The drive letter syntax chosen for CP/M was inherited by Microsoft for its operating system MS-DOS by way of Seattle Computer Products' (SCP) 86-DOS, and thus also by IBM's OEM version PC DOS. Originally, drive letters always represented physical volumes, but support for logical volumes eventually appeared.
Through their designated position as DOS successors, OS/2 and the Microsoft Windows family also inherited the concept of drive letters.
The important capability of hierarchical directories within each drive letter was initially absent from these systems. This was a major feature of UNIX and other similar operating systems, where hard disk drives held thousands (rather than tens or hundreds) of files. Increasing microcomputer storage capacities led to their introduction, eventually followed by long filenames. In file systems lacking such naming mechanisms, drive letter assignment proved a useful, simple organizing principle.
Operating systems that use drive letter assignment
CP/M family
CP/M, MP/M, Concurrent CP/M, Concurrent DOS, FlexOS, 4680 OS, 4690 OS, S5-DOS/MT, Multiuser DOS, System Manager, REAL/32, REAL/NG, Personal CP/M, S5-DOS, DOS Plus
AMSDOS
DOS family
86-DOS, MS-DOS, PC DOS
DR DOS, Novell DOS, PalmDOS, OpenDOS
ROM-DOS
PTS-DOS, S/DOS
FreeDOS
PC-MOS/386
SISNE plus
GEMDOS, TOS, MiNT, MagiC, MultiTOS, EmuTOS
Atari DOS family
MSX-DOS
ANDOS, CSI-DOS, MK-DOS
GEOS
OS/2 (including eComStation and ArcaOS)
Windows family
Windows 9x family
Windows NT family
Xbox system software
ReactOS
Symbian OS
Hobbyist operating systems
SymbOS
TempleOS
Order of assignment
MS-DOS/PC DOS since version 5.0, and later operating systems, assign drive letters according to the following algorithm (a hedged sketch of the overall ordering follows the list):
Assign the drive letter A: to the first floppy disk drive (drive 0), and B: to the second floppy disk drive (drive 1). If only one physical floppy is present, drive B: will be assigned to a phantom floppy drive mapped to the same physical drive and dynamically assigned to either A: or B: for easier floppy file operations. If no physical floppy drive is present, DOS 4.0 will assign both A: and B: to the non-existent drive, whereas DOS 5.0 and higher will invalidate these drive letters. If more than two physical floppy drives are present, DOS versions prior to 5.0 will assign subsequent drive letters, whereas DOS 5.0 and higher will remap these drives to higher drive letters at a later stage; see below.
Assign a drive letter to the first active primary partition recognized upon the first physical hard disk. DOS 5.0 and higher will ensure that it will become drive C:, so that the boot drive will either have drive A: or C:.
Assign subsequent drive letters to the first primary partition upon each successive physical hard disk drive (DOS versions prior to 5.0 will probe for only two physical hard disks, whereas DOS 5.0 and higher support eight physical hard disks).
Assign subsequent drive letters to every recognized logical partition present in the first extended partition, beginning with the first hard drive and proceeding through successive physical hard disk drives.
DOS 5.0 and higher: Assign drive letters to all remaining primary partitions, beginning with the first hard drive and proceeding through successive physical hard disk drives.
DOS 5.0 and higher: Assign drive letters to all physical floppy drives beyond the second physical floppy drive.
Assign subsequent drive letters to any block device drivers loaded in CONFIG.SYS via DEVICE statements, e.g. RAM disks.
Assign subsequent drive letters to any dynamically loaded drives via CONFIG.SYS INSTALL statements, in AUTOEXEC.BAT or later, i.e. additional optical disc drives (MSCDEX etc.), PCMCIA / PC Card drives, USB or Firewire drives, or network drives.
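The ordering above can be summarized in the following hedged C-style sketch. Every helper name is hypothetical, not a real DOS API; the actual logic lives inside the DOS kernel at boot time.
/* Illustrative outline of the DOS 5.0+ assignment order described above. */
void assign_drive_letters(void)
{
    assign('A', first_floppy());                    /* step 1 */
    assign('B', second_or_phantom_floppy());        /* step 1 */
    char next = 'C';
    next = assign_active_primary_partitions(next);  /* steps 2-3 */
    next = assign_logical_drives_in_extended(next); /* step 4 */
    next = assign_remaining_primaries(next);        /* step 5 */
    next = assign_extra_floppies(next);             /* step 6 */
    next = assign_config_sys_block_devices(next);   /* step 7 */
    assign_dynamic_drives(next);                    /* step 8: MSCDEX, network, ... */
}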
Only partitions of recognized partition types are assigned letters. In particular, "hidden partitions" (those with their type ID changed to an unrecognized value, usually by adding 10h) are not.
MS-DOS/PC DOS versions 4.0 and earlier assign letters to all of the floppy drives before considering hard drives, so a system with four floppy drives would call the first hard drive E:. Starting with DOS 5.0, the system ensures that drive C: is always a hard disk, even if the system has more than two physical floppy drives.
Without deliberate remapping, the drive letter assignments are typically fixed until the next reboot; however, Zenith MS-DOS 3.21 will update the drive letter assignments when resetting a drive. This may cause drive letters to change without a reboot if the partitioning of the hard disk was changed.
MS-DOS on the Apricot PC assigns letters to hard drives, starting with A:, before considering floppy drives. A system with two of each drive would call the hard drives A: and B:, and the floppies C: and D:.
On the Japanese PC-98, if the system is booted from floppy disk, the dedicated version of MS-DOS assigns letters to all floppy drives before considering hard drives; it does the opposite if it is booted from a hard drive. That is, if the OS was installed on the hard drive, MS-DOS assigns that drive the letter A: and a potentially existing floppy drive the letter B:. The Japanese version of the Windows 95 SETUP program supports a special option /AT to enforce that Windows will be on drive C:.
Some versions of DOS do not assign the drive letter, beginning with C:, to the first active primary partition recognized upon the first physical hard disk, but on the first primary partition recognized of the first hard disk, even if it is not set active.
If there is more than one extended partition in a partition table, only the logical drives in the first recognized extended partition type are processed.
Some late versions of the DR-DOS IBMBIO.COM provide a preboot config structure, holding bit flags to select (beside others) between various drive letter assignment strategies. These strategies can be preselected by a user or OEM or be changed by a boot loader on the fly when launching DR-DOS. Under these issues, the boot drive can be different from A: or C: as well.
The drive letter order can depend on whether a given disk is managed by a boot-time driver or by a dynamically loaded driver. For example, if the second or third hard disk is of SCSI type and, on DOS, requires drivers loaded through the CONFIG.SYS file (e.g. the controller card does not offer on-board BIOS or using this BIOS is not practical), then the first SCSI primary partition will appear after all the IDE partitions on DOS. Therefore, DOS and for example OS/2 could have different drive letters, as OS/2 loads the SCSI driver earlier. A solution was not to use primary partitions on such hard disks.
In Windows NT and OS/2, the operating system uses the aforementioned algorithm to automatically assign letters to floppy disk drives, optical disc drives, the boot disk, and other recognized volumes that are not otherwise created by an administrator within the operating system. Volumes that are created within the operating system are manually specified, and some of the automatic drive letters can be changed. Unrecognized volumes are not assigned letters, and are usually left untouched by the operating system.
A common problem that occurs with the drive letter assignment is that the letter assigned to a network drive can interfere with the letter of a local volume (like a newly installed CD/DVD drive or a USB stick). For example, if the last local drive is drive D: and a network drive would have been assigned as E:, then a newly attached USB mass storage device would also be assigned drive E: causing loss of connectivity with either the network share or the USB device. Users with administrative privileges can assign drive letters manually to overcome this problem.
Another condition that can cause problems on Windows XP is when there are network drives defined, but in an error condition (as they would be on a laptop operating outside the network). Even when the unconnected network drive is not the next available drive letter, Windows XP may be unable to map a drive and this error may also prevent the mounting of the USB device.
Common assignments
Applying the scheme discussed above on a fairly modern Windows-based system typically results in the following drive letter assignments:
A: — Floppy disk drives, 3½″ or 5¼″, and possibly other types of disk drives, if present.
B: — Reserved for a second floppy drive (that was present on many PCs).
C: — First hard disk drive partition.
D: to Z: — Other disk partitions get labeled here. Windows assigns the next free drive letter to the next drive it encounters while enumerating the disk drives on the system. Drives can be partitioned, thereby creating more drive letters. This applies to MS-DOS, as well as all Windows operating systems. Windows offers other ways to change the drive letters, either through the Disk Management snap-in or diskpart. MS-DOS typically uses parameters on the line loading device drivers inside the CONFIG.SYS file.
Case-specific drive letters:
F: — First network drive if using Novell NetWare.
G: — "Google Drive File Stream" if using Google Drive.
H: — "Home" directory on a network server.
L: — Dynamically assigned load drive under Concurrent DOS, Multiuser DOS, System Manager and REAL/32.
M: — Drive letter for the optional memory drive MDISK under Concurrent DOS.
N:, O:, P: — Assignable floating drives under CP/M-86 4.x, Personal CP/M-86 2.x, DOS Plus 1.1-2.1 (via BDOS call 0Fh), a concept later extended to any unused drive letters under Concurrent DOS, Multiuser DOS, System Manager, REAL/32 and DR DOS up to 6.0.
Q: — Microsoft Office Click-to-Run virtualization.
U: — Unix-like unified filesystem with virtual directory \DEV for device files under MiNT, MagiC, and MultiTOS.
Z: — First network drive if using Banyan VINES, and the initial drive letter assignment for the virtual disk network in the DOSBox x86 emulator. It is also the first letter selected by Windows for network resources, as it automatically selects from Z: downwards. By default, Wine maps Z: to the root of the UNIX directory tree.
When there is no second physical floppy drive, drive B: can be used as a "virtual" floppy drive mapped onto the physical drive A:, whereby the user would be prompted to switch floppies every time a read or write was required to whichever was the least recently used of A: or B:. This allows for much of the functionality of two floppy drives on a computer that has only one. This concept of multiple drive letters sharing a single physical device (optionally with different "views" of it) is not limited to the first floppy drive, but can be utilized for other drives as well by setting up additional block devices for them with the standard DOS DRIVER.SYS in CONFIG.SYS.
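As a hedged example of the DRIVER.SYS approach (the DOS path is hypothetical; /D:0 is the documented switch selecting the first physical floppy), a CONFIG.SYS entry such as the following creates an additional logical drive letter mapped onto the same physical unit:
REM CONFIG.SYS: give physical floppy 0 a second, logical drive letter
DEVICE=C:\DOS\DRIVER.SYS /D:0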
Network drives are often assigned letters towards the end of the alphabet. This is often done to differentiate them from local drives: by using letters towards the end, it reduces the risk of an assignment conflict. It is especially true when the assignment is done automatically across a network (usually by a logon script).
In most DOS systems, it is not possible to have more than 26 mounted drives. Atari GEMDOS supports 16 drive letters A: to P: only. The PalmDOS PCMCIA driver stack supports drive letters 0:, 1:, 2:, ... to address PCMCIA drive slots.
Some Novell network drivers for DOS support up to 32 drive letters under compatible DOS versions. In addition, Novell DOS 7, OpenDOS 7.01, and DR-DOS 7.02 genuinely support a CONFIG.SYS LASTDRIVE=32 directive in order to allocate up to 32 drive letters, named A: to Z:, [:, \:, ]:, ^:, _: and `:. (DR-DOS 7.02-7.07 also supports HILASTDRIVE and LASTDRIVEHIGH directives in order to relocate drive structures into upper memory.) Some DOS application programs do not expect drive letters beyond Z: and will not work with them, therefore it is recommended to use them for special purposes or search drives.
JP Software's 4DOS command line processor supports drive letters beyond Z: in general, but since some of the letters clash with syntactical extensions of this command line processor, they need to be escaped in order to use them as drive letters.
Windows 9x (MS-DOS 7.0/MS-DOS 7.1) added support for LASTDRIVE=32 and LASTDRIVEHIGH=32 as well.
If access to more filesystems than Z: is required under Windows NT, Volume Mount Points must be used. However, it is possible to mount non-letter drives, such as 1:, 2:, or !: using the command line SUBST utility in Windows XP or later (i.e. SUBST 1: C:\TEMP), but it is not officially supported and may break programs that assume that all drives are letters A: to Z:.
ASSIGN, JOIN and SUBST in DOS and Windows
Drive letters are not the only way of accessing different volumes. DOS offers a JOIN command that allows access to an assigned volume through an arbitrary directory, similar to the Unix mount command. It also offers a SUBST command which allows the assignment of a drive letter to a directory. One or both of these commands were removed in later systems like OS/2 or Windows NT, but starting with Windows 2000, both are again supported: The SUBST command exists as before, while JOIN's functionality is subsumed in LINKD (part of the Windows Resource Kit). In Windows Vista, the new command MKLINK can be used for this purpose. Also, Windows 2000 and later support mount points, accessible from the Control Panel.
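Hedged usage examples of these commands (all paths hypothetical): SUBST assigns a drive letter to a directory, while MKLINK in Windows Vista and later can graft one directory into another, serving a JOIN-like role.
REM Map a directory onto a new drive letter
SUBST X: C:\DATA\PROJECT
REM Windows Vista and later: create a directory junction (JOIN-like)
MKLINK /J C:\WEB C:\DATA\SITE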
Many operating systems originating from Digital Research provide means to implicitly assign substitute drives, called floating drives in DRI terminology, by using the CD/CHDIR command in the following syntax:
CD N:=C:\SUBDIR
DOS Plus supports this for drive letters N:, O:, and P:. This feature is also present in Concurrent DOS, Multiuser DOS, System Manager 7, and REAL/32, however, these systems extend the concept to all unused drive letters from A: to Z:, except for the reserved drive letter L:. DR DOS 3.31 - 6.0 (up to the 1992-11 updates with BDOS 6.7 only) also supports this including drive letter L:. This feature is not available under DR DOS 6.0 (1992 upgrade), PalmDOS 1.0, Novell DOS 7, OpenDOS 7.01, DR-DOS 7.02 and higher. Floating drives are implemented in the BDOS kernel, not in the command line shell, thus they can be used and assigned also from within applications when they use the "change directory" system call. However, most DOS applications are not aware of this extension and will consequently discard such directory paths as invalid. JP Software's command line interpreter 4DOS supports floating drives on operating systems also supporting it.
In a similar feature, Concurrent DOS, Multiuser DOS, System Manager and REAL/32 will dynamically assign a drive letter L: to the load path of a loaded application, thereby allowing applications to refer to files residing in their load directory under a standardized drive letter instead of under an absolute path. This load drive feature makes it easier to move software installations on and across disks without having to adapt paths to overlays, configuration files or user data stored in the load directory or subsequent directories.
(For similar reasons, the appendage to the environment block associated with loaded applications under DOS 3.0 (and higher) contains a reference to the load path of the executable as well, however, this consumes more resident memory, and to take advantage of it, support for it must be coded into the executable, whereas DRI's solution works with any kind of applications and is fully transparent to users as well.)
In some versions of DR-DOS, the load path contained in the appendage to the environment passed to drivers can be shortened to that of a temporary substitute drive (e.g. SUBST B: C:\DIR) through the INSTALL[HIGH]/LOADHIGH option /D[:loaddrive] (for B:TSR.COM instead of, say, C:\DIR\TSR.COM). This can be used to minimize a driver's effective memory footprint, if the executable is located in a deep subdirectory and the resident driver happens to not need its load path after installation any more.
See also
Drive mapping
Filename
net (command), a command in Microsoft Windows that can be used for viewing/controlling drive-letter assignments for network drives
Portable application
External links
Change Drive Letter in Windows 8
Tips for USB related drive letter issues
Bertha von Suttner
Baroness Bertha Sophie Felicitas von Suttner (9 June 1843 – 21 June 1914) was an Austro-Bohemian noblewoman, pacifist and novelist. In 1905, she became the second female Nobel laureate (after Marie Curie in 1903), the first woman to be awarded the Nobel Peace Prize, and the first Austrian and Czech laureate.
Early life
Bertha Kinský was born on 9 June 1843 at Kinský Palace in the Obecní dvůr district of Prague. Her parents were the Austrian lieutenant general Franz Michael de Paula Josef Graf Kinsky von Wchinitz und Tettau (1769–1843), then recently deceased at the age of 75, and his young wife, Sophie Wilhelmine von Körner (1815–1884), who was almost fifty years his junior.
Her father was a member of the illustrious House of Kinsky via descent from Count Wilhelm Kinsky (1574–1634), being a younger son of Count Franz Ferdinand Kinsky von Wchinitz und Tettau (1738–1806) and his wife, Princess Maria Christina Anna von und zu Liechtenstein (1741–1819), daughter of Prince Emanuel of Liechtenstein. Bertha's mother came from a family of untitled nobility of significantly lower status, being the daughter of her husband's comrade Joseph von Körner (a captain of cavalry in the Habsburg Imperial Army), a distant relative of the poet Theodor Körner. Through her mother, Bertha was also related to Theodor Körner, Edler von Siegringen, namesake and great-nephew of the poet, who later served as the 4th President of Austria.
For the rest of her life, Bertha faced exclusion from the Austrian high nobility due to her "mixed" descent; for instance, only those with an unblemished aristocratic pedigree back to their great-great-grandparents were eligible for presentation at the imperial court. She was additionally disadvantaged because her father, as a third son, had no great estates or other financial resources to bequeath. Bertha was baptised at Prague's Church of Our Lady of the Snows – not a traditional choice for the aristocracy.
Soon after her birth, Bertha's mother moved to live in Brno near Bertha's guardian, Landgrave Friedrich Michael zu Fürstenberg-Taikowitz (1793–1866). Her older brother, Count Arthur Franz Kinsky von Wchinitz und Tettau (1827–1906), was sent to a military school at the age of six and subsequently had little contact with the family. In 1855, Bertha's maternal aunt Charlotte (Lotte) Büschel, née von Körner (also a widow), and her daughter Elvira joined the household. Elvira, the daughter of a private scholar and, after her father's death, the ward of Count Johann Carl August von Huyn (1812–1889), was of a similar age to Bertha and interested in intellectual pursuits, introducing her cousin to literature and philosophy. Beyond her reading, Bertha gained proficiency in French, Italian and English as an adolescent under the supervision of a succession of private tutors; she also became an accomplished amateur pianist and singer.
Bertha's mother and aunt, regarding themselves as clairvoyant, went to gamble at Wiesbaden in the summer of 1856, hoping to return with a fortune. Their losses proved so heavy that they were forced to move to Vienna. During this trip, Bertha received a marriage proposal from Prince Philipp zu Sayn-Wittgenstein-Berleburg (1836–1858), third son of Prince August Ludwig zu Sayn-Wittgenstein-Berleburg (Minister of State of the Duchy of Nassau) and his wife, Franziska Allesina genannt von Schweitzer (1802–1878), which was declined due to her young age. The family returned to Wiesbaden in 1859; the second trip proved similarly unsuccessful, and they had to relocate to a small property in Klosterneuburg. Shortly after this, Bertha wrote her first published work, the novella Erdenträume im Monde, which appeared in Die Deutsche Frau. Continuing poor financial circumstances led Bertha into a brief engagement to the wealthy Gustav, Baron Heine von Geldern, 31 years her senior and a member of the banking Heine family, whom she came to find unattractive and rejected; her memoirs record her disgusted response to the older man's attempt to kiss her.
In 1864, the family spent the summer at Bad Homburg, a fashionable gambling destination among the aristocracy of the era. Bertha befriended the Georgian aristocrat Ekaterine Dadiani, Princess of Mingrelia, and met Tsar Alexander II, to whom she was very distantly related. Seeking a career as an opera singer as an alternative to marrying into money, she undertook an intensive course of lessons, working on her voice for over four hours a day. Despite tuition from the eminent Gilbert Duprez in Paris in 1867, and from Pauline Viardot in Baden-Baden in 1868, she never secured a professional engagement. She suffered from stage fright and was unable to project well in performance. In the summer of 1872, she became engaged to Prince Adolf zu Sayn-Wittgenstein-Hohenstein (1839–1872), son of Prince Alexander zu Sayn-Wittgenstein-Hohenstein (1801–1874) and Countess Amalie Luise von Bentheim-Tecklenburg-Rheda (1802–1887). Prince Adolf died at sea that October while travelling to America to escape his debts.
Tutor in the Suttner household, life in Georgia
Bertha's guardian (Landgrave Friedrich zu Fürstenberg) and her cousin Elvira both died in 1866, and she (now above the typical age of marriage) felt increasingly constrained by her mother's eccentricity and the family's poor financial circumstances. In 1873, she found employment as a tutor and companion to the four daughters of Karl, Freiherr von Suttner, who were aged between 15 and 20. The Suttner family lived in the Innere Stadt of Vienna for three seasons of the year and spent the summer at Castle Harmannsdorf in Lower Austria. She had an affectionate relationship with her four young students, who nicknamed her "Boulotte" (fatty) due to her size, a name she would later adopt as a literary pseudonym in the form "B. Oulot".
She soon fell in love with the girls' elder brother, Baron Arthur Gundaccar von Suttner (1850–1902), who was seven years her junior, the younger son of Karl Gundaccar Freiherr von Suttner (1819–1898) and his wife, Karola Knolz (b. 1822). They were engaged but unable to marry due to his parents' disapproval. In 1876, with the encouragement of her employers, she answered a newspaper advertisement which led to her briefly becoming secretary and housekeeper to Alfred Nobel in Paris. In the few weeks of her employment, she and Nobel developed a friendship, and Nobel may have made romantic overtures. However, she remained committed to Arthur and soon returned to Vienna to marry him in secret, in the church of St. Aegyd in Gumpendorf.
The newlywed couple eloped to Mingrelia in western Georgia, Russian Empire, near the Black Sea, where she hoped to make use of her connection to the former ruling House of Dadiani. On their arrival, they were entertained by Prince Niko. The couple settled in Kutaisi, where they found work teaching languages and music to the children of the local aristocracy. However, they experienced considerable hardship despite their social connections, living in a simple three-roomed wooden house. Their situation worsened in 1877 on the outbreak of the Russo-Turkish War, although Arthur worked as a reporter on the conflict for the Neue Freie Presse. Suttner also wrote frequently for the Austrian press in this period and worked on her early novels, including Es Löwos, a romanticised account of her life with Arthur. In the aftermath of the war, Arthur attempted to set up a timber business, but it was unsuccessful.
Arthur and Bertha von Suttner
Arthur and Bertha von Suttner were largely socially isolated in Georgia; their poverty restricted their engagement with high society, and neither ever became fluent in Mingrelian or Georgian. To support themselves, both took up writing as a career. While Arthur's writing during this period is dominated by local themes, Suttner's was not similarly influenced by Georgian culture.
In August 1882, Ekaterine Dadiani died. Soon afterwards, the couple decided to move to Tbilisi. There, Arthur took whatever work he could (in accounting, construction and wallpaper design), while Suttner largely concentrated on her writing. She became a correspondent of Michael Georg Conrad, eventually contributing an article to the 1885 edition of his publication Die Gesellschaft. The piece, entitled "Truth and Lies", is a polemic in favour of the naturalism of Émile Zola. Her first significant political work, Inventarium einer Seele ("Inventory of the Soul"), was published in Leipzig in 1883. In this work, Suttner takes a pro-disarmament, progressive stance, arguing for the inevitability of world peace due to technological advancement; a possibility also considered by her friend Nobel due to the increasingly deterrent effect of more powerful weapons.
In 1884, Suttner's mother died, leaving the couple with further debts. Arthur had befriended a Georgian journalist in Tbilisi, M., and the couple agreed to collaborate with him on a translation of the Georgian epic The Knight in the Panther Skin. Suttner was to improve M.'s literal translation from Georgian into French, and Arthur to translate the French into German. This method proved arduous, and they worked only a few hours each day because of the distraction of the Mingrelian countryside around M.'s home. Arthur published several articles on the work in the Georgian press, and Mihály Zichy prepared some illustrations for the publication, but M. failed to make the expected payment, and after the Bulgarian Crisis began in 1885 the couple felt increasingly unsafe in Georgian society, which was becoming more hostile to Austrians under Russian influence. They finally reconciled with Arthur's family and in May 1885 were able to return to Austria, where they lived at Harmannsdorf Castle in Lower Austria.
Bertha found refuge in her marriage with Arthur, of which she remarked that "the third field of my feelings and moods lay within our married happiness. In this was my peculiarly inalienable home, my refuge for all possible conditions of life, […] and so the leaves of my diary are full not only of political domestic records of all kinds, but also of memoranda of our gay little jokes, our confidential enjoyable walks, our uplifting reading, our hours of music together, and our evening games of chess. To us personally nothing could happen. We had each other – that was everything."
Peace activism
After their return to Austria, Suttner continued her journalism and concentrated on peace and war issues, corresponding with the French philosopher Ernest Renan and influenced by the International Arbitration and Peace Association founded by Hodgson Pratt in 1880.
In 1889, Suttner became a leading figure in the Austrian peace movement with the publication of her pacifist novel Die Waffen nieder! (Lay Down Your Arms!). The book was published in 37 editions and translated into 15 languages. She witnessed the foundation of the Inter-Parliamentary Union and called for the establishment of the Austrian pacifist organisation Gesellschaft der Friedensfreunde in an 1891 Neue Freie Presse editorial. Suttner became its chairwoman and founded the German Peace Society the next year. She became known internationally as the editor of the international pacifist journal Die Waffen nieder!, named after her book, from 1892 to 1899. In 1897, she presented Emperor Franz Joseph I of Austria with a list of signatures urging the establishment of an International Court of Justice, and she took part in the First Hague Convention in 1899 with the help of Theodor Herzl, who paid for her trip as a correspondent of the Zionist newspaper Die Welt.
Upon her husband's death in 1902, Suttner had to sell Harmannsdorf Castle and moved back to Vienna. In 1904 she addressed the International Congress of Women in Berlin and for seven months travelled around the United States, attending a universal peace congress in Boston and meeting President Theodore Roosevelt.
Though her personal contact with Alfred Nobel had been brief, she corresponded with him until his death in 1896, and it is believed that von Suttner was a major influence on his decision to include a peace prize among the prizes provided for in his will. She was awarded the Nobel Peace Prize on 10 December 1905, in the fifth year the prize was given, for her efforts toward an international order based on peace rather than war. She delivered her Nobel lecture on 18 April 1906 in Kristiania.
In 1907, von Suttner was the only woman to attend the Second Hague Peace Conference, which mainly pertained to the law of war. She was highly critical of the conference and warned of a war to come. In her Nobel lecture she had said: "(…) whether our Europe will become a showpiece of ruins and failure, or whether we can avoid this danger and so enter sooner the coming era of secure peace and law in which a civilisation of unimagined glory will develop. The many aspects of this question are what the second Hague Conference should be discussing rather than the proposed topics concerning the laws and practices of war at sea, the bombardment of ports, towns, and villages, the laying of mines, and so on. The contents of this agenda demonstrate that, although the supporters of the existing structure of society, which accepts war, come to a peace conference prepared to modify the nature of war, they are basically trying to keep the present system intact".
Around this time, she also crossed paths with Anna Bernhardine Eckstein, another German champion of world peace, who influenced the agenda of the Second Hague Peace Conference. A year later she attended the International Peace Congress in London, where she first met Caroline Playne, an English anti-war activist who would later write the first biography of Suttner.
In the run-up to World War I, Suttner continued to campaign against international armament. In 1911 she became a member of the advisory council of the Carnegie Peace Foundation. In the last months of her life, while suffering from cancer, she helped organise the next peace conference, intended to take place in September 1914. She died of cancer on 21 June 1914; seven days later the heir to her nation's throne, Archduke Franz Ferdinand, was assassinated, triggering World War I, and the conference never took place.
Suttner's pacifism was influenced by the writings of Immanuel Kant, Henry Thomas Buckle, Herbert Spencer, Charles Darwin and Leo Tolstoy (Tolstoy praised Die Waffen nieder!), conceiving peace as a natural state impaired by the human aberrances of war and militarism. As a result, she argued that a right to peace could be demanded under international law and was necessary in the context of an evolutionary Darwinist conception of history. Suttner was a respected journalist, with one historian describing her as "a most perceptive and adept political commentator".
Writing
As a career writer, Suttner often had to write novels and novellas that she did not believe in or really want to write, to support herself. However, even in those novels there are traces of her political ideals; often, the romantic heroes would fall in love upon realising they were both fighting for the same ideals, usually peace and tolerance.
To promote her writing career and ideals, she used her connections in aristocracy and friendships with wealthy individuals, such as Alfred Nobel, to gain access to international heads of state, and also to gain popularity for her writing. To increase the financial success of her writing, she used a male pseudonym early in her career. In addition, Suttner often worked as a journalist to publicise her message or promote her own books, events, and causes.
As Tolstoy noted and others have since agreed, there is a strong similarity between Suttner and Harriet Beecher Stowe. Both Beecher Stowe and Suttner "were neither simply writers of popular entertainment nor authors of tendentious propaganda.... [They] used entertainment for idealistic purposes." For Suttner, peace and acceptance of all individuals and all peoples was the greatest ideal and theme.
Suttner also wrote about other issues and ideals. Two common issues in her work, apart from pacifism, are religion and sex.
Religion
There are two main issues with religion that Suttner often wrote about. She had a disdain for the spectacle and pomp of some religious practices. In a scene in Lay Down Your Arms, she highlighted the odd theatricality of some religious practices: the emperor and empress wash the feet of ordinary citizens to show they are as humble as Jesus, but they invite everyone to witness their show of humility and enter the hall in a dramatic fashion. The protagonist Martha remarks that it was "indeed a sham washing."
Another issue prominent in much of her writing is the idea that war is waged righteously in God's name, and that leaders often use religion as a pretext for war. Suttner criticised this reasoning on the grounds that it treated the state, rather than the individual, as the entity that mattered to God, thereby making dying in battle seem more glorious than other forms of death or than surviving a war. Much of Lay Down Your Arms discusses this topic.
This type of religious thinking also leads to segregation and fighting based on religious differences, which Bertha and Arthur von Suttner refused to accept. As a devout Christian, Arthur founded the League Against Anti-Semitism in response to the pogroms in Eastern Europe and the growing antisemitism across Europe. The Suttner family called for acceptance of all people and all faiths, with Suttner writing in her memoirs that "religion was neighbourly love, not neighbourly hatred. Any kind of hatred, against other nations or against other creeds, detracted from the humaneness of humanity."
Sex
Suttner is often considered a leader in the women's liberation movement.
Von Suttner broke through sex barriers with her work as a writer and activist. She was an outspoken leader in a society in which women were expected to be seen, not heard. She did not, however, actively participate in the movements for women's suffrage, which she explained by a lack of time; she instead focused on reaching out to other women in the international peace movement, though she kept in close contact with the women's suffrage movement. As a sign of solidarity, for instance, von Suttner was a prominent participant in the 1904 International Congress of Women (Internationale Frauen-Kongress) in Berlin. Von Suttner believed, though, that conflict could only be avoided if men and women struggled for peace together, which required an absolute belief in sex equality. "The tasks involved in mankind's continuing ennoblement are such that they can only be fulfilled through fair and equal cooperation between the sexes", she wrote.
In Lay Down Your Arms, the protagonist Martha often clashes with her father on this issue. Martha does not want her son to play with toy soldiers and be indoctrinated into masculine ideas of war. Martha's father attempts to push Martha back into the conventional female role, suggesting that her son will not need to ask for approval from a woman, and stating that Martha should marry again because women her age should not be alone.
Her significance was not simply that she insisted that women are equal to men, but that she was able to tease out how sexism affects both men and women. Just as Martha is confined by a female stereotype, the character of Tilling is placed in a male stereotype and affected by it. The character even discusses it, saying, "we men have to repress the instinct of self-preservation. Soldiers have also to repress the compassion, the sympathy for the gigantic trouble which invades both friend and foe; for next to cowardice, what is most disgraceful to us is all sentimentality, all that is emotional."
Legacy
Although Suttner was not financially successful during her lifetime, her work has remained influential for those involved in the peace movement.
She was awarded the Nobel Peace Prize in 1905.
She has also been commemorated on several coins and stamps:
She was selected as a main motif for a high-value collectors' coin: the 2008 Europe Taler, which featured important people in the history of Europe. Also depicted on the coin are Martin Luther, Antonio Vivaldi, and James Watt.
A commemorative silver 10 euro coin was issued in Germany in honor of the centennial of her Nobel Prize.
She is depicted on the Austrian 2 euro coin, and was pictured on the old Austrian 1,000 schilling bank note.
She was commemorated on a 1965 Austrian postage stamp and a 2005 German postage stamp.
On 10 December 2019, Google celebrated her with a Google Doodle.
There is a statue in her honor in Vienna and one in Graz.
On film
Die Waffen nieder, by Holger Madsen and Carl Theodor Dreyer. Released by Nordisk Films Kompagni in 1914.
No Greater Love, a 1952 film, has Bertha as the main character.
TV
Eine Liebe für den Frieden – Bertha von Suttner und Alfred Nobel, TV biopic, ORF/Degeto/BR 2014, after the play Mr. & Mrs. Nobel by Esther Vilar.
Works translated into English
See also
Pacifism
List of peace activists
List of Austrians
List of Austrian writers
List of female Nobel laureates
References
Citations and notes
Bibliography
Irwin Abrams: "Bertha von Suttner and the Nobel Peace Prize". In Journal of Central European Affairs. Bd. 22, 1962, S. 286–307.
Laurence, Richard R. "Bertha von Suttner and the peace movement in Austria to World War I." Austrian History Yearbook 23 (1992): 181–201.
Brigitte Hamann: Bertha von Suttner. Ein Leben für den Frieden. Piper, München 2002.
Laurie R. Cohen (Hrsg.): „Gerade weil Sie eine Frau sind…“. Erkundungen über Bertha von Suttner, die unbekannte Friedensnobelpreisträgerin. Braumüller, Wien 2005.
Maria Enichlmair: Abenteurerin Bertha von Suttner: Die unbekannten Georgien-Jahre 1876 bis 1885. Ed. Roesner, Maria Enzersdorf 2005.
Beatrix Müller-Kampel (Hrsg.): „Krieg ist der Mord auf Kommando“. Bürgerliche und anarchistische Friedenskonzepte. Bertha von Suttner und Pierre Ramus. Graswurzelrevolution, Nettersheim 2005.
Beatrix Kempf: "Bertha von Suttner und die „bürgerliche“ Friedensbewegung". In Friede – Fortschritt – Frauen. Friedensnobelpreisträgerin Bertha von Suttner auf Schloss Harmannsdorf. LIT-Verlag, Wien 2007, S. 45 ff.
Valentin Belentschikow: Bertha von Suttner und Russland (= Vergleichende Studien zu den slavischen Sprachen und Literaturen). Lang, Frankfurt am Main u.a. 2012.
Simone Peter: "Bertha von Suttner (1843–1914)". In Bardo Fassbender, Anne Peters (eds.): The Oxford Handbook of the History of International Law. Oxford University Press, Oxford 2012, S. 1142–1145.
Stefan Frankenberger (ed.): Der unbekannte Soldat – Zum Andenken an Bertha von Suttner. Mono, Wien 2014.
External links
Biographic information
including the Nobel Lecture of April 18, 1906, "The Evolution of the Peace Movement"
Bertha Freifrau (Baroness) von Suttner on nobel-winners.com
Another biography on Suttner
Digital editions
Online text of "Lay down Your Arms", archive.org
Bertha von Suttner, "Visit to Alfred Nobel ," in Memoirs of Bertha von Suttner: The Records of an Eventful Life. Vol. 1. 2 vols. Boston: Ginn & Co., 1910.
The Bertha Von Suttner Project (repository of print and multimedia resources in English)
Other links
(PDF of full review of Memoirs)
2005 – the Bertha von Suttner Year
1843 births
1914 deaths
Writers from Prague
19th-century Czech people
19th-century Austrian novelists
Czech Nobel laureates
Austrian Nobel laureates
Nobel Peace Prize laureates
German Peace Society members
Austrian pacifists
Austrian women activists
Austrian women novelists
Austrian journalists
Austrian women writers
Writers from Austria-Hungary
Nobel laureates from Austria-Hungary
Austrian people of Czech descent
Austrian baronesses
Habsburg Bohemian nobility
Bertha Suttner
Women Nobel laureates
19th-century women writers
International Congress of Women people | Bertha von Suttner | [
"Technology"
] | 5,289 | [
"Women Nobel laureates",
"Women in science and technology"
] |
55,465 | https://en.wikipedia.org/wiki/High-occupancy%20vehicle%20lane | A high-occupancy vehicle lane (also known as an HOV lane, carpool lane, diamond lane, 2+ lane, and transit lane or T2 or T3 lanes) is a restricted traffic lane reserved for the exclusive use of vehicles with a driver and at least one passenger, including carpools, vanpools, and transit buses. These restrictions may be only imposed during peak travel times or may apply at all times. There are different types of lanes: temporary or permanent lanes with concrete barriers, two-directional or reversible lanes, and exclusive, concurrent, or contraflow lanes working in peak periods.
The normal minimum occupancy level is two or three occupants. Many jurisdictions exempt other vehicles, including motorcycles, charter buses, emergency and law enforcement vehicles, low-emission and other green vehicles, and/or single-occupancy vehicles paying a toll. HOV lanes are normally introduced to increase average vehicle occupancy and persons traveling with the goal of reducing traffic congestion and air pollution.
History
United States
The introduction of HOV lanes in the United States progressed slowly during the 1970s and early 1980s, with major growth occurring from the mid-1980s to the late 1990s. The first freeway HOV lane in the United States was implemented on the Henry G. Shirley Memorial Highway in Northern Virginia, between Washington, D.C., and the Capital Beltway; it opened in 1969 as a bus-only lane. The busway was opened in December 1973 to carpools with four or more occupants, becoming the first instance in which buses and carpools officially shared an HOV lane over a considerable distance.
In 2005, the two lanes of this HOV 3+ facility carried a total of 31,700 people in 8,600 vehicles (3.7 persons/vehicle) during the morning peak period (6:30 am to 9:30 am), while the three or four general-purpose lanes carried 23,500 people in 21,300 vehicles (1.1 persons/vehicle). Average travel time was 29 minutes on the HOV facility and 64 minutes in the general traffic lanes. As of 2012, the I-95/I-395 HOV facility extends from Washington, D.C., to Dumfries, Virginia, and has two reversible lanes separated from the regular lanes by barriers, with access through elevated on- and off-ramps. Three or more people in a vehicle (HOV 3+) are required to travel on the facility during rush hours on weekdays.
The second freeway HOV facility, which opened in 1970, was the contraflow bus lane on the Lincoln Tunnel Approach and Helix in Hudson County, New Jersey. According to the Federal Highway Administration (FHWA), the Lincoln Tunnel XBL carries the highest number of peak-hour persons of any U.S. HOV facility with utilization data available, with 23,500 persons in the peak hour and 62,000 passengers during the four-hour morning peak.
The first permanent HOV facility in California was the bypass lane at the San Francisco–Oakland Bay Bridge toll plaza, opened to the public in April 1970. The El Monte Busway (I-10 / San Bernardino Freeway) in Los Angeles was initially available only to buses when it opened in 1973. Three-person carpools were allowed to use the bus lane for three months in 1974 due to a strike by bus operators, and then permanently as a 3+ HOV from 1976. It is one of the most efficient HOV facilities in North America and was converted to high-occupancy toll operation in 2013 to allow low-occupancy vehicles to bid for excess capacity on the lane as part of the Metro ExpressLanes project.
Beginning in the 1970s, the Urban Mass Transportation Administration recognized the advantages of exclusive bus lanes and encouraged their funding. In the 1970s the FHWA began to allow state highway agencies to spend federal funds on HOV lanes. As a result of the 1973 Arab Oil Embargo, interest in ridesharing picked up, and states began experimenting with HOV lanes. In order to reduce crude oil consumption, the 1974 Emergency Highway Energy Conservation Act mandated a maximum speed limit of 55 mph on public highways and became the first instance in which the U.S. federal government provided funding for ridesharing, and states were allowed to spend their highway funds on rideshare demonstration projects. The 1978 Surface Transportation Assistance Act made funding for rideshare initiatives permanent.
Also during the early 1970s, ridesharing was recommended for the first time as a tool to mitigate air quality problems. The 1970 Clean Air Act Amendments established the National Ambient Air Quality Standards and gave the Environmental Protection Agency (EPA) substantial authority to regulate air quality attainment. A final control plan for the Los Angeles Basin was issued in 1973, and one of its main provisions was a two-phase conversion of freeway and arterial roadway lanes to bus/carpool lanes and the development of a regional computerized carpool matching system. However, it took until 1985 before any HOV project was constructed in Los Angeles County, and by 1993 there was still only a limited mileage of HOV lanes countywide.
A significant policy shift took place in October 1990, when a memorandum from the FHWA administrator stated that "FHWA strongly supports the objective of HOV preferential facilities and encourages the proper application of HOV technology." Regional administrators were directed to promote HOV lanes and related facilities. Also in the early 1990s, two laws reinforced the U.S. commitment to HOV lane construction. The Clean Air Act Amendments of 1990 included HOV lanes as one of the transportation control measures that could be included in state implementation plans to attain federal air quality standards. The 1990 amendments also deny the administrator of the EPA the authority to block FHWA from funding 24-hour HOV lanes as part of the sanctions for a state's failure to comply with the Clean Air Act, if the secretary of transportation wishes to approve the FHWA funds.
On the other hand, the Intermodal Surface Transportation Efficiency Act (ISTEA) of 1991 encouraged the construction of HOV lanes, which were made eligible for Congestion Mitigation and Air Quality (CMAQ) funds in regions not attaining federal air quality standards. CMAQ funds may be spent on new HOV lane construction, even if the HOV designation holds only at peak travel times or in the peak direction. ISTEA also provided that under the Interstate Maintenance Program, only HOV projects would receive the 90% federal matching ratio formerly available for the addition of general purpose lanes. ISTEA, in addition, permitted state authorities to define a high occupancy vehicle as having a minimum of two occupants (HOV 2+).
As of 2009, California was the state with the most HOV facilities in the country, with 88, followed by Minnesota with 83 facilities, Washington with 41, Texas with 35, and Virginia with 21. By 2006, HOV lanes in California were operating at two-thirds of their capacity, and these HOV facilities carried on average 2,518 persons per hour during peak hours, substantially more people than the congested general-traffic lanes.
As of October 2016, the longest continuous HOV facility in the U.S. is on I-15 in Utah, extending from Layton to Spanish Fork with a single HOV lane in each direction. While the Utah facility is the longest, the I-495 Capital Beltway in the Washington, D.C., metropolitan area has two HOV lanes in each direction.
On October 24, 2023, Michigan opened its first-ever HOV lanes on a portion of I-75 in Oakland County from South Boulevard in Bloomfield Township to 12 Mile Road in Madison Heights as part of a freeway modernization project. One lane in both directions is restricted to HOV use from 6 a.m. to 9 a.m. and from 3 p.m. to 6 p.m. Monday through Friday, while all other drivers regardless of the number of occupants in their vehicle can freely use the lanes outside of those hours.
Canada
The first HOV facilities in Canada were opened in Greater Vancouver and Toronto in the early 1990s, followed shortly by facilities in Ottawa, Gatineau, Montreal, and later Calgary. As of 2010 there were highway HOV lanes in 11 locations in British Columbia, Ontario, and Quebec, and arterial HOV lanes in 24 locations in Greater Vancouver, Calgary, Toronto, Ottawa, and Gatineau. The Ontario Ministry of Transportation (MTO) estimated in 2006 that commuters in Toronto using the HOV facilities on Highways 403 and 404 were saving 14–17 minutes per trip compared to their travel time before the HOV lanes opened. The MTO also estimated that almost 40% of commuters were carpooling on Highway 403 eastbound in the morning peak hour, compared to 14% in 2003, and 37% of commuters were carpooling on Highway 403 westbound in the afternoon peak hour, compared to 22% in 2003. Average rush-hour speeds on the Highway 403 HOV lanes are higher than in the general-traffic lanes.
Temporary HOV lanes were added to sections of 400-series highways in the Greater Toronto Area for the 2015 Pan American Games and 2015 Parapan American Games.
Europe
As of 2012, there are a few HOV lanes in operation in Europe. The main reason for this is that, in general, European cities have better public transport services and fewer high-capacity multi-lane urban motorways than do the U.S. and Canada. However, at around 1.3 persons per vehicle, average car occupancy is relatively low in most European cities. The emphasis in Europe has been on providing bus lanes and on-street bus priority measures.
The first HOV lane in Europe opened in the Netherlands in October 1993 and operated until August 1994. The facility was a barrier-separated HOV 3+ lane on the A1 near Amsterdam. It did not attract enough users to overcome public criticism and was converted to a reversible lane open to general traffic after a judge in a legal test case ruled that Dutch traffic law lacked the concept of a carpool and that the lane therefore violated the principle of equality.
Spain was the next European country to introduce HOV lanes, when median reversible Bus-VAO lanes were opened on Madrid's A-6 in 1995. This is Europe's oldest HOV facility still in operation.
The first HOV facility in the United Kingdom opened in Leeds in 1998. Implemented on the A647 road as an experimental scheme, it became permanent and operates as an HOV 2+ facility.
A HOV 3+ facility opened in Linz, Austria, in 1999.
The first HOV lane in Norway was implemented in May 2001 as an HOV 3+ on Elgeseter Street, an undivided four-lane arterial road in Trondheim. This facility was followed by HOV lanes in Oslo and Kristiansand.
New Zealand and Australia
The first HOV lane (known as a Transit Lane T2 or T3) in Australia opened in February 1992, located on the Eastern Freeway in Melbourne travelling inbound. In May 2005, T2 Transit lanes were opened on Hoddle Street in Melbourne. As of 2012, there were also T2 and T3 facilities in Canberra, Sydney and Brisbane.
In Auckland, New Zealand, there are several short HOV 2+ and 3+ lanes throughout the region, commonly known as T2 and T3 lanes. There is a T2 transit lane in Tamaki Drive, in a short stretch between Okahu Bay Reserve and downtown Auckland. There are also T2 priority lanes on Auckland's Northern, Southern, Northwestern, and Southwestern Motorways. These priority lanes are left-side on-ramp lanes heading towards the motorway, where vehicles with two or more people can bypass the ramp meter signal. Priority lanes can also be used by trucks, buses, and motorcycles, and the priority lanes can be used by carpoolers at any time. Eleven lanes were opened to electric vehicles in a one-year trial from September 2017. There are also several short T2 and T3 facilities in North Shore City operating during rush hours.
Indonesia
In Jakarta, HOV 3+ is known as "Three in One" (Tiga dalam satu) and was first implemented by governor Sutiyoso. HOV 3+ was implemented on weekdays on existing roads: Sisingamangaraja Road (fast and slow lanes), Jalan Jenderal Sudirman (fast and slow lanes), Jalan M.H. Thamrin (fast and slow lanes), Medan Merdeka Barat Road, Majapahit Road, and sections of Jalan Jenderal Gatot Subroto. The policy was originally in force only between 7:00 am and 10:00 am. After the introduction of Jakarta's bus rapid transit in December 2003, the policy was extended to 7:00 am – 10:00 am and 4:00 pm – 7:00 pm. In September 2004, the evening period was changed to 4:30 pm – 7:00 pm. Car jockeys were paid by drivers to ride in their vehicles so that the vehicles could bypass the three-in-one restriction. On August 30, 2016, an odd–even rationing (ganjil-genap) system began to replace the "3-in-1" rule after a successful trial: vehicles with odd plate numbers can enter the former "3-in-1" areas on odd days and those with even plate numbers on even days.
China
In Shenzhen, HOV 2+ has been implemented on Binhai Avenue since 25 April 2016. The policy was then extended to 7:30 am – 9:30 am and 5:30 pm – 9:30 pm.
In Chengdu, from January 23, 2017, HOV 2+ has been implemented on Kehua Road South, Kehua Road Middle, and Tianfu Avenue Sections 1 and 2, during 7:00 am – 9:00 am and 5:00 pm – 7:00 pm.
In Dalian, an expressway (the Northeast, or Dongbei, Expressway) linking the old town and new town has one lane in each direction designated HOV 2+. Since September 20, 2017, the HOV lane on the Northeast Expressway has operated during the morning peak hours of 06:30–08:30 and the evening peak hours of 16:30–19:00. A fine of CNY100 (about USD15) is imposed on first-time violators; for a second violation, the fine doubles.
Design and operations
HOV lanes may be either a single traffic lane within the main roadway with distinctive markings or a separate roadway with one or more traffic lanes either parallel to the general lanes or grade-separated, above or below the general lanes. For example, Interstate 110 in California has four HOV lanes on an upper deck.
HOV bypass lanes allow carpool traffic, buses, and police to bypass areas of regular congestion in many places. An HOV lane may also operate as a reversible lane, working in the direction of the dominant traffic flow in both the morning and the afternoon. All lanes of a section of Interstate 66 in the suburbs of Washington, D.C., are treated as HOV lanes during rush hour in the primary direction of flow.
The traffic speed differential between HOV and general-purpose lanes creates a potentially dangerous situation if the HOV lanes are not separated by a barrier. A Texas Transportation Institute study found that HOV lanes lacking barrier separations caused a 50% increase in injury crashes.
Variants
Business access and transit lane
A business access and transit (BAT) lane is a type of HOV lane that allows for all traffic to enter the lane for a short distance in order to access other streets and business entrances.
High-occupancy toll lane
Because some HOV lanes were not utilized to their full capacity, users of low- or single-occupancy vehicles may be permitted to use an HOV lane if they pay a toll. This scheme is known as a high-occupancy toll lane (or HOT lane), and it has been introduced mainly in the United States. The first practical implementation was California's formerly private toll 91 Express Lanes, in Orange County, California, in 1995, followed in 1996 by Interstate 15 north of San Diego. According to the Texas A&M Transportation Institute, by 2012 there were 294 corridor-miles of HOT/Express lanes in operation and 163 corridor-miles under construction in the United States.
Solo drivers are permitted to use the HOV lanes upon payment of a fee that varies based on demand. Tolls change throughout the day according to real-time traffic conditions, which is intended to manage the number of cars in the lanes to maintain good journey times.
Proponents claim that all motorists benefit from HOT lanes, even those who choose not to use them. This argument applies only to projects that increase the total number of lanes. Proponents also claim that HOT lanes provide an incentive to use transit and ridesharing. There has been controversy over this concept, and HOT schemes have been called "Lexus" lanes, as critics see this new pricing scheme as a perk for the rich.
HOT tolls are collected by staffed toll booths, automatic number plate recognition, or electronic toll collection systems. Some systems use RFID transmitters to monitor entry and exiting of the lane and charge drivers depending on demand. Typically, tolls increase as traffic density and congestion within the tolled lanes increase, a policy known as congestion pricing. The goal of this pricing scheme is to minimize traffic congestion within the lanes.
Qualifying vehicles
Qualification for HOV status varies by scheme, but the following vehicles may be included:
Private cars and taxis with a minimum number of human occupants (often two or three), including babies of any age (but only after birth).
Single-occupant green vehicles, such as hybrid electric vehicles, plug-in hybrids, and battery electric vehicles
Motorcycles – allowed under federal United States HOV lane law (Title 23, Section 166); in Ontario they can use HOV lanes even when carrying only the rider
Buses designed to transport sixteen or more passengers, including the driver
Public utility vehicles when responding to emergency calls
Bicycles
Police are allowed to use the HOV lanes in Ontario.
New York City HOV lane codes prior to 2008 did not allow motorcycles, leading to ticketing of motorcyclists and complaints from the American Motorcyclist Association; the codes have since been revised to comply with the federal regulations listed above.
In some jurisdictions such as Ontario, Canada, taxicabs and airport limousines are allowed to use HOV lanes even when no passenger is present because such a vehicle "will be able to return to duty faster after dropping off a fare or arrive sooner to pick up a fare, thereby moving more people to their destinations in fewer vehicles".
In Virginia, the San Francisco Bay Area, Houston, and other HOV lane locations, commuters form "slug lines", in which drivers pick up one or more passengers at a designated casual-carpool stop so that they can use the HOV lanes; the driver pulls over near the line and calls out their destination, and people in the line going to that destination enter the car on a first-come, first-served basis.
Compliance, enforcement, and avoidance
Fines are usually imposed on drivers of non-qualifying vehicles who use the lanes.
Following the introduction of HOV lanes, some drivers placed inflatable dolls in the passenger seat, a practice that persists today even though it is illegal. Cameras that can distinguish between humans and mannequins or dolls were tested in the United Kingdom in 2005.
In the United States, law enforcement officials have documented a variety of methods used by drivers in attempts to circumvent HOV occupancy rules:
Placing store mannequins, blow-up dolls, kickboxing dummies, or cardboard cut-outs in the passenger seat;
Taping styrofoam wig stands with wigs or balloons with faces drawn on them to the passenger seat headrest;
Buckling the passenger-side seat belt and pretending to talk to someone reclining in that seat;
Tinting the front windshield and/or lowering the passenger side visor in an effort to obstruct the view into the passenger seat;
Covering an empty infant seat with a blanket and/or placing a doll in it;
Strapping dogs, cats, or other pets into the passenger seat.
In early 2006, an Arizona woman asserted that she had been improperly ticketed for using the HOV lane because the unborn child she was carrying in her womb justified her use of the lane, while noting that Arizona traffic laws do not define what constitutes a person. However, a judge subsequently ruled that to qualify as an "individual" under Arizona traffic laws, the individual must occupy a "separate and distinct" space in a vehicle. Likewise, in California, in order to use HOV lanes, there must be two (or, if posted, three) separate individuals occupying seats in a vehicle, and an unborn child does not count towards this requirement.
In 2009 and 2010 it was found that non-compliance rates on HOV lanes in Brisbane, Australia, were approaching 90%. Enhanced enforcement led to increased compliance, average bus journey times dropped by about 19%, and total person throughput increased by 12%.
In February 2010, a 61-year-old woman tried to pass off a life-sized mannequin as a passenger in order to use the HOV lane in New York State. A police officer on a routine HOV patrol became suspicious when he noticed that the so-called passenger was wearing sunglasses and using the visor on a cloudy morning. When the officer approached the vehicle, he discovered that the "passenger" was, in fact, a mannequin wearing lipstick, designer shades, a full-length wig, and a blue sweater. The driver was issued a traffic ticket for using the HOV lane without a human passenger, which at the time carried a fine of $135 and two points on her driver's license.
In January 2013, a motorist tried to claim that the Articles of Incorporation of his business, which had been placed unbuckled on the passenger seat, constituted a person, citing the principle of corporate personhood and California's state Vehicle Code, which defines a person as "natural persons and corporations". This argument was rejected in traffic court, where the presiding judge commented, "Common sense says carrying a sheaf of papers in the front seat does not relieve traffic congestion."
In March 2015, a motorist tried to use a cardboard cutout of actor Jonathan Goldsmith to access an HOV lane in Fife, Washington. The officer noted that other drivers had used sleeping bags in earlier attempts to access the HOV lane.
In July 2022, a pregnant woman in Texas argued that her fetus counted as a passenger for the purpose of using the HOV lane following the Dobbs v. Jackson Women's Health Organization decision and Texas law subsequently considering fetuses people.
Effectiveness
According to 2009 data from the U.S. census, 76% of American commuters drive to work alone and only 10% rideshare. For suburban commuters working in a city, the solo driving rate is 82%.
Some underused HOV lanes in several states have been converted to high-occupancy toll lanes (HOT), which offer solo drivers access to HOV lanes after paying a toll.
HOV lanes are also an effective way to manage traffic after natural disasters, as seen in New York City after Hurricane Sandy in October 2012. At the time Mayor Bloomberg banned passenger cars with fewer than three occupants from entering Manhattan. The restriction affected all bridges and tunnels entering the city except the George Washington Bridge.
Criticism
Critics have argued that HOV lanes are underused. It is unclear whether HOV lanes are sufficiently used to compensate for delays in the other mixed-use lanes.
The scheme also caused social problems in Indonesia, where some people worked as "car jockeys", making their living by riding in strangers' cars so that drivers could meet the occupancy requirement. Reportedly, the practice kept some people out of regular employment, increased congestion, and led parents to profit from hiring out their babies as passengers.
Gallery
See also
Bus rapid transit
Local-express lanes
Toll road
Transportation Demand Management
Notes and references
External links
Frequently Asked HOV Questions, Federal Highway Administration
High Occupancy Vehicle Lanes in Canada, Transport Canada
HOV Priority, TDM Encyclopedia, Victoria Transport Policy Institute
California Eligible Vehicle List – Single occupant carpool lane stickers, California Air Resources Board.
Information on how to map HOV facilities within OpenStreetMap
HOV lanes mapping based on data from OpenStreetMap.
Deal lowers tolls on I-85 HOT lanes
Variable Pricing: San Diego's I-15 HOT Lanes Mainstreamed – article about the first variable-price toll lane (1998)
Comprehensive analysis on the conversion of existing HOV Lanes to HOT Lanes in Tennessee, by Deo Chimba and Janey Camp.
Road traffic management
Road infrastructure
Sustainable transport | High-occupancy vehicle lane | [
"Physics"
] | 5,185 | [
"Physical systems",
"Transport",
"Sustainable transport"
] |
55,490 | https://en.wikipedia.org/wiki/Roald%20Dahl | Roald Dahl (13 September 1916 – 23 November 1990) was a British author of popular children's literature and short stories, a poet, screenwriter and a wartime fighter ace. His books have sold more than 300 million copies worldwide. He has been called "one of the greatest storytellers for children of the 20th century".
Dahl was born in Wales to affluent Norwegian immigrant parents, and lived for most of his life in England. He served in the Royal Air Force (RAF) during the Second World War. He became a fighter pilot and, subsequently, an intelligence officer, rising to the rank of acting wing commander. He rose to prominence as a writer in the 1940s with works for children and for adults, and he became one of the world's best-selling authors. His awards for contribution to literature include the 1983 World Fantasy Award for Life Achievement and the British Book Awards' Children's Author of the Year in 1990. In 2008, The Times placed Dahl 16th on its list of "The 50 Greatest British Writers Since 1945". In 2021, Forbes ranked him the top-earning dead celebrity.
Dahl's short stories are known for their unexpected endings, and his children's books for their unsentimental, macabre, often darkly comic mood, featuring villainous adult enemies of the child characters. His children's books champion the kindhearted and feature an underlying warm sentiment. His works for children include James and the Giant Peach, Charlie and the Chocolate Factory, Matilda, The Witches, Fantastic Mr Fox, The BFG, The Twits, George's Marvellous Medicine and Danny, the Champion of the World. His works for older audiences include the short story collections Tales of the Unexpected and The Wonderful Story of Henry Sugar and Six More.
Early life and education
Childhood
Roald Dahl was born in 1916 at Villa Marie, Fairwater Road, in Llandaff, Cardiff, Wales, to Norwegians Harald Dahl and Sofie Magdalene Dahl (née Hesselberg). Dahl's father, a wealthy shipbroker and self-made man, had emigrated to Britain from Sarpsborg, Norway and settled in Cardiff in the 1880s with his first wife, Frenchwoman Marie Beaurin-Gresser. They had two children together (Ellen Marguerite and Louis) before her death in 1907. Roald's mother belonged to a well-established Norwegian family of lawyers, priests in the state church and wealthy merchants and estate owners, and emigrated to Britain when she married his father in 1911. Dahl was named after Norwegian polar explorer Roald Amundsen. His first language was Norwegian, which he spoke at home with his parents and his sisters Astri, Alfhild, and Else. The children were raised in Norway's Lutheran state church, the Church of Norway, and were baptised at the Norwegian Church, Cardiff. His maternal grandmother Ellen Wallace was a granddaughter of the member of parliament Georg Wallace and a descendant of an early 18th-century Scottish immigrant to Norway.
Dahl's sister Astri died from appendicitis at age seven in 1920 when Dahl was three years old, and his father died of pneumonia at age 57 several weeks later. Later in the same year, his youngest sister, Asta, was born. Upon his death, Harald Dahl left a fortune assessed for probate of £158,917 10s. 0d. Dahl's mother decided to remain in Wales instead of returning to Norway to live with relatives, as her husband had wanted their children to be educated in English schools, which he considered the world's best. When he was six years old, Dahl met his idol Beatrix Potter, author of The Tale of Peter Rabbit featuring the mischievous Peter Rabbit, the first licensed fictional character. The meeting, which took place at Potter's home, Hill Top in the Lake District, north west England, was dramatised in the 2020 television film Roald & Beatrix: The Tail of the Curious Mouse.
Dahl first attended The Cathedral School, Llandaff. At age eight, he and four of his friends were caned by the headmaster after putting a dead mouse in a jar of gobstoppers at the local sweet shop, which was owned by a "mean and loathsome" old woman named Mrs Pratchett. The five boys named their prank the "Great Mouse Plot of 1924". Mrs Pratchett inspired Dahl's creation of the cruel headmistress Miss Trunchbull in Matilda, and a prank, this time in a water jug belonging to Trunchbull, would also appear in the book. Gobstoppers were a favourite sweet among British schoolboys between the two World Wars, and Dahl referred to them in his fictional Everlasting Gobstopper which was featured in Charlie and the Chocolate Factory.
Dahl transferred to St Peter's boarding school in Weston-super-Mare. His parents had wanted him to be educated at an English public school, and this proved to be the nearest because of the regular ferry link across the Bristol Channel. Dahl's time at St Peter's was unpleasant; he was very homesick and wrote to his mother every week but never revealed his unhappiness to her. After her death in 1967, he learned that she had saved every one of his letters; they were broadcast in abridged form as BBC Radio 4's Book of the Week in 2016 to mark the centenary of his birth. Dahl wrote about his time at St Peter's in his autobiography Boy: Tales of Childhood. Excelling at conkers—a traditional autumnal children's game in Britain and Ireland played using the seeds of horse chestnut trees—Dahl recollected, "at the ages of eight, nine and ten, conkers brought sunshine to our lives during the dreary autumn term".
Repton School
From 1929, when he was 13, Dahl attended Repton School in Derbyshire. Dahl disliked the hazing and described an environment of ritual cruelty and status domination, with younger boys having to act as personal servants for older boys, frequently subject to terrible beatings. His biographer Donald Sturrock described these violent experiences in Dahl's early life. Dahl expresses some of these darker experiences in his writings, which is also marked by his hatred of cruelty and corporal punishment.
According to Dahl's autobiography, Boy: Tales of Childhood, a friend named Michael was viciously caned by headmaster Geoffrey Fisher. Writing in that same book, Dahl reflected: "All through my school life I was appalled by the fact that masters and senior boys were allowed literally to wound other boys, and sometimes quite severely... I couldn't get over it. I never have got over it." Fisher was later appointed Archbishop of Canterbury, and he crowned Queen Elizabeth II in 1953. However, according to Dahl's biographer Jeremy Treglown, the caning took place in May 1933, a year after Fisher had left Repton; the headmaster was in fact J. T. Christie, Fisher's successor as headmaster. Dahl said the incident caused him to "have doubts about religion and even about God". He viewed the brutality of the caning as being the result of the headmaster's enmity towards children, an attitude Dahl would later attribute to the Grand High Witch in his dark fantasy The Witches, with the novel's main antagonist exclaiming that "children are rrreee-volting!".
Dahl was never seen as a particularly talented writer in his school years, with one of his English teachers writing in his school report, "I have never met anybody who so persistently writes words meaning the exact opposite of what is intended." He was exceptionally tall, reaching 6 ft 6 in (1.98 m) in adult life. Dahl played sports including cricket, football and golf, and was made captain of the squash team. As well as having a passion for literature, he developed an interest in photography and often carried a camera with him.
During his years at Repton, the Cadbury chocolate company occasionally sent boxes of new chocolates to the school to be tested by the pupils. Dahl dreamt of inventing a new chocolate bar that would win the praise of Mr Cadbury himself; this inspired him in writing his third children's book, Charlie and the Chocolate Factory, and to refer to chocolate in other children's books.
Throughout his childhood and adolescent years, Dahl spent most of his summer holidays with his mother's family in Norway. He wrote about many happy memories from those visits in Boy: Tales of Childhood, such as when he replaced the tobacco in his half-sister's fiancé's pipe with goat droppings. He noted only one unhappy memory of his holidays in Norway: at around the age of eight, he had to have his adenoids removed by a doctor. His childhood and first job selling kerosene in Midsomer Norton and surrounding villages in Somerset are subjects in Boy: Tales of Childhood.
After school
After finishing his schooling, in August 1934 Dahl crossed the Atlantic and hiked through Newfoundland with the British Public Schools Exploring Society.
In July 1934, Dahl joined the Shell Petroleum Company. Following four years of training in the United Kingdom, he was assigned first to Mombasa, Kenya, then to Dar es Salaam in the British colony of Tanganyika (now part of Tanzania). Dahl explains in his autobiography Going Solo that only three young Englishmen ran the Shell company in the territory, of whom he was the youngest and most junior. Along with the only two other Shell employees in the entire territory, he lived in luxury in the Shell House outside Dar es Salaam, with a cook and personal servants. While out on assignments supplying oil to customers across Tanganyika, he encountered black mamba snakes and lions, among other wildlife.
Fighter pilot
In August 1939, as the Second World War loomed, the British made plans to round up the hundreds of Germans living in Dar es Salaam. Dahl was commissioned as a lieutenant into the King's African Rifles, commanding a platoon of Askari men, indigenous troops who were serving in the colonial army.
In November 1939, Dahl joined the Royal Air Force (RAF) as an aircraftman with service number 774022. After a car journey from Dar es Salaam to Nairobi, he was accepted for flight training with sixteen other men, of whom only three survived the war. With seven hours and 40 minutes' experience in a De Havilland Tiger Moth, he flew solo; Dahl enjoyed watching the wildlife of Kenya during his flights. He went on to advanced flying training at RAF Habbaniya in Iraq, west of Baghdad. Following six months' training on Hawker Harts, Dahl was commissioned as a pilot officer on 24 August 1940, and was judged ready to join a squadron and face the enemy.
He was assigned to No. 80 Squadron RAF, flying obsolete Gloster Gladiators, the last biplane fighter aircraft used by the RAF. Dahl was surprised to find that he would not receive any specialised training in aerial combat or in flying Gladiators. On 19 September 1940, Dahl and another pilot were ordered to fly their Gladiators by stages from Abu Sueir (near Ismailia, in Egypt) to 80 Squadron's forward airstrip south of Mersa Matruh. On the final leg, they could not find the airstrip and, running low on fuel and with night approaching, Dahl was forced to attempt a landing in the desert. The undercarriage hit a boulder and the aircraft crashed. Dahl's skull was fractured and his nose was smashed; he was temporarily blinded. He managed to drag himself away from the wreckage and lost consciousness. His colleague, Douglas McDonald, had landed safely and was able to comfort Dahl until they were rescued. He wrote about the crash in his first published work. Dahl came to believe that the head injury he sustained in the crash had unlocked his creative genius.
Dahl was rescued and taken to a first-aid post in Mersa Matruh, where he regained consciousness, but not his sight. He remained blind for six weeks due to massive swelling of the brain. He was transported by train to the Royal Navy hospital in Alexandria. There he fell in and out of love with a nurse, Mary Welland. An RAF inquiry into the crash revealed that the location to which he had been told to fly was completely wrong, and he had mistakenly been sent instead into the no man's land between the Allied and Italian forces.
In February 1941, Dahl was discharged from the hospital and deemed fully fit for flying duties. By this time, 80 Squadron had been transferred to the Greek campaign and based at Eleusina, near Athens. The squadron was now equipped with Hawker Hurricanes. Dahl flew a replacement Hurricane across the Mediterranean Sea in April 1941, after seven hours' experience flying Hurricanes. By this stage in the Greek campaign, the RAF had only 18 combat aircraft in Greece: 14 Hurricanes and four Bristol Blenheim light bombers. Dahl flew in his first aerial combat on 15 April 1941, while flying alone over the city of Chalcis. He attacked six Junkers Ju 88s that were bombing ships and shot one down. On 16 April in another air battle, he shot down another Ju 88.
On 20 April 1941, Dahl took part in the Battle of Athens, alongside the highest-scoring British Commonwealth ace of World War II, Pat Pattle, and Dahl's friend David Coke. Of 12 Hurricanes involved, five were shot down and four of their pilots killed, including Pattle. Greek observers on the ground counted 22 German aircraft downed, but because of the confusion of the aerial engagement, none of the pilots knew which aircraft they had shot down. Dahl described it as "an endless blur of enemy fighters whizzing towards me from every side".
In May, as the Germans were pressing on Athens, Dahl was evacuated to Egypt. His squadron was reassembled in Haifa to take part in Operation Exporter. From there, Dahl flew sorties every day for a period of four weeks, shooting down a Vichy French Air Force Potez 63 on 8 June and another Ju 88 on 15 June. In a memoir, Dahl recounts in detail an attack he and his fellow Hurricane pilots made on the Vichy-held Rayak airfield. He says that as they swept in:
... low over the field at midday we saw to our astonishment a bunch of girls in brightly coloured cotton dresses standing out by the planes with glasses in their hands having drinks with the French pilots, and I remember seeing bottles of wine standing on the wing of one of the planes as we went swooshing over. It was a Sunday morning and the Frenchmen were evidently entertaining their girlfriends and showing off their aircraft to them, which was a very French thing to do in the middle of a war at a front-line aerodrome. Every one of us held our fire on that first pass over the flying field and it was wonderfully comical to see the girls all dropping their wine glasses and galloping in their high heels for the door of the nearest building. We went round again, but this time we were no longer a surprise and they were ready for us with their ground defences, and I am afraid that our chivalry resulted in damage to several of our Hurricanes, including my own. But we destroyed five of their planes on the ground.
Despite this somewhat light-hearted account, Dahl also noted that, ultimately, Vichy forces killed four of the nine Hurricane pilots in his squadron. Describing the Vichy forces as "disgusting", he stated that "... thousands of lives were lost, and I for one have never forgiven the Vichy French for the unnecessary slaughter they caused."
When he began to get severe headaches that caused him to black out, he was invalided home to Britain where he stayed with his mother in Buckinghamshire. Although at this time Dahl was only a pilot officer on probation, in September 1941 he was simultaneously confirmed as a pilot officer and promoted to war substantive flying officer.
Diplomat, writer and intelligence officer
After being invalided home, Dahl was posted to an RAF training camp in Uxbridge. He attempted to recover his health enough to become an instructor. In late March 1942, while in London, he met the Under-Secretary of State for Air, Major Harold Balfour, at his club. Impressed by Dahl's war record and conversational abilities, Balfour appointed the young man as assistant air attaché at the British Embassy in Washington, D.C. Initially resistant, Dahl was finally persuaded by Balfour to accept, and took passage by ship from Glasgow a few days later. He arrived in Halifax, Canada, on 14 April, after which he took a sleeper train to Montreal.
Coming from war-starved Britain, where wartime rationing was in force, Dahl was amazed by the wealth of food and amenities to be had in North America. Arriving in Washington a week later, Dahl found he liked the atmosphere of the US capital. He shared a house with another attaché at 1610 34th Street, NW, in Georgetown. But after ten days in his new posting, Dahl had come to strongly dislike it, feeling he had taken on "a most ungodly unimportant job". He later explained, "I'd just come from the war. People were getting killed. I had been flying around, seeing horrible things. Now, almost instantly, I found myself in the middle of a pre-war cocktail party in America."
Dahl was unimpressed by his office in the British Air Mission, attached to the embassy. He was also unimpressed by the ambassador, Lord Halifax, with whom he sometimes played tennis and whom he described as "a courtly English gentleman". Dahl socialised with Charles E. Marsh, a Texas publisher and oilman, at his house at 2136 R Street, NW, and the Marsh country estate in Virginia. As part of his duties as assistant air attaché, Dahl was to help neutralise the isolationist views still held by many Americans by giving pro-British speeches and discussing his war service; the United States had entered the war only the previous December, following the attack on Pearl Harbor.
At this time Dahl met the noted British novelist C. S. Forester, who was also working to aid the British war effort. Forester worked for the British Ministry of Information and was writing propaganda for the Allied cause, mainly for American consumption. The Saturday Evening Post had asked Forester to write a story based on Dahl's flying experiences; Forester asked Dahl to write down some RAF anecdotes so that he could shape them into a story. After Forester read what Dahl had given him, he decided to publish the story exactly as Dahl had written it. In reality, a number of changes were made to the original manuscript before publication. Dahl originally titled the article "A Piece of Cake", but the magazine changed it to "Shot Down Over Libya" to make it sound more dramatic, although Dahl had not been shot down; it was published in the 1 August 1942 issue of the Post. Dahl was promoted to flight lieutenant (war-substantive) in August 1942. Later he worked with such other well-known British officers as Ian Fleming (who later published the popular James Bond series) and David Ogilvy, promoting Britain's interests and message in the US and combating the "America First" movement.
This work introduced Dahl to espionage and the activities of the Canadian spymaster William Stephenson, known by the codename "Intrepid." During the war, Dahl supplied intelligence from Washington to Prime Minister Winston Churchill. As Dahl later said: "My job was to try to help Winston to get on with FDR, and tell Winston what was in the old boy's mind." Dahl also supplied intelligence to Stephenson and his organisation, known as British Security Coordination, which was part of MI6. Dahl was once sent back to Britain by British Embassy officials, supposedly for misconduct—"I got booted out by the big boys", he said. Stephenson promptly sent him back to Washington—with a promotion to wing commander rank. Toward the end of the war, Dahl wrote some of the history of the secret organisation; he and Stephenson remained friends for decades after the war.
Upon the war's conclusion, Dahl held the rank of a temporary wing commander (substantive flight lieutenant). Owing to the severity of his injuries from the 1940 accident, he was pronounced unfit for further service and was invalided out of the RAF in August 1946. He left the service with the substantive rank of squadron leader. His record of five aerial victories, qualifying him as a flying ace, has been confirmed by post-war research and cross-referenced in Axis records. He most probably scored more victories than this on 20 April 1941, when 22 German aircraft were shot down.
Post-war life
Dahl married American actress Patricia Neal on 2 July 1953 at Trinity Church in New York City. Their marriage lasted for 30 years and they had five children:
Olivia Twenty (1955–1962);
Chantal Sophia "Tessa" (born 1957), who became an author, and mother of author, cookbook writer and former model Sophie Dahl (after whom Sophie in The BFG is named);
Theo Matthew (born 1960);
Ophelia Magdalena (born 1964);
Lucy Neal (born 1965).
On 5 December 1960, four-month-old Theo was severely injured when his baby carriage was struck by a taxicab in New York City. For a time, he suffered from hydrocephalus. As a result, Dahl became involved in the development of what became known as the "Wade-Dahl-Till" (or WDT) valve, a device to improve the shunt used to alleviate the condition. The valve was a collaboration between Dahl, hydraulic engineer Stanley Wade, and London's Great Ormond Street Hospital neurosurgeon Kenneth Till, and was used successfully on almost 3,000 children around the world.
In November 1962, Dahl's daughter Olivia died of measles encephalitis, aged seven. Her death left Dahl "limp with despair", and feeling guilty about not having been able to do anything for her. Dahl subsequently became a proponent of immunisation—writing "Measles: A Dangerous Illness" in 1988 in response to measles cases in the UK—and dedicated his 1982 book The BFG to his daughter. After Olivia's death and a meeting with a Church official, Dahl came to view Christianity as a sham. In mourning he had sought spiritual guidance from Geoffrey Fisher, the former Archbishop of Canterbury, and was dismayed to be told that, although Olivia was in Paradise, her beloved dog Rowley would never join her there.
In 1965, Dahl's wife Patricia Neal suffered three burst cerebral aneurysms while pregnant with their fifth child, Lucy. Dahl took control of her rehabilitation over the next months; Neal had to re-learn to talk and walk, but she managed to return to her acting career. This period of their lives was dramatised in the film The Patricia Neal Story (1981), in which the couple were played by Glenda Jackson and Dirk Bogarde.
In 1972, Roald Dahl met Felicity d'Abreu Crosland, niece of Lt.-Col. Francis D'Abreu, who was married to Margaret Bowes Lyon, the first cousin of the Queen Mother. Felicity was then working as a set designer on an advert for Maxim coffee with the author's wife, Patricia Neal. Soon after the pair were introduced, they began an 11-year affair. In 1983 Neal and Dahl divorced, and Dahl married Felicity at Brixton Town Hall, South London. Felicity (known as Liccy) gave up her job and moved into Gipsy House, Great Missenden in Buckinghamshire, which had been Dahl's home since 1954.
In the 1986 New Year Honours List, Dahl was offered an appointment to Officer of the Order of the British Empire (OBE), but turned it down. He reportedly wanted a knighthood so that his wife would be Lady Dahl. Dahl's last significant involvement in medical charities during his lifetime was with dyslexia. In 1990, the year in which the UN launched International Literacy Year, Dahl assisted with the British Dyslexia Association's Awareness Campaign. That year Dahl also wrote one of his last children's books, The Vicar of Nibbleswicke, which features a vicar who has a fictitious form of dyslexia that causes him to pronounce words backwards. Waterstones called it "a comic tale in the best Dahl tradition of craziness", and Dahl donated the rights of the book to the Dyslexia Institute in London.
In 2012, Dahl was featured in the list of The New Elizabethans to mark the Diamond Jubilee of Queen Elizabeth II. A panel of seven academics, journalists and historians named Dahl among the group of people in Britain "whose actions during the reign of Elizabeth II have had a significant impact on lives in these islands and given the age its character". In September 2016, Dahl's daughter Lucy received the BBC's Blue Peter Gold badge in his honour, the first time it had ever been awarded posthumously.
Writing
Dahl's first published work, inspired by a meeting with C. S. Forester, was "A Piece of Cake", published on 1 August 1942. The story, about his wartime adventures, was bought by The Saturday Evening Post for US$1,000 and published under the title "Shot Down Over Libya".
His first children's book was The Gremlins, published in 1943, about mischievous little creatures that were part of Royal Air Force folklore. The RAF pilots blamed the gremlins for all the problems with the aircraft. The protagonist Gus—an RAF pilot, like Dahl—joins forces with the gremlins against a common enemy, Hitler and the Nazis. While at the British Embassy in Washington, Dahl sent a copy to the First Lady Eleanor Roosevelt who read it to her grandchildren, and the book was commissioned by Walt Disney for a film that was never made. Dahl went on to write some of the best-loved children's stories of the 20th century, such as Charlie and the Chocolate Factory, Matilda, James and the Giant Peach, The Witches, Fantastic Mr Fox, The BFG, The Twits and George's Marvellous Medicine.
Dahl also had a successful parallel career as the writer of macabre adult short stories, which often blended humour and innocence with surprising plot twists. The Mystery Writers of America presented Dahl with three Edgar Awards for his work, and many of his stories were originally written for American magazines such as Collier's ("The Collector's Item" was Collier's Star Story of the week for 4 September 1948), Ladies' Home Journal, Harper's, Playboy and The New Yorker. Works such as Kiss Kiss subsequently collected Dahl's stories into anthologies, and gained significant popularity. Dahl wrote more than 60 short stories; they have appeared in numerous collections, some only being published in book form after his death. His three Edgar Awards were given in 1954 for the collection Someone Like You; in 1959 for the story "The Landlady"; and in 1980 for the episode of Tales of the Unexpected based on "Skin".
One of his more famous adult stories, "The Smoker", also known as "Man from the South", was filmed as both a 1960 and a 1985 episode of Alfred Hitchcock Presents, as a 1979 episode of Tales of the Unexpected, and was adapted into Quentin Tarantino's segment of the film Four Rooms (1995). This oft-anthologised classic concerns a man in Jamaica who wagers with visitors in an attempt to claim the fingers from their hands. The original 1960 version in the Hitchcock series stars Steve McQueen and Peter Lorre. Five additional Dahl stories were used in the Hitchcock series. Dahl was credited with the teleplay for two episodes, and four of his episodes were directed by Alfred Hitchcock himself, one example being "Lamb to the Slaughter" (1958).
Dahl acquired a traditional Romanichal vardo in the 1960s, and the family used it as a playhouse for his children at home in Great Missenden, Buckinghamshire. He later used the vardo as a writing room, where he wrote Danny, the Champion of the World in 1975. Dahl incorporated a similar caravan into the main plot of the book, where the young English boy, Danny, and his father, William (played by Jeremy Irons in the film adaptation) live in a vardo. Many other scenes and characters from Great Missenden are reflected in his work. For example, the village library was the inspiration for Mrs Phelps' library in Matilda, where the title character devours classic literature by the age of four.
His short story collection Tales of the Unexpected was adapted to a successful TV series of the same name, beginning with "Man from the South". When the stock of Dahl's own original stories was exhausted, the series continued by adapting stories written in Dahl's style by other authors, including John Collier and Stanley Ellin. Another collection of short stories, The Wonderful Story of Henry Sugar and Six More, was published in 1977, and the eponymous short story was adapted into a short film in 2023 by director Wes Anderson with Benedict Cumberbatch as the titular character Henry Sugar and Ralph Fiennes as Dahl.
Some of Dahl's short stories are supposed to be extracts from the diary of his (fictional) Uncle Oswald, a rich gentleman whose sexual exploits form the subject of these stories. In his novel My Uncle Oswald, the uncle engages a temptress to seduce 20th century geniuses and royalty with a love potion secretly added to chocolate truffles made by Dahl's favourite chocolate shop, Prestat of Piccadilly, London. Memories with Food at Gipsy House, written with his wife Felicity and published posthumously in 1991, was a mixture of recipes, family reminiscences and Dahl's musings on favourite subjects such as chocolate, onions and claret.
The last book published in his lifetime, Esio Trot, released in January 1990, marked a change in style for the author. Unlike other Dahl works (which often feature tyrannical adults and heroic/magical children), it is the story of an old, lonely man trying to make a connection with a woman he has loved from afar. In 1994, the English language audiobook recording of the book was provided by Monty Python member Michael Palin. Screenwriter Richard Curtis adapted it into a 2015 BBC television comedy film, Roald Dahl's Esio Trot, featuring Dustin Hoffman and Judi Dench as the couple.
Written in 1990 and published posthumously in 1991, Roald Dahl's Guide to Railway Safety was one of the last things he ever wrote. In response to rising levels of train-related fatalities involving children, the British Railways Board had asked Dahl to write the text of the booklet, and Quentin Blake to illustrate it, to help young people enjoy using the railways safely. The booklet is structured as a conversation with children, and it was distributed to primary school pupils in Britain. According to children's literature critic Deborah Cogan Thacker, Dahl's tendency in his children's books is to "put child characters in powerful positions", and the idea of "talking down" to children was always anathema to him. Dahl therefore states in the introduction to the booklet: "I must now regretfully become one of those unpopular giants who tells you WHAT TO DO and WHAT NOT TO DO. This is something I have never done in any of my books."
Children's fiction
Dahl's children's works are usually told from the point of view of a child. They typically involve adult villains who hate and mistreat children, and feature at least one "good" adult to counteract the villain(s). These stock characters are possibly a reference to the abuse Dahl stated he experienced in the boarding schools he attended. In a biography of Dahl, Matthew Dennison wrote that "his writing frequently included protests against unfairness". Dahl's books see the triumph of the child; children's book critic Amanda Craig said, "He was unequivocal that it is the good, young and kind who triumph over the old, greedy and the wicked." Anna Leskiewicz in The Telegraph wrote, "It's often suggested that Dahl's lasting appeal is a result of his exceptional talent for wriggling his way into children's fantasies and fears, and laying them out on the page with anarchic delight. Adult villains are drawn in terrifying detail, before they are exposed as liars and hypocrites, and brought tumbling down with retributive justice, either by a sudden magic or the superior acuity of the children they mistreat."
While his whimsical fantasy stories feature an underlying warm sentiment, they are often juxtaposed with grotesque, darkly comic and sometimes harshly violent scenarios. The Witches, George's Marvellous Medicine and Matilda are examples of this formula. The BFG follows the same pattern, with the good giant (the BFG or "Big Friendly Giant") representing the "good adult" archetype and the other giants being the "bad adults". This formula is also somewhat evident in Dahl's film script for Chitty Chitty Bang Bang. Class-conscious themes also surface in works such as Fantastic Mr Fox and Danny, the Champion of the World, where the unpleasant wealthy neighbours are outwitted.
Dahl also features characters who are very fat, usually children. Augustus Gloop, Bruce Bogtrotter and Bruno Jenkins are a few of these characters, although an enormous woman named Aunt Sponge features in James and the Giant Peach and the nasty farmer Boggis in Fantastic Mr Fox is an enormously fat character. All of these characters (with the possible exception of Bruce Bogtrotter) are either villains or simply unpleasant gluttons. They are usually punished for this: Augustus Gloop drinks from Willy Wonka's chocolate river, disregarding the adults who tell him not to, and falls in, getting sucked up a pipe and nearly being turned into fudge. In Matilda, Bruce Bogtrotter steals cake from the evil headmistress, Miss Trunchbull, and is forced to eat a gigantic chocolate cake in front of the school; when he unexpectedly succeeds at this, Trunchbull smashes the empty plate over his head. In The Witches, Bruno Jenkins is lured by the witches (whose leader is the Grand High Witch) into their convention with the promise of chocolate, before they turn him into a mouse. Aunt Sponge is flattened by a giant peach. When Dahl was a boy his mother used to tell him and his sisters tales about trolls and other mythical Norwegian creatures, and some of his children's books contain references or elements inspired by these stories, such as the giants in The BFG, the fox family in Fantastic Mr Fox and the trolls in The Minpins.
Receiving the 1983 World Fantasy Award for Life Achievement, Dahl encouraged his children and his readers to let their imagination run free. His daughter Lucy stated, "his spirit was so large and so big he taught us to believe in magic." She said her father later told her that if they had simply said goodnight after a bedtime story, he assumed the story was not a good one. But if they begged him to continue, he knew he was on to something, and the story would sometimes turn into a book.
Dahl was also famous for his inventive, playful use of language, which was a key element of his writing. He invented over 500 new words by scribbling down words and swapping letters around, and by adopting spoonerisms and malapropisms. The lexicographer Susan Rennie stated that Dahl built his new words on familiar sounds.
As marketing director of Penguin Books in the 1980s, Barry Cunningham travelled the UK with Dahl on a promotional book tour, during which he asked Dahl what the secret of his success was. Dahl responded, "the thing you've got to remember, is that humour is delayed fear, laughter is delayed fear." Cunningham later recollected, "if you look at the way he uses humour and the way that children use humour, perhaps sometimes it's the only weapon they have against terrifying circumstances or people. That's very indicative of his stories and the style of those stories."
A UK television special titled Roald Dahl's Revolting Rule Book, hosted by Richard E. Grant and aired on 22 September 2007, commemorated Dahl's 90th birthday and celebrated his impact as a children's author in popular culture. It also featured eight main rules he applied to all his children's books:
Just add chocolate
Adults can be scary
Bad things happen
Revenge is sweet
Keep a wicked sense of humour
Pick perfect pictures
Films are fun...but books are better!
Food is fun!
In 2016, marking the centenary of Dahl's birth, Rennie compiled The Oxford Roald Dahl Dictionary, which includes many of his invented words and their meanings. Rennie commented that some of Dahl's words have already escaped his world, for example, Scrumdiddlyumptious: "Food that is utterly delicious". In his poetry, Dahl gives a humorous re-interpretation of well-known nursery rhymes and fairy tales, parodying the narratives and providing surprise endings in place of the traditional happily-ever-after. Dahl's collection of poems, Revolting Rhymes, has been recorded in audiobook form, narrated by actor Alan Cumming.
Screenplays
For a brief period in the 1960s, Dahl wrote screenplays. Two, the James Bond film You Only Live Twice and Chitty Chitty Bang Bang, were adaptations of novels by Ian Fleming. Dahl also began adapting his own novel Charlie and the Chocolate Factory, which was completed and rewritten by David Seltzer after Dahl failed to meet deadlines, and produced as the film Willy Wonka & the Chocolate Factory (1971). Dahl later disowned the film, saying he was "disappointed" because "he thought it placed too much emphasis on Willy Wonka and not enough on Charlie". He was also "infuriated" by the deviations in the plot devised by David Seltzer in his draft of the screenplay. This led him to refuse permission for any further film versions of the book to be made in his lifetime, as well as for an adaptation of the sequel Charlie and the Great Glass Elevator.
He also wrote the script for Death, Where is Thy Sting-a-ling-ling?, a film that began shooting but was abandoned.
Influences
A major part of Dahl's literary influences stemmed from his childhood. In his younger days, he was an avid reader, especially awed by fantastic tales of heroism and triumph. He met his idol, Beatrix Potter, when he was six years old. His other favourite authors included Rudyard Kipling, Charles Dickens, William Makepeace Thackeray and former Royal Navy officer Frederick Marryat, and their works made a lasting mark on his life and writing. He named Marryat's Mr Midshipman Easy as his favourite novel. Joe Sommerlad in The Independent writes, "Dahl's novels are often dark affairs, filled with cruelty, bereavement and Dickensian adults prone to gluttony and sadism. The author clearly felt compelled to warn his young readers about the evils of the world, taking the lesson from earlier fairy tales that they could stand hard truths and would be the stronger for hearing them."
Dahl was also influenced by Lewis Carroll's Alice's Adventures in Wonderland. The "Drink Me" episode in Alice inspired a scene in Dahl's George's Marvellous Medicine where a tyrannical grandmother drinks a potion and is blown up to the size of a farmhouse. Finding too many distractions in his house, Dahl remembered the poet Dylan Thomas had found a peaceful shed to write in close to home. Dahl travelled to visit Thomas's hut in Carmarthenshire, Wales in the 1950s and, after taking a look inside, decided to make a replica of it to write in. Appearing on BBC Radio 4's Desert Island Discs in October 1979, Dahl named Thomas "the greatest poet of our time", and as one of his eight chosen records selected Thomas's reading of his poem "Fern Hill".
Dahl liked ghost stories, and claimed that Trolls by Jonas Lie was one of the finest ghost stories ever written. While he was still a youngster, his mother, Sofie Dahl, related traditional myths and legends from her native Norway to Dahl and his sisters. Dahl always maintained that his mother and her stories had a strong influence on his writing. In one interview, he mentioned: "She was a great teller of tales. Her memory was prodigious and nothing that ever happened to her in her life was forgotten." When Dahl started writing and publishing his famous books for children, he included a grandmother character in The Witches, and later said that she was based directly on his own mother as a tribute.
Television
In 1961, Dahl hosted and wrote for a science fiction and horror television anthology series called Way Out, which ran for 14 episodes from March to July, preceding the Twilight Zone series in the CBS network schedule. One of the last dramatic network shows shot in New York City, the entire series is available for viewing at The Paley Center for Media in New York City and Los Angeles. He also wrote for the satirical BBC comedy programme That Was the Week That Was, which was hosted by David Frost.
The British television series Tales of the Unexpected originally aired on ITV between 1979 and 1988. The series was released to tie in with Dahl's short story anthology of the same name, which had introduced readers to many motifs common in his writing. The series was an anthology of different tales, initially based on Dahl's short stories. The stories were sometimes sinister, sometimes wryly comedic, and usually had a twist ending. Dahl introduced on camera all the episodes of the first two series, which bore the full title Roald Dahl's Tales of the Unexpected.
Death and legacy
Roald Dahl died in Oxford on 23 November 1990, at the age of 74, of a rare blood cancer, myelodysplastic syndrome. He was buried in the cemetery at the Church of St Peter and St Paul, Great Missenden, Buckinghamshire, England. His obituary in The Times was titled "Death silences Pied Piper of the macabre". According to his granddaughter, the family gave him a "sort of Viking funeral". He was buried with his snooker cues, some very good burgundy, chocolates, HB pencils and a power saw. Today, children continue to leave toys and flowers by his grave.
In 1996, the Roald Dahl Children's Gallery was opened at the Buckinghamshire County Museum in nearby Aylesbury. The main-belt asteroid 6223 Dahl, discovered by Czech astronomer Antonín Mrkos, was named in his memory in 1996.
In 2002, one of Cardiff Bay's modern landmarks, the Oval Basin plaza, was renamed Roald Dahl Plass. Plass is Norwegian for "place" or "square", alluding to the writer's Norwegian roots. There have also been calls from the public for a permanent statue of him to be erected in Cardiff. In 2016, the city celebrated the centenary of Dahl's birth in Llandaff. Welsh Arts organisations, including National Theatre Wales, Wales Millennium Centre and Literature Wales, came together for a series of events, titled Roald Dahl 100, including a Cardiff-wide City of the Unexpected, which marked his legacy.
Dahl's charitable commitments in the fields of neurology, haematology and literacy during his life have been continued by his widow since his death, through Roald Dahl's Marvellous Children's Charity, formerly known as the Roald Dahl Foundation. The charity provides care and support to seriously ill children and young people throughout Britain. In June 2005, the Roald Dahl Museum and Story Centre in the author's home village Great Missenden was officially opened by Cherie Blair, wife of then British Prime Minister Tony Blair, to celebrate the work of Roald Dahl and advance his work in literacy education. Over 50,000 visitors from abroad, mainly from Australia, Japan, the United States and Germany, travel to the village museum every year.
In 2008, the UK charity Booktrust and Children's Laureate Michael Rosen inaugurated The Roald Dahl Funny Prize, an annual award to authors of humorous children's fiction. On 14 September 2009 (the day after what would have been Dahl's 93rd birthday) the first blue plaque in his honour was unveiled in Llandaff. Rather than commemorating his place of birth, however, the plaque was erected on the wall of the former sweet shop (and site of "The Great Mouse Plot of 1924") that features in the first part of his autobiography Boy. It was unveiled by his widow Felicity and son Theo. In 2018, Weston-super-Mare, the town described by Dahl as a "seedy seaside resort", unveiled a blue plaque dedicated to him on the site of the since-demolished boarding school Dahl attended, St Peter's. The anniversary of Dahl's birth on 13 September is celebrated as "Roald Dahl Day" in Africa, the United Kingdom and Latin America.
In honour of Dahl, the Royal Gibraltar Post Office issued a set of four stamps in 2010 featuring Quentin Blake's original illustrations for four of the children's books written by Dahl during his long career: The BFG, The Twits, Charlie and the Chocolate Factory, and Matilda. A set of six commemorative Royal Mail stamps was issued in 2012, featuring Blake's illustrations for Charlie and the Chocolate Factory, The Witches, The Twits, Matilda, Fantastic Mr Fox, and James and the Giant Peach. Dahl's influence has extended beyond literary figures. For instance, the film director Tim Burton recalled from childhood "the second layer [after Dr. Seuss] of connecting to a writer who gets the idea of the modern fable—and the mixture of light and darkness, and not speaking down to kids, and the kind of politically incorrect humour that kids get. I've always liked that, and it's shaped everything I've felt that I've done." Steven Spielberg read The BFG to his children when they were young, stating that the book celebrates the fact that it is OK to be different and to have an active imagination: "It's very important that we preserve the tradition of allowing young children to run free with their imaginations and magic and imagination are the same thing." Actress Scarlett Johansson named Fantastic Mr Fox one of the five books that made a difference to her.
Regarded as "one of the greatest storytellers for children of the 20th century", Dahl was named by The Times one of the 50 greatest British writers since 1945. He ranks amongst the world's best-selling fiction authors with sales estimated at over 300 million, and his books have been published in 63 languages. In 2000, Dahl topped the list of Britain's favourite authors. In 2003, four books by Dahl, led by Charlie and the Chocolate Factory at number 35, ranked among the Top 100 in The Big Read, a survey of the British public by the BBC to determine the "nation's best-loved novel" of all time. In surveys of British teachers, parents and students, Dahl is frequently ranked the best children's writer. He won the first three Australian BILBY Younger Readers Award; for Matilda, The BFG, and Charlie and the Chocolate Factory. In a 2006 list for the Royal Society of Literature, Harry Potter creator J. K. Rowling named Charlie and the Chocolate Factory one of her top ten books every child should read. Critics have commented on the similarities between the Dursley family from Harry Potter and the nightmarish guardians seen in many of Dahl's books, such as Aunt Sponge and Aunt Spiker from James and the Giant Peach, Grandma from George's Marvellous Medicine, and the Wormwoods from Matilda. Barry Cunningham, who as publisher of Bloomsbury signed Rowling, cited his experiences travelling with Dahl in promotional book tours of the UK as helping him see the potential of Rowling's work, stating, "I think it was because I didn't come from a traditional background. I'd come from marketing and promotion. I'd seen how children relate to books". In 2012, Matilda was ranked number 30 among all-time best children's novels in a survey published by School Library Journal, a monthly with primarily US audience. The Top 100 included four books by Dahl, more than any other writer. The American magazine Time named three Dahl books in its list of the 100 Best Young-Adult Books of All Time, more than any other author. Dahl is one of the most borrowed authors in British libraries.
In 2012, Dahl was among the British cultural icons selected by artist Peter Blake to appear in a new version of his most famous artwork—the Beatles' Sgt. Pepper's Lonely Hearts Club Band album cover—to celebrate the British cultural figures of his life that he most admires. In 2016, Dahl's enduring popularity was demonstrated by his ranking among the top five best-selling children's authors on Amazon over the previous year, counting sales in print and on the Kindle store. In a 2017 UK poll of the greatest authors, songwriters, artists and photographers, Dahl was named the greatest storyteller of all time, ranking ahead of Dickens, Shakespeare, Rowling and Spielberg. In 2017, the airline Norwegian announced Dahl's image would appear on the tail fin of one of its Boeing 737-800 aircraft. He is one of the company's six "British tail fin heroes", joining Queen frontman Freddie Mercury, England World Cup winner Bobby Moore, novelist Jane Austen, pioneering pilot Amy Johnson and aviation entrepreneur Freddie Laker.
In September 2021, Netflix acquired the Roald Dahl Story Company in a deal worth more than £500 million ($686 million). A film adaptation of Matilda the Musical was released by Netflix and Sony Pictures Releasing in December 2022, and the cast includes Emma Thompson as Miss Trunchbull. The next Dahl adaptation for Netflix, The Wonderful Story of Henry Sugar, was released in September 2023, with its director Wes Anderson also adapting three additional Dahl short stories for Netflix in 2024.
Criticism and controversies
Opposition to Israel and antisemitic comments
Dahl reviewed Australian author Tony Clifton's God Cried, a picture book about the siege of West Beirut by the Israeli army during the 1982 Lebanon War. The article appeared in the August 1983 issue of the Literary Review and was the subject of much media comment and criticism at the time. According to Dahl, until this point in time "a race of people", meaning Jews, had never "switched so rapidly from much-pitied victims to barbarous murderers". The empathy of all after the Holocaust had turned "into hatred and revulsion". Dahl wrote that Clifton's book would make readers "violently anti-Israeli", saying, "I am not anti-Semitic. I am anti-Israel." He asked, "must Israel, like Germany, be brought to her knees before she learns how to behave in this world?". The United States, he said, was "so utterly dominated by the great Jewish financial institutions" that "they dare not defy" Israelis.
Following the Literary Review article, Dahl told a journalist from the New Statesman: "There's a trait in the Jewish character that does provoke animosity, maybe it's a kind of lack of generosity towards non-Jews. I mean there is always a reason why anti-anything crops up anywhere; even a stinker like Hitler didn't just pick on them for no reason." In 1990, during an interview with The Independent, Dahl explained that his issue with Israel began when it invaded Lebanon in 1982.
Responding in 1990 to a journalist from The Jewish Chronicle, whom he considered rude, he said, "I am an old hand at dealing with you buggers." Jeremy Treglown, in his 1994 biography, writes of Dahl's first novel Sometime Never (1948), "plentiful revelations about Nazi anti-Semitism and the Holocaust did not discourage him from satirising 'a little pawnbroker in Hounsditch called Meatbein who, when the wailing started, would rush downstairs to the large safe in which he kept his money, open it and wriggle inside on to the lowest shelf where he lay like a hibernating hedgehog until the all-clear had gone'." In a short story entitled "Madame Rosette", the eponymous character is termed "a filthy old Syrian Jewess".
Dahl had Jewish friends, including the philosopher Isaiah Berlin, who commented, "I thought he might say anything. Could have been pro-Arab or pro-Jew. There was no consistent line. He was a man who followed whims, which meant he would blow up in one direction, so to speak." Amelia Foster, director of the Roald Dahl Museum in Great Missenden, says, "This is again an example of how Dahl refused to take anything seriously, even himself. He was very angry at the Israelis. He had a childish reaction to what was going on in Israel. Dahl wanted to provoke, as he always provoked at dinner. His publisher was a Jew, his agent was a Jew... and he thought nothing but good things of them. He asked me to be his managing director, and I'm Jewish."
In 2014, the Royal Mint decided not to produce a coin to commemorate the centenary of Dahl's birth, saying that it considered him to be "associated with antisemitism and not regarded as an author of the highest reputation". In 2020, Dahl's family published a statement on the official Roald Dahl website apologising for his antisemitism. The statement says, "The Dahl family and the Roald Dahl Story Company deeply apologise for the lasting and understandable hurt caused by some of Roald Dahl's statements. Those prejudiced remarks are incomprehensible to us and stand in marked contrast to the man we knew and to the values at the heart of Roald Dahl's stories, which have positively impacted young people for generations. We hope that, just as he did at his best, at his absolute worst, Roald Dahl can help remind us of the lasting impact of words." The apology was received with appreciation by some Jewish groups but not others. The Campaign Against Antisemitism, for example, said that, "For his family and estate to have waited thirty years to make an apology, apparently until lucrative deals were signed with Hollywood, is disappointing and sadly rather more comprehensible."
Use of stereotypes
In 1972, Eleanor Cameron, also a children's book author, published an article in The Horn Book criticising Charlie and the Chocolate Factory for being self-referentially hypocritical. Cameron also took issue with Dahl's depiction of the African-derived Oompa-Loompas, who "have never been given the opportunity of any life outside of the chocolate factory", and suggested that teachers look for better literature to use in the classroom. In 1973, Dahl posted a reply, calling Cameron's accusations "insensitive" and "monstrous". The Horn Book published Cameron's response, in which she clarified that she intended her article not as a personal attack on Dahl, but rather to point out that, although the book is a work of fiction, it still influences reality. Here she again objected to the characterization of the Oompa-Loompas, stating, "[T]he situation of the Oompa-Loompas is real; it could not be more so, and it is anything but funny." The debate between the two authors sparked much discussion and a number of letters to the editor.
A 1991 Washington Post article echoed Cameron's comments, with Michael Dirda writing, "the Oompa-Loompas... reveal virtually every stereotype about blacks." Dirda's article also discussed many of the other criticisms of Dahl's writing, including his alleged sexism, of which Dirda wrote, "The Witches verges on a general misogyny." In a 1998 article for Lilith, Michele Landsberg analysed the alleged issues in Dahl's work and concluded that, "Throughout his work, evil, domineering, smelly, fat, ugly women are his favorite villains."
In 2008, Una Mullally wrote an article for The Irish Times that described Dahl's short story collection Switch Bitch as "a collection better forgotten, laden with crude and often disturbing sexual fantasy writing". Nonetheless, Mullally argued that there are feminist messages in Dahl's work, even if they may be obscured: "The Witches offers up plenty of feminist complexities. The witches themselves are terrifying and vile things, and always women... The book is often viewed as sexist, but that assessment ignores one of the heroines of the story, the child narrator's grandmother."
2023 revisions
In 2023, Puffin Books, which holds the rights to all of Dahl's children's books, ignited controversy after it hired sensitivity readers to go through the original text of Dahl's works, which led to hundreds of revisions to his books; The Telegraph published a list of many of these changes. The move was supported by a number of authors, most notably by Joanne Harris, chair of the Society of Authors, and Diego Jourdan Pereira at Writer's Digest, but drew many more critical responses. Several public figures, including then-Prime Minister Rishi Sunak and author Salman Rushdie, spoke out against the changes. It was reported that when Dahl was alive, he had spoken out very strongly against any changes ever being made to any of his books. On 23 February 2023, Puffin announced it would release an unedited selection of Dahl's children's books as 'The Roald Dahl Classic Collection', stating, "We've listened to the debate over the past week which has reaffirmed the extraordinary power of Roald Dahl's books" and "recognise the importance of keeping Dahl's classic texts in print".
Further reading
Jason Hook, Roald Dahl: The Storyteller, Raintree, 2004
External links
Roald Dahl's darkest hour (biography excerpt)
Radio interview by NRK (1975)
"The Devious Bachelor", Sunday Book Review of The Irregulars, Roald Dahl and the British Spy Ring in Wartime Washington by Jennet Conant, The New York Times, 17 October 2008
Profile of Patricia Neal (2011) on Voice of America (VOAnews.com), with transcript
Footage of one Whitbread Book Prize presentation by Dahl (1982)
Michael Coren, How I outed Roald Dahl as a venomous antisemite
| Roald Dahl | [
"Biology"
] | 12,381 | [
"Vaccination",
"Vaccination advocates"
] |
55,525 | https://en.wikipedia.org/wiki/Liturgical%20year | The liturgical year, also called the church year, Christian year, ecclesiastical calendar, or kalendar, consists of the cycle of liturgical days and seasons that determines when feast days, including celebrations of saints, are to be observed, and which portions of scripture are to be read.
Distinct liturgical colours may be used in connection with different seasons of the liturgical year. The dates of the festivals vary somewhat among the different churches, although the sequence and logic are largely the same.
Liturgical cycle
The liturgical cycle divides the year into a series of seasons, each with their own mood, theological emphases, and modes of prayer, which can be signified by different ways of decorating churches, colours of paraments and vestments for clergy, scriptural readings, themes for preaching and even different traditions and practices often observed personally or in the home. In churches that follow the liturgical year, the scripture passages for each Sunday (and even each day of the year in some traditions) are specified in a lectionary.
After the Protestant Reformation, Anglicans and Lutherans continued to follow the lectionary of the Roman Rite. Following a decision of the Second Vatican Council, the Catholic Church revised that lectionary in 1969, adopting a three-year cycle of readings for Sundays and a two-year cycle for weekdays.
Adaptations of the revised Roman Rite lectionary were adopted by Protestants, leading to the publication in 1994 of the Revised Common Lectionary for Sundays and major feasts, which is now used by many Protestant denominations, including Methodist, United and some Reformed churches. This has led to a greater awareness of the traditional Christian year among Protestants, especially among mainline denominations.
Biblical calendars
Scholars are not in agreement about whether the calendars used by the Jews before the Babylonian exile were solar (based on the return of the same relative position between the Sun and the Earth), lunisolar (based on months that corresponded to the cycle of the moon, with periodic additional months to bring the calendar back into agreement with the solar cycle) like the present-day Jewish calendar of Hillel II, or lunar, such as the Hijri calendar.
The first month of the Hebrew year was called Aviv, meaning the month of green ears of grain. Having to occur at the appropriate time in the spring, it thus was originally part of a tropical calendar. At about the time of the Babylonian exile, when using the Babylonian civil calendar, the Jews adopted the term Nisan as the name for the month, based on the Babylonian name Nisanu. Thomas J. Talley says that the adoption of the Babylonian term occurred even before the exile.
In the earlier calendar, most of the months were simply called by a number (such as "the fifth month"). The Babylonian-derived names of the months used by Jews are:
Nisan (March–April)
Iyar (April–May)
Sivan (May–June)
Tammuz (June–July)
Av (July–August)
Elul (August–September)
Tishrei (September–October)
Marcheshvan (October–November)
Kislev (November–December)
Tevet (December–January)
Shevat (January–February)
Adar 1 (February; only during leap years)
Adar (February–March)
In Biblical times, the following Jewish religious feasts were celebrated:
Pesach (Passover) – 14 Nisan (sacrifice of a lamb), 15 Nisan (Passover seder)
Chag HaMatzot (Unleavened Bread) – 15–21 Nisan
Reishit Katzir (Firstfruits) – 16 Nisan
Shavuot (Weeks) – Fiftieth day counted from Passover, normally 6–7 Sivan
Rosh Hashanah (Trumpets) – 1–2 Tishrei
Yom Kippur (Atonement) – 10 Tishrei
Sukkot (Tabernacles) – 15–21 Tishrei
Chanukah (Dedication) – 25 Kislev–2/3 Tevet (instituted in 164 BC)
Purim (Lots) – 14–15 Adar
Eastern Christianity
East Syriac Rite
The Liturgical Calendar of the East Syriac Rite is fixed according to the flow of salvation history. With a focus upon the historical life of Jesus Christ, believers are led to the eschatological fulfillment (i.e. the heavenly bliss) through this special arrangement of liturgical seasons. The liturgical year is divided into 8 seasons of approximately 7 weeks each, adjusted to fit the solar calendar. The arrangement of the seasons in the liturgical year is based on seven central events celebrated in salvation history. They are:
Nativity of Christ
Epiphany of Christ
Resurrection of Christ
Pentecost
Transfiguration
Glorious Cross
Parousia (the Dedication of Church after Christ's second coming)
One of the oldest available records mentioning the liturgical cycle of the East Syriac Rite is a handwritten manuscript named 'Preface to Hudra', written by Rabban Brick-Iso in the 14th century. The manuscript mentions that the liturgical year is divided into nine seasons, starting with Subara and ending with Qudas Edta. Catholic churches of the East Syriac Rite maintain the same liturgical calendar to the present day, except that many consider the 7th and 8th seasons as a single one. The biblical readings and prayers during Mass and the Liturgy of the Hours vary according to the different seasons in the liturgical calendar.
Liturgical Calendar
The various seasons of the liturgical calendar of the Syro-Malabar Church and the Chaldean Catholic Church are given below.
Annunciation (Subara)
The Weeks of Annunciation (Subara) form the first season of the liturgical year. The liturgical year begins with the commemoration of the biblical events leading to the annunciation and birth of Jesus, the saviour expected in the Old Testament. The season begins on the Sunday just before the first of December and ends with the feast of Epiphany, that is, the Feast of the Baptism of Jesus. The faithful practice abstinence during December 1–25 in preparation for Christmas; this period is called the "25 days Lent".
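The stated rule for the opening Sunday lends itself to a short calculation. The sketch below is not from the source; it is a minimal Python illustration that reads "the Sunday just before the first of December" as the Sunday strictly preceding 1 December, while actual diocesan calendars may reckon the Sunday differently.

```python
from datetime import date, timedelta

def subara_start(year: int) -> date:
    """Sunday strictly preceding 1 December, taken here as the start of
    the Weeks of Annunciation (Subara). Assumption: "just before" excludes
    1 December itself even when that day falls on a Sunday."""
    dec1 = date(year, 12, 1)
    # date.weekday(): Monday == 0 ... Sunday == 6
    days_back = (dec1.weekday() + 1) % 7  # days since the previous Sunday
    if days_back == 0:  # 1 December is itself a Sunday,
        days_back = 7   # so step back a full week
    return dec1 - timedelta(days=days_back)

print(subara_start(2023))  # 2023-11-26
print(subara_start(2024))  # 2024-11-24 (1 December 2024 is a Sunday)
```

On this reading, the "25 days Lent" of abstinence (1–25 December) always begins a few days after the season itself has opened.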
Feasts celebrated during this season
Feast of the Immaculate Conception of Mary, mother of Jesus (December 8)
Feast of Miraculous Cross of Mylapore (Saint Thomas Christian cross) (December 18) in Syro Malabar Church
Nativity of Our Lord and Saviour Jesus Christ or Christmas (December 25)
Feast of Holy Infants (December 28)
Feast of Name Iso (January 1)
Feast of Mary, mother of Jesus (last Friday of Season)
Epiphany (Denha)
The Weeks of Epiphany (Denha) begin on the Sunday closest to the feast of Epiphany and run to the beginning of the Great Fast. The word denha in Syriac means sunrise. The Church considers the baptism of Jesus in the River Jordan to be the first historical event in which the Trinity was revealed to humankind in the person of Jesus Christ. Thus the season commemorates the manifestation or revelation of Jesus and the Trinity to the world. During the season the Church celebrates the feasts of saints connected with the manifestation of the Lord.
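The "Sunday closest to the feast of Epiphany" can be sketched in the same hedged spirit. The following is a minimal illustration, not a rubric from any particular church; it returns 6 January itself when that day is a Sunday.

```python
from datetime import date, timedelta

def denha_start(year: int) -> date:
    """Sunday closest to the feast of Epiphany (6 January), taken here
    as the start of the Weeks of Epiphany (Denha)."""
    epiphany = date(year, 1, 6)
    back = (epiphany.weekday() + 1) % 7  # days since the previous Sunday
    prev_sunday = epiphany - timedelta(days=back)
    # A week has an odd number of days, so one of the two surrounding
    # Sundays is always nearer; back <= 3 means the previous one is.
    return prev_sunday if back <= 3 else prev_sunday + timedelta(days=7)

print(denha_start(2025))  # 2025-01-05 (6 January 2025 is a Monday)
print(denha_start(2024))  # 2024-01-07 (6 January 2024 is a Saturday)
```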
Feasts celebrated during the period
Feast of Epiphany or Feast of Baptism of the Lord (January 6)
Feast of Saint John the Baptist on first Friday of Epiphany
Feast of Apostles Peter (Kepha) and Paul on second Friday of Epiphany
Feast of Evangelists on third Friday of Epiphany
Feast of Saint Stephen on fourth Friday of Epiphany
Feast of Fathers of Church on fifth Friday of Epiphany
Feast of Patron Saint of Church on sixth Friday of Epiphany
Feast of all departed faithful on last Friday of Epiphany
Great Fast (Sawma Rabba)
During these weeks the faithful meditate on the 40-day fast of Jesus and the culmination of his public life in his passion, death and burial. The season begins 50 days before Easter, on Peturta Sunday, and comprises the whole period of Great Lent, culminating on Resurrection Sunday. The word Peturta in Syriac means "looking back" or "reconciliation".
The faithful enter the weeks of the Great Fast after celebrating the memory of all the Faithful Departed on the last Friday of Denha.
According to ecclesial tradition, the weeks of the Great Fast are also an occasion to honour the memory of the beloved departed through special prayers, renunciation, almsgiving, and so on, and thus to prepare oneself for a good death and resurrection in Jesus Christ. During the fast the faithful of the Syro-Malabar Church abstain from meat, fish, eggs, many dairy products, and favourite foods, and avoid sexual contact on all days, including Sundays and feast days. Before European colonization, Indian Nasranis used to eat only once a day (after 3:00 pm) on all days during the Great Fast.
Feasts in the Lenten Season
Peturta Sunday on First Sunday of Great Fast
Ash Monday or Clean Monday on the first day (Monday) of Great Fast
Lazarus Friday on the sixth Friday of Great Fast
Oshana Sunday on the seventh Sunday of Great Fast
Thursday of Pesha
Friday of Passion or Good Friday
Great Saturday or Saturday of Light
The following feasts are always in the Lenten Season:
Feast of Mar Cyril of Jerusalem (March 18)
Feast of Saint Joseph (March 19)
Feast of the Annunciation (March 25)
Resurrection (Qyamta)
The weeks of Great Resurrection begin on the Resurrection Sunday and run to the feast of Pentecost. The Church celebrates the Resurrection of our Lord during these seven weeks: Jesus' victory over death, sin, suffering and Satan. The church also commemorates various events that occurred after the resurrection of Christ, such as the visits of Jesus to the Apostles and the ascension of Jesus.
According to eastern Christianity, the Feast of Resurrection is the most important and the greatest feast in a liturgical year. Therefore, the season commemorating the resurrection of Christ is also of prime importance in the church liturgy. The first week of the season is celebrated as the 'Week of weeks' as it is the week of the resurrection of Christ.
Feasts celebrated during the period:
Feast of Resurrection of Christ
Feast of All Confessors (Saints) on the first Friday of Qyamta
New Sunday or St. Thomas Sunday on the second Sunday of Qyamta
Feast of Ascension of Jesus on the sixth Friday of Qyamta
The following feasts are always in the season of resurrection:
Feast of Saint George (April 24)
Feast of Mark the Evangelist (April 25)
Feast of Saint Joseph, the worker (May 1)
Feasts of Saint Philip and Saint James the apostles (May 3)
Apostles (Slihe)
The Weeks of the Apostles (Slihe) start on the feast of Pentecost, the fiftieth day after Resurrection Sunday. During these weeks the church commemorates the inauguration of the church and the acts of the apostles and church fathers through which its foundation was laid. The church meditates on the virtues of the early church: fellowship, the breaking of bread and the sharing of wealth, and the fruits and gifts of the Holy Spirit. The spread of the church throughout the world, as well as her growth, is also remembered during this season.
Feast celebrated during the season:
Feast of Pentecost on first Sunday of Slihe
Feast of the Friday of Gold: the commemoration of the first miracle of the apostles, performed by Saint Peter.
The following feasts are commemorated in the season of Slihe
Feast of Mar Aphrem (June 10)
Feast of the Apostles Peter and Paul (June 29)
Feast of Mar Thoma, founding father of east Syriac churches (July 3)
Feast of Mar Quriaqos and Yolitha (July 15)
Qaita (Summer)
During the weeks of Qaita the maturity and fruitfulness of the church are commemorated. The Syriac word Qaita means "summer", and the season is a time of harvest for the Church. The fruits of the Church are those of holiness and martyrdom. While the sprouting and infancy of the Church were celebrated in the Weeks of the Apostles, her development in different parts of the world, reflecting the image of the heavenly Kingdom and giving birth to many saints and martyrs, is proclaimed during this season. Fridays of this season are set apart for honoring saints and martyrs.
Feast celebrated during the season:
Feast of the twelve apostles and Nusardeil on the first Sunday of Qaita (Nusardeil is a Persian word which means "God-given New Year Day").
Feast of Mar Jacob of Nisibis on the first Friday of Qaita.
Feast of Mar Mari on the second Friday of Qaita.
Feast of Marta Simoni and her Seven Children on the fifth Friday of Qaita.
Feast of Mar Shimun Bar Sabbai and Companions on the sixth Friday of Qaita.
Feast of martyr Mar Quardag on the seventh Friday of Qaita.
The following feasts are commemorated in the season of Qaita
Feast of seventy disciples of Jesus (July 27)
Feast of Saint Alphonsa in Syro Malabar Catholic Church (July 28)
Feast of Transfiguration of Jesus (August 6)
Feast of Assumption of Mary (August 15)
Eliyah-Sliba-Moses
The seasons of Eliyah-Sliba-Moses take their name from the feast of the Transfiguration of Jesus, and they revolve around the exaltation of the cross on the feast of the Glorious Cross on September 14. During the seasons of Eliyah and Sliba the church reminds the faithful of the heavenly bliss promised to be inherited at the end of earthly life, and commemorates the experience of that bliss through the various sacraments, while during the season of Moses the church meditates upon the end of time and the last judgment. The season of Moses is often regarded as distinct and separate from the other two, since it has a distinct theme.
The season of Eliyah has a length of one to three Sundays. The season of Sliba starts on the Sunday on or after the feast of the Glorious Cross and has a length of three to four weeks. The first Sunday of Sliba is always counted as the fourth Sunday of the combined season. The season of Moses always has four weeks.
Feast celebrated during the seasons:
Feast of the glorious Cross
The following feasts are commemorated in the seasons of Eliyah-Sliba-Moses
Feast of Nativity of Mary on September 8 and the eight-day fast in preparation for the feast
Dedication of the church (Qudas Edta)
The Weeks of the Dedication of the Church form the last liturgical season in the East Syriac Rite. The season consists of four weeks and ends on the Saturday before the Sunday that falls between November 27 and December 3. The theme of the season is that the church is presented by Christ as his eternal bride before his Father in the heavenly bridal chamber. The period has its origin in the feast of the dedication of the Church of the Holy Sepulchre, or in the Jewish feast of Hanukkah. However, the season was officially instituted by Patriarch Isho-Yahb III of Seleucia-Ctesiphon (647–657) by separating it from the season of Moses.
Feasts celebrated during the season:
Feast of dedication of the church on 1st Sunday of Qudas Edta
Feast of Christ the King on the last Sunday of Qudas Edta (celebrated only in the Eastern Catholic churches of the rite, since Pope Pius XI instituted it in the Roman Rite).
Eastern Orthodox Church
The liturgical year in the Eastern Orthodox Church is characterized by alternating fasts and feasts, and is in many ways similar to the Catholic year. However, the Church New Year (Indiction) traditionally begins on September 1 (Old Style or New Style), rather than on the first Sunday of Advent. It includes both feasts on the Fixed Cycle and the Paschal Cycle (or Moveable Cycle). The most important feast day by far is the Feast of Pascha (Easter), the Feast of Feasts. Next in importance are the Twelve Great Feasts, which commemorate various significant events in the lives of Jesus Christ and of the Theotokos (Virgin Mary).
The majority of Orthodox Christians (Russians, in particular) follow the Julian Calendar in calculating their ecclesiastical feasts, but many (including the Ecumenical Patriarchate and the Church of Greece), while preserving the Julian calculation for feasts on the Paschal Cycle, have adopted the Revised Julian Calendar (at present coinciding with the Gregorian Calendar) to calculate those feasts which are fixed according to the calendar date.
Between 1900 and 2100, there is a thirteen-day difference between the dates of the Julian calendar and those of the Revised Julian and Gregorian calendars. Thus, for example, where Christmas is celebrated on December 25 O.S. (Old Style), the celebration coincides with January 7 N.S. (New Style) on the Revised calendar. The computation of the day of Pascha (Easter), however, is always done according to a lunar calendar based on the Julian calendar, even by those churches which observe the Revised calendar.
There are four fasting seasons during the year: The most important fast is Great Lent which is an intense time of fasting, almsgiving and prayer, extending for forty days prior to Palm Sunday and Holy Week, as a preparation for Pascha. The Nativity Fast (Winter Lent) is a time of preparation for the Feast of the Nativity of Christ (Christmas), but whereas Advent in the West lasts only four weeks, Nativity Fast lasts a full forty days. The Apostles' Fast is variable in length, lasting anywhere from eight days to six weeks, in preparation for the Feast of Saints Peter and Paul (June 29). The Dormition Fast lasts for two weeks from August 1 to August 14 in preparation for the Feast of the Dormition of the Theotokos (August 15). The liturgical year is so constructed that during each of these fasting seasons, one of the Great Feasts occurs, so that fasting may be tempered with joy.
In addition to these fasting seasons, Orthodox Christians fast on Wednesdays and Fridays throughout the year (and some Orthodox monasteries also observe Monday as a fast day). Certain fixed days are always fast days, even if they fall on a Saturday or Sunday (in which case the fast is lessened somewhat, but not abrogated altogether); these are: The Decollation of St. John the Baptist, the Exaltation of the Cross and the day before the Epiphany (January 5). There are several fast-free periods, when it is forbidden to fast, even on Wednesday and Friday. These are: the week following Pascha, the week following Pentecost, the period from the Nativity of Christ until January the 5th and the first week of the Triodion (the week following the 17th Sunday before Pentecost).
Pascha
The greatest feast is Pascha. Easter for both East and West is calculated as the first Sunday after the full moon that falls on or after March 21 (nominally the day of the vernal equinox), but the Orthodox calculations are based on the Julian calendar, whose March 21 corresponds at present with April 3 of the Gregorian calendar, and on calculations of the date of full moon different from those used in the West (see computus for further details).
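The rule just described is computable. As a minimal sketch (assuming Meeus's well-known "Julian algorithm" for the computus; the function names are purely illustrative, not taken from any liturgical source), the Julian-calendar date of Pascha can be found and then shifted by the constant thirteen-day offset that holds between 1900 and 2099:

```python
from datetime import date, timedelta

def julian_easter(year):
    """Meeus's "Julian algorithm": the date of Pascha on the JULIAN calendar.

    Returns (month, day) as a Julian-calendar date.
    """
    a = year % 4
    b = year % 7
    c = year % 19
    d = (19 * c + 15) % 30            # age of the ecclesiastical moon
    e = (2 * a + 4 * b - d + 34) % 7  # days to the following Sunday
    month = (d + e + 114) // 31
    day = (d + e + 114) % 31 + 1
    return month, day

def orthodox_pascha_gregorian(year):
    """Gregorian-calendar date of Orthodox Pascha.

    The thirteen-day Julian-to-Gregorian offset is valid only for 1900-2099.
    """
    month, day = julian_easter(year)
    return date(year, month, day) + timedelta(days=13)

# In 2016, Pascha fell on April 18 (Julian), i.e. May 1 (Gregorian).
print(orthodox_pascha_gregorian(2016))  # 2016-05-01
```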
The date of Pascha is central to the entire ecclesiastical year, determining not only the date for the beginning of Great Lent and Pentecost, but affecting the cycle of moveable feasts, of scriptural readings and the Octoechos (texts chanted according to the eight ecclesiastical modes) throughout the year. There are also a number of lesser feasts throughout the year that are based upon the date of Pascha. The moveable cycle begins on Zacchaeus Sunday (the first Sunday in preparation for Great Lent, also known as the 33rd Sunday after Pentecost), though the cycle of the Octoechos continues until Palm Sunday.
The date of Pascha affects the following liturgical seasons:
The period of the Triodion (the Sundays before Great Lent, Cheesefare Week, Palm Sunday, and Holy Week)
The period of the Pentecostarion (Sunday of Pascha through the Sunday After Pentecost which is also called the Sunday of all saints)
The twelve Great Feasts
Some of these feasts follow the Fixed Cycle, and some follow the Moveable (Paschal) Cycle. Most of those on the Fixed Cycle have a period of preparation called a Forefeast, and a period of celebration afterward, similar to the Western Octave, called an Afterfeast. Great Feasts on the Paschal Cycle do not have Forefeasts. The lengths of Forefeasts and Afterfeasts vary, according to the feast.
Nativity of the Theotokos (September 8)
birth of the Theotokos to Joachim and Anna
Elevation of the Cross (September 14)
the rediscovery of the original Cross on which Christ was crucified
Entrance of the Theotokos into the Temple (November 21)
the entry of the Theotokos into the Temple around the age of 3
Nativity of Our Lord and Saviour Jesus Christ (December 25)
the birth of Jesus, or Christmas
Theophany (January 6)
the baptism of Jesus Christ, Christ's blessing of the water, and the revealing of Christ as God
Presentation of Our Lord in the Temple (February 2)
Christ's presentation as an infant in the Temple by the Theotokos and Joseph.
Annunciation of the Theotokos (March 25)
Gabriel's announcement to the Theotokos that she will conceive the Christ, and her willing agreement thereto
Note: In Eastern practice, should this feast fall during Holy Week or on Pascha itself, the feast of the Annunciation is not transferred to another day. In fact, the conjunction of the feasts of the Annunciation and Pascha (dipli Paschalia) is considered an extremely festive event.
Entry into Jerusalem (Sunday before Pascha)
known in the West as Palm Sunday.
Ascension (40 days after Pascha)
Christ's ascension into Heaven following his resurrection.
Pentecost (50 days after Pascha)
The Holy Spirit comes and indwells the apostles and other Christian believers.
Transfiguration of Our Lord (August 6)
Christ's Transfiguration as witnessed by Peter, James and John.
Dormition of the Theotokos (August 15)
The falling asleep of the Theotokos (cf. the Assumption of Mary in Western Christianity)
Other feasts
Some additional feasts are observed as though they were Great Feasts:
The Protection of the Mother of God (October 1), especially among the Russian Orthodox
The Feast of Saint James the Just (October 23)
The Feast of Saint Demetrius of Thessaloniki (October 26)
The Feast of the Holy Archangels Michael and Gabriel (November 8)
The Feast of Saint Nicholas, the Bishop of Myra in Lycia (December 6)
The Feast of the Conception of Mary by Saints Joachim and Anne (December 9)
The Feast of Saint Spiridon (December 12)
The Feast of Saint Stephen the Deacon (December 27)
The Feast of Saint Basil the Great and the Circumcision of Christ (January 1)
The Feast of the Three Holy Hierarchs: Basil the Great, Gregory the Theologian and John Chrysostom (January 30)
The Feast of the Forty Martyrs of Sebaste (March 9)
The Feast of Saint Patrick (March 17)
The Feast of Saint George (April 23)
The Feast of the Holy Emperors Constantine and Helen (May 21)
The Nativity of Saint John the Baptist (June 24)
The Feast of Saints Peter and Paul (June 29)
The Feast of Saint Elijah the Prophet (July 20)
The Feast of Saint Christina of Bolsena the Great Martyr (July 24)
The Beheading of St. John the Baptist (August 29)
Beginning of the Indiction-Ecclesiastical Year (September 1)
The Patronal Feast of a church or monastery
Every day throughout the year commemorates some saint or some event in the lives of Christ or the Theotokos. When a feast on the moveable cycle occurs, the feast on the fixed cycle that was set for that calendar day is transferred, with the propers of the feast often being chanted at Compline on the nearest convenient day.
Cycles
In addition to the Fixed and Moveable Cycles, there are a number of other liturgical cycles in the ecclesiastical year that affect the celebration of the divine services. These include the Daily Cycle, the Weekly Cycle, the Cycle of Matins Gospels, and the Octoechos.
Oriental Orthodox and P'ent'ay Evangelical Churches
Western Christianity
Western Christian liturgical calendars are based on the cycle of the Roman Rite of the Catholic Church, and are also followed in many Protestant churches, including the Lutheran, Anglican, and other traditions. Generally, the seasons in liturgical western Christianity are Advent, Christmas, Ordinary Time (Time after Epiphany), Lent, Easter, and Ordinary Time (Time after Pentecost). Some Protestant traditions do not include Ordinary Time: every day falls into a denominated season. Other Protestant churches, such as a minority in the Reformed tradition, reject the liturgical year entirely on the grounds that its observance is not directed in scripture.
For those who follow the Western liturgical year, the Revised Common Lectionary provides the scriptural structure for the pattern of the seasons. Protestant denominations that follow this lectionary include Methodists, Anglicans/Episcopalians, Lutherans, Presbyterians, and some Baptists and Anabaptists, among others. Vanderbilt University professor Hoyt L. Hickman has written on the calendars of the Western Christian churches that use the Revised Common Lectionary.
Protestant churches, with the exception of the Lutheran and Anglican, generally observe fewer feasts of the saints, if any, than the aforementioned liturgical denominations or the Catholic and Orthodox Churches.
Denominational specifics
Catholic Church
The Catholic Church sets aside certain days and seasons of each year to recall and celebrate various events in the life of Christ and his saints.
In its Roman Rite the liturgical year begins with Advent, the time of preparation for both the nativity of Christ and his expected second coming at the end of time. The Advent season lasts until First Vespers of Christmas, on the evening of December 24.
Christmastide follows, beginning with First Vespers of Christmas on the evening of December 24 and ending with the Feast of the Baptism of the Lord, on the first Sunday after Epiphany (Epiphany itself generally falls on January 6).
First Ordinary Time includes the days between Christmastide and Lent.
Lent is the period of purification and penance that begins on Ash Wednesday and ends on Holy Thursday.
The Mass of the Lord's Supper on the evening of Holy Thursday marks the beginning of the Easter Triduum, which includes Good Friday, Holy Saturday, and Easter Sunday. The days of the Easter Triduum recall Christ's last supper with his disciples, his capture and passion, his death on the cross, burial, and resurrection.
The seven-week liturgical Eastertide immediately follows the Triduum, climaxing at Pentecost. This last feast recalls the descent of the Holy Spirit upon Jesus' disciples after the Ascension of Jesus.
Second Ordinary Time includes the days between Eastertide and Advent.
There are many forms of liturgy in the Catholic Church. Even putting aside the many Eastern rites in use, the Latin liturgical rites alone include the Ambrosian Rite, the Mozarabic Rite, and the Cistercian Rite, as well as other forms that have been largely abandoned in favour of adopting the Roman Rite. There are also historical versions of the liturgy that varied greatly from the present one, such as those used by the Anglo-Saxon Church.
The liturgical calendar of the 1960 form of the Roman Rite (see General Roman Calendar of 1960) differs in some respects from that of the present form of the Roman Rite.
Lutheran Churches
Anglican Church
The Church of England, Mother Church of the Anglican Communion, uses a liturgical year that is in most respects identical to that of the 1969 Catholic calendar. While the calendars contained within the Book of Common Prayer and the Alternative Service Book (1980) have no "Ordinary Time", Common Worship (2000) adopted the ecumenical Revised Common Lectionary. The few exceptions include the Sundays following Christmas and the Transfiguration, which is observed on the last Sunday before Lent instead of on Reminiscere.
In some Anglican traditions (including the Church of England) the Christmas season is followed by an Epiphany season, which begins on the Eve of the Epiphany (on January 6 or the Sunday after January 1) and ends on the Feast of the Presentation (on February 2 or the Sunday after January 27). Ordinary Time begins after this period.
The Book of Common Prayer contains within it the traditional Western Eucharistic lectionary which traces its roots to the Comes of St. Jerome in the 5th century. Its similarity to the ancient lectionary is particularly obvious during Trinity season (Sundays after the Sunday after Pentecost), reflecting that understanding of sanctification.
Reformed Churches
Reformed Christians emphasize weekly celebration of the Lord's Day. While some of them celebrate also what they call the five evangelical feasts, others celebrate no holy days but the Lord's Day and reject the liturgical year as non-scriptural, and as therefore inconsistent with the regulative principle of worship.
Liturgical calendar
Advent
Advent (from the Latin word adventus, which means "arrival" or "coming") is the first season of the liturgical year. It begins four Sundays before Christmas, the Sunday falling on or nearest to November 30, and ends on Christmas Eve. Traditionally observed as a "fast", it focuses on preparation for the coming of Christ, not only the coming of the Christ-child at Christmas, but also, in the first weeks, on the eschatological final coming of Christ, making Advent "a period for devout and joyful expectation".
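Because the rule above pins the First Sunday of Advent to the Sunday on or nearest November 30, it always falls in the seven-day window from November 27 to December 3, which makes it straightforward to compute. A minimal sketch follows (the function name is illustrative, not from any liturgical source):

```python
from datetime import date, timedelta

def advent_sunday(year):
    """First Sunday of Advent: the Sunday on or nearest to November 30,
    i.e. the unique Sunday from November 27 to December 3 inclusive."""
    d = date(year, 11, 27)
    # date.weekday(): Monday == 0 ... Sunday == 6; step forward to Sunday.
    return d + timedelta(days=(6 - d.weekday()) % 7)

print(advent_sunday(2023))  # 2023-12-03
print(advent_sunday(2024))  # 2024-12-01
```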
This season is often marked by the Advent Wreath, a garland of evergreens with four candles. Although the main symbolism of the advent wreath is simply marking the progression of time, many churches attach themes to each candle, most often 'hope', 'faith', 'joy', and 'love'. Other popular devotions during Advent include the use of the Advent Calendar or the Tree of Jesse to count down the days to Christmas.
Liturgical colour: violet or purple; blue in some traditions, such as Anglican/Episcopalian, Methodist, and Lutheran.
Christmastide
The Christmas season immediately follows Advent. The traditional Twelve Days of Christmas begin with Christmas Eve on the evening of December 24 and continue until the feast of Epiphany. The actual Christmas season continues until the Feast of the Baptism of Christ, which is celebrated on the Sunday after January 6, or the following Monday if that Sunday is kept as Epiphany.
In the pre-1970 form, this feast is celebrated on January 13, unless January 13 is a Sunday, in which case the feast of the Holy Family is celebrated instead. Until the suppression of the Octave of the Epiphany in the 1960 reforms, January 13 was the Octave day of the Epiphany, providing the date for the end of the season.
Traditionally, the end of Christmastide was February 2, the Feast of the Presentation of the Lord, also known as Candlemas. This feast recounts the 40 days of rest Mary took before being purified and presenting her first-born son in the Temple in Jerusalem. In medieval times, Candlemas Eve (February 1) marked the day when all Christmas decorations, including the Christmas tree and the Nativity scene, were taken down. However, the tradition of ending Christmastide on Candlemas has slowly waned, except in some pockets of the Hispanic world, where Candlemas (La Fiesta de la Candelaria) is still an important feast and the unofficial end of the Christmas season.
Liturgical colour: white
Ordinary Time
"Ordinary" comes from the same root as our word "ordinal", and in this sense means "the counted weeks". In the Catholic Church and in some Protestant traditions, these are the common weeks which do not belong to a proper season. In Latin, these seasons are called the weeks , or "through the year".
In the current form of the Roman Rite adopted following the Second Vatican Council, Ordinary Time consists of 33 or 34 Sundays and is divided into two sections. The first portion extends from the day following the Feast of the Baptism of Christ until the day before Ash Wednesday (the beginning of Lent). It contains anywhere from three to eight Sundays, depending on how early or late Easter falls.
The main focus in the readings of the Mass is Christ's earthly ministry, rather than any one particular event. The counting of the Sundays resumes following Eastertide; however, two Sundays are replaced by Pentecost and Trinity Sunday, and depending on whether the year has 52 or 53 weeks, one may be omitted.
In the pre-1970 form of the Roman Rite, the Time after Epiphany has anywhere from one to six Sundays. As in the current form of the rite, the season mainly concerns Christ's preaching and ministry, with many of his parables read as the Gospel readings. The season begins on January 14 and ends on the Saturday before Septuagesima Sunday. Omitted Sundays after Epiphany are transferred to Time after Pentecost and celebrated between the Twenty-Third and the Last Sunday after Pentecost according to an order indicated in the Code of Rubrics, 18, with complete omission of any for which there is no Sunday available in the current year. Before the 1960 revisions, the omitted Sunday would be celebrated on the Saturday before Septuagesima Sunday, or, in the case of the Twenty-Third Sunday after Pentecost, on the Saturday before the Last Sunday after Pentecost.
Liturgical colour: green
Pre-Lent
Gregory the Great is the first to document a period of preparation for Easter beginning with Septuagesima, whose name refers to a period of around seventy days before Easter. This pre-Lenten period lasts two and a half weeks, encompassing Sexagesima and Quinquagesima. It concludes with Carnival and Shrove Tuesday.
This period opens a time of instruction leading up to the reception of catechumens at Easter. Events such as mystery plays from the Old Testament, performed during this period, historically supported this instructional campaign, reflecting the traditional lectionary for the canonical hours, which begins on Septuagesima with the Book of Genesis, as is still reflected in the Book of Common Prayer.
The pre-Lenten liturgy introduces some customs of Lent, including the suppression of the Alleluia and its replacement at Mass with the Tract. The Gloria is no longer said on Sundays.
The 1969 reform of the Roman Rite subsumed these weeks liturgically into Ordinary Time, but Carnival is still widely celebrated. A pre-Lenten provision continues in many Anglican and Lutheran liturgies.
Liturgical colour (where observed): violet or purple
Lent and Passiontide
Lent is a major penitential season of preparation for Easter. It begins on Ash Wednesday and, if the penitential days of Good Friday and Holy Saturday are included, lasts for forty days, since the six Sundays within the season are not counted.
In the Roman Rite, the Gloria in Excelsis Deo and the Te Deum are not used in the Mass and Liturgy of the Hours respectively, except on Solemnities and Feasts, and the Alleluia and the verse that usually precede the reading of the Gospel are either omitted or replaced with another acclamation.
Lutheran churches make these same omissions.
As in Advent, the deacon and subdeacon of the pre-1970 form of the Roman Rite do not wear their habitual dalmatic and tunicle (signs of joy) in Masses of the season during Lent; instead they wear "folded chasubles", in accordance with the ancient custom.
In the pre-1970 form of the Roman Rite, the two weeks before Easter form the season of Passiontide, a subsection of the Lenten season (which begins with Matins of Ash Wednesday and ends immediately before the Mass of the Easter Vigil). In this form, what used to be officially called Passion Sunday has the official name of the First Sunday in Passiontide, and Palm Sunday has the additional name of the Second Sunday in Passiontide. In Sunday and ferial Masses (but not on feasts celebrated in the first of these two weeks) the Gloria Patri is omitted at the Entrance Antiphon and at the Lavabo, as well as in the responds of the Divine Office.
In the post-1969 form of the Roman Rite, "Passion Sunday" and "Palm Sunday" are both names for the Sunday before Easter, officially called "Palm Sunday of the Lord's Passion". The former Passion Sunday became a fifth Sunday of Lent. The earlier form reads Matthew's account on Sunday, Mark's on Tuesday, and Luke's on Wednesday, while the post-1969 form reads the Passion only on Palm Sunday (with the three Synoptic Gospels arranged in a three-year cycle) and on Good Friday, when it reads the Passion according to John, as also do earlier forms of the Roman Rite.
The veiling of crucifixes and images of the saints with violet cloth, which was obligatory before 1970, is left to the decision of the national bishops' conferences. In the United States, it is permitted but not required, at the discretion of the pastor. In all forms, the readings concern the events leading up to the Last Supper and the betrayal, Passion, and death of Christ.
The week before Easter is called Holy Week.
In the Roman Rite, feasts that fall within that week are simply omitted, unless they have the rank of Solemnity, in which case they are transferred to another date. The only solemnities inscribed in the General Calendar that can fall within that week are those of Saint Joseph and the Annunciation.
Liturgical colour: violet or purple. The colour rose may be used, where it is the practice, on Laetare Sunday (4th Sunday of Lent). On Palm Sunday the colour since 1970 is red, by earlier rules violet or purple, with red being used after 1955 for the blessing of the palms.
Easter Triduum
The Easter Triduum consists of Good Friday, Holy Saturday and Easter Sunday. Each of these days begins liturgically not with the morning but with the preceding evening.
The Triduum begins on the evening before Good Friday with the Mass of the Lord's Supper, celebrated with white vestments, which often includes a ritual of ceremonial footwashing. It is customary on this night for a vigil involving private prayer to take place, beginning after the evening service and continuing until midnight. This vigil is occasionally renewed at dawn, continuing until the Good Friday liturgy.
During the day of Good Friday, Mass is not celebrated in the Catholic Church. Instead, a Celebration of the Passion of the Lord is held in the afternoon or evening. It consists of three parts: a Liturgy of the Word, which includes the reading of the account of the Passion by John the Evangelist and concludes with a solemn Universal Prayer; the Adoration of the Cross; and Holy Communion. Other churches also have their own Good Friday commemorations of the Passion.
The colour of vestments varies: no colour, red, or black are used in different traditions. Coloured hangings may be removed. Lutheran churches often either remove colourful adornments and icons, or veil them with drab cloth. The service is usually plain with somber music, ending with the congregation leaving in silence. In the Catholic, some Lutheran, and High Anglican rites, a crucifix (not necessarily the one which stands on or near the altar on other days of the year) is ceremoniously unveiled. Other crucifixes are unveiled, without ceremony, after the service.
Holy Saturday commemorates the day during which Christ lay in the tomb. In the Catholic Church, there is no Mass on this day; the Easter Vigil Mass, which, though celebrated properly at the following midnight, is often celebrated in the evening, is an Easter Mass. With no liturgical celebration, there is no question of a liturgical colour.
The Easter Vigil is held in the night between Holy Saturday and Easter Sunday, to celebrate the resurrection of Jesus. See also Paschal candle. The liturgical colour is white, often together with gold. In the Roman Rite, during the "Gloria in Excelsis Deo" the organ and bells are used in the liturgy for the first time in two days, and the statues, which have been veiled during Passiontide (at least in the Roman Rite through the 1962 version), are unveiled. In Lutheran churches, colours and icons are re-displayed as well.
Eastertide
Easter is the celebration of Jesus' Resurrection. The date of Easter varies from year to year, according to a lunar-calendar dating system (see computus for details). In the Roman Rite, the Easter season extends from the Easter Vigil through Pentecost Sunday. In the pre-1970 form of the rite, this season includes also the Octave of Pentecost, so Eastertide lasts until None of the following Saturday.
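For the Gregorian reckoning used in the Roman Rite, one standard computus is the widely reproduced "anonymous Gregorian algorithm". The sketch below (with an illustrative function name of our own) complements the Julian-calendar calculation shown earlier for Orthodox Pascha:

```python
from datetime import date

def gregorian_easter(year):
    """Date of Easter on the Gregorian calendar (anonymous Gregorian algorithm)."""
    a = year % 19                       # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30  # epact-like correction
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451    # correction for late paschal full moons
    month, day = divmod(h + l - 7 * m + 114, 31)
    return date(year, month, day + 1)

print(gregorian_easter(2024))  # 2024-03-31
print(gregorian_easter(2025))  # 2025-04-20
```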
In the Roman Rite, the Easter octave allows no other feasts to be celebrated or commemorated during it; a solemnity, such as the Annunciation, falling within it is transferred to the following Monday. If Easter Sunday or Easter Monday falls on April 25, the Greater Litanies, which in the pre-1970 form of the Roman Rite are on that day, are transferred to the following Tuesday.
By a decree of May 5, 2000, the Second Sunday of Easter (the Sunday after Easter Day itself) is also known in the Roman Rite as the Feast of the Divine Mercy.
Ascension Thursday, which celebrates the return of Jesus to heaven following his resurrection, is the fortieth day of Easter, but, in places where it is not observed as a Holy Day of Obligation, the post-1969 form of the Roman rite transfers it to the following Sunday.
Pentecost is the fiftieth and last day of the Easter season. It celebrates the sending of the Holy Spirit to the Apostles, which traditionally marks the birth of the Church (see also Apostolic Age).
Liturgical colour: white, but red on the feast of Pentecost.
Ordinary Time, Time after Pentecost, Time after Trinity, or Kingdomtide
This season, under various names, follows the Easter season and the feasts of Easter, Ascension, and Pentecost. In the post-1969 form of the Roman Rite, Ordinary Time resumes on Pentecost Monday, omitting the Sunday which would have fallen on Pentecost. In the earlier form, where Pentecost is celebrated with an octave, the Time after Pentecost begins at Vespers on the Saturday after Pentecost. In the present-day form of the Roman Rite, the Sundays resume their numbering at the point that will make the Sunday before Advent the thirty-fourth, omitting any weeks for which there is no room; in the pre-1970 Roman Rite, Eastern Orthodoxy and some Protestant churches they are numbered as "Sundays after Pentecost", and in other Protestant churches as "Sundays after Trinity". This season ends on the Saturday before the First Sunday of Advent.
Feasts during this season include:
Trinity Sunday, the first Sunday after Pentecost
Solemnity of the Most Holy Body and Blood of Christ (Roman Rite and some Anglican and Lutheran traditions), Thursday of the second week after Pentecost, often celebrated on the following Sunday
Solemnity of the Most Sacred Heart of Jesus (Roman Rite), Friday of the third week after Pentecost
Assumption of Mary on August 15
Feast of Christ the King, last Sunday before Advent (Roman Rite, Lutherans, Anglicans) or last Sunday in October (1925–1969 form of the Roman Rite)
In the final few weeks of Ordinary Time, many churches direct attention to the coming of the Kingdom of God, thus ending the liturgical year with an eschatological theme that is also one of the predominant themes of the season of Advent that began the liturgical year. For instance, in the extraordinary form of the Roman Rite the Gospel of the Last Sunday takes up this theme, and in the ordinary form all of the last three Sundays of the liturgical year are marked by the theme of the Second Coming.
While the Roman Rite adopts no special designation for this final part of Ordinary Time, some denominations do, and may also change the liturgical colour. The Church of England uses the term "Sundays before Advent" for the final four Sundays and permits red vestments as an alternative. The United Methodist Church may use the name "Kingdomtide". The Lutheran Church–Missouri Synod (LCMS) uses the terms "Third-Last, Second-Last and Last Sunday in the Church Year" and does not change from green. The LCMS does not officially celebrate a "Feast of Christ the King." The Wisconsin Evangelical Lutheran Synod (WELS) uses the term "Period of End Times" and assigns red vestments to the first and second Sundays.
Calendar of saints
In some Protestant traditions, especially those with closer ties to the Lutheran tradition, Reformation Sunday is celebrated on the Sunday preceding October 31, commemorating the purported day Martin Luther posted the 95 Theses on the door of the Castle Church in Wittenberg. The liturgical colour is red, celebrating the Holy Spirit's continuing work in renewing the Church.
Most Western traditions celebrate All Saints' Day (All Hallows' Day) on November 1 or the Sunday following, with the eve of this feast, All Hallows' Eve, being October 31. The liturgical colour is white. The following day, November 2, is All Souls' Day. The period including these days is often referred to as Allhallowtide or Allsaintstide.
Saints' days observed by Lutherans include those of the apostles, the Virgin Mary and other noteworthy figures in the Christian faith. The Confession of St. Peter (January 18) opens the Week of Prayer for Christian Unity, which ends with the Conversion of St. Paul (January 25). Other observances include Martin Luther King Jr., renewer of society, martyr (January 15, Evangelical Lutheran Church in America only); the Presentation of Our Lord and Purification of Mary (Candlemas, February 2); Joseph, Guardian of Jesus (March 19); the Annunciation (March 25); and the Visitation of Mary (May 31).
Lutherans also celebrate the Nativity of St John the Baptist (June 24), St Mary Magdalene (July 22), St. Mary, Mother of Our Lord, or the Assumption of the Blessed Virgin Mary (August 15), Holy Cross Day (September 14), St. Francis of Assisi, renewer of the Church (October 4), and the Holy Innocents, Martyrs (December 28).
Lesser Feasts and Commemorations on the Lutheran liturgical calendar include Anthony of Egypt (January 17); Henry, Bishop of Uppsala, martyr (January 19); Timothy, Titus and Silas, missionaries (January 26); Ansgar, Bishop of Hamburg, missionary to Denmark and Sweden (February 3); Cyril, monk, and Methodius, bishop, missionaries to the Slavs (February 14); Gregory the Great (March 12); St Patrick (March 17); Olavus Petri, priest, and Laurentius Petri, Bishop of Uppsala (April 19); St Anselm (April 21); Catherine of Siena (April 29); St Athanasius (May 2); St Monica (May 4); Eric IX of Sweden (May 18); St Boniface (June 5); Basil the Great, Gregory of Nyssa and Gregory of Nazianzus (June 14); Benedict of Nursia (July 11); Birgitta of Sweden (July 23); St Anne, Mother of Mary (July 26); St Dominic (August 8); Augustine of Hippo (August 28); St Cyprian (September 16); Teresa of Avila (October 15); Martin de Porres (November 3); Martin of Tours (November 11); Elizabeth of Hungary (November 17); and St Lucy (December 13). There are many other holy days in the Lutheran calendar.
Some traditions celebrate St. Michael's Day (Michaelmas) on September 29.
Some traditions celebrate St. Martin's Day (Martinmas) on November 11.
Liturgical colours: white if the saint was not martyred; red if the saint was martyred
Hierarchy of feast days
There are degrees of solemnity of the office of the feast days of saints. In the 13th century, the Roman Rite distinguished three ranks: simple, semidouble and double, with consequent differences in the recitation of the Divine Office or Breviary. The simple feast commenced with the chapter (capitulum) of First Vespers, and ended with None. It had three lessons and took the psalms of Matins from the ferial office; the rest of the office was like the semidouble. The semidouble feast had two Vespers, nine lessons in Matins, and ended with Compline. The antiphons before the psalms were only intoned.
In the Mass, the semidouble had always at least three "orationes" or collects. On a double feast the antiphons were sung in their entirety, before and after the psalms, while in Lauds and Vespers there were no suffragia of the saints, and the Mass had only one "oratio" (if no commemoration was prescribed). If ordinary double feasts (referred to also as lesser doubles) occurred with feasts of a higher rank, they could be simplified, except the octave days of some feasts and the feasts of the Doctors of the Church, which were transferred.
To the existing distinction between major and ordinary or minor doubles, Pope Clement VIII added two more ranks, those of first-class or second-class doubles. Some of these two classes were kept with octaves. This was still the situation when the 1907 article Ecclesiastical Feasts in the Catholic Encyclopedia was written. In accordance with the rules then in force, feast days of any form of double, if impeded by "occurrence" (falling on the same day) with a feast day of higher class, were transferred to another day.
Pope Pius X simplified matters considerably in his 1911 reform of the Roman Breviary. In the case of occurrence the lower-ranking feast day could become a commemoration within the celebration of the higher-ranking one. Until then, ordinary doubles took precedence over most of the semidouble Sundays, resulting in many of the Sunday Masses rarely being said. While retaining the semidouble rite for Sundays, Pius X's reform permitted only the most important feast days to be celebrated on Sunday, although commemorations were still made until Pope John XXIII's reform of 1960.
The division into doubles (of various kinds), semidoubles and simples continued until 1955, when Pope Pius XII abolished the rank of semidouble, making all the previous semidoubles simples and reducing the previous simples to a mere commemoration in the Mass of another feast day or of the feria on which they fell (see General Roman Calendar of Pope Pius XII).
Then, in 1960, Pope John XXIII issued the Code of Rubrics, completely ending the ranking of feast days by doubles etc., and replacing it by a ranking, applied not only to feast days but to all liturgical days, as I, II, III, and IV class days.
The 1969 revision by Pope Paul VI divided feast days into "solemnities", "feasts" and "memorials", corresponding approximately to Pope John XXIII's I, II and III class feast days. Commemorations were abolished. While some of the memorials are considered obligatory, others are optional, permitting a choice on some days between two or three memorials, or between one or more memorials and the celebration of the feria. On a day to which no obligatory celebration is assigned, the Mass may be of any saint mentioned in the Roman Martyrology for that day.
Assumption of Mary
Observed on August 15 by Catholics and some Anglicans, and coinciding with the Eastern and Orthodox feast of the Dormition, this feast celebrates the end of the earthly life of the Virgin Mary and, for some, her bodily Assumption into heaven. The teaching on this dogma was summed up by Pope Pius XII in his bull Munificentissimus Deus of 1 November 1950.
In other Anglican and Lutheran traditions, as well as a few others, August 15 is celebrated as St. Mary, Mother of the Lord.
Liturgical colour: white
Secular observance
Because of the dominance of Christianity in Europe throughout the Middle Ages, many features of the Christian year became incorporated into the secular calendar. Many of its feasts (e.g., Christmas, Mardi Gras, Saint Patrick's Day) remain holidays, and are now celebrated by people of all faiths and none—in some cases worldwide. The secular celebrations bear varying degrees of likeness to the religious feasts from which they derived, often also including elements of ritual from pagan festivals of similar date.
Comparison
See also
Notes
References
Further reading
Stookey, L. H. Calendar: Christ's Time for the Church, 1996.
Hickman, Hoyt L., et al. Handbook of the Christian Year, 1986.
Webber, Robert E. Ancient-Future Time: Forming Spirituality through the Christian Year, 2004.
Schmemann, Fr. Alexander. The Church Year (Celebration of Faith Series, Sermons Vol. 2), 1994.
Talley, Thomas J. The Origins of the Liturgical Year, Ed. 2. 1991.
External links
The Catholic Church's liturgical calendar, from the US Catholic Bishops, or from O.S.V. publishing.
Universalis – A liturgical calendar of the Catholic Church including the Liturgy of the Hours and the Mass readings.
Greek Orthodox Calendar – Greek Orthodox Calendar & Online Chapel
Russian Orthodox Calendar at Holy Trinity Russian Orthodox Church
Lectionary Central – For the study and use of the traditional Western Eucharistic lectionary (Anglican).
Seasons
Christian terminology
Types of year | Liturgical year | [
"Physics"
] | 11,147 | [
"Physical phenomena",
"Earth phenomena",
"Seasons"
] |
55,530 | https://en.wikipedia.org/wiki/Personal%20protective%20equipment | Personal protective equipment (PPE) is protective clothing, helmets, goggles, or other garments or equipment designed to protect the wearer's body from injury or infection. The hazards addressed by protective equipment include physical, electrical, heat, chemical, biohazards, and airborne particulate matter. Protective equipment may be worn for job-related occupational safety and health purposes, as well as for sports and other recreational activities. The term protective clothing is applied to traditional categories of clothing, while protective gear refers to items such as pads, guards, shields, or masks. PPE suits can be similar in appearance to a cleanroom suit.
The purpose of personal protective equipment is to reduce employee exposure to hazards when engineering controls and administrative controls are not feasible or effective to reduce these risks to acceptable levels. PPE is needed when there are hazards present. PPE has the serious limitation that it does not eliminate the hazard at the source and may result in employees being exposed to the hazard if the equipment fails.
Any item of PPE imposes a barrier between the wearer/user and the working environment. This can create additional strains on the wearer, impair their ability to carry out their work and create significant levels of discomfort. Any of these can discourage wearers from using PPE correctly, therefore placing them at risk of injury, ill-health or, under extreme circumstances, death. Good ergonomic design can help to minimise these barriers and can therefore help to ensure safe and healthy working conditions through the correct use of PPE.
Practices of occupational safety and health can use hazard controls and interventions to mitigate workplace hazards, which pose a threat to the safety and quality of life of workers. The hierarchy of hazard controls provides a policy framework which ranks the types of hazard controls in terms of absolute risk reduction. At the top of the hierarchy are elimination and substitution, which remove the hazard entirely or replace the hazard with a safer alternative. If elimination or substitution measures cannot be applied, engineering controls and administrative controls, which seek to design safer mechanisms and coach safer human behavior, are implemented. Personal protective equipment ranks last on the hierarchy of controls, as the workers are regularly exposed to the hazard, with a barrier of protection. The hierarchy of controls is important in acknowledging that, while personal protective equipment has tremendous utility, it is not the desired mechanism of control in terms of worker safety.
History
Early PPE such as body armor, boots and gloves focused on protecting the wearer's body from physical injury. The plague doctors of sixteenth-century Europe also wore protective uniforms consisting of a full-length gown, helmet, glass eye coverings, gloves and boots (see Plague doctor costume) to prevent contagion when dealing with plague victims. These were made of thick material which was then covered in wax to make it water-resistant. A mask with a beak-like structure was filled with pleasant-smelling flowers, herbs and spices to ward off miasma, in line with the prescientific belief that bad smells spread disease through the air. In more recent years, scientific personal protective equipment is generally believed to have begun with the cloth facemasks promoted by Wu Lien-teh in the 1910–11 Manchurian pneumonic plague outbreak, although some doctors and scientists of the time doubted the efficacy of facemasks in preventing the spread of that disease, since they did not believe it was transmitted through the air.
Types
Personal protective equipment can be categorized by the area of the body protected, by the type of hazard, and by the type of garment or accessory. A single item, for example boots, may provide multiple forms of protection: a steel toe cap and steel insoles for protection of the feet from crushing or puncture injuries, impervious rubber and lining for protection from water and chemicals, high reflectivity and heat resistance for protection from radiant heat, and high electrical resistivity for protection from electric shock. The protective attributes of each piece of equipment must be compared with the hazards expected to be found in the workplace. More breathable types of personal protective equipment may not lead to more contamination but do result in greater user satisfaction.
Respirators
Respirators are protective breathing equipment, which protect the user from inhaling contaminants in the air, thus preserving the health of their respiratory tract. There are two main types of respirators. One type of respirator functions by filtering out chemicals and gases, or airborne particles, from the air breathed by the user. The filtration may be either passive or active (powered). Gas masks and particulate respirators (like N95 masks) are examples of this type of respirator. A second type of respirator protects users by providing clean, respirable air from another source. This type includes airline respirators and self-contained breathing apparatus (SCBA). In work environments, respirators are relied upon when adequate ventilation is not available or other engineering control systems are not feasible or inadequate.
In the United Kingdom, an organization that has extensive expertise in respiratory protective equipment is the Institute of Occupational Medicine. This expertise has been built on a long-standing and varied research programme that has ranged from the setting of workplace protection factors to the assessment of the efficacy of masks available through high street retail outlets.
The Health and Safety Executive (HSE), NHS Health Scotland and Healthy Working Lives (HWL) have jointly developed the RPE (Respiratory Protective Equipment) Selector Tool, which is web-based. This interactive tool provides descriptions of different types of respirators and breathing apparatuses, as well as "dos and don'ts" for each type.
In the United States, the National Institute for Occupational Safety and Health (NIOSH) provides recommendations on respirator use, in accordance with NIOSH federal respiratory regulations 42 CFR Part 84. The National Personal Protective Technology Laboratory (NPPTL) of NIOSH is tasked with actively conducting studies on respirators and providing recommendations.
Surgical masks
Surgical masks are sometimes considered PPE, but are not respirators, being unable to stop submicron particles from passing through, and also having unrestricted air flow at the edges of the mask.
Surgical masks are not certified for the prevention of tuberculosis.
Skin protection
Occupational skin diseases such as contact dermatitis, skin cancers, and other skin injuries and infections are the second-most common type of occupational disease and can be very costly. Skin hazards, which lead to occupational skin disease, can be classified into four groups. Chemical agents can come into contact with the skin through direct contact with contaminated surfaces, deposition of aerosols, immersion or splashes. Physical agents such as extreme temperatures and ultraviolet or solar radiation can be damaging to the skin over prolonged exposure. Mechanical trauma occurs in the form of friction, pressure, abrasions, lacerations and contusions. Biological agents such as parasites, microorganisms, plants and animals can have varied effects on the skin upon exposure.
Any form of PPE that acts as a barrier between the skin and the agent of exposure can be considered skin protection. Because much work is done with the hands, gloves are an essential item in providing skin protection. Some examples of gloves commonly used as PPE include rubber gloves, cut-resistant gloves, chainsaw gloves and heat-resistant gloves. For sports and other recreational activities, many different gloves are used for protection, generally against mechanical trauma.
Other than gloves, any article of clothing or protection worn for a purpose can serve to protect the skin. Lab coats, for example, are worn to protect against potential splashes of chemicals. Face shields serve to protect the face from potential impact hazards, chemical splashes or possibly infectious fluid.
Many migrant workers need training in PPE for the prevention of heat-related illness (HRI). Research based on study results has identified some potential gaps in heat safety education: while some farm workers reported receiving limited training on pesticide safety, others did not. This could be remedied by giving incoming groups of farm workers video and in-person training on HRI prevention. Such educational programs for farm workers are most effective when they are based on health behavior theories, use adult learning principles and employ train-the-trainer approaches.
Eye protection
Each day, about 2,000 US workers have a job-related eye injury that requires medical attention. Eye injuries can happen through a variety of means. Most eye injuries occur when solid particles such as metal slivers, wood chips, sand or cement chips get into the eye. Smaller particles in smokes and larger particles such as broken glass also account for particulate matter-causing eye injuries. Blunt force trauma can occur to the eye when excessive force comes into contact with the eye. Chemical burns, biological agents, and thermal agents, from sources such as welding torches and UV light, also contribute to occupational eye injury.
While the required eye protection varies by occupation, the safety provided can be generalized. Safety glasses provide protection from external debris, and should provide side protection via a wrap-around design or side shields.
Goggles provide better protection than safety glasses, and are effective in preventing eye injury from chemical splashes, impact, dusty environments and welding. Goggles with high air flow should be used to prevent fogging.
Face shields provide additional protection and are worn over the standard eyewear; they also provide protection from impact, chemical, and blood-borne hazards.
Full-facepiece respirators are considered the best form of eye protection when respiratory protection is needed as well, but may be less effective against potential impact hazards to the eye.
Eye protection for welding is shaded to different degrees, depending on the specific operation.
Hearing protection
Industrial noise is often overlooked as an occupational hazard, as it is not visible to the eye. Overall, about 22 million workers in the United States are exposed to potentially damaging noise levels each year. Occupational hearing loss accounted for 14% of all occupational illnesses in 2007, with about 23,000 cases significant enough to cause permanent hearing impairment. About 82% of occupational hearing loss cases occurred to workers in the manufacturing sector. In the US the Occupational Safety and Health Administration establishes occupational noise exposure standards. The National Institute for Occupational Safety and Health recommends that worker exposures to noise be reduced to a level equivalent to 85 dBA for eight hours to reduce occupational noise-induced hearing loss.
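As a rough numerical illustration (a sketch only, not an official NIOSH tool), the NIOSH criteria pair the 85 dBA, eight-hour reference with a 3-dB exchange rate, so each 3 dBA above the reference level halves the recommended daily exposure time:

```python
def niosh_recommended_duration_hours(level_dba):
    """Recommended maximum daily exposure time (hours) at a given noise level,
    per the NIOSH 85 dBA / 8-hour criterion with a 3-dB exchange rate:
        T = 8 / 2 ** ((L - 85) / 3)
    """
    return 8.0 / 2 ** ((level_dba - 85.0) / 3.0)

for level in (85, 88, 91, 94, 100):
    print(level, "dBA ->", round(niosh_recommended_duration_hours(level), 2), "h")
# 85 dBA -> 8.0 h; 88 -> 4.0 h; 91 -> 2.0 h; 94 -> 1.0 h; 100 -> 0.25 h
```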
PPE for hearing protection consists of earplugs and earmuffs. Workers who are regularly exposed to noise levels above the NIOSH recommendation should be provided with hearing protection by the employers, as they are a low-cost intervention. A personal attenuation rating can be objectively measured through a hearing protection fit-testing system. The effectiveness of hearing protection varies with the training offered on their use.
Protective clothing and ensembles
This form of PPE is all-encompassing and refers to the various suits and uniforms worn to protect the user from harm. Lab coats worn by scientists and ballistic vests worn by law enforcement officials, which are worn on a regular basis, would fall into this category. Entire sets of PPE, worn together in a combined suit, are also in this category.
Ensembles
Below are some examples of ensembles of personal protective equipment, worn together for a specific occupation or task, to provide maximum protection for the user:
PPE gowns are used by medical personnel like doctors and nurses.
Chainsaw protection (especially a helmet with face guard, hearing protection, kevlar chaps, anti-vibration gloves, and chainsaw safety boots).
Bee-keepers wear various levels of protection depending on the temperament of their bees and the reaction of the bees to nectar availability. At minimum, most beekeepers wear a brimmed hat and a veil made of fine mesh netting. The next level of protection involves leather gloves with long gauntlets and some way of keeping bees from crawling up one's trouser legs. In extreme cases, specially fabricated shirts and trousers can serve as barriers to the bees' stingers.
Diving equipment, for underwater diving, constitutes equipment such as a diving helmet or diving mask, an underwater breathing apparatus, and a diving suit.
Firefighters wear PPE designed to provide protection against fires and various fumes and gases. PPE worn by firefighters include bunker gear, self-contained breathing apparatus, a helmet, safety boots, and a PASS device.
In sports
Participants in sports often wear protective equipment. Studies performed on the injuries of professional athletes, such as that on NFL players, question the effectiveness of existing personal protective equipment.
Limits of the definition
The definition of what constitutes personal protective equipment varies by country. In the United States, the laws regarding PPE also vary by state. In 2011, workplace safety complaints were brought against Hustler and other adult film production companies by the AIDS Healthcare Foundation, leading to several citations brought by Cal/OSHA. The failure to use condoms by adult film stars was a violation of Cal/OSHA's Bloodborne Pathogens Program, Personal Protective Equipment. This example shows that personal protective equipment can cover a variety of occupations in the United States, and has a wide-ranging definition.
Legislation
United States
The National Defense Authorization Act for 2022 defines personal protective equipment as
Under this Act, US military services are prohibited from purchasing PPE from suppliers in North Korea, China, Russia or Iran, unless there are problems with the supply or cost of PPE of "satisfactory quality and quantity".
European Union
At the European Union level, personal protective equipment is governed by Directive 89/686/EEC on personal protective equipment (PPE). The Directive is designed to ensure that PPE meets common quality and safety standards by setting out basic safety requirements for personal protective equipment, as well as conditions for its placement on the market and free movement within the EU single market. It covers "any device or appliance designed to be worn or held by an individual for protection against one or more health and safety hazards". The directive was adopted on 21 January 1989 and came into force on 1 July 1992. The European Commission additionally allowed for a transition period until 30 June 1995 to give companies sufficient time to adapt to the legislation. After this date, all PPE placed on the market in EU Member States was required to comply with the requirements of Directive 89/686/EEC and carry the CE Marking.
Article 1 of Directive 89/686/EEC defines personal protective equipment as any device or appliance designed to be worn or held by an individual for protection against one or more health and safety hazards. PPE which falls under the scope of the Directive is divided into three categories:
Category I: simple design (e.g. gardening gloves, footwear, ski goggles)
Category II: PPE not falling into category I or III (e.g. personal flotation devices, dry and wet suits, motorcycle personal protective equipment)
Category III: complex design (e.g. respiratory equipment, harnesses)
Directive 89/686/EEC on personal protective equipment does not distinguish between PPE for professional use and PPE for leisure purposes.
Personal protective equipment falling within the scope of the Directive must comply with the basic health and safety requirements set out in Annex II of the Directive. To facilitate conformity with these requirements, harmonized standards are developed at the European or international level by the European Committee for Standardization (CEN, CENELEC) and the International Organization for Standardization in relation to the design and manufacture of the product. Usage of the harmonized standards is voluntary and provides presumption of conformity. However, manufacturers may choose an alternative method of complying with the requirements of the Directive.
Personal protective equipment excluded from the scope of the Directive includes:
PPE designed for and used by the armed forces or in the maintenance of law and order;
PPE for self-defence (e.g. aerosol canisters, personal deterrent weapons);
PPE designed and manufactured for personal use against adverse atmospheric conditions (e.g. seasonal clothing, umbrellas), damp and water (e.g. dish-washing gloves) and heat;
PPE used on vessels and aircraft but not worn at all times;
helmets and visors intended for users of two- or three-wheeled motor vehicles.
The European Commission is currently working to revise Directive 89/686/EEC. The revision will look at the scope of the Directive, the conformity assessment procedures and technical requirements regarding market surveillance. It will also align the Directive with the New Legislative Framework. The European Commission is likely to publish its proposal in 2013. It will then be discussed by the European Parliament and Council of the European Union under the ordinary legislative procedure before being published in the Official Journal of the European Union and becoming law.
Research
Research studies in the form of randomized controlled trials and simulation studies are needed to determine the most effective types of PPE for preventing the transmission of infectious diseases to healthcare workers.
There is low certainty evidence that supports making improvements or modifications to PPE in order to help decrease contamination. Examples of modifications include adding tabs to masks or gloves to ease removal and designing protective gowns so that gloves are removed at the same time. In addition, there is low certainty evidence that the following PPE approaches or techniques may lead to reduced contamination and improved compliance with PPE protocols: Wearing double gloves, following specific doffing (removal) procedures such as those from the CDC, and providing people with spoken instructions while removing PPE.
See also
CBRN (Chemical, Biological, Radiological, Nuclear; known formerly as NBC)
Hazmat (hazardous materials)
Normalization of deviance – one reason people stop using effective prevention measures
References
External links
CDC - Emergency Response Resources: Personal Protective Equipment - NIOSH Workplace Safety and Health Topic
European Commission, DG Enterprise, Personal Protective Equipment
Directive 89/686/EEC on Personal Protective Equipment
A short guide to the Personal Protective Equipment at Work Regulations 1992, INDG174(rev1), revised 8/05 (HSE)
Occupational safety and health
Risk management in business
Industrial hygiene
Safety engineering
Environmental social science
Working conditions | Personal protective equipment | [
"Engineering",
"Environmental_science"
] | 3,669 | [
"Safety engineering",
"Systems engineering",
"Personal protective equipment",
"Environmental social science"
] |
55,539 | https://en.wikipedia.org/wiki/Conventional%20sex | Conventional sex, colloquially known as vanilla sex, is sexual behavior that is within the range of normality for a culture or subculture; it typically involves sex that does not include elements of BDSM, kink, or fetishism, or that happens within a marriage or relationship.
Description
What is regarded as conventional sex depends on cultural and subcultural norms. Among heterosexual couples in the Western world, for example, conventional sex often refers to sexual intercourse in the missionary position. It can also describe penetrative sex which does not have any element of BDSM, kink or fetish.
The British Medical Journal regards conventional sex between homosexual couples as "sex that does not extend beyond affection, mutual masturbation, and oral and anal sex." In addition to mutual masturbation (including manual sex), penetrative sexual activity among same-sex pairings is contrasted with non-insertive acts such as intercrural sex, frot and tribadism, although tribadism has been cited as a common but rarely discussed sexual practice among lesbians.
Vanilla sexuality
The term "vanilla" in "vanilla sex" leverages the polysemic nature of the term, meaning both literally "vanilla" or "conventional", depending on the context.
It originally derives from the use of vanilla extract as the basic flavoring for ice cream and, by extension, came to mean plain or conventional. In relationships where only one partner enjoys less conventional forms of sexual expression, the partner who does not enjoy such activities as much as the other is often referred to as the vanilla partner. As such, it is easy for them to be erroneously branded unadventurous in sexual matters. Through exploration with their partner, it may be possible for a more vanilla-minded person to discover new facets of their sexuality. As with any sexually active person, they may find their preferences on the commonly termed "vanilla-kink spectrum" are sufficient for their full satisfaction.
References
LGBTQ terminology
Sexual acts | Conventional sex | [
"Biology"
] | 405 | [
"Sexual acts",
"Behavior",
"Sexuality",
"Mating"
] |
55,557 | https://en.wikipedia.org/wiki/Timeline | A timeline is a list of events displayed in chronological order. It is typically a graphic design showing a long bar labelled with dates paralleling it, and usually contemporaneous events.
Timelines can use any suitable scale representing time, suiting the subject and data; many use a linear scale, in which a unit of distance is equal to a set amount of time. This timescale is dependent on the events in the timeline. A timeline of evolution can be over millions of years, whereas a timeline for the day of the September 11 attacks can take place over minutes, and that of an explosion over milliseconds. While many timelines use a linear timescale—especially where very large or small timespans are relevant—logarithmic timelines entail a logarithmic scale of time; some "hurry up and wait" chronologies are depicted with zoom lens metaphors.
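As an illustration of the difference between the two timescales (a sketch with hypothetical function names, not taken from the source; the event ages are rough):

```python
# A minimal sketch (function names are illustrative) of the two timescales
# described above. A linear scale maps equal spans of time to equal distances;
# a logarithmic scale compresses the distant past, letting events millions of
# years apart share an axis with recent history.
import math

def linear_position(t, t_start, t_end, width=100.0):
    """Distance along the bar, proportional to elapsed time."""
    return width * (t - t_start) / (t_end - t_start)

def log_position(years_ago, max_years_ago, width=100.0):
    """Distance along the bar, based on the logarithm of time before present."""
    return width * math.log10(max(years_ago, 1.0)) / math.log10(max_years_ago)

# Linear: minutes within a single morning spread evenly across the bar.
print(round(linear_position(t=46, t_start=0, t_end=102), 1))   # 45.1

# Logarithmic: deep time and recent history on one axis (ages are rough).
for label, ya in [("first mammals", 2.1e8), ("genus Homo", 2.8e6), ("printing press", 570)]:
    print(label, round(log_position(ya, max_years_ago=4.5e9), 1))
```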
More usually, "timeline" refers merely to a data set which could be displayed as described above. For example, this meaning is used in the titles of many Wikipedia articles starting "Timeline of ..."
History
Time and space (particularly the line) are intertwined concepts in human thought. The line is ubiquitous in clocks in the form of a circle, time is spoken of in terms of length, intervals, a before and an after. The idea of orderly, segmented time is also represented in almanacs, calendars, charts, graphs, genealogical and evolutionary trees, where the line is central.
Originally, chronological events were arranged in a mostly textual form. This took form in annals, like king lists. Alongside them, the table was used like in the Greek tables of Olympiads and Roman lists of consuls and triumphs. Annals had little narrative and noted what happened to people, making no distinction between natural and human actions.
In Europe, from the 4th century, the dominant chronological notation was the table. This can be partially credited to Eusebius, who laid out the relations between Jewish, pagan, and Christian histories in parallel columns, culminating in the Roman Empire, into which, according to the Christian view, Christ was born to spread salvation as far as possible. His work was widely copied and was among the first printed books. This served the idea of Christian world history and providential time. The table is easy to produce, append, and read with indices, so it also fit the Renaissance scholars' absorption of a wide variety of sources with its focus on commonalities. These uses made the table with years in one column and places of events (kingdoms) on the top the dominant visual structure of time.
By the 17th century, historians had started to claim that chronology and geography were the two sources of precise information which bring order to the chaos of history. In geography, Renaissance mapmakers updated Ptolemy's maps and the map became a symbol of the power of monarchs, and knowledge. Likewise, the idea that a singular chronology of world history from contemporary sources is possible affected historians. The want for precision in chronology gave rise to adding historical eclipses to tables, like in the case of Gerardus Mercator. Various graphical experiments emerged, from fitting the whole of history on a calendar year to series of historical drawings, in the hopes of making a metaphorical map of time. Developments in printing and engraving that made practical larger and more detailed book illustrations allowed these changes, but in the 17th century, the table with some modifications continued to dominate.
The modern timeline emerged in Joseph Priestley's A Chart of Biography, published in 1765. It presented dates simply and provided an analogue for the concept of historical progress that was becoming popular in the 18th century. However, as Priestley recognized, history is not totally linear. The table has the advantage in that it can present many of these intersections and branching paths. For Priestley, its main use was a "mechanical help to the knowledge of history", not as an image of history. Regardless, the timeline had become very popular during the 18th and 19th centuries. Positivism emerged in the 19th century and the development of chronophotography and tree ring analysis made visible time taking place at various speeds. This encouraged people to think that events might be truly objectively recorded.
However, in some cases, filling in a timeline with more data only pushed it towards impracticality. Jacques Barbeu-Dubourg's 1753 Chronologie Universelle was mounted on a 54-feet-long (16½ m) scroll. Charles Joseph Minard's 1869 thematic map of casualties of the French army in its Russian campaign put much less focus on the one-directional line. Charles Renouvier's 1876 Uchronie, a branching map of the history of Europe, depicted both the actual course of history and counterfactual paths. At the end of the 19th century, Henri Bergson declared the metaphor of the timeline to be deceiving in Time and Free Will. The question of big history and deep time engendered estranging forms of the timeline, like in Olaf Stapledon's 1930 work Last and First Men where timelines are drawn on scales from the historical to the cosmological. Similar techniques are used by the Long Now Foundation, and the difficulties of chronological representation have been presented by visual artists including Francis Picabia, On Kawara, J. J. Grandville, and Saul Steinberg.
Types
There are different types of timelines:
Text timelines, labeled as text
Number timelines, the labels are numbers, commonly line graphs
Interactive, clickable, zoomable
Video timelines
There are many methods to visualize timelines. Historically, timelines were static images and were generally drawn or printed on paper. Timelines relied heavily on graphic design, and the ability of the artist to visualize the data.
Uses
Timelines are often used in education to help students and researchers with understanding the order or chronology of historical events and trends for a subject. To show time on a specific scale on an axis, a timeline can visualize time lapses between events, durations (such as lifetimes or wars), and the simultaneity or the overlap of spans and events.
In historical studies
Timelines are particularly useful for studying history, as they convey a sense of change over time. Wars and social movements are often shown as timelines. Timelines are also useful for biographies. Examples include:
Timeline of the civil rights movement
Timeline of European exploration
Timeline of European imperialism
Timeline of Solar System exploration
Timeline of United States history
Timeline of World War I
List of timelines of World War II
Timeline of religion
In natural sciences
Timelines are also used in the natural world and sciences, such as in astronomy, biology, chemistry, and geology:
2009 swine flu pandemic timeline
Chronology of the universe
Geologic time scale
Timeline of the evolutionary history of life
Timeline of crystallography
In project management
Another type of timeline is used for project management. Timelines help team members know what milestones need to be achieved and under what time schedule. An example is establishing a project timeline in the implementation phase of the life cycle of a computer system.
Software
Timelines (no longer constrained by previous space and functional limitations) are now digital and interactive, generally created with computer software. Microsoft Encarta encyclopedia provided one of the earliest multimedia timelines intended for students and the general public. ChronoZoom is another example of interactive timeline software.
See also
Chronology
ChronoZoom – an open source project for visualizing the timeline of Big History
Detailed logarithmic timeline
List of timelines
Living graph
Logarithmic timeline
Many-worlds interpretation
Sequence of events
Synchronoptic view
Timecode
Timestream
Timelines of world history
World line
References
External links
Infographics
Statistical charts and diagrams
Chronology
Visualization (graphics) | Timeline | [
"Physics"
] | 1,584 | [
"Spacetime",
"Chronology",
"Physical quantities",
"Time"
] |
55,562 | https://en.wikipedia.org/wiki/National%20Road | The National Road (also known as the Cumberland Road) was the first major improved highway in the United States built by the federal government. Built between 1811 and 1837, the road connected the Potomac and Ohio Rivers and was a main transport path to the West for thousands of settlers. When improved in the 1830s, it became the second U.S. road surfaced with the macadam process pioneered by Scotsman John Loudon McAdam.
Construction began heading west in 1811 at Cumberland, Maryland, on the Potomac River. After the Financial Panic of 1837 and the resulting economic depression, congressional funding ran dry and construction was stopped at Vandalia, Illinois, the then-capital of Illinois, northeast of St. Louis across the Mississippi River.
The road has also been referred to as the Cumberland Turnpike, the Cumberland–Brownsville Turnpike (or Road or Pike), the Cumberland Pike, the National Pike, and the National Turnpike.
In the 20th century with the advent of the automobile, the National Road was connected with other historic routes to California under the title, National Old Trails Road. Today, much of the alignment is followed by U.S. Route 40 (US 40), with various portions bearing the Alternate U.S. Route 40 (Alt. US 40) designation, or various state-road numbers (such as Maryland Route 144 for several sections between Baltimore and Cumberland).
In 1976, the American Society of Civil Engineers designated the National Road as a National Historic Civil Engineering Landmark. In 2002, the entire road, including extensions east to Baltimore and west to St. Louis, was designated the Historic National Road, an All-American Road.
History
Braddock Road
The Braddock Road had been opened by the Ohio Company in 1751 between Fort Cumberland, the limit of navigation on the upper Potomac River, and the French military station at Fort Duquesne at the forks of the Ohio River (at the confluence of the Allegheny and Monongahela Rivers), an important trading and military point where the city of Pittsburgh now stands. It received its name during the colonial-era French and Indian War of 1753–1763 (also known as the Seven Years' War in Europe), when it was constructed by British General Edward Braddock, who was accompanied by Colonel George Washington of the Virginia militia regiment in the ill-fated July 1755 Braddock expedition, an attempt to assault the French-held Fort Duquesne.
Cumberland Road
Construction of the Cumberland Road (which later became part of the longer National Road) was authorized on March 29, 1806, by Congress. The new Cumberland Road would replace the wagon and foot paths of the Braddock Road for travel between the Potomac and Ohio Rivers, following roughly the same alignment until just east of Uniontown, Pennsylvania. From there, where the Braddock Road turned north towards Pittsburgh, the new National Road/Cumberland Road continued west to Wheeling, Virginia (now West Virginia), also on the Ohio River.
The contract for the construction of the first section was awarded to Henry McKinley on May 8, 1811, and construction began later that year, with the road reaching Wheeling on August 1, 1818. For more than 100 years, a simple granite stone was the only marker of the road's beginning in Cumberland, Maryland. In June 2012, a monument and plaza were built in that town's Riverside Park, next to the historic original starting point.
Beyond the National Road's eastern terminus at Cumberland and toward the Atlantic coast, a series of private toll roads and turnpikes were constructed, connecting the National Road (also known as the Old National Pike) with Baltimore, then the third-largest city in the country, and a major maritime port on Chesapeake Bay. Completed in 1824, these feeder routes formed what is referred to as an eastern extension of the federal National Road.
Westward extension
On May 15, 1820, Congress authorized an extension of the road to St. Louis, on the Mississippi River, and on March 3, 1825, across the Mississippi and to Jefferson City, Missouri. Work on the extension between Wheeling and Zanesville, Ohio, used the pre-existing Zane's Trace of Ebenezer Zane, and was completed in 1833 to the new state capital of Columbus, Ohio, and in 1838 to the college town of Springfield, Ohio.
In 1849, a bridge was completed to carry the National Road across the Ohio River at Wheeling. The Wheeling Suspension Bridge, designed by Charles Ellet Jr., was at the time the world's longest bridge span from tower to tower.
Transfer to states
Maintenance costs on the Cumberland Road were becoming more than Congress was willing to bear. In agreements with Maryland, Virginia, and Pennsylvania, the road was to be reconstructed and resurfaced. The section that ran over Haystack Mountain, just west of Cumberland, was abandoned and a new road was built through the Cumberland Narrows.
On April 1, 1835, the section from Wheeling to Cumberland was transferred to Maryland, Pennsylvania, and Virginia (now West Virginia). The last congressional appropriation was made May 25, 1838, and in 1840, Congress voted against completing the unfinished portion of the road, with the deciding vote being cast by Henry Clay. By that time, railroads were proving a better method of long-distance transportation, and the Baltimore and Ohio Railroad was being built west from Baltimore to Cumberland, mostly along the Potomac River, and then by a more direct route than the National Road across the Allegheny Plateau of West Virginia (then Virginia) to Wheeling. Construction of the National Road stopped in 1839. Much of the road through Indiana and Illinois remained unfinished and was transferred to the states.
Federal construction of the road stopped at Vandalia, Illinois, which at that time was the state's capital. Illinois officials decided not to continue construction without the federal funds because two state roads from Vandalia to the St. Louis area, today's US 40 and Illinois Route 140 (known then as the Alton Road), already existed.
Subsequent events
In 1912, the National Road was chosen to become part of the National Old Trails Road, which would extend further east to New York City and west to Los Angeles, California. Five Madonna of the Trail monuments, donated by the Daughters of the American Revolution, were erected along the Old Trails Road.
In 1927, the National Road was designated as the eastern part of US 40, which still generally follows the National Road's alignment with occasional bypasses, realignments, and newer bridges. The mostly parallel Interstate 70 (I-70) now provides a faster route for through travel without the many sharp curves, steep grades, and narrow bridges of US 40 and other segments of the National Road. Heading west from Hancock in western Maryland, I-70 takes a more northerly path to connect with and follow the Pennsylvania Turnpike (also designated as I-76) across the mountains between Breezewood and New Stanton, where I-70 turns west to rejoin the National Road's route (and US 40) near Washington, Pennsylvania.
The more recently constructed I-68 parallels the old road from Hancock through Cumberland west to Keyser's Ridge, Maryland, where the National Road and US 40 turn northwest into Pennsylvania, but I-68 continues directly west to meet I-79 near Morgantown, West Virginia. The portion of I-68 in Maryland is designated as the National Freeway.
Historical structures
Many of the National Road's original stone arch bridges also remain on former alignments, including:
Casselman River Bridge near Grantsville, Maryland—built in 1813–1814 to carry the road across the Casselman River—at the time, it was the longest single-span stone arch bridge in America.
Great Crossings Bridge near Confluence, Pennsylvania—built in 1818 to carry the road over the Youghiogheny River—the bridge, and the adjacent town of Somerfield, Pennsylvania (which was razed) are under the waters of Youghiogheny River Lake (though still visible at times of extremely low water levels).
Another remaining National Road bridge is the Wheeling Suspension Bridge at Wheeling, West Virginia. Opened in 1849 to carry the road over the Ohio River, it was the largest suspension bridge in the world until 1851, and until 2019 was the oldest vehicular suspension bridge in the United States still in use, although it has since been closed to vehicular traffic due to repeated overweight vehicles ignoring the weight limits and damaging the bridge. A newer bridge now carries the realigned US 40 and I-70 across the river nearby.
Three of the road's original toll houses are preserved:
La Vale Tollgate House, in La Vale, Maryland
Petersburg Tollhouse, in Addison, Pennsylvania
Searights Tollhouse, near Uniontown, Pennsylvania
Additionally, several Old National Pike Milestones—some well-maintained, others deteriorating, and yet others represented by modern replacements—remain intact along the route.
Route description
In general, the road climbed westwards along the Amerindian trail known as Chief Nemacolin's Path, once followed and improved by a young George Washington, then also followed by the Braddock Expedition. Using the Cumberland Narrows, its first phase of construction crossed the Allegheny Mountains and entered southwestern Pennsylvania, reaching the Allegheny Plateau in Somerset County, Pennsylvania. There, travelers could turn off to Pittsburgh or continue west through Uniontown and reach navigable water, the Monongahela River, at Brownsville, Pennsylvania, which was by then a major outfitting center and riverboat-building emporium. Many settlers boarded boats there to travel down the Ohio and up the Missouri, or elsewhere on the Mississippi watershed.
By 1818, travelers could press on, still following Chief Nemacolin's trail across the ford, or taking a ferry to West Brownsville, moving through Washington County, Pennsylvania, and passing into Wheeling, Virginia (now West Virginia), on the Ohio River. Subsequent efforts pushed the road across the states of Ohio and Indiana and into the Illinois Territory. The western terminus of the National Road at its greatest extent was at the Kaskaskia River in Vandalia, Illinois, near the intersection of modern US 51 and US 40.
Today, travelers driving east from Vandalia travel along modern US 40 through south-central Illinois. The National Road continued into Indiana along modern US 40, passing through the cities of Terre Haute and Indianapolis. Within Indianapolis, the National Road used the original alignment of US 40 along West and East Washington Street (modern US 40 is now routed along I-465). East of Indianapolis, the road went through the city of Richmond before entering Ohio, where the road continued along modern US 40 and passed through the northern suburbs of Dayton, Springfield, and Columbus.
West of Zanesville, Ohio, despite US 40's predominantly following the original route, many segments of the original road can still be found. Between Old Washington and Morristown, the original roadbed has been overlaid by I-70. The road then continued east across the Ohio River into Wheeling in West Virginia, the original western end of the National Road when it was first paved. After its run through West Virginia, the National Road entered Pennsylvania.
The road cut across southwestern Pennsylvania, heading southeast before entering Maryland. East of Keyser's Ridge, the road used modern Alt US 40 to the city of Cumberland (modern US 40 is now routed along I-68). Cumberland was the original eastern terminus of the road.
In the mid-19th century, a turnpike extension to Baltimore was approved—along what is now Maryland Route 144 from Cumberland to Hancock, US 40 from Hancock to Hagerstown, Alternate US 40 from Hagerstown to Frederick, and Maryland Route 144 from Frederick to Baltimore. The approval process was a hotly debated subject because of the removal of the original macadam construction that made this road famous.
The road's route between Baltimore and Cumberland continues to use the name National Pike or Baltimore National Pike (and Main Street in parts of Ohio) today, with various portions now signed as US 40, Alt. US 40, or Maryland Route 144. A spur between Frederick, Maryland, and Georgetown (Washington, D.C.), now Maryland Route 355, bears various local names, but is sometimes referred to as the Washington National Pike; it is now paralleled by I-270 between the Capital Beltway (I-495) and Frederick.
Millionaires' Row
Nicknamed the "Main Street of America", the road's presence in towns on its route and effective access to surrounding towns attracted wealthy residents to build their houses along the road in towns such as in Richmond, Indiana, and Springfield, Ohio, creating Millionaires' Rows.
Historic designations
In 1976, the American Society of Civil Engineers designated the National Road as a National Historic Civil Engineering Landmark.
There are several structures associated with the National Road that are listed on the National Register of Historic Places. Some are listed below.
Maryland
Sixty-nine milestones in Maryland on Maryland Route 144 and Maryland Route 165, U.S. Route 40, U.S. Route 40 Alternate, and U.S. Route 40 Scenic
Inns on the National Road in Cumberland, Maryland, and Grantsville, Maryland
Casselman River Bridge near Grantsville, Maryland
Pennsylvania
The Pennsylvania Historical and Museum Commission has installed five historical markers noting the historic importance of the road: one in Somerset County on August 10, 1947, one in Washington County on April 1, 1949, and three in Fayette County on October 12, 1948, October 12, 1948, and May 19, 1996.
Petersburg Tollhouse in Addison, Pennsylvania
Mount Washington Tavern adjacent to the Fort Necessity National Battlefield in Wharton Township, Pennsylvania
Searights Tollhouse, National Road, in Uniontown, Pennsylvania
Dunlap's Creek Bridge, near Brownsville, Pennsylvania, the first cast iron arch bridge in the United States. Completed in 1839, it was designed by Richard Delafield and built by the United States Army Corps of Engineers. Still in use, the bridge is also a National Historic Civil Engineering Landmark.
Claysville S Bridge in Washington County, Pennsylvania, near Claysville, Pennsylvania
West Virginia
Mile markers 8, 9, 10, 11, 13, and 14 in West Virginia
National Road Corridor Historic District in Wheeling, West Virginia
Wheeling Suspension Bridge in Wheeling, West Virginia
Ohio
Peacock Road in Cambridge, Ohio
The Red Brick Tavern in Lafayette, Madison County, Ohio, built in 1837
Indiana
Hudleston Farmhouse Inn in Mount Auburn, Indiana
James Whitcomb Riley House in Greenfield, Indiana
Illinois
Old Stone Arch, National Road, near Marshall, Illinois
Gallery
See also
National Old Trails Road (Ocean-to-Ocean Highway)
References
Further reading
External links
American Society of Civil Engineers landmark information
– National Road in Illinois
Indiana National Road Association – National Road in Indiana
Ohio National Road Association – National Road in Ohio
National Road in West Virginia – by the West Virginia Department of Commerce
National Road Heritage Corridor – National Road in Pennsylvania
The National Old Trails Road Part 1: The Quest for a National Road
Maryland's Bank Road (Baltimore to Cumberland)
PRR Chronology
The Historic National Road, from the America's Byways website of the Federal Highway Administration
The National Old Trails Road Photo Gallery
Ohio National Road driving tour
125 M to B: The National Pike and National Road
All-American Roads
Historic trails and roads in the United States
Roads on the National Register of Historic Places
Historic Civil Engineering Landmarks
History of Cumberland, MD-WV MSA
Roads on the National Register of Historic Places in Ohio
Roads on the National Register of Historic Places in West Virginia
Roads on the National Register of Historic Places in Indiana
Roads on the National Register of Historic Places in Pennsylvania
Roads on the National Register of Historic Places in Maryland
U.S. Route 40
9th United States Congress
Scenic byways in Ohio
1811 establishments in the United States | National Road | [
"Engineering"
] | 3,187 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
55,584 | https://en.wikipedia.org/wiki/Gout | Gout is a form of inflammatory arthritis characterized by recurrent attacks of pain in a red, tender, hot, and swollen joint, caused by the deposition of needle-like crystals of uric acid known as monosodium urate crystals. Pain typically comes on rapidly, reaching maximal intensity in less than 12 hours. The joint at the base of the big toe is affected (podagra) in about half of cases. It may also result in tophi, kidney stones, or kidney damage.
Gout is due to persistently elevated levels of uric acid (urate) in the blood (hyperuricemia). This occurs from a combination of diet, other health problems, and genetic factors. At high levels, uric acid crystallizes and the crystals deposit in joints, tendons, and surrounding tissues, resulting in an attack of gout. Gout occurs more commonly in those who regularly drink beer or sugar-sweetened beverages; eat foods that are high in purines such as liver, shellfish, or anchovies; or are overweight. Diagnosis of gout may be confirmed by the presence of crystals in the joint fluid or in a deposit outside the joint. Blood uric acid levels may be normal during an attack.
Treatment with nonsteroidal anti-inflammatory drugs (NSAIDs), glucocorticoids, or colchicine improves symptoms. Once the acute attack subsides, levels of uric acid can be lowered via lifestyle changes and in those with frequent attacks, allopurinol or probenecid provides long-term prevention. Taking vitamin C and having a diet high in low-fat dairy products may be preventive.
Gout affects about 1–2% of adults in the developed world at some point in their lives. It has become more common in recent decades. This is believed to be due to increasing risk factors in the population, such as metabolic syndrome, longer life expectancy, and changes in diet. Older males are most commonly affected. Gout was historically known as "the disease of kings" or "rich man's disease". It has been recognized at least since the time of the ancient Egyptians.
Signs and symptoms
Gout can present in several ways, although the most common is a recurrent attack of acute inflammatory arthritis (a red, tender, hot, swollen joint). The metatarsophalangeal joint at the base of the big toe is affected most often, accounting for half of cases. Other joints, such as the heels, knees, wrists, and fingers, may also be affected. Joint pain usually begins during the night and peaks within 24 hours of onset. This is mainly due to lower body temperature. Other symptoms may rarely occur along with the joint pain, including fatigue and high fever.
Long-standing elevated uric acid levels (hyperuricemia) may result in other symptoms, including hard, painless deposits of uric acid crystals called tophi. Extensive tophi may lead to chronic arthritis due to bone erosion. Elevated levels of uric acid may also lead to crystals precipitating in the kidneys, resulting in kidney stone formation and subsequent acute uric acid nephropathy.
Cause
The crystallization of uric acid, often related to relatively high levels in the blood, is the underlying cause of gout. This can occur because of diet, genetic predisposition, or underexcretion of urate, the salts of uric acid. Underexcretion of uric acid by the kidney is the primary cause of hyperuricemia in about 90% of cases, while overproduction is the cause in less than 10%. About 10% of people with hyperuricemia develop gout at some point in their lifetimes. The risk, however, varies depending on the degree of hyperuricemia. When levels are between 415 and 530 μmol/L (7 and 8.9 mg/dL), the risk is 0.5% per year, while in those with a level greater than 535 μmol/L (9 mg/dL), the risk is 4.5% per year.
Lifestyle
Dietary causes account for about 12% of gout, and include a strong association with the consumption of alcohol, sugar-sweetened beverages, meat, and seafood. The dietary mechanisms and nutritional basis involved in gout provide evidence for strategies of prevention and improvement of gout, and dietary modifications based on effective regulatory mechanisms may be a promising strategy to reduce the high prevalence of gout. Among foods richest in purines yielding high amounts of uric acid are dried anchovies, shrimp, organ meat, dried mushrooms, seaweed, and beer yeast. Chicken and potatoes also appear related. Other triggers include physical trauma and surgery.
Studies in the early 2000s found that other dietary factors are not relevant. Specifically, a diet with moderate purine-rich vegetables (e.g., beans, peas, lentils, and spinach) is not associated with gout. Neither is total dietary protein. Alcohol consumption is strongly associated with increased risk, with wine presenting somewhat less of a risk than beer or spirits. Eating skim milk powder enriched with glycomacropeptide (GMP) and G600 milk fat extract may reduce pain but may result in diarrhea and nausea.
Physical fitness, healthy weight, low-fat dairy products, and to a lesser extent, coffee and taking vitamin C, appear to decrease the risk of gout; however, taking vitamin C supplements does not appear to have a significant effect in people who already have established gout. Peanuts, brown bread, and fruit also appear protective. This is believed to be partly due to their effect in reducing insulin resistance.
Other than dietary and lifestyle choices, the recurrence of gout attacks is also linked to the weather. High ambient temperature and low relative humidity may increase the risk of a gout attack.
Genetics
Gout is partly genetic, contributing to about 60% of variability in uric acid level. The SLC2A9, SLC22A12, and ABCG2 genes have been found to be commonly associated with gout and variations in them can approximately double the risk. Loss-of-function mutations in SLC2A9 and SLC22A12 cause low blood uric acid levels by reducing urate absorption and unopposed urate secretion. The rare genetic disorders familial juvenile hyperuricemic nephropathy, medullary cystic kidney disease, phosphoribosylpyrophosphate synthetase superactivity and hypoxanthine-guanine phosphoribosyltransferase deficiency as seen in Lesch–Nyhan syndrome, are complicated by gout.
Medical conditions
Gout frequently occurs in combination with other medical problems. Metabolic syndrome, a combination of abdominal obesity, hypertension, insulin resistance, and abnormal lipid levels, occurs in nearly 75% of cases. Other conditions commonly complicated by gout include lead poisoning, kidney failure, hemolytic anemia, psoriasis, solid organ transplants, and myeloproliferative disorders such as polycythemia. A body mass index greater than or equal to 35 increases male risk of gout threefold. Chronic lead exposure and lead-contaminated alcohol are risk factors for gout due to the harmful effect of lead on kidney function.
Medication
Diuretics have been associated with attacks of gout, but a low dose of hydrochlorothiazide does not seem to increase risk. Other medications that increase the risk include niacin, aspirin (acetylsalicylic acid), ACE inhibitors, angiotensin receptor blockers, beta blockers, ritonavir, and pyrazinamide. The immunosuppressive drugs ciclosporin and tacrolimus are also associated with gout, the former more so when used in combination with hydrochlorothiazide. Hyperuricemia may be induced by excessive use of vitamin D supplements: levels of serum uric acid have been positively associated with 25(OH)D, and the incidence of hyperuricemia increased 9.4% for every 10 nmol/L increase in 25(OH)D (P < 0.001).
Pathophysiology
Gout is a disorder of purine metabolism, and occurs when its final metabolite, uric acid, crystallizes in the form of monosodium urate, precipitating and forming deposits (tophi) in joints, on tendons, and in the surrounding tissues. Microscopic tophi may be walled off by a ring of proteins, which blocks interaction of the crystals with cells and therefore avoids inflammation. Naked crystals may break out of walled-off tophi due to minor physical damage to the joint, medical or surgical stress, or rapid changes in uric acid levels. When they break through the tophi, they trigger a local immune-mediated inflammatory reaction in macrophages, which is initiated by the NLRP3 inflammasome protein complex. Activation of the NLRP3 inflammasome recruits the enzyme caspase 1, which converts pro-interleukin 1β into active interleukin 1β, one of the key proteins in the inflammatory cascade. An evolutionary loss of urate oxidase (uricase), which breaks down uric acid, in humans and higher primates has made this condition common.
The triggers for precipitation of uric acid are not well understood. While it may crystallize at normal levels, it is more likely to do so as levels increase. Other triggers believed to be important in acute episodes of arthritis include cool temperatures, rapid changes in uric acid levels, acidosis, articular hydration and extracellular matrix proteins. The increased precipitation at low temperatures partly explains why the joints in the feet are most commonly affected. Rapid changes in uric acid may occur due to factors including trauma, surgery, chemotherapy and diuretics. The starting or increasing of urate-lowering medications can lead to an acute attack of gout, with febuxostat carrying a particularly high risk. Calcium channel blockers and losartan are associated with a lower risk of gout compared to other medications for hypertension.
Diagnosis
Gout may be diagnosed and treated without further investigations in someone with hyperuricemia and the classic acute arthritis of the base of the great toe (known as podagra). Synovial fluid analysis should be done if the diagnosis is in doubt. Plain X-rays are usually normal and are not useful for confirming a diagnosis of early gout. They may show signs of chronic gout such as bone erosion.
Synovial fluid
A definitive diagnosis of gout is based upon the identification of monosodium urate crystals in synovial fluid or a tophus. All synovial fluid samples obtained from undiagnosed inflamed joints by arthrocentesis should be examined for these crystals. Under polarized light microscopy, they have a needle-like morphology and strong negative birefringence. This test is difficult to perform and requires a trained observer. The fluid must be examined relatively soon after aspiration, as temperature and pH affect solubility.
Blood tests
Hyperuricemia is a classic feature of gout, but nearly half of the time gout occurs without hyperuricemia and most people with raised uric acid levels never develop gout. Thus, the diagnostic utility of measuring uric acid levels is limited. Hyperuricemia is defined as a plasma urate level greater than 420 μmol/L (7.0 mg/dL) in males and 360 μmol/L (6.0 mg/dL) in females. Other blood tests commonly performed are white blood cell count, electrolytes, kidney function and erythrocyte sedimentation rate (ESR). However, both the white blood cells and ESR may be elevated due to gout in the absence of infection. A white blood cell count as high as 40.0×10⁹/L (40,000/mm³) has been documented.
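The two unit systems quoted above are related by uric acid's molar mass; a minimal Python sketch (an illustration, not from the article) of the conversion and the stated cut-offs:

```python
# A minimal sketch (not from the article) relating the two units quoted above.
# Uric acid's molar mass is about 168 g/mol, so 1 mg/dL is roughly 59.5 umol/L,
# which is why 7.0 and 6.0 mg/dL correspond to the ~420 and ~360 umol/L cut-offs.

MICROMOL_PER_MG_DL = 59.48  # approximate conversion factor for urate

def mg_dl_to_umol_l(mg_dl: float) -> float:
    """Convert a plasma urate level from mg/dL to umol/L."""
    return mg_dl * MICROMOL_PER_MG_DL

def is_hyperuricemic(urate_umol_l: float, male: bool) -> bool:
    """Apply the sex-specific thresholds given in the text."""
    return urate_umol_l > (420.0 if male else 360.0)

print(round(mg_dl_to_umol_l(7.0)))          # 416, rounded to 420 in the text
print(is_hyperuricemic(450.0, male=True))   # True
print(is_hyperuricemic(350.0, male=False))  # False
```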
Differential diagnosis
The most important differential diagnosis in gout is septic arthritis. This should be considered in those with signs of infection or those who do not improve with treatment. To help with diagnosis, a synovial fluid Gram stain and culture may be performed. Other conditions that can look similar include CPPD (pseudogout), rheumatoid arthritis, psoriatic arthritis, palindromic rheumatism, and reactive arthritis. Gouty tophi, in particular when not located in a joint, can be mistaken for basal cell carcinoma or other neoplasms.
Prevention
Risk of gout attacks can be lowered by complete abstinence from drinking alcoholic beverages, reducing the intake of fructose (e.g. high fructose corn syrup), sucrose, and purine-rich foods of animal origin, such as organ meats and seafood. Eating dairy products, vitamin C-rich foods, coffee, and cherries may help prevent gout attacks, as does losing weight. Gout may be secondary to sleep apnea via the release of purines from oxygen-starved cells. Treatment of apnea can lessen the occurrence of attacks.
Medications
As of 2020, allopurinol is generally the recommended preventative treatment if medications are used. A number of other medications may occasionally be considered to prevent further episodes of gout, including probenecid, febuxostat, benzbromarone, and colchicine. Long term medications are not recommended until a person has had two attacks of gout, unless destructive joint changes, tophi, or urate nephropathy exist. It is not until this point that medications are cost-effective. They are not usually started until one to two weeks after an acute flare has resolved, due to theoretical concerns of worsening the attack. They are often used in combination with either an NSAID or colchicine for the first three to six months.
While it has been recommended that urate-lowering measures should be increased until serum uric acid levels are below 300–360 μmol/L (5.0–6.0 mg/dL), there is little evidence to support this practice over simply putting people on a standard dose of allopurinol. If these medications are in chronic use at the time of an attack, it is recommended that they be continued. Levels that cannot be brought below 6.0 mg/dL while attacks continue indicate refractory gout.
While historically it is not recommended to start allopurinol during an acute attack of gout, this practice appears acceptable. Allopurinol blocks uric acid production, and is the most commonly used agent. Long term therapy is safe and well-tolerated and can be used in people with renal impairment or urate stones, although hypersensitivity occurs in a small number of individuals. The HLA-B*58:01 allele of the human leukocyte antigen B (HLA-B) is strongly associated with severe cutaneous adverse reactions during treatment with allopurinol and is most common among Asian subpopulations, notably those of Korean, Han-Chinese, or Thai descent.
Febuxostat is only recommended in those who cannot tolerate allopurinol. There are concerns about more deaths with febuxostat compared to allopurinol. Febuxostat may also increase the rate of gout flares during early treatment. However, there is tentative evidence that febuxostat may bring down urate levels more than allopurinol.
Probenecid appears to be less effective than allopurinol and is a second line agent. Probenecid may be used if undersecretion of uric acid is present (24-hour urine uric acid less than 800 mg). It is, however, not recommended if a person has a history of kidney stones. Combined therapy of probenecid with allopurinol is more effective than allopurinol monotherapy.
Pegloticase is an option for the 3% of people who are intolerant to other medications. It is a third line agent. Pegloticase is given as an intravenous infusion every two weeks, and reduces uric acid levels. Pegloticase is useful for decreasing tophi but has a high rate of side effects, and many people develop resistance to it. A higher dose of lesinurad plus febuxostat appears more beneficial for tophi resolution than a lower dose with febuxostat, with similar side effects. Lesinurad plus allopurinol is not effective for tophi resolution. Potential side effects include kidney stones, anemia and joint pain. In 2016, pegloticase was withdrawn from the European market.
Lesinurad reduces blood uric acid levels by preventing uric acid absorption in the kidneys. It was approved in the United States for use together with allopurinol, among those who were unable to reach their uric acid level targets. Side effects include kidney problems and kidney stones.
Treatment
The initial aim of treatment is to settle the symptoms of an acute attack. Repeated attacks can be prevented by medications that reduce serum uric acid levels. Tentative evidence supports the application of ice for 20 to 30 minutes several times a day to decrease pain. Options for acute treatment include nonsteroidal anti-inflammatory drugs (NSAIDs), colchicine, and glucocorticoids. While glucocorticoids and NSAIDs work equally well, glucocorticoids may be safer. Options for prevention include allopurinol, febuxostat, and probenecid. Lowering uric acid levels can cure the disease. Treatment of associated health problems is also important. Lifestyle interventions have been poorly studied. It is unclear whether dietary supplements have an effect in people with gout.
NSAIDs
NSAIDs are the usual first-line treatment for gout. No specific agent is significantly more or less effective than any other. Improvement may be seen within four hours and treatment is recommended for one to two weeks. They are not recommended for those with certain other health problems, such as gastrointestinal bleeding, kidney failure, or heart failure. While indometacin has historically been the most commonly used NSAID, an alternative, such as ibuprofen, may be preferred due to its better side effect profile in the absence of superior effectiveness. For those at risk of gastric side effects from NSAIDs, an additional proton pump inhibitor may be given. There is some evidence that COX-2 inhibitors may work as well as nonselective NSAIDs for acute gout attack with fewer side effects.
Colchicine
Colchicine is an alternative for those unable to tolerate NSAIDs. At high doses, side effects (primarily gastrointestinal upset) limit its usage. At lower doses, which are still effective, it is well tolerated. Colchicine may interact with other commonly prescribed drugs, such as atorvastatin and erythromycin, among others.
Glucocorticoids
Glucocorticoids have been found to be as effective as NSAIDs and may be used if contraindications exist for NSAIDs. They also lead to improvement when injected into the joint. A joint infection must be excluded, however, as glucocorticoids worsen this condition. There were no short-term adverse effects reported.
Others
Interleukin-1 inhibitors, such as canakinumab, showed moderate effectiveness for pain relief and reduction of joint swelling, but have an increased risk of adverse events, such as back pain, headache, and increased blood pressure. They may, however, work less well than usual doses of NSAIDs. The high cost of this class of drugs may also discourage their use for treating gout.
Prognosis
Without treatment, an acute attack of gout usually resolves in five to seven days; however, 60% of people have a second attack within one year. Those with gout are at increased risk of hypertension, diabetes mellitus, metabolic syndrome, and kidney and cardiovascular disease and thus are at increased risk of death. It is unclear whether medications that lower urate affect cardiovascular disease risks. This may be partly due to its association with insulin resistance and obesity, but some of the increased risk appears to be independent.
Without treatment, episodes of acute gout may develop into chronic gout with destruction of joint surfaces, joint deformity, and painless tophi. These tophi occur in 30% of those who are untreated for five years, often in the helix of the ear, over the olecranon processes, or on the Achilles tendons. With aggressive treatment, they may dissolve. Kidney stones also frequently complicate gout, affecting between 10 and 40% of people, and occur due to low urine pH promoting the precipitation of uric acid. Other forms of chronic kidney dysfunction may occur.
Epidemiology
Gout affects around 1–2% of people in the Western world at some point in their lifetimes and is becoming more common. Some 5.8 million people were affected in 2013. Rates of gout approximately doubled between 1990 and 2010. This rise is believed to be due to increasing life expectancy, changes in diet and an increase in diseases associated with gout, such as metabolic syndrome and high blood pressure. Factors that influence rates of gout include age, race, and the season of the year. In men over 30 and women over 50, rates are 2%.
In the United States, gout is twice as likely in males of African descent as in those of European descent. Rates are high among Polynesians, but the disease is rare in aboriginal Australians, despite a higher mean uric acid serum concentration in the latter group. It has become common in China, Polynesia, and urban Sub-Saharan Africa. Some studies found that attacks of gout occur more frequently in the spring. This has been attributed to seasonal changes in diet, alcohol consumption, physical activity, and temperature.
Taiwan, Hong Kong, and Singapore have relatively high prevalences of gout. A study based on the National Health Insurance Research Database (NHIRD) estimated that 4.92% of Taiwanese residents had gout in 2004. A survey held by the Hong Kong government found that 5.1% of Hong Kong residents between 45 and 59 years and 6.1% of those older than 60 years have gout. A study held in Singapore found that 2,117 of 52,322 people between 45 and 74 years had gout, roughly 4%.
History
The English term "gout" first occurs in the work of Randolphus of Bocking, around 1200 AD.
It derives from the Latin word gutta, meaning "a drop" (of liquid). According to the Oxford English Dictionary, this originates from humorism and "the notion of the 'dropping' of a morbid material from the blood in and around the joints".
Gout has been known since antiquity. Historically, wits have referred to it as "the king of diseases and the disease of kings" or as "rich man's disease". The Ebers papyrus and the Edwin Smith papyrus each mention arthritis of the first metatarsophalangeal joint as a distinct type of arthritis. These ancient manuscripts cite (now missing) Egyptian texts about gout that are claimed to have been written 1,000 years earlier and ascribed to Imhotep. Greek physician Hippocrates around 400 BC commented on it in his Aphorisms, noting its absence in eunuchs and premenopausal women. Aulus Cornelius Celsus (30 AD) described the linkage with alcohol, later onset in women, and associated kidney problems.
Benjamin Welles, an English physician, authored the first medical book on gout, A Treatise of the Gout, or Joint Evil, in 1669. In 1683, Thomas Sydenham, an English physician, described its occurrence in the early hours of the morning and its predilection for older males.
In the 18th century, Thomas Marryat distinguished different manifestations of gout:
The Gout is a chronical disease most commonly affecting the feet. If it attacks the knees, it is called Gonagra; if the hands, Chiragra; if the elbow, Onagra; if the shoulder, Omagra; if the back or loins, Lumbago.
Dutch scientist Antonie van Leeuwenhoek first described the microscopic appearance of urate crystals in 1679. In 1848, English physician Alfred Baring Garrod identified excess uric acid in the blood as the cause of gout.
Other animals
Gout is rare in most other animals due to their ability to produce uricase, which breaks down uric acid. Humans and other great apes do not have this ability; thus, gout is common. Other animals with uricase include fish, amphibians and most non-primate mammals. The Tyrannosaurus rex specimen known as "Sue" is believed to have had gout.
Research
A number of new medications are under study for treating gout, including anakinra, canakinumab, and rilonacept. Canakinumab may result in better outcomes than a low dose of a glucocorticoid, but costs five thousand times more. A recombinant uricase enzyme (rasburicase) is available but its use is limited, as it triggers an immune response. Less antigenic versions are in development.
See also
List of people known as the Gouty
References
External links
Arthritis
Inborn errors of purine-pyrimidine metabolism
Inflammatory polyarthropathies
Rheumatology
Skin conditions resulting from errors in metabolism
Steroid-responsive inflammatory conditions
Uric acid
Crystal deposition diseases | Gout | [
"Biology"
] | 5,376 | [
"Uric acid",
"Excretion"
] |
55,607 | https://en.wikipedia.org/wiki/Discriminant | In mathematics, the discriminant of a polynomial is a quantity that depends on the coefficients and allows deducing some properties of the roots without computing them. More precisely, it is a polynomial function of the coefficients of the original polynomial. The discriminant is widely used in polynomial factoring, number theory, and algebraic geometry.
The discriminant of the quadratic polynomial $ax^2 + bx + c$ is
$b^2 - 4ac,$ the quantity which appears under the square root in the quadratic formula. This discriminant is zero if and only if the polynomial has a double root. In the case of real coefficients, it is positive if the polynomial has two distinct real roots, and negative if it has two distinct complex conjugate roots. Similarly, the discriminant of a cubic polynomial is zero if and only if the polynomial has a multiple root. In the case of a cubic with real coefficients, the discriminant is positive if the polynomial has three distinct real roots, and negative if it has one real root and two distinct complex conjugate roots.
More generally, the discriminant of a univariate polynomial of positive degree is zero if and only if the polynomial has a multiple root. For real coefficients and no multiple roots, the discriminant is positive if the number of non-real roots is a multiple of 4 (including none), and negative otherwise.
Several generalizations are also called discriminant: the discriminant of an algebraic number field; the discriminant of a quadratic form; and more generally, the discriminant of a form, of a homogeneous polynomial, or of a projective hypersurface (these three concepts are essentially equivalent).
Origin
The term "discriminant" was coined in 1851 by the British mathematician James Joseph Sylvester.
Definition
Let
$A = a_nx^n + a_{n-1}x^{n-1} + \cdots + a_1x + a_0$
be a polynomial of degree $n$ (this means $a_n \ne 0$), such that the coefficients $a_0, \ldots, a_n$ belong to a field, or, more generally, to a commutative ring. The resultant of $A$ and its derivative,
$A' = na_nx^{n-1} + (n-1)a_{n-1}x^{n-2} + \cdots + a_1,$
is a polynomial in $a_0, \ldots, a_n$ with integer coefficients, which is the determinant of the Sylvester matrix of $A$ and $A'$. The nonzero entries of the first column of the Sylvester matrix are $a_n$ and $na_n$, and the resultant is thus a multiple of $a_n.$ Hence the discriminant—up to its sign—is defined as the quotient of the resultant of $A$ and $A'$ by $a_n$:
$\operatorname{Disc}_x(A) = \frac{(-1)^{n(n-1)/2}}{a_n}\operatorname{Res}_x(A, A').$
Historically, this sign has been chosen such that, over the reals, the discriminant will be positive when all the roots of the polynomial are real. The division by $a_n$ may not be well defined if the ring of the coefficients contains zero divisors. Such a problem may be avoided by replacing $a_n$ by 1 in the first column of the Sylvester matrix—before computing the determinant. In any case, the discriminant is a polynomial in $a_0, \ldots, a_n$ with integer coefficients.
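As a concrete check of this definition (a sketch assuming the SymPy library, whose resultant and discriminant functions are used below), the signed quotient of the resultant of A and A' by the leading coefficient reproduces the familiar quadratic discriminant:

```python
from sympy import symbols, resultant, discriminant, diff, simplify

x, a, b, c = symbols('x a b c')
A = a*x**2 + b*x + c

n = 2  # degree of A
sign = (-1)**(n*(n - 1)//2)
# Resultant of A and its derivative, divided by the leading coefficient.
disc_from_resultant = simplify(sign * resultant(A, diff(A, x), x) / a)

print(disc_from_resultant)   # b**2 - 4*a*c
print(discriminant(A, x))    # b**2 - 4*a*c, via SymPy's built-in
```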
Expression in terms of the roots
When the above polynomial is defined over a field, it has $n$ roots, $r_1, r_2, \ldots, r_n$, not necessarily all distinct, in any algebraically closed extension of the field. (If the coefficients are real numbers, the roots may be taken in the field of complex numbers, where the fundamental theorem of algebra applies.)
In terms of the roots, the discriminant is equal to
$\operatorname{Disc}_x(A) = a_n^{2n-2}\prod_{i<j}(r_i - r_j)^2.$
It is thus the square of the Vandermonde polynomial times $a_n^{2n-2}$.
This expression for the discriminant is often taken as a definition. It makes clear that if the polynomial has a multiple root, then its discriminant is zero, and that, in the case of real coefficients, if all the roots are real and simple, then the discriminant is positive. Unlike the previous definition, this expression is not obviously a polynomial in the coefficients, but this follows either from the fundamental theorem of Galois theory, or from the fundamental theorem of symmetric polynomials and Vieta's formulas by noting that this expression is a symmetric polynomial in the roots of .
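A numerical illustration (assuming SymPy) that the root-product expression agrees with the resultant-based definition, here for a cubic chosen to have rational roots:

```python
from sympy import Poly, discriminant, prod, simplify, symbols

x = symbols('x')
A = Poly(2*x**3 - 3*x**2 - 11*x + 6, x)   # roots 3, 1/2 and -2

roots = A.all_roots()
n, an = A.degree(), A.LC()
# a_n**(2n-2) times the product of squared root differences.
from_roots = an**(2*n - 2) * prod([(roots[i] - roots[j])**2
                                   for i in range(n) for j in range(i + 1, n)])

print(from_roots)                               # 15625
print(simplify(from_roots - discriminant(A)))   # 0: both definitions agree
```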
Low degrees
The discriminant of a linear polynomial (degree 1) is rarely considered. If needed, it is commonly defined to be equal to 1 (using the usual conventions for the empty product and considering that one of the two blocks of the Sylvester matrix is empty). There is no common convention for the discriminant of a constant polynomial (i.e., polynomial of degree 0).
For small degrees, the discriminant is rather simple (see below), but for higher degrees, it may become unwieldy. For example, the discriminant of a general quartic has 16 terms, that of a quintic has 59 terms, and that of a sextic has 246 terms.
This is OEIS sequence .
Degree 2
The quadratic polynomial $ax^2 + bx + c$ has discriminant
$b^2 - 4ac.$
The square root of the discriminant appears in the quadratic formula for the roots of the quadratic polynomial:
$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a},$
where the discriminant is zero if and only if the two roots are equal. If $a$, $b$, and $c$ are real numbers, the polynomial has two distinct real roots if the discriminant is positive, and two complex conjugate roots if it is negative.
The discriminant is the product of $a^2$ and the square of the difference of the roots: $b^2 - 4ac = a^2(r_1 - r_2)^2.$
If $a$, $b$, and $c$ are rational numbers, then the discriminant is the square of a rational number if and only if the two roots are rational numbers.
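These sign rules are easy to apply directly; the helper below is a hypothetical illustration for real coefficients:

```python
def classify_quadratic(a, b, c):
    """Classify the roots of a*x**2 + b*x + c with real coefficients."""
    disc = b*b - 4*a*c
    if disc > 0:
        return "two distinct real roots"
    if disc == 0:
        return "a double (real) root"
    return "two complex conjugate roots"

print(classify_quadratic(1, -3, 2))   # (x-1)(x-2): two distinct real roots
print(classify_quadratic(1, -2, 1))   # (x-1)**2:   a double (real) root
print(classify_quadratic(1, 0, 1))    # x**2 + 1:   two complex conjugate roots
```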
Degree 3
The cubic polynomial $ax^3 + bx^2 + cx + d$ has discriminant
$18abcd - 4b^3d + b^2c^2 - 4ac^3 - 27a^2d^2.$
In the special case of a depressed cubic polynomial $x^3 + px + q$, the discriminant simplifies to
$-4p^3 - 27q^2.$
The discriminant is zero if and only if at least two roots are equal. If the coefficients are real numbers, and the discriminant is not zero, the discriminant is positive if the roots are three distinct real numbers, and negative if there is one real root and two complex conjugate roots.
The square root of a quantity strongly related to the discriminant appears in the formulas for the roots of a cubic polynomial. Specifically, this quantity can be $-3$ times the discriminant, or its product with the square of a rational number; for example, the square of $1/18$ in the case of Cardano's formula.
If the polynomial is irreducible and its coefficients are rational numbers (or belong to a number field), then the discriminant is a square of a rational number (or a number from the number field) if and only if the Galois group of the cubic equation is the cyclic group of order three.
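A sketch of this criterion, assuming SymPy; the two cubics are standard examples (the first has a square discriminant, the second does not):

```python
from sympy import symbols, discriminant, sqrt

x = symbols('x')
for poly, label in [(x**3 - 3*x - 1, 'x^3 - 3x - 1'),
                    (x**3 - 2,       'x^3 - 2')]:
    d = discriminant(poly, x)
    print(label, '| disc =', d, '| square:', sqrt(d).is_rational)

# x^3 - 3x - 1: disc = 81, a square     -> Galois group cyclic of order 3
# x^3 - 2:      disc = -108, not one    -> Galois group S3
```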
Degree 4
The quartic polynomial
$ax^4 + bx^3 + cx^2 + dx + e$
has discriminant
$256a^3e^3 - 192a^2bde^2 - 128a^2c^2e^2 + 144a^2cd^2e - 27a^2d^4 + 144ab^2ce^2 - 6ab^2d^2e - 80abc^2de + 18abcd^3 + 16ac^4e - 4ac^3d^2 - 27b^4e^2 + 18b^3cde - 4b^3d^3 - 4b^2c^3e + b^2c^2d^2.$
The depressed quartic polynomial
$x^4 + px^2 + qx + r$
has discriminant
$256r^3 - 128p^2r^2 + 144pq^2r - 27q^4 + 16p^4r - 4p^3q^2.$
The discriminant is zero if and only if at least two roots are equal. If the coefficients are real numbers and the discriminant is negative, then there are two real roots and two complex conjugate roots. Conversely, if the discriminant is positive, then the roots are either all real or all non-real.
Properties
Zero discriminant
The discriminant of a polynomial over a field is zero if and only if the polynomial has a multiple root in some field extension.
The discriminant of a polynomial over an integral domain is zero if and only if the polynomial and its derivative have a non-constant common divisor.
In characteristic 0, this is equivalent to saying that the polynomial is not square-free (i.e., it is divisible by the square of a non-constant polynomial).
In nonzero characteristic $p$, the discriminant is zero if and only if the polynomial is not square-free or it has an irreducible factor which is not separable (i.e., the irreducible factor is a polynomial in $x^p$).
Invariance under change of the variable
The discriminant of a polynomial is, up to a scaling, invariant under any projective transformation of the variable. As a projective transformation may be decomposed into a product of translations, homotheties and inversions, this results in the following formulas for simpler transformations, where $A$ denotes a polynomial of degree $n$, with $a_n$ as leading coefficient.
Invariance by translation:
$\operatorname{Disc}_x(A(x + b)) = \operatorname{Disc}_x(A(x)).$
This results from the expression of the discriminant in terms of the roots.
Invariance by homothety:
$\operatorname{Disc}_x(A(\alpha x)) = \alpha^{n(n-1)}\operatorname{Disc}_x(A(x)).$
This results from the expression in terms of the roots, or of the quasi-homogeneity of the discriminant.
Invariance by inversion:
$\operatorname{Disc}_x(A_r(x)) = \operatorname{Disc}_x(A(x))$
when $a_0 \ne 0.$ Here, $A_r$ denotes the reciprocal polynomial of $A$; that is, if $A(x) = a_nx^n + \cdots + a_0$ and $a_0 \ne 0,$ then $A_r(x) = x^nA(1/x) = a_0x^n + \cdots + a_n.$
Invariance under ring homomorphisms
Let $\varphi\colon R \to S$ be a homomorphism of commutative rings. Given a polynomial
$A = a_nx^n + a_{n-1}x^{n-1} + \cdots + a_0$
in $R[x]$, the homomorphism $\varphi$ acts on $A$ for producing the polynomial
$A^\varphi = \varphi(a_n)x^n + \varphi(a_{n-1})x^{n-1} + \cdots + \varphi(a_0)$
in $S[x]$.
The discriminant is invariant under $\varphi$ in the following sense. If $\deg A^\varphi = \deg A$ (that is, if $\varphi(a_n) \ne 0$), then
$\operatorname{Disc}_x(A^\varphi) = \varphi(\operatorname{Disc}_x(A)).$
As the discriminant is defined in terms of a determinant, this property results immediately from the similar property of determinants.
If $\varphi(a_n) = 0,$ then $\varphi(\operatorname{Disc}_x(A))$ may be zero or not. One has, when $\deg A^\varphi = \deg A - 1$:
$\varphi(\operatorname{Disc}_x(A)) = \varphi(a_{n-1})^2\operatorname{Disc}_x(A^\varphi).$
When one is only interested in knowing whether a discriminant is zero (as is generally the case in algebraic geometry), these properties may be summarised as:
$\varphi(\operatorname{Disc}_x(A)) = 0$ if and only if either $\operatorname{Disc}_x(A^\varphi) = 0$ or $\deg A^\varphi \le \deg A - 2.$
This is often interpreted as saying that $\varphi(\operatorname{Disc}_x(A)) = 0$ if and only if $A^\varphi$ has a multiple root (possibly at infinity).
Product of polynomials
If $R = PQ$ is a product of polynomials in $x$, then
$\operatorname{Disc}_x(R) = \operatorname{Disc}_x(P)\operatorname{Res}_x(P, Q)^2\operatorname{Disc}_x(Q),$
where $\operatorname{Res}_x$ denotes the resultant with respect to the variable $x$, and $p$ and $q$ are the respective degrees of $P$ and $Q$.
This property follows immediately by substituting the expression for the resultant, and the discriminant, in terms of the roots of the respective polynomials.
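A quick symbolic check of the product formula, assuming SymPy; the polynomials P and Q below are arbitrary choices:

```python
from sympy import symbols, discriminant, resultant, expand, simplify

x = symbols('x')
P = x**2 + 3*x + 1
Q = 2*x**3 - x + 5

lhs = discriminant(expand(P*Q), x)
rhs = discriminant(P, x) * discriminant(Q, x) * resultant(P, Q, x)**2
print(simplify(lhs - rhs))   # 0: the identity holds for this pair
```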
Homogeneity
The discriminant is a homogeneous polynomial in the coefficients; it is also a homogeneous polynomial in the roots and thus quasi-homogeneous in the coefficients.
The discriminant of a polynomial of degree $n$ is homogeneous of degree $2n-2$ in the coefficients. This can be seen in two ways. In terms of the roots-and-leading-term formula, multiplying all the coefficients by $\lambda$ does not change the roots, but multiplies the leading term by $\lambda$. In terms of its expression as a determinant of a $(2n-1)\times(2n-1)$ matrix (the Sylvester matrix) divided by $a_n$, the determinant is homogeneous of degree $2n-1$ in the entries, and dividing by $a_n$ makes the degree $2n-2$.
The discriminant of a polynomial of degree $n$ is homogeneous of degree $n(n-1)$ in the roots. This follows from the expression of the discriminant in terms of the roots, which is the product of a constant and $n(n-1)/2$ squared differences of roots.
The discriminant of a polynomial of degree $n$ is quasi-homogeneous of degree $n(n-1)$ in the coefficients, if, for every $i$, the coefficient of $x^i$ is given the weight $n-i$. It is also quasi-homogeneous of the same degree, if, for every $i$, the coefficient of $x^i$ is given the weight $i$. This is a consequence of the general fact that every polynomial which is homogeneous and symmetric in the roots may be expressed as a quasi-homogeneous polynomial in the elementary symmetric functions of the roots.
Consider the polynomial
$P = a_nx^n + a_{n-1}x^{n-1} + \cdots + a_0.$
It follows from what precedes that the exponents $e_0, e_1, \ldots, e_n$ in every monomial $a_0^{e_0}a_1^{e_1}\cdots a_n^{e_n}$ appearing in the discriminant satisfy the two equations
$e_0 + e_1 + \cdots + e_n = 2n - 2$
and
$e_1 + 2e_2 + \cdots + ne_n = n(n-1),$
and also the equation
$ne_0 + (n-1)e_1 + \cdots + e_{n-1} = n(n-1),$
which is obtained by subtracting the second equation from the first one multiplied by $n$.
This restricts the possible terms in the discriminant. For the general quadratic polynomial, the discriminant $b^2 - 4ac$ is a homogeneous polynomial of degree 2 in the coefficients which has only two terms, while the general homogeneous polynomial of degree two in three variables has 6 terms. The discriminant of the general cubic polynomial is a homogeneous polynomial of degree 4 in four variables; it has five terms, which is the maximum allowed by the above rules, while the general homogeneous polynomial of degree 4 in 4 variables has 35 terms.
For higher degrees, there may be monomials which satisfy the above rules and do not appear in the discriminant. The first example is for the quartic polynomial $ax^4 + bx^3 + cx^2 + dx + e$, in which case the monomial $bc^4d$ satisfies the rules without appearing in the discriminant.
Real roots
In this section, all polynomials have real coefficients.
It has been seen above that the sign of the discriminant provides useful information on the nature of the roots for polynomials of degree 2 and 3. For higher degrees, the information provided by the discriminant is less complete, but still useful. More precisely, for a polynomial of degree $n$, one has:
The polynomial has a multiple root if and only if its discriminant is zero.
If the discriminant is positive, the number of non-real roots is a multiple of 4. That is, there is a nonnegative integer $k \le n/4$ such that there are $2k$ pairs of complex conjugate roots and $n - 4k$ real roots.
If the discriminant is negative, the number of non-real roots is not a multiple of 4. That is, there is a nonnegative integer $k \le (n-2)/4$ such that there are $2k+1$ pairs of complex conjugate roots and $n - 4k - 2$ real roots (a numerical illustration follows this list).
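The sketch below (assuming NumPy; the tolerance-based counting is an illustration, not a robust numerical method) checks the counting rules for three quartics:

```python
import numpy as np

def real_root_count(coeffs_low_to_high, tol=1e-9):
    """Count approximately real roots; np.roots expects high-to-low order."""
    roots = np.roots(coeffs_low_to_high[::-1])
    return int(np.sum(np.abs(roots.imag) < tol))

print(real_root_count([4, 0, -5, 0, 1]))   # x^4 - 5x^2 + 4, disc > 0: 4 real roots
print(real_root_count([1, 0, 0, 0, 1]))    # x^4 + 1,        disc > 0: 0 real roots
print(real_root_count([-1, 0, 0, 0, 1]))   # x^4 - 1,        disc < 0: 2 real roots
```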
Homogeneous bivariate polynomial
Let
$A(x, y) = a_0x^n + a_1x^{n-1}y + \cdots + a_ny^n$
be a homogeneous polynomial of degree $n$ in two indeterminates.
Supposing, for the moment, that $a_0$ and $a_n$ are both nonzero, one has
$\operatorname{Disc}_x(A(x, 1)) = \operatorname{Disc}_y(A(1, y)).$
Denoting this quantity by $\operatorname{Disc}^h(A),$
one has
and
Because of these properties, the quantity $\operatorname{Disc}^h(A)$ is called the discriminant or the homogeneous discriminant of $A$.
If $a_0$ and $a_n$ are permitted to be zero, the polynomials $A(x, 1)$ and $A(1, y)$ may have a degree smaller than $n$. In this case, the above formulas and definition remain valid, if the discriminants are computed as if all polynomials had the degree $n$. This means that the discriminants must be computed with $a_0$ and $a_n$ indeterminate, the substitution for them of their actual values being done after this computation. Equivalently, the formulas of the section on invariance under ring homomorphisms must be used.
Use in algebraic geometry
The typical use of discriminants in algebraic geometry is for studying plane algebraic curves, and more generally algebraic hypersurfaces. Let $V$ be such a curve or hypersurface; $V$ is defined as the zero set of a multivariate polynomial. This polynomial may be considered as a univariate polynomial in one of the indeterminates, with polynomials in the other indeterminates as coefficients. The discriminant with respect to the selected indeterminate defines a hypersurface $W$ in the space of the other indeterminates. The points of $W$ are exactly the projection of the points of $V$ (including the points at infinity), which either are singular or have a tangent hyperplane that is parallel to the axis of the selected indeterminate.
For example, let $P(x, y)$ be a bivariate polynomial in $x$ and $y$ with real coefficients, so that $P = 0$ is the implicit equation of a real plane algebraic curve. Viewing $P$ as a univariate polynomial in $y$ with coefficients depending on $x$, then the discriminant is a polynomial in $x$ whose roots are the $x$-coordinates of the singular points, of the points with a tangent parallel to the $y$-axis and of some of the asymptotes parallel to the $y$-axis. In other words, the computation of the roots of the $y$-discriminant and the $x$-discriminant allows one to compute all of the remarkable points of the curve, except the inflection points.
Generalizations
There are two classes of the concept of discriminant. The first class is the discriminant of an algebraic number field, which, in some cases including quadratic fields, is the discriminant of a polynomial defining the field.
Discriminants of the second class arise for problems depending on coefficients, when degenerate instances or singularities of the problem are characterized by the vanishing of a single polynomial in the coefficients. This is the case for the discriminant of a polynomial, which is zero when two roots collapse. Most of the cases, where such a generalized discriminant is defined, are instances of the following.
Let $A$ be a homogeneous polynomial in $n$ indeterminates over a field of characteristic 0, or of a prime characteristic that does not divide the degree of the polynomial. The polynomial $A$ defines a projective hypersurface, which has singular points if and only if the $n$ partial derivatives of $A$ have a nontrivial common zero. This is the case if and only if the multivariate resultant of these partial derivatives is zero, and this resultant may be considered as the discriminant of $A$. However, because of the integer coefficients resulting from the derivation, this multivariate resultant may be divisible by a power of the degree of $A$, and it is better to take, as a discriminant, the primitive part of the resultant, computed with generic coefficients. The restriction on the characteristic is needed because otherwise a common zero of the partial derivatives is not necessarily a zero of the polynomial (see Euler's identity for homogeneous polynomials).
In the case of a homogeneous bivariate polynomial of degree $d$, this general discriminant is a constant multiple of the discriminant defined in the preceding section. Several other classical types of discriminants, that are instances of the general definition, are described in the next sections.
Quadratic forms
A quadratic form is a function over a vector space, which is defined over some basis by a homogeneous polynomial of degree 2:
$Q(x_1, \ldots, x_n) = \sum_{i=1}^n a_{i,i}x_i^2 + \sum_{1 \le i < j \le n} a_{i,j}x_ix_j,$
or, in matrix form,
$Q(V) = VAV^{\mathsf T},$
for the $n \times n$ symmetric matrix $A$, the row vector $V = (x_1, \ldots, x_n)$, and the column vector $V^{\mathsf T}$. In characteristic different from 2, the discriminant or determinant of $Q$ is the determinant of $A$.
The Hessian determinant of $Q$ is $2^n$ times its discriminant. The multivariate resultant of the partial derivatives of $Q$ is equal to its Hessian determinant. So, the discriminant of a quadratic form is a special case of the above general definition of a discriminant.
The discriminant of a quadratic form is invariant under linear changes of variables (that is a change of basis of the vector space on which the quadratic form is defined) in the following sense: a linear change of variables, defined by a nonsingular matrix $S$, changes the matrix $A$ into $SAS^{\mathsf T}$ and thus multiplies the discriminant by the square of the determinant of $S$. Thus the discriminant is well defined only up to the multiplication by a square. In other words, the discriminant of a quadratic form over a field $K$ is an element of $K/(K^\times)^2$, the quotient of the multiplicative monoid of $K$ by the subgroup of the nonzero squares (that is, two elements of $K$ are in the same equivalence class if one is the product of the other by a nonzero square). It follows that over the complex numbers, a discriminant is equivalent to 0 or 1. Over the real numbers, a discriminant is equivalent to −1, 0, or 1. Over the rational numbers, a discriminant is equivalent to a unique square-free integer.
By a theorem of Jacobi, a quadratic form over a field of characteristic different from 2 can be expressed, after a linear change of variables, in diagonal form as
$a_1x_1^2 + a_2x_2^2 + \cdots + a_nx_n^2.$
More precisely, a quadratic form on a vector space may be expressed as a sum
$\sum_{i=1}^n a_iL_i^2,$
where the $L_i$ are independent linear forms and $n$ is the number of the variables (some of the $a_i$ may be zero). Equivalently, for any symmetric matrix $A$, there is an elementary matrix $S$ such that $SAS^{\mathsf T}$ is a diagonal matrix.
Then the discriminant is the product of the $a_i$, which is well-defined as a class in $K/(K^\times)^2$.
Geometrically, a quadratic form in three variables is the equation of a quadratic projective curve. The discriminant is zero if and only if the curve is decomposed in lines (possibly over an algebraically closed extension of the field).
A quadratic form in four variables is the equation of a projective surface. The surface has a singular point if and only if its discriminant is zero. In this case, either the surface may be decomposed in planes, or it has a unique singular point, and is a cone or a cylinder. Over the reals, if the discriminant is positive, then the surface either has no real point or has everywhere a negative Gaussian curvature. If the discriminant is negative, the surface has real points, and has everywhere a positive Gaussian curvature.
Conic sections
A conic section is a plane curve defined by an implicit equation of the form
$ax^2 + 2bxy + cy^2 + 2dx + 2ey + f = 0,$
where $a, b, c, d, e, f$ are real numbers.
Two quadratic forms, and thus two discriminants may be associated to a conic section.
The first quadratic form is
$ax^2 + 2bxy + cy^2 + 2dxz + 2eyz + fz^2.$
Its discriminant is the determinant
$\begin{vmatrix} a & b & d \\ b & c & e \\ d & e & f \end{vmatrix}.$
It is zero if the conic section degenerates into two lines, a double line or a single point.
The second discriminant, which is the only one that is considered in many elementary textbooks, is the discriminant of the homogeneous part of degree two of the equation. It is equal to
$b^2 - ac,$
and determines the shape of the conic section. If this discriminant is negative, the curve either has no real points, or is an ellipse or a circle, or, if degenerated, is reduced to a single point. If the discriminant is zero, the curve is a parabola, or, if degenerated, a double line or two parallel lines. If the discriminant is positive, the curve is a hyperbola, or, if degenerated, a pair of intersecting lines.
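A minimal helper following these sign rules, using the convention $ax^2 + 2bxy + cy^2 + 2dx + 2ey + f = 0$ and the degree-two discriminant $b^2 - ac$ reconstructed above (the helper name is hypothetical):

```python
def classify_conic(a, b, c):
    """Classify a conic by the discriminant of its degree-two part."""
    disc = b*b - a*c
    if disc < 0:
        return "ellipse or circle (or a point, or empty, if degenerate)"
    if disc == 0:
        return "parabola (or parallel/double lines, if degenerate)"
    return "hyperbola (or two intersecting lines, if degenerate)"

print(classify_conic(1, 0, 1))    # x^2 + y^2 ...: ellipse/circle family
print(classify_conic(0, 0, 1))    # y^2 + ...:     parabola family
print(classify_conic(1, 0, -1))   # x^2 - y^2 ...: hyperbola family
```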
Real quadric surfaces
A real quadric surface in the Euclidean space of dimension three is a surface that may be defined as the zeros of a polynomial of degree two in three variables. As for the conic sections there are two discriminants that may be naturally defined. Both are useful for getting information on the nature of a quadric surface.
Let $P(x, y, z)$ be a polynomial of degree two in three variables that defines a real quadric surface. The first associated quadratic form, $Q_4,$ depends on four variables, and is obtained by homogenizing $P$; that is
$Q_4(x, y, z, t) = t^2P(x/t, y/t, z/t).$
Let us denote its discriminant by $\Delta_4.$
The second quadratic form, $Q_3,$ depends on three variables, and consists of the terms of degree two of $P$; that is
$Q_3(x, y, z) = Q_4(x, y, z, 0).$
Let us denote its discriminant by $\Delta_3.$
If $\Delta_4 > 0$ and the surface has real points, it is either a hyperbolic paraboloid or a one-sheet hyperboloid. In both cases, this is a ruled surface that has a negative Gaussian curvature at every point.
If $\Delta_4 < 0,$ the surface is either an ellipsoid or a two-sheet hyperboloid or an elliptic paraboloid. In all cases, it has a positive Gaussian curvature at every point.
If $\Delta_4 = 0,$ the surface has a singular point, possibly at infinity. If there is only one singular point, the surface is a cylinder or a cone. If there are several singular points the surface consists of two planes, a double plane or a single line.
When $\Delta_4 \ne 0,$ the sign of $\Delta_3,$ if not 0, does not provide any useful information, as changing $P$ into $-P$ does not change the surface, but changes the sign of $\Delta_3.$ However, if $\Delta_4 \ne 0$ and $\Delta_3 = 0,$ the surface is a paraboloid, which is elliptic or hyperbolic, depending on the sign of $\Delta_4.$
Discriminant of an algebraic number field
The discriminant of an algebraic number field measures the size of the (ring of integers of the) algebraic number field.
More specifically, it is proportional to the squared volume of the fundamental domain of the ring of integers, and it regulates which primes are ramified.
The discriminant is one of the most basic invariants of a number field, and occurs in several important analytic formulas such as the functional equation of the Dedekind zeta function of K, and the analytic class number formula for K. A theorem of Hermite states that there are only finitely many number fields of bounded discriminant, however determining this quantity is still an open problem, and the subject of current research.
Let K be an algebraic number field, and let OK be its ring of integers. Let b1, ..., bn be an integral basis of OK (i.e. a basis as a Z-module), and let {σ1, ..., σn} be the set of embeddings of K into the complex numbers (i.e. injective ring homomorphisms K → C). The discriminant of K is the square of the determinant of the n by n matrix B whose (i,j)-entry is σi(bj). Symbolically,
$d_K = \det\left((\sigma_i(b_j))_{1 \le i,j \le n}\right)^2.$
The discriminant of K can be referred to as the absolute discriminant of K to distinguish it from the relative discriminant of an extension K/L of number fields. The latter is an ideal in the ring of integers of L, and like the absolute discriminant it indicates which primes are ramified in K/L. It is a generalization of the absolute discriminant allowing for L to be bigger than Q; in fact, when L = Q, the relative discriminant of K/Q is the principal ideal of Z generated by the absolute discriminant of K.
Fundamental discriminants
A specific type of discriminant useful in the study of quadratic fields is the fundamental discriminant. It arises in the theory of integral binary quadratic forms, which are expressions of the form:
$Q(x, y) = ax^2 + bxy + cy^2,$
where $a$, $b$, and $c$ are integers. The discriminant of $Q(x, y)$ is given by:
$D = b^2 - 4ac.$
Not every integer can arise as a discriminant of an integral binary quadratic form. An integer $D$ is a fundamental discriminant if and only if it meets one of the following criteria:
Case 1: $D$ is congruent to 1 modulo 4 ($D \equiv 1 \pmod 4$) and is square-free, meaning it is not divisible by the square of any prime number.
Case 2: $D$ is equal to four times an integer $m$ ($D = 4m$) where $m$ is congruent to 2 or 3 modulo 4 ($m \equiv 2, 3 \pmod 4$) and is square-free.
These conditions ensure that every fundamental discriminant corresponds uniquely to a specific type of quadratic form; a short enumeration sketch follows the lists below.
The first eleven positive fundamental discriminants are:
1, 5, 8, 12, 13, 17, 21, 24, 28, 29, 33 (sequence A003658 in the OEIS).
The first eleven negative fundamental discriminants are:
−3, −4, −7, −8, −11, −15, −19, −20, −23, −24, −31 (sequence A003657 in the OEIS).
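Applying the two cases directly reproduces both lists; the sketch below uses only the Python standard library (the helper names are hypothetical):

```python
def squarefree(n):
    """True if |n| is not divisible by the square of any prime."""
    n = abs(n)
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

def is_fundamental(D):
    if D % 4 == 1:                 # Case 1 (Python's % handles negative D)
        return squarefree(D)
    if D % 4 == 0:                 # Case 2: D = 4m with m = 2 or 3 mod 4
        m = D // 4
        return m % 4 in (2, 3) and squarefree(m)
    return False

print([D for D in range(1, 34) if is_fundamental(D)])
# [1, 5, 8, 12, 13, 17, 21, 24, 28, 29, 33]
print([D for D in range(-1, -32, -1) if is_fundamental(D)])
# [-3, -4, -7, -8, -11, -15, -19, -20, -23, -24, -31]
```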
Quadratic number fields
A quadratic field is a field extension of the rational numbers that has degree 2. The discriminant of a quadratic field plays a role analogous to the discriminant of a quadratic form.
There exists a fundamental connection: an integer $D$ is a fundamental discriminant if and only if:
$D = 1$, or
$D$ is the discriminant of a quadratic field.
For each fundamental discriminant $D \ne 1$, there exists a unique (up to isomorphism) quadratic field with $D$ as its discriminant. This connects the theory of quadratic forms and the study of quadratic fields.
Prime factorization
Fundamental discriminants can also be characterized by their prime factorization. Consider the set $S$ consisting of $-4$, $8$ and $-8$, the prime numbers congruent to 1 modulo 4, and the additive inverses of the prime numbers congruent to 3 modulo 4:
$S = \{-8, -4, 8\} \cup \{p : p \text{ prime},\ p \equiv 1 \pmod 4\} \cup \{-q : q \text{ prime},\ q \equiv 3 \pmod 4\}.$
An integer $D \ne 1$ is a fundamental discriminant if and only if it is a product of elements of $S$ that are pairwise coprime.
References
External links
Wolfram Mathworld: Polynomial Discriminant
Planetmath: Discriminant
Polynomials
Conic sections
Quadratic forms
Determinants
Algebraic number theory | Discriminant | [
"Mathematics"
] | 5,603 | [
"Polynomials",
"Number theory",
"Algebraic number theory",
"Quadratic forms",
"Algebra"
] |
55,610 | https://en.wikipedia.org/wiki/Interior%20%28topology%29 | In mathematics, specifically in topology,
the interior of a subset S of a topological space X is the union of all subsets of S that are open in X.
A point that is in the interior of S is an interior point of S.
The interior of S is the complement of the closure of the complement of S.
In this sense interior and closure are dual notions.
The exterior of a set S is the complement of the closure of S; it consists of the points that are in neither the set nor its boundary.
The interior, boundary, and exterior of a subset together partition the whole space into three blocks (or fewer when one or more of these is empty).
The interior and exterior of a closed curve are a slightly different concept; see the Jordan curve theorem.
Definitions
Interior point
If S is a subset of a Euclidean space, then x is an interior point of S if there exists an open ball centered at x which is completely contained in S.
(This is illustrated in the introductory section to this article.)
This definition generalizes to any subset S of a metric space X with metric d: x is an interior point of S if there exists a real number r > 0 such that y is in S whenever the distance d(x, y) < r.
This definition generalizes to topological spaces by replacing "open ball" with "open set".
If S is a subset of a topological space X, then x is an interior point of S in X if x is contained in an open subset of X that is completely contained in S.
(Equivalently, x is an interior point of S if S is a neighbourhood of x.)
Interior of a set
The interior of a subset S of a topological space X, denoted by intX S or int S or S°, can be defined in any of the following equivalent ways:
int S is the largest open subset of X contained in S;
int S is the union of all open sets of X contained in S;
int S is the set of all interior points of S.
If the space X is understood from context then the shorter notation int S is usually preferred to intX S.
Examples
In any space, the interior of the empty set is the empty set.
In any space X, if S ⊆ X, then int S ⊆ S.
If X is the real line R (with the standard topology), then int([0, 1]) = (0, 1), whereas the interior of the set Q of rational numbers is empty: int(Q) = ∅.
If X is the complex plane C, then int({z : |z| ≤ 1}) = {z : |z| < 1}.
In any Euclidean space, the interior of any finite set is the empty set.
On the set of real numbers, one can put other topologies rather than the standard one:
If X is the real numbers R with the lower limit topology, then int([0, 1]) = [0, 1).
If one considers on R the topology in which every set is open, then int([0, 1]) = [0, 1].
If one considers on R the topology in which the only open sets are the empty set and R itself, then int([0, 1]) is the empty set.
These examples show that the interior of a set depends upon the topology of the underlying space.
The last two examples are special cases of the following.
In any discrete space, since every set is open, every set is equal to its interior.
In any indiscrete space X, since the only open sets are the empty set and X itself, int(X) = X and, for every proper subset S of X, int(S) is the empty set (a small computational sketch follows).
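The dependence of the interior on the topology can be made concrete by computing interiors directly from the definition (the union of all open sets contained in S). The sketch below is a minimal illustration using only the Python standard library; the three-point space and its topology are arbitrary choices for this example.

```python
X = frozenset({1, 2, 3})
# An arbitrary topology on X: contains {} and X, closed under unions and intersections.
opens = [frozenset(), frozenset({1}), frozenset({1, 2}), X]

def interior(S):
    """Union of all open sets contained in S."""
    result = set()
    for U in opens:
        if U <= frozenset(S):
            result |= U
    return result

print(interior({1, 3}))   # {1}: the largest open set inside {1, 3}
print(interior({2, 3}))   # set(): no nonempty open set fits inside
print(interior({1, 2}))   # {1, 2}: an open set equals its interior
```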
Properties
Let X be a topological space and let S and T be subsets of X.
int S is open in X.
If T is open in X then T ⊆ S if and only if T ⊆ int S.
int S is an open subset of S when S is given the subspace topology.
S is an open subset of X if and only if S = int S.
Idempotence: int(int S) = int S.
Preserves/distributes over binary intersection: int(S ∩ T) = int S ∩ int T.
However, the interior operator does not distribute over unions since only int(S) ∪ int(T) ⊆ int(S ∪ T) is guaranteed in general and equality might not hold. For example, if X = R, S = (−∞, 0] and T = (0, ∞), then int(S) ∪ int(T) = R \ {0} is a proper subset of int(S ∪ T) = R.
Monotone/nondecreasing with respect to ⊆: If S ⊆ T then int S ⊆ int T.
Other properties include:
If S is closed in X and int T = ∅ then int(S ∪ T) = int S.
Relationship with closure
The above statements will remain true if all instances of the symbols/words
"interior", "int", "open", "subset", and "largest"
are respectively replaced by
"closure", "cl", "closed", "superset", and "smallest"
and the following symbols are swapped:
"" swapped with ""
"" swapped with ""
For more details on this matter, see interior operator below or the article Kuratowski closure axioms.
Interior operator
The interior operator int is dual to the closure operator, which is denoted by cl or by an overline, in the sense that
int S = X \ cl(X \ S)
and also
cl S = X \ int(X \ S),
where X is the topological space containing S and the backslash \ denotes set-theoretic difference.
Therefore, the abstract theory of closure operators and the Kuratowski closure axioms can be readily translated into the language of interior operators, by replacing sets with their complements in X.
In general, the interior operator does not commute with unions. However, in a complete metric space the following result does hold:
The result above implies that every complete metric space is a Baire space.
Exterior of a set
The exterior of a subset S of a topological space X, denoted by extX S or simply ext S, is the largest open set disjoint from S, namely, it is the union of all open sets in X that are disjoint from S. The exterior is the interior of the complement, which is the same as the complement of the closure; in formulas,
ext S = int(X \ S) = X \ cl S.
Similarly, the interior is the exterior of the complement:
int S = ext(X \ S).
The interior, boundary, and exterior of a set S together partition the whole space into three blocks (or fewer when one or more of these is empty):
X = int S ∪ ∂S ∪ ext S,
where ∂S denotes the boundary of S. The interior and exterior are always open, while the boundary is closed.
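Continuing the finite-space sketch above (reusing its X, opens and interior), the exterior and the three-block partition can be computed directly; boundary points are those in neither the interior nor the exterior.

```python
def exterior(S):
    # ext S = int(X \ S): the largest open set disjoint from S.
    return interior(X - frozenset(S))

def boundary(S):
    return set(X) - interior(S) - exterior(S)

S = {1, 3}
print(interior(S), boundary(S), exterior(S))   # {1} {2, 3} set()
```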
Some of the properties of the exterior operator are unlike those of the interior operator:
The exterior operator reverses inclusions: if S ⊆ T, then ext T ⊆ ext S.
The exterior operator is not idempotent. It does have the property that int S ⊆ ext(ext S).
Interior-disjoint shapes
Two shapes A and B are called interior-disjoint if the intersection of their interiors is empty.
Interior-disjoint shapes may or may not intersect in their boundary.
See also
References
Bibliography
External links
Closure operators
General topology | Interior (topology) | [
"Mathematics"
] | 1,116 | [
"General topology",
"Topology",
"Closure operators",
"Order theory"
] |
55,611 | https://en.wikipedia.org/wiki/Alexandroff%20extension | In the mathematical field of topology, the Alexandroff extension is a way to extend a noncompact topological space by adjoining a single point in such a way that the resulting space is compact. It is named after the Russian mathematician Pavel Alexandroff.
More precisely, let X be a topological space. Then the Alexandroff extension of X is a certain compact space X* together with an open embedding c : X → X* such that the complement of X in X* consists of a single point, typically denoted ∞. The map c is a Hausdorff compactification if and only if X is a locally compact, noncompact Hausdorff space. For such spaces the Alexandroff extension is called the one-point compactification or Alexandroff compactification. The advantages of the Alexandroff compactification lie in its simple, often geometrically meaningful structure and the fact that it is in a precise sense minimal among all compactifications; the disadvantage lies in the fact that it only gives a Hausdorff compactification on the class of locally compact, noncompact Hausdorff spaces, unlike the Stone–Čech compactification which exists for any topological space (but provides an embedding exactly for Tychonoff spaces).
Example: inverse stereographic projection
A geometrically appealing example of one-point compactification is given by the inverse stereographic projection. Recall that the stereographic projection S gives an explicit homeomorphism from the unit sphere minus the north pole (0,0,1) to the Euclidean plane. The inverse stereographic projection is an open, dense embedding into a compact Hausdorff space obtained by adjoining the additional point ∞. Under the stereographic projection, latitudinal circles get mapped to planar circles. It follows that the deleted neighborhood basis of (0,0,1) given by the punctured spherical caps corresponds to the complements of closed planar disks. More qualitatively, a neighborhood basis at ∞ is furnished by the sets S(R2 \ K) ∪ {∞} as K ranges through the compact subsets of R2. This example already contains the key concepts of the general case.
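A numerical sketch (assuming NumPy; the formula is the standard inverse projection from the north pole) makes the key point of the example concrete: plane points far from the origin land near the adjoined point (0, 0, 1).

```python
import numpy as np

def inverse_stereographic(x, y):
    """Map a plane point to the unit sphere minus the north pole (0, 0, 1)."""
    d = x*x + y*y + 1.0
    return np.array([2*x / d, 2*y / d, (x*x + y*y - 1.0) / d])

for r in [0.0, 1.0, 10.0, 1e6]:
    print(r, inverse_stereographic(r, 0.0))
# As r grows, the image approaches (0, 0, 1): neighborhoods of the adjoined
# point correspond to complements of compact subsets of the plane.
```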
Motivation
Let c : X → Y be an embedding from a topological space X to a compact Hausdorff topological space Y, with dense image and one-point remainder {∞} = Y \ c(X). Then c(X) is open in a compact Hausdorff space so is locally compact Hausdorff, hence its homeomorphic preimage X is also locally compact Hausdorff. Moreover, if X were compact then c(X) would be closed in Y and hence not dense. Thus a space can only admit a Hausdorff one-point compactification if it is locally compact, noncompact and Hausdorff. Moreover, in such a one-point compactification the image of a neighborhood basis for x in X gives a neighborhood basis for c(x) in c(X), and—because a subset of a compact Hausdorff space is compact if and only if it is closed—the open neighborhoods of ∞ must be all sets obtained by adjoining ∞ to the image under c of a subset of X with compact complement.
The Alexandroff extension
Let X be a topological space. Put X* = X ∪ {∞}, and topologize X* by taking as open sets all the open sets in X together with all sets of the form V = (X \ C) ∪ {∞} where C is closed and compact in X. Here, X \ C denotes the complement of C in X. Note that V is an open neighborhood of ∞, and thus any open cover of X* will contain all except a compact subset C of X*, implying that X* is compact.
The space X* is called the Alexandroff extension of X (Willard, 19A). Sometimes the same name is used for the inclusion map c : X → X*.
The properties below follow from the above discussion:
The map c is continuous and open: it embeds X as an open subset of X*.
The space X* is compact.
The image c(X) is dense in X*, if X is noncompact.
The space X* is Hausdorff if and only if X is Hausdorff and locally compact.
The space X* is T1 if and only if X is T1.
The one-point compactification
In particular, the Alexandroff extension is a Hausdorff compactification of X if and only if X is Hausdorff, noncompact and locally compact. In this case it is called the one-point compactification or Alexandroff compactification of X.
Recall from the above discussion that any Hausdorff compactification with one-point remainder is necessarily (isomorphic to) the Alexandroff compactification. In particular, if Y is a compact Hausdorff space and p is a limit point of Y (i.e. not an isolated point of Y), then Y is the Alexandroff compactification of Y \ {p}.
Let X be any noncompact Tychonoff space. Under the natural partial ordering on the set of equivalence classes of compactifications, any minimal element is equivalent to the Alexandroff extension (Engelking, Theorem 3.5.12). It follows that a noncompact Tychonoff space admits a minimal compactification if and only if it is locally compact.
Non-Hausdorff one-point compactifications
Let X be an arbitrary noncompact topological space. One may want to determine all the compactifications (not necessarily Hausdorff) of X obtained by adding a single point, which could also be called one-point compactifications in this context.
So one wants to determine all possible ways to give X* = X ∪ {∞} a compact topology such that X is dense in it and the subspace topology on X induced from X* is the same as the original topology. The last compatibility condition on the topology automatically implies that X is dense in X*, because X is not compact, so it cannot be closed in a compact space.
Also, it is a fact that the inclusion map c : X → X* is necessarily an open embedding, that is, X must be open in X* and the topology on X* must contain every member of the topology of X.
So the topology on X* is determined by the neighbourhoods of ∞. Any neighborhood of ∞ is necessarily the complement in X* of a closed compact subset of X, as previously discussed.
The topologies on that make it a compactification of are as follows:
The Alexandroff extension of X defined above. Here we take the complements of all closed compact subsets of X as neighborhoods of ∞. This is the largest topology that makes X* a one-point compactification of X.
The open extension topology. Here we add a single neighborhood of ∞, namely the whole space X*. This is the smallest topology that makes X* a one-point compactification of X.
Any topology intermediate between the two topologies above. For neighborhoods of ∞ one has to pick a suitable subfamily of the complements of all closed compact subsets of X; for example, the complements of all finite closed compact subsets, or the complements of all countable closed compact subsets.
Further examples
Compactifications of discrete spaces
The one-point compactification of the set of positive integers is homeomorphic to the space K = {0} ∪ {1/n | n is a positive integer} with the order topology.
A sequence (an) in a topological space X converges to a point a in X if and only if the map f : N* → X given by f(n) = an for n in N and f(∞) = a is continuous. Here N has the discrete topology and N* denotes its one-point compactification.
Polyadic spaces are defined as topological spaces that are the continuous image of a power of the one-point compactification of a discrete, locally compact Hausdorff space.
Compactifications of continuous spaces
The one-point compactification of n-dimensional Euclidean space Rn is homeomorphic to the n-sphere Sn. As above, the map can be given explicitly as an n-dimensional inverse stereographic projection.
The one-point compactification of the product of κ copies of the half-closed interval [0,1), that is, of [0,1)κ, is (homeomorphic to) [0,1]κ.
Since the closure of a connected subset is connected, the Alexandroff extension of a noncompact connected space is connected. However a one-point compactification may "connect" a disconnected space: for instance the one-point compactification of the disjoint union of a finite number of copies of the interval (0,1) is a wedge of circles.
The one-point compactification of the disjoint union of a countable number of copies of the interval (0,1) is the Hawaiian earring. This is different from the wedge of countably many circles, which is not compact.
Given X compact Hausdorff and C any closed subset of X, the one-point compactification of X \ C is X/C, where the forward slash denotes the quotient space.
If X and Y are locally compact Hausdorff, then (X × Y)* = X* ∧ Y* where ∧ is the smash product. Recall the definition of the smash product: X ∧ Y = (X × Y)/(X ∨ Y) where X ∨ Y is the wedge sum, and again, / denotes the quotient space.
As a functor
The Alexandroff extension can be viewed as a functor from the category of topological spaces with proper continuous maps as morphisms to the category whose objects are continuous maps c : X → Y and for which the morphisms from c1 : X1 → Y1 to c2 : X2 → Y2 are pairs of continuous maps f : X1 → X2, g : Y1 → Y2 such that c2 ∘ f = g ∘ c1. In particular, homeomorphic spaces have isomorphic Alexandroff extensions.
See also
Notes
References
General topology
Compactification (mathematics) | Alexandroff extension | [
"Mathematics"
] | 1,867 | [
"General topology",
"Topology",
"Compactification (mathematics)"
] |
55,623 | https://en.wikipedia.org/wiki/Cotyledon | A cotyledon ( ; ; "a cavity, small cup, any cup-shaped hollow",
gen. (), ) is a "seed leaf" – a significant part of the embryo within the seed of a plant – and is formally defined as "the embryonic leaf in seed-bearing plants, one or more of which are the first to appear from a germinating seed." Botanists use the number of cotyledons present as one characteristic to classify the flowering plants (angiosperms): species with one cotyledon are called monocotyledonous ("monocots"); plants with two embryonic leaves are termed dicotyledonous ("dicots").
In the case of dicot seedlings whose cotyledons are photosynthetic, the cotyledons are functionally similar to leaves. However, true leaves and cotyledons are developmentally distinct. Cotyledons form during embryogenesis, along with the root and shoot meristems, and are therefore present in the seed prior to germination. True leaves, however, form post-embryonically (i.e. after germination) from the shoot apical meristem, which generates subsequent aerial portions of the plant.
The cotyledon of grasses and many other monocotyledons is a highly modified leaf composed of a scutellum and a coleoptile. The scutellum is a tissue within the seed that is specialized to absorb stored food from the adjacent endosperm. The coleoptile is a protective cap that covers the plumule (precursor to the stem and leaves of the plant).
Gymnosperm seedlings also have cotyledons. Gnetophytes, cycads, and ginkgos all have 2, whereas in conifers they are often variable in number (multicotyledonous), with 2 to 24 cotyledons forming a whorl at the top of the hypocotyl (the embryonic stem) surrounding the plumule. Within each species, there is often still some variation in cotyledon numbers, e.g. Monterey pine (Pinus radiata) seedlings have between 5 and 9, and Jeffrey pine (Pinus jeffreyi) 7 to 13 (Mirov 1967), but other species are more fixed, with e.g. Mediterranean cypress always having just two cotyledons. The highest number reported is for big-cone pinyon (Pinus maximartinezii), with 24 (Farjon & Styles 1997).
Cotyledons may be ephemeral, lasting only days after emergence, or persistent, enduring at least a year on the plant. The cotyledons contain (or in the case of gymnosperms and monocotyledons, have access to) the stored food reserves of the seed. As these reserves are used up, the cotyledons may turn green and begin photosynthesis, or may wither as the first true leaves take over food production for the seedling.
Epigeal versus hypogeal development
Cotyledons may be either epigeal, expanding on the germination of the seed, throwing off the seed shell, rising above the ground, and perhaps becoming photosynthetic; or hypogeal, not expanding, remaining below ground and not becoming photosynthetic. The latter is typically the case where the cotyledons act as a storage organ, as in many nuts and acorns.
Hypogeal plants have (on average) significantly larger seeds than epigeal ones. They are also capable of surviving if the seedling is clipped off, as meristem buds remain underground (with epigeal plants, the meristem is clipped off if the seedling is grazed). The tradeoff is whether the plant should produce a large number of small seeds, or a smaller number of seeds which are more likely to survive.
The ultimate development of the epigeal habit is represented by a few plants, mostly in the family Gesneriaceae in which the cotyledon persists for a lifetime. Such a plant is Streptocarpus wendlandii of South Africa in which one cotyledon grows to be up to 75 centimeters (2.5 feet) in length and up to 61 cm (two feet) in width (the largest cotyledon of any dicot, and exceeded only by Lodoicea). Adventitious flower clusters form along the midrib of the cotyledon. The second cotyledon is much smaller and ephemeral.
Related plants may show a mixture of hypogeal and epigeal development, even within the same plant family. Groups which contain both hypogeal and epigeal species include, for example, the Southern Hemisphere conifer family Araucariaceae, the pea family, Fabaceae, and the genus Lilium (see Lily seed germination types). The frequently garden grown common bean, Phaseolus vulgaris, is epigeal, while the closely related runner bean, Phaseolus coccineus, is hypogeal.
History
The term cotyledon was coined by Marcello Malpighi (1628–1694). John Ray was the first botanist to recognize that some plants have two and others only one, and eventually the first to recognize the immense importance of this fact to systematics, in Methodus plantarum (1682).
Theophrastus (3rd or 4th century BC) and Albertus Magnus (13th century) may also have recognized the distinction between the dicotyledons and monocotyledons.
Notes
References
Bibliography
Mirov, N. T. (1967). The Genus Pinus. Ronald Press Company, New York.
Farjon, A. & Styles, B. T. (1997). Pinus (Pinaceae). Flora Neotropica Monograph 75: 221–224.
External links
Tiscali.reference – Cotyledon
Plant anatomy
Plant morphology
Plant reproduction
Leaves | Cotyledon | [
"Biology"
] | 1,261 | [
"Behavior",
"Plant reproduction",
"Plants",
"Reproduction",
"Plant morphology"
] |
55,632 | https://en.wikipedia.org/wiki/Linear%20combination | In mathematics, a linear combination or superposition is an expression constructed from a set of terms by multiplying each term by a constant and adding the results (e.g. a linear combination of x and y would be any expression of the form ax + by, where a and b are constants). The concept of linear combinations is central to linear algebra and related fields of mathematics. Most of this article deals with linear combinations in the context of a vector space over a field, with some generalizations given at the end of the article.
Definition
Let V be a vector space over the field K. As usual, we call elements of V vectors and call elements of K scalars.
If v1,...,vn are vectors and a1,...,an are scalars, then the linear combination of those vectors with those scalars as coefficients is
a1v1 + a2v2 + a3v3 + ⋯ + anvn.
There is some ambiguity in the use of the term "linear combination" as to whether it refers to the expression or to its value. In most cases the value is emphasized, as in the assertion "the set of all linear combinations of v1,...,vn always forms a subspace". However, one could also say "two different linear combinations can have the same value" in which case the reference is to the expression. The subtle difference between these uses is the essence of the notion of linear dependence: a family F of vectors is linearly independent precisely if any linear combination of the vectors in F (as value) is uniquely so (as expression). In any case, even when viewed as expressions, all that matters about a linear combination is the coefficient of each vi; trivial modifications such as permuting the terms or adding terms with zero coefficient do not produce distinct linear combinations.
In a given situation, K and V may be specified explicitly, or they may be obvious from context. In that case, we often speak of a linear combination of the vectors v1,...,vn, with the coefficients unspecified (except that they must belong to K). Or, if S is a subset of V, we may speak of a linear combination of vectors in S, where both the coefficients and the vectors are unspecified, except that the vectors must belong to the set S (and the coefficients must belong to K). Finally, we may speak simply of a linear combination, where nothing is specified (except that the vectors must belong to V and the coefficients must belong to K); in this case one is probably referring to the expression, since every vector in V is certainly the value of some linear combination.
Note that by definition, a linear combination involves only finitely many vectors (except as described in the Generalizations section below).
However, the set S that the vectors are taken from (if one is mentioned) can still be infinite; each individual linear combination will only involve finitely many vectors.
Also, there is no reason that n cannot be zero; in that case, we declare by convention that the result of the linear combination is the zero vector in V.
Examples and counterexamples
Euclidean vectors
Let the field K be the set R of real numbers, and let the vector space V be the Euclidean space R3.
Consider the vectors e1 = (1,0,0), e2 = (0,1,0) and e3 = (0,0,1).
Then any vector in R3 is a linear combination of e1, e2, and e3.
To see that this is so, take an arbitrary vector (a1,a2,a3) in R3, and write:
(a1,a2,a3) = a1(1,0,0) + a2(0,1,0) + a3(0,0,1) = a1e1 + a2e2 + a3e3.
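With respect to e1, e2, e3 the coefficients can be read off directly; for a general spanning set they are found by solving a linear system. The sketch below (assuming NumPy; the basis vectors are an arbitrary choice) illustrates this.

```python
import numpy as np

v1, v2, v3 = np.array([1., 0., 0.]), np.array([1., 1., 0.]), np.array([1., 1., 1.])
target = np.array([2., -1., 3.])

# Columns of M are the vectors; solve M @ coeffs = target for the coefficients.
M = np.column_stack([v1, v2, v3])
coeffs = np.linalg.solve(M, target)
print(coeffs)                                        # [ 3. -4.  3.]
print(coeffs[0]*v1 + coeffs[1]*v2 + coeffs[2]*v3)    # [ 2. -1.  3.]
```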
Functions
Let K be the set C of all complex numbers, and let V be the set CC(R) of all continuous functions from the real line R to the complex plane C.
Consider the vectors (functions) f and g defined by f(t) := eit and g(t) := e−it.
(Here, e is the base of the natural logarithm, about 2.71828..., and i is the imaginary unit, a square root of −1.)
Some linear combinations of f and g are:
cos t = (1/2)eit + (1/2)e−it
2 sin t = (−i)eit + (i)e−it.
On the other hand, the constant function 3 is not a linear combination of f and g. To see this, suppose that 3 could be written as a linear combination of eit and e−it. This means that there would exist complex scalars a and b such that aeit + be−it = 3 for all real numbers t. Setting t = 0 and t = π gives the equations a + b = 3 and −(a + b) = 3, and clearly this cannot happen. See Euler's identity.
Polynomials
Let K be R, C, or any field, and let V be the set P of all polynomials with coefficients taken from the field K.
Consider the vectors (polynomials) p1 := 1, p2 := x + 1, and p3 := x2 + x + 1.
Is the polynomial x2 − 1 a linear combination of p1, p2, and p3?
To find out, consider an arbitrary linear combination of these vectors and try to see when it equals the desired vector x2 − 1.
Picking arbitrary coefficients a1, a2, and a3, we want
a1(1) + a2(x + 1) + a3(x2 + x + 1) = x2 − 1.
Multiplying the polynomials out, this means
a1 + a2x + a2 + a3x2 + a3x + a3 = x2 − 1,
and collecting like powers of x, we get
a3x2 + (a2 + a3)x + (a1 + a2 + a3) = 1x2 + 0x + (−1).
Two polynomials are equal if and only if their corresponding coefficients are equal, so we can conclude
a3 = 1, a2 + a3 = 0, and a1 + a2 + a3 = −1.
This system of linear equations can easily be solved.
First, the first equation simply says that a3 is 1.
Knowing that, we can solve the second equation for a2, which comes out to −1.
Finally, the last equation tells us that a1 is also −1.
Therefore, the only possible way to get a linear combination is with these coefficients.
Indeed,
−1 − (x + 1) + (x2 + x + 1) = x2 − 1,
so x2 − 1 is a linear combination of p1, p2, and p3.
On the other hand, what about the polynomial x3 − 1? If we try to make this vector a linear combination of p1, p2, and p3, then following the same process as before, we get the equation
a3x2 + (a2 + a3)x + (a1 + a2 + a3) = 1x3 + 0x2 + 0x + (−1).
However, when we set corresponding coefficients equal in this case, the equation for x3 is
0 = 1,
which is always false.
Therefore, there is no way for this to work, and x3 − 1 is not a linear combination of p1, p2, and p3.
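Both conclusions can be reproduced mechanically by equating coefficients and solving the resulting linear system; the sketch below assumes SymPy (the helper name coefficients is hypothetical).

```python
from sympy import symbols, Poly, linsolve

x, a1, a2, a3 = symbols('x a1 a2 a3')
p1, p2, p3 = 1, x + 1, x**2 + x + 1

def coefficients(target):
    combo = a1*p1 + a2*p2 + a3*p3 - target
    eqs = Poly(combo, x).all_coeffs()   # every coefficient must vanish
    return linsolve(eqs, [a1, a2, a3])

print(coefficients(x**2 - 1))   # {(-1, -1, 1)}
print(coefficients(x**3 - 1))   # EmptySet: x**3 - 1 is not a linear combination
```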
The linear span
Take an arbitrary field K, an arbitrary vector space V, and let v1,...,vn be vectors (in V).
It is interesting to consider the set of all linear combinations of these vectors.
This set is called the linear span (or just span) of the vectors, say S = {v1, ..., vn}. We write the span of S as span(S) or sp(S):
span(S) = {a1v1 + ⋯ + anvn : a1, ..., an ∈ K}.
Linear independence
Suppose that, for some sets of vectors v1,...,vn,
a single vector can be written in two different ways as a linear combination of them:
v = a1v1 + ⋯ + anvn = b1v1 + ⋯ + bnvn.
This is equivalent, by subtracting these (ci := ai − bi), to saying a non-trivial combination is zero:
c1v1 + ⋯ + cnvn = 0.
If that is possible, then v1,...,vn are called linearly dependent; otherwise, they are linearly independent.
Similarly, we can speak of linear dependence or independence of an arbitrary set S of vectors.
If S is linearly independent and the span of S equals V, then S is a basis for V.
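Computationally, linear independence over the reals can be tested by checking whether the matrix whose columns are the vectors has full column rank; a sketch assuming NumPy:

```python
import numpy as np

def independent(*vectors):
    """True exactly when the vectors are linearly independent."""
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

print(independent([1, 0, 0], [1, 1, 0], [1, 1, 1]))   # True: a basis of R3
print(independent([1, 2, 3], [2, 4, 6]))              # False: second = 2 * first
```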
Affine, conical, and convex combinations
By restricting the coefficients used in linear combinations, one can define the related concepts of affine combination, conical combination, and convex combination, and the associated notions of sets closed under these operations.
Because these are more restricted operations, more subsets will be closed under them, so affine subsets, convex cones, and convex sets are generalizations of vector subspaces: a vector subspace is also an affine subspace, a convex cone, and a convex set, but a convex set need not be a vector subspace, affine, or a convex cone.
These concepts often arise when one can take certain linear combinations of objects, but not any: for example, probability distributions are closed under convex combination (they form a convex set), but not conical or affine combinations (or linear), and positive measures are closed under conical combination but not affine or linear – hence one defines signed measures as the linear closure.
Linear and affine combinations can be defined over any field (or ring), but conical and convex combination require a notion of "positive", and hence can only be defined over an ordered field (or ordered ring), generally the real numbers.
If one allows only scalar multiplication, not addition, one obtains a (not necessarily convex) cone; one often restricts the definition to only allowing multiplication by positive scalars.
All of these concepts are usually defined as subsets of an ambient vector space (except for affine spaces, which are also considered as "vector spaces forgetting the origin"), rather than being axiomatized independently.
Operad theory
More abstractly, in the language of operad theory, one can consider vector spaces to be algebras over the operad R∞ (the infinite direct sum, so only finitely many terms are non-zero; this corresponds to only taking finite sums), which parametrizes linear combinations: the vector (2, 3, −5, 0, ...) for instance corresponds to the linear combination 2v1 + 3v2 − 5v3 + 0v4 + ⋯. Similarly, one can consider affine combinations, conical combinations, and convex combinations to correspond to the sub-operads where the terms sum to 1, the terms are all non-negative, or both, respectively. Graphically, these are the infinite affine hyperplane, the infinite hyper-octant, and the infinite simplex. This formalizes what is meant by Rn or the standard simplex being model spaces, and such observations as that every bounded convex polytope is the image of a simplex. Here sub-operads correspond to more restricted operations and thus more general theories.
From this point of view, we can think of linear combinations as the most general sort of operation on a vector space – saying that a vector space is an algebra over the operad of linear combinations is precisely the statement that all possible algebraic operations in a vector space are linear combinations.
The basic operations of addition and scalar multiplication, together with the existence of an additive identity and additive inverses, cannot be combined in any more complicated way than the generic linear combination: the basic operations are a generating set for the operad of all linear combinations.
Ultimately, this fact lies at the heart of the usefulness of linear combinations in the study of vector spaces.
Generalizations
If V is a topological vector space, then there may be a way to make sense of certain infinite linear combinations, using the topology of V.
For example, we might be able to speak of a1v1 + a2v2 + a3v3 + ⋯, going on forever.
Such infinite linear combinations do not always make sense; we call them convergent when they do.
Allowing more linear combinations in this case can also lead to a different concept of span, linear independence, and basis.
The articles on the various flavors of topological vector spaces go into more detail about these.
If K is a commutative ring instead of a field, then everything that has been said above about linear combinations generalizes to this case without change.
The only difference is that we call spaces like this V modules instead of vector spaces.
If K is a noncommutative ring, then the concept still generalizes, with one caveat:
since modules over noncommutative rings come in left and right versions, our linear combinations may also come in either of these versions, whatever is appropriate for the given module.
This is simply a matter of doing scalar multiplication on the correct side.
A more complicated twist comes when V is a bimodule over two rings, KL and KR.
In that case, the most general linear combination looks like
a1v1b1 + a2v2b2 + ⋯ + anvnbn,
where a1,...,an belong to KL, b1,...,bn belong to KR, and v1,…,vn belong to V.
See also
Weighted sum
Citations
References
Textbook
Web
External links
Linear Combinations and Span: Understanding linear combinations and spans of vectors, khanacademy.org.
Linear algebra | Linear combination | [
"Mathematics"
] | 2,465 | [
"Linear algebra",
"Algebra"
] |
55,678 | https://en.wikipedia.org/wiki/Interstate%20Commerce%20Commission | The Interstate Commerce Commission (ICC) was a regulatory agency in the United States created by the Interstate Commerce Act of 1887. The agency's original purpose was to regulate railroads (and later trucking) to ensure fair rates, to eliminate rate discrimination, and to regulate other aspects of common carriers, including interstate bus lines and telephone companies. Congress expanded ICC authority to regulate other modes of commerce beginning in 1906. Throughout the 20th century, several of ICC's authorities were transferred to other federal agencies. The ICC was abolished in 1995, and its remaining functions were transferred to the Surface Transportation Board.
The Commission's five members were appointed by the President with the consent of the United States Senate. The ICC was the first independent agency of the federal government (part of the so-called "Fourth Branch").
Creation
The ICC was established by the Interstate Commerce Act of 1887, which was signed into law by President Grover Cleveland. The creation of the commission was the result of widespread and longstanding anti-railroad agitation. Western farmers, specifically those of the Grange Movement, were the dominant force behind the unrest, but Westerners generally — especially those in rural areas — believed that the railroads possessed economic power that they systematically abused. A central issue was rate discrimination between similarly situated customers and communities. Other potent issues included alleged attempts by railroads to obtain influence over city and state governments and the widespread practice of granting free transportation in the form of yearly passes to opinion leaders (elected officials, newspaper editors, ministers, and so on) so as to dampen any opposition to railroad practices.
Various sections of the Interstate Commerce Act banned "personal discrimination" and required shipping rates to be "just and reasonable."
President Cleveland appointed Thomas M. Cooley as the first chairman of the ICC. Cooley had been Dean of the University of Michigan Law School and Chief Justice of the Michigan Supreme Court.
Initial implementation and legal challenges
The Commission had a troubled start because the law that created it failed to give it adequate enforcement powers.
Following the passage of the 1887 act, the ICC proceeded to set maximum shipping rates for railroads. However, in the late 1890s, several railroads challenged the agency's ratemaking authority in litigation, and the courts severely limited the ICC's powers.
The ICC became the United States' investigation agency for railroad accidents.
Expansion of ICC authority
Congress expanded the commission's powers through subsequent legislation. The 1893 Railroad Safety Appliance Act gave the ICC jurisdiction over railroad safety, removing this authority from the states, and this was followed with amendments in 1903 and 1910. The Hepburn Act of 1906 authorized the ICC to set maximum railroad rates, and extended the agency's authority to cover bridges, terminals, ferries, sleeping cars, express companies and oil pipelines.
A long-standing controversy was how to interpret language in the Act that banned long haul-short haul fare discrimination. The Mann-Elkins Act of 1910 addressed this question by strengthening ICC authority over railroad rates. This amendment also expanded the ICC's jurisdiction to include regulation of telephone, telegraph and wireless companies.
The Valuation Act of 1913 required the ICC to organize a Bureau of Valuation that would assess the value of railroad property. This information would be used to set rates. The Esch-Cummins Act of 1920 expanded the ICC's rate-setting responsibilities, and the agency in turn required updated valuation data from the railroads. The enlarged process led to a major increase in ICC staff, and the valuations continued for almost 20 years. The valuation process turned out to be of limited use in helping the ICC set rates fairly.
In 1934, Congress transferred the telecommunications authority to the new Federal Communications Commission.
In 1935, Congress passed the Motor Carrier Act, which extended ICC authority to regulate interstate bus lines and trucking as common carriers.
Ripley Plan to consolidate railroads into regional systems
The Transportation Act of 1920 directed the Interstate Commerce Commission to prepare and adopt a plan for the consolidation of the railway properties of the United States into a limited number of systems. Between 1920 and 1923, William Z. Ripley, a professor of political economy at Harvard University, wrote up ICC's plan for the regional consolidation of the U.S. railways. His plan became known as the Ripley Plan. In 1929 the ICC published Ripley's Plan under the title Complete Plan of Consolidation. Numerous hearings were held by ICC regarding the plan under the topic "In the Matter of Consolidation of the Railways of the United States into a Limited Number of Systems".
The proposed 21 regional railroads were as follows:
Boston and Maine Railroad; Maine Central Railroad; Bangor and Aroostook Railroad; Delaware and Hudson Railway
New Haven Railroad; New York, Ontario and Western Railway; Lehigh and Hudson River Railway; Lehigh and New England Railroad
New York Central Railroad; Rutland Railroad; Virginian Railway; Chicago, Attica and Southern Railroad
Pennsylvania Railroad; Long Island Rail Road
Baltimore and Ohio Railroad; Central Railroad of New Jersey; Reading Railroad; Buffalo and Susquehanna Railroad; Buffalo, Rochester and Pittsburgh Railway; 50% of Detroit, Toledo and Ironton Railroad; 50% of Detroit and Toledo Shore Line Railroad; 50% of Monon Railroad; Chicago and Alton Railroad (Alton Railroad)
Chesapeake and Ohio-Nickel Plate Road; Hocking Valley Railway; Erie Railroad; Pere Marquette Railway; Delaware, Lackawanna and Western Railroad; Bessemer and Lake Erie Railroad; Chicago and Illinois Midland Railway; 50% of Detroit and Toledo Shore Line Railroad
Wabash-Seaboard Air Line Railroad; Lehigh Valley Railroad; Wheeling and Lake Erie Railway; Pittsburgh and West Virginia Railway; Western Maryland Railway; Akron, Canton and Youngstown Railway; Norfolk and Western Railway; 50% of Detroit, Toledo and Ironton Railroad; Toledo, Peoria and Western Railroad; Ann Arbor Railroad; 50% of Winston-Salem Southbound Railway
Atlantic Coast Line Railroad; Louisville and Nashville Railroad; Nashville, Chattanooga and St. Louis Railway; Clinchfield Railroad; Atlanta, Birmingham and Coast Railroad; Gulf, Mobile and Northern Railroad; New Orleans, Jackson and Great Northern; 25% of Chicago, Indianapolis and Louisville Railway (Monon Railroad); 50% of Winston-Salem Southbound Railway
Southern Railway; Norfolk Southern Railway; Tennessee Central Railway (east of Nashville); Florida East Coast Railway; 25% of Chicago, Indianapolis and Louisville Railway (Monon Railway)
Illinois Central Railroad; Central of Georgia Railway; Minneapolis and St. Louis Railway; Tennessee Central Railway (west of Nashville); St. Louis Southwestern Railway (Cotton Belt); Atlanta and St. Andrews Bay Railroad
Chicago and North Western Railway; Chicago and Eastern Illinois Railroad; Litchfield and Madison Railway; Mobile and Ohio Railroad; Columbus and Greenville Railway; Lake Superior and Ishpeming Railroad
Great Northern-Northern Pacific Railway; Spokane, Portland and Seattle Railway; 50% of Butte, Anaconda and Pacific Railway
Milwaukee Road; Escanaba and Lake Superior Railroad; Duluth, Missabe and Northern Railway; Duluth and Iron Range Railroad; 50% of Butte, Anaconda and Pacific Railway; trackage rights on Spokane, Portland and Seattle Railway to Portland, Oregon.
Burlington Route; Colorado and Southern Railway; Fort Worth and Denver Railway; Green Bay and Western Railroad; Missouri-Kansas-Texas Railroad; 50% of Trinity and Brazos Valley Railroad; Oklahoma City-Ada-Atoka Railway
Union Pacific Railroad; Kansas City Southern Railway
Southern Pacific Railroad
Santa Fe Railway; Chicago Great Western Railway; Kansas City, Mexico and Orient Railway; Missouri and North Arkansas Railroad; Midland Valley Railroad; Minneapolis, Northfield and Southern Railway
Missouri Pacific Railroad; Texas and Pacific Railway; Kansas, Oklahoma and Gulf Railway; Denver and Rio Grande Western Railroad; Denver and Salt Lake Railroad; Western Pacific Railroad; Fort Smith and Western Railway
Rock Island-Frisco Railway; Alabama, Tennessee and Northern Railroad; 50% of Trinity and Brazos Valley Railroad; Louisiana and Arkansas Railway; Meridian and Bigbee Railroad
Canadian National; Detroit, Grand Haven and Milwaukee Railway; Grand Trunk Western Railroad
Canadian Pacific; Soo Line; Duluth, South Shore and Atlantic Railway; Mineral Range Railroad
Terminal railroads proposed
There were 100 terminal railroads that were also proposed. Below is a sample:
Toledo Terminal Railroad; Detroit Terminal Railroad; Kankakee & Seneca Railroad
Indianapolis Union Railway; Boston Terminal; Ft. Wayne Union Railway; Norfolk & Portsmouth Belt Line Railroad
Toledo, Angola & Western Railway
Akron and Barberton Belt Railroad; Canton Railroad; Muskegon Railway & Navigation
Philadelphia Belt Line Railroad; Fort Street Union Depot; Detroit Union Railroad Depot & Station; 15 other properties throughout the United States
St. Louis & O'Fallon Railway; Detroit & Western Railway; Flint Belt Railroad; 63 other properties throughout the United States
Youngstown & Northern Railroad; Delray Connecting Railroad; Wyandotte Southern Railroad; Wyandotte Terminal Railroad; South Brooklyn Railway
Plan rejected
Many small railroads failed during the Great Depression of the 1930s. Of those lines that survived, the stronger ones were not interested in supporting the weaker ones. Congress repudiated Ripley's Plan with the Transportation Act of 1940, and the consolidation idea was scrapped.
Racial integration of transport
Although racial discrimination was never a major focus of its efforts, the ICC had to address civil rights issues when passengers filed complaints.
History
April 28, 1941 - In Mitchell v. United States, the United States Supreme Court ruled that discrimination in which a colored man who had paid a first class fare for an interstate journey was compelled to leave that car and ride in a second class car was essentially unjust, and violated the Interstate Commerce Act. The court thus overturned an ICC order dismissing a complaint against an interstate carrier.
June 3, 1946 - In Morgan v. Virginia, the Supreme Court invalidates provisions of the Virginia Code which require the separation of white and colored passengers where applied to interstate bus transport. The state law is unconstitutional insofar as it is burdening interstate commerce, an area of federal jurisdiction.
June 5, 1950 - In Henderson v. United States, the Supreme Court rules to abolish segregation of reserved tables in railroad dining cars. The Southern Railway had reserved tables in such a way as to allocate one table conditionally for blacks and multiple tables for whites; a black passenger traveling first-class was not served in the dining car as the one reserved table was in use. The ICC ruled the discrimination to be an error in judgement on the part of an individual dining car steward; both the United States District Court for the District of Maryland and the Supreme Court disagreed, finding the published policies of the railroad itself to be in violation of the Interstate Commerce Act.
September 1, 1953 - In Sarah Keys v. Carolina Coach Company, Women's Army Corps private Sarah Keys, represented by civil rights lawyer Dovey Johnson Roundtree, becomes the first black person to challenge the "separate but equal" doctrine in bus segregation before the ICC. While the initial ICC reviewing commissioner declined to accept the case, claiming Brown v. Board of Education (1954) "did not preclude segregation in a private business such as a bus company," Roundtree ultimately prevailed in obtaining a review by the full eleven-person commission.
November 7, 1955 – ICC bans bus segregation in interstate travel in Sarah Keys v. Carolina Coach Company. This extends the logic of Brown v. Board of Education, a precedent ending the use of "separate but equal" as a defence against discrimination claims in education, to bus travel across state lines.
December 5, 1960 - In Boynton v. Virginia, the Supreme Court holds that racial segregation in bus terminals is illegal because such segregation violates the Interstate Commerce Act. This ruling, in combination with the ICC's 1955 decision in Keys v. Carolina Coach, effectively outlaws segregation on interstate buses and at the terminals servicing such buses.
September 23, 1961 - The ICC, at Attorney General Robert F. Kennedy's insistence, issues new rules ending discrimination in interstate travel. Effective November 1, 1961, six years after the commission's own ruling in Keys v. Carolina Coach Company, all interstate buses were required to display a certificate that read: "Seating aboard this vehicle is without regard to race, color, creed, or national origin, by order of the Interstate Commerce Commission."
Criticism
The limitation on railroad rates in 1906–07 depreciated the value of railroad securities, a factor in causing the Panic of 1907.
Some economists and historians, such as Milton Friedman, assert that existing railroad interests took advantage of ICC regulations to strengthen their control of the industry and prevent competition, constituting regulatory capture.
Economist David D. Friedman argues that the ICC always served the railroads as a cartelizing agent and used its authority over other forms of transportation to prevent them, where possible, from undercutting the railroads.
In March 1920, the ICC had Eben Moody Boynton, the inventor of the Boynton Bicycle Railroad, committed as a lunatic to an institution in Washington, D.C. Boynton's monorail electric light rail system, it was reported, had the potential to revolutionize transportation, superseding then-current train travel. ICC officials said that they had Boynton committed because he was "worrying them to death" in his promotion of the bicycle railroad. Based on his own testimony and that of a Massachusetts congressman, Boynton won release on May 28, 1920, overcoming testimony of the ICC's chief clerk that Boynton was virtually a daily visitor at ICC offices, seeking Commission adoption of his proposal to revolutionize the railroad industry.
Abolition
Congress passed various deregulation measures in the 1970s and early 1980s which diminished ICC authority, including the Railroad Revitalization and Regulatory Reform Act of 1976 ("4R Act"), the Motor Carrier Act of 1980 and the Staggers Rail Act of 1980. Senator Fred R. Harris of Oklahoma strongly advocated the abolition of the Commission. In December 1995, when most of the ICC's powers had been eliminated or repealed, Congress finally abolished the agency with the ICC Termination Act of 1995. Final Chair Gail McDonald oversaw transferring its remaining functions to a new agency, the U.S. Surface Transportation Board (STB), which reviews mergers and acquisitions, rail line abandonments and railroad corporate filings.
ICC jurisdiction on rail safety (hours of service rules, equipment and inspection standards) was transferred to the Federal Railroad Administration pursuant to the Federal Railroad Safety Act of 1970.
Before the ICC was abolished, motor carriers (bus lines, trucking companies) had safety regulations enforced by the Office of Motor Carriers (OMC) under the Federal Highway Administration (FHWA). The OMC inherited many of the "economic" regulations enforced by the ICC in addition to the safety regulations imposed on motor carriers. In January 2000 the OMC became the Federal Motor Carrier Safety Administration (FMCSA), within the U.S. Department of Transportation. Prior to its abolition, the ICC gave identification numbers to the motor carriers it licensed, generally in the form "ICC MC-000000". When the ICC was dissolved, the function of licensing interstate motor carriers was transferred to FMCSA. All interstate motor carriers that transport freight across state lines have a USDOT number, such as "USDOT 000000." Private carriers that move their own freight, e.g. Walmart, require only a USDOT number; for-hire carriers that haul freight with operating authority must have both a USDOT number and a Motor Carrier (MC) number, which replaced the old ICC numbers.
Legacy
The ICC served as a model for later regulatory efforts. Unlike, for example, state medical boards (historically administered by the doctors themselves), the seven Interstate Commerce Commissioners and their staffs were full-time regulators who could have no economic ties to the industries they regulated. Since 1887, some state and other federal agencies adopted this structure. And, like the ICC, later agencies tended to be organized as multi-headed independent commissions with staggered terms for the commissioners. At the federal level, agencies patterned after the ICC included the Federal Trade Commission (1914), the Federal Communications Commission (1934), the Securities and Exchange Commission (1934), the National Labor Relations Board (1935), the Civil Aeronautics Board (1940), Postal Regulatory Commission (1970) and the Consumer Product Safety Commission (1975).
In recent decades, this regulatory structure of independent federal agencies has gone out of fashion. The agencies created after the 1970s generally have single heads appointed by the President and are divisions inside executive Cabinet Departments (e.g., the Occupational Safety and Health Administration (1970) or the Transportation Security Administration (2002)). The trend is the same at the state level, though it is probably less pronounced.
International influence
The Interstate Commerce Commission had a strong influence on the founders of Australia. The Constitution of Australia provides (§§ 101-104; also § 73) for the establishment of an Inter-State Commission, modeled after the United States' Interstate Commerce Commission. However, these provisions have largely not been put into practice; the Commission existed from 1913 to 1920 and from 1975 to 1989, but never assumed the role which Australia's founders had intended for it.
See also
Airline deregulation in the United States
History of rail transport in the United States
United States administrative law
References
Sources
Further reading
External links
Public Broadcasting Service (PBS). "People & Events: Interstate Commerce Commission." (Notes for the television program The American Experience: Streamliners.)
Historic technical reports from the Interstate Commerce Commission (and other Federal agencies) are available in the Technical Reports Archive and Image Library (TRAIL)
Records of the Interstate Commerce Commission and Surface Transportation Board in the National Archives (Record Group 134)
1887 establishments in the United States
1995 disestablishments in the United States
Defunct organizations based in Washington, D.C.
Government agencies established in 1887
Rail accident investigators
United States administrative law | Interstate Commerce Commission | [
"Technology"
] | 3,589 | [
"Railway accidents and incidents",
"Rail accident investigators"
] |
55,693 | https://en.wikipedia.org/wiki/Turquoise | Turquoise is an opaque, blue-to-green mineral that is a hydrous phosphate of copper and aluminium, with the chemical formula CuAl6(PO4)4(OH)8·4H2O. It is rare and valuable in finer grades and has been prized as a gemstone for millennia due to its hue.
Like most other opaque gems, turquoise has been devalued by the introduction of treatments, imitations, and synthetics into the market. The robin egg blue or sky blue color of the Persian turquoise mined near the modern city of Nishapur, Iran, has been used as a guiding reference for evaluating turquoise quality.
Names
The word turquoise dates to the 17th century and is derived from the Old French turquois meaning "Turkish" because the mineral was first brought to Europe through the Ottoman Empire. However, according to Etymonline, the word dates to the 14th century with the form turkeis, meaning "Turkish", which was replaced with turqueise from French in the 1560s. According to the same source, the gemstone was first brought to Europe from Turkestan or another Turkic territory. Pliny the Elder referred to the mineral as callais (from Ancient Greek ) and the Aztecs knew it as chalchihuitl.
In professional mineralogy, until the mid-19th century, the scientific names kalaite or azure spar were also used, which simultaneously provided a version of the mineral origin of turquoise. However, these terms did not become widespread and gradually fell out of use.
Properties
The finest turquoise reaches a maximum Mohs hardness of just under 6, or slightly more than window glass. Characteristically a cryptocrystalline mineral, turquoise almost never forms single crystals, and all of its properties are highly variable. X-ray diffraction testing shows its crystal system to be triclinic. With lower hardness comes greater porosity. The lustre of turquoise is typically waxy to subvitreous, and its transparency is usually opaque, but may be semitranslucent in thin sections. Colour is as variable as the mineral's other properties, ranging from white to a powder blue to a sky blue and from a blue-green to a yellowish green. The blue is attributed to idiochromatic copper while the green may be the result of iron impurities (replacing copper).
The refractive index of turquoise varies from 1.61 to 1.65 on the three crystal axes, with birefringence 0.040, biaxial positive, as measured from rare single crystals.
Crushed turquoise is soluble in hot hydrochloric acid. Its streak is white to greenish to blue, and its fracture is smooth to conchoidal. Despite its low hardness relative to other gems, turquoise takes a good polish. Turquoise may also be peppered with flecks of pyrite or interspersed with dark, spidery limonite veining.
Turquoise is nearly always cryptocrystalline and massive and assumes no definite external shape. Crystals, even at the microscopic scale, are rare. Typically the form is a vein or fracture filling, nodular, or botryoidal in habit. Stalactite forms have been reported. Turquoise may also pseudomorphously replace feldspar, apatite, other minerals, or even fossils. Odontolite is fossil bone or ivory that has historically been thought to have been altered by turquoise or similar phosphate minerals such as the iron phosphate vivianite. Intergrowth with other secondary copper minerals such as chrysocolla is also common. Turquoise is distinguished from chrysocolla, the only common mineral with similar properties, by its greater hardness.
Turquoise forms a complete solid solution series with chalcosiderite, CuFe6(PO4)4(OH)8·4H2O, in which ferric iron replaces aluminium.
Formation
Turquoise deposits probably form in more than one way. However, a typical turquoise deposit begins with hydrothermal deposition of copper sulfides. This takes place when hydrothermal fluids leach copper from a host rock, which is typically an intrusion of calc-alkaline rock with a moderate to high silica content that is relatively oxidized. The copper is redeposited in more concentrated form as a copper porphyry, in which veins of copper sulfide fill joints and fractures in the rock. Deposition takes place mostly in the potassic alteration zone, which is characterized by conversion of existing feldspar to potassium feldspar and deposition of quartz and micas at high temperature.
Turquoise is a secondary or supergene mineral, not present in the original copper porphyry. It forms when meteoric water (rain or snow melt infiltrating the Earth's surface) percolates through the copper porphyry. Dissolved oxygen in the water oxidizes the copper sulfides to soluble sulfates, and the acidic, copper-laden solution then reacts with aluminum and potassium minerals in the host rock to precipitate turquoise. This typically fills veins in volcanic rock or phosphate-rich sediments. Deposition usually takes place at a relatively low temperature and seems to occur more readily in arid environments.
Turquoise in the Sinai Peninsula is found in lower Carboniferous sandstones overlain by basalt flows and upper Carboniferous limestone. The overlying beds were presumably the source of the copper, which precipitated as turquoise in nodules, horizontal seams, or vertical joints in the sandstone beds. The classical Iranian deposits are found in sandstones and limestones of Tertiary age that were intruded by apatite-rich porphyritic trachytes and mafic rock. Supergene alteration fractured the rock and converted some of the minerals in the rock to alunite, which freed aluminum and phosphate to combine with copper from oxidized copper sulfides to form turquoise. This process took place at a relatively shallow depth, and by 1965 the mines had "bottomed" not far below the surface.
Turquoise deposits are widespread in North America. Some deposits, such as those of Saguache and Conejos Counties in Colorado or the Cerrillos Hills in New Mexico, are typical supergene deposits formed from copper porphyries. The deposits in Cochise County, Arizona, are found in Cambrian quartzites and geologically young granites and extend to considerable depth.
Occurrence
Turquoise was among the first gems to be mined, and many historic sites have been depleted, though some are still worked to this day. These are all small-scale operations, often seasonal owing to the limited scope and remoteness of the deposits. Most are worked by hand with little or no mechanization. However, turquoise is often recovered as a byproduct of large-scale copper mining operations, especially in the United States.
Deposits typically take the form of small veins in partially decomposed volcanic rock in arid climates.
Iran
Iran has been an important source of turquoise for at least 2,000 years. It was initially named by Iranians "pērōzah" meaning "victory", and later the Arabs called it "fayrūzah", which is pronounced in Modern Persian as "fīrūzeh". In Iranian architecture, the blue turquoise was used to cover the domes of palaces because its intense blue colour was also a symbol of heaven on earth.
This deposit is blue naturally and turns green when heated due to dehydration. It is restricted to a mine-riddled region in Nishapur, the mountain peak of Ali-mersai near Mashhad, the capital of Khorasan Province, Iran. Weathered and broken trachyte is host to the turquoise, which is found both in situ between layers of limonite and sandstone and amongst the scree at the mountain's base. These workings are the oldest known, together with those of the Sinai Peninsula. Iran also has turquoise mines in Semnan and Kerman provinces.
Sinai
Since at least the First Dynasty (3000 BCE) in ancient Egypt, and possibly before then, turquoise was used by the Egyptians and was mined by them in the Sinai Peninsula. This region was known as the Country of Turquoise by the native Monitu. There are six mines in the peninsula, all on its southwest coast, spread over a sizeable area. The two most important of these mines, from a historical perspective, are Serabit el-Khadim and Wadi Maghareh, believed to be among the oldest of known mines. The former mine is situated about 4 kilometres from an ancient temple dedicated to the deity Hathor.
The turquoise is found in sandstone that is, or was originally, overlain by basalt. Copper and iron workings are present in the area. Large-scale turquoise mining is not profitable today, but the deposits are sporadically quarried by Bedouin peoples using homemade gunpowder. In the rainy winter months, miners face a risk from flash flooding; even in the dry season, death from the collapse of the haphazardly exploited sandstone mine walls may occur. The colour of Sinai material is typically greener than that of Iranian material but is thought to be stable and fairly durable. Often referred to as "Egyptian turquoise", Sinai material is typically the most translucent, and under magnification, its surface structure is revealed to be peppered with dark blue discs not seen in material from other localities.
United States
The Southwest United States is a significant source of turquoise; Arizona, California (San Bernardino, Imperial, Inyo counties), Colorado (Conejos, El Paso, Lake, Saguache counties), New Mexico (Eddy, Grant, Otero, Santa Fe counties) and Nevada (Clark, Elko, Esmeralda County, Eureka, Lander, Mineral County and Nye counties) are (or were) especially rich. The deposits of California and New Mexico were mined by pre-Columbian Native Americans using stone tools, some local and some from as far away as central Mexico. Cerrillos, New Mexico is thought to be the location of the oldest mines; prior to the 1920s, the state was the country's largest producer; it is more or less exhausted today. Only one mine in California, located at Apache Canyon, operates at a commercial capacity today.
The turquoise occurs as vein or seam fillings, and as compact nuggets; these are mostly small in size. While quite fine material is sometimes found, rivalling Iranian material in both colour and durability, most American turquoise is of a low grade (called "chalk turquoise"); high iron levels mean greens and yellows predominate, and a typically friable consistency in the turquoise's untreated state precludes use in jewelry.
Arizona is currently the most important producer of turquoise by value. Several mines exist in the state, two of them famous for their unique colour and quality and considered the best in the industry: the Sleeping Beauty Mine in Globe and the Kingman Mine. The Sleeping Beauty Mine ceased turquoise mining in August 2012, when the mine chose to send all ore to the crusher and to concentrate on copper production due to the rising price of copper on the world market. The price of natural untreated Sleeping Beauty turquoise has risen dramatically since the mine's closing. The Kingman Mine, as of 2015, still operates alongside a copper mine outside of the city. Other mines include the Blue Bird mine, Castle Dome, and Ithaca Peak, but they are mostly inactive due to the high cost of operations and federal regulations. The Phelps Dodge Lavender Pit mine at Bisbee ceased operations in 1974 and never had a turquoise contractor. All Bisbee turquoise was "lunch pail" mined: it came out of the copper ore mine in miners' lunch pails. Morenci and Turquoise Peak are either inactive or depleted.
Nevada is the country's other major producer, with more than 120 mines which have yielded significant quantities of turquoise. Unlike elsewhere in the US, most Nevada mines have been worked primarily for their gem turquoise and very little has been recovered as a byproduct of other mining operations. Nevada turquoise is found as nuggets, fracture fillings and in breccias as the cement filling interstices between fragments. Because of the geology of the Nevada deposits, a majority of the material produced is hard and dense, being of sufficient quality that no treatment or enhancement is required. While nearly every county in the state has yielded some turquoise, the chief producers are in Lander and Esmeralda counties. Most of the turquoise deposits in Nevada occur along a wide belt of tectonic activity that coincides with the state's zone of thrust faulting. It strikes at a bearing of about 15° and extends from the northern part of Elko County, southward down to the California border southwest of Tonopah. Nevada has produced a wide diversity of colours and mixes of different matrix patterns, with turquoise from Nevada coming in various shades of blue, blue-green, and green. Some of this unusually-coloured turquoise may contain significant zinc and iron, which is the cause of the beautiful bright green to yellow-green shades. Some of the green to green-yellow shades may actually be variscite or faustite, which are secondary phosphate minerals similar in appearance to turquoise. A significant portion of the Nevada material is also noted for its often attractive brown or black limonite veining, producing what is called "spiderweb matrix". While a number of the Nevada deposits were first worked by Native Americans, the total Nevada turquoise production since the 1870s has been substantial, with a large share coming from the Carico Lake mine. In spite of increased costs, small scale mining operations continue at a number of turquoise properties in Nevada, including the Godber, Orvil Jack and Carico Lake mines in Lander County, the Pilot Mountain Mine in Mineral County, and several properties in the Royston and Candelaria areas of Esmeralda County.
In 1912, the first deposit of distinct, single-crystal turquoise was discovered at Lynch Station in Campbell County, Virginia. The crystals, forming a druse over the mother rock, are very small; a crystal of even one millimetre is considered large. Until the 1980s Virginia was widely thought to be the only source of distinct crystals; there are now at least 27 other localities.
In an attempt to recoup profits and meet demand, some American turquoise is treated or enhanced to a certain degree. These treatments include innocuous waxing and more controversial procedures, such as dyeing and impregnation (see Treatments). There are some American mines which produce materials of high enough quality that no treatment or alterations are required. Any such treatments which have been performed should be disclosed to the buyer on sale of the material.
Other sources
Turquoise prehistoric artifacts (beads) are known since the fifth millennium BCE from sites in the Eastern Rhodopes in Bulgaria – the source for the raw material is possibly related to the nearby Spahievo lead–zinc ore field. In Spain, turquoise has been found as a minor mineral in the variscite deposits exploited during prehistoric times in Palazuelos de las Cuevas (Zamora) and in Can Tintorer, Gavá (Barcelona).
China has been a minor source of turquoise for 3,000 years or more. Gem-quality material, in the form of compact nodules, is found in the fractured, silicified limestone of Yunxian and Zhushan, Hubei province. Additionally, Marco Polo reported turquoise found in present-day Sichuan. Most Chinese material is exported, but a few carvings worked in a manner similar to jade exist. In Tibet, gem-quality deposits purportedly exist in the mountains of Derge and Nagari-Khorsum in the east and west of the region respectively.
Other notable localities include: Afghanistan; Australia (Victoria and Queensland); north India; northern Chile (Chuquicamata); Cornwall; Saxony; Silesia; and Turkestan.
History of use
The pastel shades of turquoise have endeared it to many great cultures of antiquity: it has adorned the rulers of Ancient Egypt, the Aztecs (and possibly other Pre-Columbian Mesoamericans), Persia, Mesopotamia, the Indus Valley, and to some extent in ancient China since at least the Shang dynasty. Despite being one of the oldest gems, probably first introduced to Europe (through Turkey) with other Silk Road novelties, turquoise did not become important as an ornamental stone in the West until the 14th century, following a decline in the Roman Catholic Church's influence which allowed the use of turquoise in secular jewellery. It was apparently unknown in India until the Mughal period, and unknown in Japan until the 18th century. A common belief shared by many of these civilizations held that turquoise possessed certain prophylactic qualities; it was thought to change colour with the wearer's health and protect him or her from untoward forces.
The Aztecs viewed turquoise as an embodiment of fire and gave it properties such as heat and smokiness. They inlaid turquoise, together with gold, quartz, malachite, jet, jade, coral, and shells, into provocative (and presumably ceremonial) mosaic objects such as masks (some with a human skull as their base), knives, and shields. Natural resins, bitumen and wax were used to bond the turquoise to the objects' base material; this was usually wood, but bone and shell were also used. Like the Aztecs, the Pueblo, Navajo and Apache tribes cherished turquoise for its amuletic use; the latter tribe believe the stone to afford the archer dead aim. In Navajo culture it is used for "a spiritual protection and blessing." Among these peoples turquoise was used in mosaic inlay, in sculptural works, and was fashioned into toroidal beads and freeform pendants. The Ancestral Puebloans (Anasazi) of the Chaco Canyon and surrounding region are believed to have prospered greatly from their production and trading of turquoise objects. The distinctive silver jewellery produced by the Navajo and other Southwestern Native American tribes today is a rather modern development, thought to date from around 1880 as a result of European influences.
In Persia, turquoise was the de facto national stone for millennia, extensively used to decorate objects (from turbans to bridles), mosques, and other important buildings both inside and out, such as the Medresseh-i Shah Husein Mosque of Isfahan. The Persian style and use of turquoise was later brought to India following the establishment of the Mughal Empire there, its influence seen in high purity gold jewellery (together with ruby and diamond) and in such buildings as the Taj Mahal. Persian turquoise was often engraved with devotional words in Arabic script which was then inlaid with gold.
Cabochons of imported turquoise, along with coral, was (and still is) used extensively in the silver and gold jewellery of Tibet and Mongolia, where a greener hue is said to be preferred. Most of the pieces made today, with turquoise usually roughly polished into irregular cabochons set simply in silver, are meant for inexpensive export to Western markets and are probably not accurate representations of the original style.
The Ancient Egyptian use of turquoise stretches back as far as the First Dynasty and possibly earlier; however, probably the most well-known pieces incorporating the gem are those recovered from Tutankhamun's tomb, most notably the Pharaoh's iconic burial mask which was liberally inlaid with the stone. It also adorned rings and great sweeping necklaces called pectorals. Set in gold, the gem was fashioned into beads, used as inlay, and often carved in a scarab motif, accompanied by carnelian, lapis lazuli, and in later pieces, coloured glass. Turquoise, associated with the goddess Hathor, was so liked by the Ancient Egyptians that it became (arguably) the first gemstone to be imitated, the fair structure created by an artificial glazed ceramic product known as faience.
The French conducted archaeological excavations of Egypt from the mid-19th century through the early 20th. These excavations, including that of Tutankhamun's tomb, created great public interest in the western world, subsequently influencing jewellery, architecture, and art of the time. Turquoise, already favoured for its pastel shades since around 1810, was a staple of Egyptian Revival pieces. In contemporary Western use, turquoise is most often encountered cut en cabochon in silver rings, bracelets, often in the Native American style, or as tumbled or roughly hewn beads in chunky necklaces. Lesser material may be carved into fetishes, such as those crafted by the Zuni. While strong sky blues remain superior in value, mottled green and yellowish material is popular with artisans.
Cultural associations
In many cultures of the Old and New Worlds, this gemstone has been esteemed for thousands of years as a holy stone, a bringer of good fortune or a talisman. The oldest evidence for this claim was found in Ancient Egypt, where grave furnishings with turquoise inlay were discovered, dating from approximately 3000 BCE. In the ancient Persian Empire, the sky-blue gemstones were earlier worn round the neck or wrist as protection against unnatural death. If they changed colour, the wearer was thought to have reason to fear the approach of doom. Meanwhile, it has been discovered that the turquoise certainly can change colour, but that this is not necessarily a sign of impending danger. The change can be caused by the light, or by a chemical reaction brought about by cosmetics, dust or the acidity of the skin.
The goddess Hathor was associated with turquoise, as she was the patroness of Serabit el-Khadim, where it was mined. Her titles included "Lady of Turquoise", "Mistress of Turquoise", and "Lady of Turquoise Country".
In Western culture, turquoise is also the traditional birthstone for those born in the month of December. The turquoise is also a stone in the Jewish High Priest's breastplate, described in Exodus chapter 28. The stone is also considered sacred to the indigenous Zuni and Pueblo peoples of the American Southwest. The pre-Columbian Aztec and Maya also considered it to be a valuable and culturally important stone.
Imitations
The Egyptians were the first to produce an artificial imitation of turquoise, in the glazed earthenware product faience. Later glass and enamel were also used, and in modern times more sophisticated porcelain, plastics, and various assembled, pressed, bonded, and sintered products (composed of various copper and aluminium compounds) have been developed: examples of the latter include "Viennese turquoise", made from precipitated aluminium phosphate coloured by copper oleate; and "neolith", a mixture of bayerite and copper(II) phosphate. Most of these products differ markedly from natural turquoise in both physical and chemical properties, but in 1972 Pierre Gilson introduced one fairly close to a true synthetic (it does differ in chemical composition owing to a binder used, meaning it is best described as a simulant rather than a synthetic). Gilson turquoise is made in both a uniform colour and with black "spiderweb matrix" veining not unlike the natural Nevada material.
The most common imitation of turquoise encountered today is dyed howlite and magnesite, both white in their natural states, and the former also having natural (and convincing) black veining similar to that of turquoise. Dyed chalcedony, jasper, and marble is less common, and much less convincing. Other natural materials occasionally confused with or used in lieu of turquoise include: variscite and faustite; chrysocolla (especially when impregnating quartz); lazulite; smithsonite; hemimorphite; wardite; and a fossil bone or tooth called odontolite or "bone turquoise", coloured blue naturally by the mineral vivianite. While rarely encountered today, odontolite was once mined in large quantities—specifically for its use as a substitute for turquoise—in southern France.
These fakes are detected by gemologists using a number of tests, relying primarily on non-destructive, close examination of surface structure under magnification; a featureless, pale blue background peppered by flecks or spots of whitish material is the typical surface appearance of natural turquoise, while manufactured imitations will appear radically different in both colour (usually a uniform dark blue) and texture (usually granular or sugary). Glass and plastic will have a much greater translucency, with bubbles or flow lines often visible just below the surface. Staining between grain boundaries may be visible in dyed imitations.
Some destructive tests may be necessary; for example, the application of diluted hydrochloric acid will cause the carbonates odontolite and magnesite to effervesce and howlite to turn green, while a heated probe may give rise to the pungent smell so indicative of plastic. Differences in specific gravity, refractive index, light absorption (as evident in a material's absorption spectrum), and other physical and optical properties are also considered as means of separation.
Treatments
Turquoise is treated to enhance both its colour and durability (increased hardness and decreased porosity). As is so often the case with any precious stones, full disclosure about treatment is frequently not given. Gemologists can detect these treatments using a variety of testing methods, some of which are destructive, such as the use of a heated probe applied to an inconspicuous spot, which will reveal oil, wax or plastic treatment.
Waxing and oiling
Historically, light waxing and oiling were the first treatments used in ancient times, providing a wetting effect, thereby enhancing the colour and lustre. This treatment is more or less acceptable by tradition, especially because treated turquoise is usually of a higher grade to begin with. Oiled and waxed stones are prone to "sweating" under even gentle heat or if exposed to too much sun, and they may develop a white surface film or bloom over time. (With some skill, oil and wax treatments can be restored.)
Backing
Since finer turquoise is often found as thin seams, it may be glued to a base of stronger foreign material for reinforcement. These stones are termed "backed", and it is standard practice that all thinly cut turquoise in the Southwestern United States is backed. Native indigenous peoples of this region, because of their considerable use and wearing of turquoise, have found that backing increases the durability of thinly cut slabs and cabochons of turquoise. They observe that if the stone is not backed it will often crack. Backing of turquoise is not widely known outside of the Native American and Southwestern United States jewellery trade. Backing does not diminish the value of high quality turquoise, and indeed the process is expected for most thinly cut American commercial gemstones.
Zachery treatment
A proprietary process was created by electrical engineer and turquoise dealer James E. Zachery in the 1980s to improve the stability of medium to high-grade turquoise. The process can be applied in several ways: either through deep penetration on rough turquoise to decrease porosity, by shallow treatment of finished turquoise to enhance color, or both. The treatment can enhance color and improve the turquoise's ability to take a polish. Such treated turquoise can be distinguished in some cases from natural turquoise, without destruction, by energy-dispersive X-ray spectroscopy, which can detect its elevated potassium levels. In some instances, such as with already high-quality, low-porosity turquoise that is treated only for porosity, the treatment is undetectable.
Dyeing
The use of Prussian blue and other dyes (often in conjunction with bonding treatments) to "enhance" its appearance, to make the colour uniform, or to change it completely, is regarded as fraudulent by some purists, especially since some dyes may fade or rub off on the wearer. Dyes have also been used to darken the veins of turquoise.
Stabilization
Material treated with plastic or water glass is termed "bonded" or "stabilized" turquoise. This process consists of pressure impregnation of otherwise unsaleable chalky American material by epoxy and plastics (such as polystyrene) and water glass (sodium silicate) to produce a wetting effect and improve durability. Plastic and water glass treatments are far more permanent and stable than waxing and oiling, and can be applied to material too chemically or physically unstable for oil or wax to provide sufficient improvement. Conversely, stabilization and bonding are rejected by some as too radical an alteration.
The epoxy binding technique was first developed in the 1950s and has been attributed to Colbaugh Processing of Arizona, a company that still operates today.
Reconstitution
Perhaps the most extreme of treatments is "reconstitution", wherein fragments of fine turquoise material, too small to be used individually, are powdered and then bonded with resin to form a solid mass. Very often the material sold as "reconstituted turquoise" is artificial, with little or no natural stone, made entirely from resins and dyes. In the trade reconstituted turquoise is often called "block turquoise" or simply "block".
Valuation and care
Hardness and richness of colour are two of the major factors in determining the value of turquoise; while colour is a matter of individual taste, generally speaking, the most desirable is a strong sky to robin egg blue (in reference to the eggs of the American robin). Whatever the colour, for many applications, turquoise should not be soft or chalky; even if treated, such lesser material (to which most turquoise belongs) is liable to fade or discolour over time and will not hold up to normal use in jewellery.
The mother rock or matrix in which turquoise is found can often be seen as splotches or a network of brown or black veins running through the stone in a netted pattern; this veining may add value to the stone if the result is complementary, but such a result is uncommon. Such material is sometimes described as "spiderweb matrix"; it is most valued in the Southwest United States and Far East, but is not highly appreciated in the Near East where unblemished and vein-free material is ideal (regardless of how complementary the veining may be). Uniformity of colour is desired, and in finished pieces the quality of workmanship is also a factor; this includes the quality of the polish and the symmetry of the stone. Calibrated stones—that is, stones adhering to standard jewellery setting measurements—may also be more sought after. Like coral and other opaque gems, turquoise is commonly sold at a price according to its physical size in millimetres rather than weight.
Turquoise is treated in many different ways, some more permanent and radical than others. Controversy exists as to whether some of these treatments should be acceptable, but one can be more or less forgiven universally: This is the light waxing or oiling applied to most gem turquoise to improve its colour and lustre; if the material is of high quality to begin with, very little of the wax or oil is absorbed and the turquoise therefore does not rely on this impermanent treatment for its beauty. All other factors being equal, untreated turquoise will always command a higher price. Bonded and reconstituted material is worth considerably less.
Being a phosphate mineral, turquoise is inherently fragile and sensitive to solvents; perfume and other cosmetics will attack the finish and may alter the colour of turquoise gems, as will skin oils, as will most commercial jewellery cleaning fluids. Prolonged exposure to direct sunlight may also discolour or dehydrate turquoise. Care should therefore be taken when wearing such jewels: cosmetics, including sunscreen and hair spray, should be applied before putting on turquoise jewellery, and they should not be worn to a beach or other sun-bathed environment. After use, turquoise should be gently cleaned with a soft cloth to avoid a buildup of residue, and should be stored in its own container to avoid scratching by harder gems. Turquoise can also be adversely affected if stored in an airtight container.
See also
, with a deep blue color
, with a deep blue color
, with a deep blue color
List of minerals
of pale green color due to trivalent chromium (Cr3+)
References
Further reading
External links
Aluminium minerals
Copper(II) minerals
Gemstones
Phosphate minerals
Symbols of New Mexico
Triclinic minerals
Symbols of Arizona
Tetrahydrate minerals
Minerals in space group 2 | Turquoise | [
"Physics"
] | 6,581 | [
"Materials",
"Gemstones",
"Matter"
] |
55,695 | https://en.wikipedia.org/wiki/Mirage | A mirage is a naturally-occurring optical phenomenon in which light rays bend via refraction to produce a displaced image of distant objects or the sky. The word comes to English via the French (se) mirer, from the Latin mirari, meaning "to look at, to wonder at".
Mirages can be categorized as "inferior" (meaning lower), "superior" (meaning higher) and "Fata Morgana", one kind of superior mirage consisting of a series of unusually elaborate, vertically stacked images, which form one rapidly-changing mirage.
In contrast to a hallucination, a mirage is a real optical phenomenon that can be captured on camera, since light rays are actually refracted to form the false image at the observer's location. What the image appears to represent, however, is determined by the interpretive faculties of the human mind. For example, inferior images on land are very easily mistaken for the reflections from a small body of water.
Inferior mirage
In an inferior mirage, the mirage image appears below the real object. The real object in an inferior mirage is the (blue) sky or any distant (therefore bluish) object in that same direction. The mirage causes the observer to see a bright and bluish patch on the ground.
Light rays coming from a particular distant object all travel through nearly the same layers of air, and all are refracted at about the same angle. Therefore, rays coming from the top of the object will arrive lower than those from the bottom. The image is usually upside-down, enhancing the illusion that the sky image seen in the distance is a specular reflection on a puddle of water or oil acting as a mirror.
Although the air layers involved are highly dynamic, the image of the inferior mirage is stable, unlike the Fata Morgana, which can change within seconds. Since warmer air rises while cooler air (being denser) sinks, the layers will mix, causing turbulence. The image will be distorted accordingly; it may vibrate or be stretched vertically (towering) or compressed vertically (stooping). A combination of vibration and extension is also possible. If several temperature layers are present, several mirages may mix, perhaps causing double images. In any case, mirages are usually not larger than about half a degree high (roughly the angular diameter of the Sun and Moon) and come from objects between dozens of meters and a few kilometers away.
Heat haze
Heat haze, also called heat shimmer, refers to the inferior mirage observed when viewing objects through a mass of heated air. Common instances when heat haze occurs include images of objects viewed across asphalt concrete (also known as tarmac) roads and over masonry rooftops on hot days, above and behind fire (as in burning candles, patio heaters, and campfires), and through exhaust gases from jet engines. When appearing on roads due to the hot asphalt, it is often referred to as a "highway mirage". It also occurs in deserts; in that case, it is referred to as a "desert mirage". Both tarmac and sand can become very hot when exposed to the sun, easily becoming far hotter than the air above, enough to make conditions suitable for a mirage.
Convection causes the temperature of the air to vary, and the variation between the hot air at the surface of the road and the denser cool air above it causes a gradient in the refractive index of the air. This produces a blurred shimmering effect, which hinders the ability to resolve the image and increases when the image is magnified through a telescope or telephoto lens.
Light from the sky at a shallow angle to the road is refracted by the index gradient, making it appear as if the sky is reflected by the road's surface. This might appear as a pool of liquid (usually water, but possibly others, such as oil) on the road, as some types of liquid also reflect the sky. The illusion moves into the distance as the observer approaches the miraged object, giving one the same effect as approaching a rainbow.
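A toy numerical model makes the mechanism concrete. The sketch below is illustrative only (the index profile and every parameter are invented for demonstration, not measured values). It traces a slightly downward ray through horizontally stratified air whose refractive index increases with height, using the invariant n(y)·sin(θ) = constant obtained by applying Snell's law across the layers; the ray dips, turns, and climbs again before reaching the ground, which is why the observer receives sky light from near the road surface.

```python
import math

def n(y):
    """Illustrative refractive index profile: slightly lower near the hot road (y in metres)."""
    return 1.000230 + 1e-7 * y

def trace(y0=1.5, tilt_mrad=0.3, dx=1.0, steps=4000):
    """Trace a ray launched slightly downward; return its height at each step.

    Uses the stratified-medium invariant n(y) * sin(theta) = const, where
    theta is measured from the vertical, so a horizontal ray has theta = 90 deg.
    """
    theta0 = math.pi / 2 + tilt_mrad / 1000     # slightly more than 90 deg: heading down
    invariant = n(y0) * math.sin(theta0)
    y, sign = y0, -1.0                          # sign < 0 while the ray descends
    heights = [y]
    for _ in range(steps):
        s = invariant / n(y)                    # sin(theta) at the current height
        if s >= 1.0:                            # turning point: the ray curves back up
            sign, s = 1.0, 1.0 - 1e-12
        y += sign * dx * math.sqrt(1.0 - s * s) / s   # dy/dx = cos(theta)/sin(theta)
        heights.append(y)
    return heights

hs = trace()
print(f"launch {hs[0]:.2f} m, minimum {min(hs):.2f} m, final {hs[-1]:.2f} m")
```

With these invented numbers, a ray launched from eye level at 1.5 m dips to about 1.05 m a few kilometres out, turns, and climbs again without ever touching the road.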
Heat haze is not related to the atmospheric phenomenon of haze.
Superior mirage
A superior mirage is one in which the mirage image appears to be located above the real object. A superior mirage occurs when the air below the line of sight is colder than the air above it. This unusual arrangement is called a temperature inversion. During daytime, the normal temperature gradient of the atmosphere is cold air above warm air. Passing through the temperature inversion, the light rays are bent down, and so the image appears above the true object, hence the name superior.
Superior mirages are quite common in polar regions, especially over large sheets of ice that have a uniform low temperature. Superior mirages also occur at more moderate latitudes, although in those cases they are weaker and tend to be less smooth and stable. For example, a distant shoreline may appear to tower and look higher (and, thus, perhaps closer) than it really is. Because of the turbulence, there appear to be dancing spikes and towers. This type of mirage is also called the Fata Morgana, or hafgerðingar in the Icelandic language.
A superior mirage can be right-side up or upside-down, depending on the distance of the true object and the temperature gradient. Often the image appears as a distorted mixture of up and down parts.
Since the earth is round, if the downward bending curvature of light rays is about the same as the curvature of Earth, light rays can travel large distances, including from beyond the horizon. This was observed and documented in 1596, when a ship in search of the Northeast Passage became stuck in the ice at Novaya Zemlya, above the Arctic Circle. The Sun appeared to rise two weeks earlier than expected; the real Sun had still been below the horizon, but its light rays followed the curvature of Earth. This effect is often called a Novaya Zemlya mirage. For every 111 kilometres (about 69 miles) that light rays travel parallel to Earth's surface, the Sun will appear 1° higher on the horizon. The inversion layer must have just the right temperature gradient over the whole distance to make this possible.
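The distance figure is simple geometry: one degree of arc along Earth's surface is the circumference divided by 360. A quick check in Python (the circumference value is approximate):

```python
EARTH_CIRCUMFERENCE_KM = 40_075   # equatorial circumference, approximately

km_per_degree = EARTH_CIRCUMFERENCE_KM / 360
print(f"~{km_per_degree:.0f} km of surface travel per degree")  # ~111 km
```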
In the same way, ships that are so far away that they should not be visible above the geometric horizon may appear on or even above the horizon as superior mirages. This may explain some stories about flying ships or coastal cities in the sky, as described by some polar explorers. These are examples of so-called Arctic mirages, or hillingar in Icelandic.
If the vertical temperature gradient reaches roughly +0.11 °C per metre of altitude (where the positive sign means the temperature increases at higher altitudes), then horizontal light rays will just follow the curvature of Earth, and the horizon will appear flat. If the gradient is less (as it almost always is), the rays are not bent enough and get lost in space, which is the normal situation of a spherical, convex "horizon".
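Both figures above can be checked with a rough calculation. The sketch below assumes round standard-atmosphere numbers (0 °C air, sea-level refractivity, dry air) and ignores humidity and wavelength effects, so it is an order-of-magnitude estimate rather than a precise refraction model:

```python
import math

# Rough checks of the superior-mirage figures; the standard-atmosphere
# values below (T, refractivity, g, gas constant of dry air) are assumptions.
R_EARTH = 6_371_000.0                      # mean Earth radius in metres
print(f"one degree of arc: {2 * math.pi * R_EARTH / 360 / 1000:.0f} km")

# A horizontal ray follows Earth's curvature when dn/dh = -1/R.  With the
# refractivity (n - 1) proportional to air density, and the hydrostatic
# pressure decrease included, dn/dh = (n-1) * (-g/(Rs*T) - (dT/dh)/T);
# solving that for the temperature gradient dT/dh gives:
g, Rs, T, refractivity = 9.81, 287.0, 273.0, 2.9e-4
dT_dh = T / (R_EARTH * refractivity) - g / Rs
print(f"critical gradient: about +{dT_dh:.2f} C per metre "
      f"(+{100 * dT_dh:.0f} C per 100 m)")
```

Warmer assumed air pushes the critical gradient somewhat higher (toward +0.13 °C per metre at 15 °C), which is why quoted values vary between sources.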
In some situations, distant objects can be elevated or lowered, stretched or shortened with no mirage involved.
Fata Morgana
A Fata Morgana (the name comes from the Italian translation of Morgan le Fay, the fairy, shapeshifting half-sister of King Arthur) is a very complex superior mirage. It appears with alternations of compressed and stretched areas, erect images, and inverted images. A Fata Morgana is also a fast-changing mirage.
Fata Morgana mirages are most common in polar regions, especially over large sheets of ice with a uniform low temperature, but they can be observed almost anywhere. In polar regions, a Fata Morgana may be observed on cold days; in desert areas and over oceans and lakes, a Fata Morgana may be observed on hot days. For a Fata Morgana, temperature inversion has to be strong enough that light rays' curvatures within the inversion are stronger than the curvature of Earth.
The rays will bend and form arcs. An observer needs to be within an atmospheric duct to be able to see a Fata Morgana.
Fata Morgana mirages may be observed from any altitude within Earth's atmosphere, including from mountaintops or airplanes.
Distortions of image and bending of light can produce spectacular effects. In his book Pursuit: The Chase and Sinking of the "Bismarck", Ludovic Kennedy describes an incident that allegedly took place in the Denmark Strait during 1941, following the sinking of the Hood. The Bismarck, while pursued by the British cruisers Norfolk and Suffolk, passed out of sight into a sea mist. Within a matter of seconds, the ship re-appeared, steaming toward the British ships at high speed. In alarm, the cruisers separated, anticipating an imminent attack, and observers from both ships watched in astonishment as the German battleship fluttered, grew indistinct and faded away. Radar watch during these events indicated that the Bismarck had in fact made no change to her course.
Night-time mirages
The conditions for producing a mirage can occur at night as well as during the day. Under some circumstances mirages of astronomical objects and mirages of lights from moving vehicles, aircraft, ships, buildings, etc. can be observed at night.
Mirage of astronomical objects
A mirage of an astronomical object is a naturally occurring optical phenomenon in which light rays are bent to produce distorted or multiple images of an astronomical object. Mirages can be observed for such astronomical objects as the Sun, the Moon, the planets, bright stars, and very bright comets. The most commonly observed are sunset and sunrise mirages.
See also
Atmospheric refraction
Looming and similar refraction phenomena
| Mirage | [
"Physics"
] | 1,982 | [
"Physical phenomena",
"Earth phenomena",
"Optical illusions",
"Optical phenomena",
"Atmospheric optical phenomena"
] |
55,888 | https://en.wikipedia.org/wiki/Trusted%20system | In the security engineering subspecialty of computer science, a trusted system is one that is relied upon to a specified extent to enforce a specified security policy. This is equivalent to saying that a trusted system is one whose failure would break a security policy (if a policy exists that the system is trusted to enforce).
The word "trust" is critical, as it does not carry the meaning that might be expected in everyday usage. A trusted system is one that the user feels safe to use, and trusts to perform tasks without secretly executing harmful or unauthorized programs; trusted computing refers to whether programs can trust the platform to be unmodified from the expected, and whether or not those programs are innocent or malicious or whether they execute tasks that are undesired by the user.
A trusted system can also be seen as a level-based security system where protection is provided and handled according to different levels. This is commonly found in the military, where information is categorized as unclassified (U), confidential (C), secret (S), top secret (TS), and beyond. These also enforce the policies of no read-up and no write-down.
Trusted systems in classified information
A subset of trusted systems ("Division B" and "Division A") implement mandatory access control (MAC) labels, and as such, it is often assumed that they can be used for processing classified information. However, this is generally untrue. There are four modes in which one can operate a multilevel secure system: multilevel, compartmented, dedicated, and system-high modes. The National Computer Security Center's "Yellow Book" specifies that B3 and A1 systems can only be used for processing a strict subset of security labels, and only when operated according to a particularly strict configuration.
Central to the concept of U.S. Department of Defense-style trusted systems is the notion of a "reference monitor", which is an entity that occupies the logical heart of the system and is responsible for all access control decisions. Ideally, the reference monitor is
tamper-proof
always invoked
small enough to be subject to independent testing, the completeness of which can be assured.
The U.S. National Security Agency's 1983 Trusted Computer System Evaluation Criteria (TCSEC), or "Orange Book", defined a set of "evaluation classes" describing the features and assurances that the user could expect from a trusted system.
The dedication of significant system engineering toward minimizing the complexity (not size, as often cited) of the trusted computing base (TCB) is key to the provision of the highest levels of assurance (B3 and A1). This is defined as that combination of hardware, software, and firmware that is responsible for enforcing the system's security policy. An inherent engineering conflict would appear to arise in higher-assurance systems in that, the smaller the TCB, the larger the set of hardware, software, and firmware that lies outside the TCB and is, therefore, untrusted. Although this may lead the more technically naive to sophists' arguments about the nature of trust, the argument confuses the issue of "correctness" with that of "trustworthiness".
TCSEC has a precisely defined hierarchy of six evaluation classes; the highest of these, A1, is featurally identical to B3—differing only in documentation standards. In contrast, the more recently introduced Common Criteria (CC), which derive from a blend of technically mature standards from various NATO countries, provide a tenuous spectrum of seven "evaluation classes" that intermix features and assurances in a non-hierarchical manner, and lack the precision and mathematical stricture of the TCSEC. In particular, the CC tolerate very loose identification of the "target of evaluation" (TOE) and support – even encourage – an inter-mixture of security requirements culled from a variety of predefined "protection profiles." While a case can be made that even the seemingly arbitrary components of the TCSEC contribute to a "chain of evidence" that a fielded system properly enforces its advertised security policy, not even the highest (EAL7) level of the CC can truly provide analogous consistency and stricture of evidentiary reasoning.
The mathematical notions of trusted systems for the protection of classified information derive from two independent but interrelated corpora of work. In 1974, David Bell and Leonard LaPadula of MITRE, under the technical guidance and financial sponsorship of Maj. Roger Schell, Ph.D., of the U.S. Army Electronic Systems Command (Fort Hanscom, MA), devised the Bell–LaPadula model, in which a trustworthy computer system is modeled in terms of objects (passive repositories or destinations for data such as files, disks, or printers) and subjects (active entities that cause information to flow among objects e.g. users, or system processes or threads operating on behalf of users). The entire operation of a computer system can indeed be regarded as a "history" (in the serializability-theoretic sense) of pieces of information flowing from object to object in response to subjects' requests for such flows. At the same time, Dorothy Denning at Purdue University was publishing her Ph.D. dissertation, which dealt with "lattice-based information flows" in computer systems. (A mathematical "lattice" is a partially ordered set, characterizable as a directed acyclic graph, in which the relationship between any two vertices either "dominates", "is dominated by," or neither.) She defined a generalized notion of "labels" that are attached to entities—corresponding more or less to the full security markings one encounters on classified military documents, e.g. TOP SECRET WNINTEL TK DUMBO. Bell and LaPadula integrated Denning's concept into their landmark MITRE technical report—entitled, Secure Computer System: Unified Exposition and Multics Interpretation. They stated that labels attached to objects represent the sensitivity of data contained within the object, while those attached to subjects represent the trustworthiness of the user executing the subject. (However, there can be a subtle semantic difference between the sensitivity of the data within the object and the sensitivity of the object itself.)
The concepts are unified with two properties: the "simple security property" (a subject can only read from an object whose label it dominates; "is greater than or equal to" is a close, albeit mathematically imprecise, reading of "dominates") and the "confinement property", or "*-property" (a subject can only write to an object that dominates it). (These properties are loosely referred to as "no read-up" and "no write-down", respectively.) Jointly enforced, these properties ensure that information cannot flow "downhill" to a repository where insufficiently trustworthy recipients may discover it. By extension, assuming that the labels assigned to subjects are truly representative of their trustworthiness, then the no read-up and no write-down rules rigidly enforced by the reference monitor are sufficient to constrain Trojan horses, one of the most general classes of attacks (viz., the popularly reported worms and viruses are specializations of the Trojan horse concept).
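A minimal sketch of these two checks, assuming a toy label made of a hierarchical level plus a set of compartments, follows. The class, function names and example labels are illustrative only; they are not the interface of any fielded trusted system:

```python
# Illustrative Bell-LaPadula checks; labels and names are assumed for
# this sketch, not drawn from any particular trusted system.

LEVELS = {"U": 0, "C": 1, "S": 2, "TS": 3}   # unclassified .. top secret

class Label:
    def __init__(self, level, compartments=()):
        self.level = LEVELS[level]
        self.compartments = frozenset(compartments)

    def dominates(self, other):
        """Lattice ordering: higher-or-equal level AND a superset of
        compartments (Denning's partially ordered labels)."""
        return (self.level >= other.level
                and self.compartments >= other.compartments)

def may_read(subject, obj):
    # Simple security property ("no read-up"): subject must dominate object.
    return subject.dominates(obj)

def may_write(subject, obj):
    # *-property ("no write-down"): object must dominate subject.
    return obj.dominates(subject)

analyst = Label("S", {"WNINTEL"})
report  = Label("TS", {"WNINTEL", "TK"})
print(may_read(analyst, report))    # False: reading up is refused
print(may_write(analyst, report))   # True: writing up preserves secrecy
```

Note that writing "up" is permitted: secrecy models do not stop a low subject from writing into a high object, which is exactly the gap the integrity models discussed next address.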
The Bell–LaPadula model technically enforces only "confidentiality", or "secrecy", controls, i.e. it addresses the problem of the sensitivity of objects and the attendant trustworthiness of subjects not to disclose it inappropriately. The dual problem of "integrity" (i.e. the problem of accuracy, or even provenance, of objects) and the attendant trustworthiness of subjects not to modify or destroy it inappropriately is addressed by mathematically affine models, the most important of which is named for its creator, K. J. Biba. Other integrity models include the Clark-Wilson model, Shockley and Schell's program integrity model, and the SeaView model.
An important feature of MACs is that they are entirely beyond the control of any user. The TCB automatically attaches labels to any subjects executed on behalf of users and to the files they access or modify. In contrast, an additional class of controls, termed discretionary access controls (DACs), are under the direct control of system users. Protection mechanisms such as permission bits (supported by UNIX since the late 1960s and, in a more flexible and powerful form, by Multics since earlier still) and access control lists (ACLs) are familiar examples of DACs.
The behavior of a trusted system is often characterized in terms of a mathematical model, which may be more or less rigorous depending upon applicable operational and administrative constraints. The model takes the form of a finite-state machine (FSM) with state criteria, state transition constraints (a set of "operations" that correspond to state transitions), and a descriptive top-level specification (DTLS) entailing a user-perceptible interface such as an API, a set of system calls in UNIX, or system exits in mainframes. Each element of the aforementioned engenders one or more model operations.
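As a toy illustration of that modelling style, the sketch below uses entirely hypothetical state contents and rules; it shows only the essential pattern, in which an operation corresponds to a state transition that is committed only if the successor state still satisfies the security invariant:

```python
# Toy finite-state model: an operation is allowed only if the resulting
# state still satisfies the security invariant (hypothetical example rules).

def secure(state):
    # State criterion: every current access must obey "no read-up".
    return all(subj_lvl >= obj_lvl for subj_lvl, obj_lvl in state["reads"])

def request_read(state, subj_lvl, obj_lvl):
    """State transition constraint: tentatively apply the operation and
    commit it only if the successor state is still secure."""
    candidate = {"reads": state["reads"] | {(subj_lvl, obj_lvl)}}
    return candidate if secure(candidate) else state   # refuse, keep old state

state = {"reads": frozenset()}
state = request_read(state, subj_lvl=2, obj_lvl=1)   # granted
state = request_read(state, subj_lvl=1, obj_lvl=3)   # refused
print(state["reads"])   # only the permitted access was recorded
```

If every operation preserves the invariant and the initial state satisfies it, every reachable state is secure, which is the inductive argument such models are built to support.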
Trusted systems in trusted computing
The Trusted Computing Group creates specifications that are meant to address particular requirements of trusted systems, including attestation of configuration and safe storage of sensitive information.
Trusted systems in policy analysis
In the context of national or homeland security, law enforcement, or social control policy, trusted systems provide conditional prediction about the behavior of people or objects prior to authorizing access to system resources. For example, trusted systems include the use of "security envelopes" in national security and counterterrorism applications, "trusted computing" initiatives in technical systems security, and credit or identity scoring systems in financial and anti-fraud applications. In general, they include any system in which
probabilistic threat or risk analysis is used to assess "trust" for decision-making before authorizing access or for allocating resources against likely threats (including their use in the design of systems constraints to control behavior within the system); or
deviation analysis or systems surveillance is used to ensure that behavior within systems complies with expected or authorized parameters.
The widespread adoption of these authorization-based security strategies (where the default state is DEFAULT=DENY) for counterterrorism, anti-fraud, and other purposes is helping accelerate the ongoing transformation of modern societies from a notional Beccarian model of criminal justice based on accountability for deviant actions after they occur to a Foucauldian model based on authorization, preemption, and general social compliance through ubiquitous preventative surveillance and control through system constraints.
In this emergent model, "security" is not geared towards policing but to risk management through surveillance, information exchange, auditing, communication, and classification. These developments have led to general concerns about individual privacy and civil liberty, and to a broader philosophical debate about appropriate social governance methodologies.
Trusted systems in information theory
Trusted systems in the context of information theory are based on the following definition:
In information theory, information has nothing to do with knowledge or meaning; it is simply that which is transferred from source to destination, using a communication channel. If, before transmission, the information is available at the destination, then the transfer is zero. Information received by a party is that which the party does not expect—as measured by the uncertainty of the party as to what the message will be.
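That notion of transfer can be made quantitative. In the small sketch below (the message distributions are invented for illustration), the information a receiver can gain is the Shannon entropy of its prior expectation, which is zero when the message is already known:

```python
import math

# "Information received ... is measured by the uncertainty of the party as
# to what the message will be": Shannon entropy of the receiver's prior.
# The example distributions are made up for illustration.
def entropy_bits(probs):
    return sum(p * math.log2(1 / p) for p in probs if 0 < p < 1)

print(entropy_bits([0.5, 0.5]))   # 1.0 bit: genuinely uncertain message
print(entropy_bits([1.0]))        # 0: already known, so zero transfer
```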
Likewise, trust as defined by Gerck, has nothing to do with friendship, acquaintances, employee-employer relationships, loyalty, betrayal and other overly-variable concepts. Trust is not taken in the purely subjective sense either, nor as a feeling or something purely personal or psychological—trust is understood as something potentially communicable. Further, this definition of trust is abstract, allowing different instances and observers in a trusted system to communicate based on a common idea of trust (otherwise communication would be isolated in domains), where all necessarily different subjective and intersubjective realizations of trust in each subsystem (man and machines) may coexist.
Taken together in the model of information theory, "information is what you do not expect" and "trust is what you know". Linking both concepts, trust is seen as "qualified reliance on received information". In terms of trusted systems, an assertion of trust cannot be based on the record itself, but on information from other information channels. The deepening of these questions leads to complex conceptions of trust, which have been thoroughly studied in the context of business relationships. It also leads to conceptions of information where the "quality" of information integrates trust or trustworthiness in the structure of the information itself and of the information system(s) in which it is conceived—higher quality in terms of particular definitions of accuracy and precision means higher trustworthiness.
An example of the calculus of trust is "If I connect two trusted systems, are they more or less trusted when taken together?".
The IBM Federal Software Group has suggested that "trust points" provide the most useful definition of trust for application in an information technology environment, because it is related to other information theory concepts and provides a basis for measuring trust. In a network-centric enterprise services environment, such a notion of trust is considered to be requisite for achieving the desired collaborative, service-oriented architecture vision.
See also
Accuracy and precision
Computer security
Data quality
Information quality
Trusted Computing
| Trusted system | [
"Engineering"
] | 2,747 | [
"Cybersecurity engineering",
"Computational trust"
] |
55,904 | https://en.wikipedia.org/wiki/Belgrade | Belgrade is the capital and largest city of Serbia. It is located at the confluence of the Sava and Danube rivers and at the crossroads of the Pannonian Plain and the Balkan Peninsula. The population of the Belgrade metropolitan area is 1,685,563 according to the 2022 census. It is one of the major cities of Southeast Europe and the third most populous city on the Danube river.
Belgrade is one of the oldest continuously inhabited cities in Europe and the world. One of the most important prehistoric cultures of Europe, the Vinča culture, evolved within the Belgrade area in the 6th millennium BC. In antiquity, Thraco-Dacians inhabited the region and, after 279 BC, Celts settled the city, naming it Singidūn. It was conquered by the Romans under the reign of Augustus and awarded Roman city rights in the mid-2nd century. It was settled by the Slavs in the 520s, and changed hands several times between the Byzantine Empire, the Frankish Empire, the Bulgarian Empire, and the Kingdom of Hungary before it became the seat of the Serbian king Stefan Dragutin in 1284. Belgrade served as capital of the Serbian Despotate during the reign of Stefan Lazarević, and then his successor Đurađ Branković returned it to the Hungarian king in 1427. Noon bells in support of the Hungarian army against the Ottoman Empire during the siege in 1456 have remained a widespread church tradition to this day. In 1521, Belgrade was conquered by the Ottomans and became the seat of the Sanjak of Smederevo. It frequently passed from Ottoman to Habsburg rule, which saw the destruction of most of the city during the Ottoman–Habsburg wars.
Following the Serbian Revolution, Belgrade was once again named the capital of Serbia in 1841. Northern Belgrade remained the southernmost Habsburg post until 1918, when it was attached to the city, due to former Austro-Hungarian territories becoming part of the new Kingdom of Serbs, Croats and Slovenes after World War I. Belgrade was the capital of Yugoslavia from its creation to its dissolution. In a fatally strategic position, the city has been battled over in 115 wars and razed 44 times, being bombed five times and besieged many times.
Being Serbia's primate city, Belgrade has special administrative status within Serbia. It is the seat of the central government, administrative bodies, and government ministries, as well as home to almost all of the largest Serbian companies, media, and scientific institutions. Belgrade is classified as a Beta-Global City. The city is home to the University Clinical Centre of Serbia, a hospital complex with one of the largest capacities in the world; the Church of Saint Sava, one of the largest Orthodox church buildings; and the Belgrade Arena, one of the largest capacity indoor arenas in Europe.
Belgrade hosted major international events such as the Danube River Conference of 1948, the first Non-Aligned Movement Summit (1961), the first major gathering of the OSCE (1977–1978), and the Eurovision Song Contest (2008), as well as sports events such as the first FINA World Aquatics Championships (1973), UEFA Euro 1976, the 2009 Summer Universiade and EuroBasket three times (1961, 1975, 2005). On 21 June 2023, Belgrade was confirmed as the host of the BIE Specialised Exhibition Expo 2027.
History
Prehistory
Chipped stone tools found in Zemun show that the area around Belgrade was inhabited by nomadic foragers in the Palaeolithic and Mesolithic eras. Some of these tools are of Mousterian industry—belonging to Neanderthals rather than modern humans. Aurignacian and Gravettian tools have also been discovered near the area, indicating some settlement between 50,000 and 20,000 years ago.
The first farming people to settle in the region are associated with the Neolithic Starčevo culture, which flourished between 6200 and 5200 BC. There are several Starčevo sites in and around Belgrade, including the eponymous site of Starčevo. The Starčevo culture was succeeded by the Vinča culture (5500–4500 BC), a more sophisticated farming culture that grew out of the earlier Starčevo settlements and was also named for a site in the Belgrade region (Vinča-Belo Brdo). The Vinča culture is known for its very large settlements, among the earliest continuously inhabited settlements and some of the largest in prehistoric Europe. Also associated with the Vinča culture are anthropomorphic figurines such as the Lady of Vinča, the earliest known copper metallurgy in Europe, and a proto-writing form developed prior to the Sumerians and Minoans known as the Old European script, which dates back to around 5300 BC. Within the city proper, on Cetinjska Street, a skull of a Paleolithic human dated to before 5000 BC was discovered in 1890.
Antiquity
Evidence of early knowledge about Belgrade's geographical location comes from a variety of ancient myths and legends. The ridge overlooking the confluence of the Sava and Danube rivers, for example, has been identified as one of the places in the story of Jason and the Argonauts. In the time of antiquity, too, the area was populated by Paleo-Balkan tribes, including the Thracians and the Dacians, who ruled much of Belgrade's surroundings. Specifically, Belgrade was at one point inhabited by the Thraco-Dacian tribe Singi; following the Celtic invasion in 279 BC, the Scordisci wrested the city from their hands, naming it Singidūn (from dūn, 'fortress'). In 34–33 BC, the Roman army reached Belgrade. It became the romanised Singidunum in the 1st century AD and, by the mid-2nd century, the city was proclaimed a municipium by the Roman authorities, evolving into a full-fledged colonia (the highest city class) by the end of the century. While the first Christian Emperor of Rome—Constantine I, also known as Constantine the Great—was born in the territory of Naissus to the city's south, Roman Christianity's champion, Flavius Iovianus (Jovian/Jovan), was born in Singidunum. Jovian reestablished Christianity as the official religion of the Roman Empire, ending the brief revival of traditional Roman religions under his predecessor Julian the Apostate. In 395 AD, the site passed to the Eastern Roman or Byzantine Empire. Across the Sava from Singidunum was the Celtic city of Taurunum (Zemun); the two were connected with a bridge throughout Roman and Byzantine times.
Middle Ages
In 442, the area was ravaged by Attila the Hun. In 471, it was taken by Theodoric the Great, king of the Ostrogoths, who continued into Italy. As the Ostrogoths left, another Germanic tribe, the Gepids, invaded the city. In 539, it was retaken by the Byzantines. In 577, some 100,000 Slavs poured into Thrace and Illyricum, pillaging cities and more permanently settling the region.
The Avars, under Bayan I, conquered the whole region and its new Slavic population by 582. Following the Byzantine reconquest, the Byzantine chronicle De Administrando Imperio mentions the White Serbs, who had stopped in Belgrade on their way back home, asking the strategos for lands; they received provinces in the west, towards the Adriatic, which they would rule as subjects to Heraclius (610–641). In 829, Khan Omurtag was able to add Singidunum and its environs to the First Bulgarian Empire. The first record of the name Belograd appeared on 16 April 878, in a Papal missive to the Bulgarian ruler Boris I. This name would appear in several variants: Alba Bulgarica in Latin, Griechisch Weissenburg in High German, Nándorfehérvár in Hungarian, and Castelbianco in Venetian, among other names, all variations of 'white fortress' or 'Bulgar white fortress'. For about four centuries, the city would become a battleground between the Byzantine Empire, the medieval Kingdom of Hungary, and the Bulgarian Empire. Basil II (976–1025) installed a garrison in Belgrade. The city hosted the armies of the First and the Second Crusade, but, while passing through during the Third Crusade, Frederick Barbarossa and his 190,000 crusaders saw Belgrade in ruins.
King Stefan Dragutin (r. 1276–1282) received Belgrade from his father-in-law, Stephen V of Hungary, in 1284, and it served as the capital of the Kingdom of Syrmia, a vassal state to the Kingdom of Hungary. Dragutin (Hungarian: Dragutin István) is regarded as the first Serbian king to rule over Belgrade.
Following the battles of Maritsa (1371) and Kosovo field (1389), Moravian Serbia, to Belgrade's south, began to fall to the Ottoman Empire.
The northern regions of what is now Serbia persisted as the Serbian Despotate, with Belgrade as its capital. The city flourished under Stefan Lazarević, the son of Serbian prince Lazar Hrebeljanović. Lazarević built a castle with a citadel and towers, of which only the Despot's tower and the west wall remain. He also refortified the city's ancient walls, allowing the Despotate to resist Ottoman conquest for almost 70 years. During this time, Belgrade was a haven for many Balkan peoples fleeing Ottoman rule, and is thought to have had a population ranging between 40,000 and 50,000 people.
In 1427, Stefan's successor Đurađ Branković, returning Belgrade to the Hungarian king, made Smederevo his new capital. Even though the Ottomans had captured most of the Serbian Despotate, Belgrade, known as Nándorfehérvár in Hungarian, was unsuccessfully besieged in 1440 and again in 1456. As the city presented an obstacle to the Ottoman advance into Hungary and further, over 100,000 Ottoman soldiers besieged it in 1456; the Christian army, led by the Hungarian general John Hunyadi, successfully defended it. The noon bell ordered by Pope Callixtus III commemorates the victory throughout the Christian world to this day, and is now a cultural symbol of Hungary.
Ottoman rule and Austrian invasions
Seven decades after the initial siege, on 28 August 1521, the fort was finally captured by Suleiman the Magnificent with 250,000 Turkish soldiers and over 100 ships. Subsequently, most of the city was razed to the ground and its entire Orthodox Christian population was deported to Istanbul to an area that has since become known as the Belgrade forest.
Belgrade was made the seat of the Pashalik of Belgrade (also known as the Sanjak of Smederevo), and quickly became the second largest Ottoman town in Europe at over 100,000 people, surpassed only by Constantinople. Ottoman rule introduced Ottoman architecture, including numerous mosques, and the city was resurrected—now by Oriental influences.
In 1594, a major Serb rebellion was crushed by the Ottomans. In retribution, Grand Vizier Sinan Pasha ordered the relics of Saint Sava to be publicly torched on the Vračar plateau; in the 20th century, the church of Saint Sava was built to commemorate this event.
Occupied by the Habsburgs three times (1688–1690, 1717–1739, 1789–1791), headed by the Holy Roman Princes Maximilian of Bavaria and Eugene of Savoy, and field marshal Baron Ernst Gideon von Laudon, respectively, Belgrade was quickly recaptured by the Ottomans and substantially razed each time. During this period, the city was affected by the two Great Serbian Migrations, in which hundreds of thousands of Serbs, led by two Serbian Patriarchs, retreated together with the Austrian soldiers into the Habsburg Empire, settling in today's Vojvodina and Slavonia.
Principality and Kingdom of Serbia
At the beginning of the 19th century, Belgrade was predominantly inhabited by a Muslim population. Traces of Ottoman rule and architecture, such as mosques and bazaars, remained a prominent part of Belgrade's townscape into the 19th century, even several decades after Serbia was granted autonomy from the Ottoman Empire.
During the First Serbian Uprising, Serbian revolutionaries held the city from 8 January 1807 until 1813, when it was retaken by the Ottomans. In 1807, Turks in Belgrade were massacred or forcefully converted to Christianity, a campaign encouraged by Russia in order to cement divisions between the Serb rebels and the Porte. Around 6,000 Muslims and Jews were forcibly converted to Christianity, and most mosques were converted into churches. Muslims, Jews, Aromanians and Greeks were subjected to forced labour, and Muslim women were widely made available to young Serb men, with some taken into slavery; Milenko Stojković bought many of them and established his harem, for which he gained notoriety. In these circumstances, Belgrade was demographically transformed from an Ottoman into a Serb city. After the Second Serbian Uprising in 1815, Serbia achieved a degree of sovereignty, which was formally recognised by the Porte in 1830.
The development of Belgrade architecture after 1815 can be divided into four periods. In the first phase, which lasted from 1815 to 1835, the dominant architectural style was still of a Balkan character, with substantial Ottoman influence. At the same time, an interest in joining the European mainstream allowed Central and Western European architecture to flourish. Between 1835 and 1850, the amount of neoclassicist and baroque buildings south of the Austrian border rose considerably, exemplified by St Michael's Cathedral (Serbian: Saborna crkva), completed in 1840. Between 1850 and 1875, new architecture was characterised by a turn towards the newly popular Romanticism, along with older European architectural styles. Typical of Central European cities in the last quarter of the 19th century, the fourth phase was characterised by an eclecticist style based on the Renaissance and Baroque periods.
In 1841, Prince Mihailo Obrenović moved the capital of the Principality of Serbia from Kragujevac to Belgrade. During his first reign (1815–1839), Prince Miloš Obrenović pursued the expansion of the city's population through the addition of new settlements, and succeeded in making Belgrade the centre of the Principality's administrative, military and cultural institutions. His project of creating a new market space (the Abadžijska čaršija), however, was less successful; trade continued to be conducted in the centuries-old Donja čaršija and Gornja čaršija. Still, new construction projects were typical for the Christian quarters as the older Muslim quarters declined; from Serbia's autonomy until 1863, the number of Belgrade quarters even decreased, mainly as a consequence of the gradual disappearance of the city's Muslim population. An Ottoman city map from 1863 counts only 9 Muslim quarters (mahalas). The names of only five such neighbourhoods are known today: Ali-pašina, Reis-efendijina, Jahja-pašina, Bajram-begova, and Laz Hadži-Mahmudova. Following the Čukur Fountain incident, Belgrade was bombed by the Ottomans.
On 18 April 1867, the Ottoman government ordered the withdrawal from Kalemegdan of the Ottoman garrison, which since 1826 had been the last representation of Ottoman suzerainty in Serbia. The forlorn Porte's only stipulation was that the Ottoman flag continue to fly over the fortress alongside the Serbian one. Serbia's de facto independence dates from this event. In the following years, urban planner Emilijan Josimović had a significant influence on Belgrade. He conceptualised a regulation plan for the city in 1867, in which he proposed the replacement of the town's crooked streets with a grid plan. Of great importance also was the construction of independent Serbian political and cultural institutions, as well as the city's now-plentiful parks. Pointing to Josimović's work, Serbian scholars have noted an important break with Ottoman traditions. However, Istanbul—the capital city of the state to which Belgrade and Serbia de jure still belonged—underwent similar changes.
In May 1868, knez Mihailo was assassinated with his cousin Anka Konstantinović while riding in a carriage in his country residence.
With the Principality's full independence in 1878 and its transformation into the Kingdom of Serbia in 1882, Belgrade once again became a key city in the Balkans, and developed rapidly. Nevertheless, conditions in Serbia remained those of an overwhelmingly agrarian country, even with the opening of a railway to Niš, Serbia's second city. In 1900, the capital had only 70,000 inhabitants (at the time Serbia numbered 2.5 million). Still, by 1905, the population had grown to more than 80,000 and, by the outbreak of World War I in 1914, it had surpassed 100,000 citizens, disregarding Zemun, which still belonged to Austria-Hungary.
The first-ever projection of motion pictures in the Balkans and Central Europe was held in Belgrade in June 1896 by André Carr, a representative of the Lumière brothers. He shot the first motion pictures of Belgrade in the next year; however, they have not been preserved. The first permanent cinema was opened in 1909 in Belgrade.
World War I: Austro–German invasion
The First World War began on 28 July 1914 when Austria-Hungary declared war on Serbia. Most of the subsequent Balkan offensives occurred near Belgrade. Austro-Hungarian monitors shelled Belgrade on 29 July 1914, and it was taken by the Austro-Hungarian Army under General Oskar Potiorek on 1 December. On 16 December, it was re-taken by Serbian troops under Marshal Radomir Putnik. After a prolonged battle which destroyed much of the city, starting on 6 October 1915, Belgrade fell to German and Austro-Hungarian troops commanded by Field Marshal August von Mackensen on 9 October of the same year. The city was liberated by Serbian and French troops on 1 November 1918, under the command of Marshal Louis Franchet d'Espèrey of France and Crown Prince Alexander of Serbia. Belgrade, devastated as a front-line city, lost the title of largest city in the Kingdom to Subotica for some time.
Kingdom of Yugoslavia
After the war, Belgrade became the capital of the new Kingdom of Serbs, Croats and Slovenes, renamed the Kingdom of Yugoslavia in 1929. The Kingdom was split into banovinas and Belgrade, together with Zemun and Pančevo, formed a separate administrative unit.
During this period, the city experienced fast growth and significant modernisation. Belgrade's population grew to 239,000 by 1931 (with the inclusion of Zemun), and to 320,000 by 1940. The population growth rate between 1921 and 1948 averaged 4.08% a year.
In 1927, Belgrade's first airport opened, and in 1929, its first radio station began broadcasting. The Pančevo Bridge, which crosses the Danube, was opened in 1935, while the King Alexander Bridge over the Sava was opened in 1934. On 3 September 1939, the first Belgrade Grand Prix, the last Grand Prix motor racing event before the outbreak of World War II, was held around the Belgrade Fortress and watched by 80,000 spectators. The winner was Tazio Nuvolari.
World War II: German invasion
On 25 March 1941, the government of regent Crown Prince Paul signed the Tripartite Pact, joining the Axis powers in an effort to stay out of the Second World War and keep Yugoslavia neutral during the conflict. This was immediately followed by mass protests in Belgrade and a military coup d'état led by Air Force commander General Dušan Simović, who proclaimed King Peter II to be of age to rule the realm. As a result, the city was heavily bombed by the Luftwaffe on 6 April 1941, killing up to 2,274 people. Yugoslavia was then invaded by German, Italian, Hungarian, and Bulgarian forces. Belgrade was captured by subterfuge, with six German soldiers led by their officer Fritz Klingenberg feigning threatening size, forcing the city to capitulate.
Belgrade was more directly occupied by the German Army in the same month and became the seat of the puppet Nedić regime, headed by its namesake general. Some of today's parts of Belgrade were incorporated in the Independent State of Croatia in occupied Yugoslavia, another puppet state, where Ustashe regime carried out the Genocide of Serbs.
During the summer and autumn of 1941, in reprisal for guerrilla attacks, the Germans carried out several massacres of Belgrade citizens; in particular, members of the Jewish community were subject to mass shootings at the order of General Franz Böhme, the German Military Governor of Serbia. Böhme rigorously enforced the rule that for every German killed, 100 Serbs or Jews would be shot. Belgrade became the first city in Europe to be declared by the Nazi occupation forces to be judenfrei. The resistance movement in Belgrade was led by Major Žarko Todorović from 1941 until his arrest in 1943.
Just like Rotterdam, which was devastated twice by both German and Allied bombing, Belgrade was bombed once more during World War II, this time by the Allies on 16 April 1944, killing at least 1,100 people. This bombing fell on the Orthodox Christian Easter. Most of the city remained under German occupation until 20 October 1944, when it was liberated by the Red Army and the Communist Yugoslav Partisans.
On 29 November 1945, Marshal Josip Broz Tito proclaimed the Federal People's Republic of Yugoslavia in Belgrade (later renamed to Socialist Federal Republic of Yugoslavia on 7 April 1963).
Socialist Yugoslavia
When the war ended, the city was left with 11,500 demolished housing units. During the post-war period, Belgrade grew rapidly as the capital of the renewed Yugoslavia, developing as a major industrial centre.
In 1948, construction of New Belgrade started. In 1958, Belgrade's first television station began broadcasting. In 1961, Belgrade hosted the first and founding conference of the Non-Aligned Movement under Tito's chairmanship. In 1962, Belgrade Nikola Tesla Airport was built. In 1968, major student protests led to several street clashes between students and the police.
In 1972, Belgrade faced a smallpox outbreak, the last major outbreak of smallpox in Europe since World War II. Between October 1977 and March 1978, the city hosted the first major gathering of the Organization for Security and Co-operation in Europe, with the aim of implementing the Helsinki Accords, while in 1980 Belgrade hosted the UNESCO General Conference. Josip Broz Tito died in May 1980, and his funeral in Belgrade was attended by high officials and state delegations from 128 of the 154 members of the United Nations, making it one of the largest funerals in history.
Breakup of Yugoslavia
On 9 March 1991, massive demonstrations led by Vuk Drašković were held in the city against Slobodan Milošević. According to various media outlets, there were between 100,000 and 150,000 people on the streets. Two people were killed, 203 were injured and 108 were arrested during the protests, and later that day tanks were deployed onto the streets to restore order. Many anti-war protests were held in Belgrade, with the largest protests being dedicated to solidarity with the victims from the besieged Sarajevo. Further anti-government protests were held in Belgrade from November 1996 to February 1997 against the same government after alleged electoral fraud in local elections. These protests brought Zoran Đinđić to power, the first mayor of Belgrade since World War II who did not belong to the League of Communists of Yugoslavia or its later offshoot, the Socialist Party of Serbia.
In 1999, during the Kosovo War, the NATO bombing campaign targeted a number of buildings in Belgrade. Among the sites bombed were some ministry buildings, the RTS building, hospitals, Hotel Jugoslavija, the Central Committee building, Avala Tower, and the Chinese embassy. Between 500 and 2,000 civilians were killed in Serbia and Montenegro as a result of the NATO bombings, of whom 47 were killed in Belgrade. After the Yugoslav Wars, Serbia became home to the highest number of refugees and internally displaced persons in Europe, with more than a third of these refugees having settled in Belgrade.
After the 2000 presidential elections, Belgrade was the site of major public protests, with over half a million people taking part. These demonstrations resulted in the ousting of president Milošević as a part of the Otpor movement.
Development
In 2014, Belgrade Waterfront, an urban renewal project, was initiated by the Government of Serbia and its Emirati partner, Eagle Hills Properties. Around €3.5 billion was to be jointly invested by the Serbian government and their Emirati partners. The project includes office and luxury apartment buildings, five-star hotels, a shopping mall and the envisioned 'Belgrade Tower'. The project is, however, quite controversial—there are a number of uncertainties regarding its funding, necessity, and its architecture's arguable lack of harmony with the rest of the city.
In addition to Belgrade Waterfront, the city is under rapid development and reconstruction, especially in the area of Novi Beograd, where (as of 2020) apartment and office buildings were under construction to support the burgeoning Belgrade IT sector, now one of Serbia's largest economic players. In September 2020, there were around 2,000 active construction sites in Belgrade. The city budget for 2023 stood at 205.5 billion dinars (about 1.75 billion euros), and the budget for 2024 has been estimated at more than 2 billion euros.
Geography
Topography
Belgrade lies at the confluence of the Danube and Sava rivers. The historical core of Belgrade, Kalemegdan, lies on the right banks of both rivers. Since the 19th century, the city has been expanding to the south and east; after World War II, New Belgrade was built on the left bank of the Sava river, connecting Belgrade with Zemun. Smaller, chiefly residential communities across the Danube, like Krnjača, Kotež and Borča, also merged with the city, while Pančevo, a heavily industrialised satellite city, remains separate. The city's compact urban area is surrounded by a considerably larger metropolitan territory.
On the right bank of the Sava, central Belgrade has hilly terrain; the highest point of Belgrade proper is Torlak hill. The mountains of Avala and Kosmaj lie south of the city. Across the Sava and Danube, the land is mostly flat, consisting of alluvial plains and loessial plateaus.
One of the characteristics of the city terrain is mass wasting. On the territory covered by the General Urban Plan there are 1,155 recorded mass wasting points, out of which 602 are active and 248 are labeled as 'high risk'. They cover almost 30% of the city territory and include several types of mass wasting. Downhill creeps are located on the slopes above the rivers, mostly on the clay or loam soils, inclined between 7 and 20%. The most critical ones are in Karaburma, Zvezdara, Višnjica, Vinča and Ritopek, in the Danube valley, and Umka, and especially its neighbourhood of Duboko, in the Sava valley. They have moving and dormant phases, and some of them have been recorded for centuries. Less active downhill creep areas include the entire Terazije slope above the Sava (Kalemegdan, Savamala), as can be seen from the inclination of the Pobednik monument and the tower of the Cathedral Church, and the Voždovac section, between Banjica and Autokomanda.
Landslides encompass smaller areas, develop on the steep cliffs, sometimes being inclined up to 90%. They are mostly located in the artificial loess hills of Zemun: Gardoš, Ćukovac and Kalvarija.
However, the majority of land movement in Belgrade, some 90%, is triggered by construction works and faulty water supply systems (burst pipes, etc.). The neighbourhood of Mirijevo is considered the most successful project of fixing the problem: the terrain was systematically improved during the construction of the neighbourhood in the 1970s, and the movement of the land there has today completely halted.
Climate
Under the Köppen climate classification, Belgrade has a humid subtropical climate (Cfa) bordering on a humid continental climate (Dfa), with four seasons and uniformly spread precipitation. Monthly mean temperatures are lowest in January and highest in July. There are, on average, 44.6 hot days and 95 warm days a year; on the other hand, Belgrade experiences 52.1 days per year on which the minimum temperature falls below freezing, with 13.8 of those days having a maximum temperature below freezing as well. Precipitation is spread throughout the year, with late spring being wettest. The average annual number of sunny hours is 2,020.
Belgrade may experience thunderstorms at any time of the year, with 31 thunderstorm days annually, but they are much more common in the spring and summer months. Hail is rare and occurs exclusively in spring or summer.
The highest officially recorded temperature in Belgrade, 43.6 °C, was measured on 24 July 2007, while the lowest, −26.2 °C, was recorded on 10 January 1893. The highest recorded daily precipitation fell on 15 May 2014.
Administration
Belgrade is a separate territorial unit in Serbia, with its own autonomous city authority. The Assembly of the City of Belgrade has 110 members, elected on four-year terms.
A 13-member City Council, elected by the Assembly and presided over by the mayor and his deputy, has the control and supervision of the city administration, which manages day-to-day administrative affairs. It is divided into 14 Secretariats, each having a specific portfolio such as traffic or health care, and several professional services, agencies and institutes.
The 2022 Belgrade City Assembly election was won by the Serbian Progressive Party, which formed a ruling coalition with the Socialist Party of Serbia. Between 2004 and 2013, the Democratic Party was in power. Due to the importance of Belgrade in the political and economic life of Serbia, the office of the city's mayor is often described as the third most important office in the state, after the President of the Government and the President of the Republic.
As the capital city, Belgrade is the seat of all Serbian state authorities – executive, legislative and judiciary – and the headquarters of almost all national political parties, as well as of 75 diplomatic missions. These include the National Assembly, the Presidency, the Government of Serbia with all its ministries, the Supreme Court of Cassation and the Constitutional Court.
Municipalities
The city is divided into 17 municipalities. Previously, they were classified into 10 urban (lying completely or partially within borders of the city proper) and 7 suburban municipalities, whose centres are smaller towns. With the new 2010 City statute, they were all given equal status, with the proviso that suburban ones (except Surčin) have certain autonomous powers, chiefly related with construction, infrastructure and public utilities.
Most of the municipalities are situated on the southern side of the Danube and Sava rivers, in the Šumadija region. Three municipalities (Zemun, Novi Beograd, and Surčin), are on the northern bank of the Sava in the Syrmia region and the municipality of Palilula, spanning the Danube, is in both the Šumadija and Banat regions.
Demographics
According to the 2022 census, the statistical city proper has a population of 1,197,714, the urban area (with adjacent urban settlements like Borča, Ovča, Surčin, etc.) has 1,383,875 inhabitants, while the population of the administrative area of the City of Belgrade (often equated with Belgrade's metropolitan area) stands at 1,681,405 people. However, Belgrade's metropolitan area has not been defined, either statistically or administratively, and it sprawls into neighbouring municipalities such as Pančevo, Opovo, Pećinci and Stara Pazova.
Belgrade is home to many ethnicities from across the former Yugoslavia and the wider Balkans region. The main ethnic group, comprising over 86% of the metropolitan population of Belgrade, is Serbs (1,449,241). Significant minorities include Roma (23,160), Yugoslavs (10,499), Gorani (5,249), Montenegrins (5,134), Russians (4,659), Croats (4,554), Macedonians (4,293), and ethnic Muslims (2,718). Many people came to the city as economic migrants from smaller towns and the countryside, while tens of thousands arrived as refugees from Croatia, Bosnia-Herzegovina and Kosovo as a result of the Yugoslav wars of the 1990s. The most recent wave of immigration, following the Russian invasion of Ukraine, saw tens of thousands of Russians and Ukrainians register their residence in Serbia, the majority of them in Belgrade.
Between 10,000 and 20,000 Chinese people are estimated to live in Belgrade and, since their arrival in the mid-1990s, Block 70 in New Belgrade has been known colloquially as the Chinese quarter. Many Middle Easterners, mainly from Syria, Iran, Jordan and Iraq, arrived in order to pursue their studies during the 1970s and 1980s, and have remained in the city. Throughout the 19th and early 20th century, small communities of Aromanians, Czechs, Greeks, Germans, Hungarians, Jews, Turks, Armenians and Russian White émigrés also existed in Belgrade. There are two suburban settlements with significant minority populations today: Ovča and the village of Boljevci, each with about one quarter of its population being Romanians and Slovaks, respectively. Immigration to Belgrade from other countries is accelerating: in 2023 alone, more than 30,000 foreign workers received work and residence permits in Belgrade.
Although there are several historic religious communities in Belgrade, the religious makeup of the city is relatively homogeneous. The Serbian Orthodox community is by far the largest, with 1,475,168 adherents. There are also 31,914 Muslims, 13,720 Roman Catholics, and 3,128 Protestants.
There once was a significant Jewish community in Belgrade but, following the World War II Nazi occupation of the city and subsequent Jewish emigration, their numbers have fallen from over 10,000 to just 295. Belgrade also used to have one of the largest Buddhist colonies in Europe outside Russia, when some 400 mostly Buddhist Kalmyks settled on the outskirts of Belgrade following the Russian Civil War. The first Buddhist temple in Europe was built in Belgrade in 1929. Most of the Kalmyks moved away after World War II, and their temple, the Belgrade pagoda, was abandoned, claimed by the new Communist regime and eventually demolished.
Economy
Belgrade is the financial centre of Serbia and Southeast Europe, and is home to the country's central bank. As of July 2020, 750,550 people were employed in 120,286 companies, 76,307 enterprises and 50,000 shops. The City of Belgrade itself owns a substantial stock of rentable office space.
As of 2019, Belgrade contained 31.4% of Serbia's employed population and generated over 40.4% of its GDP. City GDP in 2023 at purchasing power parity was estimated at $73 billion, or $43,400 per capita; nominal GDP in 2023 was estimated at $31.5 billion, or $18,700 per capita.
New Belgrade is the country's central business district and one of Southeastern Europe's financial centres. It offers a range of facilities, such as hotels, congress halls (e.g. Sava Centar), Class A and B office buildings, and business parks (e.g. Airport City Belgrade). Large areas of New Belgrade are under construction, with the value of construction planned over the next three years estimated at over 1.5 billion euros. The Belgrade Stock Exchange is also located in New Belgrade.
With 6,924 companies in the IT sector, Belgrade is one of the foremost information technology hubs in Southeast Europe. Microsoft's Development Center Serbia, located in Belgrade, was, at the time of its establishment, the fifth such programme in the world. Many global IT companies choose Belgrade as their European or regional centre of operations, including Asus, Intel, Dell, Huawei, Nutanix and NCR. The best-known Belgrade IT startups include Nordeus, ComTrade Group, MicroE, FishingBooker, and Endava. IT facilities in the city include the Mihajlo Pupin Institute and the ILR, as well as the brand-new IT Park Zvezdara. Many prominent IT innovators began their careers in Belgrade, including Voja Antonić and Veselin Jevrosimović.
In December 2021, the average monthly salary in Belgrade stood at 94,463 RSD ($946) in net terms, with the gross equivalent at 128,509 RSD ($1,288); in the New Belgrade central business district, the average net salary was 1,059 euros. 88% of the city's households owned a computer, 89% had a broadband internet connection and 93% had pay television services.
According to Cushman & Wakefield, Knez Mihajlova street is the 36th most expensive retail street in the world in terms of renting commercial space.
The city's attractiveness and its importance in this part of the continent are illustrated by the numerous multinational companies that have chosen Belgrade as the site of their regional headquarters. An early example was McDonald's, which in 1988 opened in Belgrade its first restaurant in a communist country in Europe.
Culture
According to the BBC, Belgrade is one of the five most creative cities in the world.
Belgrade hosts many annual international cultural events, including the Film Festival, Theatre Festival, Summer Festival, BEMUS, Belgrade Early Music Festival, Book Fair, Belgrade Choir Festival, Eurovision Song Contest 2008, and the Beer Fest. In 2022 Belgrade was also home to the Europride event, even though the Serbian president, Aleksandar Vučić, tried to cancel it. The Nobel Prize winning author Ivo Andrić wrote his most famous work, The Bridge on the Drina, in Belgrade. Other prominent Belgrade authors include Branislav Nušić, Miloš Crnjanski, Borislav Pekić, Milorad Pavić and Meša Selimović. The most internationally prominent artists from Belgrade are Charles Simic, Marina Abramović and Milovan Destil Marković.
Most of Serbia's film industry is based in Belgrade. FEST is an annual film festival that has been held since 1971. Through 2013, the festival had been attended by four million people and had presented almost 4,000 films.
The city was one of the main centres of the Yugoslav new wave in the 1980s: VIS Idoli, Ekatarina Velika, Šarlo Akrobata and Električni Orgazam were all from Belgrade. Other notable Belgrade rock acts include Riblja Čorba, Bajaga i Instruktori and Partibrejkers. Today, it is the centre of the Serbian hip hop scene, with acts such as Beogradski Sindikat, Bad Copy, Škabo, Marčelo, and most of the Bassivity Music stable hailing from or living in the city. There are numerous theatres, the most prominent of which are National Theatre, Theatre on Terazije, Yugoslav Drama Theatre, Zvezdara Theatre, and Atelier 212. The Serbian Academy of Sciences and Arts is also based in Belgrade, as well as the National Library of Serbia. Other major libraries include the Belgrade City Library and the Belgrade University Library. Belgrade's two opera houses are: National Theatre and Madlenianum Opera House. Following the victory of Serbia's representative Marija Šerifović at the Eurovision Song Contest 2007, Belgrade hosted the Contest in 2008.
There are more than 1650 public sculptures in Belgrade.
Museums
The most prominent museum in Belgrade is the National Museum, founded in 1844 and reconstructed from 2003 until June 2018. The museum houses a collection of more than 400,000 exhibits (over 5600 paintings and 8400 drawings and prints, including many foreign masters like Bosch, Juan de Flandes, Titian, Tintoretto, Rubens, Cézanne, G.B. Tiepolo, Renoir, Monet, Lautrec, Matisse, Picasso, Gauguin, Chagall, Van Gogh, Mondrian etc.) and also the famous Miroslav's Gospel. The Ethnographic Museum, established in 1901, contains more than 150,000 items showcasing the rural and urban culture of the Balkans, particularly the countries of former Yugoslavia.
The Museum of Contemporary Art was the first contemporary art museum in Yugoslavia and one of the first museums of its type in the world. Since its foundation in 1965, it has amassed a collection of more than 8,000 works produced across the former Yugoslavia. The collection represents some of the most notable Serbian and Yugoslav 20th-century artists, including Sava Šumanović, Nadežda Petrović, Petar Dobrović, Milena Pavlović-Barili, Milan Konjović, Zora Petrović, Đorđe Andrejević Kun, Vladimir Veličković, Petar Lubarda, Krsto Hegedušić, Mića Popović, Ivan Meštrović, Antun Augustinčić, Toma Rosandić, Olga Jevrić, Olga Jančić and Lojze Dolinar, among others. The museum closed in 2007 and reopened in 2017 with a focus on both the modern and the Yugoslav art scenes. Artist Marina Abramović, who was born in Belgrade, held an exhibition in the Museum of Contemporary Art that the New York Times described as one of the most important cultural happenings in the world in 2019; it was seen by almost 100,000 visitors, and Abramović gave a speech and performance in front of 20,000 people. The Museum of Applied Arts, in the heart of Belgrade, was named Institution of the Year 2016 by ICOM.
The Military Museum, established in 1878 in Kalemegdan, houses a wide range of more than 25,000 military objects dating from the prehistoric to the medieval to the modern eras. Notable items include Turkish and oriental arms, national banners, and Yugoslav Partisan regalia.
The Museum of Aviation in Belgrade located near Belgrade Nikola Tesla Airport has more than 200 aircraft, of which about 50 are on display, and a few of which are the only surviving examples of their type, such as the Fiat G.50. This museum also displays parts of shot down US and NATO aircraft, such as the F-117 and F-16.
The Nikola Tesla Museum, founded in 1952, preserves the personal items of Nikola Tesla, the inventor after whom the tesla unit was named. It holds around 160,000 original documents and around 5,700 other personal items, including his urn. The last of the major Belgrade museums is the Museum of Vuk and Dositej, which showcases the lives, work and legacy of Vuk Stefanović Karadžić and Dositej Obradović, respectively the 19th-century reformer of the Serbian literary language and the first Serbian Minister of Education. Belgrade also houses the Museum of African Art, founded in 1977, which has a large collection of art from West Africa.
With around 95,000 copies of national and international films, the Yugoslav Film Archive is the largest in the region and among the ten largest archives in the world. The institution also operates the Museum of the Yugoslav Film Archive, with a movie theatre and exhibition hall. The archive's long-standing storage problems were finally solved in 2007, when a new modern depository was opened. The Yugoslav Film Archive also exhibits an original walking stick of Charlie Chaplin's and one of the first films made by Auguste and Louis Lumière.
The Belgrade City Museum moved into a new building downtown in 2006. The museum hosts a range of collections covering the history of urban life since prehistory, and also includes additional sites, such as the Ivo Andrić Museum, Princess Ljubica's Residence, the Paja Jovanović Museum and the Jovan Cvijić Museum. The Museum of Yugoslavia holds collections from the Yugoslav era. Besides paintings, the most valuable items are Moon rocks donated by the Apollo 11 crew Neil Armstrong, Buzz Aldrin and Michael Collins while visiting Belgrade in 1969, and a rock from the Apollo 17 mission donated by Richard Nixon in 1971. The museum also houses Joseph Stalin's sabre, set with 260 brilliants and diamonds, donated by Stalin himself. The Museum of Science and Technology, founded in 1989, moved to the building of the city's first power plant in Dorćol in 2005.
Architecture
Belgrade has wildly varying architecture, from the centre of Zemun, typical of a Central European town, to the more modern architecture and spacious layout of New Belgrade.
The oldest architecture is found in Kalemegdan Park. Outside Kalemegdan, the oldest buildings date only from the 18th century, owing to the city's exposed geographic position and the frequent wars and destruction it has suffered.
The oldest public structure in Belgrade is a nondescript Turkish türbe, while the oldest house is a modest clay house in Dorćol, from the late 18th century. Western influence began in the 19th century, when the city was completely transformed from an oriental town to the contemporary architecture of the time, with influences from neoclassicism, romanticism and academic art. Serbian architects took over the development from foreign builders in the late 19th century, producing the National Theatre, Stari Dvor, the Cathedral Church and later, in the early 20th century, the House of the National Assembly and the National Museum, influenced by art nouveau. Elements of the Serbo-Byzantine Revival are present in buildings such as the Vuk Foundation House and the old Post Office in Kosovska Street, and in sacral architecture, such as St. Mark's Church (based on the Gračanica monastery) and the Church of Saint Sava.
In the socialist period, housing was built quickly and cheaply for the huge influx of people fleeing the countryside after World War II, sometimes resulting in the brutalist architecture of the blokovi ('blocks') of New Belgrade; a socialist-realist trend briefly prevailed, resulting in buildings like the Trade Union Hall. However, in the mid-1950s, modernist trends took over, and they still dominate Belgrade architecture.
Belgrade has the second oldest sewer system in Europe. The Clinical Centre of Serbia spreads over 34 hectares and consists of about 50 buildings; with 3,150 beds, it is considered to have the highest number in Europe and among the highest in the world.
Tourism
Lying on the main artery connecting Europe and Asia, and eventually on the route of the Orient Express, Belgrade has been a popular place for travellers through the centuries.
In 1843, on Dubrovačka Street (today Kralj Petar Street), Serbia's knez Mihailo Obrenović built a large edifice which became the first hotel in Belgrade: Kod jelena ('at the deer's'), in the neighbourhood of Kosančićev Venac. Many criticised the move at the time because of the cost and the size of the building, but it soon became the gathering point of the Principality's wealthiest citizens. Colloquially, the building was also referred to as the staro zdanje, or 'old edifice'. It remained a hotel until 1903 and was demolished in 1938. After the staro zdanje, numerous hotels were built in the second half of the 19th century: the Nacional and the Grand, also in Kosančićev Venac; the Srpski Kralj, the Srpska Kruna and the Grčka Kraljica near Kalemegdan; the Balkan and the Pariz on Terazije; the London; and others.
As Belgrade became connected by steamboat and railway (after 1884), the number of visitors grew and new hotels with luxurious amenities were opened. In Savamala, the hotels Bosna and Bristol opened. Other hotels included the Solun and the Orient, which was built near the Financial Park. Tourists who arrived on the Orient Express mostly stayed at the Petrograd Hotel in Wilson Square. The Hotel Srpski Kralj, at the corner of Uzun Mirkova and Pariska Streets, was considered the best hotel in Belgrade during the interwar period. It was destroyed during World War II.
The historic areas and buildings of Belgrade are among the city's premier attractions. They include Skadarlija, the National Museum and the adjacent National Theatre, Zemun, Nikola Pašić Square, Terazije, Students' Square, the Kalemegdan Fortress, Knez Mihailova Street, the Parliament, the Church of Saint Sava and the Old Palace. On top of this, there are many parks, monuments, museums, cafés, restaurants and shops on both sides of the river. The hilltop Avala Monument and Avala Tower offer views over the city. According to The Guardian, Dorćol is one of the ten coolest neighbourhoods in Europe.
The elite neighbourhood of Dedinje is situated near the Topčider and Košutnjak parks. The Dedinje Royal Compound, which houses the former royal residences Kraljevski Dvor and Beli Dvor, is open to visitors. The palace holds many valuable artworks. Nearby, Josip Broz Tito's mausoleum, called the House of Flowers, documents the life of the former Yugoslav president.
Ada Ciganlija is a former island on the Sava River and Belgrade's biggest sports and recreational complex. Today it is connected with the right bank of the Sava via two causeways, creating an artificial lake. It is the most popular destination for Belgraders during the city's hot summers. There are some 8 km of beaches and sports facilities for various sports including golf, football, basketball, volleyball, rugby union, baseball and tennis. During summer there are between 200,000 and 300,000 bathers daily.
Belgrade is also known for tourist activities involving extreme sports such as bungee jumping, water skiing and paintballing, and there are numerous trails on the island for cycling, walking and jogging. Apart from Ada, Belgrade has a total of 16 river islands, many still unused. Among them, the Great War Island, at the confluence of the Sava and the Danube, stands out as an oasis of undisturbed wildlife (especially birds). This area, along with the nearby Small War Island, is protected by the city's government as a nature preserve. There are 37 protected natural resources in the Belgrade urban area, of which eight are geo-heritage sites: the Straževica profile, Mašin Majdan-Topčider, the profile at the Kalemegdan Fortress, the abandoned quarry in Barajevo, the Karagača valley, the artesian well in Ovča, the Kapela loess profile, and the lake in Sremčica. The other 29 are biodiversity sites.
Tourist income in 2016 amounted to nearly half a billion euros, with almost a million registered visitors; average annual growth is between 13% and 14%. In 2019, more than 100,000 tourists arrived on 742 river cruise ships.
As of 2018, there are three officially designated campsites in Belgrade. The oldest, named "Dunav", is located in Batajnica, along the Batajnica Road, and is one of the most visited campsites in the country. The second is situated within the complex of the ethno-household "Zornić's House" in the village of Baćevac, while the third is located in Ripanj, on the slopes of Avala mountain. In 2017 some 15,000 overnight stays were recorded at the camps.
Belgrade is a common stop on the Rivers Route, a European cycling route known in Serbia as the "Danube Bike Trail", as well as on the Sultans Trail, a long-distance hiking footpath between Vienna and Istanbul.
Nightlife
Belgrade has a reputation for vibrant nightlife; many clubs that are open until dawn can be found throughout the city. The most recognisable nightlife features of Belgrade are the barges (splav) spread along the banks of the Sava and Danube Rivers.
Many weekend visitors, particularly from Bosnia and Herzegovina, Croatia and Slovenia, prefer Belgrade's nightlife to that of their own capitals, owing to its perceived friendly atmosphere, plentiful clubs and bars, cheap drinks, lack of significant language barriers, and lack of restrictive nightlife regulation.
One of the most famous sites for alternative cultural happenings in the city is the SKC (Student Cultural Centre), located right across from Belgrade's highrise landmark, the Belgrade Palace tower. Concerts featuring famous local and foreign bands are often held at the centre. SKC is also the site of various art exhibitions, as well as public debates and discussions.
A more traditional Serbian nightlife experience, accompanied by traditional music known as Starogradska (roughly translated as Old Town Music), typical of northern Serbia's urban environments, is most prominent in Skadarlija, the city's old bohemian neighbourhood where the poets and artists of Belgrade gathered in the 19th and early 20th centuries. Skadar Street (the centre of Skadarlija) and the surrounding neighbourhood are lined with some of Belgrade's best and oldest traditional restaurants (called kafanas in Serbian), which date back to that period. At one end of the neighbourhood stands Belgrade's oldest beer brewery, founded in the first half of the 19th century. One of the city's oldest kafanas is the Znak pitanja ('?').
The Times reported that Europe's best nightlife can be found in Belgrade. In its 1000 Ultimate Experiences guide of 2009, Lonely Planet ranked Belgrade first among the top ten party cities in the world.
Sport and recreation
There are approximately a thousand sports facilities in Belgrade, many of them capable of serving all levels of sporting events.
Ada Ciganlija island, with its lake and beaches, is one of the most important recreational areas in the city. With a total of 8 km of beaches and a variety of bars, cafés, restaurants and sports facilities, Ada Ciganlija attracts many visitors, especially in summertime.
Košutnjak Park Forest has numerous running and bike trails, sports facilities for a variety of sports, and indoor and outdoor pools. It is a popular destination that is located only 2 km from Ada Ciganlija.
During the 1960s and 1970s, Belgrade held a number of major international events: the first ever World Aquatics Championships in 1973, the 1976 European Football Championship and the 1973 European Cup Final, the European Athletics Championships in 1962 and the European Indoor Games in 1969, the European Basketball Championships in 1961 and 1975, the European Volleyball Championships for men and women in 1975, and the World Amateur Boxing Championships in 1978.
Since the early 2000s Belgrade has again hosted major sporting events nearly every year. These include EuroBasket 2005, the European Handball Championships (men's and women's) in 2012, the World Handball Championship for women in 2013, the European Volleyball Championships in 2005 for men and 2011 for women, the 2006 and 2016 European Water Polo Championships, the 2007 European Youth Olympic Festival and the 2009 Summer Universiade. More recently, Belgrade hosted the European Athletics Indoor Championships in 2017 and the basketball EuroLeague Final Four tournaments in 2018 and 2022. Global and continental championships in other sports, such as tennis, futsal, judo, karate, wrestling, rowing, kickboxing, table tennis and chess, have also been held in recent years.
The city is home to Serbia's two biggest and most successful football clubs, Red Star Belgrade and Partizan Belgrade. Red Star won the UEFA Champions League (European Cup) in 1991, and Partizan was runner-up in 1966. The two major stadiums in Belgrade are Marakana (Red Star Stadium) and Partizan Stadium. The Eternal derby is between Red Star and Partizan.
With a capacity of 19,384 spectators, Belgrade Arena is one of the largest indoor arenas in Europe; it is used for major sporting events and large concerts, and in May 2008 it was the venue for the 53rd Eurovision Song Contest. Aleksandar Nikolić Hall is the main venue of the basketball clubs KK Partizan, the European champion of 1992, and KK Crvena Zvezda.
In recent years, Belgrade has also given rise to several world-class tennis players, such as Ana Ivanovic, Jelena Janković and Novak Djokovic. Ivanovic and Djokovic are the first female and male Belgraders, respectively, to win Grand Slam singles titles; Djokovic has held the ATP number-1 ranking, while Ivanovic and Janković have both held the WTA number-1 ranking. The Serbian national team won the 2010 Davis Cup, beating the French team in the final played in the Belgrade Arena.
The Belgrade Marathon has been held annually since 1988. Belgrade was a candidate to host the 1992 and 1996 Summer Olympic Games.
Fashion and design
Since 1996, semiannual fashion weeks (for the autumn/winter and spring/summer seasons) have been held citywide. Numerous Serbian and foreign designers and fashion brands show during Belgrade Fashion Week. The festival, which collaborates with London Fashion Week, has helped launch the international careers of local talents such as George Styler and Ana Ljubinković. The British fashion designer Roksanda Ilincic, who was born in the city, also frequently presents her runway shows in Belgrade.
In addition to fashion, there are two major design shows held in Belgrade every year which attract international architects and industrial designers such as Karim Rashid, Daniel Libeskind, Patricia Urquiola, and Konstantin Grcic. Both the Mikser Festival and Belgrade Design Week feature lectures, exhibits and competitions. Furthermore, international designers like Sacha Lakic, Ana Kraš, Bojana Sentaler, and Marek Djordjevic are originally from Belgrade.
Media
Belgrade is the most important media hub in Serbia. The city is home to the main headquarters of the national broadcaster, Radio Television Serbia (RTS), a public service broadcaster. The most popular commercial broadcaster is RTV Pink, a Serbian media multinational known for its popular entertainment programmes. Another of the most popular commercial broadcasters is B92, a media company with its own TV station, radio station, and music and book publishing arms, as well as the most popular website on the Serbian internet. Other TV stations broadcasting from Belgrade include Prva (formerly Fox televizija), Nova and N1, as well as stations that cover only the greater Belgrade municipal area, such as Studio B.
High-circulation daily newspapers published in Belgrade include Blic, Alo!, Kurir and Danas. There are two sporting dailies, Sportski žurnal and Sport, and one economic daily, Privredni pregled. A new free-distribution daily, 24 sata, was founded in the autumn of 2006. Serbian editions of licensed magazines such as Harper's Bazaar, Elle, Cosmopolitan, National Geographic, Men's Health and Grazia also have their headquarters in the city.
Education
Belgrade has two state universities and several private institutions of higher education. The University of Belgrade, founded in 1808 as a grande école, is the oldest institution of higher learning in Serbia. Having developed with much of the rest of the city in the 19th century, several university buildings are recognised as forming a constituent part of Belgrade's architecture and cultural heritage. With enrolment numbers of nearly 90,000 students, the university is one of Europe's largest.
The city is also home to 195 primary (elementary) schools and 85 secondary schools. The primary school system has 162 regular schools, 14 special schools, 15 art schools and 4 adult schools, while the secondary school system has 51 vocational schools, 21 gymnasiums, 8 art schools and 5 special schools. The 230,000 pupils are taught by some 22,000 employees in over 500 buildings.
Transportation
Belgrade has an extensive public transport system consisting of buses (118 urban lines and more than 300 suburban lines), trams (12 lines), trolleybuses (8 lines) and the BG Voz S-train network (6 lines). Buses, trolleybuses and trams are run by GSP Beograd and SP Lasta, in cooperation with private companies on some bus routes. BG Voz, run by the city government in cooperation with Serbian Railways, is part of the integrated transport system; its lines include Batajnica–Ovča, Ovča–Resnik and Belgrade Centre–Mladenovac, with more announced. Tickets may be purchased either via SMS or in physical paper form. Daily connections link the capital to other towns in Serbia and many other European destinations through the city's central bus station. Since January 2025, all public transport in Belgrade has been free of charge.
Beovoz was the suburban/commuter railway network that provided mass-transit services in the city, similar to Paris's RER and Toronto's GO Transit. Its main purpose was to connect the suburbs with the city centre, and it was operated by Serbian Railways. The system was abolished in 2013, mostly owing to the introduction of the more efficient BG Voz. Belgrade is one of the last big European capitals, and cities with over a million people, to have no metro or other rapid transit system. As of November 2021, the Belgrade Metro, planned with two lines, is under construction; the first line is expected to be operational by August 2028.
The new Belgrade Centre railway station is the hub for almost all national and international trains. The high-speed rail that connects Belgrade with Novi Sad started its service on 19 March 2022. The extension towards Subotica and Budapest is under construction, and there are plans for a southwards extension towards Niš and North Macedonia.
The city lies along the Pan-European corridors X and VII. The motorway system provides easy access to Novi Sad and Budapest to the north, Niš to the south, and Zagreb to the west. An expressway also runs toward Pančevo, and construction of a new expressway toward Obrenovac (and onward toward Montenegro) was scheduled for March 2017. The Belgrade bypass, connecting the E70 and E75 motorways, is under construction.
Situated at the confluence of two major rivers, the Danube and the Sava, Belgrade has 11 bridges, the most important of which are Branko's Bridge, the Ada Bridge, the Pupin Bridge and the Gazela Bridge, the last two of which connect the core of the city to New Belgrade. In addition, an 'inner magistral semi-ring' is almost complete; it includes the Ada bridge across the Sava and the Pupin bridge across the Danube, which ease commuting within the city and relieve traffic from the Gazela and Branko's bridges.
The Port of Belgrade is on the Danube and allows the city to receive goods by river. The city is also served by Belgrade Nikola Tesla Airport, west of the city centre, near Surčin. At its peak in 1986, almost 3 million passengers travelled through the airport, though that number dwindled to a trickle in the 1990s. Following renewed growth in 2000, the number of passengers reached approximately 2 million in 2004 and 2005, over 2.6 million in 2008, and later over 3 million. The 4-million mark was passed in 2014, when Belgrade Nikola Tesla Airport became the second fastest growing major airport in Europe. Numbers continued to grow steadily, and an all-time peak of over 6 million passengers was reached in 2019.
International relations
Twin towns – sister cities
List of Belgrade's sister and twin cities:
Coventry, UK, since 1957
Chicago, US, since 2005
Ljubljana, Slovenia, since 2010
Skopje, North Macedonia, since 2012
Caruaru, Brazil, since 2010
Shanghai, China, since 2018
Banja Luka, Bosnia and Herzegovina, since 2020
Partner cities
Other friendship and cooperation agreements, protocols and memorandums:
Sarajevo, Bosnia and Herzegovina, since 2018, Memorandum of Understanding on Cooperation
Rabat, Morocco, since 2017, Partnership and Cooperation Agreement
Seoul, South Korea, since 2017, Memorandum of Understanding on Friendly Exchanges and Cooperation
Astana, Kazakhstan, since 2016, Agreement on Cooperation
Tehran, Iran, since 2016, Agreement on Cooperation
Corfu, Greece, since 2010, Protocol on Cooperation
Shenzhen, China, since 2009, Agreement on Cooperation
Zagreb, Croatia, since 2003, Letter of Intent
Kyiv, Ukraine, since 2002, Agreement on Cooperation
Bucharest, Romania, since 1999, Agreement on Cooperation
Algiers, Algeria, since 1991, Declaration of Mutual Interests
Tel Aviv, Israel, since 1990, Agreement on Cooperation
Beijing, China, since 1980, Agreement on Cooperation
Rome, Italy, since 1971, Agreement on Friendship and Cooperation
Athens, Greece, since 1966, Agreement on Friendship and Cooperation
Some of the city's municipalities are also twinned to small cities or districts of other big cities; for details see their respective articles.
Belgrade has received various domestic and international honours, including the French Légion d'honneur (proclaimed 21 December 1920; Belgrade is one of four cities outside France, alongside Liège, Luxembourg and Volgograd, to receive this honour), the Czechoslovak War Cross (awarded 8 October 1925), the Yugoslav Order of the Karađorđe's Star (awarded 18 May 1939) and the Yugoslav Order of the People's Hero (proclaimed on 20 October 1974, the 30th anniversary of the overthrow of Nazi German occupation during World War II). All of these decorations were received for war efforts during World War I and World War II. In 2006, the Financial Times magazine Foreign Direct Investment awarded Belgrade the title of City of the Future of Southern Europe.
See also
List of people from Belgrade
List of cities and towns on Danube river
List of metropolitan areas in Europe
External links
Tourist Organisation of Belgrade
Capitals in Europe
Districts of Serbia
Statistical regions of Serbia
Port cities in Serbia
Ancient cities in Serbia
Populated places established in the 3rd century BC
Populated places on the Danube
Šumadija
Recipients of the Czechoslovak War Cross
Recipients of the Legion of Honour
Populated places on the Sava
Starčevo–Körös–Criș culture
"Mathematics"
] | 13,616 | [
"Statistical regions of Serbia",
"Statistical concepts",
"Statistical regions"
] |
Prostate
The prostate is an accessory gland of the male reproductive system and a muscle-driven mechanical switch between urination and ejaculation. It is found in all male mammals. It differs between species anatomically, chemically, and physiologically. Anatomically, the prostate is found below the bladder, with the urethra passing through it. It is described in gross anatomy as consisting of lobes and in microanatomy by zone. It is surrounded by an elastic, fibromuscular capsule and contains glandular tissue, as well as connective tissue.
The prostate produces and contains fluid that forms part of semen, the substance emitted during ejaculation as part of the male sexual response. This prostatic fluid is slightly alkaline, milky or white in appearance. The alkalinity of semen helps neutralize the acidity of the vaginal tract, prolonging the lifespan of sperm. The prostatic fluid is expelled in the first part of ejaculate, together with most of the sperm, because of the action of smooth muscle tissue within the prostate. In comparison with the few spermatozoa expelled together with mainly seminal vesicular fluid, those in prostatic fluid have better motility, longer survival, and better protection of genetic material.
Disorders of the prostate include enlargement, inflammation, infection, and cancer. The word prostate is derived from Ancient Greek προστάτης (prostátēs), meaning "one who stands before", "protector", "guardian", with the term originally used to describe the seminal vesicles.
Structure
The prostate is an exocrine gland of the male reproductive system. In adults, it is about the size of a walnut and weighs on average about 11 grams, usually ranging between 7 and 16 grams. The prostate is located in the pelvis. It sits below the urinary bladder and surrounds the urethra. The part of the urethra passing through it is called the prostatic urethra, which joins with the two ejaculatory ducts. The prostate is covered in a surface called the prostatic capsule or prostatic fascia.
The internal structure of the prostate has been described using both lobes and zones. Because of the variation in descriptions and definitions of lobes, the zone classification is used more predominantly.
The prostate has been described as consisting of three or four zones. Zones are more typically able to be seen on histology, or in medical imaging, such as ultrasound or MRI.
The "lobe" classification describes lobes that, while originally defined in the fetus, are also visible in gross anatomy, including dissection and when viewed endoscopically. The five lobes are the anterior lobe or isthmus, the posterior lobe, the right and left lateral lobes, and the middle or median lobe.
Inside of the prostate, adjacent and parallel to the prostatic urethra, there are two longitudinal muscle systems. On the front side (ventrally) runs the urethral dilator (musculus dilatator urethrae), on the backside (dorsally) runs the muscle switching the urethra into the ejaculatory state (musculus ejaculatorius).
Blood and lymphatic vessels
The prostate receives blood through the inferior vesical artery, internal pudendal artery, and middle rectal arteries. These vessels enter the prostate on its outer surface where it meets the bladder, and travel forward to the apex of the prostate. Both the inferior vesical and the middle rectal arteries often arise together directly from the internal iliac arteries. On entering the prostate, the inferior vesical artery splits into a urethral branch, supplying the urethral prostate, and a capsular branch, which travels around the capsule and has smaller branches that perforate into the prostate.
The veins of the prostate form a network – the prostatic venous plexus, primarily around its front and outer surface. This network also receives blood from the deep dorsal vein of the penis, and is connected via branches to the vesical plexus and internal pudendal veins. Veins drain into the vesical and then internal iliac veins.
The lymphatic drainage of the prostate depends on the positioning of the area. Vessels surrounding the vas deferens, some of the vessels in the seminal vesicle, and a vessel from the posterior surface of the prostate drain into the external iliac lymph nodes. Some of the seminal vesicle vessels, prostatic vessels, and vessels from the anterior prostate drain into internal iliac lymph nodes. Vessels of the prostate itself also drain into the obturator and sacral lymph nodes.
Microanatomy
The prostate consists of glandular and connective tissue. Tall column-shaped cells form the lining (the epithelium) of the glands. These form one layer or may be pseudostratified. The epithelium is highly variable, and areas of low cuboidal or flat cells can also be present, with transitional epithelium in the outer regions of the longer ducts. Basal cells surround the luminal epithelial cells in benign glands. The glands are formed as many follicles, which drain into canals and subsequently into 12–20 main ducts. These in turn drain into the urethra as it passes through the prostate. There is also a small number of flat cells, which sit next to the basement membranes of the glands and act as stem cells.
The connective tissue of the prostate is made up of fibrous tissue and smooth muscle. The fibrous tissue separates the gland into lobules. It also sits between the glands and is composed of randomly orientated smooth-muscle bundles that are continuous with the bladder.
Over time, thickened secretions called corpora amylacea accumulate in the gland.
Gene and protein expression
About 20,000 protein-coding genes are expressed in human cells and almost 75% of these genes are expressed in the normal prostate. About 150 of these genes are more specifically expressed in the prostate, with about 20 genes being highly prostate specific. The corresponding specific proteins are expressed in the glandular and secretory cells of the prostatic gland and have functions that are important for the characteristics of semen, including prostate-specific proteins, such as the prostate specific antigen (PSA), and the prostatic acid phosphatase.
Development
In the developing embryo, at the hind end lies an inpouching called the cloaca. This, over the fourth to the seventh week, divides into a urogenital sinus and the beginnings of the anal canal, with a wall forming between these two inpouchings called the urorectal septum. The urogenital sinus divides into three parts, with the middle part forming the urethra; the upper part is largest and becomes the urinary bladder, and the lower part then changes depending on the biological sex of the embryo.
The prostatic part of the urethra develops from the middle, pelvic, part of the urogenital sinus, which is of endodermal origin. Around the end of the third month of embryonic life, outgrowths arise from the prostatic part of the urethra and grow into the surrounding mesenchyme. The cells lining this part of the urethra differentiate into the glandular epithelium of the prostate. The associated mesenchyme differentiates into the dense connective tissue and the smooth muscle of the prostate.
Condensation of mesenchyme, urethra, and Wolffian ducts gives rise to the adult prostate gland, a composite organ made up of several tightly fused glandular and non-glandular components. To function properly, the prostate needs male hormones (androgens), which are responsible for male sex characteristics. The main male hormone is testosterone, which is produced mainly by the testicles. It is dihydrotestosterone (DHT), a metabolite of testosterone, that predominantly regulates the prostate. The prostate gland enlarges over time, until the fourth decade of life.
Function
In ejaculation
The prostate secretes fluid, which becomes part of the semen. Its secretion forms up to 30% of the semen. Semen is the fluid emitted (ejaculated) by males during the sexual response. When sperm are emitted, they are transmitted from the vas deferens into the male urethra via the ejaculatory duct, which lies within the prostate gland. Ejaculation is the expulsion of semen from the urethra. Semen is moved into the urethra following contractions of the smooth muscle of the vas deferens and seminal vesicles, following stimulation, primarily of the glans penis. Stimulation sends nerve signals via the internal pudendal nerves to the upper lumbar spine; the nerve signals causing contraction act via the hypogastric nerves. After traveling into the urethra, the seminal fluid is ejaculated by contraction of the bulbocavernosus muscle. The secretions of the prostate include proteolytic enzymes, prostatic acid phosphatase, fibrinolysin, zinc, and prostate-specific antigen. Together with the secretions from the seminal vesicles, these form the major fluid part of semen. The prostate contains various metals, including zinc, and is known to be the primary source of most metals found in semen, which are released during ejaculation.
In urination
The prostate's changes of shape, which facilitate the mechanical switch between urination and ejaculation, are mainly driven by the two longitudinal muscle systems running along the prostatic urethra: the urethral dilator (musculus dilatator urethrae) on the urethra's front side, which contracts during urination, shortening and tilting the prostate in its vertical dimension and thus widening the prostatic section of the urethral tube, and the muscle that switches the urethra into the ejaculatory state (musculus ejaculatorius) on its back side.
In the case of an operation, for example for benign prostatic hyperplasia (BPH), damage to or sparing of these two muscle systems varies considerably with the type of operation and the details of the chosen technique. The effects on postoperative urination and ejaculation vary correspondingly.
In stimulation
It is possible for some men to achieve orgasm solely through stimulation of the prostate gland, such as via prostate massage or anal intercourse. This has led to the area of the rectal wall adjacent to the prostate to be popularly referred to as the "male G-spot".
Clinical significance
Inflammation
Prostatitis is inflammation of the prostate gland. It can be caused by infection with bacteria, or other noninfective causes. Inflammation of the prostate can cause painful urination or ejaculation, groin pain, difficulty passing urine, or constitutional symptoms such as fever or tiredness. When inflamed, the prostate becomes enlarged and is tender when touched during digital rectal examination. The bacteria responsible for the infection may be detected by a urine culture.
Acute prostatitis and chronic bacterial prostatitis are treated with antibiotics. Chronic non-bacterial prostatitis, or male chronic pelvic pain syndrome is treated by a large variety of modalities including the medications alpha blockers, non-steroidal anti-inflammatories and amitriptyline, antihistamines, and other anxiolytics. Other treatments that are not medications may include physical therapy, psychotherapy, nerve modulators, and surgery. More recently, a combination of trigger point and psychological therapy has proved effective for category III prostatitis as well.
Prostate enlargement
An enlarged prostate is called prostatomegaly, with benign prostatic hyperplasia (BPH) being the most common cause. BPH refers to an enlargement of the prostate due to an increase in the number of cells that make up the prostate (hyperplasia) from a cause that is not a malignancy. It is very common in older men. It is often diagnosed when the prostate has enlarged to the point where urination becomes difficult. Symptoms include needing to urinate often (urinary frequency) or taking a while to get started (urinary hesitancy). If the prostate grows too large, it may constrict the urethra and impede the flow of urine, making urination painful and difficult, or in extreme cases completely impossible, causing urinary retention. Over time, chronic retention may cause the bladder to become larger and cause a backflow of urine into the kidneys (hydronephrosis).
BPH can be treated with medication, a minimally invasive procedure or, in extreme cases, surgery that removes the prostate. In general, treatment often begins with an alpha-1 adrenergic receptor antagonist medication such as tamsulosin, which reduces the tone of the smooth muscle found in the urethra that passes through the prostate, making it easier for urine to pass through. For people with persistent symptoms, procedures may be considered. The surgery most often used in such cases is transurethral resection of the prostate, in which an instrument is inserted through the urethra to remove prostate tissue that is pressing against the upper part of the urethra and restricting the flow of urine. Minimally invasive procedures include transurethral needle ablation of the prostate and transurethral microwave thermotherapy. These outpatient procedures may be followed by the insertion of a temporary stent, to allow normal voluntary urination, without exacerbating irritative symptoms.
Cancer
Prostate cancer is one of the most common cancers affecting older men in the UK, US, Northern Europe and Australia, and a significant cause of death for elderly men worldwide. Often, a person does not have symptoms; when they do occur, symptoms may include urinary frequency, urgency, hesitation and other symptoms associated with BPH. Uncommonly, such cancers may cause weight loss, retention of urine, or symptoms such as back pain due to lesions that have spread outside of the prostate.
A digital rectal examination and the measurement of a prostate-specific antigen (PSA) level are usually the first investigations done to check for prostate cancer. PSA values are difficult to interpret, because a high value might be present in a person without cancer, and a low value can be present in someone with cancer. The next form of testing is often the taking of a prostate biopsy to assess for tumour activity and invasiveness. Because of the significant risk of overdiagnosis with widespread screening in the general population, prostate cancer screening is controversial. If a tumour is confirmed, medical imaging such as an MRI or bone scan may be done to check for the presence of tumour in other parts of the body.
Prostate cancer that is only present in the prostate is often treated with either surgical removal of the prostate or with radiotherapy or by the insertion of small radioactive particles of iodine-125 or palladium-103, called brachytherapy. Cancer that has spread to other parts of the body is usually treated also with hormone therapy, to deprive a tumour of sex hormones (androgens) that stimulate proliferation. This is often done through the use of GnRH analogues or agents (such as bicalutamide) that block the receptors that androgens act on; occasionally, surgical removal of the testes may be done instead. Cancer that does not respond to hormonal treatment, or that progresses after treatment, might be treated with chemotherapy such as docetaxel. Radiotherapy may also be used to help with pain associated with bony lesions.
Sometimes, the decision may be made not to treat prostate cancer. If a cancer is small and localised, the decision may be made to monitor for cancer activity at intervals ("active surveillance") and defer treatment. If a person, because of frailty or other medical conditions or reasons, has a life expectancy less than ten years, then the impacts of treatment may outweigh any perceived benefits.
Surgery
Surgery to remove the prostate is called prostatectomy, and is usually done as a treatment for cancer limited to the prostate, or prostatic enlargement. When it is done, it may be done as open surgery or as laparoscopic (keyhole) surgery. These are done under general anaesthetic. Usually the procedure for cancer is a radical prostatectomy, which means that the seminal vesicles are removed and the vasa deferentia are also tied off. Part of the prostate can also be removed from within the urethra, called transurethral resection of the prostate (TURP). Open surgery may involve a cut that is made in the perineum, or via an approach that involves a cut down the midline from the belly button to the pubic bone. Open surgery may be preferred if there is a suspicion that lymph nodes are involved and they need to be removed or biopsied during a procedure. A perineal approach will not involve lymph node removal and may result in less pain and a faster recovery following an operation. A TURP procedure uses a tube inserted into the urethra via the penis and some form of heat, electricity or laser to remove prostate tissue.
The whole prostate can be removed. Complications that might develop because of surgery include urinary incontinence and erectile dysfunction, owing to damage to nerves during the operation, particularly if a cancer is very close to nerves. Ejaculation of semen will not occur during orgasm if the vasa deferentia are tied off and the seminal vesicles removed, as during a radical prostatectomy; this means a man becomes infertile. Sometimes, orgasm may not be able to occur or may be painful. The penis may shorten slightly if the part of the urethra within the prostate is also removed. General complications of surgery can also develop, such as infections, bleeding, inadvertent damage to nearby organs or within the abdomen, and the formation of blood clots.
History
The prostate was first formally identified by the Venetian anatomist Niccolò Massa in Anatomiae libri introductorius (Introduction to Anatomy) in 1536 and illustrated by the Flemish anatomist Andreas Vesalius in Tabulae anatomicae sex (Six Anatomical Tables) in 1538. Massa described it as a "glandular flesh upon which rests the neck of the bladder," and Vesalius as a "glandular body". The first use of a word similar to prostate to describe the gland is credited to André du Laurens in 1600, who presented it as a term already in use among anatomists; it was, however, used at least as early as 1549 by the French surgeon Ambroise Paré.
At the time, Du Laurens was describing what was considered to be a pair of organs (not the single two-lobed organ), and the Latin term prostatae that was used was a mistranslation of the term for the Ancient Greek word used to describe the seminal vesicles, parastatai; although it has been argued that surgeons in Ancient Greece and Rome must have at least seen the prostate as an anatomical entity. The term prostatae was taken rather than the grammatically correct prostator (singular) and prostatores (plural) because the gender of the Ancient Greek term was taken as female, when it was in fact male.
The fact that the prostate was one and not two organs was an idea popularised throughout the early 18th century, as was the English language term used to describe the organ, prostate, attributed to William Cheselden. A monograph, "Practical observations on the treatment of the diseases of the prostate gland" by Everard Home in 1811, was important in the history of the prostate by describing and naming anatomical parts of the prostate, including the median lobe. The idea of the five lobes of the prostate was popularized following anatomical studies conducted by American urologist Oswald Lowsley in 1912. John E. McNeal first proposed the idea of "zones" in 1968; McNeal found that the relatively homogeneous cut surface of an adult prostate in no way resembled "lobes" and thus led to the description of "zones".
Prostate cancer was first described in a speech to the Medical and Chirurgical Society of London in 1853 by the surgeon John Adams, and it was increasingly described by the late 19th century. Prostate cancer was initially considered a rare disease, probably because of shorter life expectancies and poorer detection methods in the 19th century. The first treatments of prostate cancer were surgeries to relieve urinary obstruction. Samuel David Gross has been credited with an early mention of prostatectomy, which he dismissed as "too absurd to be seriously entertained". The first radical perineal prostatectomy for prostate cancer was performed in 1904 by Hugh H. Young at Johns Hopkins Hospital; partial removal of the gland had been carried out by Theodor Billroth in 1867.
Transurethral resection of the prostate (TURP) replaced radical prostatectomy for symptomatic relief of obstruction in the middle of the 20th century because it could better preserve penile erectile function. Radical retropubic prostatectomy was developed in 1983 by Patrick Walsh. In 1941, Charles B. Huggins published studies in which he used estrogen to oppose testosterone production in men with metastatic prostate cancer. This discovery of "chemical castration" won Huggins the 1966 Nobel Prize in Physiology or Medicine.
The role of the gonadotropin-releasing hormone (GnRH) in reproduction was determined by Andrzej W. Schally and Roger Guillemin, who both won the 1977 Nobel Prize in Physiology or Medicine for this work. GnRH receptor agonists, such as leuprorelin and goserelin, were subsequently developed and used to treat prostate cancer. Radiation therapy for prostate cancer was first developed in the early 20th century and initially consisted of intraprostatic radium implants. External beam radiotherapy became more popular as stronger X-ray radiation sources became available in the middle of the 20th century. Brachytherapy with implanted seeds (for prostate cancer) was first described in 1983. Systemic chemotherapy for prostate cancer was first studied in the 1970s. The initial regimen of cyclophosphamide and 5-fluorouracil was quickly joined by multiple regimens using a host of other systemic chemotherapy drugs.
Other animals
The prostate is found only in mammals. The prostate glands of male marsupials are proportionally larger than those of placental mammals. The presence of a functional prostate in monotremes is controversial, and if monotremes do possess functional prostates, they may not make the same contribution to semen as in other mammals.
The structure of the prostate varies, ranging from tubuloalveolar (as in humans) to branched tubular. The gland is particularly well developed in carnivorans and boars, though in other mammals, such as bulls, it can be small and inconspicuous. In other animals, such as marsupials and small ruminants, the prostate is disseminate, meaning not specifically localisable as a distinct tissue, but present throughout the relevant part of the urethra; in other animals, such as red deer and American elk, it may be present as a specific organ and in a disseminate form. In some marsupial species, the size of the prostate gland changes seasonally. The prostate is the only accessory gland that occurs in male dogs. Dogs can produce in one hour as much prostatic fluid as a human can in a day. They excrete this fluid along with their urine to mark their territory. Additionally, dogs are the only species apart from humans seen to have a significant incidence of prostate cancer. The prostate is the only male accessory gland that occurs in cetaceans, consisting of diffuse urethral glands surrounded by a very powerful compressor muscle.
The prostate gland originates with tissues in the urethral wall. This means the urethra, a compressible tube used for urination, runs through the middle of the prostate; enlargement of the prostate can constrict the urethra so that urinating becomes slow and painful.
Prostatic secretions vary among species. They are generally composed of simple sugars and are often slightly alkaline. In eutherian mammals, these secretions usually contain fructose. The prostatic secretions of marsupials usually contain N-Acetylglucosamine or glycogen instead of fructose.
Skene's gland
Because the Skene's gland and the male prostate act similarly, secreting prostate-specific antigen (PSA), an ejaculate protein produced in males, as well as prostate-specific acid phosphatase, the Skene's gland is sometimes referred to as the "female prostate". Although homologous to the male prostate (developed from the same embryological tissues), various aspects of its development in relation to the male prostate are largely unknown and a matter of research.
See also
Ejaculatory duct
List of distinct cell types in the adult human body
Prostate evolution in monotreme mammals
Seminal vesicles
Attribution
Portions of the text of this article originate from NIH Publication No. 02-4806, a public domain resource.
External links
Exocrine system
Glands
Human male reproductive system
Mammal male reproductive system
Sex organs
Sexual anatomy
"Biology"
] | 5,313 | [
"Exocrine system",
"Organ systems",
"Sexual anatomy",
"Sex"
] |
Chronology
Chronology (from Latin chronologia, from Ancient Greek χρόνος, chrónos, "time", and -λογία, -logia, "study of") is the science of arranging events in their order of occurrence in time. Consider, for example, the use of a timeline or sequence of events. It is also "the determination of the actual temporal sequence of past events".
Chronology is a part of periodization. It is also a part of the discipline of history including earth history, the earth sciences, and study of the geologic time scale.
Related fields
Chronology is the science of locating historical events in time. It relies mostly upon chronometry, which is also known as timekeeping, and historiography, which examines the writing of history and the use of historical methods. Radiocarbon dating estimates the age of formerly living things by measuring the proportion of carbon-14 isotope in their carbon content. Dendrochronology estimates the age of trees by correlation of the various growth rings in their wood to known year-by-year reference sequences in the region to reflect year-to-year climatic variation. Dendrochronology is used in turn as a calibration reference for radiocarbon dating curves.
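As a rough illustration of the radiocarbon principle just described, the sketch below converts a measured carbon-14 fraction into an uncalibrated age using the isotope's approximately 5,730-year half-life. The function name is my own invention, and this is only the raw exponential-decay model; as noted above, laboratories calibrate such raw ages against dendrochronological reference curves.

```python
import math

C14_HALF_LIFE_YEARS = 5730.0  # commonly cited half-life of carbon-14

def uncalibrated_radiocarbon_age(fraction_c14_remaining: float) -> float:
    """Estimate an uncalibrated age in years from the fraction of the
    original carbon-14 still present, by solving
    N/N0 = (1/2) ** (t / half_life) for t."""
    if not 0.0 < fraction_c14_remaining <= 1.0:
        raise ValueError("fraction must lie in (0, 1]")
    return -C14_HALF_LIFE_YEARS * math.log(fraction_c14_remaining, 2)

# A sample retaining half of its carbon-14 is one half-life old:
print(round(uncalibrated_radiocarbon_age(0.5)))   # 5730
print(round(uncalibrated_radiocarbon_age(0.25)))  # 11460
```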
Calendar and era
The familiar terms calendar and era (in the sense of a coherent system of numbered calendar years) denote two complementary fundamental concepts of chronology. For example, for eight centuries the calendar belonging to the Christian era, an era brought into use in the 8th century by Bede, was the Julian calendar; after the year 1582 it was the Gregorian calendar. Dionysius Exiguus (about the year 500) was the founder of that era, which is nowadays the most widespread dating system on earth. An epoch is the date (usually a year) when an era begins.
Ab Urbe condita era
Ab Urbe condita is Latin for "from the founding of the City (Rome)", traditionally set in 753 BC. It was used to identify the Roman year by a few Roman historians. Modern historians use it much more frequently than the Romans themselves did; the dominant Roman method of identifying a year was to name the two consuls who held office that year. Before the advent of modern critical editions of historical Roman works, AUC was indiscriminately added to them by earlier editors, making it appear more widely used than it actually was.
It was used systematically for the first time only about the year 400, by the Iberian historian Orosius. Pope Boniface IV, in about the year 600, seems to have been the first to make a connection between this era and Anno Domini (AD 1 = AUC 754).
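Given the equivalence AD 1 = AUC 754 stated above (and hence AUC 1 = 753 BC), converting an AUC year number to a Christian-era label is simple arithmetic, remembering that the Christian era has no year zero. A minimal sketch, with an invented function name:

```python
def auc_to_christian(auc_year: int) -> str:
    """Convert an Ab Urbe Condita year to a Christian-era label,
    using the traditional equivalence AUC 754 = AD 1.
    The Christian era has no year zero, so AUC 753 = 1 BC."""
    if auc_year >= 754:
        return f"AD {auc_year - 753}"
    return f"{754 - auc_year} BC"

print(auc_to_christian(1))    # 753 BC, the traditional founding of Rome
print(auc_to_christian(754))  # AD 1
```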
Astronomical era
Dionysius Exiguus' Anno Domini era (which contains only calendar years AD) was extended by Bede to the complete Christian era (which contains, in addition, all calendar years BC, but no year zero). Ten centuries after Bede, the French astronomers Philippe de la Hire (in the year 1702) and Jacques Cassini (in the year 1740), purely to simplify certain calculations, put the Julian Dating System (proposed in the year 1583 by Joseph Scaliger) into use, and with it an astronomical era, which contains a leap year zero preceding the year 1 (AD).
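The relation between the historical (BC/AD) count and the astronomical count is easy to state in code: astronomical numbering inserts a year 0 for 1 BC, so the year n BC becomes the astronomical year 1 - n, while AD years are unchanged. A minimal sketch, with an invented function name:

```python
def to_astronomical(year: int, bc: bool = False) -> int:
    """Map a historical year to astronomical year numbering,
    in which year 0 corresponds to 1 BC, -1 to 2 BC, and so on."""
    return 1 - year if bc else year

print(to_astronomical(1, bc=True))   # 0   (1 BC)
print(to_astronomical(45, bc=True))  # -44 (45 BC)
print(to_astronomical(1702))         # 1702 (AD years unchanged)
```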
Prehistory
While of critical importance to the historian, methods of determining chronology are used in most disciplines of science, especially astronomy, geology, paleontology and archaeology.
In the absence of written history, with its chronicles and king lists, late 19th century archaeologists found that they could develop relative chronologies based on pottery techniques and styles. In the field of Egyptology, William Flinders Petrie pioneered sequence dating to penetrate pre-dynastic Neolithic times, using groups of contemporary artefacts deposited together at a single time in graves and working backwards methodically from the earliest historical phases of Egypt. This method of dating is known as seriation.
Known wares discovered at strata in sometimes quite distant sites, the product of trade, helped extend the network of chronologies. Some cultures have retained the name applied to them in reference to characteristic forms, for lack of an idea of what they called themselves: for example, "the Beaker People" of northern Europe during the 3rd millennium BCE. The study of the means of placing pottery and other cultural artifacts into some kind of order proceeds in two phases, classification and typology: classification creates categories for the purposes of description, and typology seeks to identify and analyse the changes that allow artifacts to be placed into sequences, as the toy sketch below illustrates.
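To make seriation concrete, the sketch below orders a handful of invented grave assemblages by reciprocal averaging: graves are repeatedly sorted by the mean rank of the artifact types they contain, and types by the mean rank of the graves containing them. This is only a simple stand-in for the statistical methods archaeologists actually use (such as correspondence analysis), and all data and names here are hypothetical.

```python
# Hypothetical grave assemblages: each grave maps to the pottery types found in it.
graves = {
    "grave_a": {"beaker", "corded_jar"},
    "grave_b": {"corded_jar", "incised_bowl"},
    "grave_c": {"incised_bowl", "painted_cup"},
    "grave_d": {"beaker"},
}

types = sorted({t for found in graves.values() for t in found})
grave_order = sorted(graves)

for _ in range(10):  # a few rounds suffice for this tiny example
    type_rank = {t: i for i, t in enumerate(types)}
    grave_order.sort(
        key=lambda g: sum(type_rank[t] for t in graves[g]) / len(graves[g]))
    grave_rank = {g: i for i, g in enumerate(grave_order)}
    types.sort(
        key=lambda t: sum(grave_rank[g] for g in graves if t in graves[g])
        / sum(1 for g in graves if t in graves[g]))

print(grave_order)  # ['grave_d', 'grave_a', 'grave_b', 'grave_c']
```

The resulting order places graves that share types next to one another, which is exactly the relative (not absolute) chronology that seriation provides.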
Laboratory techniques developed particularly after mid-20th century helped constantly revise and refine the chronologies developed for specific cultural areas. Unrelated dating methods help reinforce a chronology, an axiom of corroborative evidence. Ideally, archaeological materials used for dating a site should complement each other and provide a means of cross-checking. Conclusions drawn from just one unsupported technique are usually regarded as unreliable.
Synchronism
The fundamental problem of chronology is to synchronize events. By synchronizing an event it becomes possible to relate it to the current time and to compare the event to other events. Among historians, a typical need is to synchronize the reigns of kings and leaders in order to relate the history of one country or region to that of another. For example, the Chronicon of Eusebius (325 A.D.) is one of the major works of historical synchronism. This work has two sections. The first contains narrative chronicles of nine different kingdoms: Chaldean, Assyrian, Median, Lydian, Persian, Hebrew, Greek, Peloponnesian, Asian, and Roman. The second part is a long table synchronizing the events from each of the nine kingdoms in parallel columns.
By comparing the parallel columns, the reader can determine which events were contemporaneous, or how many years separated two different events. To place all the events on the same time scale, Eusebius used an Anno Mundi (A.M.) era, meaning that events were dated from the supposed beginning of the world as computed from the Book of Genesis in the Hebrew Pentateuch. According to the computation Eusebius used, this occurred in 5199 B.C. The Chronicon of Eusebius was widely used in the medieval world to establish the dates and times of historical events. Subsequent chronographers, such as George Syncellus (died circa 811), analysed and elaborated on the Chronicon by comparing it with other chronologies. The last great chronographer was Joseph Justus Scaliger (1540–1609), who reconstructed the lost Chronicon and synchronized all of ancient history in his two major works, De emendatione temporum (1583) and Thesaurus temporum (1606). Much of the modern historical dating and chronology of the ancient world ultimately derives from these two works. Scaliger invented the concept of the Julian Day, which is still used as the standard unified scale of time for both historians and astronomers.
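Scaliger's Julian Day remains that unified scale, and the Julian Day Number of a Gregorian calendar date can be computed with the well-known integer algorithm of Fliegel and Van Flandern (1968). The sketch below assumes astronomical year numbering for dates BC (1 BC = year 0), and the function name is my own.

```python
def julian_day_number(year: int, month: int, day: int) -> int:
    """Julian Day Number at noon UT of the given Gregorian calendar date,
    via the integer algorithm of Fliegel and Van Flandern (1968)."""
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return (day + (153 * m + 2) // 5 + 365 * y
            + y // 4 - y // 100 + y // 400 - 32045)

print(julian_day_number(2000, 1, 1))  # 2451545, the J2000.0 epoch
```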
In addition to the literary methods of synchronism used by traditional chronographers such as Eusebius, Syncellus and Scaliger, it is possible to synchronize events by archaeological or astronomical means. For example, the Eclipse of Thales, described in the first book of Herodotus, can potentially be used to date the Lydian War, because the eclipse took place during the middle of an important battle in that war. Likewise, various eclipses and other astronomical events described in ancient records can be used to astronomically synchronize historical events. Another method of synchronizing events is the use of archaeological findings, such as pottery, to do sequence dating.
See also
Examples
Parian Chronicle
List of timelines – specific chronologies
Timelines of world history – overall historical chronology
Christian chronology
Dionysius Exiguus' Easter table
Easter
Lunar cycle
Millennium question
Paschal full moon
Solar cycle
General
Annals
French revolutionary era
Historiography
Traditional Jewish chronology
Fiction writing
Aspects and examples of non-chronological story-telling:
Flashback
Flashforward
Linearity (writing)
Reverse chronology
Notes
References
Hegewisch, D. H., & Marsh, J. (1837). Introduction to historical chronology. Burlington [Vt.]: C. Goodrich.
B. E. Tumanian, "Measurement of Time in Ancient and Medieval Armenia," Journal for the History of Astronomy 5, 1974, pp. 91–98.
Kazarian, K. A., "History of Chronology by B. E. Tumanian," Journal for the History of Astronomy, 4, 1973, p. 137
Porter, T. M., "The Dynamics of Progress: Time, Method, and Measure". The American Historical Review, 1991.
Further reading
Published in the 18th–19th centuries
Weeks, J. E. (1701). The gentleman's hour glass; or, An introduction to chronology; being a plain and compendious analysis of time. Dublin: James Hoey.
Hodgson, J., Hinton, J., & Wallis, J. (1747). An introduction to chronology: containing an account of time; also of the most remarkable cycles, epoch's, era's, periods, and moveable feasts. To which is added, a brief account of the several methods proposed for the alteration of the style, the reforming the calendar, and fixing the true time of the celebration of Easter. London: Printed for J. Hinton, at the King's Arms in St Paul's Church-yard.
Smith, T. (1818). An introduction to chronology. New York: Samuel Wood.
Published in the 20th century
Keller, H. R. (1934). The dictionary of dates. New York: The Macmillan company.
Poole, R. L., & Poole, A. L. (1934). Studies in chronology and history. Oxford: Clarendon Press.
Langer, W. L., & Gatzke, H. W. (1963). An encyclopedia of world history, ancient, medieval and modern, chronologically arranged. Boston: Houghton Mifflin.
Momigliano, A. "Pagan and Christian Historiography in the Fourth Century A.D." in A. Momigliano, ed., The Conflict Between Paganism and Christianity in the Fourth Century, The Clarendon Press, Oxford, 1963, pp. 79–99
Williams, N., & Storey, R. L. (1966). Chronology of the modern world: 1763 to the present time. London: Barrie & Rockliffe.
Steinberg, S. H. (1967). Historical tables: 58 B.C.-A.D. 1965. London: Macmillan.
Freeman-Grenville, G. S. P. (1975). Chronology of world history: a calendar of principal events from 3000 BC to AD 1973. London: Collings.
Neugebauer, O. (1975). A History of Ancient Mathematical Astronomy Springer-Verlag.
Bickerman, E. J. (1980). The Chronology of the Ancient World. London: Thames and Hudson.
Whitrow, G. J. (1990). Time in history: views of time from prehistory to the present day. Oxford [u.a.]: Oxford Univ. Press.
Aitken, M. (1990). Science-Based Dating in Archaeology. London: Thames and Hudson.
Richards, E. G. (1998). Mapping Time: The Calendar and History. Oxford University Press.
Published in the 21st century
Koselleck, R. "Time and History." The Practice of Conceptual History. Timing History, Spacing Concepts. Palo Alto: Stanford University Press, 2002.
External links
Dating the Past (archived 29 May 2005)
Pragmatic Bayesians: a decade of integrating radiocarbon dates in chronological models (archived 5 April 2005) from the University of Sheffield at the Internet Archive. Accessed 2008-01-04.
Open Library. Works related to chronology
Chattopadhyay, Subhasis. Chronicity and Temporality: A Revisionary Hermeneutics of Time in Prabuddha Bharata or Awakened India 120 (10):606–609 (2015).
Earth sciences
Time | Chronology | ["Physics", "Mathematics"] | 2,532 | ["Chronology", "Physical quantities", "Time", "Quantity", "Spacetime", "Wikipedia categories named after physical quantities"] |
55,951 | https://en.wikipedia.org/wiki/Instant%20messaging | Instant messaging (IM) technology is a type of synchronous computer-mediated communication involving the immediate (real-time) transmission of messages between two or more parties over the Internet or another computer network. Originally involving simple text message exchanges, modern IM applications and services (also called "social messengers", "messaging apps", "chat apps" or "chat clients") tend to also feature the exchange of multimedia, emojis, file transfer, VoIP (voice calling), and video chat capabilities.
Instant messaging systems facilitate connections between specified known users (often using a contact list, also known as a "buddy list" or "friend list") or in chat rooms, and can be standalone apps or integrated into a wider social media platform, or into a website, where it can, for instance, be used for conversational commerce. Originally the term "instant messaging" was distinguished from "text messaging" by running on a computer network instead of a cellular/mobile network, by allowing longer messages, by offering real-time communication and presence ("status"), and by being free (costing only network access rather than a fee per SMS message sent).
Instant messaging was pioneered in the early Internet era; the IRC protocol was the earliest to achieve wide adoption. Later in the 1990s, ICQ was among the first closed and commercialized instant messengers, and several rival services appeared afterwards as it became a popular use of the Internet. First introduced in 2005, BlackBerry Messenger became the first popular example of mobile-based IM, combining features of traditional IM and mobile SMS. Instant messaging remains very popular today; IM apps are the most widely used smartphone apps: in 2018, for instance, there were 980 million monthly active users of WeChat and 1.3 billion monthly users of WhatsApp, the largest IM network.
Overview
Instant messaging (IM), sometimes also called "messaging" or "texting", consists of computer-based human communication between two users (private messaging) or more (chat room or "group") in real-time, allowing immediate receipt of acknowledgment or reply. This is in direct contrast to email, where conversations do not take place in real time; what distinguishes IM is the perceived quasi-synchrony of the communications by the users (although many systems allow users to send offline messages that the other user receives when logging in).
Earlier IM networks were limited to text-based communication, not dissimilar to mobile text messaging. As technology has moved forward, IM has expanded to include voice calling using a microphone, videotelephony using webcams, file transfer, location sharing, image and video transfer, voice notes, and other features.
IM is conducted over the Internet or other types of networks (see also LAN messenger). Depending on the IM protocol, the technical architecture can be peer-to-peer (direct point-to-point transmission) or client–server (when all clients have to first connect to the central server). Primary IM services are controlled by their corresponding companies and usually follow the client-server model.
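To make the client-server model concrete, the sketch below (in Python, with hypothetical class names; it implements no real IM protocol) shows a central hub mediating every message between registered clients. In a peer-to-peer architecture, the relay step would instead be a direct connection between the two clients, with no operator in the middle.

class Hub:
    def __init__(self):
        self.clients = {}                      # username -> Client

    def register(self, client):
        self.clients[client.name] = client

    def relay(self, sender, recipient, text):
        # In a client-server design every message passes through the hub,
        # which is why operators can log, filter, or archive traffic.
        self.clients[recipient].receive(sender, text)

class Client:
    def __init__(self, name, hub):
        self.name = name
        self.hub = hub
        hub.register(self)

    def send(self, recipient, text):
        self.hub.relay(self.name, recipient, text)

    def receive(self, sender, text):
        print(f"[{self.name}] message from {sender}: {text}")

hub = Hub()
alice, bob = Client("alice", hub), Client("bob", hub)
alice.send("bob", "hello")                     # prints: [bob] message from alice: hello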
The term "Instant Messenger" is a service mark of Time Warner and may not be used in software not affiliated with AOL in the United States. For this reason, in April 2007, the instant messaging client formerly named Gaim (or gaim) announced that they would be renamed "Pidgin".
Clients
Modern IM services generally provide their own client, either a separately installed piece of software or a browser-based client. They are normally centralised networks run by the servers of the platform's operators, unlike peer-to-peer protocols like XMPP. These usually only work within the same IM network, although some allow limited function with other services (see #Interoperability). Third-party client software applications exist that will connect with most of the major IM services. There is also a class of instant messengers that uses a serverless model, in which no servers are required and the IM network consists only of clients; serverless messengers include RetroShare, Tox, Bitmessage, Ricochet, and Ring. See also: LAN messenger.
Some examples of popular IM services today include Signal, Telegram, WhatsApp Messenger, WeChat, QQ Messenger, Viber, Line, and Snapchat. The popularity of certain apps differs greatly between countries. Certain apps emphasize certain uses - for example, Skype focuses on video calling, Slack focuses on messaging and file sharing for work teams, and Snapchat focuses on image messages. Some social networking services offer messaging services as a component of their overall platform, such as Facebook's Facebook Messenger (Facebook also owns WhatsApp). Others have a direct IM function as an additional adjunct component of their social networking platforms, like Instagram, Reddit, Tumblr, TikTok, Clubhouse and Twitter; this also includes, for example, dating websites, such as OkCupid or Plenty of Fish, and online gaming chat platforms.
Features
Private and group messaging
Private chat allows users to converse privately with another person or a group. Privacy can also be enhanced in several ways, such as end-to-end encryption by default. Public and group chat features allow users to communicate with multiple people simultaneously.
Calling
Many major IM services and applications offer a call feature for user-to-user voice calls, conference calls, and voice messages. The call functionality is useful for professionals who utilize the application for work purposes and as a hands-free alternative to typing. Some services also support videotelephony using a webcam.
Games and entertainment
Some IM applications include in-app games for entertainment. Yahoo! Messenger, for example, introduced games that a user could play while being watched by friends in real-time. MSN Messenger featured a number of playable games within the interface. Facebook's Messenger has had a built-in option to play games with people in a chat, including games like Tetris and Blackjack. Discord features multiple games built inside the "activities" tab in voice channels.
Payments
A relatively new feature of instant messaging, peer-to-peer payments make financial tasks available on top of communication. The lack of a service fee also makes them attractive compared with dedicated financial applications. IM services such as Facebook Messenger and the WeChat 'super-app', for example, offer a payment feature.
History
Early systems
Though the term dates from the 1990s, instant messaging predates the Internet, first appearing on multi-user operating systems like Compatible Time-Sharing System (CTSS) and Multiplexed Information and Computing Service (Multics) in the mid-1960s. Initially, some of these systems were used as notification systems for services like printing, but quickly were used to facilitate communication with other users logged into the same machine. CTSS facilitated communication via text message for up to 30 people.
Parallel to instant messaging were early online chat facilities, the earliest of which was Talkomatic (1973) on the PLATO system, which allowed 5 people to chat simultaneously on a 512 x 512 plasma display (5 lines of text + 1 status line per person). During the bulletin board system (BBS) phenomenon that peaked during the 1980s, some systems incorporated chat features which were similar to instant messaging; Freelancin' Roundtable was one prime example. The first such general-availability commercial online chat service (as opposed to PLATO, which was educational) was the CompuServe CB Simulator in 1980, created by CompuServe executive Alexander "Sandy" Trevor in Columbus, Ohio.
As networks developed, the protocols spread with the networks. Some of these used a peer-to-peer protocol (e.g. talk, ntalk and ytalk), while others required peers to connect to a server (see talker and IRC). The Zephyr Notification Service (still in use at some institutions) was invented at MIT's Project Athena in the 1980s to allow service providers to locate and send messages to users.
Early instant messaging programs were primarily real-time text, where characters appeared as they were typed. This includes the Unix "talk" command line program, which was popular in the 1980s and early 1990s. Some BBS chat programs (i.e. Celerity BBS) also used a similar interface. Modern implementations of real-time text also exist in instant messengers, such as AOL's Real-Time IM as an optional feature.
In the latter half of the 1980s and into the early 1990s, the Quantum Link online service for Commodore 64 computers offered user-to-user messages between concurrently connected customers, which they called "On-Line Messages" (or OLM for short), and later "FlashMail." Quantum Link later became America Online and made AOL Instant Messenger (AIM, discussed later). While the Quantum Link client software ran on a Commodore 64, using only the Commodore's PETSCII text-graphics, the screen was visually divided into sections and OLMs would appear as a yellow bar saying "Message From:" and the name of the sender along with the message across the top of whatever the user was already doing, and presented a list of options for responding. As such, it could be considered a type of graphical user interface (GUI), albeit much more primitive than the later Unix, Windows and Macintosh based GUI IM software. OLMs were what Q-Link called "Plus Services" meaning they charged an extra per-minute fee on top of the monthly Q-Link access costs.
Development of the Internet Relay Chat (IRC) protocol began in 1989, and this would become the Internet's first widespread instant messaging standard.
Graphical messengers
Modern, Internet-wide, GUI-based messaging clients as they are known today began to take off in the mid-1990s with PowWow, ICQ, and AOL Instant Messenger (AIM). Similar functionality was offered by CU-SeeMe in 1992; though primarily an audio/video chat link, users could also send textual messages to each other. AOL later acquired Mirabilis, the authors of ICQ, establishing dominance in the instant messaging market. A few years later ICQ (then owned by AOL) was awarded two patents for instant messaging by the U.S. patent office. Meanwhile, other companies developed their own software (Excite, Microsoft (MSN), Ubique, and Yahoo!), each with its own proprietary protocol and client; users therefore had to run multiple client applications if they wished to use more than one of these networks. However, the open protocol IRC remained popular into the new millennium, and its most popular graphical app was mIRC.
While instant messaging was mainly in use for consumer recreational purposes, in 1998, IBM launched their Lotus Sametime instant messenger software, the first popular example of enterprise-grade instant messaging. In 2000, an open-source application and open standards-based protocol called Extensible Messaging and Presence Protocol (XMPP) was launched, initially branded as Jabber. XMPP servers could act as gateways to other IM protocols, reducing the need to run multiple clients.
Video calling using a webcam also started taking off during this time. Microsoft's NetMeeting, which was focused on business "web conferencing", was one of the earliest; the company then launched Windows Messenger, coming preloaded on Windows XP, featuring video capabilities. Yahoo! Messenger added video capabilities in 2001; by 2005, such features were built-in also in AIM, MSN Messenger, and Skype.
There were a reported 100 million users of instant messaging in 2001. As of 2003, AIM was the globally most popular instant messenger with 195 million users and exchanges of 1.6 billion messages daily. By 2006, AIM controlled 52 percent of the instant messaging market, but rapidly declined shortly thereafter as the company struggled to compete with other services.
Integrated IM and mobile
Instant messaging integrated in other services started picking up pace in the late 2000s. Myspace, the then-largest social networking service, launched Myspace IM in 2006, shortly after Google's Gtalk, which was integrated into its Gmail webmail interface. Facebook Chat launched in 2008, providing IM to users of the social network. By 2010, traditional instant messaging was in sharp decline in favor of these new messaging features on wider social networks, which at the time were not normally called IM. For instance, AIM's userbase had declined by more than half throughout the year 2011.
Standalone instant messaging services were revived in a new form, used primarily on mobile devices, owing to the increasing use of Internet-enabled cell phones and smartphones. Often called "chat apps" to distinguish them from cellular-based SMS and MMS "texting" services, these newer services were specially designed to run on mobile platforms, as opposed to older services like AIM and MSN. BlackBerry Messenger, released in 2005, was one of the influential pioneers of mobile IM, and led to other companies launching services with proprietary protocols, such as WhatsApp. Mobile instant messaging surpassed SMS in global message volume by 2013. While SMS relied on traditional paid telephone services, IM apps on mobile were available for free or a minor data charge.
Older IM services were eventually shut down, including AIM and Yahoo! Messenger, and also Windows Live Messenger, which merged into Skype in 2013. In 2014, it was reported that instant messaging had more users than social networks. Concurrently, rising use of instant messaging at workplaces led to the creation of new services, often integrated with other enterprise applications such as workflow systems (enterprise application integration (EAI)), for example Skype for Business, Slack and Microsoft Teams. Meanwhile, the launch of Discord in 2015 marked a notable new example of traditional IM originally designed for desktops.
Interoperability
Most IM protocols are proprietary and are not designed to be interoperable with others, meaning that many IM networks have been incompatible and users have been unable to reach users on other networks. As of 2024, fragmentation of IM services means that a typical user is likely to have to use more networks than ever, downloading each app and signing up for each service, to stay in touch with all their contacts. However, there have been attempts at solutions.
Multi-protocol clients can use any of the IM protocols by using additional local libraries for each protocol. Examples of multi-protocol instant messenger software include Pidgin and Trillian, and more recently Beeper. These third-party clients have often been unable to keep up, due to proprietary protocol restrictions and being locked out of the networks. For instance, in 2015, WhatsApp started banning users who were using unofficial clients. Major IM providers usually cite the need for formal agreements and security concerns as reasons for making such changes.
Attempted open standards
There have been several attempts in the past to create a unified standard for instant messaging, including:
IETF's Session Initiation Protocol (SIP) and SIP for Instant Messaging and Presence Leveraging Extensions (SIMPLE)
Application Exchange (APEX),
Instant Messaging and Presence Protocol (IMPP),
Extensible Messaging and Presence Protocol (XMPP), based on XML, and
Open Mobile Alliance's Instant Messaging and Presence Service (IMPS), developed specifically for mobile devices.
History and agreements
In the early 2000s, when instant messaging was quickly growing, most attempts at producing a unified standard among the then-major IM providers (AOL, Yahoo!, Microsoft) failed. There was a "bitter row" between AOL and its rivals regarding the opening up of their networks. In 2000, the U.S. regulator, the Federal Communications Commission (FCC), proposed, with the support of Microsoft chairman Bill Gates, that AOL providing interoperability of its AIM and ICQ instant messengers with Microsoft's MSN Messenger be a condition for the forthcoming AOL-Time Warner merger.
However, in 2004, Microsoft, Yahoo! and AOL agreed to a deal in which Microsoft's enterprise IM server Live Communications Server 2005 would have the possibility to talk to their rival counterparts and vice versa. On October 13, 2005, Microsoft and Yahoo! announced that their IM networks would soon be interoperable, using SIP/SIMPLE. This was finally rolled out to Windows Live Messenger and Yahoo! Messenger users in July 2006. Additionally, in December 2005, as part of the AOL and Google strategic partnership deal, it was announced that AIM and ICQ users would be able to communicate with Google Talk users. However, this feature took until December 2007 to roll out. XMPP provided the best example of open protocol interoperability, having had gateways that connected to Google Talk, Lotus Sametime and others.
Later, RCS was developed by telecommunication companies as an instant messaging protocol to replace SMS under a unified standard. In 2022, the European Union passed the Digital Markets Act, which largely came into effect in early 2023. Among other things, the legislation mandates certain interoperability between the largest IM platforms in use in Europe. As a result, in March 2024, Meta Platforms opened up its WhatsApp and Messenger networks to be interoperable.
Technical
There are two ways to combine the many disparate protocols:
Combine the many disparate protocols inside the IM client application (the sketch following this list illustrates this approach).
Combine the many disparate protocols inside the IM server application. This approach moves the task of communicating with the other services to the server. Clients need not know or care about other IM protocols. For example, LCS 2005 Public IM Connectivity. This approach is popular in XMPP servers; however, the so-called transport projects suffer the same reverse engineering difficulties as any other project involved with closed protocols or formats.
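The following is a minimal Python sketch of the first, client-side approach: one client holds a backend object per network behind a common interface, in the spirit of multi-protocol clients such as Pidgin. All class and method names here are illustrative assumptions, not any real client's API.

from abc import ABC, abstractmethod

class ProtocolBackend(ABC):
    @abstractmethod
    def send(self, contact, text):
        ...

class XMPPBackend(ProtocolBackend):
    def send(self, contact, text):
        print(f"(xmpp) -> {contact}: {text}")

class IRCBackend(ProtocolBackend):
    def send(self, contact, text):
        print(f"(irc)  -> {contact}: {text}")

class MultiProtocolClient:
    """One user interface in front of several incompatible networks."""

    def __init__(self):
        self.backends = {}                     # network name -> backend

    def add_network(self, name, backend):
        self.backends[name] = backend

    def send(self, network, contact, text):
        # The client, not any server, bridges the incompatible protocols.
        self.backends[network].send(contact, text)

client = MultiProtocolClient()
client.add_network("xmpp", XMPPBackend())
client.add_network("irc", IRCBackend())
client.send("xmpp", "alice@example.org", "hi")
client.send("irc", "#channel", "hello")

The server-side approach moves the same bridging logic behind the server, so an unmodified client sees contacts from foreign networks as if they were local.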
Some approaches allow organizations to deploy their own, private instant messaging network by enabling them to restrict access to the server (often with the IM network entirely behind their firewall) and administer user permissions. Other corporate messaging systems allow registered users to also connect from outside the corporation LAN, by using an encrypted, firewall-friendly, HTTPS-based protocol. Usually, a dedicated corporate IM server has several advantages, such as pre-populated contact lists, integrated authentication, and better security and privacy.
Effects of IM on communication
Workplace communication
Instant messaging has changed how people communicate in the workplace. Enterprise messaging applications like Slack, TeleMessage, Teamnote and Yammer allow companies to enforce policies on how employees message at work and ensure secure storage of sensitive data. They allow employees to separate work information from their personal emails and texts.
Messaging applications may make workplace communication efficient, but they can also have consequences for productivity. A study by Slack showed that, on average, people spend 10 hours a day in Slack, which is about 67% more time than they spend using email.
Instant messaging is implemented in many video-conferencing tools. A study of chat use during work-related videoconferencing found that chat during meetings allows participants to communicate without interrupting the meeting, plan action around common resources, and enables greater inclusion. The study also found that chat can cause distractions and information asymmetries between participants.
Language
Users sometimes make use of internet slang or text speak to abbreviate common words or expressions to quicken conversations or reduce keystrokes. The language has become widespread, with well-known expressions such as 'lol' translated over to face-to-face language.
Emotions are often expressed in shorthand, such as the abbreviations LOL, BRB and TTYL; respectively laugh(ing) out loud, be right back, and talk to you later. Some, however, attempt to be more accurate with emotional expression over IM. Real-time reactions such as (chortle), (snort), (guffaw) or (eye-roll) were popular at one point. Certain conventions have also been introduced into mainstream conversations, including '#', which indicates the use of sarcasm in a statement, and '*', which indicates a spelling mistake and/or grammatical error in the prior message, followed by a correction.
Business application
Instant messaging products can usually be categorised into two types: Enterprise Instant Messaging (EIM) and Consumer Instant Messaging (CIM). Enterprise solutions use an internal IM server; however, this is not always feasible, particularly for smaller businesses with limited budgets. The second option, using a CIM, provides the advantage of being inexpensive to implement and has little need for investing in new hardware or server software. IM is increasingly becoming a feature of enterprise software rather than a stand-alone application.
Instant messaging has proven to be similar to personal computers, email, and the World Wide Web, in that its adoption for use as a business communications medium was driven primarily by individual employees using consumer software at work, rather than by formal mandate or provisioning by corporate information technology departments. Tens of millions of the consumer IM accounts in use are being used for business purposes by employees of companies and other organizations. The adoption of IM across corporate networks outside of the control of IT organizations creates risks and liabilities for companies who do not effectively manage and support IM use. IM was initially shunned by the corporate world partly due to security concerns, but by 2003 many had started embracing these new services.
Software
In response to the demand for business-grade IM and the need to ensure security and legal compliance, a new type of instant messaging, called "Enterprise Instant Messaging" ("EIM") was created when Lotus Software launched IBM Lotus Sametime in 1998. Microsoft followed suit shortly thereafter with Microsoft Exchange Instant Messaging, later created a new platform called Microsoft Office Live Communications Server, and released Office Communications Server 2007 in October 2007. Oracle Corporation also jumped into the market with its Oracle Beehive unified collaboration software.
Both IBM Lotus and Microsoft have introduced federation between their EIM systems and some of the public IM networks so that employees may use one interface to both their internal EIM system and their contacts on AOL, MSN, and Yahoo. As of 2010, leading EIM platforms include IBM Lotus Sametime, Microsoft Office Communications Server, Jabber XCP and Cisco Unified Presence. Industry-focused EIM platforms such as Reuters Messaging and Bloomberg Messaging also provide IM abilities to financial services companies.
Security and archiving
Crackers (malicious or black hat hackers) have consistently used IM networks as vectors for delivering phishing attempts, drive-by URLs, and virus-laden file attachments, with over 1100 discrete attacks listed by the IM Security Center in 2004–2007. Hackers use two methods of delivering malicious code through IM: delivery of viruses, trojan horses, or spyware within an infected file, and the use of "socially engineered" text with a web address that entices the recipient to click on a URL connecting him or her to a website that then downloads malicious code.
IM connections sometimes occur in plain text, making them vulnerable to eavesdropping. Also, IM client software often requires the user to expose open UDP ports to the world, raising the threat posed by potential security vulnerabilities.
In the early 2000s, a new class of IT security providers emerged to provide remedies for the risks and liabilities faced by corporations who chose to use IM for business communications. The IM security providers created new products to be installed in corporate networks for the purpose of archiving, content-scanning, and security-scanning IM traffic moving in and out of the corporation. Similar to the e-mail filtering vendors, the IM security providers focus on the risks and liabilities described above.
With the rapid adoption of IM in the workplace, demand for IM security products began to grow in the mid-2000s. By 2007, the preferred platform for the purchase of security software had become the "computer appliance", according to IDC, who estimated that by 2008, 80% of network security products would be delivered via an appliance.
By 2014, however, instant messengers' level of safety was still extremely poor. According to a scorecard by the Electronic Frontier Foundation, only 7 out of 39 instant messengers received a perfect score of 7 out of 7, while the most popular instant messengers at the time attained a score of only 2 out of 7. A number of studies have shown that IM services are quite vulnerable with respect to user privacy.
In 2023, cybersecurity researchers discovered that numerous malicious "mods" exist of the Telegram instant messenger, which is freely available for download from Google Play.
Message history
Instant messages are often logged in a local message history, similar to emails' persistent nature. IM networks may store messages with either local-based device storage (e.g. WhatsApp, Viber, Line, WeChat, Signal etc. software) or cloud-based server storage provided by the service (e.g. Telegram, Skype, Facebook Messenger, Google Meet/Chat, Discord, Slack etc.). Although cloud-based storage is advertised to offer encrypted messages, it poses an increased risk that the IM provider may have access to the decryption keys and view the user's saved messages.
This requires users to trust IM servers and providers because messages can generally be accessed by the company. Companies may be compelled to reveal their user's communication and suspend user accounts for any reason.
Tracking and spying
News reports from 2013 revealed that the NSA is not only collecting emails and IM messages but also tracking relationships between senders and receivers of those chats and emails in a process known as metadata collection. Metadata refers to the data concerned about the chat or email as opposed to contents of messages. It may be used to collect valuable information.
In January 2014, Matthew Campbell and Michael Hurley filed a class-action lawsuit against Facebook for breaching the Electronic Communications Privacy Act. They alleged that the information in their supposedly private messages was being read and used to generate profit, specifically "for purposes including but not limited to data mining and user profiling".
In corporate use of IM, organizational offerings have become very sophisticated in their security and logging measures. An employee or organization member must be granted login credentials and permission to use the messaging system. Creating a specific account for each user allows the organization to identify, track and record all use of their messenger system on their servers.
Encryption
Encryption is the primary method that instant messaging apps use to protect users' data privacy and security. For corporate use, encryption and conversation archiving are usually regarded as important features due to security concerns. A number of open-source encrypted messengers also exist.
IM does hold potential advantages over SMS. SMS messages are not encrypted, making them insecure: the content of each SMS message is visible to mobile carriers and governments, can be intercepted by a third party, may leak metadata (such as phone numbers), or can be spoofed, with the sender field edited to impersonate another person.
Current instant messaging networks that use end-to-end encryption include Signal, WhatsApp, Wire and iMessage. Applications that have been criticized for lacking or poor encryption methods include Telegram and Confide, as both are prone to error or not having encryption enabled by default.
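As an illustration of the end-to-end model, here is a minimal sketch using the third-party PyNaCl library (the library choice is an assumption for illustration; none of the services named above is implied to use this exact API). Key pairs live only on the users' devices, so the IM server relays ciphertext it cannot read.

from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device; the private keys
# never leave the devices.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# The IM server only ever sees (and relays) this ciphertext.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"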
Compliance risks
In addition to the malicious code threat, using instant messaging at work creates a risk of non-compliance with laws and regulations governing electronic communications in businesses. In the United States alone, there are over 10,000 laws and regulations related to electronic messaging and records retention. The better-known of these include the Sarbanes–Oxley Act, HIPAA, and SEC 17a-3.
Clarification from the Financial Industry Regulatory Authority (FINRA) was issued to member firms in the financial services industry in December 2007, noting that "electronic communications", "email", and "electronic correspondence" may be used interchangeably and can include such forms of electronic messaging as instant messaging and text messaging. Changes to Federal Rules of Civil Procedure, effective December 1, 2006, created a new category for electronic records which may be requested during discovery in legal proceedings.
Most nations also regulate electronic messaging and records retention similarly to the United States. The most common regulations related to IM at work involve producing archived business communications to satisfy government or judicial requests under law. Many instant messaging communications fall into the category of business communications that must be archived and retrievable.
Current user base
As of March 2022, the most used instant messaging apps and services worldwide include: Signal with 100 million, Line with 217 million, Viber with 260 million, Telegram with 700 million, WeChat with 1.2 billion, Facebook Messenger with 1.3 billion, and WhatsApp with 2.0 billion users. There are 25 countries in the world where WhatsApp messenger is not the market leader in IM, such as the United States, Canada, Australia, New Zealand, Denmark, Norway, Sweden, Hungary, Lithuania, Poland, Slovakia, Philippines, and China.
IM apps have varying levels of adoption in different countries. As of April 2022:
WhatsApp by Meta Platforms is the most popular instant messaging network in several countries in South America, Western Europe, Africa, Middle East, South Asia, and Southeast Asia.
Facebook Messenger by Meta Platforms is the most popular instant messaging network in North America, Northern Europe, some Central Europe countries, and Oceania.
Telegram is the most popular instant messaging app in several Eastern Europe countries, and the second preferred option after WhatsApp in several countries in Western Europe, Middle East, South Asia, Southeast Asia, Africa, Central and South America.
Viber by Rakuten has a strong presence in Central and Eastern Europe (Bulgaria, Greece, Serbia, Ukraine, Russia). It is also moderately successful in the Philippines and Vietnam.
Line by Naver Corporation is used widely in some countries in Asia (Japan, Taiwan, Thailand).
Instant messaging apps and services that are predominately used in only one country include: KakaoTalk in South Korea, Zalo in Vietnam, WeChat in China, and imo in Qatar.
While not the dominant app for one-to-one messaging in any country, Discord is commonly used among online communities due to its ability to support chats with a large number of members, topic-based channels, and cloud-based storage.
See also
Terms
Messaging
Lists
Comparison of cross-platform instant messaging clients
Comparison of instant messaging protocols
Comparison of user features of messaging platforms
Other
References
External links
Internet culture
Social media
Online chat
Videotelephony
Text messaging | Instant messaging | ["Technology"] | 6,192 | ["Instant messaging", "Computing and society", "Social media"] |
55,955 | https://en.wikipedia.org/wiki/Version%20control | Version control (also known as revision control, source control, and source code management) is the software engineering practice of controlling, organizing, and tracking different versions in the history of computer files; primarily source code text files, but generally any type of file.
Version control is a component of software configuration management.
A version control system is a software tool that automates version control. Alternatively, version control is embedded as a feature of some systems such as word processors, spreadsheets, collaborative web docs, and content management systems, e.g., Wikipedia's page history.
Version control includes viewing old versions and enables reverting a file to a previous version.
Overview
As teams develop software, it is common for multiple versions of the same software to be deployed in different sites and for the developers to work simultaneously on updates. Bugs or features of the software are often only present in certain versions (because of the fixing of some problems and the introduction of others as the program develops). Therefore, for the purposes of locating and fixing bugs, it is vitally important to be able to retrieve and run different versions of the software to determine in which version(s) the problem occurs. It may also be necessary to develop two versions of the software concurrently: for instance, where one version has bugs fixed, but no new features (branch), while the other version is where new features are worked on (trunk).
At the simplest level, developers could simply retain multiple copies of the different versions of the program, and label them appropriately. This simple approach has been used in many large software projects. While this method can work, it is inefficient, as many near-identical copies of the program have to be maintained. It requires a lot of self-discipline on the part of developers and often leads to mistakes. Since the code base is shared, it also requires granting read-write-execute permissions to a set of developers, which creates the added burden of someone managing those permissions so that the code base is not compromised, adding more complexity. Consequently, systems to automate some or all of the revision control process have been developed, hiding most of the management of version control steps behind the scenes.
Moreover, in software development, legal and business practice, and other environments, it has become increasingly common for a single document or snippet of code to be edited by a team, the members of which may be geographically dispersed and may pursue different and even contrary interests. Sophisticated revision control that tracks and accounts for ownership of changes to documents and code may be extremely helpful or even indispensable in such situations.
Revision control may also track changes to configuration files, such as those typically stored in /etc or /usr/local/etc on Unix systems. This gives system administrators another way to easily track changes made and a way to roll back to earlier versions should the need arise.
Many version control systems identify the version of a file as a number or letter, called the version number, version, revision number, revision, or revision level. For example, the first version of a file might be version 1. When the file is changed the next version is 2. Each version is associated with a timestamp and the person making the change. Revisions can be compared, restored, and, with some types of files, merged.
History
IBM's OS/360 IEBUPDTE software update tool dates back to 1962, arguably a precursor to version control system tools. Two source management and version control packages that were heavily used by IBM 360/370 installations were The Librarian and Panvalet.
A full system designed for source code control, Source Code Control System (SCCS), was started in 1972 for the same system (OS/360). The paper introducing SCCS, published on December 4, 1975, implied that it was the first deliberate revision control system. RCS followed just after, and Concurrent Versions System, built on top of RCS, added networked operation. The generation after Concurrent Versions System was dominated by Subversion, followed by the rise of distributed revision control tools such as Git.
Structure
Revision control manages changes to a set of data over time. These changes can be structured in various ways.
Often the data is thought of as a collection of many individual items, such as files or documents, and changes to individual files are tracked. This accords with intuitions about separate files but causes problems when identity changes, such as during renaming, splitting or merging of files. Accordingly, some systems, such as Git, instead consider changes to the data as a whole, which is less intuitive for simple changes but simplifies more complex changes.
When data that is under revision control is modified, after being retrieved by checking out, this is not in general immediately reflected in the revision control system (in the repository), but must instead be checked in or committed. A copy outside revision control is known as a "working copy". As a simple example, when editing a computer file, the data stored in memory by the editing program is the working copy, which is committed by saving. Concretely, one may print out a document, edit it by hand, and only later manually input the changes into a computer and save it. For source code control, the working copy is instead a copy of all files in a particular revision, generally stored locally on the developer's computer; in this case saving the file only changes the working copy, and checking into the repository is a separate step.
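The distinction between the working copy and the repository can be made concrete with a toy model in Python (hypothetical names; not modelled on any real tool): edits change only the working copy, and only an explicit commit records a new revision, which an old checkout can later restore.

import time

class TinyRepo:
    def __init__(self):
        self.revisions = []                    # (timestamp, author, content)

    def commit(self, author, working_copy):
        # Checking in: the working copy becomes a permanent revision.
        self.revisions.append((time.time(), author, working_copy))
        return len(self.revisions)             # 1-based revision number

    def checkout(self, rev=None):
        # Default to the most recent revision (the head).
        rev = len(self.revisions) if rev is None else rev
        return self.revisions[rev - 1][2]

repo = TinyRepo()
working = "hello"                              # working copy only, not yet stored
r1 = repo.commit("alice", working)
working = "hello world"                        # editing touches the working copy
r2 = repo.commit("alice", working)
assert repo.checkout(r1) == "hello"            # reverting: check out an old revision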
If multiple people are working on a single data set or document, they are implicitly creating branches of the data (in their working copies), and thus issues of merging arise, as discussed below. For simple collaborative document editing, this can be prevented by using file locking or simply avoiding working on the same document that someone else is working on.
Revision control systems are often centralized, with a single authoritative data store, the repository, and check-outs and check-ins done with reference to this central repository. Alternatively, in distributed revision control, no single repository is authoritative, and data can be checked out and checked into any repository. When checking into a different repository, this is interpreted as a merge or patch.
Graph structure
In terms of graph theory, revisions are generally thought of as a line of development (the trunk) with branches off of this, forming a directed tree, visualized as one or more parallel lines of development (the "mainlines" of the branches) branching off a trunk. In reality the structure is more complicated, forming a directed acyclic graph, but for many purposes "tree with merges" is an adequate approximation.
Revisions occur in sequence over time, and thus can be arranged in order, either by revision number or timestamp. Revisions are based on past revisions, though it is possible to largely or completely replace an earlier revision, such as "delete all existing text, insert new text". In the simplest case, with no branching or undoing, each revision is based on its immediate predecessor alone, and they form a simple line, with a single latest version, the "HEAD" revision or tip. In graph theory terms, drawing each revision as a point and each "derived revision" relationship as an arrow (conventionally pointing from older to newer, in the same direction as time), this is a linear graph.
If there is branching, so multiple future revisions are based on a past revision, or undoing, so a revision can depend on a revision older than its immediate predecessor, then the resulting graph is instead a directed tree (each node can have more than one child), and has multiple tips, corresponding to the revisions without children ("latest revision on each branch"). In principle the resulting tree need not have a preferred tip ("main" latest revision) – just various different revisions – but in practice one tip is generally identified as HEAD. When a new revision is based on HEAD, it is either identified as the new HEAD, or considered a new branch. The list of revisions from the start to HEAD (in graph theory terms, the unique path in the tree, which forms a linear graph as before) is the trunk or mainline.
Conversely, when a revision can be based on more than one previous revision (when a node can have more than one parent), the resulting process is called a merge, and is one of the most complex aspects of revision control. This most often occurs when changes occur in multiple branches (most often two, but more are possible), which are then merged into a single branch incorporating both changes. If these changes overlap, it may be difficult or impossible to merge, and require manual intervention or rewriting.
In the presence of merges, the resulting graph is no longer a tree, as nodes can have multiple parents, but is instead a rooted directed acyclic graph (DAG). The graph is acyclic since parents are always backwards in time, and rooted because there is an oldest version. Assuming there is a trunk, merges from branches can be considered as "external" to the tree – the changes in the branch are packaged up as a patch, which is applied to HEAD (of the trunk), creating a new revision without any explicit reference to the branch, and preserving the tree structure. Thus, while the actual relations between versions form a DAG, this can be considered a tree plus merges, and the trunk itself is a line.
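The graph structure described above is easy to model. In the Python toy below (illustrative, not any real system's data model), each revision records its parents; branching gives one revision several children, a merge gives one revision several parents, and the tips (heads) are exactly the revisions that no other revision names as a parent.

class Revision:
    def __init__(self, rid, parents=()):
        self.rid = rid
        self.parents = list(parents)

def heads(revisions):
    # A head (tip) is any revision that no other revision lists as a parent.
    all_parents = {p.rid for r in revisions for p in r.parents}
    return [r.rid for r in revisions if r.rid not in all_parents]

root = Revision("r1")
trunk = Revision("r2", [root])
branch = Revision("r3", [root])                # branching: r1 has two children
merge = Revision("r4", [trunk, branch])        # merging: r4 has two parents
print(heads([root, trunk, branch, merge]))     # ['r4'] - a single tip again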
In distributed revision control, in the presence of multiple repositories these may be based on a single original version (a root of the tree), but there need not be an original root - instead there can be a separate root (oldest revision) for each repository. This can happen, for example, if two people start working on a project separately. Similarly, in the presence of multiple data sets (multiple projects) that exchange data or merge, there is no single root, though for simplicity one may think of one project as primary and the other as secondary, merged into the first with or without its own revision history.
Specialized strategies
Engineering revision control developed from formalized processes based on tracking revisions of early blueprints or bluelines. This system of control implicitly allowed returning to an earlier state of the design, for cases in which an engineering dead-end was reached in the development of the design. A revision table was used to keep track of the changes made. Additionally, the modified areas of the drawing were highlighted using revision clouds.
In business and law
Version control is widespread in business and law. Indeed, "contract redline" and "legal blackline" are some of the earliest forms of revision control, and are still employed in business and law with varying degrees of sophistication. The most sophisticated techniques are beginning to be used for the electronic tracking of changes to CAD files (see product data management), supplanting the "manual" electronic implementation of traditional revision control.
Source-management models
Traditional revision control systems use a centralized model where all the revision control functions take place on a shared server. If two developers try to change the same file at the same time, without some method of managing access the developers may end up overwriting each other's work. Centralized revision control systems solve this problem in one of two different "source management models": file locking and version merging.
Atomic operations
An operation is atomic if the system is left in a consistent state even if the operation is interrupted. The commit operation is usually the most critical in this sense. Commits tell the revision control system to make a group of changes final, and available to all users. Not all revision control systems have atomic commits; Concurrent Versions System lacks this feature.
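One common way to achieve atomicity for an on-disk operation is the write-then-rename pattern sketched below in Python (a general technique, not a claim about how any particular version control tool implements commits): the new state is written to a temporary file and then renamed over the old one, so an interruption leaves either the complete old state or the complete new state, never a mix.

import os
import tempfile

def atomic_write(path, data):
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)  # temp file on the same volume
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())               # ensure the bytes are on disk
        os.replace(tmp, path)                  # the atomic step
    except BaseException:
        os.remove(tmp)                         # clean up the partial file
        raise

atomic_write("repo_state.bin", b"revision 42")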
File locking
The simplest method of preventing "concurrent access" problems involves locking files so that only one developer at a time has write access to the central "repository" copies of those files. Once one developer "checks out" a file, others can read that file, but no one else may change that file until that developer "checks in" the updated version (or cancels the checkout).
File locking has both merits and drawbacks. It can provide some protection against difficult merge conflicts when a user is making radical changes to many sections of a large file (or group of files). If the files are left exclusively locked for too long, other developers may be tempted to bypass the revision control software and change the files locally, forcing a difficult manual merge when the other changes are finally checked in. In a large organization, files can be left "checked out" and locked and forgotten about as developers move between projects - these tools may or may not make it easy to see who has a file checked out.
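A toy Python model of the locking workflow (hypothetical API, not a specific tool) shows the essential rule: a checkout for editing takes an exclusive lock, and check-in releases it.

class LockingRepo:
    def __init__(self):
        self.locks = {}                        # filename -> user holding the lock

    def checkout(self, user, filename):
        holder = self.locks.get(filename)
        if holder is not None and holder != user:
            raise PermissionError(f"{filename} is locked by {holder}")
        self.locks[filename] = user            # exclusive write access

    def checkin(self, user, filename):
        if self.locks.get(filename) != user:
            raise PermissionError(f"{user} does not hold the lock")
        del self.locks[filename]               # release: others may now edit

repo = LockingRepo()
repo.checkout("alice", "main.c")
try:
    repo.checkout("bob", "main.c")             # blocked until alice checks in
except PermissionError as err:
    print(err)
repo.checkin("alice", "main.c")
repo.checkout("bob", "main.c")                 # now succeeds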
Version merging
Most version control systems allow multiple developers to edit the same file at the same time. The first developer to "check in" changes to the central repository always succeeds. The system may provide facilities to merge further changes into the central repository, and preserve the changes from the first developer when other developers check in.
Merging two files can be a very delicate operation, and usually possible only if the data structure is simple, as in text files. The result of a merge of two image files might not result in an image file at all. The second developer checking in the code will need to take care with the merge, to make sure that the changes are compatible and that the merge operation does not introduce its own logic errors within the files. These problems limit the availability of automatic or semi-automatic merge operations mainly to simple text-based documents, unless a specific merge plugin is available for the file types.
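The core idea behind merging can be sketched as a deliberately simplified three-way comparison in Python. This toy assumes the files have the same number of lines and that lines are only modified in place, never inserted or deleted; real merge tools such as diff3 handle those cases and are far more sophisticated. A change made on only one side wins; the same line changed differently on both sides is a conflict.

def three_way_merge(base, ours, theirs):
    merged, conflicts = [], []
    for i, base_line in enumerate(base):
        ours_line, theirs_line = ours[i], theirs[i]
        if ours_line == theirs_line:           # unchanged, or the same change
            merged.append(ours_line)
        elif ours_line == base_line:           # only "theirs" changed this line
            merged.append(theirs_line)
        elif theirs_line == base_line:         # only "ours" changed this line
            merged.append(ours_line)
        else:                                  # changed differently: conflict
            conflicts.append(i)
            merged.append(f"<<< {ours_line} ||| {theirs_line} >>>")
    return merged, conflicts

base = ["a", "b", "c"]
ours = ["a", "B", "c"]                         # one developer edits line 2
theirs = ["a", "b", "C"]                       # another edits line 3
print(three_way_merge(base, ours, theirs))     # (['a', 'B', 'C'], []) - no conflict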
The concept of a reserved edit can provide an optional means to explicitly lock a file for exclusive write access, even when a merging capability exists.
Baselines, labels and tags
Most revision control tools will use only one of these similar terms (baseline, label, tag) to refer to the action of identifying a snapshot ("label the project") or the record of the snapshot ("try it with baseline X"). Typically only one of the terms baseline, label, or tag is used in documentation or discussion; they can be considered synonyms.
In most projects, some snapshots are more significant than others, such as those used to indicate published releases, branches, or milestones.
When both the term baseline and either of label or tag are used together in the same context, label and tag usually refer to the mechanism within the tool of identifying or making the record of the snapshot, and baseline indicates the increased significance of any given label or tag.
Most formal discussion of configuration management uses the term baseline.
Distributed revision control
Distributed revision control systems (DRCS) take a peer-to-peer approach, as opposed to the client–server approach of centralized systems. Rather than a single, central repository on which clients synchronize, each peer's working copy of the codebase is a bona-fide repository.
Distributed revision control conducts synchronization by exchanging patches (change-sets) from peer to peer. This results in some important differences from a centralized system:
No canonical, reference copy of the codebase exists by default; only working copies.
Common operations (such as commits, viewing history, and reverting changes) are fast, because there is no need to communicate with a central server.
Rather, communication is only necessary when pushing or pulling changes to or from other peers.
Each working copy effectively functions as a remote backup of the codebase and of its change-history, providing inherent protection against data loss.
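The peer-to-peer exchange can be sketched with a toy Python model (hypothetical names; this is not how Git or Mercurial actually transfer objects): each peer keeps a complete set of revisions and pulls from another peer only the revisions it is missing.

class PeerRepo:
    def __init__(self):
        self.revisions = {}                    # revision id -> content

    def commit(self, rid, content):
        self.revisions[rid] = content

    def pull(self, other):
        # Fetch only what this peer is missing - no central server involved.
        missing = set(other.revisions) - set(self.revisions)
        for rid in missing:
            self.revisions[rid] = other.revisions[rid]
        return missing

alice, bob = PeerRepo(), PeerRepo()
alice.commit("r1", "first draft")
bob.pull(alice)                                # bob now holds a full clone
bob.commit("r2", "bob's change")
alice.pull(bob)                                # synchronization is peer to peer
assert alice.revisions == bob.revisions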
Best practices
Following best practices is necessary to obtain the full benefits of version control. Best practice may vary by version control tool and the field to which version control is applied. The generally accepted best practices in software development include: making incremental, small changes; making commits that involve only one task or fix - a corollary to this is to commit only code that works and does not knowingly break existing functionality; utilizing branching to complete functionality before release; writing clear and descriptive commit messages that make the what, why, and how clear in either the commit description or the code; and using a consistent branching strategy. Other best software development practices, such as code review and automated regression testing, may assist in following version control best practices.
Costs and benefits
Costs and benefits will vary dependent upon the version control tool chosen and the field in which it is applied. This section speaks to the field of software development, where version control is widely applied.
Costs
In addition to the costs of licensing the version control software, using version control requires time and effort. The concepts underlying version control must be understood and the technical particulars required to operate the version control software chosen must be learned. Version control best practices must be learned and integrated into the organization's existing software development practices. Management effort may be required to maintain the discipline needed to follow best practices in order to obtain useful benefit.
Benefits
Allows for reverting changes
A core benefit is the ability to keep history and revert changes, allowing the developer to easily undo changes. This gives the developer more opportunity to experiment, eliminating the fear of breaking existing code.
Branching simplifies deployment, maintenance and development
Branching assists with deployment. Branching and merging - together with the production, packaging, and labeling of source code patches and the easy application of patches to code bases - simplify the maintenance and concurrent development of the multiple code bases associated with the various stages of the deployment process: development, testing, staging, production, etc.
Damage mitigation, accountability and process and design improvement
There can be damage mitigation, accountability, process and design improvement, and other benefits associated with the record keeping provided by version control, the tracking of who did what, when, why, and how.
When bugs arise, knowing what was done when helps with damage mitigation and recovery by assisting in the identification of what problems exist, how long they have existed, and determining problem scope and solutions. Previous versions can be installed and tested to verify conclusions reached by examination of code and commit messages.
Simplifies debugging
Version control can greatly simplify debugging. The application of a test case to multiple versions can quickly identify the change which introduced a bug. The developer need not be familiar with the entire code base and can focus instead on the code that introduced the problem.
Improves collaboration and communication
Version control enhances collaboration in multiple ways. Since version control can identify conflicting changes, i.e. incompatible changes made to the same lines of code, there is less need for coordination among developers.
The packaging of commits, branches, and all the associated commit messages and version labels, improves communication between developers, both in the moment and over time. Better communication, whether instant or deferred, can improve the code review process, the testing process, and other critical aspects of the software development process.
Integration
Some of the more advanced revision-control tools offer many other facilities, allowing deeper integration with other tools and software-engineering processes.
Integrated development environment
Plugins are often available for IDEs such as Oracle JDeveloper, IntelliJ IDEA, Eclipse, Visual Studio, Delphi, NetBeans IDE, Xcode, and GNU Emacs (via vc.el). Advanced research prototypes generate appropriate commit messages.
Common terminology
Terminology can vary from system to system, but some terms in common usage include:
Baseline
An approved revision of a document or source file to which subsequent changes can be made. See baselines, labels and tags.
Blame
A search for the author and revision that last modified a particular line.
Branch
A set of files under version control may be branched or forked at a point in time so that, from that time forward, two copies of those files may develop at different speeds or in different ways independently of each other.
Change
A change (or diff, or delta) represents a specific modification to a document under version control. The granularity of the modification considered a change varies between version control systems.
Change list
On many version control systems with atomic multi-change commits, a change list (or CL), change set, update, or patch identifies the set of changes made in a single commit. This can also represent a sequential view of the source code, allowing the examination of source as of any particular changelist ID.
Checkout
To check out (or co) is to create a local working copy from the repository. A user may specify a specific revision or obtain the latest. The term 'checkout' can also be used as a noun to describe the working copy. In systems that use file locking, a file that has been checked out from a shared file server cannot be edited by other users until it is checked back in.
Clone
Cloning means creating a repository containing the revisions from another repository. This is equivalent to pushing or pulling into an empty (newly initialized) repository. As a noun, two repositories can be said to be clones if they are kept synchronized, and contain the same revisions.
Commit (noun)
A commit, or revision, is the result of a single check-in: a set of changes recorded in the repository as a unit, together with its descriptive metadata.
Commit (verb)
To commit (check in, ci or, more rarely, install, submit or record) is to write or merge the changes made in the working copy back to the repository. A commit contains metadata, typically the author information and a commit message that describes the change.
Commit message
A short note, written by the developer, stored with the commit, which describes the commit. Ideally, it records why the modification was made, a description of the modification's effect or purpose, and non-obvious aspects of how the change works.
Conflict
A conflict occurs when different parties make changes to the same document, and the system is unable to reconcile the changes. A user must resolve the conflict by combining the changes, or by selecting one change in favour of the other.
Delta compression
Most revision control software uses delta compression, which retains only the differences between successive versions of files. This allows for more efficient storage of many different versions of files.
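As a rough illustration of the idea (not how any particular system stores its data), Python's standard difflib module can compute a line-based delta between two versions and reconstruct either version from it. An ndiff-style delta is verbose; production systems store compact binary deltas, but the reconstruct-from-differences principle is the same.

    import difflib

    v1 = ["line one\n", "line two\n", "line three\n"]
    v2 = ["line one\n", "line 2\n", "line three\n", "line four\n"]

    # Compute a delta describing how v1 was changed into v2.
    delta = list(difflib.ndiff(v1, v2))

    # Either full version can be reconstructed from the stored delta.
    assert list(difflib.restore(delta, 1)) == v1
    assert list(difflib.restore(delta, 2)) == v2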
Dynamic stream
A stream in which some or all file versions are mirrors of the parent stream's versions.
Export
Exporting is the act of obtaining the files from the repository. It is similar to checking out except that it creates a clean directory tree without the version-control metadata used in a working copy. This is often used prior to publishing the contents, for example.
Fetch
See pull.
Forward integration
The process of merging changes made in the main trunk into a development (feature or team) branch.
Head
Also sometimes called tip, this refers to the most recent commit, either to the trunk or to a branch. The trunk and each branch have their own head, though HEAD is sometimes loosely used to refer to the trunk.
Import
Importing is the act of copying a local directory tree (that is not currently a working copy) into the repository for the first time.
Initialize
To create a new, empty repository.
Interleaved deltas
Some revision control software uses interleaved deltas, a method of storing the history of text-based files more efficiently than delta compression does.
Label
See tag.
Locking
When a developer locks a file, no one else can update that file until it is unlocked. Locking can be supported by the version control system, or via informal communications between developers (aka social locking).
Mainline
Similar to trunk, but there can be a mainline for each branch.
Merge
A merge or integration is an operation in which two sets of changes are applied to a file or set of files. Some sample scenarios are as follows (a minimal merge sketch in Python follows the list):
A user, working on a set of files, updates or syncs their working copy with changes made, and checked into the repository, by other users.
A user tries to check in files that have been updated by others since the files were checked out, and the revision control software automatically merges the files (typically, after prompting the user if it should proceed with the automatic merge, and in some cases only doing so if the merge can be clearly and reasonably resolved).
A branch is created, the code in the files is independently edited, and the updated branch is later incorporated into a single, unified trunk.
A set of files is branched, a problem that existed before the branching is fixed in one branch, and the fix is then merged into the other branch. (This type of selective merge is sometimes known as a cherry pick to distinguish it from the complete merge in the previous case.)
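The last two scenarios rest on three-way merging: comparing each side against their common ancestor. The sketch below illustrates the core decision rule in Python in a deliberately simplified setting; it assumes both sides only edit lines in place (no insertions or deletions), so all three versions have the same length. Real merge tools first align lines with a diff algorithm, and nothing here reflects any specific tool's implementation.

    def three_way_merge(base, ours, theirs):
        """Merge two edited versions of 'base', line by line."""
        merged, conflicts = [], []
        for i, (b, o, t) in enumerate(zip(base, ours, theirs)):
            if o == t:            # both sides agree (or neither changed)
                merged.append(o)
            elif b == o:          # only 'theirs' changed this line
                merged.append(t)
            elif b == t:          # only 'ours' changed this line
                merged.append(o)
            else:                 # both changed it differently: conflict
                merged.append("<<< ours: %s ||| theirs: %s >>>" % (o, t))
                conflicts.append(i)
        return merged, conflicts

    # Non-overlapping edits merge cleanly, with no conflicts reported:
    print(three_way_merge(["a", "b", "c"], ["a", "B", "c"], ["a", "b", "C"]))
    # -> (['a', 'B', 'C'], [])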
Promote
The act of copying file content from a less controlled location into a more controlled location. For example, from a user's workspace into a repository, or from a stream to its parent.
Pull, push
Copy revisions from one repository into another. Pull is initiated by the receiving repository, while push is initiated by the source. Fetch is sometimes used as a synonym for pull, or to mean a pull followed by an update.
Pull request
A pull request is a proposal, common on repository hosting services, that the changes on one branch or fork be reviewed and merged into another branch or repository.
Repository
The repository (or repo) is where the current and historical data of the versioned files are stored, often on a server.
Resolve
The act of user intervention to address a conflict between different changes to the same document.
Reverse integration
The process of merging different team branches into the main trunk of the versioning system.
Revision and version
A version is any change in form. In SVK, a Revision is the state at a point in time of the entire tree in the repository.
Share
The act of making one file or folder available in multiple branches at the same time. When a shared file is changed in one branch, it is changed in other branches.
Stream
A container for branched files that has a known relationship to other such containers. Streams form a hierarchy; each stream can inherit various properties (like versions, namespace, workflow rules, subscribers, etc.) from its parent stream.
Tag
A tag or label refers to an important snapshot in time, consistent across many files. These files at that point may all be tagged with a user-friendly, meaningful name or revision number. See baselines, labels and tags.
Trunk
The trunk is the unique line of development that is not a branch (sometimes also called Baseline, Mainline, or Master).
Update
An update (or sync, though sync can also mean a combined push and pull) merges changes made in the repository (by other people, for example) into the local working copy. Update is also the term used by some CM tools (CM+, PLS, SMS) for the change package concept (see changelist). In revision control systems that require each repository to have exactly one working copy (common in distributed systems), update is synonymous with checkout.
Unlocking
Releasing a lock.
Working copy
The working copy is the local copy of files from a repository, at a specific time or revision. All work done to the files in a repository is initially done on a working copy, hence the name. Conceptually, it is a sandbox.
See also
Notes
References
External links
Version control systems
Technical communication
Software development process
Distributed version control systems | Version control | [
"Engineering"
] | 5,446 | [
"Software engineering",
"Version control"
] |
55,983 | https://en.wikipedia.org/wiki/Black%20rat | The black rat (Rattus rattus), also known as the roof rat, ship rat, or house rat, is a common long-tailed rodent of the stereotypical rat genus Rattus, in the subfamily Murinae. It likely originated in the Indian subcontinent, but is now found worldwide.
The black rat is black to light brown in colour with a lighter underside. It is a generalist omnivore and a serious pest to farmers because it feeds on a wide range of agricultural crops. It is sometimes kept as a pet. In parts of India, it is considered sacred and respected in the Karni Mata Temple in Deshnoke.
Taxonomy
Mus rattus was the scientific name proposed by Carl Linnaeus in 1758 for the black rat.
Three subspecies were once recognized, but they are today considered invalid and are now known to be color morphs:
Rattus rattus rattus – roof rat
Rattus rattus alexandrinus – Alexandrine rat
Rattus rattus frugivorus – fruit rat
Characteristics
A typical adult black rat is long, not including a tail, and weighs , depending on the subspecies. Black rats typically live for about one year in the wild and up to four years in captivity. Despite its name, the black rat exhibits several colour forms. It is usually black to light brown in colour with a lighter underside. In England during the 1920s, several variations were bred and shown alongside domesticated brown rats. This included an unusual green-tinted variety.
Origin
The black rat was present in prehistoric Europe and in the Levant during postglacial periods. The black rat in the Mediterranean region differs genetically from its South Asian ancestor by having 38 instead of 42 chromosomes. Its closest relative is the Asian house rat (R. tanezumi) from Southeast Asia. The two diverged about 120,000 years ago in southwestern Asia. It is unclear how the rat made its way to Europe due to insufficient data, although a land route seems more likely based on the distribution of European haplogroup "A". The black rat spread throughout Europe with the Roman conquest, but declined around the 6th century, possibly due to collapse of the Roman grain trade, climate cooling, or the Justinianic Plague. A genetically distinct rat population of haplogroup A replaced the Roman-era population in Europe in medieval times.
It is a resilient vector for many diseases because of its ability to hold so many infectious bacteria in its blood. It was formerly thought to have played a primary role in spreading bacteria contained in fleas on its body, such as the plague bacterium (Yersinia pestis) which is responsible for the Plague of Justinian and the Black Death. However, recent studies have called this theory into question and instead posit humans themselves as the vector, as the movements of the epidemics and the black rat populations do not show historical or geographical correspondence. A study published in 2015 indicates that other Asiatic rodents served as plague reservoirs, from which infections spread as far west as Europe via trade routes, both overland and maritime. Although the black rat was certainly a plague vector in European ports, the spread of the plague beyond areas colonized by rats suggests that the plague was also circulated by humans after reaching Europe.
Distribution and habitat
The black rat originated in India and Southeast Asia, and spread to the Near East and Egypt, and then throughout the Roman Empire, reaching Great Britain as early as the 1st century AD. Europeans subsequently spread it throughout the world. The black rat is again largely confined to warmer areas, having been supplanted by the brown rat (Rattus norvegicus) in cooler regions and urban areas. In addition to the brown rat being larger and more aggressive, the change from wooden structures and thatched roofs to bricked and tiled buildings favored the burrowing brown rats over the arboreal black rats. In addition, brown rats eat a wider variety of foods, and are more resistant to weather extremes.
Black rat populations can increase exponentially under certain circumstances, perhaps having to do with the timing of the fruiting of the bamboo plant, and cause devastation to the plantings of subsistence farmers; this phenomenon is known as mautam in parts of India.
Black rats are thought to have arrived in Australia with the First Fleet, and subsequently spread to many coastal regions in the country.
Black rats adapt to a wide range of habitats. In urban areas they are found around warehouses, residential buildings, and other human settlements. They are also found in agricultural areas, such as in barns and crop fields. In urban areas, they prefer to live in dry upper levels of buildings, so they are commonly found in wall cavities and false ceilings. In the wild, black rats live in cliffs, rocks, the ground, and trees. They are great climbers and prefer to live in palms and trees, such as pine trees. Their nests are typically spherical and made of shredded material, including sticks, leaves, other vegetation and cloth. In the absence of palms or trees, they can burrow into the ground. Black rats are also found around fences, ponds, riverbanks, streams, and reservoirs.
Behaviour and ecology
It is thought that male and female rats have similarly sized home ranges during the winter, but male rats increase the size of their home range during the breeding season. Along with differing between rats of different sexes, home range also differs depending on the type of forest the black rat inhabits. For example, home ranges in the southern beech forests of the South Island, New Zealand appear to be much larger than those in the non-beech forests of the North Island. Due to the limited number of rats studied in home range studies, the estimated sizes of rat home ranges in different rat demographic groups are inconclusive.
Diet and foraging
Black rats are considered omnivores and eat a wide range of foods, including seeds, fruit, stems, leaves, fungi, and a variety of invertebrates and vertebrates. They are generalists, and thus not very specific in their food preferences, which is indicated by their tendency to feed on any meal provided for cows, swine, chickens, cats and dogs. They are similar to the tree squirrel in their preference of fruits and nuts. They eat about per day and drink about per day. Their diet is high in water content. They are a threat to many natural habitats because they feed on birds and insects. They are also a threat to many farmers, since they feed on a variety of agricultural-based crops, such as cereals, sugar cane, coconuts, cocoa, oranges, and coffee beans.
The black rat displays flexibility in its foraging behaviour. It is a predatory species and adapts to different micro-habitats. It often meets and forages together in close proximity within and between sexes. It tends to forage after sunset. If the food cannot be eaten quickly, it searches for a place to carry and hoard the food to eat at a later time. Although it eats a broad range of foods, it is a highly selective feeder; only a restricted selection of foods dominates its diet. When offered a wide diversity of foods, it eats only a small sample of each. This allows it to monitor the quality of foods that are present year round, such as leaves, as well as seasonal foods, such as herbs and insects. This method of operating on a set of foraging standards ultimately determines the final composition of its meals. Also, by sampling the available food in an area, it maintains a dynamic food supply, balances its nutrient intake, and avoids intoxication by secondary compounds.
Nesting behaviour
Through the use of tracking devices such as radio transmitters, rats have been found to occupy dens located in trees, as well as on the ground. In Puketi Forest in the Northland Region of New Zealand, rats have been found to form dens together. Rats appear to den and forage in separate areas in their home range depending on the availability of food resources. Research shows that, in New South Wales, the black rat prefers to inhabit lower leaf litter of forest habitat. There is also an apparent correlation between canopy height, the presence of logs, and the presence of black rats. This correlation may be a result of the distribution and abundance of prey as well as of available refuges for rats to avoid predators. As found in North Head, New South Wales, there is a positive correlation between rat abundance, leaf litter cover, canopy height, and litter depth. All other habitat variables showed little to no correlation. While this species' relative, the brown (Norway) rat, prefers to nest near the ground of a building, the black rat prefers the upper floors and roof. Because of this habit, they have been given the common name roof rat.
Diseases
Black rats (or their ectoparasites) can carry a number of pathogens, of which bubonic plague (via the Oriental rat flea), typhus, Weil's disease, toxoplasmosis and trichinosis are the best known. It has been hypothesized that the displacement of black rats by brown rats led to the decline of the Black Death. This theory has, however, been deprecated, as the dates of these displacements do not match the increases and decreases in plague outbreaks.
Rats serve as outstanding vectors for the transmission of diseases because they can carry bacteria and viruses in their systems. A number of disease-causing bacteria are carried by rats, including Streptococcus pneumoniae, Corynebacterium kutsheri, Bacillus piliformis, Pasteurella pneumotropica, and Streptobacillus moniliformis, to name a few. All of these bacteria cause disease in humans. In some cases, these diseases are incurable.
Predators
The black rat is prey to cats and owls in domestic settings. In less urban settings, rats are preyed on by weasels, foxes and coyotes. These predators have little effect on the control of the black rat population because black rats are agile and fast climbers. In addition to agility, the black rat also uses its keen sense of hearing to detect danger and quickly evade mammalian and avian predators.
As an invasive species
Damage caused
After Rattus rattus was introduced into the northern islands of New Zealand, it fed on seedlings, adversely affecting the ecology of the islands. Even after eradication of R. rattus, the negative effects may take decades to reverse. By consuming seabirds and seabird eggs, the rats also reduce the pH of the soil. This harms plant species by reducing nutrient availability in the soil, thus decreasing the probability of seed germination. For example, research conducted by Hoffman et al. indicates a large impact on 16 indigenous plant species directly preyed on by R. rattus. These plants showed reduced germination and growth in the presence of black rats.
Rats prefer to forage in forest habitats. In the Ogasawara islands, they prey on the indigenous snails and seedlings. Snails that inhabit the leaf litter of these islands showed a significant decline in population on the introduction of Rattus rattus. The black rat shows a preference for snails with larger shells (greater than 10 mm), and this led to a great decline in the population of snails with larger shells. A lack of prey refuges makes it more difficult for the snail to avoid the rat.
Complex pest
The black rat is a complex pest, defined as one that influences the environment in both harmful and beneficial ways. In many cases, after the black rat is introduced into a new area, the population size of some native species declines or goes extinct. This is because the black rat is a good generalist with a wide dietary niche and a preference for complex habitats; this causes strong competition for resources among small animals. This has led to the black rat completely displacing many native species in Madagascar, the Galapagos, and the Florida Keys. In a study by Stokes et al., habitats suitable for the native bush rat, Rattus fuscipes, of Australia are often invaded by the black rat and are eventually occupied by only the black rat. When the abundances of these two rat species were compared in different micro-habitats, both were found to be affected by micro-habitat disturbances, but the black rat was most abundant in areas of high disturbance; this indicates it has a better dispersal ability.
Despite the black rat's tendency to displace native species, it can also aid in increasing species population numbers and maintaining species diversity. The bush rat, a common vector for spore dispersal of truffles, has been extirpated from many micro-habitats of Australia. In the absence of a vector, the diversity of truffle species would be expected to decline. In a study in New South Wales, Australia it was found that, although the bush rat consumes a diversity of truffle species, the black rat consumes as much of the diverse fungi as the natives and is an effective vector for spore dispersal. Since the black rat now occupies many of the micro-habitats that were previously inhabited by the bush rat, the black rat plays an important ecological role in the dispersal of fungal spores. By eradicating the black rat populations in Australia, the diversity of fungi would decline, potentially doing more harm than good.
Control methods
Large-scale rat control programs have been undertaken to maintain a steady level of the invasive predators and to conserve native New Zealand species such as the kokako and mohua. Pesticides, such as pindone and 1080 (sodium fluoroacetate), are commonly distributed via aerial spray by helicopter as a method of mass control on islands infested with invasive rat populations. Bait, such as brodifacoum, is also used along with coloured dyes (used to deter birds from eating the baits) in order to kill and identify rats for experimental and tracking purposes. Another method to track rats is the use of wired cage traps, which are used along with bait, such as rolled oats and peanut butter, to tag and track rats to determine population sizes through methods like mark-recapture and radio-tracking. Tracking tunnels (coreflute tunnels containing an inked card) are also commonly used monitoring devices, as are chew-cards containing peanut butter. Poison control methods are effective in reducing rat populations to nonthreatening sizes, but rat populations often rebound to normal size within months. Beyond the rats' highly adaptive foraging behavior and fast reproduction, the exact mechanisms of this rebound are unclear and are still being studied.
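As an aside on the mark-recapture method mentioned above, the simplest population estimate is the classic Lincoln–Petersen formula, stated here for illustration; field studies typically use bias-corrected or model-based variants:

    \[
      \hat{N} = \frac{M C}{R}
    \]

where M is the number of animals marked and released in the first trapping session, C the number captured in the second session, and R the number of marked recaptures among them. Marking 50 rats and later catching 40, of which 10 carry marks, gives an estimate of (50 × 40)/10 = 200 rats.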
In 2010, the Sociedad Ornitológica Puertorriqueña (Puerto Rican Bird Society) and the Ponce Yacht and Fishing Club launched a campaign to eradicate the black rat from the Isla Ratones (Mice Island) and Isla Cardona (Cardona Island) islands off the municipality of Ponce, Puerto Rico.
Decline in population
Eradication projects have eliminated black rats from Lundy in the Bristol Channel (2006) and from the Shiant Islands in the Outer Hebrides (2016). Populations probably survive on other islands (e.g. Inchcolm) and in localised areas of the British mainland. Recent National Biodiversity Network data show populations around the U.K., particularly in ports and port towns.
See also
Karni Mata Temple, Deshnoke, Rajasthan, India.
Polynesian rat
Urban plague
References
Further reading
List of books and articles about rats
External links
Photos and video at ARKive
Rattus
Rodents of Asia
Rodents of Europe
Mammals of Azerbaijan
Mammals of Nepal
Stored-product pests
Mammals described in 1758
Taxa named by Carl Linnaeus
Rodents of Borneo | Black rat | [
"Biology"
] | 3,188 | [
"Pests (organism)",
"Stored-product pests"
] |
55,999 | https://en.wikipedia.org/wiki/Tongue | The tongue is a muscular organ in the mouth of a typical tetrapod. It manipulates food for chewing and swallowing as part of the digestive process, and is the primary organ of taste. The tongue's upper surface (dorsum) is covered by taste buds housed in numerous lingual papillae. It is sensitive and kept moist by saliva and is richly supplied with nerves and blood vessels. The tongue also serves as a natural means of cleaning the teeth. A major function of the tongue is to enable speech in humans and vocalization in other animals.
The human tongue is divided into two parts, an oral part at the front and a pharyngeal part at the back. The left and right sides are also separated along most of its length by a vertical section of fibrous tissue (the lingual septum) that results in a groove, the median sulcus, on the tongue's surface.
There are two groups of glossal muscles. The four intrinsic muscles alter the shape of the tongue and are not attached to bone. The four paired extrinsic muscles change the position of the tongue and are anchored to bone.
Etymology
The word tongue derives from the Old English tunge, which comes from Proto-Germanic *tungōn. It has cognates in other Germanic languages—for example tonge in West Frisian, tong in Dutch and Afrikaans, Zunge in German, tunge in Danish and Norwegian, and tunga in Icelandic, Faroese and Swedish. The ue ending of the word seems to be a fourteenth-century attempt to show "proper pronunciation", but it is "neither etymological nor phonetic". Some used the spelling tunge and tonge as late as the sixteenth century.
In humans
Structure
The tongue is a muscular hydrostat that forms part of the floor of the oral cavity. The left and right sides of the tongue are separated by a vertical section of fibrous tissue known as the lingual septum. This division is along the length of the tongue save for the very back of the pharyngeal part and is visible as a groove called the median sulcus. The human tongue is divided into anterior and posterior parts by the terminal sulcus, which is a "V"-shaped groove. The apex of the terminal sulcus is marked by a blind foramen, the foramen cecum, which is a remnant of the median thyroid diverticulum in early embryonic development. The anterior oral part is the visible part situated at the front and makes up roughly two-thirds the length of the tongue. The posterior pharyngeal part is the part closest to the throat, roughly one-third of its length. These parts differ in terms of their embryological development and nerve supply.
The anterior tongue is, at its apex, thin and narrow. It is directed forward against the lingual surfaces of the lower incisor teeth. The posterior part is, at its root, directed backward, and connected with the hyoid bone by the hyoglossi and genioglossi muscles and the hyoglossal membrane, with the epiglottis by three glossoepiglottic folds of mucous membrane, with the soft palate by the glossopalatine arches, and with the pharynx by the superior pharyngeal constrictor muscle and the mucous membrane. It also forms the anterior wall of the oropharynx.
The average length of the human tongue from the oropharynx to the tip is 10 cm. The average weight of the human tongue is 99 g for adult males and 79 g for adult females.
In phonetics and phonology, a distinction is made between the tip of the tongue and the blade (the portion just behind the tip). Sounds made with the tongue tip are said to be apical, while those made with the tongue blade are said to be laminal.
Upper surface
The upper surface of the tongue is called the dorsum, and is divided by a groove into symmetrical halves by the median sulcus. The foramen cecum marks the end of this division (at about 2.5 cm from the root of the tongue) and the beginning of the terminal sulcus. The foramen cecum is also the point of attachment of the thyroglossal duct and is formed during the descent of the thyroid diverticulum in embryonic development.
The terminal sulcus is a shallow groove that runs forward as a shallow groove in a V shape from the foramen cecum, forwards and outwards to the margins (borders) of the tongue. The terminal sulcus divides the tongue into a posterior pharyngeal part and an anterior oral part. The pharyngeal part is supplied by the glossopharyngeal nerve and the oral part is supplied by the lingual nerve (a branch of the mandibular branch (V3) of the trigeminal nerve) for somatosensory perception and by the chorda tympani (a branch of the facial nerve) for taste perception.
Both parts of the tongue develop from different pharyngeal arches.
Undersurface
On the undersurface of the tongue is a fold of mucous membrane called the frenulum that tethers the tongue at the midline to the floor of the mouth. On either side of the frenulum are small prominences called sublingual caruncles that the major salivary submandibular glands drain into.
Muscles
The eight muscles of the human tongue are classified as either intrinsic or extrinsic. The four intrinsic muscles act to change the shape of the tongue, and are not attached to any bone. The four extrinsic muscles act to change the position of the tongue, and are anchored to bone.
Extrinsic
The four extrinsic muscles originate from bone and extend to the tongue. They are the genioglossus, the hyoglossus (often including the chondroglossus), the styloglossus, and the palatoglossus. Their main function is altering the tongue's position, allowing for protrusion, retraction, and side-to-side movement.
The genioglossus arises from the mandible and protrudes the tongue. It is also known as the tongue's "safety muscle" since it is the only muscle that propels the tongue forward.
The hyoglossus arises from the hyoid bone and retracts and depresses the tongue. The chondroglossus is often included with this muscle.
The styloglossus arises from the styloid process of the temporal bone and draws the sides of the tongue up to create a trough for swallowing.
The palatoglossus arises from the palatine aponeurosis, and depresses the soft palate, moves the palatoglossal fold towards the midline, and elevates the back of the tongue during swallowing.
Intrinsic
Four paired intrinsic muscles of the tongue originate and insert within the tongue, running along its length. They are the superior longitudinal muscle, the inferior longitudinal muscle, the vertical muscle, and the transverse muscle. These muscles alter the shape of the tongue by lengthening and shortening it, curling and uncurling its apex and edges as in tongue rolling, and flattening and rounding its surface. This provides shape and helps facilitate speech, swallowing, and eating.
The superior longitudinal muscle runs along the upper surface of the tongue under the mucous membrane, and functions to shorten and curl the tongue upward. It originates near the epiglottis, at the hyoid bone, from the median fibrous septum.
The inferior longitudinal muscle lines the sides of the tongue, and is joined to the styloglossus muscle. It functions to shorten and curl the tongue downward.
The vertical muscle is located in the middle of the tongue, and joins the superior and inferior longitudinal muscles. It functions to flatten the tongue.
The transverse muscle divides the tongue at the middle, and is attached to the mucous membranes that run along the sides. It functions to lengthen and narrow the tongue.
Blood supply
The tongue receives its blood supply primarily from the lingual artery, a branch of the external carotid artery. The lingual veins drain into the internal jugular vein. The floor of the mouth also receives its blood supply from the lingual artery. There is also a secondary blood supply to the root of the tongue from the tonsillar branch of the facial artery and the ascending pharyngeal artery.
An area in the neck sometimes called the Pirogov triangle is formed by the intermediate tendon of the digastric muscle, the posterior border of the mylohyoid muscle, and the hypoglossal nerve. The lingual artery, which can be accessed within this triangle, is a good site at which to stop severe hemorrhage from the tongue.
Nerve supply
Innervation of the tongue consists of motor fibers, special sensory fibers for taste, and general sensory fibers for sensation.
Motor supply for all intrinsic and extrinsic muscles of the tongue is supplied by efferent motor nerve fibers from the hypoglossal nerve (CN XII), with the exception of the palatoglossus, which is innervated by the vagus nerve (CN X).
Innervation of taste and sensation is different for the anterior and posterior part of the tongue because they are derived from different embryological structures (pharyngeal arch 1 and pharyngeal arches 3 and 4, respectively).
Anterior two-thirds of tongue (anterior to the vallate papillae):
Taste: chorda tympani branch of the facial nerve (CN VII) via special visceral afferent fibers
Sensation: lingual branch of the mandibular (V3) division of the trigeminal nerve (CN V) via general visceral afferent fibers
Posterior one third of tongue:
Taste and sensation: glossopharyngeal nerve (CN IX) via a mixture of special and general visceral afferent fibers
Base of tongue
Taste and sensation: internal branch of the superior laryngeal nerve (itself a branch of the vagus nerve, CN X)
Lymphatic drainage
The tip of tongue drains to the submental nodes. The left and right halves of the anterior two-thirds of the tongue drains to submandibular lymph nodes, while the posterior one-third of the tongue drains to the jugulo-omohyoid nodes.
Microanatomy
The upper surface of the tongue is covered in masticatory mucosa, a type of oral mucosa, which is of keratinized stratified squamous epithelium. Embedded in this are numerous papillae, some of which house the taste buds and their taste receptors. The lingual papillae consist of filiform, fungiform, vallate and foliate papillae, and only the filiform papillae are not associated with any taste buds.
The tongue can be divided into dorsal and ventral surfaces. The dorsal surface is covered by a stratified squamous keratinized epithelium, which is characterized by numerous mucosal projections called papillae. The lingual papillae cover the dorsal side of the tongue in front of the terminal groove. The ventral surface is covered by a smooth, stratified squamous non-keratinized epithelium.
Development
The tongue begins to develop in the fourth week of embryonic development from a median swelling – the median tongue bud (tuberculum impar) of the first pharyngeal arch.
In the fifth week a pair of lateral lingual swellings, one on the right side and one on the left, form on the first pharyngeal arch. These lingual swellings quickly expand and cover the median tongue bud. They form the anterior part of the tongue that makes up two-thirds of the length of the tongue, and continue to develop through prenatal development. The line of their fusion is marked by the median sulcus.
In the fourth week, a swelling appears from the second pharyngeal arch, in the midline, called the copula. During the fifth and sixth weeks, the copula is overgrown by a swelling from the third and fourth arches (mainly from the third arch) called the hypopharyngeal eminence, and this develops into the posterior part of the tongue (the other third and the posterior most part of the tongue is developed from the fourth pharyngeal arch). The hypopharyngeal eminence develops mainly by the growth of endoderm from the third pharyngeal arch. The boundary between the two parts of the tongue, the anterior from the first arch and the posterior from the third arch is marked by the terminal sulcus. The terminal sulcus is shaped like a V with the tip of the V situated posteriorly. At the tip of the terminal sulcus is the foramen cecum, which is the point of attachment of the thyroglossal duct where the embryonic thyroid begins to descend.
Function
Taste
Chemicals that stimulate taste receptor cells are known as tastants. Once a tastant is dissolved in saliva, it can make contact with the plasma membrane of the gustatory hairs, which are the sites of taste transduction.
The tongue is equipped with many taste buds on its dorsal surface, and each taste bud is equipped with taste receptor cells that can sense particular classes of tastes. Distinct types of taste receptor cells respectively detect substances that are sweet, bitter, salty, sour, or umami. Umami receptor cells are the least understood and accordingly are the type most intensively under research. There is a common misconception that different sections of the tongue are exclusively responsible for different basic tastes. Although widely taught in schools in the form of the tongue map, this is incorrect; all taste sensations come from all regions of the tongue, although certain parts are more sensitive to certain tastes.
Mastication
The tongue is an important accessory organ in the digestive system. The tongue is used for crushing food against the hard palate, during mastication and manipulation of food for softening prior to swallowing. The epithelium on the tongue's upper, or dorsal surface is keratinised. Consequently, the tongue can grind against the hard palate without being itself damaged or irritated.
Speech
The tongue is one of the primary articulators in the production of speech, and this is facilitated by both the extrinsic muscles that move the tongue and the intrinsic muscles that change its shape. Specifically, different vowels are articulated by changing the tongue's height and retraction to alter the resonant properties of the vocal tract. These resonant properties amplify specific harmonic frequencies (formants) that are different for each vowel, while attenuating other harmonics. For example, [a] is produced with the tongue lowered and centered and [i] is produced with the tongue raised and fronted. Consonants are articulated by constricting airflow through the vocal tract, and many consonants feature a constriction between the tongue and some other part of the vocal tract. For example, alveolar consonants like [s] and [n] are articulated with the tongue against the alveolar ridge, while velar consonants like [k] and [g] are articulated with the tongue dorsum against the soft palate (velum). Tongue shape is also relevant to speech articulation, for example in retroflex consonants, where the tip of the tongue is curved backward.
Intimacy
The tongue plays a role in physical intimacy and sexuality. The tongue is part of the erogenous zone of the mouth and can be used in intimate contact, as in the French kiss and in oral sex.
Clinical significance
Disease
A congenital disorder of the tongue is that of ankyloglossia also known as tongue-tie. The tongue is tied to the floor of the mouth by a very short and thickened frenulum and this affects speech, eating, and swallowing.
The tongue is prone to several pathologies including glossitis and other inflammations such as geographic tongue, and median rhomboid glossitis; burning mouth syndrome, oral hairy leukoplakia, oral candidiasis (thrush), black hairy tongue, bifid tongue (due to failure in fusion of two lingual swellings of first pharyngeal arch) and fissured tongue.
There are several types of oral cancer that mainly affect the tongue. Mostly these are squamous cell carcinomas.
Food debris, desquamated epithelial cells and bacteria often form a visible tongue coating. This coating has been identified as a major factor contributing to bad breath (halitosis), which can be managed by using a tongue cleaner.
Medication delivery
The sublingual region underneath the front of the tongue is an ideal location for the administration of certain medications into the body. The oral mucosa is very thin underneath the tongue, and is underlain by a plexus of veins. The sublingual route takes advantage of the highly vascular quality of the oral cavity, and allows for the speedy application of medication into the cardiovascular system, bypassing the gastrointestinal tract. This is the only convenient and efficacious route of administration (apart from Intravenous therapy) of nitroglycerin to a patient suffering chest pain from angina pectoris.
Other animals
The muscles of the tongue evolved in amphibians from occipital somites. Most amphibians show a proper tongue after their metamorphosis. As a consequence, most tetrapod animals—amphibians, reptiles, birds, and mammals—have tongues (the pipid family of frogs lacks a tongue). In mammals such as dogs and cats, the tongue is often used to clean the fur and body by licking. The tongues of these species have a very rough texture, which allows them to remove oils and parasites. Some dogs have a tendency to consistently lick a part of their foreleg, which can result in a skin condition known as a lick granuloma. A dog's tongue also acts as a heat regulator. As a dog exercises more, its tongue will increase in size due to greater blood flow. The tongue hangs out of the dog's mouth and the moisture on the tongue works to cool the blood flow.
Some animals have tongues that are specially adapted for catching prey. For example, chameleons, frogs, pangolins and anteaters have prehensile tongues.
Other animals may have organs that are analogous to tongues, such as a butterfly's proboscis or a radula on a mollusc, but these are not homologous with the tongues found in vertebrates and often have little resemblance in function. For example, butterflies do not lick with their proboscides; they suck through them, and the proboscis is not a single organ, but two jaws held together to form a tube. Many species of fish have small folds at the base of their mouths that might informally be called tongues, but they lack a muscular structure like the true tongues found in most tetrapods.
Society and culture
Figures of speech
The tongue can serve as a metonym for language. For example, in the New Testament's Book of Acts of the Apostles, Jesus' disciples on the Day of Pentecost received a type of spiritual gift: "there appeared unto them cloven tongues like as of fire, and it sat upon each of them. And they were all filled with the Holy Ghost, and began to speak with other tongues ....", which amazed the crowd of Jewish people in Jerusalem, who were from various parts of the Roman Empire but could now understand what was being preached. The phrase mother tongue refers to a child's first language. Many languages have the same word for "tongue" and "language", as did the English language before the Middle Ages.
A common temporary failure in word retrieval from memory is referred to as the tip-of-the-tongue phenomenon. The expression tongue in cheek refers to a statement that is not to be taken entirely seriously – something said or done with subtle ironic or sarcastic humour. A tongue twister is a phrase that is very difficult to pronounce. Aside from being a medical condition, "tongue-tied" means being unable to say what you want due to confusion or restriction. The phrase "cat got your tongue" refers to when a person is speechless. To "bite one's tongue" is a phrase which describes holding back an opinion to avoid causing offence. A "slip of the tongue" refers to an unintentional utterance, such as a Freudian slip. The "gift of tongues" refers to an uncommon ability to speak in a foreign language, often as a type of spiritual gift. Speaking in tongues is a common phrase used to describe glossolalia, the production of smooth, language-resembling sounds that form no true spoken language. A deceptive person is said to have a forked tongue, and a smooth-talking person is said to have a silver tongue.
Gestures
Sticking one's tongue out at someone is considered a childish gesture of rudeness or defiance in many countries; the act may also have sexual connotations, depending on the way in which it is done. However, in Tibet it is considered a greeting. In 2009, a farmer from Fabriano, Italy, was convicted and fined by Italy's highest court for sticking his tongue out at a neighbor with whom he had been arguing; proof of the affront had been captured with a cell-phone camera.
Body art
Tongue piercing and splitting have become more common in western countries in recent decades. One study found that one-fifth of young adults in Israel had at least one type of oral piercing, most commonly the tongue.
Representational art
Protruding tongues appear in the art of several Polynesian cultures.
As food
The tongues of some animals are consumed and sometimes prized as delicacies. Hot-tongue sandwiches frequently appear on menus in kosher delicatessens in America. Taco de lengua (lengua being Spanish for tongue) is a taco filled with beef tongue, and is especially popular in Mexican cuisine. As part of Colombian gastronomy, Tongue in Sauce (Lengua en Salsa) is a dish prepared by frying the tongue and adding tomato sauce, onions and salt. Tongue can also be prepared as birria. Pig and beef tongue are consumed in Chinese cuisine. Duck tongues are sometimes employed in Sichuan dishes, while lamb's tongue is occasionally employed in Continental and contemporary American cooking. Fried cod "tongue" is a relatively common part of fish meals in Norway and in Newfoundland. In Argentina and Uruguay cow tongue is cooked and served in vinegar (lengua a la vinagreta). In the Czech Republic and in Poland, a pork tongue is considered a delicacy, and there are many ways of preparing it. In Eastern Slavic countries, pork and beef tongues are commonly consumed, boiled and garnished with horseradish or jellied; beef tongues fetch a significantly higher price and are considered more of a delicacy. In Alaska, cow tongues are among the more commonly eaten. Both cow and moose tongues are popular toppings on open-topped sandwiches in Norway, the latter usually amongst hunters.
Tongues of seals and whales have been eaten, sometimes in large quantities, by sealers and whalers, and in various times and places have been sold for food on shore.
Gallery
See also
Electronic tongue
Tongue map
Vocal tract
Further reading
References
External links
University of Manitoba, Anatomy of the Vocal Tract
Sensory organs
Gustatory system
Digestive system
Human mouth anatomy
Speech organs | Tongue | [
"Biology"
] | 4,927 | [
"Digestive system",
"Organ systems"
] |
56,001 | https://en.wikipedia.org/wiki/Conformance%20testing | Conformance testing — an element of conformity assessment, and also known as compliance testing, or type testing — is testing or other activities that determine whether a process, product, or service complies with the requirements of a specification, technical standard, contract, or regulation. Testing is often either logical testing or physical testing. The test procedures may involve other criteria from mathematical testing or chemical testing. Beyond simple conformance, other requirements for efficiency, interoperability, or compliance may apply. Conformance testing may be undertaken by the producer of the product or service being assessed, by a user, or by an accredited independent organization, which can sometimes be the author of the standard being used. When testing is accompanied by certification, the products or services may then be advertised as being certified in compliance with the referred technical standard. Manufacturers and suppliers of products and services rely on such certification including listing on the certification body's website, to assure quality to the end user and that competing suppliers are on the same level.
Aside from the various types of testing, related conformance testing activities include:
Surveillance
Inspection
Auditing
Certification
Accreditation.
Forms of conformance testing
The UK government identifies three forms of testing or assessment:
1st party assessment (self assessment)
2nd party assessment (assessment by a purchaser or user of a product or service)
3rd party assessment (undertaken by an independent organisation)
Typical areas of application
Conformance testing is applied in various industries where a product or service must meet specific quality and/or regulatory standards. This includes areas such as:
biocompatibility proofing
data and communications protocol engineering
document engineering
electronic and electrical engineering
medical procedure proofing
pharmaceutical packaging
software engineering
building construction (fire)
In all such testing, the subject of the test is not just formal conformance, in aspects such as the completeness of filed proofs, the validity of referenced certificates, and the qualification of operating staff. Rather, testing also focuses heavily on operational conditions, physical conditions, and the applied test environments. By extension, conformance testing produces a vast set of documents and files that allow all performed tests to be repeated.
Software engineering
In software testing, conformance testing verifies that a product performs according to its specified standards. Compilers, for instance, are extensively tested to determine whether they meet the recognized standard for that language.
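A minimal sketch of the idea in Python: a conformance suite is just a set of test cases drawn from the specification, run unchanged against any implementation that claims conformance. The toy integer-to-Roman-numeral converter and its spec cases below are invented for illustration; they do not come from any real standard.

    import unittest

    def int_to_roman(n):
        """Implementation under test (a toy example)."""
        table = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
                 (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
                 (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
        out = []
        for value, symbol in table:
            while n >= value:
                out.append(symbol)
                n -= value
        return "".join(out)

    # Input/output pairs taken verbatim from the (hypothetical) standard.
    SPEC_CASES = [(1, "I"), (4, "IV"), (9, "IX"), (14, "XIV"),
                  (40, "XL"), (90, "XC"), (1994, "MCMXCIV")]

    class ConformanceSuite(unittest.TestCase):
        def test_spec_cases(self):
            for given, expected in SPEC_CASES:
                with self.subTest(input=given):
                    self.assertEqual(int_to_roman(given), expected)

    if __name__ == "__main__":
        unittest.main()

The key property is that the suite depends only on the specification, never on the implementation, so the same tests can certify competing products.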
Electronic and electrical engineering
In electronic engineering and electrical engineering, some countries and business environments (such as telecommunication companies) require that an electronic product meet certain requirements before it can be sold. Standards for telecommunication products written by standards organizations such as ANSI, the FCC, and IEC have certain criteria that a product must meet before compliance is recognized. In countries such as Japan, China, Korea, and some parts of Europe, products cannot be sold unless they are known to meet those requirements specified in the standards. Usually, manufacturers set their own requirements to ensure product quality, sometimes with levels much higher than what the governing bodies require. Compliance is achieved after a product passes a series of tests without exhibiting any specified mode of failure.
Compliance testing for electronic devices include emissions tests, immunity tests, and safety tests. Emissions tests ensure that a product will not emit harmful electromagnetic interference in communication and power lines. Immunity tests ensure that a product is immune to common electrical signals and electromagnetic interference (EMI) that will be found in its operating environment, such as electromagnetic radiation from a local radio station or interference from nearby products. Safety tests ensure that a product will not create a safety risk from situations such as a failed or shorted power supply, blocked cooling vent, and powerline voltage spikes and dips.
For example, Ericsson's telecommunications research and development subsidiary Telcordia Technologies publishes conformance standards for telecommunication equipment to pass the following tests:
Radiated immunity
An antenna is used to subject the device to electromagnetic waves, covering a large frequency range (usually from 80 MHz to 6 GHz).
Radiated emissions
One or more antennas are used to measure the amplitude of the electromagnetic waves that a device emits. The amplitude must be under a set limit, with the limit depending on the device's classification.
Conducted immunity
Low-frequency signals (usually 10 kHz to 80 MHz) are injected onto the data and power lines of a device. This test simulates the coupling of low-frequency signals onto the power and data lines, such as from a local AM radio station.
Conducted emissions
Similar to radiated emissions, except the signals are measured at the power lines with a filter device.
Electrostatic discharge (ESD) immunity
Electrostatic discharges with various properties (rise time, peak voltage, fall time, and half time) are applied to the areas on the device that are likely to be discharged to, such as the faces and near user-accessible buttons. Discharges are also applied to a vertical and a horizontal ground plane to simulate an ESD event on a nearby surface. Voltages are usually from 2 kV to 15 kV, but commonly go as high as 25 kV or more.
Electrical fast transient burst (EFTB) immunity
Bursts of high-voltage pulses are applied to the power lines to simulate events such as repeating voltage spikes from a motor.
Powerline dip immunity
The line voltage is slowly dropped down and then brought back up.
Powerline surge immunity
A surge is applied to the line voltage.
Standardization and agreements
Several international standards relating to conformance testing are published by the International Organization for Standardization (ISO) and covered in the divisions of ICS 03.120.20 for management and ICS 23.040.01 for technical. Other standalone ISO standards include:
ISO/TR 13881:2000 Petroleum and natural gas industries—Classification and conformity assessment of products, processes and services
ISO 18436-4:2008 Condition monitoring and diagnostics of machines—Requirements for qualification and assessment of personnel—Part 4: Field lubricant analysis
ISO/IEC 18009:1999 Information technology—Programming languages—Ada: Conformity assessment of a language processor
Conformity assessment and mutual recognition agreements
Many countries sign mutual recognition agreements (MRAs) with other countries in order to promote trade of and facilitate market access to goods and services, while making it easier to meet a country's conformance testing requirements. Additionally, these agreements have the advantage of increasing confidence in conformance assessment bodies (e.g., testing labs and certification bodies), and by extension, product quality. An example is the IAF MLA which is an agreement for the mutual recognition of accredited certification between IAF Accreditation Body (AB) Member signatories.
See also
Certification
Governance, risk management, and compliance
Standards organizations
Test assertion
Testing, inspection and certification
References
Software testing
Product testing
Standards
Evaluation methods | Conformance testing | [
"Engineering"
] | 1,333 | [
"Software engineering",
"Software testing"
] |
56,061 | https://en.wikipedia.org/wiki/Discrete%20space | In topology, a discrete space is a particularly simple example of a topological space or similar structure, one in which the points form a , meaning they are isolated from each other in a certain sense. The discrete topology is the finest topology that can be given on a set. Every subset is open in the discrete topology so that in particular, every singleton subset is an open set in the discrete topology.
Definitions
Given a set X:
A metric space (E, d) is said to be uniformly discrete if there exists a packing radius r > 0 such that, for any x, y in E, one has either x = y or d(x, y) > r. The topology underlying a metric space can be discrete without the metric being uniformly discrete: for example, the usual metric on the set {1, 1/2, 1/4, 1/8, ...}.
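For concreteness, the standard discrete metric on a set X, under which X is uniformly discrete since distinct points are always at distance exactly 1, can be written as:

    \[
      \rho(x, y) =
      \begin{cases}
        0 & \text{if } x = y, \\
        1 & \text{if } x \neq y,
      \end{cases}
    \]

and any r with 0 < r < 1 then serves as the packing radius in the definition above.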
Properties
The underlying uniformity on a discrete metric space is the discrete uniformity, and the underlying topology on a discrete uniform space is the discrete topology.
Thus, the different notions of discrete space are compatible with one another.
On the other hand, the underlying topology of a non-discrete uniform or metric space can be discrete; an example is the metric space X := {1/n : n = 1, 2, 3, ...} (with metric inherited from the real line and given by d(x, y) = |x − y|).
This is not the discrete metric; also, this space is not complete and hence not discrete as a uniform space.
Nevertheless, it is discrete as a topological space.
We say that X is topologically discrete but not uniformly discrete or metrically discrete.
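A short computation, standard and included here for concreteness, makes both claims about X = {1/n : n = 1, 2, 3, ...} explicit:

    \[
      d\!\left(\frac{1}{n}, \frac{1}{n+1}\right) = \frac{1}{n(n+1)} \longrightarrow 0 \quad (n \to \infty),
    \]

so no single packing radius r > 0 separates all pairs of points and the metric is not uniformly discrete. Yet each point 1/n is isolated: the open ball of radius 1/(n(n+1)) around 1/n contains no other point of X, so every singleton is open and the topology is discrete.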
Additionally:
The topological dimension of a discrete space is equal to 0.
A topological space is discrete if and only if its singletons are open, which is the case if and only if it does not contain any accumulation points.
The singletons form a basis for the discrete topology.
A uniform space is discrete if and only if the diagonal is an entourage.
Every discrete topological space satisfies each of the separation axioms; in particular, every discrete space is Hausdorff, that is, separated.
A discrete space is compact if and only if it is finite.
Every discrete uniform or metric space is complete.
Combining the above two facts, every discrete uniform or metric space is totally bounded if and only if it is finite.
Every discrete metric space is bounded.
Every discrete space is first-countable; it is moreover second-countable if and only if it is countable.
Every discrete space is totally disconnected.
Every non-empty discrete space is second category.
Any two discrete spaces with the same cardinality are homeomorphic.
Every discrete space is metrizable (by the discrete metric).
A finite space is metrizable only if it is discrete.
If X is a topological space and Y is a set carrying the discrete topology, then X is evenly covered by X × Y (the projection onto the first factor is the desired covering map).
The subspace topology on the integers as a subspace of the real line is the discrete topology.
A discrete space is separable if and only if it is countable.
Any topological subspace of the real line ℝ (with its usual Euclidean topology) that is discrete is necessarily countable.
Any function from a discrete topological space to another topological space is continuous, and any function from a discrete uniform space to another uniform space is uniformly continuous. That is, the discrete space is free on the set in the category of topological spaces and continuous maps or in the category of uniform spaces and uniformly continuous maps. These facts are examples of a much broader phenomenon, in which discrete structures are usually free on sets.
With metric spaces, things are more complicated, because there are several categories of metric spaces, depending on what is chosen for the morphisms. Certainly the discrete metric space is free when the morphisms are all uniformly continuous maps or all continuous maps, but this says nothing interesting about the metric structure, only the uniform or topological structure. Categories more relevant to the metric structure can be found by limiting the morphisms to Lipschitz continuous maps or to short maps; however, these categories don't have free objects (on more than one element). However, the discrete metric space is free in the category of bounded metric spaces and Lipschitz continuous maps, and it is free in the category of metric spaces bounded by 1 and short maps. That is, any function from a discrete metric space to another bounded metric space is Lipschitz continuous, and any function from a discrete metric space to another metric space bounded by 1 is short.
Going the other direction, a function from a topological space to a discrete space is continuous if and only if it is locally constant in the sense that every point in has a neighborhood on which is constant.
Every ultrafilter on a non-empty set X can be associated with a topology on X with the property that every non-empty proper subset of X is either an open subset or else a closed subset, but never both. Said differently, every subset is open or closed, but (in contrast to the discrete topology) the only subsets that are both open and closed (i.e. clopen) are ∅ and X. In comparison, every subset of X is both open and closed in the discrete topology.
Examples and uses
A discrete structure is often used as the "default structure" on a set that doesn't carry any other natural topology, uniformity, or metric; discrete structures can often be used as "extreme" examples to test particular suppositions. For example, any group can be considered as a topological group by giving it the discrete topology, implying that theorems about topological groups apply to all groups. Indeed, analysts may refer to the ordinary, non-topological groups studied by algebraists as "discrete groups". In some cases, this can be usefully applied, for example in combination with Pontryagin duality. A 0-dimensional manifold (or differentiable or analytic manifold) is nothing but a discrete and countable topological space (an uncountable discrete space is not second-countable). We can therefore view any discrete countable group as a 0-dimensional Lie group.
A product of countably infinite copies of the discrete space of natural numbers is homeomorphic to the space of irrational numbers, with the homeomorphism given by the continued fraction expansion. A product of countably infinite copies of the discrete space {0, 1} is homeomorphic to the Cantor set; and in fact uniformly homeomorphic to the Cantor set if we use the product uniformity on the product. Such a homeomorphism is given by using ternary notation of numbers. (See Cantor space.) Every fiber of a locally injective function is necessarily a discrete subspace of its domain.
In the foundations of mathematics, the study of compactness properties of products of {0, 1} is central to the topological approach to the ultrafilter lemma (equivalently, the Boolean prime ideal theorem), which is a weak form of the axiom of choice.
Indiscrete spaces
In some ways, the opposite of the discrete topology is the trivial topology (also called the indiscrete topology), which has the fewest possible open sets (just the empty set and the space itself). Where the discrete topology is initial or free, the indiscrete topology is final or cofree: every function from a topological space to an indiscrete space is continuous, etc.
See also
Cylinder set
List of topologies
Taxicab geometry
References
General topology
Metric spaces
Topological spaces
Topology | Discrete space | [
"Physics",
"Mathematics"
] | 1,451 | [
"General topology",
"Mathematical structures",
"Space (mathematics)",
"Metric spaces",
"Topological spaces",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
56,079 | https://en.wikipedia.org/wiki/Krull%20dimension | In commutative algebra, the Krull dimension of a commutative ring R, named after Wolfgang Krull, is the supremum of the lengths of all chains of prime ideals. The Krull dimension need not be finite even for a Noetherian ring. More generally the Krull dimension can be defined for modules over possibly non-commutative rings as the deviation of the poset of submodules.
The Krull dimension was introduced to provide an algebraic definition of the dimension of an algebraic variety: the dimension of the affine variety defined by an ideal I in a polynomial ring R is the Krull dimension of R/I.
A field k has Krull dimension 0; more generally, k[x1, ..., xn] has Krull dimension n. A principal ideal domain that is not a field has Krull dimension 1. A local ring has Krull dimension 0 if and only if every element of its maximal ideal is nilpotent.
There are several other ways that have been used to define the dimension of a ring. Most of them coincide with the Krull dimension for Noetherian rings, but can differ for non-Noetherian rings.
Explanation
We say that a chain of prime ideals of the form
p0 ⊊ p1 ⊊ ... ⊊ pn
has length n. That is, the length is the number of strict inclusions, not the number of primes; these differ by 1. We define the Krull dimension of R to be the supremum of the lengths of all chains of prime ideals in R.
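For instance (a standard example, added here for concreteness): in the polynomial ring k[x, y] over a field k, the chain (0) ⊊ (x) ⊊ (x, y) consists of prime ideals with two strict inclusions, hence has length 2; it shows that the Krull dimension of k[x, y] is at least 2, and in fact it is exactly 2.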
Given a prime ideal p in R, we define the height of p, written ht(p), to be the supremum of the lengths of all chains of prime ideals contained in p, meaning chains p0 ⊊ p1 ⊊ ... ⊊ pn = p. In other words, the height of p is the Krull dimension of the localization of R at p. A prime ideal has height zero if and only if it is a minimal prime ideal. The Krull dimension of a ring is the supremum of the heights of all maximal ideals, or those of all prime ideals. The height is also sometimes called the codimension, rank, or altitude of a prime ideal.
In a Noetherian ring, every prime ideal has finite height. Nonetheless, Nagata gave an example of a Noetherian ring of infinite Krull dimension. A ring is called catenary if any inclusion of prime ideals p ⊆ q can be extended to a maximal chain of prime ideals between p and q, and any two maximal chains between p and q have the same length. A ring is called universally catenary if any finitely generated algebra over it is catenary. Nagata gave an example of a Noetherian ring which is not catenary.
In a Noetherian ring, a prime ideal has height at most n if and only if it is a minimal prime ideal over an ideal generated by n elements (Krull's height theorem and its converse). It implies that the descending chain condition holds for prime ideals in such a way the lengths of the chains descending from a prime ideal are bounded by the number of generators of the prime.
More generally, the height of an ideal I is the infimum of the heights of all prime ideals containing I. In the language of algebraic geometry, this is the codimension of the subvariety of Spec(R) corresponding to I.
Schemes
It follows readily from the definition of the spectrum of a ring Spec(R), the space of prime ideals of R equipped with the Zariski topology, that the Krull dimension of R is equal to the dimension of its spectrum as a topological space, meaning the supremum of the lengths of all chains of irreducible closed subsets. This follows immediately from the Galois connection between ideals of R and closed subsets of Spec(R) and the observation that, by the definition of Spec(R), each prime ideal p of R corresponds to a generic point of the closed subset associated to p by the Galois connection.
Examples
The dimension of a polynomial ring over a field k[x1, ..., xn] is the number of variables n. In the language of algebraic geometry, this says that the affine space of dimension n over a field has dimension n, as expected. In general, if R is a Noetherian ring of dimension n, then the dimension of R[x] is n + 1. If the Noetherian hypothesis is dropped, then R[x] can have dimension anywhere between n + 1 and 2n + 1.
For example, the ideal has height 2 since we can form the maximal ascending chain of prime ideals.
Given an irreducible polynomial , the ideal is not prime (since , but neither of the factors are), but we can easily compute the height since the smallest prime ideal containing is just .
The ring of integers Z has dimension 1. More generally, any principal ideal domain that is not a field has dimension 1.
An integral domain is a field if and only if its Krull dimension is zero. Dedekind domains that are not fields (for example, discrete valuation rings) have dimension one.
The Krull dimension of the zero ring is typically defined to be either −1 or −∞. The zero ring is the only ring with a negative dimension.
A ring is Artinian if and only if it is Noetherian and its Krull dimension is ≤0.
An integral extension of a ring has the same dimension as the ring does.
Let R be an algebra over a field k that is an integral domain. Then the Krull dimension of R is less than or equal to the transcendence degree of the field of fractions of R over k. The equality holds if R is finitely generated as an algebra (for instance by the Noether normalization lemma).
Let R be a Noetherian ring, I an ideal and gr_I(R) = ⊕_{n≥0} I^n/I^{n+1} be the associated graded ring (geometers call it the ring of the normal cone of I). Then dim gr_I(R) is the supremum of the heights of maximal ideals of R containing I.
A commutative Noetherian ring of Krull dimension zero is a direct product of a finite number (possibly one) of local rings of Krull dimension zero.
A Noetherian local ring is called a Cohen–Macaulay ring if its dimension is equal to its depth. A regular local ring is an example of such a ring.
A Noetherian integral domain is a unique factorization domain if and only if every height 1 prime ideal is principal.
For a commutative Noetherian ring the three following conditions are equivalent: being a reduced ring of Krull dimension zero, being a field or a direct product of fields, being von Neumann regular.
Of a module
If R is a commutative ring, and M is an R-module, we define the Krull dimension of M to be the Krull dimension of the quotient of R making M a faithful module. That is, we define it by the formula:
dim_R(M) := dim(R/Ann_R(M)),
where Ann_R(M), the annihilator, is the kernel of the natural map R → End_R(M) of R into the ring of R-linear endomorphisms of M.
In the language of schemes, finitely generated modules are interpreted as coherent sheaves, or generalized finite rank vector bundles.
For non-commutative rings
The Krull dimension of a module over a possibly non-commutative ring is defined as the deviation of the poset of submodules ordered by inclusion. For commutative Noetherian rings, this is the same as the definition using chains of prime ideals. The two definitions can be different for commutative rings which are not Noetherian.
See also
Analytic spread
Dimension theory (algebra)
Gelfand–Kirillov dimension
Hilbert function
Homological conjectures in commutative algebra
Krull's principal ideal theorem
Notes
Bibliography
Irving Kaplansky, Commutative rings (revised ed.), University of Chicago Press, 1974, . Page 32.
Sect.4.7.
Commutative algebra
Dimension | Krull dimension | [
"Physics",
"Mathematics"
] | 1,660 | [
"Geometric measurement",
"Physical quantities",
"Fields of abstract algebra",
"Theory of relativity",
"Commutative algebra",
"Dimension"
] |
56,098 | https://en.wikipedia.org/wiki/Monte%20Carlo%20method | Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. The name comes from the Monte Carlo Casino in Monaco, where the primary developer of the method, mathematician Stanisław Ulam, was inspired by his uncle's gambling habits.
Monte Carlo methods are mainly used in three distinct problem classes: optimization, numerical integration, and generating draws from a probability distribution. They can also be used to model phenomena with significant uncertainty in inputs, such as calculating the risk of a nuclear power plant failure. Monte Carlo methods are often implemented using computer simulations, and they can provide approximate solutions to problems that are otherwise intractable or too complex to analyze mathematically.
Monte Carlo methods are widely used in various fields of science, engineering, and mathematics, such as physics, chemistry, biology, statistics, artificial intelligence, finance, and cryptography. They have also been applied to social sciences, such as sociology, psychology, and political science. Monte Carlo methods have been recognized as one of the most important and influential ideas of the 20th century, and they have enabled many scientific and technological breakthroughs.
Monte Carlo methods also have some limitations and challenges, such as the trade-off between accuracy and computational cost, the curse of dimensionality, the reliability of random number generators, and the verification and validation of the results.
Overview
Monte Carlo methods vary, but tend to follow a particular pattern:
Define a domain of possible inputs
Generate inputs randomly from a probability distribution over the domain
Perform a deterministic computation of the outputs
Aggregate the results
For example, consider a quadrant (circular sector) inscribed in a unit square. Given that the ratio of their areas is π/4, the value of π can be approximated using a Monte Carlo method:
Draw a square, then inscribe a quadrant within it
Uniformly scatter a given number of points over the square
Count the number of points inside the quadrant, i.e. having a distance from the origin of less than 1
The ratio of the inside-count and the total-sample-count is an estimate of the ratio of the two areas, π/4. Multiply the result by 4 to estimate π.
In this procedure the domain of inputs is the square that circumscribes the quadrant. One can generate random inputs by scattering grains over the square then perform a computation on each input (test whether it falls within the quadrant). Aggregating the results yields our final result, the approximation of π.
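A minimal Python sketch of this procedure (an illustration added here; the sample count is an arbitrary choice):

import random

def estimate_pi(n):
    # Uniformly scatter n points over the unit square and count how many
    # fall inside the inscribed quadrant (distance from the origin less than 1).
    inside = 0
    for _ in range(n):
        x, y = random.random(), random.random()
        if x * x + y * y < 1.0:
            inside += 1
    return 4.0 * inside / n  # inside/n estimates pi/4

print(estimate_pi(1_000_000))  # typically prints a value near 3.14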
There are two important considerations:
If the points are not uniformly distributed, then the approximation will be poor.
The approximation is generally poor if only a few points are randomly placed in the whole square. On average, the approximation improves as more points are placed.
Uses of Monte Carlo methods require large amounts of random numbers, and their use benefitted greatly from pseudorandom number generators, which are far quicker to use than the tables of random numbers that had been previously used for statistical sampling.
Application
Monte Carlo methods are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution.
In physics-related problems, Monte Carlo methods are useful for simulating systems with many coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model, interacting particle systems, McKean–Vlasov processes, kinetic models of gases).
Other examples include modeling phenomena with significant uncertainty in inputs such as the calculation of risk in business and, in mathematics, evaluation of multidimensional definite integrals with complicated boundary conditions. In application to systems engineering problems (space, oil exploration, aircraft design, etc.), Monte Carlo–based predictions of failure, cost overruns and schedule overruns are routinely better than human intuition or alternative "soft" methods.
In principle, Monte Carlo methods can be used to solve any problem having a probabilistic interpretation. By the law of large numbers, integrals described by the expected value of some random variable can be approximated by taking the empirical mean (a.k.a. the 'sample mean') of independent samples of the variable. When the probability distribution of the variable is parameterized, mathematicians often use a Markov chain Monte Carlo (MCMC) sampler. The central idea is to design a judicious Markov chain model with a prescribed stationary probability distribution. That is, in the limit, the samples being generated by the MCMC method will be samples from the desired (target) distribution. By the ergodic theorem, the stationary distribution is approximated by the empirical measures of the random states of the MCMC sampler.
In other problems, the objective is generating draws from a sequence of probability distributions satisfying a nonlinear evolution equation. These flows of probability distributions can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depend on the distributions of the current random states (see McKean–Vlasov processes, nonlinear filtering equation). In other instances, a flow of probability distributions with an increasing level of sampling complexity arises (path-space models with an increasing time horizon, Boltzmann–Gibbs measures associated with decreasing temperature parameters, and many others). These models can also be seen as the evolution of the law of the random states of a nonlinear Markov chain. A natural way to simulate these sophisticated nonlinear Markov processes is to sample multiple copies of the process, replacing in the evolution equation the unknown distributions of the random states by the sampled empirical measures. In contrast with traditional Monte Carlo and MCMC methodologies, these mean-field particle techniques rely on sequential interacting samples. The terminology mean field reflects the fact that each of the samples (a.k.a. particles, individuals, walkers, agents, creatures, or phenotypes) interacts with the empirical measures of the process. When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes.
Simple Monte Carlo
Suppose one wants to know the expected value μ of a population (and knows that μ exists), but does not have a formula available to compute it. The simple Monte Carlo method gives an estimate for μ by running n simulations and averaging the simulations’ results. It has no restrictions on the probability distribution of the inputs to the simulations, requiring only that the inputs are randomly generated and are independent of each other and that μ exists. A sufficiently large n will produce a value for m that is arbitrarily close to μ; more formally, by the law of large numbers, for any ε > 0 the probability that |μ – m| ≤ ε tends to 1 as n grows.
Typically, the algorithm to obtain m is
s = 0;
for i = 1 to n do
    run the simulation for the ith time, giving result r_i;
    s = s + r_i;
end for
m = s / n;
An example
Suppose we want to know how many times we should expect to throw three eight-sided dice for the total of the dice throws to be at least T. We know the expected value exists. The dice throws are randomly distributed and independent of each other. So simple Monte Carlo is applicable:
s = 0;
for i = 1 to n do
    throw the three dice until T is met or first exceeded; r_i = the number of throws;
    s = s + r_i;
end for
m = s / n;
If n is large enough, m will, with high probability, be within ε of μ for any ε > 0.
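In runnable Python, this example might look as follows (a sketch added here; the target total T and the number of simulations n are arbitrary choices):

import random

def throws_needed(T):
    # One simulation: throw three eight-sided dice repeatedly, accumulating
    # the total, until T is met or first exceeded; return the number of throws.
    total, throws = 0, 0
    while total < T:
        total += sum(random.randint(1, 8) for _ in range(3))
        throws += 1
    return throws

def simple_monte_carlo(T, n):
    s = 0
    for _ in range(n):
        s += throws_needed(T)
    return s / n  # m, the estimate of the expected number of throws

print(simple_monte_carlo(100, 100_000))  # about 7.9: mean of one throw is 13.5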
Determining a sufficiently large n
General formula
Let ε = |μ – m| > 0. Choose the desired confidence level – the percent chance that, when the Monte Carlo algorithm completes, m is indeed within ε of μ. Let z be the z-score corresponding to that confidence level.
Let s² be the estimated variance, sometimes called the “sample” variance; it is the variance of the results obtained from a relatively small number k of “sample” simulations. Choose a k; Driels and Shin observe that “even for sample sizes an order of magnitude lower than the number required, the calculation of that number is quite stable.”
The following algorithm computes s² in one pass while minimizing the possibility that accumulated numerical error produces erroneous results:
s_1 = 0;
run the simulation for the first time, producing result r_1;
m_1 = r_1; // m_i is the mean of the first i simulations
for i = 2 to k do
    run the simulation for the ith time, producing result r_i;
    δ_i = r_i − m_{i−1};
    m_i = m_{i−1} + (1/i)δ_i;
    s_i = s_{i−1} + ((i − 1)/i)(δ_i)²;
end for
s² = s_k/(k − 1);
Note that, when the algorithm completes, m_k is the mean of the k results.
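The same one-pass (Welford-style) computation, written as runnable Python (a sketch added here for illustration; run_simulation is a hypothetical stand-in for whatever simulation is being studied):

def sample_variance(k, run_simulation):
    # Returns (m_k, s^2) computed in one pass over k simulation results.
    m = run_simulation()  # m_1 = r_1
    s = 0.0               # s_1 = 0
    for i in range(2, k + 1):
        r = run_simulation()               # r_i
        delta = r - m                      # delta_i = r_i - m_{i-1}
        m += delta / i                     # m_i = m_{i-1} + (1/i) delta_i
        s += (i - 1) / i * delta * delta   # s_i = s_{i-1} + ((i-1)/i) delta_i^2
    return m, s / (k - 1)                  # m_k and s^2 = s_k/(k - 1)

# For instance, with a standard normal "simulation" the estimate is near 1:
# import random; m_k, s2 = sample_variance(10_000, lambda: random.gauss(0.0, 1.0))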
n is sufficiently large when n ≥ s²z²/ε².
If n ≤ k, then m_k = m; sufficient sample simulations were done to ensure that m_k is within ε of μ. If n > k, then n simulations can be run “from scratch,” or, since k simulations have already been done, one can just run n – k more simulations and add their results into those from the sample simulations:
s = m_k * k;
for i = k + 1 to n do
    run the simulation for the ith time, giving result r_i;
    s = s + r_i;
end for
m = s / n;
A formula when simulations' results are bounded
An alternate formula can be used in the special case where all simulation results are bounded above and below.
Choose a value for ε that is twice the maximum allowed difference between μ and m. Let 0 < δ < 100 be the desired confidence level, expressed as a percentage. Let every simulation result r_1, r_2, ..., r_i, ..., r_n be such that a ≤ r_i ≤ b for finite a and b. To have confidence of at least δ that |μ – m| < ε/2, use a value for n such that
n ≥ 2(b − a)² ln(2/(1 − δ/100))/ε²
For example, if δ = 99%, then n ≥ 2(b − a)² ln(2/0.01)/ε² ≈ 10.6(b − a)²/ε².
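Evaluated in Python (a small sketch added here to reproduce the worked example; the parameter values are arbitrary):

import math

def required_n(a, b, eps, delta_percent):
    # Smallest n satisfying n >= 2(b - a)^2 ln(2/(1 - delta/100)) / eps^2.
    alpha = 1.0 - delta_percent / 100.0  # allowed failure probability
    return math.ceil(2.0 * (b - a) ** 2 * math.log(2.0 / alpha) / eps ** 2)

# delta = 99%: the coefficient 2 ln(200) is about 10.6, as in the text.
print(required_n(a=0.0, b=1.0, eps=0.1, delta_percent=99.0))  # 1060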
Computational costs
Despite its conceptual and algorithmic simplicity, the computational cost associated with a Monte Carlo simulation can be staggeringly high. In general the method requires many samples to get a good approximation, which may incur an arbitrarily large total runtime if the processing time of a single sample is high. Although this is a severe limitation in very complex problems, the embarrassingly parallel nature of the algorithm allows this large cost to be reduced (perhaps to a feasible level) through parallel computing strategies in local processors, clusters, cloud computing, GPU, FPGA, etc.
History
Before the Monte Carlo method was developed, simulations tested a previously understood deterministic problem, and statistical sampling was used to estimate uncertainties in the simulations. Monte Carlo simulations invert this approach, solving deterministic problems using probabilistic metaheuristics (see simulated annealing).
An early variant of the Monte Carlo method was devised to solve the Buffon's needle problem, in which π can be estimated by dropping needles on a floor made of parallel equidistant strips. In the 1930s, Enrico Fermi first experimented with the Monte Carlo method while studying neutron diffusion, but he did not publish this work.
In the late 1940s, Stanisław Ulam invented the modern version of the Markov Chain Monte Carlo method while he was working on nuclear weapons projects at the Los Alamos National Laboratory. In 1946, nuclear weapons physicists at Los Alamos were investigating neutron diffusion in the core of a nuclear weapon. Despite having most of the necessary data, such as the average distance a neutron would travel in a substance before it collided with an atomic nucleus and how much energy the neutron was likely to give off following a collision, the Los Alamos physicists were unable to solve the problem using conventional, deterministic mathematical methods. Ulam proposed using random experiments. He recounts his inspiration as follows:
Being secret, the work of von Neumann and Ulam required a code name. A colleague of von Neumann and Ulam, Nicholas Metropolis, suggested using the name Monte Carlo, which refers to the Monte Carlo Casino in Monaco where Ulam's uncle would borrow money from relatives to gamble.
Monte Carlo methods were central to the simulations required for further postwar development of nuclear weapons, including the design of the H-bomb, though severely limited by the computational tools at the time. Von Neumann, Nicholas Metropolis and others programmed the ENIAC computer to perform the first fully automated Monte Carlo calculations, of a fission weapon core, in the spring of 1948. In the 1950s Monte Carlo methods were used at Los Alamos for the development of the hydrogen bomb, and became popularized in the fields of physics, physical chemistry, and operations research. The Rand Corporation and the U.S. Air Force were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and they began to find a wide application in many different fields.
The theory of more sophisticated mean-field type particle Monte Carlo methods had certainly started by the mid-1960s, with the work of Henry P. McKean Jr. on Markov interpretations of a class of nonlinear parabolic partial differential equations arising in fluid mechanics. An earlier pioneering article by Theodore E. Harris and Herman Kahn, published in 1951, used mean-field genetic-type Monte Carlo methods for estimating particle transmission energies. Mean-field genetic type Monte Carlo methodologies are also used as heuristic natural search algorithms (a.k.a. metaheuristic) in evolutionary computing. The origins of these mean-field computational techniques can be traced to 1950 and 1954 with the work of Alan Turing on genetic type mutation-selection learning machines and the articles by Nils Aall Barricelli at the Institute for Advanced Study in Princeton, New Jersey.
Quantum Monte Carlo, and more specifically diffusion Monte Carlo methods can also be interpreted as a mean-field particle Monte Carlo approximation of Feynman–Kac path integrals. The origins of Quantum Monte Carlo methods are often attributed to Enrico Fermi and Robert Richtmyer who developed in 1948 a mean-field particle interpretation of neutron-chain reactions, but the first heuristic-like and genetic type particle algorithm (a.k.a. Resampled or Reconfiguration Monte Carlo methods) for estimating ground state energies of quantum systems (in reduced matrix models) is due to Jack H. Hetherington in 1984. In molecular chemistry, the use of genetic heuristic-like particle methodologies (a.k.a. pruning and enrichment strategies) can be traced back to 1955 with the seminal work of Marshall N. Rosenbluth and Arianna W. Rosenbluth.
The use of Sequential Monte Carlo in advanced signal processing and Bayesian inference is more recent. It was in 1993, that Gordon et al., published in their seminal work the first application of a Monte Carlo resampling algorithm in Bayesian statistical inference. The authors named their algorithm 'the bootstrap filter', and demonstrated that compared to other filtering methods, their bootstrap algorithm does not require any assumption about that state-space or the noise of the system. Another pioneering article in this field was Genshiro Kitagawa's, on a related "Monte Carlo filter", and the ones by Pierre Del Moral and Himilcon Carvalho, Pierre Del Moral, André Monin and Gérard Salut on particle filters published in the mid-1990s. Particle filters were also developed in signal processing in 1989–1992 by P. Del Moral, J. C. Noyer, G. Rigal, and G. Salut in the LAAS-CNRS in a series of restricted and classified research reports with STCAN (Service Technique des Constructions et Armes Navales), the IT company DIGILOG, and the LAAS-CNRS (the Laboratory for Analysis and Architecture of Systems) on radar/sonar and GPS signal processing problems. These Sequential Monte Carlo methodologies can be interpreted as an acceptance-rejection sampler equipped with an interacting recycling mechanism.
From 1950 to 1996, all the publications on Sequential Monte Carlo methodologies, including the pruning and resample Monte Carlo methods introduced in computational physics and molecular chemistry, present natural and heuristic-like algorithms applied to different situations without a single proof of their consistency, nor a discussion on the bias of the estimates and on genealogical and ancestral tree based algorithms. The mathematical foundations and the first rigorous analysis of these particle algorithms were written by Pierre Del Moral in 1996.
Branching type particle methodologies with varying population sizes were also developed in the end of the 1990s by Dan Crisan, Jessica Gaines and Terry Lyons, and by Dan Crisan, Pierre Del Moral and Terry Lyons. Further developments in this field were described in 1999 to 2001 by P. Del Moral, A. Guionnet and L. Miclo.
Definitions
There is no consensus on how Monte Carlo should be defined. For example, Ripley defines most probabilistic modeling as stochastic simulation, with Monte Carlo being reserved for Monte Carlo integration and Monte Carlo statistical tests. Sawilowsky distinguishes between a simulation, a Monte Carlo method, and a Monte Carlo simulation: a simulation is a fictitious representation of reality, a Monte Carlo method is a technique that can be used to solve a mathematical or statistical problem, and a Monte Carlo simulation uses repeated sampling to obtain the statistical properties of some phenomenon (or behavior).
Here are some examples:
Simulation: Drawing one pseudo-random uniform variable from the interval [0,1] can be used to simulate the tossing of a coin: If the value is less than or equal to 0.50 designate the outcome as heads, but if the value is greater than 0.50 designate the outcome as tails. This is a simulation, but not a Monte Carlo simulation.
Monte Carlo method: Pouring out a box of coins on a table, and then computing the ratio of coins that land heads versus tails is a Monte Carlo method of determining the behavior of repeated coin tosses, but it is not a simulation.
Monte Carlo simulation: Drawing a large number of pseudo-random uniform variables from the interval [0,1] at one time, or once at many different times, and assigning values less than or equal to 0.50 as heads and greater than 0.50 as tails, is a Monte Carlo simulation of the behavior of repeatedly tossing a coin.
Kalos and Whitlock point out that such distinctions are not always easy to maintain. For example, the emission of radiation from atoms is a natural stochastic process. It can be simulated directly, or its average behavior can be described by stochastic equations that can themselves be solved using Monte Carlo methods. "Indeed, the same computer code can be viewed simultaneously as a 'natural simulation' or as a solution of the equations by natural sampling."
Convergence of the Monte Carlo simulation can be checked with the Gelman-Rubin statistic.
Monte Carlo and random numbers
The main idea behind this method is that the results are computed based on repeated random sampling and statistical analysis. Monte Carlo simulation is, in fact, random experimentation, used when the results of these experiments are not known in advance.
Monte Carlo simulations are typically characterized by many unknown parameters, many of which are difficult to obtain experimentally. Monte Carlo simulation methods do not always require truly random numbers to be useful (although, for some applications such as primality testing, unpredictability is vital). Many of the most useful techniques use deterministic, pseudorandom sequences, making it easy to test and re-run simulations. The only quality usually necessary to make good simulations is for the pseudo-random sequence to appear "random enough" in a certain sense.
What this means depends on the application, but typically they should pass a series of statistical tests. Testing that the numbers are uniformly distributed or follow another desired distribution when a large enough number of elements of the sequence are considered is one of the simplest and most common ones. Weak correlations between successive samples are also often desirable/necessary.
Sawilowsky lists the characteristics of a high-quality Monte Carlo simulation:
the (pseudo-random) number generator has certain characteristics (e.g. a long "period" before the sequence repeats)
the (pseudo-random) number generator produces values that pass tests for randomness
there are enough samples to ensure accurate results
the proper sampling technique is used
the algorithm used is valid for what is being modeled
it simulates the phenomenon in question.
Pseudo-random number sampling algorithms are used to transform uniformly distributed pseudo-random numbers into numbers that are distributed according to a given probability distribution.
Low-discrepancy sequences are often used instead of random sampling from a space as they ensure even coverage and normally have a faster order of convergence than Monte Carlo simulations using random or pseudorandom sequences. Methods based on their use are called quasi-Monte Carlo methods.
In an effort to assess the impact of random number quality on Monte Carlo simulation outcomes, astrophysical researchers tested cryptographically secure pseudorandom numbers generated via Intel's RDRAND instruction set, as compared to those derived from algorithms, like the Mersenne Twister, in Monte Carlo simulations of radio flares from brown dwarfs. No statistically significant difference was found between models generated with typical pseudorandom number generators and RDRAND for trials consisting of the generation of 10^7 random numbers.
Monte Carlo simulation versus "what if" scenarios
There are ways of using probabilities that are definitely not Monte Carlo simulations – for example, deterministic modeling using single-point estimates. Each uncertain variable within a model is assigned a "best guess" estimate. Scenarios (such as best, worst, or most likely case) for each input variable are chosen and the results recorded.
By contrast, Monte Carlo simulations sample from a probability distribution for each variable to produce hundreds or thousands of possible outcomes. The results are analyzed to get probabilities of different outcomes occurring. For example, a comparison of a spreadsheet cost construction model run using traditional "what if" scenarios, and then running the comparison again with Monte Carlo simulation and triangular probability distributions shows that the Monte Carlo analysis has a narrower range than the "what if" analysis. This is because the "what if" analysis gives equal weight to all scenarios (see quantifying uncertainty in corporate finance), while the Monte Carlo method hardly samples in the very low probability regions. The samples in such regions are called "rare events".
Applications
Monte Carlo methods are especially useful for simulating phenomena with significant uncertainty in inputs and systems with many coupled degrees of freedom. Areas of application include:
Physical sciences
Monte Carlo methods are very important in computational physics, physical chemistry, and related applied fields, and have diverse applications from complicated quantum chromodynamics calculations to designing heat shields and aerodynamic forms as well as in modeling radiation transport for radiation dosimetry calculations. In statistical physics, Monte Carlo molecular modeling is an alternative to computational molecular dynamics, and Monte Carlo methods are used to compute statistical field theories of simple particle and polymer systems. Quantum Monte Carlo methods solve the many-body problem for quantum systems. In radiation materials science, the binary collision approximation for simulating ion implantation is usually based on a Monte Carlo approach to select the next colliding atom. In experimental particle physics, Monte Carlo methods are used for designing detectors, understanding their behavior and comparing experimental data to theory. In astrophysics, they are used in such diverse manners as to model both galaxy evolution and microwave radiation transmission through a rough planetary surface. Monte Carlo methods are also used in the ensemble models that form the basis of modern weather forecasting.
Engineering
Monte Carlo methods are widely used in engineering for sensitivity analysis and quantitative probabilistic analysis in process design. The need arises from the interactive, co-linear and non-linear behavior of typical process simulations. For example,
In microelectronics engineering, Monte Carlo methods are applied to analyze correlated and uncorrelated variations in analog and digital integrated circuits.
In geostatistics and geometallurgy, Monte Carlo methods underpin the design of mineral processing flowsheets and contribute to quantitative risk analysis.
In fluid dynamics, in particular rarefied gas dynamics, where the Boltzmann equation is solved for finite Knudsen number fluid flows using the direct simulation Monte Carlo method in combination with highly efficient computational algorithms.
In autonomous robotics, Monte Carlo localization can determine the position of a robot. It is often applied to stochastic filters such as the Kalman filter or particle filter that forms the heart of the SLAM (simultaneous localization and mapping) algorithm.
In telecommunications, when planning a wireless network, the design must be proven to work for a wide variety of scenarios that depend mainly on the number of users, their locations and the services they want to use. Monte Carlo methods are typically used to generate these users and their states. The network performance is then evaluated and, if results are not satisfactory, the network design goes through an optimization process.
In reliability engineering, Monte Carlo simulation is used to compute system-level response given the component-level response.
In signal processing and Bayesian inference, particle filters and sequential Monte Carlo techniques are a class of mean-field particle methods for sampling and computing the posterior distribution of a signal process given some noisy and partial observations using interacting empirical measures.
Climate change and radiative forcing
The Intergovernmental Panel on Climate Change relies on Monte Carlo methods in probability density function analysis of radiative forcing.
Computational biology
Monte Carlo methods are used in various fields of computational biology, for example for Bayesian inference in phylogeny, or for studying biological systems such as genomes, proteins, or membranes.
The systems can be studied in the coarse-grained or ab initio frameworks depending on the desired accuracy.
Computer simulations allow monitoring of the local environment of a particular molecule to see if some chemical reaction is happening for instance. In cases where it is not feasible to conduct a physical experiment, thought experiments can be conducted (for instance: breaking bonds, introducing impurities at specific sites, changing the local/global structure, or introducing external fields).
Computer graphics
Path tracing, occasionally referred to as Monte Carlo ray tracing, renders a 3D scene by randomly tracing samples of possible light paths. Repeated sampling of any given pixel will eventually cause the average of the samples to converge on the correct solution of the rendering equation, making it one of the most physically accurate 3D graphics rendering methods in existence.
Applied statistics
The standards for Monte Carlo experiments in statistics were set by Sawilowsky. In applied statistics, Monte Carlo methods may be used for at least four purposes:
To compare competing statistics for small samples under realistic data conditions. Although type I error and power properties of statistics can be calculated for data drawn from classical theoretical distributions (e.g., normal curve, Cauchy distribution) for asymptotic conditions (i.e., infinite sample size and infinitesimally small treatment effect), real data often do not have such distributions.
To provide implementations of hypothesis tests that are more efficient than exact tests such as permutation tests (which are often impossible to compute) while being more accurate than critical values for asymptotic distributions.
To provide a random sample from the posterior distribution in Bayesian inference. This sample then approximates and summarizes all the essential features of the posterior.
To provide efficient random estimates of the Hessian matrix of the negative log-likelihood function that may be averaged to form an estimate of the Fisher information matrix.
Monte Carlo methods are also a compromise between approximate randomization and permutation tests. An approximate randomization test is based on a specified subset of all permutations (which entails potentially enormous housekeeping of which permutations have been considered). The Monte Carlo approach is based on a specified number of randomly drawn permutations (exchanging a minor loss in precision if a permutation is drawn twice—or more frequently—for the efficiency of not having to track which permutations have already been selected).
Artificial intelligence for games
Monte Carlo methods have been developed into a technique called Monte-Carlo tree search that is useful for searching for the best move in a game. Possible moves are organized in a search tree and many random simulations are used to estimate the long-term potential of each move. A black box simulator represents the opponent's moves.
The Monte Carlo tree search (MCTS) method has four steps:
Starting at root node of the tree, select optimal child nodes until a leaf node is reached.
Expand the leaf node and choose one of its children.
Play a simulated game starting with that node.
Use the results of that simulated game to update the node and its ancestors.
The net effect, over the course of many simulated games, is that the value of a node representing a move will go up or down, hopefully corresponding to whether or not that node represents a good move.
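The following runnable Python sketch puts the four steps together (added here as an illustration, not taken from any particular program; the tiny Nim game, the Node class, and the exploration constant c = 1.4 are all assumptions of this sketch). Random playouts act as the black box simulator for both players.

import math, random

class Nim:
    # Tiny example game, assumed purely for illustration: players alternately
    # remove 1-3 stones from a pile; whoever takes the last stone wins.
    def __init__(self, stones=10, player=1):
        self.stones, self.player = stones, player
    def moves(self):
        return [k for k in (1, 2, 3) if k <= self.stones]
    def play(self, k):
        return Nim(self.stones - k, -self.player)
    def winner(self):
        # With no stones left, the player who just moved won.
        return -self.player if self.stones == 0 else None

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children, self.visits, self.wins = [], 0, 0.0
        self.untried = state.moves()

def uct_child(node, c=1.4):
    # Step 1 helper: pick the child with the best exploration/exploitation score.
    return max(node.children,
               key=lambda ch: ch.wins / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(root_state, iterations=5000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes to a leaf.
        while not node.untried and node.children:
            node = uct_child(node)
        # 2. Expansion: add one child for a move not yet tried.
        if node.untried:
            move = node.untried.pop()
            child = Node(node.state.play(move), parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: play a random game from this node onward.
        state = node.state
        while state.winner() is None:
            state = state.play(random.choice(state.moves()))
        winner = state.winner()
        # 4. Backpropagation: update this node and its ancestors.
        while node is not None:
            node.visits += 1
            if winner == -node.state.player:  # win for the player who moved here
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

print(mcts(Nim(10)))  # usually 2: leaving a multiple of 4 is a losing pile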
Monte Carlo Tree Search has been used successfully to play games such as Go, Tantrix, Battleship, Havannah, and Arimaa.
Design and visuals
Monte Carlo methods are also efficient in solving coupled integral differential equations of radiation fields and energy transport, and thus these methods have been used in global illumination computations that produce photo-realistic images of virtual 3D models, with applications in video games, architecture, design, computer generated films, and cinematic special effects.
Search and rescue
The US Coast Guard utilizes Monte Carlo methods within its computer modeling software SAROPS in order to calculate the probable locations of vessels during search and rescue operations. Each simulation can generate as many as ten thousand data points that are randomly distributed based upon provided variables. Search patterns are then generated based upon extrapolations of these data in order to optimize the probability of containment (POC) and the probability of detection (POD), which together will equal an overall probability of success (POS). Ultimately this serves as a practical application of probability distribution in order to provide the swiftest and most expedient method of rescue, saving both lives and resources.
Finance and business
Monte Carlo simulation is commonly used to evaluate the risk and uncertainty that would affect the outcome of different decision options. Monte Carlo simulation allows the business risk analyst to incorporate the total effects of uncertainty in variables like sales volume, commodity and labor prices, interest and exchange rates, as well as the effect of distinct risk events like the cancellation of a contract or the change of a tax law.
Monte Carlo methods in finance are often used to evaluate investments in projects at a business unit or corporate level, or other financial valuations. They can be used to model project schedules, where simulations aggregate estimates for worst-case, best-case, and most likely durations for each task to determine outcomes for the overall project. Monte Carlo methods are also used in option pricing, default risk analysis. Additionally, they can be used to estimate the financial impact of medical interventions.
Law
A Monte Carlo approach was used for evaluating the potential value of a proposed program to help female petitioners in Wisconsin be successful in their applications for harassment and domestic abuse restraining orders. It was proposed to help women succeed in their petitions by providing them with greater advocacy thereby potentially reducing the risk of rape and physical assault. However, there were many variables in play that could not be estimated perfectly, including the effectiveness of restraining orders, the success rate of petitioners both with and without advocacy, and many others. The study ran trials that varied these variables to come up with an overall estimate of the success level of the proposed program as a whole.
Library science
Monte Carlo approach had also been used to simulate the number of book publications based on book genre in Malaysia. The Monte Carlo simulation utilized previous published National Book publication data and book's price according to book genre in the local market. The Monte Carlo results were used to determine what kind of book genre that Malaysians are fond of and was used to compare book publications between Malaysia and Japan.
Other
Nassim Nicholas Taleb writes about Monte Carlo generators in his 2001 book Fooled by Randomness as a real instance of the reverse Turing test: a human can be declared unintelligent if their writing cannot be told apart from a generated one.
Use in mathematics
In general, the Monte Carlo methods are used in mathematics to solve various problems by generating suitable random numbers (see also Random number generation) and observing that fraction of the numbers that obeys some property or properties. The method is useful for obtaining numerical solutions to problems too complicated to solve analytically. The most common application of the Monte Carlo method is Monte Carlo integration.
Integration
Deterministic numerical integration algorithms work well in a small number of dimensions, but encounter two problems when the functions have many variables. First, the number of function evaluations needed increases rapidly with the number of dimensions. For example, if 10 evaluations provide adequate accuracy in one dimension, then 10^100 points are needed for 100 dimensions—far too many to be computed. This is called the curse of dimensionality. Second, the boundary of a multidimensional region may be very complicated, so it may not be feasible to reduce the problem to an iterated integral. 100 dimensions is by no means unusual, since in many physical problems, a "dimension" is equivalent to a degree of freedom.
Monte Carlo methods provide a way out of this exponential increase in computation time. As long as the function in question is reasonably well-behaved, it can be estimated by randomly selecting points in 100-dimensional space, and taking some kind of average of the function values at these points. By the central limit theorem, this method displays convergence—i.e., quadrupling the number of sampled points halves the error, regardless of the number of dimensions.
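As a concrete illustration (added here; the integrand and sample count are arbitrary choices), the following Python sketch estimates an integral over the 100-dimensional unit cube with the same few lines that would handle one dimension:

import random

def mc_integrate(f, dim, n):
    # Estimate the integral of f over the unit cube [0, 1]^dim by averaging
    # f at n uniformly random points (the volume of the cube is 1).
    total = 0.0
    for _ in range(n):
        x = [random.random() for _ in range(dim)]
        total += f(x)
    return total / n

# Integrand: f(x) = sum of the coordinates; its exact integral is dim/2.
f = lambda x: sum(x)
print(mc_integrate(f, dim=100, n=100_000))  # close to 50, despite 100 dimensions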
A refinement of this method, known as importance sampling in statistics, involves sampling the points randomly, but more frequently where the integrand is large. To do this precisely one would have to already know the integral, but one can approximate the integral by an integral of a similar function or use adaptive routines such as stratified sampling, recursive stratified sampling, adaptive umbrella sampling or the VEGAS algorithm.
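A minimal Python sketch of the idea (added here as an illustration; the integrand f(x) = 3x² and the proposal density q(x) = 2x are arbitrary choices): samples are drawn from q, which concentrates where the integrand is large, and each sample is weighted by f/q.

import math, random

def importance_sample(n):
    # Estimate the integral of f(x) = 3x^2 over [0, 1] (exact value 1) by
    # sampling from the proposal density q(x) = 2x, which, like f, puts more
    # weight near x = 1. X = sqrt(U) has density q when U is uniform on [0, 1].
    total = 0.0
    for _ in range(n):
        x = math.sqrt(random.random())      # draw X ~ q
        total += (3.0 * x * x) / (2.0 * x)  # weight each sample by f(X)/q(X)
    return total / n

print(importance_sample(100_000))  # close to 1, with lower variance than uniform sampling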
A similar approach, the quasi-Monte Carlo method, uses low-discrepancy sequences. These sequences "fill" the area better and sample the most important points more frequently, so quasi-Monte Carlo methods can often converge on the integral more quickly.
Another class of methods for sampling points in a volume is to simulate random walks over it (Markov chain Monte Carlo). Such methods include the Metropolis–Hastings algorithm, Gibbs sampling, Wang and Landau algorithm, and interacting type MCMC methodologies such as the sequential Monte Carlo samplers.
Simulation and optimization
Another powerful and very popular application for random numbers in numerical simulation is in numerical optimization. The problem is to minimize (or maximize) functions of some vector that often has many dimensions. Many problems can be phrased in this way: for example, a computer chess program could be seen as trying to find the set of, say, 10 moves that produces the best evaluation function at the end. In the traveling salesman problem the goal is to minimize distance traveled. There are also applications to engineering design, such as multidisciplinary design optimization. It has been applied with quasi-one-dimensional models to solve particle dynamics problems by efficiently exploring large configuration space. Reference is a comprehensive review of many issues related to simulation and optimization.
The traveling salesman problem is what is called a conventional optimization problem. That is, all the facts (distances between each destination point) needed to determine the optimal path to follow are known with certainty and the goal is to run through the possible travel choices to come up with the one with the lowest total distance. If instead of the goal being to minimize the total distance traveled to visit each desired destination but rather to minimize the total time needed to reach each destination, this goes beyond conventional optimization since travel time is inherently uncertain (traffic jams, time of day, etc.). As a result, to determine the optimal path a different simulation is required: optimization to first understand the range of potential times it could take to go from one point to another (represented by a probability distribution in this case rather than a specific distance) and then optimize the travel decisions to identify the best path to follow taking that uncertainty into account.
Inverse problems
Probabilistic formulation of inverse problems leads to the definition of a probability distribution in the model space. This probability distribution combines prior information with new information obtained by measuring some observable parameters (data).
As, in the general case, the theory linking data with model parameters is nonlinear, the posterior probability in the model space may not be easy to describe (it may be multimodal, some moments may not be defined, etc.).
When analyzing an inverse problem, obtaining a maximum likelihood model is usually not sufficient, as normally information on the resolution power of the data is desired. In the general case many parameters are modeled, and an inspection of the marginal probability densities of interest may be impractical, or even useless. But it is possible to pseudorandomly generate a large collection of models according to the posterior probability distribution and to analyze and display the models in such a way that information on the relative likelihoods of model properties is conveyed to the spectator. This can be accomplished by means of an efficient Monte Carlo method, even in cases where no explicit formula for the a priori distribution is available.
The best-known importance sampling method, the Metropolis algorithm, can be generalized, and this gives a method that allows analysis of (possibly highly nonlinear) inverse problems with complex a priori information and data with an arbitrary noise distribution.
Philosophy
Popular exposition of the Monte Carlo Method was conducted by McCracken. The method's general philosophy was discussed by Elishakoff and Grüne-Yanoff and Weirich.
See also
Auxiliary-field Monte Carlo
Biology Monte Carlo method
Direct simulation Monte Carlo
Dynamic Monte Carlo method
Ergodicity
Genetic algorithms
Kinetic Monte Carlo
List of software for Monte Carlo molecular modeling
Mean-field particle methods
Monte Carlo method for photon transport
Monte Carlo methods for electron transport
Monte Carlo N-Particle Transport Code
Morris method
Multilevel Monte Carlo method
Quasi-Monte Carlo method
Sobol sequence
Temporal difference learning
References
Citations
Sources
External links
Numerical analysis
Statistical mechanics
Computational physics
Sampling techniques
Statistical approximations
Stochastic simulation
Randomized algorithms
Risk analysis methodologies | Monte Carlo method | [
"Physics",
"Mathematics"
] | 7,879 | [
"Monte Carlo methods",
"Computational mathematics",
"Computational physics",
"Mathematical relations",
"Statistical approximations",
"Numerical analysis",
"Statistical mechanics",
"Approximations"
] |
56,099 | https://en.wikipedia.org/wiki/Red%20dwarf | A red dwarf is the smallest kind of star on the main sequence. Red dwarfs are by far the most common type of fusing star in the Milky Way, at least in the neighborhood of the Sun. However, due to their low luminosity, individual red dwarfs cannot be easily observed. From Earth, not one star that fits the stricter definitions of a red dwarf is visible to the naked eye. Proxima Centauri, the star nearest to the Sun, is a red dwarf, as are fifty of the sixty nearest stars. According to some estimates, red dwarfs make up three-quarters of the fusing stars in the Milky Way.
The coolest red dwarfs near the Sun have a surface temperature of about and the smallest have radii about 9% that of the Sun, with masses about 7.5% that of the Sun. These red dwarfs have spectral types of L0 to L2. There is some overlap with the properties of brown dwarfs, since the most massive brown dwarfs at lower metallicity can be as hot as and have late M spectral types.
Definitions and usage of the term "red dwarf" vary on how inclusive they are on the hotter and more massive end. One definition is synonymous with stellar M dwarfs (M-type main sequence stars), yielding a maximum temperature of and . One includes all stellar M-type main-sequence and all K-type main-sequence stars (K dwarf), yielding a maximum temperature of and . Some definitions include any stellar M dwarf and part of the K dwarf classification. Other definitions are also in use. Many of the coolest, lowest mass M dwarfs are expected to be brown dwarfs, not true stars, and so those would be excluded from any definition of red dwarf.
Stellar models indicate that red dwarfs less than about 0.35 solar masses are fully convective. Hence, the helium produced by the thermonuclear fusion of hydrogen is constantly remixed throughout the star, avoiding helium buildup at the core, thereby prolonging the period of fusion. Low-mass red dwarfs therefore develop very slowly, maintaining a constant luminosity and spectral type for trillions of years, until their fuel is depleted. Because of the comparatively short age of the universe, no red dwarfs yet exist at advanced stages of evolution.
Definition
The term "red dwarf" when used to refer to a star does not have a strict definition. One of the earliest uses of the term was in 1915, used simply to contrast "red" dwarf stars from hotter "blue" dwarf stars. It became established use, although the definition remained vague. In terms of which spectral types qualify as red dwarfs, different researchers picked different limits, for example K8–M5 or "later than K5". Dwarf M star, abbreviated dM, was also used, but sometimes it also included stars of spectral type K.
In modern usage, the definition of a red dwarf still varies. When explicitly defined, it typically includes late K- and early to mid-M-class stars, but in many cases it is restricted just to M-class stars. In some cases all K stars are included as red dwarfs, and occasionally even earlier stars.
The most recent surveys place the coolest true main-sequence stars into spectral types L2 or L3. At the same time, many objects cooler than about M6 or M7 are brown dwarfs, insufficiently massive to sustain hydrogen-1 fusion. This gives a significant overlap in spectral types for red and brown dwarfs. Objects in that spectral range can be difficult to categorize.
Description and characteristics
Red dwarfs are very-low-mass stars. As a result, they have relatively low pressures, a low fusion rate, and hence, a low temperature. The energy generated is the product of nuclear fusion of hydrogen into helium by way of the proton–proton (PP) chain mechanism. Hence, these stars emit relatively little light, sometimes as little as 1/10,000 that of the Sun, although this would still imply a power output on the order of 10^22 watts (10 trillion gigawatts or 10 ZW). Even the largest red dwarfs (for example HD 179930, HIP 12961 and Lacaille 8760) have only about 10% of the Sun's luminosity. In general, red dwarfs less than about 0.35 solar masses transport energy from the core to the surface by convection. Convection occurs because of opacity of the interior, which has a high density compared to the temperature. As a result, energy transfer by radiation is decreased, and instead convection is the main form of energy transport to the surface of the star. Above this mass, a red dwarf will have a region around its core where convection does not occur.
Because low-mass red dwarfs are fully convective, helium does not accumulate at the core, and compared to larger stars such as the Sun, they can burn a larger proportion of their hydrogen before leaving the main sequence. As a result, red dwarfs have estimated lifespans far longer than the present age of the universe, and stars less than about 0.8 solar masses have not had time to leave the main sequence. The lower the mass of a red dwarf, the longer the lifespan. It is believed that the lifespan of these stars exceeds the expected 10-billion-year lifespan of the Sun by the third or fourth power of the ratio of the solar mass to their masses; thus, a red dwarf of 0.1 solar masses may continue burning for 10 trillion years. As the proportion of hydrogen in a red dwarf is consumed, the rate of fusion declines and the core starts to contract. The gravitational energy released by this size reduction is converted into heat, which is carried throughout the star by convection.
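As a rough worked check of this scaling (added for concreteness): with the third-power law, a star of one tenth the Sun's mass lives about (1/0.1)^3 = 1,000 times as long as the Sun, and 1,000 × 10 billion years = 10 trillion years, matching the figure above.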
According to computer simulations, the minimum mass a red dwarf must have to eventually evolve into a red giant is roughly 0.25 solar masses; less massive objects, as they age, would increase their surface temperatures and luminosities, becoming blue dwarfs and finally white dwarfs.
The less massive the star, the longer this evolutionary process takes. A red dwarf of roughly 0.16 solar masses (approximately the mass of the nearby Barnard's Star) would stay on the main sequence for 2.5 trillion years, followed by five billion years as a blue dwarf, during which the star would have one third of the Sun's luminosity and a surface temperature of 6,500–8,500 kelvins.
The fact that red dwarfs and other low-mass stars still remain on the main sequence when more massive stars have moved off the main sequence allows the age of star clusters to be estimated by finding the mass at which the stars move off the main sequence. This provides a lower limit to the age of the Universe and also allows formation timescales to be placed upon the structures within the Milky Way, such as the Galactic halo and Galactic disk.
All observed red dwarfs contain "metals", which in astronomy are elements heavier than hydrogen and helium. The Big Bang model predicts that the first generation of stars should have only hydrogen, helium, and trace amounts of lithium, and hence would be of low metallicity. With their extreme lifespans, any red dwarfs that were a part of that first generation (population III stars) should still exist today. Low-metallicity red dwarfs, however, are rare. The accepted model for the chemical evolution of the universe anticipates such a scarcity of metal-poor dwarf stars because only giant stars are thought to have formed in the metal-poor environment of the early universe. As giant stars end their short lives in supernova explosions, they spew out the heavier elements needed to form smaller stars. Therefore, dwarfs became more common as the universe aged and became enriched in metals. While the basic scarcity of ancient metal-poor red dwarfs is expected, observations have detected even fewer than predicted. The sheer difficulty of detecting objects as dim as red dwarfs was thought to account for this discrepancy, but improved detection methods have only confirmed the discrepancy.
The boundary between the least massive red dwarfs and the most massive brown dwarfs depends strongly on metallicity. At solar metallicity the boundary occurs at about 0.07 solar masses, while at zero metallicity it lies around 0.09 solar masses. Measurements of red dwarfs in the solar neighbourhood suggest the coolest stars have spectral classes of about L2, and theory predicts that the coolest red dwarfs at zero metallicity would be considerably hotter than their solar-metallicity counterparts. The least massive red dwarfs have the smallest radii of any stars; both more massive red dwarfs and less massive brown dwarfs are larger.
Spectral standard stars
The spectral standards for M type stars have changed slightly over the years, but settled down somewhat since the early 1990s. Part of this is because even the nearest red dwarfs are fairly faint, and their colors did not register well on the photographic emulsions used in the early to mid 20th century. The study of mid- to late-M dwarfs has significantly advanced only in the past few decades, primarily due to the development of new astrographic and spectroscopic techniques, dispensing with photographic plates and progressing to charge-coupled devices (CCDs) and infrared-sensitive arrays.
The revised Yerkes Atlas system (Johnson & Morgan, 1953) listed only two M type spectral standard stars: HD 147379 (M0V) and HD 95735/Lalande 21185 (M2V). While HD 147379 was not considered a standard by expert classifiers in later compendia of standards, Lalande 21185 is still a primary standard for M2V. Robert Garrison does not list any "anchor" standards among the red dwarfs, but Lalande 21185 has survived as an M2V standard through many compendia. The review on MK classification by Morgan & Keenan (1973) did not contain red dwarf standards.
In the mid-1970s, red dwarf standard stars were published by Keenan & McNeil (1976) and Boeshaar (1976), but there was little agreement among the standards. As cooler stars were identified through the 1980s, it became clear that an overhaul of the red dwarf standards was needed. Building primarily upon the Boeshaar standards, a group at Steward Observatory (Kirkpatrick, Henry, & McCarthy, 1991) filled in the spectral sequence from K5V to M9V. It is these M type dwarf standard stars which have largely survived as the main standards to the modern day. There have been negligible changes in the red dwarf spectral sequence since 1991. Additional red dwarf standards were compiled by Henry et al. (2002), and D. Kirkpatrick has more recently reviewed the classification of red dwarfs and standard stars in Gray & Corbally's 2009 monograph. The M dwarf primary spectral standards are: GJ 270 (M0V), GJ 229A (M1V), Lalande 21185 (M2V), Gliese 581 (M3V), Gliese 402 (M4V), GJ 51 (M5V), Wolf 359 (M6V), van Biesbroeck 8 (M7V), VB 10 (M8V), LHS 2924 (M9V).
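Because the standard sequence above is essentially a lookup table, it can be expressed directly as one. The following sketch simply encodes the list from this paragraph; the helper function is hypothetical:

```python
# Primary M dwarf spectral standards (the Kirkpatrick, Henry, & McCarthy
# 1991 sequence, as listed above), keyed by spectral type.
M_DWARF_STANDARDS = {
    "M0V": "GJ 270",
    "M1V": "GJ 229A",
    "M2V": "Lalande 21185",
    "M3V": "Gliese 581",
    "M4V": "Gliese 402",
    "M5V": "GJ 51",
    "M6V": "Wolf 359",
    "M7V": "van Biesbroeck 8",
    "M8V": "VB 10",
    "M9V": "LHS 2924",
}

def standard_star(spectral_type: str) -> str:
    """Return the primary spectral standard star for an M dwarf subtype."""
    return M_DWARF_STANDARDS[spectral_type.upper()]

print(standard_star("m6v"))  # Wolf 359
```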
Planets
Many red dwarfs are orbited by exoplanets, but large Jupiter-sized planets are comparatively rare. Doppler surveys of a wide variety of stars indicate that about 1 in 6 stars with twice the mass of the Sun is orbited by one or more Jupiter-sized planets, versus 1 in 16 for Sun-like stars; the frequency of close-in giant planets (Jupiter size or larger) orbiting red dwarfs is only 1 in 40. On the other hand, microlensing surveys indicate that long-orbital-period Neptune-mass planets are found around one in three red dwarfs. Observations with HARPS further indicate that 40% of red dwarfs have a "super-Earth" class planet orbiting in the habitable zone, where liquid water can exist on the surface. Computer simulations of the formation of planets around low-mass stars predict that Earth-sized planets are most abundant, but more than 90% of the simulated planets are at least 10% water by mass, suggesting that many Earth-sized planets orbiting red dwarf stars are covered in deep oceans.
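To make the survey rates above concrete, here is a back-of-the-envelope calculation for a hypothetical sample of 1,000 red dwarfs (the sample size is illustrative only; the rates are taken directly from the text):

```python
# Expected numbers of planet hosts among 1,000 red dwarfs, using the
# occurrence rates quoted above.
sample = 1_000
close_in_giants = sample / 40           # Doppler: 1 in 40 hosts a close-in giant
long_period_neptunes = sample / 3       # microlensing: 1 in 3
habitable_super_earths = sample * 0.40  # HARPS: 40% host a habitable-zone super-Earth

print(round(close_in_giants), round(long_period_neptunes), round(habitable_super_earths))
# 25 333 400
```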
At least four and possibly up to six exoplanets were discovered orbiting within the Gliese 581 planetary system between 2005 and 2010. One planet has about the mass of Neptune, or roughly 16 Earth masses. It orbits very close to its star and, despite the star's dimness, is estimated to have a hot surface. In 2006, an even smaller exoplanet (about 5.5 Earth masses) was found orbiting the red dwarf OGLE-2005-BLG-390L; it lies farther from its star, and its surface is extremely cold.
In 2007, a new, potentially habitable exoplanet, Gliese 581c, was found orbiting Gliese 581. The minimum mass estimated by its discoverers (a team led by Stéphane Udry) is about five times that of Earth, and they estimate its radius to be 1.5 times that of Earth. Since then Gliese 581d, which is also potentially habitable, has been discovered.
Gliese 581c and d are within the habitable zone of the host star, and are two of the most likely candidates for habitability of any exoplanets discovered so far. Gliese 581g, detected September 2010, has a near-circular orbit in the middle of the star's habitable zone. However, the planet's existence is contested.
On 23 February 2017, NASA announced the discovery of seven Earth-sized planets orbiting the red dwarf star TRAPPIST-1, approximately 39 light-years away in the constellation Aquarius. The planets were discovered through the transit method, which means that mass and radius information is available for all of them. TRAPPIST-1e, f, and g appear to be within the habitable zone and may have liquid water on the surface.
Habitability
Modern evidence suggests that planets in red dwarf systems are extremely unlikely to be habitable. In spite of their great numbers and long lifespans, several factors may make life difficult on planets around a red dwarf. First, planets in the habitable zone of a red dwarf would be so close to the parent star that they would likely be tidally locked. For a nearly circular orbit, this would mean that one side would be in perpetual daylight and the other in eternal night, which could create enormous temperature variations from one side of the planet to the other. Such conditions would appear to make it difficult for forms of life similar to those on Earth to evolve. Tidally locked planets also face a serious atmospheric problem: the perpetual night side could become cold enough to freeze out the main gases of the atmosphere, leaving the daylight side bare and dry. However, one theory proposes that either a thick atmosphere or a planetary ocean could circulate heat around such a planet.
Variability in stellar energy output may also have negative impacts on the development of life. Red dwarfs are often flare stars, which can emit gigantic flares, doubling their brightness in minutes. This variability makes it difficult for life to develop and persist near a red dwarf. While it may be possible for a planet orbiting close to a red dwarf to keep its atmosphere even if the star flares, more-recent research suggests that these stars may be the source of constant high-energy flares and very large magnetic fields, diminishing the possibility of life as we know it.
External links
Variable stars AAVSO
Stellar Flares Publications about Flares by the Stellar Activity Group (UCM)
Red Dwarfs Jumk.de
Red Star Rising: Small, cool stars may be hot spots for life – Scientific American (November 2005)
Star types
Stellar phenomena | Red dwarf | ["Physics", "Astronomy"] | 3,234 | ["Physical phenomena", "Stellar phenomena", "Star types", "Astronomical classification systems"] |
56,101 | https://en.wikipedia.org/wiki/Ethnic%20conflict | An ethnic conflict is a conflict between two or more ethnic groups. While the source of the conflict may be political, social, economic or religious, the individuals in conflict must expressly fight for their ethnic group's position within society. This criterion differentiates ethnic conflict from other forms of struggle.
Academic explanations of ethnic conflict generally fall into one of three schools of thought: primordialist, instrumentalist or constructivist. Recently, some have argued for either top-down or bottom-up explanations for ethnic conflict. Intellectual debate has also focused on whether ethnic conflict has become more prevalent since the end of the Cold War, and on devising ways of managing conflicts, through instruments such as consociationalism and federalisation.
Theories of causes
It is argued that rebel movements are more likely to organize around ethnicity because ethnic groups are more apt to be aggrieved, better able to mobilize, and more likely to face difficult bargaining challenges compared to other groups. The causes of ethnic conflict are debated by political scientists and sociologists. Official academic explanations generally fall into one of three schools of thought: primordialist, instrumentalist, and constructivist. More recent scholarship draws on all three schools.
Primordialist accounts
Proponents of primordialist accounts argue that "[e]thnic groups and nationalities exist because there are traditions of belief and action towards primordial objects such as biological features and especially territorial location". Primordialist accounts rely on strong ties of kinship among members of ethnic groups. Donald L. Horowitz argues that this kinship "makes it possible for ethnic groups to think in terms of family resemblances".
Clifford Geertz, a founding scholar of primordialism, asserts that each person has a natural connection to perceived kinsmen. In time and through repeated conflict, essential ties to one's ethnicity will coalesce and will interfere with ties to civil society. Ethnic groups will consequently always threaten the survival of civil governments but not the existence of nations formed by one ethnic group. Thus, when considered through a primordial lens, ethnic conflict in multi-ethnic society is inevitable.
A number of political scientists argue that the root causes of ethnic conflict do not involve ethnicity per se but rather institutional, political, and economic factors. These scholars argue that the concept of ethnic war is misleading because it leads to an essentialist conclusion that certain groups are doomed to fight each other when in fact the wars between them that occur are often the result of political decisions.
Moreover, primordial accounts do not account for the spatial and temporal variations in ethnic violence. If these "ancient hatreds" are always simmering under the surface and are at the forefront of people's consciousness, then ethnic groups should constantly be ensnared in violence. However, ethnic violence occurs in sporadic outbursts. For example, Varshney points out that although Yugoslavia broke up due to ethnic violence in the 1990s, it had enjoyed decades of peace before the USSR collapsed. Therefore, some scholars claim that it is unlikely that primordial ethnic differences alone caused the outbreak of violence in the 1990s.
Primordialists have reformulated the "ancient hatreds" hypothesis and have focused more on the role of human nature. Petersen argues that the existence of hatred and animosity does not have to be rooted in history for it to play a role in shaping human behavior and action: "If 'ancient hatred' means a hatred consuming the daily thoughts of great masses of people, then the 'ancient hatreds' argument deserves to be readily dismissed. However, if hatred is conceived as a historically formed 'schema' that guides action in some situations, then the conception should be taken more seriously."
Instrumentalist accounts
Anthony Smith notes that the instrumentalist account "came to prominence in the 1960s and 1970s in the United States, in the debate about (white) ethnic persistence in what was supposed to have been an effective melting pot". This new theory sought to explain such persistence as the result of the actions of community leaders, "who used their cultural groups as sites of mass mobilization and as constituencies in their competition for power and resources, because they found them more effective than social classes". In this account of ethnic identification, ethnicity and race are viewed as instrumental means to achieve particular ends.
Whether ethnicity is a fixed perception or not is not crucial in the instrumentalist accounts. Moreover, the scholars of this school generally do not oppose the view that ethnic difference plays a part in many conflicts. They simply claim that ethnic difference is not sufficient to explain conflicts.
Mass mobilization of ethnic groups can only be successful if there are latent ethnic differences to be exploited; otherwise, politicians would not even attempt to make political appeals based on ethnicity and would focus instead on economic or ideological appeals. For these reasons, it is difficult to completely discount the role of inherent ethnic differences. Additionally, ethnic entrepreneurs, or elites, could be tempted to mobilize ethnic groups in order to gain their political support in democratizing states. Instrumentalist theorists especially emphasize this interpretation in ethnic states in which one ethnic group is promoted at the expense of other ethnicities.
Furthermore, ethnic mass mobilization is likely to be plagued by collective action problems, especially if ethnic protests are likely to lead to violence. Instrumentalist scholars have tried to respond to these shortcomings. For example, Russell Hardin argues that ethnic mobilization faces problems of coordination and not collective action. He points out that a charismatic leader acts as a focal point around which members of an ethnic group coalesce. The existence of such an actor helps to clarify beliefs about the behavior of others within an ethnic group.
Constructivist accounts
A third, constructivist, set of accounts stress the importance of the socially constructed nature of ethnic groups, drawing on Benedict Anderson's concept of the imagined community. Proponents of this account point to Rwanda as an example because the Tutsi/Hutu distinction was codified by the Belgian colonial power in the 1930s on the basis of cattle ownership, physical measurements and church records. Identity cards were issued on this basis, and these documents played a key role in the genocide of 1994.
Some argue that constructivist narratives of historical master cleavages are unable to account for local and regional variations in ethnic violence. For example, Varshney highlights that in the 1960s "racial violence in the USA was heavily concentrated in northern cities; southern cities, though intensely politically engaged, did not have riots". A constructivist master narrative is often a country-level variable, whereas studies of incidences of ethnic violence are often done at the regional and local level.
Scholars of ethnic conflict and civil wars have introduced theories that draw insights from all three traditional schools of thought. In The Geography of Ethnic Violence, Monica Duffy Toft shows how ethnic group settlement patterns, socially constructed identities, charismatic leaders, issue indivisibility, and state concern with precedent setting can lead rational actors to escalate a dispute to violence, even when doing so is likely to leave contending groups much worse off. Such research addresses empirical puzzles that are difficult to explain using primordialist, instrumentalist, or constructivist approaches alone. As Varshney notes, "pure essentialists and pure instrumentalists do not exist anymore".
History
Study in the post-Cold War world
One of the most debated issues relating to ethnic conflict is whether it has become more or less prevalent in the post–Cold War period. Even though a decline in the rate of new ethnic conflicts was evident in the late 1990s, ethnic conflict remains the most common form of armed intrastate conflict today. At the end of the Cold War, academics including Samuel P. Huntington and Robert D. Kaplan predicted a proliferation of conflicts fueled by civilisational clashes, tribalism, resource scarcity and overpopulation.
The violent ethnic conflicts in Nigeria, Mali, Sudan and other countries in the Sahel region have been exacerbated by droughts, food shortages, land degradation, and population growth.
However, some theorists contend that this does not represent a rise in the incidence of ethnic conflict, because many of the proxy wars fought during the Cold War were in fact ethnic conflicts framed as hot spots of the Cold War. Research shows that the fall of Communism and the increase in the number of capitalist states were accompanied by a decline in total warfare, interstate wars, ethnic wars, revolutionary wars, and the number of refugees and displaced persons. Indeed, some scholars have questioned whether the concept of ethnic conflict is useful at all. Others have attempted to test the "clash of civilisations" thesis, finding it difficult to operationalise and that civilisational conflicts have not risen in intensity relative to other ethnic conflicts since the end of the Cold War.
A key question facing scholars who attempt to adapt their theories of interstate violence to explain or predict large-scale ethnic violence is whether ethnic groups could be considered "rational" actors.
Prior to the end of the Cold War, the consensus among students of large-scale violence was that ethnic groups should be considered irrational actors, or semi-rational at best. If true, general explanations of ethnic violence would be impossible. In the years since, however, scholarly consensus has shifted to consider that ethnic groups may in fact be counted as rational actors, and the puzzle of their apparently irrational actions (for example, fighting over territory of little or no intrinsic worth) must therefore be explained in some other way. As a result, the possibility of a general explanation of ethnic violence has grown, and collaboration between comparativist and international-relations sub-fields has resulted in increasingly useful theories of ethnic conflict.
Public goods provision
A major source of ethnic conflict in multi-ethnic democracies is competition over access to state patronage. Conflicts over state resources between ethnic groups can increase the likelihood of ethnic violence. In ethnically divided societies, demand for public goods decreases, as each ethnic group derives more utility from benefits targeted at its members in particular. These benefits would be less valued if all other ethnic groups had access to them. Targeted benefits are more appealing because ethnic groups can solidify or heighten their social and economic status relative to other ethnic groups, whereas broad programmatic policies will not improve their relative worth. Politicians and political parties, in turn, have an incentive to favor co-ethnics in their distribution of material benefits. Over the long run, ethnic conflict over access to state benefits is likely to lead to the ethnification of political parties and of the party system as a whole, where the political salience of ethnic identity increases, leading to a self-fulfilling equilibrium: if politicians only distribute benefits on an ethnic basis, voters will see themselves primarily as belonging to an ethnic group and will view politicians the same way. They will only vote for politicians belonging to their own ethnic group. In turn, politicians will refrain from providing public goods, because it will not serve them well electorally to provide services to people outside their ethnic group. In democratizing societies, this can lead to ethnic outbidding, with extreme politicians pushing out moderate co-ethnics. Patronage politics and ethnic politics eventually reinforce each other, leading to what Chandra terms a "patronage democracy". A toy model of this dynamic follows.
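The self-fulfilling dynamic sketched above can be illustrated with a simple best-response iteration. Everything here, including the logistic response and its steepness, is an illustrative assumption and not part of Chandra's argument:

```python
import math

# Toy model: politicians target benefits in proportion to how ethnically
# voters vote; voters respond by voting even more ethnically. A steep
# logistic response makes the mixed state unstable, so a slight ethnic
# lean snowballs toward full ethnification.
def best_response(ethnic_voting_share: float, steepness: float = 8.0) -> float:
    targeted = ethnic_voting_share  # politicians' assumed best response
    return 1.0 / (1.0 + math.exp(-steepness * (targeted - 0.5)))

share = 0.55  # a slight initial ethnic lean
for _ in range(30):
    share = best_response(share)
print(round(share, 2))  # ~0.98: the "patronage democracy" equilibrium
```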
The existence of patronage networks between local politicians and ethnic groups make it easier for politicians to mobilize ethnic groups and instigate ethnic violence for electoral gain since the neighborhood or city is already polarized along ethnic lines. The dependence of ethnic groups on their co-ethnic local politician for access to state resources is likely to make them more responsive to calls of violence against other ethnic groups. Therefore, the existence of these local patronage channels generates incentives for ethnic groups to engage in politically motivated violence.
While the link between ethnic heterogeneity and the underprovision of public goods is generally accepted, there is little consensus around the causal mechanism underlying this relationship. To identify possible causal stories, Humphreys and Habyarimana ran a series of behavioral games in Kampala, Uganda, that involved several local participants completing joint tasks and allocating money among themselves. Contrary to the conventional wisdom, they found that participants did not favor the welfare of their co-ethnics disproportionately. It was only when anonymity was removed and everyone's ethnicity was known that co-ethnics decided to favor each other. Humphreys and Habyarimana argue that cooperation among co-ethnics is primarily driven by reciprocity norms, which tend to be stronger among co-ethnics. The possibility of social sanctions compelled those who would not otherwise cooperate with their co-ethnics to do so. The authors find no evidence to suggest that co-ethnics display a greater degree of altruism towards each other or have the same preferences. Ethnic cooperation takes place because co-ethnics have common social networks and can therefore monitor each other and threaten to socially sanction transgressors.
Ethnic conflict amplification
Online social media
In the early twenty-first century, the online social networking service Facebook has played a role in amplifying ethnic violence in the Rohingya genocide that started in October 2016 and in ethnic violence in Ethiopia during 2019–2020.
The United Nations Human Rights Council described Facebook as having been "a useful instrument for those seeking to spread hate" and complained that Facebook was unable to provide data on the extent of its role in the genocide.
During 2019–2020, posts on Facebook dominated the Internet in Ethiopia and played a major role in encouraging ethnic violence. An October 2019 Facebook post led to the deaths of 70 people in Ethiopia. In mid-2020, ethnic tensions in Ethiopia were amplified by online hate speech on Facebook that followed the 29 June assassination of Hachalu Hundessa. The Hachalu Hundessa riots, in which mobs "lynched, beheaded, and dismembered their victims", took place with "almost-instant and widespread sharing of hate speech and incitement to violence on Facebook, which whipped up people's anger", according to David Gilbert writing in Vice. People "call[ed] for genocide and attacks against specific religious or ethnic groups" and "openly post[ed] photographs of burned-out cars, buildings, schools and houses", according to Network Against Hate Speech, an Ethiopian citizens' group. Berhan Taye of Access Now stated that in Ethiopia, offline violence quickly leads to online "calls for ethnic attacks, discrimination, and destruction of property [that] goes viral". He stated, "Facebook's inaction helps propagate hate and polarization in a country and has a devastating impact on the narrative and extent of the violence."
Ethnic conflict resolution
Institutional ethnic conflict resolution
A number of scholars have attempted to synthesize the methods available for the resolution, management or transformation of ethnic conflict. John Coakley, for example, has developed a typology of the methods of conflict resolution that have been employed by states, which he lists as: indigenization, accommodation, assimilation, acculturation, population transfer, boundary alteration, genocide and ethnic suicide. John McGarry and Brendan O'Leary have developed a taxonomy of eight macro-political ethnic conflict regulation methods, which they note are often employed by states in combination with each other. They include a number of methods that they note are clearly morally unacceptable.
With increasing interest in the field of ethnic conflict, many policy analysts and political scientists have theorized potential resolutions and tracked the results of institutional policy implementation. As such, theories often focus on which institutions are the most appropriate for addressing ethnic conflict.
Consociationalism
Consociationalism is a power-sharing agreement which co-opts the leaders of ethnic groups into the central state's government. Each nation or ethnic group is represented in the government through a supposed spokesman for the group. In the power-sharing agreement, each group has veto powers to varying degrees, depending on the particular state. Moreover, the norm of proportional representation is dominant: each group is represented in the government in a percentage that reflects its demographic presence in the state. Another requirement, for Arend Lijphart, is that the government must be composed of a "grand coalition" of the ethnic group leaders, which supposes a top-down approach to conflict resolution.
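The proportionality norm is easy to state as an algorithm. The following sketch uses the largest-remainder method with hypothetical group names and shares; actual consociational systems typically fix quotas constitutionally rather than computing them this way:

```python
# Allocate a fixed number of government seats to groups in proportion to
# their population shares (largest-remainder method).
def allocate_seats(shares: dict[str, float], seats: int) -> dict[str, int]:
    quotas = {group: share * seats for group, share in shares.items()}
    alloc = {group: int(quota) for group, quota in quotas.items()}  # floor
    leftover = seats - sum(alloc.values())
    # Hand the remaining seats to the largest fractional remainders.
    by_remainder = sorted(quotas, key=lambda g: quotas[g] - alloc[g], reverse=True)
    for group in by_remainder[:leftover]:
        alloc[group] += 1
    return alloc

print(allocate_seats({"Group A": 0.48, "Group B": 0.33, "Group C": 0.19}, 10))
# {'Group A': 5, 'Group B': 3, 'Group C': 2}
```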
In theory, this leads to self governance and protection for the ethnic group. Many scholars maintain that since ethnic tension erupts into ethnic violence when the ethnic group is threatened by a state, then veto powers should allow the ethnic group to avoid legislative threats. Switzerland is often characterized as a successful consociationalist state.
A recent example of a consociational government is the post-conflict Bosnian government that was agreed upon in the Dayton Accords in 1995. A tripartite presidency was chosen, which must include a Croat, a Serb, and a Bosniak. The presidents take turns acting as the chief executive, rotating every 8 months across a 4-year term. Many have credited this consociational compromise in Bosnia for the end of the violence and the long-lasting peace that followed.
In contrast to Lijphart, several political scientists and policy analysts have condemned consociationalism. One of the many critiques is that consociationalism locks in ethnic tensions and identities. This assumes a primordial stance that ethnic identities are permanent and not subject to change. Furthermore, this does not allow for any "others" that might want to partake in the political process. As of 2012, a Jewish Bosnian was suing the Bosnian government for precluding him from running for presidential office, since only a Croat, Serb, or Bosniak can run under the consociational government. Determining ethnic identities in advance and implementing a power-sharing system on the basis of these fixed identities is inherently discriminatory against minority groups that might not be recognized. Moreover, it discriminates against those who do not choose to define their identity on an ethnic or communal basis. In power-sharing systems that are based on pre-determined identities, there is a tendency to rigidly fix shares of representation on a permanent basis, which will not reflect changing demographics over time. The categorization of individuals into particular ethnic groups might be controversial anyway and might in fact fuel ethnic tensions.
The inherent weaknesses in using pre-determined ethnic identities to form power-sharing systems have led Lijphart to argue that adopting a constructivist approach to consociationalism can increase its likelihood of success. The self-determination of ethnic identities is more likely to be "non-discriminatory, neutral, flexible and self-adjusting". For example, in South Africa, the toxic legacy of apartheid meant that successful consociation could only be built on the basis of the self-determination of groups. Lijphart claims that because ethnic identities are often "unclear, fluid and flexible," self-determination is likely to be more successful than pre-determination of ethnic groups. A constructivist approach to consociational theory can therefore strengthen its value as a method to resolve ethnic conflict.
Another critique points to the privileging of ethnic identity over personal political choice. Howard has deemed consociationalism a form of ethnocracy and not a path to true pluralistic democracy. Consociationalism assumes that a politician will best represent the will of his co-ethnics above other political parties. This might lead to the polarization of ethnic groups and the loss of non-ethnic ideological parties.
Horowitz has argued that a single transferable vote system could prevent the ethnification of political parties, because voters cast their ballots in order of preference. This means that a voter can give some preferences to parties other than his co-ethnic party. This in turn would compel political parties to broaden their manifestos to appeal to voters across the ethnic divide, in order to hoover up second- and third-preference votes; a simplified illustration follows.
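This sketch is a simplified single-seat variant of preferential counting (an instant-runoff count); real STV elects multiple winners with surplus transfers. The ballots and party names are hypothetical:

```python
from collections import Counter

def preferential_winner(ballots: list[list[str]]) -> str:
    """Repeatedly eliminate the last-placed candidate and transfer each
    ballot to its next surviving preference until someone has a majority.
    Assumes every ballot ranks at least one surviving candidate."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        tally = Counter(next(c for c in ballot if c in candidates)
                        for ballot in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):
            return leader
        candidates.discard(min(tally, key=tally.get))

ballots = ([["EthnicPartyA", "Moderates"]] * 38 +
           [["EthnicPartyB", "Moderates"]] * 30 +
           [["Moderates", "EthnicPartyA"]] * 32)
print(preferential_winner(ballots))
# Moderates: second preferences across the ethnic divide decide the winner
```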
Federalism
The theory of implementing federalism in order to curtail ethnic conflict assumes that self-governance reduces "demands for sovereignty". Hechter argues that some goods such as language of education and bureaucracy must be provided as local goods, instead of statewide, in order to satisfy more people and ethnic groups. Some political scientists such as Stroschein contend that ethnofederalism, or federalism determined along ethnic lines, is "asymmetric" as opposed to the equal devolution of power found in non-ethnic federal states, such as the United States. In this sense, special privileges are granted to specific minority groups as concessions and incentives to end violence or mute conflict.
The Soviet Union divided its structure into ethnic federal states termed Union Republics. Each Union Republic was named after a titular ethnic group who inhabited the area as a way to Sovietize nationalist sentiments during the 1920s. Brubaker asserts that these titular republics were formed in order to absorb any potential elite led nationalist movements against the Soviet center by incentivizing elite loyalty through advancement in the Soviet political structure.
Thus, federalism provides some self-governance for local matters in order to satisfy some of the grievances which might cause ethnic conflict among the masses. Moreover, federalism brings in the elites and ethnic entrepreneurs into the central power structure; this prevents a resurgence of top-down ethnic conflict.
Nevertheless, after the fall of the USSR many critiques of federalism as an institution to resolve ethnic conflict emerged. The devolution of power away from the central state can weaken ties to the central state. Moreover, the parallel institutions created to serve a particular nation or ethnic group might provide significant resources for secession from the central state. As most states are unwilling to give up an integral portion of their territory, secessionist movements may trigger violence.
Furthermore, some competing elite political players may not be in power; they would remain unincorporated into the central system. These competing elites can gain access through federal structures and their resources to solidify their political power in the structure. According to V.P. Gagnon this was the case in the former Yugoslavia and its disintegration into its ethnic federal states. Ethnic entrepreneurs were able to take control of the institutionally allocated resources to wage war on other ethnic groups.
Non-territorial autonomy
A recent theory of ethnic tension resolution is non-territorial autonomy, or NTA. NTA has emerged in recent years as an alternative solution to ethnic tensions and grievances in places that are likely to breed conflict. For this reason, NTA has been promoted as a more practical and state-building solution than consociationalism. NTA, alternatively known as national cultural autonomy (NCA), is based on the difference between jus soli and jus sanguinis, the principle of territory versus that of personhood. It gives ethnic groups the right to self-rule and to govern matters potentially concerning, but limited to: education, language, culture, internal affairs, religion, and the internally established institutions needed to promote and reproduce these facets. In contrast to federalism, the ethnic groups are not assigned a titular sub-state; rather, the ethnic groups are dispersed throughout the state unit. Their group rights and autonomy are not constrained to a particular territory within the state. This is done in order not to weaken the center state, as can happen in the case of ethnofederalism.
The origin of NTA can be traced back to the Marxist works of Otto Bauer and Karl Renner. NTA was employed during the interwar period, when the League of Nations sought to add protection clauses for national minorities in new states. In the 1920s, Estonia granted some cultural autonomy to its German and Jewish populations in order to ease conflicts between the groups and the newly independent state.
In Europe, most notably in Belgium, NTA laws have been enacted and created parallel institutions and political parties in the same country. In Belgium, NTA has been integrated within the federal consociational system. Some scholars of ethnic conflict resolution claim that the practice of NTA will be employed dependent on the concentration and size of the ethnic group asking for group rights.
Other scholars, such as Clarke, argue that the successful implementation of NTA rests on the acknowledgement in a state of "universal" principles: true rule of law, established human rights, stated guarantees to minorities and their members to use their own quotidian language, religion, and food practices, and a framework of anti-discrimination legislation to enforce these rights. Moreover, no individual can be forced to adhere to, identify with, or emphasize a particular identity (such as race, gender, sexuality, etc.) without their consent in order for NTA to function for its purpose.
Nonetheless, Clarke critiques the weaknesses of NTA in areas such as education, a balance between society wide norms and intracommunity values; policing, for criminal matters and public safety; and political representation, which limits the political choices of an individual if based solely on ethnicity. Furthermore, the challenge in evaluating the efficacy of NTA lies in the relatively few legal implementations of NTA.
Cultural rights
Emphasizing the limits of approaches that focus mainly on institutional answers to ethnic conflicts, which are essentially driven by ethnocultural dynamics of which political and economic factors are but elements, Gregory Paul Meyjes urges the use of intercultural communication and cultural-rights based negotiations as tools with which to effectively and sustainably address inter-ethnic strife. Meyjes argues that a cultural-rights approach grounded in intercultural knowledge and skill is essential to fully grasp, preempt, or resolve such conflicts, whether or not territorial or non-territorial institutional mechanisms are used.
Ethnic conflict resolution outside formal institutions
Informal inter-ethnic engagement
Institutionalist arguments for resolving ethnic conflict often focus on national-level institutions and do not account for regional and local variation in ethnic violence within a country. Despite similar levels of ethnic diversity in a country, some towns and cities have often been found to be especially prone to ethnic violence. For example, Ashutosh Varshney, in his study of ethnic violence in India, argues that strong inter-ethnic engagement in villages often disincentivizes politicians from stoking ethnic violence for electoral gain. Informal interactions include joint participation in festivals, families from different communities eating together, or allowing their children to play with one another. Everyday engagement between ethnic groups at the village level can help to sustain the peace in the face of national-level shocks like an ethnic riot in another part of the country. In times of ethnic tension, these communities can quell rumors, police neighborhoods, and come together to resist any attempts by politicians to polarize the community. The stronger the inter-ethnic networks are, the harder it is for politicians to polarize the community, even if it may be in their political interest to do so.
Formal inter-ethnic associations
In cities, where populations tend to be much higher, informal interactions between ethnic groups might not be sufficient to prevent violence. This is because many more links are needed to connect everyone, and it is therefore much more difficult to form and strengthen inter-ethnic ties. In cities, formal inter-ethnic associations like trade unions, business associations, and professional organizations are more effective in encouraging inter-ethnic interactions that can prevent ethnic violence in the future. These organizations push ethnic groups to come together on the basis of shared economic interests that overcome any pre-existing ethnic differences. For example, inter-ethnic business organizations serve to connect the business interests of different ethnic groups, increasing their desire to maintain ethnic harmony. Any ethnic tension or outbreak of violence will go against their economic interests, and therefore, over time, the salience of ethnic identity diminishes.
Interactions between ethnic groups in formal settings can also help countries torn apart by ethnic violence to recover and break down ethnic divisions. Paula Pickering, a political scientist, who studies peace-building efforts in Bosnia, finds that formal workplaces are often the site where inter-ethnic ties are formed. She claims that mixed workplaces lead to repeated inter-ethnic interaction where norms of professionalism compel everyone to cooperate and to treat each other with respect, making it easier for individuals belonging to the minority group to reach out and form relationships with everyone else. Nevertheless, Giuliano's research in Russia has shown that economic grievances, even in a mixed workplace, can be politicized on ethnic lines.
Examples of ethnic conflicts
Human rights in Ethiopia
Ethnic violence in Konso
Ethnic violence against Amaro Koore
Sampit conflict
Caste War of Yucatán
Russo-Ukrainian War
First Chechen War
Indo-Pakistani war of 1947–1948
First Sino-Japanese War
Moro conflict
Basque conflict
Bersiap
Maluku sectarian conflict
Caucasian War
Yugoslav Wars
Galikoma massacre
Cyprus problem
The Troubles
Transnistria War
War in Abkhazia (1992–1993)
Insurgency in the North Caucasus
Nagorno-Karabakh conflict
Philippine–American War
Polish-Ukrainian War
Polish-Ukrainian ethnic conflict during World War II
Russo-Turkish wars
Occupation of Poland during World War II
Hungarian-Romanian War
American Indian Wars
Armenian genocide
Amhara genocide
Isaaq genocide
Rwandan genocide
Rohingya genocide
Guatemalan genocide
Israeli–Lebanese conflict
Arab–Israeli conflict
Communal conflicts in Nigeria
Sudanese nomadic conflicts
Oromo–Somali clashes
Tuareg rebellions (multiple)
Kurdish–Turkish conflict (1978–present)
Central African Republic conflict (2013–2014)
Ethnic conflict in Nagaland
Internal conflict in Myanmar
Jihadist insurgency in Burkina Faso
Sabra and Shatila massacre
Sri Lankan Civil War
2020 Dungan–Kazakh ethnic clashes
2023 Manipur violence
See also
Communal violence
Cultural conflict
Cultural rights
Democide
Diaspora politics
Ethnic cleansing
Ethnic hatred
Ethnic nationalism
Ethnic violence
Genocide
Hate crime
List of ethnic cleansing campaigns
List of ongoing military conflicts
List of ethnic riots
Political cleansing of population
Population cleansing
Social cleansing
Accelerationism
External links
European Centre for Minority Issues
INCORE International Conflict Research
Political Studies Association Specialist Group on Ethnopolitics
Minority Rights Group International
Party-Directed Mediation: Facilitating Dialogue Between Individuals by Gregorio Billikopf, free complete book PDF download, at the University of California (3rd Edition, 2014). Special focus on multiethnic and multicultural conflicts.
Party-Directed Mediation from Internet Archive (3rd Edition, multiple file formats including PDF, EPUB, and others)
Conflict (process)
Wars by type | Ethnic conflict | ["Biology"] | 6,106 | ["Behavior", "Aggression", "Human behavior", "Conflict (process)"] |
56,106 | https://en.wikipedia.org/wiki/Wildfire | A wildfire, forest fire, or a bushfire is an unplanned, uncontrolled and unpredictable fire in an area of combustible vegetation. Depending on the type of vegetation present, a wildfire may be more specifically identified as a bushfire (in Australia), desert fire, grass fire, hill fire, peat fire, prairie fire, vegetation fire, or veld fire. Some natural forest ecosystems depend on wildfire. Modern forest management often engages in prescribed burns to mitigate fire risk and promote natural forest cycles. However, controlled burns can turn into wildfires by mistake.
Wildfires can be classified by cause of ignition, physical properties, combustible material present, and the effect of weather on the fire. Wildfire severity results from a combination of factors such as available fuels, physical setting, and weather. Climatic cycles with wet periods that create substantial fuels, followed by drought and heat, often precede severe wildfires. These cycles have been intensified by climate change.
Wildfires are a common type of disaster in some regions, including Siberia (Russia); California, Washington, Oregon, Texas, and Florida (United States); British Columbia (Canada); and Australia. Areas with Mediterranean climates or in the taiga biome are particularly susceptible. Wildfires can severely impact humans and their settlements. Effects include, for example, the direct health impacts of smoke and fire, destruction of property (especially in wildland–urban interfaces), and economic losses. There is also the potential for contamination of water and soil.
At a global level, human practices have made the impacts of wildfire worse, with a doubling in land area burned by wildfires compared to natural levels. Humans have impacted wildfire through climate change (e.g. more intense heat waves and droughts), land-use change, and wildfire suppression. The carbon released from wildfires can add to carbon dioxide concentrations in the atmosphere and thus contribute to the greenhouse effect. This creates a climate change feedback.
Naturally occurring wildfires can have beneficial effects on those ecosystems that have evolved with fire. In fact, many plant species depend on the effects of fire for growth and reproduction.
Ignition
The ignition of a fire takes place through either natural causes or human activity (deliberate or not).
Natural causes
Natural occurrences that can ignite wildfires without the involvement of humans include lightning, volcanic eruptions, sparks from rock falls, and spontaneous combustions.
Human activity
Sources of human-caused fire may include arson, accidental ignition, or the uncontrolled use of fire in land-clearing and agriculture such as the slash-and-burn farming in Southeast Asia. In the tropics, farmers often practice the slash-and-burn method of clearing fields during the dry season.
In middle latitudes, the most common human causes of wildfires are equipment generating sparks (chainsaws, grinders, mowers, etc.), overhead power lines, and arson.
Arson may account for over 20% of human-caused fires. However, in the 2019–20 Australian bushfire season, "an independent study found online bots and trolls exaggerating the role of arson in the fires." During the 2023 Canadian wildfires, false claims of arson gained traction on social media; however, arson is generally not a main cause of wildfires in Canada. In California, generally 6–10% of wildfires annually are arson.
Coal seam fires burn in the thousands around the world, such as those in Burning Mountain, New South Wales; Centralia, Pennsylvania; and several coal-sustained fires in China. They can also flare up unexpectedly and ignite nearby flammable material.
Spread
The spread of wildfires varies based on the flammable material present, its vertical arrangement and moisture content, and weather conditions. Fuel arrangement and density is governed in part by topography, as land shape determines factors such as available sunlight and water for plant growth. Overall, fire types can be generally characterized by their fuels as follows:
Ground fires are fed by subterranean roots, duff on the forest floor, and other buried organic matter. Ground fires typically burn by smoldering, and can burn slowly for days to months, such as peat fires in Kalimantan and Eastern Sumatra, Indonesia, which resulted from a riceland creation project that unintentionally drained and dried the peat.
Crawling or surface fires are fueled by low-lying vegetative matter on the forest floor such as leaf and timber litter, debris, grass, and low-lying shrubbery. This kind of fire often burns at a relatively lower temperature than crown fires and may spread at a slow rate, though steep slopes and wind can accelerate the rate of spread. This fuel type is especially susceptible to ignition due to spotting.
Ladder fires consume material between low-level vegetation and tree canopies, such as small trees, downed logs, and vines. Kudzu, Old World climbing fern, and other invasive plants that scale trees may also encourage ladder fires.
Crown, canopy, or aerial fires burn suspended material at the canopy level, such as tall trees, vines, and mosses. The ignition of a crown fire, termed crowning, is dependent on the density of the suspended material, canopy height, canopy continuity, sufficient surface and ladder fires, vegetation moisture content, and weather conditions during the blaze. Stand-replacing fires lit by humans can spread into the Amazon rain forest, damaging ecosystems not particularly suited for heat or arid conditions.
Physical properties
Wildfires occur when all the necessary elements of a fire triangle come together in a susceptible area: an ignition source is brought into contact with a combustible material such as vegetation that is subjected to enough heat and has an adequate supply of oxygen from the ambient air. A high moisture content usually prevents ignition and slows propagation, because higher temperatures are needed to evaporate any water in the material and heat the material to its fire point.
Dense forests usually provide more shade, resulting in lower ambient temperatures and greater humidity, and are therefore less susceptible to wildfires. Less dense material such as grasses and leaves are easier to ignite because they contain less water than denser material such as branches and trunks. Plants continuously lose water by evapotranspiration, but water loss is usually balanced by water absorbed from the soil, humidity, or rain. When this balance is not maintained, often as a consequence of droughts, plants dry out and are therefore more flammable.
A wildfire front is the portion sustaining continuous flaming combustion, where unburned material meets active flames, or the smoldering transition between unburned and burned material. As the front approaches, the fire heats both the surrounding air and woody material through convection and thermal radiation. First, wood is dried as its water is vaporized at about 100 °C. Next, pyrolysis at roughly 230 °C releases flammable gases. Finally, wood can smolder at about 380 °C or, when heated sufficiently, ignite at about 590 °C. Even before the flames of a wildfire arrive at a particular location, heat transfer from the wildfire front warms the air and pre-heats and dries flammable materials, causing them to ignite faster and allowing the fire to spread faster. High-temperature and long-duration surface wildfires may encourage flashover or torching: the drying of tree canopies and their subsequent ignition from below.
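The heating sequence above maps naturally onto a staging function. The thresholds below are the approximate temperatures from the text; real fire behavior depends on moisture, fuel geometry, and airflow, so this is purely schematic:

```python
def fuel_stage(temp_c: float) -> str:
    """Classify what is happening to woody fuel at a given temperature,
    using the approximate thresholds quoted above."""
    if temp_c < 100:
        return "heating"      # below the boiling point of water
    if temp_c < 230:
        return "drying"       # water is vaporized out of the wood
    if temp_c < 380:
        return "pyrolysis"    # flammable gases are released
    if temp_c < 590:
        return "smoldering"   # glowing combustion without flame
    return "ignition"         # sustained flaming combustion

for t in (80, 150, 300, 450, 700):
    print(t, fuel_stage(t))
```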
Wildfires have a rapid forward rate of spread (FROS) when burning through dense uninterrupted fuels. They can move as fast as about 10.8 km/h (6.7 mph) in forests and 22 km/h (14 mph) in grasslands. Wildfires can advance tangentially to the main front to form a flanking front, or burn in the opposite direction of the main front by backing. They may also spread by jumping or spotting, as winds and vertical convection columns carry firebrands (hot wood embers) and other burning materials through the air over roads, rivers, and other barriers that may otherwise act as firebreaks. Torching and fires in tree canopies encourage spotting, and dry ground fuels around a wildfire are especially vulnerable to ignition from firebrands. Spotting can create spot fires as hot embers and firebrands ignite fuels downwind from the fire. In Australian bushfires, spot fires are known to occur as far as 20 km (12 mi) from the fire front.
Especially large wildfires may affect air currents in their immediate vicinities by the stack effect: air rises as it is heated, and large wildfires create powerful updrafts that draw in new, cooler air from surrounding areas in thermal columns. Great vertical differences in temperature and humidity encourage pyrocumulus clouds, strong winds, and fire whirls with the force of tornadoes. Rapid rates of spread, prolific crowning or spotting, the presence of fire whirls, and strong convection columns signify extreme conditions.
Intensity variations during day and night
Intensity also increases during daytime hours. Burn rates of smoldering logs are up to five times greater during the day due to lower humidity, increased temperatures, and increased wind speeds. Sunlight warms the ground during the day, creating air currents that travel uphill; at night the land cools, creating air currents that travel downhill. Wildfires are fanned by these winds and often follow the air currents over hills and through valleys. Fires in Europe occur most frequently between the hours of 12:00 p.m. and 2:00 p.m. Wildfire suppression operations in the United States revolve around a 24-hour fire day that begins at 10:00 a.m., because of the predictable increase in intensity resulting from the daytime warmth.
Climate change effects
Increasing risks due to climate change
Climate change promotes the type of weather that makes wildfires more likely. In some areas, an increase of wildfires has been attributed directly to climate change. Evidence from Earth's past also shows more fire in warmer periods. Climate change increases evapotranspiration. This can cause vegetation and soils to dry out. When a fire starts in an area with very dry vegetation, it can spread rapidly. Higher temperatures can also lengthen the fire season. This is the time of year in which severe wildfires are most likely, particularly in regions where snow is disappearing.
Weather conditions are raising the risks of wildfires. In the United States, about 3 million acres burned each year from 1985 to 1995, but since 2004, between 4 and 10 million acres have burned each year, and the trend continues upward. However, the total area burnt by wildfires has decreased worldwide, mostly because savanna has been converted to cropland, leaving less vegetation to burn.
Climate variability, including heat waves, droughts, and El Niño, and regional weather patterns such as high-pressure ridges, can increase the risk and alter the behavior of wildfires dramatically. Years of high precipitation can produce rapid vegetation growth, which, when followed by warmer periods, can encourage more widespread fires and longer fire seasons. High temperatures dry out fuel loads and make them more flammable, increasing tree mortality and posing significant risks to global forest health. Since the mid-1980s in the Western US, earlier snowmelt and the associated warming have been associated with an increase in the length and severity of the wildfire season, the most fire-prone time of the year. A 2019 study indicates that the increase in fire risk in California may be partially attributable to human-induced climate change.
In the summer of 1974–1975 (southern hemisphere), Australia suffered its worst recorded wildfire season, when 15% of Australia's land mass suffered "extensive fire damage". Fires that summer burned an estimated 117 million hectares. In Australia, the annual number of hot days (above 35 °C) and very hot days (above 40 °C) has increased significantly in many areas of the country since 1950. The country has always had bushfires, but in 2019, the extent and ferocity of these fires increased dramatically. For the first time, catastrophic bushfire conditions were declared for Greater Sydney. New South Wales and Queensland declared a state of emergency, but fires were also burning in South Australia and Western Australia.
In 2019, extreme heat and dryness caused massive wildfires in Siberia, Alaska, Canary Islands, Australia, and in the Amazon rainforest. The fires in the latter were caused mainly by illegal logging. The smoke from the fires expanded on huge territory including major cities, dramatically reducing air quality.
As of August 2020, the wildfires in that year were 13% worse than in 2019 due primarily to climate change, deforestation and agricultural burning. The Amazon rainforest's existence is threatened by fires. Record-breaking wildfires in 2021 occurred in Turkey, Greece and Russia, thought to be linked to climate change.
Carbon dioxide and other emissions from fires
The carbon released from wildfires can add to greenhouse gas concentrations. Climate models do not yet fully reflect this feedback.
Wildfires release large amounts of carbon dioxide, black and brown carbon particles, and ozone precursors such as volatile organic compounds and nitrogen oxides (NOx) into the atmosphere. These emissions affect radiation, clouds, and climate on regional and even global scales. Wildfires also emit substantial amounts of semi-volatile organic species that can partition from the gas phase to form secondary organic aerosol (SOA) over hours to days after emission. In addition, the formation of other pollutants as the air is transported can lead to harmful exposures for populations in regions far away from the wildfires. While direct emissions of harmful pollutants can affect first responders and residents, wildfire smoke can also be transported over long distances and impact air quality across local, regional, and global scales. The health effects of wildfire smoke, such as worsening cardiovascular and respiratory conditions, extend beyond immediate exposure, contributing to nearly 16,000 annual deaths, a number expected to rise to 30,000 by 2050. The economic impact is also significant, with projected costs reaching $240 billion annually by 2050, surpassing other climate-related damages.
Over the past century, wildfires have accounted for 20–25% of global carbon emissions, with the remainder coming from human activities. Global carbon emissions from wildfires through August 2020 equaled the average annual emissions of the European Union. In 2020, the carbon released by California's wildfires was significantly larger than the state's other carbon emissions.
Forest fires in Indonesia in 1997 were estimated to have released between 0.81 and 2.57 gigatonnes (0.89 and 2.83 billion short tons) of CO2 into the atmosphere, which is between 13–40% of the annual global carbon dioxide emissions from burning fossil fuels.
In June and July 2019, fires in the Arctic emitted more than 140 megatons of carbon dioxide, according to an analysis by CAMS. To put that into perspective, this is the same amount of carbon as is emitted by 36 million cars in a year. The recent wildfires and their massive CO2 emissions mean that it will be important to take them into consideration when implementing measures for reaching the greenhouse gas reduction targets of the Paris climate agreement. Due to the complex oxidative chemistry occurring during the transport of wildfire smoke in the atmosphere, the toxicity of emissions has been shown to increase over time.
Atmospheric models suggest that these concentrations of sooty particles could increase absorption of incoming solar radiation during winter months by as much as 15%. The Amazon is estimated to hold around 90 billion tons of carbon. As of 2019, the Earth's atmosphere contains 415 parts per million of carbon dioxide, and the destruction of the Amazon would add about 38 parts per million.
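The roughly 38 ppm figure can be sanity-checked with a short calculation, assuming the commonly quoted conversion of about 2.13 gigatonnes of carbon per ppm of atmospheric CO2 (an assumption not stated in the text):

```python
# Rough back-of-the-envelope check of the "about 38 ppm" figure.
# Assumption: ~2.13 Gt of carbon corresponds to 1 ppm of atmospheric CO2,
# a commonly quoted conversion factor (not taken from the article itself).

GT_CARBON_PER_PPM = 2.13   # gigatonnes of carbon per ppm CO2 (assumed)
amazon_carbon_gt = 90.0    # estimated carbon stored in the Amazon (from the text)

ppm_if_all_released = amazon_carbon_gt / GT_CARBON_PER_PPM
print(f"~{ppm_if_all_released:.0f} ppm if all 90 Gt were released")
# Prints ~42 ppm; the article's ~38 ppm is consistent with this once
# partial uptake by oceans and land is taken into account.
```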
Some research has shown wildfire smoke can have a cooling effect.
Research in 2007 found that black carbon deposited on snow changes temperature three times more than atmospheric carbon dioxide. As much as 94 percent of Arctic warming may be caused by dark carbon on snow that initiates melting. The dark carbon comes from fossil fuel burning, wood and other biofuels, and forest fires. Melting can occur even at very low concentrations of dark carbon (below five parts per billion).
Prevention
Wildfire prevention refers to the preemptive methods aimed at reducing the risk of fires as well as lessening their severity and spread. Prevention techniques aim to manage air quality, maintain ecological balances, protect resources, and affect future fires. Prevention policies must consider the role that humans play in wildfires, since, for example, 95% of forest fires in Europe are related to human involvement.
Wildfire prevention programs around the world may employ techniques such as wildland fire use (WFU) and prescribed or controlled burns. Wildland fire use refers to any fire of natural causes that is monitored but allowed to burn. Controlled burns are fires ignited by government agencies under less dangerous weather conditions. Other objectives can include maintenance of healthy forests, rangelands, and wetlands, and support of ecosystem diversity.
Strategies for wildfire prevention, detection, control, and suppression have varied over the years. One common and inexpensive technique to reduce the risk of uncontrolled wildfires is controlled burning: intentionally igniting smaller, less intense fires to minimize the amount of flammable material available for a potential wildfire. Vegetation may be burned periodically to limit the accumulation of plants and other debris that may serve as fuel, while also maintaining high species diversity. While some argue that controlled burns, together with a policy of allowing some wildfires to burn, are the cheapest and most ecologically appropriate policy for many forests, such arguments tend not to take into account the economic value of resources consumed by fire, especially merchantable timber. Some studies conclude that while fuels may also be removed by logging, such thinning treatments may not be effective at reducing fire severity under extreme weather conditions.
Building codes in fire-prone areas typically require that structures be built of flame-resistant materials and that a defensible space be maintained by clearing flammable materials within a prescribed distance from the structure. Communities in the Philippines also maintain cleared fire lines between the forest and their villages, and patrol these lines during summer months or seasons of dry weather. Continued residential development in fire-prone areas and rebuilding structures destroyed by fires has been met with criticism. The ecological benefits of fire are often overridden by the economic and safety benefits of protecting structures and human life.
Detection
The demand for timely, high-quality fire information has increased in recent years. Fast and effective detection is a key factor in wildfire fighting. Early detection efforts focused on early response, accurate results in both daytime and nighttime, and the ability to prioritize fire danger. Fire lookout towers were used in the United States in the early 20th century, and fires were reported using telephones, carrier pigeons, and heliographs. Aerial and land photography using instant cameras was used in the 1950s until infrared scanning was developed for fire detection in the 1960s. However, information analysis and delivery was often delayed by limitations in communication technology. Early satellite-derived fire analyses were hand-drawn on maps at a remote site and sent via overnight mail to the fire manager. During the Yellowstone fires of 1988, a data station was established in West Yellowstone, permitting the delivery of satellite-based fire information in approximately four hours.
Public hotlines, fire lookouts in towers, and ground and aerial patrols can be used as a means of early detection of forest fires. However, accurate human observation may be limited by operator fatigue, time of day, time of year, and geographic location. Electronic systems have gained popularity in recent years as a possible resolution to human operator error. These systems may be semi- or fully automated and employ systems based on the risk area and degree of human presence, as suggested by GIS data analyses. An integrated approach of multiple systems can be used to merge satellite data, aerial imagery, and personnel position via Global Positioning System (GPS) into a collective whole for near-realtime use by wireless Incident Command Centers.
Local sensor networks
A small, high-risk area that features thick vegetation, a strong human presence, or proximity to a critical urban area can be monitored using a local sensor network. Detection systems may include wireless sensor networks that act as automated weather systems, detecting temperature, humidity, and smoke. These may be battery-powered, solar-powered, or tree-rechargeable: able to recharge their battery systems using the small electrical currents in plant material. Larger, medium-risk areas can be monitored by scanning towers that incorporate fixed cameras and sensors to detect smoke or additional factors such as the infrared signature of carbon dioxide produced by fires. Additional capabilities such as night vision, brightness detection, and color change detection may also be incorporated into sensor arrays.
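As a toy illustration of how such a sensor node might turn raw readings into an alert, consider the sketch below; every threshold and field name is a hypothetical choice, since real systems are calibrated to local conditions:

```python
# Minimal sketch of a wildfire sensor node combining readings into an alert.
# All thresholds and field names here are hypothetical illustration choices.

from dataclasses import dataclass

@dataclass
class SensorReading:
    temperature_c: float      # air temperature in degrees Celsius
    relative_humidity: float  # percent
    smoke_ppm: float          # smoke/particulate concentration

def fire_alert(reading: SensorReading) -> bool:
    """Flag a possible fire when smoke is detected, or when hot, dry
    conditions coincide (high temperature with very low humidity)."""
    if reading.smoke_ppm > 50.0:                     # hypothetical threshold
        return True
    return reading.temperature_c > 40.0 and reading.relative_humidity < 15.0

print(fire_alert(SensorReading(44.0, 10.0, 3.0)))    # True: hot and dry
```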
The Department of Natural Resources signed a contract with PanoAI for the installation of 360-degree "rapid detection" cameras around the Pacific Northwest; mounted on cell towers, the cameras are capable of 24/7 monitoring of a 15-mile radius. Additionally, Sensaio Tech, based in Brazil and Toronto, has released a sensor device that continuously monitors 14 different variables common in forests, ranging from soil temperature to salinity. This information is fed live to clients through dashboard visualizations, with mobile notifications provided when levels become dangerous.
Satellite and aerial monitoring
Satellite and aerial monitoring through the use of planes, helicopters, or UAVs can provide a wider view and may be sufficient to monitor very large, low-risk areas. These more sophisticated systems employ GPS and aircraft-mounted infrared or high-resolution visible cameras to identify and target wildfires. Satellite-mounted sensors such as Envisat's Advanced Along Track Scanning Radiometer and the European Remote-Sensing Satellite's Along-Track Scanning Radiometer can measure infrared radiation emitted by fires and identify hot spots above a minimum detectable size. The National Oceanic and Atmospheric Administration's Hazard Mapping System combines remote-sensing data from satellite sources such as Geostationary Operational Environmental Satellite (GOES), Moderate-Resolution Imaging Spectroradiometer (MODIS), and Advanced Very High Resolution Radiometer (AVHRR) for detection of fire and smoke plume locations. However, satellite detection is prone to offset errors of up to several kilometres, larger for coarse-resolution GOES data than for MODIS and AVHRR data. Satellites in geostationary orbits may become disabled, and satellites in polar orbits are often limited by their short window of observation time. Cloud cover and image resolution may also limit the effectiveness of satellite imagery. Global Forest Watch provides detailed daily updates on fire alerts.
In 2015, a new fire detection tool went into operation at the U.S. Department of Agriculture (USDA) Forest Service (USFS), using data from the Suomi National Polar-orbiting Partnership (NPP) satellite to detect smaller fires in more detail than previous space-based products. The high-resolution data is used with a computer model to predict how a fire will change direction based on weather and land conditions.
In 2014, an international campaign was organized in South Africa's Kruger National Park to validate fire detection products including the new VIIRS active fire data. In advance of that campaign, the Meraka Institute of the Council for Scientific and Industrial Research in Pretoria, South Africa, an early adopter of the VIIRS 375 m fire product, put it to use during several large wildfires in Kruger.
Since 2021 NASA has provided active fire locations in near real-time via the Fire Information for Resource Management System (FIRMS).
Between 2022–2023, wildfires throughout North America prompted an uptake in the delivery and design of various technologies using artificial intelligence for early detection, prevention, and prediction of wildfires.
Suppression
Wildfire suppression depends on the technologies available in the area in which the wildfire occurs. In less developed nations, the techniques used can be as simple as throwing sand or beating the fire with sticks or palm fronds. In more advanced nations, the suppression methods vary due to increased technological capacity. Silver iodide can be used to encourage snowfall, while fire retardants and water can be dropped onto fires by unmanned aerial vehicles, planes, and helicopters. Complete fire suppression is no longer an expectation, but most wildfires are extinguished before they grow out of control. While more than 99% of the 10,000 new wildfires each year are contained, escaped wildfires under extreme weather conditions are difficult to suppress without a change in the weather. Wildfires in Canada and the US burn millions of hectares in an average year.
Fighting wildfires can be deadly. A wildfire's burning front may change direction unexpectedly and jump across fire breaks. Intense heat and smoke can lead to disorientation and loss of appreciation of the direction of the fire, which can make fires particularly dangerous. For example, during the 1949 Mann Gulch fire in Montana, United States, thirteen smokejumpers died when they lost their communication links, became disoriented, and were overtaken by the fire. In the February 2009 Victorian bushfires in Australia, at least 173 people died and over 2,029 homes and 3,500 structures were lost when they became engulfed by wildfire.
Costs of wildfire suppression
Wildfire suppression is costly, and suppression spending directly affects a country's economy. While costs vary widely from year to year, depending on the severity of each fire season, in the United States local, state, federal, and tribal agencies collectively spend billions of dollars annually to suppress wildfires; approximately $6 billion was reportedly spent between 2004 and 2008. In California, the U.S. Forest Service spends about $200 million per year to suppress 98% of wildfires and up to $1 billion to suppress the other 2% of fires that escape initial attack and become large.
Wildland firefighting safety
Wildland fire fighters face several life-threatening hazards including heat stress, fatigue, smoke and dust, as well as the risk of other injuries such as burns, cuts and scrapes, animal bites, and even rhabdomyolysis. Between 2000 and 2016, more than 350 wildland firefighters died on-duty.
Especially in hot weather conditions, fires present the risk of heat stress, which can entail sensations of heat, fatigue, weakness, vertigo, headache, or nausea. Heat stress can progress into heat strain, which entails physiological changes such as increased heart rate and core body temperature. This can lead to heat-related illnesses, such as heat rash, cramps, exhaustion, or heat stroke. Various factors can contribute to the risks posed by heat stress, including strenuous work, personal risk factors such as age and fitness, dehydration, sleep deprivation, and burdensome personal protective equipment. Rest, cool water, and occasional breaks are crucial to mitigating the effects of heat stress.
Smoke, ash, and debris can also pose serious respiratory hazards for wildland firefighters. The smoke and dust from wildfires can contain gases such as carbon monoxide, sulfur dioxide and formaldehyde, as well as particulates such as ash and silica. To reduce smoke exposure, wildfire fighting crews should, whenever possible, rotate firefighters through areas of heavy smoke, avoid downwind firefighting, use equipment rather than people in holding areas, and minimize mop-up. Camps and command posts should also be located upwind of wildfires. Protective clothing and equipment can also help minimize exposure to smoke and ash.
Firefighters are also at risk of cardiac events including strokes and heart attacks. Firefighters should maintain good physical fitness. Fitness programs, medical screening and examination programs which include stress tests can minimize the risks of firefighting cardiac problems. Other injury hazards wildland firefighters face include slips, trips, falls, burns, scrapes, and cuts from tools and equipment, being struck by trees, vehicles, or other objects, plant hazards such as thorns and poison ivy, snake and animal bites, vehicle crashes, electrocution from power lines or lightning storms, and unstable building structures.
Fire retardants
Fire retardants are used to slow wildfires by inhibiting combustion. They are aqueous solutions of ammonium phosphates and ammonium sulfates, as well as thickening agents. The decision to apply retardant depends on the magnitude, location and intensity of the wildfire. In certain instances, fire retardant may also be applied as a precautionary fire defense measure.
Typical fire retardants contain the same agents as fertilizers. Fire retardants may also affect water quality through leaching, eutrophication, or misapplication. Studies of fire retardant's effects on drinking water remain inconclusive. Dilution factors, including water body size, rainfall, and water flow rates, lessen the concentration and potency of fire retardant. Wildfire debris (ash and sediment) clogs rivers and reservoirs, increasing the risk of floods and erosion that ultimately slow and/or damage water treatment systems. There is continued concern about fire retardant's effects on land, water, wildlife habitats, and watershed quality; additional research is needed. However, on the positive side, fire retardant (specifically its nitrogen and phosphorus components) has been shown to have a fertilizing effect on nutrient-deprived soils and thus creates a temporary increase in vegetation.
Impacts on the natural environment
On the atmosphere
Most of Earth's weather and air pollution resides in the troposphere, the part of the atmosphere that extends from the surface of the planet to a height of about 10 kilometres (6 mi). The vertical lift of a severe thunderstorm or pyrocumulonimbus can be enhanced in the area of a large wildfire, which can propel smoke, soot (black carbon), and other particulate matter as high as the lower stratosphere. Previously, prevailing scientific theory held that most particles in the stratosphere came from volcanoes, but smoke and other wildfire emissions have been detected in the lower stratosphere. Pyrocumulus clouds can reach high altitudes over wildfires. Satellite observation of smoke plumes from wildfires revealed that the plumes could be traced intact over very great distances. Computer-aided models such as CALPUFF may help predict the size and direction of wildfire-generated smoke plumes by using atmospheric dispersion modeling.
Wildfires can affect local atmospheric pollution, and release carbon in the form of carbon dioxide. Wildfire emissions contain fine particulate matter which can cause cardiovascular and respiratory problems. Increased fire byproducts in the troposphere can increase ozone concentrations beyond safe levels.
On ecosystems
Wildfires are common in climates that are sufficiently moist to allow the growth of vegetation but feature extended dry, hot periods. Such places include the vegetated areas of Australia and Southeast Asia, the veld in southern Africa, the fynbos in the Western Cape of South Africa, the forested areas of the United States and Canada, and the Mediterranean Basin.
High-severity wildfire creates complex early seral forest habitat (also called "snag forest habitat"), which often has higher species richness and diversity than unburned old forest. Plant and animal species in most types of North American forests evolved with fire, and many of these species depend on wildfires, and particularly high-severity fires, to reproduce and grow. Fire helps to return nutrients from plant matter back to the soil. The heat from fire is necessary to the germination of certain types of seeds, and the snags (dead trees) and early successional forests created by high-severity fire create habitat conditions that are beneficial to wildlife. Early successional forests created by high-severity fire support some of the highest levels of native biodiversity found in temperate conifer forests. Post-fire logging has no ecological benefits and many negative impacts; the same is often true for post-fire seeding. The exclusion of wildfires can contribute to vegetation regime shifts, such as woody plant encroachment.
Although some ecosystems rely on naturally occurring fires to regulate growth, some ecosystems suffer from too much fire, such as the chaparral in southern California and lower-elevation deserts in the American Southwest. The increased fire frequency in these ordinarily fire-dependent areas has upset natural cycles, damaged native plant communities, and encouraged the growth of non-native weeds. Invasive species, such as Lygodium microphyllum and Bromus tectorum, can grow rapidly in areas that were damaged by fires. Because they are highly flammable, they can increase the future risk of fire, creating a positive feedback loop that increases fire frequency and further alters native vegetation communities.
In the Amazon rainforest, drought, logging, cattle ranching practices, and slash-and-burn agriculture damage fire-resistant forests and promote the growth of flammable brush, creating a cycle that encourages more burning. Fires in the rainforest threaten its collection of diverse species and produce large amounts of CO2. Also, fires in the rainforest, along with drought and human involvement, could damage or destroy more than half of the Amazon rainforest by 2030. Wildfires generate ash, reduce the availability of organic nutrients, and cause an increase in water runoff, eroding other nutrients and creating flash flood conditions. A 2003 wildfire in the North Yorkshire Moors burned off the heather and the underlying peat layers. Afterwards, wind erosion stripped the ash and the exposed soil, revealing archaeological remains dating to 10,000 BC. Wildfires can also have an effect on climate change, increasing the amount of carbon released into the atmosphere and inhibiting vegetation growth, which affects overall carbon uptake by plants.
On waterways
Debris and chemical runoff into waterways after wildfires can make drinking water sources unsafe. Though it is challenging to quantify the impacts of wildfires on surface water quality, research suggests that the concentration of many pollutants increases post-fire. The impacts can occur during active burning and for years afterward. Increases in nutrients and total suspended sediments can happen within a year, while heavy metal concentrations may peak 1–2 years after a wildfire.
Benzene is one of many chemicals that have been found in drinking water systems after wildfires. Benzene can permeate certain plastic pipes, and long flushing times may thus be required to remove it from water distribution infrastructure. Researchers estimated that, in worst-case scenarios, more than 286 days of constant flushing of a contaminated HDPE service line were needed to reduce benzene below safe drinking water limits. Temperature increases caused by fires, including wildfires, can cause plastic water pipes to generate toxic chemicals such as benzene.
Impacts on humans
Wildfire risk is the chance that a wildfire will start in or reach a particular area and the potential loss of human values if it does. Risk depends on variable factors such as human activities, weather patterns, availability of wildfire fuels, and the availability or lack of resources to suppress a fire. Wildfires have continually been a threat to human populations. However, human-induced geographic and climatic changes are exposing populations more frequently to wildfires and increasing wildfire risk. It is speculated that the increase in wildfires arises from a century of wildfire suppression coupled with the rapid expansion of human developments into fire-prone wildlands. Wildfires are naturally occurring events that aid in promoting forest health. Global warming and climate change are causing higher temperatures and more droughts in many regions, contributing to increased wildfire risk.
Airborne hazards
The most noticeable adverse effect of wildfires is the destruction of property. However, hazardous chemicals released also significantly impact human health.
Wildfire smoke is composed primarily of carbon dioxide and water vapor. Other common components present in lower concentrations are carbon monoxide, formaldehyde, acrolein, polyaromatic hydrocarbons, and benzene. Small airborne particulates (in solid form or liquid droplets) are also present in smoke and ash debris. 80–90% of wildfire smoke, by mass, is within the fine particle size class of 2.5 micrometers in diameter or smaller.
Carbon dioxide in smoke poses a low health risk due to its low toxicity. Rather, carbon monoxide and fine particulate matter, particularly 2.5 μm in diameter and smaller, have been identified as the major health threats. High levels of heavy metals, including lead, arsenic, cadmium, and copper were found in the ash debris following the 2007 Californian wildfires. A national clean-up campaign was organised in fear of the health effects from exposure. In the devastating California Camp Fire (2018) that killed 85 people, lead levels increased by around 50 times in the hours following the fire at a site nearby (Chico). Zinc concentration also increased significantly in Modesto, 150 miles away. Heavy metals such as manganese and calcium were found in numerous California fires as well. Other chemicals are considered to be significant hazards but are found in concentrations that are too low to cause detectable health effects.
The degree of an individual's wildfire smoke exposure depends on the severity, duration, and proximity of the fire. People are exposed directly to smoke via the respiratory tract through inhalation of air pollutants. Indirectly, communities are exposed to wildfire debris that can contaminate soil and water supplies.
The U.S. Environmental Protection Agency (EPA) developed the air quality index (AQI), a public resource that reports concentrations of common air pollutants relative to national air quality standards. The public can use it to gauge their exposure to hazardous air pollutants, including by visibility range.
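As an illustration of how such an index is computed, the sketch below applies the EPA's piecewise-linear interpolation to PM2.5. The breakpoint table shown is the pre-2024 24-hour PM2.5 table and is periodically revised, so it should be treated as illustrative rather than current guidance.

```python
# Sketch of the EPA's piecewise-linear AQI calculation for PM2.5.
# The interpolation formula is the EPA's; the breakpoints below are the
# pre-2024 24-hour PM2.5 breakpoints and are periodically revised, so
# treat them as illustrative rather than authoritative.

PM25_BREAKPOINTS = [
    # (C_lo, C_hi, I_lo, I_hi), concentrations in ug/m^3
    (0.0,    12.0,    0,  50),  # Good
    (12.1,   35.4,   51, 100),  # Moderate
    (35.5,   55.4,  101, 150),  # Unhealthy for sensitive groups
    (55.5,  150.4,  151, 200),  # Unhealthy
    (150.5, 250.4,  201, 300),  # Very unhealthy
    (250.5, 350.4,  301, 400),  # Hazardous
    (350.5, 500.4,  401, 500),  # Hazardous
]

def pm25_aqi(concentration: float) -> int:
    """Map a 24-hour PM2.5 concentration to an AQI value by linear
    interpolation within its breakpoint interval."""
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if concentration <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (concentration - c_lo) + i_lo)
    return 500  # above the highest breakpoint

print(pm25_aqi(40.0))  # ~112: "Unhealthy for sensitive groups"
```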
Health effects
Wildfire smoke contains particulates that may have adverse effects upon the human respiratory system. Evidence of the health effects should be relayed to the public so that exposure may be limited. The evidence can also be used to influence policy to promote positive health outcomes.
Inhalation of smoke from a wildfire can be a health hazard. Wildfire smoke is composed of combustion products i.e. carbon dioxide, carbon monoxide, water vapor, particulate matter, organic chemicals, nitrogen oxides and other compounds. The principal health concern is the inhalation of particulate matter and carbon monoxide.
Particulate matter (PM) is a type of air pollution made up of particles of dust and liquid droplets. Particles are characterized into three categories based on diameter: coarse PM, fine PM, and ultrafine PM. Coarse particles are between 2.5 and 10 micrometers, fine particles measure 0.1 to 2.5 micrometers, and ultrafine particles are less than 0.1 micrometer. Impact on the body upon inhalation varies by size. Coarse PM is filtered by the upper airways and can accumulate and cause pulmonary inflammation. This can result in eye and sinus irritation as well as sore throat and coughing. Coarse PM is often composed of heavier and more toxic materials that lead to short-term effects with stronger impact.
Smaller PM moves further into the respiratory system, creating issues deep in the lungs and the bloodstream. In asthma patients, PM2.5 causes inflammation and also increases oxidative stress in epithelial cells. These particulates also cause apoptosis and autophagy in lung epithelial cells; both processes damage the cells and impair cell function. This damage particularly affects those with respiratory conditions such as asthma, where lung tissue and function are already compromised. Particles less than 0.1 micrometer are called ultrafine particles (UFP), a major component of wildfire smoke. UFP can enter the bloodstream like PM2.5; however, studies show that it enters the blood much more quickly, and the inflammation and epithelial damage caused by UFP have been shown to be much more severe. PM2.5 is the greatest concern with regard to wildfires. It is particularly hazardous to the very young, the elderly, and those with chronic conditions such as asthma, chronic obstructive pulmonary disease (COPD), cystic fibrosis, and cardiovascular conditions. The illnesses most commonly associated with exposure to fine PM from wildfire smoke are bronchitis, exacerbation of asthma or COPD, and pneumonia. Respiratory symptoms of these complications include wheezing and shortness of breath, and cardiovascular symptoms include chest pain, rapid heart rate, and fatigue.
Asthma exacerbation
Several epidemiological studies have demonstrated a close association between air pollution and respiratory allergic diseases such as bronchial asthma.
An observational study of smoke exposure related to the 2007 San Diego wildfires revealed an increase both in healthcare utilization and in respiratory diagnoses, especially asthma, among the group sampled. Projected climate scenarios of wildfire occurrences predict significant increases in respiratory conditions among young children. PM triggers a series of biological processes, including inflammatory immune responses and oxidative stress, which are associated with harmful changes in allergic respiratory diseases.
Although some studies demonstrated no significant acute changes in lung function among people with asthma related to PM from wildfires, a possible explanation for these counterintuitive findings is the increased use of quick-relief medications, such as inhalers, in response to elevated levels of smoke among those already diagnosed with asthma.
There is consistent evidence between wildfire smoke and the exacerbation of asthma.
Asthma is one of the most common chronic diseases among children in the United States, affecting an estimated 6.2 million children. Research on asthma risk focuses specifically on the risk of air pollution exposure during the gestational period. Several pathophysiological processes are involved. Considerable airway development occurs during the second and third trimesters and continues until about 3 years of age. It is hypothesized that exposure to these toxins during this period could have consequential effects, as the epithelium of the lungs during this time could have increased permeability to toxins. Exposure to air pollution during the parental and prenatal stages could induce epigenetic changes responsible for the development of asthma. Studies have found a significant association between PM2.5 and NO2 exposure and the development of asthma during childhood, despite heterogeneity among studies. Furthermore, maternal exposure to chronic stressors is more likely in distressed communities, and as this is correlated with childhood asthma, it may further explain links between early childhood exposure to air pollution, neighborhood poverty, and childhood asthma risk.
Carbon monoxide danger
Carbon monoxide (CO) is a colorless, odorless gas found at its highest concentrations in close proximity to smoldering fires; it is therefore a serious threat to the health of wildfire firefighters. CO in smoke can be inhaled into the lungs, where it is absorbed into the bloodstream and reduces oxygen delivery to the body's vital organs. At high concentrations, it can cause headaches, weakness, dizziness, confusion, nausea, disorientation, visual impairment, coma, and even death. Even at the lower concentrations found at wildfires, individuals with cardiovascular disease may experience chest pain and cardiac arrhythmia. A study tracking the number and cause of wildfire firefighter deaths from 1990 to 2006 found that 21.9% of the deaths occurred from heart attacks.
Another important and somewhat less obvious health effect of wildfires is psychiatric diseases and disorders. Both adults and children from various countries who were directly and indirectly affected by wildfires were found to demonstrate different mental conditions linked to their experience with the wildfires. These include post-traumatic stress disorder (PTSD), depression, anxiety, and phobias.
Epidemiology
The Western US has seen an increase in both the frequency and intensity of wildfires over the last several decades, attributed to the region's arid climate and the effects of global warming. An estimated 46 million people were exposed to wildfire smoke from 2004 to 2009 in the Western US. Evidence has demonstrated that wildfire smoke can increase levels of airborne particulate matter.
The EPA has defined acceptable concentrations of PM in the air, through the National Ambient Air Quality Standards and monitoring of ambient air quality has been mandated. Due to these monitoring programs and the incidence of several large wildfires near populated areas, epidemiological studies have been conducted and demonstrate an association between human health effects and an increase in fine particulate matter due to wildfire smoke.
An increase in PM smoke emitted from the Hayman fire in Colorado in June 2002 was associated with an increase in respiratory symptoms in patients with COPD. Examining the wildfires in Southern California in 2003, investigators showed an increase in hospital admissions for asthma symptoms during exposure to peak concentrations of PM in smoke. Another epidemiological study found a 7.2% (95% confidence interval: 0.25%, 15%) increase in the risk of respiratory-related hospital admissions during smoke-wave days with high wildfire-specific particulate matter (PM2.5) compared to matched non-smoke-wave days.
Children participating in the Children's Health Study were also found to have an increase in eye and respiratory symptoms, medication use, and physician visits. Mothers who were pregnant during the fires gave birth to babies with a slightly reduced average birth weight compared to those who were not exposed, suggesting that pregnant women may also be at greater risk of adverse effects from wildfires. Worldwide, it is estimated that 339,000 people die each year due to the effects of wildfire smoke.
Besides the size of PM, its chemical composition should also be considered. Previous studies have demonstrated that the chemical composition of PM2.5 from wildfire smoke can yield different estimates of human health outcomes compared to other sources of smoke, such as solid fuels.
Post-fire risks
After a wildfire, hazards remain. Residents returning to their homes may be at risk from falling fire-weakened trees. Humans and pets may also be harmed by falling into ash pits. The Intergovernmental Panel on Climate Change (IPCC) also reports that wildfires cause significant damage to electric systems, especially in dry regions.
Chemically contaminated drinking water, at levels of hazardous-waste concern, is a growing problem. Such chemical contamination of buried water systems was first discovered in the U.S. in 2017 and has since been increasingly documented after wildfires in Hawaii, Colorado, and Oregon. In 2021, Canadian authorities adapted their post-fire public safety investigation approaches in British Columbia to screen for this risk, but had not found it as of 2023. Another challenge is that private drinking wells and the plumbing within buildings can also become chemically contaminated and unsafe. Households experience a wide variety of significant economic and health impacts related to this contaminated water. Evidence-based guidance on how to inspect and test wildfire-impacted wells and building water systems was first developed in 2020. In Paradise, California, for example, the 2018 Camp Fire caused more than $150 million worth of damage to the municipal drinking water system, which took almost a year to decontaminate and repair.
The source of this contamination was first proposed after the 2018 Camp Fire in California: thermally degraded plastics in water systems, smoke and vapors entering depressurized plumbing, and contaminated water in buildings being sucked into the municipal water system. In 2020, thermal degradation of plastic drinking water materials was shown to be one potential contamination source. In 2023, the second mechanism was confirmed: contamination can be sucked into pipes that lose water pressure.
Other post-fire risks can increase if other extreme weather follows. For example, wildfires make soil less able to absorb precipitation, so heavy rainfall can result in more severe flooding and damage such as mudslides.
At-risk groups
Firefighters
Firefighters are at greatest risk for acute and chronic health effects resulting from wildfire smoke exposure. Some of the most common health conditions that firefighters acquire from prolonged smoke inhalation include cardiovascular and respiratory diseases. For example, wildland firefighters can develop hypoxia, a condition in which the body does not receive enough oxygen. Due to their occupational duties, firefighters are frequently exposed to hazardous chemicals at close proximity for long periods of time. A case study on the exposure of wildland firefighters to wildfire smoke shows that firefighters are exposed to significant levels of carbon monoxide and respiratory irritants above OSHA permissible exposure limits (PEL) and ACGIH threshold limit values (TLV); an estimated 5–10% are overexposed.
Between 2001 and 2012, over 200 fatalities occurred among wildland firefighters. In addition to heat and chemical hazards, firefighters are also at risk for electrocution from power lines; injuries from equipment; slips, trips, and falls; injuries from vehicle rollovers; heat-related illness; insect bites and stings; stress; and rhabdomyolysis.
Residents
Residents in communities surrounding wildfires are exposed to lower concentrations of chemicals, but they are at a greater risk for indirect exposure through water or soil contamination. Exposure to residents is greatly dependent on individual susceptibility. Vulnerable persons such as children (ages 0–4), the elderly (ages 65 and older), smokers, and pregnant women are at an increased risk due to their already compromised body systems, even when the exposures are present at low chemical concentrations and for relatively short exposure periods. They are also at risk for future wildfires and may move away to areas they consider less risky.
Wildfires affect large numbers of people in Western Canada and the United States. In California alone, more than 350,000 people live in towns and cities in "very high fire hazard severity zones".
Direct risks to building residents in fire-prone areas can be moderated through design choices such as choosing fire-resistant vegetation, maintaining landscaping to avoid debris accumulation and to create firebreaks, and by selecting fire-retardant roofing materials. Potential compounding issues with poor air quality and heat during warmer months may be addressed with MERV 11 or higher outdoor air filtration in building ventilation systems, mechanical cooling, and a provision of a refuge area with additional air cleaning and cooling, if needed.
History
The first evidence of wildfires is fossils of the giant fungus Prototaxites preserved as charcoal, discovered in South Wales and Poland and dating to the Silurian period (about 430 million years ago). Smoldering surface fires started to occur sometime before the Early Devonian period. Low atmospheric oxygen during the Middle and Late Devonian was accompanied by a decrease in charcoal abundance. Additional charcoal evidence suggests that fires continued through the Carboniferous period. Later, the overall increase of atmospheric oxygen from 13% in the Late Devonian to 30–31% by the Late Permian was accompanied by a more widespread distribution of wildfires. A subsequent decrease in wildfire-related charcoal deposits from the late Permian to the Triassic periods is explained by a decrease in oxygen levels.
Wildfires during the Paleozoic and Mesozoic periods followed patterns similar to fires that occur in modern times. Surface fires driven by dry seasons are evident in Devonian and Carboniferous progymnosperm forests. Lepidodendron forests dating to the Carboniferous period have charred peaks, evidence of crown fires. In Jurassic gymnosperm forests, there is evidence of high frequency, light surface fires. The increase of fire activity in the late Tertiary is possibly due to the increase of C4-type grasses. As these grasses shifted to more mesic habitats, their high flammability increased fire frequency, promoting grasslands over woodlands. However, fire-prone habitats may have contributed to the prominence of trees such as those of the genera Eucalyptus, Pinus and Sequoia, which have thick bark to withstand fires and employ pyriscence.
Human involvement
The human use of fire for agricultural and hunting purposes during the Paleolithic and Mesolithic ages altered pre-existing landscapes and fire regimes. Woodlands were gradually replaced by smaller vegetation that facilitated travel, hunting, seed-gathering, and planting. In recorded human history, minor allusions to wildfires were made in the Bible and by classical writers such as Homer. However, while ancient Hebrew, Greek, and Roman writers were aware of fires, they were not very interested in the uncultivated lands where wildfires occurred. Wildfires were used in battles throughout human history as early thermal weapons. From the Middle Ages, accounts were written of occupational burning as well as customs and laws that governed the use of fire. In Germany, regular burning was documented in 1290 in the Odenwald and in 1344 in the Black Forest. In 14th-century Sardinia, firebreaks were used for wildfire protection. In Spain during the 1550s, sheep husbandry was discouraged in certain provinces by Philip II due to the harmful effects of fires used in transhumance. As early as the 17th century, Native Americans were observed using fire for many purposes including cultivation, signaling, and warfare. Scottish botanist David Douglas noted the native use of fire for tobacco cultivation, to encourage deer into smaller areas for hunting purposes, and to improve foraging for honey and grasshoppers. Charcoal found in sedimentary deposits off the Pacific coast of Central America suggests that more burning occurred in the 50 years before the Spanish colonization of the Americas than after the colonization. In the post-World War II Baltic region, socio-economic changes led to more stringent air quality standards and bans on fires that eliminated traditional burning practices. In the mid-19th century, explorers observed Australian Aborigines using fire for ground clearing, hunting, and regeneration of plant food in a method later named fire-stick farming. Such careful use of fire has been employed for centuries in lands protected by Kakadu National Park to encourage biodiversity.
Wildfires typically occur during periods of increased temperature and drought. An increase in fire-related debris flow in alluvial fans of northeastern Yellowstone National Park was linked to the period between AD 1050 and 1200, coinciding with the Medieval Warm Period. Human influence, however, has also caused increases in fire frequency. Dendrochronological fire scar data and charcoal layer data in Finland suggest that, while many fires occurred during severe drought conditions, an increase in the number of fires between 850 BC and 1660 AD can be attributed to human influence. Charcoal evidence from the Americas suggested a general decrease in wildfires between 1 AD and 1750 compared to previous years. However, a period of increased fire frequency between 1750 and 1870 was suggested by charcoal data from North America and Asia, attributed to human population growth and influences such as land clearing practices. This period was followed by an overall decrease in burning in the 20th century, linked to the expansion of agriculture, increased livestock grazing, and fire prevention efforts. A meta-analysis found that 17 times more land burned annually in California before 1800 compared to recent decades (1,800,000 hectares/year compared to 102,000 hectares/year).
According to a paper published in the journal Science, the number of natural and human-caused fires decreased by 24.3% between 1998 and 2015. Researchers explain this as a transition from nomadism to settled lifestyles and an intensification of agriculture that led to a drop in the use of fire for land clearing.
Increases of certain tree species (e.g., conifers) over others (e.g., deciduous trees) can increase wildfire risk, especially if these trees are also planted in monocultures.
Some invasive species, moved in by humans (e.g., for the pulp and paper industry), have in some cases also increased the intensity of wildfires. Examples include Eucalyptus in California and gamba grass in Australia.
Society and culture
Wildfires have a place in many cultures. "To spread like wildfire" is a common idiom in English, meaning something that "quickly affects or becomes known by more and more people".
Wildfire activity has been attributed as a major factor in the development of Ancient Greece. In modern Greece, as in many other regions, it is the most common disaster caused by a natural hazard and figures prominently in the social and economic lives of its people.
In 1937, U.S. President Franklin D. Roosevelt initiated a nationwide fire prevention campaign, highlighting the role of human carelessness in forest fires. Later posters of the program featured Uncle Sam, characters from the Disney movie Bambi, and the official mascot of the U.S. Forest Service, Smokey Bear. The Smokey Bear fire prevention campaign has yielded one of the most popular characters in the United States; for many years there was a living Smokey Bear mascot, and it has been commemorated on postage stamps.
There are also significant indirect or second-order societal impacts from wildfire, such as demands on utilities to prevent power transmission equipment from becoming ignition sources, and the cancellation or nonrenewal of homeowners insurance for residents living in wildfire-prone areas.
See also
List of wildfires
Wildfire risk indices
Articles containing video clips
Emergency management
Fire prevention
Types of fire
Natural disasters
Pollution
Wildfire ecology
Weather lore | Wildfire | [
"Physics"
] | 11,582 | [
"Weather",
"Natural disasters",
"Physical phenomena",
"Weather lore"
] |
56,107 | https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings%20algorithm | In statistics and statistical physics, the Metropolis–Hastings algorithm is a Markov chain Monte Carlo (MCMC) method for obtaining a sequence of random samples from a probability distribution from which direct sampling is difficult. New samples are added to the sequence in two steps: first a new sample is proposed based on the previous sample, then the proposed sample is either added to the sequence or rejected depending on the value of the probability distribution at that point. The resulting sequence can be used to approximate the distribution (e.g. to generate a histogram) or to compute an integral (e.g. an expected value).
Metropolis–Hastings and other MCMC algorithms are generally used for sampling from multi-dimensional distributions, especially when the number of dimensions is high. For single-dimensional distributions, there are usually other methods (e.g. adaptive rejection sampling) that can directly return independent samples from the distribution, and these are free from the problem of autocorrelated samples that is inherent in MCMC methods.
History
The algorithm is named in part for Nicholas Metropolis, the first coauthor of a 1953 paper, entitled Equation of State Calculations by Fast Computing Machines, with Arianna W. Rosenbluth, Marshall Rosenbluth, Augusta H. Teller and Edward Teller. For many years the algorithm was known simply as the Metropolis algorithm. The paper proposed the algorithm for the case of symmetrical proposal distributions, but in 1970, W.K. Hastings extended it to the more general case. The generalized method was eventually identified by both names, although the first use of the term "Metropolis–Hastings algorithm" is unclear.
Some controversy exists with regard to credit for development of the Metropolis algorithm. Metropolis, who was familiar with the computational aspects of the method, had coined the term "Monte Carlo" in an earlier article with Stanisław Ulam, and led the group in the Theoretical Division that designed and built the MANIAC I computer used in the experiments in 1952. However, prior to 2003 there was no detailed account of the algorithm's development. Shortly before his death, Marshall Rosenbluth attended a 2003 conference at LANL marking the 50th anniversary of the 1953 publication. At this conference, Rosenbluth described the algorithm and its development in a presentation titled "Genesis of the Monte Carlo Algorithm for Statistical Mechanics". Further historical clarification is made by Gubernatis in a 2005 journal article recounting the 50th anniversary conference. Rosenbluth makes it clear that he and his wife Arianna did the work, and that Metropolis played no role in the development other than providing computer time.
This contradicts an account by Edward Teller, who states in his memoirs that the five authors of the 1953 article worked together for "days (and nights)". In contrast, the detailed account by Rosenbluth credits Teller with a crucial but early suggestion to "take advantage of statistical mechanics and take ensemble averages instead of following detailed kinematics". This, says Rosenbluth, started him thinking about the generalized Monte Carlo approach – a topic which he says he had discussed often with John von Neumann. Arianna Rosenbluth recounted (to Gubernatis in 2003) that Augusta Teller started the computer work, but that Arianna herself took it over and wrote the code from scratch. In an oral history recorded shortly before his death, Rosenbluth again credits Teller with posing the original problem, himself with solving it, and Arianna with programming the computer.
Description
The Metropolis–Hastings algorithm can draw samples from any probability distribution with probability density $P(x)$, provided that we know a function $f(x)$ proportional to the density $P$ and the values of $f(x)$ can be calculated. The requirement that $f(x)$ must only be proportional to the density, rather than exactly equal to it, makes the Metropolis–Hastings algorithm particularly useful, because it removes the need to calculate the density's normalization factor, which is often extremely difficult in practice.
The Metropolis–Hastings algorithm generates a sequence of sample values in such a way that, as more and more sample values are produced, the distribution of values more closely approximates the desired distribution. These sample values are produced iteratively in such a way that the distribution of the next sample depends only on the current sample value, which makes the sequence of samples a Markov chain. Specifically, at each iteration, the algorithm proposes a candidate for the next sample value based on the current sample value. Then, with some probability, the candidate is either accepted, in which case the candidate value is used in the next iteration, or rejected, in which case the candidate value is discarded and the current value is reused in the next iteration. The probability of acceptance is determined by comparing the values of the function $f$ at the current and candidate sample values with respect to the desired distribution.
The method used to propose new candidates is characterized by the probability distribution $g(x' \mid x)$ (sometimes written $Q(x' \mid x)$) of a new proposed sample $x'$ given the previous sample $x$. This is called the proposal density, proposal function, or jumping distribution. A common choice for $g(x' \mid x)$ is a Gaussian distribution centered at $x$, so that points closer to $x$ are more likely to be visited next, making the sequence of samples into a Gaussian random walk. In the original paper by Metropolis et al. (1953), $g(x' \mid x)$ was suggested to be a uniform distribution limited to some maximum distance from $x$. More complicated proposal functions are also possible, such as those of Hamiltonian Monte Carlo, Langevin Monte Carlo, or preconditioned Crank–Nicolson.
For the purpose of illustration, the Metropolis algorithm, a special case of the Metropolis–Hastings algorithm where the proposal function is symmetric, is described below.
Metropolis algorithm (symmetric proposal distribution)
Let $f(x)$ be a function that is proportional to the desired probability density function $P(x)$ (a.k.a. a target distribution).
Initialization: Choose an arbitrary point $x_0$ to be the first observation in the sample and choose a proposal function $g(x \mid y)$. In this section, $g$ is assumed to be symmetric; in other words, it must satisfy $g(x \mid y) = g(y \mid x)$.
For each iteration $t$:
Propose a candidate $x'$ for the next sample by picking from the distribution $g(x' \mid x_t)$.
Calculate the acceptance ratio $\alpha = f(x') / f(x_t)$, which will be used to decide whether to accept or reject the candidate. Because $f$ is proportional to the density of $P$, we have that $\alpha = f(x') / f(x_t) = P(x') / P(x_t)$.
Accept or reject:
Generate a uniform random number $u \in [0, 1]$.
If $u \le \alpha$, then accept the candidate by setting $x_{t+1} = x'$,
If $u > \alpha$, then reject the candidate and set $x_{t+1} = x_t$ instead.
This algorithm proceeds by randomly attempting to move about the sample space, sometimes accepting the moves and sometimes remaining in place. The number of iterations the algorithm spends at a specific point is proportional to the density $P(x)$ at that point. Note that the acceptance ratio $\alpha$ indicates how probable the new proposed sample is with respect to the current sample, according to the distribution whose density is $P(x)$. If we attempt to move to a point that is more probable than the existing point (i.e. a point in a higher-density region of $P(x)$, corresponding to an $\alpha > 1 \ge u$), we will always accept the move. However, if we attempt to move to a less probable point, we will sometimes reject the move, and the larger the relative drop in probability, the more likely we are to reject the new point. Thus, we will tend to stay in (and return large numbers of samples from) high-density regions of $P(x)$, while only occasionally visiting low-density regions. Intuitively, this is why this algorithm works and returns samples that follow the desired distribution with density $P(x)$.
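A minimal runnable sketch of the steps above in Python, assuming a Gaussian random-walk proposal and an unnormalized standard-normal target (both illustrative choices, not mandated by the algorithm):

```python
import math
import random

def f(x):
    """Unnormalized target density: proportional to a standard normal."""
    return math.exp(-0.5 * x * x)

def metropolis(n_samples, x0=0.0, step=1.0):
    """Random-walk Metropolis with a symmetric Gaussian proposal g(x'|x)."""
    samples = [x0]
    x = x0
    for _ in range(n_samples - 1):
        x_prime = random.gauss(x, step)   # propose x' ~ g(x'|x), symmetric
        alpha = f(x_prime) / f(x)         # acceptance ratio f(x')/f(x)
        if random.random() <= alpha:      # accept with probability min(1, alpha)
            x = x_prime                   # accept the candidate
        samples.append(x)                 # on rejection, x is unchanged
    return samples

chain = metropolis(10_000)
print(sum(chain) / len(chain))  # close to 0, the mean of the target
```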
Compared with an algorithm like adaptive rejection sampling that directly generates independent samples from a distribution, Metropolis–Hastings and other MCMC algorithms have a number of disadvantages:
The samples are autocorrelated. Even though over the long term they do correctly follow , a set of nearby samples will be correlated with each other and not correctly reflect the distribution. This means that effective sample sizes can be significantly lower than the number of samples actually taken, leading to large errors.
Although the Markov chain eventually converges to the desired distribution, the initial samples may follow a very different distribution, especially if the starting point is in a region of low density. As a result, a burn-in period is typically necessary, where an initial number of samples are thrown away.
On the other hand, most simple rejection sampling methods suffer from the "curse of dimensionality", where the probability of rejection increases exponentially as a function of the number of dimensions. Metropolis–Hastings, along with other MCMC methods, does not have this problem to such a degree, and such methods are thus often the only solutions available when the number of dimensions of the distribution to be sampled is high. As a result, MCMC methods are often the methods of choice for producing samples from hierarchical Bayesian models and other high-dimensional statistical models used nowadays in many disciplines.
In multivariate distributions, the classic Metropolis–Hastings algorithm as described above involves choosing a new multi-dimensional sample point. When the number of dimensions is high, finding the suitable jumping distribution to use can be difficult, as the different individual dimensions behave in very different ways, and the jumping width (see above) must be "just right" for all dimensions at once to avoid excessively slow mixing. An alternative approach that often works better in such situations, known as Gibbs sampling, involves choosing a new sample for each dimension separately from the others, rather than choosing a sample for all dimensions at once. That way, the problem of sampling from a potentially high-dimensional space is reduced to a collection of problems of small dimensionality. This is especially applicable when the multivariate distribution is composed of a set of individual random variables in which each variable is conditioned on only a small number of other variables, as is the case in most typical hierarchical models. The individual variables are then sampled one at a time, with each variable conditioned on the most recent values of all the others. Various algorithms can be used to choose these individual samples, depending on the exact form of the multivariate distribution: some possibilities are the adaptive rejection sampling methods, the adaptive rejection Metropolis sampling algorithm, a simple one-dimensional Metropolis–Hastings step, or slice sampling.
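As a concrete illustration of this coordinate-wise strategy, here is a minimal Gibbs-sampling sketch for a two-dimensional target whose one-dimensional conditionals are known in closed form; the bivariate-normal target and the correlation value are illustrative assumptions, not from the text above:

```python
import math
import random

def gibbs_bivariate_normal(n_samples, rho=0.8):
    """Gibbs sampling for a standard bivariate normal with correlation rho.
    Each coordinate is resampled from its exact conditional distribution,
    which here is itself normal: x | y ~ N(rho*y, 1 - rho^2), and vice versa."""
    sd = math.sqrt(1.0 - rho * rho)
    x, y = 0.0, 0.0
    samples = []
    for _ in range(n_samples):
        x = random.gauss(rho * y, sd)   # sample x conditioned on current y
        y = random.gauss(rho * x, sd)   # sample y conditioned on the new x
        samples.append((x, y))
    return samples

pairs = gibbs_bivariate_normal(10_000)
print(sum(x * y for x, y in pairs) / len(pairs))  # close to rho = 0.8
```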
Formal derivation
The purpose of the Metropolis–Hastings algorithm is to generate a collection of states according to a desired distribution $P(x)$. To accomplish this, the algorithm uses a Markov process, which asymptotically reaches a unique stationary distribution $\pi(x)$ such that $\pi(x) = P(x)$.
A Markov process is uniquely defined by its transition probabilities $P(x' \mid x)$, the probability of transitioning from any given state $x$ to any other given state $x'$. It has a unique stationary distribution when the following two conditions are met:
Existence of stationary distribution: there must exist a stationary distribution $\pi(x)$. A sufficient but not necessary condition is detailed balance, which requires that each transition $x \to x'$ is reversible: for every pair of states $x, x'$, the probability of being in state $x$ and transitioning to state $x'$ must be equal to the probability of being in state $x'$ and transitioning to state $x$: $\pi(x) P(x' \mid x) = \pi(x') P(x \mid x')$.
Uniqueness of stationary distribution: the stationary distribution must be unique. This is guaranteed by ergodicity of the Markov process, which requires that every state must (1) be aperiodic—the system does not return to the same state at fixed intervals; and (2) be positive recurrent—the expected number of steps for returning to the same state is finite.
The Metropolis–Hastings algorithm involves designing a Markov process (by constructing transition probabilities) that fulfills the two above conditions, such that its stationary distribution $\pi(x)$ is chosen to be $P(x)$. The derivation of the algorithm starts with the condition of detailed balance:
$$P(x' \mid x)\, P(x) = P(x \mid x')\, P(x'),$$
which is re-written as
$$\frac{P(x' \mid x)}{P(x \mid x')} = \frac{P(x')}{P(x)}.$$
The approach is to separate the transition in two sub-steps; the proposal and the acceptance-rejection. The proposal distribution $g(x' \mid x)$ is the conditional probability of proposing a state $x'$ given $x$, and the acceptance distribution $A(x', x)$ is the probability to accept the proposed state $x'$. The transition probability can be written as the product of them:
$$P(x' \mid x) = g(x' \mid x)\, A(x', x).$$
Inserting this relation in the previous equation, we have
$$\frac{A(x', x)}{A(x, x')} = \frac{P(x')}{P(x)} \frac{g(x \mid x')}{g(x' \mid x)}.$$
The next step in the derivation is to choose an acceptance ratio that fulfills the condition above. One common choice is the Metropolis choice:
$$A(x', x) = \min\left(1,\; \frac{P(x')}{P(x)} \frac{g(x \mid x')}{g(x' \mid x)}\right).$$
For this Metropolis acceptance ratio $A$, either $A(x', x) = 1$ or $A(x, x') = 1$ and, either way, the condition is satisfied.
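To spell out the case check in one more step (using the notation reconstructed above, and assuming without loss of generality that $P(x')\,g(x \mid x') \ge P(x)\,g(x' \mid x)$):

```latex
\begin{align*}
A(x', x) &= 1,
\qquad
A(x, x') = \frac{P(x)\, g(x' \mid x)}{P(x')\, g(x \mid x')} \le 1, \\
P(x)\, g(x' \mid x)\, A(x', x)
  &= P(x)\, g(x' \mid x)
   = P(x')\, g(x \mid x')\, A(x, x'),
\end{align*}
```

which is exactly the detailed-balance condition $P(x)\,P(x' \mid x) = P(x')\,P(x \mid x')$; the opposite case follows by swapping $x$ and $x'$.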
The Metropolis–Hastings algorithm can thus be written as follows:
Initialise
Pick an initial state $x_0$.
Set $t = 0$.
Iterate
Generate a random candidate state $x'$ according to $g(x' \mid x_t)$.
Calculate the acceptance probability $A(x', x_t) = \min\left(1,\; \frac{P(x')}{P(x_t)} \frac{g(x_t \mid x')}{g(x' \mid x_t)}\right)$.
Accept or reject:
generate a uniform random number $u \in [0, 1]$;
if $u \le A(x', x_t)$, then accept the new state and set $x_{t+1} = x'$;
if $u > A(x', x_t)$, then reject the new state, and copy the old state forward: $x_{t+1} = x_t$.
Increment: set $t = t + 1$.
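A minimal Python sketch of the loop above, assuming a symmetric Gaussian random-walk proposal (so the proposal-ratio factor $g(x_t \mid x')/g(x' \mid x_t)$ cancels) and a one-dimensional target supplied as an unnormalized log density; the example target, step size, and chain length are illustrative choices, not part of the algorithm itself.

```python
import numpy as np

def metropolis_hastings(log_p, x0, n_steps, step_size=1.0, seed=0):
    """Random-walk Metropolis sampler for an unnormalized target.

    log_p: function returning log P(x) up to an additive constant.
    With a symmetric proposal g(x'|x) = g(x|x'), the acceptance
    probability reduces to min(1, P(x') / P(x_t)).
    """
    rng = np.random.default_rng(seed)
    x = x0
    log_px = log_p(x)
    chain = np.empty(n_steps)
    for t in range(n_steps):
        x_prop = x + rng.normal(0.0, step_size)      # generate candidate
        log_px_prop = log_p(x_prop)
        # Accept with probability min(1, P(x')/P(x_t)); compare in logs.
        if np.log(rng.uniform()) < log_px_prop - log_px:
            x, log_px = x_prop, log_px_prop          # accept
        chain[t] = x                                 # rejection keeps old state
    return chain

# Example: sample from an unnormalized standard normal.
chain = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0, n_steps=50_000)
print(chain[5000:].mean(), chain[5000:].std())       # ~0, ~1 after burn-in
```

Working with log densities avoids numerical underflow when $P(x)$ is tiny, which is why the acceptance test compares $\log u$ with the log-ratio rather than forming the ratio directly.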
Provided that specified conditions are met, the empirical distribution of saved states $x_0, \ldots, x_T$ will approach $P(x)$. The number of iterations ($T$) required to effectively estimate $P(x)$ depends on a number of factors, including the relationship between $P(x)$ and the proposal distribution and the desired accuracy of estimation. For distributions on discrete state spaces, it has to be of the order of the autocorrelation time of the Markov process.
It is important to notice that it is not clear, in a general problem, which distribution $g(x' \mid x)$ one should use or the number of iterations necessary for proper estimation; both are free parameters of the method, which must be adjusted to the particular problem at hand.
Use in numerical integration
A common use of the Metropolis–Hastings algorithm is to compute an integral. Specifically, consider a space $\Omega \subset \mathbb{R}^m$ and a probability distribution $P(x)$ over $\Omega$, $x \in \Omega$. Metropolis–Hastings can estimate an integral of the form
$$\int_\Omega A(x)\, P(x)\, dx,$$
where $A(x)$ is a (measurable) function of interest.
For example, consider a statistic $E(x)$ and its probability distribution $P(E)$, which is a marginal distribution. Suppose that the goal is to estimate $P(E)$ for $E$ on the tail of $P(E)$. Formally, $P(E)$ can be written as
$$P(E) = \int_\Omega P(E \mid x)\, P(x)\, dx = \int_\Omega \delta\big(E - E(x)\big)\, P(x)\, dx,$$
and, thus, estimating $P(E)$ can be accomplished by estimating the expected value of the indicator function $A_E(x) \equiv \mathbf{1}_E(x)$, which is 1 when $E(x) \in [E, E + \Delta E]$ and zero otherwise.
Because $E$ is on the tail of $P(E)$, the probability to draw a state $x$ with $E(x)$ on the tail of $P(E)$ is proportional to $P(E)$, which is small by definition. The Metropolis–Hastings algorithm can be used here to sample (rare) states more likely and thus increase the number of samples used to estimate $P(E)$ on the tails. This can be done e.g. by using a sampling distribution $\pi(x)$ to favor those states (e.g. $\pi(x) \propto e^{aE}$ with $a > 0$).
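The following hedged sketch applies this idea in one dimension: to estimate the tail probability $P(X > c)$ of a standard normal, it runs Metropolis–Hastings on an exponentially tilted distribution $\pi(x) \propto P(x)\, e^{a x}$ (so tail states are visited often) and then undoes the tilt with self-normalized importance weights. The threshold, tilt strength, and chain length are invented for the example.

```python
import numpy as np

def mh_chain(log_target, x0, n_steps, step, rng):
    """Random-walk Metropolis on an unnormalized log density."""
    x, lp = x0, log_target(x0)
    out = np.empty(n_steps)
    for t in range(n_steps):
        xp = x + rng.normal(0.0, step)
        lpp = log_target(xp)
        if np.log(rng.uniform()) < lpp - lp:
            x, lp = xp, lpp
        out[t] = x
    return out

rng = np.random.default_rng(1)
c, a = 4.0, 4.0                            # tail threshold and tilt strength
log_p = lambda x: -0.5 * x**2              # standard normal, unnormalized
log_tilted = lambda x: log_p(x) + a * x    # pi(x) proportional to P(x) e^{a x}

xs = mh_chain(log_tilted, x0=a, n_steps=200_000, step=1.0, rng=rng)[20_000:]
w = np.exp(-a * xs)                        # importance weights back to P
est = np.mean((xs > c) * w) / np.mean(w)   # self-normalized estimate
print(est)   # exact standard-normal tail P(X > 4) is about 3.17e-5
```

The self-normalized ratio is exact in expectation even though both $P$ and $\pi$ are only known up to constants, because the unknown normalizers cancel between numerator and denominator.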
Step-by-step instructions
Suppose that the most recent value sampled is $x_t$. To follow the Metropolis–Hastings algorithm, we next draw a new proposal state $x'$ with probability density $g(x' \mid x_t)$ and calculate a value
$$a = a_1 a_2,$$
where
$$a_1 = \frac{P(x')}{P(x_t)}$$
is the probability (e.g., Bayesian posterior) ratio between the proposed sample $x'$ and the previous sample $x_t$, and
$$a_2 = \frac{g(x_t \mid x')}{g(x' \mid x_t)}$$
is the ratio of the proposal density in two directions (from $x_t$ to $x'$ and conversely).
This is equal to 1 if the proposal density is symmetric.
Then the new state $x_{t+1}$ is chosen according to the following rules.
If $a \geq 1$: $x_{t+1} = x'$;
else:
$$x_{t+1} = \begin{cases} x' & \text{with probability } a, \\ x_t & \text{with probability } 1 - a. \end{cases}$$
The Markov chain is started from an arbitrary initial value $x_0$, and the algorithm is run for many iterations until this initial state is "forgotten". These samples, which are discarded, are known as burn-in. The remaining set of accepted values of $x$ represents a sample from the distribution $P(x)$.
The algorithm works best if the proposal density matches the shape of the target distribution $P(x)$, from which direct sampling is difficult, that is $g(x' \mid x_t) \approx P(x')$.
If a Gaussian proposal density $g$ is used, the variance parameter $\sigma^2$ has to be tuned during the burn-in period.
This is usually done by calculating the acceptance rate, which is the fraction of proposed samples that is accepted in a window of the last $N$ samples.
The desired acceptance rate depends on the target distribution; however, it has been shown theoretically that the ideal acceptance rate for a one-dimensional Gaussian distribution is about 50%, decreasing to about 23% for an $N$-dimensional Gaussian target distribution. These guidelines can work well when sampling from sufficiently regular Bayesian posteriors, as they often follow a multivariate normal distribution, as can be established using the Bernstein–von Mises theorem.
If $\sigma^2$ is too small, the chain will mix slowly (i.e., the acceptance rate will be high, but successive samples will move around the space slowly, and the chain will converge only slowly to $P(x)$). On the other hand,
if $\sigma^2$ is too large, the acceptance rate will be very low because the proposals are likely to land in regions of much lower probability density, so $a_1$ will be very small, and again the chain will converge very slowly. One typically tunes the proposal distribution so that the algorithm accepts on the order of 30% of all samples – in line with the theoretical estimates mentioned in the previous paragraph.
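A hedged sketch of the burn-in tuning just described, assuming a one-dimensional random-walk sampler and a simple multiplicative adjustment rule that pushes the windowed acceptance rate toward roughly 30%; the window length and adjustment factors are illustrative choices, not prescribed by the theory.

```python
import numpy as np

def tune_step_size(log_p, x0, target_rate=0.3, window=200,
                   n_windows=50, step=1.0, seed=0):
    """Adjust the Gaussian proposal's step size during burn-in.

    After each window of proposals, the step is inflated if the
    acceptance rate is above the target (proposals too timid) and
    shrunk if it is below (proposals overshooting into low density).
    """
    rng = np.random.default_rng(seed)
    x, lp = x0, log_p(x0)
    for _ in range(n_windows):
        accepted = 0
        for _ in range(window):
            xp = x + rng.normal(0.0, step)
            lpp = log_p(xp)
            if np.log(rng.uniform()) < lpp - lp:
                x, lp, accepted = xp, lpp, accepted + 1
        rate = accepted / window
        step *= 1.1 if rate > target_rate else 0.9   # crude adjustment
    return step, x   # tuned step and a starting state for the real run

step, x_start = tune_step_size(lambda x: -0.5 * x**2, x0=0.0)
print(step)   # settles near an efficient scale for a unit-variance target
```

In practice the adaptation is stopped before samples are collected, since a proposal that keeps changing breaks the stationarity argument behind the algorithm.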
Bayesian inference
MCMC can be used to draw samples from the posterior distribution of a statistical model.
The acceptance probability is given by:
$$P_{\mathrm{acc}}(\theta_i \to \theta^*) = \min\left(1,\; \frac{\mathcal{L}(y \mid \theta^*)\, P(\theta^*)}{\mathcal{L}(y \mid \theta_i)\, P(\theta_i)}\, \frac{Q(\theta_i \mid \theta^*)}{Q(\theta^* \mid \theta_i)}\right),$$
where $\mathcal{L}$ is the likelihood, $P(\theta)$ the prior probability density and $Q$ the (conditional) proposal probability.
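As a hedged illustration of the formula above, the following sketch samples the posterior of a single mean parameter under a normal likelihood with known variance and a normal prior, using a symmetric proposal so the $Q$-ratio cancels; all data values and hyperparameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(3.0, 1.0, size=50)        # synthetic data, true mean = 3

def log_posterior(theta):
    # log L(y | theta) + log P(theta), up to additive constants
    log_lik = -0.5 * np.sum((y - theta) ** 2)   # N(theta, 1) likelihood
    log_prior = -0.5 * theta**2 / 10.0**2       # N(0, 10^2) prior
    return log_lik + log_prior

theta, lp = 0.0, log_posterior(0.0)
draws = []
for _ in range(20_000):
    prop = theta + rng.normal(0.0, 0.3)         # symmetric proposal, Q cancels
    lpp = log_posterior(prop)
    if np.log(rng.uniform()) < lpp - lp:        # min(1, ratio) acceptance test
        theta, lp = prop, lpp
    draws.append(theta)

print(np.mean(draws[2000:]))   # posterior mean, close to the sample mean of y
```

Only the product of likelihood and prior is ever evaluated, so the intractable normalizing constant of the posterior never needs to be computed, which is the main reason MCMC is attractive for Bayesian models.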
See also
Genetic algorithms
Mean-field particle methods
Metropolis light transport
Multiple-try Metropolis
Parallel tempering
Sequential Monte Carlo
Simulated annealing
References
Notes
Further reading
Bernd A. Berg. Markov Chain Monte Carlo Simulations and Their Statistical Analysis. Singapore, World Scientific, 2004.
Chib, Siddhartha; Greenberg, Edward (1995). "Understanding the Metropolis–Hastings Algorithm". The American Statistician, 49(4), 327–335.
David D. L. Minh and Do Le Minh. "Understanding the Hastings Algorithm." Communications in Statistics - Simulation and Computation, 44:2 332–349, 2015
Bolstad, William M. (2010) Understanding Computational Bayesian Statistics, John Wiley & Sons
Monte Carlo methods
Markov chain Monte Carlo
Statistical algorithms | Metropolis–Hastings algorithm | [
"Physics"
] | 3,482 | [
"Monte Carlo methods",
"Computational physics"
] |
56,108 | https://en.wikipedia.org/wiki/Penrose%20triangle | The Penrose triangle, also known as the Penrose tribar, the impossible tribar, or the impossible triangle, is a triangular impossible object, an optical illusion consisting of an object which can be depicted in a perspective drawing. It cannot exist as a solid object in ordinary three-dimensional Euclidean space, although its surface can be embedded isometrically (bent but not stretched) in five-dimensional Euclidean space. It was first created by the Swedish artist Oscar Reutersvärd in 1934. Independently from Reutersvärd, the triangle was devised and popularized in the 1950s by psychiatrist Lionel Penrose and his son, the mathematician and Nobel Prize laureate Roger Penrose, who described it as "impossibility in its purest form". It is featured prominently in the works of artist M. C. Escher, whose earlier depictions of impossible objects partly inspired it.
Description
The tribar/triangle appears to be a solid object, made of three straight beams of square cross-section which meet pairwise at right angles at the vertices of the triangle they form. The beams may be broken, forming cubes or cuboids.
This combination of properties cannot be realized by any three-dimensional object in ordinary Euclidean space. Such an object can exist in certain Euclidean 3-manifolds. A surface with the same geodesic distances as the depicted surface of the tribar, though without its flat shape and right angles being preserved, can also exist in 5-dimensional Euclidean space, which is the lowest-dimensional Euclidean space within which this surface can be isometrically embedded. There also exist three-dimensional solid shapes each of which, when viewed from a certain angle, appears the same as the 2-dimensional depiction of the Penrose triangle on this page (such as – for example – the adjacent image depicting a sculpture in Perth, Australia). The term "Penrose Triangle" can refer to the 2-dimensional depiction or the impossible object itself.
If a line is traced around the Penrose triangle, a 4-loop Möbius strip is formed.
Depictions
M.C. Escher's lithograph Waterfall (1961) depicts a watercourse that flows in a zigzag along the long sides of two elongated Penrose triangles, so that it ends up two stories higher than it began. The resulting waterfall, forming the short sides of both triangles, drives a water wheel. Escher points out that in order to keep the wheel turning, some water must occasionally be added to compensate for evaporation. A third Penrose triangle lies between the other two, formed by two segments of waterway and a support tower.
Sculptures
See also
Impossible trident
Shepard elephant
Penrose stairs
References
External links
An article about impossible triangle sculpture in Perth
Escher for Real constructions
Topology
Impossible objects
Triangles
1934 introductions | Penrose triangle | [
"Physics",
"Mathematics"
] | 563 | [
"Spacetime",
"Topology",
"Space",
"Geometry"
] |
56,109 | https://en.wikipedia.org/wiki/Brown%20rat | The brown rat (Rattus norvegicus), also known as the common rat, street rat, sewer rat, wharf rat, Hanover rat, Norway rat and Norwegian rat, is a widespread species of common rat. One of the largest muroids, it is a brown or grey rodent with a body length of up to long, and a tail slightly shorter than that. It weighs between . Thought to have originated in northern China and neighbouring areas, this rodent has now spread to all continents except Antarctica, and is the dominant rat in Europe and much of North America. With rare exceptions, the brown rat lives wherever humans live, particularly in urban areas.
Selective breeding of the brown rat has produced the fancy rat (rats kept as pets), as well as the laboratory rat (rats used as model organisms in biological research). Both fancy rats and laboratory rats are of the domesticated subspecies Rattus norvegicus domestica. Studies of wild rats in New York City have shown that populations living in different neighborhoods can evolve distinct genomic profiles over time, by slowly accruing different traits.
Naming and etymology
The brown rat was originally called the "Hanover rat" by people wishing to link problems in 18th-century England with the House of Hanover. It is not known for certain why the brown rat is named Rattus norvegicus (Norwegian rat), as it did not originate from Norway. However, the English naturalist John Berkenhout, author of the 1769 book Outlines of the Natural History of Great Britain, is most likely responsible for popularizing the misnomer. Berkenhout gave the brown rat the binomial name Rattus norvegicus, believing it had migrated to England from Norwegian ships in 1728.
By the early to the middle part of the 19th century, British academics believed that the brown rat was not native to Norway, hypothesizing (incorrectly) that it may have come from Ireland, Gibraltar or across the English Channel with William the Conqueror. As early as 1850, however, a new hypothesis of the rat's origins was beginning to develop. The British novelist Charles Dickens acknowledged this in his weekly journal, All the Year Round, writing:
It is frequently called, in books and otherwise, the 'Norway rat', and it is said to have been imported into this country in a ship-load of timber from Norway. Against this hypothesis stands the fact that when the brown rat had become common in this country, it was unknown in Norway, although there was a small animal like a rat, but really a lemming, which made its home there.
Academics began to prefer this etymology of the brown rat towards the end of the 19th century, as seen in the 1895 text Natural History by American scholar Alfred Henry Miles:
The brown rat is the species common in England, and best known throughout the world. It is said to have travelled from Persia to England less than two hundred years ago and to have spread from thence to other countries visited by English ships.
Though the assumptions surrounding this species' origins were not yet the same as modern ones, by the 20th century, it was believed among naturalists that the brown rat did not originate in Norway, rather the species came from central Asia and (likely) China.
Description
The fur is usually brown or dark grey, while the underparts are lighter grey or brown. The brown rat is a rather large murid and can weigh twice as much as a black rat (Rattus rattus) and many times more than a house mouse (Mus musculus). The head and body length ranges from while the tail ranges in length from , therefore being shorter than the head and body. Adult weight ranges from . Large individuals can reach but are not expected outside of domestic specimens. Stories of rats attaining sizes as big as cats are exaggerations, or misidentifications of larger rodents, such as the coypu and muskrat. It is common for breeding wild brown rats to weigh (sometimes considerably) less than . The heaviest live Rattus norvegicus on record is and they can reach a maximum length of .
Brown rats have acute hearing, are sensitive to ultrasound, and possess a very highly developed olfactory sense. Their average heart rate is 300 to 400 beats per minute, with a respiratory rate of around 100 per minute. The vision of a pigmented rat is poor, around 20/600, while a non-pigmented (albino) rat, with no melanin in its eyes, has vision of around 20/1200 together with considerable scattering of light within its vision. Brown rats are dichromats which perceive colors rather like a human with red-green colorblindness, and their colour saturation may be quite faint. Their blue perception, however, also has UV receptors, allowing them to see ultraviolet lights that humans and some other species cannot.
Biology and behavior
The brown rat is nocturnal and is a good swimmer, both on the surface and underwater, and has been observed climbing slim round metal poles several feet in order to reach garden bird feeders. Brown rats dig well, and often excavate extensive burrow systems. A 2007 study found brown rats to possess metacognition, a mental ability previously only found in humans and some other primates, but further analysis suggested they may have been following simple operant conditioning principles.
Communication
Brown rats are capable of producing ultrasonic vocalizations. As pups, young rats use different types of ultrasonic cries to elicit and direct maternal search behavior, as well as to regulate their mother's movements in the nest. Although pups produce ultrasounds around any other rats at the age of 7 days, by 14 days old they significantly reduce ultrasound production around male rats as a defensive response. Adult rats will emit ultrasonic vocalizations in response to predators or perceived danger; the frequency and duration of such cries depend on the sex and reproductive status of the rat. Female rats also emit ultrasonic vocalizations during mating.
Rats may also emit short, high frequency, ultrasonic, socially induced vocalization during rough and tumble play, before receiving morphine, or mating, and when tickled. The vocalization, described as a distinct "chirping", has been likened to laughter, and is interpreted as an expectation of something rewarding. Like most rat vocalizations, the chirping is too high in pitch for humans to hear without special equipment. Bat detectors are often used by pet owners for this purpose.
In research studies, the chirping is associated with positive emotional feelings, and social bonding occurs with the tickler, resulting in the rats becoming conditioned to seek the tickling. However, as the rats age, the tendency to chirp appears to decline.
Brown rats also produce communicative noises capable of being heard by humans. The most commonly heard in domestic rats is bruxing, or teeth-grinding, which is most usually triggered by happiness, but can also be 'self-comforting' in stressful situations, such as a visit to the vet. The noise is best described as either a quick clicking or 'burring' sound, varying from animal to animal. Vigorous bruxing can be accompanied by boggling, where the eyes of the rat rapidly bulge and retract due to movement of the lower jaw muscles behind the eye socket.
In addition, they commonly squeak along a range of tones from high, abrupt pain squeaks to soft, persistent 'singing' sounds during confrontations.
Diet
The brown rat is a true omnivore and consumes almost anything, but cereals form a substantial part of its diet. The most-liked foods of brown rats include scrambled eggs, raw carrots, and cooked corn kernels. The least-liked foods are raw beets, peaches and raw celery.
Foraging behavior is often population-specific, and varies by environment and food source. Brown rats living near a hatchery in West Virginia catch fingerling fish.
Some colonies along the banks of the Po River in Italy dive for mollusks, a practice demonstrating social learning among members of this species. Rats on the island of Norderoog in the North Sea stalk and kill sparrows and ducks.
Also preyed upon by brown rats are chicks, mice and small lizards. Examination of wild brown rat stomachs in Germany revealed 4,000 food items, most of which were plants, although studies have shown that brown rats prefer meat when given the option. In metropolitan areas, they survive mainly on discarded human food and anything else that can be eaten without negative consequences.
Reproduction and life cycle
The brown rat can breed throughout the year if conditions are suitable, with a female producing up to five litters a year. The gestation period is only 21 days, and litters can number up to 14, although seven is common. They reach sexual maturity in about five weeks. Under ideal conditions (for the rat), this means that the population of females could increase by a factor of three and a half (half a litter of 7) in 8 weeks (5 weeks for sexual maturity and 3 weeks of gestation), corresponding to a population growing by a factor of 10 in just 15 weeks. As a result, the population can grow from 2 to 15,000 in a year. The maximum life span is three years, although most barely manage one. A yearly mortality rate of 95% is estimated, with predators and interspecies conflict as major causes.
When lactating, female rats display a 24-hour rhythm of maternal behavior, and will usually spend more time attending to smaller litters than large ones.
Brown rats live in large, hierarchical groups, either in burrows or subsurface places, such as sewers and cellars. When food is in short supply, the rats lower in social order are the first to die. If a large fraction of a rat population is exterminated, the remaining rats will increase their reproductive rate, and quickly restore the old population level.
The female is capable of becoming pregnant immediately after giving birth, and can nurse one litter while pregnant with another. She is able to produce and raise two healthy litters of normal size and weight without significantly changing her own food intake. However, when food is restricted, she can extend pregnancy by over two weeks, and give birth to litters of normal number and weight.
Mating behaviors
Males can ejaculate multiple times in a row, and this increases the likelihood of pregnancy as well as decreases the number of stillborns. Multiple ejaculation also means that males can mate with multiple females, and they exhibit more ejaculatory series when there are several oestrous females present. Males also copulate at shorter intervals than females. In group mating, females often switch partners.
Dominant males have higher mating success and also provide females with more ejaculate, and females are more likely to use the sperm of dominant males for fertilization.
In mating, female rats show a clear mating preference for unknown males versus males that they have already mated with (also known as the Coolidge effect), and will often resume copulatory behavior when introduced to a novel sexual partner.
Females also prefer to mate with males who have not experienced social stress during adolescence, and can determine which males were stressed even without any observed difference in sexual performance of males experiencing stress during adolescence and not.
Social behavior
Rats commonly groom each other and sleep together. Rats are said to establish an order of hierarchy, so one rat will be dominant over another one. Groups of rats tend to "play fight", which can involve any combination of jumping, chasing, tumbling, and "boxing". Play fighting involves rats going for each other's necks, while serious fighting involves strikes at the others' back ends. If living space becomes limited, rats may turn to aggressive behavior, which may result in the death of some animals, reducing the burden over the living space.
Rats, like most mammals, also form family groups of a mother and her young. This applies to both groups of males and females. However, rats are territorial animals, meaning that they usually act aggressively towards, or are scared of, strange rats. Rats will fluff up their hair, hiss, squeal, and move their tails around when defending their territory. Rats will chase each other, groom each other, sleep in group nests, wrestle with each other, have dominance squabbles, communicate, and play in various other ways with each other. Huddling is an additional important part of rat socialization. Huddling, an extreme form of herding, like chattering or "bruxing", is often used to communicate that they are feeling threatened and that others should not come near. The common rat has been successful at inhabiting and building communities on six continents and is second only to humans in the amount of land it has occupied.
During the wintry months, rats will huddle into piles – usually cheek-to-cheek – to control humidity and keep the air warm as a heat-conserving function. Just like elderly rats are commonly groomed and nursed by their companions, nestling rats especially depend on heat from their mother, since they cannot regulate their own temperature. Other forms of interaction include: crawling under, which is literally the act of crawling underneath one another (this is common when the rat is feeling ill and helps them breathe); walking over to find a space next to their closest friend, also explained in the name; allo-grooming, so-called to distinguish it from self-grooming; and nosing, where a rat gently pushes with its nose at another rat near the neck.
Burrowing
Rats are known to burrow extensively, both in the wild and in captivity, if given access to a suitable substrate. Rats generally begin a new burrow adjacent to an object or structure, as this provides a sturdy "roof" for the section of the burrow nearest to the ground's surface. Burrows usually develop to eventually include multiple levels of tunnels, as well as a secondary entrance. Older male rats will generally not burrow, while young males and females will burrow vigorously.
Burrows provide rats with shelter and food storage, as well as safe, thermo-regulated nest sites. Rats use their burrows to escape from perceived threats in the surrounding environment; for example, rats will retreat to their burrows following a sudden, loud noise or while fleeing an intruder. Burrowing can therefore be described as a "pre-encounter defensive behavior", as opposed to a "post-encounter defensive behavior", such as flight, freezing, or avoidance of a threatening stimulus.
Distribution and habitat
Possibly originating from the plains of northern China and Mongolia, the brown rat spread to other parts of the world sometime in the Middle Ages. The question of when brown rats became commensal with humans remains unsettled, but as a species, they have spread and established themselves along routes of human migration and now live almost everywhere humans are.
The brown rat may have been present in Europe as early as 1553, a conclusion drawn from an illustration and description by Swiss naturalist Conrad Gesner in his book Historiae animalium, published 1551–1558. Though Gesner's description could apply to the black rat, his mention of a large percentage of albino specimens—not uncommon among wild populations of brown rats—adds credibility to this conclusion. Reliable reports dating to the 18th century document the presence of the brown rat in Ireland in 1722, England in 1730, France in 1735, Germany in 1750, and Spain in 1800, becoming widespread during the Industrial Revolution. It did not reach North America until around 1750–1755.
As it spread from Asia, the brown rat generally displaced the black rat in areas where humans lived. In addition to being larger and more aggressive, the change from wooden structures and thatched roofs to bricked and tiled buildings favored the burrowing brown rats over the arboreal black rats. In addition, brown rats eat a wider variety of foods, and are more resistant to weather extremes.
In the absence of humans, brown rats prefer damp environments, such as river banks. However, the great majority are now linked to man-made environments, such as sewage systems.
It is often said that there are as many rats in cities as people, but this varies from area to area depending on climate, living conditions, etc. Brown rats in cities tend not to wander extensively, often staying within of their nest if a suitable concentrated food supply is available, but they will range more widely where food availability is lower. It is difficult to determine the extent of their home range because they do not utilize a whole area but rather use regular runways to get from one location to another. There is great debate over the size of the population of rats in New York City, with estimates from almost 100 million rats to as few as 250,000. Experts suggest that New York is a particularly attractive place for rats because of its aging infrastructure and high poverty rates. In 2023, the city appointed Kathleen Corradi as the first Rat Czar, a position created to address the city's rat population. The position focuses on instituting policy measures to curb the population, such as garbage regulation and additional rat trapping. In addition to sewers, rats are very comfortable living in alleyways and residential buildings, as there is usually a large and continuous food source in those areas.
In the United Kingdom, some figures show that the rat population has been rising, with estimates that 81 million rats reside in the UK. Those figures would mean that there are 1.3 rats per person in the country. High rat populations in the UK are often attributed to the mild climate, which allows them higher survival rates during the winter. With the increase in global temperature and glacier retreat, it is estimated that brown rat populations will see an increase.
In tropical and desert regions, brown rat occurrence tends to be limited to human-modified habitats. Contiguous rat-free areas in the world include the continent of Antarctica, the Arctic, some isolated islands, the Canadian province of Alberta, and certain conservation areas in New Zealand. Most of Australia apart from the eastern and south-eastern coastal areas does not have reports of substantial rat occurrences.
Antarctica is uninhabitable by rats. The Arctic has extremely cold winters that rats cannot survive outdoors, and the human population density is extremely low, making it difficult for rats to travel from one habitation to another, although they have arrived in many coastal areas by ship. When the occasional rat infestation is found and eliminated, the rats are unable to re-infest it from an adjacent one. Isolated islands are also able to eliminate rat populations because of low human population density and the geographic distance from other rat populations.
Rats as invasive species
Many parts of the world have been populated by rats secondarily, where rats are now important invasive species that compete with and threaten local fauna. For instance, Norway rats reached North America between 1750 and 1775 and even in the early 20th century, from 1925 to 1927, 50% of ships entering the port of New York were rat infested.
Faroe Islands
The brown rat was first observed on the Faroe Islands in 1768. It is thought that the first individuals arrived on the southernmost island, Suðuroy, via the wreck of a Norwegian ship that had stranded on the Scottish Isle of Lewis on its way from Trondheim to Dublin. The wreck, carrying brown rats, drifted northwards until it reached the village of Hvalba. Dispersion afterwards appears to have been fast, including all of Suðuroy within a year. In 1769, they were observed in Tórshavn on the southern part of Streymoy, and a decade later, in the villages in the northern part of this island.
From here, they crossed the strait and occupied Eysturoy during the years 1776 to 1779. In 1779, they reached Vagar. Whether the rats dispersed from the already established population in Suðuroy, or they were brought to the Faroe Islands with other ships is unknown. The Northern islands were invaded by the brown rat more than 100 years later, after Norwegians built and operated a whaling station in the village of Hvannasund on Borðoy from 1898 to 1920. From there, the brown rat spread to the neighbouring islands of Viðoy and Kunoy. A recent genomic analysis reveals three independent introductions of the invasive brown rat to the Faroe Islands.
Today the brown rat is found on seven of the eighteen Faroese islands, and is common in and around human habitations as well as in the wild. Although the brown rat is now common on all of the largest Faroese islands, only sparse information on the population is available in the literature. An investigation for infection with the spirochaete Leptospira interrogans did not find any infected animals, suggesting that Leptospira prevalence rates on the Faroe Islands may be among the lowest recorded worldwide.
Alaska
Hawadax Island (formerly known as Rat Island) in Alaska is thought to have been the first island in the Aleutians to be invaded by Norway rats (the Brown rat) when a Japanese ship went aground in the 1780s. They had a devastating effect on the native bird life. An eradication program was started in 2007 and the island was declared rat-free in June 2009.
Alberta
Alberta is the largest rat-free populated area in the world. Rat invasions of Alberta were stopped and rats were eliminated by very aggressive government rat control measures, starting during the 1950s.
The only Rattus species that is capable of surviving the climate of Alberta is the brown rat, which can only survive in the prairie region of the province, and even then must overwinter in buildings. Although it is a major agricultural area, Alberta is far from any seaport and only a portion of its eastern boundary with Saskatchewan provides a favorable entry route for rats. Brown rats cannot survive in the wild boreal forest to the north, the Rocky Mountains to the west, nor can they safely cross the semiarid High Plains of Montana to the south. The first brown rat did not reach Alberta until 1950, and in 1951, the province launched a rat-control program that included shooting, poisoning, and gassing rats, and bulldozing or burning down some rat-infested buildings. The effort was backed by legislation that required every person and every municipality to destroy and prevent the establishment of designated pests. If they failed, the provincial government could carry out the necessary measures and charge the costs to the landowner or municipality.
In the first year of the rat control program, of arsenic trioxide were spread throughout 8,000 buildings on farms along the Saskatchewan border. However, in 1953 the much safer and more effective rodenticide warfarin was introduced to replace arsenic. Warfarin is an anticoagulant that was approved as a drug for human use in 1954 and is much safer to use near humans and other large animals than arsenic. By 1960, the number of rat infestations in Alberta had dropped to below 200 per year. In 2002, the province finally recorded its first year with zero rat infestations, and from 2002 to 2007 there were only two infestations found. After an infestation of rats in the Medicine Hat landfill was found in 2012, the province's rat-free status was questioned, but provincial government rat control specialists brought in excavating machinery, dug out, shot, and poisoned 147 rats in the landfill, and no live rats were found thereafter. In 2013, the number of rat infestations in Alberta dropped to zero again. Alberta defines an infestation as two or more rats found at the same location, since a single rat cannot reproduce. About a dozen single rats enter Alberta in an average year and are killed by provincial rat control specialists before they can reproduce.
Only zoos, universities, and research institutes are allowed to keep caged rats in Alberta, and possession of unlicensed rats, including fancy rats by anyone else is punishable by a penalty of up to C$5,000 or up to 60 days in jail.
The adjacent and similarly landlocked province of Saskatchewan initiated a rat control program in 1972, and has managed to reduce the number of rats in the province substantially, although they have not been eliminated. The Saskatchewan rat control program has considerably reduced the number of rats trying to enter Alberta.
New Zealand
First arriving before 1800 (perhaps on James Cook's vessels), brown rats pose a serious threat to many of New Zealand's native wildlife. Rat eradication programmes within New Zealand have led to rat-free zones on offshore islands and even on fenced "ecological islands" on the mainland. Before an eradication effort was launched in 2001, the sub-Antarctic Campbell Island had the highest population density of brown rats in the world.
Diseases
Similar to other rodents, brown rats may carry a number of pathogens, which can result in disease, including Weil's disease, rat bite fever, cryptosporidiosis, viral hemorrhagic fever, Q fever and hantavirus pulmonary syndrome. In the United Kingdom, brown rats are an important reservoir for Coxiella burnetii, the bacterium that causes Q fever, with seroprevalence for the bacteria found to be as high as 53% in some wild populations.
This species can also serve as a reservoir for Toxoplasma gondii, the parasite that causes toxoplasmosis, though the disease usually spreads from rats to humans when domestic cats feed on infected brown rats. The parasite has a long history with the brown rat, and there are indications that the parasite has evolved to alter an infected rat's perception to cat predation, making it more susceptible to predation and increasing the likelihood of transmission.
Surveys and specimens of brown rat populations throughout the world have shown this species is often associated with outbreaks of trichinosis, but the extent to which the brown rat is responsible in transmitting Trichinella larvae to humans and other synanthropic animals is at least somewhat debatable. Trichinella pseudospiralis, a parasite previously not considered to be a potential pathogen in humans or domestic animals, has been found to be pathogenic in humans and carried by brown rats.
They can also be responsible for transmitting Angiostrongylus larvae to humans by eating raw or undercooked snails, slugs, molluscs, crustaceans, water and/or vegetables contaminated with them.
Brown rats are sometimes mistakenly thought to be a major reservoir of bubonic plague, a possible cause of the Black Death. However, the bacterium responsible, Yersinia pestis, is commonly endemic in only a few rodent species and is usually transmitted zoonotically by rat fleas—common carrier rodents today include ground squirrels and wood rats. However, brown rats may suffer from plague, as can many nonrodent species, including dogs, cats, and humans. During investigations of the plague epidemic in San Francisco in 1907, >1% of collected rats were infected with Y. pestis. The original carrier for the plague-infected fleas thought to be responsible for the Black Death was the black rat, and it has been hypothesized that the displacement of black rats by brown rats led to the decline of bubonic plague. This theory has, however, been deprecated, as the dates of these displacements do not match the increases and decreases in plague outbreaks.
During the COVID-19 pandemic, one study of New York City sewer rats showed that 17 percent of the city's brown rat population had become infected with SARS-CoV-2.
In captivity
Uses in science
Selective breeding of white-marked rats rescued from being killed in a now-outlawed sport called rat baiting has produced the pink-eyed white laboratory rat. Like mice, these rats are frequently subjects of medical, psychological and other biological experiments, and constitute an important model organism. This is because they grow quickly to sexual maturity and are easy to keep and to breed in captivity. When modern biologists refer to "rats", they almost always mean Rattus norvegicus.
As pets
The brown rat is kept as a pet in many parts of the world. Australia, the United Kingdom, and the United States are just a few of the countries that have formed fancy rat associations similar in nature to the American Kennel Club, establishing standards, orchestrating events, and promoting responsible pet ownership.
The many different types of domesticated brown rats include variations in coat patterns, as well as the style of the coat, such as Hairless or Rex, and more recently developed variations in body size and structure, including dwarf and tailless fancy rats.
Working rats
A working rat is a rat trained for specific tasks as a working animal. In many cases, working rats are domesticated brown rats. However, other species, notably the Gambian pouched rat, have been trained to assist humans.
See also
References
Further reading
List of books and articles about rats
External links
Overviews
Rat behaviour and biology A detailed set of pages by biologist Anne Hanson
Norway (Brown) Rat Fact sheet including information on habits, habitat, threats and prevention tips
Rattus norvegicus at the University of Michigan Museum of Zoology
Life cycle data sheet for Rattus norvegicus written by biologist João Pedro de Magalhães
Rats and Mice: Overview Online version of the Merck veterinary manual
ARKive Still photos and videos
Rattus norvegicus genome and use as model animal
Nature: Rat Genome
Rat genome database
View the rat genome in Ensembl
Rattus
Mammals described in 1769
Animals that use echolocation
Cosmopolitan mammals
Laboratory rats
Stored-product pests
Taxa named by John Berkenhout | Brown rat | [
"Biology"
] | 6,050 | [
"Pests (organism)",
"Stored-product pests"
] |
56,110 | https://en.wikipedia.org/wiki/Impossible%20object | An impossible object (also known as an impossible figure or an undecidable figure) is a type of optical illusion that consists of a two-dimensional figure which is instantly and naturally understood as representing a projection of a three-dimensional object but cannot exist as a solid object. Impossible objects are of interest to psychologists, mathematicians and artists without falling entirely into any one discipline.
Notable examples
Notable impossible objects include:
Explanations
Impossible objects can be unsettling because of our natural desire to interpret 2D drawings as three-dimensional objects. This is why a drawing of a Necker cube would most likely be seen as a cube, rather than "two squares connected with diagonal lines, a square surrounded by irregular planar figures, or any other planar figure". Looking at different parts of an impossible object makes one reassess the 3D nature of the object, which confuses the mind.
In most cases the impossibility becomes apparent after viewing the figure for a few seconds. However, the initial impression of a 3D object remains even after it has been contradicted. There are also more subtle examples of impossible objects where the impossibility does not become apparent spontaneously and it is necessary to consciously examine the geometry of the implied object to determine that it is impossible.
Roger Penrose wrote about describing and defining impossible objects mathematically using the algebraic topology concept of cohomology.
History
An early example of an impossible object comes from Apolinère Enameled, a 1916 advertisement painted by Marcel Duchamp. It depicts a girl painting a bed-frame with white enamelled paint, and deliberately includes conflicting perspective lines, to produce an impossible object. To emphasise the deliberate impossibility of the shape, a piece of the frame is missing.
Swedish artist Oscar Reutersvärd was one of the first to deliberately design many impossible objects. He has been called "the father of impossible figures". In 1934, he drew the Penrose triangle, some years before the Penroses. In Reutersvärd's version, the sides of the triangle are broken up into cubes.
In 1956, British psychiatrist Lionel Penrose and his son, mathematician Roger Penrose, submitted a short article to the British Journal of Psychology titled "Impossible Objects: A Special Type of Visual Illusion". This was illustrated with the Penrose triangle and Penrose stairs. The article referred to Escher, whose work had sparked their interest in the subject, but not Reutersvärd, of whom they were unaware. The article was published in 1958.
From the 1930s onwards, Dutch artist M. C. Escher produced many drawings featuring paradoxes of perspective gradually working towards impossible objects. In 1957, he produced his first drawing containing a true impossible object: Cube with Magic Ribbons. He produced many further drawings featuring impossible objects, sometimes with the entire drawing being an impossible object. Waterfall and Belvedere are good examples of impossible constructions. His work did much to draw the attention of the public to impossible objects.
Some contemporary artists are also experimenting with impossible figures, for example, Jos de Mey, Shigeo Fukuda, Sandro del Prete, István Orosz (Utisz), Guido Moretti, Tamás F. Farkas, Mathieu Hamaekers, and Kokichi Sugihara.
Constructed impossible objects
Although possible to represent in two dimensions, it is not geometrically possible for such an object to exist in the physical world. However, some models of impossible objects have been constructed, such that when they are viewed from a very specific point, the illusion is maintained. Rotating the object or changing the viewpoint breaks the illusion, and therefore many of these models rely on forced perspective or having parts of the model appearing to be further or closer than they actually are.
The notion of an "interactive impossible object" is an impossible object that can be viewed from any angle without breaking the illusion.
See also
References
Further reading
Bower, Gordon H. (editor), (1990). Psychology of Learning & Motivation. Academic Press. Volume 26. p. 107.
Mathematical Circus, Martin Gardner 1979 (Chapter 1 – Optical Illusions)
Optical Illusions, Bruno Ernst 2006
External links
Impossible World
The M.C. Escher Project
Art of Reutersvard
"Escher for Real" (3D objects)
Inconsistent Images
Echochrome, a video game that incorporates impossible objects into its gameplay
Optical illusions | Impossible object | [
"Physics"
] | 880 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
56,112 | https://en.wikipedia.org/wiki/Necker%20cube | The Necker cube is an optical illusion that was first published as a rhomboid in 1832 by Swiss crystallographer Louis Albert Necker. It is a simple wire-frame, two dimensional drawing of a cube with no visual cues as to its orientation, so it can be interpreted to have either the lower-left or the upper-right square as its front side.
Ambiguity
The Necker cube is an ambiguous drawing.
Each part of the picture is ambiguous by itself, yet the human visual system picks an interpretation of each part that makes the whole consistent. The Necker cube is sometimes used to test computer models of the human visual system to see whether they can arrive at consistent interpretations of the image the same way humans do.
Humans do not usually see an inconsistent interpretation of the cube. A cube whose edges cross in an inconsistent way is an example of an impossible object, specifically an impossible cube.
With the cube on the left, most people see the lower-left face as being in front most of the time. This is possibly because people view objects from above, with the top side visible, far more often than from below, with the bottom visible, so the brain "prefers" the interpretation that the cube is viewed from above.
There is evidence that by focusing on different parts of the figure, one can force a more stable perception of the cube. The intersection of the two faces that are parallel to the observer forms a rectangle, and the lines that converge on the square form a "y-junction" at the two diagonally opposite sides. If an observer focuses on the upper "y-junction" the lower left face will appear to be in front. The upper right face will appear to be in front if the eyes focus on the lower junction. Blinking while holding the second perception will probably cause a switch to the first one.
It is possible to cause the switch to occur by focusing on different parts of the cube. If one sees the first interpretation on the right it is possible to cause a switch to the second by focusing on the base of the cube until the switch occurs to the second interpretation. Similarly, if one is viewing the second interpretation, focusing on the left side of the cube may cause a switch to the first.
The Necker cube has shed light on the human visual system. The phenomenon has served as evidence of the human brain being a neural network with two distinct equally possible interchangeable stable states. Sidney Bradford, blind from the age of ten months but regaining his sight following an operation at age 52, did not perceive the ambiguity that normal-sighted observers do, but rather perceived only a flat image.
During the 1970s, undergraduates in the Psychology Department of City University, London, were given assignments measuring their introversion–extroversion orientations by the time it took them to switch between the front and back perceptions of the Necker cube.
Apparent viewpoint
The orientation of the Necker cube can also be altered by shifting the observer's point of view. When the cube is perceived as seen from above, one face tends to be seen as closer; in contrast, when it is perceived as seen from below, a different face comes to the fore.
References in popular culture
The Necker cube is discussed to such extent in Robert J. Sawyer's 1998 science fiction novel Factoring Humanity that "Necker" becomes a verb, meaning to impel one's brain to switch from one perspective or perception to another.
The Necker cube is used to illustrate how vampires in Peter Watts' science fiction novels Blindsight (2006) and Echopraxia (2014) have superior pattern recognition skills. One of the pieces of evidence is that vampires can see both interpretations of the Necker Cube simultaneously, which sets them apart from baseline humanity.
See also
Ambigram
Binocular rivalry
Crow T. Robot
Multistable perception
Pareidolia
Rhombille tiling
Schroeder stairs
Spinning Dancer
References
Citations
External links
History of the cube and a Java applet
1832 introductions
Optical illusions
Impossible objects
Cubes | Necker cube | [
"Physics"
] | 818 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
56,114 | https://en.wikipedia.org/wiki/Urbanization | Urbanization (or urbanisation in British English) is the population shift from rural to urban areas, the corresponding decrease in the proportion of people living in rural areas, and the ways in which societies adapt to this change. It can also mean population growth in urban areas instead of rural ones. It is predominantly the process by which towns and cities are formed and become larger as more people begin living and working in central areas.
Although the two concepts are sometimes used interchangeably, urbanization should be distinguished from urban growth. Urbanization refers to the proportion of the total national population living in areas classified as urban, whereas urban growth strictly refers to the absolute number of people living in those areas. It is predicted that by 2050 about 64% of the developing world and 86% of the developed world will be urbanized. This is predicted to generate artificial scarcities of land and shortages of drinking water, playgrounds, and other resources for most urban dwellers. The predicted urban population growth is equivalent to approximately 3 billion urbanites by 2050, much of which will occur in Africa and Asia. Notably, the United Nations has also recently projected that nearly all global population growth from 2017 to 2030 will be absorbed by cities, with about 1.1 billion new urbanites over the next 10 years. In the long term, urbanization is expected to significantly impact the quality of life in negative ways.
Urbanization is relevant to a range of disciplines, including urban planning, geography, sociology, architecture, economics, education, statistics, and public health. The phenomenon has been closely linked to globalization, modernization, industrialization, marketization, administrative/institutional power, and the sociological process of rationalization. Urbanization can be seen as a specific condition at a set time (e.g. the proportion of total population or area in cities or towns), or as an increase in that condition over time. Therefore, urbanization can be quantified either in terms of the level of urban development relative to the overall population, or as the rate at which the urban proportion of the population is increasing. Urbanization creates enormous social, economic and environmental challenges, which provide an opportunity for sustainability with the "potential to use resources much less or more efficiently, to create more sustainable land use and to protect the biodiversity of natural ecosystems." However, current urbanization trends have shown that massive urbanization has led to unsustainable ways of living. Developing urban resilience and urban sustainability in the face of increased urbanization is at the centre of international policy in Sustainable Development Goal 11 "Sustainable cities and communities."
Urbanization is not merely a modern phenomenon, but a rapid and historic transformation of human social roots on a global scale, whereby predominantly rural culture is being rapidly replaced by predominantly urban culture. The first major change in settlement patterns was the accumulation of hunter-gatherers into villages many thousands of years ago. Village culture is characterized by common bloodlines, intimate relationships, and communal behaviour, whereas urban culture is characterized by distant bloodlines, unfamiliar relations, and competitive behaviour. This unprecedented movement of people is forecast to continue and intensify during the next few decades, mushrooming cities to sizes unthinkable only a century ago. As a result, the world urban population growth curve has until recently followed a quadratic-hyperbolic pattern.
History
From the development of the earliest cities in Indus valley civilization, Mesopotamia and Egypt until the 18th century, an equilibrium existed between the vast majority of the population who were engaged in subsistence agriculture in a rural context, and small centres of populations in the towns where economic activity consisted primarily of trade at markets and manufactures on a small scale. Due to the primitive and relatively stagnant state of agriculture throughout this period, the ratio of rural to urban population remained at a fixed equilibrium. However, a significant increase in the percentage of the global urban population can be traced in the 1st millennium BCE.
With the onset of the British Agricultural Revolution and Industrial Revolution in the late 18th century, this relationship was finally broken and an unprecedented growth in urban population took place over the course of the 19th century, both through continued migration from the countryside and due to the tremendous demographic expansion that occurred at that time. In England and Wales, the proportion of the population living in cities with more than 20,000 people jumped from 17% in 1801 to 54% in 1891. Moreover, and adopting a broader definition of urbanization, while the urbanized population in England and Wales represented 72% of the total in 1891, for other countries the figure was 37% in France, 41% in Prussia and 28% in the United States.
As labourers were freed up from working the land due to higher agricultural productivity they converged on the new industrial cities like Manchester and Birmingham which were experiencing a boom in commerce, trade, and industry. Growing trade around the world also allowed cereals to be imported from North America and refrigerated meat from Australasia and South America. Spatially, cities also expanded due to the development of public transport systems, which facilitated commutes of longer distances to the city centre for the working class.
Urbanization rapidly spread across the Western world and, since the 1950s, it has begun to take hold in the developing world as well. At the turn of the 20th century, just 15% of the world population lived in cities. According to the UN, the year 2007 witnessed the turning point when more than 50% of the world population were living in cities, for the first time in human history.
In June 2016, Yale University published urbanization data covering the period from 3700 BC to 2000 AD; the data was used to make a video showing the development of cities around the world over that span. The origins and spread of urban centres around the world were also mapped by archaeologists.
Causes
Urbanization occurs either organically or planned as a result of individual, collective and state action. Living in a city can be culturally and economically beneficial since it can provide greater opportunities for access to the labour market, better education, housing, and safety conditions, and reduce the time and expense of commuting and transportation. Conditions like density, proximity, diversity, and marketplace competition are elements of an urban environment that are deemed beneficial. However, there are also harmful social phenomena that arise: alienation, stress, increased cost of living, and mass marginalization that are connected to an urban way of living. Suburbanization, which is happening in the cities of the largest developing countries, may be regarded as an attempt to balance these harmful aspects of urban life while still allowing access to the large extent of shared resources.
In cities, money, services, wealth and opportunities are centralized. Many rural inhabitants come to the city to seek their fortune and alter their social position. Businesses, which provide jobs and exchange capital, are more concentrated in urban areas. Whether the source is trade or tourism, it is also through the ports or banking systems, commonly located in cities, that foreign money flows into a country.
Many people move into cities for economic opportunities, but this does not fully explain the very high recent urbanization rates in places like China and India. Rural flight is a contributing factor to urbanization. In rural areas, often on small family farms or collective farms in villages, it has historically been difficult to access manufactured goods, though the relative overall quality of life is very subjective, and may certainly surpass that of the city. Farm living has always been susceptible to unpredictable environmental conditions, and in times of drought, flood or pestilence, survival may become extremely problematic.
In a New York Times article concerning the acute migration away from farming in Thailand, life as a farmer was described as "hot and exhausting". "Everyone says the farmer works the hardest but gets the least amount of money". In an effort to counter this impression, the Agriculture Department of Thailand is seeking to promote the impression that farming is "honorable and secure".
However, in Thailand, urbanization has also resulted in massive increases in problems such as obesity. Shifting from a rural environment to an urbanized community also caused a transition from a diet that was mainly carbohydrate-based to one higher in fat and sugar, consequently causing a rise in obesity. City life, especially in the modern urban slums of the developing world, is hardly immune to pestilence or climatic disturbances such as floods, yet continues to strongly attract migrants; examples include the 2011 Thailand floods and the 2007 Jakarta flood. Urban areas are also far more prone to violence, drugs, and other urban social problems. In the United States, the industrialization of agriculture has negatively affected the economy of small and middle-sized farms and strongly reduced the size of the rural labour market.
Particularly in the developing world, conflict over land rights due to the effects of globalization has led to less politically powerful groups, such as farmers, losing or forfeiting their land, resulting in obligatory migration into cities. In China, where land acquisition measures are forceful, there has been far more extensive and rapid urbanization (54%) than in India (36%), where peasants form militant groups (e.g. Naxalites) to oppose such efforts. Obligatory and unplanned migration often results in the rapid growth of slums.
A similar dynamic occurs in areas of violent conflict, where people are driven off their land by violence.
Cities offer a larger variety of services, including specialist services not found in rural areas. These services require workers, resulting in more numerous and varied job opportunities. Elderly people may be forced to move to cities where there are doctors and hospitals that can cater to their health needs. Varied and high-quality educational opportunities are another factor in urban migration, as well as the opportunity to join, develop, and seek out social communities.
Urbanization also creates opportunities for women that are not available in rural areas. This creates a gender-related transformation where women are engaged in paid employment and have access to education. This may cause fertility to decline. However, women are sometimes still at a disadvantage due to their unequal position in the labour market, their inability to secure assets independently from male relatives and exposure to violence.
People in cities are more productive than in rural areas. An important question is whether this is due to agglomeration effects or whether cities simply attract those who are more productive. Urban geographers have shown that there exists a large productivity gain due to locating in dense agglomerations. It is thus possible that agents locate in cities in order to benefit from these agglomeration effects.
Dominant conurbation
The dominant conurbation(s) of a country can gain more benefit from the same things cities offer, attracting the rural population as well as urban and suburban populations from other cities. Dominant conurbations are quite often disproportionately large cities, but do not have to be. For instance, Greater Manila is a conurbation rather than a city: its total population of 20 million (over 20% of the national population) makes it a primate city, but Quezon City (2.7 million), the largest municipality in Greater Manila, and Manila (1.6 million), the capital, are ordinary cities. A conurbation's dominance can be measured by output, wealth, and especially population, each expressed as a percentage of the entire country's. Greater Seoul is one conurbation that dominates South Korea; it is home to 50% of the entire national population.
Though Greater Busan-Ulsan (15%, 8 million) and Greater Osaka (14%, 18 million) dominate their respective countries, their populations are moving to their even more dominant rivals, Seoul and Tokyo respectively.
Economic effects
As cities develop, costs often skyrocket, pricing the working class out of the market, including officials and employees of the local districts. For example, chapter 11 of Eric Hobsbawm's The Age of Revolution: 1789–1848 (published 1962 and 2005) states: "Urban development in our period was a gigantic process of class segregation, which pushed the new labouring poor into great morasses of misery outside the centres of government, business, and the newly specialized residential areas of the bourgeoisie. The almost universal European division into a 'good' west end and a 'poor' east end of large cities developed in this period." This division was probably caused by the prevailing south-west wind, which carried coal smoke and other airborne pollutants downwind, making the western edges of towns preferable to the eastern ones.
Similar problems now affect less developed countries, as the rapid development of cities makes inequality worse. The drive to grow quickly and be efficient can lead to less equitable urban development. Think tanks such as the Overseas Development Institute have proposed policies that encourage labour-intensive growth to make use of the migration of less skilled workers. One problem these migrant workers face is the growth of slums: in many cases, rural-urban unskilled migrant workers are attracted by economic opportunities in cities but cannot find jobs or afford housing in urban areas, and so have to live in slums.
Urban problems, along with developments in their facilities, are also fuelling suburbanization trends in less developed nations, though core cities in those nations tend to continue becoming ever denser. The development of cities is often viewed negatively, but there are benefits in cutting transport costs, creating new job opportunities, and providing education, housing, and transportation. Living in cities permits individuals and families to take advantage of their proximity to workplaces and diversity. While cities have more varied markets and goods than rural areas, congestion, the domination of one group, high overhead and rental costs, and the inconvenience of trips across them frequently combine to make marketplace competition harsher in cities than in rural areas.
In many developing countries where economies are growing, the growth is often erratic and based on a small number of industries. Youths in these nations lack access to financial and business advisory services, cannot obtain credit to start a business, and often lack entrepreneurial skills, so they cannot seize opportunities in these industries. Ensuring that adolescents have access to good schools and to the infrastructure needed to work in such industries is essential for promoting a fair society.
Environmental effects
In his 2009 book Whole Earth Discipline, Stewart Brand argues that the effects of urbanization are primarily positive for the environment. First, the birth rate of new urban dwellers falls immediately to replacement rate and keeps falling, reducing the environmental stresses caused by population growth. Secondly, emigration from rural areas reduces destructive subsistence farming techniques, such as improperly implemented slash-and-burn agriculture. Furthermore, urbanization can improve environmental quality through the superior facilities and standards available in urban areas as compared to rural areas. Lastly, urbanization can curb pollution emissions by encouraging innovation. Alex Steffen also speaks of the environmental benefits of increasing the urbanization level in "Carbon Zero: Imagining Cities that can save the planet".
However, existing infrastructure and city planning practices are not sustainable. In July 2013 a report issued by the United Nations Department of Economic and Social Affairs warned that with 2.4 billion more people by 2050, the amount of food produced will have to increase by 70%, straining food resources, especially in countries already facing food insecurity due to changing environmental conditions. The mix of changing environmental conditions and the growing population of urban regions, according to UN experts, will strain basic sanitation systems and health care, and potentially cause a humanitarian and environmental disaster.
Urban heat island
Urban heat islands have become a growing concern over the years. An urban heat island is formed when industrial and urban areas absorb and retain heat. Much of the solar energy reaching rural areas is used to evaporate water from plants and soil. In cities, there is less vegetation and exposed soil, so most of the sun's energy is instead absorbed by buildings and asphalt, leading to higher surface temperatures. Vehicles, factories, and heating and cooling units in factories and homes release even more heat. As a result, cities are often warmer than the areas around them. Urban heat islands also make the soil drier and able to absorb less carbon dioxide from emissions. A Qatar University study of land-surface temperatures in Doha in 2002, 2013 and 2023 found an annual increase of 0.65 °C.
Water quality
Urban runoff, polluted water created by rainfall on impervious surfaces, is a common effect of urbanization. Precipitation from rooftops, roads, parking lots and sidewalks flows to storm drains, instead of percolating into groundwater. The contaminated stormwater in the drains is typically untreated and flows to nearby streams, rivers or coastal bays.
Eutrophication in water bodies is another effect that large urban populations have on the environment. When rain falls on these large cities, it washes CO2 and other pollutants from the air down to the ground. These chemicals are then carried directly into rivers, streams, and oceans, degrading water quality and damaging the ecosystems within them.
Eutrophication is a process which causes low levels of oxygen in water and algal blooms that may harm aquatic life. Harmful algal blooms produce dangerous toxins. They thrive in nitrogen- and phosphorus-rich environments, which include oceans contaminated by the aforementioned chemicals. In these ideal conditions, they choke surface water, blocking sunlight and nutrients from other life forms. The overgrowth of algal blooms degrades water quality overall and disrupts the natural balance of aquatic ecosystems. Furthermore, as algal blooms die, CO2 is produced, making the ocean more acidic, a process called acidification.
The ocean's surface can absorb CO2 from the Earth's atmosphere as emissions increase with the rise in urban development. In fact, the ocean absorbs a quarter of the CO2 produced by humans. This helps to lessen the harmful effects of greenhouse gases, but it also makes the ocean more acidic. A drop in pH prevents the proper formation of calcium carbonate, which sea creatures need to build or maintain shells or skeletons. This is especially true for many species of molluscs and coral, although some species have been able to thrive in a more acidic environment.
Food waste
Rapid growth of communities creates new challenges in the developed world, and one such challenge is an increase in food waste, also known as urban food waste. Food waste is the disposal of food products that can no longer be used due to disuse, expiration, or spoilage. An increase in food waste can raise environmental concerns such as increased production of methane gas and the attraction of disease vectors. Landfills are the third-leading source of methane release, raising concerns about its impact on the ozone layer and on the health of individuals. The accumulation of food waste causes increased fermentation, which increases the risk of rodent and insect migration; an increase in the migration of disease vectors creates greater potential for disease to spread to humans.
Waste management systems vary on all scales from global to local and can also be influenced by lifestyle. Waste management was not a primary concern until after the Industrial Revolution. As urban areas continued to grow along with the human population, the proper management of solid waste became an apparent concern. To address these concerns, local governments sought the solutions with the lowest economic impacts, which meant implementing technical solutions at the very last stage of the process. Current waste management reflects these economically motivated solutions, such as incineration or unregulated landfills. Yet growing interest has emerged in addressing other stages of the consumption life cycle, from initial-stage reduction to heat recovery and the recycling of materials. For example, concerns over mass consumption and fast fashion have moved to the forefront of urban consumers' priorities. Aside from environmental concerns (e.g. climate change effects), other urban concerns for waste management are public health and land access.
Habitat fragmentation
Urbanization can have a large effect on biodiversity by causing a division of habitats and thereby alienation of species, a process known as habitat fragmentation. Habitat fragmentation does not destroy the habitat, as seen in habitat loss, but rather breaks it apart with features like roads and railways. This change may affect a species' ability to sustain life by separating it from the environment in which it can easily access food and find areas to hide from predation. With proper planning and management, fragmentation can be avoided by adding corridors that aid in connecting areas and allow for easier movement around urbanized regions.
Depending on various factors, such as the level of urbanization, either increases or decreases in "species richness" can be seen. This means that urbanization may be detrimental to one species while facilitating the growth of others. In housing and building development, vegetation is often completely removed at the outset to make construction easier and less expensive, thereby obliterating any native species in the area. Habitat fragmentation can filter out species with limited dispersal capacity: for example, aquatic insects are found to have lower species richness in urban landscapes, and the more urbanized the surroundings of a habitat, the fewer species can reach it. At other times, such as with birds, urbanization may allow for an increase in richness when organisms are able to adapt to the new environment. This can be seen in species that find food while scavenging in developed areas or in vegetation added after urbanization has occurred, e.g. trees planted in city areas.
Health and social effects
In the developing world, urbanization does not translate into a significant increase in life expectancy. Rapid urbanization has led to increased mortality from non-communicable diseases associated with lifestyle, including cancer and heart disease. Differences in mortality from contagious diseases vary depending on the particular disease and location.
Urban health levels are on average better in comparison to rural areas. However, residents in poor urban areas such as slums and informal settlements suffer "disproportionately from disease, injury, premature death, and the combination of ill-health and poverty entrenches disadvantage over time." Many of the urban poor have difficulty accessing health services due to their inability to pay for them; so they resort to less qualified and unregulated providers.
While urbanization is associated with improvements in public hygiene, sanitation and access to health care, it also entails changes in occupational, dietary, and exercise patterns. It can have mixed effects on health patterns, alleviating some problems, and accentuating others.
Nutrition
Traditionally, rural populations have tended to eat plant-based diets rich in grains, fruits and vegetables, and with low fat content. However, rural people migrating to urban areas often shift towards diets that rely more on processed foods characterized by a higher content of meat, sugars, refined grains and fats. Urban residents typically have reduced time available for at-home food preparation combined with increased disposable income, facilitating access to convenience foods and ready-to-eat meals.
One such effect is the formation of food deserts. Nearly 23.5 million people in the United States lack access to supermarkets within one mile of their home. Several studies suggest that long distances to a grocery store are associated with higher rates of obesity and other health disparities.
Food deserts in developed countries often correspond to areas with a high density of fast food chains and convenience stores that offer little to no fresh food. Urbanization has been shown to be associated with the consumption of less fresh fruits, vegetables, and whole grains and a higher consumption of processed foods and sugar-sweetened beverages. Poor access to healthy food and high intakes of fat, sugar and salt are associated with a greater risk for obesity, diabetes and related chronic disease. Overall, body mass index and cholesterol levels increase sharply with national income and the degree of urbanization.[40]
Food deserts in the United States are most commonly found in low-income and predominately African American neighbourhoods. One study on food deserts in Denver, Colorado found that, in addition to minorities, the affected neighbourhoods also had a high proportion of children and new births. In children, urbanization is associated with a lower risk of under-nutrition but a higher risk of being overweight.
Infections
Urbanization has also been linked to the spread of communicable diseases, which can spread more rapidly in the favourable environment created by more people living in a smaller area. Such diseases include respiratory and gastrointestinal infections. Other infections are vector-borne, requiring a vector to spread to humans; an example is dengue fever.
Asthma
Urbanization has also been associated with an increased risk of asthma. Throughout the world, as communities transition from rural to more urban societies, the number of people affected by asthma increases. The odds of reduced rates of hospitalization and death from asthma have decreased for children and young adults in urbanized municipalities in Brazil. This finding indicates that urbanization may have a negative impact on population health, particularly affecting people's susceptibility to asthma.
In low and middle income countries many factors contribute to the high numbers of people with asthma. Similar to areas in the United States with increasing urbanization, people living in growing cities in low income countries experience high exposure to air pollution, which increases the prevalence and severity of asthma among these populations. Links have been found between exposure to traffic-related air pollution and allergic diseases. Children living in poor, urban areas in the United States now have an increased risk of morbidity due to asthma in comparison to other low-income children in the United States. In addition, children with croup living in urban areas have higher hazard ratios for asthma than similar children living in rural areas. Researchers suggest that this difference in hazard ratios is due to the higher levels of air pollution and exposure to environmental allergens found in urban areas.
Exposure to elevated levels of ambient air pollutants such as nitrogen dioxide (NO2), carbon monoxide (CO), and particulate matter with a diameter of less than 2.5 micrometres (PM2.5) can cause DNA methylation of CpG sites in immune cells, which increases children's risk of developing asthma. Studies have shown a positive correlation between Foxp3 methylation and children's exposure to NO2, CO, and PM2.5. Furthermore, exposure to high levels of air pollution at any dose has been shown to have long-term effects on the Foxp3 region.
Despite the increase in access to health services that usually accompanies urbanization, the rise in population density negatively affects air quality, ultimately mitigating the positive value of health resources as more children and young adults develop asthma due to high pollution rates. However, urban planning, as well as emission control, can lessen the effects of traffic-related air pollution on allergic diseases such as asthma.
Crime
Historically, crime and urbanization have gone hand in hand. The simplest explanation is that areas with a higher population density offer a greater availability of goods; committing crimes in urbanized areas is also more feasible. Modernization has led to more crime as well, as modern media have raised awareness of the income gap between rich and poor, leading to feelings of deprivation which in turn can lead to crime. In some regions where urbanization happens in wealthier areas, a rise in property crime and a decrease in violent crime are seen.
Data show that there is an increase in crime in urbanized areas. Contributing factors include per capita income, income inequality, and overall population size; there is also a smaller association between the unemployment rate, police expenditures and crime. The presence of crime also has the ability to produce more crime: such areas have less social cohesion and therefore less social control. This is evident in the geographical distribution of crime; as most crime tends to cluster in city centres, the further from the centre of the city, the lower the occurrence of crime.
Migration is also a factor that can increase crime in urbanized areas. People from one area are displaced and forced to move into an urbanized society. Here they are in a new environment with new norms and social values. This can lead to less social cohesion and more crime.
Physical activity
Although urbanization tends to produce negative effects, one positive effect is an increase in physical activity in comparison to rural areas. Residents of rural areas and communities in the United States have higher rates of obesity and engage in less physical activity than urban residents. Rural residents consume a higher percentage of fat calories, are less likely to meet the guidelines for physical activity, and are more likely to be physically inactive. Among regions within the United States, the West has the lowest prevalence of physical inactivity and the South has the highest. Metropolitan and large urban areas across all regions have the highest prevalence of physical activity among residents.
Barriers such as geographic isolation, busy and unsafe roads, and social stigmas lead to decreased physical activity in rural environments. Higher speed limits on rural roads preclude bike lanes, sidewalks, footpaths, and shoulders along the roads. Less developed open spaces in rural areas, like parks and trails, suggest that walkability is lower there than in urban areas. Many residents in rural settings have to travel long distances to use exercise facilities, taking up too much of the day and deterring residents from using recreational facilities to obtain physical activity. Additionally, residents of rural communities travel further for work, decreasing the time that can be spent on leisure physical activity and significantly reducing the opportunity to partake in active transportation to work.
Neighbourhoods and communities with nearby fitness venues, a common feature of urbanization, have residents who partake in increased amounts of physical activity. Communities with sidewalks, street lights, and traffic signals have residents participating in more physical activity than communities without those features. Having a variety of destinations close to where people live increases the use of active transportation, such as walking and biking. Active transportation is also enhanced in urban communities with easy access to public transportation, as residents walk or bike to transportation stops.
In a study comparing different regions in the United States, the opinion was shared across all areas that environmental characteristics such as access to sidewalks, safe roads, recreational facilities, and enjoyable scenery are positively associated with participation in leisure physical activity. Perceiving that resources for physical activity are nearby increases the likelihood that residents of all communities will meet the guidelines and recommendations for appropriate physical activity. Specific to rural residents, the safety of outdoor developed spaces and the convenient availability of recreational facilities matter most when making decisions about increasing physical activity. To combat the levels of inactivity among rural residents, more convenient recreational features such as these need to be implemented in rural communities and societies.
Mental health
Urbanization factors that contribute to mental health can be thought of as factors that affect the individual and factors that affect the larger social group. At the macro, social-group level, changes related to urbanization are thought to contribute to social disintegration and disorganization. These macro factors contribute to social disparities, which affect individuals by creating perceived insecurity. Perceived insecurity can be due to problems with the physical environment, such as issues with personal safety, or problems with the social environment, such as a loss of positive self-concepts from negative events. Increased stress is a common individual psychological stressor that accompanies urbanization and is thought to be due to perceived insecurity. Changes in social organization, a consequence of urbanization, are thought to lead to reduced social support, increased violence, and overcrowding. It is these factors that are thought to contribute to increased stress.
A 2004 study of 4.4 million Swedish residents found that the incidence of psychosis and depression rose with increasing levels of urbanization.
Changing forms
Different forms of urbanization can be classified depending on the style of architecture and planning methods as well as the historic growth of areas.
In cities of the developed world urbanization traditionally exhibited a concentration of human activities and settlements around the downtown area, the so-called in-migration. In-migration refers to migration from former colonies and similar places. The fact that many immigrants settle in impoverished city centres led to the notion of the "peripheralization of the core", which simply describes that people who used to be at the periphery of the former empires now live right in the centre.
Recent developments, such as inner-city redevelopment schemes, mean that new arrivals in cities no longer necessarily settle in the centre. In some developed regions, the reverse effect, originally called counter-urbanization, has occurred, with cities losing population to rural areas; it is particularly common among richer families. This has been made possible by improved communications and has been driven by factors such as the fear of crime and poor urban environments. It has contributed to the phenomenon of shrinking cities experienced by some parts of the industrialized world.
Rural migrants are attracted by the possibilities that cities can offer, but often settle in shanty towns and experience extreme poverty. The inability of countries to provide adequate housing for these rural migrants is related to overurbanization, a phenomenon in which the rate of urbanization grows more rapidly than the rate of economic development, leading to high unemployment and high demand for resources. In the 1980s, attempts were made to tackle this through the urban bias theory promoted by Michael Lipton.
Most of the urban poor in developing countries able to find work can spend their lives in insecure, poorly paid jobs. According to research by the Overseas Development Institute, pro-poor urbanization will require labour-intensive growth, supported by labour protection, flexible land-use regulation and investments in basic services.
Suburbanization
When the residential area shifts outward, this is called suburbanization. A number of researchers and writers suggest that suburbanization has gone so far as to form new points of concentration outside the downtown, in both developed and developing countries such as India. This networked, poly-centric form of concentration is considered by some to be an emerging pattern of urbanization. It is called variously edge city (Garreau, 1991), network city (Batten, 1995), postmodern city (Dear, 2000), or exurb, though the latter term now refers to a less dense area beyond the suburbs. Los Angeles is the best-known example of this type of urbanization. In the United States, this process had reversed as of 2011, with "re-urbanization" occurring as chronically high transport costs drove flight from the suburbs.
Planned urbanization
Urbanization can be planned or organic. Planned urbanization, e.g. the planned community or the garden city movement, is based on an advance plan, which can be prepared for military, aesthetic, economic or urban-design reasons. Examples can be seen in many ancient cities; although with exploration came the collision of nations, which meant that many invaded cities took on the desired planned characteristics of their occupiers. Many ancient organic cities experienced redevelopment for military and economic purposes: new roads were carved through the cities, and new parcels of land were cordoned off for various planned purposes, giving cities distinctive geometric designs. UN agencies prefer to see urban infrastructure installed before urbanization occurs. Landscape planners are responsible for landscape infrastructure (public parks, sustainable urban drainage systems, greenways etc.), which can be planned before urbanization takes place, or afterwards to revitalize an area and create greater livability within a region. Concepts for controlling urban expansion are considered by the American Institute of Planners.
As the population continues to grow and urbanize at unprecedented rates, new urbanism and smart growth techniques are implemented to create a transition towards environmentally, economically, and socially sustainable cities. Additionally, a more well-rounded approach articulates the importance of promoting the participation of non-state actors, which can include businesses, research and non-profit organizations and, most importantly, local citizens. Smart Growth and New Urbanism principles include walkability, mixed-use development, comfortable high-density design, land conservation, social equity, and economic diversity. Mixed-use communities work to fight gentrification with affordable housing to promote social equity, decrease automobile dependency to lower the use of fossil fuels, and promote a localized economy. Walkable communities have a 38% higher average GDP per capita than less walkable urban metros (Leinberger, Lynch). By combining economic, environmental, and social sustainability, cities will become equitable, resilient, and more appealing than urban sprawl that overuses land, promotes automobile use, and segregates the population economically.
Water scarcity
Urbanization throughout the world
Presently, most countries in the world are urbanized, with the global urbanization average standing at 56.2% in 2020. However, there are great differences between regions: the nations of Europe, the Middle East, the Americas and East Asia are predominantly urbanized, while two large belts of countries with very low urbanization exist, one stretching from central to eastern Africa and the other from central to southeast Asia.
As of 2022, urbanization rates are over 80% in the United States, Canada, Mexico, Brazil, Argentina, Chile, Japan, Australia, the United Kingdom, France, Finland, Denmark, Israel, Spain and South Korea. South America is the most urbanized continent in the world, with more than 80% of its total population living in urban areas, and the only continent whose urbanization rate exceeds 80%.
See also
Historical
Neolithic Revolution
Oppidum
Polis
Urban revolution
Regional
Urbanization in Africa
Urbanization in China
Urbanization in India
Urbanization in Pakistan
Urbanization in the United States
References
Sources
Further reading
Bairoch, Paul. Cities and economic development: from the dawn of history to the present (U of Chicago Press, 1991). online review
Goldfield, David. ed. Encyclopedia of American Urban History (2 vol 2006); 1056pp; Excerpt and text search
Hoffmann, Ellen M., et al. "Is the push-pull paradigm useful to explain rural-urban migration? A case study in Uttarakhand, India." PloS one 14.4 (2019): e0214511. online
Lees, Andrew. The city: A world history (New Oxford World History, 2015), 160pp.
McShane, Clay. "The State of the Art in North American Urban History," Journal of Urban History (2006) 32#4 pp 582–597, identifies a loss of influence by such writers as Lewis Mumford, Robert Caro, and Sam Warner, a continuation of the emphasis on narrow, modern time periods, and a general decline in the importance of the field. Comments by Timothy Gilfoyle and Carl Abbott contest the latter conclusion.
External links
World Urbanization Prospects, the 2014 Revision, Website of the United Nations Population Division
Urbanization in Bulgaria
NASA Night Satellite Imagery – City lights can provide a simple, visual measure of urbanization
Geopolis: research group, University of Paris-Diderot, France
The Natural History of Urbanization, by Lewis Mumford
The World System urbanization dynamics, by Andrey Korotayev
Brief review of world socio-demographic trends includes a review of global urbanization trends
World Economic and Social Survey 2013, United Nations Department of Economic and Social Affairs.
Noetherian ring
In mathematics, a Noetherian ring is a ring that satisfies the ascending chain condition on left and right ideals; if the chain condition is satisfied only for left ideals or for right ideals, then the ring is said to be left-Noetherian or right-Noetherian respectively. That is, every increasing sequence $I_1 \subseteq I_2 \subseteq I_3 \subseteq \cdots$ of left (or right) ideals has a largest element; that is, there exists an $n$ such that $I_n = I_{n+1} = \cdots$.
Equivalently, a ring is left-Noetherian (respectively right-Noetherian) if every left ideal (respectively right ideal) is finitely generated. A ring is Noetherian if it is both left- and right-Noetherian.
Noetherian rings are fundamental in both commutative and noncommutative ring theory since many rings that are encountered in mathematics are Noetherian (in particular the ring of integers, polynomial rings, and rings of algebraic integers in number fields), and many general theorems on rings rely heavily on the Noetherian property (for example, the Lasker–Noether theorem and the Krull intersection theorem).
Noetherian rings are named after Emmy Noether, but the importance of the concept was recognized earlier by David Hilbert, with the proof of Hilbert's basis theorem (which asserts that polynomial rings are Noetherian) and Hilbert's syzygy theorem.
Characterizations
For noncommutative rings, it is necessary to distinguish between three very similar concepts:
A ring is left-Noetherian if it satisfies the ascending chain condition on left ideals.
A ring is right-Noetherian if it satisfies the ascending chain condition on right ideals.
A ring is Noetherian if it is both left- and right-Noetherian.
For commutative rings, all three concepts coincide, but in general they are different. There are rings that are left-Noetherian and not right-Noetherian, and vice versa.
There are other, equivalent, definitions for a ring R to be left-Noetherian:
Every left ideal I in R is finitely generated, i.e. there exist elements $a_1, \ldots, a_n$ in I such that $I = Ra_1 + \cdots + Ra_n$.
Every non-empty set of left ideals of R, partially ordered by inclusion, has a maximal element.
Similar results hold for right-Noetherian rings.
The following condition is also an equivalent condition for a ring R to be left-Noetherian and it is Hilbert's original formulation:
Given a sequence $f_1, f_2, \ldots$ of elements in R, there exists an integer $n$ such that each $f_i$ is a finite linear combination $f_i = r_1 f_1 + \cdots + r_n f_n$ with coefficients $r_j$ in R.
For a commutative ring to be Noetherian it suffices that every prime ideal of the ring is finitely generated. However, it is not enough to ask that all the maximal ideals are finitely generated, as there is a non-Noetherian local ring whose maximal ideal is principal (see a counterexample to Krull's intersection theorem at Local ring#Commutative case.)
Properties
If R is a Noetherian ring, then the polynomial ring $R[x]$ is Noetherian by Hilbert's basis theorem. By induction, $R[x_1, \ldots, x_n]$ is a Noetherian ring. Also, the power series ring $R[[x]]$ is a Noetherian ring.
If $R$ is a Noetherian ring and $I$ is a two-sided ideal, then the quotient ring $R/I$ is also Noetherian. Stated differently, the image of any surjective ring homomorphism of a Noetherian ring is Noetherian.
Every finitely-generated commutative algebra over a commutative Noetherian ring is Noetherian. (This follows from the two previous properties.)
A ring R is left-Noetherian if and only if every finitely generated left R-module is a Noetherian module.
If a commutative ring admits a faithful Noetherian module over it, then the ring is a Noetherian ring.
(Eakin–Nagata) If a ring A is a subring of a commutative Noetherian ring B such that B is a finitely generated module over A, then A is a Noetherian ring.
Similarly, if a ring A is a subring of a commutative Noetherian ring B such that B is faithfully flat over A (or more generally exhibits A as a pure subring), then A is a Noetherian ring (see the "faithfully flat" article for the reasoning).
Every localization of a commutative Noetherian ring is Noetherian.
A consequence of the Akizuki–Hopkins–Levitzki theorem is that every left Artinian ring is left Noetherian. Another consequence is that a left Artinian ring is right Noetherian if and only if it is right Artinian. The analogous statements with "right" and "left" interchanged are also true.
A left Noetherian ring is left coherent and a left Noetherian domain is a left Ore domain.
(Bass) A ring is (left/right) Noetherian if and only if every direct sum of injective (left/right) modules is injective. Every injective left module over a left Noetherian ring can be decomposed as a direct sum of indecomposable injective modules. See also #Implication on injective modules below.
In a commutative Noetherian ring, there are only finitely many minimal prime ideals. Also, the descending chain condition holds on prime ideals.
In a commutative Noetherian domain R, every element can be factorized into irreducible elements (in short, R is a factorization domain). Thus, if, in addition, the factorization is unique up to multiplication of the factors by units, then R is a unique factorization domain.
Examples
Any field, including the fields of rational numbers, real numbers, and complex numbers, is Noetherian. (A field only has two ideals — itself and (0).)
Any principal ideal ring, such as the integers, is Noetherian since every ideal is generated by a single element. This includes principal ideal domains and Euclidean domains.
A Dedekind domain (e.g., rings of integers) is a Noetherian domain in which every ideal is generated by at most two elements.
The coordinate ring of an affine variety is a Noetherian ring, as a consequence of the Hilbert basis theorem.
The enveloping algebra U of a finite-dimensional Lie algebra $\mathfrak{g}$ is both a left and right Noetherian ring; this follows from the fact that the associated graded ring of U is a quotient of the symmetric algebra $\operatorname{Sym}(\mathfrak{g})$, which is a polynomial ring over a field (the PBW theorem); thus, Noetherian. For the same reason, the Weyl algebra, and more general rings of differential operators, are Noetherian.
The ring of polynomials in finitely-many variables over the integers or a field is Noetherian.
Rings that are not Noetherian tend to be (in some sense) very large. Here are some examples of non-Noetherian rings:
The ring of polynomials in infinitely-many variables, X1, X2, X3, etc. The sequence of ideals (X1), (X1, X2), (X1, X2, X3), etc. is ascending, and does not terminate.
The ring of all algebraic integers is not Noetherian. For example, it contains the infinite ascending chain of principal ideals: $(2), (2^{1/2}), (2^{1/4}), (2^{1/8}), \ldots$
The ring of continuous functions from the real numbers to the real numbers is not Noetherian: Let In be the ideal of all continuous functions f such that f(x) = 0 for all x ≥ n. The sequence of ideals I0, I1, I2, etc., is an ascending chain that does not terminate.
The ring of stable homotopy groups of spheres is not Noetherian.
However, a non-Noetherian ring can be a subring of a Noetherian ring. Since any integral domain is a subring of a field, any integral domain that is not Noetherian provides an example. To give a less trivial example,
The ring of rational functions generated by $x$ and the elements $y/x^n$ (for all $n \geq 1$) over a field $k$ is a non-Noetherian subring of the field $k(x, y)$ of rational functions in only two variables.
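To make the failure of the chain condition in this subring concrete, here is a minimal sketch (added for illustration; the name $A$ for the subring is introduced here and is not from the source). Writing $A = k[x,\, y/x,\, y/x^2, \ldots] \subset k(x,y)$, one has the strictly ascending chain of ideals
$$(y) \subsetneq (y,\ y/x) \subsetneq (y,\ y/x,\ y/x^2) \subsetneq \cdots,$$
which never terminates: $y/x^{n+1}$ is not an $A$-linear combination of $y, y/x, \ldots, y/x^n$, since in any such combination the terms of degree one in $y$ have $x$-exponent at least $-n$.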
Indeed, there are rings that are right Noetherian, but not left Noetherian, so that one must be careful in measuring the "size" of a ring this way. For example, if L is a subgroup of $\mathbb{Q}^2$ isomorphic to $\mathbb{Z}$, let R be the ring of homomorphisms f from $\mathbb{Q}^2$ to itself satisfying $f(L) \subset L$. Choosing a basis, we can describe the same ring R as
$$R = \left\{ \begin{pmatrix} a & \beta \\ 0 & \gamma \end{pmatrix} : a \in \mathbb{Z},\ \beta, \gamma \in \mathbb{Q} \right\}.$$
This ring is right Noetherian, but not left Noetherian; the subset $I \subset R$ consisting of elements with $a = 0$ and $\gamma = 0$ is a left ideal that is not finitely generated as a left R-module.
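A quick way to see why this $I$ fails to be finitely generated as a left $R$-module (a standard verification sketched here, not spelled out in the source): left multiplication acts on the upper-right entry only through the integer component,
$$\begin{pmatrix} a & \beta \\ 0 & \gamma \end{pmatrix} \begin{pmatrix} 0 & q \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & aq \\ 0 & 0 \end{pmatrix},$$
so $I$ is a copy of $\mathbb{Q}$ regarded as a $\mathbb{Z}$-module, and $\mathbb{Q}$ is not finitely generated over $\mathbb{Z}$.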
If R is a commutative subring of a left Noetherian ring S, and S is finitely generated as a left R-module, then R is Noetherian. (In the special case when S is commutative, this is known as Eakin's theorem.) However, this is not true if R is not commutative: the ring R of the previous paragraph is a subring of the left Noetherian ring $S = \operatorname{Hom}(\mathbb{Q}^2, \mathbb{Q}^2)$, and S is finitely generated as a left R-module, but R is not left Noetherian.
A unique factorization domain is not necessarily a Noetherian ring. It does satisfy a weaker condition: the ascending chain condition on principal ideals. A ring of polynomials in infinitely-many variables is an example of a non-Noetherian unique factorization domain.
A valuation ring is not Noetherian unless it is a principal ideal domain. It gives an example of a ring that arises naturally in algebraic geometry but is not Noetherian.
Noetherian group rings
Consider the group ring $R[G]$ of a group $G$ over a ring $R$. It is a ring, and an associative algebra over $R$ if $R$ is commutative. For a group $G$ and a commutative ring $R$, the following two conditions are equivalent.
The ring $R[G]$ is left-Noetherian.
The ring $R[G]$ is right-Noetherian.
This is because there is a bijection between the left and right ideals of the group ring in this case, via the $R$-algebra anti-automorphism of $R[G]$ induced by $g \mapsto g^{-1}$.
Let $G$ be a group and $R$ a ring. If $R[G]$ is left/right/two-sided Noetherian, then $R$ is left/right/two-sided Noetherian and $G$ is a Noetherian group. Conversely, if $R$ is a Noetherian commutative ring and $G$ is an extension of a Noetherian solvable group (i.e. a polycyclic group) by a finite group, then $R[G]$ is two-sided Noetherian. On the other hand, there is a Noetherian group whose group ring over any Noetherian commutative ring is not two-sided Noetherian.
Key theorems
Many important theorems in ring theory (especially the theory of commutative rings) rely on the assumptions that the rings are Noetherian.
Commutative case
Over a commutative Noetherian ring, each ideal has a primary decomposition, meaning that it can be written as an intersection of finitely many primary ideals (whose radicals are all distinct), where an ideal Q is called primary if it is proper and whenever $xy \in Q$, either $x \in Q$ or $y^n \in Q$ for some positive integer $n$. For example, if an element factorizes as a product of powers of distinct prime elements, $x = p_1^{n_1} \cdots p_k^{n_k}$, then $(x) = (p_1^{n_1}) \cap \cdots \cap (p_k^{n_k})$, and thus the primary decomposition is a direct generalization of the prime factorization of integers and polynomials.
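A concrete instance in $\mathbb{Z}$ (a worked illustration added here, not part of the source text):
$$(12) = (2^2 \cdot 3) = (4) \cap (3),$$
where $(4)$ and $(3)$ are primary ideals with the distinct radicals $(2)$ and $(3)$, recovering the ordinary prime factorization $12 = 2^2 \cdot 3$.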
A Noetherian ring is defined in terms of ascending chains of ideals. The Artin–Rees lemma, on the other hand, gives some information about a descending chain of ideals given by the powers of a fixed ideal $I$, namely $I \supseteq I^2 \supseteq I^3 \supseteq \cdots$. It is a technical tool that is used to prove other key theorems such as the Krull intersection theorem.
The dimension theory of commutative rings behaves poorly over non-Noetherian rings; the very fundamental theorem, Krull's principal ideal theorem, already relies on the "Noetherian" assumption. Here, in fact, the "Noetherian" assumption is often not enough and (Noetherian) universally catenary rings, those satisfying a certain dimension-theoretic assumption, are often used instead. Noetherian rings appearing in applications are mostly universally catenary.
Non-commutative case
Goldie's theorem
Implication on injective modules
Given a ring, there is a close connection between the behaviors of injective modules over the ring and whether the ring is a Noetherian ring or not. Namely, given a ring R, the following are equivalent:
R is a left Noetherian ring.
(Bass) Each direct sum of injective left R-modules is injective.
Each injective left R-module is a direct sum of indecomposable injective modules.
(Faith–Walker) There exists a cardinal number such that each injective left module over R is a direct sum of -generated modules (a module is -generated if it has a generating set of cardinality at most ).
There exists a left R-module H such that every left R-module embeds into a direct sum of copies of H.
The endomorphism ring of an indecomposable injective module is local and thus Azumaya's theorem says that, over a left Noetherian ring, each indecomposable decomposition of an injective module is equivalent to one another (a variant of the Krull–Schmidt theorem).
See also
Noetherian scheme
Artinian ring
Notes
References
Atiyah, M. F., MacDonald, I. G. (1969). Introduction to commutative algebra. Addison-Wesley-Longman.
Chapter X of
External links
69230 Hermes
69230 Hermes is a sub-kilometer-sized asteroid and binary system on an eccentric orbit, classified as a potentially hazardous asteroid and near-Earth object of the Apollo group, that passed Earth at approximately twice the distance of the Moon on 30 October 1937. The asteroid was named after Hermes from Greek mythology. It is noted for having been the last remaining named lost asteroid, rediscovered in 2003. The S-type asteroid has a rotation period of 13.9 hours. Its synchronous companion was discovered in 2003. The primary and secondary are similar in size; they measure approximately and in diameter, respectively.
Discovery
Hermes was discovered by German astronomer Karl Reinmuth in images taken at Heidelberg Observatory on 28 October 1937. Only four days of observations could be made before it became too faint to be seen in the telescopes of the day. This was not enough to calculate an orbit, and Hermes became a lost asteroid. It thus did not receive a number, but Reinmuth nevertheless named it after the Greek god Hermes. It was the third unnumbered but named asteroid, having only the provisional designation 1937 UB. The two other long-lost named asteroids were (1862) Apollo, discovered in 1932 and numbered in 1973, and (2101) Adonis, discovered in 1936 and numbered in 1977.
On 15 October 2003, Brian A. Skiff of the LONEOS project made an asteroid observation that, when the orbit was calculated backwards in time (by Timothy B. Spahr, Steven Chesley and Paul Chodas), turned out to be a rediscovery of Hermes. It has been assigned sequential number 69230. Additional precovery observations were published by the Minor Planet Center, the earliest being found in images taken serendipitously by the MPG/ESO 2.2-m La Silla telescope on 16 September 2000.
Naming
This minor planet was named after the Greek god Hermes, the messenger of the gods and son of Zeus and Maia. Recovered and numbered in late 2003, Hermes had originally been named by the Astronomical Calculation Institute as early as 1937. The official naming citation was published by the Minor Planet Center on 9 November 2003.
Orbit and classification
Hermes is an Apollo asteroid, a subgroup of near-Earth asteroids that cross the orbit of Earth. It orbits the Sun at a distance of 0.6–2.7 AU once every 2 years and 2 months (778 days; semi-major axis of 1.66 AU). Its orbit has an eccentricity of 0.62 and an inclination of 6° with respect to the ecliptic. Due to its eccentricity, Hermes is also a Mars- and Venus-crosser. Frequent close approaches to both Earth and Venus make it unusually challenging to forecast its orbit more than a century in advance, though there is no impact risk within that timeframe.
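As a quick consistency check (an added calculation, not a figure from the source), Kepler's third law for a body orbiting the Sun gives the period in years from the semi-major axis in AU:
$$T = \sqrt{a^3} = \sqrt{1.66^3} \approx 2.14\ \text{yr} \approx 781\ \text{d},$$
in good agreement with the stated orbital period of 778 days.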
Close approaches
The asteroid has an Earth minimum orbital intersection distance of which translates into 1.6 LD. On 30 October 1937, Hermes passed from Earth, and on 26 April 1942, from Earth. In retrospect it turned out that Hermes came even closer to the Earth in 1942 than in 1937, within 1.7 lunar distances; the second pass was unobserved at the height of the Second World War. For decades, Hermes was known to have made the closest known approach of an asteroid to the Earth. Not until 1989 was a closer approach (by 4581 Asclepius) observed. At closest approach, Hermes was moving 5° per hour across the sky and reached 8th magnitude.
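For scale (a unit conversion added here, not a source figure), the stated 1.6 lunar distances corresponds to
$$1.6 \times 384{,}400\ \text{km} \approx 6.2 \times 10^5\ \text{km} \approx 0.0041\ \text{AU}.$$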
Physical characteristics
Spectral type
Hermes is a stony S-type asteroid, as reported by Andy Rivkin and Richard Binzel. It has been characterized as an Sq-subtype using the SpeX instrument at the NASA Infrared Telescope Facility. Sq-types are transitional to the Q-type asteroids.
Lightcurves
Three rotational lightcurves of Hermes were obtained from photometric observations in October 2003. Lightcurve analysis gave a well-defined rotation period between 13.892 and 13.894 hours with a brightness variation between 0.06 and 0.08 magnitude, which indicates that the body has a nearly spherical shape.
Binary system
Radar observations led by Jean-Luc Margot at Arecibo Observatory and Goldstone in October and November 2003 showed Hermes to be a binary asteroid. The primary and secondary components have nearly identical radii of and , respectively, and their orbital separation is only 1,200 metres, much smaller than the Hill radius of 35 km.
The two components are in double synchronous rotation (similar to the trans-Neptunian system Pluto and Charon). Hermes is one of only four systems of that kind known in the near-Earth object population. The other three are , , and .
In popular culture
In the 1978 novel The Hermes Fall by John Baxter, the asteroid endangers the Earth in 1980. It is not made explicit whether the Hermes asteroid of The Hermes Fall is 69230 Hermes.
Notes
References
External links
Arecibo 2003 press release
Hermes radar results at UCLA
Asteroids with Satellites, Robert Johnston, johnstonsarchive.net
Dictionary of Minor Planet Names, Google books
Gravestone
A gravestone or tombstone is a marker, usually stone, that is placed over a grave. A marker set at the head of the grave may be called a headstone. An especially old or elaborate stone slab may be called a funeral stele, stela, or slab. The use of such markers is traditional for Chinese, Jewish, Christian, and Islamic burials, as well as other traditions. In East Asia, the tomb's spirit tablet is the focus for ancestral veneration and may be removable for greater protection between rituals. Ancient grave markers typically incorporated funerary art, especially details in stone relief. With greater literacy, more markers began to include inscriptions of the deceased's name, date of birth, and date of death, often along with a personal message or prayer. The presence of a frame for photographs of the deceased is also increasingly common.
Use
The stele (plural: stelae), as it is called in an archaeological context, is one of the oldest forms of funerary art. Originally, a tombstone was the stone lid of a stone coffin, or the coffin itself, and a gravestone was the stone slab (or ledger stone) that was laid flat over a grave. Now, all three terms ("stele", "tombstone" or "gravestone") are also used for markers set (usually upright) at the head of the grave. Some graves in the 18th century also contained footstones to demarcate the foot end of the grave. This sometimes developed into full kerb sets that marked the whole perimeter of the grave. Footstones were rarely annotated with more than the deceased's initials and year of death, and sometimes a memorial mason and plot reference number. Many cemeteries and churchyards have removed those extra stones to ease grass cutting by machine mower. In some UK cemeteries, the principal, and indeed only, marker is placed at the foot of the grave.
Owing to soil movement and downhill creep on gentle slopes, older headstones and footstones can often be found tilted at an angle. Over time, this movement can result in the stones being sited several metres away from their original location.
Graves and any related memorials are a focus for mourning and remembrance. The names of relatives are often added to a gravestone over the years, so that one marker may chronicle the passing of an entire family spread over decades. Since gravestones and a plot in a cemetery or churchyard cost money, they are also a symbol of wealth or prominence in a community. Some gravestones were even commissioned and erected to their own memory by people who were still living, as a testament to their wealth and status. In a Christian context, the very wealthy often erected elaborate memorials within churches rather than having simply external gravestones. Crematoria frequently offer similar alternatives to families who do not have a grave to mark, but who want a focus for their mourning and for remembrance. Carved or cast commemorative plaques inside the crematorium for example may serve this purpose.
Materials
A cemetery may follow national codes of practice or independently prescribe the size and use of certain materials, especially in a conservation area. Some may limit the placing of a wooden memorial to six months after burial, after which a more permanent memorial must be placed. Others may require stones of a certain shape or position to facilitate grass-cutting. Headstones of granite, marble and other kinds of stone are usually created, installed, and repaired by monumental masons. Cemeteries require regular inspection and maintenance, as stones may settle, topple and, on rare occasions, fall and injure people; or graves may simply become overgrown and their markers lost or vandalised.
Restoration is a specialized job for a monumental mason. Even overgrowth removal requires care to avoid damaging the carving. For example, ivy should only be cut at the base roots and left to naturally die off, never pulled off forcefully. Many materials have been used as markers.
Stone
Fieldstones. In many cultures, markers for graves (other than enclosed areas planted with characteristic plants, particularly the yew in northern Europe) were natural fieldstones, some unmarked and others decorated or incised using a metal awl. Typical motifs for the carving included a symbol and the deceased's name and age.
Granite. Granite is a hard stone and requires skill to carve by hand. Modern methods of carving include using computer-controlled rotary bits and sandblasting over a rubber stencil. Leaving the letters, numbers and emblems exposed on the stone, the blaster can create virtually any kind of artwork or epitaph.
Marble and limestone. Both limestone and marble take carving well. Marble is a recrystallised form of limestone. The mild acid in rainwater can slowly dissolve marble and limestone over time, which can make inscriptions unreadable. Portland stone was a type of limestone commonly used in England; after weathering, fossiliferous deposits tend to appear on the surface. Marble became popular from the early 19th century, though its extra cost limited its appeal.
Sandstone. Sandstone is durable, yet soft enough to carve easily. Some sandstone markers are so well preserved that individual chisel marks are discernible, while others have delaminated and crumbled to dust. Delamination occurs when moisture gets between the layers of the sandstone. As it freezes and expands the layers flake off. In the 17th century, sandstone replaced field stones in Colonial America. Yorkstone was a common sandstone material used in England.
Slate. Slate can have a pleasing texture but is slightly porous and prone to delamination. Slate was commonly used by colonial New England carvers, especially in Boston, where elaborate slate markers were shipped down the Atlantic coast as far south as Charleston and Savannah. It takes lettering well, often highlighted with white paint or gilding.
Schist. Schist was a common material for grave markers in the American colonies during the 17th and 18th centuries. Because it is harder to carve than sandstone or slate, lettering and symbols usually had to be carved deeper into the stone and therefore held up well over long periods of time. While not as durable as most slate, most schist markers have weathered the elements well.
Metal, wood and plants
Iron. Iron grave markers and decorations were popular during the Victorian era in the United Kingdom and elsewhere, often being produced by specialist foundries or the local blacksmith. Cast iron headstones have lasted for generations while wrought ironwork often only survives in a rusted or eroded state. In eastern Värmland, Sweden, iron crosses instead of stones have been popular since the 18th century.
White bronze. Actually sand-cast zinc, but called white bronze for marketing purposes. Almost all, if not all, zinc grave markers were made by the Monumental Bronze Company of Bridgeport, CT, between 1874 and 1914. The company set up subsidiaries in Detroit, Philadelphia, New Orleans, and Des Moines; a Chicago subsidiary was named the American Bronze Company, while the St. Thomas White Bronze Monument Company was set up in Ontario, Canada. They are found in cemeteries of the period all across the U.S. and Canada. They were sold as more durable than marble, about one-third less expensive, and 'progressive'.
Wood. This was a popular material during the Georgian and Victorian era, and almost certainly before, in Great Britain and elsewhere. Some could be very ornate, although few survive beyond 50–100 years due to natural decomposition or termites and other wood boring insects. In Hungary, the kopjafa is a traditional carved wooden grave marker.
Planting. Trees or shrubs, particularly roses, may be planted, especially to mark the location of ashes. This may be accompanied by a small inscribed metal or wooden marker.
Inscriptions
Markers sometimes bear inscriptions. The information on the headstone generally includes the name of the deceased and their dates of birth and death. Such information can be useful to genealogists and local historians. Larger cemeteries may also require a discreet reference code to help accurately fix the location for maintenance. The cemetery owner, church, or, as in the UK, national guidelines might encourage the use of 'tasteful' and accurate wording in inscriptions. Inscriptions are traditionally placed on the forward-facing side of the memorial but can also be seen in some cases on the reverse and around the edges of the stone itself. Some families request that an inscription be made on the portion of the memorial that will be underground.
In addition, some gravestones also bear epitaphs in praise of the deceased or quotations from religious texts, such as "requiescat in pace". In a few instances the inscription is in the form of a plea, admonishment, testament of faith, claim to fame or even a curse; William Shakespeare's inscription famously declares a curse on anyone who would move his bones. Other inscriptions offer a warning about mortality, such as the Persian poetry carved on an ancient tombstone in the Tajik capital of Dushanbe, or a simpler warning of the inevitability of death.
Headstone engravers faced their own "year 2000 problem" when still-living people, as many as 500,000 in the United States alone, pre-purchased headstones with pre-carved death years beginning with 19–.
Bas-relief carvings of a religious nature or of a profile of the deceased can be seen on some headstones, especially up to the 19th century. Since the invention of photography, a gravestone might include a framed photograph or cameo of the deceased; photographic images or artwork (showing the loved one, or some other image relevant to their life, interests or achievements) are sometimes now engraved onto smooth stone surfaces.
Some headstones use lettering made of white metal fixed into the stone, which is easy to read but can be damaged by ivy or frost. Deep carvings on a hard-wearing stone may weather many centuries exposed in graveyards and still remain legible. Those fixed on the inside of churches, on the walls, or on the floor (often as near the altar as possible) may last much longer: such memorials were often embellished with a monumental brass. Irish geologist Patrick Wyse Jackson mused on gravestone legibility in 1993 with regard to the different types of stone available.
The choice of language and/or script on gravestones has been studied by sociolinguists as indicators of language choices and language loyalty. For example, by studying cemeteries used by immigrant communities, some languages were found to be carved "long after the language ceased to be spoken" in the communities. In other cases, a language used in the inscription may indicate a religious affiliation.
Marker inscriptions have also been used for political purposes, such as the grave marker installed in January 2008 at Cave Hill Cemetery in Louisville, Kentucky by Mathew Prescott, an employee of PETA. The grave marker is located near the grave of KFC founder Harland Sanders and bears the acrostic message "KFC tortures birds". The group placed its grave marker to promote its contention that KFC is cruel to chickens.
Form and decoration
Gravestones may be simple upright slabs with semi-circular, rounded, gabled, pointed-arched, pedimental, square or other shaped tops. During the 18th century, they were often decorated with memento mori (symbolic reminders of death) such as skulls or winged skulls, winged cherub heads, heavenly crowns, or the picks and shovels of the gravedigger. Somewhat more unusual were elaborate allegorical figures, such as Old Father Time, or emblems of trade or status, or even some event from the life of the deceased (particularly how they died). Large tomb chests, false sarcophagi (so called because the actual remains were in the earth below), or smaller coped chests were commonly used by the gentry as a means of commemorating a number of members of the same family. In the 19th century, headstone styles became very diverse, ranging from plain to highly decorated, and often using crosses on a base or other shapes differing from the traditional slab. By this time popular designs were shifting from symbols of death like winged heads and skulls to urns and willow trees. Marble also became overwhelmingly popular as a grave material during the 1800s in the United States. More elaborately carved markers, such as crosses or angels, also became popular during this time. Simple curb surrounds, sometimes filled with glass chippings, were popular during the mid-20th century.
Islamic headstones are traditionally rectangular upright shafts, often topped with a carved topknot symbolic of a turban; but in Western countries more local styles are often used.
Some form of simple decoration may be employed. Special emblems on tombstones indicate several familiar themes in many faiths. Some examples are:
Anchor: Steadfast hope
Angel of grief: Sorrow
Arch: Rejoined with partner in Heaven
Birds: The soul
Book: Faith, wisdom
Cherub: Divine wisdom or justice
Column: Noble life
Broken column: Early death
Conch shell: Wisdom
Cross, anchor and Bible: Trials, victory and reward
Crown: Reward and glory
Dolphin: Salvation, bearer of souls to Heaven
Dove: Purity, love and Holy Spirit
Evergreen: Eternal life
Garland: Victory over death
Gourds: Deliverance from grief
Hands: A relation or partnership
Heart: Devotion
Horseshoe: Protection against evil
Hourglass: Time and its swift flight
IHS: Stylized version of iota-eta-sigma, the first letters of the name of Jesus in Greek; often interpreted in Latin as "Iesus Hominum Salvator" ("Jesus, savior of mankind"), or alternatively treated as an initialism for "In Hoc Signo (Vinces)": "In this sign you shall conquer." Commonly indicates Roman Catholic faith, the latter especially the Society of Jesus.
Ivy: Faithfulness, memory, and undying friendship
Lamb: Innocence, young age
Lamp: Immortality
Laurel: Victory, fame
Lily: Purity and resurrection
Lion: Strength, resurrection
Mermaid: Dualism of Christ, fully God and fully man
Oak: Strength
Olive branch: Forgiveness, and peace
Palms: Martyrdom, or victory over death
Peacock: Eternal life
Pillow: a deathbed, eternal sleep
Poppy: Eternal sleep
Rooster: Awakening, courage and vigilance
Shell: Birth and resurrection
Skeleton: Life's brevity
Snake in a circle: Everlasting life in Heaven
Square and Compasses: Freemasonry
Star of David: Judaism
Swallow: Motherhood
Broken sword: Life cut short
Crossed swords: Life lost in battle
Torch: Eternal life if upturned, death if extinguished
Tree trunk: The beauty of life
Triangle: Truth, equality and the trinity
Tzedakah box (pushke): Righteousness, for it is written "...to do righteousness and justice" (Gen 18:19) and "the doing of righteousness and justice is preferable to the Lord than sacrificial offering" (Proverbs 21:3).
Shattered urn: Old age, mourning if draped
Weeping willow: Mourning, grief
Greek letters might also be used:
ΑΩ (alpha and omega): The beginning and the end
ΧΡ (chi rho): The first letters spelling the name of Christ
Safety
Over time a headstone may settle or its fixings weaken. After several instances where unstable stones have fallen in dangerous circumstances, some burial authorities "topple test" headstones by firm pressure to check for stability. They may then tape them off or flatten them.
This procedure has proved controversial in the UK, where an authority's duty of care to protect visitors is complicated because it often does not have any ownership rights over the dangerous marker. Authorities that have knocked over stones during testing or have unilaterally lifted and laid flat any potentially hazardous stones have been criticised, after grieving relatives have discovered that their relative's marker has been moved. Since 2007 Consistory Court and local authority guidance now restricts the force used in a topple test and requires an authority to consult relatives before moving a stone. In addition, before laying a stone flat, it must be recorded for posterity.
Gravestone cleaning
Gravestone cleaning is a practice that both professionals and volunteers can do to preserve gravestones and increase their life spans. Before cleaning any gravestones, permission must be given to the cleaner by a "descendant, the sexton, cemetery superintendent or the town, in that order. If unsure who to ask, go to your town cemetery keeper and inquire." A gravestone can be cleaned to remove human vandalism and graffiti, biological growth such as algae or lichen, and other minerals, soiling, or staining.
One of the most important tenets of gravestone cleaning is "do no harm." In the United States, the National Park Service has published a list of guidelines that outline the best practices of gravestone cleaning:
Do
"Do no harm
Do select the gentlest cleaning method to accomplish the task
Do perform small test patches before cleaning the entire stone
Do follow manufacturers’ recommendations
Do follow manufacturers’ safety guidance
Do exercise patience
Don't
Don’t remove original surfaces
Don’t use bleach or other salt laden cleaners
Don’t power wash with high pressures
Don’t sand blast or use harsh mechanical methods such as power tools
Don’t use strong acids or bases"
Image gallery
See also
Gravestone rubbing
Khachkar
Mausoleum
Megalith
Murder stone
Sarcophagus
Scottish gravestones
The Devil's Chair (urban legend)
Tombstone tourist
Viewlogy
References
Sources
External links
In Search Of Gravestones Old And Curious by W.T. Vincent, 1896, from Project Gutenberg
Azeri.org, Sofi Hamid Cemetery
World Burial Index Photographs of memorial inscriptions plus free surname search
A Very Grave Matter Old New England gravestones
Historic Headstones Online Project to transcribe content from historic headstones
Pennsylvania German tombstones
Stockton University includes gravestone imagery in New Jersey
How to clean a Grave marker by Ralf Heckenbach
Stone Quarries and Beyond
"Memorializing the Civil War Dead: Modernity and Corruption under the Grant Administration", by Bruce S. Elliott, in Markers XXVI, Association for Gravestone Studies, 2011, pp. 15–55. (Reprinted with permission of the "Association for Gravestone Studies". (Details the beginning of the mass production of cemetery stones and the increased use of the sand blast process.)
Cleaning Grave Markers (a video created by the United States National Park Service on how to clean gravestones and other monuments)
Burial monuments and structures
Monumental masons
Stone monuments and memorials
Stones | Gravestone | [
"Physics"
] | 3,729 | [
"Stones",
"Physical objects",
"Matter"
] |
603,273 | https://en.wikipedia.org/wiki/Magnification | Magnification is the process of enlarging the apparent size, not physical size, of something. This enlargement is quantified by a size ratio called optical magnification. When this number is less than one, it refers to a reduction in size, sometimes called de-magnification.
Typically, magnification is related to scaling up visuals or images to be able to see more detail, increasing resolution, using a microscope, printing techniques, or digital processing. In all cases, the magnification of the image does not change the perspective of the image.
Examples of magnification
Some optical instruments provide visual aid by magnifying small or distant subjects.
A magnifying glass, which uses a positive (convex) lens to make things look bigger by allowing the user to hold them closer to their eye.
A telescope, which uses its large objective lens or primary mirror to create an image of a distant object and then allows the user to examine the image closely with a smaller eyepiece lens, thus making the object look larger.
A microscope, which makes a small object appear as a much larger image at a comfortable distance for viewing. A microscope is similar in layout to a telescope except that the object being viewed is close to the objective, which is usually much smaller than the eyepiece.
A slide projector, which projects a large image of a small slide on a screen. A photographic enlarger is similar.
A zoom lens, a system of camera lens elements for which the focal length and angle of view can be varied.
Size ratio (optical magnification)
Optical magnification is the ratio between the apparent size of an object (or its size in an image) and its true size, and thus it is a dimensionless number. Optical magnification is sometimes referred to as "power" (for example "10× power"), although this can lead to confusion with optical power.
Linear or transverse magnification
For real images, such as images projected on a screen, size means a linear dimension (measured, for example, in millimeters or inches).
Angular magnification
For optical instruments with an eyepiece, the linear dimension of the image seen in the eyepiece (virtual image at infinite distance) cannot be given, thus size means the angle subtended by the object at the focal point (angular size). Strictly speaking, one should take the tangent of that angle (in practice, this makes a difference only if the angle is larger than a few degrees). Thus, angular magnification is given by:
M_A = tan(ε_2) / tan(ε_1),
where ε_1 is the angle subtended by the object at the front focal point of the objective and ε_2 is the angle subtended by the image at the rear focal point of the eyepiece.
For example, the mean angular size of the Moon's disk as viewed from Earth's surface is about 0.52°. Thus, through binoculars with 10× magnification, the Moon appears to subtend an angle of about 5.2°.
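The caveat about tangents can be checked numerically. Below is a minimal Python sketch (the function name is illustrative, not from the source) comparing the tangent definition of angular magnification with the small-angle shortcut used in the Moon example:

```python
import math

def apparent_angle_deg(object_angle_deg, magnification):
    """Angle subtended by the image, using tan(eps_image) = M * tan(eps_object)."""
    eps_object = math.radians(object_angle_deg)
    return math.degrees(math.atan(magnification * math.tan(eps_object)))

moon_deg = 0.52  # mean angular size of the Moon's disk, from the text above
print(apparent_angle_deg(moon_deg, 10.0))  # ~5.19 degrees (tangent definition)
print(moon_deg * 10.0)                     # 5.2 degrees (small-angle shortcut)
```

At half a degree the two results agree to within about 0.3%, illustrating why the tangent matters only for angles larger than a few degrees.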
By convention, for magnifying glasses and optical microscopes, where the size of the object is a linear dimension and the apparent size is an angle, the magnification is the ratio between the apparent (angular) size as seen in the eyepiece and the angular size of the object when placed at the conventional closest distance of distinct vision: 25 cm from the eye.
By instrument
Single lens
The linear magnification of a thin lens is
M = f / (f - d_o) = -f / x_o,
where f is the focal length, d_o is the distance from the lens to the object, and x_o = d_o - f is the distance of the object with respect to the front focal point. A sign convention is used such that d_o and d_i (the image distance from the lens) are positive for real object and image, respectively, and negative for virtual object and image, respectively. The focal length f of a converging lens is positive, while for a diverging lens it is negative.
For real images, M is negative and the image is inverted. For virtual images, M is positive and the image is upright.
With d_i being the distance from the lens to the image, h_i the height of the image and h_o the height of the object, the magnification can also be written as:
M = -d_i / d_o = h_i / h_o.
Note again that a negative magnification implies an inverted image.
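A short numeric sketch of these relations, assuming the sign convention described above (positive d_o and d_i for real object and image; the helper names are illustrative):

```python
def thin_lens_image_distance(f, d_o):
    """Solve 1/f = 1/d_o + 1/d_i for the image distance d_i."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

def transverse_magnification(f, d_o):
    """M = -d_i/d_o = f/(f - d_o); negative M means an inverted image."""
    return -thin_lens_image_distance(f, d_o) / d_o

# Converging lens (f > 0), object outside the focal length: real, inverted image.
print(transverse_magnification(f=50.0, d_o=75.0))  # -2.0 (inverted, enlarged)

# Object inside the focal length: virtual, upright image (d_i < 0, so M > 0).
print(transverse_magnification(f=50.0, d_o=25.0))  # 2.0 (upright, enlarged)
```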
The image magnification along the optical axis direction, called longitudinal magnification M_L, can also be defined. The Newtonian lens equation is stated as x_o x_i = f^2, where x_o and x_i are the on-axis distances of the object and the image with respect to the respective focal points. M_L is defined as
M_L = dx_i / dx_o,
and by using the Newtonian lens equation,
M_L = -f^2 / x_o^2 = -M^2.
The longitudinal magnification is always negative, which means that the object and the image always move in the same direction along the optical axis. The longitudinal magnification varies much faster than the transverse magnification, so the 3-dimensional image is distorted.
Photography
The image recorded by a photographic film or image sensor is always a real image and is usually inverted. When measuring the height of an inverted image using the cartesian sign convention (where the x-axis is the optical axis) the value for h_i will be negative, and as a result M will also be negative. However, the traditional sign convention used in photography is "real is positive, virtual is negative". Therefore, in photography: object height and distance are always real and positive. When the focal length is positive, the image's height, distance and magnification are real and positive. Only if the focal length is negative are the image's height, distance and magnification virtual and negative. Therefore, the formulae are traditionally presented as
M = d_i / d_o = h_i / h_o.
Magnifying glass
The maximum angular magnification (compared to the naked eye) of a magnifying glass depends on how the glass and the object are held, relative to the eye. If the lens is held at a distance from the object such that its front focal point is on the object being viewed, the relaxed eye (focused to infinity) can view the image with angular magnification
M_A = 25 / f.
Here, f is the focal length of the lens in centimeters. The constant 25 cm is an estimate of the "near point" distance of the eye (the closest distance at which the healthy naked eye can focus). In this case the angular magnification is independent of the distance kept between the eye and the magnifying glass.
If instead the lens is held very close to the eye and the object is placed closer to the lens than its focal point so that the observer focuses on the near point, a larger angular magnification can be obtained, approaching
M_A = 25 / f + 1.
A different interpretation of the working of the latter case is that the magnifying glass changes the diopter of the eye (making it myopic) so that the object can be placed closer to the eye resulting in a larger angular magnification.
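The two cases can be compared numerically; a minimal sketch assuming the 25 cm near-point estimate given above and a focal length in centimetres:

```python
NEAR_POINT_CM = 25.0  # conventional near-point distance of the healthy eye

def magnifier_relaxed(f_cm):
    """Angular magnification with the image at infinity (relaxed eye): 25/f."""
    return NEAR_POINT_CM / f_cm

def magnifier_near_point(f_cm):
    """Upper limit when the observer instead focuses on the near point: 25/f + 1."""
    return NEAR_POINT_CM / f_cm + 1.0

print(magnifier_relaxed(5.0))     # 5.0x for a 5 cm focal length
print(magnifier_near_point(5.0))  # 6.0x
```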
Microscope
The angular magnification of a microscope is given by
M_A = M_o × M_e,
where M_o is the magnification of the objective and M_e the magnification of the eyepiece. The magnification of the objective depends on its focal length f_o and on the distance d between the objective's back focal plane and the focal plane of the eyepiece (called the tube length):
M_o = d / f_o.
The magnification of the eyepiece depends upon its focal length f_e and is calculated by the same equation as that of a magnifying glass (above):
M_e = 25 / f_e, with f_e in centimeters.
Note that both astronomical telescopes and simple microscopes produce an inverted image, thus the equation for the magnification of a telescope or microscope is often given with a minus sign.
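Putting the two factors together (a sketch; the 160 mm tube length and the focal lengths are illustrative values, not figures from the source):

```python
def microscope_magnification(tube_length_mm, f_objective_mm, f_eyepiece_mm):
    """M_A = M_o * M_e, with M_o = d / f_o and M_e = 250 mm / f_e
    (the 25 cm near point expressed in millimetres)."""
    m_objective = tube_length_mm / f_objective_mm
    m_eyepiece = 250.0 / f_eyepiece_mm
    return m_objective * m_eyepiece

# A 160 mm tube with a 4 mm objective and a 25 mm eyepiece: 40x * 10x = 400x.
print(microscope_magnification(160.0, 4.0, 25.0))  # 400.0
```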
Telescope
The angular magnification of an optical telescope is given by
M_A = f_o / f_e,
in which f_o is the focal length of the objective lens in a refractor or of the primary mirror in a reflector, and f_e is the focal length of the eyepiece.
Measurement of telescope magnification
Measuring the actual angular magnification of a telescope is difficult, but it is possible to use the reciprocal relationship between the linear magnification and the angular magnification, since the linear magnification is constant for all objects.
The telescope is focused correctly for viewing objects at the distance for which the angular magnification is to be determined, and then the object glass is used as an object, the image of which is known as the exit pupil. The diameter of this may be measured using an instrument known as a Ramsden dynameter, which consists of a Ramsden eyepiece with micrometer hairs in the back focal plane. This is mounted in front of the telescope eyepiece and used to evaluate the diameter of the exit pupil. This will be much smaller than the object glass diameter, which gives the linear magnification (actually a reduction); the angular magnification can be determined from
M_A = 1 / M = D_O / D_E,
where D_O is the diameter of the object glass and D_E the diameter of the exit pupil.
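The arithmetic of the measurement reduces to a ratio of diameters; a minimal sketch with illustrative values:

```python
def magnification_from_exit_pupil(d_objective_mm, d_exit_pupil_mm):
    """Angular magnification as the reciprocal of the measured linear
    reduction: M_A = 1/M = D_O / D_E."""
    return d_objective_mm / d_exit_pupil_mm

# A 150 mm object glass producing a 5 mm exit pupil implies 30x magnification.
print(magnification_from_exit_pupil(150.0, 5.0))  # 30.0
```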
Maximum usable magnification
With any telescope, microscope or lens, a maximum magnification exists beyond which the image looks bigger but shows no more detail. It occurs when the finest detail the instrument can resolve is magnified to match the finest detail the eye can see. Magnification beyond this maximum is sometimes called "empty magnification".
For a good quality telescope operating in good atmospheric conditions, the maximum usable magnification is limited by diffraction. In practice it is considered to be 2× the aperture in millimetres or 50× the aperture in inches; so, a 60 mm diameter telescope has a maximum usable magnification of 120×.
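The rule of thumb is easy to encode (a sketch; the function name is illustrative):

```python
def max_usable_magnification(aperture_mm):
    """Diffraction-limited rule of thumb from the text: about 2x per
    millimetre of aperture (equivalently, about 50x per inch)."""
    return 2.0 * aperture_mm

print(max_usable_magnification(60.0))  # 120.0, matching the example above
```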
With an optical microscope having a high numerical aperture and using oil immersion, the best possible resolution is about 200 nm, corresponding to a magnification of around 1200×. Without oil immersion, the maximum usable magnification is around 800×. For details, see limitations of optical microscopes.
Small, cheap telescopes and microscopes are sometimes supplied with eyepieces that give magnification far higher than is usable.
The ratio of the maximum to the minimum magnification of an optical system is known as its zoom ratio.
"Magnification" of displayed images
Magnification figures on pictures displayed in print or online can be misleading. Editors of journals and magazines routinely resize images to fit the page, making any magnification number provided in the figure legend incorrect. Images displayed on a computer screen change size based on the size of the screen. A scale bar (or micron bar) is a bar of stated length superimposed on a picture. When the picture is resized the bar will be resized in proportion. If a picture has a scale bar, the actual magnification can easily be calculated. Where the scale (magnification) of an image is important or relevant, including a scale bar is preferable to stating magnification.
See also
Lens
Magnifying glass
Microscope
Optical telescope
Screen magnifier
References
Optics
Articles containing video clips | Magnification | [
"Physics",
"Chemistry"
] | 2,133 | [
"Applied and interdisciplinary physics",
"Optics",
" molecular",
"Atomic",
" and optical physics"
] |
603,278 | https://en.wikipedia.org/wiki/Nanosensor | Nanosensors are nanoscale devices that measure physical quantities and convert these to signals that can be detected and analyzed. There are several ways proposed today to make nanosensors; these include top-down lithography, bottom-up assembly, and molecular self-assembly. There are different types of nanosensors in the market and in development for various applications, most notably in defense, environmental, and healthcare industries. These sensors share the same basic workflow: a selective binding of an analyte, signal generation from the interaction of the nanosensor with the bio-element, and processing of the signal into useful metrics.
Characteristics
Nanomaterials-based sensors have several benefits in sensitivity and specificity over sensors made from traditional materials, due to nanomaterial features not present in bulk material that arise at the nanoscale. Nanosensors can have increased specificity because they operate at a similar scale as natural biological processes, allowing functionalization with chemical and biological molecules, with recognition events that cause detectable physical changes. Enhancements in sensitivity stem from the high surface-to-volume ratio of nanomaterials, as well as novel physical properties of nanomaterials that can be used as the basis for detection, including nanophotonics. Nanosensors can also potentially be integrated with nanoelectronics to add native processing capability to the nanosensor.
In addition to their sensitivity and specificity, nanosensors offer significant advantages in cost and response times, making them suitable for high-throughput applications. Nanosensors provide real-time monitoring compared to traditional detection methods such as chromatography and spectroscopy. These traditional methods may take days to weeks to obtain results and often require investment in capital costs as well as time for sample preparation.
One-dimensional nanomaterials such as nanowires and nanotubes are well suited for use in nanosensors, as compared to bulk or thin-film planar devices. They can function both as transducers and wires to transmit the signal. Their high surface area can cause large signal changes upon binding of an analyte. Their small size can enable extensive multiplexing of individually addressable sensor units in a small device. Their operation is also "label free" in the sense of not requiring fluorescent or radioactive labels on the analytes. Zinc oxide nanowire is used for gas sensing applications, given that it exhibits high sensitivity toward low concentration of gas under ambient conditions and can be fabricated easily with low cost.
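The surface-to-volume advantage of such one-dimensional structures can be illustrated with a simple geometric sketch (assuming a long cylindrical wire and ignoring its end caps):

```python
def cylinder_surface_to_volume(radius_nm):
    """Side-wall area over volume for a long cylinder:
    (2*pi*r*L) / (pi*r^2*L) = 2/r, so thinner wires expose far more
    surface per unit volume."""
    return 2.0 / radius_nm

for radius_nm in (10.0, 100.0, 1000.0):  # from a 10 nm nanowire to a 1 um wire
    print(radius_nm, cylinder_surface_to_volume(radius_nm))
# 10 nm -> 0.2 per nm; 1000 nm -> 0.002 per nm: a hundredfold difference
```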
There are several challenges for nanosensors, including avoiding drift and fouling, developing reproducible calibration methods, applying preconcentration and separation methods to attain a proper analyte concentration that avoids saturation, and integrating the nanosensor with other elements of a sensor package in a reliable manufacturable manner. Because nanosensors are a relatively new technology, there are many unanswered questions regarding nanotoxicology, which currently limits their application in biological systems.
Potential applications for nanosensors include medicine, detection of contaminants and pathogens, and monitoring manufacturing processes and transportation systems. By measuring changes in physical properties (volume, concentration, displacement and velocity, gravitational, electrical, and magnetic forces, pressure, or temperature) nanosensors may be able to distinguish between and recognize certain cells at the molecular level in order to deliver medicine or monitor development to specific places in the body. The type of signal transduction defines the major classification system for nanosensors. Some of the main types of nanosensor readouts include optical, mechanical, vibrational, or electromagnetic.
As an example of classification, nanosensors that use molecularly imprinted polymers (MIP) can be divided into three categories: electrochemical, piezoelectric, or spectroscopic sensors. Electrochemical sensors induce a change in the electrochemical properties of the sensing material, which include charge, conductivity, and electric potential. Piezoelectric sensors either convert mechanical force into electric force or vice versa; this force is then transduced into a signal. MIP spectroscopic sensors can be divided into three subcategories: chemiluminescent sensors, surface plasmon resonance sensors, and fluorescence sensors. As the names suggest, these sensors produce light-based signals in the form of chemiluminescence, resonance, and fluorescence. As these examples illustrate, the type of change that a sensor detects and the type of signal it induces depend on the type of sensor.
Mechanisms of operation
There are multiple mechanisms by which a recognition event can be transduced into a measurable signal; generally, these take advantage of the nanomaterial sensitivity and other unique properties to detect a selectively bound analyte.
Electrochemical nanosensors are based on detecting a resistance change in the nanomaterial upon binding of an analyte, due to changes in scattering or to the depletion or accumulation of charge carriers. One possibility is to use nanowires such as carbon nanotubes, conductive polymers, or metal oxide nanowires as gates in field-effect transistors, although as of 2009 they had not yet been demonstrated in real-world conditions. Chemical nanosensors contain a chemical recognition system (receptor) and a physiochemical transducer, in which the receptor interacts with analyte to produce electrical signals. In one case, upon interaction of the analyte with the receptor, the nanoporous transducer had a change in impedance which was determined as the sensor signal. Other examples include electromagnetic or plasmonic nanosensors, spectroscopic nanosensors such as surface-enhanced Raman spectroscopy, magnetoelectronic or spintronic nanosensors, and mechanical nanosensors.
Biological nanosensors consist of a bio-receptor and a transducer. The transduction method of choice is currently fluorescence because of the high sensitivity and relative ease of measurement. The measurement can be achieved by using the following methods: binding active nanoparticles to active proteins within the cell, using site-directed mutagenesis to produce indicator proteins, allowing for real-time measurements, or by creating a nanomaterial (e.g. nanofibers) with attachment sites for the bio-receptors. Even though electrochemical nanosensors can be used to measure intracellular properties, they are typically less selective for biological measurements, as they lack the high specificity of bio-receptors (e.g. antibody, DNA).
Photonic devices can also be used as nanosensors to quantify concentrations of clinically relevant samples. A principle of operation of these sensors is based on the chemical modulation of a hydrogel film volume that incorporates a Bragg grating. As the hydrogel swells or shrinks upon chemical stimulation, the Bragg grating changes color and diffracts light at different wavelengths. The diffracted light can be correlated with the concentration of a target analyte.
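A rough sketch of the optics behind such a sensor, assuming first-order Bragg diffraction at normal incidence (lambda = 2 * n * d); the refractive index and spacings below are illustrative values, not figures from the source:

```python
def bragg_wavelength_nm(refractive_index, spacing_nm):
    """First-order Bragg condition at normal incidence: lambda = 2 * n * d."""
    return 2.0 * refractive_index * spacing_nm

n = 1.35  # assumed effective refractive index of the hydrogel
for spacing_nm in (180.0, 200.0, 220.0):  # spacing grows as the gel swells
    print(spacing_nm, bragg_wavelength_nm(n, spacing_nm))
# 180 -> 486 nm, 200 -> 540 nm, 220 -> 594 nm: swelling shifts the colour redward
```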
Another type of nanosensor works on a colorimetric basis. Here, the presence of the analyte causes a chemical reaction or morphological alteration that produces a visible color change. In one such application, gold nanoparticles can be used for the detection of heavy metals. Many harmful gases can also be detected by a colorimetric change, such as through the commercially available Dräger tube. These provide an alternative to bulky, lab-scale systems, as they can be miniaturized into point-of-sample devices. For example, many chemicals are regulated by the Environmental Protection Agency and require extensive testing to ensure contaminant levels are within the appropriate limits. Colorimetric nanosensors provide a method for on-site determination of many contaminants.
Production methods
The production method plays a central role in determining the characteristics of the manufactured nanosensor, in that the function of a nanosensor can be engineered by controlling the surface of its nanoparticles. There are two main approaches to the manufacture of nanosensors: top-down methods, which begin with a pattern generated at a larger scale and then reduced to the microscale, and bottom-up methods, which start with atoms or molecules that build up to nanostructures.
Top-down methods
Lithography
Lithography involves starting out with a larger block of some material and carving out the desired form. These carved-out devices, notably put to use in specific microelectromechanical systems used as microsensors, generally only reach the micro size, but the most recent of these have begun to incorporate nanosized components. One of the most common methods is electron beam lithography. Although very costly, this technique effectively forms a distribution of circular or ellipsoidal plots on a two-dimensional surface. Another method is electrodeposition, which requires conductive elements to produce miniaturized devices.
Fiber pulling
This method consists of using a tension device to stretch the major axis of a fiber while it is heated, to achieve nano-sized scales. It is especially used with optical fiber to develop optical-fiber-based nanosensors.
Chemical etching
Two different types of chemical etching have been reported. In the Turner method, a fiber is etched to a point while placed in the meniscus between hydrofluoric acid and an organic overlayer. This technique has been shown to produce fibers with large taper angles (thus increasing the light reaching the tip of the fiber) and tip diameters comparable to the pulling method. The second method is tube etching, which involves etching an optical fiber with a single-component solution of hydrogen fluoride. A silica fiber, surrounded with an organic cladding, is polished and one end is placed in a container of hydrofluoric acid. The acid then begins to etch away the tip of the fiber without destroying the cladding. As the silica fiber is etched away, the polymer cladding acts as a wall, creating microcurrents in the hydrofluoric acid that, coupled with capillary action, cause the fiber to be etched into the shape of a cone with large, smooth tapers. This method shows much less susceptibility to environmental parameters than the Turner method.
Bottom-up methods
These methods involve assembling the sensors out of smaller components, usually individual atoms or molecules. This is done by arranging atoms in specific patterns, which has been achieved in laboratory tests through use of atomic force microscopy, but is still difficult to achieve en masse and is not economically viable.
Self-assembly
Also known as “growing”, this method most often entails an already complete set of components that would automatically assemble themselves into a finished product. Accurately being able to reproduce this effect for a desired sensor in a laboratory would imply that scientists could manufacture nanosensors much more quickly and potentially far more cheaply by letting numerous molecules assemble themselves with little or no outside influence, rather than having to manually assemble each sensor.
Although the conventional fabrication techniques have proven to be efficient, further improvements in the production method can lead to minimization of cost and enhancement of performance. Challenges with current production methods include uneven distribution, size, and shape of nanoparticles, which all lead to limitations in performance. In 2006, researchers in Berlin patented their invention of a novel diagnostic nanosensor fabricated with nanosphere lithography (NSL), which allows precise control over the size and shape of nanoparticles and creates nanoislands. The metallic nanoislands produced an increase in signal transduction and thus increased the sensitivity of the sensor. The results also showed that the sensitivity and specificity of the diagnostic nanosensor depend on the size of the nanoparticles: decreasing the nanoparticle size increases the sensitivity.
Current density is influenced by the distribution, size, and shape of the nanoparticles. These properties can be improved by exploiting capillary forces. In recent research, capillary forces were induced by applying five microliters of ethanol; as a result, individual nanoparticles merged into larger islands (around 20 micrometers in size) separated by 10 micrometers on average, while the smaller ones were dissolved and absorbed. On the other hand, applying twice as much ethanol (10 microliters) damaged the nanolayers, while applying too little (two microliters) failed to spread across them.
Applications
One of the first working examples of a synthetic nanosensor was built by researchers at the Georgia Institute of Technology in 1999. It involved attaching a single particle onto the end of a carbon nanotube and measuring the vibrational frequency of the nanotube both with and without the particle. The discrepancy between the two frequencies allowed the researchers to measure the mass of the attached particle.
Since then, increasing amounts of research have gone into nanosensors, whereby modern nanosensors have been developed for many applications. Currently, the applications of nanosensors in the market include: healthcare, defense and military, and others such as food, environment, and agriculture.
Defense and military
Nanoscience as a whole has many potential applications in the defense and military sector, including chemical detection, decontamination, and forensics. Some nanosensors in development for defense applications include nanosensors for the detection of explosives or toxic gases. Such nanosensors work on the principle that gas molecules can be distinguished based on their mass using, for example, piezoelectric sensors. If a gas molecule is adsorbed at the surface of the detector, the resonance frequency of the crystal changes and this can be measured as a change in electrical properties. In addition, field effect transistors, used as potentiometers, can detect toxic gases if their gate is made sensitive to them.
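One standard way to quantify such a mass-induced resonance shift for a quartz crystal is the Sauerbrey relation; the source does not name a specific device, so the sketch below applies the relation with illustrative values:

```python
import math

# Material constants of quartz, in CGS units as commonly tabulated.
RHO_Q = 2.648    # density, g/cm^3
MU_Q = 2.947e11  # shear modulus, g/(cm*s^2)

def sauerbrey_shift_hz(f0_hz, delta_mass_g, area_cm2):
    """Frequency shift for a small adsorbed mass:
    delta_f = -2 * f0^2 * dm / (A * sqrt(rho_q * mu_q))."""
    return -2.0 * f0_hz**2 * delta_mass_g / (area_cm2 * math.sqrt(RHO_Q * MU_Q))

# One microgram adsorbed on a 1 cm^2 electrode of a 5 MHz crystal:
print(sauerbrey_shift_hz(5e6, 1e-6, 1.0))  # about -56.6 Hz
```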
In a similar application, nanosensors can be utilized in military and law enforcement clothing and gear. The Naval Research Laboratory's Institute for Nanoscience has studied quantum dots for application in nanophotonics and identifying biological materials. Nanoparticles layered with polymers and other receptor molecules will change color when contacted by analytes such as toxic gases. This alerts the user that they are in danger. Other projects involve embedding clothing with biometric sensors to relay information regarding the user's health and vitals, which would be useful for monitoring soldiers in combat.
Surprisingly, some of the most challenging aspects of creating nanosensors for defense and military use are political in nature, rather than technical. Many different government agencies must work together to allocate budgets and share information and progress in testing; this can be difficult with such large and complex institutions. In addition, visas and immigration status can become an issue for foreign researchers: as the subject matter is very sensitive, government clearance can sometimes be required. Finally, there are currently no well-defined or clear regulations on nanosensor testing or applications in the sensor industry, which contributes to the difficulty of implementation.
Food and the environment
Nanosensors can improve various sub-areas within food and environment sectors including food processing, agriculture, air and water quality monitoring, and packaging and transport. Due to their sensitivity, as well as their tunability and resulting binding selectivity, nanosensors are very effective and can be designed for a wide variety of environmental applications. Such applications of nanosensors help in a convenient, rapid, and ultrasensitive assessment of many types of environmental pollutants.
Chemical sensors are useful for analyzing odors from food samples and detecting atmospheric gases. The "electronic nose" was developed in 1988 to determine the quality and freshness of food samples using traditional sensors, but more recently the sensing film has been improved with nanomaterials. A sample is placed in a chamber where volatile compounds become concentrated in the gas phase, whereby the gas is then pumped through the chamber to carry the aroma to the sensor that measures its unique fingerprint. The high surface area to volume ratio of the nanomaterials allows for greater interaction with analytes and the nanosensor's fast response time enables the separation of interfering responses. Chemical sensors, too, have been built using nanotubes to detect various properties of gaseous molecules. Many carbon nanotube based sensors are designed as field effect transistors, taking advantage of their sensitivity. The electrical conductivity of these nanotubes will change due to charge transfer and chemical doping by other molecules, enabling their detection. To enhance their selectivity, many of these involve a system by which nanosensors are built to have a specific pocket for another molecule. Carbon nanotubes have been used to sense ionization of gaseous molecules while nanotubes made out of titanium have been employed to detect atmospheric concentrations of hydrogen at the molecular level. Some of these have been designed as field effect transistors, while others take advantage of optical sensing capabilities. Selective analyte binding is detected through spectral shift or fluorescence modulation. In a similar fashion, Flood et al. have shown that supramolecular host–guest chemistry offers quantitative sensing using Raman scattered light as well as SERS.
Other types of nanosensors, including quantum dots and gold nanoparticles, are currently being developed to detect pollutants and toxins in the environment. These take advantage of the localized surface plasmon resonance (LSPR) that arises at the nanoscale, which results in wavelength-specific absorption. This LSPR spectrum is particularly sensitive, and its dependence on nanoparticle size and environment can be used in various ways to design optical sensors. To take advantage of the LSPR spectrum shift that occurs when molecules bind to the nanoparticle, their surfaces can be functionalized to dictate which molecules will bind and trigger a response. For environmental applications, quantum dot surfaces can be modified with antibodies that bind specifically to microorganisms or other pollutants. Spectroscopy can then be used to observe and quantify this spectrum shift, enabling precise detection, potentially on the order of molecules. Similarly, fluorescent semiconducting nanosensors may take advantage of fluorescence resonance energy transfer (FRET) to achieve optical detection. Quantum dots can be used as donors, and will transfer electronic excitation energy when positioned near acceptor molecules, thus losing their fluorescence. These quantum dots can be functionalized to determine which molecules will bind, upon which fluorescence will be restored. Gold nanoparticle-based optical sensors can be used to detect heavy metals very precisely; for example, mercury levels as low as 0.49 nanomolar. This sensing modality takes advantage of FRET, in which the presence of metals inhibits the interaction between quantum dots and gold nanoparticles, and quenches the FRET response. Another potential implementation takes advantage of the size dependence of the LSPR spectrum to achieve ion sensing. In one study, Liu et al. functionalized gold nanoparticles with a Pb2+ sensitive enzyme to produce a lead sensor. Generally, the gold nanoparticles would aggregate as they approached each other, and the change in size would result in a color change. Interactions between the enzyme and Pb2+ ions would inhibit this aggregation, and thus the presence of ions could be detected.
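The usefulness of FRET for such detection schemes comes from its steep distance dependence; below is a minimal sketch of the standard efficiency formula (the 6 nm Förster radius is an assumed, illustrative value for a quantum-dot/acceptor pair):

```python
def fret_efficiency(distance_nm, forster_radius_nm):
    """Energy-transfer efficiency: E = 1 / (1 + (r / R0)^6)."""
    return 1.0 / (1.0 + (distance_nm / forster_radius_nm) ** 6)

R0 = 6.0  # assumed Forster radius, nm
for r in (3.0, 6.0, 12.0):
    print(r, round(fret_efficiency(r, R0), 3))
# 3 nm -> 0.985 (strong quenching), 6 nm -> 0.5,
# 12 nm -> 0.015 (quenching lifted, fluorescence effectively restored)
```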
The main challenge associated with using nanosensors in food and the environment is determining their associated toxicity and overall effect on the environment. Currently, there is insufficient knowledge on how the implementation of nanosensors will affect the soil, plants, and humans in the long-term. This is difficult to fully address because nanoparticle toxicity depends heavily on the type, size, and dosage of the particle as well as environmental variables including pH, temperature, and humidity. To mitigate potential risk, research is being done to manufacture safe, nontoxic nanomaterials, as part of an overall effort towards green nanotechnology.
Healthcare
Nanosensors possess great potential for diagnostic medicine, enabling early identification of disease without reliance on observable symptoms. Ideal nanosensor implementations look to emulate the response of immune cells in the body, incorporating both diagnostic and immune response functionalities, while transmitting data to allow for monitoring of the sensor input and response. However, this model remains a long-term goal, and research is currently focused on the immediate diagnostic capabilities of nanosensors. The intracellular implementation of nanosensors synthesized with biodegradable polymers induces signals that enable real-time monitoring and thus paves the way for advances in drug delivery and treatment.
One example of these nanosensors involves using the fluorescence properties of cadmium selenide quantum dots as sensors to uncover tumors within the body. A downside to the cadmium selenide dots, however, is that they are highly toxic to the body. As a result, researchers are working on developing alternate dots made out of a different, less toxic material while still retaining some of the fluorescence properties. In particular, they have been investigating the particular benefits of zinc sulfide quantum dots which, though they are not quite as fluorescent as cadmium selenide, can be augmented with other metals including manganese and various lanthanide elements. In addition, these newer quantum dots become more fluorescent when they bond to their target cells.
Another application of nanosensors involves using silicon nanowires in IV lines to monitor organ health. The nanowires are sensitive enough to detect trace biomarkers that diffuse into the IV line through blood, which can enable monitoring for kidney or organ failure. These nanowires would allow for continuous biomarker measurement, which provides some benefits in terms of temporal sensitivity over traditional biomarker quantification assays such as ELISA.
Nanosensors can also be used to detect contamination in organ implants. The nanosensor is embedded into the implant and detects contamination in the cells surrounding the implant through an electric signal sent to a clinician or healthcare provider. The nanosensor can detect whether the cells are healthy, inflammatory, or contaminated with bacteria. However, a main drawback arises with long-term use of the implant: tissue grows on top of the sensors, limiting their ability to compress. This impedes the production of electrical charges, thus shortening the lifetime of these nanosensors, as they use the piezoelectric effect to self-power.
Similarly to those used to measure atmospheric pollutants, gold-particle based nanosensors are used to give an early diagnosis for several types of cancer by detecting volatile organic compounds (VOCs) in breath, as tumor growth is associated with peroxidation of the cell membrane. Another cancer-related application, though still at the mouse-probing stage, is the use of peptide-coated nanoparticles as activity-based sensors to detect lung cancer. The two main advantages of using nanoparticles to detect disease are that they allow early-stage detection, since they can detect tumors with sizes on the order of millimeters, and that they provide a cost-effective, easy-to-use, portable, and non-invasive diagnostic tool.
A recent effort towards advancement in nanosensor technology has employed molecular imprinting, a technique used to synthesize polymer matrices that act as receptors in molecular recognition. Analogous to the enzyme-substrate lock-and-key model, molecular imprinting uses template molecules with functional monomers to form polymer matrices with a specific shape corresponding to the target template molecules, thus increasing the selectivity and affinity of the matrices. This technique has enabled nanosensors to detect chemical species. In the field of biotechnology, molecularly imprinted polymers (MIPs) are synthesized receptors that have shown promise as cost-effective alternatives to natural antibodies, in that they are engineered to have high selectivity and affinity. For example, an experiment with a molecularly imprinted sensor containing nanotips with a non-conductive polyphenol nano-coating (PPn coating) showed selective detection of the E7 protein, demonstrating the potential use of these nanosensors in the detection and diagnosis of human papillomavirus, other human pathogens, and toxins. As these examples show, nanosensors using the molecular imprinting technique are capable of ultrasensitive, selective detection of chemical species, because artificially modifying the polymer matrices increases their affinity and selectivity. Although molecularly imprinted polymers provide advantages in selective molecular recognition for nanosensors, the technique itself is relatively recent and challenges remain, such as attenuated signals, detection systems lacking effective transducers, and surfaces lacking efficient detection. Further investigation and research in the field of molecularly imprinted polymers is crucial for the development of highly effective nanosensors.
In order to develop smart health care with nanosensors, a network of nanosensors, often called a nanonetwork, needs to be established to overcome the size and power limitations of individual nanosensors. Nanonetworks not only mitigate the existing challenges but also provide numerous improvements. The cell-level resolution of nanosensors will enable treatments that eliminate side effects and will enable continuous monitoring and reporting of patients' conditions.
Nanonetworks require further study in that nanosensors are different from traditional sensors. The most common communication mechanism for sensor networks is electromagnetic; however, this paradigm is not applicable to nanodevices due to their low range and power. Optical signal transduction has been suggested as an alternative to classical electromagnetic telemetry and has monitoring applications in human bodies. Other suggested mechanisms include bioinspired molecular communications, wired and wireless active transport in molecular communications, Förster energy transfer, and more. It is crucial to build an efficient nanonetwork so that it can be applied in fields such as medical implants, body area networks (BAN), the internet of nano things (IoNT), drug delivery and more. With an adept nanonetwork, bio-implantable nanodevices can provide higher accuracy, resolution, and safety compared to macroscale implants. Body area networks enable sensors and actuators to collect physical and physiological data from the human body to better anticipate diseases, which in turn facilitates treatment. Potential applications of BAN include cardiovascular disease monitoring, insulin management, artificial vision and hearing, and hormonal therapy management. The Internet of Bio-Nano Things (IoBNT) refers to networks of nanodevices that can be accessed through the internet. Development of the IoBNT has paved the way to new treatments and diagnostic techniques. Nanonetworks may also help drug delivery by increasing the localization and circulation time of drugs.
Existing challenges with the aforementioned applications include biocompatibility of the nano implants, physical limitations leading to lack of power and memory storage, and biocompatibility of the transmitter and receiver design of the IoBNT. The nanonetwork concept has numerous areas for improvement: these include developing nanomachines, protocol stack issues, power provisioning techniques, and more.
There are still stringent regulations in place for the development of standards for nanosensors to be used in the medical industry, due to insufficient knowledge of the adverse effects of nanosensors as well as potential cytotoxic effects of nanosensors. Additionally, there can be a high cost of raw materials such as silicon, nanowires, and carbon nanotubes, which prevent commercialization and manufacturing of nanosensors requiring scale-up for implementation. To mitigate the drawback of cost, researchers are looking into manufacturing nanosensors made of more cost-effective materials. There is also a high degree of precision needed to reproducibly manufacture nanosensors, due to their small size and sensitivity to different synthesis techniques, which creates additional technical challenges to be overcome.
See also
Nanotechnology
List of nanotechnology topics
Surface plasmon resonance
References
External links
Weighing the Very Small: 'Nanobalance' Based on Carbon Nanotubes Shows New Application for Nanomechanics, Georgia Tech Research News.
Emerging Technologies and the Environment
Nanotechnology and Societal Transformation
Nanotechnology, Privacy and Shifting Social Conventions
Nanotechnology and Surveillance
Nanotechnology
Nanomedicine | Nanosensor | [
"Materials_science",
"Engineering"
] | 5,798 | [
"Nanomedicine",
"Nanotechnology",
"Materials science"
] |
603,351 | https://en.wikipedia.org/wiki/Project%20Daedalus | Project Daedalus (named after Daedalus, the Greek mythological designer who crafted wings for human flight) was a study conducted between 1973 and 1978 by the British Interplanetary Society to design a plausible uncrewed interstellar probe. Intended mainly as a scientific probe, the design criteria specified that the spacecraft had to use existing or near-future technology and had to be able to reach its destination within a human lifetime. Alan Bond led a team of scientists and engineers who proposed using a fusion rocket to reach Barnard's Star 5.9 light years away. The trip was estimated to take 50 years, but the design was required to be flexible enough that it could be sent to any other target star.
All the papers produced by the study are available in a BIS book, Project Daedalus: Demonstrating the Engineering Feasibility of Interstellar Travel.
Concept
Daedalus would be constructed in Earth orbit and have an initial mass of 54,000 tonnes including 50,000 tonnes of fuel and 500 tonnes of scientific payload. Daedalus was to be a two-stage spacecraft. The first stage would operate for two years, taking the spacecraft to 7.1% of light speed (0.071 c), and then after it was jettisoned, the second stage would fire for 1.8 years, taking the spacecraft up to about 12% of light speed (0.12 c), before being shut down for a 46-year cruise period. Due to the extreme temperature range of operation required, from near absolute zero to 1600 K, the engine bells and support structure would be made of molybdenum alloyed with titanium, zirconium, and carbon, which retains strength even at cryogenic temperatures. A major stimulus for the project was Friedwardt Winterberg's inertial confinement fusion drive concept, for which he received the Hermann Oberth gold medal award.
This velocity is well beyond the capabilities of chemical rockets, or even the type of nuclear pulse propulsion studied during Project Orion. According to Dr. Tony Martin, controlled-fusion engines and nuclear-electric systems have very low thrust, and the equipment needed to convert nuclear energy into electricity has a large mass, resulting in a small acceleration that would take a century to reach the desired speed; thermodynamic nuclear engines of the NERVA type require a great quantity of fuel; photon rockets would have to generate power at a rate of 3 W per kg of vehicle mass and require mirrors with an absorptivity of less than 1 part in 10^6; and the interstellar ramjet's problems are the tenuous interstellar medium, with a density of about 1 atom/cm^3, the large diameter of the collecting funnel, and the high power required for its electric field. Thus the only suitable propulsion method for the project was thermonuclear pulse propulsion.
Daedalus would be propelled by a fusion rocket using pellets of a deuterium/helium-3 mix that would be ignited in the reaction chamber by inertial confinement using electron beams. The electron beam system would be powered by a set of induction coils trapping energy from the plasma exhaust stream. 250 pellets would be detonated per second, and the resulting plasma would be directed by a magnetic nozzle. The computed burn-up fractions for the fusion fuels were 0.175 and 0.133, producing exhaust velocities of 10,600 km/s and 9,210 km/s respectively. Due to the scarcity of helium-3 on Earth, it was to be mined from the atmosphere of Jupiter by large hot-air-balloon-supported robotic factories over a 20-year period, or from a less distant source, such as the Moon.
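As a plausibility check, the quoted stage velocities and exhaust velocities can be related through the classical Tsiolkovsky rocket equation. This is a non-relativistic sketch (at 12% of light speed relativistic corrections are small), and the resulting mass ratios are an illustration, not figures from the study:

```python
import math

C_KM_S = 299_792.458  # speed of light in km/s

def mass_ratio(delta_v_km_s, v_exhaust_km_s):
    """Tsiolkovsky rocket equation: m0/m1 = exp(delta_v / v_e)."""
    return math.exp(delta_v_km_s / v_exhaust_km_s)

# First stage: rest to 0.071c with a 10,600 km/s exhaust velocity.
print(mass_ratio(0.071 * C_KM_S, 10_600.0))          # ~7.5
# Second stage: 0.071c to 0.12c with a 9,210 km/s exhaust velocity.
print(mass_ratio((0.12 - 0.071) * C_KM_S, 9_210.0))  # ~4.9
```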
The second stage would have two 5-metre optical telescopes and two 20-metre radio telescopes. About 25 years after launch these telescopes would begin examining the area around Barnard's Star to learn more about any accompanying planets. This information would be sent back to Earth, using the 40-metre diameter second stage engine bell as a communications dish, and targets of interest would be selected. Since the spacecraft would not decelerate upon reaching Barnard's Star, Daedalus would carry 18 autonomous sub-probes that would be launched between 7.2 and 1.8 years before the main craft entered the target system. These sub-probes would be propelled by nuclear-powered ion drives and would carry cameras, spectrometers, and other sensory equipment. The sub-probes would fly past their targets, still travelling at 12% of the speed of light, and transmit their findings back to the Daedalus second-stage mothership for relay back to Earth.
The ship's payload bay containing its sub-probes, telescopes, and other equipment would be protected from the interstellar medium during transit by a beryllium disc, up to 7 mm thick, weighing up to 50 tonnes. This erosion shield would be made from beryllium due to its lightness and high latent heat of vaporisation. Larger obstacles that might be encountered while passing through the target system would be dispersed by an artificially generated cloud of particles, ejected by support vehicles called dust bugs about 200 km ahead of the vehicle. The spacecraft would carry a number of robot wardens capable of autonomously repairing damage or malfunctions.
Specifications
Overall length: 190 metres
Payload mass: 450 tonnes
Variants
A quantitative engineering analysis of a self-replicating variation on Project Daedalus was published in 1980 by Robert Freitas. The non-replicating design was modified to include all subsystems necessary for self-replication: the probe would deliver a seed factory, with a mass of about 443 metric tons, to a distant site; the seed factory would replicate many copies of itself on-site to increase its total manufacturing capacity; and the resulting automated industrial complex would then construct more probes, each with a seed factory on board, over a 1,000-year period. Each REPRO would weigh over 10 million tons, owing to the extra fuel needed to decelerate from 12% of lightspeed.
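The deceleration penalty follows from the rocket equation (an illustrative estimate, not a figure from the Freitas paper): a probe that must both reach and then cancel a cruise speed $\Delta v$ needs the one-way mass-ratio factor twice over,

$$\frac{m_0}{m_f} = e^{\Delta v / v_e} \cdot e^{\Delta v / v_e} = e^{2\Delta v / v_e},$$

so the initial mass grows roughly as the square of the one-way ratio, which is why a decelerating REPRO is so much heavier than the flyby Daedalus.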
Another possibility is to equip the Daedalus with a magnetic sail similar to the magnetic scoop on a Bussard ramjet to use the destination star heliosphere as a brake, making carrying deceleration fuel unnecessary, allowing a much more in-depth study of the star system chosen.
See also
Breakthrough Starshot
Project Icarus
Project Longshot
Enzmann starship
Further reading
References
External links
Project Daedalus, The Encyclopedia of Astrobiology Astronomy and Spaceflight
Starship Daedalus
Project Daedalus – Origins
The Daedalus Starship
Renderings of the Daedalus Starship to scale
Project Daedalus
Project Daedalus: The Propulsion System Part 1; Theoretical considerations and calculations. 2. Review of Advanced Propulsion Systems
Bond, A.; Martin, A. R. (1978). "Project Daedalus". Journal of the British Interplanetary Society Supplement, pp. S5–S7. Bibliographic Code: 1978JBIS...31S...5B
British Interplanetary Society: Project Daedalus, video rendering by Hazegrayart
Hypothetical spacecraft
Interstellar travel
Nuclear spacecraft propulsion | Project Daedalus | [
"Astronomy",
"Technology"
] | 1,506 | [
"Hypothetical spacecraft",
"Astronomical hypotheses",
"Interstellar travel",
"Exploratory engineering"
] |
603,468 | https://en.wikipedia.org/wiki/Coniine | Coniine is a poisonous chemical compound, an alkaloid present in and isolable from poison hemlock (Conium maculatum), where its presence has been a source of significant economic, medical, and historico-cultural interest; coniine is also produced by the yellow pitcher plant (Sarracenia flava), and fool's parsley (Aethusa cynapium). Its ingestion and extended exposure are toxic to humans and all classes of livestock; its mechanism of poisoning involves disruption of the central nervous system, with death caused by respiratory paralysis. The biosynthesis of coniine contains as its penultimate step the non-enzymatic cyclisation of 5-oxooctylamine to γ-coniceine, a Schiff base differing from coniine only by its carbon-nitrogen double bond in the ring. This pathway results in natural coniine that is a mixture—a racemate—composed of two enantiomers, the stereoisomers (S)-(+)-coniine and (R)-(−)-coniine, depending on the direction taken by the chain that branches from the ring. Both enantiomers are toxic, with the (R)-enantiomer being the more biologically active and toxic of the two in general. Coniine holds a place in organic chemistry history as being the first of the important class of alkaloids to be synthesized, by Albert Ladenburg in 1886, and it has been synthesized in the laboratory in a number of unique ways through to modern times.
Hemlock poisoning has been a periodic human concern, a regular veterinary concern, and has had significant occurrences in human and cultural history. Notably, in 399 BC, Socrates was sentenced to death by drinking a coniine-containing mixture of poison hemlock.
Natural origins
Poison hemlock (Conium maculatum) contains highly toxic amounts of coniine. Its presence on farmland is an issue for livestock farmers because animals will eat it if they are not well fed or the hemlock is mixed in with pasture grass. The coniine is present in Conium maculatum as a mixture of the R-(−)- and S-(+)-enantiomers.
Coniine is also found in Sarracenia flava, the yellow pitcher plant. The yellow pitcher plant is a carnivorous plant endemic to the southeastern United States. The plant uses a mixture of sugar and coniine to simultaneously attract and poison insects, which then fall into a digestive tube. Coniine is also found in Aethusa cynapium, commonly known as fool's parsley.
History of natural isolates
The history of coniine is understandably tied to the poison hemlock plant, since the natural product was not synthesizable until the 1880s. Jews in the Middle East were poisoned by coniine after consuming quail that had fed on hemlock seeds, and Greeks on the island of Lesbos who ate such quail suffered the same poisoning, which caused myoglobinuria and acute kidney injury. The most famous hemlock poisoning occurred in 399 BCE, when the philosopher Socrates is believed to have consumed a liquid infused with hemlock to carry out his death sentence, having been convicted of impiety toward the gods and the corruption of youth. Hemlock juice was often used to execute criminals in ancient Greece.
Hemlock has had a limited medical use throughout history. The Greeks used it not just as capital punishment, but also as an antispasmodic and treatment for arthritis. Books from the 10th century attest to medical use by the Anglo-Saxons. In the Middle Ages it was believed that hemlock could be used to cure rabies; in later European times it came to be associated with flying ointments in witchcraft. Native Americans used hemlock extract as arrow poison.
Yellow pitcher plant, or Sarracenia flava, contains coniine. Aethusa cynapium contains cynopine, which is similar to coniine.
Pharmacology and toxicology
The (R)-(−) enantiomer of coniine is the more biologically active, at least in one system (TE-671 cells expressing human fetal nicotinic neuromuscular receptors), and in mouse bioassay, the same enantiomer and the racemic mixture are about two-fold more toxic than the (S)-(+) enantiomer (see below).
Coniine, as racemate or as pure enantiomer, begins by binding and stimulating the nicotinic receptor on the post-synaptic membrane of the neuromuscular junction. The subsequent depolarization results in nicotinic toxicity; as coniine stays bound to the receptor, the nerve stays depolarized, inactivating it. This results, systemically, in a flaccid paralysis, an action similar to that of succinylcholine since they are both depolarizing neuromuscular blockers. Symptoms of paralysis generally occur within a half-hour, although death may take several hours. The central nervous system is not affected: the person remains conscious and aware until respiratory paralysis results in cessation of breathing. The flaccid, muscular paralysis is an ascending paralysis, lower limbs being first affected. The person may have a hypoxic convulsion just prior to death, disguised by the muscular paralysis such that the person may just weakly shudder. Cause of death is lack of oxygen to the brain and heart as a consequence of respiratory paralysis, so that a poisoned person may recover if artificial ventilation can be maintained until the toxin is removed from the victim's system.
The LD50 values (in mouse, administered intravenously) for the R-(−) and S-(+) enantiomers and the racemate are approximately 7, 12, and 8 milligrams per kilogram, respectively.
Chemical properties
(±)-Coniine was first isolated by Giesecke, but the formula was suggested by Blyth and definitively established by Hofmann.
D-(S)-Coniine has since been determined to be a colorless alkaline liquid with a penetrating odour and a burning taste; it has density d⁰ 0.8626 and d¹⁹ 0.8438, refractive index n_D (23 °C) 1.4505, and is dextrorotatory, [α]D (19 °C) +15.7° (see related comments under the Specific rotation section below). L-(R)-Coniine has [α]D (21 °C) −15° and in other respects resembles its D-isomer, but the salts have slightly different melting points; the platinichloride has mp 160 °C (Löffler and Friedrich report 175 °C), the aurichloride mp 59 °C.
Solubility
Coniine is slightly soluble (1 in 90) in cold water, less so in hot water, so that a clear cold solution becomes turbid when warmed. On the other hand, the base dissolves about 25% of water at room temperature. It mixes with alcohol in all proportions, is readily soluble in ether and most organic solvents. Coniine dissolves in carbon disulfide, forming a complex thiocarbamate.
Crystallization
Coniine solidifies into a soft crystalline mass at −2 °C. It slowly oxidizes in the air. The salts crystallize well and are soluble in water or alcohol. The hydrochloride, B•HCl, crystallizes from water in rhombs, mp. 220 °C, [α]20°D +10.1°; the hydrobromide, in needles, mp. 211 °C, and the D-acid tartrate, B•C4H6O6•2 H2O, in rhombic crystals, mp. 54 °C. The platinichloride, (B•HCl)2•PtCl4•H2O, separates from concentrated solution as an oil, which solidifies to a mass of orange-yellow crystals, mp. 175 °C (dry). The aurichloride, B•HAuCl4, crystallizes on standing, mp. 77 °C. The picrate forms small yellow needles, mp. 75 °C, from hot water. The 2,4-dinitrobenzoyl- and 3,5-dinitrobenzoyl-derivates have mps. 139.0–139.5 °C and 108–9 °C respectively. The precipitate afforded by potassium cadmium iodide solution is crystalline, mp. 118 °C, while that given by nicotine with this reagent is amorphous.
Color changes
Coniine gives no coloration with sulfuric or nitric acid. Sodium nitroprusside gives a deep red color, which disappears on warming, but reappears on cooling, and is changed to blue or violet by aldehydes.
Specific rotation
The stereochemical composition of "coniine" is a matter of some importance, since its two enantiomers do not have identical biological properties, and many of the older pharmacological studies on this compound were carried out using the naturally-occurring isomeric mixture. S-(+)-Coniine has a specific rotation, [α]D, of +8.4° (c = 4.0, in CHCl3). These authors note that Ladenburg's value, +15°, is for a "neat", i.e. undiluted, sample. A similarly high value of +16° for the [α]D of "coniine" is given, without explicit citation of the source, in The Merck Index. The value of +7.7° (c = 4.0, CHCl3) for synthetic S-(+)-coniine and -7.9° (c = 0.5, CHCl3) for synthetic R-(−)-coniine is given by other chemists. The hydrochloride salts of the (S)-(+) and (R)-(−) enantiomers of coniine have values of [α]D of +4.6° and -5.2°, respectively (c = 0.5, in methanol).
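For comparison between these reports, recall the defining formula for specific rotation (standard polarimetry, not specific to coniine):

$$[\alpha]_D^T = \frac{\alpha}{l \cdot c},$$

where $\alpha$ is the observed rotation in degrees, $l$ the path length in decimetres, and $c$ the concentration in g/mL. At $c = 4.0$ g per 100 mL in a 1 dm tube, a specific rotation of $+8.4°$ corresponds to an observed rotation of only $8.4 \times 1 \times 0.040 \approx +0.34°$, which is one reason values measured at different concentrations, or on a neat sample, can differ noticeably.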
Synthesis
The original synthesis of coniine (shown below) was performed by Ladenburg in 1886. Ladenburg heated N-methylpyridinium iodide to 250 °C to obtain 2-methylpyridine. He then performed a Knoevenagel condensation with acetaldehyde in the presence of anhydrous zinc chloride to yield 2-propenylpyridine. In fact, Ladenburg used paraldehyde, a cyclic trimer of acetaldehyde that readily forms acetaldehyde upon heating. Finally, the 2-propenylpyridine was reduced with metallic sodium in ethanol to provide racemic (±)-coniine. Fractional crystallisation of the racemic coniine with (+)-tartaric acid yielded enantiopure coniine.
The scheme proposed by Ladenburg gave poor yields, so the quest for alternative routes remained open. A slightly better yield is observed if 2-methylpyridine and acetaldehyde are heated in a sealed tube with hydrochloric acid for 10 hours. A mixture of 2-propenylpyridine and 2-chloropropylpyridine is formed and is subsequently reduced by sodium in ethanol to give rac-coniine. Note: although the scheme below shows a single enantiomer of coniine, the final reaction produces a racemic mixture that is then separated.
In 1907, another route with better yield was proposed. First, 2-(2'-hydroxypropyl)pyridine is reduced with phosphorus and fuming hydroiodic acid at 125 °C. Second, the product is treated with zinc dust and water. Finally, the product of the second step is treated with sodium in ethanol. Note: although the graphic below shows a single enantiomer of coniine, this reaction produces a racemic mixture that is then purified and separated.
A number of other syntheses of coniine have been effected, of which that of Diels and Alder is of special interest. The initial adduct of pyridine and dimethyl acetylenedicarboxylate is tetramethyl quinolizine-1,2,3,4-tetracarboxylate, which on oxidation with dilute nitric acid is converted into trimethyl indolizine-tricarboxylate. This, on hydrolysis and decarboxylation, furnishes indolizine, the octahydro-derivative of which, also known as octahydropyrrocoline, is converted by the cyanogen bromide method successively into the bromocyanamide, the cyanamide, and rac-coniine. A synthesis of the alkaloid starting from indolizine (pyrrocoline) is described by Ochiai and Tsuda.
The preparation of L-(R)-coniine by the reduction of β-coniceine (L-propenylpiperidine) by Löffler and Friedrich provides a means of converting conhydrine to L-(R)-coniine. Hess and Eichel reported, incorrectly, that pelletierine was the aldehyde (β-2-piperidyl-propaldehyde) corresponding to coniine, and that it yielded rac-coniine when its hydrazone was heated with sodium ethoxide in ethanol at 156–170 °C. According to these authors, D-(S)-coniine is rendered almost optically inactive when heated with barium hydroxide and alcohol at 180–230 °C. Leithe has shown, by observation of the optical rotation of (+)-pipecolic acid (piperidine-2-carboxylic acid) and some of its derivatives under varying conditions, that it must belong to the D-series of amino acids.
Currently, coniine and many other alkaloids can be synthesized stereoselectively. For example, a Pd-catalyzed 1,3-chirality-transfer reaction can stereospecifically transform a single enantiomer of an allyl alcohol into a cyclic structure (in this case a piperidine). In this way, starting from the (S)-alcohol, the (S)-enantiomer of coniine is obtained, and vice versa. Remarkably, the separation of the racemic alcohol into its enantiomers is done with the help of Candida antarctica lipase.
Biosynthesis
The biosynthesis of coniine is still being investigated, but much of the pathway has been elucidated. Although the polyketide synthase that forms coniine was originally thought to use 4 acetyl groups as feed compounds, coniine is in fact derived from two malonyl-CoA and one butyryl-CoA, which are themselves derived in the usual way from acetyl-CoA.
Elongation of butyryl-CoA with the two malonyl-CoA units forms 5-ketooctanal. 5-Ketooctanal then undergoes transamination using alanine:5-keto-octanal aminotransferase. The amine then spontaneously cyclizes and is dehydrated to form the coniine precursor γ-coniceine. This is then reduced by an NADPH-dependent γ-coniceine reductase to form coniine.
In popular culture
Coniine is the murder weapon in Agatha Christie's mystery novel Five Little Pigs.
The R and S stereoisomers of 2-propylpiperidine (i.e., coniine) appear as a neurotoxin produced by a slug-like lifeform in The Expanse. In the show, the toxin causes almost instant death upon skin contact.
References
Further reading
External links
Information on hemlock from the University of Bristol
Mitch Tucker student work, Hemlock and Death of Socrates, at the University of Oklahoma
Neurotoxins
Piperidine alkaloids
Nicotinic antagonists
Plant toxins
2-Piperidinyl compounds | Coniine | [
"Chemistry"
] | 3,441 | [
"Chemical ecology",
"Plant toxins",
"Piperidine alkaloids",
"Alkaloids by chemical classification",
"Neurochemistry",
"Neurotoxins"
] |
603,690 | https://en.wikipedia.org/wiki/Voltage%20clamp | The voltage clamp is an experimental method used by electrophysiologists to measure the ion currents through the membranes of excitable cells, such as neurons, while holding the membrane voltage at a set level. A basic voltage clamp will iteratively measure the membrane potential, and then change the membrane potential (voltage) to a desired value by adding the necessary current. This "clamps" the cell membrane at a desired constant voltage, allowing the voltage clamp to record what currents are delivered. Because the currents applied to the cell must be equal to (and opposite in charge to) the current going across the cell membrane at the set voltage, the recorded currents indicate how the cell reacts to changes in membrane potential. Cell membranes of excitable cells contain many different kinds of ion channels, some of which are voltage-gated. The voltage clamp allows the membrane voltage to be manipulated independently of the ionic currents, allowing the current–voltage relationships of membrane channels to be studied.
History
The concept of the voltage clamp is attributed to Kenneth Cole and George Marmont in the spring of 1947. They inserted an internal electrode into the giant axon of a squid and began to apply a current. Cole discovered that it was possible to use two electrodes and a feedback circuit to keep the cell's membrane potential at a level set by the experimenter.
Cole developed the voltage clamp technique before the era of microelectrodes, so his two electrodes consisted of fine wires twisted around an insulating rod. Because this type of electrode could be inserted into only the largest cells, early electrophysiological experiments were conducted almost exclusively on squid axons.
Squids squirt jets of water when they need to move quickly, as when escaping a predator. To make this escape as fast as possible, they have an axon that can reach 1 mm in diameter (signals propagate more quickly down large axons). The squid giant axon was the first preparation that could be used to voltage clamp a transmembrane current, and it was the basis of Hodgkin and Huxley's pioneering experiments on the properties of the action potential.
Alan Hodgkin realized that, to understand ion flux across the membrane, it was necessary to eliminate differences in membrane potential. Using experiments with the voltage clamp, Hodgkin and Andrew Huxley published 5 papers in the summer of 1952 describing how ionic currents give rise to the action potential. The final paper proposed the Hodgkin–Huxley model, which mathematically describes the action potential. The use of voltage clamps in their experiments to study and model the action potential in detail laid the foundation for electrophysiology, work for which they shared the 1963 Nobel Prize in Physiology or Medicine.
Technique
The voltage clamp is a current generator. Transmembrane voltage is recorded through a "voltage electrode", relative to ground, and a "current electrode" passes current into the cell. The experimenter sets a "holding voltage", or "command potential", and the voltage clamp uses negative feedback to maintain the cell at this voltage. The electrodes are connected to an amplifier, which measures membrane potential and feeds the signal into a feedback amplifier. This amplifier also gets an input from the signal generator that determines the command potential, and it subtracts the membrane potential from the command potential (Vcommand – Vm), magnifies any difference, and sends an output to the current electrode. Whenever the cell deviates from the holding voltage, the operational amplifier generates an "error signal", that is the difference between the command potential and the actual voltage of the cell. The feedback circuit passes current into the cell to reduce the error signal to zero. Thus, the clamp circuit produces a current equal and opposite to the ionic current.
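The feedback loop can be illustrated with a minimal numerical sketch (a passive single-compartment membrane clamped by proportional feedback; all parameter values below are illustrative assumptions, not taken from any particular amplifier):

    # Minimal voltage-clamp feedback loop on a passive membrane model.
    C = 100e-12        # membrane capacitance (100 pF)
    R = 100e6          # membrane resistance (100 MOhm)
    E_REST = -70e-3    # resting potential (V)
    GAIN = 1e-6        # clamp gain (amps per volt of error)

    V = E_REST         # membrane potential
    V_CMD = -40e-3     # command (holding) potential
    DT = 1e-5          # integration time step (s)

    for _ in range(2000):
        error = V_CMD - V            # error signal (command minus measured)
        i_clamp = GAIN * error       # current injected by the feedback amplifier
        # passive membrane: C dV/dt = -(V - E_rest)/R + injected current
        V += (-(V - E_REST) / R + i_clamp) * DT / C

    # At steady state the clamp current mirrors the membrane current at V_CMD,
    # (V_CMD - E_REST)/R = 0.3 nA, with a small (~0.3 mV) residual voltage error.
    print(f"V = {V*1e3:.2f} mV, clamp current = {i_clamp*1e9:.2f} nA")

The sketch shows the essential point of the technique: the recorded quantity is the injected current, which at steady state is equal and opposite to the ionic current crossing the membrane at the holding potential.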
Variations of the voltage clamp technique
Two-electrode voltage clamp using microelectrodes
The two-electrode voltage clamp (TEVC) technique is used to study properties of membrane proteins, especially ion channels. Researchers use this method most commonly to investigate membrane structures expressed in Xenopus oocytes. The large size of these oocytes allows for easy handling and manipulability.
The TEVC method utilizes two low-resistance pipettes, one sensing voltage and the other injecting current. The microelectrodes are filled with conductive solution and inserted into the cell to artificially control membrane potential. The membrane acts as a dielectric as well as a resistor, while the fluids on either side of the membrane function as capacitors. The microelectrodes compare the membrane potential against a command voltage, giving an accurate reproduction of the currents flowing across the membrane. Current readings can be used to analyze the electrical response of the cell to different applications.
This technique is favored over single-microelectrode clamp or other voltage clamp techniques when conditions call for resolving large currents. The high current-passing capacity of the two-electrode clamp makes it possible to clamp large currents that are impossible to control with single-electrode patch techniques. The two-electrode system is also desirable for its fast clamp settling time and low noise. However, TEVC is limited in use with regard to cell size. It is effective in larger-diameter oocytes, but more difficult to use with small cells. Additionally, TEVC method is limited in that the transmitter of current must be contained in the pipette. It is not possible to manipulate the intracellular fluid while clamping, which is possible using patch clamp techniques. Another disadvantage involves "space clamp" issues. Cole's voltage clamp used a long wire that clamped the squid axon uniformly along its entire length. TEVC microelectrodes can provide only a spatial point source of current that may not uniformly affect all parts of an irregularly shaped cell.
Dual-cell voltage clamp
The dual-cell voltage clamp technique is a specialized variation of the two electrode voltage clamp, and is only used in the study of gap junction channels. Gap junctions are pores that directly link two cells through which ions and small molecules flow freely. When two cells in which gap junction proteins, typically connexins or innexins, are expressed, either endogenously or via injection of mRNA, a junction channel will form between the cells. Since two cells are present in the system, two sets of electrodes are used. A recording electrode and a current injecting electrode are inserted into each cell, and each cell is clamped individually (each set of electrodes is attached to a separate apparatus, and integration of data is performed by computer). To record junctional conductance, the current is varied in the first cell while the recording electrode in the second cell records any changes in Vm for the second cell only. (The process can be reversed with the stimulus occurring in the second cell and recording occurring in the first cell.) Since no variation in current is being induced by the electrode in the recorded cell, any change in voltage must be induced by current crossing into the recorded cell, through the gap junction channels, from the cell in which the current was varied.
Single-electrode voltage clamp
This category describes a set of techniques in which one electrode is used for voltage clamp. Continuous single-electrode clamp (SEVC-c) technique is often used with patch-clamp recording. Discontinuous single-electrode voltage-clamp (SEVC-d) technique is used with penetrating intracellular recording. This single electrode carries out the functions of both current injection and voltage recording.
Continuous single-electrode clamp (SEVC-c)
The "patch-clamp" technique allows the study of individual ion channels. It uses an electrode with a relatively large tip (> 1 micrometer) that has a smooth surface (rather than a sharp tip). This is a "patch-clamp electrode" (as distinct from a "sharp electrode" used to impale cells). This electrode is pressed against a cell membrane and suction is applied to pull the cell's membrane inside the electrode tip. The suction causes the cell to form a tight seal with the electrode (a "gigaohm seal", as the resistance is more than a gigaohm).
SEVC-c has the advantage of permitting recordings from small cells that would be impossible to impale with two electrodes. However:
Microelectrodes are imperfect conductors; in general, they have a resistance of more than a million ohms. They rectify (i.e., change their resistance with voltage, often in an irregular manner), and they sometimes have unstable resistance if clogged by cell contents. Thus, they will not faithfully record the voltage of the cell, especially when it is changing quickly, nor will they faithfully pass current.
Voltage and current errors: SEVC-c circuitry does not actually measure the voltage of the cell being clamped (as does a two-electrode clamp). The patch-clamp amplifier is like a two-electrode clamp, except that the voltage-measuring and current-passing circuits are connected (in the two-electrode clamp, they are connected through the cell). The electrode is attached to a wire that contacts the current/voltage loop inside the amplifier. Thus, the electrode has only an indirect influence on the feedback circuit. The amplifier reads only the voltage at the top of the electrode and feeds back current to compensate. But, if the electrode is an imperfect conductor, the clamp circuitry has only a distorted view of the membrane potential. Likewise, when the circuit passes back current to compensate for that (distorted) voltage, the current will be distorted by the electrode before it reaches the cell. To compensate for this, the electrophysiologist uses the lowest-resistance electrode possible, makes sure that the electrode characteristics do not change during an experiment (so the errors will be constant), and avoids recording currents with kinetics likely to be too fast for the clamp to follow accurately. The accuracy of SEVC-c increases as the voltage changes being clamped become slower and smaller.
Series resistance errors: The currents passed to the cell must go to ground to complete the circuit. The voltages are recorded by the amplifier relative to ground. When a cell is clamped at its natural resting potential, there is no problem; the clamp is not passing current and the voltage is being generated only by the cell. But, when clamping at a different potential, series resistance errors become a concern; the cell will pass current across its membrane in an attempt to return to its natural resting potential. The clamp amplifier opposes this by passing current to maintain the holding potential. A problem arises because the electrode is between the amplifier and the cell; i.e., the electrode is in series with the resistor that is the cell's membrane. Thus, when passing current through the electrode and the cell, Ohm's law tells us that a voltage will form across both the cell's and the electrode's resistance. As these resistors are in series, the voltage drops will add. If the electrode and the cell membrane have equal resistances (which they usually do not), and if the experimenter commands a 40 mV change from the resting potential, the amplifier will pass enough current until it reads that it has achieved that 40 mV change. However, in this example, half of that voltage drop is across the electrode. The experimenter thinks he or she has moved the cell voltage by 40 mV, but has moved it only by 20 mV. The difference is the "series resistance error". Modern patch-clamp amplifiers have circuitry to compensate for this error, but these compensate for only 70–80% of it. The electrophysiologist can further reduce the error by recording at or near the cell's natural resting potential, and by using as low a resistance electrode as possible.
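The size of this error follows from treating the electrode and membrane as a voltage divider (a worked restatement of the example above, with $R_s$ the series electrode resistance and $R_m$ the membrane resistance):

$$V_m = V_{\text{cmd}} \cdot \frac{R_m}{R_m + R_s}.$$

With $R_s = R_m$, a commanded 40 mV step moves the membrane only $40 \times \tfrac{1}{2} = 20$ mV; with 80% series-resistance compensation, the effective series resistance drops to $0.2 R_s$, and the membrane sees $40 \times \frac{R_m}{R_m + 0.2 R_m} \approx 33$ mV.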
Capacitance errors. Microelectrodes are capacitors, and are particularly troublesome because they are non-linear. The capacitance arises because the electrolyte inside the electrode is separated by an insulator (glass) from the solution outside. This is, by definition and function, a capacitor. Worse, as the thickness of the glass changes the farther you get from the tip, the time constant of the capacitor will vary. This produces a distorted record of membrane voltage or current whenever they are changing. Amplifiers can compensate for this, but not entirely because the capacitance has many time-constants. The experimenter can reduce the problem by keeping the cell's bathing solution shallow (exposing less glass surface to liquid) and by coating the electrode with silicone, resin, paint, or another substance that will increase the distance between the inside and outside solutions.
Space clamp errors. A single electrode is a point source of current. In distant parts of the cell, the current passed through the electrode will be less influential than at nearby parts of the cell. This is particularly a problem when recording from neurons with elaborate dendritic structures. There is nothing one can do about space clamp errors except to temper the conclusions of the experiment.
Discontinuous single-electrode voltage-clamp (SEVC-d)
The discontinuous single-electrode voltage clamp (SEVC-d) has some advantages over SEVC-c for whole-cell recording. In this technique, a different approach is taken for passing current and recording voltage. A SEVC-d amplifier operates on a "time-sharing" basis, so the electrode regularly and frequently switches between passing current and measuring voltage. In effect, there are two electrodes, but each is in operation for only half of the time it is on. The oscillation between the two functions of the single electrode is termed a duty cycle. During each cycle, the amplifier measures the membrane potential and compares it with the holding potential. An operational amplifier measures the difference and generates an error signal. This current is a mirror image of the current generated by the cell. The amplifier outputs feature sample-and-hold circuits, so each briefly sampled voltage is then held on the output until the next measurement in the next cycle. To be specific, the amplifier measures voltage in the first few microseconds of the cycle, generates the error signal, and spends the rest of the cycle passing current to reduce that error. At the start of the next cycle, voltage is measured again, a new error signal is generated, current is passed, and so on. The experimenter sets the cycle length, and it is possible to sample with periods as low as about 15 microseconds, corresponding to a 67 kHz switching frequency. Switching frequencies lower than about 10 kHz are not sufficient when working with action potentials that are less than 1 millisecond wide. Note that not all discontinuous voltage-clamp amplifiers support switching frequencies higher than 10 kHz.
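To connect these figures (simple arithmetic, added for illustration): the switching frequency is the reciprocal of the cycle period, $f = 1/T$, so $T = 15\ \mu\text{s}$ gives $f = 1/(15 \times 10^{-6}\ \text{s}) \approx 67\ \text{kHz}$, while $f = 10\ \text{kHz}$ gives $T = 100\ \mu\text{s}$, meaning an action potential lasting under 1 ms would be sampled fewer than ten times.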
For this to work, the cell capacitance must be higher than the electrode capacitance by at least an order of magnitude. Capacitance slows the kinetics (the rise and fall times) of currents. If the electrode capacitance is much less than that of the cell, then when current is passed through the electrode, the electrode voltage will change faster than the cell voltage. Thus, when current is injected and then turned off (at the end of a duty cycle), the electrode voltage will decay faster than the cell voltage. As soon as the electrode voltage asymptotes to the cell voltage, the voltage can be sampled (again) and the next amount of charge applied. Thus, the frequency of the duty cycle is limited to the speed at which the electrode voltage rises and decays while passing current. The lower the electrode capacitance the faster one can cycle.
SEVC-d has a major advantage over SEVC-c in allowing the experimenter to measure membrane potential, and, as it obviates passing current and measuring voltage at the same time, there is never a series resistance error. The main disadvantages are that the time resolution is limited and the amplifier is unstable. If it passes too much current, so that the goal voltage is over-shot, it reverses the polarity of the current in the next duty cycle. This causes it to undershoot the target voltage, so the next cycle reverses the polarity of the injected current again. This error can grow with each cycle until the amplifier oscillates out of control ("ringing"); this usually results in the destruction of the cell being recorded. The investigator wants a short duty cycle to improve temporal resolution; the amplifier has adjustable compensators that will make the electrode voltage decay faster, but, if these are set too high, the amplifier will ring, so the investigator is always trying to "tune" the amplifier as close to the edge of uncontrolled oscillation as possible, in which case small changes in recording conditions can cause ringing. There are two solutions: to "back off" the amplifier settings into a safe range, or to be alert for signs that the amplifier is about to ring.
Mathematical modeling
From the point of view of control theory, the voltage clamp experiment can be described in terms of the application of a high-gain output feedback control law to the neuronal membrane. Mathematically, the membrane voltage can be modeled by a conductance-based model with an input given by the applied current $I$ and an output given by the membrane voltage $V$. Hodgkin and Huxley's original conductance-based model, which represents a neuronal membrane containing sodium and potassium ion currents, as well as a leak current, is given by the system of ordinary differential equations

$$C \dot{V} = -\bar{g}_{\mathrm{Na}}\, m^3 h\, (V - V_{\mathrm{Na}}) - \bar{g}_{\mathrm{K}}\, n^4 (V - V_{\mathrm{K}}) - g_{\mathrm{L}} (V - V_{\mathrm{L}}) + I$$
$$\dot{m} = \alpha_m(V)(1 - m) - \beta_m(V)\, m$$
$$\dot{h} = \alpha_h(V)(1 - h) - \beta_h(V)\, h$$
$$\dot{n} = \alpha_n(V)(1 - n) - \beta_n(V)\, n$$

where $C$ is the membrane capacitance, $\bar{g}_{\mathrm{Na}}$, $\bar{g}_{\mathrm{K}}$, and $g_{\mathrm{L}}$ are maximal conductances, $V_{\mathrm{Na}}$, $V_{\mathrm{K}}$, and $V_{\mathrm{L}}$ are reversal potentials, $\alpha_x$ and $\beta_x$ are ion channel voltage-dependent rate constants, and the state variables $m$, $h$, and $n$ are ion channel gating variables.
It is possible to rigorously show that the feedback law

$$I = k\,(V_{\mathrm{ref}} - V)$$

drives the membrane voltage $V$ arbitrarily close to the reference voltage $V_{\mathrm{ref}}$ as the gain $k$ is increased to an arbitrarily large value. This fact, which is by no means a general property of dynamical systems (a high gain can, in general, lead to instability), is a consequence of the structure and the properties of the conductance-based model above. In particular, the dynamics of each gating variable $x \in \{m, h, n\}$, which are driven by $V$, verify the strong stability property of exponential contraction.
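The effect of the gain can be seen in the simplest setting (a passive membrane with a single leak conductance, used here purely as an illustration): substituting the feedback law into the membrane equation gives

$$C \dot{V} = -g_{\mathrm{L}}(V - V_{\mathrm{L}}) + k(V_{\mathrm{ref}} - V),$$

whose steady state

$$V_\infty = \frac{g_{\mathrm{L}} V_{\mathrm{L}} + k V_{\mathrm{ref}}}{g_{\mathrm{L}} + k} \longrightarrow V_{\mathrm{ref}} \quad (k \to \infty)$$

approaches the reference voltage as the gain grows, with the residual error scaling as $g_{\mathrm{L}}(V_{\mathrm{L}} - V_{\mathrm{ref}})/(g_{\mathrm{L}} + k)$.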
References
Further reading
Neurophysiology
Physiology
Electrophysiology | Voltage clamp | [
"Biology"
] | 3,816 | [
"Physiology"
] |
603,780 | https://en.wikipedia.org/wiki/Coherent%20sheaf | In mathematics, especially in algebraic geometry and the theory of complex manifolds, coherent sheaves are a class of sheaves closely linked to the geometric properties of the underlying space. The definition of coherent sheaves is made with reference to a sheaf of rings that codifies this geometric information.
Coherent sheaves can be seen as a generalization of vector bundles. Unlike vector bundles, they form an abelian category, and so they are closed under operations such as taking kernels, images, and cokernels. The quasi-coherent sheaves are a generalization of coherent sheaves and include the locally free sheaves of infinite rank.
Coherent sheaf cohomology is a powerful technique, in particular for studying the sections of a given coherent sheaf.
Definitions
A quasi-coherent sheaf on a ringed space $(X, \mathcal{O}_X)$ is a sheaf $\mathcal{F}$ of $\mathcal{O}_X$-modules that has a local presentation, that is, every point in $X$ has an open neighborhood $U$ in which there is an exact sequence

$$\mathcal{O}_X^{\oplus I}|_U \to \mathcal{O}_X^{\oplus J}|_U \to \mathcal{F}|_U \to 0$$

for some (possibly infinite) sets $I$ and $J$.
A coherent sheaf on a ringed space $(X, \mathcal{O}_X)$ is a sheaf $\mathcal{F}$ of $\mathcal{O}_X$-modules satisfying the following two properties:
$\mathcal{F}$ is of finite type over $\mathcal{O}_X$, that is, every point in $X$ has an open neighborhood $U$ in $X$ such that there is a surjective morphism $\mathcal{O}_X^n|_U \to \mathcal{F}|_U$ for some natural number $n$;
for any open set $U \subseteq X$, any natural number $n$, and any morphism $\varphi : \mathcal{O}_X^n|_U \to \mathcal{F}|_U$ of $\mathcal{O}_X$-modules, the kernel of $\varphi$ is of finite type.
Morphisms between (quasi-)coherent sheaves are the same as morphisms of sheaves of -modules.
The case of schemes
When $X$ is a scheme, the general definitions above are equivalent to more explicit ones. A sheaf $\mathcal{F}$ of $\mathcal{O}_X$-modules is quasi-coherent if and only if over each open affine subscheme $U = \operatorname{Spec} A$ the restriction $\mathcal{F}|_U$ is isomorphic to the sheaf $\tilde{M}$ associated to the module $M = \Gamma(U, \mathcal{F})$ over $A$. When $X$ is a locally Noetherian scheme, $\mathcal{F}$ is coherent if and only if it is quasi-coherent and the modules $M$ above can be taken to be finitely generated.
On an affine scheme $U = \operatorname{Spec} A$, there is an equivalence of categories from $A$-modules to quasi-coherent sheaves, taking a module $M$ to the associated sheaf $\tilde{M}$. The inverse equivalence takes a quasi-coherent sheaf $\mathcal{F}$ on $U$ to the $A$-module $\Gamma(U, \mathcal{F})$ of global sections of $\mathcal{F}$.
Here are several further characterizations of quasi-coherent sheaves on a scheme.
Properties
On an arbitrary ringed space, quasi-coherent sheaves do not necessarily form an abelian category. On the other hand, the quasi-coherent sheaves on any scheme form an abelian category, and they are extremely useful in that context.
On any ringed space $(X, \mathcal{O}_X)$, the coherent sheaves form an abelian category, a full subcategory of the category of $\mathcal{O}_X$-modules. (Analogously, the category of coherent modules over any ring $R$ is a full abelian subcategory of the category of all $R$-modules.) So the kernel, image, and cokernel of any map of coherent sheaves are coherent. The direct sum of two coherent sheaves is coherent; more generally, an $\mathcal{O}_X$-module that is an extension of two coherent sheaves is coherent.
A submodule of a coherent sheaf is coherent if it is of finite type. A coherent sheaf is always an $\mathcal{O}_X$-module of finite presentation, meaning that each point in $X$ has an open neighborhood $U$ such that the restriction $\mathcal{F}|_U$ of $\mathcal{F}$ to $U$ is isomorphic to the cokernel of a morphism $\mathcal{O}_X^n|_U \to \mathcal{O}_X^m|_U$ for some natural numbers $n$ and $m$. If $\mathcal{O}_X$ is coherent, then, conversely, every sheaf of finite presentation over $\mathcal{O}_X$ is coherent.
The sheaf of rings $\mathcal{O}_X$ is called coherent if it is coherent considered as a sheaf of modules over itself. In particular, the Oka coherence theorem states that the sheaf of holomorphic functions on a complex analytic space $X$ is a coherent sheaf of rings. The main part of the proof is the case $X = \mathbb{C}^n$. Likewise, on a locally Noetherian scheme $X$, the structure sheaf $\mathcal{O}_X$ is a coherent sheaf of rings.
Basic constructions of coherent sheaves
An $\mathcal{O}_X$-module $\mathcal{E}$ on a ringed space $X$ is called locally free of finite rank, or a vector bundle, if every point in $X$ has an open neighborhood $U$ such that the restriction $\mathcal{E}|_U$ is isomorphic to a finite direct sum of copies of $\mathcal{O}_X|_U$. If $\mathcal{E}$ is free of the same rank $n$ near every point of $X$, then the vector bundle $\mathcal{E}$ is said to be of rank $n$.
Vector bundles in this sheaf-theoretic sense over a scheme $X$ are equivalent to vector bundles defined in a more geometric way, as a scheme $E$ with a morphism $\pi : E \to X$ and with a covering of $X$ by open sets $U_\alpha$ with given isomorphisms $\pi^{-1}(U_\alpha) \cong \mathbb{A}^n \times U_\alpha$ over $U_\alpha$ such that the two isomorphisms over an intersection $U_\alpha \cap U_\beta$ differ by a linear automorphism. (The analogous equivalence also holds for complex analytic spaces.) For example, given a vector bundle $E$ in this geometric sense, the corresponding sheaf $\mathcal{E}$ is defined by: over an open set $U$ of $X$, the $\mathcal{O}(U)$-module $\mathcal{E}(U)$ is the set of sections of the morphism $\pi^{-1}(U) \to U$. The sheaf-theoretic interpretation of vector bundles has the advantage that vector bundles (on a locally Noetherian scheme) are included in the abelian category of coherent sheaves.
Locally free sheaves come equipped with the standard $\mathcal{O}_X$-module operations, but these give back locally free sheaves.
Let $X = \operatorname{Spec}(R)$, with $R$ a Noetherian ring. Then vector bundles on $X$ are exactly the sheaves associated to finitely generated projective modules over $R$, or (equivalently) to finitely generated flat modules over $R$.
Let $X = \operatorname{Proj}(R)$, with $R$ a Noetherian $\mathbb{N}$-graded ring, be a projective scheme over a Noetherian ring $R_0$. Then each $\mathbb{Z}$-graded $R$-module $M$ determines a quasi-coherent sheaf $\tilde{M}$ on $X$ such that $\tilde{M}|_{\{f \ne 0\}}$ is the sheaf associated to the $R[f^{-1}]_0$-module $M[f^{-1}]_0$, where $f$ is a homogeneous element of $R$ of positive degree and $\{f \ne 0\} = \operatorname{Spec} R[f^{-1}]_0$ is the locus where $f$ does not vanish.
For example, for each integer $n$, let $R(n)$ denote the graded $R$-module given by $R(n)_m = R_{n+m}$. Then each $R(n)$ determines the quasi-coherent sheaf $\mathcal{O}_X(n)$ on $X$. If $R$ is generated as an $R_0$-algebra by $R_1$, then $\mathcal{O}_X(n)$ is a line bundle (invertible sheaf) on $X$, and $\mathcal{O}_X(n)$ is the $n$-th tensor power of $\mathcal{O}_X(1)$. In particular, $\mathcal{O}_{\mathbb{P}^n}(-1)$ is called the tautological line bundle on the projective $n$-space.
A simple example of a coherent sheaf on $X = \mathbb{A}^2$ that is not a vector bundle is given by the cokernel $\mathcal{F}$ in the following sequence

$$\mathcal{O}_X \xrightarrow{(x,\, y)} \mathcal{O}_X^{\oplus 2} \to \mathcal{F} \to 0;$$

this is because $\mathcal{F}$ restricted to the vanishing locus of the two polynomials has two-dimensional fibers, and has one-dimensional fibers elsewhere.
Ideal sheaves: If $Z$ is a closed subscheme of a locally Noetherian scheme $X$, the sheaf $\mathcal{I}_{Z/X}$ of all regular functions vanishing on $Z$ is coherent. Likewise, if $Z$ is a closed analytic subspace of a complex analytic space $X$, the ideal sheaf $\mathcal{I}_{Z/X}$ is coherent.
The structure sheaf $\mathcal{O}_Z$ of a closed subscheme $Z$ of a locally Noetherian scheme $X$ can be viewed as a coherent sheaf on $X$. To be precise, this is the direct image sheaf $i_* \mathcal{O}_Z$, where $i : Z \to X$ is the inclusion. Likewise for a closed analytic subspace of a complex analytic space. The sheaf $i_* \mathcal{O}_Z$ has fiber (defined below) of dimension zero at points in the open set $X - Z$, and fiber of dimension 1 at points in $Z$. There is a short exact sequence of coherent sheaves on $X$:

$$0 \to \mathcal{I}_{Z/X} \to \mathcal{O}_X \to i_* \mathcal{O}_Z \to 0.$$
Most operations of linear algebra preserve coherent sheaves. In particular, for coherent sheaves $\mathcal{F}$ and $\mathcal{G}$ on a ringed space $X$, the tensor product sheaf $\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{G}$ and the sheaf of homomorphisms $\mathcal{H}om_{\mathcal{O}_X}(\mathcal{F}, \mathcal{G})$ are coherent.
A simple non-example of a quasi-coherent sheaf is given by the extension by zero functor. For example, consider $j_! \mathcal{O}_U$ for the open immersion $j : U = \mathbb{A}^1 \setminus \{0\} \hookrightarrow X = \mathbb{A}^1$.
Since this sheaf has non-trivial stalks, but zero global sections, this cannot be a quasi-coherent sheaf. This is because quasi-coherent sheaves on an affine scheme are equivalent to the category of modules over the underlying ring, and the adjunction comes from taking global sections.
Functoriality
Let $f : X \to Y$ be a morphism of ringed spaces (for example, a morphism of schemes). If $\mathcal{F}$ is a quasi-coherent sheaf on $Y$, then the inverse image $\mathcal{O}_X$-module (or pullback) $f^* \mathcal{F}$ is quasi-coherent on $X$. For a morphism of schemes $f : X \to Y$ and a coherent sheaf $\mathcal{F}$ on $Y$, the pullback $f^* \mathcal{F}$ is not coherent in full generality (for example, $f^* \mathcal{O}_Y = \mathcal{O}_X$, which might not be coherent), but pullbacks of coherent sheaves are coherent if $X$ is locally Noetherian. An important special case is the pullback of a vector bundle, which is a vector bundle.
If $f : X \to Y$ is a quasi-compact quasi-separated morphism of schemes and $\mathcal{F}$ is a quasi-coherent sheaf on $X$, then the direct image sheaf (or pushforward) $f_* \mathcal{F}$ is quasi-coherent on $Y$.
The direct image of a coherent sheaf is often not coherent. For example, for a field $k$, let $X$ be the affine line over $k$, and consider the morphism $f : X \to \operatorname{Spec}(k)$; then the direct image $f_* \mathcal{O}_X$ is the sheaf on $\operatorname{Spec}(k)$ associated to the polynomial ring $k[x]$, which is not coherent because $k[x]$ has infinite dimension as a $k$-vector space. On the other hand, the direct image of a coherent sheaf under a proper morphism is coherent, by results of Grauert and Grothendieck.
Local behavior of coherent sheaves
An important feature of coherent sheaves is that the properties of $\mathcal{F}$ at a point $x$ control the behavior of $\mathcal{F}$ in a neighborhood of $x$, more than would be true for an arbitrary sheaf. For example, Nakayama's lemma says (in geometric language) that if $\mathcal{F}$ is a coherent sheaf on a scheme $X$, then the fiber $\mathcal{F}_x \otimes_{\mathcal{O}_{X,x}} k(x)$ of $\mathcal{F}$ at a point $x$ (a vector space over the residue field $k(x)$) is zero if and only if the sheaf $\mathcal{F}$ is zero on some open neighborhood of $x$. A related fact is that the dimension of the fibers of a coherent sheaf is upper-semicontinuous. Thus a coherent sheaf has constant rank on an open set, while the rank can jump up on a lower-dimensional closed subset.
In the same spirit: a coherent sheaf $\mathcal{F}$ on a scheme $X$ is a vector bundle if and only if its stalk $\mathcal{F}_x$ is a free module over the local ring $\mathcal{O}_{X,x}$ for every point $x$ in $X$.
On a general scheme, one cannot determine whether a coherent sheaf is a vector bundle just from its fibers (as opposed to its stalks). On a reduced locally Noetherian scheme, however, a coherent sheaf is a vector bundle if and only if its rank is locally constant.
Examples of vector bundles
For a morphism of schemes $X \to S$, let $\Delta : X \to X \times_S X$ be the diagonal morphism, which is a closed immersion if $X$ is separated over $S$. Let $I$ be the ideal sheaf of $X$ in $X \times_S X$. Then the sheaf of differentials $\Omega^1_{X/S}$ can be defined as the pullback of $I$ to $X$. Sections of this sheaf are called 1-forms on $X$ over $S$, and they can be written locally on $X$ as finite sums $\sum f_j \, dg_j$ for regular functions $f_j$ and $g_j$. If $X$ is locally of finite type over a field $k$, then $\Omega^1_{X/k}$ is a coherent sheaf on $X$.
If $X$ is smooth over $k$, then $\Omega^1$ (meaning $\Omega^1_{X/k}$) is a vector bundle over $X$, called the cotangent bundle of $X$. Then the tangent bundle $TX$ is defined to be the dual bundle $(\Omega^1)^*$. For $X$ smooth over $k$ of dimension $n$ everywhere, the tangent bundle has rank $n$.
If $Y$ is a smooth closed subscheme of a smooth scheme $X$ over $k$, then there is a short exact sequence of vector bundles on $Y$:

$$0 \to TY \to TX|_Y \to N_{Y/X} \to 0,$$

which can be used as a definition of the normal bundle $N_{Y/X}$ to $Y$ in $X$.
For a smooth scheme $X$ over a field $k$ and a natural number $i$, the vector bundle of i-forms on $X$ is defined as the $i$-th exterior power of the cotangent bundle, $\Omega^i = \Lambda^i \Omega^1$. For a smooth variety $X$ of dimension $n$ over $k$, the canonical bundle $K_X$ means the line bundle $\Omega^n$. Thus sections of the canonical bundle are algebro-geometric analogs of volume forms on $X$. For example, a section of the canonical bundle of affine space $\mathbb{A}^n$ over $k$
can be written as

$$f(x_1, \ldots, x_n)\, dx_1 \wedge \cdots \wedge dx_n,$$

where $f$ is a polynomial with coefficients in $k$.
Let $R$ be a commutative ring and $n$ a natural number. For each integer $j$, there is an important example of a line bundle on projective space $\mathbb{P}^n$ over $R$, called $\mathcal{O}(j)$. To define this, consider the morphism of $R$-schemes

$$\pi : \mathbb{A}^{n+1} \setminus 0 \to \mathbb{P}^n$$

given in coordinates by $(x_0, \ldots, x_n) \mapsto [x_0, \ldots, x_n]$. (That is, thinking of projective space as the space of 1-dimensional linear subspaces of affine space, send a nonzero point in affine space to the line that it spans.) Then a section of $\mathcal{O}(j)$ over an open subset $U$ of $\mathbb{P}^n$ is defined to be a regular function $f$ on $\pi^{-1}(U)$ that is homogeneous of degree $j$, meaning that

$$f(ax) = a^j f(x)$$

as regular functions on $\mathbb{G}_m \times \pi^{-1}(U)$. For all integers $i$ and $j$, there is an isomorphism $\mathcal{O}(i) \otimes \mathcal{O}(j) \cong \mathcal{O}(i+j)$ of line bundles on $\mathbb{P}^n$.
In particular, every homogeneous polynomial in $x_0, \ldots, x_n$ of degree $j$ over $R$ can be viewed as a global section of $\mathcal{O}(j)$ over $\mathbb{P}^n$. Note that every closed subscheme of projective space can be defined as the zero set of some collection of homogeneous polynomials, hence as the zero set of some sections of the line bundles $\mathcal{O}(j)$. This contrasts with the simpler case of affine space, where a closed subscheme is simply the zero set of some collection of regular functions. The regular functions on projective space over $R$ are just the "constants" (the ring $R$), and so it is essential to work with the line bundles $\mathcal{O}(j)$.
Serre gave an algebraic description of all coherent sheaves on projective space, more subtle than what happens for affine space. Namely, let $R$ be a Noetherian ring (for example, a field), and consider the polynomial ring $S = R[x_0, \ldots, x_n]$ as a graded ring with each $x_i$ having degree 1. Then every finitely generated graded $S$-module $M$ has an associated coherent sheaf $\tilde{M}$ on $\mathbb{P}^n$ over $R$. Every coherent sheaf on $\mathbb{P}^n$ arises in this way from a finitely generated graded $S$-module $M$. (For example, the line bundle $\mathcal{O}(j)$ is the sheaf associated to the $S$-module $S$ with its grading lowered by $j$.) But the $S$-module $M$ that yields a given coherent sheaf on $\mathbb{P}^n$ is not unique; it is only unique up to changing $M$ by graded modules that are nonzero in only finitely many degrees. More precisely, the abelian category of coherent sheaves on $\mathbb{P}^n$ is the quotient of the category of finitely generated graded $S$-modules by the Serre subcategory of modules that are nonzero in only finitely many degrees.
The tangent bundle of projective space $\mathbb{P}^n$ over a field $k$ can be described in terms of the line bundle $\mathcal{O}(1)$. Namely, there is a short exact sequence, the Euler sequence:

$$0 \to \mathcal{O}_{\mathbb{P}^n} \to \mathcal{O}(1)^{\oplus (n+1)} \to T\mathbb{P}^n \to 0.$$

It follows that the canonical bundle $K_{\mathbb{P}^n}$ (the dual of the determinant line bundle of the tangent bundle) is isomorphic to $\mathcal{O}(-n-1)$. This is a fundamental calculation for algebraic geometry. For example, the fact that the canonical bundle is a negative multiple of the ample line bundle $\mathcal{O}(1)$ means that projective space is a Fano variety. Over the complex numbers, this means that projective space has a Kähler metric with positive Ricci curvature.
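The canonical bundle computation can be made explicit by taking determinants in the Euler sequence (a standard derivation, spelled out here): since determinant line bundles are multiplicative in short exact sequences,

$$\det T\mathbb{P}^n \cong \det\!\big(\mathcal{O}(1)^{\oplus (n+1)}\big) \otimes (\det \mathcal{O})^{-1} \cong \mathcal{O}(n+1),$$

so $K_{\mathbb{P}^n} = (\det T\mathbb{P}^n)^{-1} \cong \mathcal{O}(-n-1)$; for the projective plane, this gives $K_{\mathbb{P}^2} \cong \mathcal{O}(-3)$.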
Vector bundles on a hypersurface
Consider a smooth degree-$d$ hypersurface $X \subseteq \mathbb{P}^n$ defined by the homogeneous polynomial $f$ of degree $d$. Then, there is an exact sequence

$$0 \to \mathcal{O}_X(-d) \to \Omega_{\mathbb{P}^n}|_X \to \Omega_X \to 0$$

where the second map is the pullback of differential forms, and the first map sends

$$g \mapsto g \, df.$$

Note that this sequence tells us that $\mathcal{O}_X(-d)$ is the conormal sheaf of $X$ in $\mathbb{P}^n$. Dualizing this yields the exact sequence

$$0 \to T_X \to T_{\mathbb{P}^n}|_X \to \mathcal{O}_X(d) \to 0,$$

hence $\mathcal{O}_X(d)$ is the normal bundle of $X$ in $\mathbb{P}^n$. If we use the fact that given an exact sequence

$$0 \to \mathcal{E}_1 \to \mathcal{E}_2 \to \mathcal{E}_3 \to 0$$

of vector bundles with ranks $r_1$, $r_2$, $r_3$, there is an isomorphism

$$\Lambda^{r_2} \mathcal{E}_2 \cong \Lambda^{r_1} \mathcal{E}_1 \otimes \Lambda^{r_3} \mathcal{E}_3$$

of line bundles, then we see that there is the isomorphism

$$\omega_X \cong \omega_{\mathbb{P}^n}|_X \otimes \mathcal{O}_X(d),$$

showing that

$$\omega_X \cong \mathcal{O}_X(d - n - 1).$$
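As an illustration of this formula (standard examples added here): a smooth quintic threefold in $\mathbb{P}^4$ has $\omega_X \cong \mathcal{O}_X(5 - 4 - 1) = \mathcal{O}_X$, so its canonical bundle is trivial and it is a Calabi–Yau threefold; a smooth cubic curve in $\mathbb{P}^2$ likewise has $\omega_X \cong \mathcal{O}_X(3 - 2 - 1) = \mathcal{O}_X$, recovering the fact that plane cubics are curves of genus 1.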
Serre construction and vector bundles
One useful technique for constructing rank 2 vector bundles is the Serre construction, which establishes a correspondence between rank 2 vector bundles $\mathcal{E}$ on a smooth projective variety $X$ and codimension 2 subvarieties $Y \subseteq X$ using a certain $\operatorname{Ext}^1$-group calculated on $X$. This is given by a cohomological condition on the line bundle $\wedge^2 \mathcal{E}$ (see below).
The correspondence in one direction is given as follows: to a section $s \in \Gamma(X, \mathcal{E})$ we can associate the vanishing locus $V(s) \subseteq X$. If $V(s)$ is a codimension 2 subvariety, then
it is a local complete intersection, meaning that if we take an affine chart $U \subseteq X$, then $V(s) \cap U$ can be represented as the common vanishing locus of two regular functions on $U$;
The line bundle is isomorphic to the canonical bundle on
In the other direction, for a codimension 2 subvariety and a line bundle such that
there is a canonical isomorphism,which is functorial with respect to inclusion of codimension subvarieties. Moreover, any isomorphism given on the left corresponds to a locally free sheaf in the middle of the extension on the right. That is, for that is an isomorphism there is a corresponding locally free sheaf of rank 2 that fits into a short exact sequenceThis vector bundle can then be further studied using cohomological invariants to determine if it is stable or not. This forms the basis for studying moduli of stable vector bundles in many specific cases, such as on principally polarized abelian varieties and K3 surfaces.
Chern classes and algebraic K-theory
A vector bundle $E$ on a smooth variety $X$ over a field has Chern classes in the Chow ring of $X$, $c_i(E)$ in $CH^i(X)$ for $i \ge 1$. These satisfy the same formal properties as Chern classes in topology. For example, for any short exact sequence

$$0 \to E' \to E \to E'' \to 0$$

of vector bundles on $X$, the Chern classes of $E$ are given by

$$c_i(E) = \sum_{j=0}^{i} c_j(E')\, c_{i-j}(E'').$$
It follows that the Chern classes of a vector bundle $E$ depend only on the class of $E$ in the Grothendieck group $K_0(X)$. By definition, for a scheme $X$, $K_0(X)$ is the quotient of the free abelian group on the set of isomorphism classes of vector bundles on $X$ by the relation that $[E] = [E'] + [E'']$ for any short exact sequence as above. Although $K_0(X)$ is hard to compute in general, algebraic K-theory provides many tools for studying it, including a sequence of related groups $K_i(X)$ for integers $i > 0$.
A variant is the group $G_0(X)$ (or $K_0'(X)$), the Grothendieck group of coherent sheaves on $X$. (In topological terms, G-theory has the formal properties of a Borel–Moore homology theory for schemes, while K-theory is the corresponding cohomology theory.) The natural homomorphism $K_0(X) \to G_0(X)$ is an isomorphism if $X$ is a regular separated Noetherian scheme, using that every coherent sheaf has a finite resolution by vector bundles in that case. For example, that gives a definition of the Chern classes of a coherent sheaf on a smooth variety over a field.
More generally, a Noetherian scheme $X$ is said to have the resolution property if every coherent sheaf on $X$ has a surjection from some vector bundle on $X$. For example, every quasi-projective scheme over a Noetherian ring has the resolution property.
Applications of resolution property
Since the resolution property states that a coherent sheaf $\mathcal{F}$ on a Noetherian scheme is quasi-isomorphic in the derived category to a complex of vector bundles $\mathcal{E}_k \to \cdots \to \mathcal{E}_1 \to \mathcal{E}_0$:
we can compute the total Chern class of $\mathcal{F}$ with

$$c(\mathcal{F}) = \prod_{i=0}^{k} c(\mathcal{E}_i)^{(-1)^i}.$$
For example, this formula is useful for finding the Chern classes of the sheaf representing a subscheme of . If we take the projective scheme associated to the ideal , then
since there is the resolution
over .
Bundle homomorphism vs. sheaf homomorphism
When vector bundles and locally free sheaves of finite constant rank are used interchangeably, care must be given to distinguish between bundle homomorphisms and sheaf homomorphisms. Specifically, given vector bundles $p : E \to X$ and $q : F \to X$, by definition, a bundle homomorphism $\varphi : E \to F$ is a scheme morphism over $X$ (i.e., $p = q \circ \varphi$) such that, for each geometric point $x$ in $X$, $\varphi_x : p^{-1}(x) \to q^{-1}(x)$ is a linear map of rank independent of $x$. Thus, it induces a sheaf homomorphism of constant rank between the corresponding locally free $\mathcal{O}_X$-modules (sheaves of dual sections). But there may be an $\mathcal{O}_X$-module homomorphism that does not arise this way; namely, those not having constant rank.
In particular, a subbundle $E \subseteq F$ is a subsheaf (i.e., $\mathcal{E}$ is a subsheaf of $\mathcal{F}$). But the converse can fail; for example, for an effective Cartier divisor $D$ on $X$, $\mathcal{O}_X(-D) \subseteq \mathcal{O}_X$ is a subsheaf but typically not a subbundle (since any line bundle has only two subbundles).
The category of quasi-coherent sheaves
The quasi-coherent sheaves on any fixed scheme form an abelian category. Gabber showed that, in fact, the quasi-coherent sheaves on any scheme form a particularly well-behaved abelian category, a Grothendieck category. A quasi-compact quasi-separated scheme $X$ (such as an algebraic variety over a field) is determined up to isomorphism by the abelian category of quasi-coherent sheaves on $X$, by Rosenberg, generalizing a result of Gabriel.
Coherent cohomology
The fundamental technical tool in algebraic geometry is the cohomology theory of coherent sheaves. Although it was introduced only in the 1950s, many earlier techniques of algebraic geometry are clarified by the language of sheaf cohomology applied to coherent sheaves. Broadly speaking, coherent sheaf cohomology can be viewed as a tool for producing functions with specified properties; sections of line bundles or of more general sheaves can be viewed as generalized functions. In complex analytic geometry, coherent sheaf cohomology also plays a foundational role.
Among the core results of coherent sheaf cohomology are results on finite-dimensionality of cohomology, results on the vanishing of cohomology in various cases, duality theorems such as Serre duality, relations between topology and algebraic geometry such as Hodge theory, and formulas for Euler characteristics of coherent sheaves such as the Riemann–Roch theorem.
See also
Picard group
Divisor (algebraic geometry)
Reflexive sheaf
Quot scheme
Twisted sheaf
Essentially finite vector bundle
Bundle of principal parts
Gabriel–Rosenberg reconstruction theorem
Pseudo-coherent sheaf
Quasi-coherent sheaf on an algebraic stack
Notes
References
Sections 0.5.3 and 0.5.4 of
External links
Part V of
Algebraic geometry
Sheaf theory
Vector bundles
Topological methods of algebraic geometry
Complex manifolds | Coherent sheaf | [
"Mathematics"
] | 4,350 | [
"Mathematical structures",
"Fields of abstract algebra",
"Sheaf theory",
"Category theory",
"Topology",
"Algebraic geometry"
] |
603,812 | https://en.wikipedia.org/wiki/Message%20forgery | In cryptography, message forgery is sending a message so to deceive the recipient about the actual sender's identity. A common example is sending a spam or prank e-mail as if it were originated from an address other than the one which was really used.
See also
Authentication
Message authentication code
Stream cipher attack
Cryptographic attacks
Practical jokes | Message forgery | [
"Technology"
] | 73 | [
"Cryptographic attacks",
"Computer security exploits"
] |
603,916 | https://en.wikipedia.org/wiki/K%C3%A4hler%20differential | In mathematics, Kähler differentials provide an adaptation of differential forms to arbitrary commutative rings or schemes. The notion was introduced by Erich Kähler in the 1930s. It was adopted as standard in commutative algebra and algebraic geometry somewhat later, once the need was felt to adapt methods from calculus and geometry over the complex numbers to contexts where such methods are not available.
Definition
Let $R$ and $S$ be commutative rings and $\varphi : R \to S$ be a ring homomorphism. An important example is $R$ a field and $S$ a unital algebra over $R$ (such as the coordinate ring of an affine variety). Kähler differentials formalize the observation that the derivatives of polynomials are again polynomial. In this sense, differentiation is a notion which can be expressed in purely algebraic terms. This observation can be turned into a definition of the module

$$\Omega_{S/R}$$

of differentials in different, but equivalent ways.
Definition using derivations
An $R$-linear derivation on $S$ is an $R$-module homomorphism $d : S \to M$ to an $S$-module $M$ satisfying the Leibniz rule $d(fg) = f \, dg + g \, df$ (it automatically follows from this definition that the image of $R$ is in the kernel of $d$). The module of Kähler differentials is defined as the $S$-module $\Omega_{S/R}$ for which there is a universal derivation $d : S \to \Omega_{S/R}$. As with other universal properties, this means that $d$ is the best possible derivation in the sense that any other derivation may be obtained from it by composition with an $S$-module homomorphism. In other words, the composition with $d$ provides, for every $S$-module $M$, an $S$-module isomorphism

$$\operatorname{Hom}_S(\Omega_{S/R}, M) \xrightarrow{\;\cong\;} \operatorname{Der}_R(S, M).$$
One construction of Ω_{S/R} and d proceeds by constructing a free S-module with one formal generator df for each f in S, and imposing the relations
dr = 0,
d(f + g) = df + dg,
d(fg) = f dg + g df,
for all r in R and all f and g in S. The universal derivation sends f to df. The relations imply that the universal derivation is a homomorphism of R-modules.
Definition using the augmentation ideal
Another construction proceeds by letting I be the ideal in the tensor product S ⊗_R S defined as the kernel of the multiplication map
S ⊗_R S → S, Σ sᵢ ⊗ tᵢ ↦ Σ sᵢtᵢ.
Then the module of Kähler differentials of S can be equivalently defined by Ω_{S/R} = I/I²,
and the universal derivation is the homomorphism d defined by ds = 1 ⊗ s − s ⊗ 1 (mod I²).
This construction is equivalent to the previous one because I is the kernel of the projection
S ⊗_R S → S induced by the multiplication map.
Thus we have: S ⊗_R S ≅ I ⊕ S as S-modules.
Then Ω_{S/R} may be identified with I/I² by the map induced by the complementary projection s ⊗ t ↦ s ⊗ t − st ⊗ 1.
This identifies I/I² with the S-module generated by the formal generators ds = 1 ⊗ s − s ⊗ 1 for s in S, subject to d being a homomorphism of R-modules which sends each element of R to zero. Taking the quotient by I² precisely imposes the Leibniz rule.
Examples and basic facts
For any commutative ring R, the Kähler differentials of the polynomial ring S = R[t₁, …, t_n] are a free S-module of rank n generated by the differentials of the variables: Ω_{S/R} = S dt₁ ⊕ ⋯ ⊕ S dt_n.
Kähler differentials are compatible with extension of scalars, in the sense that for a second R-algebra R′ and S′ = R′ ⊗_R S, there is an isomorphism Ω_{S′/R′} ≅ Ω_{S/R} ⊗_S S′.
As a particular case of this, Kähler differentials are compatible with localizations, meaning that if W is a multiplicative set in S, then there is an isomorphism Ω_{W⁻¹S/R} ≅ W⁻¹Ω_{S/R}.
Given two ring homomorphisms R → S → T, there is a short exact sequence of T-modules
Ω_{S/R} ⊗_S T → Ω_{T/R} → Ω_{T/S} → 0.
If T = S/I for some ideal I, the term Ω_{T/S} vanishes and the sequence can be continued at the left as follows:
I/I² → Ω_{S/R} ⊗_S T → Ω_{T/R} → 0.
A generalization of these two short exact sequences is provided by the cotangent complex.
The latter sequence and the above computation for the polynomial ring allow the computation of the Kähler differentials of finitely generated R-algebras T = R[t₁, …, t_n]/(f₁, …, f_m). Briefly, these are generated by the differentials of the variables and have relations coming from the differentials of the equations. For example, for a single polynomial f in a single variable,
Ω_{(R[t]/(f))/R} ≅ (R[t]/(f)) dt / (f′ dt) ≅ R[t]/(f, f′) dt.
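As a sanity check on this recipe, the following sketch uses SymPy to extract the defining relation f′(t)·dt = 0 for S = ℚ[t]/(f); the specific polynomials are arbitrary choices for illustration.

```python
from sympy import symbols, diff, gcd

t = symbols('t')

# For S = Q[t]/(f), differentiating the relation f = 0 gives f'(t)*dt = 0,
# so Omega_{S/Q} = S dt / (f'(t) dt), and it vanishes iff gcd(f, f') = 1.
for f in (t**3 - t, t**2):
    fp = diff(f, t)
    print(f"f = {f}:  relation ({fp}) dt = 0,  gcd(f, f') = {gcd(f, fp)}")
# t**3 - t is squarefree: gcd = 1, so Omega = 0 (an etale Q-algebra).
# t**2 is not:            gcd = t, and Omega = (Q[t]/(t)) dt is nonzero.
```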
Kähler differentials for schemes
Because Kähler differentials are compatible with localization, they may be constructed on a general scheme by performing either of the two definitions above on affine open subschemes and gluing. However, the second definition has a geometric interpretation that globalizes immediately. In this interpretation, I represents the ideal defining the diagonal in the fiber product of Spec(S) with itself over Spec(R). This construction therefore has a more geometric flavor, in the sense that the notion of first infinitesimal neighbourhood of the diagonal is thereby captured, via functions vanishing modulo functions vanishing at least to second order (see cotangent space for related notions). Moreover, it extends to a general morphism of schemes f : X → Y by setting I to be the ideal of the diagonal in the fiber product X ×_Y X. The cotangent sheaf Ω_{X/Y} = I/I², together with the derivation d defined analogously to before, is universal among O_Y-linear derivations into O_X-modules. If U is an open affine subscheme of X whose image in Y is contained in an open affine subscheme V, then the cotangent sheaf restricts to a sheaf on U which is similarly universal. It is therefore the sheaf associated to the module of Kähler differentials for the rings underlying U and V.
Similar to the commutative algebra case, there exist exact sequences associated to morphisms of schemes. Given morphisms f : X → Y and g : Y → Z of schemes, there is an exact sequence of sheaves on X
f*Ω_{Y/Z} → Ω_{X/Z} → Ω_{X/Y} → 0.
Also, if Z ⊂ X is a closed subscheme given by the ideal sheaf I, then Ω_{Z/X} = 0 and there is an exact sequence of sheaves on Z
I/I² → Ω_{X/Y}|_Z → Ω_{Z/Y} → 0.
Examples
Finite separable field extensions
If K/k is a finite field extension, then Ω_{K/k} = 0 if and only if K/k is separable. Consequently, if K/k is a finite separable field extension and X is a smooth variety (or scheme) over K, then the relative cotangent sequence
Ω_{K/k} ⊗ O_X → Ω_{X/k} → Ω_{X/K} → 0
proves Ω_{X/k} ≅ Ω_{X/K}.
Cotangent modules of a projective variety
Given a projective scheme , its cotangent sheaf can be computed from the sheafification of the cotangent module on the underlying graded algebra. For example, consider the complex curve
then we can compute the cotangent module as
Then,
Morphisms of schemes
Consider the morphism
in . Then, using the first sequence we see that
hence
Higher differential forms and algebraic de Rham cohomology
de Rham complex
As before, fix a map R → S. Differential forms of higher degree are defined as the exterior powers (over S): Ω^n_{S/R} := Λⁿ Ω_{S/R}.
The derivation d : S → Ω_{S/R} extends in a natural way to a sequence of maps
0 → S → Ω¹_{S/R} → Ω²_{S/R} → ⋯
satisfying d ∘ d = 0. This is a cochain complex known as the de Rham complex.
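For 0-forms on the affine plane, the identity d ∘ d = 0 is precisely the equality of mixed partial derivatives. A minimal check with SymPy (the polynomial f is an arbitrary choice):

```python
from sympy import symbols, diff, simplify

x, y = symbols('x y')
f = x**3*y + 5*x*y**2 - y          # an arbitrary polynomial 0-form

# df = f_x dx + f_y dy, and applying d once more yields
# d(df) = (d(f_y)/dx - d(f_x)/dy) dx ^ dy, which vanishes by the
# symmetry of mixed partial derivatives.
coefficient = diff(f, y, x) - diff(f, x, y)
assert simplify(coefficient) == 0
print("d(df) =", coefficient, "* dx^dy")
```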
The de Rham complex enjoys an additional multiplicative structure, the wedge product
Ω^m_{S/R} ⊗ Ω^n_{S/R} → Ω^{m+n}_{S/R}.
This turns the de Rham complex into a commutative differential graded algebra. It also has a coalgebra structure inherited from the one on the exterior algebra.
de Rham cohomology
The hypercohomology of the de Rham complex of sheaves is called the algebraic de Rham cohomology of X over Y and is denoted by H*_dR(X/Y), or just H*_dR(X) if Y is clear from the context. (In many situations, Y is the spectrum of a field of characteristic zero.) Algebraic de Rham cohomology was introduced by Grothendieck. It is closely related to crystalline cohomology.
As is familiar from coherent cohomology of other quasi-coherent sheaves, the computation of de Rham cohomology is simplified when X = Spec S and Y = Spec R are affine schemes. In this case, because affine schemes have no higher cohomology, H*_dR(X/Y) can be computed as the cohomology of the complex of abelian groups
S → Ω¹_{S/R} → Ω²_{S/R} → ⋯
which is, termwise, the global sections of the sheaves Ω^n_{X/Y}.
To take a very particular example, suppose that X is the multiplicative group G_m = Spec ℚ[t, t⁻¹] over ℚ. Because this is an affine scheme, hypercohomology reduces to ordinary cohomology. The algebraic de Rham complex is
ℚ[t, t⁻¹] → ℚ[t, t⁻¹] dt.
The differential d obeys the usual rules of calculus, meaning d(tⁿ) = n tⁿ⁻¹ dt. The kernel and cokernel compute algebraic de Rham cohomology, so
H⁰_dR(X) = ℚ and H¹_dR(X) = ℚ · (dt/t)
and all other algebraic de Rham cohomology groups are zero. By way of comparison, the algebraic de Rham cohomology groups of the multiplicative group over a field of characteristic p > 0 are much larger: in that case d(t^{kp}) = 0 for every integer k, so the kernel and cokernel of d are infinite-dimensional.
Since the Betti numbers of these cohomology groups are not what is expected, crystalline cohomology was developed to remedy this issue; it defines a Weil cohomology theory over finite fields.
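The dimension count for H¹ of the multiplicative group in characteristic zero can be made concrete: a Laurent 1-form g·dt is exact unless g has a nonzero coefficient on t⁻¹. A small sketch encoding Laurent polynomials as exponent-to-coefficient dictionaries (a representation chosen here purely for illustration):

```python
from fractions import Fraction

def d(f):
    """Differential of a Laurent polynomial {exponent: coefficient};
    returns the coefficient dict of f'(t) dt."""
    return {n - 1: n * c for n, c in f.items() if n != 0}

def try_integrate(form):
    """Try to write the 1-form g dt as d(f). This fails exactly when g has a
    nonzero t**-1 coefficient, which is why H^1(G_m) is one-dimensional,
    spanned by dt/t."""
    if form.get(-1, 0) != 0:
        return None                      # dt/t has no Laurent antiderivative
    return {n + 1: Fraction(c, n + 1) for n, c in form.items()}

g = {2: 3, 0: 1}                         # the 1-form (3t^2 + 1) dt
F = try_integrate(g)
assert F is not None and d(F) == g       # exact form: integrates back
assert try_integrate({-1: 1}) is None    # dt/t generates H^1
print("H^1 is spanned by the class of dt/t")
```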
Grothendieck's comparison theorem
If X is a smooth complex algebraic variety, there is a natural comparison map of complexes of sheaves
Ω•_{X/ℂ} → Ω•_{X^an}
between the algebraic de Rham complex and the smooth de Rham complex defined in terms of (complex-valued) differential forms on X^an, the complex manifold associated to X. Here, (−)^an denotes the complex analytification functor. This map is far from being an isomorphism. Nonetheless, Grothendieck showed that the comparison map induces an isomorphism
H*_dR(X/ℂ) ≅ H*_dR(X^an)
from algebraic to smooth de Rham cohomology (and thus to singular cohomology by de Rham's theorem). In particular, if X is a smooth affine algebraic variety embedded in ℂⁿ, then the inclusion of the subcomplex of algebraic differential forms into that of all smooth forms on X is a quasi-isomorphism. For example, if
X = G_m = Spec ℂ[t, t⁻¹],
then as shown above, the computation of algebraic de Rham cohomology gives explicit generators 1 and dt/t for H⁰_dR and H¹_dR, respectively, while all other cohomology groups vanish. Since X is homotopy equivalent to a circle, this is as predicted by Grothendieck's theorem.
Counter-examples in the singular case can be found with non-Du Bois singularities such as certain graded rings. Other counterexamples can be found in algebraic plane curves with isolated singularities whose Milnor and Tjurina numbers are non-equal.
A proof of Grothendieck's theorem using the concept of a mixed Weil cohomology theory has also been given.
Applications
Canonical divisor
If X is a smooth variety over a field k, then Ω_{X/k} is a vector bundle (i.e., a locally free O_X-module) of rank equal to the dimension of X. This implies, in particular, that
ω_{X/k} := Λ^{dim X} Ω_{X/k}
is a line bundle or, equivalently, a divisor. It is referred to as the canonical divisor. The canonical divisor is, as it turns out, a dualizing complex and therefore appears in various important theorems in algebraic geometry such as Serre duality or Verdier duality.
Classification of algebraic curves
The geometric genus of a smooth algebraic variety X of dimension n over a field k is defined as the dimension
g := dim H⁰(X, Ω^n_{X/k}).
For curves, this purely algebraic definition agrees with the topological definition (for k = ℂ) as the "number of handles" of the Riemann surface associated to X. There is a rather sharp trichotomy of geometric and arithmetic properties depending on the genus of a curve, for g being 0 (rational curves), 1 (elliptic curves), and greater than 1 (hyperbolic Riemann surfaces, including hyperelliptic curves), respectively.
Tangent bundle and Riemann–Roch theorem
The tangent bundle of a smooth variety is, by definition, the dual of the cotangent sheaf . The Riemann–Roch theorem and its far-reaching generalization, the Grothendieck–Riemann–Roch theorem, contain as a crucial ingredient the Todd class of the tangent bundle.
Unramified and smooth morphisms
The sheaf of differentials is related to various algebro-geometric notions. A morphism f : X → Y of schemes is unramified if and only if Ω_{X/Y} is zero. A special case of this assertion is that for a field k, a finite extension K of k is separable over k iff Ω_{K/k} = 0, which can also be read off the above computation.
A morphism f of finite type is a smooth morphism if it is flat and if Ω_{X/Y} is a locally free O_X-module of appropriate rank. The computation of Ω_{R[t₁, …, t_n]/R} above shows that the projection from affine space Aⁿ_R → Spec R is smooth.
Periods
Periods are, broadly speaking, integrals of certain arithmetically defined differential forms. The simplest example of a period is 2πi, which arises as
2πi = ∮ dz/z
(the integral of dz/z over the unit circle).
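This identification can be checked numerically; the following sketch approximates ∮ dz/z over the unit circle by a Riemann sum (the step count is an arbitrary choice):

```python
import cmath

# Riemann-sum approximation of the contour integral of dz/z over the unit
# circle; the exact value of this period is 2*pi*i.
N = 100_000
total = 0j
for k in range(N):
    z0 = cmath.exp(2j * cmath.pi * k / N)
    z1 = cmath.exp(2j * cmath.pi * (k + 1) / N)
    total += (z1 - z0) / z0             # f(z) dz with f(z) = 1/z

print(total)                # ~ 6.283185...j
print(2j * cmath.pi)        # exact period 2*pi*i
```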
Algebraic de Rham cohomology is used to construct periods as follows: For an algebraic variety X defined over ℚ, the above-mentioned compatibility with base change yields a natural isomorphism
H^n_dR(X/ℚ) ⊗_ℚ ℂ ≅ H^n_dR(X ⊗_ℚ ℂ / ℂ).
On the other hand, the right-hand cohomology group is isomorphic to de Rham cohomology of the complex manifold X^an associated to X, denoted here H^n_dR(X^an). Yet another classical result, de Rham's theorem, asserts an isomorphism of the latter cohomology group with singular cohomology (or sheaf cohomology) with complex coefficients, H^n(X^an; ℂ), which by the universal coefficient theorem is in its turn isomorphic to H^n(X^an; ℚ) ⊗_ℚ ℂ. Composing these isomorphisms yields two rational vector spaces which, after tensoring with ℂ, become isomorphic. Choosing bases of these rational subspaces (also called lattices), the determinant of the base-change matrix is a complex number, well defined up to multiplication by a rational number. Such numbers are periods.
Algebraic number theory
In algebraic number theory, Kähler differentials may be used to study the ramification in an extension of algebraic number fields. If L/K is a finite extension with rings of integers O_L and O_K respectively, then the different ideal δ_{L/K}, which encodes the ramification data, is the annihilator of the O_L-module Ω_{O_L/O_K}:
δ_{L/K} = {x ∈ O_L : x dy = 0 for all y ∈ O_L}.
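As a worked example (assuming the standard fact that for a monogenic ring of integers O_L = O_K[α] the different is generated by f′(α), where f is the minimal polynomial of α), the following sketch computes the different of ℚ(i)/ℚ:

```python
from sympy import I, diff, expand, symbols

x = symbols('x')
f = x**2 + 1            # minimal polynomial of i; O_L = Z[i] = Z[x]/(f)

# Omega_{O_L/Z} is generated by dx subject to the single relation
# f'(i) dx = 0, so the different (its annihilator) is the ideal (f'(i)).
different_gen = expand(diff(f, x).subs(x, I))
norm = expand(different_gen * different_gen.conjugate())
print("different generated by:", different_gen)   # 2*I, i.e. the ideal (2)
print("its norm:", norm)                          # 4 = |disc Q(i)|
```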
Related notions
Hochschild homology is a homology theory for associative rings that turns out to be closely related to Kähler differentials. This is because of the Hochschild–Kostant–Rosenberg theorem, which states that the Hochschild homology of the algebra of a smooth variety over a field k of characteristic 0 is isomorphic to its de Rham complex. A derived enhancement of this theorem states that the Hochschild homology of a differential graded algebra is isomorphic to the derived de Rham complex.
The de Rham–Witt complex is, in very rough terms, an enhancement of the de Rham complex for the ring of Witt vectors.
Notes
References
Grothendieck, Alexander (letter to Michael Atiyah, October 14, 1963)
External links
Notes on p-adic algebraic de Rham cohomology - gives many computations over characteristic 0 as motivation
A thread devoted to the relation between algebraic and analytic differential forms
Differentials (Stacks project)
Commutative algebra
Differential algebra
Algebraic geometry
Cohomology theories | Kähler differential | [
"Mathematics"
] | 2,755 | [
"Differential algebra",
"Fields of abstract algebra",
"Commutative algebra",
"Algebraic geometry"
] |
603,940 | https://en.wikipedia.org/wiki/Mesosome | Mesosomes or chondrioids are folded invaginations in the plasma membrane of bacteria that are produced by the chemical fixation techniques used to prepare samples for electron microscopy. Although several functions were proposed for these structures in the 1960s, they were recognized as artifacts by the late 1970s and are no longer considered to be part of the normal structure of bacterial cells. These extensions are in the form of vesicles, tubules and lamellae.
Initial observations
These structures are invaginations of the plasma membrane observed in Gram-positive bacteria that have been chemically fixed to prepare them for electron microscopy. They were first observed in 1953 by George B. Chapman and James Hillier, who referred to them as "peripheral bodies." They were termed "mesosomes" by Fitz-James in 1960.
Initially, it was thought that mesosomes might play a role in several cellular processes, such as cell wall formation during cell division, chromosome replication, or as a site for oxidative phosphorylation. The mesosome was thought to increase the surface area of the cell, aiding the cell in cellular respiration. This is analogous to cristae in the mitochondrion in eukaryotic cells, which are finger-like projections and help eukaryotic cells undergo cellular respiration. Mesosomes were also hypothesized to aid in photosynthesis, cell division, DNA replication, and cell compartmentalisation.
Disproof of hypothesis
These models were called into question during the late 1970s when data accumulated suggesting that mesosomes are artifacts formed through damage to the membrane during the process of chemical fixation, and do not occur in cells that have not been chemically fixed. By the mid to late 1980s, with advances in cryofixation and freeze substitution methods for electron microscopy, it was generally concluded that mesosomes do not exist in living cells. However, a few researchers continue to argue that the evidence remains inconclusive, and that mesosomes might not be artifacts in all cases.
Recently, similar folds in the membrane have been observed in bacteria that have been exposed to some classes of antibiotics, and antibacterial peptides (defensins). The appearance of these mesosome-like structures may be the result of these chemicals damaging the plasma membrane and/or cell wall.
The case of the proposal and then disproof of the mesosome hypothesis has been discussed from the viewpoint of the philosophy of science as an example of how a scientific idea can be falsified and the hypothesis then rejected, and analyzed to explore how the scientific community carries out this testing process.
See also
Cell membrane
Organelle
Lysosome
References
Further reading
Membrane biology
Organelles
Prokaryotic cell anatomy | Mesosome | [
"Chemistry"
] | 574 | [
"Membrane biology",
"Molecular biology"
] |
604,013 | https://en.wikipedia.org/wiki/RTAI | Real-time application interface (RTAI) is a real-time extension for the Linux kernel, which lets users write applications with strict timing constraints for Linux. Like Linux itself the RTAI software is a community effort. RTAI provides deterministic response to interrupts, POSIX-compliant and native RTAI real-time tasks. RTAI supports several architectures, including IA-32 (with and without FPU and TSC), x86-64, PowerPC, ARM (StrongARM and ARM7: clps711x-family, Cirrus Logic EP7xxx, CS89712, PXA25x), and MIPS.
RTAI consists mainly of two parts: an Adeos-based patch to the Linux kernel which introduces a hardware abstraction layer, and a broad variety of services which make the lives of real-time programmers easier. RTAI versions over 3.0 use an Adeos kernel patch, slightly modified in the x86 architecture case, providing additional abstraction and much lessened dependencies on the "patched" operating system. Adeos is a kernel patch comprising an interrupt pipeline in which different operating system domains register interrupt handlers. This way, RTAI can transparently take over interrupts while leaving the processing of all others to Linux. Use of Adeos also frees RTAI from patent restrictions caused by the RTLinux project.
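The interrupt pipeline can be pictured as a priority-ordered chain of domains, each of which may consume an interrupt or let it propagate. The toy model below is purely illustrative: its names and structure are hypothetical and do not reflect the actual Adeos or RTAI APIs.

```python
# Toy model of an Adeos-style interrupt pipeline (illustrative only).

class Domain:
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler   # returns True to consume the interrupt

pipeline = []  # domains ordered from highest to lowest priority

def register(name, handler):
    pipeline.append(Domain(name, handler))

def dispatch(irq):
    # Walk the pipeline head-first; stop at the first domain that consumes.
    for dom in pipeline:
        if dom.handler(irq):
            print(f"irq {irq} handled by {dom.name}")
            return
    print(f"irq {irq} unhandled")

register("rtai",  lambda irq: irq == 0)   # RT domain takes the timer irq only
register("linux", lambda irq: True)       # Linux handles everything else

dispatch(0)    # -> handled by rtai (the deterministic-latency path)
dispatch(14)   # -> propagated down the pipeline to linux
```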
RTAI-XML
RTAI-XML is a server component of RTAI, implementing a service-oriented way to design and develop real-time (RT) control applications.
This project was born to meet the needs of a university group focused on having a flexible platform for learning control-systems design, allowing students to test their programs remotely over the Internet. Moving from this initial idea to a real implementation gave rise to the alpha version of RTAI-XML, which showed the potential impact of the basic idea of a clean separation of hard and soft real-time tasks in the programming logic. What was originally necessary simply to ensure that students could not crash the RT process is now becoming a new RTAI paradigm.
RTAI-XML consists of a server component waiting for incoming calls on a box where a real-time process, the Target, is running (or is ready to run). A generic client program, the Host, can reach the server through the TCP/IP network, using a standard protocol based on XML, and hence interact with the Target in order to monitor the status of the RT process, to see the signals collected (or generated) by the system, and to fetch and change the RT parameters (for example, the gains of a PID feedback loop). In other words, RTAI-XML provides a simple way towards remoting of control applications, adding flexibility to the RTAI project without losing the key features of an open and standard implementation.
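A hypothetical sketch of this Host–Target split, using Python's standard XML-RPC modules; the method names and parameter layout are invented for illustration and are not the actual RTAI-XML protocol.

```python
from xmlrpc.server import SimpleXMLRPCServer

params = {"pid_gain": 1.0}          # parameters of the running RT process

def get_status():
    return {"target": "running", "params": params}

def set_param(name, value):
    # The server mediates access, so Hosts can tune soft parameters
    # without ever touching the hard real-time task directly.
    params[name] = value
    return True

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(get_status)
server.register_function(set_param)
# server.serve_forever()   # uncomment to actually serve Host requests
```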
The RTAI-XML documentation presents the details of the implementation. The general architecture is reviewed in order to focus on its three key components: the Server, the Server–Target interface, and the Server–Host communication. The Applications section contains some examples of control systems based on RTAI-XML, and the References section contains hints and links for further reading on this topic.
See also
Xenomai
References
External links
Univ. di Padova - RTAI / Xenomai presentation
RTAI-XML official website
Linux kernel
Real-time operating systems | RTAI | [
"Technology"
] | 705 | [
"Real-time computing",
"Real-time operating systems"
] |
604,020 | https://en.wikipedia.org/wiki/Mosaic%20%28genetics%29 | Mosaicism or genetic mosaicism is a condition in which a multicellular organism possesses more than one genetic line as the result of genetic mutation. This means that various genetic lines resulted from a single fertilized egg. Mosaicism is one of several possible causes of chimerism, wherein a single organism is composed of cells with more than one distinct genotype.
Genetic mosaicism can result from many different mechanisms including chromosome nondisjunction, anaphase lag, and endoreplication. Anaphase lagging is the most common way by which mosaicism arises in the preimplantation embryo. Mosaicism can also result from a mutation in one cell during development, in which case the mutation will be passed on only to its daughter cells (and will be present only in certain adult cells). Somatic mosaicism is not generally inheritable as it does not generally affect germ cells.
History
In 1929, Alfred Sturtevant studied mosaicism in Drosophila, a genus of fruit fly. H. J. Muller in 1930 demonstrated that mosaicism in Drosophila is always associated with chromosomal rearrangements, and Schultz in 1936 showed that, in all cases studied, these rearrangements were associated with heterochromatic inert regions. Several hypotheses on the nature of such mosaicism were proposed. One hypothesis assumed that mosaicism appears as the result of a break and loss of chromosome segments. Curt Stern in 1935 assumed that the structural changes in the chromosomes took place as a result of somatic crossing-over, through which mutations or small chromosomal rearrangements arise in somatic cells. On this view, the inert region causes an increase in the frequency of mutations or small chromosomal rearrangements in active segments adjacent to inert regions.
In the 1930s, Stern demonstrated that genetic recombination, normal in meiosis, can also take place in mitosis. When it does, it results in somatic (body) mosaics. These organisms contain two or more genetically distinct types of tissue. The term somatic mosaicism was used by CW Cotterman in 1956 in his seminal paper on antigenic variation.
In 1944, M. L. Belgovskii proposed that mosaicism could not account for certain mosaic expressions caused by chromosomal rearrangements involving heterochromatic inert regions. The associated weakening of biochemical activity led to what he called a genetic chimera.
Types
Germline mosaicism
Germline or gonadal mosaicism is a particular form of mosaicism wherein some gametes—i.e., sperm or oocytes—carry a mutation, but the rest are normal. The cause is usually a mutation that occurred in an early stem cell that gave rise to all or part of the gametes.
Somatic mosaicism
Somatic mosaicism (also known as clonal mosaicism) occurs when the somatic cells of the body are of more than one genotype. In the more common mosaics, different genotypes arise from a single fertilized egg cell, due to mitotic errors at first or later cleavages.
Somatic mutation leading to mosaicism is prevalent in the beginning and end stages of human life. Somatic mosaics are common in embryogenesis due to retrotransposition of long interspersed nuclear element-1 (LINE-1 or L1) and Alu transposable elements. In early development, DNA from undifferentiated cell types may be more susceptible to mobile element invasion due to long, unmethylated regions in the genome. Further, the accumulation of DNA copy errors and damage over a lifetime leads to greater occurrences of mosaic tissues in aging humans. As longevity has increased dramatically over the last century, the human genome may not have had time to adapt to the cumulative effects of mutagenesis. Thus, cancer research has shown that somatic mutations are increasingly present throughout a lifetime and are responsible for most leukemias, lymphomas, and solid tumors.
Trisomies, monosomies, and related conditions
The most common form of mosaicism found through prenatal diagnosis involves trisomies. Although most forms of trisomy are due to problems in meiosis and affect all cells of the organism, some cases occur where the trisomy occurs in only a selection of the cells. This may be caused by a nondisjunction event in an early mitosis, resulting in a loss of a chromosome from some trisomic cells. Generally, this leads to a milder phenotype than in nonmosaic patients with the same disorder.
In rare cases, intersex conditions can be caused by mosaicism where some cells in the body have XX and others XY chromosomes (46, XX/XY). In the fruit fly Drosophila melanogaster, where a fly possessing two X chromosomes is a female and a fly possessing a single X chromosome is a sterile male, a loss of an X chromosome early in embryonic development can result in sexual mosaics, or gynandromorphs. Likewise, a loss of the Y chromosome can result in XY/X mosaic males.
An example of this is one of the milder forms of Klinefelter syndrome, called 46,XY/47,XXY mosaic wherein some of the patient's cells contain XY chromosomes, and some contain XXY chromosomes. The 46/47 annotation indicates that the XY cells have the normal number of 46 total chromosomes, and the XXY cells have a total of 47 chromosomes.
Monosomies can also present with some form of mosaicism. The only non-lethal full monosomy occurring in humans is the one causing Turner's syndrome. Around 30% of Turner's syndrome cases demonstrate mosaicism, while complete monosomy (45, X) occurs in about 50–60% of cases.
Mosaicism is not necessarily deleterious, however. Revertant somatic mosaicism is a rare recombination event involving the spontaneous correction of a mutant, pathogenic allele. In revertant mosaicism, the healthy tissue formed by mitotic recombination can outcompete the original, surrounding mutant cells in tissues such as blood and epithelia that regenerate often. In the skin disorder ichthyosis with confetti, normal skin spots appear early in life and increase in number and size over time.
Other endogenous factors can also lead to mosaicism, including mobile elements, DNA polymerase slippage, and unbalanced chromosome segregation. Exogenous factors include nicotine and UV radiation. Somatic mosaics have been created in Drosophila using X‑ray treatment and the use of irradiation to induce somatic mutation has been a useful technique in the study of genetics.
True mosaicism should not be mistaken for the phenomenon of X-inactivation, where all cells in an organism have the same genotype, but a different copy of the X chromosome is expressed in different cells. The latter is the case in normal (XX) female mammals, although it is not always visible from the phenotype (as it is in calico cats). However, all multicellular organisms are likely to be somatic mosaics to some extent.
Gonosomal mosaicism
Gonosomal mosaicism is a type of somatic mosaicism that occurs very early in the organism's development and thus is present within both germline and somatic cells. Somatic mosaicism is not generally inheritable, as it does not usually affect germ cells. In the instance of gonosomal mosaicism, however, organisms have the potential to pass the genetic alteration on to potential offspring, because the altered allele is present in both somatic and germline cells.
Brain cell mosaicism
A frequent type of neuronal genomic mosaicism is copy number variation. Possible sources of such variation were suggested to be incorrect repairs of DNA damage and somatic recombination.
Mitotic recombination
One basic mechanism that can produce mosaic tissue is mitotic recombination, or somatic crossover. It was first discovered by Curt Stern in Drosophila in 1936. The amount of tissue that is mosaic depends on where in the tree of cell division the exchange takes place. A phenotypic character called "twin spot" seen in Drosophila is a result of mitotic recombination. However, it also depends on the allelic status of the genes undergoing recombination. Twin spots occur only if the heterozygous genes are linked in repulsion, i.e. in the trans phase. The recombination needs to occur between the centromere and the adjacent gene. This gives the appearance of yellow patches on the wild-type background in Drosophila. Another example of mitotic recombination is Bloom's syndrome, which is due to a mutation in the blm gene. The resulting BLM protein is defective. Since BLM is a RecQ helicase, the defect causes faulty unwinding of DNA during replication and is thus associated with the occurrence of this disease.
Use in experimental biology
Genetic mosaics are a particularly powerful tool when used in the commonly studied fruit fly, where specially selected strains frequently lose an X or a Y chromosome in one of the first embryonic cell divisions. These mosaics can then be used to analyze such things as courtship behavior, and female sexual attraction.
More recently, the use of a transgene incorporated into the Drosophila genome has made the system far more flexible. The flip recombinase (or FLP) is a gene from the commonly studied yeast Saccharomyces cerevisiae that recognizes "flip recombinase target" (FRT) sites, which are short sequences of DNA, and induces recombination between them. FRT sites have been inserted transgenically near the centromere of each chromosome arm of D. melanogaster. The FLP gene can then be induced selectively, commonly using either the heat shock promoter or the GAL4/UAS system. The resulting clones can be identified either negatively or positively.
In negatively marked clones, the fly is transheterozygous for a gene encoding a visible marker (commonly the green fluorescent protein) and an allele of a gene to be studied (both on chromosomes bearing FRT sites). After induction of FLP expression, cells that undergo recombination will have progeny homozygous for either the marker or the allele being studied. Therefore, the cells that do not carry the marker (which are dark) can be identified as carrying a mutation.
Using negatively marked clones is sometimes inconvenient, especially when generating very small patches of cells, where seeing a dark spot on a bright background is more difficult than a bright spot on a dark background. Creating positively marked clones is possible using the so-called MARCM ("mosaic analysis with a repressible cell marker") system, developed by Liqun Luo, a professor at Stanford University, and his postdoctoral student Tzumin Lee, who now leads a group at Janelia Farm Research Campus. This system builds on the GAL4/UAS system, which is used to express GFP in specific cells. However, a globally expressed GAL80 gene is used to repress the action of GAL4, preventing the expression of GFP. Instead of using GFP to mark the wild-type chromosome as above, GAL80 serves this purpose, so that when it is removed by mitotic recombination, GAL4 is allowed to function, and GFP turns on. This results in the cells of interest being marked brightly in a dark background.
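The clone-marking logic can be illustrated with a toy simulation (purely illustrative; the rates and cell labels are invented). Each transheterozygous cell occasionally undergoes an FLP-induced mitotic recombination at division, yielding one daughter homozygous for the marker and one homozygous for the mutation; in the negative-marking scheme the mutant clone is the dark one, while MARCM inverts the logic by removing GAL80 so that only the clone of interest fluoresces.

```python
import random

random.seed(1)   # deterministic toy run

def divide(cell, flp_rate=0.1):
    # A FLP/FRT recombination event in a "GFP/mut" cell sends the marker
    # chromatids to one daughter and the mutant chromatids to the other.
    if cell == "GFP/mut" and random.random() < flp_rate:
        return ["GFP/GFP", "mut/mut"]
    return [cell, cell]

tissue = ["GFP/mut"]
for _ in range(8):                        # eight rounds of cell division
    tissue = [d for c in tissue for d in divide(c)]

dark = sum(c == "mut/mut" for c in tissue)
print(f"{len(tissue)} cells, of which {dark} are dark mutant-clone cells")
```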
See also
45,X/46,XY mosaicism (X0/XY mosaicism)
References
Further reading
Genetics
Cell biology | Mosaic (genetics) | [
"Biology"
] | 2,430 | [
"Cell biology",
"Genetics"
] |
604,026 | https://en.wikipedia.org/wiki/Bowing | Bowing (also called stooping) is the act of lowering the torso and head as a social gesture in direction to another person or symbol. It is most prominent in Asian cultures but it is also typical of nobility and aristocracy in many European countries. It is also used in religious contexts, as a form of worship or veneration. Sometimes the gesture may be limited to lowering the head such as in Indonesia, and in many cultures several degrees of the lowness of the bow are distinguished and regarded as appropriate for different circumstances. It is especially prominent in Nepal, India, Cambodia, Thailand, Laos, Vietnam, China, Korea, and Japan, where it may be executed standing or kneeling. Some bows are performed equally by two or more people while others are unequal – the person bowed to either does not bow in return or performs a less low bow in response. A nod of the head may be regarded as the minimal form of bow; forms of kneeling, genuflection, or prostration which involves the hands or whole body touching the ground, are the next levels of gesture.
In Europe and the Commonwealth
Bowing is a traditional gesture of respect and gratitude in European cultures. Since the 17th century, bowing has been a primarily male practice. Women instead perform a curtsy, a related gesture that diverged from the bow during the early modern period. However, women may still bow in some religious practices or during stage performances, such as at the curtain call.
The depth of the bow was related to the difference in rank or degree of respect or gratitude. In early modern courtly circles, males were expected to "bow and scrape" (hence the term "bowing and scraping" for what appears to be excessive ceremony). "Scraping" refers to the drawing back of the right leg as one bows, such that the right foot scrapes the floor or earth. Typically, while executing such a bow, the man's right hand is pressed horizontally across the abdomen while the left is held out from the body. Today, social bowing is all but extinct, except in some very formal settings. However, hand-kissing of women by men, which entails bowing to reach the hand, continues in some cultures.
In the British, Australian, and other Commonwealth courts, lawyers and clerks (of both sexes) are expected to perform a cursory bow of the head only to the judge when entering or leaving a law court that is in session. Similar gestures are made to the Speaker of the House of Commons when entering or leaving the chamber of the House of Commons in session, and to the monarch by their staff.
Members of the Royal Family of the various Commonwealth Realms are either bowed or curtsied to, depending on the gender of the subject. Australians are expected to bow to the Governor-General of Australia, the spouse of the Governor-General and state Governors and Lieutenant-Governors.
In Asia
In East Asia
Bows are the traditional greeting in East Asia, particularly in Japan, Korea, Hong Kong, China and Vietnam. In China and Vietnam, shaking hands or a slight bow have become more popular than a full bow. However, bowing is not reserved only for greetings; it can also be used as a gesture of respect, with different bows used for apologies and gratitude.
In China
The kowtow is the highest sign of reverence in Han Chinese culture, but its use has become extremely rare since the collapse of Imperial China. In many situations, the standing bow has replaced the kowtow. However, in modern Chinese societies, bowing is not as formalized as in Japan, South Korea and North Korea. Bowing is normally reserved for occasions such as marriage ceremonies and as a gesture of respect for the deceased, although it is still sometimes used for more formal greetings.
In China, three bows are customarily executed at funerals (including state funerals), in ancestral worship, and at special ceremonies in commemoration of pater patriae Sun Yat-sen.
As in Japan and Korea, public figures may bow formally to apologize. The Chinese Premier bowed and offered his condolences to stranded railway passengers; the Taiwanese Defense Minister bowed in apology following a faux pas concerning the shooting of the former President in 2004.
In South and Southeast Asia
Similarly to East Asia, bowing is the traditional form of greeting in many South Asian and Southeast Asian countries. A gesture known as the Añjali Mudrā is used as a sign of respect and greeting; it involves a bow of varying degrees, depending on whom one performs it to, with the hands pressed together generally at chest level. Practised throughout South Asia and Southeast Asia, the gesture is most commonly used in India, Sri Lanka, Nepal, Bhutan, Bangladesh, Cambodia, Thailand, Laos, Myanmar and Indonesia. Gestures across the region are known by different names, such as the sampeah in Cambodia, wai in Thailand, sembah in Indonesia, and namaste in India and Nepal; in Sri Lanka the gesture is used as a greeting with the word "Ayubowan".
In religious settings
Eastern religions
In many Eastern religions bowing is used as a sign of respect in worship and has its origins in the Indian "Añjali Mudrā".
Sikhism
Sikhs bowed only to their Gurus, who were the messengers of God. Their holy book, the Guru Granth Sahib, is seen as the eternal Guru after the death of the living Gurus, as it contains the word of God written by the past living Gurus. In a Gurdwara, Sikhs bow to the Guru Granth Sahib and are not permitted to take part in idol worship, to bow to anything other than the Guru Granth Sahib, or to bow to any living person.
Shinto
Bows are performed in Shinto settings. Visitors to a Shinto shrine will clap or ring a bell to attract the attention of the enshrined deity, clasp the hands in prayer, and then bow.
Buddhism
Bowing is a common feature for worship in Buddhism. Zen Buddhism, for example, has a daily ritual in which practitioners do 1,080 full prostration bows, usually spread throughout the day. More casual practitioners and laypeople typically do 108 bows once a day instead.
Hinduism
In the Hindu traditions people show deference by bowing or kneeling down and touching feet of an elder or respected person.
Traditionally, a child is expected to bow down to their parents, teachers and elders during certain formal ceremonies and casual settings.
Abrahamic religions
Judaism
In the Jewish setting, bowing, similar to in Christianity, is a sign of respect, and is done at certain points in Jewish services. By tradition, in the Temple in Jerusalem, kneeling was part of the regular service, but this is not part of a modern Jewish service.
Some bows within the current liturgy are simple bows from the waist — others (especially during parts of the Amidah) involve bending the knees while saying Baruch (Blessed), bowing from the waist at Atah ([are] you) and then straightening up at Adonai (God). During the concluding Aleinu section of the services, congregants usually bow when they say "V'anachnu kor'im u'mishtachavim u'modim," meaning "we bend our knees, prostrate, and acknowledge our thanks." Another moment in the service which triggers the bow is during the "Bar'chu". Many bow at the mention of "Adonai" (the Jewish addressing of the Lord) at this and various other parts in the service (most likely if they are to remain standing during that prayer).
Kneeling is retained in modern Orthodox Judaism, but only on the High Holy Days — once on each day of Rosh Hashanah (when the Aleinu prayer is recited during the Amidah), and four times on Yom Kippur — again, once for Aleinu, and three times during a central portion of the service when the details of the Avodah, the High Priest's service in the Temple are recited.
The Talmudic texts as well as writings of Gaonim and Rishonim indicate that total prostration was common among many Jewish communities until some point during the Middle Ages. Members of the Karaite denomination practice full prostrations during prayers. Ashkenazi Jews prostrate during Rosh Hashana and Yom Kippur as did Yemenite Jews during the Tachanun part of regular daily Jewish prayer until somewhat recently. Ethiopian Jews traditionally prostrated during a holiday specific to their community known as Sigd. Sigd comes from a root word meaning prostration in Amharic, Aramaic, and Arabic. There is a move among Talmide haRambam, a small modern restorationist group with perspectives on Jewish law similar to that of Dor Daim, to revive prostration as a regular part of daily Jewish worship.
Christianity
Communicants of many Christian denominations bow at the mention of the name of Jesus, while inside of a church and outside of one. The origin of this practice is within Sacred Scripture, which states: "Therefore God also highly exalted Him and gave Him the name that is above every name, so that at the name of Jesus every knee should bend, in heaven and on earth and under the earth" (NRSV). This pious custom was mandated in the Second Council of Lyon, which proclaimed "Whenever that glorious name is recalled, especially during the sacred mysteries of the Mass, everyone should bow the knees of his heart which he can do even by a bow of his head." The eighteenth canon of the Church of England, mother Church of the Anglican Communion, made this external obeisance obligatory during the divine service, declaring: "When in time of divine service the Lord JESUS shall be mentioned, due and lowly reverence shall be done by all persons present, as it has been accustomed; testifying by these outward ceremonies and gestures their inward humility, Christian resolution, and due acknowledgement that the Lord JESUS CHRIST, the true eternal Son of God, is the only Saviour of the world, in whom alone all the mercies, graces, and promises of God to mankind for this life, and the life to come, are fully and wholly comprised." Likewise, in the Lutheran Churches, people are "to bow when the name of Jesus is mentioned", and in the Roman Catholic Church "at the mention of the name of Jesus, there is a slight bow of the head". John Wesley, the founder of the Methodist Churches, also taught the faithful "to bow at the Name of Jesus" and as a result, it is customary for Methodists to bow at the mention of His name, especially during the recitation of the Creed.
In Christian liturgy, bowing is a sign of respect or deference. In many Christian denominations, individuals will bow when passing in front of the altar, or at certain points in the service (for example, when the name of Jesus Christ is spoken, as mentioned above). It may take the form of a simple bow of the head, or a slight incline of the upper body. A profound bow is a deep bow from the waist, and is often done as a substitution for genuflection.
In Eastern Orthodoxy, there are several degrees of bowing, each with a different meaning. Strict rules exist as to which type of a bow should be used at any particular time. The rules are complicated and are not always carried out in all parishes.
In the Roman Rite of the Catholic Church, a profound bow, prostration, a slight bow of the head (during the Creed), genuflection, and kneeling are all prescribed in the liturgy at various points. In addition, there are two forms of genuflection, depending on whether or not the Blessed Sacrament is exposed on the altar. In addition to bowing at the mention of the name of Jesus, in the Anglican Communion, "A reverence in the form of a bow is made to an altar, because it is as it were God's throne, and in a manner represents Him." As with Anglican churches, in Lutheran and Methodist churches, when approaching the chancel, it is customary to bow towards the altar (or altar cross). In Anglican churches a bow is also made when the processional cross passes by a communicant in a church procession.
Conservative Protestant Christians such as Brethren, Mennonite, and Seventh-day Adventists make a practice of kneeling during community prayer in the church service. Until the mid-1900s this was common practice among many Protestant Christian groups.
According to the New Testament writer Paul, everyone on Earth will someday bow to Jesus Christ. He writes in Philippians 2:9-11, "Wherefore God also hath highly exalted him, and given him a name which is above every name: That at the name of Jesus every knee should bow, of things in heaven, and things in earth, and things under the earth; And that every tongue should confess that Jesus Christ is Lord, to the glory of God the Father." KJV. He is here quoting a similar passage regarding bowing from the Old Testament, Isaiah 45:23.
Islam
In Islam, there are two types of bowing, Sujud and Ruk'u. Sajdah or Sujud is to prostrate oneself to God in the direction of the Kaaba at Mecca which is done during daily prayers (salat). While in sujud, a Muslim is to praise God and glorify him. The position involves having the forehead, nose, both hands, knees and all toes touching the ground together. Ruku' is bowing down in the standing position during daily prayers (salat). The position of ruku' is established by bending over, putting one's hands on one's knees, and remaining in that position while also praising God and glorifying him.
See also
References
External links
More information on bowing in religious settings
Gestures of respect
Human positions
Greetings
Parting traditions
Articles containing video clips | Bowing | [
"Biology"
] | 2,851 | [
"Behavior",
"Human positions",
"Human behavior"
] |
604,046 | https://en.wikipedia.org/wiki/Rosiglitazone | Rosiglitazone (trade name Avandia) is an antidiabetic drug in the thiazolidinedione class. It works as an insulin sensitizer, by binding to the PPAR in fat cells and making the cells more responsive to insulin. It is marketed by the pharmaceutical company GlaxoSmithKline (GSK) as a stand-alone drug or for use in combination with metformin or with glimepiride. First released in 1999, annual sales peaked at approximately $2.5 billion in 2006; however, following a meta-analysis in 2007 that linked the drug's use to an increased risk of heart attack, sales plummeted to just $9.5 million in 2012. The drug's patent expired in 2012.
It was patented in 1987 and approved for medical use in 1999. Despite rosiglitazone's effectiveness at decreasing blood sugar in type 2 diabetes mellitus, its use decreased dramatically as studies showed apparent associations with increased risks of heart attacks and death. Adverse effects alleged to be caused by rosiglitazone were the subject of over 13,000 lawsuits against GSK; as of July 2010, GSK had agreed to settlements on more than 11,500 of these suits.
Some reviewers recommended rosiglitazone be taken off the market, but an FDA panel disagreed, and it remains available in the U.S. From November 2011 until November 2013, the federal government did not allow Avandia to be sold without a prescription from a certified doctor; moreover, patients were required to be informed of the risks associated with its use, and the drug had to be purchased by mail order through specified pharmacies. In 2013, the FDA lifted its earlier restrictions on rosiglitazone after reviewing the results of a 2009 trial which failed to show increased heart attack risk.
In Europe, the European Medicines Agency (EMA) recommended in September 2010 that the drug be suspended because the benefits no longer outweighed the risks. It was withdrawn from the market in the UK, Spain and India in 2010, and in New Zealand and South Africa in 2011.
Medical uses
Rosiglitazone was approved for glycemic control in people with type 2 diabetes, as measured by glycated haemoglobin A1c (HbA1c) as a surrogate endpoint, similar to that of other oral antidiabetic drugs. The controversy over adverse effects has dramatically reduced the use of rosiglitazone.
Published studies did not provide evidence that outcomes like mortality, morbidity, adverse effects, costs and health-related quality of life are positively influenced by rosiglitazone.
Adverse effects
Heart failure
One of the safety concerns identified before approval was fluid retention. Moreover, the combination of rosiglitazone with insulin resulted in a higher rate of congestive heart failure. In Europe there were contraindications for use in heart failure and combination with insulin.
Meta-analyses of all trials, published in 2010 and 2019, confirmed a higher risk of heart failure, and a doubled risk when rosiglitazone was administered as add-on therapy to insulin. Two meta-analyses of real-life cohort studies found a higher risk of heart failure compared to pioglitazone. There were 649 excess cases of heart failure for every 100,000 patients who received rosiglitazone rather than pioglitazone.
Heart attacks
The relative risk of ischemic cardiac events seen in pre-approval trials of rosiglitazone was similar to that of comparable drugs, but increases were seen in LDL cholesterol, the LDL/HDL cholesterol ratio, triglycerides, and weight.
In 2005, at the insistence of the World Health Organization, GSK performed a meta-analysis of all 37 trials involving use of rosiglitazone, finding a hazard ratio of 1.29 (0.99 to 1.89).
In 2006, GSK updated the analysis, now including 42 trials and showing a hazard ratio of 1.31 (1.01 to 1.70). A large observational study comparing patients treated with rosiglitazone with patients treated with other diabetes therapies was performed at the same time and found a relative risk of 0.93 (95% C.I. 0.8 to 1.1) for those treated with rosiglitazone. The information was passed to the FDA and posted on the company website, but not otherwise published. GSK provided these analyses to the FDA, but neither the company nor the FDA warned prescribers or patients of the hazard. According to the FDA, the Agency did not issue a safety bulletin because the results of the meta-analysis conflicted with those of the observational study and with the results of the ADOPT trial.
A meta-analysis in May 2007 reported that the use of rosiglitazone was associated with a 1.4-fold increased risk of heart attack and a numerically higher (but non-significant) increase in risk of death from all cardiovascular diseases compared with controls. It contained 42 trials, of which 27 were unpublished. Another meta-analysis of 4 trials with follow-up longer than one year found similar results. Nissen's meta-analysis was criticized in a 2007 article by George Diamond et al. in the Annals of Internal Medicine. The authors concluded that Nissen's analysis had excluded trials with important data on the cardiovascular profile of rosiglitazone, had inappropriately combined trials of greatly differing design, and had inappropriately excluded trials with no cardiovascular events, and that no firm conclusion could be drawn regarding whether rosiglitazone increased or decreased cardiovascular risk. Investigators from the Cochrane Collaboration published a meta-analysis of their own on the use of rosiglitazone in type 2 diabetes, concluding there was not sufficient evidence to show any health benefit for rosiglitazone. Noting the recent publication by Nissen, they repeated their meta-analysis including only the trials in the Nissen study that dealt with type 2 diabetics. (The Nissen study included some trials in people with other disorders.) They did not find a statistically significant increase in cardiovascular events, but noted that all of the cardiovascular endpoints they analyzed showed a non-significant trend toward worse outcomes in the rosiglitazone arms.
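Pooled estimates like the 1.4-fold figure come from combining trial-level 2×2 tables. A minimal sketch of fixed-effect pooling with the Mantel–Haenszel odds ratio follows; the trial counts are invented placeholders, not data from any actual rosiglitazone study.

```python
# Fixed-effect pooling of trial-level 2x2 tables via the Mantel-Haenszel
# odds ratio: OR = sum(a*d/n) / sum(b*c/n) over trials.
trials = [
    # (events_drug, total_drug, events_control, total_control) -- invented
    (5, 1000, 3, 1000),
    (2,  400, 1,  400),
    (9, 2500, 6, 2600),
]

numerator = denominator = 0.0
for ed, nd, ec, nc in trials:
    n = nd + nc
    numerator   += ed * (nc - ec) / n   # a*d/n for this trial
    denominator += ec * (nd - ed) / n   # b*c/n for this trial

print(f"pooled Mantel-Haenszel OR = {numerator / denominator:.2f}")
```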
In July 2007 the FDA held a joint meeting of the Endocrinologic and Metabolic Drugs Advisory Committee and the Drug Safety and Risk Management Advisory Committee. FDA scientist Joy Mele presented a meta-analysis examining the cardiovascular risk of rosiglitazone in completed clinical trials. The study found an overall 1.4-fold increase in risk of cardiovascular ischemic events relative to the control arms. The results were heterogeneous, with clear evidence of increased risk relative to placebo but not relative to other diabetes treatments, and higher risk associated with combinations of rosiglitazone with insulin or metformin. Based on the 1.4-fold increased risk relative to control groups, FDA scientist David Graham presented an analysis suggesting that rosiglitazone had caused 83,000 excess heart attacks between 1999 and 2007. The advisory panel voted 20 to 3 that the evidence available indicated that rosiglitazone increased the risk of cardiovascular events, and 22 to 1 that the overall risk-benefit ratio of rosiglitazone justified its continued marketing in the United States. The FDA placed restrictions on the drug, including adding a boxed warning about heart attacks, but did not withdraw it.
In 2000 a study to address the concerns regarding cardiovascular safety was requested by the European Medicines Agency (EMA). GSK agreed to perform post-marketing a long-term cardiovascular morbidity/mortality study in patients on rosiglitazone in combination with a sulfonylurea or metformin: the RECORD study. The results as published in 2009 showed that rosiglitazone was non-inferior to treatment with metformin or a sulfonylurea with respect to the rate of cardiovascular events and cardiovascular death. European regulators concluded that due in part to design limitations, the results neither proved nor eliminated concerns of excess cardiovascular risk.
In February 2010, David Graham, the FDA's associate director of drug safety, recommended rosiglitazone be taken off the market. In June 2010, he and colleagues published a retrospective study comparing rosiglitazone to pioglitazone, the other thiazolidinedione marketed in the United States, and concluded rosiglitazone was associated with "an increased risk of stroke, heart failure, and all-cause mortality and an increased risk of the composite of AMI, stroke, heart failure, or all-cause mortality in patients 65 years or older". The number needed to harm with rosiglitazone was sixty. Graham argued rosiglitazone caused 500 more heart attacks and 300 more heart failures than its main competitor.
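The "number needed to harm" and "excess events per 100,000 patients" figures quoted in this section are simple functions of the absolute risk difference. A sketch of the arithmetic, with hypothetical event rates chosen only so that the NNH lands near the reported value of sixty:

```python
# NNH = 1 / absolute risk increase; excess per 100,000 = ARR * 100,000.
# The rates below are hypothetical placeholders, not study data.
rate_drug       = 0.0500       # adverse-event rate on the drug
rate_comparator = 0.0333       # rate on the comparator

arr = rate_drug - rate_comparator    # absolute risk increase
nnh = 1 / arr                        # one extra event per NNH patients treated
print(f"NNH ~ {nnh:.0f}")                                  # ~ 60
print(f"excess events per 100,000 ~ {arr * 100_000:.0f}")  # ~ 1670
```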
Two meta-analyses released in 2010, one incorporating 56 trials and a second incorporating 164 trials, reached conflicting conclusions. Nissen et al. again found an increased risk of myocardial infarction compared with controls, but no increased risk of cardiovascular death. Mannucci et al. found no statistically significant increase in cardiac events but a significant increase in heart failure. A 2011 drug class review found an increased risk of cardiovascular adverse events.
A meta-analysis of 16 observational studies released in March 2011 compared rosiglitazone to pioglitazone, finding support for greater cardiovascular safety for pioglitazone. The meta-analysis involved 810,000 patients taking rosiglitazone or pioglitazone. The study suggests 170 excess myocardial infarctions, 649 excess cases of heart failure, and 431 excess deaths for every 100,000 patients who receive rosiglitazone rather than pioglitazone. This was confirmed by another meta-analysis involving 945,286 patients in 8 retrospective cohort studies, most in the US.
In 2012, the U.S. Justice Department announced GlaxoSmithKline had agreed to plead guilty and pay a $3 billion fine, in part for withholding the results of two studies of the cardiovascular safety of Avandia between 2001 and 2007.
Death
There was no difference in all-cause and vascular death in a meta-analysis of 4 trials against controls. Two meta-analyses of cohort studies found excess deaths compared with pioglitazone.
Stroke
A retrospective observational study performed using Medicare data found that patients treated with rosiglitazone had a 27% higher risk of stroke compared to those treated with pioglitazone.
Bone fractures
GlaxoSmithKline reported a greater incidence of fractures of the upper arms, hands and feet in female diabetics given rosiglitazone compared with those given metformin or glyburide.
The information was based on data from the ADOPT trial. The same increase has been found with pioglitazone (Actos), another thiazolidinedione.
A meta-analysis of 10 RCTs, involving 13,715 patients and including both rosiglitazone- and pioglitazone-treated patients, showed an overall 45% increased risk of fracture with thiazolidinedione use compared with placebo or an active comparator. Thiazolidinedione use doubled the risk of fractures among women with type 2 diabetes, without a significant increase in risk of fractures among men with type 2 diabetes.
Hypoglycaemia
The risk of hypoglycaemia is reduced with thiazolidinediones when compared with sulfonylureas; the risk is similar to the risk with metformin (high strength of evidence).
Weight gain
Both thiazolidinediones cause a similar degree of weight gain to that caused by sulfonylureas (moderate strength of evidence).
Eye damage
Both rosiglitazone and pioglitazone have been suspected of causing macular edema, which damages the retina of the eye and causes partial blindness. Blindness is also a possible effect of diabetes, which rosiglitazone is intended to treat. One report documented several occurrences and recommended discontinuation at the first sign of vision problems. A retrospective cohort study showed an association between the use of thiazolidinediones and the incidence of diabetic macular edema (DME): use of either drug was associated with a 2.3-fold higher risk at 1-year and at 10-year follow-up, rising to 3-fold when combined with insulin.
Hepatotoxicity
Moderate to severe acute hepatitis has occurred in several adults who had been taking the drug at the recommended dose for two to four weeks. Plasma rosiglitazone concentrations may be increased in people with existing liver problems.
Contraindications
Both rosiglitazone and pioglitazone are contraindicated in people with NYHA Class III and IV heart failure. They are not recommended for use in heart failure.
In Europe rosiglitazone was contraindicated for heart failure or history of heart failure with regard to all NYHA stages, for combined use with insulin and for acute coronary syndrome. The European Medicines Agency recommended on 23 September 2010 that Avandia be suspended from the European market.
Pharmacology
Rosiglitazone is a member of the thiazolidinedione class of drugs. Thiazolidinediones act as insulin sensitizers. They reduce glucose, fatty acid, and insulin blood concentrations. They work by binding to the peroxisome proliferator-activated receptors (PPARs). PPARs are transcription factors that reside in the nucleus and become activated by ligands such as thiazolidinediones. Thiazolidinediones enter the cell, bind to the nuclear receptors, and alter the expression of genes. The several PPARs include PPARα, PPARβ/δ, and PPARγ. Thiazolidinediones bind to PPARγ.
PPARs are expressed in fat cells, cells of the liver, muscle, heart, and the inner wall (endothelium) and smooth muscle of blood vessels. PPARγ is expressed mainly in fat tissue, where it regulates genes involved in fat cell (adipocyte) differentiation, fatty acid uptake and storage, and glucose uptake. It is also found in pancreatic beta cells, vascular endothelium, and macrophages. Rosiglitazone is a selective ligand of PPARγ and has no PPARα-binding action. Other drugs bind to PPARα.
Rosiglitazone also appears to have an anti-inflammatory effect in addition to its effect on insulin resistance. Nuclear factor kappa-B (NF-κB), a signaling molecule, stimulates the inflammatory pathways. NF-κB inhibitor (IκB) downregulates the inflammatory pathways. When patients take rosiglitazone, NF-κB levels fall and IκB levels increase.
History
Rosiglitazone was approved by the US FDA in 1999 and by the EMA in 2000; the EMA however required two postmarketing studies on longterm adverse effects, one for chronic heart failure and the other for cardiovascular effects.
Society and culture
Sales
US sales of the drug were $2.2 billion in 2006. Sales in the second quarter of 2007 were down 22% compared with 2006, and by the fourth quarter of 2007 sales had fallen to $252 million.
Though sales have gone down since 2007 due to safety concerns, Avandia sales for 2009 totalled $1.2 billion worldwide.
Lawsuits
According to analysts from UBS, 13,000 suits had been filed by March 2010. Included among those suing: Santa Clara County, California, which claims to have spent $2 million on rosiglitazone between 1999 and 2007 at its public hospital and is asking for "triple damages".
In May 2010, GlaxoSmithKline (GSK) reached settlement agreements for some of the cases against the company, agreeing to pay $60 million to resolve 700 suits. In July 2010, GSK reached settlement agreements to close another 10,000 of the lawsuits against it, agreeing to pay about $460 million to settle these suits.
In 2012, the U.S. Justice Department announced GlaxoSmithKline had agreed to plead guilty and pay a $3 billion fine, in part for withholding the results of two studies of the cardiovascular safety of Avandia between 2001 and 2007. The settlement stems from claims made by four employees of GlaxoSmithKline, including a former senior marketing development manager for the company and a regional vice president, who tipped off the government about a range of improper practices from the late 1990s to the mid-2000s.
United States investigations
GlaxoSmithKline was being investigated by the FDA and the US Congress regarding Avandia.
Senators Max Baucus, a Democrat, and Charles Grassley, a Republican, filed a report in 2008 urging GSK to withdraw Avandia due to its side effects. The report noted that the drug caused 500 avoidable heart attacks a month and that Glaxo officials had sought to intimidate doctors who criticized the drug. It also said GSK continued to sell and promote the drug despite knowing of the increased risk of heart attacks and stroke.
The Senate Finance Committee, in a panel investigation, revealed emails from GSK company officials that suggest the company downplayed scientific findings about safety risks dating back to 2000. It was also alleged by the committee that the company initiated a "ghostwriting campaign", whereby GSK sought outside companies to write positive articles about Avandia to submit to medical journals. GSK defended itself by presenting data that its own tests found Avandia to be safe, although an FDA staff report showed the conclusions were flawed.
On July 14, 2010, after two days of extensive deliberations, the FDA panel investigating Avandia came to a mixed vote. Twelve members of the panel voted to take the drug off the market, 17 recommended leaving it on the market with a revised warning label, and three voted to keep it on the market with the current warning label. The panel attracted some controversy, however: on July 20, 2010, one of the panelists was discovered to have been a paid speaker for GlaxoSmithKline, raising questions of a conflict of interest. This panel member was one of the three who voted to keep Avandia on the market with no additional warning labels.
In 2011 the FDA revised its prescribing information and medication guides for all rosiglitazone-containing medicines. The US labels for rosiglitazone (Avandia, GlaxoSmithKline) and all rosiglitazone-containing medications (Avandamet and Avandaryl) now include the additional safety information and restrictions. The revised labels restrict use to patients already taking a rosiglitazone-containing medicine, or to new patients who are unable to achieve adequate glycemic control on other diabetes medications and who, in consultation with their healthcare provider, have decided not to take Actos (pioglitazone) or other pioglitazone-containing medicines for medical reasons.
In June 2013 an FDA Advisory Committee reviewed all available data, including a re-adjudicated RECORD trial, found no evidence of increased cardiovascular risk with Avandia, and voted to remove the restrictions on Avandia marketing in the United States. In November 2013, the US FDA removed these marketing restrictions on the product. Under the FDA's instruction, Avandia's maker, GlaxoSmithKline, had funded the Duke Clinical Research Institute to re-analyze the raw data from the study. Of the three 2010 panelists who had voted that the existing warnings were good enough, two returned in 2013. Of the seven who had voted to make those warnings more onerous, five returned. But of the 10 who had voted to restrict Avandia's use, only four returned, and of the 12 who had voted in 2010 to withdraw Avandia from the market, only three came back.
European investigations
In 2000 a study to address the concerns regarding cardiovascular safety was requested by the EMA, and the makers agreed to perform a post-marketing long-term cardiovascular morbidity/mortality study in patients on rosiglitazone in combination with a sulfonylurea or metformin: the RECORD study. The results as published in 2009 showed non-inferiority with regard to cardiovascular events and cardiovascular death when treatment with rosiglitazone was compared with metformin or a sulfonylurea. For myocardial infarction, there was a non-statistically significant increase in risk. In their assessment, the European regulators acknowledged weaknesses of the study, such as an unexpectedly low rate of cardiovascular events and the open-label design, which may lead to reporting bias. They found that the results were inconclusive. The European Medicines Agency recommended on 23 September 2010 that Avandia be suspended from the European market.
According to a probe by the British Medical Journal in September 2010, the United Kingdom's Commission on Human Medicines had recommended to the Medicines and Healthcare Products Regulatory Agency (MHRA) in July 2010 that Avandia be withdrawn from sale because its "risks outweigh its benefits". Additionally, the probe revealed that in 2000, members of the European panel in charge of reviewing Avandia prior to its approval had concerns about the long-term risks of the drug.
New Zealand
Rosiglitazone was withdrawn from the New Zealand market in April 2011 because Medsafe concluded that the suspected cardiovascular risks of the medicine for patients with type 2 diabetes outweigh its benefits.
South Africa
A notice issued by the Medicines Control Council of South Africa on July 5, 2011, stated that it had resolved on July 3, 2011, to withdraw all rosiglitazone-containing medicines from the South African market due to safety risks. It disallowed all new prescriptions of Avandia.
Controversy and response
Following the reports in 2007 that Avandia can significantly increase the risk of heart attacks, the drug has been controversial. A 2010 article in Time uses the Avandia case as evidence of a broken FDA regulatory system that "may prove criminal as well as fatal". It details the disclosure failures, adding, "Congressional reports revealed that GSK sat on early evidence of the heart risks of its drug, and that the FDA knew of the dangers months before it informed the public." It reports, "the FDA is investigating whether GSK broke the law by failing to fully inform the agency of Avandia's heart risks", according to deputy FDA commissioner Dr. Joshua Sharfstein. GSK threatened academics who reported adverse research results, and received multiple warning letters from the FDA for deceptive marketing and failure to report clinical data. The maker of the drug, GlaxoSmithKline, has dealt with serious backlash against the company over the drug's controversy. Sales of the drug dropped significantly after the story first broke in 2007, falling from $2.5 billion in 2006 to less than $408 million in 2009 in the US.
In response to the rise in risk of heart attacks, the Indian government ordered GSK to suspend its research study, called TIDE, in 2010. The FDA also halted the TIDE study in the United States.
Three doctors' groups, the Endocrine Society, the American Diabetes Association and the American Association of Clinical Endocrinologists, urged patients to continue taking the drug, as stopping all treatment would be much worse than any associated risk, but said that patients could consult their doctors and begin a switch to a different drug if they or their doctors had concerns. The American Heart Association said in a statement in June 2010: " ...the reports deserves serious consideration, and patients with diabetes who are 65 years of age or older and being treated with rosiglitazone should discuss the findings with their prescribing physician....".
"For patients with diabetes, the most serious consequences are heart disease and stroke, and the risk of suffering from them is significantly increased when diabetes is present. As in most situations, patients should not change or stop medications without consulting their healthcare provider."
Research
Rosiglitazone was thought to be able to benefit patients with Alzheimer's disease who do not express the ApoE4 allele, but the phase III trial designed to test this showed that rosiglitazone was ineffective in all patients, including ApoE4-negative patients.
Rosiglitazone may also treat mild to moderate ulcerative colitis, due to its anti-inflammatory properties as a PPAR ligand.
Rosiglitazone has been investigated as an agent that may expedite body fat redistribution into a more feminine shape in trans women who have had little fat redistribution from hormone replacement therapy, due to thiazolidinediones' effects on body fat metabolism.
Synthesis
References
External links
3β-Hydroxysteroid dehydrogenase inhibitors
CYP17A1 inhibitors
Hepatotoxins
Thiazolidinediones
2-Aminopyridines
Phenol ethers
Withdrawn drugs
Drugs developed by GSK plc
Ethanolamines
Tertiary amines | Rosiglitazone | [
"Chemistry"
] | 5,181 | [
"Drug safety",
"Withdrawn drugs"
] |
604,052 | https://en.wikipedia.org/wiki/Schinzel%27s%20hypothesis%20H | In mathematics, Schinzel's hypothesis H is one of the most famous open problems in the topic of number theory. It is a very broad generalization of long-standing open conjectures such as the twin prime conjecture. The hypothesis is named after Andrzej Schinzel.
Statement
The hypothesis claims that for every finite collection f1(x), f2(x), ..., fk(x) of nonconstant irreducible polynomials over the integers with positive leading coefficients, one of the following conditions holds:
There are infinitely many positive integers n such that all of f1(n), f2(n), ..., fk(n) are simultaneously prime numbers, or
There is an integer m > 1 (called a "fixed divisor"), which depends on the polynomials, which always divides the product f1(n)f2(n)...fk(n). (Or, equivalently: There exists a prime p such that for every n there is an i such that p divides fi(n).)
The second condition is satisfied by sets such as f1(x) = x, f2(x) = x + 1, since x(x + 1) is always divisible by 2. It is easy to see that this condition prevents the first condition from being true. Schinzel's hypothesis essentially claims that condition 2 is the only way condition 1 can fail to hold.
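To spell out the reasoning behind that example (our elaboration, not from the source; the pair x, x + 1 is just an illustrative choice): one of any two consecutive integers is even, so for every positive integer n

2 | n(n + 1),

hence at least one of n and n + 1 is even. An even number greater than 2 is never prime, so apart from the single pair (2, 3) the two values are never simultaneously prime, and condition 1 must fail.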
No effective technique is known for determining whether the first condition holds for a given set of polynomials, but the second one is straightforward to check: let Q(x) = f1(x)f2(x)...fk(x) and compute the greatest common divisor of successive values of Q(x). One can see by extrapolating with finite differences that this divisor will also divide all other values of Q(x) too.
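As an illustration, here is a minimal Python sketch of this check (our own sketch, not from the source; the polynomials are passed as ordinary Python functions, and 50 samples suffice here because 50 exceeds the degree of each product):

    from math import gcd
    from functools import reduce

    def fixed_divisor(polys, samples=50):
        # Q(n) is the product f1(n) * ... * fk(n); by the finite-difference
        # argument sketched above, the gcd of Q(1), ..., Q(samples) divides
        # every value of Q once samples exceeds the degree of Q, so a result
        # greater than 1 exhibits a fixed divisor (condition 2).
        def Q(n):
            return reduce(lambda acc, f: acc * f(n), polys, 1)
        return reduce(gcd, (Q(n) for n in range(1, samples + 1)))

    # x and x + 1: the product n(n + 1) is always even, so condition 2 holds.
    print(fixed_divisor([lambda x: x, lambda x: x + 1]))  # 2
    # x and x + 2 (the twin-prime pair): no fixed divisor.
    print(fixed_divisor([lambda x: x, lambda x: x + 2]))  # 1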
Schinzel's hypothesis builds on the earlier Bunyakovsky conjecture, for a single polynomial, and on the Hardy–Littlewood conjectures and Dickson's conjecture for multiple linear polynomials. It is in turn extended by the Bateman–Horn conjecture.
Examples
As a simple example with k = 1, the polynomial x^2 + 1 has no fixed prime divisor. We therefore expect that there are infinitely many primes of the form n^2 + 1. This has not been proved, though. It was one of Landau's conjectures and goes back to Euler, who observed in a letter to Goldbach in 1752 that n^2 + 1 is often prime for n up to 1500.
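A few lines of Python (our own check, assuming SymPy is available for its isprime function) reproduce the flavour of Euler's observation by counting how many n up to 1500 make n^2 + 1 prime:

    from sympy import isprime

    hits = [n for n in range(1, 1501) if isprime(n**2 + 1)]
    print(len(hits), hits[:10])  # the count, and the first few qualifying n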
As another example, take k = 2 with f1(x) = x and f2(x) = x + 2. The hypothesis then implies the existence of infinitely many twin primes, a basic and notorious open problem.
Variants
As proved by Schinzel and Sierpiński
it is equivalent to the following: if condition 2 does not hold, then for any choice of irreducible integral polynomials f1(x), ..., fk(x) with positive leading coefficients there exists at least one positive integer n such that all of f1(n), ..., fk(n) are simultaneously prime.
If the leading coefficients were negative, we could expect negative prime values; this is a harmless restriction.
There is probably no real reason to restrict to polynomials with integer coefficients, rather than the wider class of integer-valued polynomials (such as x(x + 1)/2, which takes integer values for all integers x even though its coefficients are not integers).
Previous results
The special case of a single linear polynomial is Dirichlet's theorem on arithmetic progressions, one of the most important results of number theory. In fact, this special case is the only known instance of Schinzel's Hypothesis H. We do not know the hypothesis to hold for any given polynomial of degree greater than 1, nor for any system of more than one polynomial.
Almost prime approximations to Schinzel's Hypothesis have been attempted by many mathematicians; among them, most notably,
Chen's theorem states that there exist infinitely many prime numbers p such that p + 2 is either a prime or a semiprime, and Iwaniec proved that there exist infinitely many integers n for which n^2 + 1 is either a prime or a semiprime. Skorobogatov and Sofos have proved that almost all polynomials of any fixed degree satisfy Schinzel's hypothesis H.
Let f(x) be an integer-valued polynomial with common factor c of its values, and let g(x) = f(x)/c. Then g(x) is a primitive integer-valued polynomial. Ronald Joseph Miech proved, using the Brun sieve, that g(n) has at most a bounded number of prime factors infinitely often as n runs over the positive integers; the bound does not depend on n but only on the degree of the polynomial f. This result is known as Miech's theorem.
Given a hypothetical probabilistic density sieve, Miech's theorem could be used to prove Schinzel's hypothesis H in all cases by mathematical induction.
Prospects and applications
The hypothesis is probably not accessible with current methods in analytic number theory, but is now quite often used to prove conditional results, for example in Diophantine geometry. This connection is due to Jean-Louis Colliot-Thélène and Jean-Jacques Sansuc. For further explanations and references on this connection
see the notes of Swinnerton-Dyer.
Since the conjectured result is so strong in nature, it is possible that it could be shown to be too much to expect.
Extension to include the Goldbach conjecture
The hypothesis does not cover Goldbach's conjecture, but a closely related version (hypothesis HN) does. That requires an extra polynomial F(x), which in the Goldbach problem would just be x, for which
N − F(n)
is required to be a prime number, also. This is cited in Halberstam and Richert, Sieve Methods. The conjecture here takes the form of a statement when N is sufficiently large, and subject to the condition that the product f1(n)f2(n)...fk(n)(N − F(n)) has no fixed divisor > 1. Then we should be able to require the existence of n such that N − F(n) is both positive and a prime number; and with all the fi(n) prime numbers.
Not many cases of these conjectures are known; but there is a detailed quantitative theory (see Bateman–Horn conjecture).
Local analysis
The condition of having no fixed prime divisor is purely local (depending just on primes, that is). In other words, a finite set of irreducible integer-valued polynomials
with no local obstruction to taking infinitely many prime values is conjectured to take infinitely many prime values.
An analogue that fails
The analogous conjecture with the integers replaced by the one-variable polynomial ring over a finite field is false. For example, Swan noted in 1962 (for reasons unrelated to Hypothesis H) that the polynomial x^8 + u^3 over the ring F2[u] is irreducible and has no fixed prime polynomial divisor (after all, its values at x = 0 and x = 1 are relatively prime polynomials) but all of its values as x runs over F2[u] are composite. Similar examples can be found with F2 replaced by any finite field; the obstructions in a proper formulation of Hypothesis H over F[u], where F is a finite field, are no longer just local but a new global obstruction occurs with no classical parallel, assuming hypothesis H is in fact correct.
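A quick empirical spot-check (our own script, assuming SymPy for polynomial factorization over GF(2); it substitutes a few small elements of F2[u] for x and factors the resulting value):

    from sympy import symbols, factor_list

    u = symbols('u')
    # substitute a few elements of F2[u] for x in x^8 + u^3
    for a in [0, 1, u, u + 1, u**2 + u + 1]:
        value = a**8 + u**3
        # factoring over GF(2): each value splits, i.e. is composite
        print(a, '->', factor_list(value, modulus=2))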
References
External links
Analytic number theory
Conjectures about prime numbers
Unsolved problems in number theory | Schinzel's hypothesis H | [
"Mathematics"
] | 1,364 | [
"Analytic number theory",
"Unsolved problems in mathematics",
"Unsolved problems in number theory",
"Mathematical problems",
"Number theory"
] |
604,063 | https://en.wikipedia.org/wiki/Verbal%20Behavior | Verbal Behavior is a 1957 book by psychologist B. F. Skinner, in which he describes what he calls verbal behavior, or what was traditionally called linguistics. Skinner's work describes the controlling elements of verbal behavior with terminology invented for the analysis - echoics, mands, tacts, autoclitics and others - as well as carefully defined uses of ordinary terms such as audience.
Origins
The origin of Verbal Behavior was an outgrowth of a series of lectures first presented at the University of Minnesota in the early 1940s and developed further in his summer lectures at Columbia and William James lectures at Harvard in the decade before the book's publication.
Research
Skinner's analysis of verbal behavior drew heavily on methods of literary analysis. This tradition has continued. The book Verbal Behavior is almost entirely theoretical, involving little experimental research in the work itself. Many research papers and applied extensions based on Verbal Behavior have been done since its publication.
Functional analysis
Skinner's Verbal Behavior also introduced the autoclitic and six elementary operants: mand, tact, audience relation, echoic, textual, and intraverbal. For Skinner, the proper object of study is behavior itself, analyzed without reference to hypothetical (mental) structures, but rather with reference to the functional relationships of the behavior in the environment in which it occurs. This analysis extends Ernst Mach's pragmatic inductive position in physics, and extends even further a disinclination towards hypothesis-making and testing. Verbal Behavior is divided into 5 parts with 19 chapters. The first chapter sets the stage for this work, a functional analysis of verbal behavior. Skinner presents verbal behavior as a function of controlling consequences and stimuli, not as the product of a special inherent capacity. Neither does he ask us to be satisfied with simply describing the structure, or patterns, of behavior. Skinner deals with some alternative, traditional formulations, and moves on to his own functional position.
General problems
In ascertaining the strength of a response, Skinner suggests some criteria for strength (probability): emission, energy-level, speed, and repetition. He notes that these are all very limited means for inferring the strength of a response, as they do not always vary together and may come under the control of other factors. Emission is a yes/no measure; however, the other three (energy-level, speed, repetition) comprise possible indications of relative strength.
Emission – If a response is emitted it may tend to be interpreted as having some strength. Unusual or difficult conditions would tend to lend evidence to the inference of strength. Under typical conditions it becomes a less compelling basis for inferring strength. This is an inference that is either there or not, and has no gradation of value.
Energy-level – Unlike emission as a basis for inference, energy-level (response magnitude) provides a basis for inferring response strength across a wide range of values. Energy level is a basis from which we can infer a high tendency to respond. An energetic and strong "Water!" forms the basis for inferring the strength of the response, as opposed to a weak, brief "Water".
Speed – Speed is the speed of the response itself, or the latency from the time in which it could have occurred to the time in which it occurs. A response given quickly when prompted forms the basis for inferring a high strength.
Repetition – "Water! Water! Water!" may be emitted and used as an indication of relative strength compared to the speedy and/or energetic emission of "Water!". In this way repetition can be used as a way to infer strength.
Mands
Chapter Three of Skinner's work Verbal Behavior discusses a functional relationship called the mand. Mand is verbal behavior under functional control of satiation or deprivation (that is, motivating operations) followed by characteristic reinforcement often specified by the response. A mand is typically a demand, command, or request. The mand is often said to "describe its own reinforcer", although this is not always the case, especially as Skinner's definition of verbal behavior does not require that mands be vocal. A loud knock at the door may be a mand ("open the door"), and a servant may be called by a hand clap as much as a child might "ask for milk".
A study on mands by Lamarre & Holland (1985) demonstrated the role of motivating operations. The authors contrived motivating operations for objects by training behavior chains that could not be completed without certain objects. The participants learned to mand for these missing objects, which they had previously only been able to tact.
Behavior under the control of verbal stimuli
Textual
In Chapter Four Skinner notes forms of control by verbal stimuli. One form is textual behavior which refers to the type of behavior we might typically call reading or writing. A vocal response is controlled by a verbal stimulus that is not heard. There are two different modalities involved ("reading"). If they are the same they become "copying text" (see Jack Michael on copying text), if they are heard, then written, it becomes "taking dictation", and so on.
Echoic
Skinner was one of the first to seriously consider the role of imitation in language learning. He introduced this idea into his book Verbal Behavior with the concept of the echoic: a behavior under the functional control of a verbal stimulus. The verbal response and the verbal stimulus share what is called point-to-point correspondence (a formal similarity). The speaker repeats what is said. In echoic behavior, the stimulus is auditory and the response is vocal. It is often seen in early shaping behavior. For example, in learning a new language, a teacher might say "parsimonious" and then say "can you say it?" to induce an echoic response. Winokur (1978) is one example of research about echoic relations.
Tacts
Chapter Five of Verbal Behavior discusses the tact in depth. A tact is said to "make contact with" the world, and refers to behavior that is under functional control of a non-verbal stimulus and generalized conditioned reinforcement. The controlling stimulus is nonverbal, "the whole of the physical environment". In linguistic terms, the tact might be regarded as "expressive labelling". The tact is the form of verbal behaviour most useful to listeners, as it extends the listener's contact with the environment; the mand, in contrast, is the form most useful to the speaker, as it allows the speaker to contact tangible reinforcement.
Tacts can undergo many extensions: generic, metaphoric, metonymical, solecistic, nomination, and "guessing". It can also be involved in abstraction. Lowe, Horne, Harris & Randle (2002) would be one example of recent work in tacts.
Intraverbal
Intraverbals are verbal behavior under the control of other verbal behavior. Intraverbals are often studied by the use of classic association techniques.
Audiences
Audience control is developed through long histories of reinforcement and punishment. Skinner's three-term contingency can be used to analyze how this works: the first term, the antecedent, refers to the audience, in whose presence the verbal response (the second term) occurs. The consequences of the response are the third term, and whether or not those consequences strengthen or weaken the response will affect whether that response will occur again in the presence of that audience. Through this process, audience control, or the probability that certain responses will occur in the presence of certain audiences, develops. Skinner notes that while audience control is developed due to histories with certain audiences, we do not have to have a long history with every listener in order to effectively engage in verbal behavior in their presence (p. 176). We can respond to new audiences (new stimuli) as we would to similar audiences with whom we have a history.
Negative audiences
An audience that has punished certain kinds of verbal behavior is called a negative audience (p. 178): in the presence of this audience, the punished verbal behavior is less likely to occur. Skinner gives examples of adults punishing certain verbal behavior of children, and a king punishing the verbal behavior of his subjects.
Summary of verbal operants
The following table summarizes the new verbal operants in the analysis of verbal behavior.
Verbal operants as a unit of analysis
Skinner notes his categories of verbal behavior: mand, textual, intraverbal, tact, audience relations, and notes how behavior might be classified. He notes that form alone is not sufficient (he uses the example of "fire!" having multiple possible relationships depending on the circumstances). Classification depends on knowing the circumstances under which the behavior is emitted. Skinner then notes that the "same response" may be emitted under different operant conditions. Skinner states:
That is, classification alone does little to further the analysis—the functional relations controlling the operants outlined must be analyzed consistent with the general approach of a scientific analysis of behavior.
Multiple causation
Skinner notes in this chapter how any given response is likely to be the result of multiple variables. Secondly, that any given variable usually affects multiple responses. The issue of multiple audiences is also addressed, as each audience is, as already noted, an occasion for strong and successful responding. Combining audiences produces differing tendencies to respond.
Supplementary stimulation
Supplementary stimulation is a discussion to practical matters of controlling verbal behavior given the context of material which has been presented thus far. Issues of multiple control, and involving many of the elementary operants stated in previous chapters are discussed.
New combinations of fragmentary responses
A special case of where multiple causation comes into play creating new verbal forms is in what Skinner describes as fragmentary responses. Such combinations are typically vocal, although this may be due to different conditions of self-editing rather than any special property. Such mutations may be "nonsense" and may not further the verbal interchange in which it occurs. Freudian slips may be one special case of fragmentary responses which tend to be given reinforcement and may discourage self-editing. This phenomenon appears to be more common in children, and in adults learning a second language. Fatigue, illness and insobriety may tend to produce fragmentary responding.
Autoclitics
An autoclitic is a form of verbal behavior which modifies the functions of other forms of verbal behavior. For example, "I think it is raining" possesses the autoclitic "I think" which moderates the strength of the statement "it is raining". An example of research that involved autoclitics would be Lodhi & Greer (1989).
Self-strengthening
Here Skinner draws a parallel to his position on self-control and notes: "A person controls his own behavior, verbal or otherwise, as he controls the behavior of others." Appropriate verbal behavior may be weak, as in forgetting a name, and in need of strengthening. It may have been inadequately learned, as in a foreign language. Repeating a formula, reciting a poem, and so on. The techniques are manipulating stimuli, changing the level of editing, the mechanical production of verbal behavior, changing motivational and emotional variables, incubation, and so on. Skinner gives an example of the use of some of these techniques provided by an author.
Logical and scientific
The special audience in this case is one concerned with "successful action". Special methods of stimulus control are encouraged that will allow for maximum effectiveness. Skinner notes that "graphs, models, tables" are forms of text that allow for this kind of development. The logical and scientific community also sharpens responses to assure accuracy and avoid distortion. Little progress in the area of science has been made from a verbal behavior perspective; however, suggestions of a research agenda have been laid out.
Tacting private events
Private events are events accessible to only the speaker. Public events are events that occur outside of an organism's skin that are observed by more than one individual. A headache is an example of a private event and a car accident is an example of a public event.
The tacting of private events by an organism is shaped by the verbal community who differentially reinforce a variety of behaviors and responses to the private events that occur (Catania, 2007, p. 9). For example, if a child verbally states, "a circle" when a circle is in the immediate environment, it may be a tact. If a child verbally states, "I have a toothache", she/he may be tacting a private event, whereas the stimulus is present to the speaker, but not the rest of the verbal community.
The verbal community shapes the original development and the maintenance or discontinuation of the tacts for private events (Catania, 2007, p. 232). An organism responds similarly to both private stimuli and public stimuli (Skinner, 1957, p. 130). However, it is harder for the verbal community to shape the verbal behavior associated with private events (Catania, 2007, p. 403). It may be more difficult to shape private events, but there are critical things that occur within an organism's skin that should not be excluded from our understanding of verbal behavior (Catania, 2007, p. 9).
Several concerns are associated with tacting private events. Skinner (1957) acknowledged two major dilemmas. First, he acknowledges our difficulty with predicting and controlling the stimuli associated with tacting private events (p. 130). Catania (2007) describes this as the unavailability of the stimulus to the members of the verbal community (p. 253). The second problem Skinner (1957) describes is our current inability to understand how the verbal behavior associated with private events is developed (p. 131).
Skinner (1957) continues to describe four potential ways a verbal community can encourage verbal behavior with no access to the stimuli of the speaker. He suggests the most frequent method is via "a common public accompaniment". An example might be that when a kid falls and starts bleeding, the caregiver tells them statements like, "you got hurt". Another method is the "collateral response" associated with the private stimulus. An example would be when a kid comes running and is crying and holding their hands over their knee, the caregiver might make a statement like, "you got hurt". The third way is when the verbal community provides reinforcement contingent on the overt behavior and the organism generalizes that to the private event that is occurring. Skinner refers to this as a "metaphorical or metonymical extension". The final method that Skinner suggests may help form our verbal behavior is when the behavior is initially at a low level and then turns into a private event (Skinner, 1957, p. 134). This notion can be summarized by understanding that the verbal behavior of private events can be shaped through the verbal community by extending the language of tacts (Catania, 2007, p. 263).
Private events are limited and should not serve as "explanations of behavior" (Skinner, 1957, p. 254). Skinner (1957) continues to caution that, "the language of private events can easily distract us from the public causes of behavior" (see functions of behavior).
Chomsky's review and replies
In 1959, Noam Chomsky published an influential critique of Verbal Behavior. Chomsky pointed out that children acquire their first language without being explicitly or overtly "taught" in a way that would be consistent with behaviorist theory (see Language acquisition and Poverty of the stimulus), and that Skinner's theories of "operants" and behavioral reinforcements are not able to account for the fact that people can speak and understand sentences that they have never heard before.
According to Frederick J. Newmeyer:
Chomsky's review has come to be regarded as one of the foundational documents of the discipline of cognitive psychology, and even after the passage of twenty-five years it is considered the most important refutation of behaviorism. Of all his writings, it was the Skinner review which contributed most to spreading his reputation beyond the small circle of professional linguists.
Chomsky's 1959 review, amongst his other work of the period, is generally thought to have been influential in the decline of behaviorism's influence within linguistics, philosophy and cognitive science. One reply to it was Kenneth MacCorquodale's 1970 paper On Chomsky's Review of Skinner's Verbal Behavior. MacCorquodale argued that Chomsky did not possess an adequate understanding of either behavioral psychology in general, or the differences between Skinner's behaviorism and other varieties. As a consequence, he argued, Chomsky made several serious errors of logic. On account of these problems, MacCorquodale maintains that the review failed to demonstrate what it has often been cited as doing, implying that those most influenced by Chomsky's paper probably already substantially agreed with him. Chomsky's review has been further argued to misrepresent the work of Skinner and others, including by taking quotes out of context. Chomsky has maintained that the review was directed at the way Skinner's variant of behavioral psychology "was being used in Quinean empiricism and naturalization of philosophy".
Current research
Current research in verbal behavior is published in The Analysis of Verbal Behavior (TAVB), and other Behavior Analytic journals such as The Journal of the Experimental Analysis of Behavior (JEAB) and the Journal of Applied Behavior Analysis (JABA). Also research is presented at poster sessions and conferences, such as at regional Behavior Analysis conventions or Association for Behavior Analysis (ABA) conventions nationally or internationally. There is also a Verbal Behavior Special Interest Group (SIG) of the Association for Behavior Analysis (ABA) which has a mailing list.
Journal of Early and Intensive Behavior Intervention and the Journal of Speech-Language Pathology and Applied Behavior Analysis both publish clinical articles on interventions based on verbal behavior.
Skinner has argued that his account of verbal behavior might have a strong evolutionary parallel. In Skinner's essay, Selection by Consequences, he argued that operant conditioning was a part of a three-level process involving genetic evolution, cultural evolution and operant conditioning. All three processes, he argued, were examples of parallel processes of selection by consequences. David L. Hull, Rodney E. Langman and Sigrid S. Glenn have developed this parallel in detail. This topic continues to be a focus for behavior analysts. Behavior analysts have been developing ideas based on Verbal Behavior for fifty years, and despite this they still have difficulty explaining generative verbal behavior.
See also
The Analysis of Verbal Behavior
Applied behavior analysis
Child development
Experimental analysis of behavior
Functional analytic psychotherapy
Jack Michael
Reinforcement
Relational frame theory
References
External links
An Introduction to Verbal Behavior Online Tutorial
Chomsky's 1959 Review of Verbal Behavior
On Chomsky's Appraisal of Skinner's Verbal Behavior: A Half Century of Misunderstanding
The Analysis of Verbal Behavior pubmed archive
abainternational.org
contextualpsychology.org
ironshrink.com
A Tutorial of B.F. Skinner's Verbal Behavior (1957)
Psychology books
Linguistics books
1957 non-fiction books
Behaviorism
Cognitive science literature
Works by B. F. Skinner
History of psychology | Verbal Behavior | [
"Biology"
] | 3,940 | [
"Behavior",
"Behaviorism"
] |
604,067 | https://en.wikipedia.org/wiki/Nemetschek | Nemetschek Group is a vendor of software for architects, engineers and the construction industry. The company develops and distributes software for planning, designing, building and managing buildings and real estate, as well as for media and entertainment.
History
20th century
The company was founded by Prof. Georg Nemetschek in 1963, and initially went by the name of Ingenieurbüro für das Bauwesen (engineering firm for the construction industry), focusing on structural design. It was one of the first companies in the industry to use computers and developed software for engineers, initially for its own requirements. In 1977, Nemetschek started distributing its program Statik 97/77 for civil engineering.
At the Hanover Fair in 1980, Nemetschek presented a software package for integrated calculation and design of standard components for solid construction. This was the first software enabling computer-aided engineering (CAE) on microcomputers, and the product remained unique on the market for many years.
In 1989, Nemetschek Programmsystem GmbH was founded and was responsible for software distribution; Georg Nemetschek's engineering firm continued to be in charge of program development. The main product, Allplan, a CAD system for architects and engineers, was launched in 1984. This allowed designers to model buildings in three dimensions. Nemetschek began to expand internationally in the 1980s. By 1996, the company had subsidiaries in eight European countries and distribution partners in nine European countries; since 1992, it has also had a development site in Bratislava, Slovakia.
The first acquisitions were made at the end of the 1990s, including the structural design program vendor Friedrich + Lochner. The company, operating as Nemetschek AG since 1994, went public in 1999 (it has been listed in the Prime Standard market segment and the TecDAX in Frankfurt ever since).
21st century
Two major company takeovers followed in 2000: the American firm Diehl Graphsoft (now Vectorworks) and Maxon Computer GmbH, with its Cinema 4D software for visualization and animation. In 2006, Nemetschek acquired Hungary's Graphisoft (for its key product ArchiCAD), and Belgium's SCIA International.
In November 2013, Nemetschek acquired the MEP software provider Data Design System (DDS). On 31 October 2014, the acquisition of Bluebeam Software, Inc. was concluded. At the end of 2015, Solibri was acquired.
Since 2016, the company has operated as Nemetschek SE. Later that year, SDS/2 was acquired. In 2017, it acquired dRofus and RISA. MCS Solutions was acquired in 2018, closely followed by the acquisitions of Axxerion B.V. and Plandatis, and the combined business was subsequently rebranded as Spacewell. Other acquisitions have been completed at a brand level (for example, Redshift Rendering Technologies, Red Giant and Pixologic were acquired by Maxon, DEXMA by Spacewell).
Since 18 September 2018, Nemetschek has been listed in the MDAX in addition to its TecDAX listing.
Among others, Nemetschek is a member of the BuildingSMART e.V. and the Deutsche Gesellschaft für Nachhaltiges Bauen (DGNB) (German Sustainable Building Council), actively advocating for open building information modeling (BIM) standards ("open BIM") in the AEC/O industry.
Business units
Since 2008, Nemetschek has acted as a holding company with four business units:
Planning & Design (Architecture and Civil Engineering)
Build & Construct
Manage & Operate
Media & Entertainment.
The holding company maintains 13 product brands, covering the whole building lifecycle, from planning to operations.
See also
Comparison of CAD editors for architecture, engineering and construction (AEC)
References
External links
Nemetschek SE website
Companies based in Munich
Software companies established in 1963
Software companies of Germany
Companies listed on the Frankfurt Stock Exchange
Building information modeling
German brands
Companies in the TecDAX
Companies in the MDAX
1963 establishments in West Germany | Nemetschek | [
"Engineering"
] | 830 | [
"Building engineering",
"Building information modeling"
] |
604,111 | https://en.wikipedia.org/wiki/Stone%27s%20representation%20theorem%20for%20Boolean%20algebras | In mathematics, Stone's representation theorem for Boolean algebras states that every Boolean algebra is isomorphic to a certain field of sets. The theorem is fundamental to the deeper understanding of Boolean algebra that emerged in the first half of the 20th century. The theorem was first proved by Marshall H. Stone. Stone was led to it by his study of the spectral theory of operators on a Hilbert space.
Stone spaces
Each Boolean algebra B has an associated topological space, denoted here S(B), called its Stone space. The points in S(B) are the ultrafilters on B, or equivalently the homomorphisms from B to the two-element Boolean algebra. The topology on S(B) is generated by a basis consisting of all sets of the form {x ∈ S(B) : b ∈ x}, where b is an element of B. These sets are also closed and so are clopen (both closed and open). This is the topology of pointwise convergence of nets of homomorphisms into the two-element Boolean algebra.
For every Boolean algebra B, S(B) is a compact totally disconnected Hausdorff space; such spaces are called Stone spaces (also profinite spaces). Conversely, given any topological space X, the collection of subsets of X that are clopen is a Boolean algebra.
Representation theorem
A simple version of Stone's representation theorem states that every Boolean algebra B is isomorphic to the algebra of clopen subsets of its Stone space S(B). The isomorphism sends an element b of B to the set of all ultrafilters that contain b. This is a clopen set because of the choice of topology on S(B) and because B is a Boolean algebra.
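For a concrete finite instance (our own brute-force sketch, not from the source), take B to be the Boolean algebra of all subsets of a two-point set. The script below enumerates the ultrafilters of B and recovers the clopen set attached to an element b:

    from itertools import chain, combinations

    def powerset(xs):
        xs = list(xs)
        return [frozenset(c) for c in
                chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

    S = [0, 1]                  # two points
    B = powerset(S)             # the Boolean algebra of all subsets of S
    top = frozenset(S)

    def is_ultrafilter(U):
        U = set(U)
        if not U or frozenset() in U:
            return False
        if any(a & b not in U for a in U for b in U):         # closed under meet
            return False
        if any(a <= b and b not in U for a in U for b in B):  # upward closed
            return False
        # maximality: exactly one of b and its complement belongs to U
        return all((b in U) != (top - b in U) for b in B)

    ultrafilters = [set(U) for U in powerset(B) if is_ultrafilter(U)]
    print(len(ultrafilters))    # 2: one ultrafilter for each point of S
    # the clopen set assigned to b = {0}: all ultrafilters containing {0}
    print([sorted(map(sorted, U)) for U in ultrafilters if frozenset({0}) in U])

In this finite case every ultrafilter is principal (generated by a single point), which is why S(B) simply recovers the two points of S with the discrete topology.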
Restating the theorem using the language of category theory, there is a duality between the category of Boolean algebras and the category of Stone spaces. This duality means that in addition to the correspondence between Boolean algebras and their Stone spaces, each homomorphism from a Boolean algebra A to a Boolean algebra B corresponds in a natural way to a continuous function from S(B) to S(A). In other words, there is a contravariant functor that gives an equivalence between the categories. This was an early example of a nontrivial duality of categories.
The theorem is a special case of Stone duality, a more general framework for dualities between topological spaces and partially ordered sets.
The proof requires either the axiom of choice or a weakened form of it. Specifically, the theorem is equivalent to the Boolean prime ideal theorem, a weakened choice principle that states that every Boolean algebra has a prime ideal.
An extension of the classical Stone duality to the category of Boolean spaces (that is, zero-dimensional locally compact Hausdorff spaces) and continuous maps (respectively, perfect maps) was obtained by G. D. Dimov (respectively, by H. P. Doctor).
See also
Citations
References
General topology
Boolean algebra
Theorems in lattice theory
Categorical logic | Stone's representation theorem for Boolean algebras | [
"Mathematics"
] | 626 | [
"Boolean algebra",
"General topology",
"Mathematical structures",
"Categorical logic",
"Mathematical logic",
"Fields of abstract algebra",
"Topology",
"Category theory"
] |
604,224 | https://en.wikipedia.org/wiki/Insulin%20receptor | The insulin receptor (IR) is a transmembrane receptor that is activated by insulin, IGF-I and IGF-II and belongs to the large class of receptor tyrosine kinases. Metabolically, the insulin receptor plays a key role in the regulation of glucose homeostasis, a functional process that under degenerate conditions may result in a range of clinical manifestations including diabetes and cancer. Insulin signalling controls access to blood glucose in body cells. When insulin falls, especially in those with high insulin sensitivity, body cells begin to have access only to lipids that do not require transport across the membrane. So, in this way, insulin is the key regulator of fat metabolism as well. Biochemically, the insulin receptor is encoded by a single gene, INSR, from which alternate splicing during transcription results in either IR-A or IR-B isoforms. Downstream post-translational events of either isoform result in the formation of a proteolytically cleaved α and β subunit, which upon combination are ultimately capable of homo- or hetero-dimerisation to produce the ≈320 kDa disulfide-linked transmembrane insulin receptor.
Structure
Initially, transcription of alternative splice variants derived from the INSR gene are translated to form one of two monomeric isomers; IR-A in which exon 11 is excluded, and IR-B in which exon 11 is included. Inclusion of exon 11 results in the addition of 12 amino acids upstream of the intrinsic furin proteolytic cleavage site.
Upon receptor dimerisation, after proteolytic cleavage into the α- and β-chains, the additional 12 amino acids remain present at the C-terminus of the α-chain (designated αCT) where they are predicted to influence receptor–ligand interaction.
Each isomeric monomer is structurally organized into 8 distinct domains: a leucine-rich repeat domain (L1, residues 1–157), a cysteine-rich region (CR, residues 158–310), an additional leucine-rich repeat domain (L2, residues 311–470), and three fibronectin type III domains: FnIII-1 (residues 471–595), FnIII-2 (residues 596–808) and FnIII-3 (residues 809–906). Additionally, an insert domain (ID, residues 638–756) resides within FnIII-2, containing the α/β furin cleavage site, from which proteolysis results in both IDα and IDβ domains. Within the β-chain, downstream of the FnIII-3 domain lies a transmembrane helix (TH) and intracellular juxtamembrane (JM) region, just upstream of the intracellular tyrosine kinase (TK) catalytic domain, responsible for subsequent intracellular signaling pathways.
Upon cleavage of the monomer to its respective α- and β-chains, receptor hetero- or homo-dimerisation is maintained covalently between chains by a single disulphide link and between monomers in the dimer by two disulphide links extending from each α-chain. The overall 3D ectodomain structure, possessing four ligand binding sites, resembles an inverted 'V', with each monomer rotated approximately 2-fold about an axis running parallel to the inverted 'V' and the L2 and FnIII-1 domains from each monomer forming the inverted 'V's apex.
Ligand binding
The insulin receptor's endogenous ligands include insulin, IGF-I and IGF-II. Using cryo-EM, structural insight into conformational changes upon insulin binding was obtained. Binding of ligand to the α-chains of the IR dimeric ectodomain shifts it from an inverted V-shape to a T-shaped conformation, and this change is propagated structurally to the transmembrane domains, which get closer, eventually leading to autophosphorylation of various tyrosine residues within the intracellular TK domain of the β-chain. These changes facilitate the recruitment of specific adapter proteins such as the insulin receptor substrate proteins (IRS) in addition to SH2-B (Src Homology 2 - B), APS and protein phosphatases, such as PTP1B, eventually promoting downstream processes involving blood glucose homeostasis.
Strictly speaking, the relationship between IR and ligand shows complex allosteric properties. This was indicated with the use of Scatchard plots, which identified that the measurement of the ratio of IR-bound ligand to unbound ligand does not follow a linear relationship with respect to changes in the concentration of IR-bound ligand, suggesting that the IR and its ligand share a relationship of cooperative binding. Furthermore, the observation that the rate of IR-ligand dissociation is accelerated upon addition of unbound ligand implies that the nature of this cooperation is negative; said differently, the initial binding of ligand to the IR inhibits further binding to its second active site - an exhibition of allosteric inhibition.
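For reference (a standard binding relation, not taken from the source; Bmax denotes the total number of binding sites and Kd the dissociation constant): for a single class of independent, identical sites the Scatchard relation is linear,

[bound] / [free] = (Bmax − [bound]) / Kd,

so plotting bound/free against bound gives a straight line of slope −1/Kd. The curvilinear, concave-up Scatchard plots observed for the insulin receptor are precisely the deviation from this line that signals negative cooperativity.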
Current models of insulin binding state that each IR monomer possesses two insulin binding sites: site 1, consisting of the L1 plus αCT domains, which binds to the 'classical' binding surface of insulin, and site 2, consisting of loops at the junction of FnIII-1 and FnIII-2, predicted to bind to the 'novel' hexamer-face binding site of insulin. As each monomer contributing to the IR ectodomain exhibits 3D 'mirrored' complementarity, the N-terminal site 1 of one monomer ultimately faces the C-terminal site 2 of the second monomer, and this is also true for each monomer's mirrored complement (the opposite side of the ectodomain structure). Current literature distinguishes the complement binding sites by designating the second monomer's site 1 and site 2 nomenclature as either site 3 and site 4 or as site 1' and site 2' respectively.
As such, these models state that each IR may bind to an insulin molecule (which has two binding surfaces) via 4 locations, being site 1, 2, (3/1') or (4/2'). As each site 1 proximally faces site 2, upon insulin binding to a specific site, 'crosslinking' via ligand between monomers is predicted to occur (i.e. as [monomer 1 Site 1 - Insulin - monomer 2 Site (4/2')] or as [monomer 1 Site 2 - Insulin - monomer 2 site (3/1')]). In accordance with current mathematical modelling of IR-insulin kinetics, there are two important consequences to the events of insulin crosslinking; 1. that by the aforementioned observation of negative cooperation between IR and its ligand that subsequent binding of ligand to the IR is reduced and 2. that the physical action of crosslinking brings the ectodomain into such a conformation that is required for intracellular tyrosine phosphorylation events to ensue (i.e. these events serve as the requirements for receptor activation and eventual maintenance of blood glucose homeostasis).
Applying cryo-EM and molecular dynamics simulations of receptor reconstituted in nanodiscs, the structure of the entire dimeric insulin receptor ectodomain with four insulin molecules bound was visualized, therefore confirming and directly showing biochemically predicted 4 binding locations.
Agonists
4548-G05
Insulin
Insulin-like growth factor 1
Mecasermin
A number of small-molecule insulin receptor agonists have been identified.
Signal transduction pathway
The insulin receptor is a type of tyrosine kinase receptor, in which the binding of an agonistic ligand triggers autophosphorylation of the tyrosine residues, with each subunit phosphorylating its partner. The addition of the phosphate groups generates a binding site for the insulin receptor substrate (IRS-1), which is subsequently activated via phosphorylation. The activated IRS-1 initiates the signal transduction pathway and binds to phosphoinositide 3-kinase (PI3K), in turn causing its activation. This then catalyses the conversion of phosphatidylinositol 4,5-bisphosphate into phosphatidylinositol 3,4,5-trisphosphate (PIP3). PIP3 acts as a secondary messenger and induces the activation of phosphatidylinositol-dependent protein kinase, which then activates several other kinases – most notably protein kinase B (PKB, also known as Akt). PKB triggers the translocation of glucose transporter 4 (GLUT4)-containing vesicles to the cell membrane, via the activation of SNARE proteins, to facilitate the diffusion of glucose into the cell. PKB also phosphorylates and inhibits glycogen synthase kinase, an enzyme that inhibits glycogen synthase. Therefore, PKB acts to start the process of glycogenesis, which ultimately reduces blood-glucose concentration.
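Purely as a reading aid (our own toy sketch; the step names paraphrase the paragraph above, and no real kinetics or branching are modelled), the ordering of the cascade can be written out as code:

    # Toy, strictly illustrative ordering of the insulin signalling cascade.
    CASCADE = [
        "insulin binds the receptor",
        "receptor autophosphorylates its tyrosine residues",
        "IRS-1 binds the phosphotyrosine sites and is phosphorylated",
        "PI3K is activated",
        "PIP2 is converted to the second messenger PIP3",
        "phosphatidylinositol-dependent protein kinase is activated",
        "PKB (Akt) is activated",
        "GLUT4 vesicles translocate to the membrane; GSK-3 is inhibited",
        "glucose uptake and glycogen synthesis increase",
    ]

    def simulate(insulin_present):
        # each step fires only if the one before it did; no ligand, no signal
        return list(CASCADE) if insulin_present else []

    for step in simulate(insulin_present=True):
        print(step)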
Function
Regulation of gene expression
The activated IRS-1 acts as a secondary messenger within the cell to stimulate the transcription of insulin-regulated genes. First, the protein Grb2 binds the P-Tyr residue of IRS-1 in its SH2 domain. Grb2 is then able to bind SOS, which in turn catalyzes the replacement of bound GDP with GTP on Ras, a G protein. This protein then begins a phosphorylation cascade, culminating in the activation of mitogen-activated protein kinase (MAPK), which enters the nucleus and phosphorylates various nuclear transcription factors (such as Elk1).
Stimulation of glycogen synthesis
Glycogen synthesis is also stimulated by the insulin receptor via IRS-1. In this case, it is the SH2 domain of PI-3 kinase (PI-3K) that binds the P-Tyr of IRS-1. Now activated, PI-3K can convert the membrane lipid phosphatidylinositol 4,5-bisphosphate (PIP2) to phosphatidylinositol 3,4,5-triphosphate (PIP3). This indirectly activates a protein kinase, PKB (Akt), via phosphorylation. PKB then phosphorylates several target proteins, including glycogen synthase kinase 3 (GSK-3). GSK-3 is responsible for phosphorylating (and thus deactivating) glycogen synthase. When GSK-3 is phosphorylated, it is deactivated, and prevented from deactivating glycogen synthase. In this roundabout manner, insulin increases glycogen synthesis.
Degradation of insulin
Once an insulin molecule has docked onto the receptor and effected its action, it may be released back into the extracellular environment or it may be degraded by the cell. Degradation normally involves endocytosis of the insulin-receptor complex followed by the action of insulin degrading enzyme. Most insulin molecules are degraded by liver cells. It has been estimated that a typical insulin molecule is finally degraded about 71 minutes after its initial release into circulation.
Immune system
Besides the metabolic function, insulin receptors are also expressed on immune cells, such as macrophages, B cells, and T cells. On T cells, the expression of insulin receptors is undetectable during the resting state but up-regulated upon T-cell receptor (TCR) activation. Indeed, insulin, when supplied exogenously, has been shown to promote T cell proliferation in vitro in animal models. Insulin receptor signalling is important for maximizing the potential effect of T cells during acute infection and inflammation.
Pathology
The main effect of activating the insulin receptor is the induction of glucose uptake. For this reason "insulin insensitivity", or a decrease in insulin receptor signaling, leads to diabetes mellitus type 2 – the cells are unable to take up glucose, and the result is hyperglycemia (an increase in circulating glucose), and all the sequelae that result from diabetes.
Patients with insulin resistance may display acanthosis nigricans.
A few patients with homozygous mutations in the INSR gene have been described, which causes Donohue syndrome or Leprechaunism. This autosomal recessive disorder results in a totally non-functional insulin receptor. These patients have low-set, often protuberant, ears, flared nostrils, thickened lips, and severe growth retardation. In most cases, the outlook for these patients is extremely poor, with death occurring within the first year of life. Other mutations of the same gene cause the less severe Rabson-Mendenhall syndrome, in which patients have characteristically abnormal teeth, hypertrophic gingiva (gums), and enlargement of the pineal gland. Both diseases present with fluctuations of the glucose level: After a meal the glucose is initially very high, and then falls rapidly to abnormally low levels. Other genetic mutations to the insulin receptor gene can cause Severe Insulin Resistance.
Interactions
Insulin receptor has been shown to interact with
ENPP1,
GRB10,
GRB7,
IRS1,
MAD2L1,
PRKCD,
PTPN11, and
SH2B1.
References
Further reading
External links
Clusters of differentiation
EC 2.7.10
Single-pass transmembrane proteins
Tyrosine kinase receptors | Insulin receptor | [
"Chemistry"
] | 2,823 | [
"Tyrosine kinase receptors",
"Signal transduction"
] |
604,238 | https://en.wikipedia.org/wiki/Data%20Protection%20Act%201998 | The Data Protection Act 1998 (c. 29) (DPA) was an act of Parliament of the United Kingdom designed to protect personal data stored on computers or in an organised paper filing system. It enacted provisions from the European Union (EU) Data Protection Directive 1995 on the protection, processing, and movement of data.
Under the 1998 DPA, individuals had legal rights to control information about themselves. Most of the Act did not apply to domestic use, such as keeping a personal address book. Anyone holding personal data for other purposes was legally obliged to comply with this Act, subject to some exemptions. The Act defined eight data protection principles to ensure that information was processed lawfully.
It was superseded by the Data Protection Act 2018 (DPA 2018) on 23 May 2018. The DPA 2018 supplements the EU General Data Protection Regulation (GDPR), which came into effect on 25 May 2018. The GDPR regulates the collection, storage, and use of personal data significantly more strictly.
Background
The 1998 act replaced the Data Protection Act 1984 (c. 35) and the Access to Personal Files Act 1987 (c. 37). Additionally, the 1998 act implemented the EU Data Protection Directive 1995.
The Privacy and Electronic Communications (EC Directive) Regulations 2003 altered the consent requirement for most electronic marketing to "positive consent" such as an opt-in box. Exemptions remain for the marketing of "similar products and services" to existing customers and enquirers, which can still be permitted on an opt-out basis.
The Jersey data protection law, the Data Protection (Jersey) Law 2005, was modelled on the United Kingdom's law.
Contents
Scope of protection
Section 1 of DPA 1998 defined "personal data" as any data that could have been used to identify a living individual. Anonymised or aggregated data was less regulated by the Act, provided the anonymisation or aggregation could not be reversed. Individuals could have been identified by various means including name and address, telephone number, or email address. The Act applied only to data which was held, or was intended to be held, on computers ("equipment operating automatically in response to instructions given for that purpose"), or held in a "relevant filing system".
In some cases, paper records could have been classified as a relevant filing system, such as an address book or a salesperson's diary used to support commercial activities.
The Freedom of Information Act 2000 modified the act for public bodies and authorities, and the Durant case modified the interpretation of the act by providing case law and precedent.
A person who had their data processed had the following rights:
under section 7, to view the data an organisation held on them, for a reasonable fee: the maximum fee was £2 for requests to credit reference agencies, £50 for health and educational requests, and £10 per individual otherwise,
under section 14, to request that incorrect information be corrected. If the company ignored the request, a court could have ordered the data to be corrected or destroyed, and in some cases compensation could have been awarded.
under section 10, to require that their data was not used in any way that potentially could have caused damage or distress.
under section 11, to require that their data was not used for direct marketing.
Data protection principles
Schedule 1 listed eight "data protection principles":
Personal data shall be processed fairly and lawfully and, in particular, shall not be processed unless:
at least one of the conditions in Schedule 2 is met, and
in the case of sensitive personal data, at least one of the conditions in Schedule 3 is also met.
Personal data shall be obtained only for one or more specified and lawful purposes, and shall not be further processed in any manner incompatible with that purpose or those purposes.
Personal data shall be adequate, relevant and not excessive in relation to the purpose or purposes for which they are processed.
Personal data shall be accurate and, where necessary, kept up to date.
Personal data processed for any purpose or purposes shall not be kept for longer than is necessary for that purpose or those purposes.
About the rights of individuals e.g. personal data shall be processed in accordance with the rights of data subjects (individuals).
Appropriate technical and organisational measures shall be taken against unauthorised or unlawful processing of personal data and against accidental loss or destruction of, or damage to, personal data.
Personal data shall not be transferred to a country or territory outside the European Economic Area unless that country or territory ensures an adequate level of protection for the rights and freedoms of data subjects in relation to the processing of personal data.
Broadly speaking, these eight principles were similar to the six principles set out in the GDPR of 2016.
Conditions relevant to the first principle
Personal data should only be processed fairly and lawfully. In order for data to be classed as 'fairly processed', at least one of these six conditions had to be applicable to that data (Schedule 2).
The data subject (the person whose data is stored) has consented ("given their permission") to the processing;
Processing is necessary for the performance of, or commencing, a contract;
Processing is required under a legal obligation (other than one stated in the contract);
Processing is necessary to protect the vital interests of the data subject;
Processing is necessary to carry out any public functions;
Processing is necessary in order to pursue the legitimate interests of the "data controller" or "third parties" (unless it could unjustifiably prejudice the interests of the data subject).
Consent
Except under the exceptions mentioned below, the individual had to consent to the collection of their personal information and its use in the purpose(s) in question. The European Data Protection Directive defined consent as “…any freely given specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed", meaning the individual could have signified agreement other than in writing. However, non-communication should not have been interpreted as consent.
Additionally, consent should have been appropriate to the age and capacity of the individual and other circumstances of the case. If an organisation "intends to continue to hold or use personal data after the relationship with the individual ends, then the consent should cover this." When consent was given, it was not assumed to last forever, though in most cases, consent lasted for as long as the personal data needed to be processed, and individuals may have been able to withdraw their consent, depending on the nature of the consent and the circumstances in which the personal information was collected and used.
The Data Protection Act also specified that sensitive personal data must have been processed according to a stricter set of conditions, in particular, any consent must have been explicit.
Exceptions
The Act was structured such that all processing of personal data was covered by the act while providing a number of exceptions in Part IV. Notable exceptions were:
Section 28 – National security. Any processing for the purpose of safeguarding national security is exempt from all the data protection principles, as well as Part II (subject access rights), Part III (notification), Part V (enforcement), and Section 55 (Unlawful obtaining of personal data).
Section 29 – Crime and taxation. Data processed for the prevention or detection of crime, the apprehension or prosecution of offenders, or the assessment or collection of taxes are exempt from the first data protection principle.
Section 36 – Domestic purposes. Processing by an individual only for the purposes of that individual's personal, family or household affairs is exempt from all the data protection principles, as well as Part II (subject access rights) and Part III (notification).
Police and court powers
The Act granted or acknowledged various police and court powers.
Section 29 – Consent of the data subject was not required when processing personal data to prevent or detect crime, to apprehend or prosecute offenders, to assess and collect taxes and duties, or to discharge a statutory function.
Section 35 – Disclosures required by law or made in connection with legal proceedings. This included disclosures made to obey court orders and other laws, and those made as part of legal proceedings.
Offences
The Act detailed a number of civil and criminal offences for which data controllers might be liable if they had failed to gain appropriate consent from a data subject. However, consent was not specifically defined in the Act and so was a common law matter.
Section 21(1) made it an offence to process personal information without registration.
Section 21(2) made it an offence to fail to comply with the notification regulations made by the Secretary of State (proposed by the Information Commissioner under section 25 of the Act).
Section 55 made the unlawful obtaining of personal data an offence; it targeted people outside the organisation (third parties), such as hackers and impersonators, who gain unauthorised access to personal data.
Section 56 made it a criminal offence to require an individual to make a Subject Access Request relating to cautions or convictions for the purposes of recruitment, continued employment, or the provision of services. This section came into force on 10 March 2015.
Complexity
The UK Data Protection Act was a large Act that had a reputation for complexity. While its basic principles for protecting privacy were sound, interpreting the Act was not always simple. Many companies, organisations, and individuals seemed very unsure of the aims, content, and principles of the Act. Some refused to provide even very basic, publicly available material, quoting the Act as a restriction. The Act also affected the way in which organisations conducted business in terms of who could be contacted for marketing purposes, not only by telephone and direct mail but also electronically. This led to the development of permission-based marketing strategies.
Definition of personal data
The definition of personal data was data relating to a living individual who could be identified
from that data; or
from that data plus other information that was in the possession, or likely to come into the possession, of the data controller.
Sensitive personal data concerned the subject's race, ethnicity, politics, religion, trade union status, health, sexual history, or criminal record.
Subject access requests
The Information Commissioner's Office website stated regarding subject access requests: "You have the right to find out if an organisation is using or storing your personal data. This is called the right of access. You exercise this right by asking for a copy of the data, which is commonly known as making a 'subject access request.'"
Before the General Data Protection Regulation (GDPR) came into force on 25 May 2018, organisations could charge a specified fee, of up to £10 for most requests, for responding to a SAR. Following GDPR: "A copy of your personal data should be provided free. An organisation may charge for additional copies. It can only charge a fee if it thinks the request is 'manifestly unfounded or excessive'. If so, it may ask for a reasonable fee for administrative costs associated with the request."
Information Commissioner
Compliance with the Act was regulated and enforced by an independent authority, the Information Commissioner's Office, which maintained guidance relating to the Act.
EU’s Article 29 Working Party
In January 2017, the Information Commissioner's Office invited public comments on the EU's Article 29 Working Party's proposed changes to data protection law and the anticipated introduction of extensions to the interpretation of the Act, the Guide to the General Data Protection Regulation.
See also
Data Protection Act, 2012 (Ghana)
Computer Misuse Act 1990
Data privacy
Data Protection Directive (EU)
Freedom of Information Act 2000
Gaskin v United Kingdom
List of UK government data losses
Privacy and Electronic Communications (EC Directive) Regulations 2003
General Data Protection Regulation – a 2016 EU regulation on data protection
Smith v Lloyds TSB Bank plc
References
External links
Information Commissioner's Office
The Department for Constitutional Affairs
Council of Europe – ETS no. 108 – Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (1981) – basis for Data Protection Act 1984
Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data – basis for Data Protection Act 1998
UK legislation
Data laws of the United Kingdom
Data protection
Information privacy
United Kingdom Acts of Parliament 1998 | Data Protection Act 1998 | [
"Engineering"
] | 2,463 | [
"Cybersecurity engineering",
"Information privacy"
] |
604,268 | https://en.wikipedia.org/wiki/Multi-index%20notation | Multi-index notation is a mathematical notation that simplifies formulas used in multivariable calculus, partial differential equations and the theory of distributions, by generalising the concept of an integer index to an ordered tuple of indices.
Definition and basic properties
An n-dimensional multi-index is an $n$-tuple
$$\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_n)$$
of non-negative integers (i.e. an element of the $n$-dimensional set of natural numbers, denoted $\mathbb{N}_0^n$).
For multi-indices $\alpha, \beta \in \mathbb{N}_0^n$ and $x = (x_1, x_2, \ldots, x_n) \in \mathbb{R}^n$, one defines:
Componentwise sum and difference
$$\alpha \pm \beta = (\alpha_1 \pm \beta_1,\ \alpha_2 \pm \beta_2, \ldots, \alpha_n \pm \beta_n)$$
Partial order
$$\alpha \le \beta \quad \Longleftrightarrow \quad \alpha_i \le \beta_i \quad \text{for all } i \in \{1, \ldots, n\}$$
Sum of components (absolute value)
$$|\alpha| = \alpha_1 + \alpha_2 + \cdots + \alpha_n$$
Factorial
$$\alpha! = \alpha_1! \cdot \alpha_2! \cdots \alpha_n!$$
Binomial coefficient
$$\binom{\alpha}{\beta} = \binom{\alpha_1}{\beta_1} \binom{\alpha_2}{\beta_2} \cdots \binom{\alpha_n}{\beta_n}$$
Multinomial coefficient
$$\binom{k}{\alpha} = \frac{k!}{\alpha_1! \, \alpha_2! \cdots \alpha_n!} = \frac{k!}{\alpha!}$$
where $k := |\alpha| \in \mathbb{N}_0$.
Power
$$x^\alpha = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n}.$$
Higher-order partial derivative
$$\partial^\alpha = \partial_1^{\alpha_1} \partial_2^{\alpha_2} \cdots \partial_n^{\alpha_n},$$
where $\partial_i^{\alpha_i} := \partial^{\alpha_i} / \partial x_i^{\alpha_i}$ (see also 4-gradient). Sometimes the notation $D^\alpha = \partial^\alpha$ is also used.
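As a concrete illustration of these definitions, take $n = 3$ and $\alpha = (2, 0, 1)$. Then
$$|\alpha| = 3, \qquad \alpha! = 2! \cdot 0! \cdot 1! = 2, \qquad x^\alpha = x_1^2 x_3, \qquad \partial^\alpha = \frac{\partial^3}{\partial x_1^2 \, \partial x_3}.$$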
Some applications
The multi-index notation allows the extension of many formulae from elementary calculus to the corresponding multi-variable case. Below are some examples. In all the following, $x, y, h \in \mathbb{C}^n$ (or $\mathbb{R}^n$), $\alpha, \nu \in \mathbb{N}_0^n$, and $f, g, a_\alpha \colon \mathbb{C}^n \to \mathbb{C}$ (or $\mathbb{R}^n \to \mathbb{R}$).
Multinomial theorem
$$\left( \sum_{i=1}^n x_i \right)^k = \sum_{|\alpha| = k} \binom{k}{\alpha} \, x^\alpha$$
Multi-binomial theorem
$$(x + y)^\alpha = \sum_{\nu \le \alpha} \binom{\alpha}{\nu} \, x^\nu y^{\alpha - \nu}$$
Note that, since $x + y$ is a vector and $\alpha$ is a multi-index, the expression on the left is short for $(x_1 + y_1)^{\alpha_1} \cdots (x_n + y_n)^{\alpha_n}$.
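For instance, with $n = 2$ and $\alpha = (1, 1)$, the sum runs over the four multi-indices $\nu \le (1, 1)$, every binomial coefficient equals $1$, and the formula reduces to the familiar expansion
$$(x + y)^{(1,1)} = (x_1 + y_1)(x_2 + y_2) = x_1 x_2 + x_1 y_2 + y_1 x_2 + y_1 y_2.$$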
Leibniz formula
For smooth functions $f$ and $g$,
$$\partial^\alpha (fg) = \sum_{\nu \le \alpha} \binom{\alpha}{\nu} \, (\partial^\nu f)(\partial^{\alpha - \nu} g).$$
Taylor series
For an analytic function $f$ in $n$ variables one has
$$f(x + h) = \sum_{\alpha \in \mathbb{N}_0^n} \frac{\partial^\alpha f(x)}{\alpha!} \, h^\alpha.$$
In fact, for a smooth enough function, we have the similar Taylor expansion
$$f(x + h) = \sum_{|\alpha| \le N} \frac{\partial^\alpha f(x)}{\alpha!} \, h^\alpha + R_N(x, h),$$
where the last term (the remainder) depends on the exact version of Taylor's formula. For instance, for the Cauchy formula (with integral remainder), one gets
$$R_N(x, h) = (N + 1) \sum_{|\alpha| = N + 1} \frac{h^\alpha}{\alpha!} \int_0^1 (1 - t)^N \, \partial^\alpha f(x + t h) \, dt.$$
General linear partial differential operator
A formal linear $N$-th order partial differential operator in $n$ variables is written as
$$P(\partial) = \sum_{|\alpha| \le N} a_\alpha(x) \, \partial^\alpha.$$
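For example, the Laplacian $\Delta = \sum_{i=1}^n \partial_i^2$ is the special case obtained by taking $N = 2$, with $a_\alpha = 1$ whenever $\alpha = 2e_i$ for some standard basis vector $e_i$, and $a_\alpha = 0$ otherwise.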
Integration by parts
For smooth functions $u$ and $v$ with compact support in a bounded domain $\Omega \subset \mathbb{R}^n$ one has
$$\int_\Omega u \, (\partial^\alpha v) \, dx = (-1)^{|\alpha|} \int_\Omega (\partial^\alpha u) \, v \, dx.$$
This formula is used for the definition of distributions and weak derivatives.
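With $n = 1$ and $\alpha = (1)$, this is the familiar one-dimensional rule $\int_\Omega u v' \, dx = -\int_\Omega u' v \, dx$; the boundary term vanishes because $u$ and $v$ have compact support.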
An example theorem
If $\alpha, \beta \in \mathbb{N}_0^n$ are multi-indices and $x = (x_1, \ldots, x_n)$, then
$$\partial^\alpha x^\beta = \begin{cases} \dfrac{\beta!}{(\beta - \alpha)!} \, x^{\beta - \alpha} & \text{if } \alpha \le \beta, \\ 0 & \text{otherwise.} \end{cases}$$
Proof
The proof follows from the power rule for the ordinary derivative; if $\alpha$ and $\beta$ are in $\{0, 1, 2, \ldots\}$, then
$$\frac{d^\alpha}{dx^\alpha} x^\beta = \begin{cases} \dfrac{\beta!}{(\beta - \alpha)!} \, x^{\beta - \alpha} & \text{if } \alpha \le \beta, \\ 0 & \text{otherwise.} \end{cases} \qquad (1)$$
Suppose $\alpha = (\alpha_1, \ldots, \alpha_n)$, $\beta = (\beta_1, \ldots, \beta_n)$, and $x = (x_1, \ldots, x_n)$. Then we have that
$$\partial^\alpha x^\beta = \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}} \, x_1^{\beta_1} \cdots x_n^{\beta_n} = \prod_{i=1}^n \frac{\partial^{\alpha_i}}{\partial x_i^{\alpha_i}} \, x_i^{\beta_i}.$$
For each $i$ in $\{1, \ldots, n\}$, the function $x_i^{\beta_i}$ only depends on $x_i$. In the above, each partial differentiation $\partial / \partial x_i$ therefore reduces to the corresponding ordinary differentiation $d / dx_i$. Hence, from equation (1), it follows that $\partial^\alpha x^\beta$ vanishes if $\alpha_i > \beta_i$ for at least one $i$ in $\{1, \ldots, n\}$. If this is not the case, i.e., if $\alpha \le \beta$ as multi-indices, then
$$\frac{d^{\alpha_i}}{dx_i^{\alpha_i}} \, x_i^{\beta_i} = \frac{\beta_i!}{(\beta_i - \alpha_i)!} \, x_i^{\beta_i - \alpha_i}$$
for each $i$ and the theorem follows. Q.E.D.
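The theorem also lends itself to a quick symbolic check. The following is a minimal sketch using the SymPy library (the chosen multi-indices and variable names are arbitrary, picked only for illustration), verifying the identity for $n = 2$, $\alpha = (1, 2)$, $\beta = (3, 4)$:

```python
# Check the example theorem with SymPy: compare the mixed partial
# derivative of a monomial against the closed-form coefficient.
from sympy import symbols, diff, factorial, simplify

x1, x2 = symbols('x1 x2')
alpha = (1, 2)   # multi-index of the derivative
beta = (3, 4)    # multi-index of the monomial

monomial = x1**beta[0] * x2**beta[1]

# Left-hand side: apply the partial derivatives componentwise.
lhs = diff(monomial, x1, alpha[0], x2, alpha[1])

# Right-hand side: (beta! / (beta - alpha)!) * x**(beta - alpha).
coeff = (factorial(beta[0]) / factorial(beta[0] - alpha[0])) * \
        (factorial(beta[1]) / factorial(beta[1] - alpha[1]))
rhs = coeff * x1**(beta[0] - alpha[0]) * x2**(beta[1] - alpha[1])

assert simplify(lhs - rhs) == 0   # both sides equal 36*x1**2*x2**2
```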
See also
Einstein notation
Index notation
Ricci calculus
References
Saint Raymond, Xavier (1991). Elementary Introduction to the Theory of Pseudodifferential Operators. Chap. 1.1. CRC Press.
Combinatorics
Mathematical notation
Articles containing proofs | Multi-index notation | [
"Mathematics"
] | 509 | [
"Articles containing proofs",
"Discrete mathematics",
"nan",
"Combinatorics"
] |
604,382 | https://en.wikipedia.org/wiki/X/Open | X/Open group (also known as the Open Group for Unix Systems and incorporated in 1987 as X/Open Company, Ltd.) was a consortium founded by several European UNIX systems manufacturers in 1984 to identify and promote open standards in the field of information technology. More specifically, the original aim was to define a single specification for operating systems derived from UNIX, to increase the interoperability of applications and reduce the cost of porting software. Its original members were Bull, ICL, Siemens, Olivetti, and Nixdorf—a group sometimes referred to as BISON. Philips and Ericsson joined in 1985, at which point the name X/Open was adopted.
The group published its specifications as X/Open Portability Guide, starting with Issue 1 in 1985, and later as X/Open CAE Specification.
In 1987, X/Open was incorporated as X/Open Company, Ltd.
By March 1988, X/Open had grown to 13 members: AT&T, Digital, Hewlett-Packard, Sun Microsystems, Unisys, NCR, Olivetti, Bull, Ericsson, Nixdorf, Philips, ICL, and Siemens.
By 1990 the group had expanded to 21 members: in addition to the original five, Philips and Nokia from Europe; AT&T, Digital, Unisys, Hewlett-Packard, IBM, NCR, Sun, Prime Computer, Apollo Computer from North America; Fujitsu, Hitachi, and NEC from Japan; plus the Open Software Foundation and Unix International.
In October 1993, a planned transfer of the UNIX trademark from Novell to X/Open was announced; it was finalized in the second quarter of 1994.
In 1994, X/Open published the Single UNIX Specification, which was drawn from XPG4 Base and other sources.
In 1996, X/Open merged with the Open Software Foundation to form The Open Group.
X/Open was also responsible for the XA protocol for heterogeneous distributed transaction processing, which was released in 1991.
X/Open Portability Guide
X/Open published its specifications under the name X/Open Portability Guide (or XPG). Based on the AT&T System V Interface Definition, the guide has a wider scope than POSIX, which is only concerned with direct operating system interfaces. The guide specifies a Common Application Environment (CAE) intended to allow portability of applications across operating systems. The primary aim was compatibility between different vendors' implementations of UNIX, though some vendors also implemented the standards on non-UNIX platforms.
Issue 1 of the guide covered basic operating system interfaces, the C language, COBOL, indexed sequential file access method (ISAM) and other parts, and was published in 1985. Issue 2 followed in 1987, and extended the coverage to include internationalization, terminal interfaces, inter-process communication, and the programming languages C, COBOL, FORTRAN, and Pascal, as well as data access interfaces for SQL and ISAM. In many cases these were profiles of existing international standards. Issue 3 (XPG3) followed in 1989, its primary focus being convergence with the POSIX operating system specifications; it added window management, the Ada language, and more. Issue 4 (XPG4) was published in July 1992. The Single UNIX Specification was based on the XPG4 standard. The XPG3 and XPG4 standards defined all aspects of the operating system, programming languages and protocols which compliant systems should have.
Multiple levels of compliance, with corresponding labels, were available depending on the scope of the guide that was covered: Base and Plus; the Component and Application labels were for software components and applications that made use of the portability guide.
Issue 1 was published as a single publication with multiple parts.
Issue 2 was published in multiple volumes:
X/Open Portability Guide Volume 1: System V Specification Commands and Utilities, 1987,
X/Open Portability Guide Volume 2: System V Specification System Calls and Libraries, 1987,
X/Open Portability Guide Volume 3: System V Specification Supplementary Definitions, 1987,
X/Open Portability Guide Volume 4: Programming Languages, 1987,
X/Open Portability Guide Volume 5: Data Management, 1987,
Issue 3 was published in multiple volumes:
X/Open Portability Guide Volume 1: XSI Commands and Utilities, 1989,
X/Open Portability Guide Volume 2: XSI System Interface and Headers, 1989,
X/Open Portability Guide Volume 3: XSI Supplementary Definitions, 1989,
X/Open Portability Guide Volume 4: Programming Languages, 1988,
X/Open Portability Guide Volume 5: Data Management, 1988,
X/Open Portability Guide Volume 6: Window Management, 1988,
X/Open Portability Guide Volume 7: Networking Services, 1988,
The XPG4 Base specification includes the following documents:
System Interfaces and Headers (XSH), Issue 4, 1992, C202
Commands and Utilities (XCU), Issue 4, 1992, C203
System Interface Definitions (XBD), Issue 4, 1992, C204
The above three documents were published not under the label X/Open Portability Guide but rather as CAE Specification.
Nonetheless, the term X/Open Portability Guide, Issue 4 sees some use in reference to the 1992 year of publication.
Further X/Open publications under the label X/Open CAE Specification rather than X/Open Portability Guide:
Distributed Transaction Processing: The XA Specification, December 1991,
Systems Management: Management Protocol Profiles (XMPP), October 1993,
X/Open DCE: Remote Procedure Call, August 1994,
System Interface Definitions, Issue 4, Version 2, September 1994,
System Interfaces and Headers, Issue 4, Version 2, September 1994,
Commands and Utilities, Issue 4, Version 2, September 1994,
Networking Services, Issue 4, September 1994,
Data Management:SQL Call Level Interface (CLI), March 1995,
File System Safe UCS Transformation Format (UTF-8), March 1995,
Distributed Transaction Processing: The TX (Transaction Demarcation) Specification, April 1995,
X.25 Programming Interface using XTI (XX25), November 1995,
Distributed Transaction Processing: The TxRPC Specification, November 1995,
Distributed Transaction Processing: The XATMI Specification, November 1995,
Distributed Transaction Processing: The XCPI-C Specification Version 2, November 1995,
X/Open Curses, Issue 4, 1995,
X/Open Curses, Issue 4, Version 2, 1996,
Data Management: Structured Query Language (SQL) Version 2, March 1996,
And more.
See also
Joint Inter-Domain Management
References
C. B. Taylor. The X/OPEN group and the common application environment. ICL Technical Journal Vol 5(4) pp. 665–679, 1987.
C. B. Taylor. X/Open - from Strength to Strength. ICL Technical Journal, Vol 7(3) pp. 565–583, 1991
C. B. Taylor. X/Open and Open Systems. X/Open Company Limited, 1992.
External links
The Open Group, opengroup.org -- resulted from merger of X/Open Company and Open Software Foundation
What is UNIX?, unix.org
X/Open Portability Guide, issue 1, 1985
History of software
Software engineering papers
Standards organisations in the United Kingdom
Technology consortia
Unix history
Unix standards | X/Open | [
"Technology"
] | 1,519 | [
"Computer standards",
"History of software",
"Unix standards",
"History of computing"
] |
604,408 | https://en.wikipedia.org/wiki/Per%20annum | Units of frequency | Per annum | [
"Mathematics"
] | 5 | [
"Quantity",
"Units of frequency",
"Units of measurement"
] |
604,486 | https://en.wikipedia.org/wiki/Environmental%20impact%20statement | An environmental impact statement (EIS), under United States environmental law, is a document required by the 1969 National Environmental Policy Act (NEPA) for certain actions "significantly affecting the quality of the human environment". An EIS is a tool for decision making. It describes the positive and negative environmental effects of a proposed action, and it usually also lists one or more alternative actions that may be chosen instead of the action described in the EIS. One of the primary authors of the act is Lynton K. Caldwell.
Preliminary versions of these documents are officially known as a draft environmental impact statement (DEIS) or draft environmental impact report (DEIR).
Purpose
The purpose of the NEPA is to promote informed decision-making by federal agencies by making "detailed information concerning significant environmental impacts" available to both agency leaders and the public. The NEPA was the first piece of legislation that created a comprehensive method to assess potential and existing environmental risks at once. It also encourages communication and cooperation between all the actors involved in environmental decisions, including government officials, private businesses, and citizens.
In particular, an EIS acts as an enforcement mechanism to ensure that the federal government adheres to the goals and policies outlined in the NEPA. An EIS should be created in a timely manner as soon as the agency is planning development or is presented with a proposal for development. The statement should use an interdisciplinary approach so that it accurately assesses both the physical and social impacts of the proposed development. In many instances an action may be deemed subject to NEPA's EIS requirement even though the action is not specifically sponsored by a federal agency. These factors may include actions that receive federal funding, federal licensing or authorization, or that are subject to federal control.
Not all federal actions require a full EIS. If it is unclear whether the action will cause a significant impact, the agency can first prepare a smaller, shorter document called an Environmental Assessment (EA). The finding of the EA determines whether an EIS is required. If the EA indicates that no significant impact is likely, then the agency can release a finding of no significant impact (FONSI) and carry on with the proposed action. Otherwise, the agency must then conduct a full-scale EIS. Most EAs result in a FONSI. A limited number of federal actions may avoid the EA and EIS requirements under NEPA if they meet the criteria for a categorical exclusion (CATEX). A CATEX is usually permitted when a course of action is identical or very similar to a past course of action and the impacts on the environment from the previous action can be assumed for the proposed action, or for building a structure within the footprint of an existing, larger facility or complex. For example, two recently completed sections of Interstate 69 in Kentucky were granted a CATEX from NEPA requirements as these portions of I-69 utilize existing freeways that required little more than minor spot improvements and a change of highway signage. Additionally, a CATEX can be issued during an emergency when time does not permit the preparation of an EA or EIS. An example of the latter is when the Federal Highway Administration issued a CATEX to construct the replacement bridge in the wake of the I-35W Mississippi River Bridge Collapse.
NEPA does not prohibit the federal government or its licensees/permittees from harming the environment, instead it requires that the prospective impacts be understood and disclosed in advance. The intent of NEPA is to help key decisionmakers and stakeholders balance the need to implement an action with its impacts on the surrounding human and natural environment, and provide opportunities for mitigating those impacts while keeping the cost and schedule for implementing the action under control. However, many activities require various federal permits to comply with other environmental legislation, such as the Clean Air Act, the Clean Water Act, Endangered Species Act and Section 4(f) of the Federal Highway Act to name a few. Similarly, many states and local jurisdictions have enacted environmental laws and ordinances, requiring additional state and local permits before the action can proceed. Obtaining these permits typically requires the lead agency to implement the Least Environmentally Damaging Practicable Alternative (LEDPA) to comply with federal, state, and local environmental laws that are ancillary to NEPA. In some instances, the result of NEPA analysis leads to abandonment or cancellation of the proposed action, particularly when the "No Action" alternative ends up being the LEDPA.
Layout
An EIS typically has four sections:
An Introduction including a statement of the Purpose and Need of the Proposed Action.
A description of the Affected Environment.
A Range of Alternatives to the proposed action. Alternatives are considered the "heart" of the EIS.
An analysis of the environmental impacts of each of the possible alternatives. This section covers topics such as:
Impacts to threatened or endangered species
Air and water quality impacts
Impacts to historic and cultural sites, particularly sites of significant importance to indigenous peoples.
Social and economic impacts to local communities, often including consideration of attributes such as impacts on the available housing stock, economic impacts to businesses, property values, public health, aesthetics and noise within the affected area
Cost and Schedule Analyses for each alternative, including costs and timeline to mitigate expected impacts, to determine if the proposed action can be completed at an acceptable cost and within a reasonable amount of time
While not required in the EIS, the following subjects may be included as part of the EIS or as separate documents based on agency policy.
Financial Plan for the proposed action identifying the sources of secured funding for the action. For example, the Federal Highway Administration has started requiring states to include a financial plan showing that funding has been secured for major highway projects before it will approve an EIS and issue a Record of Decision.
An Environmental mitigation plan is often requested by the Environmental Protection Agency (EPA) if substantial environmental impacts are expected from the preferred alternative.
Additional documentation to comply with state and local environmental policy laws and secure required federal, state, and local permits before the action can proceed.
Every EIS is required to analyze a No Action Alternative, in addition to the range of alternatives presented for study. The No Action Alternative identifies the expected environmental impacts in the future if existing conditions were left as is with no action taken by the lead agency. Analysis of the No Action Alternative is used to establish a baseline upon which to compare the proposed "Action" alternatives. Contrary to popular belief, the "No Action Alternative" does not necessarily mean that nothing will occur if that option is selected in the Record of Decision. For example, the "No Action Alternative" was selected for the I-69/Trans-Texas Corridor Tier-I Environmental Impact Statement. In that Record of Decision, the Texas Department of Transportation opted not to proceed with building its portion of I-69 as one of the Trans-Texas Corridors to be built as a new-terrain route (the Trans-Texas Corridor concept was ultimately scrapped entirely), but instead decided to proceed with converting existing US and state routes to I-69 by upgrading those roads to interstate standards.
NEPA process
The NEPA process is designed to involve the public and gather the best available information in a single place so that decision makers can be fully informed when they make their choices.
The overall process is as follows:
Proposal: In this stage, the needs and objectives of a project have been decided, but the project has not been financed.
Categorical Exclusion (CATEX): As discussed above, the government may exempt an agency from the process. The agency can then proceed with the project and skip the remaining steps.
Environmental Assessment (EA): The proposal is analyzed in addition to the local environment with the aim to reduce the negative impacts of the development on the area.
Finding of No Significant Impact (FONSI): Occurs when no significant impacts are identified in an EA. A FONSI typically allows the lead agency to proceed without having to complete an EIS.
Environmental Impact Statement
Scoping: The first meetings are held to discuss existing laws, the available information, and the research needed. The tasks are divided up and a lead group is selected. Decision makers and all those involved with the project can attend the meetings.
Notice: The public is notified that the agency is preparing an EIS. The agency also provides the public with information regarding how they can become involved in the process. The agency announces its project proposal with a notice in the Federal Register, notices in local media, and letters to citizens and groups that it knows are likely to be interested. Citizens and groups are welcome to send in comments helping the agency identify the issues it must address in the EIS (or EA).
Draft EIS (DEIS): Based on both agency expertise and issues raised by the public, the agency prepares a Draft EIS with a full description of the affected environment, a reasonable range of alternatives, and an analysis of the impacts of each alternative.
Comment: Affected individuals then have the opportunity to provide feedback through written and public hearing statements.
Final EIS (FEIS) and Proposed Action: Based on the comments on the Draft EIS, the agency writes a Final EIS, and announces its Proposed Action. The public is not invited to comment on this, but if they are still unhappy, or feel that the agency has missed a major issue, they may protest the EIS to the Director of the agency. The Director may either ask the agency to revise the EIS, or explain to the protester why their complaints are not actually taken care of.
Re-evaluation: Prepared following an approved FEIS or ROD when unforeseen changes to the proposed action or its impacts occurs, or when a substantial period of time has passed between approval of an action and the planned start of said action. Based on the significance of the changes, three outcomes may result from a re-evaluation report: (1) the action may proceed with no substantive changes to the FEIS, (2) significant impacts are expected with the change that can be adequately addressed in a Supplemental EIS (SEIS), or (3) the circumstances force a complete change in the nature and scope of the proposed action, thereby voiding the pre-existing FEIS (and ROD, if applicable), requiring the lead agency to restart the NEPA process and prepare a new EIS to encompass the changes.
Supplemental EIS (SEIS): Typically prepared after either a Final EIS or Record of Decision has been issued and new environmental impacts that were not considered in the original EIS are discovered, requiring the lead agency to re-evaluate its initial decision and consider new alternatives to avoid or mitigate the new impacts. Supplemental EISs are also prepared when the size and scope of a federal action changes, when a significant period of time has lapsed since the FEIS was completed to account for changes in the surrounding environment during that time, or when all of the proposed alternatives in an EIS are deemed to have unacceptable environmental impacts and new alternatives are proposed.
Record of Decision (ROD): Once all the protests are resolved the agency issues a Record of Decision which is its final action prior to implementation. If members of the public are still dissatisfied with the outcome, they may sue the agency in Federal court.
Often, the agencies responsible for preparing an EA or EIS do not compile the document directly, but outsource this work to private-sector consulting firms with expertise in the proposed action and its anticipated effects on the environment. Because of the intense level of detail required in analyzing the alternatives presented in an EIS or EA, such documents may take years or even decades to compile, and often comprise multiple volumes that can run to thousands or tens of thousands of pages.
To avoid potential conflicts in securing required permits and approvals after the ROD is issued, the lead agency will often coordinate with stakeholders at all levels, and resolve any conflicts to the greatest extent possible during the EIS process. Proceeding in this fashion helps avoid interagency conflicts and potential lawsuits after the lead agency reaches its decision.
Tiering
On exceptionally large projects, especially proposed highway, railroad, and utility corridors that cross long distances, the lead agency may use a two-tiered process prior to implementing the proposed action. In such cases, the Tier I EIS would analyze the potential socio-environmental impacts along a general corridor, but would not identify the exact location of where the action would occur. A Tier I ROD would be issued approving the general area where the action would be implemented. Following the Tier I ROD, the approved Tier I area is further broken down into subareas, and a Tier II EIS is then prepared for each subarea that identifies the exact location of where the proposed action will take place. The preparation of Tier II EISs for each subarea proceeds at its own pace, independent from the other subareas within the Tier I area. For example, parts of the proposed Interstate 69 extension in Indiana and Texas, as well as portions of the Interstate 11 corridor in Nevada and Arizona, are being studied through a two-tiered process.
Strengths
By requiring agencies to complete an EIS, the act encourages them to consider the environmental costs of a project and introduces new information into the decision-making process. The NEPA has increased the influence of environmental analysts and agencies in the federal government by increasing their involvement in the development process. Because an EIS requires expert skill and knowledge, agencies must hire environmental analysts. Unlike agencies who may have other priorities, analysts are often sympathetic to environmental issues. In addition, this feature introduces scientific procedures into the political process.
Limitations
The differences that exist between science and politics limit the accuracy of an EIS. Although analysts are members of the scientific community, they are affected by the political atmosphere. Analysts do not have the luxury of an unlimited time for research. They are also affected by the different motives behind the research of the EIS and by different perspectives of what constitutes a good analysis. In addition, government officials do not want to reveal an environmental problem from within their own agency.
Citizens often misunderstand the environmental assessment process. The public does not realize that the process is only meant to gather information relevant to the decision. Even if the statement predicts negative impacts of the project, decision makers can still proceed with the proposal.
See also
Natural environment
References
External links
Knowledge Mosaic's environmental blog, The Green Mien, provides a weekly round-up of recently released environmental impact statements.
Northwestern University Transportation Library has one of the world's largest collections of hard copy environmental impact statements.
Environmental science
Environmental law in the United States
Statements (law)
Statement
Environmental impact in the United States | Environmental impact statement | [
"Environmental_science"
] | 2,964 | [
"nan"
] |
604,658 | https://en.wikipedia.org/wiki/Structured%20systems%20analysis%20and%20design%20method | Structured systems analysis and design method (SSADM) is a systems approach to the analysis and design of information systems. SSADM was produced for the Central Computer and Telecommunications Agency, a UK government office concerned with the use of technology in government, from 1980 onwards.
Overview
SSADM is a waterfall method for the analysis and design of information systems. SSADM can be thought to represent a pinnacle of the rigorous document-led approach to system design, and contrasts with more contemporary agile methods such as DSDM or Scrum.
SSADM is one particular implementation and builds on the work of different schools of structured analysis and development methods, such as Peter Checkland's soft systems methodology, Larry Constantine's structured design, Edward Yourdon's Yourdon Structured Method, Michael A. Jackson's Jackson Structured Programming, and Tom DeMarco's structured analysis.
The names "Structured Systems Analysis and Design Method" and "SSADM" are registered trademarks of the Office of Government Commerce (OGC), which is an office of the United Kingdom's Treasury.
History
The principal stages of the development of the Structured Systems Analysis and Design Method were:
1980: Central Computer and Telecommunications Agency (CCTA) evaluate analysis and design methods.
1981: Consultants working for Learmonth & Burchett Management Systems, led by John Hall, chosen to develop SSADM v1.
1982: John Hall and Keith Robinson left to found Model Systems Ltd; LBMS later developed LSDM, their proprietary version.
1983: SSADM made mandatory for all new information system developments
1984: Version 2 of SSADM released
1986: Version 3 of SSADM released, adopted by NCC
1988: SSADM Certificate of Proficiency launched, SSADM promoted as ‘open’ standard
1989: Moves towards Euromethod, launch of CASE products certification scheme
1990: Version 4 launched
1993: SSADM V4 Standard and Tools Conformance Scheme
1995: SSADM V4+ announced, V4.2 launched
2000: CCTA renamed SSADM as "Business System Development". The method was repackaged into 15 modules and another 6 modules were added.
SSADM techniques
The three most important techniques that are used in SSADM are as follows:
Logical Data Modelling
The process of identifying, modelling and documenting the data requirements of the system being designed. The result is a data model containing entities (things about which a business needs to record information), attributes (facts about the entities) and relationships (associations between the entities).
Data Flow Modelling
The process of identifying, modelling and documenting how data moves around an information system. Data Flow Modeling examines processes (activities that transform data from one form to another), data stores (the holding areas for data), external entities (what sends data into a system or receives data from a system), and data flows (routes by which data can flow).
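As a rough illustration of these four element types, the following toy Python sketch models each data flow as a (source, data, target) triple. The class names and structure here are purely illustrative; SSADM itself prescribes diagrams, not code.

```python
# Toy representation of SSADM data-flow model elements: processes,
# data stores, external entities, and the data flows between them.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    name: str
    kind: str  # "process", "data_store", or "external_entity"

@dataclass
class DataFlowModel:
    # Each flow is a (source, data description, target) triple.
    flows: list = field(default_factory=list)

    def add_flow(self, source: Node, data: str, target: Node) -> None:
        self.flows.append((source, data, target))

customer = Node("Customer", "external_entity")
take_order = Node("Take order", "process")
orders = Node("Orders file", "data_store")

model = DataFlowModel()
model.add_flow(customer, "order details", take_order)
model.add_flow(take_order, "validated order", orders)

for source, data, target in model.flows:
    print(f"{source.name} --[{data}]--> {target.name}")
```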
Entity Event Modelling
A two-stranded process: Entity Behavior Modelling, identifying, modelling and documenting the events that affect each entity and the sequence (or life history) in which these events occur, and Event Modelling, designing for each event the process to coordinate entity life histories.
Stages
The SSADM method involves the application of a sequence of analysis, documentation and design tasks concerned with the following.
Stage 0 – Feasibility study
In order to determine whether or not a given project is feasible, there must be some form of investigation into the goals and implications of the project. For very small scale projects this may not be necessary at all, as the scope of the project is easily understood. In larger projects, the feasibility study may be done, but in an informal sense, either because there is no time for a formal study or because the project is a "must-have" and will have to be done one way or the other. A data flow diagram is used to describe how the current system works and to visualize the known problems.
When a feasibility study is carried out, there are four main areas of consideration:
Technical – is the project technically possible?
Financial – can the business afford to carry out the project?
Organizational – will the new system be compatible with existing practices?
Ethical – is the impact of the new system socially acceptable?
To answer these questions, the feasibility study is effectively a condensed version of a comprehensive systems analysis and design. The requirements and usages are analyzed to some extent, some business options are drawn up and even some details of the technical implementation.
The product of this stage is a formal feasibility study document. SSADM specifies the sections that the study should contain including any preliminary models that have been constructed and also details of rejected options and the reasons for their rejection.
Stage 1 – Investigation of the current environment
The developers of SSADM understood that in almost all cases there is some form of current system, even if it is entirely composed of people and paper. Through a combination of interviewing employees, circulating questionnaires, observations and existing documentation, the analyst comes to a full understanding of the system as it is at the start of the project. This serves many purposes.
Stage 2 – Business system options
Having investigated the current system, the analyst must decide on the overall design of the new system. To do this, he or she, using the outputs of the previous stage, develops a set of business system options. These are different ways in which the new system could be produced varying from doing nothing to throwing out the old system entirely and building an entirely new one. The analyst may hold a brainstorming session so that as many and various ideas as possible are generated.
The ideas are then collected to options which are presented to the user. The options consider the following:
the degree of automation
the boundary between the system and the users
the distribution of the system, for example, is it centralized to one office or spread out across several?
cost/benefit
impact of the new system
Where necessary, the option will be documented with a logical data structure and a level 1 data-flow diagram.
The users and analyst together choose a single business option. This may be one of the ones already defined or may be a synthesis of different aspects of the existing options. The output of this stage is the single selected business option together with all the outputs of the feasibility stage.
Stage 3 – Requirements specification
This is probably the most complex stage in SSADM. Using the requirements developed in stage 1 and working within the framework of the selected business option, the analyst must develop a full logical specification of what the new system must do. The specification must be free from error, ambiguity and inconsistency. By logical, we mean that the specification does not say how the system will be implemented but rather describes what the system will do.
To produce the logical specification, the analyst builds the required logical models for both the data-flow diagrams (DFDs) and the Logical Data Model (LDM), consisting of the Logical Data Structure (referred to in other methods as entity relationship diagrams) and full descriptions of the data and its relationships. These are used to produce function definitions of every function which the users will require of the system, Entity Life-Histories (ELHs) which describe all events through the life of an entity, and Effect Correspondence Diagrams (ECDs) which describe how each event interacts with all relevant entities. These are continually matched against the requirements and where necessary, the requirements are added to and completed.
The product of this stage is a complete requirements specification document which is made up of:
the updated data catalogue
the updated requirements catalogue
the processing specification which in turn is made up of
user role/function matrix
function definitions
required logical data model
entity life-histories
effect correspondence diagrams
Stage 4 – Technical system options
This stage is the first towards a physical implementation of the new system application. Like the Business System Options, in this stage a large number of options for the implementation of the new system are generated. This is narrowed down to two or three to present to the user from which the final option is chosen or synthesized.
However, the considerations are quite different being:
the hardware architectures
the software to use
the cost of the implementation
the staffing required
the physical limitations such as a space occupied by the system
the distribution including any networks which that may require
the overall format of the human computer interface
All of these aspects must also conform to any constraints imposed by the business such as available money and standardization of hardware and software.
The output of this stage is a chosen technical system option.
Stage 5 – Logical design
Though the previous level specifies details of the implementation, the outputs of this stage are implementation-independent and concentrate on the requirements for the human computer interface. The logical design specifies the main methods of interaction in terms of menu structures and command structures.
One area of activity is the definition of the user dialogues. These are the main interfaces with which the users will interact with the system. Other activities are concerned with analyzing both the effects of events in updating the system and the need to make inquiries about the data on the system. Both of these use the events, function descriptions and effect correspondence diagrams produced in stage 3 to determine precisely how to update and read data in a consistent and secure way.
The product of this stage is the logical design which is made up of:
Data catalogue
Required logical data structure
Logical process model – includes dialogues and model for the update and inquiry processes
Stage 6 – Physical design
This is the final stage where all the logical specifications of the system are converted to descriptions of the system in terms of real hardware and software. This is a very technical stage and a simple overview is presented here.
The logical data structure is converted into a physical architecture in terms of database structures. The exact structure of the functions and how they are implemented is specified. The physical data structure is optimized where necessary to meet size and performance requirements.
The product is a complete Physical Design which could tell software engineers how to build the system in specific details of hardware and software and to the appropriate standards.
References
Keith Robinson, Graham Berrisford: Object-oriented SSADM, Prentice Hall International (UK), Hemel Hempstead.
External links
What is SSADM? at webopedia.com
Introduction to Methodologies and SSADM
Case study using pragmatic SSADM
Structured Analysis Wiki
Information systems
Software design
Systems analysis | Structured systems analysis and design method | [
"Technology",
"Engineering"
] | 2,093 | [
"Information systems",
"Information technology",
"Design",
"Software design"
] |
604,698 | https://en.wikipedia.org/wiki/Hereditary%20peer | The hereditary peers form part of the peerage in the United Kingdom. As of November 2024, there are 801 hereditary peers: 30 dukes (including six royal dukes), 34 marquesses, 189 earls, 109 viscounts, and 439 barons (not counting subsidiary titles).
As a result of the Peerage Act 1963, all peers except those in the peerage of Ireland were entitled to sit in the House of Lords. Since the House of Lords Act 1999 came into force, only 92 hereditary peers, elected from among all hereditary peers, are permitted to do so, unless they are also life peers. Peers are called to the House of Lords with a writ of summons.
Not all hereditary titles are titles of the peerage. For instance, baronets and baronetesses may pass on their titles, but they are not peers. Conversely, the holder of a non-hereditary title may belong to the peerage, as with life peers. Peerages may be created by means of letters patent, but the granting of new hereditary peerages has largely dwindled; only seven hereditary peerages have been created since 1965, four of them for members of the British royal family. The most recent grant of a hereditary peerage was in 2019 for the youngest child of Elizabeth II, Prince Edward, who was created Earl of Forfar; the most recent grant of a hereditary peerage to a non-royal was in 1984 for former Prime Minister Harold Macmillan, who was created Earl of Stockton with the subsidiary title of Viscount Macmillan.
Origins
The hereditary peerage, as it now exists, combines several different English institutions with analogues from Scotland and Ireland.
English earls are an Anglo-Saxon institution. Around 1014, England was divided into shires or counties, largely to defend against the Danes; each shire was led by a local great man, called an earl; the same man could be earl of several shires. When the Normans invaded England, they continued to appoint earls, but not for all counties; the administrative head of the county became the sheriff. Earldoms began as offices, with a perquisite of a share of the legal fees in the county; they gradually became honours, with a stipend of £20 a year. Like most feudal offices, earldoms were inherited, but the kings frequently asked earls to resign or exchange earldoms. Usually there were few earls in England, and they were men of great wealth in the shire from which they held title, or an adjacent one, but it depended on circumstances: during the civil war between Stephen and the Empress Matilda, nine earls were created in three years.
William the Conqueror and his great-grandson Henry II did not make dukes; they were themselves only Dukes of Normandy or Aquitaine. But when Edward III of England declared himself King of France, he made his sons dukes, to distinguish them from other noblemen, much as royal dukes are now distinguished from other dukes. Later kings created marquesses and viscounts to make finer gradations of honour: ranks just above and just below an earl, respectively.
When Henry III or Edward I wanted money or advice from his subjects, he would order great churchmen, earls, and other great men to come to his Great Council (some of these are now considered the first parliaments); he would generally order lesser men from towns and counties to gather and pick some men to represent them. The English Order of Barons evolved from those men who were individually ordered to attend Parliament, but held no other title; the chosen representatives, on the other hand, became the House of Commons. This order, called a writ, was not originally hereditary, or even a privilege; the recipient had to come to the Great Council at his own expense, vote on taxes on himself and his neighbours, acknowledge that he was the king's tenant-in-chief (which might cost him special taxes), and risk involvement in royal politics—or a request from the king for a personal loan (benevolence). Which men were ordered to council varied from council to council; a man might be so ordered once and never again, or all his life, but his son and heir might never go.
Under Henry VI of England, in the 15th century, just before the Wars of the Roses, attendance at Parliament became more valuable. The first claim of hereditary right to a writ comes from this reign; so does the first patent, or charter declaring a man to be a baron. The five orders began to be called peers. Holders of older peerages also began to receive greater honour than peers of the same rank just created.
If a man held a peerage, his son would succeed to it; if he had no children, his brother would succeed. If he had a single daughter, his son-in-law would inherit the family lands, and usually the same peerage; more complex cases were decided depending on circumstances. Customs changed with time; earldoms were the first to be hereditary, and three different rules can be traced for the case of an earl who left no sons and several married daughters. In the 13th century, the husband of the eldest daughter inherited the earldom automatically; in the 15th century, the earldom reverted to the Crown, who might re-grant it (often to the eldest son-in-law); in the 17th century, it would not be inherited by anybody unless all but one of the daughters died and left no descendants, in which case the remaining daughter (or her heir) would inherit.
After Henry II became the Lord of Ireland, he and his successors began to imitate the English system as it was in their time. Irish earls were first created in the 13th century, and Irish parliaments began later in the same century; until Henry VIII declared himself King of Ireland, these parliaments were small bodies, representing only the Irish Pale. A writ does not create a peerage in Ireland; all Irish peerages are by patent or charter, although some early patents have been lost. After James II left England, he was King of Ireland alone for a time; three creations he ordered then are in the Irish Patent Roll, although the patents were never issued; but these are treated as valid.
The Irish peers were in a peculiar political position: because they were subjects of the King of England, but peers in a different kingdom, they could sit in the English House of Commons, and many did. In the 18th century, Irish peerages became rewards for English politicians, limited only by the concern that they might go to Dublin and interfere with the Irish Government.
Scotland evolved a similar system, differing in points of detail. The first Scottish earldoms derive from the seven mormaers, of immemorial antiquity; they were named earls by Queen Margaret. The Parliament of Scotland is as old as the English; the Scottish equivalent of baronies are called lordships of Parliament.
The Act of Union 1707, between England and Scotland, provided that future peerages should be peers of Great Britain, and the rules covering the peers should follow the English model; because there were proportionately many more Scottish peers, they chose a number of representatives to sit in the British House of Lords. The Acts of Union 1800 changed this to peers of the United Kingdom, but provided that Irish peerages could still be created; but the Irish peers were concerned that their honours would be diluted as cheap prizes, and insisted that an Irish peerage could be created only when three Irish peerages had gone extinct (until there were only a hundred Irish peers left). In the early 19th century, Irish creations were as frequent as this allowed; but only three have been created since 1863, and none since 1898. As of 2011, only 66 "only-Irish" peers remain.
Modern laws
The law applicable to a British hereditary peerage depends on which Kingdom it belongs to. Peerages of England, Great Britain, and the United Kingdom follow English law; the difference between them is that peerages of England were created before the Act of Union 1707, peerages of Great Britain between 1707 and the Union with Ireland in 1800, and peerages of the United Kingdom since 1800. Irish peerages follow the law of the Kingdom of Ireland, which closely resembles English law except in its references to the Irish Parliament and Irish officials, offices generally no longer appointed; no Irish peers have been created since 1898, and they have no part in the present governance of the United Kingdom. Scottish peerage law is generally similar to English law, but differs in innumerable points of detail, often being more similar to medieval practice.
Women are ineligible to succeed to the majority of English, Irish, and British hereditary peerages, but may inherit certain English baronies by writ and Scottish peerages in the absence of a male heir.
Ranks and titles
The ranks of the peerage in most of the United Kingdom are, in descending order of rank, duke, marquess, earl, viscount and baron; the female equivalents are duchess, marchioness, countess, viscountess and baroness respectively. Women typically do not hold hereditary titles in their own right, except for certain peerages in the peerage of Scotland. One significant change to the status quo in England was in 1532 when Henry VIII created the Marquess of Pembroke title for his soon-to-be wife, Anne Boleyn; she held this title in her own right and was therefore ennobled with the same rank as a male.
In the Scottish peerage, the lowest rank is lordship of Parliament, the male holder thereof being known as a lord of Parliament. A Scottish barony is a feudal rank, and not of the Peerage. The barony by tenure or feudal barony in England and Wales was similar to a Scottish feudal barony, in being hereditary, but is long obsolete, the last full summons of the English feudal barons to military service having occurred in 1327. The Tenures Abolition Act 1660 finally quashed any remaining doubt as to their continued status.
Peerage dignities are created by the sovereign by either writs of summons or letters patent. Under modern constitutional conventions, no peerage dignity, with the possible exception of those given to members of the royal family, is created other than on the advice of the prime minister.
Many peers hold more than one hereditary title; for example, the same individual may be a duke, a marquess, an earl, a viscount, and a baron by virtue of different peerages. If such a person is entitled to sit in the House of Lords, he still only has one vote. However, until the House of Lords Act 1999 it was possible for one of the peer's subsidiary titles to be passed to his heir before his death by means of a writ of acceleration, in which case the peer and his heir would have one vote each. Where this is not done, the heir may still use one of the father's subsidiary titles as a "courtesy title", but he is not considered a peer.
Inheritance of peerages
The mode of inheritance of an hereditary peerage is determined by the method of its creation. Titles may be created by writ of summons or by letters patent. The former is merely a summons of an individual to Parliament and does not explicitly confer a peerage; descent is always to the heirs of the body, male and female. The latter method explicitly creates a peerage and names the dignity in question. Letters patent may state the course of descent; usually, this is only to male heirs, but by a special remainder other descents can be specified. The Gender Recognition Act 2004 regulates acquired gender and provides that acquiring a new gender under the Act does not affect the descent of any peerage.
A child is deemed to be legitimate if its parents are married at the time of its birth or marry later; only legitimate children may succeed to a title, and furthermore, an English, Irish, or British (but not Scottish) peerage can only be inherited by a child born legitimate, not legitimated by a later marriage. An example of this can be seen in the film director Christopher Guest, who bypassed his older half-brother Anthony to become the 5th Baron Haden-Guest, as the 4th Baron Haden-Guest was not married to Anthony's mother at the time of his birth.
Normally, a peerage passes to the next holder on the death of the previous holder. However, Edward IV introduced a procedure known as a writ of acceleration, whereby it was possible for the eldest son of a peer holding more than one peerage to sit in the House of Lords by virtue of one of his father's subsidiary dignities.
A person who is a possible heir to a peerage is said to be "in remainder". A title becomes extinct (the opposite of extant) when all possible heirs (as provided by the letters patent) have died out; i.e., there is nobody in remainder at the death of the holder. A title becomes dormant if nobody has claimed the title, or if no claim has been satisfactorily proven. A title goes into abeyance if there is more than one person equally entitled to be the holder.
In the past, peerages were sometimes forfeit or attainted under Acts of Parliament, most often as the result of treason on the part of the holder. The blood of an attainted peer was considered "corrupted", consequently his or her descendants could not inherit the title. If all descendants of the attainted peer were to die out, however, then an heir from another branch of the family not affected by the attainder could take the title. The Forfeiture Act 1870 abolished corruption of blood; instead of losing the peerage, a peer convicted of treason would be disqualified from sitting in Parliament for the period of imprisonment.
The Titles Deprivation Act 1917 permitted the Crown to suspend peerages if their holders had fought against the United Kingdom during the First World War. Guilt was to be determined by a committee of the Privy Council; either House of Parliament could reject the committee's report within 40 days of its presentation. In 1919, King George V issued an Order in Council suspending the Dukedom of Albany (together with its subsidiary peerages, the Earldom of Clarence and the Barony of Arklow), the Dukedom of Cumberland and Teviotdale (along with the Earldom of Armagh) and the Viscountcy of Taaffe (along with the Barony of Ballymote). Under the Titles Deprivation Act, the successors to the peerages may petition the Crown for a reinstatement of the titles; so far, none of them has chosen to do so (the Taaffe and Ballymote peerages would have become extinct in 1967).
Nothing prevents a British peerage from being held by a foreign citizen (although such peers cannot sit in the House of Lords; for this purpose, the term foreign does not include Irish or Commonwealth citizens). Several descendants of George III were British peers and German subjects; the Lords Fairfax of Cameron were American citizens for several generations.
A peer may also disclaim an hereditary peerage under the Peerage Act 1963. To do so, the peer must deliver an instrument of disclaimer to the Lord Chancellor within 12 months of succeeding to the peerage, or, if under the age of 21 at the time of succession, within 12 months of becoming 21 years old. If, at the time of succession, the peer is a member of the House of Commons, then the instrument must be delivered within one month of succession; meanwhile, the peer may not sit or vote in the House of Commons. Prior to the House of Lords Act 1999, a hereditary peer could not disclaim a peerage after having applied for a writ of summons to Parliament; now, however, hereditary peers do not have the automatic right to a writ of summons to the House. Irish peerages may not be disclaimed. A peer who disclaims the peerage loses all titles, rights and privileges associated with the peerage; his wife or her husband is similarly affected. No further hereditary peerages may be conferred upon the person, but life peerages may be. The peerage remains without a holder until the death of the peer making the disclaimer, when it descends normally.
Merging in the Crown
A title held by someone who becomes monarch is said to merge in the Crown and therefore ceases to exist, because the sovereign cannot hold a dignity from himself.
The Dukedoms of Cornwall and of Rothesay, and the Earldom of Carrick, are special cases, which when not in use are said to lapse to the Crown: they are construed as existing, but held by no one, during such periods. These peerages are also special in that they are never directly inherited. The Dukedom of Cornwall was held formerly by the eldest son of the King of England, and the Dukedom of Rothesay, the Earldom of Carrick, and certain non-peerage titles (Baron of Renfrew, Lord of the Isles and Prince and Great Steward of Scotland) by the eldest son of the King of Scotland. Since those titles have been united, the dukedoms and associated subsidiary titles are held by the eldest son of the monarch. In Scotland, the title Duke of Rothesay is used for life or until accession. In England and Northern Ireland, the title Duke of Cornwall is used until the heir apparent is created Prince of Wales; at the same time as the principality is created, the duke is also created Earl of Chester. The earldom is a special case, because it is not hereditary, instead revesting or merging in the Crown if the prince succeeds to the Crown or predeceases the monarch: thus George III (then the grandson of the reigning monarch) was created Prince of Wales and Earl of Chester a month after the death of his father Frederick, Prince of Wales.
The Dukedom of Cornwall is associated with the Duchy of Cornwall; the former is a peerage dignity, while the latter is an estate held by the Duke of Cornwall. Income from the duchy goes to the Duke of Cornwall, or, when there is no duke, to the sovereign (but the money is then paid to the heir to the throne under the Sovereign Grant Act 2011).
The only other duchy in the United Kingdom is the Duchy of Lancaster, which is also an estate rather than a peerage dignity. The Dukedom of Lancaster merged in the Crown when Henry of Monmouth, Duke of Lancaster became King Henry V. Nonetheless, the Duchy of Lancaster continues to exist, theoretically run by the Chancellor of the Duchy of Lancaster (which is normally a sinecure position with no actual duties related to the duchy and is used to appoint a minister without portfolio). The Duchy of Lancaster is the inherited property that belongs personally to the monarch, rather than to the Crown. Thus, while income from the Crown Estate is turned over to the Exchequer in return for a sovereign grant payment, the income from the duchy forms a part of the Privy Purse, the personal funds of the Sovereign.
Writs of summons
At the beginning of each new parliament, each peer who has established his or her right to attend Parliament is issued a writ of summons. Without the writ, no peer may sit or vote in Parliament. The form of writs of summons has changed little over the centuries. It is established precedent that the sovereign may not deny writs of summons to qualified peers.
Baronies by writ
By modern English law, if a writ of summons was issued to a person who was not a peer, that person took his seat in Parliament, and the parliament was a parliament in the modern sense (including representatives of the Commons), that single writ created a barony, a perpetual peerage inheritable by male-preference primogeniture. This was not medieval practice, and it is doubtful whether any writ was ever issued with the intent of creating such a peerage. The last instance of a man being summoned by writ without already holding a peerage was under the early Tudors; the first clear decision that a single writ (as opposed to a long succession of writs) created a peerage was in Lord Abergavenny's case of 1610. The House of Lords Act 1999 also renders it doubtful that such a writ would now create a peer if one were now issued; however, this doctrine is applied retrospectively: if it can be shown that a writ was issued, that the recipient sat and that the council in question was a parliament, the Committee of Privileges of the House of Lords determines who is now entitled to the peerage as though modern law had always applied. Several such long-lost baronies were claimed in the 19th and 20th centuries, though the committee was not consistent on what constituted proof of a writ, what constituted proof of sitting, and which 13th-century assemblages were actually parliaments. Even a writ issued in error is held to create a peerage unless the writ was cancelled before the recipient took his seat; the cancellation was performed by the now obsolete writ of supersedeas.
Peerages created by writ of summons are presumed to be inheritable only by the recipient's heirs of the body. The House of Lords has settled such a presumption in several cases, including Lord Grey's Case (1640) Cro Cas 601, the Clifton Barony Case (1673), the Vaux Peerage Case (1837) 5 Cl & Fin 526, the Braye Peerage Case (1839) 6 Cl & Fin 757 and the Hastings Peerage Case (1841) 8 Cl & Fin 144. The meaning of heir of the body is determined by common law. Essentially, descent is by the rules of male primogeniture, a mechanism whereby, normally, male descendants of the peer take precedence over female descendants, with children representing their deceased ancestors, and wherein the senior line of descent always takes precedence over the junior line within each gender. These rules, however, are amended by the proviso whereby sisters (and their heirs) are considered co-heirs; seniority of the line is irrelevant when succession is through a female line. In other words, no woman inherits because she is older than her sisters. If all of the co-heirs but one die, then the surviving co-heir succeeds to the title. Otherwise, the title remains abeyant until the sovereign "terminates" the abeyance in favour of one of the co-heirs. The termination of an abeyance is entirely at the discretion of the Crown.
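These descent rules are mechanical enough to sketch in code. The following Python fragment is a rough illustration only, under simplifying assumptions: a bare family-tree data structure with invented field names, and none of the refinements of real peerage law (legitimacy, attainder, the Crown's termination of an abeyance, and so on).

from dataclasses import dataclass, field

@dataclass
class Person:
    name: str
    male: bool
    alive: bool = True
    children: list = field(default_factory=list)  # in order of birth

def heirs_of_the_body(peer):
    # Returns a single heir, a list of co-heirs (an abeyance), or None (extinct).
    # Sons and their lines, eldest first, take precedence; a deceased son is
    # represented by his own issue; daughters and their lines rank equally.
    sons = [c for c in peer.children if c.male]
    daughters = [c for c in peer.children if not c.male]
    for son in sons:                          # senior male line first
        if son.alive:
            return son
        h = heirs_of_the_body(son)            # representation of a dead son
        if h:
            return h
    co_heirs = []
    for d in daughters:                       # female lines are equal co-heirs
        if d.alive:
            co_heirs.append(d)
        else:
            h = heirs_of_the_body(d)
            if h:
                co_heirs.extend(h if isinstance(h, list) else [h])
    if not co_heirs:
        return None
    return co_heirs[0] if len(co_heirs) == 1 else co_heirs

A return value containing more than one person corresponds to a title in abeyance; in practice only the Crown can resolve that state in favour of one co-heir.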
A writ of acceleration is a type of writ of summons that enables the eldest son of a peer to attend the House of Lords using one of his father's subsidiary titles. The title is strictly not inherited by the eldest son, however; it remains vested in the father. A writ may be granted only if the title being accelerated is a subsidiary one, and not the main title, and if the beneficiary of the writ is the heir-apparent of the actual holder of the title. A total of ninety-four writs of acceleration have been issued since Edward IV issued the first one, including four writs issued in the 20th century. The most recent individual to sit in the House of Lords by writ of acceleration was Viscount Cranborne in 1992, through the Barony of Cecil, which was then held by his father, the Marquess of Salisbury. (Viscount Cranborne succeeded to the marquessate on the death of his father in 2003.)
There are no Scottish peerages created by writ; neither can Scottish baronies go into abeyance, for Scots law does not hold sisters as equal heirs regardless of age. Furthermore, there is only one extant barony by writ in the Peerage of Ireland, that of La Poer, now held by the Marquess of Waterford. (Certain other baronies were originally created by writ but later confirmed by letters patent.)
Letters patent
More often, letters patent are used to create peerages. Letters patent must explicitly name the recipient of the title and specify the course of descent; the exact meaning of the wording is determined by common law. For remainders in the Peerage of the United Kingdom, the most common wording is "to have and to hold unto him and the heirs male of his body lawfully begotten and to be begotten". Where the letters patent specify the peer's heirs male of the body as successors, the rules of agnatic succession apply, meaning that succession is through the male line only. Some very old titles, like the Earldom of Arlington, may pass to heirs of the body (not just heirs-male); these follow the same rules of descent as do baronies by writ and seem able to fall into abeyance as well. Many Scottish titles allow for passage to heirs general of the body, in which case the rules of male primogeniture apply; they do not fall into abeyance, as under Scots law, sisters are not treated as equal co-heirs. English and British letters patent that do not specify a course of descent are invalid, though the same is not true for the letters patent creating peers in the Peerage of Scotland. The House of Lords has ruled in certain cases that when the course of descent is not specified, or when the letters patent are lost, the title descends to heirs-male.
Limitation to heirs of the body
It is generally necessary for English patents to include limitation to heirs "of the body", unless a special remainder is specified (see below). The limitation indicates that only lineal descendants of the original peer may succeed to the peerage. In some very rare instances, the limitation was left out. In the Devon Peerage Case (1831) 2 Dow & Cl 200, the House of Lords permitted an heir who was a collateral descendant of the original peer to take his seat. The precedent, however, was reversed in 1859, when the House of Lords decided in the Wiltes Peerage Case (1869) LR 4 HL 126 that a patent that did not include the words "of the body" would be held void.
Special remainder
It is possible for a patent to allow for succession by someone other than an heir-male or heir of the body, under a so-called special remainder. Several instances may be cited: the Barony of Nelson (to an elder brother and his heirs-male), the Earldom of Roberts (to a daughter and her heirs-male), the Barony of Amherst (to a nephew and his heirs-male) and the Dukedom of Dover (to a younger son and his heirs-male while the eldest son is still alive). In many cases, at the time of the grant the proposed peer in question had no sons, nor any prospect of producing any, and the special remainder was made to allow remembrance of his personal honour to continue after his death and to preclude an otherwise certain rapid extinction of the peerage. However, in all cases the course of descent specified in the patent must be known in common law. For instance, the Crown may not make a "shifting limitation" in the letters patent; in other words, the patent may not vest the peerage in an individual and then, before that person's death, shift the title to another person. The doctrine was established in the Buckhurst Peerage Case (1876) 2 App Cas 1, in which the House of Lords deemed invalid the clause intended to keep the Barony of Buckhurst separate from the Earldom of De La Warr (the invalidation of the clause did not affect the validity of the letters patent itself). The patent stipulated that if the holder of the barony should ever inherit the earldom, then he would be deprived of the barony, which would instead pass to the next successor as if the deprived holder had died without issue.
Amendment of letters patent
Letters patent are not absolute; they may be amended or revoked by Act of Parliament. For example, Parliament amended the letters patent creating the Dukedom of Marlborough in 1706. The patent originally provided that the dukedom could be inherited by the heirs-male of the body of the first duke, Captain-General Sir John Churchill. One son had died in infancy and the other died in 1703 from smallpox. Under Parliament's amendment to the patent, designed to allow the famous general's honour to survive after his death, the dukedom was allowed to pass to the Duke's daughters; Lady Henrietta, the Countess of Sunderland, the Countess of Bridgewater and Lady Mary and their heirs-male—and thereafter "to all and every other the issue male and female, lineally descending of or from the said Duke of Marlborough, in such manner and for such estate as the same are before limited to the before-mentioned issue of the said Duke, it being intended that the said honours shall continue, remain, and be invested in all the issue of the said Duke, so long as any such issue male or female shall continue, and be held by them severally and successively in manner and form aforesaid, the elder and the descendants of every elder issue to be preferred before the younger of such issue."
Number of hereditary peers
The number of peers has varied considerably with time. At the end of the Wars of the Roses, which killed many peers, and degraded or attainted many others, there were only 29 Lords Temporal; but the population of England was also much smaller. The Tudors doubled the number of Peers, creating many but executing others; at the death of Queen Elizabeth I, there were 59.
The number of peers then grew under the Stuarts and all later monarchs. By the time of Queen Anne's death in 1714, there were 168 peers. In 1712, Queen Anne was called upon to create 12 peers in one day in order to pass a government measure, more than Queen Elizabeth I had created during a 45-year reign.
Several peers were alarmed at the rapid increase in the size of the Peerage, fearing that their individual importance and power would decrease as the number of peers increased. Therefore, in 1719, a bill was introduced in the House of Lords to place a limitation on the Crown's power. It sought to permit no more than six new creations, and thereafter one new creation for each other title that became extinct, but it did allow the Crown to bestow titles on members of the Royal Family without any such limitation. The bill was rejected at its final stage in the Lords, but passed the Lords when reintroduced the following year. Nonetheless, the House of Commons rejected the Peerage Bill by 269 votes to 177.
George III was especially profuse with the creation of titles, mainly due to the desire of some of his Prime Ministers to obtain a majority in the House of Lords. During his 12 years in power, Lord North had about 30 new peerages created. During William Pitt the Younger's 17-year tenure, over 140 new peerages were awarded.
A restriction on the creation of peerages, but only in the Peerage of Ireland, was enacted under the Acts of Union 1800 that combined Ireland and Great Britain into the United Kingdom in 1801. New creations were restricted to a maximum of one new Irish peerage for every three existing Irish peerages that became extinct, excluding those held concurrently with an English or British peerage; only if the total number of Irish peers dropped below 100 could the Sovereign create one new Irish peerage for each extinction.
There were no restrictions on creations in the Peerage of the United Kingdom. The Peerage continued to swell through the 19th century. In the 20th century, there were even more creations, as Prime Ministers were again eager to secure majorities in the House of Lords. Peerages were handed out not to honour the recipient but to give him a seat in the House of Lords.
Current status
Since the start of the Labour government of Harold Wilson in 1964, the practice of granting hereditary peerages has largely ceased except for members of the royal family. Only seven hereditary peers have been created since 1965: four in the royal family (the Duke of York, the Earl of Wessex, the Duke of Cambridge, and the Duke of Sussex) and three additional creations under Margaret Thatcher's government (the Viscount Whitelaw [had four daughters], the Viscount Tonypandy [had no issue] and the Earl of Stockton [with issue]). The two viscounts died without male heirs, extinguishing their titles. Harold Macmillan, 1st Earl of Stockton, received the earldom customarily bestowed on former prime ministers after he retired from the House of Commons. As for the practice of granting hereditary titles (usually earldoms) to male commoners who married into the royal family, the latest offer of such a peerage was in 1973 to Captain Mark Phillips (husband of The Princess Anne), who declined, and the most recent to accept was the Earl of Snowdon (husband of The Princess Margaret) in 1961.
There is no statute that prevents the creation of new hereditary peerages; they may technically be created at any time, and the government continues to maintain pro forma letters patent for their creation. The most recent policies outlining the creation of new peerages, the Royal Warrant of 2004, explicitly apply to both hereditary and life peers. However, successive governments have largely disowned the practice, and the Royal Household website currently describes the King as the fount of honour for "life peerages, knighthoods and gallantry awards", with no mention of hereditary titles.
Abolition of hereditary peers
In 2024, the Starmer Labour government announced in the King's speech that they would bring in legislation to abolish the remaining hereditary peers' rights to sit in the House of Lords.
Roles
Until the coming into force of the Peerage Act 1963, peers could not disclaim their peerage in order to sit in the House of Commons, and thus a peerage was sometimes seen as an impediment to a future political career. The law was changed because of a consensus that depriving the Labour MP Tony Benn (formerly the Viscount Stansgate) of his seat through an inadvertent inheritance was undemocratic, and because the Conservatives wished to place their choice of prime minister (ultimately Alec Douglas-Home) in the House of Commons, which by that time was deemed politically necessary.
In 1999, the House of Lords Act abolished the automatic right of hereditary peers to sit in the House of Lords. Out of about 750 hereditary peers, only 92 may sit in the House of Lords. The Act provides that 90 of those 92 seats are to be elected by other members of the House: 15 by vote of the whole house (including life peers), 42 by the Conservative hereditary peers, two by the Labour hereditary peers, three by the Liberal Democrat hereditary peers, and 28 by the crossbench hereditary peers. Elections were held in October and November 1999 to choose those initial 90 peers, with all hereditary peers eligible to vote. Hereditary peers elected hold their seats until their death, resignation or exclusion for non-attendance (the latter two means introduced by the House of Lords Reform Act 2014), at which point by-elections are held to maintain the number at 92.
The remaining two hold their seats by right of the hereditary offices of Earl Marshal and Lord Great Chamberlain. These offices are hereditary in themselves, and in recent times have been held by the Dukes of Norfolk and the Barons Carrington respectively. These are the only two hereditary peers whose right to sit is automatic.
The Government reserves a number of political and ceremonial positions for hereditary peers. To encourage hereditary peers in the House of Lords to follow the party line, a number of lords-in-waiting (government whips) are usually hereditary peers. This practice was not adhered to by the Labour government of 1997–2010 due to the small number of Labour hereditary peers in the House of Lords.
Modern composition of the hereditary peerage
The peerage has traditionally been associated with high gentry, the British nobility, and in recent times, the Conservative Party. Only a tiny proportion of wealthy people are peers, but the peerage includes a few of the very wealthiest people in the UK, such as Hugh Grosvenor (the Duke of Westminster) and Lord Salisbury, and indeed the world in the case of David Thomson, 3rd Baron Thomson of Fleet. A few peers own one or more of England's largest estates passed down through inheritance, particularly those with medieval roots: until the late 19th century the dominant English and Scottish land division on death was primogeniture.
However, the proliferation of peerage creations in the late 19th century and the first half of the 20th century resulted in even minor political figures entering the ranks of the peerage; these included newspaper owners (e.g. Alfred Harmsworth) and trade union leaders (e.g. Walter Citrine). As a result, there are many hereditary peers who have taken up careers which do not fit traditional conceptions of aristocracy. For example, Arup Kumar Sinha, 6th Baron Sinha, is a computer technician working for a travel agency; Matt Ridley, 5th Viscount Ridley, is a popular science writer; Timothy Bentinck, 12th Earl of Portland, is an actor and plays David Archer in the BBC's long-running radio soap opera, The Archers; and Peter St Clair-Erskine, 7th Earl of Rosslyn, is a former Metropolitan Police Service Commander. The Earl of Longford was a socialist and prison reformer, while Tony Benn, who renounced his peerage as Viscount Stansgate (only for his son to reclaim the family title after his death) was a senior government minister (later a writer and orator) with left-wing policies.
Gender distribution
As the vast majority of hereditary peerages can only be inherited by men, the number of peeresses in their own right is very small; only 18 out of 758 hereditary peers by succession, or 2.4%, were female, as of 1992.
All female hereditary peers succeeding after 1980 have succeeded to English or Scottish peerages originally created before 1700. Of the over 800 hereditary peerages created since 1863, only 13 could be inherited by daughters of the original recipient, and none can be inherited by granddaughters or higher-order female descendants of the original recipient. The 2nd Countess Mountbatten of Burma, who held her title from 1979 until her death in 2017, was the last woman to hold such a post-1900 title.
From 1963 (when female hereditary peers were allowed to enter the House of Lords) to 1999, there was a total of 25 female hereditary peers.
Of those 92 currently sitting in the House of Lords, none are female, since the retirement of Margaret of Mar, 31st Countess of Mar, in 2020. Originally five female peers were elected under the House of Lords Act 1999 (out of seven female candidates; all of them crossbenchers). But all of these have since died or resigned, and no woman has stood in a by-election to a vacant Lords seat since 1999.
A single female peer, the 29th Baroness Dacre, is listed in the "Register of Hereditary Peers" among about 200 male peers as willing to stand in by-elections, as of October 2020.
See also
List of hereditary baronies in the Peerage of the United Kingdom
List of excepted hereditary peers
By-elections to the House of Lords
List of hereditary peers in the House of Lords by virtue of a life peerage
Reform of the House of Lords
Roll of the Peerage
Substantive title
Writ of acceleration
The Hereditary Peerage Association
Notes
References
UK legislation
External links
Kinship and descent
Peerages in the United Kingdom
Truth function
In logic, a truth function is a function that accepts truth values as input and produces a unique truth value as output. In other words: the input and output of a truth function are all truth values; a truth function will always output exactly one truth value, and inputting the same truth value(s) will always output the same truth value. The typical example is in propositional logic, wherein a compound statement is constructed using individual statements connected by logical connectives; if the truth value of the compound statement is entirely determined by the truth value(s) of the constituent statement(s), the compound statement is called a truth function, and any logical connectives used are said to be truth functional.
Classical propositional logic is a truth-functional logic, in that every statement has exactly one truth value which is either true or false, and every logical connective is truth functional (with a correspondent truth table), thus every compound statement is a truth function. On the other hand, modal logic is non-truth-functional.
Overview
A logical connective is truth-functional if the truth-value of a compound sentence is a function of the truth-value of its sub-sentences. A class of connectives is truth-functional if each of its members is. For example, the connective "and" is truth-functional since a sentence like "Apples are fruits and carrots are vegetables" is true if, and only if, each of its sub-sentences "apples are fruits" and "carrots are vegetables" is true, and it is false otherwise. Some connectives of a natural language, such as English, are not truth-functional.
Connectives of the form "x believes that ..." are typical examples of connectives that are not truth-functional. If e.g. Mary mistakenly believes that Al Gore was President of the USA on April 20, 2000, but she does not believe that the moon is made of green cheese, then the sentence
"Mary believes that Al Gore was President of the USA on April 20, 2000"
is true while
"Mary believes that the moon is made of green cheese"
is false. In both cases, each component sentence (i.e. "Al Gore was president of the USA on April 20, 2000" and "the moon is made of green cheese") is false, but each compound sentence formed by prefixing the phrase "Mary believes that" differs in truth-value. That is, the truth-value of a sentence of the form "Mary believes that..." is not determined solely by the truth-value of its component sentence, and hence the (unary) connective (or simply operator since it is unary) is non-truth-functional.
The class of classical logic connectives (e.g. &, →) used in the construction of formulas is truth-functional. Their values for various truth-values as argument are usually given by truth tables. Truth-functional propositional calculus is a formal system whose formulae may be interpreted as either true or false.
Table of binary truth functions
In two-valued logic, there are sixteen possible truth functions, also called Boolean functions, of two inputs P and Q. Any of these functions corresponds to a truth table of a certain logical connective in classical logic, including several degenerate cases such as a function not depending on one or both of its arguments. Truth and falsehood are denoted as 1 and 0, respectively, for the sake of brevity; the sketch below enumerates all sixteen functions by their truth-table output columns.
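The sixteen functions can be generated mechanically. Here is a short Python sketch (an illustrative enumeration, not drawn from the article itself) that lists each function by its output column over the four input rows, attaching the conventional connective name where one is well known:

from itertools import product

# Each binary truth function is identified by its output column over the
# input rows (P, Q) = (1, 1), (1, 0), (0, 1), (0, 0).
NAMES = {
    (1, 0, 0, 0): "AND",
    (1, 1, 1, 0): "OR",
    (0, 1, 1, 1): "NAND",
    (0, 0, 0, 1): "NOR",
    (1, 0, 1, 1): "implication P -> Q",
    (1, 0, 0, 1): "biconditional P <-> Q",
    (0, 1, 1, 0): "exclusive or",
    (1, 1, 1, 1): "tautology",
    (0, 0, 0, 0): "contradiction",
}

for column in product((0, 1), repeat=4):   # 2^4 = 16 output columns
    print(column, NAMES.get(column, ""))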
Functional completeness
Because a function may be expressed as a composition, a truth-functional logical calculus does not need to have dedicated symbols for all of the above-mentioned functions to be functionally complete. This is expressed in a propositional calculus as logical equivalence of certain compound statements. For example, classical logic has ¬P ∨ Q equivalent to P → Q. The conditional operator "→" is therefore not necessary for a classical-based logical system if "¬" (not) and "∨" (or) are already in use.
A minimal set of operators that can express every statement expressible in the propositional calculus is called a minimal functionally complete set. A minimally complete set of operators is achieved by NAND alone {↑} and NOR alone {↓}.
The following are the minimal functionally complete sets of operators whose arities do not exceed 2:
One element {↑}, {↓}.
Two elements: eighteen such sets, among them {¬, ∧}, {¬, ∨}, and {¬, →}.
Three elements: six such sets.
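As a concrete check of the one-element case, the following Python sketch (using the standard textbook derivations, stated here as assumptions rather than quotations from the article) builds ¬, ∧, ∨ and → out of NAND alone and verifies the resulting truth tables:

def nand(p, q):
    return 0 if (p and q) else 1

# Standard derivations of other connectives from NAND alone.
def not_(p):        return nand(p, p)
def and_(p, q):     return nand(nand(p, q), nand(p, q))
def or_(p, q):      return nand(nand(p, p), nand(q, q))
def implies(p, q):  return nand(p, nand(q, q))   # P -> Q == not(P and not Q)

for p in (0, 1):
    for q in (0, 1):
        assert not_(p) == 1 - p
        assert and_(p, q) == (p & q)
        assert or_(p, q) == (p | q)
        assert implies(p, q) == ((1 - p) | q)    # the equivalence with ¬P ∨ Q
print("NAND alone expresses NOT, AND, OR and ->")

An entirely symmetric construction works for NOR.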
Algebraic properties
Some truth functions possess properties which may be expressed in the theorems containing the corresponding connective. Some of those properties that a binary truth function (or a corresponding logical connective) may have are:
associativity: Within an expression containing two or more of the same associative connectives in a row, the order of the operations does not matter as long as the sequence of the operands is not changed.
commutativity: The operands of the connective may be swapped without affecting the truth-value of the expression.
distributivity: A connective denoted by · distributes over another connective denoted by +, if a · (b + c) = (a · b) + (a · c) for all operands a, b, c.
idempotence: Whenever the operands of the operation are the same, the connective gives the operand as the result. In other words, the operation is both truth-preserving and falsehood-preserving (see below).
absorption: A pair of connectives ∧, ∨ satisfies the absorption law if a ∧ (a ∨ b) = a and a ∨ (a ∧ b) = a for all operands a, b.
A set of truth functions is functionally complete if and only if for each of the following five properties it contains at least one member lacking it:
monotonic: If f(a1, ..., an) ≤ f(b1, ..., bn) for all a1, ..., an, b1, ..., bn ∈ {0,1} such that a1 ≤ b1, a2 ≤ b2, ..., an ≤ bn. E.g., ∨, ∧, ⊤, ⊥.
affine: For each variable, changing its value either always or never changes the truth-value of the operation, for all fixed values of all other variables. E.g., ¬, ↔.
self dual: To read the truth-value assignments for the operation from top to bottom on its truth table is the same as taking the complement of reading it from bottom to top; in other words, f(¬a1, ..., ¬an) = ¬f(a1, ..., an). E.g., ¬.
truth-preserving: The interpretation under which all variables are assigned a truth value of true produces a truth value of true as a result of these operations. E.g., ∧, ∨, ⊤. (see validity)
falsehood-preserving: The interpretation under which all variables are assigned a truth value of false produces a truth value of false as a result of these operations. E.g., ∧, ∨, ⊥. (see validity)
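This completeness criterion is mechanical, so it can be checked by brute force. The following Python sketch (the helper names are illustrative) tests each of the five properties for truth functions given as (arity, function) pairs, then applies the criterion to a candidate set:

from itertools import product

def monotonic(arity, f):
    pts = list(product((0, 1), repeat=arity))
    return all(f(a) <= f(b) for a in pts for b in pts
               if all(x <= y for x, y in zip(a, b)))

def affine(arity, f):
    # each variable must either always or never flip the output
    pts = list(product((0, 1), repeat=arity))
    for i in range(arity):
        flips = {f(p) != f(tuple(1 - v if j == i else v for j, v in enumerate(p)))
                 for p in pts}
        if len(flips) > 1:
            return False
    return True

def self_dual(arity, f):
    return all(f(tuple(1 - v for v in p)) == 1 - f(p)
               for p in product((0, 1), repeat=arity))

def truth_preserving(arity, f):
    return f((1,) * arity) == 1

def falsehood_preserving(arity, f):
    return f((0,) * arity) == 0

PROPS = (monotonic, affine, self_dual, truth_preserving, falsehood_preserving)

def functionally_complete(fns):
    # Post's criterion: for each property, some member of the set must lack it.
    return all(any(not prop(a, f) for a, f in fns) for prop in PROPS)

NAND = (2, lambda p: 0 if p == (1, 1) else 1)
AND  = (2, lambda p: p[0] & p[1])
OR   = (2, lambda p: p[0] | p[1])
NOT  = (1, lambda p: 1 - p[0])

print(functionally_complete([NAND]))      # True: NAND lacks all five properties
print(functionally_complete([AND, OR]))   # False: both members are monotonic
print(functionally_complete([NOT, AND]))  # True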
Arity
A concrete function may be also referred to as an operator. In two-valued logic there are 2 nullary operators (constants), 4 unary operators, 16 binary operators, 256 ternary operators, and 2^(2^n) n-ary operators. In three-valued logic there are 3 nullary operators (constants), 27 unary operators, 19683 binary operators, 7625597484987 ternary operators, and 3^(3^n) n-ary operators. In k-valued logic, there are k nullary operators, k^k unary operators, k^(k^2) binary operators, k^(k^3) ternary operators, and k^(k^n) n-ary operators. An n-ary operator in k-valued logic is a function from {0, 1, ..., k−1}^n to {0, 1, ..., k−1}. Therefore, the number of such operators is k^(k^n), which is how the above numbers were derived.
However, some of the operators of a particular arity are actually degenerate forms that perform a lower-arity operation on some of the inputs and ignore the rest of the inputs. Out of the 256 ternary Boolean operators cited above, 3·16 − 3·4 + 2 = 38 of them are such degenerate forms of binary or lower-arity operators, using the inclusion–exclusion principle. A ternary operator such as f(x, y, z) = ¬x is one such operator; it is actually a unary operator applied to one input, ignoring the other two inputs.
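The degenerate count can be confirmed by exhaustive enumeration. A short Python sketch (illustrative, not from the article) checks, for each of the 256 ternary operators, whether it genuinely depends on all three of its inputs:

from itertools import product

rows = list(product((0, 1), repeat=3))  # the 8 rows of a ternary truth table

def depends_on(table, i):
    # True if flipping input i alone can ever change the output
    return any(table[r] != table[tuple(1 - v if j == i else v
                                       for j, v in enumerate(r))]
               for r in rows)

degenerate = 0
for outputs in product((0, 1), repeat=8):     # all 2^8 = 256 ternary operators
    table = dict(zip(rows, outputs))
    if not all(depends_on(table, i) for i in range(3)):
        degenerate += 1

print(degenerate)   # 38, matching 3*16 - 3*4 + 2 from inclusion-exclusion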
"Not" is a unary operator, it takes a single term (¬P). The rest are binary operators, taking two terms to make a compound statement (P ∧ Q, P ∨ Q, P → Q, P ↔ Q).
The set of logical operators Ω may be partitioned into disjoint subsets as follows:
Ω = Ω₀ ∪ Ω₁ ∪ ... ∪ Ω_j ∪ ... ∪ Ω_m
In this partition, Ω_j is the set of operator symbols of arity j.
In the more familiar propositional calculi, Ω is typically partitioned as follows:
nullary operators: Ω₀ = {⊥, ⊤}
unary operators: Ω₁ = {¬}
binary operators: Ω₂ ⊆ {∧, ∨, →, ↔}
Principle of compositionality
Instead of using truth tables, logical connective symbols can be interpreted by means of an interpretation function and a functionally complete set of truth-functions (Gamut 1991), as detailed by the principle of compositionality of meaning.
Let I be an interpretation function, let Φ, Ψ be any two sentences and let the truth function fnand be defined as:
fnand(T,T) = F; fnand(T,F) = fnand(F,T) = fnand(F,F) = T
Then, for convenience, fnot, for, fand and so on are defined by means of fnand:
fnot(x) = fnand(x,x)
for(x,y) = fnand(fnot(x), fnot(y))
fand(x,y) = fnot(fnand(x,y))
or, alternatively, fnot, for, fand and so on are defined directly:
fnot(T) = F; fnot(F) = T;
for(T,T) = for(T,F) = for(F,T) = T; for(F,F) = F
fand(T,T) = T; fand(T,F) = fand(F,T) = fand(F,F) = F
Then I(¬Φ) = fnot(I(Φ)), I(Φ ∧ Ψ) = fand(I(Φ), I(Ψ)), I(Φ ∨ Ψ) = for(I(Φ), I(Ψ)), etc.
Thus if S is a sentence that is a string of symbols consisting of logical symbols v1...vn representing logical connectives, and non-logical symbols c1...cn, then provided interpretations of v1...vn have been given by means of fnand (or any other functionally complete set of truth-functions), the truth-value of I(S) is determined entirely by the truth-values of c1...cn, i.e. by I(c1)...I(cn). In other words, as expected and required, S is true or false only under an interpretation of all its non-logical symbols.
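To make the compositional reading concrete, here is a minimal Python sketch of an interpretation function I; the nested-tuple sentence representation and the helper names are assumptions made for illustration:

def f_nand(x, y):
    return not (x and y)

# Derived truth functions, exactly as defined above.
def f_not(x):     return f_nand(x, x)
def f_or(x, y):   return f_nand(f_not(x), f_not(y))
def f_and(x, y):  return f_not(f_nand(x, y))

CONNECTIVES = {"not": f_not, "or": f_or, "and": f_and, "nand": f_nand}

def interpret(sentence, valuation):
    # A sentence is either a non-logical symbol (a string, looked up in the
    # valuation) or a tuple (connective, subsentence, ...); the truth-value of
    # a compound is computed purely from the truth-values of its parts.
    if isinstance(sentence, str):
        return valuation[sentence]
    op, *subs = sentence
    return CONNECTIVES[op](*(interpret(s, valuation) for s in subs))

# I("p and not q") under the valuation p = True, q = False:
print(interpret(("and", "p", ("not", "q")), {"p": True, "q": False}))  # True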
Computer science
Logical operators are implemented as logic gates in digital circuits. Practically all digital circuits (the major exception is DRAM) are built up from NAND, NOR, NOT, and transmission gates. NAND and NOR gates with 3 or more inputs rather than the usual 2 inputs are fairly common, although they are logically equivalent to a cascade of 2-input gates. All other operators are implemented by breaking them down into a logically equivalent combination of 2 or more of the above logic gates.
The "logical equivalence" of "NAND alone", "NOR alone", and "NOT and AND" is similar to Turing equivalence.
The fact that all truth functions can be expressed with NOR alone is demonstrated by the Apollo guidance computer.
See also
Bertrand Russell and Alfred North Whitehead, Principia Mathematica, 2nd edition
Ludwig Wittgenstein, Tractatus Logico-Philosophicus, Proposition 5.101
Bitwise operation
Binary function
Boolean domain
Boolean logic
Boolean-valued function
List of Boolean algebra topics
Logical constant
Modal operator
Propositional calculus
Truth-functional propositional logic
Notes
References
Further reading
Józef Maria Bocheński (1959), A Précis of Mathematical Logic, translated from the French and German versions by Otto Bird, Dordrecht, South Holland: D. Reidel.
Alonzo Church (1944), Introduction to Mathematical Logic, Princeton, NJ: Princeton University Press. See the Introduction for a history of the truth function concept.
Mathematical logic
Logical truth
Bluejacking
Bluejacking is the sending of unsolicited messages over Bluetooth to Bluetooth-enabled devices such as mobile phones, PDAs or laptop computers. Typically, the bluejacker sends a vCard containing a message in the name field (e.g., for bluedating) to another Bluetooth-enabled device via the OBEX protocol.
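To make the mechanism concrete, a minimal Python sketch of such a payload follows (the helper name and message are hypothetical, and actually delivering the card requires an OBEX push over a Bluetooth link, which is not shown):

def bluejack_vcard(message: str) -> bytes:
    # A bluejack vCard carries the message in the name fields (N/FN), so the
    # text appears in the recipient's incoming-contact notification.
    card = (
        "BEGIN:VCARD\r\n"
        "VERSION:2.1\r\n"
        f"N:{message}\r\n"
        f"FN:{message}\r\n"
        "END:VCARD\r\n"
    )
    return card.encode("ascii", errors="replace")

print(bluejack_vcard("You've been bluejacked!").decode())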
Bluetooth has a very limited range, usually around 10 metres on mobile phones, but laptops can reach up to 100 metres with powerful (Class 1) transmitters.
Origins
Bluejacking was reportedly first carried out between 2001 and 2003 by a Malaysian IT consultant who used his phone to advertise Ericsson to a single Nokia 7650 phone owner in a Malaysian bank. He also invented the name, which he claims is an amalgam of Bluetooth and ajack, his username on Esato, a Sony Ericsson fan online forum. Jacking is, however, an extremely common shortening of "hijack", the act of taking over something. Ajack's original posts are hard to find, but references to the exploit are common in 2003 posts.
Another user on the forum claims earlier discovery, reporting a near-identical story to that attributed to Ajack, except they describe bluejacking 44 Nokia 7650 phones instead of one, and the location is a garage, seemingly in Denmark, rather than a Malaysian Bank. Also, the message was an insult to Nokia owners rather than a Sony Ericsson advertisement.
Usage
Bluejacking is usually not very harmful, except that bluejacked people generally do not know what has happened, and so may think that their phone is malfunctioning. Usually, a bluejacker will only send a text message, but with modern phones it is possible to send images or sounds as well. Bluejacking has been used in guerrilla marketing campaigns to promote advergames.
The actual message itself doesn't deploy any malware to the software; rather, it is crafted to elicit a response from the user or add a new contact and can be seen as more of a prank than an attack. These messages can evoke either annoyance or amusement in the recipient. Users typically possess the ability to reject such messages, and this tactic is frequently employed in confined environments such as planes, trains, and buses. However, some forms of DoS (denial-of-service) disruption are still possible, even on modern devices, by sending unsolicited pairing requests in rapid succession; this becomes disruptive because most systems display a full-screen notification for every connection request, interrupting every other activity, especially on less powerful devices.
Bluejacking is sometimes confused with bluesnarfing, which is the practice of illegally hacking into mobile phones via Bluetooth.
Companies
BluejackQ
BlueJackQ is a website dedicated to bluejacking. The website contains a few bluejacking stories taken from the site's forum. The website also includes software that can be used for bluejacking and guides on how to bluejack which are slightly out of date but the basic principle still applies to most makes of phone. Its forum has 4,000 registered users and 93,050 posts. The website has been featured in many news articles.
The forums were opened on November 13, 2003 and have been the center of BluejackQ from the start. It currently has 4 moderators and 20 different sections available to members. The areas include information about BluejackQ; reviews of mobile phones, media players, PDAs and miscellaneous devices; general bluejacking threads; and an off-topic area. The BluejackQ podcast was first released as a test version on January 15, 2006, thus becoming the first bluejacking-related podcast. Podcasts 1, 2 and 3 featured three members of the forums.
The forums seem to have been unused since 2020.
Fictional reference in Person of Interest
The authentic bluejacking as described here is not the same exploit which was frequently depicted in the television series Person of Interest; that fictional exploit portrayed different and more invasive capabilities.
See also
Bluebugging
Bluesnarfing
AirDrop
References
Bluetooth
Spamming
604,783 | https://en.wikipedia.org/wiki/Seroconversion | In immunology, seroconversion is the development of specific antibodies in the blood serum as a result of infection or immunization, including vaccination. During infection or immunization, antigens enter the blood, and the immune system begins to produce antibodies in response. Before seroconversion, the antigen itself may or may not be detectable, but the antibody is absent. During seroconversion, the antibody is present but not yet detectable. After seroconversion, the antibody is detectable by standard techniques and remains detectable unless the individual seroreverts, in a phenomenon called seroreversion, or loss of antibody detectability, which can occur due to weakening of the immune system or decreasing antibody concentrations over time. Seroconversion refers the production of specific antibodies against specific antigens, meaning that a single infection could cause multiple waves of seroconversion against different antigens. Similarly, a single antigen could cause multiple waves of seroconversion with different classes of antibodies. For example, most antigens prompt seroconversion for the IgM class of antibodies first, and subsequently the IgG class.
Seroconversion rates are one of the methods used for determining the efficacy of a vaccine. The higher the rate of seroconversion, the more protective the vaccine for a greater proportion of the population. Seroconversion does not inherently confer immunity or resistance to infection. Only some antibodies, such as anti-spike antibodies for COVID-19, confer protection.
Because seroconversion refers to detectability by standard techniques, seropositivity status depends on the sensitivity and specificity of the assay. As a result, assays, like any serum test, may give false positives or false negatives and should be confirmed if used for diagnosis or treatment.
Mechanism
The physical structure of an antibody allows it to bind to a specific antigen, such as bacterial or viral proteins, to form a complex. Because antibodies are highly specific in what they bind, tests can detect specific antibodies by replicating the antigen which that antibody binds to. Assays can likewise detect specific antigens by replicating the antibodies that bind to them. If an antibody is already bound to an antigen, that antibody and that antigen cannot bind to the test. Antibody tests therefore cannot detect that specific antibody molecule. Due to this binding, if the amounts of antigen and antibody in the blood are equal, each antibody molecule will be in a complex and be undetectable by standard techniques. The antigen, which is bound as well, will also be undetectable. The antibody or antigen is only detectable in the blood when there is substantially more of one than the other. Standard techniques require a high enough concentration of antibody or antigen to detect the amount of antibody or antigen; therefore, they cannot detect the small amount that is not bound during seroconversion.
The immune system may take several days or weeks to detect antigen in tissue, begin to create antibodies, and ramp up the production of antibodies to counter the antigen. As a result, the antigen molecules outnumber the antibody molecules in the early stages of an infection. Because there are more antigen molecules than antibody molecules, the majority of the antibody molecules are bound to antigen. Thus, tests at this stage are unable to detect sufficient unbound antibodies. On the other hand, there may be unbound antigen that can be detectable. As seroconversion progresses, the amount of antibody in the blood gradually rises. Eventually the amount of antibody outnumbers the amount of antigen. At this time, the majority of the antigen molecules is bound to antibodies, and the antigen is undetectable. Conversely, there is a substantial amount of unbound antibodies, allowing standard techniques to detect these antibodies.
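The interplay of concentrations described above can be illustrated with a deliberately crude numerical toy model in Python; every quantity and the detection threshold below are invented for illustration and do not reflect real assay chemistry:

# Toy model: antigen is constant while antibody rises over time; an assay
# only detects whichever species is unbound, and only above a threshold.
ANTIGEN = 100.0      # arbitrary units of antigen in serum
THRESHOLD = 20.0     # minimum unbound material the assay can detect

def assay(antibody):
    bound = min(antibody, ANTIGEN)       # simplified 1:1 binding
    free_antibody = antibody - bound
    free_antigen = ANTIGEN - bound
    return {"antigen detectable": free_antigen >= THRESHOLD,
            "antibody detectable": free_antibody >= THRESHOLD}

for antibody in (0, 50, 90, 110, 150):
    print(antibody, assay(antibody))
# Near antibody ≈ antigen (the 90 and 110 cases), neither species is
# detectable: this corresponds to the window period.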
Terminology
Serological assays are tests that detect specific antibodies and are used to determine whether those antibodies are in an organism's blood; such tests require a significant concentration of unbound antibody in the blood serum. Serostatus is a term denoting the presence or absence of particular antibodies in an individual's blood. An individual's serostatus may be positive or negative. During seroconversion, the specific antibody being tested for is generated. Therefore, before seroconversion, the serological assay will not detect any antibody, and the individual's serostatus is seronegative for the antibody. After seroconversion, sufficient concentration of the specific antibody exists in the blood, and the serological assay will detect the antibody. The individual is now seropositive for the antibody.
During seroconversion, when the amounts of antibody and antigen are very similar, it may not be possible to detect free antigen or free antibody. This may give a false negative result when testing for the infection. The time during which the amount of antibody and antigen are sufficiently similar that standard techniques will be unable to detect the antibody or antigen is referred to as the window period. Since different antibodies are produced independently of one another, a given infection may have several window periods. Each specific antibody has its own window period.
Similarly, because standard techniques utilize assumptions about the specificity of antibodies and antigens and are based on chemical interactions, these tests are not completely accurate. Serological assays may give a false positive result, causing the individual to appear to have seroconverted when the individual has not. False positives can occur due to the test reacting to, or detecting, an antibody that happens to be sufficiently similar in structure to the target antibody. Antibodies are generated randomly, so the immune system has a low chance of generating an antibody capable of weakly binding to the assay by coincidence. More rarely, individuals who have recently had some vaccines or who have certain autoimmune conditions can temporarily test falsely seropositive. Due to the possibility of false positives, positive test results are usually reported as "reactive." This indicates that the assay reacted to antibodies, but this does not mean that the individual has the specific antibodies tested for.
Seroreversion is the opposite of seroconversion. During seroreversion, the amount of antibody in the serum decreases. This decrease may occur naturally as a result of the infection resolving and the immune system slowly tamping down its response, or as a result of a weakening of the immune system. Different infections and antigens lead to the production of antibodies for differing periods of time. Some infections may lead to antibodies that the immune system produces for years after the infection resolves. Others lead to antibodies that the immune system only produces for a few weeks following resolution. After seroreversion, tests can no longer detect antibodies in a patient's serum.
The immune system generates antibodies to any antigen, so seroconversion can occur as a result of either natural infection or as a result of vaccination. Detectable seroconversion and the timeline of seroconversion are among of the parameters studied in evaluating the efficacy of vaccines. A vaccine does not need to have a 100% seroconversion rate to be effective. As long as a sufficient proportion of the population seroconverts, the entire population will be effectively protected by herd immunity.
An individual being seropositive means that the individual has antibodies to that antigen, but it does not mean that that individual has immunity or even resistance to the infection. While antibodies form an important part of the immune system's ability to fight off and resolve an infection, antibodies and seropositivity alone do not guarantee that an individual will resolve the infection. An individual who is seropositive for anti-HIV antibodies will retain that infection chronically unless treated with medications specific to HIV. Conversely, seroconversion in other infections may indicate resistance or immunity. For example, higher concentrations of antibodies after seroconversion in individuals vaccinated against COVID-19 predicts reduced chance of breakthrough infection.
Although seroconversion refers to the production of sufficient quantities of antibodies in the serum, the word seroconversion is often used more specifically in reference to blood testing for anti-HIV antibodies. In particular, "seroconverted" has been used to refer to the process of having "become HIV positive". This indicates that the individual has a detectable amount of anti-HIV antibodies. An individual may have a transmittable HIV infection before the individual becomes HIV positive due to the window period.
In epidemiology, seroconversion is often used in reference to observing the evolution of a virus from a host or natural reservoir host to the human population. Epidemiologists compare archived human blood specimens taken from infected hosts before an epidemic and later specimens from infected hosts at later stages of the epidemic. In this context, seroconversion refers to the process of anti-viral antibodies becoming detectable in the human population serum.
Background
The immune system maintains an immunological memory of infectious pathogens to facilitate early detection and to confer protective immunity against a rechallenge. This explains why many childhood diseases never recur in adulthood (and when they do, it generally indicates immunosuppression).
It generally takes several days for B cells to begin producing antibodies, and it takes further time for those antibodies to develop sufficient specificity to bind strongly to their specific antigen. In the initial (primary infection) phase of the infection, the immune system responds by generating weakly binding immunoglobulin M (IgM) antibodies; although they individually bind weakly, each IgM antibody has many binding regions and can thus make for an effective initial mobilization of the immune system. Over time, immunoglobulin class switching will result in IgM-generating B-cells switching to more specific IgG-generating B-cells. Levels of IgM then gradually decline and eventually become undetectable by immunoassays, while levels of immunoglobulin G (IgG) levels rise and become detectable. After the infection resolves, levels of IgM antibodies generally fall to completely undetectable levels as the immune response self-regulates, but some plasma cells will remain as memory cells to produce levels of IgG that will frequently remain detectable for months to years after the initial infection.
Upon reinfection, levels of both IgM and IgG rise, with IgM antibodies having a more rapid but smaller and less sustained peak, and IgG antibodies having a slightly slower, but far greater peak sustained over a longer period of time compared to IgM antibodies. Subsequent infections will demonstrate similar patterns, with initial IgM peaks and significantly stronger IgG peaks, with the IgG peak occurring more rapidly during subsequent infections. Thus an elevated IgM titre indicates recent primary infection or acute reinfection, while the presence of IgG suggests past infection or immunization.
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2, the virus causing COVID-19) sometimes does not follow the usual pattern, with IgM sometimes occurring after IgG, together with IgG, or not occurring at all. Generally, however, median IgM detection occurs 5 days after symptom onset, and IgG is detected a median 14 days after symptom onset.
In HIV
Most individuals infected with HIV will begin to produce antibodies within a few weeks after their initial exposure to HIV. During the window period, the antibody assay cannot detect unbound anti-HIV antibodies and will indicate that the individual is seronegative. The length of the window period depends on the individual's immune response and the particular parameters of the test. An individual in the window period can still infect others despite appearing seronegative on tests because the individual still carries the virus.
The average window period for the development of antibodies to p24 antigen, the standard for testing, is about two weeks. However, the window periods used for the assays are based on capturing as many people as possible. More recent, fourth-generation assays that assess for both the antibody and the antigen can have a window period as short as six weeks to detect more than 99% of infections, while third-generation tests that assess only for unbound antibody tend to have a longer window period of eight to nine weeks. Third-generation tests are no longer recommended if fourth-generation tests are available. Rapid tests procurable at a consumer level often fail to detect antibody until at least three months have passed since the initial infection. It takes longer for fingerstick blood or other fluids to accumulate sufficiently high levels of antibodies compared to venous blood plasma sampling, so point of care tests reliant on these sources can have even longer window periods. While a reactive (seropositive) rapid point of care test may prompt an individual to undergo further testing, a non-reactive (negative) rapid point of care test should still be followed up with immunoassay testing, such as a fourth-generation test, after the window period. Similarly, individuals taking pre-exposure prophylaxis (PrEP) can experience extended window periods compared to the average population, leading to ambiguous testing. Thus, individuals who test negative for HIV before the window period ends for that specific test will usually need to be retested after the window period, as they may fall into the minority who take more time to develop antibodies.
Current CDC recommendations are to begin with a test that screens for both antigen and antibody, then follow up with an immunoassay to differentiate between HIV-1 and HIV-2 antibodies. Non-reactive (negative) tests are followed up with nucleic acid tests for viral RNA.
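As a rough illustration of the decision flow just described, the following Python sketch encodes the screen-then-differentiate-then-NAT ordering. The function name, inputs, and result labels are hypothetical simplifications for illustration only, not a clinical tool.

```python
from typing import Optional

def hiv_testing_sequence(screen_reactive: bool,
                         differentiation: Optional[str] = None,
                         rna_detected: Optional[bool] = None) -> str:
    """Illustrative interpretation following the screen -> differentiate -> NAT order."""
    if not screen_reactive:
        # Antigen/antibody screen non-reactive; note the window period caveat.
        return "negative screen; retest after window period if recent exposure"
    if differentiation in ("HIV-1", "HIV-2", "HIV-1 and HIV-2"):
        return f"{differentiation} antibodies confirmed"
    # Screen reactive but antibody differentiation negative or indeterminate:
    # a nucleic acid test (NAT) checks for acute infection before seroconversion.
    if rna_detected:
        return "acute HIV-1 infection (viral RNA detected before seroconversion)"
    return "no laboratory evidence of infection; screen likely false-positive"
```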
About 70–80% of people infected with HIV will experience symptoms during the seroconversion period, within about two to four weeks, primarily associated with a high viral load and the immune system's acute response to the infection. These symptoms can last for anywhere from a couple of days to several weeks. Some people have no symptoms at all. The symptoms of seroconversion are non-specific and can often be mistaken for a more benign illness such as the flu. Symptoms can include lymphadenopathy (swelling of the lymph glands), general fatigue and malaise, chills, low-grade fever, sore throat, body aches, night sweats, ulcers in the mouth, pain in the joints and muscles, loss of appetite, headache, and a maculopapular rash on the trunk of the body. Because not all individuals experience the symptoms of seroconversion, and because they are non-specific, individuals should receive testing for HIV if they are high-risk or have possibly had an exposure to HIV. Likewise, if an individual suspects exposure to HIV, a lack of symptoms does not indicate that seroconversion has not occurred. 20–30% of people undergoing HIV seroconversion lack symptoms entirely or have mild symptoms.
The immune system mounts an acute effort to resolve the HIV infection during the seroconversion period. Following this period, the immune system temporarily contains the infection. The symptoms of seroconversion lessen and disappear in most people, with HIV entering a stage of clinical latency. At this stage, the infection remains within the body without causing symptoms, and the viral load gradually increases. The body continues producing anti-HIV antibodies throughout clinical latency, and the HIV infection remains detectable.
Individuals who have become HIV seropositive may benefit from seroconversion testing for comorbid infections to which they are susceptible. For example, positive seroconversion of human herpesvirus 8 is highly predictive of later development of Kaposi's sarcoma, allowing individuals who are seropositive to be aware of their risk of developing Kaposi's sarcoma and thus receive appropriate monitoring.
In COVID-19
As with other viruses, seroconversion in COVID-19 refers to the development of antibodies in the blood serum against COVID-19 antigens. An individual is seropositive, or has seroconverted for COVID-19, once standard techniques are able to detect COVID-19 antibodies in the blood. Seroconversion testing is primarily used to detect individuals who have been infected with COVID-19 in the past who have already resolved their infections. Due to the time delay of seroconversion compared to viral load, seroconversion is not sufficiently timely to diagnose a current case of COVID-19. However, seroconversion may be helpful for individuals with suspected infections who are negative by RT-PCR testing for viral load.
Not all people who are infected with SARS-CoV-2 become seropositive. Conversely, some individuals can become seropositive without ever experiencing symptoms of COVID-19 or knowing that they were exposed to COVID-19 at any point. Some asymptomatic individuals can still transmit COVID-19 to others. However, it is unclear whether all asymptomatic individuals who seroconvert to COVID-19 had transmissibility at any point (active infection), or whether an individual can seroconvert to COVID-19 without undergoing a period during which they can infect others.
Most standard assays for COVID-19 seroconversion test for antibodies against the COVID-19 specific spike protein (S) and the COVID-19 specific nucleoprotein (N). Concentrations of antibodies develop after several days and reach their maximal value approximately two to three weeks after infection. Some individuals have detectable levels of both IgG and IgM as early as within the first week after symptoms begin. Although viral infections typically have a rise in IgM that precedes a rise in IgG, some individuals infected with COVID-19 have both IgM and IgG responses at approximately the same time. After initial seroconversion for either IgM or both IgG and IgM, concentrations continue to rise and peak within one week after antibodies first become detectable. Concentrations of IgM tend to fall within three weeks after symptoms first begin, regardless of resolution of the COVID-19 infection. Levels of IgG plateau and remain high for at least six to seven months after the resolution of the infection in most individuals. The length of time that anti-spike IgG remains high varies greatly between different individuals. Older individuals and individuals with less robust immune systems tend to serorevert within a shorter period of time.
Becoming seropositive for COVID-19 antibodies can occur due to either infection with COVID-19 itself or due to becoming vaccinated to COVID-19. Being seropositive for COVID-19 does not intrinsically confer immunity or even resistance. However, higher rates of seroconversion are linked to greater clinical efficacy of vaccines. This suggests that for most individuals, seroconversion does lead to resistance. Studies of the available COVID-19 vaccines have indicated that vaccination causes a stronger seroconversion with a heightened peak concentration of IgG antibodies, as well as a longer plateau of resistance compared to seroconversion from a natural infection of COVID-19. The timeline of seroconversion is similar between seroconversion from infection and seroconversion from vaccines, with antibodies first becoming detectable within approximately two to three weeks. Younger individuals tend to have more robust responses to vaccinations compared to older individuals. The difference in the robustness of the response increases with the second dose. Younger individuals tend to have much higher and more sustained peaks of anti-spike IgG antibodies following the second dose. Many otherwise ill individuals, such as those with cancer or chronic liver disease, still exhibit similar rates of seroconversion to the general population. On the other hand, individuals with weakened immune systems, such as due to immunosuppressive medications or leukemia, can exhibit decreased rates of seroconversion for currently available vaccines. The different vaccines currently utilized do not appear to have significant differences in seroconversion rates when compared in similar population groups.
Seroconversion does not necessarily occur at the same rate to all COVID-19 antigens. Individuals who seroconvert more rapidly to different antigens may have different disease courses. Individuals infected with COVID-19 who developed primarily anti-spike antibodies rather than anti-nucleocapsid antibodies are less likely to have a severe disease course. Studies suggest that anti-spike antibodies confer greater resistance to COVID-19 than anti-nucleocapsid antibodies. A higher ratio of anti-spike antibodies to anti-nucleocapsid antibodies thus serves as a predictor of disease course and patient mortality. As a result, currently available vaccines target the production of anti-spike antibodies rather than anti-nucleocapsid antibodies.
Not all individuals who are infected with COVID-19 seroconvert, including individuals who otherwise fully recover from COVID-19. This could suggest that the individuals are developing antibodies that standard techniques do not cover, that individuals can recover with extremely low levels of antibodies not detectable by standard techniques, or that individuals do not need antibodies against COVID-19 in order to recover. Individuals who recover from COVID-19 but never seroconvert tend to have lower viral loads and be of younger age than individuals who do seroconvert. This may indicate that individuals who have experienced less severe COVID-19 infections are less likely to trigger full responses from their immune systems and that these individuals manage to clear the infection despite not producing sufficient quantities of antibodies or any specific antibodies against COVID-19 at all. Significantly older patients of greater than eighty years old are more likely to have higher quantities of IgG antibodies compared to younger patients at the time of infection. This is consistent with the fact that older patients tend to have more severe COVID-19 infections and thus have higher viral loads compared to younger patients. However, this increased antibody load tends to decrease after about three months post-recovery, compared to the six to seven months observed in the general population. This implies that the resistance may not last long-term in older individuals, leaving them susceptible to subsequent COVID-19 infections. Some studies have disputed the link between concentrations of antibodies of either IgM or IgG and the severity of the disease course.
Several studies have demonstrated that individuals who recovered from COVID-19 infections and are seropositive for COVID-19 at the time of vaccination produce significantly more anti-spike IgG antibodies in response to vaccination than individuals who are not seropositive for COVID-19, while individuals who have recovered from COVID-19 infections but never seroconverted and are seronegative respond similarly to individuals who have never been exposed to COVID-19. Specifically, individuals who are seropositive for COVID-19 at the time of their first dose of vaccination have a response similar to the general population's response to the second dose, due to this increased concentration of IgG antibodies. Some individuals who have recovered from COVID-19 may decline vaccination due to the belief that their recovery from infection has a protective effect. Nevertheless, the lack of seroconversion for all former infectees indicates that recovery from infection alone does not guarantee resistance to COVID-19. Even for individuals who seroconverted, seropositivity is at best only as protective as a single dose of vaccine, as opposed to the more robust protection of both doses of the vaccine and subsequent boosters. Therefore, those who have recovered from COVID-19, regardless of seropositivity, are still advised by health bodies such as the CDC to seek vaccination to prevent future reinfection and to limit future potential spread of COVID-19.
In hepatitis B
Seroconversion plays a major role in the diagnosis and treatment of hepatitis B infections. As in other viral infections, seropositivity indicates that an individual has a sufficiently high concentration of antibody or antigen in the blood to be detectable by standard techniques. While assays for other infections such as COVID-19 and HIV primarily test for seroconversion of antibodies against antigens, assays for HBV also test for antigens. The standard serology panel for seroconversion includes hepatitis B surface antigen, hepatitis B surface antibody for IgM and IgG, hepatitis B core antibody for IgM and IgG, and hepatitis B e-antigen.
In the typical disease course for hepatitis B, the individual will first seroconvert for hepatitis B surface antigen (HBsAg). While some can convert within one week, most individuals take about four weeks after initial infection to convert. Anti-core antibodies (anti-HBc) are the first antibodies produced by the body, first in short-term IgM (anti-HBc IgM), and subsequently in long-term IgG; while levels of IgM anti-HBc will peak around sixteen weeks after exposure and fall within about seven to eight months, IgG anti-HBc will remain detectable in the serum as a sign of chronic infection for years. IgM anti-HBc concentration will fall regardless of whether or not the individual clears the infection. The window period for HBsAg/anti-HBs testing occurs as the concentration of HBsAg falls and before the body develops anti-HBs antibodies, lasting approximately six to eight weeks in most individuals. During this time, serology assays can test for total anti-HBc. Levels of anti-surface antibody (anti-HBs) generally become detectable after thirty-two weeks and peak around thirty-six to forty weeks; the production of anti-HBs antibodies indicates imminent resolution of the HBV infection. Anti-HBs concentration falls as the infection resolves but does not serorevert completely, and anti-HBs IgG remains positive for years as a sign of immunity.
Hepatitis B e-antigen (HBeAg) is a sign of current infectivity. An individual who is seropositive for HBeAg can infect others. An individual who is infected with HBV and who never becomes seropositive for HBeAg can likewise be infective, because not all HBV infections produce HBeAg. Most individuals who seroconvert to HBeAg-positive during their disease course and subsequently serorevert to negative as their infection progresses are no longer infective. Seroreversion from HBeAg is thus used as one marker of resolution of infection.
On a serological assay, the presence of hepatitis B surface antigen (HBsAg) indicates an individual with a currently active hepatitis B infection, whether acute or chronic. The presence of core antibody (anti-HBc) indicates an individual with an infection in general, whether current or previously resolved. The presence of surface antibody (anti-HBs) indicates an individual with immunity to hepatitis B, whether due to previously resolved infection or due to hepatitis B vaccination. For example, an individual who has never had any exposure to HBV, either by vaccine or by infection, would test negative for the entire serology panel. An individual who has been vaccinated and never had an infection will test seropositive for anti-HBs due to vaccination and negative for markers of infection. An individual with an acute HBV infection would test positive for HBsAg and anti-HBc (total and IgM) while negative for anti-HBs. An individual with a chronic infection would test positive for HBsAg and total anti-HBc, but negative for IgM anti-HBc and anti-HBs. An individual who has successfully resolved their HBV infection will test negative for HBsAg, positive for anti-HBc, and may test negative or positive for anti-HBs, although most will test positive.
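The panel interpretations above can be summarized as a lookup from marker combinations to readings. The following Python sketch is a simplified illustration of that mapping; the keys and labels are condensed from the paragraph above, and real panels require clinical context.

```python
# Keys are (HBsAg, total anti-HBc, IgM anti-HBc, anti-HBs), each True/False.
PATTERNS = {
    (False, False, False, False): "Never exposed (no vaccine, no infection)",
    (False, False, False, True):  "Immune due to vaccination",
    (True,  True,  True,  False): "Acute HBV infection",
    (True,  True,  False, False): "Chronic HBV infection",
    (False, True,  False, True):  "Resolved infection with immunity",
    (False, True,  False, False): "Resolved infection (anti-HBs undetectable)",
}

def interpret(hbsag, anti_hbc_total, anti_hbc_igm, anti_hbs):
    # Fall back to an indeterminate reading for unlisted combinations.
    return PATTERNS.get((hbsag, anti_hbc_total, anti_hbc_igm, anti_hbs),
                        "Indeterminate; repeat testing or further work-up")

print(interpret(True, True, False, False))  # -> "Chronic HBV infection"
```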
Some studies have suggested that a significant minority across all population cohorts fails to seroconvert after the standard three-dose series. For these individuals, a booster is recommended. Other studies have indicated that even for those who seroconvert, the immunity conferred may decrease over time, and boosters are also recommended for immunocompromised individuals after five years. However, those who are immunocompetent may forego testing or boosters after the five-year period. Individuals who receive vaccination for HBV should undergo serology testing to confirm seroconversion following the initial vaccine series as well as any boosters. Those who are persistent non-responders to the booster series are unlikely to benefit from additional boosters and should instead be cautioned on prevention.
See also
Correlates of immunity
HIV superinfection
References
Serology
HIV vaccine research
COVID-19 vaccines
Hepatitis vaccines | Seroconversion | [
"Chemistry"
] | 5,928 | [
"HIV vaccine research",
"Drug discovery"
] |
604,798 | https://en.wikipedia.org/wiki/Joule%20heating | Joule heating (also known as resistive heating, resistance heating, or Ohmic heating) is the process by which the passage of an electric current through a conductor produces heat.
Joule's first law (also just Joule's law), also known in countries of the former USSR as the Joule–Lenz law, states that the power of heating generated by an electrical conductor equals the product of its resistance and the square of the current. Joule heating affects the whole electric conductor, unlike the Peltier effect which transfers heat from one electrical junction to another.
Joule-heating or resistive-heating is used in many devices and industrial processes. The part that converts electricity into heat is called a heating element.
Among the applications are:
Buildings are often heated with electric heaters where grid power is available.
Electric stoves and ovens use Joule heating to cook food.
Soldering irons generate heat to melt conductive solder and make electrical connections.
Cartridge heaters are used in various manufacturing processes.
Electric fuses are used as a safety device, breaking a circuit by melting if enough current flows to heat them to the melting point.
Electronic cigarettes vaporize liquid by Joule heating.
Some food processing equipment may make use of Joule heating: running a current through food material (which behaves as an electrical resistor) causes heat release inside the food. The alternating electrical current coupled with the resistance of the food causes the generation of heat. A higher resistance increases the heat generated. Ohmic heating allows for fast and uniform heating of food products, which maintains quality. Products with particulates heat up faster (compared to conventional heat processing) due to higher resistance.
History
In December 1840, James Prescott Joule published an abstract in the Proceedings of the Royal Society, suggesting that heat could be generated by an electrical current. Joule immersed a length of wire in a fixed mass of water and measured the temperature rise due to a known current flowing through the wire for a 30-minute period. By varying the current and the length of the wire he deduced that the heat produced was proportional to the square of the current multiplied by the electrical resistance of the immersed wire.
In 1841 and 1842, subsequent experiments showed that the amount of heat generated was proportional to the chemical energy used in the voltaic pile that generated the current. This led Joule to reject the caloric theory (at that time the dominant theory) in favor of the mechanical theory of heat (according to which heat is another form of energy).
Resistive heating was independently studied by Heinrich Lenz in 1842.
The SI unit of energy was subsequently named the joule and given the symbol J. The commonly known unit of power, the watt, is equivalent to one joule per second.
Microscopic description
Joule heating is caused by interactions between charge carriers (usually electrons) and the body of the conductor.
A potential difference (voltage) between two points of a conductor creates an electric field that accelerates charge carriers in the direction of the electric field, giving them kinetic energy. When the charged particles collide with the quasi-particles in the conductor (i.e. the canonically quantized, ionic lattice oscillations in the harmonic approximation of a crystal), energy is being transferred from the electrons to the lattice (by the creation of further lattice oscillations). The oscillations of the ions are the origin of the radiation ("thermal energy") that one measures in a typical experiment.
Power loss and noise
Joule heating is referred to as ohmic heating or resistive heating because of its relationship to Ohm's Law. It forms the basis for the large number of practical applications involving electric heating. However, in applications where heating is an unwanted by-product of current use (e.g., load losses in electrical transformers) the diversion of energy is often referred to as resistive loss. The use of high voltages in electric power transmission systems is specifically designed to reduce such losses in cabling by operating with commensurately lower currents. The ring circuits, or ring mains, used in UK homes are another example, where power is delivered to outlets at lower currents (per wire, by using two paths in parallel), thus reducing Joule heating in the wires. Joule heating does not occur in superconducting materials, as these materials have zero electrical resistance in the superconducting state.
Resistors create electrical noise, called Johnson–Nyquist noise. There is an intimate relationship between Johnson–Nyquist noise and Joule heating, explained by the fluctuation-dissipation theorem.
Formulas
Direct current
The most fundamental formula for Joule heating is the generalized power equation:
$$P = IV$$
where
$P$ is the power (energy per unit time) converted from electrical energy to thermal energy,
$I$ is the current travelling through the resistor or other element,
$V$ is the voltage drop across the element.
The explanation of this formula ($P = IV$) is that the energy dissipated per unit time equals the energy lost per unit of charge ($V$) multiplied by the charge flowing per unit time ($I$).
Assuming the element behaves as a perfect resistor and that the power is completely converted into heat, the formula can be re-written by substituting Ohm's law, $V = IR$, into the generalized power equation:
$$P = IV = I^2 R = V^2/R$$
where $R$ is the resistance.
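A quick numeric check of the equivalence $P = IV = I^2R = V^2/R$ in Python; the current and resistance values are arbitrary illustrative inputs:

```python
I = 2.0    # current through the element, amperes
R = 10.0   # resistance, ohms
V = I * R  # Ohm's law gives the voltage drop, volts

p_iv  = I * V      # generalized power equation
p_i2r = I**2 * R   # substitution using V = IR
p_v2r = V**2 / R   # equivalent form

assert abs(p_iv - p_i2r) < 1e-12 and abs(p_iv - p_v2r) < 1e-12
print(p_iv)        # 40.0 W converted to heat
```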
Voltage can be increased in DC circuits by connecting batteries or solar panels in series.
Alternating current
When current varies, as it does in AC circuits,
$$P(t) = I(t)\,V(t)$$
where $t$ is time and $P$ is the instantaneous active power being converted from electrical energy to heat. Far more often, the average power is of more interest than the instantaneous power:
$$P_{avg} = V_{rms}\,I_{rms} = I_{rms}^2 R = V_{rms}^2 / R$$
where "avg" denotes average (mean) over one or more cycles, and "rms" denotes root mean square.
These formulas are valid for an ideal resistor, with zero reactance. If the reactance is nonzero, the formulas are modified:
$$P_{avg} = V_{rms}\,I_{rms}\cos\varphi = I_{rms}^2\,\operatorname{Re}(Z) = V_{rms}^2\,\operatorname{Re}(Y^*)$$
where $\varphi$ is the phase difference between current and voltage, $\operatorname{Re}$ means real part, $Z$ is the complex impedance, and $Y^*$ is the complex conjugate of the admittance (equal to $1/Z^*$).
For more details in the reactive case, see AC power.
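As a sanity check on the rms and phase formulas above, the following sketch (using NumPy, with arbitrary illustrative source and impedance values) compares the numerically averaged instantaneous power over one cycle against $V_{rms} I_{rms} \cos\varphi$:

```python
import numpy as np

V_peak, f, Z = 10.0, 50.0, 4.0 + 3.0j        # volts, hertz, ohms (illustrative)
I_peak = V_peak / abs(Z)                     # peak current, amperes
phi = np.angle(Z)                            # phase of voltage relative to current

t = np.linspace(0.0, 1.0 / f, 10_000)        # one cycle
v = V_peak * np.cos(2 * np.pi * f * t)
i = I_peak * np.cos(2 * np.pi * f * t - phi)

p_avg_numeric = np.mean(v * i)               # mean of instantaneous power
v_rms, i_rms = V_peak / np.sqrt(2), I_peak / np.sqrt(2)
p_avg_formula = v_rms * i_rms * np.cos(phi)  # P_avg = V_rms * I_rms * cos(phi)

assert np.isclose(p_avg_numeric, p_avg_formula, rtol=1e-3)
print(p_avg_formula)                         # 8.0 W dissipated in Re(Z)
```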
Differential form
Joule heating can also be calculated at a particular location in space. The differential form of the Joule heating equation gives the power per unit volume:
$$\frac{dP}{dV} = \mathbf{J} \cdot \mathbf{E}$$
Here, $\mathbf{J}$ is the current density, and $\mathbf{E}$ is the electric field. For a material with a conductivity $\sigma$, $\mathbf{J} = \sigma \mathbf{E}$ and therefore
$$\frac{dP}{dV} = \mathbf{J} \cdot \mathbf{E} = \sigma |\mathbf{E}|^2 = \rho |\mathbf{J}|^2$$
where $\rho = 1/\sigma$ is the resistivity. This directly resembles the "$I^2 R$" term of the macroscopic form.
In the harmonic case, where all field quantities vary with the angular frequency $\omega$ as $e^{i\omega t}$, complex valued phasors $\hat{\mathbf{J}}$ and $\hat{\mathbf{E}}$ are usually introduced for the current density and the electric field intensity, respectively. The Joule heating then reads
$$\left\langle \frac{dP}{dV} \right\rangle = \frac{1}{2} \operatorname{Re}\!\left( \hat{\mathbf{J}} \cdot \hat{\mathbf{E}}^* \right)$$
where $^*$ denotes the complex conjugate.
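A minimal numeric sketch of the differential form, evaluating $\sigma|\mathbf{E}|^2$ for a DC field and the time-averaged phasor expression for a harmonic field; the field values are arbitrary, and copper's conductivity is used only for concreteness:

```python
import numpy as np

sigma = 5.96e7                      # conductivity of copper, S/m
E = np.array([0.01, 0.0, 0.0])      # DC electric field, V/m (illustrative)
J = sigma * E                       # current density, A/m^2
print(J @ E)                        # W/m^3, equals sigma * |E|**2

E_ph = np.array([0.01 + 0.005j, 0.0, 0.0])   # harmonic-case phasor field
J_ph = sigma * E_ph
q_avg = 0.5 * np.real(J_ph @ np.conj(E_ph))  # time-averaged heating, W/m^3
print(q_avg)
```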
Electricity transmission
Overhead power lines transfer electrical energy from electricity producers to consumers. Those power lines have a nonzero resistance and therefore are subject to Joule heating, which causes transmission losses.
The split of power between transmission losses (Joule heating in transmission lines) and load (useful energy delivered to the consumer) can be approximated by a voltage divider. In order to minimize transmission losses, the resistance of the lines has to be as small as possible compared to the load (resistance of consumer appliances). Line resistance is minimized by the use of copper conductors, but the resistance and power supply specifications of consumer appliances are fixed.
Usually, a transformer is placed between the lines and consumption. When a high-voltage, low-intensity current in the primary circuit (before the transformer) is converted into a low-voltage, high-intensity current in the secondary circuit (after the transformer), the equivalent resistance of the secondary circuit becomes higher and transmission losses are reduced in proportion.
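The effect can be illustrated with a back-of-the-envelope calculation: for a fixed delivered power, line current scales as $1/V$, so Joule losses in the line scale as $1/V^2$. The numbers below are purely illustrative:

```python
P_load = 100e3   # power drawn by consumers, watts
R_line = 2.0     # total line resistance, ohms

for V in (2.4e3, 240e3):          # low vs high transmission voltage
    I = P_load / V                # current needed to deliver P_load
    P_loss = I**2 * R_line        # Joule heating in the line
    print(f"{V/1e3:6.1f} kV: loss {P_loss:10.2f} W "
          f"({100 * P_loss / P_load:.4f}% of load)")
```

Raising the transmission voltage by a factor of 100 cuts the line's Joule losses by a factor of 10,000 in this idealized model.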
During the war of currents, AC installations could use transformers to reduce line losses by Joule heating, at the cost of higher voltage in the transmission lines, compared to DC installations.
Applications
Food processing
Joule heating is a flash pasteurization (also called "high-temperature short-time" (HTST)) aseptic process that runs an alternating current of 50–60 Hz through food. Heat is generated through the food's electrical resistance. As the product heats, electrical conductivity increases linearly. A higher electrical current frequency is best as it reduces oxidation and metallic contamination. This heating method is best for foods that contain particulates suspended in a weak salt-containing medium due to their high resistance properties.
Heat is generated rapidly and uniformly in the liquid matrix as well as in particulates, producing a higher quality sterile product that is suitable for aseptic processing.
Electrical energy is linearly translated to thermal energy as electrical conductivity increases, and this is the key process parameter that affects heating uniformity and heating rate. Ohmic heating is beneficial due to its ability to inactivate microorganisms through thermal and non-thermal cellular damage.
This method can also inactivate antinutritional factors thereby maintaining nutritional and sensory properties. However, ohmic heating is limited by viscosity, electrical conductivity, and fouling deposits. Although ohmic heating has not yet been approved by the Food and Drug Administration (FDA) for commercial use, this method has many potential applications, ranging from cooking to fermentation.
There are different configurations for continuous ohmic heating systems, but in the most basic process, a power supply or generator is needed to produce electrical current. Electrodes, in direct contact with food, pass electric current through the matrix. The distance between the electrodes can be adjusted to achieve the optimum electrical field strength.
The generator creates the electrical current which flows to the first electrode and passes through the food product placed in the electrode gap. The food product resists the flow of current, causing internal heating. The current continues to flow to the second electrode and back to the power source to close the circuit. The insulator caps around the electrodes control the environment within the system.
The electrical field strength and the residence time are the key process parameters which affect heat generation.
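Under the simplifying assumptions of a uniform field and negligible heat loss, the volumetric heating rate $\sigma E^2$ implies a temperature rise rate of $\sigma E^2 / (\rho c_p)$. The following sketch links field strength and residence time to temperature rise using illustrative, not measured, food properties:

```python
sigma = 0.5      # electrical conductivity of the food, S/m (illustrative)
E = 3000.0       # electric field strength, V/m (e.g., 300 V over a 10 cm gap)
rho = 1000.0     # density, kg/m^3
c_p = 3800.0     # specific heat, J/(kg K)

heating_rate = sigma * E**2 / (rho * c_p)      # K per second
residence = 30.0                               # residence time, s
print(heating_rate, heating_rate * residence)  # ~1.2 K/s, ~35 K rise
```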
The ideal foods for ohmic heating are viscous with particulates.
Thick soups
Sauces
Stews
Salsa
Fruit in a syrup medium
Milk
Ice cream mix
Egg
Whey
Heat sensitive liquids
Soymilk
The efficiency by which electricity is converted to heat depends upon salt, water, and fat content due to their thermal conductivity and resistance factors. In particulate foods, the particles heat up faster than the liquid matrix due to their higher resistance to electricity, and matching conductivities can contribute to uniform heating. This prevents overheating of the liquid matrix while particles receive sufficient heat processing. Table 1 shows the electrical conductivity values of certain foods to display the effect of composition and salt concentration. The high electrical conductivity values represent a larger number of ionic compounds suspended in the product, which is directly proportional to the rate of heating. This value is increased in the presence of polar compounds, like acids and salts, but decreased with nonpolar compounds, like fats. Electrical conductivity of food materials generally increases with temperature, and can change if there are structural changes caused during heating such as gelatinization of starch. Density, pH, and specific heat of various components in a food matrix can also influence heating rate.
Benefits of ohmic heating include: uniform and rapid heating (>1 °C/s), shorter cooking time, better energy efficiency, lower capital cost, and heating simultaneously throughout the food's volume, as compared to aseptic processing, canning, and PEF. Volumetric heating allows internal heating instead of transferring heat from a secondary medium. This results in the production of safe, high-quality food with minimal changes to the structural, nutritional, and organoleptic properties of the food. Heat transfer is uniform to reach areas of food that are harder to heat. Less fouling accumulates on the electrodes as compared to other heating methods. Ohmic heating also requires less cleaning and maintenance, resulting in an environmentally conscious heating method.
Microbial inactivation in ohmic heating is achieved by both thermal and non-thermal cellular damage from the electrical field. This method destroys microorganisms due to electroporation of cell membranes, physical membrane rupture, and cell lysis. In electroporation, excessive leakage of ions and intramolecular components results in cell death. In membrane rupture, cells swell due to an increase in moisture diffusion across the cell membrane. Pronounced disruption and decomposition of cell walls and cytoplasmic membranes causes cells to lyse.
Decreased processing times in ohmic heating maintain the nutritional and sensory properties of foods. Ohmic heating inactivates antinutritional factors like lipoxygenase (LOX), polyphenoloxidase (PPO), and pectinase due to the removal of active metallic groups in enzymes by the electrical field. Similar to other heating methods, ohmic heating causes gelatinization of starches, melting of fats, and protein agglutination. Water-soluble nutrients are maintained in the suspension liquid, allowing for no loss of nutritional value if the liquid is consumed.
Ohmic heating is limited by viscosity, electrical conductivity, and fouling deposits. The density of particles within the suspension liquid can limit the degree of processing. A higher viscosity fluid will provide more resistance to heating, allowing the mixture to heat up quicker than low viscosity products. A food product's electrical conductivity is a function of temperature, frequency, and product composition. This may be increased by adding ionic compounds, or decreased by adding non-polar constituents. Changes in electrical conductivity limit ohmic heating as it is difficult to model the thermal process when temperature increases in multi-component foods.
The potential applications of ohmic heating include cooking, thawing, blanching, peeling, evaporation, extraction, dehydration, and fermentation. These allow ohmic heating to pasteurize particulate foods for hot filling, pre-heat products prior to canning, and aseptically process ready-to-eat meals and refrigerated foods. Prospective examples are outlined in Table 2, as this food processing method has not been commercially approved by the FDA. Since there is currently insufficient data on electrical conductivities for solid foods, it is difficult to prove the high quality and safe process design for ohmic heating. Additionally, a successful 12D reduction for C. botulinum prevention has yet to be validated.
Materials synthesis, recovery and processing
Flash joule heating (transient high-temperature electrothermal heating) has been used to synthesize allotropes of carbon, including graphene and diamond. Heating various solid carbon feedstocks (carbon black, coal, coffee grounds, etc.) to temperatures of ~3000 K for 10-150 milliseconds produces turbostratic graphene flakes. FJH has also been used to recover rare-earth elements used in modern electronics from industrial wastes. Beginning from a fluorinated carbon source, fluorinated activated carbon, fluorinated nanodiamond, concentric carbon (carbon shell around a nanodiamond core), and fluorinated flash graphene can be synthesized.
Heating efficiency
Heat is not to be confused with internal energy or synonymously thermal energy. While intimately connected to heat, they are distinct physical quantities.
As a heating technology, Joule heating has a coefficient of performance of 1.0, meaning that every joule of electrical energy supplied produces one joule of heat. In contrast, a heat pump can have a coefficient of more than 1.0 since it moves additional thermal energy from the environment to the heated item.
The definition of the efficiency of a heating process requires defining the boundaries of the system to be considered. When heating a building, the overall efficiency is different when considering heating effect per unit of electric energy delivered on the customer's side of the meter, compared to the overall efficiency when also considering the losses in the power plant and transmission of power.
Hydraulic equivalent
In the energy balance of groundwater flow a hydraulic equivalent of Joule's law is used:
$$E_{fr} = \frac{V_x^2}{K_x}$$
where:
$E_{fr}$ = loss of hydraulic energy due to friction of flow in the $x$-direction per unit of time (m/day), comparable to $P$
$V_x$ = flow velocity in the $x$-direction (m/day), comparable to $I$
$K_x$ = hydraulic conductivity of the soil (m/day); the hydraulic conductivity is inversely proportional to the hydraulic resistance, which compares to $R$
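A one-line numeric illustration of the analogue, with illustrative soil values:

```python
V_x = 0.5              # flow velocity in the x-direction, m/day
K_x = 10.0             # hydraulic conductivity, m/day

E_fr = V_x**2 / K_x    # hydraulic energy loss to friction, m/day
print(E_fr)            # 0.025; analogous to P = I**2 * R with R ~ 1/K
```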
See also
References
Electric heating
Electricity
Thermodynamics
Heating
Food processing
Cooking techniques | Joule heating | [
"Physics",
"Chemistry",
"Mathematics"
] | 3,371 | [
"Thermodynamics",
"Dynamical systems"
] |
604,896 | https://en.wikipedia.org/wiki/Life-cycle%20assessment | Life cycle assessment (LCA), also known as life cycle analysis, is a methodology for assessing environmental impacts associated with all the stages of the life cycle of a commercial product, process, or service. For instance, in the case of a manufactured product, environmental impacts are assessed from raw material extraction and processing (cradle), through the product's manufacture, distribution and use, to the recycling or final disposal of the materials composing it (grave).
An LCA study involves a thorough inventory of the energy and materials that are required across the supply chain and value chain of a product, process or service, and calculates the corresponding emissions to the environment. LCA thus assesses cumulative potential environmental impacts. The aim is to document and improve the overall environmental profile of the product by serving as a holistic baseline upon which carbon footprints can be accurately compared.
The LCA method is based on ISO 14040 (2006) and ISO 14044 (2006) standards. Widely recognized procedures for conducting LCAs are included in the 14000 series of environmental management standards of the International Organization for Standardization (ISO), in particular, in ISO 14040 and ISO 14044. ISO 14040 provides the 'principles and framework' of the Standard, while ISO 14044 provides an outline of the 'requirements and guidelines'. Generally, ISO 14040 was written for a managerial audience and ISO 14044 for practitioners. As part of the introductory section of ISO 14040, LCA has been defined as the following:LCA studies the environmental aspects and potential impacts throughout a product's life cycle (i.e., cradle-to-grave) from raw materials acquisition through production, use and disposal. The general categories of environmental impacts needing consideration include resource use, human health, and ecological consequences.Criticisms have been leveled against the LCA approach, both in general and with regard to specific cases (e.g., in the consistency of the methodology, the difficulty in performing, the cost in performing, revealing of intellectual property, and the understanding of system boundaries). When the understood methodology of performing an LCA is not followed, it can be completed based on a practitioner's views or the economic and political incentives of the sponsoring entity (an issue plaguing all known data-gathering practices). In turn, an LCA completed by 10 different parties could yield 10 different results. The ISO LCA Standard aims to normalize this; however, the guidelines are not overly restrictive and 10 different answers may still be generated.
Definition, synonyms, goals, and purpose
Life cycle assessment (LCA) is sometimes referred to synonymously as life cycle analysis in the scholarly and agency report literatures. Also, due to the general nature of an LCA study of examining the life cycle impacts from raw material extraction (cradle) through disposal (grave), it is sometimes referred to as "cradle-to-grave analysis".
As stated by the National Risk Management Research Laboratory of the EPA, "LCA is a technique to assess the environmental aspects and potential impacts associated with a product, process, or service, by:
Compiling an inventory of relevant energy and material inputs and environmental releases
Evaluating the potential environmental impacts associated with identified inputs and releases
Interpreting the results to help you make a more informed decision".
Hence, it is a technique to assess environmental impacts associated with all the stages of a product's life from raw material extraction through materials processing, manufacture, distribution, use, repair and maintenance, and disposal or recycling. The results are used to help decision-makers select products or processes that result in the least impact to the environment by considering an entire product system and avoiding sub-optimization that could occur if only a single process were used.
Therefore, the goal of LCA is to compare the full range of environmental effects assignable to products and services by quantifying all inputs and outputs of material flows and assessing how these material flows affect the environment. This information is used to improve processes, support policy and provide a sound basis for informed decisions.
The term life cycle refers to the notion that a fair, holistic assessment requires the assessment of raw-material production, manufacture, distribution, use and disposal including all intervening transportation steps necessary or caused by the product's existence.
Despite attempts to standardize LCA, results from different LCAs are often contradictory, therefore it is unrealistic to expect these results to be unique and objective. Thus, it should not be considered as such, but rather as a family of methods attempting to quantify results through different points-of-view. Among these methods are two main types: Attributional LCA and Consequential LCA. Attributional LCAs seek to attribute the burdens associated with the production and use of a product, or with a specific service or process, for an identified temporal period. Consequential LCAs seek to identify the environmental consequences of a decision or a proposed change in a system under study, and thus are oriented to the future and require that market and economic implications must be taken into account. In other words, Attributional LCA "attempts to answer 'how are things (i.e. pollutants, resources, and exchanges among processes) flowing within the chosen temporal window?', while Consequential LCA attempts to answer 'how will flows beyond the immediate system change in response to decisions?"
A third type of LCA, termed "social LCA", is also under development and is a distinct approach that is intended to assess potential social and socio-economic implications and impacts. Social life cycle assessment (SLCA) is a useful tool for companies to identify and assess potential social impacts along the lifecycle of a product or service on various stakeholders (for example: workers, local communities, consumers). SLCA is framed by the UNEP/SETAC's Guidelines for social life cycle assessment of products published in 2009 in Quebec. The tool builds on the ISO 26000:2010 Guidelines for Social Responsibility and the Global Reporting Initiative (GRI) Guidelines.
The limitations of LCA to focus solely on the ecological aspects of sustainability, and not the economical or social aspects, distinguishes it from product line analysis (PLA) and similar methods. This limitation was made deliberately to avoid method overload but recognizes these factors should not be ignored when making product decisions.
Some widely recognized procedures for LCA are included in the ISO 14000 series of environmental management standards, in particular, ISO 14040 and 14044. Greenhouse gas (GHG) product life cycle assessments can also comply with specifications such as Publicly Available Specification (PAS) 2050 and the GHG Protocol Life Cycle Accounting and Reporting Standard.
Main ISO phases of LCA
According to standards in the ISO 14040 and 14044, an LCA is carried out in four distinct phases, as illustrated in the figure shown at the above right (at opening of the article). The phases are often interdependent, in that the results of one phase will inform how other phases are completed. Therefore, none of the stages should be considered finalized until the entire study is complete.
Goal and scope
An LCA study begins with a goal and scope definition phase, which includes the product function, functional unit, product system and its boundaries, assumptions, data categories, allocation procedures, and review method to be employed in the analysis. The ISO LCA Standard requires a series of parameters to be quantitatively and qualitatively expressed, which are occasionally referred to as study design parameters (SDPs). The two main SDPs for an LCA are the goal and scope, both of which must be explicitly stated.
Generally, an LCA study starts with a clear statement of its goal, outlining the study's context and detailing how and to whom the results will be communicated. Per ISO guidelines, the goal must unambiguously state the following items:
The intended application
Reasons for carrying out the study
The audience
Whether the results will be used in a comparative assertion released publicly
The goal should also be defined with the commissioner for the study, and it is recommended a detailed description for why the study is being carried out is acquired from the commissioner.
Following the goal, the scope must be defined by outlining the qualitative and quantitative information included in the study. Unlike the goal, which may only include a few sentences, the scope often requires multiple pages. It is set to describe the detail and depth of the study and demonstrate that the goal can be achieved within the stated limitations. Under the ISO LCA Standard guidelines, the scope of the study should outline the following:
Product system, which is a collection of processes (activities that transform inputs to outputs) that are needed to perform a specified function and are within the system boundary of the study. It is representative of all the processes in the life cycle of a product or process.
Functional unit, which defines precisely what is being studied, quantifies the service delivered by the system, provides a reference to which the inputs and outputs can be related, and provides a basis for comparing/analyzing alternative goods or services. The functional unit is a very important component of LCA and needs to be clearly defined. It is used as a basis for selecting one or more product systems that can provide the function. Therefore, the functional unit enables different systems to be treated as functionally equivalent. The defined functional unit should be quantifiable, include units, consider temporal coverage, and not contain product system inputs and outputs (e.g., kg emissions). Another way to look at it is by considering the following questions:
What?
How much?
For how long / how many times?
Where?
How well?
Reference flow, which is the amount of product or energy that is needed to realize the functional unit. Typically, the reference flow differs qualitatively and quantitatively for different products or systems fulfilling the same functional unit; however, there are instances where they can be the same.
System boundary, which delimits which processes should be included in the analysis of a product system, including whether the system produces any co-products that must be accounted for by system expansion or allocation. The system boundary should be in accordance with the stated goal of the study.
Assumptions and limitations, which includes any assumptions or decisions made throughout the study that may influence the final results. It is important that these are communicated, as their omission may result in misinterpretation of the results. Additional assumptions and limitations necessary to accomplish the project are often made throughout the project and should be recorded as necessary.
Data quality requirements, which specify the kinds of data that will be included and what restrictions. According to ISO 14044, the following data quality considerations should be documented in the scope:
Temporal coverage
Geographical coverage
Technological coverage
Precision, completeness, and representativeness of the data
Consistency and reproducibility of the methods used in the study
Sources of data
Uncertainty of information and any recognized data gaps
Allocation procedure, which is used to partition the inputs and outputs of a product and is necessary for processes that produce multiple products, or co-products. This is also known as multifunctionality of a product system. ISO 14044 presents a hierarchy of solutions to deal with multifunctionality issues, as the choice of allocation method for co-products can significantly impact results of an LCA. The hierarchy methods are as follows:
Avoid Allocation by Sub-Division - this method attempts to disaggregate the unit process into smaller sub-processes in order to separate the production of the product from the production of the co-product.
Avoid Allocation through System Expansion (or substitution) - this method attempts to expand the process of the co-product with the most likely way of providing the secondary function of the determining product (or reference product). In other words, by expanding the system of the co-product in the most likely alternative way of producing the co-product independently (System 2). The impacts resulting from the alternative way of producing the co-product (System 2) are then subtracted from the determining product to isolate the impacts in System 1.
Allocation (or partition) based on Physical Relationship - this method attempts to divide inputs and outputs and allocate them based on physical relationships between the products (e.g., mass, energy-use, etc.).
Allocation (or partition) based on Other Relationship (non-physical) - this method attempts to divide inputs and outputs and allocate them based on non-physical relationships (e.g., economic value). A minimal numeric sketch contrasting this option with physical partitioning appears after this list.
Impact assessment, which includes an outline of the impact categories identified as being of interest for the study, and the selected methodology used to calculate the respective impacts. Specifically, life cycle inventory data is translated into environmental impact scores, which might include such categories as human toxicity, smog, global warming, and eutrophication. As part of the scope, only an overview needs to be provided, as the main analysis on the impact categories is discussed in the Life Cycle Impact Assessment (LCIA) phase of the study.
Documentation of data, which is the explicit documentation of the inputs/outputs (individual flows) used within the study. This is necessary as most analyses do not consider all inputs and outputs of a product system, so this provides the audience with a transparent representation of the selected data. It also provides transparency for why the system boundary, product system, functional unit, etc. was chosen.
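As referenced in the allocation hierarchy above, the following Python sketch contrasts partitioning by a physical relationship (mass) with partitioning by a non-physical relationship (economic value) for a process with one co-product; all quantities are illustrative:

```python
burden_kg_co2 = 1000.0                        # total emissions of the shared process

products = {                                  # mass in kg, price in $/kg (illustrative)
    "main product": {"mass": 800.0, "price": 2.0},
    "co-product":   {"mass": 200.0, "price": 6.0},
}

total_mass = sum(p["mass"] for p in products.values())
total_value = sum(p["mass"] * p["price"] for p in products.values())

for name, p in products.items():
    by_mass = burden_kg_co2 * p["mass"] / total_mass
    by_value = burden_kg_co2 * (p["mass"] * p["price"]) / total_value
    print(f"{name}: {by_mass:.0f} kg CO2 by mass, {by_value:.0f} kg CO2 by value")
```

Note how the choice of allocation key shifts the burden: the co-product carries 200 kg CO2 under mass allocation but about 429 kg CO2 under economic allocation, which is why ISO 14044 treats the choice as consequential.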
Life cycle inventory (LCI)
Life cycle inventory (LCI) analysis involves creating an inventory of flows from and to nature (ecosphere) for a product system. It is the process of quantifying raw material and energy requirements, atmospheric emissions, land emissions, water emissions, resource uses, and other releases over the life cycle of a product or process. In other words, it is the aggregation of all elementary flows related to each unit process within a product system.
To develop the inventory, it is often recommended to start with a flow model of the technical system using data on inputs and outputs of the product system. The flow model is typically illustrated with a flow diagram that includes the activities that are going to be assessed in the relevant supply chain and gives a clear picture of the technical system boundaries. Generally, the more detailed and complex the flow diagram, the more accurate the study and results. The input and output data needed for the construction of the model is collected for all activities within the system boundary, including from the supply chain (referred to as inputs from the technosphere).
According to ISO 14044, an LCI should be documented using the following steps:
Preparation of data collection based on goal and scope
Data collection
Data validation (even if using another work's data)
Data allocation (if needed)
Relating data to the unit process
Relating data to the functional unit
Data aggregation
As referenced in the ISO 14044 standard, the data must be related to the functional unit, as well as the goal and scope. However, since the LCA stages are iterative in nature, the data collection phase may cause the goal or scope to change. Conversely, a change in the goal or scope during the course of the study may cause additional collection of data or removal of previously collected data in the LCI.
The output of an LCI is a compiled inventory of elementary flows from all of the processes in the studied product system(s). The data is typically detailed in charts and requires a structured approach due to its complex nature.
When collecting the data for each process within the system boundary, the ISO LCA standard requires the study to measure or estimate the data in order to quantitatively represent each process in the product system. Ideally, when collecting data, a practitioner should aim to collect data from primary sources (e.g., measuring inputs and outputs of a process on-site or other physical means). Questionnaires are frequently used to collect data on-site and can even be issued to the respective manufacturer or company to complete. Items on the questionnaire to be recorded may include:
Product for data collection
Data collector and date
Period of data collection
Detailed explanation of the process
Inputs (raw materials, ancillary materials, energy, transportation)
Outputs (emissions to air, water, and land)
Quantity and quality of each input and output
Oftentimes, the collection of primary data may be difficult and deemed proprietary or confidential by the owner. An alternative to primary data is secondary data, which is data that comes from LCA databases, literature sources, and other past studies. With secondary sources, one often finds data that is similar to a process but not exact (e.g., data from a different country, slightly different process, similar but different machine, etc.). As such, it is important to explicitly document the differences in such data. However, secondary data is not always inferior to primary data; for example, a study may reference another work's data in which the author used very accurate primary data. Along with primary data, secondary data should document the source, reliability, and temporal, geographical, and technological representativeness.
When identifying the inputs and outputs to document for each unit process within the product system of an LCI, a practitioner may come across the instance where a process has multiple input streams or generate multiple output streams. In such case, the practitioner should allocate the flows based on the "Allocation procedure" outlined in the previous "Goal and scope" section of this article.
The technosphere is more simply defined as the human-made world; considered by geologists as secondary resources, these resources are in theory 100% recyclable, although in a practical sense the primary goal is salvage. For an LCI, these technosphere products (supply chain products) are those that have been produced by humans, including products such as forestry, materials, and energy flows. Typically, practitioners will not have access to data concerning inputs and outputs for previous production processes of the product. The entity undertaking the LCA must then turn to secondary sources if it does not already have that data from its own previous studies. National databases or data sets that come with LCA-practitioner tools, or that can be readily accessed, are the usual sources for that information. Care must then be taken to ensure that the secondary data source properly reflects regional or national conditions.
LCI methods include "process-based LCAs", economic input–output LCA (EIOLCA), and hybrid approaches. Process-based LCA is a bottom-up LCI approach that constructs an LCI using knowledge about industrial processes within the life cycle of a product, and the physical flows connecting them. EIOLCA is a top-down approach to LCI and uses information on elementary flows associated with one unit of economic activity across different sectors. This information is typically pulled from government agency national statistics tracking trade and services between sectors. Hybrid LCA is a combination of process-based LCA and EIOLCA.
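For process-based LCI, a common textbook formulation (e.g., the matrix method described by Heijungs and Suh) solves a technology matrix for process scaling factors and multiplies by an intervention matrix to obtain the inventory. The two-process system below is a toy illustration of that formulation, not data from any real study:

```python
import numpy as np

# Technology matrix A: rows = products (kWh electricity, widgets),
# columns = processes (power plant, widget factory);
# positive entries are outputs, negative entries are inputs.
A = np.array([[1.0, -2.0],     # the factory consumes 2 kWh per widget
              [0.0,  1.0]])
# Intervention matrix B: kg CO2 emitted per unit operation of each process.
B = np.array([[0.5, 0.1]])
f = np.array([0.0, 1.0])       # functional unit: deliver 1 widget

s = np.linalg.solve(A, f)      # process scaling vector: A s = f
g = B @ s                      # life cycle inventory: g = B s
print(s, g)                    # [2. 1.] [1.1] -> 1.1 kg CO2 per widget
```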
The quality of LCI data is typically evaluated with the use of a pedigree matrix. Different pedigree matrices are available, but all contain a number of data quality indicators and a set of qualitative criteria per indicator. Another hybrid approach integrates the widely used, semi-quantitative pedigree-matrix approach into a qualitative analysis to better illustrate the quality of LCI data for non-technical audiences, in particular policymakers.
Life cycle impact assessment (LCIA)
Life cycle inventory analysis is followed by a life cycle impact assessment (LCIA). This phase of LCA is aimed at evaluating the potential environmental and human health impacts resulting from the elementary flows determined in the LCI. The ISO 14040 and 14044 standards require the following mandatory steps for completing an LCIA:
Mandatory
Selection of impact categories, category indicators, and characterization models. The ISO Standard requires that a study selects multiple impacts that encompass "a comprehensive set of environmental issues". The impacts should be relevant to the geographical region of the study, and justification for each chosen impact should be discussed. Oftentimes in practice, this is completed by choosing an already existing LCIA method (e.g., TRACI, ReCiPe, AWARE, etc.).
Classification of inventory results. In this step, the LCI results are assigned to the chosen impact categories based on their known environmental effects. In practice, this is often completed using LCI databases or LCA software. Common impact categories include Global Warming, Ozone Depletion, Acidification, Human Toxicity, etc.
Characterization, which quantitatively transforms the LCI results within each impact category via "characterization factors" (also referred to as equivalency factors) to create "impact category indicators." In other words, this step is aimed at answering "how much does each result contribute to the impact category?" A main purpose of this step is to convert all classified flows for an impact into common units for comparison. For example, for Global Warming Potential, the unit is generally defined as CO2-equiv or CO2-e (CO2 equivalents) where CO2 is given a value of 1 and all other units are converted respective to their related impact.
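A minimal sketch of the characterization step follows, assuming a small classified inventory; the emitted masses are invented, and the factors are commonly cited 100-year global warming potentials used here only as an example.

```python
# Characterization sketch: convert classified LCI flows into a Global
# Warming Potential score in kg CO2-equivalents. The emitted masses are
# assumptions; the factors are commonly cited 100-year GWP values.
inventory = {"CO2": 120.0, "CH4": 0.8, "N2O": 0.05}   # kg emitted (assumed)
gwp100 = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}      # kg CO2-e per kg

impact = sum(mass * gwp100[flow] for flow, mass in inventory.items())
print(f"GWP: {impact:.2f} kg CO2-e")                  # 155.65 kg CO2-e
```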
In many LCAs, characterization concludes the LCIA analysis, as it is the last compulsory stage according to ISO 14044. However, the ISO Standard provides the following optional steps to be taken in addition to the aforementioned mandatory steps:
Optional
Normalization of results. This step aims to answer "Is that a lot?" by expressing the LCIA results in respect to a chosen reference system. A separate reference value is often chosen for each impact category, and the rationale for the step is to provide temporal and spatial perspective and to help validate the LCIA results. Standard references are typical impacts per impact category per: geographical zone, inhabitant of geographical zone (per person), industrial sector, or another product system or baseline reference scenario.
Grouping of LCIA results. This step is accomplished by sorting or ranking the LCIA results (either characterized or normalized depending on the prior steps chosen) into a single group or several groups as defined within the goal and scope. However, grouping is subjective and may be inconsistent across studies.
Weighting of impact categories. This step aims to determine the significance of each category and how important it is relative to the others. It allows studies to aggregate impact scores into a single indicator for comparison. Weighting is highly subjective, as it is often decided based on the interested parties' ethics. There are three main categories of weighting methods: the panel method, monetization method, and target method. ISO 14044 generally advises against weighting, stating that "weighting shall not be used in LCA studies intended to be used in comparative assertions intended to be disclosed to the public". If a study decides to weight results, then the weighted results should always be reported together with the non-weighted results for transparency. A numeric sketch of the normalization and weighting steps follows this list.
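The sketch below walks through normalization and weighting on two invented category scores; the reference values and weights are analyst-chosen assumptions, which is precisely why the ISO Standard treats these steps as optional and subjective.

```python
# Normalization and weighting sketch. Reference values and weights are
# illustrative assumptions; in practice they are chosen per study.
characterized = {"GWP": 155.65, "Acidification": 0.9}  # category indicator results
references    = {"GWP": 8000.0, "Acidification": 50.0} # e.g. impacts per person-year
weights       = {"GWP": 0.6, "Acidification": 0.4}     # e.g. panel-derived (assumed)

normalized = {k: v / references[k] for k, v in characterized.items()}
single_score = sum(weights[k] * normalized[k] for k in normalized)
print(normalized, round(single_score, 5))
```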
Life cycle impacts can also be categorized under the several phases of the development, production, use, and disposal of a product. Broadly speaking, these impacts can be divided into first impacts, use impacts, and end of life impacts. First impacts include extraction of raw materials, manufacturing (conversion of raw materials into a product), transportation of the product to a market or site, construction/installation, and the beginning of the use or occupancy. Use impacts include physical impacts of operating the product or facility (such as energy, water, etc.), and any maintenance, renovation, or repairs that are required to continue to use the product or facility. End of life impacts include demolition and processing of waste or recyclable materials.
Interpretation
Life cycle interpretation is a systematic technique to identify, quantify, check, and evaluate information from the results of the life cycle inventory and/or the life cycle impact assessment. The results from the inventory analysis and impact assessment are summarized during the interpretation phase. The outcome of the interpretation phase is a set of conclusions and recommendations for the study. According to ISO 14043, the interpretation should include the following:
Identification of significant issues based on the results of the LCI and LCIA phases of an LCA
Evaluation of the study considering completeness, sensitivity and consistency checks
Conclusions, limitations and recommendations
A key purpose of performing life cycle interpretation is to determine the level of confidence in the final results and communicate them in a fair, complete, and accurate manner. Interpreting the results of an LCA is not as simple as "3 is better than 2, therefore Alternative A is the best choice". Interpretation begins with understanding the accuracy of the results, and ensuring they meet the goal of the study. This is accomplished by identifying the data elements that contribute significantly to each impact category, evaluating the sensitivity of these significant data elements, assessing the completeness and consistency of the study, and drawing conclusions and recommendations based on a clear understanding of how the LCA was conducted and the results were developed.
Specifically, as voiced by M.A. Curran, the goal of the LCA interpretation phase is to identify the alternative with the least negative cradle-to-grave environmental impact on land, sea, and air resources.
LCA uses
LCA was primarily used as a comparison tool, providing information on the environmental impacts of a product and comparing it to available alternatives. Its potential applications have since expanded to include marketing, product design, product development, strategic planning, consumer education, ecolabeling, and government policy.
ISO specifies three types of classification in regard to standards and environmental labels:
Type I environmental labelling requires a third-party certification process to verify a product's compliance against a set of criteria, according to ISO 14024.
Type II environmental labels are self-declared environmental claims, according to ISO 14021.
Type III environmental declaration, also known as environmental product declaration (EPD), uses LCA as a tool to report the environmental performance of a product, while conforming to the ISO standards 14040 and 14044.
EPDs provide a level of transparency that is increasingly demanded through policies and standards around the world. They are used in the built environment as a tool for experts in the industry to compose whole-building life cycle assessments more easily, as the environmental impacts of individual products are known.
Data analysis
A life cycle analysis is only as accurate and valid as its underlying set of data. There are two fundamental types of LCA data: unit process data and environmental input–output (EIO) data. Unit process data describe a single industrial activity and its product(s), including the resources it uses from the environment and other industries, as well as the emissions it generates throughout its life cycle. EIO data are based on national economic input–output data.
In 2001, ISO published a technical specification on data documentation, describing the format for life cycle inventory data (ISO 14048). The format includes three areas: process, modeling and validation, and administrative information.
When comparing LCAs, the data used in each LCA should be of equivalent quality, since no fair comparison can be made if one product has a much higher availability of accurate and valid data than another.
Moreover, time horizon is a sensitive parameter and has been shown to introduce inadvertent bias by providing only one perspective on the outcome of an LCA, for instance when comparing the toxicity potential of petrochemicals and biopolymers. Therefore, conducting sensitivity analyses in LCA is important to determine which parameters considerably impact the results, and can also be used to identify which parameters cause uncertainties.
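A one-at-a-time perturbation is the simplest form of such a sensitivity analysis. The sketch below perturbs each input of a toy impact model by +10% and reports the relative change in the result; `model` and all parameter values are hypothetical stand-ins, not a real LCA engine.

```python
# One-at-a-time sensitivity sketch. `model` is a hypothetical toy impact
# model, not a real LCA calculation; all parameter values are assumptions.
def model(p):
    return (p["energy_use"] * p["grid_factor"]
            + p["transport_km"] * p["truck_factor"])

base = {"energy_use": 500.0, "grid_factor": 0.4,
        "transport_km": 1200.0, "truck_factor": 0.062}
base_impact = model(base)

for name in base:
    perturbed = dict(base, **{name: base[name] * 1.1})  # +10% on one input
    delta = (model(perturbed) - base_impact) / base_impact
    print(f"{name}: +10% input -> {delta:+.1%} impact")
```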
Data sources used in LCAs are typically large databases. Common data sources include:
HESTIA (University of Oxford)
soca
EuGeos' 15804-IA
NEEDS
CarbonCloud
ecoinvent
PSILCA
ESU World Food
GaBi
ELCD
LC-Inventories.ch
Social Hotspots
ProBas
bioenergiedat
Agribalyse
USDA
Ökobaudat
Agri-footprint
Comprehensive Environmental Data Archive (CEDA)
As noted above, the inventory in the LCA usually considers a number of stages including materials extraction, processing and manufacturing, product use, and product disposal. When an LCA is done on a product across all stages, the stage with the highest environmental impact can be determined and altered. For example, a woolen garment was evaluated on its environmental impacts during its production, use, and end-of-life; the study identified that the contribution of fossil fuel energy was dominated by wool processing, and GHG emissions by wool production. However, the most influential factor was the number of garment wears and the length of garment lifetime, indicating that the consumer has the largest influence on this product's overall environmental impact.
Variants
Cradle-to-grave or life cycle assessment
Cradle-to-grave is the full life cycle assessment from resource extraction ('cradle'), to manufacturing, usage, and maintenance, all the way through to its disposal phase ('grave'). For example, trees produce paper, which can be recycled into low-energy production cellulose (fiberised paper) insulation, then used as an energy-saving device in the ceiling of a home for 40 years, saving 2,000 times the fossil-fuel energy used in its production. After 40 years the cellulose fibers are replaced and the old fibers are disposed of, possibly incinerated. All inputs and outputs are considered for all the phases of the life cycle.
Cradle-to-gate
Cradle-to-gate is an assessment of a partial product life cycle from resource extraction (cradle) to the factory gate (i.e., before the product is transported to the consumer). The use phase and disposal phase of the product are omitted in this case. Cradle-to-gate assessments are sometimes the basis for environmental product declarations (EPD) termed business-to-business EPDs. One of the significant uses of the cradle-to-gate approach is compiling the life cycle inventory (LCI): a facility can take the cradle-to-gate impacts of all the resources it purchases, add the steps involved in their transport to the plant and its own manufacturing process, and thereby more easily produce cradle-to-gate values for its own products.
Cradle-to-cradle or closed loop production
Cradle-to-cradle is a specific kind of cradle-to-grave assessment, where the end-of-life disposal step for the product is a recycling process. It is a method used to minimize the environmental impact of products by employing sustainable production, operation, and disposal practices and aims to incorporate social responsibility into product development. From the recycling process originate new, identical products (e.g., asphalt pavement from discarded asphalt pavement, glass bottles from collected glass bottles), or different products (e.g., glass wool insulation from collected glass bottles).
Allocation of burden for products in open loop production systems presents considerable challenges for LCA. Various methods, such as the avoided burden approach have been proposed to deal with the issues involved.
Gate-to-gate
Gate-to-gate is a partial LCA looking at only one value-added process in the entire production chain. Gate-to-gate modules may also later be linked in their appropriate production chain to form a complete cradle-to-gate evaluation.
Well-to-wheel
Well-to-wheel (WtW) is the specific LCA used for transport fuels and vehicles. The analysis is often broken down into stages entitled "well-to-station" or "well-to-tank", and "station-to-wheel", "tank-to-wheel", or "plug-to-wheel". The first stage, which incorporates the feedstock or fuel production and processing and fuel delivery or energy transmission, is called the "upstream" stage, while the stage that deals with vehicle operation itself is sometimes called the "downstream" stage. The well-to-wheel analysis is commonly used to assess total energy consumption, or the energy conversion efficiency and emissions impact of marine vessels, aircraft and motor vehicles, including their carbon footprint, and the fuels used in each of these transport modes. WtW analysis is useful for reflecting the different efficiencies and emissions of energy technologies and fuels at both the upstream and downstream stages, giving a more complete picture of real emissions.
The well-to-wheel variant provides significant input to a model developed by the Argonne National Laboratory. The Greenhouse gases, Regulated Emissions, and Energy use in Transportation (GREET) model was developed to evaluate the impacts of new fuels and vehicle technologies. The model evaluates the impacts of fuel use using a well-to-wheel evaluation, while a traditional cradle-to-grave approach is used to determine the impacts of the vehicle itself. The model reports energy use, greenhouse gas emissions, and six additional pollutants: volatile organic compounds (VOCs), carbon monoxide (CO), nitrogen oxides (NOx), particulate matter with size smaller than 10 micrometers (PM10), particulate matter with size smaller than 2.5 micrometers (PM2.5), and sulfur oxides (SOx).
Quantitative values of greenhouse gas emissions calculated with the WTW or with the LCA method can differ, since the LCA considers more emission sources. For example, when assessing the GHG emissions of a battery electric vehicle in comparison with a conventional internal combustion engine vehicle, the WTW method (accounting only for the GHG emitted in manufacturing the fuels) concludes that an electric vehicle can save around 50–60% of GHG emissions. Using a hybrid LCA-WTW method, on the other hand, GHG emission savings are 10–13% lower than the WTW results, as the GHG emissions due to the manufacturing and end of life of the battery are also considered.
Economic input–output life cycle assessment
Economic input–output LCA (EIOLCA) involves use of aggregate sector-level data on how much environmental impact can be attributed to each sector of the economy and how much each sector purchases from other sectors. Such analysis can account for long chains (for example, building an automobile requires energy, but producing energy requires vehicles, and building those vehicles requires energy, etc.), which somewhat alleviates the scoping problem of process LCA; however, EIOLCA relies on sector-level averages that may or may not be representative of the specific subset of the sector relevant to a particular product and therefore is not suitable for evaluating the environmental impacts of products. Additionally, the translation of economic quantities into environmental impacts is not validated.
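The core EIOLCA calculation is usually written as impacts = R (I - A)^-1 y, where A holds inter-sector purchases per unit of output, y is the final demand, and R holds environmental burdens per unit of each sector's output. A minimal sketch with an invented two-sector economy:

```python
# EIOLCA sketch: total impacts = R (I - A)^-1 y. The two-sector matrix,
# burden vector, and final demand are invented for illustration.
import numpy as np

A = np.array([[0.1, 0.2],     # $ purchased from each sector per $ of output
              [0.3, 0.1]])
R = np.array([0.5, 1.2])      # kg CO2-e per $ of sectoral output (assumed)
y = np.array([1000.0, 0.0])   # final demand: $1000 of sector-1 output

x = np.linalg.solve(np.eye(2) - A, y)  # total output across all supply tiers
print(R @ x)                           # economy-wide kg CO2-e for this demand
```

The Leontief inverse (I - A)^-1 is what captures the long chains described above: each round of indirect purchases is one more term in the series I + A + A^2 + ....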
Ecologically based LCA
While a conventional LCA uses many of the same approaches and strategies as an Eco-LCA, the latter considers a much broader range of ecological impacts. It was designed to provide a guide to wise management of human activities by understanding the direct and indirect impacts on ecological resources and surrounding ecosystems. Developed by the Ohio State University Center for Resilience, Eco-LCA is a methodology that quantitatively takes into account regulating and supporting ecosystem services during the life cycle of economic goods and products. In this approach, services are categorized in four main groups: supporting, regulating, provisioning, and cultural services.
Exergy-based LCA
Exergy of a system is the maximum useful work possible during a process that brings the system into equilibrium with a heat reservoir. Wall clearly stated the relation between exergy analysis and resource accounting. This intuition, confirmed by DeWulf and Sciubba, led to exergo-economic accounting and to methods specifically dedicated to LCA such as exergetic material input per unit of service (EMIPS). The concept of material input per unit of service (MIPS) is quantified in terms of the second law of thermodynamics, allowing the calculation of both resource input and service output in exergy terms. This exergetic material input per unit of service (EMIPS) has been elaborated for transport technology. The service not only takes into account the total mass to be transported and the total distance, but also the mass per single transport and the delivery time.
Life cycle energy analysis
Life cycle energy analysis (LCEA) is an approach in which all energy inputs to a product are accounted for, not only direct energy inputs during manufacture, but also all energy inputs needed to produce components, materials and services needed for the manufacturing process. With LCEA, the total life cycle energy input is established.
Energy production
It is recognized that much energy is lost in the production of energy commodities themselves, such as nuclear energy, photovoltaic electricity or high-quality petroleum products. Net energy content is the energy content of the product minus energy input used during extraction and conversion, directly or indirectly. A controversial early result of LCEA claimed that manufacturing solar cells requires more energy than can be recovered in using the solar cell. Although these results were true when solar cells were first manufactured, their efficiency has increased greatly over the years. Currently, energy payback times of photovoltaic solar panels range from a few months to several years. Module recycling could further reduce the energy payback time to around one month. Another new concept that flows from life cycle assessments is energy cannibalism. Energy cannibalism refers to an effect where rapid growth of an entire energy-intensive industry creates a need for energy that uses (or cannibalizes) the energy of existing power plants. Thus, during rapid growth, the industry as a whole produces no energy because new energy is used to fuel the embodied energy of future power plants. Work has been undertaken in the UK to determine the life cycle energy (alongside full LCA) impacts of a number of renewable technologies.
Energy recovery
If materials are incinerated during the disposal process, the energy released during burning can be harnessed and used for electricity production. This provides a low-impact energy source, especially when compared with coal and natural gas. While incineration produces more greenhouse gas emissions than landfills, waste-to-energy plants are fitted with regulated pollution control equipment to minimize this negative impact. A study comparing energy consumption and greenhouse gas emissions from landfills (without energy recovery) against incineration (with energy recovery) found incineration to be superior in all cases except when landfill gas is recovered for electricity production.
Criticism
Energy efficiency is arguably only one consideration in deciding which alternative process to employ, and should not be elevated as the only criterion for determining environmental acceptability. For example, a simple energy analysis does not take into account the renewability of energy flows or the toxicity of waste products. Incorporating "dynamic LCAs", e.g., with regard to renewable energy technologies—which use sensitivity analyses to project future improvements in renewable systems and their share of the power grid—may help mitigate this criticism.
In recent years, the literature on life cycle assessment of energy technology has begun to reflect the interactions between the current electrical grid and future energy technology. Some papers have focused on the energy life cycle, while others have focused on carbon dioxide (CO2) and other greenhouse gases. The essential critique given by these sources is that when considering energy technology, the growing nature of the power grid must be taken into consideration. If this is not done, a given class of energy technology may emit more CO2 over its lifetime than it would mitigate, with this most well documented in wind energy's case.
A problem that arises when using the energy analysis method is that different energy forms—heat, electricity, chemical energy etc.—have inconsistent functional units, different quality, and different values. This is due to the fact that the first law of thermodynamics measures the change in internal energy, whereas the second law measures entropy increase. Approaches such as cost analysis or exergy may be used as the metric for LCA, instead of energy.
LCA dataset creation
There are structured systematic datasets of and for LCAs.
A 2022 dataset provided standardized, calculated, detailed environmental impacts of more than 57,000 food products in supermarkets, potentially informing, for example, consumers or policy. There is also at least one crowdsourced database for collecting LCA data for food products.
Datasets can also consist of options, activities, or approaches, rather than of products – for example one dataset assesses PET bottle waste management options in Bauru, Brazil. There are also LCA databases about buildings – complex products – which a 2014 study compared.
LCA dataset platforms
There are some initiatives to develop, integrate, populate, standardize, quality control, combine and maintain such datasets or LCAs – for example:
The goal of the LCA Digital Commons Project of the U.S. National Agricultural Library is "to develop a database and tool set intended to provide data for use in LCAs of food, biofuels, and a variety of other bioproducts".
The Global LCA Data Access network (GLAD) by the UN's Life Cycle Initiative is a "platform which allows to search, convert and download datasets from different life cycle assessment dataset providers".
The BONSAI project "aims to build a shared resource where the community can contribute to data generation, validation, and management decisions" for "product footprinting" with its first goal being "to produce an open dataset and an open source toolchain capable of supporting LCA calculations". With product footprints they refer to the goal of "reliable, unbiased sustainability information on products".
Dataset optimization
Datasets that are suboptimal in accuracy or have gaps can be patched or optimized, either temporarily until the complete data are available or permanently, by various methods, such as mechanisms for "selection of a dataset that represents the missing dataset that leads in most cases to a much better approximation of environmental impacts than a dataset selected by default or by geographical proximity", or machine learning.
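A hedged sketch of the proxy-selection idea mentioned above: score candidate datasets by similarity to a description of the missing one, rather than defaulting to geographic proximity. The descriptors and the cosine-similarity metric are assumptions for illustration only, not any database's actual method.

```python
# Proxy-dataset selection sketch. Feature encodings and the similarity
# metric are illustrative assumptions.
import math

def cosine(a, b):
    # cosine similarity between numeric descriptors of two datasets
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

missing = [0.7, 0.2, 0.9]   # encoded process / region / technology descriptors
candidates = {"proxy_A": [0.6, 0.3, 0.8],
              "proxy_B": [0.1, 0.9, 0.2]}
best = max(candidates, key=lambda k: cosine(missing, candidates[k]))
print(best)                 # proxy_A
```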
Integration in systems and systems theory
Life cycle assessments can be integrated as routine processes of systems, as input for modeled future socio-economic pathways, or, more broadly, into a larger context (such as qualitative scenarios).
For example, a study estimated the environmental benefits of microbial protein within a future socio-economic pathway, showing substantial deforestation reduction (56%) and climate change mitigation if only a portion of per-capita beef consumption were replaced by microbial protein by 2050.
Life cycle assessments, including as product/technology analyses, can also be integrated in analyses of potentials, barriers and methods to shift or regulate consumption or production.
The life cycle perspective also allows considering losses and lifetimes of rare goods and services in the economy. For example, the usespans of often-scarce tech-critical metals were found to be short as of 2022. Such data could be combined with conventional life cycle analyses, e.g., to enable life-cycle material/labor cost analyses and long-term economic viability or sustainable design. One study suggests that in LCAs, resource availability is, as of 2013, "evaluated by means of models based on depletion time, surplus energy, etc."
Broadly, various types of life cycle assessments (or commissioning such) could be used in various ways in various types of societal decision-making, especially because financial markets of the economy typically do not consider life cycle impacts or induced societal problems in the future and present—the "externalities" to the contemporary economy.
Critiques
Life cycle assessment is a powerful tool for analyzing commensurable aspects of quantifiable systems. Not every factor, however, can be reduced to a number and inserted into a model. Rigid system boundaries make accounting for changes in the system difficult. This is sometimes referred to as the boundary critique to systems thinking. Limited accuracy and availability of data can also contribute to inaccuracies in results. For instance, data from generic processes may be based on averages, unrepresentative sampling, or outdated results. This is especially the case for the use and end-of-life phases in the LCA. Additionally, the social implications of products are generally lacking in LCAs. Comparative life cycle analysis is often used to determine a better process or product to use. However, because of aspects like differing system boundaries, different statistical information, and different product uses, these studies can easily be swayed in favor of one product or process in one study and the opposite in another, based on varying parameters and different available data. There are guidelines to help reduce such conflicts in results, but the method still leaves a lot of room for the researcher to decide what is important, how the product is typically manufactured, and how it is typically used.
An in-depth review of 13 LCA studies of wood and paper products found a lack of consistency in the methods and assumptions used to track carbon during the product lifecycle. A wide variety of methods and assumptions were used, leading to different and potentially contrary conclusions—particularly with regard to carbon sequestration and methane generation in landfills and with carbon accounting during forest growth and product use.
Moreover, the fidelity of LCAs can vary substantially as various data may not be incorporated, especially in early versions: for example, LCAs that do not consider regional emission information can under-estimate the life cycle environmental impact.
See also
Agroecology
Agroecosystem analysis
Anthropogenic metabolism
Biofuel
Carbon footprint
Depreciation
Design for the environment
Ecodesign
Ecological footprint
End-of-life (product)
ISO 15686
True cost accounting
Water footprint
References
Further reading
Crawford, R.H. (2011). Life Cycle Assessment in the Built Environment. London: Taylor and Francis.
Guinée, J., ed. (2002). Handbook on Life Cycle Assessment: Operational Guide to the ISO Standards. Kluwer Academic Publishers.
Baumann, H. and Tillman, A-M. (2004). The Hitchhiker's Guide to LCA: An Orientation in Life Cycle Assessment Methodology and Application.
Curran, Mary A. (1996). Environmental Life Cycle Assessment. McGraw-Hill Professional Publishing.
Ciambrone, D. F. (1997). Environmental Life Cycle Analysis. Boca Raton, FL: CRC Press.
Horne, Ralph, et al. (2009). LCA: Principles, Practice and Prospects. CSIRO Publishing, Victoria, Australia.
Vallero, Daniel A. and Brasier, Chris (2008). Sustainable Design: The Science of Sustainability and Green Engineering. John Wiley and Sons, Inc., Hoboken, NJ. 350 pages.
Vigon, B. W. (1994). Life Cycle Assessment: Inventory Guidelines and Principles. Boca Raton, FL: CRC Press.
Vogtländer, J.G. (2010). A Practical Guide to LCA for Students, Designers, and Business Managers. VSSD.
External links
LCA Example: Light Emitting Diode (LED) from GSA's Sustainable Facilities Tool
Design for X
Environmental impact assessment
Industrial ecology
Management cybernetics | Life-cycle assessment | [
"Chemistry",
"Engineering"
] | 9,586 | [
"Design for X",
"Industrial engineering",
"Environmental engineering",
"Industrial ecology",
"Design"
] |
605,011 | https://en.wikipedia.org/wiki/AM%E2%80%93GM%20inequality | In mathematics, the inequality of arithmetic and geometric means, or more briefly the AM–GM inequality, states that the arithmetic mean of a list of non-negative real numbers is greater than or equal to the geometric mean of the same list; and further, that the two means are equal if and only if every number in the list is the same (in which case they are both that number).
The simplest non-trivial case is for two non-negative numbers $x$ and $y$, that is,
$$\frac{x+y}{2} \ge \sqrt{xy},$$
with equality if and only if $x = y$. This follows from the fact that the square of a real number is always non-negative (greater than or equal to zero) and from the identity $(a \pm b)^2 = a^2 \pm 2ab + b^2$:
$$0 \le (x - y)^2 = x^2 - 2xy + y^2 = (x + y)^2 - 4xy.$$
Hence $(x+y)^2 \ge 4xy$, with equality when $(x-y)^2 = 0$, i.e. $x = y$. The AM–GM inequality then follows from taking the positive square root of both sides and then dividing both sides by 2.
For a geometrical interpretation, consider a rectangle with sides of length $x$ and $y$; it has perimeter $2x + 2y$ and area $xy$. Similarly, a square with all sides of length $\sqrt{xy}$ has the perimeter $4\sqrt{xy}$ and the same area as the rectangle. The simplest non-trivial case of the AM–GM inequality implies for the perimeters that $2x + 2y \ge 4\sqrt{xy}$ and that only the square has the smallest perimeter amongst all rectangles of equal area.
The simplest case is implicit in Euclid's Elements, Book 5, Proposition 25.
Extensions of the AM–GM inequality treat weighted means and generalized means.
Background
The arithmetic mean, or less precisely the average, of a list of $n$ numbers $x_1, x_2, \ldots, x_n$ is the sum of the numbers divided by $n$:
$$\frac{x_1 + x_2 + \cdots + x_n}{n}.$$
The geometric mean is similar, except that it is only defined for a list of nonnegative real numbers, and uses multiplication and a root in place of addition and division:
$$\sqrt[n]{x_1 \cdot x_2 \cdots x_n}.$$
If $x_1, x_2, \ldots, x_n > 0$, this is equal to the exponential of the arithmetic mean of the natural logarithms of the numbers:
$$\exp\left(\frac{\ln x_1 + \ln x_2 + \cdots + \ln x_n}{n}\right).$$
The inequality
Restating the inequality using mathematical notation, we have that for any list of $n$ nonnegative real numbers $x_1, x_2, \ldots, x_n$,
$$\frac{x_1 + x_2 + \cdots + x_n}{n} \ge \sqrt[n]{x_1 \cdot x_2 \cdots x_n},$$
and that equality holds if and only if $x_1 = x_2 = \cdots = x_n$.
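A quick numerical sanity check of the inequality (not a proof), run over random non-negative lists:

```python
# Numerical check of AM >= GM on random non-negative lists (not a proof).
import math
import random

for _ in range(1000):
    xs = [random.uniform(0.0, 10.0) for _ in range(random.randint(1, 8))]
    am = sum(xs) / len(xs)                 # arithmetic mean
    gm = math.prod(xs) ** (1.0 / len(xs))  # geometric mean
    assert am >= gm - 1e-12                # tolerance for floating-point error
print("AM >= GM held in all trials")
```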
Geometric interpretation
In two dimensions, $2x_1 + 2x_2$ is the perimeter of a rectangle with sides of length $x_1$ and $x_2$. Similarly, $4\sqrt{x_1 x_2}$ is the perimeter of a square with the same area, $x_1 x_2$, as that rectangle. Thus for $n = 2$ the AM–GM inequality states that a rectangle of a given area has the smallest perimeter if that rectangle is also a square.
The full inequality is an extension of this idea to $n$ dimensions. Consider an $n$-dimensional box with edge lengths $x_1, x_2, \ldots, x_n$.
Every vertex of the box is connected to $n$ edges of different directions, so the average length of edges incident to the vertex is $(x_1 + x_2 + \cdots + x_n)/n$.
On the other hand, $\sqrt[n]{x_1 x_2 \cdots x_n}$ is the edge length of an $n$-dimensional cube of equal volume, which therefore is also the average length of edges incident to a vertex of the cube.
Thus the AM–GM inequality states that only the $n$-cube has the smallest average length of edges connected to each vertex amongst all $n$-dimensional boxes with the same volume.
Examples
Example 1
If , then the AM-GM inequality tells us that
Example 2
A simple upper bound for can be found. AM-GM tells us
and so
with equality at .
Equivalently,
Example 3
Consider the function
for all positive real numbers , and . Suppose we wish to find the minimal value of this function. It can be rewritten as:
with
Applying the AM–GM inequality for , we get
Further, we know that the two sides are equal exactly when all the terms of the mean are equal:
All the points satisfying these conditions lie on a half-line starting at the origin and are given by
Applications
Cauchy-Schwarz inequality
The AM-GM inequality can be used to prove the Cauchy–Schwarz inequality.
Annualized returns
In financial mathematics, the AM-GM inequality shows that the annualized return, the geometric mean, is less than the average annual return, the arithmetic mean.
Nonnegative polynomials
The Motzkin polynomial $x^4 y^2 + x^2 y^4 - 3x^2 y^2 + 1$ is a nonnegative polynomial which is not a sum of squares of polynomials. It can be proven nonnegative using the AM-GM inequality with $x_1 = x^4 y^2$, $x_2 = x^2 y^4$, and $x_3 = 1$, that is,
$$\frac{x^4 y^2 + x^2 y^4 + 1}{3} \ge \sqrt[3]{x^4 y^2 \cdot x^2 y^4 \cdot 1} = x^2 y^2.$$
Multiplying both sides by 3 gives $x^4 y^2 + x^2 y^4 + 1 \ge 3x^2 y^2$, so $x^4 y^2 + x^2 y^4 - 3x^2 y^2 + 1 \ge 0$.
Proofs of the AM–GM inequality
The AM–GM inequality can be proven in many ways.
Proof using Jensen's inequality
Jensen's inequality states that the value of a concave function of an arithmetic mean is greater than or equal to the arithmetic mean of the function's values. Since the logarithm function is concave, for positive $x_1, \ldots, x_n$ we have
$$\log\left(\frac{x_1 + \cdots + x_n}{n}\right) \ge \frac{1}{n}\left(\log x_1 + \cdots + \log x_n\right) = \log \sqrt[n]{x_1 \cdots x_n}.$$
Taking antilogs of the far left and far right sides, we have the AM–GM inequality.
Proof by successive replacement of elements
We have to show that
with equality only when all numbers are equal.
If not all numbers are equal, then there exist such that . Replacing by and by will leave the arithmetic mean of the numbers unchanged, but will increase the geometric mean because
If the numbers are still not equal, we continue replacing numbers as above. After at most such replacement steps all the numbers will have been replaced with while the geometric mean strictly increases at each step. After the last step, the geometric mean will be , proving the inequality.
It may be noted that the replacement strategy works just as well from the right hand side. If any of the numbers is 0 then so will the geometric mean thus proving the inequality trivially. Therefore we may suppose that all the numbers are positive. If they are not all equal, then there exist such that . Replacing by and by leaves the geometric mean unchanged but strictly decreases the arithmetic mean since
. The proof then follows along similar lines as in the earlier replacement.
Induction proofs
Proof by induction #1
Of the non-negative real numbers , the AM–GM statement is equivalent to
with equality if and only if for all .
For the following proof we apply mathematical induction and only well-known rules of arithmetic.
Induction basis: For the statement is true with equality.
Induction hypothesis: Suppose that the AM–GM statement holds for all choices of non-negative real numbers.
Induction step: Consider non-negative real numbers , . Their arithmetic mean satisfies
If all the are equal to , then we have equality in the AM–GM statement and we are done. In the case where some are not equal to , there must exist one number that is greater than the arithmetic mean , and one that is smaller than . Without loss of generality, we can reorder our in order to place these two particular elements at the end: and . Then
Now define with
and consider the numbers which are all non-negative. Since
Thus, is also the arithmetic mean of numbers and the induction hypothesis implies
Due to (*) we know that
hence
in particular . Therefore, if at least one of the numbers is zero, then we already have strict inequality in (**). Otherwise the right-hand side of (**) is positive and strict inequality is obtained by using the estimate (***) to get a lower bound of the right-hand side of (**). Thus, in both cases we can substitute (***) into (**) to get
which completes the proof.
Proof by induction #2
First of all we shall prove that for real numbers and there follows
Indeed, multiplying both sides of the inequality by , gives
whence the required inequality is obtained immediately.
Now, we are going to prove that for positive real numbers satisfying
, there holds
The equality holds only if .
Induction basis: For the statement is true because of the above property.
Induction hypothesis: Suppose that the statement is true for all natural numbers up to .
Induction step: Consider natural number , i.e. for positive real numbers , there holds . There exists at least one , so there must be at least one . Without loss of generality, we let and .
Further, the equality we shall write in the form of . Then, the induction hypothesis implies
However, taking into account the induction basis, we have
which completes the proof.
For positive real numbers , let's denote
The numbers satisfy the condition . So we have
whence we obtain
with the equality holding only for .
Proof by Cauchy using forward–backward induction
The following proof by cases relies directly on well-known rules of arithmetic but employs the rarely used technique of forward-backward-induction. It is essentially from Augustin Louis Cauchy and can be found in his Cours d'analyse.
The case where all the terms are equal
If all the terms are equal:
then their sum is , so their arithmetic mean is ; and their product is , so their geometric mean is ; therefore, the arithmetic mean and geometric mean are equal, as desired.
The case where not all the terms are equal
It remains to show that if not all the terms are equal, then the arithmetic mean is greater than the geometric mean. Clearly, this is only possible when .
This case is significantly more complex, and we divide it into subcases.
The subcase where n = 2
If $n = 2$, then we have two terms, $x_1$ and $x_2$, and since (by our assumption) not all terms are equal, we have:
$$\left(\frac{x_1 + x_2}{2}\right)^2 - x_1 x_2 = \frac{1}{4}(x_1 - x_2)^2 > 0,$$
hence
$$\frac{x_1 + x_2}{2} > \sqrt{x_1 x_2},$$
as desired.
The subcase where n = 2k
Consider the case where , where is a positive integer. We proceed by mathematical induction.
In the base case, , so . We have already shown that the inequality holds when , so we are done.
Now, suppose that for a given , we have already shown that the inequality holds for , and we wish to show that it holds for . To do so, we apply the inequality twice for numbers and once for numbers to obtain:
where in the first inequality, the two sides are equal only if
and
(in which case the first arithmetic mean and first geometric mean are both equal to , and similarly with the second arithmetic mean and second geometric mean); and in the second inequality, the two sides are only equal if the two geometric means are equal. Since not all numbers are equal, it is not possible for both inequalities to be equalities, so we know that:
as desired.
The subcase where n < 2k
If is not a natural power of , then it is certainly less than some natural power of 2, since the sequence is unbounded above. Therefore, without loss of generality, let be some natural power of that is greater than .
So, if we have terms, then let us denote their arithmetic mean by , and expand our list of terms thus:
We then have:
so
and
as desired.
Proof by induction using basic calculus
The following proof uses mathematical induction and some basic differential calculus.
Induction basis: For the statement is true with equality.
Induction hypothesis: Suppose that the AM–GM statement holds for all choices of non-negative real numbers.
Induction step: In order to prove the statement for non-negative real numbers , we need to prove that
with equality only if all the numbers are equal.
If all numbers are zero, the inequality holds with equality. If some but not all numbers are zero, we have strict inequality. Therefore, we may assume in the following, that all numbers are positive.
We consider the last number as a variable and define the function
Proving the induction step is equivalent to showing that for all , with only if and are all equal. This can be done by analyzing the critical points of using some basic calculus.
The first derivative of is given by
A critical point has to satisfy , which means
After a small rearrangement we get
and finally
which is the geometric mean of . This is the only critical point of . Since for all , the function is strictly convex and has a strict global minimum at . Next we compute the value of the function at this global minimum:
where the final inequality holds due to the induction hypothesis. The hypothesis also says that we can have equality only when are all equal. In this case, their geometric mean has the same value, Hence, unless are all equal, we have . This completes the proof.
This technique can be used in the same manner to prove the generalized AM–GM inequality and Cauchy–Schwarz inequality in Euclidean space .
Proof by Pólya using the exponential function
George Pólya provided a proof similar to what follows. Let $f(x) = e^{x-1} - x$ for all real $x$, with first derivative $f'(x) = e^{x-1} - 1$ and second derivative $f''(x) = e^{x-1}$. Observe that $f(1) = 0$, $f'(1) = 0$, and $f''(x) > 0$ for all real $x$, hence $f$ is strictly convex with the absolute minimum at $x = 1$. Hence $x \le e^{x-1}$ for all real $x$ with equality only for $x = 1$.
Consider a list of non-negative real numbers . If they are all zero, then the AM–GM inequality holds with equality. Hence we may assume in the following for their arithmetic mean . By -fold application of the above inequality, we obtain that
with equality if and only if for every . The argument of the exponential function can be simplified:
Returning to ,
which produces , hence the result
Proof by Lagrangian multipliers
If any of the are , then there is nothing to prove. So we may assume all the are strictly positive.
Because the arithmetic and geometric means are homogeneous of degree 1, without loss of generality assume that . Set , and . The inequality will be proved (together with the equality case) if we can show that the minimum of subject to the constraint is equal to , and the minimum is only achieved when . Let us first show that the constrained minimization problem has a global minimum.
Set . Since the intersection is compact, the extreme value theorem guarantees that the minimum of subject to the constraints and is attained at some point inside . On the other hand, observe that if any of the , then , while , and . This means that the minimum inside is in fact a global minimum, since the value of at any point inside is certainly no smaller than the minimum, and the value of at any point not inside is strictly bigger than the value at , which is no smaller than the minimum.
The method of Lagrange multipliers says that the global minimum is attained at a point where the gradient of is times the gradient of , for some . We will show that the only point at which this happens is when and
Compute
and
along the constraint. Setting the gradients proportional to one another therefore gives for each that and so Since the left-hand side does not depend on , it follows that , and since , it follows that and , as desired.
Generalizations
Weighted AM–GM inequality
There is a similar inequality for the weighted arithmetic mean and weighted geometric mean. Specifically, let the nonnegative numbers $x_1, x_2, \ldots, x_n$ and the nonnegative weights $w_1, w_2, \ldots, w_n$ be given. Set $w = w_1 + w_2 + \cdots + w_n$. If $w > 0$, then the inequality
$$\frac{w_1 x_1 + w_2 x_2 + \cdots + w_n x_n}{w} \ge \sqrt[w]{x_1^{w_1} x_2^{w_2} \cdots x_n^{w_n}}$$
holds with equality if and only if all the $x_k$ with $w_k > 0$ are equal. Here the convention $0^0 = 1$ is used.
If all $w_k = 1$, this reduces to the above inequality of arithmetic and geometric means.
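The same kind of numerical check as before, this time for the weighted version, with random positive numbers and weights normalized to sum to 1:

```python
# Numerical check of the weighted AM-GM inequality (not a proof).
import random

for _ in range(1000):
    n = random.randint(1, 6)
    xs = [random.uniform(0.01, 10.0) for _ in range(n)]
    raw = [random.uniform(0.01, 1.0) for _ in range(n)]
    ws = [w / sum(raw) for w in raw]              # weights sum to 1
    wam = sum(w * x for w, x in zip(ws, xs))      # weighted arithmetic mean
    wgm = 1.0
    for w, x in zip(ws, xs):
        wgm *= x ** w                             # weighted geometric mean
    assert wam >= wgm - 1e-12
print("weighted AM >= weighted GM held in all trials")
```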
One stronger version of this, which also gives strengthened version of the unweighted version, is due to Aldaz. Specifically, let the nonnegative numbers and the nonnegative weights be given. Assume further that the sum of the weights is 1. Then
.
Proof using Jensen's inequality
Using the finite form of Jensen's inequality for the natural logarithm, we can prove the inequality between the weighted arithmetic mean and the weighted geometric mean stated above.
Since an with weight has no influence on the inequality, we may assume in the following that all weights are positive. If all are equal, then equality holds. Therefore, it remains to prove strict inequality if they are not all equal, which we will assume in the following, too. If at least one is zero (but not all), then the weighted geometric mean is zero, while the weighted arithmetic mean is positive, hence strict inequality holds. Therefore, we may assume also that all are positive.
Since the natural logarithm is strictly concave, the finite form of Jensen's inequality and the functional equations of the natural logarithm imply
Since the natural logarithm is strictly increasing,
Matrix arithmetic–geometric mean inequality
Most matrix generalizations of the arithmetic–geometric mean inequality apply on the level of unitarily invariant norms, since, even if the matrices $A$ and $B$ are positive semi-definite, the matrix $AB$ may not be positive semi-definite and hence may not have a canonical square root. Bhatia and Kittaneh proved that for any unitarily invariant norm $\|\cdot\|$ and positive semi-definite matrices $A$ and $B$ it is the case that
Later, the same authors proved the stronger inequality that
Finally, it is known for dimension that the following strongest possible matrix generalization of the arithmetic-geometric mean inequality holds, and it is conjectured to hold for all
This conjectured inequality was shown by Stephen Drury in 2012. Indeed, he proved
Finance: Link to geometric asset returns
In finance much research is concerned with accurately estimating the rate of return of an asset over multiple periods in the future. In the case of lognormal asset returns, there is an exact formula to compute the arithmetic asset return from the geometric asset return.
For simplicity, assume we are looking at yearly geometric returns over a time horizon of years, i.e.
where:
= value of the asset at time ,
= value of the asset at time .
The geometric and arithmetic returns are respectively defined as
When the yearly geometric asset returns are lognormally distributed, then the following formula can be used to convert the geometric average return to the arithmetic average return:
where is the variance of the observed asset returns This implicit equation for can be solved exactly as follows. First, notice that by setting
we obtain a polynomial equation of degree 2:
Solving this equation for and using the definition of , we obtain 4 possible solutions for :
However, notice that
This implies that the only 2 possible solutions are (as asset returns are real numbers):
Finally, we expect the derivative of with respect to to be non-negative as an increase in the geometric return should never cause a decrease in the arithmetic return. Indeed, both measure the average growth of an asset's value and therefore should move in similar directions. This leaves us with one solution to the implicit equation for , namely
Therefore, under the assumption of lognormally distributed asset returns, the arithmetic asset return is fully determined by the geometric asset return.
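Since the exact formula is not reproduced above, the sketch below uses one commonly quoted lognormal-consistent relation, (1 + a)^2 = (1 + g)^2 + sigma^2, purely as an assumption. It matches the qualitative discussion (the positive root keeps the arithmetic return non-decreasing in the geometric return), but it should not be read as the precise formula referenced in the text.

```python
# Hedged sketch: one commonly quoted relation between geometric return g,
# arithmetic return a, and return variance sigma2 is
#   (1 + a)^2 = (1 + g)^2 + sigma2.
# This is an assumption standing in for the exact formula referenced above.
# The positive root is chosen so that a increases with g, as argued in the text.
import math

def arithmetic_from_geometric(g, sigma2):
    return math.sqrt((1.0 + g) ** 2 + sigma2) - 1.0

print(arithmetic_from_geometric(0.05, 0.04))   # ~0.0689 for g = 5%, var = 0.04
```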
Other generalizations
Other generalizations of the inequality of arithmetic and geometric means include:
Muirhead's inequality,
Maclaurin's inequality,
QM-AM-GM-HM inequalities,
Generalized mean inequality,
Means of complex numbers.
See also
Hoffman's packing puzzle
Ky Fan inequality
Young's inequality for products
Notes
References
External links
Inequalities
Means
Articles containing proofs | AM–GM inequality | [
"Physics",
"Mathematics"
] | 3,722 | [
"Means",
"Point (geometry)",
"Mathematical theorems",
"Mathematical analysis",
"Geometric centers",
"Binary relations",
"Mathematical relations",
"Inequalities (mathematics)",
"Articles containing proofs",
"Mathematical problems",
"Symmetry"
] |
605,080 | https://en.wikipedia.org/wiki/International%20Color%20Consortium | The International Color Consortium (ICC) was formed in 1993 by eight vendors in order to create an open, vendor-neutral color management system which would function transparently across all operating systems and software packages.
Overview
The ICC specification, currently on version 4.4, allows for matching of color when moved between applications and operating systems, from the point of creation to the final output, whether display or print. This specification is technically identical to ISO 15076-1:2010, available from ISO.
The ICC profile describes the color attributes of a particular device or viewing requirement by defining a mapping between the source or target color space and a profile connection space (PCS).
The ICC defines the specification precisely but does not define algorithms or processing details. As such, applications or systems that work with different ICC profiles are allowed to vary.
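As a concrete illustration of profile-based conversion, the sketch below uses Pillow's ImageCms bindings (built on LittleCMS) to transform an image from an assumed device profile to sRGB; the file names are placeholder assumptions.

```python
# Sketch of an ICC-profile color conversion using Pillow's ImageCms module
# (LittleCMS under the hood). File paths are placeholder assumptions.
from PIL import Image, ImageCms

im = Image.open("photo.jpg")                     # assumed source image
src = ImageCms.getOpenProfile("camera_rgb.icc")  # assumed device profile
dst = ImageCms.createProfile("sRGB")             # built-in sRGB profile

# Build a transform through the profile connection space, then apply it.
transform = ImageCms.buildTransformFromOpenProfiles(src, dst, "RGB", "RGB")
converted = ImageCms.applyTransform(im, transform)
converted.save("photo_srgb.jpg")
```

Consistent with the text above, the two profiles only describe mappings into and out of the PCS; the rendering details of the transform are left to the color management module.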
ICC has also published a preliminary specification for iccMAX, a next-generation color management architecture with significantly expanded functionality and a choice of colorimetric, spectral, or material connection space. Details are at https://www.color.org/iccmax/
Membership
The eight founding members of the ICC were Adobe, Agfa, Apple, Kodak, Microsoft, Silicon Graphics, Sun Microsystems, and Taligent. Sun Microsystems, Silicon Graphics, and Taligent have since left the organization.
There are 5 founding members, 37 regular members, and 18 honorary members. Most members specialize in photography, printing, or electronic visual displays. Regular members include:
BenQ,
Canon,
Dolby,
Fuji,
Heidelberg Printing Machines AG,
Hewlett–Packard,
Konica Minolta,
Kyocera,
Nikon,
Seiko,
Sun Chemical,
Toshiba,
vivo,
Xerox,
Xiaomi,
and X-Rite.
ICC profile specification version
See also
International Colour Association
International Commission on Illumination
References
External links
Official website
Color organizations
International organizations based in the United States
Information technology organizations | International Color Consortium | [
"Technology"
] | 390 | [
"Information technology",
"Information technology organizations"
] |
605,107 | https://en.wikipedia.org/wiki/Frugivore | A frugivore () is an animal that thrives mostly on raw fruits or succulent fruit-like produce of plants such as roots, shoots, nuts and seeds. Approximately 20% of mammalian herbivores eat fruit. Frugivores are highly dependent on the abundance and nutritional composition of fruits. Frugivores can benefit or hinder fruit-producing plants by either dispersing or destroying their seeds through digestion. When both the fruit-producing plant and the frugivore benefit by fruit-eating behavior the interaction is a form of mutualism.
Frugivore seed dispersal
Seed dispersal is important for plants because it allows their progeny to move away from their parents over time. The advantages of seed dispersal may have led to the evolution of fleshy fruits, which entice animals to consume them and move the plant's seeds from place to place. While many fruit-producing plant species would not disperse far without frugivores, their seeds can usually germinate even if they fall to the ground directly below their parent.
Many types of animals are seed dispersers. Mammal and bird species represent the majority of seed-dispersing species. However, frugivorous tortoises, lizards, amphibians, and even fish also disperse seeds. For example, cassowaries are a keystone species because they spread fruit through digestion, many of the seeds of which will not grow unless they have been digested by the animal. While frugivores and fruit-producing plant species are present worldwide, there is some evidence that tropical forests have more frugivore seed dispersers than the temperate zones.
Ecological significance
Frugivore seed dispersal is a common phenomenon in many ecosystems. However, it is not a highly specific type of plant–animal interaction. For example, a single species of frugivorous bird may disperse fruits from several species of plants, or a few species of bird may disperse seeds of one plant species. This lack of specialization could be because fruit availability varies by season and year, which tends to discourage frugivore animals from focusing on just one plant species. Furthermore, different seed dispersers tend to disperse seeds to different habitats, at different abundances, and distances, depending on their behavior and numbers.
Plant adaptations to attract dispersers
There are a number of fruit characteristics that seem to be adaptive characteristics to attract frugivores. Animal-dispersed fruits may advertise their palatability to animals with bright colors and attractive smells (mimetic fruits). Fruit pulp is generally rich in water and carbohydrates and low in protein and lipids. However, the exact nutritional composition of fruits varies widely. The seeds of animal-dispersed fruits are often adapted to survive digestion by frugivores. For example, seeds can become more permeable to water after passage through an animal's gut. This leads to higher germination rates. Some mistletoe seeds even germinate inside the disperser's intestine.
Frugivore adaptations for fruit consumption
Many seed-dispersing animals have specialized digestive systems to process fruits, which leave seeds intact. Some bird species have shorter intestines to rapidly pass seeds from fruits, while some frugivorous bat species have longer intestines. Some seed-dispersing frugivores have short gut-retention times, and others can alter intestinal enzyme composition when eating different types of fruits.
Plant mechanisms to delay or deter frugivory
Since plants invest considerable energy into fruit production, many have evolved to encourage mutualist frugivores to consume their fruit for seed dispersal. Some have also evolved mechanisms to decrease consumption of fruits when unripe and from non-seed-dispersing predators. Predators and parasites of fruit include seed predators, insects, and microbial frugivores.
Plants have developed both chemical and physical adaptations:
Physical deterrents:
Cryptic coloration (e.g. green fruits blend in with the plant leaves)
Unpalatable textures (e.g. thick skins made of anti-nutritive substances)
Resins and saps (e.g. prevent animals from swallowing)
Repellent substances, hard outer coats, spines, thorns
Chemical deterrents:
Chemical deterrents in plants are called secondary metabolites. Secondary metabolites are compounds produced by the plant that are not essential for the primary processes, such as growth and reproduction. Toxins might have evolved to prevent consumption by animals that disperse seeds into unsuitable habitats, to prevent too many fruits from being eaten per feeding bout by preventing too many seeds being deposited in one site, or to prevent digestion of the seeds in the gut of the animal. Secondary chemical defenses are divided into three categories: nitrogen-based, carbon-based terpenes, and carbon-based phenolics.
Examples of secondary chemical defenses in fruit:
Capsaicin is a carbon-based phenolic compound only found in plant genus Capsicum (chili and bell peppers). Capsaicin is responsible for the pungent, hot "flavor" of peppers and inhibits growth of microbes and invertebrates.
Cyanogenic glycosides are nitrogen-based compounds and are found in 130 plant families, but not necessarily in the fruit of all the plants. It is specifically found in the red berries of the genus Ilex (holly, an evergreen woody plant). It can inhibit electron transport, cellular respiration, induce vomiting, diarrhea, and mild narcosis in animals.
Emodin is a carbon-based phenolic compound in plants like rhubarb. Emodin can be cathartic or act as a laxative in humans, kills dipteran larvae, inhibits growth of bacteria and fungi, and deters consumption by birds and mice.
Starch is a polysaccharide that is slowly converted to fructose as the fruit ripens.
Frugivorous animals
Birds
Birds are a main focus of frugivory research. An article by Bette A. Loiselle and John G. Blake, "Potential Consequences of Extinction of Frugivorous Birds for Shrubs of a Tropical Wet Forest", discusses the important role frugivorous birds have on ecosystems. The conclusions of their research indicate how the extinction of seed-dispersing species could negatively affect seed removal, seed viability, and plant establishment. The article highlights the importance that seed-dispersing birds have on the deposition of plant species.
Examples of seed-dispersing birds are the hornbill, the toucan, the aracari, the cotinga (ex. Guianan cock-of-the-rock), and some species of parrots. Frugivores are common in the temperate zone, but mostly found in the tropics. Many frugivorous birds feed mainly on fruits until nesting season, when they incorporate protein-rich insects into their diet. Facultatively-baccivorous birds may also eat bitter berries, such as juniper, in months when alternative foods are scarce. In North America, red mulberry (Morus rubra) fruits are widely sought after by birds in spring and early summer; as many as 31 species of birds were recorded visiting a fruiting tree in Arkansas.
Prior to 1980, most reports of avian frugivory were made in the tropics. From 1979–1981, a number of studies recognized the importance of fruits to fall temperate assemblages of passerine migrants. The earliest of these field studies were conducted in the fall of 1974 in upstate New York by Robert Rybczynski & Donald K. Riker and separately by John W. Baird in New Jersey, each documenting ingestion of fruits in stands of fruit-bearing shrubs by mixed species assemblages dominated by migrant white-throated sparrows.
Mammals
Mammals are considered frugivorous if the seed is dispersed and able to establish. One example of a mammalian frugivore is the maned wolf, or Chrysocyon brachyurus, which is found in South America. A study by José Carlos Motta-Junior and Karina Martins found that the maned wolf is probably an important seed disperser. The researchers found that 22.5–54.3% of the diet was fruit.
65% of the diet of orangutans consists of fruit. Orangutans primarily eat fruit, along with young leaves, bark, flowers, honey, insects, and vines. One of their preferred foods is the fruit of the durian tree, which tastes somewhat like sweet custard. Orangutans discard the skin, eat the flesh, and spit out the seeds.
Other examples of mammalian frugivores include fruit bats and the gray-bellied night monkey, also known as the owl monkey:
"Owl monkeys are frugivores and supplement their diet with flowers, insects, nectar, and leaves (Wright 1989; 1994). They prefer small, ripe fruit when available and in order to find these, they forage in large-crown trees (larger than ten meters [32.8 ft]) (Wright 1986). Seasonal availability of fruit varies across environments. Aotus species in tropical forests eat more fruit throughout the year because it is more readily available compared to the dry forests where fruit is limited in the dry season and owl monkeys are more dependent on leaves."
Fish
Some species of fish are frugivorous, such as the tambaqui.
Conservation
Since seed dispersal allows plant species to disperse to other areas, the loss of frugivores could change plant communities and lead to the local loss of particular plant species. Since frugivore seed dispersal is so important in the tropics, many researchers have studied the loss of frugivores and related it to changed plant population dynamics. Several studies have noted that even the loss of only large frugivores, such as monkeys, could have a negative effect, since they are responsible for certain types of long-distance seed dispersal that is not seen with other frugivore types, like birds. However, plant species whose seeds are dispersed by animals may be less vulnerable to fragmentation than other plant species. Frugivores can also benefit from the invasion of exotic fruit-producing species and can be vectors of exotic invasion by dispersing non-native seeds. Consequently, anthropogenic habitat loss and change may negatively affect some frugivore species but benefit others.
See also
Consumer-resource systems
Fruit flies
Fruitarianism
References
Further reading
Animals by eating behaviors
Herbivory | Frugivore | [
"Biology"
] | 2,172 | [
"Behavior",
"Animals by eating behaviors",
"Eating behaviors",
"Herbivory",
"Ethology"
] |
605,108 | https://en.wikipedia.org/wiki/Charles%20Ehresmann | Charles Ehresmann (19 April 1905 – 22 September 1979) was a German-born French mathematician who worked in differential topology and category theory.
He was an early member of the Bourbaki group, and is known for his work on the differential geometry of smooth fiber bundles, notably the introduction of the concepts of Ehresmann connection and of jet bundles, and for his seminar on category theory.
Life
Ehresmann was born in Strasbourg (at the time part of the German Empire) to an Alsatian-speaking family; his father was a gardener. After World War I, Alsace returned to France, and Ehresmann was taught in French at the Lycée Kléber.
Between 1924 and 1927 he studied at the École Normale Supérieure (ENS) in Paris and obtained the agrégation in mathematics. After one year of military service, in 1928–29 he taught at a French school in Rabat, Morocco. He studied further at the University of Göttingen during the years 1930–31, and at Princeton University in 1932–34.
He completed his PhD thesis entitled Sur la topologie de certains espaces homogènes (On the topology of certain homogeneous spaces) at ENS in 1934 under the supervision of Élie Cartan.
From 1935 to 1939 he was a researcher with the Centre national de la recherche scientifique and he contributed to the seminar of Gaston Julia, which was a forerunner of the Bourbaki seminar. In 1939 Ehresmann became a lecturer at the University of Strasbourg, but one year later the whole faculty was evacuated to Clermont-Ferrand due to the German occupation of France. When Germany withdrew in 1945, he returned to Strasbourg.
From 1955 he was Professor of Topology at the Sorbonne, and after the reorganization of Parisian universities in 1969 he moved to Paris Diderot University (Paris 7).
Ehresmann was President of the Société Mathématique de France in 1965. He was awarded in 1940 the Prix Francoeur for young researchers in mathematics and in 1967 an honorary doctorate by the University of Bologna. He also held visiting chairs at Yale University, Princeton University, in Brazil (São Paulo, Rio de Janeiro), Buenos Aires, Mexico City, Montreal, and the Tata Institute of Fundamental Research in Bombay.
After his retirement in 1975 and until 1978 he gave lectures at the University of Picardy at Amiens, where he moved because his second wife, Andrée Charles-Ehresmann, was a professor of mathematics there. He died at Amiens in 1979.
Mathematical work
In the first part of his career Ehresmann introduced many new mathematical objects in differential geometry and topology, which gave rise to entire new fields, often developed later by his students.
In his first works he investigated the topology and homology of manifolds associated with classical Lie groups, such as Grassmann manifolds and other homogeneous spaces.
He developed the concept of fiber bundle, and the related notions of Ehresmann connection and solder form, building on the works by Herbert Seifert and Hassler Whitney in the 1930s. Norman Steenrod was working in the same direction from a topological point of view, but Ehresmann, influenced by Cartan's ideas, was particularly interested in differentiable (smooth) fiber bundles, and in the differential-geometric aspects of these. This approach led him also to the notion of almost complex structure, which was introduced independently also by Heinz Hopf.
In order to obtain a more conceptual understanding of completely integrable systems of partial differential equations, in 1944 Ehresmann inaugurated the theory of foliations, which would later be developed by his student Georges Reeb. With the same perspective, he pioneered the notions of jet and of Lie groupoid.
Since the 1960s, Ehresmann's research interests moved to category theory, where he introduced the concepts of sketch and of strict 2-category.
His collected works, edited by his wife, appeared in seven volumes in 1980–1983 (four volumes published by Imprimerie Evrard, Amiens, and the rest in the journal Cahiers de Topologie et Géométrie Différentielle Catégoriques, which he had founded in 1957). His publications include also the books Catégories et structures (Dunod, Paris, 1965) and Algèbre (1969).
Jean Dieudonné described Ehresmann's personality as "... distinguished by forthrightness, simplicity, and total absence of conceit or careerism. As a teacher he was outstanding, not so much for the brilliance of his lectures as for the inspiration and tireless guidance he generously gave to his research students ... "
He had 76 PhD students, including Georges Reeb, Wu Wenjun (吴文俊), André Haefliger, Valentin Poénaru, and Daniel Tanré. His first student was Jacques Feldbau.
References
External links
International Conference "Charles Ehresmann: 100 ans" Université de Picardie Jules Verne à Amiens, 7-8-9 October 2005. http://pagesperso-orange.fr/vbm-ehr/ChEh/indexAng.htm
'The mathematical legacy of Charles Ehresmann', Proceedings of the 7th Conference on the Geometry and Topology of Manifolds: The Mathematical Legacy of Charles Ehresmann, Będlewo (Poland), 8.05.2005–15.05.2005, Edited by J. Kubarski, J. Pradines, T. Rybicki, R. Wolak, Banach Center Publications, vol. 76, Institute of Mathematics of the Polish Academy of Sciences, Warsaw, 2007. https://www.impan.pl/pl/wydawnictwa/banach-center-publications/all/76
20th-century French mathematicians
Nicolas Bourbaki
Category theorists
École Normale Supérieure alumni
Alsatian-German people
1905 births
1979 deaths
Differential geometers
Academic staff of the University of Paris
Academic staff of the University of Strasbourg
Topologists | Charles Ehresmann | [
"Mathematics"
] | 1,239 | [
"Mathematical structures",
"Topologists",
"Topology",
"Category theory",
"Category theorists"
] |
605,242 | https://en.wikipedia.org/wiki/A1%20broth | An A1 broth is a liquid culture medium used in microbiology for the detection of fecal coliforms in foods, treated wastewater and seawater bays using the most probable number (MPN) method. It is prepared according to the formulation of Andrews and Presnell given below. It is used with a Durham tube, a positive tube being one that exhibits a trapped bubble of gas.
Typical formula (g/L)
Directions
Suspend the dry ingredients in one liter of cold distilled water. Gently heat until completely dissolved and distribute 9 mL into test tubes, each with an inverted Durham tube. Sterilize in an autoclave at 121°C for 15 minutes. If needed, prepare multi-strength broth by weighing the appropriate quantity of the dry medium. The final pH is 6.9 ± 0.1.
Widespread usage
Variants of this test have been used for potable water across the globe, for example by the Cree community of Split Lake, Manitoba; by the Mapuche people of Maquehue, Chile; and in Singapore, Malaysia, and Thailand.
References
Microbiological media | A1 broth | [
"Biology"
] | 217 | [
"Microbiological media",
"Microbiology equipment"
] |
605,477 | https://en.wikipedia.org/wiki/Behavioral%20neuroscience | Behavioral neuroscience, also known as biological psychology, biopsychology, or psychobiology, is part of the broad, interdisciplinary field of neuroscience, with its primary focus being on the biological and neural substrates underlying human experiences and behaviors, as in our psychology. Derived from an earlier field known as physiological psychology, behavioral neuroscience applies the principles of biology to study the physiological, genetic, and developmental mechanisms of behavior in humans and other animals. Behavioral neuroscientists examine the biological bases of behavior through research that involves neuroanatomical substrates, environmental and genetic factors, effects of lesions and electrical stimulation, developmental processes, recording electrical activity, neurotransmitters, hormonal influences, chemical components, and the effects of drugs. Important topics of consideration for neuroscientific research in behavior include learning and memory, sensory processes, motivation and emotion, as well as genetic and molecular substrates concerning the biological bases of behavior. Subdivisions of behavioral neuroscience include the field of cognitive neuroscience, which emphasizes the biological processes underlying human cognition. Behavioral and cognitive neuroscience are both concerned with the neuronal and biological bases of psychology, with a particular emphasis on either cognition or behavior depending on the field.
History
Behavioral neuroscience as a scientific discipline emerged from a variety of scientific and philosophical traditions in the 18th and 19th centuries. René Descartes proposed physical models to explain animal as well as human behavior. Descartes suggested that the pineal gland, a midline unpaired structure in the brain of many organisms, was the point of contact between mind and body. Descartes also elaborated on a theory in which the pneumatics of bodily fluids could explain reflexes and other motor behavior. This theory was inspired by moving statues in a garden in Paris.
Other philosophers also helped give birth to psychology. One of the earliest textbooks in the new field, The Principles of Psychology by William James, argues that the scientific study of psychology should be grounded in an understanding of biology.
The emergence of psychology and behavioral neuroscience as legitimate sciences can be traced from the emergence of physiology from anatomy, particularly neuroanatomy. Physiologists conducted experiments on living organisms, a practice that was distrusted by the dominant anatomists of the 18th and 19th centuries. The influential work of Claude Bernard, Charles Bell, and William Harvey helped to convince the scientific community that reliable data could be obtained from living subjects.
Even before the 18th and 19th centuries, behavioral neuroscience was beginning to take form, as far back as 1700 B.C. The question that continually arises is: what is the connection between the mind and body? The debate is formally referred to as the mind–body problem. There are two major schools of thought that attempt to resolve it: monism and dualism. Plato and Aristotle are two of several philosophers who participated in this debate. Plato believed that the brain was where all mental thought and processes happened. In contrast, Aristotle believed the brain served the purpose of cooling down the emotions derived from the heart. The mind–body problem was a stepping stone toward attempting to understand the connection between the mind and body.
Another debate arose about localization of function, or functional specialization, versus equipotentiality, which played a significant role in the development of behavioral neuroscience. As a result of localization-of-function research, many prominent figures within psychology have reached differing conclusions. Wilder Penfield, working with Theodore Rasmussen, was able to develop a map of the cerebral cortex through studying epileptic patients. Research on localization of function has led behavioral neuroscientists to a better understanding of which parts of the brain control behavior. This is best exemplified through the case study of Phineas Gage.
The term "psychobiology" has been used in a variety of contexts emphasizing the importance of biology, the discipline that studies organic, neural, and cellular modifications in behavior, plasticity in neuroscience, and biological disease in all its aspects; biology also analyzes behavior from a scientific point of view. In this context, psychology serves as a complementary but important discipline in the neurobiological sciences, acting as a social tool that supports the core biological science. The term "psychobiology" was first used in its modern sense by Knight Dunlap in his book An Outline of Psychobiology (1914). Dunlap was also the founder and editor-in-chief of the journal Psychobiology. In the announcement of that journal, Dunlap writes that the journal will publish research "...bearing on the interconnection of mental and physiological functions", which describes the field of behavioral neuroscience even in its modern sense.
Neuroscience is considered a relatively new discipline, with the first conference for the Society of Neuroscience occurring in 1971. The meeting was held to merge different fields focused on studying the nervous system (e.g. neuroanatomy, neurochemistry, physiological psychology, neuroendocrinology, clinical neurology, neurophysiology, neuropharmacology, etc.) by creating one interdisciplinary field. In 1983, the Journal of Comparative and Physiological Psychology, published by the American Psychological Association, was split into two separate journals: Behavioral Neuroscience and the Journal of Comparative Psychology. The journal's editor at the time gave several reasons for this separation, one being that behavioral neuroscience is the broader contemporary advancement of physiological psychology. Furthermore, in all animals, the nervous system is the organ of behavior. Therefore, every biological and behavioral variable that influences behavior must go through the nervous system to do so. Present-day research in behavioral neuroscience studies all biological variables which act through the nervous system and relate to behavior.
Relationship to other fields of psychology and biology
In many cases, humans may serve as experimental subjects in behavioral neuroscience experiments; however, a great deal of the experimental literature in behavioral neuroscience comes from the study of non-human species, most frequently rats, mice, and monkeys. As a result, a critical assumption in behavioral neuroscience is that organisms share biological and behavioral similarities, enough to permit extrapolations across species. This allies behavioral neuroscience closely with comparative psychology, ethology, evolutionary biology, and neurobiology. Behavioral neuroscience also has paradigmatic and methodological similarities to neuropsychology, which relies heavily on the study of the behavior of humans with nervous system dysfunction (i.e., a non-experimentally based biological manipulation). Synonyms for behavioral neuroscience include biopsychology, biological psychology, and psychobiology. Physiological psychology is a subfield of behavioral neuroscience, with an appropriately narrower definition.
Research methods
The distinguishing characteristic of a behavioral neuroscience experiment is that either the independent variable of the experiment is biological, or some dependent variable is biological. In other words, the nervous system of the organism under study is permanently or temporarily altered, or some aspect of the nervous system is measured (usually to be related to a behavioral variable).
Disabling or decreasing neural function
Lesions – A classic method in which a brain region of interest is naturally or intentionally destroyed to observe any resulting changes, such as degraded or enhanced performance on some behavioral measure. Lesions can be placed with relatively high accuracy thanks to a variety of brain atlases, which provide maps of brain regions in 3-dimensional stereotactic coordinates.
Surgical lesions – Neural tissue is destroyed by removing it surgically.
Electrolytic lesions – Neural tissue is destroyed through the application of electrical shock trauma.
Chemical lesions – Neural tissue is destroyed by the infusion of a neurotoxin.
Temporary lesions – Neural tissue is temporarily disabled by cooling or by the use of anesthetics such as tetrodotoxin.
Transcranial magnetic stimulation – A new technique usually used with human subjects in which a magnetic coil applied to the scalp causes unsystematic electrical activity in nearby cortical neurons which can be experimentally analyzed as a functional lesion.
Synthetic ligand injection – A receptor activated solely by a synthetic ligand (RASSL) or Designer Receptor Exclusively Activated by Designer Drugs (DREADD) permits spatial and temporal control of G protein signaling in vivo. These systems utilize G protein-coupled receptors (GPCRs) engineered to respond exclusively to synthetic small-molecule ligands, like clozapine N-oxide (CNO), and not to their natural ligand(s). RASSLs represent a GPCR-based chemogenetic tool. Upon activation, these synthetic ligands can decrease neural function through G-protein signaling, for example via potassium currents that attenuate neural activity.
Optogenetic inhibition – A light-activated inhibitory protein is expressed in cells of interest. Powerful millisecond-timescale neuronal inhibition is instigated upon stimulation by the appropriate frequency of light delivered via fiber optics or implanted LEDs in the case of vertebrates, or via external illumination for small, sufficiently translucent invertebrates. Bacterial halorhodopsins and proton pumps are the two classes of proteins used for inhibitory optogenetics, achieving inhibition by increasing cytoplasmic levels of halides (such as chloride, Cl−) or decreasing the cytoplasmic concentration of protons, respectively.
Enhancing neural function
Electrical stimulation – A classic method in which neural activity is enhanced by application of a small electric current (too small to cause significant cell death).
Psychopharmacological manipulations – A chemical receptor antagonist induces neural activity by interfering with neurotransmission. Antagonists can be delivered systemically (such as by intravenous injection) or locally (intracerebrally) during a surgical procedure into the ventricles or into specific brain structures. For example, NMDA antagonist AP5 has been shown to inhibit the initiation of long term potentiation of excitatory synaptic transmission (in rodent fear conditioning) which is believed to be a vital mechanism in learning and memory.
Synthetic ligand injection – Likewise, Gq-DREADDs can be used to modulate cellular function by innervation of brain regions such as the hippocampus. This innervation results in the amplification of γ-rhythms, which increases motor activity.
Transcranial magnetic stimulation – In some cases (for example, studies of motor cortex), this technique can be analyzed as having a stimulatory effect (rather than as a functional lesion).
Optogenetic excitation – A light activated excitatory protein is expressed in select cells. Channelrhodopsin-2 (ChR2), a light activated cation channel, was the first bacterial opsin shown to excite neurons in response to light, though a number of new excitatory optogenetic tools have now been generated by improving and imparting novel properties to ChR2.
Measuring neural activity
Optical techniques – Optical methods for recording neuronal activity rely on methods that modify the optical properties of neurons in response to the cellular events associated with action potentials or neurotransmitter release.
Voltage sensitive dyes (VSDs) were among the earliest method for optically detecting neuronal activity. VSDs commonly changed their fluorescent properties in response to a voltage change across the neuron's membrane, rendering membrane sub-threshold and supra-threshold (action potentials) electrical activity detectable. Genetically encoded voltage sensitive fluorescent proteins have also been developed.
Calcium imaging relies on dyes or genetically encoded proteins that fluoresce upon binding to the calcium that is transiently present during an action potential.
Synapto-pHluorin is a technique that relies on a fusion protein that combines a synaptic vesicle membrane protein and a pH sensitive fluorescent protein. Upon synaptic vesicle release, the chimeric protein is exposed to the higher pH of the synaptic cleft, causing a measurable change in fluorescence.
Single-unit recording – A method whereby an electrode is introduced into the brain of a living animal to detect electrical activity that is generated by the neurons adjacent to the electrode tip. Normally this is performed with sedated animals but sometimes it is performed on awake animals engaged in a behavioral event, such as a thirsty rat whisking a particular sandpaper grade previously paired with water in order to measure the corresponding patterns of neuronal firing at the decision point.
Multielectrode recording – The use of a bundle of fine electrodes to record the simultaneous activity of up to hundreds of neurons.
Functional magnetic resonance imaging – fMRI, a technique most frequently applied on human subjects, in which changes in cerebral blood flow can be detected in an MRI apparatus and are taken to indicate relative activity of larger scale brain regions (i.e., on the order of hundreds of thousands of neurons).
Positron emission tomography - PET detects photons (gamma rays) using a 3-D nuclear medicine examination. These photons are emitted after the injection of radioisotopes such as fluorine-18. PET imaging reveals the pathological processes which predict anatomic changes, making it important for detecting, diagnosing and characterizing many pathologies.
Electroencephalography – EEG, and the derivative technique of event-related potentials, in which scalp electrodes monitor the average activity of neurons in the cortex (again, used most frequently with human subjects). This technique uses different types of electrodes for recording systems such as needle electrodes and saline-based electrodes. EEG allows for the investigation of mental disorders, sleep disorders and physiology. It can monitor brain development and cognitive engagement.
Functional neuroanatomy – A more complex counterpart of phrenology. The expression of some anatomical marker is taken to reflect neural activity. For example, the expression of immediate early genes is thought to be caused by vigorous neural activity. Likewise, the injection of 2-deoxyglucose prior to some behavioral task can be followed by anatomical localization of that chemical; it is taken up by neurons that are electrically active.
Magnetoencephalography – MEG shows the functioning of the human brain through the measurement of electromagnetic activity. Measuring the magnetic fields created by the electric current flowing within the neurons identifies brain activity associated with various human functions in real time, with millimeter spatial accuracy. Clinicians can noninvasively obtain data to help them assess neurological disorders and plan surgical treatments.
Genetic techniques
QTL mapping – The influence of a gene on some behavior can be statistically inferred by studying inbred strains of some species, most commonly mice. The recent sequencing of the genome of many species, most notably mice, has facilitated this technique; a minimal illustration of the underlying marker–phenotype test follows this list.
Selective breeding – Organisms, often mice, may be bred selectively among inbred strains to create a recombinant congenic strain. This might be done to isolate an experimentally interesting stretch of DNA derived from one strain on the background genome of another strain to allow stronger inferences about the role of that stretch of DNA.
Genetic engineering – The genome may also be experimentally-manipulated; for example, knockout mice can be engineered to lack a particular gene, or a gene may be expressed in a strain which does not normally do so (the 'transgenic'). Advanced techniques may also permit the expression or suppression of a gene to occur by injection of some regulating chemical.
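As a minimal sketch of the statistical inference behind QTL mapping, the example below compares a hypothetical behavioral phenotype between two genotype groups at a single marker locus. The trait, values, and sample sizes are all invented; a real genome scan repeats such tests across thousands of markers and corrects for multiple comparisons.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: a behavioral score (e.g., open-field activity) measured
# in inbred mouse strains, split by genotype (A or B) at one marker locus.
scores_genotype_a = rng.normal(loc=52.0, scale=8.0, size=30)
scores_genotype_b = rng.normal(loc=45.0, scale=8.0, size=30)

# A locus is a candidate QTL if the phenotype differs significantly
# between the two genotype groups.
t_stat, p_value = stats.ttest_ind(scores_genotype_a, scores_genotype_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# In practice this test is repeated across the genome, with permutation-based
# thresholds guarding against false positives from multiple testing.
```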
Quantifying behavior
Markerless pose estimation – The advancement of computer vision techniques in recent years have allowed for precise quantifications of animal movements without needing to fit physical markers onto the subject. On high-speed video captured in a behavioral assay, keypoints from the subject can be extracted frame-by-frame, which is often useful to analyze in tandem with neural recordings/manipulations. Analyses can be conducted on how keypoints (i.e. parts of the animal) move within different phases of a particular behavior (on a short timescale), or throughout an animal's behavioral repertoire (longer timescale). These keypoint changes can be compared with corresponding changes in neural activity. A machine learning approach can also be used to identify specific behaviors (e.g. forward walking, turning, grooming, courtship, etc.), and quantify the dynamics of transitions between behaviors.
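A minimal sketch of this kind of analysis is shown below: given a keypoint trajectory of the sort a markerless pose estimator produces, it computes per-frame speed, segments frames into two crude states, and tallies state transitions. The trajectory, frame rate, and speed threshold are all invented for illustration; real pipelines use trained pose estimators and richer behavioral classifiers.

```python
import numpy as np

# Hypothetical keypoint track: (n_frames, 2) array of one body-part position
# (x, y) in pixels, standing in for the output of a markerless pose estimator.
rng = np.random.default_rng(1)
positions = np.cumsum(rng.normal(0.0, 1.5, size=(1000, 2)), axis=0)

fps = 100.0                                                       # assumed camera frame rate
speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) * fps  # px/s per frame

# Crude two-state segmentation: frames above a speed threshold count as
# "moving", the rest as "resting".
moving = speed > 100.0

# Count transitions between the two states to summarize behavioral dynamics.
transitions = np.zeros((2, 2), dtype=int)
for prev, curr in zip(moving[:-1], moving[1:]):
    transitions[int(prev), int(curr)] += 1
print("state transition counts (rest/move):\n", transitions)
```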
Other research methods
Computational models - Using a computer to formulate real-world problems to develop solutions. Although this approach is rooted in computer science, it has begun to spread to other areas of study, psychology among them. Computational models allow researchers in psychology to enhance their understanding of the functions and development of nervous systems. Examples of methods include the modelling of neurons, networks and brain systems and theoretical analysis. Computational methods have a wide variety of roles, including clarifying experiments, hypothesis testing and generating new insights. These techniques play an increasing role in the advancement of biological psychology.
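As one classic example of such modelling, the sketch below simulates a leaky integrate-and-fire neuron with simple Euler integration. The parameter values are illustrative textbook-style choices, not fitted to any particular preparation.

```python
# Euler simulation of a leaky integrate-and-fire neuron, a standard toy
# model in computational neuroscience. All parameter values are illustrative.
dt = 0.1          # time step (ms)
tau_m = 10.0      # membrane time constant (ms)
v_rest = -70.0    # resting potential (mV)
v_thresh = -55.0  # spike threshold (mV)
v_reset = -75.0   # post-spike reset potential (mV)
r_m = 10.0        # membrane resistance (MOhm)
i_ext = 1.8       # constant input current (nA)

v = v_rest
spike_times = []
for step in range(int(500 / dt)):              # simulate 500 ms
    dv = (-(v - v_rest) + r_m * i_ext) / tau_m # leaky integration toward input
    v += dv * dt
    if v >= v_thresh:                          # threshold crossing = spike
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes; mean rate = {len(spike_times) / 0.5:.0f} Hz")
```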
Limitations and advantages
Different manipulations have advantages and limitations. Neural tissue destroyed as a primary consequence of a surgery, electric shock or neurotoxin can confound the results so that the physical trauma masks changes in the fundamental neurophysiological processes of interest.
For example, when using an electrolytic probe to create a purposeful lesion in a distinct region of the rat brain, surrounding tissue can be affected: so, a change in behavior exhibited by the experimental group post-surgery is to some degree a result of damage to surrounding neural tissue, rather than of the lesion of a distinct brain region alone. Most genetic manipulation techniques are also considered permanent. Temporary lesions can be achieved with advances in genetic manipulation; for example, certain genes can now be switched on and off with diet. Pharmacological manipulations also allow the blocking of certain neurotransmitters temporarily, as function returns to its previous state after the drug has been metabolized.
Topic areas
In general, behavioral neuroscientists study various neuronal and biological processes underlying behavior, though limited by the need to use nonhuman animals. As a result, the bulk of literature in behavioral neuroscience deals with experiences and mental processes that are shared across different animal models such as:
Sensation and perception
Motivated behavior (hunger, thirst, sex)
Control of movement
Learning and memory
Sleep and biological rhythms
Emotion
However, with increasing technical sophistication and with the development of more precise noninvasive methods that can be applied to human subjects, behavioral neuroscientists are beginning to contribute to other classical topic areas of psychology, philosophy, and linguistics, such as:
Language
Reasoning and decision making
Consciousness
Behavioral neuroscience has also had a strong history of contributing to the understanding of medical disorders, including those that fall under the purview of clinical psychology and biological psychopathology (also known as abnormal psychology). Although animal models do not exist for all mental illnesses, the field has contributed important therapeutic data on a variety of conditions, including:
Parkinson's disease, a degenerative disorder of the central nervous system that often impairs motor skills and speech.
Huntington's disease, a rare inherited neurological disorder whose most obvious symptoms are abnormal body movements and a lack of coordination. It also affects a number of mental abilities and some aspects of personality.
Alzheimer's disease, a neurodegenerative disease that, in its most common form, is found in people over the age of 65 and is characterized by progressive cognitive deterioration, together with declining activities of daily living and by neuropsychiatric symptoms or behavioral changes.
Clinical depression, a common psychiatric disorder, characterized by a persistent lowering of mood, loss of interest in usual activities and diminished ability to experience pleasure.
Schizophrenia, a psychiatric diagnosis that describes a mental illness characterized by impairments in the perception or expression of reality, most commonly manifesting as auditory hallucinations, paranoid or bizarre delusions or disorganized speech and thinking in the context of significant social or occupational dysfunction.
Autism, a brain development disorder that impairs social interaction and communication, and causes restricted and repetitive behavior, all starting before a child is three years old.
Anxiety, a physiological state characterized by cognitive, somatic, emotional, and behavioral components. These components combine to create the feelings that are typically recognized as fear, apprehension, or worry.
Drug abuse, including alcoholism.
Research on topic areas
Cognition
Behavioral neuroscientists conduct research on various cognitive processes through the use of different neuroimaging techniques. Examples of cognitive research might involve examination of neural correlates during emotional information processing, such as one study that analyzed the relationship between subjective affect and neural reactivity during sustained processing of positive (savoring) and negative (rumination) emotion. The aim of the study was to analyze whether repetitive positive thinking (seen as being beneficial) and repetitive negative thinking (significantly related to worse mental health) would have similar underlying neural mechanisms. Researchers found that the individuals who had a more intense positive affect during savoring, were also the same individuals who had a more intense negative affect during rumination. fMRI data showed similar activations in brain regions during both rumination and savoring, suggesting shared neural mechanisms between the two types of repetitive thinking. The results of the study suggest there are similarities, both subjectively and mechanistically, with repetitive thinking about positive and negative emotions. This overall suggests shared neural mechanisms by which sustained emotional processing of both positive and negative information occurs.
Stress
Research within the field of behavioral neuroscience involves looking at the complex neuroanatomy underlying different emotional processes, such as stress. Godoy et al. (2018) did so by providing an in-depth analysis of the neurobiological underpinnings of the stress response. The article features an overview of the historical development of stress research and its importance leading up to research related to both physical and psychological stressors today. The authors explored various indicators of stress and their corresponding neuroanatomical processing, along with the temporal dynamics of both acute and chronic stress and their effects on the brain. Overall, the article provides a comprehensive scientific overview of stress through a neurobiological lens, highlighting the importance of our current knowledge in stress-related research areas today.
Awards
Nobel Laureates
The following Nobel Prize winners could reasonably be considered behavioral neuroscientists or neurobiologists. (This list omits winners who were almost exclusively neuroanatomists or neurophysiologists; i.e., those that did not measure behavioral or neurobiological variables.)
Kavli Prize in Neuroscience
Ann Graybiel (1942)
Cornelia Bargmann (1961)
Winfried Denk (1957)
See also
References
External links
Biological Psychology Links
Theory of Biological Psychology (Documents No. 9 and 10 in English)
IBRO (International Brain Research Organization)
Neuropsychology
Psychoneuroimmunology | Behavioral neuroscience | [
"Biology"
] | 4,625 | [
"Behavioural sciences",
"Behavior",
"Behavioral neuroscience"
] |
605,486 | https://en.wikipedia.org/wiki/Chequamegon%E2%80%93Nicolet%20National%20Forest | The Chequamegon–Nicolet National Forest (the q is silent) is a U.S. National Forest in northern Wisconsin in the United States. Due to logging in the early part of the 20th century, very little old-growth forest remains. Some of the trees there were planted by the Civilian Conservation Corps in the 1930s. The national forest's trees and vegetation are part of the North Woods ecoregion that prevails throughout the upper Great Lakes region.
Legally two separate national forests—the Chequamegon National Forest and the Nicolet National Forest—the areas were established by presidential proclamations in 1933 and have been managed as one unit since 1998.
The Chequamegon National Forest comprises three units in the north-central part of the state. In descending order of forestland area, it is located in parts of Bayfield, Ashland, Price, Sawyer, Taylor, and Vilas counties. Forest headquarters are in Park Falls. There are local ranger district offices in Glidden, Hayward, Medford, Park Falls, and Washburn. Moquah Barrens Research Natural Area is located within the Chequamegon. Lying within the Chequamegon are two officially designated wilderness areas of the National Wilderness Preservation System. These are the Porcupine Lake Wilderness and the Rainbow Lake Wilderness.
The Nicolet National Forest covers part of northeastern Wisconsin. It is located in parts of Forest, Oconto, Florence, Vilas, Langlade, and Oneida counties. The forest headquarters are in Rhinelander. There are local ranger district offices in Eagle River, Florence, Lakewood, and Laona. Bose Lake Hemlock Hardwoods and the Franklin Lake Campground are located in the Nicolet. Lying within the Nicolet are three wildernesses: the Blackjack Springs Wilderness, the Headwaters Wilderness, and the Whisker Lake Wilderness.
Flora, fauna, and funga
Remote areas of uplands, bogs, wetlands, muskegs, rivers, streams, pine savannas, meadows and many glacial lakes are found throughout these forests. Native tree species include Acer saccharum (sugar maple), Acer rubrum (red maple), and Acer spicatum (mountain maple), white, red, and black oaks, aspen, beech, basswood, sumac, and paper, yellow, and river birch. Coniferous trees, including red, white, and jack pine, white spruce and balsam fir are abundant due to a dense second growth. Eastern hemlock are also present as this is the westernmost limit of its distribution. Tamarack/black spruce bogs, cedar swamps and alder thickets are common. Blueberries, raspberries, blackberries, cranberries, serviceberries, ferns, mosses, cattails, and mushrooms also grow here, as well as many more shrubs and wildflowers.
White-tailed deer are numerous and are hit by motorists on roads in northern Wisconsin year-round. Black bears, foxes, raccoons, rabbits, beavers, river otters, squirrels, chipmunks, pheasants, grouse and wild turkeys are popular game in the woods. Elk have been reintroduced, wolves have returned to the region on their own, and there have been sightings of moose and pine marten. Bird species include northern cardinal, blue jay, Canada jay, common raven, boreal and black-capped chickadees, black-backed and pileated woodpeckers, red-winged blackbirds, owls, ducks, common loons, bald eagles, evening grosbeaks, red and white-winged crossbills and many species of thrushes, sparrows and warblers. Brook trout, rainbow trout, and brown trout are found in many miles of excellent streams. Walleye, smallmouth and largemouth bass, crappie, northern pike, and many species of panfish make the area's lakes famous for freshwater fishing. A record muskellunge, Wisconsin's state fish, was caught in these waters. The beauty, heritage, and recreational opportunities of these forests draw thousands of tourists to the Chequamegon–Nicolet area every year.
These national forests are best known for recreation, including camping, hiking, fishing, cross country skiing, and snowmobiling.
Clam Lake in Chequamegon National Forest was also home to one of the two extremely low frequency antennae in the United States.
Gallery
See also
List of national forests of the United States
Lake Namakagon
References
External links
History on official website
National forests of Wisconsin
Parks in Wisconsin
Old-growth forests
Civilian Conservation Corps in Wisconsin
Protected areas of Ashland County, Wisconsin
Protected areas of Bayfield County, Wisconsin
Protected areas of Price County, Wisconsin
Protected areas of Sawyer County, Wisconsin
Protected areas of Taylor County, Wisconsin
Protected areas of Vilas County, Wisconsin
Protected areas of Florence County, Wisconsin
Protected areas of Forest County, Wisconsin
Protected areas of Oconto County, Wisconsin
Protected areas of Langlade County, Wisconsin
Protected areas of Oneida County, Wisconsin
1933 establishments in Wisconsin
Protected areas established in 1933 | Chequamegon–Nicolet National Forest | [
"Biology"
] | 1,040 | [
"Old-growth forests",
"Ecosystems"
] |
605,490 | https://en.wikipedia.org/wiki/Care%20perspective | In psychology, the care perspective focuses on people in terms of their connectedness with others, interpersonal communication, relationships with others, and concern for others.
See also
Carol Gilligan
Moral development
References
Human communication | Care perspective | [
"Biology"
] | 43 | [
"Human communication",
"Behavior",
"Human behavior"
] |
605,501 | https://en.wikipedia.org/wiki/Free-radical%20theory%20of%20aging | The free radical theory of aging states that organisms age because cells accumulate free radical damage over time. A free radical is any atom or molecule that has a single unpaired electron in an outer shell. While a few free radicals such as melanin are not chemically reactive, most biologically relevant free radicals are highly reactive. For most biological structures, free radical damage is closely associated with oxidative damage. Antioxidants are reducing agents, and limit oxidative damage to biological structures by neutralizing free radicals before they can react with those structures.
Strictly speaking, the free radical theory is only concerned with free radicals such as superoxide ( O2− ), but it has since been expanded to encompass oxidative damage from other reactive oxygen species (ROS) such as hydrogen peroxide (H2O2), or peroxynitrite (OONO−).
Denham Harman first proposed the free radical theory of aging in the 1950s, and in the 1970s extended the idea to implicate mitochondrial production of ROS.
In some model organisms, such as yeast and Drosophila, there is evidence that reducing oxidative damage can extend lifespan. However, in mice, only 1 of the 18 genetic alterations that block antioxidant defences (SOD-1 deletion) shortened lifespan. Similarly, in roundworms (Caenorhabditis elegans), blocking the production of the naturally occurring antioxidant superoxide dismutase has been shown to increase lifespan. Whether reducing oxidative damage below normal levels is sufficient to extend lifespan remains an open and controversial question.
Background
The free radical theory of aging was conceived by Denham Harman in the 1950s, when prevailing scientific opinion held that free radicals were too unstable to exist in biological systems. This was also before anyone invoked free radicals as a cause of degenerative diseases. Two sources inspired Harman: 1) the rate of living theory, which holds that lifespan is an inverse function of metabolic rate which in turn is proportional to oxygen consumption, and 2) Rebeca Gerschman's observation that hyperbaric oxygen toxicity and radiation toxicity could be explained by the same underlying phenomenon: oxygen free radicals. Noting that radiation causes "mutation, cancer and aging", Harman argued that oxygen free radicals produced during normal respiration would cause cumulative damage which would eventually lead to organismal loss of functionality, and ultimately death.
In later years, the free radical theory was expanded to include not only aging per se, but also age-related diseases. Free radical damage within cells has been linked to a range of disorders including cancer, arthritis, atherosclerosis, Alzheimer's disease, and diabetes. There has been some evidence to suggest that free radicals and some reactive nitrogen species trigger and increase cell death mechanisms within the body such as apoptosis and in extreme cases necrosis.
In 1972, Harman modified his original theory. In its current form, the theory proposes that reactive oxygen species (ROS) produced in the mitochondria damage certain macromolecules, including lipids, proteins and, most importantly, mitochondrial DNA. This damage then causes mutations which lead to an increase in ROS production and greatly enhance the accumulation of free radicals within cells. This mitochondrial theory has been more widely accepted as potentially playing a major role in the aging process.
Since Harman first proposed the free radical theory of aging, there have been continual modifications and extensions to his original theory.
Processes
Free radicals are atoms or molecules containing unpaired electrons. Electrons normally exist in pairs in specific orbitals in atoms or molecules. Free radicals, which contain only a single electron in any orbital, are usually unstable toward losing or picking up an extra electron, so that all electrons in the atom or molecule will be paired.
The unpaired electron does not imply charge; free radicals can be positively charged, negatively charged, or neutral.
Damage occurs when the free radical encounters another molecule and seeks to find another electron to pair its unpaired electron. The free radical often pulls an electron off a neighboring molecule, causing the affected molecule to become a free radical itself. The new free radical can then pull an electron off the next molecule, and a chemical chain reaction of radical production occurs. The free radicals produced in such reactions often terminate by removing an electron from a molecule which becomes changed or cannot function without it, especially in biology. Such an event causes damage to the molecule, and thus to the cell that contains it (since the molecule often becomes dysfunctional).
The chain reaction caused by free radicals can lead to cross-linking of atomic structures. In cases where the free radical-induced chain reaction involves base pair molecules in a strand of DNA, the DNA can become cross-linked.
Oxidative free radicals, such as the hydroxyl radical and the superoxide radical, can cause DNA damage, and such damage has been proposed to play a key role in the aging of crucial tissues. DNA damage can result in reduced gene expression, cell death and ultimately tissue dysfunction.
DNA cross-linking can in turn lead to various effects of aging, especially cancer. Other cross-linking can occur between fat and protein molecules, which leads to wrinkles. Free radicals can oxidize LDL, and this is a key event in the formation of plaque in arteries, leading to heart disease and stroke. These are examples of how the free-radical theory of aging has been used to neatly "explain" the origin of many chronic diseases.
Free radicals that are thought to be involved in the process of aging include superoxide and nitric oxide. Specifically, an increase in superoxide affects aging whereas a decrease in nitric oxide formation, or its bioavailability, does the same.
Antioxidants are helpful in reducing and preventing damage from free radical reactions because of their ability to donate electrons which neutralize the radical without forming another. Vitamin C, for example, can lose an electron to a free radical and remain stable itself by passing its unstable electron around the antioxidant molecule.
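The interplay between chain propagation and antioxidant termination can be illustrated with a toy stochastic model. The sketch below is purely illustrative: the initial radical count, number of steps, and neutralization probabilities are invented, and each radical is assumed to damage exactly one neighboring molecule per step unless an antioxidant stops it.

```python
import random

def chain_damage(steps, p_neutralize, seed=0):
    """Toy stochastic model of a radical chain reaction.

    Each step, every active radical is either neutralized by an antioxidant
    (probability p_neutralize) or abstracts an electron from a neighboring
    molecule, damaging it and passing the radical state along.
    """
    rng = random.Random(seed)
    radicals, damaged = 10, 0          # hypothetical initial radical count
    for _ in range(steps):
        surviving = 0
        for _ in range(radicals):
            if rng.random() > p_neutralize:
                damaged += 1           # one more molecule damaged...
                surviving += 1         # ...and the radical state propagates
        radicals = surviving
        if radicals == 0:              # all chains terminated
            break
    return damaged

# Higher antioxidant activity (larger p_neutralize) shortens the chains,
# so far fewer molecules end up damaged.
for p in (0.2, 0.6):
    print(f"p_neutralize={p}: {chain_damage(200, p)} molecules damaged")
```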
Modifications of the theory
One of the main criticisms of the free radical theory of aging is directed at the suggestion that free radicals are responsible for the damage of biomolecules, thus being a major reason for cellular senescence and organismal aging. Several modifications have been proposed to integrate current research into the overall theory.
Mitochondria
The mitochondrial theory of aging was first proposed in 1978, and two years later, the mitochondrial free-radical theory of aging was introduced. The theory implicates the mitochondria as the chief target of radical damage, since there is a known chemical mechanism by which mitochondria can produce ROS, mitochondrial components such as mtDNA are not as well protected as nuclear DNA, and studies comparing damage to nuclear and mitochondrial DNA demonstrate higher levels of radical damage on the mitochondrial molecules. Electrons may escape from metabolic processes in the mitochondria, such as the electron transport chain, and these electrons may in turn react with water to form ROS such as the superoxide radical, or, via an indirect route, the hydroxyl radical. These radicals then damage the mitochondria's DNA and proteins, and these damaged components in turn are more liable to produce ROS byproducts. Thus a positive feedback loop of oxidative stress is established that, over time, can lead to the deterioration of cells and later organs and the entire body.
This theory has been widely debated and it is still unclear how ROS induced mtDNA mutations develop. Conte et al. suggest iron-substituted zinc fingers may generate free radicals due to the zinc finger proximity to DNA and thus lead to DNA damage.
Afanas'ev suggests the superoxide dismutation activity of CuZnSOD demonstrates an important link between life span and free radicals. The link between CuZnSOD and life span was demonstrated by Perez et al. who indicated mice life span was affected by the deletion of the Sod1 gene which encodes CuZnSOD.
Contrary to the usually observed association between mitochondrial ROS (mtROS) and a decline in longevity, Yee et al. recently observed increased longevity mediated by mtROS signaling in an apoptosis pathway. This serves to support the possibility that observed correlations between ROS damage and aging are not necessarily indicative of the causal involvement of ROS in the aging process but are more likely due to their modulating signal transduction pathways that are part of cellular responses to the aging process.
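The runaway character of the feedback loop posited above can be illustrated with a toy numerical model. The sketch below is not a validated biological model: the variables, rate constants, and linear feedback form are all invented for illustration, and real mitochondrial dynamics are far more complex.

```python
# Toy Euler integration of the feedback loop posited by the mitochondrial
# free-radical theory: ROS cause damage, and damaged mitochondria produce
# more ROS. All parameters are invented for illustration only.
dt = 0.01                      # arbitrary time units
baseline_ros = 1.0             # ROS output of undamaged mitochondria
feedback = 0.08                # extra ROS per unit of accumulated damage
clearance = 0.9                # antioxidant clearance rate of ROS
damage_rate = 0.05             # damage accrued per unit ROS

ros, damage = baseline_ros, 0.0
for step in range(int(100 / dt)):
    d_ros = baseline_ros + feedback * damage - clearance * ros
    d_damage = damage_rate * ros
    ros += d_ros * dt
    damage += d_damage * dt

print(f"after t=100: ROS={ros:.2f}, damage={damage:.2f}")
# With feedback > 0, damage grows exponentially unless clearance rises
# with it, mirroring the theory's runaway oxidative-stress loop.
```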
Epigenetic oxidative redox shift
Brewer proposed a theory which integrates the free radical theory of aging with the insulin signalling effects in aging. Brewer's theory suggests "sedentary behaviour associated with age triggers an oxidized redox shift and impaired mitochondrial function". This mitochondrial impairment leads to more sedentary behaviour and accelerated aging.
Metabolic stability
The metabolic stability theory of aging suggests it is the cells ability to maintain stable concentration of ROS which is the primary determinant of lifespan. This theory criticizes the free radical theory because it ignores that ROS are specific signalling molecules which are necessary for maintaining normal cell functions.
Mitohormesis
Oxidative stress may promote life expectancy of Caenorhabditis elegans by inducing a secondary response to initially increased levels of ROS. In mammals, the question of the net effect of reactive oxygen species on aging is even less clear. Recent epidemiological findings support the process of mitohormesis in humans, and even suggest that the intake of exogenous antioxidants may increase disease prevalence in humans (according to the theory, because they prevent the stimulation of the organism's natural response to the oxidant compounds which not only neutralizes them but provides other benefits as well).
Challenges
Birds
Among birds, parrots live about five times longer than quail. ROS production in heart, skeletal muscle, liver and intact erythrocytes was found to be similar in parrots and quail and showed no correspondence with the longevity difference. The authors concluded that these findings cast doubt on the robustness of the oxidative stress theory of aging.
See also
Life extension
List of life extension-related topics
Senescence
Mitochondrial theory of ageing
References
Senescence
Theories of ageing
Theories of biological ageing
Proximate theories of biological ageing | Free-radical theory of aging | [
"Chemistry",
"Biology"
] | 2,108 | [
"Free radicals",
"Senescence",
"Cellular processes",
"Biomolecules",
"Theories of biological ageing",
"Metabolism"
] |
605,557 | https://en.wikipedia.org/wiki/Espada%20Acequia | The Espada Acequia, or Piedras Creek Aqueduct, was built by Franciscan friars in 1731 in what is now San Antonio, Texas, United States. It was built to supply irrigation water to the lands near Mission San Francisco de la Espada, today part of San Antonio Missions National Historical Park. The acequia is still in use today and is a National Historic Civil Engineering Landmark and a National Historic Landmark.
Irrigation system
Mission Espada's acequia (irrigation) system can still be seen today. The main ditch, or acequia madre, continues to carry water to the mission and its former farmlands. This water is still used by residents living on these neighboring lands.
The initial survival of a new mission depended upon the planting and harvesting of crops. In south central Texas, intermittent rainfall and the need for a reliable water source made the design and installation of an acequia system a high priority. Irrigation was so important to Spanish colonial settlers that they measured cropland in suertes, the amount of land that could be watered in one day.
The use of acequias was originally brought to the arid regions of Spain by the Romans and the Moors. When Franciscan missionaries arrived in the desert Southwest they found the system worked well in the hot, dry environment. In some areas, like New Mexico, it blended in easily with the irrigation system already in use by the Puebloan Native Americans.
In order to distribute water to the missions along the San Antonio River, Franciscan missionaries oversaw the construction of seven gravity-flow ditches, dams, and at least one aqueduct, a network that irrigated the missions' surrounding farmland. The acequia not only supplied potable and irrigation water, but also powered a mill.
Mission Espada has survived from its beginnings to the present day as a community center that still supports a Catholic parish and religious education; however, a school originally opened by the Sisters of the Incarnate Word and Blessed Sacrament was closed in 1967.
References
External links
Buildings and structures in San Antonio
History of San Antonio
National Historic Landmarks in Texas
National Register of Historic Places in San Antonio
Historic American Buildings Survey in Texas
Historic American Engineering Record in Texas
Irrigation projects
Irrigation in the United States
Water supply infrastructure on the National Register of Historic Places
Historic Civil Engineering Landmarks
Spanish missions in Texas
Colonial United States (Spanish)
San Antonio Missions National Historical Park
1730s in Texas
1731 establishments in the Spanish Empire
Individually listed contributing properties to historic districts on the National Register in Texas
San Antonio River | Espada Acequia | [
"Engineering"
] | 506 | [
"Civil engineering",
"Irrigation projects",
"Historic Civil Engineering Landmarks"
] |