Scientific law
Scientific laws or laws of science are statements, based on repeated experiments or observations, that describe or predict a range of natural phenomena. The term law has diverse usage in many cases (approximate, accurate, broad, or narrow) across all fields of natural science (physics, chemistry, astronomy, geoscience, biology). Laws are developed from data and can be further developed through mathematics; in all cases they are directly or indirectly based on empirical evidence. It is generally understood that they implicitly reflect, though they do not explicitly assert, causal relationships fundamental to reality, and are discovered rather than invented.
Scientific laws summarize the results of experiments or observations, usually within a certain range of application. In general, the accuracy of a law does not change when a new theory of the relevant phenomenon is worked out, but rather the scope of the law's application, since the mathematics or statement representing the law does not change. As with other kinds of scientific knowledge, scientific laws do not express absolute certainty in the way mathematical theorems do. A scientific law may be contradicted, restricted, or extended by future observations.
A law can often be formulated as one or several statements or equations, so that it can predict the outcome of an experiment. Laws differ from hypotheses and postulates, which are proposed during the scientific process before and during validation by experiment and observation. Hypotheses and postulates are not laws, since they have not been verified to the same degree, although they may lead to the formulation of laws. Laws are narrower in scope than scientific theories, which may entail one or several laws. Science distinguishes a law or theory from facts. Calling a law a fact is ambiguous, an overstatement, or an equivocation. The nature of scientific laws has been much discussed in philosophy, but in essence scientific laws are simply empirical conclusions reached by scientific method; they are intended to be neither laden with ontological commitments nor statements of logical absolutes.
Social sciences such as economics have also attempted to formulate scientific laws, though these generally have much less predictive power.
Overview
A scientific law always applies to a physical system under repeated conditions, and it implies that there is a causal relationship involving the elements of the system. Factual and well-confirmed statements like "Mercury is liquid at standard temperature and pressure" are considered too specific to qualify as scientific laws. A central problem in the philosophy of science, going back to David Hume, is that of distinguishing causal relationships (such as those implied by laws) from principles that arise due to constant conjunction.
Laws differ from scientific theories in that they do not posit a mechanism or explanation of phenomena: they are merely distillations of the results of repeated observation. As such, the applicability of a law is limited to circumstances resembling those already observed, and the law may be found to be false when extrapolated. Ohm's law only applies to linear networks; Newton's law of universal gravitation only applies in weak gravitational fields; the early laws of aerodynamics, such as Bernoulli's principle, do not apply in the case of compressible flow such as occurs in transonic and supersonic flight; Hooke's law only applies to strain below the elastic limit; Boyle's law applies with perfect accuracy only to the ideal gas, etc. These laws remain useful, but only under the specified conditions where they apply.
Many laws take mathematical forms, and thus can be stated as an equation; for example, the law of conservation of energy can be written as dE/dt = 0, where E is the total amount of energy in the universe. Similarly, the first law of thermodynamics can be written as dU = δQ − δW, and Newton's second law can be written as F = dp/dt. While these scientific laws explain what our senses perceive, they are still empirical (acquired by observation or scientific experiment) and so are not like mathematical theorems which can be proved purely by mathematics.
Like theories and hypotheses, laws make predictions; specifically, they predict that new observations will conform to the given law. Laws can be falsified if they are found in contradiction with new data.
Some laws are only approximations of other more general laws, and are good approximations with a restricted domain of applicability. For example, Newtonian dynamics (which is based on Galilean transformations) is the low-speed limit of special relativity (since the Galilean transformation is the low-speed approximation to the Lorentz transformation). Similarly, the Newtonian gravitation law is a low-mass approximation of general relativity, and Coulomb's law is an approximation to quantum electrodynamics at large distances (compared to the range of weak interactions). In such cases it is common to use the simpler, approximate versions of the laws, instead of the more accurate general laws.
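As a minimal numerical sketch (illustrative values, not taken from the article) of how the Newtonian description emerges as a low-speed limit: the Lorentz factor γ stays extremely close to 1 at everyday speeds, so the relativistic momentum γmv is practically indistinguishable from the Newtonian mv.

```python
import math

# Sketch: how far the Lorentz factor gamma departs from 1 at various speeds.
c = 299_792_458.0                      # speed of light, m/s
m = 1.0                                # an illustrative 1 kg mass

for v in (30.0, 3.0e3, 3.0e7):         # ~highway speed, ~3 km/s, ~10% of c
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    # gamma - 1 is also the fractional difference between gamma*m*v and m*v
    print(f"v = {v:9.1e} m/s   gamma - 1 = {gamma - 1.0:.3e}")
```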
Laws are constantly being tested experimentally to increasing degrees of precision, which is one of the main goals of science. The fact that laws have never been observed to be violated does not preclude testing them at increased accuracy or in new kinds of conditions to confirm whether they continue to hold, or whether they break, and what can be discovered in the process. It is always possible for laws to be invalidated or proven to have limitations, by repeatable experimental evidence, should any be observed. Well-established laws have indeed been invalidated in some special cases, but the new formulations created to explain the discrepancies generalize upon, rather than overthrow, the originals. That is, the invalidated laws have been found to be only close approximations, to which other terms or factors must be added to cover previously unaccounted-for conditions, e.g. very large or very small scales of time or space, enormous speeds or masses, etc. Thus, rather than unchanging knowledge, physical laws are better viewed as a series of improving and more precise generalizations.
Properties
Scientific laws are typically conclusions based on repeated scientific experiments and observations over many years, which have become accepted universally within the scientific community. A scientific law is "inferred from particular facts, applicable to a defined group or class of phenomena, and expressible by the statement that a particular phenomenon always occurs if certain conditions be present". The production of a summary description of our environment in the form of such laws is a fundamental aim of science.
Several general properties of scientific laws, particularly when referring to laws in physics, have been identified. Scientific laws are:
True, at least within their regime of validity. By definition, there have never been repeatable contradicting observations.
Universal. They appear to apply everywhere in the universe.
Simple. They are typically expressed in terms of a single mathematical equation.
Absolute. Nothing in the universe appears to affect them.
Stable. Unchanged since first discovered (although they may have been shown to be approximations of more accurate laws).
All-encompassing. Everything in the universe apparently must comply with them (according to observations).
Generally conservative of quantity.
Often expressions of existing homogeneities (symmetries) of space and time.
Typically theoretically reversible in time (if non-quantum), although time itself is irreversible.
Broad. In physics, laws exclusively refer to the broad domain of matter, motion, energy, and force itself, rather than more specific systems in the universe, such as living systems, e.g. the mechanics of the human body.
The term "scientific law" is traditionally associated with the natural sciences, though the social sciences also contain laws. For example, Zipf's law is a law in the social sciences which is based on mathematical statistics. In these cases, laws may describe general trends or expected behaviors rather than being absolutes.
In natural science, impossibility assertions come to be widely accepted as overwhelmingly probable rather than considered proved to the point of being unchallengeable. The basis for this strong acceptance is a combination of extensive evidence of something not occurring, combined with an underlying theory, very successful in making predictions, whose assumptions lead logically to the conclusion that something is impossible. While an impossibility assertion in natural science can never be absolutely proved, it could be refuted by the observation of a single counterexample. Such a counterexample would require that the assumptions underlying the theory that implied the impossibility be re-examined.
Some examples of widely accepted impossibilities in physics are perpetual motion machines, which violate the law of conservation of energy, exceeding the speed of light, which violates the implications of special relativity, the uncertainty principle of quantum mechanics, which asserts the impossibility of simultaneously knowing both the position and the momentum of a particle, and Bell's theorem: no physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics.
Laws as consequences of mathematical symmetries
Some laws reflect mathematical symmetries found in nature (e.g. the Pauli exclusion principle reflects identity of electrons, conservation laws reflect homogeneity of space and time, and Lorentz transformations reflect rotational symmetry of spacetime). Many fundamental physical laws are mathematical consequences of various symmetries of space, time, or other aspects of nature. Specifically, Noether's theorem connects some conservation laws to certain symmetries. For example, conservation of energy is a consequence of the shift symmetry of time (no moment of time is different from any other), while conservation of momentum is a consequence of the symmetry (homogeneity) of space (no place in space is special, or different from any other). The indistinguishability of all particles of each fundamental type (say, electrons, or photons) results in the Dirac and Bose quantum statistics which in turn result in the Pauli exclusion principle for fermions and in Bose–Einstein condensation for bosons. Special relativity uses rapidity to express motion according to the symmetries of hyperbolic rotation, a transformation mixing space and time. Symmetry between inertial and gravitational mass results in general relativity.
The inverse square law of interactions mediated by massless bosons is the mathematical consequence of the 3-dimensionality of space.
One strategy in the search for the most fundamental laws of nature is to search for the most general mathematical symmetry group that can be applied to the fundamental interactions.
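As a concrete instance of the connection described above (a standard textbook statement, not a formula taken from this article): if the Lagrangian has no explicit time dependence, the associated energy function is conserved.

```latex
% Time-translation symmetry implies energy conservation (Noether's theorem, standard form)
\frac{\partial L}{\partial t} = 0
\quad \Longrightarrow \quad
E = \sum_i \dot{q}_i \, \frac{\partial L}{\partial \dot{q}_i} - L
\quad \text{is conserved:} \quad
\frac{dE}{dt} = 0 .
```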
Laws of physics
Conservation laws
Conservation and symmetry
Conservation laws are fundamental laws that follow from the homogeneity of space, time and phase, in other words symmetry.
Noether's theorem: Any quantity with a continuously differentiable symmetry in the action has an associated conservation law.
Conservation of mass was the first law of this type to be understood, since most macroscopic physical processes involving masses, for example collisions of massive particles or fluid flow, give the appearance that mass is conserved. Mass conservation was observed to hold for all chemical reactions. In general, however, it is only approximate: with the advent of relativity and experiments in nuclear and particle physics it became clear that mass can be transformed into energy and vice versa, so mass is not always conserved but is part of the more general conservation of mass–energy.
Conservation of energy, momentum and angular momentum for isolated systems follow, respectively, from symmetry under translation in time, translation in space, and rotation.
Conservation of charge was also recognized, since charge has never been observed to be created or destroyed, but only to move from place to place.
Continuity and transfer
Conservation laws can be expressed using the general continuity equation (for a conserved quantity), which can be written in differential form as:
∂ρ/∂t + ∇⋅J = 0
where ρ is some quantity per unit volume, J is the flux of that quantity (change in quantity per unit time per unit area). Intuitively, the divergence (denoted ∇⋅) of a vector field is a measure of flux diverging radially outwards from a point, so the negative is the amount piling up at a point; hence the rate of change of density in a region of space must be the amount of flux leaving or collecting in some region (see the main article for details). In the table below, the fluxes for various physical quantities in transport, and their associated continuity equations, are collected for comparison.
{| class="wikitable" align="center"
|-
! scope="col" style="width:150px;"| Physics, conserved quantity
! scope="col" style="width:140px;"| Conserved quantity q
! scope="col" style="width:140px;"| Volume density ρ (of q)
! scope="col" style="width:140px;"| Flux J (of q)
! scope="col" style="width:10px;"| Equation
|-
| Hydrodynamics, fluids
| m = mass (kg)
| ρ = volume mass density (kg m⁻³)
| ρ u, where
u = velocity field of fluid (m s⁻¹)
| ∂ρ/∂t + ∇⋅(ρ u) = 0
|-
| Electromagnetism, electric charge
| q = electric charge (C)
| ρ = volume electric charge density (C m⁻³)
| J = electric current density (A m⁻²)
| ∂ρ/∂t + ∇⋅J = 0
|-
| Thermodynamics, energy
| E = energy (J)
| u = volume energy density (J m⁻³)
| q = heat flux (W m⁻²)
| ∂u/∂t + ∇⋅q = 0
|-
| Quantum mechanics, probability
| P = P(r, t) = ∫|Ψ|² d³r = probability distribution
| ρ = ρ(r, t) = |Ψ|² = probability density function (m⁻³),
Ψ = wavefunction of quantum system
| j = probability current/flux
| ∂ρ/∂t + ∇⋅j = 0
|}
More general equations are the convection–diffusion equation and Boltzmann transport equation, which have their roots in the continuity equation.
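A minimal numerical sketch of the continuity equation above (the uniform flow speed and the Gaussian density profile are chosen purely for illustration): a density profile carried along by a uniform flow satisfies ∂ρ/∂t + ∂(ρu)/∂x = 0, which finite differences can confirm.

```python
import numpy as np

# Sketch: for uniform advection, rho(x, t) = f(x - u*t) satisfies the 1-D
# continuity equation d(rho)/dt + d(rho*u)/dx = 0. Check it with finite differences.
u = 2.0                                      # illustrative constant flow speed
x = np.linspace(0.0, 10.0, 2001)
dx = x[1] - x[0]
dt = 1.0e-4

def rho(xs, t):
    return np.exp(-(xs - 5.0 - u * t) ** 2)  # a Gaussian pulse carried by the flow

drho_dt = (rho(x, dt) - rho(x, -dt)) / (2.0 * dt)   # central difference in time
dflux_dx = np.gradient(rho(x, 0.0) * u, dx)         # central difference in space

print(np.max(np.abs(drho_dt + dflux_dx)))           # ~0, up to discretisation error
```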
Laws of classical mechanics
Principle of least action
Classical mechanics, including Newton's laws, Lagrange's equations, Hamilton's equations, etc., can be derived from the following principle:
δS = 0
where S = ∫ L(q, q̇, t) dt is the action: the integral of the Lagrangian
L = T − V
of the physical system between two times t1 and t2. The kinetic energy of the system is T (a function of the rate of change of the configuration of the system), and potential energy is V (a function of the configuration and its rate of change). The configuration of a system which has N degrees of freedom is defined by generalized coordinates q = (q1, q2, ... qN).
There are generalized momenta conjugate to these coordinates, p = (p1, p2, ..., pN), where:
pi = ∂L/∂q̇i
The action and Lagrangian both contain the dynamics of the system for all times. The term "path" simply refers to a curve traced out by the system in terms of the generalized coordinates in the configuration space, i.e. the curve q(t), parameterized by time (see also parametric equation for this concept).
The action is a functional rather than a function, since it depends on the Lagrangian, and the Lagrangian depends on the path q(t), so the action depends on the entire "shape" of the path for all times (in the time interval from t1 to t2). Between two instants of time, there are infinitely many paths, but one for which the action is stationary (to the first order) is the true path. The stationary value for the entire continuum of Lagrangian values corresponding to some path, not just one value of the Lagrangian, is required (in other words it is not as simple as "differentiating a function and setting it to zero, then solving the equations to find the points of maxima and minima etc", rather this idea is applied to the entire "shape" of the function, see calculus of variations for more details on this procedure).
Notice L is not the total energy E of the system due to the difference, rather than the sum:
L = T − V, whereas E = T + V.
The following general approaches to classical mechanics are summarized below in the order of establishment. They are equivalent formulations. Newton's is commonly used due to simplicity, but Hamilton's and Lagrange's equations are more general, and their range can extend into other branches of physics with suitable modifications.
{| class="wikitable" align="center"
|-
! scope="col" style="width:600px;" colspan="2"| Laws of motion
|-
|colspan="2" |Principle of least action:
|- valign="top"
| rowspan="2" scope="col" style="width:300px;"|The Euler–Lagrange equations are:
Using the definition of generalized momentum, there is the symmetry:
| style="width:300px;"| Hamilton's equations
The Hamiltonian as a function of generalized coordinates and momenta has the general form:
|-
|Hamilton–Jacobi equation: H(q, ∂S/∂q, t) = −∂S/∂t
|- style="border-top: 3px solid;"
| colspan="2" scope="col" style="width:600px;"| Newton's laws
Newton's laws of motion
They are the low-speed limit of relativistic mechanics. Alternative formulations of Newtonian mechanics are Lagrangian and Hamiltonian mechanics.
The laws can be summarized by two equations (since the 1st is a special case of the 2nd, zero resultant force and hence zero acceleration):
F = dp/dt,   Fij = −Fji
where p = momentum of body, Fij = force on body i by body j, Fji = force on body j by body i.
For a dynamical system the two equations (effectively) combine into one:
mi d²ri/dt² = FE + Σj≠i Fij
in which FE = resultant external force (due to any agent not part of the system). Body i does not exert a force on itself.
|}
From the above, any equation of motion in classical mechanics can be derived.
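As a small symbolic sketch (assuming SymPy is available; the one-dimensional harmonic oscillator is chosen purely as an example), the Euler–Lagrange machinery above recovers a familiar equation of motion:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, m, k = sp.symbols('t m k', positive=True)
q = sp.Function('q')

T = sp.Rational(1, 2) * m * q(t).diff(t) ** 2   # kinetic energy
V = sp.Rational(1, 2) * k * q(t) ** 2           # potential energy of a harmonic spring
L = T - V                                       # Lagrangian L = T - V

# Euler-Lagrange equation d/dt(dL/dq') - dL/dq = 0 gives m*q'' + k*q = 0
print(euler_equations(L, q(t), t))
```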
Corollaries in mechanics
Euler's laws of motion
Euler's equations (rigid body dynamics)
Corollaries in fluid mechanics
Equations describing fluid flow in various situations can be derived, using the above classical equations of motion and often conservation of mass, energy and momentum. Some elementary examples follow.
Archimedes' principle
Bernoulli's principle
Poiseuille's law
Stokes' law
Navier–Stokes equations
Faxén's law
Laws of gravitation and relativity
Some of the more famous laws of nature are found in Isaac Newton's theories of (now) classical mechanics, presented in his Philosophiae Naturalis Principia Mathematica, and in Albert Einstein's theory of relativity.
Modern laws
Special relativity
The two postulates of special relativity are not "laws" in themselves, but assumptions about the nature of physical laws with respect to relative motion.
They can be stated as "the laws of physics are the same in all inertial frames" and "the speed of light is constant and has the same value in all inertial frames".
The said postulates lead to the Lorentz transformations – the transformation law between two frames of reference moving relative to each other. For any 4-vector A, this transformation can be written A′ = ΛA, where Λ is the Lorentz transformation matrix;
this replaces the Galilean transformation law from classical mechanics. The Lorentz transformations reduce to the Galilean transformations for low velocities much less than the speed of light c.
The magnitudes of 4-vectors are invariants – not "conserved", but the same for all inertial frames (i.e. every observer in an inertial frame will agree on the same value); in particular, if A is the four-momentum, its magnitude yields the famous invariant equation for mass–energy and momentum conservation (see invariant mass):
E² = (pc)² + (mc²)²
in which the (more famous) mass–energy equivalence E = mc² is a special case.
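A small numerical sketch of the invariant above (units with c = 1 are assumed for simplicity): two back-to-back photons are individually massless, yet the pair has a nonzero invariant mass.

```python
import math

# Sketch: invariant mass from E^2 = (p c)^2 + (m c^2)^2, using c = 1 units.
c = 1.0
E1, p1 = 5.0, (5.0, 0.0, 0.0)      # photon 1: E = |p| c, so it is massless
E2, p2 = 5.0, (-5.0, 0.0, 0.0)     # photon 2, moving in the opposite direction

E = E1 + E2
p = tuple(a + b for a, b in zip(p1, p2))
p_squared = sum(component ** 2 for component in p)

m = math.sqrt((E / c ** 2) ** 2 - p_squared / c ** 2)
print(m)    # 10.0 - the two-photon system has invariant mass even though each photon has none
```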
General relativity
General relativity is governed by the Einstein field equations, which describe the curvature of space-time due to mass–energy, this curvature being equivalent to the gravitational field. Solving the equations for the geometry of space warped by the mass distribution gives the metric tensor. Using the geodesic equation, the motion of masses falling along the geodesics can be calculated.
Gravitoelectromagnetism
In a relatively flat spacetime due to weak gravitational fields, gravitational analogues of Maxwell's equations can be found; the GEM equations, to describe an analogous gravitomagnetic field. They are well established by the theory, and experimental tests form ongoing research.
{| class="wikitable" align="center"
|- valign="top"
| scope="col" style="width:300px;"|Einstein field equations (EFE):
where Λ = cosmological constant, Rμν = Ricci curvature tensor, Tμν = stress–energy tensor, gμν = metric tensor
| scope="col" style="width:300px;"|Geodesic equation:
where Γ is a Christoffel symbol of the second kind, containing the metric.
|- style="border-top: 3px solid;"
|colspan="2"| GEM Equations
If g is the gravitational field and H the gravitomagnetic field, the solutions in these limits are:
where ρ is the mass density and J is the mass current density or mass flux.
|-
|colspan="2"| In addition there is the gravitomagnetic Lorentz force:
where m is the rest mass of the particle and γ is the Lorentz factor.
|}
Classical laws
Kepler's laws, though originally discovered from planetary observations (also due to Tycho Brahe), hold for any inverse-square central force (the second law, concerning swept areas, holds for any central force).
{| class="wikitable" align="center"
|- valign="top"
| scope="col" style="width:300px;"|Newton's law of universal gravitation:
For two point masses:
For a non uniform mass distribution of local mass density ρ (r) of body of Volume V, this becomes:
| scope="col" style="width:300px;"| Gauss's law for gravity:
An equivalent statement to Newton's law is:
|- style="border-top: 3px solid;"
| colspan="2" scope="col" style="width:600px;"|Kepler's 1st law: Planets move in an ellipse, with the star at a focus
where
is the eccentricity of the elliptic orbit, of semi-major axis a and semi-minor axis b, and ℓ is the semi-latus rectum. This equation in itself is nothing physically fundamental; simply the polar equation of an ellipse in which the pole (origin of polar coordinate system) is positioned at a focus of the ellipse, where the orbited star is.
|-
| colspan="2" style="width:600px;"|Kepler's 2nd law: equal areas are swept out in equal times (area bounded by two radial distances and the orbital circumference):
where L is the orbital angular momentum of the particle (i.e. planet) of mass m about the focus of orbit,
|-
|colspan="2"|Kepler's 3rd law: The square of the orbital time period T is proportional to the cube of the semi-major axis a:
where M is the mass of the central body (i.e. star).
|}
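As a quick numerical check of Kepler's third law in the form T = 2π√(a³/(GM)) (the constants below are rounded standard values, used only for illustration):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2 (rounded)
M_sun = 1.989e30     # mass of the Sun, kg (rounded)
a = 1.496e11         # semi-major axis of Earth's orbit, m (about 1 au)

T = 2.0 * math.pi * math.sqrt(a ** 3 / (G * M_sun))
print(T / 86400.0)   # ~365 days, as expected for Earth
```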
Thermodynamics
{| class="wikitable" align="center"
|-
!colspan="2"|Laws of thermodynamics
|- valign="top"
| scope="col" style="width:150px;"|First law of thermodynamics: The change in internal energy dU in a closed system is accounted for entirely by the heat δQ absorbed by the system and the work δW done by the system:
Second law of thermodynamics: There are many statements of this law, perhaps the simplest is "the entropy of isolated systems never decreases",
meaning reversible changes have zero entropy change, irreversible process are positive, and impossible process are negative.
| rowspan="2" style="width:150px;"| Zeroth law of thermodynamics: If two systems are in thermal equilibrium with a third system, then they are in thermal equilibrium with one another.
Third law of thermodynamics:
As the temperature T of a system approaches absolute zero, the entropy S approaches a minimum value C: as T → 0, S → C.
|-
| For homogeneous systems the first and second law can be combined into the Fundamental thermodynamic relation: dU = T dS − P dV
|- style="border-top: 3px solid;"
| colspan="2" style="width:500px;"|Onsager reciprocal relations: sometimes called the fourth law of thermodynamics
|}
Newton's law of cooling
Fourier's law
Ideal gas law, combines a number of separately developed gas laws;
Boyle's law
Charles's law
Gay-Lussac's law
Avogadro's law, into one
now improved by other equations of state
Dalton's law (of partial pressures)
Boltzmann equation
Carnot's theorem
Kopp's law
Electromagnetism
Maxwell's equations give the time-evolution of the electric and magnetic fields due to electric charge and current distributions. Given the fields, the Lorentz force law is the equation of motion for charges in the fields.
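A minimal sketch (with illustrative unit values, not taken from the article) of the Lorentz force law used as an equation of motion: a charge in a uniform magnetic field, advanced with small time steps, traces out the expected circular orbit.

```python
import numpy as np

# Sketch: integrate F = q(E + v x B) for a charge in a uniform magnetic field.
q, m = 1.0, 1.0                          # illustrative unit charge and mass
E = np.array([0.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])

r = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
dt = 1.0e-3

for _ in range(int(2.0 * np.pi / dt)):   # roughly one cyclotron period T = 2*pi*m/(q*B)
    F = q * (E + np.cross(v, B))
    v = v + (F / m) * dt                 # semi-implicit Euler step
    r = r + v * dt

print(r)    # ends close to the starting point (1, 0, 0): circular motion
```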
{| class="wikitable" align="center"
|- valign="top"
| scope="col" style="width:300px;"|Maxwell's equations
Gauss's law for electricity
Gauss's law for magnetism
Faraday's law
Ampère's circuital law (with Maxwell's correction)
| scope="col" style="width:300px;"| Lorentz force law:
|- style="border-top: 3px solid;"
| colspan="2" scope="col" style="width:600px;"| Quantum electrodynamics (QED): Maxwell's equations are generally true and consistent with relativity - but they do not predict some observed quantum phenomena (e.g. light propagation as EM waves, rather than photons, see Maxwell's equations for details). They are modified in QED theory.
|}
These equations can be modified to include magnetic monopoles, and are consistent with our observations of monopoles either existing or not existing; if they do not exist, the generalized equations reduce to the ones above, if they do, the equations become fully symmetric in electric and magnetic charges and currents. Indeed, there is a duality transformation where electric and magnetic charges can be "rotated into one another", and still satisfy Maxwell's equations.
Pre-Maxwell laws
These laws were found before the formulation of Maxwell's equations. They are not fundamental, since they can be derived from Maxwell's equations. Coulomb's law can be found from Gauss's law (electrostatic form) and the Biot–Savart law can be deduced from Ampère's law (magnetostatic form). Lenz's law and Faraday's law can be incorporated into the Maxwell–Faraday equation. Nonetheless they are still very effective for simple calculations.
Lenz's law
Coulomb's law
Biot–Savart law
Other laws
Ohm's law
Kirchhoff's laws
Joule's law
Photonics
Classically, optics is based on a variational principle: light travels from one point in space to another in the shortest time.
Fermat's principle
In geometric optics laws are based on approximations in Euclidean geometry (such as the paraxial approximation).
Law of reflection
Law of refraction, Snell's law
In physical optics, laws are based on physical properties of materials.
Brewster's angle
Malus's law
Beer–Lambert law
In actuality, optical properties of matter are significantly more complex and require quantum mechanics.
Laws of quantum mechanics
Quantum mechanics has its roots in postulates. This leads to results which are not usually called "laws", but hold the same status, in that all of quantum mechanics follows from them. These postulates can be summarized as follows:
The state of a physical system, be it a particle or a system of many particles, is described by a wavefunction.
Every physical quantity is described by an operator acting on the system; the measured quantity has a probabilistic nature.
The wavefunction obeys the Schrödinger equation. Solving this wave equation predicts the time-evolution of the system's behavior, analogous to solving Newton's laws in classical mechanics.
Two identical particles, such as two electrons, cannot be distinguished from one another by any means. Physical systems are classified by their symmetry properties.
These postulates in turn imply many other phenomena, e.g., uncertainty principles and the Pauli exclusion principle.
{| class="wikitable" align="center"
|- valign="top"
| style="width:300px;"| Quantum mechanics, Quantum field theory
Schrödinger equation (general form): Describes the time dependence of a quantum mechanical system:
iħ d/dt |ψ⟩ = H |ψ⟩
The Hamiltonian (in quantum mechanics) H is a self-adjoint operator acting on the state space, |ψ⟩ (see Dirac notation) is the instantaneous quantum state vector at time t, position r, i is the unit imaginary number, ħ is the reduced Planck constant.
| rowspan="2" scope="col" style="width:300px;"|Wave–particle duality
Planck–Einstein law: the energy of photons is proportional to the frequency of the light (the constant is the Planck constant, h): E = hν
De Broglie wavelength: this laid the foundations of wave–particle duality, and was the key concept in the Schrödinger equation: λ = h/p
Heisenberg uncertainty principle: Uncertainty in position multiplied by uncertainty in momentum is at least half of the reduced Planck constant, and similarly for time and energy: Δx Δp ≥ ħ/2,   ΔE Δt ≥ ħ/2
The uncertainty principle can be generalized to any pair of observables – see the main article.
|-
| Wave mechanics
Schrödinger equation (original form): iħ ∂Ψ/∂t = −(ħ²/2m)∇²Ψ + VΨ
|- style="border-top: 3px solid;"
| colspan="2" style="width:600px;"| Pauli exclusion principle: No two identical fermions can occupy the same quantum state (bosons can). Mathematically, if two particles are interchanged, fermionic wavefunctions are anti-symmetric, while bosonic wavefunctions are symmetric:
where ri is the position of particle i, and s is the spin of the particle. There is no way to keep track of particles physically, labels are only used mathematically to prevent confusion.
|}
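A small numerical sketch of the uncertainty relation quoted in the table above (ħ is set to 1, and the Gaussian width is an arbitrary illustrative choice): a Gaussian wave packet saturates the bound, giving Δx·Δp = ħ/2 up to discretisation error.

```python
import numpy as np

hbar = 1.0
x = np.linspace(-20.0, 20.0, 4096)
dx = x[1] - x[0]
sigma = 1.3                                          # arbitrary width of the packet

psi = np.exp(-x ** 2 / (4.0 * sigma ** 2))
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)        # normalise: integral of |psi|^2 dx = 1

def spread(values, prob, weight):
    mean = np.sum(values * prob) * weight
    return np.sqrt(np.sum((values - mean) ** 2 * prob) * weight)

delta_x = spread(x, np.abs(psi) ** 2, dx)

k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)       # wavenumbers; momentum p = hbar * k
dk = 2.0 * np.pi / (x.size * dx)
prob_k = np.abs(np.fft.fft(psi)) ** 2
prob_k /= np.sum(prob_k) * dk                        # normalise the momentum-space density

delta_p = hbar * spread(k, prob_k, dk)
print(delta_x * delta_p)                             # ~0.5, i.e. hbar/2
```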
Radiation laws
Applying electromagnetism, thermodynamics, and quantum mechanics, to atoms and molecules, some laws of electromagnetic radiation and light are as follows.
Stefan–Boltzmann law
Planck's law of black-body radiation
Wien's displacement law
Radioactive decay law
Laws of chemistry
Chemical laws are those laws of nature relevant to chemistry. Historically, observations led to many empirical laws, though now it is known that chemistry has its foundations in quantum mechanics.
Quantitative analysis
The most fundamental concept in chemistry is the law of conservation of mass, which states that there is no detectable change in the quantity of matter during an ordinary chemical reaction. Modern physics shows that it is actually energy that is conserved, and that energy and mass are related; a concept which becomes important in nuclear chemistry. Conservation of energy leads to the important concepts of equilibrium, thermodynamics, and kinetics.
Additional laws of chemistry elaborate on the law of conservation of mass. Joseph Proust's law of definite composition says that pure chemicals are composed of elements in a definite formulation; we now know that the structural arrangement of these elements is also important.
Dalton's law of multiple proportions says that these chemicals will present themselves in proportions that are small whole numbers; although in many systems (notably biomacromolecules and minerals) the ratios tend to require large numbers, and are frequently represented as a fraction.
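A toy arithmetic illustration of the law of multiple proportions (rounded standard atomic masses, chosen only for the example): the masses of oxygen combining with a fixed mass of carbon in carbon monoxide and carbon dioxide stand in a small whole-number ratio.

```python
# Masses of oxygen that combine with a fixed 12 g of carbon (rounded values):
mass_O_in_CO = 16.0     # grams of O per 12 g of C in carbon monoxide (CO)
mass_O_in_CO2 = 32.0    # grams of O per 12 g of C in carbon dioxide (CO2)

print(mass_O_in_CO2 / mass_O_in_CO)   # 2.0 - a small whole-number ratio, as the law states
```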
The law of definite composition and the law of multiple proportions are the first two of the three laws of stoichiometry, the proportions by which the chemical elements combine to form chemical compounds. The third law of stoichiometry is the law of reciprocal proportions, which provides the basis for establishing equivalent weights for each chemical element. Elemental equivalent weights can then be used to derive atomic weights for each element.
More modern laws of chemistry define the relationship between energy and its transformations.
Reaction kinetics and equilibria
In equilibrium, molecules exist in a mixture defined by the transformations possible on the timescale of the equilibrium, and are in a ratio defined by the intrinsic energy of the molecules—the lower the intrinsic energy, the more abundant the molecule. Le Chatelier's principle states that the system opposes changes in conditions from equilibrium states, i.e. there is an opposition to changing the state of an equilibrium reaction.
Transforming one structure to another requires the input of energy to cross an energy barrier; this can come from the intrinsic energy of the molecules themselves, or from an external source which will generally accelerate transformations. The higher the energy barrier, the slower the transformation occurs.
There is a hypothetical intermediate, or transition structure, that corresponds to the structure at the top of the energy barrier. The Hammond–Leffler postulate states that this structure looks most similar to the product or starting material which has intrinsic energy closest to that of the energy barrier. Stabilizing this hypothetical intermediate through chemical interaction is one way to achieve catalysis.
All chemical processes are reversible (law of microscopic reversibility), although some processes have such an energy bias that they are essentially irreversible.
The reaction rate has the mathematical parameter known as the rate constant. The Arrhenius equation gives the temperature and activation energy dependence of the rate constant, an empirical law.
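A small sketch of the Arrhenius equation k = A·exp(−Ea/(RT)); the pre-exponential factor and activation energy below are hypothetical values chosen only to show the strong temperature dependence of the rate constant.

```python
import math

R = 8.314            # gas constant, J mol^-1 K^-1
A = 1.0e13           # pre-exponential factor, s^-1 (hypothetical)
Ea = 75_000.0        # activation energy, J mol^-1 (hypothetical)

def rate_constant(T):
    """Arrhenius rate constant at absolute temperature T (in kelvin)."""
    return A * math.exp(-Ea / (R * T))

# With these values the rate roughly triples for a 10 K rise near room temperature.
print(rate_constant(298.0), rate_constant(308.0))
```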
Thermochemistry
Dulong–Petit law
Gibbs–Helmholtz equation
Hess's law
Gas laws
Raoult's law
Henry's law
Chemical transport
Fick's laws of diffusion
Graham's law
Lamm equation
Laws of biology
Ecology
Competitive exclusion principle or Gause's law
Genetics
Mendelian laws (Dominance and Uniformity, segregation of genes, and Independent Assortment)
Hardy–Weinberg principle
Natural selection
Whether or not Natural Selection is a “law of nature” is controversial among biologists. Henry Byerly, an American philosopher known for his work on evolutionary theory, discussed the problem of interpreting a principle of natural selection as a law. He suggested a formulation of natural selection as a framework principle that can contribute to a better understanding of evolutionary theory. His approach was to express relative fitness, the propensity of a genotype to increase in proportionate representation in a competitive environment, as a function of adaptedness (adaptive design) of the organism.
Laws of Earth sciences
Geography
Arbia's law of geography
Tobler's first law of geography
Tobler's second law of geography
Geology
Archie's law
Buys Ballot's law
Birch's law
Byerlee's law
Principle of original horizontality
Law of superposition
Principle of lateral continuity
Principle of cross-cutting relationships
Principle of faunal succession
Principle of inclusions and components
Walther's law
Other fields
Some mathematical theorems and axioms are referred to as laws because they provide logical foundation to empirical laws.
Examples of other observed phenomena sometimes described as laws include the Titius–Bode law of planetary positions, Zipf's law of linguistics, and Moore's law of technological growth. Many of these laws fall within the scope of uncomfortable science. Other laws are pragmatic and observational, such as the law of unintended consequences. By analogy, principles in other fields of study are sometimes loosely referred to as "laws". These include Occam's razor as a principle of philosophy and the Pareto principle of economics.
History
The observation and detection of underlying regularities in nature date from prehistoric times – the recognition of cause-and-effect relationships implicitly recognises the existence of laws of nature. The recognition of such regularities as independent scientific laws per se, though, was limited by their entanglement in animism, and by the attribution of many effects that do not have readily obvious causes—such as physical phenomena—to the actions of gods, spirits, supernatural beings, etc. Observation and speculation about nature were intimately bound up with metaphysics and morality.
In Europe, systematic theorizing about nature (physis) began with the early Greek philosophers and scientists and continued into the Hellenistic and Roman imperial periods, during which times the intellectual influence of Roman law increasingly became paramount. The formula "law of nature" first appears as "a live metaphor" favored by Latin poets Lucretius, Virgil, Ovid, Manilius, in time gaining a firm theoretical presence in the prose treatises of Seneca and Pliny. Why this Roman origin? According to [historian and classicist Daryn] Lehoux's persuasive narrative, the idea was made possible by the pivotal role of codified law and forensic argument in Roman life and culture.
For the Romans ... the place par excellence where ethics, law, nature, religion and politics overlap is the law court. When we read Seneca's Natural Questions, and watch again and again just how he applies standards of evidence, witness evaluation, argument and proof, we can recognize that we are reading one of the great Roman rhetoricians of the age, thoroughly immersed in forensic method. And not Seneca alone. Legal models of scientific judgment turn up all over the place, and for example prove equally integral to Ptolemy's approach to verification, where the mind is assigned the role of magistrate, the senses that of disclosure of evidence, and dialectical reason that of the law itself.
The precise formulation of what are now recognized as modern and valid statements of the laws of nature dates from the 17th century in Europe, with the beginning of accurate experimentation and the development of advanced forms of mathematics. During this period, natural philosophers such as Isaac Newton (1642–1727) were influenced by a religious view – stemming from medieval concepts of divine law – which held that God had instituted absolute, universal and immutable physical laws. In chapter 7 of The World, René Descartes (1596–1650) described "nature" as matter itself, unchanging as created by God, thus changes in parts "are to be attributed to nature. The rules according to which these changes take place I call the 'laws of nature'." The modern scientific method which took shape at this time (with Francis Bacon (1561–1626) and Galileo (1564–1642)) contributed to a trend of separating science from theology, with minimal speculation about metaphysics and ethics. (Natural law in the political sense, conceived as universal (i.e., divorced from sectarian religion and accidents of place), was also elaborated in this period by scholars such as Grotius (1583–1645), Spinoza (1632–1677), and Hobbes (1588–1679).)
The distinction between natural law in the political-legal sense and law of nature or physical law in the scientific sense is a modern one, both concepts being equally derived from physis, the Greek word (translated into Latin as natura) for nature.
Multiple cropping
In agriculture, multiple cropping or multicropping is the practice of growing two or more crops in the same piece of land during one year, instead of just one crop. When multiple crops are grown simultaneously, this is also known as intercropping. This cropping system helps farmers to double their crop productivity and their income. However, the selection of two or more crops for multicropping depends mainly on the mutual benefit of the selected crops.
Multiple cropping can take the form of double-cropping, in which a second crop is planted after the first has been harvested. Threshing can be difficult in multiple cropping systems where crops are harvested together. In the Garhwal Himalaya of India, a practice called barahnaja involves sowing 12 or more crops on the same plot, including various types of beans, grains, and millets, and harvesting them at different times.
Benefits of multiple cropping
Adopting the practice of multiple cropping on a large scale can help reduce a country's food crises. The overall cost of inputs decreases, since spending on fertilizers, irrigation, labour, etc. is shared by two or more crops grown on the same field. The risk of weed growth and of pest and disease infestation is reduced because of the mutual relationship between the crops. This results in better farm management and increased income for the farmer. However, only 5% of global rainfed cropland is under multiple cropping, while 40% of global irrigated cropland is under multiple cropping.
In China, the land reform movement and collectivization of farming facilitated double-cropping in the south of the country, leading to a major increase in agricultural yields.
Winter storm
A winter storm is an event in which wind coincides with varieties of precipitation that only occur at freezing temperatures, such as snow, mixed snow and rain, or freezing rain. In temperate continental and subarctic climates, these storms are not necessarily restricted to the winter season, but may occur in the late autumn and early spring as well. A snowstorm with strong winds and low visibility is called a blizzard.
Formation
Winter storms are formed when moist air rises up into the atmosphere, creating low pressure near the ground and clouds up in the air. The air can also be pushed upwards by hills or large mountains. The upward motion is called lift. The moisture is collected by the wind from large bodies of water, such as a big lake or the ocean. If the temperature is below freezing near the ground and up in the clouds, precipitation will fall as snow, ice, rain and snow mixed (sleet), ice pellets or even graupel (soft hail). Since cold air cannot hold as much moisture as warm air, the total precipitation will be less than at higher temperatures.
Winter storm warnings will be issued if:
Snow accumulation is or more in 12 hours, or or more in 24 hours.
Blowing snow reduces visibility over large areas at wind speeds less than .
Ice accumulations on surfaces are or more.
Ice pellets larger than are formed.
Wind chill index is less than for more than 3 hours and sustained wind speed of at least .
Snowstorms with wind speed of more than and reduced visibility of less than for 3 hours or longer are called blizzards.
Terminology
Severe winter weather conditions called "winter storms" can be local weather fulfilling the criteria for 24 hours, or large storm systems covering part of a continent for several days. With large, massive winter storms, the weather in any part of the area covered by the extreme weather is usually called a "storm", even if the meteorological criteria for winter storms are not met everywhere. An example of this is the February 13–17, 2021 North American winter storm, with snowfall and below-freezing temperatures as far south as Texas and the Gulf of Mexico.
Snowstorm
Snowstorms are storms where large quantities of snow fall. Even a relatively small snowfall is enough to create serious disruptions to traffic and school transport (because of the difficulty of driving and manoeuvring school buses on slick roads). This is particularly true in places where snowfall is not typical but heavy accumulating snowfalls can occur. In places where snowfall is typical, such small snowfalls are rarely disruptive, because of effective snow and ice removal by municipalities, increased use of four-wheel drive and snow tires, and drivers being more used to winter conditions. Larger snowfalls are usually universally disruptive.
A large number of severe snowstorms, some of which were blizzards, occurred in the United States during 1888 and 1947 as well as the early and mid-1990s. The snowfall of 1947 was exceptional, with drifts and snow piles from plowing that lasted for months as temperatures did not rise high enough to melt the snow. The 1993 "Superstorm" manifested as a blizzard in most of the affected areas.
Severe snowstorms can be quite dangerous: a sufficient snow depth will make some unplowed roads impassable, and it is possible for cars to get stuck in the snow. Very deep snow, especially in southern or generally warm climates, will cave in the roofs of some homes and cause loss of electricity. Standing dead trees can also be brought down by the weight of the snow, especially if it is wet. Even a few inches of dry snow can form drifts many feet high under windy conditions.
Hazards from snowfall
Accumulated snow can make driving motor vehicles very hazardous. Snow on roadways reduces friction between tires and the road surface, which in turn lowers the maneuverability of a vehicle considerably. As a result, average driving speeds on public roads and highways are reduced by up to 40% while heavy snow is falling. Visibility is reduced by falling snow, and this is further exacerbated by strong winds which are commonly associated with winter storms producing heavy snowfall. In extreme cases, this may lead to prolonged whiteout conditions in which visibility is reduced to only a few feet due to falling or blowing snow. These hazards can manifest even after snowfall has ended when strong winds are present, as these winds will pick up and transport fallen snow back onto roadways and reduce visibility in the process. This can even result in blizzard conditions if winds are strong enough. Heavy snowfall can immobilize a vehicle entirely, which may be deadly depending on how long it takes rescue crews to arrive. The clogging of a vehicle's tailpipe by snow may lead to carbon monoxide buildup inside the cabin.
Depending on the temperature profile in the atmosphere, snow can be either wet or dry. Dry snow, being lighter, is transported by wind more easily and accumulates more efficiently. Wet snow is heavier due to the increased water content. Significant accumulations of heavy wet snow can cause roof damage. It also requires considerably more energy to move and this can create health problems while shoveling when combined with the harsh weather conditions. Numerous deaths as a result of heart attacks can be attributed to snow removal. Accretion of wet snow to elevated surfaces occurs when snow is "sticky" enough which can cause extensive tree and power line damage in a manner similar to ice accretion during ice storms. Power can be lost for days during a major winter storm, and this usually means the loss of heating inside buildings. Other than the obvious risk of hypothermia due to cold exposure, another deadly element associated with snowstorms is carbon monoxide poisoning which can happen anytime combustion products from generators or heating appliances are not properly vented. Partially or fully melted snow on roadways can refreeze when temperatures fall, creating black ice.
Freezing rain
Heavy showers of freezing rain are one of the most dangerous types of winter storm. They typically occur when a layer of warm air hovers over a region, but the ambient temperature a few meters above the ground is near or below freezing, and the ground temperature is sub-freezing.
While a heavy snowfall is somewhat manageable by the standards of the northern United States and Canada, a comparable amount of precipitation falling as an ice storm can paralyze a region; driving becomes extremely hazardous, telephone and power lines are damaged, and crops may be ruined.
Notable ice storms
Notable ice storms include an El Niño-related North American ice storm of 1998 that affected much of eastern Canada, including Montreal and Ottawa, as well as upstate New York and parts of upper New England. Three million people lost power, some for as long as six weeks. One-third of the trees in Montreal's Mount Royal park were damaged, as well as a large proportion of the sugar-producing maple trees. The amount of economic damage caused by the storm has been estimated at $3 billion CAD.
2000 Christmas Day Ice Storm, which caused devastating electrical issues in parts of Arkansas, Oklahoma, and Texas. The city of Texarkana, Arkansas experienced the worst damage, at one point losing the ability to use telephones, electricity and running water. In some areas of Arkansas, Oklahoma, Texas and eventually Louisiana, substantial ice accumulated from the freezing rain.
2002 North Carolina ice storm, which resulted in massive power loss throughout much of the state, and property damage due to falling trees. Except in the mountainous western part of the state, heavy snow and icy conditions are rare in North Carolina.
2005 December Ice Storm, another severe winter storm producing extensive ice damage across a large portion of the Southern United States on December 14 to 16. It led to power outages and at least 7 deaths.
2005 January winter storm in Kansas, after which President George W. Bush declared a major disaster for thirty-two counties where the ice storm caused nearly $39 million in damages. Federal funds were provided to the counties during January 4–6, 2005 to aid the recovery process.
2009 January Central Plains and Midwest ice storm, a crippling and historic ice storm. Most places struck by the storm saw significant ice accumulation, with a few inches of snow on top of it. This brought down power lines, causing some people to go without electricity for a few days to a few weeks; in some cases, electricity was out for a month or more. At the height of the storm, more than 2 million people were without electricity.
2021 Winter Storm, the deadliest winter storm since the Blizzard of 1996, impacting most of the Midwest and south-central United States. The state of Texas gained notable publicity due to the failure of the state's power grid, causing blackouts and power outages for 7–10 days across the state.
Preparing for winter storms
In countries where winter storms can occur, governments and health organizations have websites and online services with advice about how to prepare for the consequences of severe weather. Advice varies with housing standards, infrastructure and safety regulations, but some tips are the same, such as: stock up on three days of food, water, medicines and hygiene items; keep warm clothes ready; keep a flashlight and extra batteries; stay informed; help each other; and do not travel unless absolutely necessary.
Lauraceae
Lauraceae, or the laurels, is a plant family that includes the true laurel and its closest relatives. This family comprises about 2850 known species in about 45 genera worldwide. They are dicotyledons, and occur mainly in warm temperate and tropical regions, especially Southeast Asia and South America. Many are aromatic evergreen trees or shrubs, but some, such as Sassafras, are deciduous, or include both deciduous and evergreen trees and shrubs, especially in tropical and temperate climates. The genus Cassytha is unique in the Lauraceae in that its members are parasitic vines. Most laurels are highly poisonous.
Overview
The family has a worldwide distribution in tropical and warm climates. The Lauraceae are important components of tropical forests ranging from low-lying to montane. In several forested regions, Lauraceae are among the top five families in terms of the number of species present.
The Lauraceae give their name to habitats known as laurel forests, which have many trees that superficially resemble the Lauraceae, though they may belong to other plant families such as Magnoliaceae or Myrtaceae. Laurel forests of various types occur on most continents and on many major islands.
Although the taxonomy of the Lauraceae is still not settled, conservative estimates suggest some 52 genera worldwide, including 3,000 to 3,500 species. Compared to other plant families, the taxonomy of the Lauraceae is still poorly understood. This is due partly to the family's great diversity and the difficulty of identifying its species, and partly to inadequate investment in taxonomic work.
Recent monographs on small and medium-sized genera of Lauraceae (up to about 100 species) have revealed many new species. Similar increases in the numbers of species recognised in other larger genera are to be expected.
Description
Most of the Lauraceae are evergreen trees in habit. Exceptions include some two dozen species of Cassytha, all of which are obligately parasitic vines.
The fruits of Lauraceae are drupes, one-seeded fleshy fruits with a hard layer, the endocarp, surrounding the seed. However, the endocarp is very thin, so the fruit resembles a one-seeded berry. The fruit in some species (particularly in the genus Ocotea) is partly immersed or covered in a cup-shaped or deep thick cupule, which is formed from the tube of the calyx where the peduncle joins the fruit; this gives the fruit an appearance similar to an acorn. In some Lindera species, the fruit has a hypocarpium at its base.
Distribution and uses
Because the family is so ancient and was so widely distributed on the Gondwana supercontinent, modern species commonly occur in relict populations isolated by geographical barriers, for instance on islands or tropical mountains. Relict forests retain endemic fauna and flora in communities of great value in inferring the palaeontological succession and climate change that followed the breakups of the supercontinents.
Many Lauraceae contain high concentrations of essential oils, some of which are valued for spices and perfumes. Within the plants, most such substances are components of irritant or toxic sap or tissues that repel or poison many herbivorous or parasitic organisms.
Some of the essential oils are valued as fragrances, such as in the traditional laurel wreath of classical antiquity, or in cabinet making, where the fragrant woods are prized for making insect-repellant furniture chests.
Some are valued in cooking, for example, bay leaves are a popular ingredient in European, American, and Asian cuisines.
Avocados are important oil-rich fruit that are cultivated in warm climates around the world.
Many species are exploited for timber.
Some species are valued as sources of medicinal material.
These genera include some of the best-known species of particular commercial value:
Cinnamomum: cinnamon (Cinnamonum verum) and cassia (Cinnamomum cassia)
Camphora: camphor tree (Camphora officinarum)
Laurus: bay laurel (Laurus nobilis)
Persea: avocado (Persea americana)
Loss of habitat and overexploitation for such products has put many species in danger of extinction as a result of overcutting, extensive illegal logging, and habitat conversion.
Conversely, some species, though commercially valuable in some countries, are regarded as aggressive invaders in other regions. For example, Cinnamomum camphora, though a valued ornamental and medicinal plant, is so invasive as to have been declared a weed in subtropical forested areas of South Africa.
Ecology
Lauraceae flowers are protogynous, often with a complex flowering system to prevent inbreeding. The fruits are an important food source for birds, on which some Palaeognathae are highly dependent. Other birds that rely heavily on the fruit for their diets include members of the families Cotingidae, Columbidae, Trogonidae, Turdidae, and Ramphastidae, amongst others. Birds that are specialised frugivores tend to eat the whole fruit and regurgitate seeds intact, thereby releasing the seeds in favourable situations for germination (ornithochory). Some other birds that swallow the fruit pass the seed intact through their guts.
Seed dispersal of various species in the family is also carried out by monkeys, arboreal rodents, porcupines, opossums, and fishes. Hydrochory occurs in Caryodaphnopsis.
The leaves of some species in the Lauraceae have domatia in the axils of their veins. The domatia are home to certain mites. Other lauraceous species, members of the genus Pleurothyrium in particular, have a symbiotic relationship with ants that protect and defend the tree. Some Ocotea species are also used as nesting sites by ants, which may live in leaf pockets or in hollowed-out stems.
Defense mechanisms that occur among members of the Lauraceae include irritant or toxic sap or tissues that repel or poison many herbivorous organisms.
Trees of the family predominate in the world's laurel forests and cloud forests, which occur in tropical to mild temperate regions of both northern and southern hemispheres. Other members of the family, however, occur pantropically in general lowland and Afromontane forest, and in Africa for example there are species endemic to countries such as Cameroon, Sudan, Tanzania, Uganda and Congo. Several relict species in the Lauraceae occur in temperate areas of both hemispheres. Many botanical species in other families have similar foliage to the Lauraceae due to convergent evolution, and forests of such plants are called laurel forest. These plants are adapted to high rainfall and humidity, and have leaves with a generous layer of wax, making them glossy in appearance, and a narrow, pointed-oval shape with a 'drip tip', which permits the leaves to shed water despite the humidity, allowing transpiration to continue. Scientific names similar to Daphne (e.g., Daphnidium, Daphniphyllum) or "laurel" (e.g., Laureliopsis, Skimmia laureola) indicate other plant families that resemble Lauraceae.
Some Lauraceae species have adapted to demanding conditions in semiarid climates, but they tend to depend on favorable edaphic conditions, for example, perennial aquifers, periodic groundwater flows, or periodically flooded forests in sand that contains hardly any nutrients. Various species have adapted to swampy conditions by growing pneumatophores, roots that grow upward, that project above the levels of periodic floods that drown competing plants which lack such adaptations.
Paleobotanists have suggested the family originated some 174±32 million years ago (Mya), while others do not believe it is older than the mid-Cretaceous. Fossil flowers attributed to this family occur in Cenomanian clays (mid-Cretaceous, 90-98 Mya) of the Eastern United States (Mauldinia mirabilis). Fossils of Lauraceae are common in the Tertiary strata of Europe and North America, but they virtually disappeared from central Europe in the Late Miocene. Because of its unusual fragility, Lauraceae pollen does not preserve well and has been found only in relatively recent strata.
Deciduous Lauraceae lose all of their leaves for part of the year depending on variations in rainfall. The leaf loss coincides with the dry season in tropical, subtropical, and arid regions.
Laurel wilt disease, caused by the virulent fungal pathogen Raffaelea lauricola, a native of southern Asia, was found in the southeast United States in 2002. The fungus spreads between hosts via a wood-boring beetle, Xyleborus glabratus, with which it has a symbiotic relationship. Several Lauraceae species are affected. The beetle and disease are believed to have arrived in the US via infected solid wood packing material, and have since spread to several states.
Classification
Classification within the Lauraceae is not fully resolved. Multiple classification schemes based on a variety of morphological and anatomical characteristics have been proposed, but none are fully accepted. According to Judd et al. (2007), the suprageneric classification proposed by van der Werff and Richter (1996) is currently the authority. However, due to an array of molecular and embryological evidence that disagrees with the groupings, it is not fully accepted by the scientific community. Their classification is based on inflorescence structure and wood and bark anatomy. It divides Lauraceae into two subfamilies, Cassythoideae and Lauroideae. The Cassythoideae comprise a single genus, Cassytha, and are defined by their herbaceous, parasitic habit. The Lauroideae are then divided into three tribes: Laureae, Perseeae, and Cryptocaryeae.
The subfamily Cassythoideae is not fully supported. Support has come from matK sequences of chloroplast genes, while a questionable placement of Cassytha has been concluded from analysis of intergenic spacers of chloroplast and nuclear genomes. Embryological studies also appear contradictory. One study by Heo et al. (1998) supports the subfamily. It found that Cassytha develops an ab initio cellular-type endosperm and the rest of the family (with one exception) develops a nuclear-type endosperm. Kimoto et al. (2006) suggest Cassytha should be placed in the tribe Cryptocaryeae because it shares a glandular anther tapetum and an embryo sac protruding from the nucellus with other members of the Cryptocaryeae.
The tribes Laureae and Perseeae are not well supported by any molecular or embryological studies. Sequences of the matK chloroplast gene, as well as sequences of chloroplast and nuclear genomes, reveal close relationships between the two tribes. Embryological evidence does not support a clear division between the two tribes, either. Genera such as Caryodaphnopsis and Aspidostemon that share embryological characteristics with one tribe and wood and bark characteristics or inflorescence characteristics with another tribe blur the division of these groups. All available evidence, except for inflorescence morphology and wood and bark anatomy, fails to support separate tribes Laureae and Perseeae.
The tribe Cryptocaryeae is partially supported by molecular and embryological studies. Chloroplast and nuclear genomes support a tribal grouping that contains all the genera circumscribed by van der Werff and Richter (1996), as well as three additional genera. Partial support for the tribe is also obtained from the matK sequences of chloroplast genes as well as embryology.
Challenges in Lauraceae classification
Knowledge of the species that make up the Lauraceae is incomplete. In 1991, about 25-30% of neotropical Lauraceae species had not yet been described. As of 2001, embryological studies had been completed on individuals from only 26 genera, giving a 38.9% level of embryological knowledge for the family. Additionally, the large amount of variation within the family poses a major challenge for developing a reliable classification.
Phytochemistry
The adaptation of Lauraceae to new environments has followed a long evolutionary journey which has led to many specializations, including defensive or deterrent systems against other organisms.
Phytochemicals in the Lauraceae are numerous and diverse. Benzylisoquinoline alkaloids include aporphines and oxoaporphines, as well as derivatives of morphinans. Essential oils include terpenoids, benzyl benzoates, allylphenols, and propenylphenols. Lignans and neolignans are present, along with S-methyl-5-O-flavonoids, proanthocyanidins, cinnamoylamides, phenylpyrroles, styryl pyrones, polyketides (acetogenins), furanosesquiterpenes, and germacranolidous, heliangolidous, eudesmanolidous and guaianolidous sesquiterpene lactones.
Genera
Recent taxonomic revisions of the family include these genera:
These genera have traditionally been considered separate within Lauraceae, but have not been included in the most recent treatments:
Popular culture
A laurel wreath, a round or horseshoe-shaped wreath made of connected laurel branches and leaves, is an ancient symbol of triumph in classical Western culture originating in Greek mythology, and is associated in some countries with academic or literary achievement.
| Biology and health sciences | Laurales | Plants |
244980 | https://en.wikipedia.org/wiki/Chromista | Chromista | Chromista is a proposed but polyphyletic biological kingdom, refined from the Chromalveolata, consisting of single-celled and multicellular eukaryotic species that share similar features in their photosynthetic organelles (plastids). It includes all eukaryotes whose plastids contain chlorophyll c and are surrounded by four membranes. If the ancestor already possessed chloroplasts derived by endosymbiosis from red algae, all non-photosynthetic Chromista have secondarily lost the ability to photosynthesise. Its members might have arisen independently as separate evolutionary groups from the last eukaryotic common ancestor.
Chromista as a taxon was created by the British biologist Thomas Cavalier-Smith in 1981 to distinguish the stramenopiles, haptophytes, and cryptophytes. According to Cavalier-Smith, the kingdom originally consisted mostly of photosynthetic eukaryotes (algae), but he later brought many heterotrophs (protozoa) into the proposed group. As of 2018, the kingdom was nearly as diverse as the kingdoms Plantae and Animalia, consisting of eight phyla. Notable members include marine algae, potato blight, dinoflagellates, Paramecium, the brain parasite Toxoplasma, and the malarial parasite Plasmodium.
However, Cavalier-Smith's hypothesis of chromist monophyly has been rejected by other researchers, who consider it more likely that some chromists acquired their plastids by incorporating another chromist instead of inheriting them from a common ancestor. This is thought to have occurred repeatedly, so that the red plastids spread from one group to another. The plastids, far from characterising their hosts as belonging to a single clade, thus have a different history from their disparate hosts. They appear to have originated in the Rhodophytina, and to have been transmitted to the Cryptophytina and from them to both the Ochrophyta and the Haptophyta, and then from these last to the Myzozoa.
Biology
Members of Chromista are single-celled and multicellular eukaryotes that have either or both of the following features:
plastid(s) that contain chlorophyll c and lie within an extra (periplastid) membrane in the lumen of the rough endoplasmic reticulum (typically within the perinuclear cisterna);
cilia with tripartite or bipartite rigid tubular hairs.
The kingdom includes diverse organisms from algae to malarial parasites (Plasmodium). Molecular evidence indicates that the plastids in chromists were derived from red algae through secondary symbiogenesis in a single event. In contrast, plants acquired their plastids from cyanobacteria through primary symbiogenesis. These plastids are now enclosed in two extra cell membranes, making a four-membrane envelope, as a result of which they acquired many other membrane proteins for transporting molecules in and out of the organelles. The diversity of chromists is hypothesised to have arisen from degeneration, loss or replacement of the plastids in some lineages. Additional symbiogenesis of green algae has provided genes retained in some members (such as heterokonts), and bacterial chlorophyll (indicated by the presence of ribosomal protein L36 gene, rpl36) in haptophytes and cryptophytes.
History and groups
Some examples of classification of the groups involved, which have overlapping but non-identical memberships, are shown below.
Chromophycées (Chadefaud, 1950)
The Chromophycées (Chadefaud, 1950), renamed Chromophycota (Chadefaud, 1960), included the current Ochrophyta (autotrophic Stramenopiles), Haptophyta (included in Chrysophyceae until Christensen, 1962), Cryptophyta, Dinophyta, Euglenophyceae and Choanoflagellida (included in Chrysophyceae until Hibberd, 1975).
Chromophyta (Christensen 1962, 1989)
The Chromophyta (Christensen 1962, 1989), defined as algae with chlorophyll c, included the current Ochrophyta (autotrophic Stramenopiles), Haptophyta, Cryptophyta, Dinophyta and Choanoflagellida. The Euglenophyceae were transferred to the Chlorophyta.
Chromophyta (Bourrelly, 1968)
The Chromophyta (Bourrelly, 1968) included the current Ochrophyta (autotrophic Stramenopiles), Haptophyta and Choanoflagellida. The Cryptophyceae and the Dinophyceae were part of Pyrrhophyta (= Dinophyta).
Chromista (Cavalier-Smith, 1981)
The name Chromista was first introduced by Cavalier-Smith in 1981; the earlier names Chromophyta, Chromobiota and Chromobionta correspond to roughly the same group. It includes all protists whose plastids contain chlorophyll c, and has been described as consisting of three different groups:
Heterokonts or Stramenopiles: brown algae, diatoms, water moulds, etc.
Haptophytes
Cryptomonads
In 1994, Cavalier-Smith and colleagues indicated that the Chromista is probably a polyphyletic group whose members arose independently, sharing no more than descent from the common ancestor of all eukaryotes:
In 2009, Cavalier-Smith gave his reason for making a new kingdom, saying:
Since then Chromista has been defined in different ways at different times. In 2010, Cavalier-Smith reorganised Chromista to include the SAR supergroup (named for the included groups Stramenopiles, Alveolata and Rhizaria) and Hacrobia (Haptista and Cryptista).
Patron et al. (2004) considered the presence of a unique class of FBA (fructose-1,6-bisphosphate aldolase) enzyme not similar to that found in plants as evidence of chromist monophyly. Fast et al. (2001) supported a single origin for the myzozoan (dinoflagellate + apicomplexan), heterokont and cryptophyte plastids based on their comparison of GAPDH (glyceraldehyde-3-phosphate dehydrogenase) genes. Harper & Keeling (2003) described haptophyte homologs and considered them further evidence of a single endosymbiotic event involving the ancestor of all chromists.
Chromalveolata (Adl et al., 2005)
The Chromalveolata included Stramenopiles, Haptophyta, Cryptophyta and Alveolata. However, in 2008 the group was found not to be monophyletic, and later studies confirmed this.
Classification
Cavalier-Smith et al. 2015
In 2015, Cavalier-Smith and his colleagues made a new higher-level grouping of all organisms as a revision of the seven kingdoms model. In it, they classified the kingdom Chromista into 2 subkingdoms and 11 phyla, namely:
Cavalier-Smith 2018
Cavalier-Smith made a new analysis of Chromista in 2018 in which he classified all chromists into 8 phyla (Gyrista corresponds to the above phyla Ochrophyta and Pseudofungi, Cryptista corresponds to the above phyla Cryptista and "N.N.", Haptista corresponds to the above phyla Haptophyta and Heliozoa):
Polyphyly and serial endosymbiosis
Molecular trees have had difficulty resolving relationships between the different groups. All three may share a common ancestor with the alveolates (see chromalveolates), but there is evidence that suggests the haptophytes and cryptomonads do not belong together with the heterokonts or the SAR clade, but may be associated with the Archaeplastida. Cryptista specifically may be sister or part of Archaeplastida, though this could be an artefact due to acquisition of genes from red algae by cryptomonads.
A 2020 phylogeny of the eukaryotes states that "the chromalveolate hypothesis is not widely accepted" (noting Cavalier-Smith et al 2018 as an exception), explaining that the host lineages do not appear to be closely related in "most phylogenetic analyses". Further, none of TSAR, Cryptista, and Haptista, groups formerly within Chromalveolata, appear "likely to be ancestrally defined by red secondary plastids". This is because of the many non-photosynthetic organisms related to the groups with chlorophyll c, and the possibility that cryptophytes are more closely related to plants.
The alternative to monophyly is serial endosymbiosis, meaning that the "chromists" acquired their plastids from each other instead of inheriting them from a single common ancestor. Thus the phylogeny of the distinctive plastids, which are agreed to have a common origin in the rhodophytes, is different from the phylogeny of the host cells. In 2021, Jürgen Strassert and colleagues modelled the timelines for the presumed spread of the red plastids, concluding that "the hypotheses of serial endosymbiosis are chronologically possible, as the stem lineages of all red plastid-containing groups overlap in time" during the Mesoproterozoic and Neoproterozoic eras. They propose that the plastids were transmitted between groups as follows:
Rhodophytina → Cryptophytina → Ochrophyta
Rhodophytina → Cryptophytina → Haptophyta → Myzozoa
| Biology and health sciences | Bikonts | Plants |
244992 | https://en.wikipedia.org/wiki/Oomycete | Oomycete | The Oomycetes, or Oomycota, form a distinct phylogenetic lineage of fungus-like eukaryotic microorganisms within the Stramenopiles. They are filamentous and heterotrophic, and can reproduce both sexually and asexually. Sexual reproduction produces an oospore through contact between hyphae of male antheridia and female oogonia; these spores can overwinter and are known as resting spores. Asexual reproduction involves the formation of chlamydospores and sporangia, producing motile zoospores. Oomycetes occupy both saprophytic and pathogenic lifestyles, and include some of the most notorious pathogens of plants, causing devastating diseases such as late blight of potato and sudden oak death. One oomycete, the mycoparasite Pythium oligandrum, is used for biocontrol, attacking plant pathogenic fungi. The oomycetes are also often referred to as water molds (or water moulds), although the water-preferring nature which led to that name is not true of most species, which are terrestrial pathogens.
Oomycetes were originally grouped with fungi due to similarities in morphology and lifestyle. However, molecular and phylogenetic studies revealed significant differences between fungi and oomycetes, and as a result the latter are now grouped with the stramenopiles (which include some types of algae). The Oomycota have a very sparse fossil record; a possible oomycete has been described from Cretaceous amber.
Etymology
Oomycota comes from oo- () and -mycete (), referring to the large round oogonia, structures containing the female gametes, that are characteristic of the oomycetes.
The name "water mold" refers to their earlier classification as fungi and their preference for conditions of high humidity and running surface water, which is characteristic for the basal taxa of the oomycetes.
Morphology
The oomycetes rarely have septa (see hypha), and if they do, they are scarce, appearing at the bases of sporangia, and sometimes in older parts of the filaments. Some are unicellular, while others are filamentous and branching.
Classification
Previously the group was arranged into six orders.
The Saprolegniales are the most widespread. Many break down decaying matter; others are parasites.
The Leptomitales have wall thickenings that give their continuous cell body the appearance of septation. They bear chitin and often reproduce asexually.
The Rhipidiales use rhizoids to attach their thallus to the bed of stagnant or polluted water bodies.
The Albuginales are considered by some authors to be a family (Albuginaceae) within the Peronosporales, although it has been shown that they are phylogenetically distinct from this order.
The Peronosporales too are mainly saprophytic or parasitic on plants, and have an aseptate, branching form. Many of the most damaging agricultural parasites belong to this order.
The Lagenidiales are the most primitive; some are filamentous, others unicellular; they are generally parasitic.
However, this classification has more recently been expanded considerably.
Anisolpidiales Dick 2001
Anisolpidiaceae Karling 1943
Lagenismatales Dick 2001
Lagenismataceae Dick 1995
Salilagenidiales Dick 2001
Salilagenidiaceae Dick 1995
Rozellopsidales Dick 2001
Rozellopsidaceae Dick 1995
Pseudosphaeritaceae Dick 1995
Ectrogellales
Ectrogellaceae
Haptoglossales
Haptoglossaceae
Eurychasmales
Eurychasmataceae Petersen 1905
Haliphthorales
Haliphthoraceae Vishniac 1958
Olpidiopsidales
Sirolpidiaceae Cejp 1959
Pontismataceae Petersen 1909 (contains Petersenia, Pontisma)
Olpidiopsidaceae Cejp 1959
Atkinsiellales
Atkinsiellaceae
Crypticolaceae Dick 1995
Saprolegniales
Achlyaceae
Verrucalvaceae Dick 1984
Saprolegniaceae Warm. 1884 [Leptolegniaceae]
Leptomitales
Leptomitaceae Kuetz. 1843 [Apodachlyellaceae Dick 1986]
Leptolegniellaceae Dick 1971 [Ducellieriaceae Dick 1995]
Rhipidiales
Rhipidiaceae Cejp 1959
Albuginales
Albuginaceae Schroet. 1893
Peronosporales [Pythiales; Sclerosporales; Lagenidiales]
Salisapiliaceae
Pythiaceae Schroet. 1893 [Pythiogetonaceae; Lagenaceae Dick 1994; Lagenidiaceae; Peronophythoraceae; Myzocytiopsidaceae Dick 1995]
Peronosporaceae Warm. 1884 [Sclerosporaceae Dick 1984]
Phylogenetic relationships
Internal
External
This group was originally classified among the fungi (the name "oomycota" means "egg fungus") and later treated as protists, based on general morphology and lifestyle. A cladistic analysis based on modern discoveries about the biology of these organisms supports a relatively close relationship with some photosynthetic organisms, such as brown algae and diatoms. A common taxonomic classification based on these data places the class Oomycota, along with other classes such as Phaeophyceae (brown algae), within the phylum Heterokonta.
This relationship is supported by a number of observed differences between the characteristics of oomycetes and fungi. For instance, the cell walls of oomycetes are composed of cellulose rather than chitin and generally do not have septations. Also, in the vegetative state they have diploid nuclei, whereas fungi have haploid nuclei. Most oomycetes produce self-motile zoospores with two flagella. One flagellum has a "whiplash" morphology, and the other a branched "tinsel" morphology. The "tinsel" flagellum is unique to the Kingdom Heterokonta. Spores of the few fungal groups which retain flagella (such as the Chytridiomycetes) have only one whiplash flagellum. Oomycota and fungi have different metabolic pathways for synthesizing lysine and have a number of enzymes that differ. The ultrastructure is also different, with oomycota having tubular mitochondrial cristae and fungi having flattened cristae.
In spite of this, many species of oomycetes are still described or listed as types of fungi and may sometimes be referred to as pseudo fungi, or lower fungi.
Biology
Reproduction
Most of the oomycetes produce two distinct types of spores. The main dispersive spores are asexual, self-motile spores called zoospores, which are capable of chemotaxis (movement toward or away from a chemical signal, such as those released by potential food sources) in surface water (including precipitation on plant surfaces). A few oomycetes produce aerial asexual spores that are distributed by wind. They also produce sexual spores, called oospores, that are translucent, double-walled, spherical structures used to survive adverse environmental conditions.
Ecology and pathogenicity
Many oomycete species are economically important, aggressive pathogens of algae and plants. Some species can cause disease in fish, and at least one is a pathogen of mammals. The majority of the plant pathogenic species can be classified into four groups, although more exist.
The Phytophthora group is a paraphyletic genus that causes diseases such as dieback, late blight in potatoes (the cause of the Great Famine of the 1840s that ravaged Ireland and other parts of Europe), sudden oak death, rhododendron root rot, and ink disease in the European chestnut.
The paraphyletic Pythium group is more prevalent than Phytophthora and individual species have larger host ranges, although usually causing less damage. Pythium damping off is a very common problem in greenhouses, where the organism kills newly emerged seedlings. Mycoparasitic members of this group (e.g. P. oligandrum) parasitize other oomycetes and fungi, and have been employed as biocontrol agents. One Pythium species, Pythium insidiosum, also causes Pythiosis in mammals.
The third group are the downy mildews, which are easily identifiable by the appearance of white, brownish or olive "mildew" on the leaf undersides (although this group can be confused with the unrelated fungal powdery mildews).
The fourth group are the white blister rusts, Albuginales, which cause white blister disease on a variety of flowering plants. White blister rusts sporulate beneath the epidermis of their hosts, causing spore-filled blisters on stems, leaves and the inflorescence. The Albuginales are currently divided into three genera, Albugo parasitic predominantly to Brassicales, Pustula, parasitic predominantly to Asterales, and Wilsoniana, predominantly parasitic to Caryophyllales. Like the downy mildews, the white blister rusts are obligate biotrophs, which means that they are unable to survive without the presence of a living host.
| Biology and health sciences | SAR supergroup | Plants |
245138 | https://en.wikipedia.org/wiki/Loess | Loess | A loess is a clastic, predominantly silt-sized sediment that is formed by the accumulation of wind-blown dust. Ten percent of Earth's land area is covered by loesses or similar deposits.
A loess is a periglacial or aeolian (windborne) sediment, defined as an accumulation of 20% or less of clay with a balance of roughly equal parts sand and silt (with a typical grain size from 20 to 50 micrometers), often loosely cemented by calcium carbonate. Usually, they are homogeneous and highly porous and have vertical capillaries that permit the sediment to fracture and form vertical bluffs.
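As a rough illustration of the definition above (a minimal sketch, not taken from the source; the numeric tolerance used for "roughly equal parts" sand and silt is an assumption made here), the criteria can be expressed as a simple check on a sample's grain-size fractions:

def looks_like_loess(clay_pct, silt_pct, sand_pct, tolerance_pct=15.0):
    # Quoted definition: 20% or less clay, with the balance roughly
    # equal parts sand and silt. tolerance_pct makes "roughly equal"
    # testable and is an assumed value, not a published criterion.
    total = clay_pct + silt_pct + sand_pct
    if abs(total - 100.0) > 1.0:
        raise ValueError("grain-size fractions should sum to about 100%")
    return clay_pct <= 20.0 and abs(silt_pct - sand_pct) <= tolerance_pct

# Example: 15% clay, 45% silt, 40% sand satisfies the quoted criteria.
print(looks_like_loess(15, 45, 40))  # True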
Properties
Loesses are homogeneous; porous; friable; pale yellow or buff; slightly coherent; typically non-stratified; and often calcareous. Loess grains are angular, with little polishing or rounding, and composed of quartz, feldspar, mica, or other mineral crystals. Loesses have been described as rich, dust-like soil.
Loess deposits may become very thick: more than a hundred meters in areas of northwestern China and tens of meters in parts of the Midwestern United States. Loesses generally occur as blanket deposits that cover hundreds of square kilometers and are often tens of meters thick. Loesses often have steep or vertical faces. Because the grains are angular, loesses will often stand in banks for many years without slumping. This type of soil has "vertical cleavage", and thus it can be easily excavated to form cave dwellings, which is a popular method of making human habitations in some parts of China. However, loesses can readily erode.
In several areas of the world, loess ridges have formed that are aligned with the prevailing winds of the last glacial maximum. These are called "paha ridges" in America and "greda ridges" in Europe. The formation of these loess dunes has been explained as a combination of wind and tundra conditions.
Etymology
The word loess, with connotations of origin by wind-deposited accumulation, was introduced into English from the German (1824), which can be traced back to Swiss German and is cognate with the English word loose and the German word . It was first applied to the Rhine River valley loesses around 1821.
History of research
The term "Löß" was first described in Central Europe by Karl Cäsar von Leonhard (1823–1824), who had reported yellowish brown, silty deposits along the Rhine valley near Heidelberg. Charles Lyell (1834) brought the term into widespread usage, observing similarities between "loess" and its derivatives along the loess bluffs in the Rhine and in Mississippi. At the time, it was thought that the yellowish brown silt-rich sediment was of fluvial origin and had been deposited by large rivers. The aeolian origin of the loesses was recognized later (Virlet D'Aoust 1857), particularly due to the convincing observations of loesses in China by Ferdinand von Richthofen (1878). A tremendous number of papers have been published since then, focusing on the formation of loesses and on loess/paleosol (older soil buried under deposits) sequences as the archives of climate and environment change. These water conservation works have been carried out extensively in China, and the research of loesses in China has been ongoing since 1954. [33]
Much effort was put into setting up regional and local loess stratigraphies and their correlations (Kukla 1970, 1975, 1977). However, even the chronostratigraphical position of the last interglacial soil correlating with marine isotope substage 5e was a matter of debate, due to the lack of robust and reliable numerical dating, as summarized, for example, by Zöller et al. (1994) and Frechen et al. (1997) for the Austrian and Hungarian loess stratigraphy, respectively.
Since the 1980s, thermoluminescence (TL), optically stimulated luminescence (OSL), and infrared stimulated luminescence (IRSL) dating have been available, providing the possibility of dating the time of loess (dust) deposition, i.e., the time elapsed since the last exposure of the mineral grains to daylight. During the past decade, luminescence dating has been significantly improved by methodological advances, especially the development of single aliquot regenerative (SAR) protocols (Murray & Wintle 2000), resulting in reliable ages (or age estimates) with an accuracy of 5 to 10% for the last glacial record. More recently, luminescence dating has also become a robust dating technique for penultimate and antepenultimate glacial loess (e.g. Thiel et al. 2011, Schmidt et al. 2011), allowing for a reliable correlation of loess/palaeosol sequences for at least the last two interglacial/glacial cycles throughout Europe and the Northern Hemisphere (Frechen 2011). Furthermore, numerical dating provides the basis for quantitative loess research, applying more sophisticated methods to determine and understand high-resolution proxy data, including the palaeodust content of the atmosphere, variations in atmospheric circulation patterns and wind systems, palaeoprecipitation, and palaeotemperature.
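For readers unfamiliar with the method, a luminescence age is obtained by dividing the equivalent dose accumulated by the mineral grains by the environmental dose rate; the numbers below are illustrative only and are not taken from the studies cited:

\mathrm{Age} = \frac{D_e}{\dot{D}}, \qquad \text{e.g. } \frac{60\ \mathrm{Gy}}{3\ \mathrm{Gy\,ka^{-1}}} = 20\ \mathrm{ka},

so the quoted 5 to 10% accuracy corresponds to an uncertainty of roughly 1-2 ka on a last-glacial age of this size.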
Besides luminescence dating methods, the use of radiocarbon dating in loess has increased during the past decades. Advances in methods of analyses, instrumentation, and refinements to the radiocarbon calibration curve have made it possible to obtain reliable ages from loess deposits for the last 40–45 ka. However, the use of this method relies on finding suitable in situ organic material in deposits such as charcoal, seeds, earthworm granules, or snail shells.
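For context on the 40-45 ka limit (a back-of-the-envelope calculation using the conventional radiocarbon half-life of about 5,730 years, a value not stated in the text):

\left(\tfrac{1}{2}\right)^{45\,000 / 5\,730} \approx 0.004,

i.e. only about 0.4% of the original carbon-14 remains after 45 ka, which is why ages much beyond this range are difficult to measure reliably.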
Formation
According to Pye (1995), four fundamental requirements are necessary for the formation of loess: a dust source, adequate wind energy to transport the dust, a suitable accumulation area, and a sufficient amount of time.
Periglacial loess
Periglacial (glacial) loess is derived from the floodplains of glacial braided rivers that carried large volumes of glacial meltwater and sediments from the annual melting of continental ice sheets and mountain ice caps during the spring and summer. During the autumn and winter, when the melting of the ice sheets and ice caps ceased, the flow of meltwater down these rivers either ceased or was greatly reduced. As a consequence, large parts of the formerly submerged and unvegetated floodplains of these braided rivers dried out and were exposed to the wind. Because the floodplains consist of sediment containing a high content of glacially ground flour-like silt and clay, they were highly susceptible to winnowing of their silts and clays by the wind. Once entrained by the wind, particles were then deposited downwind. The loess deposits found along both sides of the Mississippi River alluvial valley are a classic example of periglacial loess.
During the Quaternary, loess and loess-like sediments were formed in periglacial environments on mid-continental shield areas in Europe and Siberia as well as on the margins of high mountain ranges like in Tajikistan and on semi-arid margins of some lowland deserts as in China.
In England, periglacial loess is also known as brickearth.
Non-glacial
Non-glacial loess can originate from deserts, dune fields, playa lakes, and volcanic ash.
Some types of nonglacial loess are:
Desert loess produced by aeolian attrition of quartz grains;
Volcanic loess in Ecuador and Argentina;
Tropical loess in Argentina, Brazil and Uruguay;
Gypsum loess in Spain;
Trade wind loess in Venezuela and Brazil;
Anticyclonic loess in Argentina.
The thick Chinese loess deposits are non-glacial loess having been blown in from deserts in northern China. The loess covering the Great Plains of Nebraska, Kansas, and Colorado is considered to be non-glacial desert loess. Non-glacial desert loess is also found in Australia and Africa.
Fertility
Loess tends to develop into very rich soils. Under appropriate climatic conditions, it is some of the most agriculturally productive terrain in the world.
Soils underlain by loess tend to be excessively drained. The fine grains weather rapidly due to their large surface area, making soils derived from loess rich. The fertility of loess soils is due largely to a high cation exchange capacity (the ability of the soil to retain nutrients) and porosity (the air-filled space in the soil). The fertility of loess is not due to organic matter content, which tends to be rather low, unlike tropical soils which derive their fertility almost wholly from organic matter.
Even well-managed loess farmland can experience dramatic erosion of well over 2.5 kg/m2 per year. In China, the loess deposits which give the Yellow River its color have been farmed and have produced phenomenal yields for over one thousand years. Winds pick up loess particles, contributing to the Asian dust pollution problem. The largest deposit of loess in the United States, the Loess Hills along the border of Iowa and Nebraska, has survived intensive farming and poor farming practices. For almost 150 years, this loess deposit was farmed with mouldboard ploughs and tilled in the fall, both intensely erosive practices. At times it suffered erosion rates of over 10 kilograms per square meter per year. Today this loess deposit is worked with low-till or no-till methods in all areas and is aggressively terraced.
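To put these figures on the scale commonly used for soil loss (a unit conversion only, not an additional measurement): one hectare is 10,000 m2, so

2.5\ \mathrm{kg\,m^{-2}\,yr^{-1}} = 25\ \mathrm{t\,ha^{-1}\,yr^{-1}} \quad\text{and}\quad 10\ \mathrm{kg\,m^{-2}\,yr^{-1}} = 100\ \mathrm{t\,ha^{-1}\,yr^{-1}}.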
Large areas of loess deposits and soils
Central Asia
An area of multiple loess deposits spans from southern Tajikistan up to Almaty, Kazakhstan.
East Asia
China
The Loess Plateau, also known as the Huangtu Plateau, is a plateau that covers an area of some 640,000 km2 around the upper and middle reaches of China's Yellow River. The Yellow River was so named because the loess forming its banks gave a yellowish tint to the water. The soil of this region has been called the "most highly erodible soil on earth". The Loess Plateau and its dusty soil cover almost all of Shanxi, Shaanxi, and Gansu provinces, the Ningxia Hui Autonomous Region, and parts of other provinces.
Europe
Loess deposits of varying thickness (decimeter to several tens of meters) are widely distributed over the European continent. The northern European loess belt stretches from southern England and northern France to Germany, Poland and the southern Ukraine and deposits are characterized by strong influences of periglacial conditions. South-eastern European loess is mainly deposited in plateau-like situations in the Danube basins, likely derived from the Danube River system. In south-western Europe, relocated loess derivatives are mostly restricted to the Ebro Valley and central Spain.
North America
United States
The Loess Hills of Iowa owe their fertility to the prairie topsoils built by 10,000 years of post-glacial accumulation of organic-rich humus as a consequence of a persistent grassland biome. When the valuable A-horizon topsoil is eroded or degraded, the underlying loess soil is infertile and requires the addition of fertilizer to support agriculture.
The loess along the Mississippi River near Vicksburg, Mississippi, consists of three layers. The Peoria Loess, Sicily Island Loess, and Crowley's Ridge Loess accumulated at different periods during the Pleistocene. Ancient soils, called paleosols, have developed on the top of the Sicily Island Loess and Crowley's Ridge Loess. The lowermost loess, the Crowley's Ridge Loess, accumulated during the late Illinoian Stage. The middle loess, Sicily Island Loess, accumulated during the early Wisconsin Stage. The uppermost loess, the Peoria Loess, in which the modern soil has developed, accumulated during the late Wisconsin Stage. Animal remains include terrestrial gastropods and mastodons.
Oceania
New Zealand
Extensive areas of loess occur in New Zealand including the Canterbury Plains and on the Banks Peninsula. The basis of loess stratigraphy was introduced by John Hardcastle in 1890.
South America
Argentina
Much of Argentina is covered by loess. Two areas of loess are usually distinguished in Argentina: the neotropical loess north of latitude 30° S and the pampean loess.
The neotropical loess is made of silt or silty clay. Relative to the pampean loess, the neotropical loess is poor in quartz and calcium carbonate. The source region for this loess is thought by some scientists to be areas of fluvio-glacial deposits in the Andean foothills formed by the Patagonian Ice Sheet. Other researchers stress the importance of volcanic material in the neotropical loess.
The pampean loess is sandy or made of silty sand.
| Physical sciences | Sedimentology | Earth science |
245146 | https://en.wikipedia.org/wiki/Bowfin | Bowfin | The bowfin (Amia calva) is a ray-finned fish native to North America. Common names include mudfish, mud pike, dogfish, grindle, grinnel, swamp trout, and choupique. It is regarded as a relict, being one of only two surviving species of the Halecomorphi, a group of fish that first appeared during the Early Triassic, around 250 million years ago. The bowfin is often considered a "living fossil" because they have retained some morphological characteristics of their early ancestors. It is one of two species in the genus Amia, along with Amia ocellicauda, the eyespot bowfin. The closest living relatives of bowfins are gars, with the two groups being united in the clade Holostei.
Bowfins are demersal freshwater piscivores, commonly found throughout much of the eastern United States, and in southern Ontario and Quebec. Fossil deposits indicate Amiiformes were once widespread in both freshwater and marine environments across North and South America, Europe, Asia, and Africa. Now, their range is limited to much of the eastern United States and adjacent southern Canada, including the drainage basins of the Mississippi River, Great Lakes, and various rivers exiting in the Eastern Seaboard or Gulf of Mexico. Their preferred habitat includes vegetated sloughs, lowland rivers and lakes, swamps, and backwater areas; they are also occasionally found in brackish water. They are stalking, ambush predators known to move into the shallows at night to prey on fish and aquatic invertebrates such as crawfish, mollusks, and aquatic insects.
Like gars, bowfin are bimodal breathers—they have the capacity to breathe both water and air. Their gills exchange gases in the water allowing them to breathe, but they also have a gas bladder that serves to maintain buoyancy, and also allows them to breathe air by means of a small pneumatic duct connected from the foregut to the gas bladder. They can break the surface to gulp air, which allows them to survive conditions of aquatic hypoxia that would be lethal to most other species. The bowfin is long-lived, with age up to 33 years reported.
Morphology
The typical length of a bowfin is ; females typically grow to , males to . They can reach in length, and weigh . Young of the year typically grow to by October. Females tend to grow larger than males.
The body of the bowfin is elongated and cylindrical, with the sides and back olive to brown in color, often with vertical bars and dark reticulations or another camouflaged pattern. The dorsal fin has horizontal bars, and the caudal fin has irregular vertical bars. The underside is white or cream, and the paired fins and anal fin are bright green. During larval stage, hatchlings from about total length are black and tadpole-like in appearance. At approximately total length they have been described as looking like miniature placoderms. They grow quickly, and typically leave the nest within 4 to 6 weeks after hatching. Young males have a black eyespot on the base of the tail (caudal peduncle) that is commonly encircled by an orange-yellowish border, while the female's is black, if present at all. It is thought the purpose of the eyespot is to confuse predators, deflecting attacks away from the head of the fish to its tail, which affords the bowfin an opportunity to escape predation. The bowfin is so named for its long, undulating dorsal fin consisting of 145 to 250 rays that runs from the middle of the back to the base of the tail.
The skull of the bowfin is made of two layers, the dermatocranium and the chondrocranium. The chondrocranium layer cannot be seen because it is located below the dermal bones. The bowfin skull is made up of 28 fused bones, which compose the dermatocranium. The roof of the mouth is made up of three bones, the ectopterygoid, the palatine, and the vomer. They have two sets of teeth, including one set of larger sharp teeth coming out of the mandibular and premaxillary bones to grasp and control the prey. The other set of teeth, located posteriorly and connected to the hyomandibular bone, is made up of pharyngeal tooth patches, which are used for sorting out nutrients and grinding down larger pieces of food. Another three bones make up the lower jaw: the dentary, the angular, and the surangular. The cranial surface of the skull is made up of the nasals, the antorbital, the lacrimal, the parietal, the intertemporal, the post parietal, the supratemporal, the extra scapular, the post temporal, and the opercular. The entirety of the skull is attached to the girdle through another set of bones.
Bowfin are often referred to as "living fossils" or "primitive fish" because they have retained some of the primitive characters common to their ancestors, including a modified (rounded externally) heterocercal caudal fin, a highly vascularized gas bladder lung, vestiges of a spiral valve, and a bony gular plate. The bony gular plate is located underneath the head on the exterior of the lower jaw between the two sides of the lower jaw bone. Other distinguishing characteristics include long, sharp teeth, and two protruding tube-like nostrils. Unlike the most primitive actinopterygians, bowfin do not have ganoid scales; instead they have large, single-layered cycloid scales more similar to those of the more derived teleosts.
Fish similar in appearance
Northern snakeheads (Channa argus) are commonly mistaken for bowfin because of similarities in appearance, most noticeably their elongated, cylindrical shape, and long dorsal fin that runs along their backs. Northern snakeheads are piscivorous fish native to the rivers and estuaries of China, Russia, and Korea that have been introduced and become established in parts of North America. However, unlike bowfin which are native to North America, the northern snakehead is considered an invasive species and environmentally harmful there. Some contrasting differences in bowfin include a black eyespot on their caudal peduncle, a tan and olive coloration, a shorter anal fin, a more rounded head, pelvic fins at a greater distance from the pectoral fins than in the northern snakehead and the presence of the gular plate on the ventral side of the lower jaw. Another noticeable difference is that bowfin scales do not continue uniformly from their body to their head. Bowfin heads are smooth and free of scales, whereas the northern snakehead has scales that uniformly continue from their body through to their head.
The burbot (Lota lota), a predatory fish native to streams and lakes of North America and Eurasia, is also commonly mistaken for bowfin. Burbots can be distinguished by their flat head and chin barbel, long anal fin, and pelvic fins situated beneath the pectoral fins.
Bowfin body-shape evolution and development
The first fish lacked jaws and used negative pressure to suck their food in through their mouths. The jaw in the bowfin is a result of their evolutionary need to be able to catch and eat bigger and more nutritious prey. As a result of being able to gather more nutrients, bowfin are able to live a more active lifestyle. The jaw of a bowfin has several adaptations. The maxilla and premaxilla are fused, and the posterior chondrocranium articulates with the vertebra, which allows the jaw freedom to rotate. The suspensorium includes several bones and articulates with the snout, brain case, and the mandible. When the jaw opens, epaxial muscles lift the chondrocranium, which is attached to the upper jaw, while adductor muscles act to close the lower jaw. This ability to open and close the jaw helps the bowfin to be an active predator that can catch bigger prey and digest them.
The vertebral column in bowfin is ossified and in comparison to earlier fish, the centra are the major support for the body, whereas in earlier fish the notochord was the main form of support. In bowfin neural spines and ribs also increase in prominence, an evolutionary aspect that helps provide additional support and stabilize unpaired fins. The evolution of the vertebral column allows the bowfin to withstand lateral bending that puts the column under compression without breaking. This, in turn, allows the bowfin to have more controlled and powerful movements, in comparison to fish that had only a notochord. The bowfin has a rounded heterocercal tail that resembles a homocercal tail. This type of tail gives the body a streamlined shape which allows the bowfin to improve its swimming ability by reducing drag. These types of tails are common in fish with gas bladders, because the bladder supplies the fish with natural buoyancy.
The bowfin is a member of actinopterygii which means that the pectoral girdle is partly endochondral but mostly dermal bone. In this group of fish the fins function to maneuver, brake, and for slight positional adjustments. The pectoral girdle of the bowfin has six parts. The post temporal, supracleithrum, postcleithrum, cleithrum, scapulacoracoid, and the clavicle make up the pectoral girdle. The pectoral girdle is attached to the skull. The paired pectoral and pelvic fins of fish are homologous with the limbs of tetrapods.
Physiology
Bowfin are physostomes, meaning they have a small "pneumatic duct" that connects their swim bladders to their digestive tract. This allows them, like lungfish, to "breathe" in two ways: they can extract oxygen from the water when breathing through their gills, but can also break the water's surface to breathe or gulp air through the pneumatic duct. When performing low-level physical activity, bowfin obtain more than half of their oxygen from breathing air. The fish have two distinct air-breathing mechanisms used to ventilate the gas bladder. Air-breathing type I consists of an exhale/inhale exchange, stimulated by either air or water hypoxia, to regulate gas exchange; type II air breaths are inhalation alone, which is believed to regulate gas bladder volume and thereby control buoyancy. Bimodal respiration helps bowfin survive and maintain their metabolic rate in hypoxic (low-oxygen) conditions. Bowfin breathe air more frequently when they are in darkness, when they are also correspondingly more active.
Bowfin blood can adapt to warm, acidic waters. The fish becomes inactive in waters below ; at this temperature they breathe almost no air; however, with increasing temperature their air breathing increases. Their preferred temperature range is between , with the temperature of maximum activity. Air breathing is at a maximum in the range . Bowfin do not use central chemoreceptor regulation for respiration control. Experiments manipulating the oxygen content, carbon dioxide content, and pH of bowfin extradural fluid did not affect breathing rate, heart rate, or blood pressure pointing to a lack of central chemoreceptor regulation. Instead, bowfin respiratory patterns respond to water oxygen content and water temperature, as water temperatures play a role in oxygen content. In the lab, bowfin showed an increase in the breathing rate when the temperatures were raised above 10°C. Bowfin also showed an increase in breathing rate when exposed to lower oxygen levels in the water.
Herpetologist W. T. Neill reported in 1950 that he unearthed a bowfin aestivating (in a dormant state) in a chamber below the ground surface, in diameter, from a river. It was further noted that flood levels had previously reached the area, and receded. It is not unusual for riverine species like bowfin to move into backwaters with flood currents, and become trapped when water levels recede. While aestivation is anecdotally documented by multiple researchers, laboratory experiments have suggested instead that bowfin are physiologically incapable of surviving more than three to five days of air exposure. However, no field manipulation has been performed. Regardless of the lack of evidence confirming the bowfin's ability to aestivate, it has been noted that bowfin can survive prolonged conditions of exposure to air because they have the ability to breathe air. Their gill filaments and lamellae are rigid in structure which helps prevent the lamellae from collapsing and aids gas exchange even during air exposure.
Evolution and phylogeny
Competing hypotheses and debates continue over the evolution of Amia and relatives, including their relationship among basal extant teleosts, and organization of clades. Bowfin are the last remaining member of Halecomorphi, a group that includes many extinct species in several families. Halecomorphs were generally accepted as the sister group to Teleostei but not without question. While a halecostome pattern of neopterygian clades was produced in morphology-based analyses of extant actinopterygians, a different result was produced with fossil taxa which showed a monophyletic Holostei. Monophyletic Holostei were also recovered by at least two nuclear gene analyses, in an independent study of fossil and extant fish, and in an analysis of ultraconserved genomic elements.
The extant ray-finned fish of the subclass Actinopterygii include 42 orders, 431 families and over 23,000 species. They are currently classified into two infraclasses, Chondrostei and Neopterygii. Sturgeons, paddlefish, bichirs and reed fish compose the thirty-eight species of chondrosteans, and are considered relict species. Included in the over 23,000 species of neopterygians are eight relict species comprising gars and the bowfin.
Infraclass Neopterygii
Neopterygians are the second major occurrence in the evolution of ray-finned fish and today include the majority of modern bony fish. They are distinguished from their earlier ancestors by major changes to the jaws, shape of the skull, and tail. They are divided into three divisions:
Division 1. Order Lepisosteiformes – the relict gars which include extant species of gars that first appeared in the Cretaceous.
Division 2. Order Amiiformes – the relict bowfin, (halecomorphids), the only extant species in the order Amiiformes which date back to the Triassic period.
Division 3. Division Teleostei – the stem group of Teleostei from which modern fish arose, including most of the bony fish we are familiar with today.
Species
The following is a species list
Genome evolution
The bowfin genome contains an intact ParaHox gene cluster, similar to the bichir and to most other vertebrates. This is in contrast, however, with teleost fish, which have a fragmented ParaHox cluster, probably because of a whole genome duplication event in their lineage. The presence of an intact ParaHox gene cluster suggests that bowfin ancestors separated from other fish before the last common ancestor of all teleosts appeared. Bowfin are thus possibly a better model to study vertebrate genome organization than common teleost model organisms such as zebrafish.
Feeding behavior
Bowfin are stalking, ambush predators that customarily move into the shallows at night to prey on fish, amphibians, and aquatic invertebrates such as crawfish, other crustaceans, mollusks, and aquatic insects. Young bowfin feed mostly on small crustaceans, while adults are mostly piscivorous, but also known to be opportunistic. Some common examples of prey include frogs, bass, other bowfin, dragonflies, sunfish, crawfish, etc. Bowfin are remarkably agile, can move quickly through the water, and have a voracious appetite. Their undulating dorsal fin propels them silently through the water while stalking their prey. The attack is straightforward and swift, with a movement that lasts approximately 0.075 seconds. Studies have also examined the capacity of the bowfin to survive without food: in 1916, a female bowfin was starved for twenty months, the longest period that any vertebrate was then known to have gone without food. Other studies have examined the bowfin's ability to use organic material as a source of food by looking at the structure of the gill rakers. They concluded that bowfin do not benefit from organic material in the water because the gill rakers are short, with blunt processes and a short space between them, and even bacteria can enter and exit through the gills easily; this structure alone indicates that Amia do not use microorganisms as a source of food.
Distribution and habitat
Fossil deposits indicate amiiforms included freshwater and marine species that were once widely distributed in North America, South America, Eurasia and Africa. Today, bowfin (Amia calva) are the only remaining species in the order Amiiformes; they are demersal freshwater piscivores, and their range is restricted to freshwater environments in North America, including much of the eastern United States and adjacent southern Canada from the St. Lawrence River and Lake Champlain drainage of southern Ontario and Quebec westward around the Great Lakes in southern Ontario into Minnesota.
Historically, their distribution in North America included the drainage basins of the Mississippi River from Quebec to northern Minnesota, the St. Lawrence-Great Lakes, including Georgian Bay, Lake Nipissing and Simcoe, Ontario, south to the Gulf of Mexico; Atlantic and Gulf Coastal Plain from the Susquehanna River drainage in southeastern Pennsylvania to the Colorado River in Texas.
Stocking
Research from the late 1800s to the 1980s suggests a trend of intentional stockings of non-indigenous fish into ponds, lakes and rivers in the United States. At that time, little was known about environmental impacts, or long-term effects of new species establishment and spread as a result of "fish rescue and transfer" efforts, or the importance of nongame fish to the ecological balance of aquatic ecosystems. Introductions of bowfin to areas where they were considered a non-indigenous species included various lakes, rivers and drainages in Connecticut, Delaware, Georgia, Illinois, Iowa, Kansas, Kentucky, Maryland, Massachusetts, Minnesota, Missouri, New Jersey, New York, North Carolina, Ohio, Oklahoma, Pennsylvania, Virginia, West Virginia, and Wisconsin. Many of the introductions were intentional stockings; however, there is no way to positively determine distribution resulting from flood transfers, or other inadvertent migrations. Bowfin are typically piscivorous, but as an introduced species they can be voracious predators that pose a threat to native fish and their prey.
Preferred habitat
Bowfin prefer vegetated sloughs, lowland rivers and lakes, swamps, backwater areas, and are occasionally found in brackish water. They are well camouflaged, and not easy to spot in slow water with abundant vegetation. They often seek shelter under roots, and submerged logs. Oxygen-poor environments can be tolerated because of their ability to breathe air.
Life cycle
Bowfin spawn in the spring or early summer, typically between April and June, more commonly at night in abundantly vegetated, clear shallow water in weed beds over sand bars, and also under stumps, logs, and bushes.
Optimum temperatures for nesting and spawning range between . The males construct circular nests in fibrous root mats, clearing away leaves and stems. Depending on the density of surrounding vegetation there may be a tunnel-like entrance at one side. The diameter of the nests commonly range between , at a water depth of .
During spawning season, the fins and underside of male bowfin often change in color to a bright lime green. The courtship/spawning sequence lasts one to three hours, and can repeat up to five times. Courtship begins when a female approaches the nest. The ritual consists of intermittent nose bites, nudges, and chasing behavior by the male until the female becomes receptive, at which time the pair lie side by side in the nest. She deposits her eggs while he shakes his fins in a vibratory movement, and releases his milt for fertilization to occur. A male often has eggs from more than one female in his nest, and a single female often spawns in several nests.
Females vacate the nest after spawning, leaving the male behind to protect the eggs during the eight to ten days of incubation. A nest may contain 2,000 to 5,000 eggs, possibly more. Fecundity is usually related to size of the fish, so it is not unusual for the roe of a large gravid female to contain over 55,000 eggs. Bowfin eggs are adhesive, and will attach to aquatic vegetation, roots, gravel, and sand. After hatching, larval bowfin do not swim actively in search of food. During the seven to nine days required for yolk-sac absorption, they attach to vegetation by means of an adhesive organ on their snout, and remain protected by the parent male bowfin. Bowfin aggressively protect their spawn from the first day of incubation to a month or so after the eggs have hatched. When the fry are able to swim and forage on their own, they will form a school and leave the nest accompanied by the parent male bowfin who slowly circles them to prevent separation.
Bowfin reach sexually maturity at two to three years of age. They can live up to 33 years in the wild, and 30 years in captivity. Bowfin may live decades at adult size.
Diseases
A common parasite of bowfin is the anchor worm (Lernaea). These small crustaceans infest the skin and bases of fins, with consequences ranging from slowed growth to death. The mollusk Megalonaias gigantea lays its eggs in the bowfin's gills, where they are externally fertilized by sperm passing in the water flow. The small glochidia larvae then hatch and develop in the gill tubes.
Bowfin with liver cancer and with fatal leukemia have been reported.
Utilization
As a sport fish, bowfin are not considered desirable to many anglers. They were once considered a nuisance fish by anglers and early biologists who believed the bowfin's predatory nature was harmful to sport fish populations. As a result, efforts were taken to reduce their numbers. Research has since proven otherwise, and that knowledge together with a better understanding of maintaining overall balance of ecosystems, regulations were introduced to help protect and maintain viable populations of bowfin. Bowfin are strong fighters, a prized trait in game fish. However, they do have a jaw full of sharp teeth which requires careful handling. The current tackle record is
Bowfin were once considered to have little commercial value because of their poor-tasting meat, which has been referred to as "soft, bland-tasting and of poor texture". However, it is considered quite palatable if cleaned properly and smoked, or prepared fried or blackened, or used in courtbouillon, fishballs, or fishcakes. Over the years, global efforts have imposed strict regulations on the international trade of caviar, particularly on the harvest of sturgeons from the Caspian Sea, where the highly prized caviar from the beluga sturgeon originates. The bans imposed on Caspian sturgeons have created lucrative markets for affordable substitutes in the United States, including paddlefish, bowfin, and various species of sturgeon. In Louisiana, bowfin are harvested in the wild and cultured commercially in hatcheries for their meat and roe. The roe is processed into caviar and sold as "Cajun caviar", or marketed under the trade name "Choupiquet Royale".
Accumulation of toxic substances
In some areas of the United States where aquatic environments have tested positive for elevated levels of toxins, such as mercury, arsenic, chromium, and copper, there are posted signs with warnings about the consumption of fish caught in those areas. Concentration of mercury biomagnifies as it passes up the food chain from organisms on lower trophic levels to apex predators. It bioaccumulates in the tissues of larger, long-lived predatory fish. When compared to smaller, short-lived fish, bowfin tend to concentrate mercury at higher levels thereby making them less safe for human consumption.
| Biology and health sciences | Holosteans | Animals |
245159 | https://en.wikipedia.org/wiki/Castor%20%28star%29 | Castor (star) | Castor is the second-brightest object in the zodiac constellation of Gemini. It has the Bayer designation α Geminorum, which is Latinised to Alpha Geminorum and abbreviated Alpha Gem or α Gem. With an apparent visual magnitude of 1.58, it is one of the brightest stars in the night sky. Castor appears singular to the naked eye, but it is actually a sextuple star system organized into three binary pairs. Although it is the 'α' (alpha) member of the constellation, it is half a magnitude fainter than 'β' (beta) Geminorum, Pollux.
Stellar system
Hierarchy of orbits in the Castor system
Castor is a multiple star system made up of six individual stars; there are three visual components, all of which are spectroscopic binaries. Appearing to the naked eye as a single star, Castor was first recorded as a double star in 1718 by James Pound, but it may have been resolved into at least two sources of light by Cassini as early as 1678. The separation between the binary systems Castor A and Castor B has increased from about 2″ (2 arcseconds of angular measurement) in 1970 to about 6″ in 2017. These pairs have magnitudes of 1.9 and 3.0, respectively.
Castor Aa and Ba each have a much fainter companion in an orbit of a few days.
Castor C, or YY Geminorum, was discovered to vary in brightness with a regular period. It is an eclipsing binary with additional variations due to areas of different brightness on the surface of one or both stars, as well as irregular flares. The Castor C components orbit in less than a day. Castor C is believed to be in orbit around Castor AB, but with an extremely long period of several thousand years. It is 73″ distant from the bright components.
The combined apparent magnitude of all six stars is +1.58.
Physical properties
Castor is 49 light-years away from Earth, determined from its large annual parallax.
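As a rough illustration of how such a distance follows from a parallax measurement (the numbers below are back-of-the-envelope values, not figures from the source), the standard relation between distance in parsecs and parallax in arcseconds gives:

```latex
% Illustrative only: d[pc] = 1 / p["], with 1 pc ~ 3.26 ly.
\[
  d \approx 49\ \text{ly} \approx \frac{49}{3.26}\ \text{pc} \approx 15\ \text{pc}
  \quad\Longrightarrow\quad
  p \approx \frac{1}{15}\ \text{arcsec} \approx 0.067'' \approx 67\ \text{mas}.
\]
```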
The two brightest stars are both A-type main-sequence stars, more massive and brighter than the Sun. The properties of their red dwarf companions are difficult to determine, but they are known to have masses 39% that of the Sun.
Castor B is an Am star, with particularly strong spectral lines of certain metals.
Castor C is a variable star, classified as a BY Draconis type. BY Draconis variables are cool dwarf stars which vary as they rotate, due to starspots or other variations in their photospheres. The two red dwarfs of Castor C are almost identical, with masses around and luminosities less than 10% that of the Sun. Since 2018, it has been suspected that a brown dwarf with a mass of at least times that of Jupiter might be orbiting Castor C with a period of 50 years. If it is confirmed, Castor would turn out to be a seven-member system.
All the red dwarfs in the Castor system have emission lines in their spectra, and all are flare stars.
Etymology and culture
α Geminorum (Latinised to Alpha Geminorum) is the star system's Bayer designation.
Castor and Pollux are the two "heavenly twin" stars that give the constellation Gemini (meaning twins in Latin) its name. The name Castor refers specifically to Castor, one of the twin sons of Zeus and Leda in Greek and Roman mythology.
The star was annotated by the Arabic description Al Ras al Taum al Muqadim, which translates as the head of the foremost twin. In the catalogue of stars in the Calendarium of al Achsasi al Mouakket, this star was designated Aoul al Dzira, which was translated into Latin as Prima Brachii, meaning the first in the paw.
In Chinese, (), meaning North River, refers to an asterism consisting of Castor, Rho Geminorum, and Pollux. Consequently, Castor itself is known as (, .)
In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin, of July 2016, included a table of the first two batches of names approved by the WGSN, which included Castor for the star α Geminorum Aa.
Castor C also has the variable-star designation YY Geminorum.
| Physical sciences | Notable stars | Astronomy |
245206 | https://en.wikipedia.org/wiki/Plus%20and%20minus%20signs | Plus and minus signs | The plus sign (+) and the minus sign (−) are mathematical symbols used to denote positive and negative quantities, respectively. In addition, + represents the operation of addition, which results in a sum, while − represents subtraction, resulting in a difference. Their use has been extended to many other meanings, more or less analogous. Plus and minus are Latin terms meaning "more" and "less", respectively.
The forms + and − are used in many countries around the world. Other designs include for plus and for minus.
History
Though the signs now seem as familiar as the alphabet or the Hindu–Arabic numerals, they are not of great antiquity. The Egyptian hieroglyphic sign for addition, for example, resembled a pair of legs walking in the direction in which the text was written (Egyptian could be written either from right to left or left to right), with the reverse sign indicating subtraction:
Nicole Oresme's manuscripts from the 14th century show what may be one of the earliest uses of + as a sign for plus.
In early 15th century Europe, the letters "P" and "M" were generally used. The symbols (P with overline, P̄, for più "more", i.e., plus, and M with overline, M̄, for meno "less", i.e., minus) appeared for the first time in Luca Pacioli's mathematics compendium, Summa de arithmetica, first printed and published in Venice in 1494.
The + sign is a simplification of the Latin et (comparable to the evolution of the ampersand &). The − may be derived from a macron written over m when used to indicate subtraction; or it may come from a shorthand version of the letter m itself.
In his 1489 treatise, Johannes Widmann referred to the symbols − and + as minus and mer (Modern German mehr; "more"). They were not used for addition and subtraction in the treatise, but were used to indicate surplus and deficit; usage in the modern sense is attested in a 1518 book by Henricus Grammateus.
Robert Recorde, the designer of the equals sign, introduced plus and minus to Britain in 1557 in The Whetstone of Witte: "There be other 2 signes in often use of which the first is made thus + and betokeneth more: the other is thus made − and betokeneth lesse."
Plus sign
The plus sign () is a binary operator that indicates addition, as in 2 + 3 = 5. It can also serve as a unary operator that leaves its operand unchanged (+x means the same as x). This notation may be used when it is desired to emphasize the positiveness of a number, especially in contrast with the negative numbers (+5 versus −5).
The plus sign can also indicate many other operations, depending on the mathematical system under consideration. Many algebraic structures, such as vector spaces and matrix rings, have some operation which is called, or is equivalent to, addition. It is, however, conventional to use the plus sign only to denote commutative operations.
The symbol is also used in chemistry and physics. For more, see the Other uses section below.
Minus sign
The minus sign () has three main uses in mathematics:
The subtraction operator: a binary operator to indicate the operation of subtraction, as in 5 − 3 = 2. Subtraction is the inverse of addition.
The function whose value for any real or complex argument is the additive inverse of that argument. For example, if x = 3, then −x = −3, but if x = −3, then −x = +3. Similarly, −(−x) = x.
A prefix of a numeric constant. When it is placed immediately before an unsigned number, the combination names a negative number, the additive inverse of the positive number that the numeral would otherwise name. In this usage, '−5' names a number the same way 'semicircle' names a geometric figure, with the caveat that 'semi' does not have a separate use as a function name.
In many contexts, it does not matter whether the second or the third of these usages is intended: −5 is the same number. When it is important to distinguish them, a raised minus sign () is sometimes used for negative constants, as in elementary education, the programming language APL, and some early graphing calculators.
All three uses can be referred to as "minus" in everyday speech, though the binary operator is sometimes read as "take away". In American English nowadays, −5 (for example) is generally referred to as "negative five" though speakers born before 1950 often refer to it as "minus five". (Temperatures tend to follow the older usage; −5° is generally called "minus five degrees".) Further, a few textbooks in the United States encourage −x to be read as "the opposite of x" or "the additive inverse of x"—to avoid giving the impression that −x is necessarily negative (since x itself may already be negative).
In mathematics and most programming languages, the rules for the order of operations mean that −5² is equal to −25: exponentiation binds more strongly than the unary minus, which binds more strongly than multiplication or division. However, in some programming languages (Microsoft Excel in particular), unary operators bind strongest, so in those cases −5^2 is 25, but 0−5^2 is −25.
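The following minimal sketch (an illustration, not taken from the source) shows the conventional precedence in Python, where exponentiation binds more tightly than the unary minus:

```python
# Exponentiation binds more tightly than unary minus, so -5 ** 2 is -(5 ** 2).
print(-5 ** 2)    # -25
print((-5) ** 2)  # 25, parentheses force the negation to happen first
```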
Similar to the plus sign, the minus sign is also used in chemistry and physics. (For more, see below.)
Use in elementary education
Some elementary teachers use raised minus signs before numbers to disambiguate them from the operation of subtraction. The same convention is also used in some computer languages. For example, subtracting −5 from 3 might be read as "positive three take away negative 5", and be shown as
3 − −5 becomes 3 + 5 = 8,
which can be read as:
+3 − (−5)
or even as
+3 − −5 becomes +3 + +5 = +8.
Use as a qualifier
When placed after a number, a plus sign can indicate an open range of numbers. For example, "18+" is commonly used as shorthand for "ages 18 and up" although "eighteen plus", for example, is now common usage.
In US grading systems, the plus sign indicates a grade one level higher and the minus sign a grade lower. For example, B− ("B minus") is one grade lower than B. On some occasions, this is extended to two plus or minus signs (e.g., A++ being two grades higher than A).
A common trend in branding, particularly with streaming video services, has been the use of the plus sign at the end of brand names, e.g. Google+, Disney+, Paramount+, and Apple TV+. Since the word "plus" can mean an advantage, or an additional amount of something, such "+" signs imply that a product offers extra features or benefits.
Positive and negative are sometimes abbreviated as +ve and −ve, and the terminals of batteries and cells are often marked with + and −.
Mathematics
In mathematics, the one-sided limit x → a⁺ means x approaches a from the right (i.e., a right-sided limit), and x → a⁻ means x approaches a from the left (i.e., a left-sided limit). For example, 1/x → +∞ as x → 0⁺, but 1/x → −∞ as x → 0⁻.
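A short sketch of the same one-sided limits using SymPy (the library choice is an assumption made for illustration; the article does not mention any software):

```python
from sympy import Symbol, limit, oo

x = Symbol('x')
print(limit(1 / x, x, 0, '+'))  # oo, approaching 0 from the right
print(limit(1 / x, x, 0, '-'))  # -oo, approaching 0 from the left
```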
Blood
Blood types are often qualified with a plus or minus to indicate the presence or absence of the Rh factor. For example, A+ means type A blood with the Rh factor present, while B− means type B blood with the Rh factor absent.
Music
In music, augmented chords are symbolized with a plus sign, although this practice is not universal (as there are other methods for spelling those chords). For example, "C+" is read "C augmented chord". Sometimes the plus is written as a superscript.
Uses in computing
As well as the normal mathematical usage, plus and minus signs may be used for a number of other purposes in computing.
Plus and minus signs are often used in tree view on a computer screen—to show if a folder is collapsed or not.
In some programming languages, concatenation of strings is written "a" + "b", and results in "ab".
In most programming languages, subtraction and negation are indicated with the ASCII hyphen-minus character, -. In APL a raised minus sign (here written using ¯) is used to denote a negative number, as in ¯3, while in J a negative number is denoted by an underscore, as in _3.
In C and some other computer programming languages, two plus signs indicate the increment operator and two minus signs a decrement; the position of the operator before or after the variable indicates whether the new or old value is read from it. For example, if x equals 6, then y = x++ increments x to 7 but sets y to 6, whereas y = ++x would set both x and y to 7. By extension, ++ is sometimes used in computing terminology to signify an improvement, as in the name of the language C++.
In regular expressions, + is often used to indicate "1 or more" in a pattern to be matched. For example, x+ means "one or more of the letter x". This is the Kleene plus notation. Hyphen-minus usually indicates a range ([A-Z] matches any capital from 'A' to 'Z'), although it can stand for itself when placed at the start or end of a character class ([A-E-] matches any capital from 'A' to 'E' or '-').
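A minimal Python sketch of these patterns (the sample strings are invented for illustration):

```python
import re

print(re.findall(r"x+", "xxab xxx"))    # ['xx', 'xxx']    one or more 'x'
print(re.findall(r"[A-Z]", "Ab-Cd"))    # ['A', 'C']       any capital A to Z
print(re.findall(r"[A-E-]", "Ab-Cd"))   # ['A', '-', 'C']  A to E, or a literal '-'
```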
There is no concept of negative zero in mathematics, but in computing −0 may have a separate representation from zero. In the IEEE floating-point standard, 1 / −0 is negative infinity (−∞) whereas 1 / 0 is positive infinity (+∞).
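A small Python sketch (illustrative only) of how IEEE 754 floats distinguish the two zeros; plain Python raises an error on float division by zero, so the sign is shown via the textual representation and copysign instead:

```python
import math

neg_zero = -0.0
print(neg_zero == 0.0)               # True: the two zeros compare equal
print(str(neg_zero))                 # '-0.0': but the representations differ
print(math.copysign(1.0, neg_zero))  # -1.0: the sign bit is preserved
```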
The plus sign is also used to denote added lines in the output of diff tools, such as the unified diff format.
Other uses
In physics, the use of plus and minus signs for different electrical charges was introduced by Georg Christoph Lichtenberg.
In chemistry, superscripted plus and minus signs are used to indicate an ion with a positive or negative charge of 1 (e.g., NH₄⁺). If the charge is greater than 1, a number indicating the charge is written before the sign (as in SO₄²⁻).
A plus sign prefixed to a telephone number is used to indicate the form used for International Direct Dialing. Its precise usage varies by technology and national standards. In the International Phonetic Alphabet, subscripted plus and minus signs are used as diacritics to indicate advanced or retracted articulations of speech sounds.
The minus sign is also used as tone letter in the orthographies of Dan, Krumen, Karaboro, Mwan, Wan, Yaouré, Wè, Nyabwa, and Godié. The Unicode character used for the tone letter () is different from the mathematical minus sign.
The plus sign sometimes represents in the orthography of Huichol.
In the algebraic notation used to record games of chess, the plus sign is used to denote a move that puts the opponent into check, while a double plus is sometimes used to denote double check. Combinations of the plus and minus signs are used to evaluate a move (+/−, +/=, =/+, −/+).
In linguistics, a superscript plus sometimes replaces the asterisk, which denotes unattested linguistic reconstruction.
In botanical names, a plus sign denotes graft-chimaera.
In Catholicism, the plus sign before a last name denotes a Bishop, and a double plus is used to denote an Archbishop.
Codepoints
Variants of the symbols have unique codepoints in Unicode:
Alternative minus signs
There is a commercial minus sign, ⁒, which is used in Germany and Scandinavia. The symbol ÷ is used to denote subtraction in Scandinavia.
The hyphen-minus symbol () is the form of hyphen most commonly used in digital documents. On most keyboards, it is the only character that resembles a minus sign or a dash so it is also used for these. The name hyphen-minus derives from the original ASCII standard, where it was called hyphen–(minus). The character is referred to as a hyphen, a minus sign, or a dash according to the context where it is being used.
Alternative plus sign
A Jewish tradition that dates from at least the 19th century is to write plus using the symbol ﬩, to avoid the writing of a symbol that could look like a Christian cross. This practice was adopted into Israeli schools and is still commonplace today in elementary schools (including secular schools) but in fewer secondary schools. It is also used occasionally in books by religious authors, but most books for adults use the international symbol +. Unicode has this symbol at position U+FB29 (Hebrew letter alternative plus sign).
| Mathematics | Basics | null |
245307 | https://en.wikipedia.org/wiki/Treeswift | Treeswift | Treeswifts or crested swifts are a family, the Hemiprocnidae, of aerial near passerine birds, closely related to the true swifts. The family contains a single genus, Hemiprocne, with four species. They are distributed from India and Southeast Asia through Indonesia to New Guinea and the Solomon Islands.
Treeswifts are small to medium-sized swifts, ranging in length from 15 to 30 cm. They have long wings, with most of the length coming from the primaries; their arms are actually quite short. They visibly differ from the other swifts in plumage, which is softer, and in having crests or other facial ornaments and long, forked tails. Anatomically they are separated from the true swifts by skeletal details in the cranium and palate, the anatomy of the tarsus, and a nonreversible hind toe that is used for perching on branches (an activity in which true swifts are unable to engage). The males have iridescent mantle plumage. They also have diastataxic wings, that is, they lack a fifth secondary feather, unlike swifts in the Apodini, which are eutaxic.
The treeswifts exhibit a wide range of habitat preferences. One species, the whiskered treeswift, is a species of primary forest. Highly manoeuvrable, it feeds close to vegetation beneath the canopy, only rarely ventures into secondary forests or plantations, and never feeds over open ground. Other species are less restricted; the crested treeswift makes use of a range of habitats including humid forests and deciduous woodland, and the grey-rumped treeswift occupies almost every habitat type available, from mangrove forests to hill forests. All species feed on insects, although the exact details of what prey are taken have not been studied in detail.
Nest-building responsibilities are shared by the male and female. They lay one egg in the nest, which is glued to an open tree branch. Egg colour varies from white to grey. Little information is available about incubation times, but they are thought to be longer for the larger species. Chicks hatch with a covering of grey down and are fed a bolus of regurgitated food by the parents.
Species
| Biology and health sciences | Apodiformes | Animals |
245364 | https://en.wikipedia.org/wiki/Ophidiiformes | Ophidiiformes | Ophidiiformes is an order of ray-finned fish that includes the cusk-eels (family Ophidiidae), pearlfishes (family Carapidae), viviparous brotulas (family Bythitidae), and others. Members of this order have small heads and long slender bodies. They have either smooth scales or no scales, a long dorsal fin and an anal fin that typically runs into the caudal fin. They mostly come from the tropics and subtropics, and live in both freshwater and marine habitats, including abyssal depths. They have adopted a range of feeding methods and lifestyles, including parasitism. The majority are egg-laying, but some are viviparous.
The earliest fossil members are known from the Maastrichtian, and include the basal ophidiiform Pastorius from Italy and several species of the basal cusk-eel Ampheristus from the United States and Germany.
Distribution
This order includes a variety of deep-sea species, including the deepest known, Abyssobrotula galatheae, found at in the Puerto Rico Trench. Many other species, however, live in shallow water, especially near coral reefs, while a few inhabit freshwater. Most species live in tropical or subtropical habitats, but some species are known from as far north as the coast of Greenland, and as far south as the Weddell Sea.
Characteristics
Ophidiiform fish typically have slender bodies with small heads, and either smooth scales, or none at all. They have long dorsal fins, and an anal fin that is typically united with the caudal fin. The group includes pelagic, benthic, and even parasitic species, although all have a similar body form. Some species are viviparous, giving birth to live young, rather than laying eggs. They range in size from Grammanoides opisthodon which measures just in length, to Lamprogrammus shcherbachevi at in length.
The families Ranicipitidae (tadpole cods) and Euclichthyidae (eucla cods) were formerly classified in this order, but are now placed in Gadiformes; Ranicipitidae has been absorbed within the family Gadidae.
Timeline of genera
Classification
The order Ophidiiformes is subdivided into suborders and families as follows:
Genus †Pastorius (Late Cretaceous of Italy)
Suborder Ophidioidei
Family Carapidae Poey, 1867 — pearlfishes
Family Ophidiidae Rafinesque, 1810— cusk-eels
Suborder Bythitoidei
Family Bythitidae Gill, 1861 — viviparous brotulas
Family Aphyonidae Jordan & Evermann, 1898 — aphyonids, blind cusk-eel
Family Parabrotulidae Nielsen, 1968 — false brotulas
The suborder Ophidioidei, which is oviparous, may be a paraphyletic grouping, while the suborder Bythitoidei, which is viviparous, appears to make up a monophyletic group.
| Biology and health sciences | Fishes | null |
245432 | https://en.wikipedia.org/wiki/Protoplanetary%20disk | Protoplanetary disk | A protoplanetary disk is a rotating circumstellar disc of dense gas and dust surrounding a young, newly formed star, a T Tauri star, or a Herbig Ae/Be star. A protoplanetary disk should not be confused with an accretion disk; while the two are similar, an accretion disk is hotter, spins much faster, and is found around black holes rather than stars. This process should also not be confused with the accretion process thought to build up the planets themselves. Externally illuminated photo-evaporating protoplanetary disks are called proplyds.
Formation
Protostars form from molecular clouds consisting primarily of molecular hydrogen. When a portion of a molecular cloud reaches a critical size, mass, or density, it begins to collapse under its own gravity. As this collapsing cloud, called a solar nebula, becomes denser, random gas motions originally present in the cloud average out in favor of the direction of the nebula's net angular momentum. Conservation of angular momentum causes the rotation to increase as the nebula radius decreases. This rotation causes the cloud to flatten out—much like forming a flat pizza out of dough—and take the form of a disk. This occurs because centripetal acceleration from the orbital motion resists the gravitational pull of the star only in the radial direction, but the cloud remains free to collapse in the axial direction. The outcome is the formation of a thin disc supported by gas pressure in the axial direction. The initial collapse takes about 100,000 years. After that time the star reaches a surface temperature similar to that of a main sequence star of the same mass and becomes visible.
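As a rough order-of-magnitude sketch of this spin-up (an illustration, not a figure from the source), treat the cloud as a body of fixed mass M whose moment of inertia scales as MR²:

```latex
% Illustrative only: conservation of angular momentum for a contracting cloud.
\[
  L = I\,\omega \sim M R^{2}\omega = \text{const}
  \quad\Longrightarrow\quad
  \omega \propto R^{-2},
\]
% so shrinking the radius by a factor of ten raises the spin rate roughly a hundredfold.
```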
It is now a T Tauri star. Accretion of gas onto the star continues for another 10 million years, before the disk disappears, perhaps being blown away by the young star's stellar wind, or perhaps simply ceasing to emit radiation after accretion has ended. The oldest protoplanetary disk yet discovered is 25 million years old.
Protoplanetary disks around T Tauri stars differ from the disks surrounding the primary components of close binary systems with respect to their size and temperature. Protoplanetary disks have radii up to 1000 AU, and only their innermost parts reach temperatures above 1000 K. They are very often accompanied by jets.
Protoplanetary disks have been observed around several young stars in our galaxy. Observations by the Hubble Space Telescope have shown proplyds and planetary disks to be forming within the Orion Nebula.
Protoplanetary disks are thought to be thin structures, with a typical vertical height much smaller than the radius, and a typical mass much smaller than the central young star.
The mass of a typical protoplanetary disk is dominated by its gas; however, the presence of dust grains has a major role in its evolution. Dust grains shield the mid-plane of the disk from energetic radiation from outer space, which creates a dead zone in which the magnetorotational instability (MRI) no longer operates.
It is believed that these disks consist of a turbulent envelope of plasma, also called the active zone, that encases an extensive region of quiescent gas called the dead zone. The dead zone located at the mid-plane can slow down the flow of matter through the disk which prohibits achieving a steady state.
Planetary system
The nebular hypothesis of solar system formation describes how protoplanetary disks are thought to evolve into planetary systems. Electrostatic and gravitational interactions may cause the dust and ice grains in the disk to accrete into planetesimals. This process competes against the stellar wind, which drives the gas out of the system, and gravity (accretion) and internal stresses (viscosity), which pulls material into the central T Tauri star. Planetesimals constitute the building blocks of both terrestrial and giant planets.
Some of the moons of Jupiter, Saturn, and Uranus are believed to have formed from smaller, circumplanetary analogs of the protoplanetary disks. The formation of planets and moons in geometrically thin, gas- and dust-rich disks is the reason why the planets are arranged in an ecliptic plane. Tens of millions of years after the formation of the Solar System, the inner few AU of the Solar System likely contained dozens of moon- to Mars-sized bodies that were accreting and consolidating into the terrestrial planets that we now see. The Earth's moon likely formed after a Mars-sized protoplanet obliquely impacted the proto-Earth ~30 million years after the formation of the Solar System.
Debris disks
Gas-poor disks of circumstellar dust have been found around many nearby stars—most of which have ages in the range of ~10 million years (e.g. Beta Pictoris, 51 Ophiuchi) to billions of years (e.g. Tau Ceti). These systems are usually referred to as "debris disks". Given the older ages of these stars, and the short lifetimes of micrometer-sized dust grains around stars due to Poynting Robertson drag, collisions, and radiation pressure (typically hundreds to thousands of years), it is thought that this dust is from the collisions of planetesimals (e.g. asteroids, comets). Hence the debris disks around these examples (e.g. Vega, Alphecca, Fomalhaut, etc.) are not "protoplanetary", but represent a later stage of disk evolution where extrasolar analogs of the asteroid belt and Kuiper belt are home to dust-generating collisions between planetesimals.
Relation to abiogenesis
Based on recent computer model studies, the complex organic molecules necessary for life may have formed in the protoplanetary disk of dust grains surrounding the Sun before the formation of the Earth. According to the computer studies, this same process may also occur around other stars that acquire planets. (Also see Extraterrestrial organic molecules.)
Gallery
| Physical sciences | Stellar astronomy | Astronomy |
245434 | https://en.wikipedia.org/wiki/Proplyd | Proplyd | A proplyd, short for ionized protoplanetary disk, is an externally illuminated photoevaporating protoplanetary disk around a young star. Nearly 180 proplyds have been discovered in the Orion Nebula. Images of proplyds in other star-forming regions are rare, while Orion is the only region with a large known sample due to its relative proximity to Earth.
History
In 1979, observations with the Lallemand electronic camera at the Pic-du-Midi Observatory showed six unresolved high-ionization sources near the Trapezium Cluster. These sources were not interpreted as proplyds, but as partly ionized globules (PIGs). The idea was that these objects were being ionized from the outside by M42. Later observations with the Very Large Array showed solar-system-sized condensations associated with these sources. This led to the idea that these objects might be low-mass stars surrounded by an evaporating protostellar accretion disk.
Proplyds were clearly resolved in 1993 using images of the Hubble Space Telescope Wide Field Camera and the term "proplyd" was used.
Characteristics
In the Orion Nebula the proplyds observed are usually one of two types. Some proplyds, found close to luminous stars, glow from those stars' luminosity. Other proplyds are found at a greater distance from the host star and instead show up as dark silhouettes because of the self-obscuration of cooler dust and gases from the disk itself. Some proplyds show signs of motion from shock waves driven by stellar irradiance pushing on them. The Orion Nebula is approximately 1,500 light-years from the Sun, with very active star formation. The Orion Nebula and the Sun are in the same spiral arm of the Milky Way galaxy.
A proplyd may form new planets and planetesimal systems. Current models show that the metallicity of the star and proplyd, along with the correct planetary system temperature and distance from the star, are keys to planet and planetesimal formation. To date, the Solar System, with 8 planets, 5 dwarf planets and 5 planetesimal systems, is the largest planetary system found. Most proplyds develop into a system with no planetesimal systems, or into one very large planetesimal system.
Proplyds in other star-forming regions
Photoevaporating proplyds in other star forming regions were found with the Hubble Space Telescope. NGC 1977 currently represents the star-forming region with the largest number of proplyds outside of the Orion Nebula, with 7 confirmed proplyds. It was also the first instance where a B-type star, 42 Orionis is responsible for the photoevaporation. In addition, 4 clear and 4 candidate proplyds were discovered in the very young region NGC 2024, two of which have been photoevaporated by a B star. The NGC 2024 proplyds are significant because they imply that external photoevaporation of protoplanetary disks could compete even with very early planet formation (within the first half a million years).
Another type of photoevaporating proplyd was discovered with the Spitzer Space Telescope. These cometary tails represent dust being pulled away from the disks. Westerhout 5 is a region with many dusty proplyds, especially around HD 17505. These dusty proplyds are depleted of any gas in the outer regions of the disk, but the photoevaporation could leave an inner, more robust, and possibly gas-rich disk component of radius 5-10 astronomical units.
The proplyds in the Orion Nebula and other star-forming regions represent protoplanetary disks around low-mass stars being externally photoevaporated. These low-mass proplyds are usually found within 0.3 parsec (60,000 astronomical units) of the massive OB star, and the dusty proplyds have tails with a length of 0.1 to 0.2 parsec (20,000 to 40,000 au). There is a proposed type of intermediate-mass counterpart, called proplyd-like objects. Objects in NGC 3603, and later in Cygnus OB2, were proposed as intermediate-mass versions of the bright proplyds found in the Orion Nebula. The proplyd-like objects in Cygnus OB2, for example, are 6 to 14 parsecs distant from a large collection of OB stars and have tail lengths of 0.11 to 0.55 parsec (24,000 to 113,000 au). The nature of proplyd-like objects as intermediate-mass proplyds is partly supported by a spectrum of one object, which showed that the mass loss rate is higher than the mass accretion rate. Another object did not show any outflow, but did show accretion.
List of star-forming regions with proplyds
The list is sorted by distance.
Gallery
| Physical sciences | Stellar astronomy | Astronomy |
245466 | https://en.wikipedia.org/wiki/Sheaf%20%28mathematics%29 | Sheaf (mathematics) | In mathematics, a sheaf (plural: sheaves) is a tool for systematically tracking data (such as sets, abelian groups, rings) attached to the open sets of a topological space and defined locally with regard to them. For example, for each open set, the data could be the ring of continuous functions defined on that open set. Such data are well-behaved in that they can be restricted to smaller open sets, and also the data assigned to an open set are equivalent to all collections of compatible data assigned to collections of smaller open sets covering the original open set (intuitively, every datum is the sum of its constituent data).
The field of mathematics that studies sheaves is called sheaf theory.
Sheaves are understood conceptually as general and abstract objects. Their precise definition is rather technical. They are specifically defined as sheaves of sets or as sheaves of rings, for example, depending on the type of data assigned to the open sets.
There are also maps (or morphisms) from one sheaf to another; sheaves (of a specific type, such as sheaves of abelian groups) with their morphisms on a fixed topological space form a category. On the other hand, to each continuous map there is associated both a direct image functor, taking sheaves and their morphisms on the domain to sheaves and morphisms on the codomain, and an inverse image functor operating in the opposite direction. These functors, and certain variants of them, are essential parts of sheaf theory.
Due to their general nature and versatility, sheaves have several applications in topology and especially in algebraic and differential geometry. First, geometric structures such as that of a differentiable manifold or a scheme can be expressed in terms of a sheaf of rings on the space. In such contexts, several geometric constructions such as vector bundles or divisors are naturally specified in terms of sheaves. Second, sheaves provide the framework for a very general cohomology theory, which encompasses also the "usual" topological cohomology theories such as singular cohomology. Especially in algebraic geometry and the theory of complex manifolds, sheaf cohomology provides a powerful link between topological and geometric properties of spaces. Sheaves also provide the basis for the theory of D-modules, which provide applications to the theory of differential equations. In addition, generalisations of sheaves to more general settings than topological spaces, such as Grothendieck topology, have provided applications to mathematical logic and to number theory.
Definitions and examples
In many mathematical branches, several structures defined on a topological space (e.g., a differentiable manifold) can be naturally localised or restricted to open subsets : typical examples include continuous real-valued or complex-valued functions, -times differentiable (real-valued or complex-valued) functions, bounded real-valued functions, vector fields, and sections of any vector bundle on the space. The ability to restrict data to smaller open subsets gives rise to the concept of presheaves. Roughly speaking, sheaves are then those presheaves, where local data can be glued to global data.
Presheaves
Let X be a topological space. A presheaf F of sets on X consists of the following data:
For each open set U of X, a set F(U). This set is also denoted Γ(U, F). The elements of this set are called the sections of F over U. The sections of F over X are called the global sections of F.
For each inclusion of open sets V ⊆ U, a function res_{V,U} : F(U) → F(V). In view of many of the examples below, the morphisms res_{V,U} are called restriction morphisms. If s ∈ F(U), then its restriction res_{V,U}(s) is often denoted s|_V by analogy with restriction of functions.
The restriction morphisms are required to satisfy two additional (functorial) properties:
For every open set U of X, the restriction morphism res_{U,U} : F(U) → F(U) is the identity morphism on F(U).
If we have three open sets W ⊆ V ⊆ U, then the composite res_{W,V} ∘ res_{V,U} equals res_{W,U}.
Informally, the second axiom says it does not matter whether we restrict to W in one step or restrict first to V, then to W. A concise functorial reformulation of this definition is given further below.
Many examples of presheaves come from different classes of functions: to any open set U, one can assign the set C⁰(U) of continuous real-valued functions on U. The restriction maps are then just given by restricting a continuous function on U to a smaller open subset V, which again is a continuous function. The two presheaf axioms are immediately checked, thereby giving an example of a presheaf. This can be extended to a presheaf of holomorphic functions and a presheaf of smooth functions.
Another common class of examples is assigning to U the set of constant real-valued functions on U. This presheaf is called the constant presheaf associated to ℝ.
Sheaves
Given a presheaf, a natural question to ask is to what extent its sections over an open set are specified by their restrictions to open subsets of . A sheaf is a presheaf whose sections are, in a technical sense, uniquely determined by their restrictions.
Axiomatically, a sheaf is a presheaf that satisfies both of the following axioms:
(Locality) Suppose U is an open set, {U_i}_{i∈I} is an open cover of U with each U_i ⊆ U, and s, t ∈ F(U) are sections. If s|_{U_i} = t|_{U_i} for all i ∈ I, then s = t.
(Gluing) Suppose U is an open set, {U_i}_{i∈I} is an open cover of U with each U_i ⊆ U, and {s_i ∈ F(U_i)}_{i∈I} is a family of sections. If all pairs of sections agree on the overlap of their domains, that is, if s_i|_{U_i ∩ U_j} = s_j|_{U_i ∩ U_j} for all i, j ∈ I, then there exists a section s ∈ F(U) such that s|_{U_i} = s_i for all i ∈ I.
In both of these axioms, the hypothesis on the open cover is equivalent to the assumption that ⋃_{i∈I} U_i = U.
The section s whose existence is guaranteed by axiom 2 is called the gluing, concatenation, or collation of the sections s_i. By axiom 1 it is unique. Sections s_i and s_j satisfying the agreement precondition of axiom 2 are often called compatible; thus axioms 1 and 2 together state that any collection of pairwise compatible sections can be uniquely glued together. A separated presheaf, or monopresheaf, is a presheaf satisfying axiom 1.
The presheaf consisting of continuous functions mentioned above is a sheaf. This assertion reduces to checking that, given continuous functions f_i on U_i which agree on the intersections U_i ∩ U_j, there is a unique continuous function f on U whose restriction equals the f_i. By contrast, the constant presheaf is usually not a sheaf, as it fails to satisfy the locality axiom on the empty set (this is explained in more detail at constant sheaf).
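A toy computational sketch of the two axioms (entirely illustrative; the finite space, the dict encoding of sections, and the function names are assumptions, not anything from the source):

```python
# The presheaf of real-valued functions on the tiny space X = {1, 2, 3}: a
# "section over an open set U" is a dict {point: value}, restriction is dict
# restriction, and gluing checks agreement on overlaps before merging.

def restrict(section, subset):
    """Restriction morphism: forget values outside the smaller open set."""
    return {p: v for p, v in section.items() if p in subset}

def glue(sections):
    """Glue a family {open set (frozenset): section} agreeing on overlaps."""
    for u, s in sections.items():
        for v, t in sections.items():
            overlap = u & v
            if restrict(s, overlap) != restrict(t, overlap):
                raise ValueError("sections do not agree on an overlap")
    glued = {}
    for s in sections.values():
        glued.update(s)
    return glued  # unique by the locality axiom

U1, U2 = frozenset({1, 2}), frozenset({2, 3})
s1 = {1: 0.5, 2: 2.0}           # a section over U1
s2 = {2: 2.0, 3: -1.0}          # a section over U2, agreeing with s1 on U1 & U2
print(glue({U1: s1, U2: s2}))   # {1: 0.5, 2: 2.0, 3: -1.0}
```

Here agreement on the overlaps is the compatibility precondition of axiom 2, and the merged dict is the unique gluing promised by axiom 1.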
Presheaves and sheaves are typically denoted by capital letters, F being particularly common, presumably for the French word for sheaf, faisceau. Use of calligraphic letters such as ℱ is also common.
It can be shown that to specify a sheaf, it is enough to specify its restriction to the open sets of a basis for the topology of the underlying space. Moreover, it can also be shown that it is enough to verify the sheaf axioms above relative to the open sets of a covering. This observation is used to construct another example which is crucial in algebraic geometry, namely quasi-coherent sheaves. Here the topological space in question is the spectrum of a commutative ring R, whose points are the prime ideals p in R. The open sets D_f = {p : f ∉ p} form a basis for the Zariski topology on this space. Given an R-module M, there is a sheaf, denoted by M̃ on Spec R, that satisfies
M̃(D_f) = M_f,
the localization of M at f.
There is another characterization of sheaves that is equivalent to the one previously discussed.
A presheaf F is a sheaf if and only if, for any open U and any open cover {U_a} of U, F(U) is the fibre product of the F(U_a) over the F(U_a ∩ U_b). This characterization is useful in the construction of sheaves: for example, if F and G are abelian sheaves, then the kernel of a morphism of sheaves F → G is a sheaf, since projective limits commute with projective limits. On the other hand, the cokernel is not always a sheaf, because inductive limits do not necessarily commute with projective limits. One way to fix this is to consider Noetherian topological spaces; there every open set is compact, so that the cokernel is a sheaf, since finite projective limits commute with inductive limits.
Further examples
Sheaf of sections of a continuous map
Any continuous map f : Y → X of topological spaces determines a sheaf Γ(Y/X) on X by setting
Γ(Y/X)(U) = {s : U → Y continuous : f ∘ s = id_U}.
Any such s is commonly called a section of f, and this example is the reason why the elements in F(U) are generally called sections. This construction is especially important when f is the projection of a fiber bundle onto its base space. For example, the sheaves of smooth functions are the sheaves of sections of the trivial bundle.
Another example: the sheaf of sections of the complex exponential map exp : ℂ → ℂ∖{0}
is the sheaf which assigns to any open U ⊆ ℂ∖{0} the set of branches of the complex logarithm on U.
Given a point x and an abelian group S, the skyscraper sheaf S_x is defined as follows: if U is an open set containing x, then S_x(U) = S. If U does not contain x, then S_x(U) = 0, the trivial group. The restriction maps are either the identity on S, if both open sets contain x, or the zero map otherwise.
Sheaves on manifolds
On an n-dimensional C^k-manifold M, there are a number of important sheaves, such as the sheaf of j-times continuously differentiable functions O_M^j (with j ≤ k). Its sections on some open U are the C^j-functions U → ℝ. For j = k, this sheaf is called the structure sheaf and is denoted O_M. The nonzero C^k functions also form a sheaf, denoted O_M^*. Differential forms (of degree p) also form a sheaf Ω_M^p. In all these examples, the restriction morphisms are given by restricting functions or forms.
The assignment sending U to the compactly supported functions on U is not a sheaf, since there is, in general, no way to preserve this property by passing to a smaller open subset. Instead, this forms a cosheaf, a dual concept where the restriction maps go in the opposite direction than with sheaves. However, taking the dual of these vector spaces does give a sheaf, the sheaf of distributions.
Presheaves that are not sheaves
In addition to the constant presheaf mentioned above, which is usually not a sheaf, there are further examples of presheaves that are not sheaves:
Let X = {x, y} be the two-point topological space with the discrete topology. Define a presheaf F as follows: F(∅) = {∅}, F({x}) = ℝ, F({y}) = ℝ, F({x, y}) = ℝ × ℝ × ℝ. The restriction map F({x, y}) → F({x}) is the projection of ℝ × ℝ × ℝ onto its first coordinate, and the restriction map F({x, y}) → F({y}) is the projection of ℝ × ℝ × ℝ onto its second coordinate. F is a presheaf that is not separated: a global section is determined by three numbers, but the values of that section over {x} and {y} determine only two of those numbers. So while we can glue any two sections over {x} and {y}, we cannot glue them uniquely.
Let X be the real line, and let F(U) be the set of bounded continuous functions on U. This is not a sheaf because it is not always possible to glue. For example, let U_n be the set of all x such that |x| < n. The identity function f(x) = x is bounded on each U_n. Consequently, we get a section s_n on U_n. However, these sections do not glue, because the function f is not bounded on the real line. Consequently F is a presheaf, but not a sheaf. In fact, F is separated because it is a sub-presheaf of the sheaf of continuous functions.
Motivating sheaves from complex analytic spaces and algebraic geometry
One of the historical motivations for sheaves has come from studying complex manifolds, complex analytic geometry, and scheme theory in algebraic geometry. This is because in all of the previous cases, we consider a topological space together with a structure sheaf giving it the structure of a complex manifold, complex analytic space, or scheme. This perspective of equipping a topological space with a sheaf is essential to the theory of locally ringed spaces (see below).
Technical challenges with complex manifolds
One of the main historical motivations for introducing sheaves was constructing a device which keeps track of holomorphic functions on complex manifolds. For example, on a compact complex manifold (like complex projective space or the vanishing locus in projective space of a homogeneous polynomial), the only holomorphic functions are the constant functions. This means there exist two compact complex manifolds which are not isomorphic, but nevertheless their rings of global holomorphic functions, denoted O(X), are isomorphic. Contrast this with smooth manifolds, where every manifold M can be embedded inside some ℝ^n, hence its ring of smooth functions comes from restricting the smooth functions from ℝ^n.
Another complexity when considering the ring of holomorphic functions on a complex manifold is given a small enough open set , the holomorphic functions will be isomorphic to . Sheaves are a direct tool for dealing with this complexity since they make it possible to keep track of the holomorphic structure on the underlying topological space of on arbitrary open subsets . This means as becomes more complex topologically, the ring can be expressed from gluing the . Note that sometimes this sheaf is denoted or just , or even when we want to emphasize the space the structure sheaf is associated to.
Tracking submanifolds with sheaves
Another common example of sheaves can be constructed by considering a complex submanifold . There is an associated sheaf which takes an open subset and gives the ring of holomorphic functions on . This kind of formalism was found to be extremely powerful and motivates a lot of homological algebra such as sheaf cohomology since an intersection theory can be built using these kinds of sheaves from the Serre intersection formula.
Operations with sheaves
Morphisms
Morphisms of sheaves are, roughly speaking, analogous to functions between them. In contrast to a function between sets, which is simply an assignment of outputs to inputs, morphisms of sheaves are also required to be compatible with the local–global structures of the underlying sheaves. This idea is made precise in the following definition.
Let F and G be two sheaves of sets (respectively abelian groups, rings, etc.) on X. A morphism φ : F → G consists of a morphism φ(U) : F(U) → G(U) of sets (respectively abelian groups, rings, etc.) for each open set U of X, subject to the condition that this morphism is compatible with restrictions. In other words, for every open subset V of an open set U, the following diagram is commutative.
For example, taking the derivative gives a morphism of sheaves on ℝ,
d/dx : O_ℝ^n → O_ℝ^{n−1}.
Indeed, given an (n-times continuously differentiable) function f : U → ℝ (with U in ℝ open), the restriction (to a smaller open subset V) of its derivative equals the derivative of f|_V.
With this notion of morphism, sheaves of sets (respectively abelian groups, rings, etc.) on a fixed topological space form a category. The general categorical notions of mono-, epi- and isomorphisms can therefore be applied to sheaves.
A morphism of sheaves on is an isomorphism (respectively monomorphism) if and only if there exists an open cover of such that are isomorphisms (respectively injective morphisms) of sets (respectively abelian groups, rings, etc.) for all . These statements give examples of how to work with sheaves using local information, but it's important to note that we cannot check if a morphism of sheaves is an epimorphism in the same manner. Indeed the statement that maps on the level of open sets are not always surjective for epimorphisms of sheaves is equivalent to non-exactness of the global sections functor—or equivalently, to non-triviality of sheaf cohomology.
Stalks of a sheaf
The stalk of a sheaf captures the properties of a sheaf "around" a point , generalizing the germs of functions.
Here, "around" means that, conceptually speaking, one looks at smaller and smaller neighborhoods of the point. Of course, no single neighborhood will be small enough, which requires considering a limit of some sort. More precisely, the stalk is defined by
the direct limit being over all open subsets of containing the given point . In other words, an element of the stalk is given by a section over some open neighborhood of , and two such sections are considered equivalent if their restrictions agree on a smaller neighborhood.
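Written out as a filtered colimit (standard notation, consistent with the prose above), the stalk identifies sections over shrinking neighborhoods of the point whenever they agree on a common smaller neighborhood:

```latex
\[
  \mathcal{F}_x \;=\; \varinjlim_{U \ni x} \mathcal{F}(U),
  \qquad
  (U, s) \sim (V, t) \iff s|_{W} = t|_{W}
  \ \text{for some open } W \subseteq U \cap V \text{ with } x \in W.
\]
```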
The natural morphism F(U) → F_x takes a section s in F(U) to its germ s_x at x. This generalises the usual definition of a germ.
In many situations, knowing the stalks of a sheaf is enough to control the sheaf itself. For example, whether or not a morphism of sheaves is a monomorphism, epimorphism, or isomorphism can be tested on the stalks. In this sense, a sheaf is determined by its stalks, which are local data. By contrast, the global information present in a sheaf, i.e., the global sections, i.e., the sections F(X) on the whole space X, typically carry less information. For example, for a compact complex manifold X, the global sections of the sheaf of holomorphic functions are just ℂ, since any holomorphic function
X → ℂ
is constant by Liouville's theorem.
Turning a presheaf into a sheaf
It is frequently useful to take the data contained in a presheaf and to express it as a sheaf. It turns out that there is a best possible way to do this. It takes a presheaf F and produces a new sheaf F# called the sheafification or sheaf associated to the presheaf F. For example, the sheafification of the constant presheaf (see above) is called the constant sheaf. Despite its name, its sections are locally constant functions.
The sheaf F# can be constructed using the étalé space E of F, namely as the sheaf of sections of the map
E → X.
Another construction of the sheaf F# proceeds by means of a functor L from presheaves to presheaves that gradually improves the properties of a presheaf: for any presheaf F, LF is a separated presheaf, and for any separated presheaf F, LF is a sheaf. The associated sheaf F# is given by F# = LLF.
The idea that the sheaf F# is the best possible approximation to F by a sheaf is made precise using the following universal property: there is a natural morphism of presheaves i : F → F# so that for any sheaf G and any morphism of presheaves f : F → G, there is a unique morphism of sheaves f~ : F# → G such that f = f~ ∘ i. In fact, the functor sending F to F# is the left adjoint functor to the inclusion functor (or forgetful functor) from the category of sheaves to the category of presheaves, and i is the unit of the adjunction. In this way, the category of sheaves turns into a Giraud subcategory of presheaves. This categorical situation is the reason why the sheafification functor appears in constructing cokernels of sheaf morphisms or tensor products of sheaves, but not for kernels, say.
Subsheaves, quotient sheaves
If F′ is a subsheaf of a sheaf F of abelian groups, then the quotient sheaf Q is the sheaf associated to the presheaf U ↦ F(U)/F′(U); in other words, the quotient sheaf fits into an exact sequence of sheaves of abelian groups
0 → F′ → F → Q → 0;
(this is also called a sheaf extension.)
Let F, G be sheaves of abelian groups. The set Hom(F, G) of morphisms of sheaves from F to G forms an abelian group (by the abelian group structure of G). The sheaf hom of F and G, denoted by
ℋom(F, G),
is the sheaf of abelian groups whose value on an open set U is Hom(F|_U, G|_U), where F|_U is the sheaf on U given by (F|_U)(V) = F(V) (note sheafification is not needed here). The direct sum of F and G is the sheaf given by (F ⊕ G)(U) = F(U) ⊕ G(U), and the tensor product of F and G is the sheaf associated to the presheaf U ↦ F(U) ⊗ G(U).
All of these operations extend to sheaves of modules over a sheaf of rings A; the above is the special case when A is the constant sheaf ℤ.
Basic functoriality
Since the data of a (pre-)sheaf depends on the open subsets of the base space, sheaves on different topological spaces are unrelated to each other in the sense that there are no morphisms between them. However, given a continuous map between two topological spaces, pushforward and pullback relate sheaves on to those on and vice versa.
Direct image
The pushforward (also known as direct image) of a sheaf F on X is the sheaf f_*F on Y defined by
(f_*F)(V) = F(f^{-1}(V)).
Here V is an open subset of Y, so that its preimage f^{-1}(V) is open in X by the continuity of f.
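As an illustrative special case (a standard fact, not spelled out in the source at this point), pushing forward along the constant map to a point recovers the global sections:

```latex
% For the constant map p : X -> {*}, the only nonempty open set of the target
% is the point itself, so the direct image is just the global sections of F.
\[
  (p_{*}\mathcal{F})(\{\ast\}) = \mathcal{F}\bigl(p^{-1}(\{\ast\})\bigr)
    = \mathcal{F}(X) = \Gamma(X, \mathcal{F}).
\]
```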
This construction recovers the skyscraper sheaf S_x mentioned above:
S_x = i_*(S),
where i : {x} → X is the inclusion, and S is regarded as a sheaf on the singleton {x} by S({x}) = S.
For a map f between locally compact spaces, the direct image with compact support f_! is a subsheaf of the direct image. By definition, (f_!F)(V) consists of those s ∈ F(f^{-1}(V)) whose support maps properly to V. If f is proper itself, then f_!F = f_*F, but in general they disagree.
Inverse image
The pullback or inverse image goes the other way: it produces a sheaf on X, denoted f^{-1}G, out of a sheaf G on Y. If f is the inclusion of an open subset, then the inverse image is just a restriction, i.e., it is given by (f^{-1}G)(V) = G(V) for an open V in X. A sheaf F (on some space X) is called locally constant if X can be covered by open subsets on which the restriction of F is constant. On a wide range of topological spaces X, such sheaves are equivalent to representations of the fundamental group π₁(X).
For general maps f, the definition of f^{-1}G is more involved; it is detailed at inverse image functor. The stalk is an essential special case of the pullback in view of a natural identification, where i is the inclusion of the point x as above:
(i^{-1}G)({x}) = G_x.
More generally, stalks satisfy (f^{-1}G)_x = G_{f(x)}.
Extension by zero
For the inclusion j : U → X of an open subset U, the extension by zero j_!F (pronounced "j lower shriek of F") of a sheaf F of abelian groups on U is the sheafification of the presheaf defined by
V ↦ F(V) if V ⊆ U, and V ↦ 0 otherwise.
For a sheaf G on X, this construction is in a sense complementary to i_*, where i is the inclusion of the complement Z of U:
(j_!(G|_U))_x = G_x for x in U, and the stalk is zero otherwise, while
(i_*(G|_Z))_x = G_x for x in Z, and equals zero otherwise.
More generally, if A is a locally closed subset, then there exists an open subset U of X containing A such that A is closed in U. Let i : A → U and j : U → X be the natural inclusions. Then the extension by zero of a sheaf F of abelian groups on A is defined by j_!(i_*F).
Due to its nice behavior on stalks, the extension by zero functor is useful for reducing sheaf-theoretic questions on to ones on the strata of a stratification, i.e., a decomposition of into smaller, locally closed subsets.
Complements
Sheaves in more general categories
In addition to (pre-)sheaves as introduced above, where is merely a set, it is in many cases important to keep track of additional structure on these sections. For example, the sections of the sheaf of continuous functions naturally form a real vector space, and restriction is a linear map between these vector spaces.
Presheaves with values in an arbitrary category C are defined by first considering the category of open sets on X to be the posetal category O(X) whose objects are the open sets of X and whose morphisms are inclusions. Then a C-valued presheaf on X is the same as a contravariant functor from O(X) to C. Morphisms in this category of functors, also known as natural transformations, are the same as the morphisms defined above, as can be seen by unraveling the definitions.
If the target category C admits all limits, a C-valued presheaf F is a sheaf if the following diagram is an equalizer for every open cover
U = ⋃_i U_i
of any open set U:
F(U) → ∏_i F(U_i) ⇉ ∏_{i,j} F(U_i ∩ U_j).
Here the first map is the product of the restriction maps
res_{U_i, U} : F(U) → F(U_i)
and the pair of arrows the products of the two sets of restrictions
res_{U_i ∩ U_j, U_i} : F(U_i) → F(U_i ∩ U_j)
and
res_{U_i ∩ U_j, U_j} : F(U_j) → F(U_i ∩ U_j).
If C is an abelian category, this condition can also be rephrased by requiring that there is an exact sequence
0 → F(U) → ∏_i F(U_i) → ∏_{i,j} F(U_i ∩ U_j).
A particular case of this sheaf condition occurs for U being the empty set, and the index set also being empty. In this case, the sheaf condition requires F(∅) to be the terminal object in C.
Ringed spaces and sheaves of modules
In several geometrical disciplines, including algebraic geometry and differential geometry, the spaces come along with a natural sheaf of rings, often called the structure sheaf and denoted by . Such a pair is called a ringed space. Many types of spaces can be defined as certain types of ringed spaces. Commonly, all the stalks of the structure sheaf are local rings, in which case the pair is called a locally ringed space.
For example, an -dimensional manifold is a locally ringed space whose structure sheaf consists of -functions on the open subsets of . The property of being a locally ringed space translates into the fact that such a function, which is nonzero at a point , is also non-zero on a sufficiently small open neighborhood of . Some authors actually define real (or complex) manifolds to be locally ringed spaces that are locally isomorphic to the pair consisting of an open subset of (respectively ) together with the sheaf of (respectively holomorphic) functions. Similarly, schemes, the foundational notion of spaces in algebraic geometry, are locally ringed spaces that are locally isomorphic to the spectrum of a ring.
Given a ringed space (X, O_X), a sheaf of modules is a sheaf M such that on every open set U of X, M(U) is an O_X(U)-module, and for every inclusion of open sets V ⊆ U, the restriction map M(U) → M(V) is compatible with the restriction map O(U) → O(V): the restriction of fs is the restriction of f times the restriction of s for any f in O(U) and s in M(U).
Most important geometric objects are sheaves of modules. For example, there is a one-to-one correspondence between vector bundles and locally free sheaves of -modules. This paradigm applies to real vector bundles, complex vector bundles, or vector bundles in algebraic geometry (where consists of smooth functions, holomorphic functions, or regular functions, respectively). Sheaves of solutions to differential equations are -modules, that is, modules over the sheaf of differential operators. On any topological space, modules over the constant sheaf are the same as sheaves of abelian groups in the sense above.
There is a different inverse image functor for sheaves of modules over sheaves of rings. This functor is usually denoted and it is distinct from . See inverse image functor.
Finiteness conditions for sheaves of modules
Finiteness conditions for modules over commutative rings give rise to similar finiteness conditions for sheaves of modules: is called finitely generated (respectively finitely presented) if, for every point of , there exists an open neighborhood of , a natural number (possibly depending on ), and a surjective morphism of sheaves (respectively, in addition a natural number , and an exact sequence .) Paralleling the notion of a coherent module, is called a coherent sheaf if it is of finite type and if, for every open set and every morphism of sheaves (not necessarily surjective), the kernel of is of finite type. is coherent if it is coherent as a module over itself. Like for modules, coherence is in general a strictly stronger condition than finite presentation. The Oka coherence theorem states that the sheaf of holomorphic functions on a complex manifold is coherent.
The étalé space of a sheaf
In the examples above it was noted that some sheaves occur naturally as sheaves of sections. In fact, all sheaves of sets can be represented as sheaves of sections of a topological space called the étalé space, from the French word étalé , meaning roughly "spread out". If is a sheaf over , then the étalé space (sometimes called the étale space) of is a topological space together with a local homeomorphism such that the sheaf of sections of is . The space is usually very strange, and even if the sheaf arises from a natural topological situation, may not have any clear topological interpretation. For example, if is the sheaf of sections of a continuous function , then if and only if is a local homeomorphism.
The étalé space is constructed from the stalks of over . As a set, it is their disjoint union and is the obvious map that takes the value on the stalk of over . The topology of is defined as follows. For each element and each , we get a germ of at , denoted or . These germs determine points of . For any and , the union of these points (for all ) is declared to be open in . Notice that each stalk has the discrete topology as subspace topology. Two morphisms between sheaves determine a continuous map of the corresponding étalé spaces that is compatible with the projection maps (in the sense that every germ is mapped to a germ over the same point). This makes the construction into a functor.
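To make the topology just described concrete, the basic open sets of the étalé space can be written out explicitly; the symbols $F$ for the sheaf, $E$ for its étalé space, $U$ for an open set, and $s$ for a section are notational choices of this sketch rather than notation fixed by the article. For a section $s \in F(U)$, let $[s, U] = \{\, s_x : x \in U \,\} \subseteq E$, where $s_x$ is the germ of $s$ at $x$. The sets $[s, U]$, taken over all open $U$ and all $s \in F(U)$, form a basis for the topology of $E$, and the projection $\pi : E \to X$, $s_x \mapsto x$, maps each $[s, U]$ homeomorphically onto $U$; this is why $\pi$ is a local homeomorphism.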
The construction above determines an equivalence of categories between the category of sheaves of sets on and the category of étalé spaces over . The construction of an étalé space can also be applied to a presheaf, in which case the sheaf of sections of the étalé space recovers the sheaf associated to the given presheaf.
This construction makes all sheaves into representable functors on certain categories of topological spaces. As above, let be a sheaf on , let be its étalé space, and let be the natural projection. Consider the overcategory of topological spaces over , that is, the category of topological spaces together with fixed continuous maps to . Every object of this category is a continuous map , and a morphism from to is a continuous map that commutes with the two maps to . There is a functorsending an object to . For example, if is the inclusion of an open subset, thenand for the inclusion of a point , thenis the stalk of at . There is a natural isomorphism,which shows that (for the étalé space) represents the functor .
is constructed so that the projection map is a covering map. In algebraic geometry, the natural analog of a covering map is called an étale morphism. Despite its similarity to "étalé", the word étale has a different meaning in French. It is possible to turn into a scheme and into a morphism of schemes in such a way that retains the same universal property, but is not in general an étale morphism because it is not quasi-finite. It is, however, formally étale.
The definition of sheaves by étalé spaces is older than the definition given earlier in the article. It is still common in some areas of mathematics such as mathematical analysis.
Sheaf cohomology
In contexts where the open set is fixed, and the sheaf is regarded as a variable, the set is also often denoted
As was noted above, this functor does not preserve epimorphisms. Instead, an epimorphism of sheaves $\mathcal{F} \to \mathcal{G}$ is a map with the following property: for any section $g \in \mathcal{G}(U)$ there is a covering $U = \bigcup_i U_i$ by open subsets such that the restrictions $g|_{U_i}$ are in the image of $\mathcal{F}(U_i) \to \mathcal{G}(U_i)$. However, $g$ itself need not be in the image of $\mathcal{F}(U) \to \mathcal{G}(U)$. A concrete example of this phenomenon is the exponential map
between the sheaf of holomorphic functions and non-zero holomorphic functions. This map is an epimorphism, which amounts to saying that any non-zero holomorphic function (on some open subset in , say), admits a complex logarithm locally, i.e., after restricting to appropriate open subsets. However, need not have a logarithm globally.
Sheaf cohomology captures this phenomenon. More precisely, for an exact sequence of sheaves of abelian groups
(i.e., an epimorphism whose kernel is ), there is a long exact sequenceBy means of this sequence, the first cohomology group is a measure for the non-surjectivity of the map between sections of and .
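Written out in the standard way (with the letters $A$, $B$, $C$ for the sheaves of the short exact sequence and $U$ an open set; this notation is an assumption of this sketch, not taken from the text), the long exact sequence reads
$0 \to A(U) \to B(U) \to C(U) \to H^1(U, A) \to H^1(U, B) \to H^1(U, C) \to H^2(U, A) \to \cdots$
In the exponential example, a nowhere-vanishing holomorphic function $f$ on $U$ admits a global logarithm exactly when its image under the connecting map in $H^1(U, \underline{\mathbb{Z}})$ vanishes, for the exponential sequence $0 \to \underline{\mathbb{Z}} \to \mathcal{O} \to \mathcal{O}^{\times} \to 0$ (with the exponential map rescaled by $2\pi i$).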
There are several different ways of constructing sheaf cohomology. Grothendieck introduced them by defining sheaf cohomology as the derived functor of the global sections functor. This method is theoretically satisfactory, but, being based on injective resolutions, it is of little use in concrete computations. Godement resolutions are another general, but practically inaccessible, approach.
Computing sheaf cohomology
Especially in the context of sheaves on manifolds, sheaf cohomology can often be computed using resolutions by soft sheaves, fine sheaves, and flabby sheaves (also known as flasque sheaves from the French flasque meaning flabby). For example, a partition of unity argument shows that the sheaf of smooth functions on a manifold is soft. The higher cohomology groups for vanish for soft sheaves, which gives a way of computing cohomology of other sheaves. For example, the de Rham complex is a resolution of the constant sheaf on any smooth manifold, so the sheaf cohomology of is equal to its de Rham cohomology.
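For a smooth $n$-dimensional manifold $M$, the resolution in question can be sketched as follows, writing $\Omega^k$ for the sheaf of smooth $k$-forms and $\underline{\mathbb{R}}$ for the constant sheaf (standard notation assumed here, not fixed by the text above):
$0 \to \underline{\mathbb{R}} \to \Omega^0 \xrightarrow{d} \Omega^1 \xrightarrow{d} \cdots \xrightarrow{d} \Omega^n \to 0.$
Exactness is the Poincaré lemma, and each $\Omega^k$ is soft because it admits partitions of unity; taking global sections therefore computes $H^k(M, \underline{\mathbb{R}})$ and yields the isomorphism with de Rham cohomology, $H^k(M, \underline{\mathbb{R}}) \cong H^k_{\mathrm{dR}}(M)$.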
A different approach is by Čech cohomology. Čech cohomology was the first cohomology theory developed for sheaves and it is well-suited to concrete calculations, such as computing the coherent sheaf cohomology of complex projective space . It relates sections on open subsets of the space to cohomology classes on the space. In most cases, Čech cohomology computes the same cohomology groups as the derived functor cohomology. However, for some pathological spaces, Čech cohomology will give the correct first cohomology group but incorrect higher cohomology groups. To get around this, Jean-Louis Verdier developed hypercoverings. Hypercoverings not only give the correct higher cohomology groups but also allow the open subsets mentioned above to be replaced by certain morphisms from another space. This flexibility is necessary in some applications, such as the construction of Pierre Deligne's mixed Hodge structures.
Many other coherent sheaf cohomology groups are found using an embedding of a space into a space with known cohomology, such as projective space or some weighted projective space. In this way, the known sheaf cohomology groups on these ambient spaces can be related to the pushed-forward sheaves. For example, the coherent sheaf cohomology of projective plane curves is easily computed this way. A major theorem in this area is the Hodge decomposition, found using a spectral sequence associated to sheaf cohomology groups and proved by Deligne. Essentially, the $E_1$-page, whose terms are the sheaf cohomology groups of a smooth projective variety, degenerates, meaning $E_1 = E_\infty$. This gives the canonical Hodge structure on the cohomology groups. It was later found that these cohomology groups can be easily and explicitly computed using Griffiths residues; see Jacobian ideal. These kinds of theorems lead to one of the deepest theorems about the cohomology of algebraic varieties, the decomposition theorem, paving the way for mixed Hodge modules.
Another clean approach to the computation of some cohomology groups is the Borel–Bott–Weil theorem, which identifies the cohomology groups of some line bundles on flag manifolds with irreducible representations of Lie groups. This theorem can be used, for example, to easily compute the cohomology groups of all line bundles on projective space and grassmann manifolds.
In many cases there is a duality theory for sheaves that generalizes Poincaré duality. See Grothendieck duality and Verdier duality.
Derived categories of sheaves
The derived category of the category of sheaves of, say, abelian groups on some space X, denoted here as , is the conceptual haven for sheaf cohomology, by virtue of the following relation:
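The relation can be stated as follows, with $\underline{\mathbb{Z}}$ denoting the constant sheaf and $[n]$ the shift functor (standard conventions adopted here as an assumption of this sketch):
$H^n(X, \mathcal{F}) = \operatorname{Hom}_{D(X)}(\underline{\mathbb{Z}}, \mathcal{F}[n]).$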
The adjunction between , which is the left adjoint of (already on the level of sheaves of abelian groups) gives rise to an adjunction
(for ),
where is the derived functor. This latter functor encompasses the notion of sheaf cohomology since for .
Like , the direct image with compact support can also be derived. By virtue of the following isomorphism parametrizes the cohomology with compact support of the fibers of :
This isomorphism is an example of a base change theorem. There is another adjunction
Unlike all the functors considered above, the twisted (or exceptional) inverse image functor is in general only defined on the level of derived categories, i.e., the functor is not obtained as the derived functor of some functor between abelian categories. If and X is a smooth orientable manifold of dimension n, then
This computation, and the compatibility of the functors with duality (see Verdier duality) can be used to obtain a high-brow explanation of Poincaré duality. In the context of quasi-coherent sheaves on schemes, there is a similar duality known as coherent duality.
Perverse sheaves are certain objects in , i.e., complexes of sheaves (but not in general sheaves proper). They are an important tool to study the geometry of singularities.
Derived categories of coherent sheaves and the Grothendieck group
Another important application of derived categories of sheaves is the derived category of coherent sheaves on a scheme, denoted . This was used by Grothendieck in his development of intersection theory using derived categories and K-theory: the intersection product of subschemes is represented in K-theory in terms of the coherent sheaves defined by the modules given by their structure sheaves.
Sites and topoi
André Weil's Weil conjectures stated that there was a cohomology theory for algebraic varieties over finite fields that would give an analogue of the Riemann hypothesis. The cohomology of a complex manifold can be defined as the sheaf cohomology of the locally constant sheaf in the Euclidean topology, which suggests defining a Weil cohomology theory in positive characteristic as the sheaf cohomology of a constant sheaf. But the only classical topology on such a variety is the Zariski topology, and the Zariski topology has very few open sets, so few that the cohomology of any Zariski-constant sheaf on an irreducible variety vanishes (except in degree zero). Alexandre Grothendieck solved this problem by introducing Grothendieck topologies, which axiomatize the notion of covering. Grothendieck's insight was that the definition of a sheaf depends only on the open sets of a topological space, not on the individual points. Once he had axiomatized the notion of covering, open sets could be replaced by other objects. A presheaf takes each one of these objects to data, just as before, and a sheaf is a presheaf that satisfies the gluing axiom with respect to our new notion of covering. This allowed Grothendieck to define étale cohomology and ℓ-adic cohomology, which eventually were used to prove the Weil conjectures.
A category with a Grothendieck topology is called a site. A category of sheaves on a site is called a topos or a Grothendieck topos. The notion of a topos was later abstracted by William Lawvere and Miles Tierney to define an elementary topos, which has connections to mathematical logic.
History
The first origins of sheaf theory are hard to pin down – they may be co-extensive with the idea of analytic continuation. It took about 15 years for a recognisable, free-standing theory of sheaves to emerge from the foundational work on cohomology.
1936 Eduard Čech introduces the nerve construction, for associating a simplicial complex to an open covering.
1938 Hassler Whitney gives a 'modern' definition of cohomology, summarizing the work since J. W. Alexander and Kolmogorov first defined cochains.
1943 Norman Steenrod publishes on homology with local coefficients.
1945 Jean Leray publishes work carried out as a prisoner of war, motivated by proving fixed-point theorems for application to PDE theory; it is the start of sheaf theory and spectral sequences.
1947 Henri Cartan reproves the de Rham theorem by sheaf methods, in correspondence with André Weil (see De Rham–Weil theorem). Leray gives a sheaf definition in his courses via closed sets (the later carapaces).
1948 The Cartan seminar writes up sheaf theory for the first time.
1950 The "second edition" sheaf theory from the Cartan seminar: the sheaf space (espace étalé) definition is used, with stalkwise structure. Supports are introduced, and cohomology with supports. Continuous mappings give rise to spectral sequences. At the same time Kiyoshi Oka introduces an idea (adjacent to that) of a sheaf of ideals, in several complex variables.
1951 The Cartan seminar proves theorems A and B, based on Oka's work.
1953 The finiteness theorem for coherent sheaves in the analytic theory is proved by Cartan and Jean-Pierre Serre, as is Serre duality.
1954 Serre's paper Faisceaux algébriques cohérents (published in 1955) introduces sheaves into algebraic geometry. These ideas are immediately exploited by Friedrich Hirzebruch, who writes a major 1956 book on topological methods.
1955 Alexander Grothendieck in lectures in Kansas defines abelian category and presheaf, and by using injective resolutions allows direct use of sheaf cohomology on all topological spaces, as derived functors.
1956 Oscar Zariski's report Algebraic sheaf theory
1957 Grothendieck's Tohoku paper rewrites homological algebra; he proves Grothendieck duality (i.e., Serre duality for possibly singular algebraic varieties).
1957 onwards: Grothendieck extends sheaf theory in line with the needs of algebraic geometry, introducing: schemes and general sheaves on them, local cohomology, derived categories (with Verdier), and Grothendieck topologies. There emerges also his influential schematic idea of 'six operations' in homological algebra.
1958 Roger Godement's book on sheaf theory is published. At around this time Mikio Sato proposes his hyperfunctions, which will turn out to have sheaf-theoretic nature.
At this point sheaves had become a mainstream part of mathematics, with use by no means restricted to algebraic topology. It was later discovered that the logic in categories of sheaves is intuitionistic logic (this observation is now often referred to as Kripke–Joyal semantics, but probably should be attributed to a number of authors).
| Mathematics | Algebra | null |
245552 | https://en.wikipedia.org/wiki/Gaussian%20function | Gaussian function | In mathematics, a Gaussian function, often simply referred to as a Gaussian, is a function of the base form
and with parametric extension
for arbitrary real constants , and non-zero . It is named after the mathematician Carl Friedrich Gauss. The graph of a Gaussian is a characteristic symmetric "bell curve" shape. The parameter is the height of the curve's peak, is the position of the center of the peak, and (the standard deviation, sometimes called the Gaussian RMS width) controls the width of the "bell".
Gaussian functions are often used to represent the probability density function of a normally distributed random variable with expected value and variance . In this case, the Gaussian is of the form
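With the standard symbols $\mu$ for the expected value and $\sigma^2$ for the variance (symbol names assumed here rather than quoted from the text above), this form is
$g(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right).$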
Gaussian functions are widely used in statistics to describe the normal distributions, in signal processing to define Gaussian filters, in image processing where two-dimensional Gaussians are used for Gaussian blurs, and in mathematics to solve heat equations and diffusion equations and to define the Weierstrass transform. They are also abundantly used in quantum chemistry to form basis sets.
Properties
Gaussian functions arise by composing the exponential function with a concave quadratic function:where
(Note: in , not to be confused with )
The Gaussian functions are thus those functions whose logarithm is a concave quadratic function.
The parameter is related to the full width at half maximum (FWHM) of the peak according to
The function may then be expressed in terms of the FWHM, represented by :
Alternatively, the parameter can be interpreted by saying that the two inflection points of the function occur at .
The full width at tenth of maximum (FWTM) for a Gaussian could be of interest and is
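Stated explicitly, with $c$ playing the role of the standard deviation as in the parametric form above (these are standard identities, not quotations from the original text):
$\mathrm{FWHM} = 2\sqrt{2\ln 2}\;c \approx 2.3548\,c, \qquad \mathrm{FWTM} = 2\sqrt{2\ln 10}\;c \approx 4.2919\,c.$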
Gaussian functions are analytic, and their limit as x → ±∞ is 0 (for the above case of b = 0).
Gaussian functions are among those functions that are elementary but lack elementary antiderivatives; the integral of the Gaussian function is the error function:
Nonetheless, their improper integrals over the whole real line can be evaluated exactly, using the Gaussian integral
and one obtains
This integral is 1 if and only if $a = \tfrac{1}{c\sqrt{2\pi}}$ (the normalizing constant), and in this case the Gaussian is the probability density function of a normally distributed random variable with expected value $\mu = b$ and variance $\sigma^2 = c^2$:
These Gaussians are plotted in the accompanying figure.
Gaussian functions centered at zero minimize the Fourier uncertainty principle.
The product of two Gaussian functions is a Gaussian, and the convolution of two Gaussian functions is also a Gaussian, with variance being the sum of the original variances: . The product of two Gaussian probability density functions (PDFs), though, is not in general a Gaussian PDF.
Taking the Fourier transform (unitary, angular-frequency convention) of a Gaussian function with parameters , and yields another Gaussian function, with parameters , and . So in particular the Gaussian functions with and are kept fixed by the Fourier transform (they are eigenfunctions of the Fourier transform with eigenvalue 1).
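As a sanity check of this statement, the transform of a centered Gaussian can be computed directly under the unitary, angular-frequency convention (the parameters $a$ and $c$ follow the parametric form used earlier; the result depends on this choice of convention):
$\hat{f}(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} a\,e^{-x^2/(2c^2)}\,e^{-i\omega x}\,dx = a\,c\,e^{-c^2\omega^2/2},$
which is again a Gaussian, now with amplitude $ac$ and width parameter $1/c$; for $c = 1$ (and $b = 0$) the original function is reproduced, which is the eigenfunction statement above.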
A physical realization is that of the diffraction pattern: for example, a photographic slide whose transmittance has a Gaussian variation is also a Gaussian function.
The fact that the Gaussian function is an eigenfunction of the continuous Fourier transform allows us to derive the following interesting identity from the Poisson summation formula:
Integral of a Gaussian function
The integral of an arbitrary Gaussian function is
An alternative form is
where f must be strictly positive for the integral to converge.
Relation to standard Gaussian integral
The integral
for some real constants a, b and c > 0 can be calculated by putting it into the form of a Gaussian integral. First, the constant a can simply be factored out of the integral. Next, the variable of integration is changed from x to :
and then to :
Then, using the Gaussian integral identity
we have
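Carried through explicitly with the substitutions just described (constants $a$, $b$, $c > 0$ as above), the computation gives
$\int_{-\infty}^{\infty} a\,e^{-(x-b)^2/(2c^2)}\,dx = a\int_{-\infty}^{\infty} e^{-y^2/(2c^2)}\,dy = a\sqrt{2}\,c \int_{-\infty}^{\infty} e^{-z^2}\,dz = a\,c\,\sqrt{2\pi},$
where $y = x - b$, $z = y/(\sqrt{2}\,c)$, and the last step uses the Gaussian integral $\int_{-\infty}^{\infty} e^{-z^2}\,dz = \sqrt{\pi}$.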
Two-dimensional Gaussian function
Base form:
In two dimensions, the power to which e is raised in the Gaussian function is any negative-definite quadratic form. Consequently, the level sets of the Gaussian will always be ellipses.
A particular example of a two-dimensional Gaussian function is
Here the coefficient A is the amplitude, (x0, y0) is the center, and σx, σy are the x and y spreads of the blob. The figure on the right was created using A = 1, x0 = 0, y0 = 0, σx = σy = 1.
The volume under the Gaussian function is given by
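For the particular example above, the volume can be evaluated by separating the two one-dimensional integrals (a standard computation, using the symbols of that example):
$V = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} A\,\exp\!\left(-\frac{(x-x_0)^2}{2\sigma_X^2}-\frac{(y-y_0)^2}{2\sigma_Y^2}\right) dx\,dy = 2\pi A\,\sigma_X\,\sigma_Y.$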
In general, a two-dimensional elliptical Gaussian function is expressed as
where the matrix
is positive-definite.
Using this formulation, the figure on the right can be created using , , , .
Meaning of parameters for the general equation
For the general form of the equation the coefficient A is the height of the peak and is the center of the blob.
If we set the coefficients a, b, and c as functions of a rotation angle θ (as in the Octave code below), then we rotate the blob by a positive, counter-clockwise angle θ (for a negative angle, i.e. a clockwise rotation, invert the signs in the b coefficient).
To get back the coefficients , and from , and use
Example rotations of Gaussian blobs can be seen in the following examples:
Using the following Octave code, one can easily see the effect of changing the parameters:
A = 1;
x0 = 0; y0 = 0;
sigma_X = 1;
sigma_Y = 2;
[X, Y] = meshgrid(-5:.1:5, -5:.1:5);
for theta = 0:pi/100:pi
    % coefficients of the rotated quadratic form for the current angle theta
    a = cos(theta)^2 / (2 * sigma_X^2) + sin(theta)^2 / (2 * sigma_Y^2);
    b = sin(2 * theta) / (4 * sigma_X^2) - sin(2 * theta) / (4 * sigma_Y^2);
    c = sin(theta)^2 / (2 * sigma_X^2) + cos(theta)^2 / (2 * sigma_Y^2);

    % rotated elliptical Gaussian evaluated on the grid
    Z = A * exp(-(a * (X - x0).^2 + 2 * b * (X - x0) .* (Y - y0) + c * (Y - y0).^2));

    % redraw the surface; press any key to advance to the next angle
    surf(X, Y, Z);
    shading interp;
    view(-36, 36)
    waitforbuttonpress
end
Such functions are often used in image processing and in computational models of visual system function—see the articles on scale space and affine shape adaptation.
Also see multivariate normal distribution.
Higher-order Gaussian or super-Gaussian function
A more general formulation of a Gaussian function with a flat-top and Gaussian fall-off can be taken by raising the content of the exponent to a power :
This function is known as a super-Gaussian function and is often used for Gaussian beam formulation. This function may also be expressed in terms of the full width at half maximum (FWHM), represented by :
In a two-dimensional formulation, a Gaussian function along and can be combined with potentially different and to form a rectangular Gaussian distribution:
or an elliptical Gaussian distribution:
Multi-dimensional Gaussian function
In an -dimensional space a Gaussian function can be defined as
where is a column of coordinates, is a positive-definite matrix, and denotes matrix transposition.
The integral of this Gaussian function over the whole -dimensional space is given as
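Writing the exponent with a symmetric positive-definite matrix $C$ (the letters $x$, $C$, $n$ follow the description above; the absence of a factor $\tfrac{1}{2}$ in the exponent is a convention assumed by this sketch), the integral evaluates to
$\int_{\mathbb{R}^n} \exp\!\left(-x^{\mathsf{T}} C\,x\right)\,dx = \sqrt{\frac{\pi^n}{\det C}}.$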
It can be easily calculated by diagonalizing the matrix and changing the integration variables to the eigenvectors of .
More generally a shifted Gaussian function is defined as
where is the shift vector and the matrix can be assumed to be symmetric, , and positive-definite. The following integrals with this function can be calculated with the same technique:
where
Estimation of parameters
A number of fields such as stellar photometry, Gaussian beam characterization, and emission/absorption line spectroscopy work with sampled Gaussian functions and need to accurately estimate the height, position, and width parameters of the function. There are three unknown parameters for a 1D Gaussian function (a, b, c) and five for a 2D Gaussian function .
The most common method for estimating the Gaussian parameters is to take the logarithm of the data and fit a parabola to the resulting data set. While this provides a simple curve fitting procedure, the resulting algorithm may be biased by excessively weighting small data values, which can produce large errors in the profile estimate. One can partially compensate for this problem through weighted least squares estimation, reducing the weight of small data values, but this too can be biased by allowing the tail of the Gaussian to dominate the fit. In order to remove the bias, one can instead use an iteratively reweighted least squares procedure, in which the weights are updated at each iteration.
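As a minimal Octave sketch of the logarithm-and-parabola idea (not the iteratively reweighted procedure), the following fragment recovers a, b and c from noiseless synthetic data; the variable names and test values are illustrative choices, not part of the article:
a = 2; b = 0.5; c = 1.5;                  % "true" parameters of the synthetic profile
x = linspace(-4, 5, 50);                  % sample locations
y = a * exp(-(x - b).^2 / (2 * c^2));     % noiseless Gaussian samples
p = polyfit(x, log(y), 2);                % fit ln(y) = p(1)*x^2 + p(2)*x + p(3)
c_est = sqrt(-1 / (2 * p(1)));            % width from the quadratic coefficient
b_est = -p(2) / (2 * p(1));               % center from the linear coefficient
a_est = exp(p(3) - p(2)^2 / (4 * p(1)));  % height from the constant term
With noisy data the same polyfit call over-weights the small samples in the tails after taking logarithms (and non-positive samples must be excluded), which is exactly the bias that the weighted and iteratively reweighted variants described above are meant to correct.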
It is also possible to perform non-linear regression directly on the data, without involving the logarithmic data transformation; for more options, see probability distribution fitting.
Parameter precision
Once one has an algorithm for estimating the Gaussian function parameters, it is also important to know how precise those estimates are. Any least squares estimation algorithm can provide numerical estimates for the variance of each parameter (i.e., the variance of the estimated height, position, and width of the function). One can also use Cramér–Rao bound theory to obtain an analytical expression for the lower bound on the parameter variances, given the following assumptions about the data:
The noise in the measured profile is either i.i.d. Gaussian, or the noise is Poisson-distributed.
The spacing between each sampling (i.e. the distance between pixels measuring the data) is uniform.
The peak is "well-sampled", so that less than 10% of the area or volume under the peak (area if a 1D Gaussian, volume if a 2D Gaussian) lies outside the measurement region.
The width of the peak is much larger than the distance between sample locations (i.e. the detector pixels must be at least 5 times smaller than the Gaussian FWHM).
When these assumptions are satisfied, the following covariance matrix K applies for the 1D profile parameters , , and under i.i.d. Gaussian noise and under Poisson noise:
where is the width of the pixels used to sample the function, is the quantum efficiency of the detector, and indicates the standard deviation of the measurement noise. Thus, the individual variances for the parameters are, in the Gaussian noise case,
and in the Poisson noise case,
For the 2D profile parameters giving the amplitude , position , and width of the profile, the following covariance matrices apply:
where the individual parameter variances are given by the diagonal elements of the covariance matrix.
Discrete Gaussian
One may ask for a discrete analog to the Gaussian;
this is necessary in discrete applications, particularly digital signal processing. A simple answer is to sample the continuous Gaussian, yielding the sampled Gaussian kernel. However, this discrete function does not have the discrete analogs of the properties of the continuous function, and can lead to undesired effects, as described in the article scale space implementation.
An alternative approach is to use the discrete Gaussian kernel:
where denotes the modified Bessel functions of integer order.
This is the discrete analog of the continuous Gaussian in that it is the solution to the discrete diffusion equation (discrete space, continuous time), just as the continuous Gaussian is the solution to the continuous diffusion equation.
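A small Octave illustration of this kernel, using the expression T(n, t) = exp(−t)·I_n(t) with I_n the modified Bessel function of integer order; the chosen scale t and index range are illustrative assumptions:
t = 2;                          % scale (variance-like) parameter of the kernel
n = -6:6;                       % spatial indices
T = exp(-t) * besseli(n, t);    % discrete Gaussian kernel T(n, t) = e^(-t) * I_n(t)
disp(sum(T))                    % approaches 1 as the index range grows
Convolving a sampled signal with T performs the smoothing step of a discrete scale-space representation at scale t.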
Applications
Gaussian functions appear in many contexts in the natural sciences, the social sciences, mathematics, and engineering. Some examples include:
In statistics and probability theory, Gaussian functions appear as the density function of the normal distribution, which is a limiting probability distribution of complicated sums, according to the central limit theorem.
Gaussian functions are the Green's function for the (homogeneous and isotropic) diffusion equation (and to the heat equation, which is the same thing), a partial differential equation that describes the time evolution of a mass-density under diffusion. Specifically, if the mass-density at time t=0 is given by a Dirac delta, which essentially means that the mass is initially concentrated in a single point, then the mass-distribution at time t will be given by a Gaussian function, with the parameter a being linearly related to 1/√t and c being linearly related to √t; this time-varying Gaussian is described by the heat kernel. More generally, if the initial mass-density is φ(x), then the mass-density at later times is obtained by taking the convolution of φ with a Gaussian function. The convolution of a function with a Gaussian is also known as a Weierstrass transform.
A Gaussian function is the wave function of the ground state of the quantum harmonic oscillator.
The molecular orbitals used in computational chemistry can be linear combinations of Gaussian functions called Gaussian orbitals (see also basis set (chemistry)).
Mathematically, the derivatives of the Gaussian function can be represented using Hermite functions. For unit variance, the n-th derivative of the Gaussian is the Gaussian function itself multiplied by the n-th Hermite polynomial, up to scale.
Consequently, Gaussian functions are also associated with the vacuum state in quantum field theory.
Gaussian beams are used in optical systems, microwave systems and lasers.
In scale space representation, Gaussian functions are used as smoothing kernels for generating multi-scale representations in computer vision and image processing. Specifically, derivatives of Gaussians (Hermite functions) are used as a basis for defining a large number of types of visual operations.
Gaussian functions are used to define some types of artificial neural networks.
In fluorescence microscopy a 2D Gaussian function is used to approximate the Airy disk, describing the intensity distribution produced by a point source.
In signal processing they serve to define Gaussian filters, such as in image processing where 2D Gaussians are used for Gaussian blurs. In digital signal processing, one uses a discrete Gaussian kernel, which may be approximated by the Binomial coefficient or sampling a Gaussian.
In geostatistics they have been used for understanding the variability between the patterns of a complex training image. They are used with kernel methods to cluster the patterns in the feature space.
| Mathematics | Specific functions | null |
245586 | https://en.wikipedia.org/wiki/Protoplanet | Protoplanet | A protoplanet is a large planetary embryo that originated within a protoplanetary disk and has undergone internal melting to produce a differentiated interior. Protoplanets are thought to form out of kilometer-sized planetesimals that gravitationally perturb each other's orbits and collide, gradually coalescing into the dominant planets.
The planetesimal hypothesis
A planetesimal is an object formed from dust, rock, and other materials, measuring from meters to hundreds of kilometers in size.
According to the Chamberlin–Moulton planetesimal hypothesis and the theories of Viktor Safronov, a protoplanetary disk of materials such as gas and dust would orbit a star early in the formation of a planetary system. The action of gravity on such materials form larger and larger chunks until some reach the size of planetesimals.
It is thought that the collisions of planetesimals created a few hundred larger planetary embryos. Over the course of hundreds of millions of years, they collided with one another. The exact sequence whereby planetary embryos collided to assemble the planets is not known, but it is thought that initial collisions would have replaced the first "generation" of embryos with a second generation consisting of fewer but larger embryos. These in their turn would have collided to create a third generation of fewer but even larger embryos. Eventually, only a handful of embryos were left, which collided to complete the assembly of the planets proper.
Early protoplanets had more radioactive elements, the quantity of which has been reduced over time due to radioactive decay. Heating due to radioactivity, impact, and gravitational pressure melted parts of protoplanets as they grew toward being planets. In melted zones their heavier elements sank to the center, whereas lighter elements rose to the surface. Such a process is known as planetary differentiation. The composition of some meteorites show that differentiation took place in some asteroids.
Evidence in the Solar System - surviving remnant protoplanets
In the case of the Solar System, it is thought that the collisions of planetesimals created a few hundred planetary embryos. Such embryos were similar to Ceres and Pluto with masses of about 10^22 to 10^23 kg and were a few thousand kilometers in diameter.
According to the giant impact hypothesis, the Moon formed from a colossal impact of a hypothetical protoplanet called Theia with Earth, early in the Solar System's history.
In the inner Solar System, the three protoplanets to survive more-or-less intact are the asteroids Ceres, Pallas, and Vesta. Psyche is likely the survivor of a violent hit-and-run with another object that stripped off the outer, rocky layers of a protoplanet. The asteroid Metis may also have a similar origin history to that of Psyche. The asteroid Lutetia also has characteristics that resemble a protoplanet. Kuiper-belt dwarf planets have also been referred to as protoplanets. Because iron meteorites have been found on Earth, it is deemed likely that there once were other metal-cored protoplanets in the asteroid belt that since have been disrupted and that are the source of these meteorites.
Extrasolar protoplanets - observed protoplanets
In February 2013 astronomers made the first direct observation of a candidate protoplanet forming in a disk of gas and dust around a distant star, HD 100546. Subsequent observations suggest that several protoplanets may be present in the gas disk.
Another protoplanet, AB Aur b, may be in the earliest observed stage of formation for a gas giant. It is located in the gas disk of the star AB Aurigae. AB Aur b is among the largest exoplanets identified, and has a distant orbit, three times as far from its star as Neptune is from the Sun. Observations of AB Aur b may challenge conventional thinking about how planets are formed. It was viewed by the Subaru Telescope and the Hubble Space Telescope.
Rings, gaps, spirals, dust concentrations and shadows in protoplanetary disks could be caused by protoplanets. These structures are not completely understood and are therefore not seen as a proof for the presence of a protoplanet. One new emerging way to study the effect of protoplanets on the disk are molecular line observations of protoplanetary disks in the form of gas velocity maps. HD 97048 b is the first protoplanet detected by disk kinematics in the form of a kink in the gas velocity map.
Unconfirmed protoplanets
The confident detection of protoplanets is difficult. Protoplanets usually exist in gas-rich protoplanetary disks. Such disks can produce over-densities by a process called disk fragmentation. Such fragments can be small enough to be unresolved and mimic the appearance of a protoplanet. A number of unconfirmed protoplanet candidates are known and some detections were later questioned.
| Physical sciences | Planetary science | Astronomy |
245794 | https://en.wikipedia.org/wiki/Salyut%201 | Salyut 1 | Salyut 1 (), also known as DOS-1 (Durable Orbital Station 1), was the world's first space station. It was launched into low Earth orbit by the Soviet Union on April 19, 1971. The Salyut program subsequently achieved five more successful launches of seven additional stations. The program's final module, Zvezda (DOS-8), became the core of the Russian Orbital Segment of the International Space Station and remains in orbit today.
Salyut 1 was adapted from an Almaz airframe and comprised five components: a transfer compartment, a main compartment, two auxiliary compartments, and the Orion 1 Space Observatory. It was visited by the Soyuz 10 and Soyuz 11 missions. While the crew of Soyuz 10 was able to soft dock, the hard-docking failed, forcing the crew to abort their mission. The Soyuz 11 crew successfully docked, spending 23 days aboard Salyut 1 conducting experiments. The Soyuz 11 crew died of asphyxia caused by a valve failure just before reentry, making them the only humans to have died above the Kármán line.
Following the deaths, the mission of Salyut 1 was terminated, and the station reentered Earth's atmosphere, burning up on October 11, 1971.
Background
Salyut 1 originated as a modification of the Soviet military's Almaz space station program that was then in development. After the landing of Apollo 11 on the Moon in July 1969, the Soviets began shifting the primary emphasis of their crewed space program to orbiting space stations, with a possible lunar landing later in the 1970s if the N-1 rocket became flight-worthy. Leonid Brezhnev canceled the lunar landing program in 1974 after four catastrophic N-1 launch failures. One other motivation for the space station program was a desire to one-up the US Skylab program then in development. The basic structure of Salyut 1 was adapted from the Almaz with a few modifications and would form the basis of all Soviet space stations through Mir.
Civilian Soviet space stations were internally referred to as DOS (Durable Orbital Station), although publicly, the Salyut name was used for the first six DOS stations (Mir was internally known as DOS-7). Several military experiments were nonetheless carried on Salyut 1, including the OD-4 optical visual ranger, the Orion ultraviolet instrument for characterizing rocket exhaust plumes, and the highly classified Svinets radiometer.
Construction and operational history
Construction of Salyut 1 began in early 1970, and after nearly a year it was shipped to the Baikonur Cosmodrome. Some remaining assembly had yet to be done, and this was completed at the launch center. The Salyut programme was managed by Kerim Kerimov, chairman of the state commission for Soyuz missions.
Launch was planned for April 12, 1971 to coincide with the 10th anniversary of Yuri Gagarin's flight on Vostok 1, but technical problems delayed it until April 19. The first crew launched later in the Soyuz 10 mission, but they ran into troubles while docking and were unable to enter the station; the Soyuz 10 mission was aborted and the crew returned safely to Earth. A replacement crew launched on Soyuz 11 and remained on board for 23 days. This was the first time in the history of spaceflight that a space station had been occupied, and a new record was set for time spent in space. This success was, however, short-lived when the crew was killed during reentry, as a pressure-equalization valve in the Soyuz 11 reentry capsule had opened prematurely, causing the crew to asphyxiate. They were the first and, as of 2025, only humans to have died in space. After this accident, all missions were suspended while the Soyuz spacecraft was redesigned. The station was intentionally destroyed by de-orbiting after six months in orbit, because it ran out of fuel before a redesigned Soyuz spacecraft could be launched to it.
Structure
At launch, the announced purpose of Salyut was to test the elements of the systems of a space station and to conduct scientific research and experiments. The craft was described as being in length, in maximum diameter, and in interior space with an on-orbit dry mass of . Of its several compartments, three were pressurized (100 m3 total), and two could be entered by the crew.
Transfer compartment
The transfer compartment was equipped with the only docking port of Salyut 1, which allowed one Soyuz 7K-OKS spacecraft to dock. It was the first use of the Soviet SSVP docking system that allowed internal crew transfer, a system that is in use today. The docking cone had a front diameter and an aft diameter.
Main compartment
The second and main compartment was about in diameter. Televised views showed enough space for eight large chairs (seven at work consoles), several control panels, and 20 portholes (some obstructed by instruments). The interior design used various colors (light and dark gray, apple green, light yellow) for supporting the cosmonauts’ orientation in weightlessness.
Auxiliary compartments
The third pressurized compartment contained the control and communications equipment, the power supply, the life support system, and other auxiliary equipment. The fourth and final unpressurized compartment was about 2 m in diameter and contained the engine installations and associated control equipment. Salyut had buffer chemical batteries, reserve supplies of oxygen and water, and regeneration systems. Externally mounted were two double sets of solar cell panels that extended like wings from the smaller compartments at each end, the heat regulation system's radiators, and orientation and control devices.
Salyut 1 was modified from one of the Almaz airframes. The unpressurized service module was the modified service module of a Soyuz craft.
Orion 1 Space Observatory
The astrophysical Orion 1 Space Observatory designed by Grigor Gurzadyan of Byurakan Observatory in Armenia, was installed in Salyut 1. Ultraviolet spectrograms of stars were obtained with the help of a mirror telescope of the Mersenne system and a spectrograph of the Wadsworth system using film sensitive to the far ultraviolet. The dispersion of the spectrograph was 32 Å/mm (3.2 nm/mm), while the resolution of the spectrograms derived was about 5 Å at 2600 Å (0.5 nm at 260 nm). Slitless spectrograms were obtained of the stars Vega and Beta Centauri between 2000 and 3800 Å (200 and 380 nm). The telescope was operated by crew member Viktor Patsayev, who became the first man to operate a telescope outside of the Earth's atmosphere.
Specifications
Length:
Maximum diameter:
Habitable volume:
Mass at launch:
Launch vehicle: Proton-K (Serial No. 254-01)
Span across solar arrays: ~
Area of solar arrays:
Number of solar arrays: 4
Resupply carriers: Salyut 1-type Soyuz (redesigned Soyuz missions were intended to take place, but this did not occur)
Number of docking ports: 1
Total crewed missions: 2
Total long-duration crewed missions: 1
Visiting spacecraft and crews
The only spacecraft that ever docked to Salyut 1 were Soyuz 10 and Soyuz 11. Soyuz 10 failed to hard-dock with Salyut 1 and had to abort the mission. Soyuz 11 conducted experiments in Salyut 1 for 23 days, however the cosmonauts later died during reentry in their Soyuz capsule.
Soyuz 10
Soyuz 10 was launched on April 22, 1971, carrying cosmonauts Vladimir Shatalov, Aleksei Yeliseyev, and Nikolai Rukavishnikov. After taking 24 hours for rendezvous and approach, Soyuz 10 soft-docked with Salyut 1 on April 24 at 01:47 UTC and remained for 5.5 h. Hard-docking was unsuccessful due to technical malfunctions. The crew could not enter the station and had to return to Earth on April 24.
Soyuz 11
Soyuz 11 was launched on June 6, 1971 at 04:55:09 UTC and took 3 hours and 19 minutes on June 7 to complete docking. The cosmonauts Georgy Dobrovolsky, Viktor Patsayev, and Vladislav Volkov entered Salyut 1, and their mission was announced as:
Checking the design, units, onboard systems, and equipment of the orbital piloted station.
Testing the station's manual and autonomous procedures for orientation and navigation, as well as the control systems for maneuvering the space complex in orbit.
Studying Earth's surface geology, geography, meteorology, and snow and ice cover.
Studying physical characteristics, processes, and phenomena in the atmosphere and outer space in various regions of the electromagnetic spectrum.
Conducting medico-biological studies to determine the feasibility of having cosmonauts in the station perform various tasks, and studying the influence of space flight on the human organism.
On June 29, after 23 days and flying 362 orbits, the mission was cut short due to problems aboard the station, including an electrical fire. The crew transferred back to Soyuz 11 and reentered the Earth's atmosphere. The capsule parachuted to a soft landing at 23:16:52 UTC in Kazakhstan, but the recovery team opened the hatch to find all three crew members dead in their couches. An inquest found that a pressure relief valve had malfunctioned during reentry leading to a loss of cabin atmosphere. The crew were not wearing pressure suits, and it was decreed by the TsKBEM (the team of engineers who investigated the tragedy) that all further Soyuz missions would require the use of them.
Reentry of Salyut 1
Salyut 1 was moved to a higher orbit in July–August 1971 to ensure that it would not be destroyed prematurely through orbital decay. In the meantime, Soyuz capsules were being substantially redesigned to allow pressure suits to be worn during launch, docking maneuvers, and re-entry. The Soyuz redesign effort took too long however, and by September, Salyut 1 was running out of fuel. It was decided to conclude the station's mission and on October 11, the main engines were fired for a deorbit maneuver. After 175 days, the world's first space station burned up over the Pacific Ocean.
Pravda (October 26, 1971) reported that 75% of Salyut 1's studies were carried out by optical means and 20% by radio-technical means, while the remainder involved magnetometrical, gravitational, or other measurements. Synoptic readings were taken in both the visible and invisible parts of the electromagnetic spectrum.
| Technology | Crewed vehicles | null |
245926 | https://en.wikipedia.org/wiki/Self-driving%20car | Self-driving car | A self-driving car, also known as an autonomous car (AC), driverless car, robotaxi, robotic car or robo-car, is a car that is capable of operating with reduced or no human input. Self-driving cars are responsible for all driving activities, such as perceiving the environment, monitoring important systems, and controlling the vehicle, which includes navigating from origin to destination.
To date, no system has achieved full autonomy (SAE Level 5). In December 2020, Waymo was the first to offer rides in self-driving taxis to the public in limited geographic areas (SAE Level 4), and offers services in Arizona (Phoenix) and California (San Francisco and Los Angeles). In June 2024, after a Waymo self-driving taxi crashed into a utility pole in Phoenix, Arizona, all 672 of its Jaguar I-Pace vehicles were recalled and given a software update, as they were found to be susceptible to crashing into pole-like items. In July 2021, DeepRoute.ai started offering self-driving taxi rides in Shenzhen, China. Starting in February 2022, Cruise offered self-driving taxi service in San Francisco, but suspended service in 2023. In 2021, Honda was the first manufacturer to sell an SAE Level 3 car, followed by Mercedes-Benz in 2023.
History
Experiments have been conducted on advanced driver assistance systems (ADAS) since at least the 1920s. The first ADAS system was cruise control, which was invented in 1948 by Ralph Teetor.
Trials began in the 1950s. The first semi-autonomous car was developed in 1977, by Japan's Tsukuba Mechanical Engineering Laboratory. It required specially marked streets that were interpreted by two cameras on the vehicle and an analog computer. The vehicle reached speeds of with the support of an elevated rail.
Carnegie Mellon University's Navlab and ALV semi-autonomous projects launched in the 1980s, funded by the United States' Defense Advanced Research Projects Agency (DARPA) starting in 1984 and Mercedes-Benz and Bundeswehr University Munich's EUREKA Prometheus Project in 1987. By 1985, ALV had reached , on two-lane roads. Obstacle avoidance came in 1986, and day and night off-road driving by 1987. In 1995 Navlab 5 completed the first autonomous US coast-to-coast journey. Traveling from Pittsburgh, Pennsylvania to San Diego, California, 98.2% of the trip was autonomous. It completed the trip at an average speed of . Until the second DARPA Grand Challenge in 2005, automated vehicle research in the United States was primarily funded by DARPA, the US Army, and the US Navy, yielding incremental advances in speeds, driving competence, controls, and sensor systems.
The US allocated US$650 million in 1991 for research on the National Automated Highway System, which demonstrated automated driving, combining highway-embedded automation with vehicle technology, and cooperative networking between the vehicles and highway infrastructure. The programme concluded with a successful demonstration in 1997. Partly funded by the National Automated Highway System and DARPA, Navlab drove across the US in 1995, 98% autonomously. In 2015, Delphi piloted a Delphi technology-based Audi through 15 states, 99% autonomously. In 2015, Nevada, Florida, California, Virginia, Michigan, and Washington DC allowed autonomous car testing on public roads.
From 2016 to 2018, the European Commission funded development for connected and automated driving through Coordination Actions CARTRE and SCOUT programs. The Strategic Transport Research and Innovation Agenda (STRIA) Roadmap for Connected and Automated Transport was published in 2019.
In November 2017, Waymo announced testing of autonomous cars without a safety driver. However, an employee was in the car to handle emergencies.
In March 2018, Elaine Herzberg became the first reported pedestrian killed by a self-driving car, an Uber test vehicle with a human backup driver; prosecutors did not charge Uber, while the human driver was sentenced to probation.
In December 2018, Waymo was the first to commercialize a robotaxi service, in Phoenix, Arizona. In October 2020, Waymo launched a robotaxi service in a (geofenced) part of the area. The cars were monitored in real-time, and remote engineers intervened to handle exceptional conditions.
In March 2019, ahead of Roborace, Robocar set the Guinness World Record as the world's fastest autonomous car. Robocar reached 282.42 km/h (175.49 mph).
In March 2021, Honda began leasing in Japan a limited edition of 100 Legend Hybrid EX sedans equipped with Level 3 "Traffic Jam Pilot" driving technology, which legally allowed drivers to take their eyes off the road when the car was travelling under .
In December 2020, Waymo became the first service provider to offer driverless taxi rides to the general public, in a part of Phoenix, Arizona. Nuro began autonomous commercial delivery operations in California in 2021. DeepRoute.ai launched robotaxi service in Shenzhen in July 2021. In December 2021, Mercedes-Benz received approval for a Level 3 car. In February 2022, Cruise became the second service provider to offer driverless taxi rides to the general public, in San Francisco. In December 2022, several manufacturers scaled back plans for self-driving technology, including Ford and Volkswagen. In 2023, Cruise suspended its robotaxi service. Nuro was approved for Level 4 in Palo Alto in August, 2023.
Vehicles operating at Level 3 and above remain an insignificant market factor; as of early 2024, Honda leases a Level 3 car in Japan, and Mercedes sells two Level 3 cars in Germany, California and Nevada.
Definitions
Organizations such as SAE have proposed terminology standards. However, most terms have no standard definition and are employed variously by vendors and others. Proposals to adopt aviation automation terminology for cars have not prevailed.
Names such as AutonoDrive, PilotAssist, Full-Self Driving or DrivePilot are used even though the products offer an assortment of features that may not match the names. Despite offering a system it called Full Self-Driving, Tesla stated that its system did not autonomously handle all driving tasks. In the United Kingdom, a fully self-driving car is defined as a car so registered, rather than one that supports a specific feature set. The Association of British Insurers claimed that the usage of the word autonomous in marketing was dangerous because car ads make motorists think "autonomous" and "autopilot" imply that the driver can rely on the car to control itself, even though they do not.
Automated driving system
SAE identified 6 levels for driving automation from level 0 to level 5. An ADS is an SAE J3016 level 3 or higher system.
Advanced driver assistance system
An ADAS is a system that automates specific driving features, such as Forward Collision Warning (FCW), Automatic Emergency Braking (AEB), Lane Departure Warning (LDW), Lane Keeping Assistance (LKA) or Blind Spot Warning (BSW). An ADAS requires a human driver to handle tasks that the ADAS does not support.
Autonomy versus automation
Autonomy implies that an automation system is under the control of the vehicle rather than a driver. Automation is function-specific, handling issues such as speed control, but leaves broader decision-making to the driver.
Euro NCAP defined autonomous as "the system acts independently of the driver to avoid or mitigate the accident".
In Europe, the words automated and autonomous can be used together. For instance, Regulation (EU) 2019/2144 supplied:
"automated vehicle" means a vehicle that can move without continuous driver supervision, but that driver intervention is still expected or required in the operational design domains (ODD);
"fully automated vehicle" means a vehicle that can move entirely without driver supervision;
Cooperative system
A remote driver is a driver that operates a vehicle at a distance, using a video and data connection.
According to SAE J3016,
Operational design domain
Vendors have taken a variety of approaches to the self-driving problem. Tesla's approach is to allow their "full self-driving" (FSD) system to be used in all ODDs as a Level 2 (hands-on, eyes-on) ADAS. Waymo picked specific ODDs (city streets in Phoenix and San Francisco) for their Level 4 robotaxi service. Mercedes Benz offers Level 3 service in Las Vegas in highway traffic jams at speeds up to . Mobileye's SuperVision system offers hands-off/eyes-on driving on all road types at speeds up to . GM's hands-free Super Cruise operates on specific roads in specific conditions, stopping or returning control to the driver when ODD changes. In 2024 the company announced plans to expand road coverage from 400,000 miles to 750,000 miles. Ford's BlueCruise hands-off system operates on 130,000 miles of US divided highways.
Self-driving
The Union of Concerned Scientists defined self-driving as "cars or trucks in which human drivers are never required to take control to safely operate the vehicle. Also known as autonomous or 'driverless' cars, they combine sensors and software to control, navigate, and drive the vehicle."
The British Automated and Electric Vehicles Act 2018 law defines a vehicle as "driving itself" if the vehicle is "not being controlled, and does not need to be monitored, by an individual".
Another British government definition stated, "Self-driving vehicles are vehicles that can safely and lawfully drive themselves".
British definitions
In British English, the word automated alone has several meanings, such as in the sentence: "Thatcham also found that the automated lane keeping systems could only meet two out of the twelve principles required to guarantee safety, going on to say they cannot, therefore, be classed as 'automated driving', preferring 'assisted driving'". The first occurrence of the "automated" word refers to an Unece automated system, while the second refers to the British legal definition of an automated vehicle. British law interprets the meaning of "automated vehicle" based on the interpretation section related to a vehicle "driving itself" and an insured vehicle.
In November 2023 the British Government introduced the Automated Vehicles Bill. It proposed definitions for related terms:
Self-driving: "A vehicle “satisfies the self-driving test” if it is designed or adapted with the intention that a feature of the vehicle will allow it to travel autonomously, and it is capable of doing so, by means of that feature, safely and legally."
Autonomy: A vehicle travels "autonomously" if it is controlled by the vehicle, and neither the vehicle nor its surroundings are monitored by a person who can intervene.
Control: control of vehicle motion.
Safe: a vehicle that conforms to an acceptably safe standard.
Legal: a vehicle that offers an acceptably low risk of committing a traffic infraction.
SAE classification
A six-level classification system – ranging from fully manual to fully automated – was published in 2014 by SAE International as J3016, Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems; the details are revised occasionally. This classification is based on the role of the driver, rather than the vehicle's capabilities, although these are related. After SAE updated its classification in 2016, (J3016_201609), the National Highway Traffic Safety Administration (NHTSA) adopted the SAE standard. The classification is a topic of debate, with various revisions proposed.
Classifications
A "driving mode", aka driving scenario, combines an ODD with matched driving requirements (e.g., expressway merging, traffic jam). Cars may switch levels in accord with the driving mode.
Above Level 1, level differences are related to how responsibility for safe movement is divided/shared between ADAS and driver rather than specific driving features.
SAE Automation Levels have been criticized for their technological focus. It has been argued that the structure of the levels suggests that automation increases linearly and that more automation is better, which may not be the case. SAE Levels also do not account for changes that may be required to infrastructure and road user behavior.
Mobileye System
Mobileye CEO Amnon Shashua and CTO Shai Shalev-Shwartz proposed an alternative taxonomy for autonomous driving systems, claiming that a more consumer-friendly approach was needed. Its categories reflect the amount of driver engagement that is required. Some vehicle makers have informally adopted some of the terminology involved, while not formally committing to it.
Eyes-on/hands-on
The first level, hands-on/eyes-on, implies that the driver is fully engaged in operating the vehicle, but is supervised by the system, which intervenes according to the features it supports (e.g., adaptive cruise control, automatic emergency braking). The driver is entirely responsible, with hands on the wheel, and eyes on the road.
Eyes-on/hands-off
Eyes-on/hands-off allows the driver to let go of the wheel. The system drives, the driver monitors and remains prepared to resume control as needed.
Eyes-off/hands-off
Eyes-off/hands-off means that the driver can stop monitoring the system, leaving the system in full control. Eyes-off requires that errors be neither reproducible (i.e., systematic, rather than triggered by exotic transitory conditions) nor frequent, that speeds be contextually appropriate (e.g., 80 mph on limited-access roads), and that the system handle typical maneuvers (e.g., getting cut off by another vehicle). The automation level could vary according to the road (e.g., eyes-off on freeways, eyes-on on side streets).
No driver
The highest level does not require a human driver in the car: monitoring is done either remotely (telepresence) or not at all.
Safety
A critical requirement for the higher two levels is that the vehicle be able to conduct a Minimum Risk Maneuver and stop safely out of traffic without driver intervention.
Technology
Architecture
The perception system processes visual and audio data from outside and inside the car to create a local model of the vehicle, the road, traffic, traffic controls and other observable objects, and their relative motion. The control system then takes actions to move the vehicle, considering the local model, road map, and driving regulations.
Several classifications have been proposed to describe ADAS technology. One proposal is to adopt these categories: navigation, path planning, perception, and car control.
Navigation
Navigation involves the use of maps to define a path between origin and destination. Hybrid navigation is the use of multiple navigation systems. Some systems use basic maps, relying on perception to deal with anomalies. Such a map captures which roads lead to which others and whether a road is a freeway, a highway, one-way, etc. Other systems require highly detailed maps, including lane maps, obstacles, traffic controls, etc.
Perception
ACs need to be able to perceive the world around them. Supporting technologies include combinations of cameras, LiDAR, radar, audio, ultrasound, GPS, and inertial measurement. Deep neural networks are used to analyse inputs from these sensors to detect and identify objects and their trajectories. Some systems use Bayesian simultaneous localization and mapping (SLAM) algorithms. Another technique is detection and tracking of other moving objects (DATMO), used to handle potential obstacles. Other systems use roadside real-time locating system (RTLS) technologies to aid localization. Tesla's "vision only" system uses eight cameras, without LiDAR or radar, to create its bird's-eye view of the environment.
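As an illustration of the tracking step in such a perception pipeline, the following minimal Python sketch applies a constant-velocity Kalman filter to noisy position detections of a single object. It is not any vendor's implementation; the motion model and noise parameters are assumptions chosen only for the example.

# Minimal sketch (not any vendor's implementation): tracking one detected
# object with a constant-velocity Kalman filter, the kind of estimator that
# DATMO-style pipelines use to smooth noisy detections into a trajectory.
import numpy as np

dt = 0.1  # seconds between sensor frames (assumed)

# State: [x, y, vx, vy]; measurements: [x, y] from a detector (camera/LiDAR).
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # constant-velocity motion model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only position is observed
Q = np.eye(4) * 0.05                        # process noise (assumed)
R = np.eye(2) * 0.5                         # measurement noise (assumed)

x = np.zeros(4)          # initial state estimate
P = np.eye(4) * 10.0     # initial uncertainty

def step(x, P, z):
    """One predict/update cycle for a new position measurement z = [x, y]."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Feed in noisy detections of an object moving diagonally frame by frame.
for t in range(20):
    z = np.array([t * 1.0, t * 0.5]) + np.random.randn(2) * 0.3
    x, P = step(x, P, z)
print("estimated position:", x[:2], "estimated velocity:", x[2:])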
Path planning
Path planning finds a sequence of segments that a vehicle can use to move from origin to destination. Techniques used for path planning include graph-based search and variational optimization techniques. Graph-based techniques can make harder decisions, such as how to pass another vehicle or obstacle. Variational optimization techniques require more stringent restrictions on the vehicle's path to prevent collisions. The large-scale path of the vehicle can be determined by using a Voronoi diagram, an occupancy grid map, or a driving corridor algorithm. The latter allows the vehicle to locate and drive within open space that is bounded by lanes or barriers.
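A minimal illustration of graph-based search is A* over a small occupancy grid, as in the Python sketch below. It is illustrative only: real planners must also respect vehicle kinematics, lane geometry, and moving obstacles, and the grid and costs here are assumed.

# Minimal sketch of graph-based path planning: A* search over a small
# occupancy grid (1 = blocked cell).
import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    def h(cell):  # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, cost, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set,
                               (cost + 1 + h((nr, nc)), cost + 1,
                                (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))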
Maps
Maps are necessary for navigation. Map sophistication varies from simple graphs that show which roads connect to each other, with details such as one-way vs two-way, to those that are highly detailed, with information about lanes, traffic controls, roadworks, and more. Researchers at the MITComputer Science and Artificial Intelligence Laboratory (CSAIL) developed a system called MapLite, which allows self-driving cars to drive with simple maps. The system combines the GPS position of the vehicle, a "sparse topological map" such as OpenStreetMap (which has only 2D road features), with sensors that observe road conditions. One issue with highly-detailed maps is updating them as the world changes. Vehicles that can operate with less-detailed maps do not require frequent updates or geo-fencing.
Sensors
Sensors are necessary for the vehicle to properly respond to the driving environment. Sensor types include cameras, LiDAR, ultrasound, and radar. Control systems typically combine data from multiple sensors. Multiple sensors can provide a more complete view of the surroundings and can be used to cross-check each other to correct errors. For example, radar can image a scene, such as a nighttime snowstorm, that defeats cameras and LiDAR, albeit at reduced precision. After experimenting with radar and ultrasound, Tesla adopted a vision-only approach, asserting that humans drive using only vision and that cars should be able to do the same, while citing the lower cost of cameras versus other sensor types. By contrast, Waymo makes use of the higher resolution of LiDAR sensors and cites the declining cost of that technology.
Drive by wire
Drive by wire is the use of electrical or electro-mechanical systems for performing vehicle functions such as steering or speed control that are traditionally achieved by mechanical linkages.
Driver monitoring
Driver monitoring is used to assess the driver's attention and alertness. Techniques in use include eye monitoring, and requiring the driver to maintain torque on the steering wheel. It attempts to understand driver status and identify dangerous driving behaviors.
Vehicle communication
Vehicles can potentially benefit from communicating with others to share information about traffic and road obstacles, to receive map and software updates, etc.
ISO/TC 22 specifies in-vehicle transport information and control systems, while ISO/TC 204 specifies information, communication and control systems in surface transport. International standards have been developed for ADAS functions, connectivity, human interaction, in-vehicle systems, management/engineering, dynamic map and positioning, privacy and security.
Rather than communicating with each other, vehicles can communicate with road-based systems to receive similar information.
Software update
Software controls the vehicle, and can provide entertainment and other services. Over-the-air updates can deliver bug fixes and additional features over the internet. Software updates are one way to accomplish recalls that in the past required a visit to a service center. In March 2021, the UNECE regulation on software update and software update management systems was published.
Safety model
A safety model is software that attempts to formalize rules that ensure that ACs operate safely.
IEEE is attempting to forge a standard for safety models as "IEEE P2846: A Formal Model for Safety Considerations in Automated Vehicle Decision Making". In 2022, a research group at the National Institute of Informatics (NII, Japan) enhanced Mobileye's Responsibility-Sensitive Safety (RSS) model as "Goal-Aware RSS" to enable RSS rules to deal with complex scenarios via program logic.
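One widely published rule from the RSS model is its minimum safe longitudinal following distance. The Python sketch below shows the general form of that rule; the response time and acceleration limits used here are illustrative assumptions, not values prescribed by any standard.

# Sketch of the RSS minimum safe longitudinal following-distance rule.
# Parameter values below are illustrative assumptions, not standardized ones.
def rss_min_gap(v_rear, v_front, rho=0.7,
                a_accel_max=3.0, a_brake_min=4.0, a_brake_max=8.0):
    """Minimum gap (m) the rear vehicle must keep so that it can still stop
    safely if the front vehicle brakes as hard as possible.
    v_rear, v_front: speeds in m/s; rho: response time in s;
    a_*: accelerations in m/s^2."""
    v_rear_after = v_rear + rho * a_accel_max   # rear car may accelerate during rho
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_rear_after ** 2 / (2 * a_brake_min)   # rear car then brakes gently
         - v_front ** 2 / (2 * a_brake_max))       # front car brakes hard
    return max(d, 0.0)

# Example: both cars at 25 m/s (90 km/h) -> required gap in metres.
print(round(rss_min_gap(25.0, 25.0), 1))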
Notification
The US has standardized the use of turquoise lights to inform other drivers that a vehicle is driving autonomously. It will be used in the 2026 Mercedes-Benz EQS and S-Class sedans with Drive Pilot, an SAE Level 3 driving system.
As of 2023, the turquoise light had not been standardized by the People's Republic of China or the UNECE.
Artificial Intelligence
Artificial intelligence (AI) plays a pivotal role in the development and operation of autonomous vehicles (AVs), enabling them to perceive their surroundings, make decisions, and navigate safely without human intervention. AI algorithms allow AVs to interpret sensory data from onboard sensors, such as cameras, LiDAR, radar, and GPS, to understand their environment and to improve their capability and overall safety over time.
Challenges
Obstacles
The primary obstacle to ACs is the advanced software and mapping required to make them work safely across the wide variety of conditions that drivers experience. In addition to handling day/night driving in good and bad weather on roads of arbitrary quality, ACs must cope with other vehicles, road obstacles, poor/missing traffic controls, flawed maps, and handle endless edge cases, such as following the instructions of a police officer managing traffic at a crash site.
Other obstacles include cost, liability, consumer reluctance, ethical dilemmas, security, privacy, and legal/regulatory framework. Further, AVs could automate the work of professional drivers, eliminating many jobs, which could slow acceptance.
Concerns
Deceptive marketing
Tesla calls its Level 2 ADAS "Full Self-Driving (FSD) Beta". US Senators Richard Blumenthal and Edward Markey called on the Federal Trade Commission (FTC) to investigate this marketing in 2021. In December 2021 in Japan, Mercedes-Benz was punished by the Consumer Affairs Agency for misleading product descriptions.
Mercedes-Benz was criticized for a misleading US commercial advertising E-Class models. At the time, Mercedes-Benz rejected the claims and stopped its "self-driving car" ad campaign. In August 2022, the California Department of Motor Vehicles (DMV) accused Tesla of deceptive marketing practices.
Under the Automated Vehicles Bill (AVB), self-driving car makers could face prison for misleading adverts in the United Kingdom.
Security
In the 2020s, concerns over ACs' vulnerability to cyberattacks and data theft emerged.
Espionage
In 2018 and 2019, former Apple engineers were charged with stealing information related to Apple's self-driving car project. In 2021, the United States Department of Justice (DOJ) accused Chinese security officials of coordinating a hacking campaign to steal information from government entities, including research related to autonomous vehicles. China has prepared the "Provisions on Management of Automotive Data Security (Trial)" to protect its own data.
Cellular Vehicle-to-Everything technologies are based on 5G wireless networks. The US Congress has considered the possibility that imported Chinese AC technology could facilitate espionage.
Testing of Chinese automated cars in the US has raised concern over which US data are collected by Chinese vehicles and stored in China, and over any links to the Chinese Communist Party.
Driver communications
ACs complicate the need for drivers to communicate with each other, e.g., to decide which car enters an intersection first. In an AC without a driver, traditional means such as hand signals do not work (no driver, no hands).
Behavior prediction
ACs must be able to predict the behavior of moving vehicles, pedestrians, etc., in real time in order to proceed safely. The task becomes more challenging the further into the future the prediction extends, requiring rapid revisions to the estimate to cope with unpredicted behavior. One approach is to wholly recompute the position and trajectory of each object many times per second. Another is to cache the results of an earlier prediction for use in the next one to reduce computational complexity.
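A minimal sketch of the cached approach, using simple constant-velocity extrapolation, is shown below. Real systems rely on learned models of intent and interaction; all names, horizons, and tolerances here are illustrative assumptions.

# Minimal sketch: short-horizon behavior prediction by constant-velocity
# extrapolation, with a cache so an earlier forecast is reused while the
# object stays close to it. Illustrative only.
import math

def predict(x, y, vx, vy, t0, horizon=3.0, dt=0.5):
    """Constant-velocity waypoints as a list of (t, x, y)."""
    n = int(horizon / dt)
    return [(t0 + k * dt, x + vx * k * dt, y + vy * k * dt) for k in range(1, n + 1)]

_cache = {}  # obj_id -> list of (t, x, y) waypoints from the last prediction

def predict_cached(obj_id, x, y, vx, vy, t_now, tol=1.0):
    """Reuse the previous forecast if the object is still close to it."""
    waypoints = _cache.get(obj_id, [])
    # Find the cached waypoint closest in time to now, if any.
    near = [w for w in waypoints if abs(w[0] - t_now) < 1e-6]
    if near and math.hypot(near[0][1] - x, near[0][2] - y) < tol:
        remaining = [w for w in waypoints if w[0] > t_now]
        if remaining:
            return remaining          # forecast still valid: reuse the tail
    fresh = predict(x, y, vx, vy, t_now)   # otherwise recompute from scratch
    _cache[obj_id] = fresh
    return fresh

print(predict_cached("ped-7", x=0.0, y=0.0, vx=1.2, vy=0.0, t_now=0.0))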
Handover
The ADAS has to be able to safely accept control from and return control to the driver.
Trust
Consumers will avoid ACs unless they trust them as safe. Robotaxis operating in San Francisco received pushback over perceived safety risks. Automatic elevators were invented in 1900, but did not become common until operator strikes occurred and trust was built through advertising and features such as an emergency stop button. With repeated use of autonomous driving functions, drivers' behavior and trust in autonomous vehicles gradually improved and stabilized. At the same time, this improved the performance and reliability of the vehicles in complex conditions, thereby increasing public trust.
Economics
Autonomous vehicles also present various political and economic implications. The transportation sector holds significant sway in many political and economic landscapes. For instance, many US states generate substantial annual revenue from transportation fees and taxes. The advent of self-driving cars could profoundly affect the economy by potentially altering state tax revenue streams. Furthermore, the transition to autonomous vehicles might disrupt employment patterns and labor markets, particularly in industries heavily reliant on driving professions. Data from the U.S. Bureau of Labor Statistics indicates that in 2019, the sector employed over two million individuals as tractor-trailer truck drivers. Additionally, taxi and delivery drivers represented approximately 370,400 positions, and bus drivers constituted a workforce of over 680,000. Collectively, this amounts to a conceivable displacement of nearly 2.9 million jobs, surpassing the job losses experienced in the 2008 Great Recession.
Equity and Inclusion
The prominence of certain demographic groups within the tech industry inevitably shapes the trajectory of autonomous vehicle (AV) development, potentially perpetuating existing inequalities. Others, without a political agenda, believe that the advancement of technology has nothing to do with promoting inequalities among certain groups, and dismiss this presumption.
Ethical issues
Pedestrian Detection
Research from Georgia Tech revealed that autonomous vehicle detection systems were generally five percent less effective at recognizing darker-skinned individuals. This accuracy gap persisted despite adjustments for environmental variables like lighting and visual obstructions.
Rationale for liability
Standards for liability have yet to be adopted to address crashes and other incidents. Liability could rest with the vehicle occupant, its owner, the vehicle manufacturer, or even the ADAS technology supplier, possibly depending on the circumstances of the crash. Additionally, the infusion of artificial intelligence technology in autonomous vehicles adds layers of complexity to ownership and ethical dynamics. Given that AI systems are inherently self-learning, the question arises whether accountability should rest with the vehicle owner, the manufacturer, or the AI developer.
Trolley problem
The trolley problem is a thought experiment in ethics. Adapted for ACs, it considers an AC carrying one passenger that is confronted by a pedestrian who steps into its path. The ADAS notionally has to choose between killing the pedestrian or swerving into a wall, killing the passenger. Possible frameworks include deontology (formal rules) and utilitarianism (harm reduction).
One public opinion survey reported that harm reduction was preferred, except that passengers wanted the vehicle to prefer them, while pedestrians took the opposite view. Utilitarian regulations were unpopular. Additionally, cultural viewpoints exert substantial influence on shaping responses to these ethical quandaries. Another study found that cultural biases impact preferences in prioritizing the rescue of certain individuals over others in car accident scenarios.
Privacy
Some ACs require an internet connection to function, opening the possibility that a hacker might gain access to private information such as destinations, routes, camera recordings, media preferences, and/or behavioral patterns, although this is true of any internet-connected device.
Road infrastructure
ACs make use of road infrastructure (e.g., traffic signs, turn lanes) and may require modifications to that infrastructure to fully achieve their safety and other goals. In March 2023, the Japanese government unveiled a plan to set up a dedicated highway lane for ACs. In April 2023, JR East announced its plan to raise the self-driving level of its Kesennuma Line bus rapid transit (BRT), in a rural area, from the current Level 2 to Level 4 at 60 km/h.
Testing
Approaches
ACs can be tested via digital simulations, in a controlled test environment, and/or on public roads. Road testing typically requires some form of permit or a commitment to adhere to acceptable operating principles. For example, New York requires a test driver to be in the vehicle, prepared to override the ADAS as necessary.
2010s and disengagements
In California, self-driving car manufacturers are required to submit annual reports describing how often their vehicles disengaged from autonomous mode. This is one measure of system robustness (ideally, the system should never disengage).
In 2017, Waymo reported 63 disengagements over of testing, an average distance of between disengagements, the highest (best) among companies reporting such figures. Waymo also logged more autonomous miles than other companies. Their 2017 rate of 0.18 disengagements per was an improvement over the 0.2 disengagements per in 2016, and 0.8 in 2015. In March 2017, Uber reported an average of per disengagement. In the final three months of 2017, Cruise (owned by GM) averaged per disengagement over .
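The underlying metric is a simple ratio of miles driven to disengagements. The short sketch below computes it from made-up figures, not from any company's actual reports.

# Sketch of the disengagement metric, using made-up figures.
def disengagement_stats(miles_driven, disengagements):
    miles_per_disengagement = miles_driven / disengagements
    per_thousand_miles = disengagements / miles_driven * 1000
    return miles_per_disengagement, per_thousand_miles

mpd, per_k = disengagement_stats(miles_driven=100_000, disengagements=20)
print(f"{mpd:,.0f} miles per disengagement, {per_k:.2f} per 1,000 miles")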
2020s
Disengagement definitions
Reporting companies use varying definitions of what qualifies as a disengagement, and such definitions can change over time. Executives of self-driving car companies have criticized disengagements as a deceptive metric, because it does not consider varying road conditions.
Standards
In April 2021, WP.29 GRVA proposed a "Test Method for Automated Driving (NATM)".
In October 2021, Europe's pilot test, L3Pilot, demonstrated ADAS for cars in Hamburg, Germany, in conjunction with ITS World Congress 2021. SAE Level 3 and 4 functions were tested on ordinary roads.
In November 2022, an International Standard ISO 34502 on "Scenario based safety evaluation framework" was published.
Collision avoidance
In April 2022, collision avoidance testing was demonstrated by Nissan. Waymo published a document about collision avoidance testing in December 2022.
Simulation and validation
In September 2022, Biprogy released Driving Intelligence Validation Platform (DIVP) as part of Japanese national project "SIP-adus", which is interoperable with Open Simulation Interface (OSI) of ASAM.
Toyota
In November 2022, Toyota demonstrated one of its GR Yaris test cars, which had been trained using professional rally drivers. Toyota has collaborated with Microsoft in the FIA World Rally Championship since the 2017 season.
Pedestrian reactions
In 2023 David R. Large, senior research fellow with the Human Factors Research Group at the University of Nottingham, disguised himself as a car seat in a study to test people's reactions to driverless cars. He said, "We wanted to explore how pedestrians would interact with a driverless car and developed this unique methodology to explore their reactions." The study found that, in the absence of someone in the driving seat, pedestrians trust certain visual prompts more than others when deciding whether to cross the road.
Incidents
Tesla
As of 2023, Tesla's ADAS Autopilot/Full Self Driving (beta) was classified as Level 2 ADAS.
On 20 January 2016, the first of five known fatal crashes of a Tesla with Autopilot occurred, in China's Hubei province. Initially, Tesla stated that the vehicle was so badly damaged from the impact that their recorder was not able to determine whether the car had been on Autopilot at the time. However, the car failed to take evasive action.
Another fatal Autopilot crash occurred in May in Florida in a Tesla Model S that crashed into a tractor-trailer. In a civil suit between the father of the driver killed and Tesla, Tesla documented that the car had been on Autopilot. According to Tesla, "neither Autopilot nor the driver noticed the white side of the tractor-trailer against a brightly lit sky, so the brake was not applied." Tesla claimed that this was Tesla's first known Autopilot death in over with Autopilot engaged. Tesla claimed that on average one fatality occurs every across all vehicle types in the US. However, this number also includes motorcycle/pedestrian fatalities. The ultimate National Transportation Safety Board (NTSB) report concluded Tesla was not at fault; the investigation revealed that for Tesla cars, the crash rate dropped by 40 percent after Autopilot was installed.
Google Waymo
In June 2015, Google confirmed that 12 of its vehicles had suffered collisions as of that date. Eight involved rear-end collisions at a stop sign or traffic light, two involved the vehicle being side-swiped by another driver, one involved another driver rolling through a stop sign, and one occurred while a driver was controlling the car manually. In July 2015, three employees suffered minor injuries when their vehicle was rear-ended by a car whose driver failed to brake. This was the first collision that resulted in injuries.
According to Google Waymo's accident reports as of early 2016, their test cars had been involved in 14 collisions, of which other drivers were at fault 13 times, although in 2016 the car's software caused a crash. On 14 February 2016 a Google vehicle attempted to avoid sandbags blocking its path. During the maneuver it struck a bus. Google stated, "In this case, we clearly bear some responsibility, because if our car hadn't moved, there wouldn't have been a collision." Google characterized the crash as a misunderstanding and a learning experience. No injuries were reported.
Uber's Advanced Technologies Group (ATG)
In March 2018, Elaine Herzberg died after she was hit by an AC tested by Uber's Advanced Technologies Group (ATG) in Arizona. A safety driver was in the car. Herzberg was crossing the road about 400 feet from an intersection. Some experts said a human driver could have avoided the crash. Arizona governor Doug Ducey suspended the company's ability to test its ACs citing an "unquestionable failure" of Uber to protect public safety. Uber also stopped testing in California until receiving a new permit in 2020.
NTSB's final report determined that the immediate cause of the accident was that safety driver Rafaela Vasquez failed to monitor the road, because she was distracted by her phone, but that Uber's "inadequate safety culture" contributed. The report noted that the victim had "a very high level" of methamphetamine in her body. The board called on federal regulators to carry out a review before allowing automated test vehicles to operate on public roads.
In September 2020, Vasquez pled guilty to endangerment and was sentenced to three years' probation.
NIO Navigate on Pilot
On 12 August 2021, a 31-year-old Chinese man was killed after his NIO ES8 collided with a construction vehicle. NIO's self-driving feature was in beta and could not deal with static obstacles. The vehicle's manual clearly stated that the driver must take over near construction sites. Lawyers of the deceased's family questioned NIO's private access to the vehicle, which they argued did not guarantee the integrity of the data.
Pony.ai
In November 2021, the California Department of Motor Vehicles (DMV) notified Pony.ai that it was suspending its testing permit following a reported collision in Fremont on 28 October. In May 2022, DMV revoked Pony.ai's permit for failing to monitor the driving records of its safety drivers.
Cruise
In April 2022, a Cruise testing vehicle was reported to have blocked a fire engine responding to an emergency call, sparking questions about its ability to handle unexpected circumstances.
Ford
In February 2024, a driver using the Ford BlueCruise hands-free driving feature struck and killed the driver of a stationary car with no lights on in the middle lane of a freeway in Texas.
In March 2024, a drunk driver who was speeding, holding her cell phone, and using BlueCruise on a Pennsylvania freeway struck and killed two people who had been driving two cars. The first car had become disabled and was on the left shoulder with part of the car in the left driving lane. The second driver had parked his car behind the first car presumably to help the first driver.
The NTSB is investigating both incidents.
Total incidents
The NHTSA began mandating incident reports from autonomous vehicle companies in June 2021. Some reports cite incidents from as early as August 2019, with current data available through June 17, 2024.
There have been a total of 3,979 autonomous vehicle incidents (both ADS and ADAS) reported during this timeframe. 2,146 of those incidents (53.9%) involved Tesla vehicles.
Public opinion surveys
2010s
In a 2011 online survey of 2,006 US and UK consumers, 49% said they would be comfortable using a "driverless car".
A 2012 survey of 17,400 vehicle owners found 37% who initially said they would be interested in purchasing a "fully autonomous car". However, that figure dropped to 20% if told the technology would cost US$3,000 more.
In a 2012 survey of about 1,000 German drivers, 22% had a positive attitude, 10% were undecided, 44% were skeptical and 24% were hostile.
A 2013 survey of 1,500 consumers across 10 countries found 57% "stated they would be likely to ride in a car controlled entirely by technology that does not require a human driver", with Brazil, India and China the most willing to trust automated technology.
In a 2014 US telephone survey, over three-quarters of licensed drivers said they would consider buying a self-driving car, rising to 86% if car insurance were cheaper. 31.7% said they would not continue to drive once an automated car was available.
In 2015, a survey of 5,000 people from 109 countries reported that average respondents found manual driving the most enjoyable. 22% did not want to pay more money for autonomy. Respondents were found to be most concerned about hacking/misuse, and were also concerned about legal issues and safety. Finally, respondents from more developed countries were less comfortable with their vehicle sharing data. The survey reported consumer interest in purchasing an AC, stating that 37% of surveyed current owners were either "definitely" or "probably" interested.
In 2016, a survey of 1,603 people in Germany that controlled for age, gender, and education reported that men felt less anxiety and more enthusiasm, whereas women showed the opposite. The difference was pronounced between young men and women and decreased with age.
In a 2016 US survey of 1,584 people, "66 percent of respondents said they think autonomous cars are probably smarter than the average human driver". People were worried about safety and hacking risks. Nevertheless, only 13% of the interviewees saw no advantages in this new kind of car.
A 2017 survey of 4,135 US adults found that many Americans anticipated significant impacts from various automation technologies, including the widespread adoption of automated vehicles.
In 2019, results from two opinion surveys of 54 and 187 US adults respectively were published. The questionnaire was termed the autonomous vehicle acceptance model (AVAM), including additional description to help respondents better understand the implications of various automation levels. Users were less accepting of high autonomy levels and displayed significantly lower intention to use autonomous vehicles. Additionally, partial autonomy (regardless of level) was perceived as requiring uniformly higher driver engagement (usage of hands, feet and eyes) than full autonomy.
2020s
In 2022, a survey reported that only a quarter (27%) of the world's population would feel safe in self-driving cars.
In 2024, a study by Saravanos et al. at New York University reported that 87% of their respondents (from a sample of 358) believed that conditionally automated cars (at Level 3) would be easy to use.
Opinion surveys may have little salience given that few respondents had any personal experience with ACs.
Regulation
The regulation of autonomous cars concerns liability, approvals, and international conventions.
In the 2010s, researchers openly worried that delayed regulations could delay deployment. In 2020, UNECE WP.29 adopted a regulation addressing Level 3 automated driving.
Commercialization
Vehicles operating below Level 5 still offer many advantages.
Most commercially available ADAS vehicles are SAE Level 2. A couple of companies have reached higher levels, but only in restricted (geofenced) locations.
Level 2 – Partial Automation
SAE Level 2 features are available as part of the ADAS systems in many vehicles. In the US, 50% of new cars provide driver assistance for both steering and speed.
Ford started offering BlueCruise service on certain vehicles in 2022; the system is named ActiveGlide in Lincoln vehicles. The system provided features such as lane centering, street sign recognition, and hands-free highway driving on more than 130,000 miles of divided highways. The 2022 version 1.2 added features including hands-free lane changing, in-lane repositioning, and predictive speed assist. In April 2023, BlueCruise was approved in the UK for use on certain motorways, starting with 2023 models of Ford's electric Mustang Mach-E SUV.
Tesla's Autopilot and its Full Self-Driving (FSD) ADAS suites have been available on all Tesla cars since 2016. FSD offers highway and street driving (without geofencing), navigation/turn management, steering, dynamic cruise control, collision avoidance, lane-keeping/switching, emergency braking, and obstacle avoidance, but still requires the driver to remain ready to take control of the vehicle at any moment. Its driver management system combines eye tracking with monitoring pressure on the steering wheel to ensure that drivers are both eyes-on and hands-on.
Tesla's FSD rewrite V12 (released in March 2024) uses a single deep learning transformer model for all aspects of perception, monitoring, and control. It relies on its eight cameras for its vision-only perception system, without use of LiDAR, radar, or ultrasound. As of April 2024, FSD had been deployed on two million Tesla cars. As of January 2024, Tesla had not initiated requests for Level 3 status for its systems and had not disclosed its reasons for not doing so.
Development
General Motors is developing the "Ultra Cruise" ADAS system, that will be a dramatic improvement over their current "Super Cruise" system. Ultra Cruise will cover "95 percent" of driving scenarios on 2 million miles of roads in the US, according to the company. The system hardware in and around the car includes multiple cameras, short- and long-range radar, and a LiDAR sensor, and will be powered by the Qualcomm Snapdragon Ride Platform. The luxury Cadillac Celestiq electric vehicle will be one of the first vehicles to feature Ultra Cruise.
Europe is developing a new "Driver Control Assistance Systems" (DCAS) Level 2 regulation that would no longer limit the use of lane-changing systems to roads with two lanes and physical separation from oncoming traffic.
Level 3 – Conditional Automation
Two car manufacturers have sold or leased Level 3 cars: Honda in Japan, and Mercedes in Germany, Nevada, and California.
Mercedes Drive Pilot has been available on the EQS and S-Class sedans in Germany since 2022, and in California and Nevada since 2023. A subscription costs between €5,000 and €7,000 for three years in Germany and $2,500 for one year in the United States. Drive Pilot can be used only when the vehicle is traveling under , there is a vehicle in front, lane markings are readable, it is daytime, the weather is clear, and the car is on freeways mapped by Mercedes down to the centimeter (100,000 miles in California). As of April 2024, one Mercedes vehicle with this capability had been sold in California.
Development
Honda continued to enhance its Level 3 technology. As of 2023, 80 vehicles with Level 3 support had been sold.
Mercedes-Benz received authorization in early 2023 to pilot its Level 3 software in Las Vegas. California also authorized Drive Pilot in 2023.
BMW commercialized its AC in 2021. In 2023, BMW stated that its Level 3 technology was nearing release. It would be the second manufacturer to deliver Level 3 technology, but the only one whose Level 3 technology works in the dark.
In 2023, in China, IM Motors, Mercedes, and BMW obtained authorization to test vehicles with Level 3 systems on motorways.
In September 2021, Stellantis presented its findings from its Level 3 pilot testing on Italian highways. Stellantis's Highway Chauffeur claimed Level 3 capabilities, as tested on the Maserati Ghibli and Fiat 500X prototypes.
Polestar, a Volvo Cars brand, announced in January 2022 its plan to offer a Level 3 autonomous driving system in the Polestar 3 SUV, a Volvo XC90 successor, with technologies from Luminar Technologies, Nvidia, and Zenseact.
In January 2022, Bosch and the Volkswagen Group subsidiary CARIAD announced a collaboration on autonomous driving up to Level 3. The joint development also targets Level 4 capabilities.
Hyundai Motor Company is enhancing the cybersecurity of connected cars to offer a Level 3 self-driving Genesis G90. The Korean carmakers Kia and Hyundai delayed their Level 3 plans and did not deliver Level 3 vehicles in 2023.
Level 4 – High Automation
Waymo offers robotaxi services in parts of Arizona (Phoenix) and California (San Francisco and Los Angeles), as fully autonomous vehicles without safety drivers.
In April 2023 in Japan, a Level 4 protocol became part of the amended Road Traffic Act. The Level 4 ZEN drive Pilot, made by AIST, operates there.
Development
In July 2020, Toyota started public demonstration rides on Lexus LS (fifth generation) based TRI-P4 with Level 4 capability. In August 2021, Toyota operated a potentially Level 4 service using e-Palette around the Tokyo 2020 Olympic Village.
In September 2020, Mercedes-Benz introduced the world's first commercial Level 4 Automated Valet Parking (AVP) system, named Intelligent Park Pilot, for its new S-Class. In November 2022, Germany's Federal Motor Transport Authority (KBA) approved the system for use at Stuttgart Airport.
In September 2021, Cruise, General Motors, and Honda started a joint testing programme using the Cruise AV. In 2023, the Cruise Origin was put on indefinite hold following Cruise's loss of its operating permit.
In January 2023, Holon announced an autonomous shuttle during the 2023 Consumer Electronics Show (CES). The company claimed the vehicle is the world's first Level 4 shuttle built to automotive standard.
| Technology | Motorized road transport | null |
245963 | https://en.wikipedia.org/wiki/Distributed%20generation | Distributed generation | Distributed generation, also distributed energy, on-site generation (OSG), or district/decentralized energy, is electrical generation and storage performed by a variety of small, grid-connected or distribution system-connected devices referred to as distributed energy resources (DER).
Conventional power stations, such as coal-fired, gas, and nuclear powered plants, as well as hydroelectric dams and large-scale solar power stations, are centralized and often require electric energy to be transmitted over long distances. By contrast, DER systems are decentralized, modular, and more flexible technologies that are located close to the load they serve, albeit having capacities of only 10 megawatts (MW) or less. These systems can comprise multiple generation and storage components; in this instance, they are referred to as hybrid power systems.
DER systems typically use renewable energy sources, including small hydro, biomass, biogas, solar power, wind power, and geothermal power, and increasingly play an important role for the electric power distribution system. A grid-connected device for electricity storage can also be classified as a DER system and is often called a distributed energy storage system (DESS). By means of an interface, DER systems can be managed and coordinated within a smart grid. Distributed generation and storage enables the collection of energy from many sources and may lower environmental impacts and improve the security of supply.
One of the major issues with the integration of DER such as solar power and wind power is the uncertain nature of such electricity resources. This uncertainty can cause several problems in the distribution system: (i) it makes supply-demand relationships extremely complex and requires complicated optimization tools to balance the network, (ii) it puts higher pressure on the transmission network, and (iii) it may cause reverse power flow from the distribution system to the transmission system.
Microgrids are modern, localized, small-scale grids, contrary to the traditional, centralized electricity grid (macrogrid). Microgrids can disconnect from the centralized grid and operate autonomously, strengthen grid resilience, and help mitigate grid disturbances. They are typically low-voltage AC grids, often use diesel generators, and are installed by the community they serve. Microgrids increasingly employ a mixture of different distributed energy resources, such as solar hybrid power systems, which significantly reduce the amount of carbon emitted.
Overview
Historically, central plants have been an integral part of the electric grid, in which large generating facilities are specifically located either close to resources or otherwise located far from populated load centers. These, in turn, supply the traditional transmission and distribution (T&D) grid that distributes bulk power to load centers and from there to consumers. These were developed when the costs of transporting fuel and integrating generating technologies into populated areas far exceeded the cost of developing T&D facilities and tariffs. Central plants are usually designed to take advantage of available economies of scale in a site-specific manner, and are built as "one-off", custom projects.
These economies of scale began to fail in the late 1960s and, by the start of the 21st century, Central Plants could arguably no longer deliver competitively cheap and reliable electricity to more remote customers through the grid, because the plants had come to cost less than the grid and had become so reliable that nearly all power failures originated in the grid. Thus, the grid had become the main driver of remote customers' power costs and power quality problems, which became more acute as digital equipment required extremely reliable electricity. Efficiency gains no longer come from increasing generating capacity, but from smaller units located closer to sites of demand.
For example, coal power plants are built away from cities to prevent their heavy air pollution from affecting the populace. In addition, such plants are often built near collieries to minimize the cost of transporting coal. Hydroelectric plants are by their nature limited to operating at sites with sufficient water flow.
Low pollution is a crucial advantage of combined cycle plants that burn natural gas. The low pollution permits the plants to be near enough to a city to provide district heating and cooling.
Distributed energy resources are mass-produced, small, and less site-specific. Their development arose out of:
concerns over perceived externalized costs of central plant generation, particularly environmental concerns;
the increasing age, deterioration, and capacity constraints upon T&D for bulk power;
the increasing relative economy of mass production of smaller appliances over heavy manufacturing of larger units and on-site construction;
higher relative prices for energy, along with higher overall complexity and total costs for regulatory oversight, tariff administration, and metering and billing.
Capital markets have come to realize that right-sized resources, for individual customers, distribution substations, or microgrids, are able to offer important but little-known economic advantages over central plants. Smaller units achieved greater economic benefits through mass-production than larger units gained from their size alone. The increased value of these resources—resulting from improvements in financial risk, engineering flexibility, security, and environmental quality—often outweighs their apparent cost disadvantages. Distributed generation (DG), vis-à-vis central plants, must be justified on a life-cycle basis. Unfortunately, many of the direct, and virtually all of the indirect, benefits of DG are not captured within traditional utility cash-flow accounting.
While the levelized cost of DG is typically more expensive than conventional, centralized sources on a kilowatt-hour basis, this does not consider negative aspects of conventional fuels. The additional premium for DG is rapidly declining as demand increases and technology progresses, and sufficient and reliable demand may bring economies of scale, innovation, competition, and more flexible financing, that could make DG clean energy part of a more diversified future.
DG reduces the amount of energy lost in transmitting electricity because the electricity is generated very near where it is used, perhaps even in the same building. This also reduces the size and number of power lines that must be constructed.
Typical DER systems in a feed-in tariff (FIT) scheme have low maintenance, low pollution and high efficiencies. In the past, these traits required dedicated operating engineers and large complex plants to reduce pollution. However, modern embedded systems can provide these traits with automated operation and renewable energy, such as solar, wind and geothermal. This reduces the size of power plant that can show a profit.
Cybersecurity
Vulnerabilities in control systems from a single vendor, deployed at thousands of installations, can allow a single attacker to hack and remotely disable all of those sources, largely reversing the benefits of decentralised generation; this has been demonstrated in practice with solar power inverters and wind power control systems. In November 2024, inverters from the manufacturers Deye and Sol-Ark were remotely disabled in some countries due to an alleged regional sales policy dispute. The companies later claimed the blockage was not remote but due to geofencing mechanisms built into the inverters.
The EU NIS2 directive expands cybersecurity requirements to the energy generation market, which has faced backlash from renewable energy lobby groups.
Grid parity
Grid parity occurs when an alternative energy source can generate electricity at a levelized cost (LCOE) that is less than or equal to the end consumer's retail price. Reaching grid parity is considered to be the point at which an energy source becomes a contender for widespread development without subsidies or government support. Since the 2010s, grid parity for solar and wind has become a reality in a growing number of markets, including Australia, several European countries, and some states in the U.S.
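Grid parity is usually assessed by comparing an LCOE estimate with the local retail rate. The Python sketch below shows a simplified discounted LCOE calculation; the system cost, output, lifetime, and discount rate are illustrative assumptions.

# Minimal LCOE sketch: lifetime costs divided by lifetime generation, both
# discounted to present value. All numbers are illustrative assumptions.
def lcoe(capex, annual_opex, annual_kwh, years, discount_rate):
    """Levelized cost of electricity in currency units per kWh."""
    pv_costs = capex
    pv_energy = 0.0
    for t in range(1, years + 1):
        pv_costs += annual_opex / (1 + discount_rate) ** t
        pv_energy += annual_kwh / (1 + discount_rate) ** t
    return pv_costs / pv_energy

# A hypothetical 5 kW rooftop PV system: $7,500 installed, $100/yr O&M,
# 7,000 kWh/yr output, 25-year life, 5% discount rate.
cost = lcoe(capex=7500, annual_opex=100, annual_kwh=7000, years=25, discount_rate=0.05)
print(f"LCOE = ${cost:.3f}/kWh")  # grid parity if this is at or below the retail rate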
Technologies
Distributed energy resource (DER) systems are small-scale power generation or storage technologies (typically in the range of 1 kW to 10,000 kW) used to provide an alternative to, or an enhancement of, the traditional electric power system. DER systems are typically characterized by high initial capital costs per kilowatt. DER systems can also serve as storage devices and are then often called distributed energy storage systems (DESS).
DER systems may include the following devices/technologies:
Combined heat and power (CHP), also known as cogeneration or trigeneration
Fuel cells
Hybrid power systems (solar hybrid and wind hybrid systems)
Micro combined heat and power (MicroCHP)
Microturbines
Photovoltaic systems (typically rooftop solar PV)
Reciprocating engines
Small wind power systems
Stirling engines
or a combination of the above. For example, hybrid photovoltaic, CHP and battery systems can provide full electric power for single family residences without extreme storage expenses.
Cogeneration
Distributed cogeneration sources use steam turbines, natural gas-fired fuel cells, microturbines or reciprocating engines to turn generators. The hot exhaust is then used for space or water heating, or to drive an absorptive chiller for cooling such as air-conditioning. In addition to natural gas-based schemes, distributed energy projects can also include other renewable or low carbon fuels including biofuels, biogas, landfill gas, sewage gas, coal bed methane, syngas and associated petroleum gas.
Delta-ee consultants stated in 2013 that, with 64% of global sales, fuel cell micro combined heat and power passed conventional systems in sales in 2012. 20,000 units were sold in Japan in 2012 within the Ene Farm project. With a lifetime of around 60,000 hours for PEM fuel cell units, which shut down at night, this equates to an estimated lifetime of between ten and fifteen years, at a price of $22,600 before installation. For 2013, a state subsidy for 50,000 units was in place.
In addition, molten carbonate fuel cells and solid oxide fuel cells using natural gas, such as those from FuelCell Energy and the Bloom Energy Server, or waste-to-energy processes such as the Gate 5 Energy System, are used as distributed energy resources.
Solar power
Photovoltaics, by far the most important solar technology for distributed generation of solar power, uses solar cells assembled into solar panels to convert sunlight into electricity. It is a fast-growing technology doubling its worldwide installed capacity every couple of years. PV systems range from distributed, residential, and commercial rooftop or building integrated installations, to large, centralized utility-scale photovoltaic power stations.
The predominant PV technology is crystalline silicon, while thin-film solar cell technology accounts for about 10 percent of global photovoltaic deployment. In recent years, PV technology has improved its sunlight to electricity conversion efficiency, reduced the installation cost per watt as well as its energy payback time (EPBT) and levelised cost of electricity (LCOE), and has reached grid parity in at least 19 different markets in 2014.
Like most renewable energy sources, and unlike coal and nuclear, solar PV is variable and non-dispatchable, but it has no fuel costs or operating pollution, as well as greatly reduced mining-safety and operating-safety issues. It produces peak power around local noon each day and its capacity factor is around 20 percent.
Wind power
Wind turbines can be distributed energy resources or they can be built at utility scale. They have low maintenance and low pollution, but distributed wind, unlike utility-scale wind, has much higher costs than other sources of energy. As with solar, wind energy is variable and non-dispatchable. Wind towers and generators have substantial insurable liabilities caused by high winds, but good operating safety. Distributed generation from wind hybrid power systems combines wind power with other DER systems. One such example is the integration of wind turbines into solar hybrid power systems, as wind tends to complement solar because the peak operating times for each system occur at different times of the day and year.
Hydro power
Hydroelectricity is the most widely used form of renewable energy and its potential has already been explored to a large extent or is compromised due to issues such as environmental impacts on fisheries, and increased demand for recreational access. However, using modern 21st century technology, such as wave power, can make large amounts of new hydropower capacity available, with minor environmental impact.
Modular and scalable next-generation kinetic energy turbines can be deployed in arrays to serve needs on a residential, commercial, industrial, municipal or even regional scale. Microhydro kinetic generators require neither dams nor impoundments, as they utilize the kinetic energy of water motion, either waves or flow. No construction is needed on the shoreline or sea bed, which minimizes environmental impacts to habitats and simplifies the permitting process. Such power generation also has minimal environmental impact, and non-traditional microhydro applications can be tethered to existing construction such as docks, piers, bridge abutments, or similar structures.
Waste-to-energy
Municipal solid waste (MSW) and natural waste, such as sewage sludge, food waste and animal manure will decompose and discharge methane-containing gas that can be collected and used as fuel in gas turbines or micro turbines to produce electricity as a distributed energy resource. Additionally, a California-based company, Gate 5 Energy Partners, Inc. has developed a process that transforms natural waste materials, such as sewage sludge, into biofuel that can be combusted to power a steam turbine that produces power. This power can be used in lieu of grid-power at the waste source (such as a treatment plant, farm or dairy).
Energy storage
A distributed energy resource is not limited to the generation of electricity but may also include a device to store distributed energy (DE). Distributed energy storage systems (DESS) applications include several types of battery, pumped hydro, compressed air, and thermal energy storage. Access to energy storage for commercial applications is easily accessible through programs such as energy storage as a service (ESaaS).
PV storage
Common rechargeable battery technologies used in today's PV systems include the valve-regulated lead–acid battery, nickel–cadmium, and lithium-ion batteries. Compared to the other types, lead–acid batteries have a shorter lifetime and lower energy density. However, due to their high reliability, low self-discharge (4–6% per year), and low investment and maintenance costs, they are currently the predominant technology used in small-scale, residential PV systems, as lithium-ion batteries are still being developed and are about 3.5 times as expensive as lead–acid batteries. Furthermore, because storage devices for PV systems are stationary, the lower energy and power density, and therefore higher weight, of lead–acid batteries are not as critical as for electric vehicles.
However, lithium-ion batteries, such as the Tesla Powerwall, have the potential to replace lead–acid batteries in the near future, as they are being intensively developed and lower prices are expected due to economies of scale provided by large production facilities such as the Gigafactory 1. In addition, the Li-ion batteries of plug-in electric cars may serve as future storage devices: since most vehicles are parked an average of 95 percent of the time, their batteries could be used to let electricity flow from the car to the power lines and back. Other rechargeable batteries considered for distributed PV systems include sodium–sulfur and vanadium redox batteries, two prominent types of molten salt and flow battery, respectively.
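A rough comparison between battery chemistries in a residential PV system often comes down to usable energy per cycle, which depends on the allowed depth of discharge and round-trip efficiency. The sketch below uses illustrative figures only.

# Back-of-the-envelope sketch: usable energy from a residential PV battery,
# given nameplate capacity, allowed depth of discharge, and round-trip
# efficiency. All figures are illustrative assumptions.
def usable_kwh(nameplate_kwh, depth_of_discharge, round_trip_efficiency):
    return nameplate_kwh * depth_of_discharge * round_trip_efficiency

lead_acid = usable_kwh(10.0, depth_of_discharge=0.5, round_trip_efficiency=0.80)
lithium   = usable_kwh(10.0, depth_of_discharge=0.9, round_trip_efficiency=0.92)
print(f"10 kWh lead-acid bank: ~{lead_acid:.1f} kWh usable per cycle")
print(f"10 kWh lithium-ion bank: ~{lithium:.1f} kWh usable per cycle")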
Vehicle-to-grid
Future generations of electric vehicles may have the ability to deliver power from the battery in a vehicle-to-grid into the grid when needed. An electric vehicle network has the potential to serve as a DESS.
Flywheels
An advanced flywheel energy storage (FES) stores the electricity generated from distributed resources in the form of angular kinetic energy by accelerating a rotor (flywheel) to a very high speed of about 20,000 to over 50,000 rpm in a vacuum enclosure. Flywheels can respond quickly as they store and feed back electricity into the grid in a matter of seconds.
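The stored energy follows the rotational kinetic-energy relation E = ½Iω². The sketch below evaluates it for an assumed solid-cylinder rotor; the mass, radius, and speed are illustrative values, not those of any particular product.

# Sketch of the stored energy in a flywheel, E = 1/2 * I * omega^2, using an
# illustrative solid-cylinder rotor (I = 1/2 * m * r^2). Values are assumed.
import math

def flywheel_energy_kwh(mass_kg, radius_m, rpm):
    inertia = 0.5 * mass_kg * radius_m ** 2          # solid cylinder, kg*m^2
    omega = rpm * 2 * math.pi / 60                   # rad/s
    joules = 0.5 * inertia * omega ** 2
    return joules / 3.6e6                            # J -> kWh

# 100 kg rotor, 0.25 m radius, spinning at 40,000 rpm.
print(f"{flywheel_energy_kwh(100, 0.25, 40_000):.1f} kWh")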
Integration with the grid
For reasons of reliability, distributed generation resources would be interconnected to the same transmission grid as central stations. Various technical and economic issues occur in the integration of these resources into a grid. Technical problems arise in the areas of power quality, voltage stability, harmonics, reliability, protection, and control. Behavior of protective devices on the grid must be examined for all combinations of distributed and central station generation. A large scale deployment of distributed generation may affect grid-wide functions such as frequency control and allocation of reserves. As a result, smart grid functions, virtual power plants and grid energy storage such as power to gas stations are added to the grid. Conflicts occur between utilities and resource managing organizations.
Each distributed generation resource has its own integration issues. Solar PV and wind power both have intermittent and unpredictable generation, so they create many stability issues for voltage and frequency. These voltage issues affect mechanical grid equipment, such as load tap changers, which respond too often and wear out much more quickly than utilities anticipated. Also, without any form of energy storage during times of high solar generation, companies must rapidly increase other generation around the time of sunset to compensate for the loss of solar generation. This high ramp rate produces what the industry terms the duck curve, which is a major concern for grid operators in the future. Storage can fix these issues if it can be implemented. Flywheels have been shown to provide excellent frequency regulation. Flywheels are also highly cyclable compared to batteries, meaning they maintain the same energy and power after a significant number of cycles (on the order of 10,000 cycles). Short-term-use batteries, at a large enough scale of use, can help to flatten the duck curve, prevent generator use fluctuation, and help to maintain the voltage profile. However, cost is a major limiting factor for energy storage, as each technique is prohibitively expensive to produce at scale and comparatively not energy dense compared to liquid fossil fuels.
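The duck curve and its evening ramp can be illustrated by subtracting solar output from demand hour by hour; the figures in the sketch below are invented for illustration and do not describe any real grid.

# Sketch of the "duck curve": hourly net load (demand minus solar output) and
# the evening ramp the remaining generators must supply. Numbers are made up.
demand = [22, 21, 20, 20, 21, 23, 26, 28, 29, 29, 28, 28,
          28, 28, 28, 29, 30, 32, 34, 35, 33, 30, 26, 23]        # GW by hour
solar  = [0, 0, 0, 0, 0, 0, 1, 3, 6, 8, 10, 11,
          11, 10, 8, 6, 3, 1, 0, 0, 0, 0, 0, 0]                  # GW by hour

net_load = [d - s for d, s in zip(demand, solar)]
ramps = [net_load[h + 1] - net_load[h] for h in range(23)]
worst_hour = max(range(23), key=lambda h: ramps[h])
print(f"steepest ramp: +{ramps[worst_hour]} GW between hour {worst_hour} and {worst_hour + 1}")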
Finally, another method of aiding in integration is in the use of intelligent inverters that have the capability to also store the energy when there is more energy production than consumption.
Mitigating voltage and frequency issues of DG integration
There have been some efforts to mitigate voltage and frequency issues due to increased implementation of DG. Most notably, IEEE 1547 sets the standard for interconnection and interoperability of distributed energy resources. IEEE 1547 sets specific curves signaling when to clear a fault as a function of the time after the disturbance and the magnitude of the voltage irregularity or frequency irregularity. Voltage issues also give legacy equipment the opportunity to perform new operations. Notably, inverters can regulate the voltage output of DGs. Changing inverter impedances can change voltage fluctuations of DG, meaning inverters have the ability to control DG voltage output. To reduce the effect of DG integration on mechanical grid equipment, transformers and load tap changers have the potential to implement specific tap operation vs. voltage operation curves mitigating the effect of voltage irregularities due to DG. That is, load tap changers respond to voltage fluctuations that last for a longer period than voltage fluctuations created from DG equipment.
Stand alone hybrid systems
It is now possible to combine technologies such as photovoltaics, batteries and cogeneration to make stand alone distributed generation systems.
Recent work has shown that such systems have a low levelized cost of electricity.
Many authors now think that these technologies may enable mass-scale grid defection because consumers can produce electricity using off-grid systems primarily made up of solar photovoltaic technology. For example, the Rocky Mountain Institute has proposed that there may be wide-scale grid defection. This is backed up by studies in the Midwest.
Cost factors
Cogenerators find favor because most buildings already burn fuels, and cogeneration can extract more value from the fuel. Local production has no electricity transmission losses on long-distance power lines or energy losses from the Joule effect in transformers, where in general 8–15% of the energy is lost (see also cost of electricity by source). Some larger installations utilize combined cycle generation. Usually this consists of a gas turbine whose exhaust boils water for a steam turbine in a Rankine cycle. The condenser of the steam cycle provides the heat for space heating or an absorptive chiller. Combined cycle plants with cogeneration have the highest known thermal efficiencies, often exceeding 85%. In countries with high-pressure gas distribution, small turbines can be used to bring the gas pressure to domestic levels whilst extracting useful energy. If the UK were to implement this countrywide, an additional 2–4 GWe would become available. (Note that the energy is already being generated elsewhere to provide the high initial gas pressure – this method simply distributes the energy via a different route.)
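The effect of avoided transmission and distribution losses can be illustrated with the 8–15% figure above; the sketch below compares delivered energy for an assumed 1,000 kWh of generation.

# Sketch comparing delivered energy: central generation loses roughly 8-15%
# in transmission and distribution (per the text above), on-site generation
# essentially none. Figures here are illustrative.
def delivered_kwh(generated_kwh, loss_fraction):
    return generated_kwh * (1 - loss_fraction)

generated = 1000.0  # kWh
for loss in (0.08, 0.15):
    print(f"central plant, {loss:.0%} T&D loss: {delivered_kwh(generated, loss):.0f} kWh delivered")
print(f"on-site generation: {delivered_kwh(generated, 0.0):.0f} kWh delivered")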
Microgrid
A microgrid is a localized grouping of electricity generation, energy storage, and loads that normally operates connected to a traditional centralized grid (macrogrid). This single point of common coupling with the macrogrid can be disconnected, and the microgrid can then function autonomously. Generation and loads in a microgrid are usually interconnected at low voltage, and it can operate on DC, AC, or a combination of the two. From the point of view of the grid operator, a connected microgrid can be controlled as if it were one entity.
Microgrid generation resources can include stationary batteries, fuel cells, solar, wind, or other energy sources. The multiple dispersed generation sources and ability to isolate the microgrid from a larger network would provide highly reliable electric power. Produced heat from generation sources such as microturbines could be used for local process heating or space heating, allowing flexible trade off between the needs for heat and electric power.
Micro-grids were proposed in the wake of the July 2012 India blackout:
Small micro-grids covering 30–50 km radius
Small power stations of 5–10 MW to serve the micro-grids
Generate power locally to reduce dependence on long-distance transmission lines and cut transmission losses.
Micro-grids have been implemented in a number of communities around the world. For example, Tesla has implemented a solar micro-grid on the Samoan island of Ta'u, powering the entire island with solar energy. This localized production system has helped save over of diesel fuel. It can also sustain the island for three whole days if the sun were not to shine at all during that period. This is an example of how micro-grid systems can be implemented in communities to encourage renewable resource usage and localized production.
To plan and install microgrids correctly, engineering modelling is needed. Multiple simulation and optimization tools exist to model the economic and electric effects of microgrids. A widely used economic optimization tool is the Distributed Energy Resources Customer Adoption Model (DER-CAM) from Lawrence Berkeley National Laboratory. Another frequently used commercial economic modelling tool is HOMER Energy, originally designed by the National Renewable Energy Laboratory. There are also power flow and electrical design tools guiding microgrid developers. The Pacific Northwest National Laboratory designed the publicly available GridLAB-D tool, and the Electric Power Research Institute (EPRI) designed OpenDSS to simulate the distribution system (for microgrids). A professional integrated DER-CAM and OpenDSS version is available via BankableEnergy. A European tool that can be used for electrical, cooling, heating, and process heat demand simulation is EnergyPLAN from Aalborg University, Denmark.
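The following toy sketch illustrates, at a vastly simplified level, the kind of least-cost dispatch calculation such economic tools perform. It is not DER-CAM, HOMER, GridLAB-D, or OpenDSS; the resource names, capacities, and prices are invented for the example.

```python
# Toy greedy least-cost dispatch for one hour of a hypothetical microgrid.
# Real tools (DER-CAM, HOMER, etc.) solve far richer optimization problems;
# this only illustrates meeting the load with the cheapest resources first.

resources = [                        # (name, available power in kW, marginal cost in $/kWh)
    ("solar_pv",      40.0, 0.00),
    ("battery",       20.0, 0.05),
    ("microturbine",  60.0, 0.12),
    ("grid_import", 1000.0, 0.20),
]

def dispatch(load_kw: float):
    """Meet the load with the cheapest resources first; return the schedule and any unmet load."""
    schedule, remaining = [], load_kw
    for name, capacity, cost in sorted(resources, key=lambda r: r[2]):
        used = min(capacity, remaining)
        if used > 0:
            schedule.append((name, used, cost))
            remaining -= used
    return schedule, remaining

plan, unmet = dispatch(load_kw=95.0)
for name, kw, cost in plan:
    print(f"{name}: {kw:.1f} kW at ${cost:.2f}/kWh")
print("unmet load:", unmet, "kW")
```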
Communication in DER systems
IEC 61850-7-420 is published by IEC TC 57 (Power systems management and associated information exchange). It is one of the IEC 61850 standards, some of which are core standards required for implementing smart grids. It uses communication services mapped to MMS per the IEC 61850-8-1 standard.
OPC is also used for the communication between different entities of DER system.
The Institute of Electrical and Electronics Engineers IEEE 2030.7 standard covers the microgrid controller. The concept relies on four blocks: a) device-level control (e.g. voltage and frequency control), b) local area control (e.g. data communication), c) supervisory (software) controller (e.g. forward-looking dispatch optimization of generation and load resources), and d) grid layer (e.g. communication with the utility).
A wide variety of complex control algorithms exist, making it difficult for small and residential distributed energy resource (DER) users to implement energy management and control systems. In particular, communication upgrades and data information systems can make them expensive. Thus, some projects try to simplify the control of DER via off-the-shelf products and make it usable for the mainstream (e.g. using a Raspberry Pi).
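A minimal sketch of what such a simplified controller might look like is given below. The metering and inverter interface functions are hypothetical placeholders, since hardware interfaces vary between installations; the control strategy (naive self-consumption) is chosen only for illustration.

```python
# Minimal sketch of a simplified residential DER controller that could run on
# off-the-shelf hardware such as a Raspberry Pi. read_net_load_kw() and
# set_battery_power_kw() are hypothetical placeholders for real metering and
# inverter interfaces.
import time

BATTERY_MAX_KW = 5.0  # assumed charge/discharge limit for the example

def read_net_load_kw() -> float:
    """Placeholder: positive = importing from the grid, negative = exporting."""
    raise NotImplementedError

def set_battery_power_kw(power_kw: float) -> None:
    """Placeholder: positive = discharge to the loads, negative = charge."""
    raise NotImplementedError

def control_loop(interval_s: float = 5.0) -> None:
    """Naive self-consumption: discharge to cover grid imports, charge from any
    surplus export, clipped to the battery rating."""
    while True:
        net_load = read_net_load_kw()
        setpoint = max(-BATTERY_MAX_KW, min(BATTERY_MAX_KW, net_load))
        set_battery_power_kw(setpoint)
        time.sleep(interval_s)
```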
Legal requirements for distributed generation
In 2010, Colorado enacted a law requiring that, by 2020, 3% of the power generated in Colorado utilize distributed generation of some sort.
On 11 October 2017, California Governor Jerry Brown signed into law a bill, SB 338, that makes utility companies plan "carbon-free alternatives to gas generation" in order to meet peak demand. The law requires utilities to evaluate issues such as energy storage, efficiency, and distributed energy resources.
| Technology | Electricity transmission and distribution | null |
245973 | https://en.wikipedia.org/wiki/Mouth%20ulcer | Mouth ulcer | A mouth ulcer (aphtha), sometimes called a canker sore or salt blister, is an ulcer that occurs on the mucous membrane of the oral cavity. Mouth ulcers are very common, occurring in association with many diseases and by many different mechanisms, but usually there is no serious underlying cause. Rarely, a mouth ulcer that does not heal may be a sign of oral cancer. These ulcers may form individually or multiple ulcers may appear at once (i.e., a "crop" of ulcers). Once formed, an ulcer may be maintained by inflammation and/or secondary infection.
The two most common causes of oral ulceration are local trauma (e.g. rubbing from a sharp edge on a broken filling or braces, biting one's lip, etc.) and aphthous stomatitis ("canker sores"), a condition characterized by the recurrent formation of oral ulcers for largely unknown reasons. Mouth ulcers often cause pain and discomfort and may alter the person's choice of food while healing occurs (e.g. avoiding acidic, sugary, salty or spicy foods and beverages).
Definition
An ulcer (from Latin ulcus, "ulcer, sore") is a break in the skin or mucous membrane with loss of surface tissue and the disintegration and necrosis of epithelial tissue. A mucosal ulcer is an ulcer which specifically occurs on a mucous membrane.
An ulcer is a tissue defect which has penetrated the epithelial-connective tissue border, with its base at a deep level in the submucosa, or even within muscle or periosteum. An ulcer is a deeper breach of epithelium compared to an erosion or excoriation, and involves damage to both epithelium and lamina propria.
An erosion is a superficial breach of the epithelium, with little damage to the underlying lamina propria. A mucosal erosion is an erosion which specifically occurs on a mucous membrane. Only the superficial epithelial cells of the epidermis or of the mucosa are lost, and the lesion can reach the depth of the basement membrane. Erosions heal without scar formation.
Excoriation is a term sometimes used to describe a breach of the epithelium which is deeper than an erosion but shallower than an ulcer. This type of lesion is tangential to the rete pegs and shows punctiform (small pinhead spots) bleeding, caused by exposed capillary loops.
Causes
Mouth ulcers may be caused by physical means (for example accidental biting of the cheek), certain medical conditions (such as some vitamin deficiencies), or as an adverse effect of some medications.
Pathophysiology
The exact pathogenesis is dependent upon the cause.
Simple mechanisms which predispose the mouth to trauma and ulceration are xerostomia (dry mouth – as saliva usually lubricates the mucous membrane and controls bacterial levels) and epithelial atrophy (thinning, e.g., after radiotherapy), making the lining more fragile and easily breached. Stomatitis is a general term meaning inflammation within the mouth, and often may be associated with ulceration.
Pathologically, the mouth represents a transition between the gastrointestinal tract and the skin, meaning that many gastrointestinal and cutaneous conditions can involve the mouth. Some conditions usually associated with the whole gastrointestinal tract may present only in the mouth, e.g., orofacial granulomatosis/oral Crohn's disease.
Similarly, cutaneous (skin) conditions can also involve the mouth and sometimes only the mouth, sparing the skin. The different environmental conditions (saliva, thinner mucosa, trauma from teeth and food) mean that some cutaneous disorders which produce characteristic lesions on the skin produce only nonspecific lesions in the mouth. The vesicles and bullae of blistering mucocutaneous disorders progress quickly to ulceration in the mouth, because of moisture and trauma from food and teeth. The high bacterial load in the mouth means that ulcers may become secondarily infected. Cytotoxic drugs administered during chemotherapy target cells with fast turnovers such as malignant cells. However, the epithelium of the mouth also has a high turnover rate, which makes oral ulceration (mucositis) a common side effect of chemotherapy.
Erosions, which involve the epithelial layer, are red in appearance since the underlying lamina propria shows through. When the full thickness of the epithelium is penetrated (ulceration), the lesion becomes covered with a fibrinous exudate and takes on a yellow-grey color. Because an ulcer is a breach of the normal lining, when seen in cross section, the lesion is a crater. A "halo" may be present, which is a reddening of the surrounding mucosa and is caused by inflammation. There may also be edema (swelling) around the ulcer. Chronic trauma may produce an ulcer with a keratotic (white, thickened mucosa) margin. Malignant lesions may ulcerate either because the tumor infiltrates the mucosa from adjacent tissues, or because the lesion originates within the mucosa itself, and the disorganized growth leads to a break in the normal architecture of the lining tissues. Repeat episodes of mouth ulcers can be indicative of an immunodeficiency, signaling low levels of immunoglobulin in the oral mucous membranes. Chemotherapy, HIV, and mononucleosis are all causes of immunodeficiency/immunosuppression with which oral ulcers may become a common manifestation. Autoimmunity is also a cause of oral ulceration. Mucous membrane pemphigoid, an autoimmune reaction to the epithelial basement membrane, causes desquamation/ulceration of the oral mucosa.
Numerous aphthous ulcers could be indicative of an inflammatory autoimmune disease called Behçet's disease. This can later involve skin lesions and uveitis in the eyes. Vitamin C deficiency may lead to scurvy which impairs wound healing, which can contribute to ulcer formation. For a detailed discussion of the pathophysiology of aphthous stomatitis, see Aphthous stomatitis#Causes.
Diagnosis
Diagnosis of mouth ulcers usually consists of a medical history followed by an oral examination as well as examination of any other involved area. The following details may be pertinent: The duration that the lesion has been present, the location, the number of ulcers, the size, the color and whether it is hard to touch, bleeds or has a rolled edge. As a general rule, a mouth ulcer that does not heal within 2 or 3 weeks should be examined by a health care professional who is able to rule out oral cancer (e.g. a dentist, oral physician, oral surgeon, or maxillofacial surgeon). If there have been previous ulcers that have healed, then this again makes cancer unlikely.
An ulcer that keeps forming on the same site and then healing may be caused by a nearby sharp surface, and ulcers that heal and then recur at different sites are likely to be recurrent aphthous stomatitis (RAS). Malignant ulcers are likely to be single in number, and conversely, multiple ulcers are very unlikely to be oral cancer. The size of the ulcers may be helpful in distinguishing the types of RAS, as can the location (minor RAS mainly occurs on non-keratinizing mucosa, major RAS occurs anywhere in the mouth or oropharynx). Induration, contact bleeding and rolled margins are features of a malignant ulcer. There may be nearby causative factor, e.g. a broken tooth with a sharp edge that is traumatizing the tissues. Otherwise, the person may be asked about problems elsewhere, e.g. ulceration of the genital mucous membranes, eye lesions or digestive problems, swollen glands in neck (lymphadenopathy) or a general unwell feeling.
The diagnosis comes mostly from the history and examination, but the following special investigations may be involved: blood tests (vitamin deficiency, anemia, leukemia, Epstein-Barr virus, HIV infection, diabetes) microbiological swabs (infection), or urinalysis (diabetes). A biopsy (minor procedure to cut out a small sample of the ulcer to look at under a microscope) with or without immunofluorescence may be required, to rule out cancer, but also if a systemic disease is suspected. Ulcers caused by local trauma are painful to touch and sore. They usually have an irregular border with erythematous margins and the base is yellow. As healing progresses, a keratotic (thickened, white mucosa) halo may occur.
Differential diagnosis
Due to various factors (saliva, relative thinness of oromucosa, trauma from teeth, chewing, etc.), vesicles and bullae which form on the mucous membranes of the oral cavity tend to be fragile and quickly break down to leave ulcers.
Aphthous stomatitis and local trauma are very common causes of oral ulceration; the many other possible causes are all rare in comparison.
Traumatic ulceration
Most mouth ulcers that are not associated with recurrent aphthous stomatitis are caused by local trauma. The mucous membrane lining of the mouth is thinner than the skin, and easily damaged by mechanical, thermal (heat/cold), chemical, or electrical means, or by irradiation.
Mechanical
Common causes of oral ulceration include rubbing on sharp edges of teeth, fillings, crowns, false teeth (dentures), or braces (orthodontic appliances), or accidental biting caused by a lack of awareness of painful stimuli in the mouth (e.g., following local anesthetic used during dental treatment, which the person becomes aware of as the anesthetic wears off).
Eating hard foods (e.g., potato chips) can damage the lining of the mouth. Some people cause damage inside their mouths themselves, either through an absentminded habit or as a type of deliberate self-harm (factitious ulceration). Examples include biting the cheek, tongue, or lips, or rubbing a fingernail, pen, or toothpick inside the mouth. Tearing (and subsequent ulceration) of the upper labial frenum may be a sign of child abuse (non-accidental injury).
Iatrogenic ulceration can also occur during dental treatment, where incidental abrasions to the soft tissues of the mouth are common. Some dentists apply a protective layer of petroleum jelly to the lips before carrying out dental work to minimize this.
The lingual frenum is also vulnerable to ulceration by repeated friction during oral sexual activity ("cunnilingus tongue"). Rarely, infants can ulcerate the tongue or lower lip with the teeth, termed Riga-Fede disease.
Thermal and electrical burn
Thermal burns usually result from placing hot food or beverages in the mouth. This may occur in those who eat or drink before a local anesthetic has worn off. The normal painful sensation is absent and a burn may occur. Microwave ovens sometimes produce food that is cold externally and very hot internally, and this has led to a rise in the frequency of intra-oral thermal burns. Thermal food burns are usually on the palate or posterior buccal mucosa, and appear as zones of erythema and ulceration with necrotic epithelium peripherally. Electrical burns more commonly affect the oral commissure (corner of the mouth). The lesions are usually initially painless, charred and yellow with little bleeding. Swelling then develops and by the fourth day following the burn the area becomes necrotic and the epithelium sloughs off.
Electrical burns in the mouth are usually caused by chewing on live electrical wiring (an act that is relatively common among young children). Saliva acts as a conducting medium and an electrical arc flows between the electrical source and the tissues, causing extreme heat and possible tissue destruction.
Chemical injury
Caustic chemicals may cause ulceration of the oral mucosa if they are of strong-enough concentration and in contact for a sufficient length of time. The holding of medication in the mouth instead of swallowing it occurs mostly in children, those under psychiatric care, or simply because of a lack of understanding. Holding an aspirin tablet next to a painful tooth in an attempt to relieve pulpitis (toothache) is common, and leads to epithelial necrosis. Chewable aspirin tablets should be swallowed, with the residue quickly cleared from the mouth.
Other caustic medications include eugenol and chlorpromazine. Hydrogen peroxide, used to treat gum disease, is also capable of causing epithelial necrosis at concentrations of 1–3%. Silver nitrate, sometimes used for pain relief from aphthous ulceration, acts as a chemical cauterant and destroys nerve endings, but the mucosal damage is increased. Phenol is used during dental treatment as a cavity sterilizing agent and cauterizing material, and it is also present in some over-the-counter agents intended to treat aphthous ulcerations. Mucosal necrosis has been reported to occur with concentrations of 0.5%. Other materials used in endodontics are also caustic, which is part of the reason why use of a rubber dam is now recommended.
Irradiation
As a result of radiotherapy to the mouth, radiation-induced stomatitis may develop, which can be associated with mucosal erosions and ulceration. If the salivary glands are irradiated, there may also be xerostomia (dry mouth), making the oral mucosa more vulnerable to frictional damage as the lubricating function of saliva is lost, and mucosal atrophy (thinning), which makes a breach of the epithelium more likely. Radiation to the bones of the jaws causes damage to osteocytes and impairs the blood supply. The affected hard tissues become hypovascular (reduced number of blood vessels), hypocellular (reduced number of cells), and hypoxic (low levels of oxygen). Osteoradionecrosis is the term for when such an area of irradiated bone does not heal from this damage. This usually occurs in the mandible, and causes chronic pain and surface ulceration, sometimes resulting in non-healing bone being exposed through a soft tissue defect. Prevention of osteradionecrosis is part of the reason why all teeth of questionable prognosis are removed before the start of a course of radiotherapy.
Aphthous stomatitis
Aphthous stomatitis (also termed recurrent aphthous stomatitis, RAS, and commonly called "canker sores") is a very common cause of oral ulceration. 10–25% of the general population have this non-contagious condition. Three types of aphthous stomatitis exist based on their appearance, namely minor, major, and herpetiform aphthous ulceration. Minor aphthous ulceration is the most common type, presenting with 1–6 small (2–4 mm diameter), round/oval ulcers with a yellow-grey color and an erythematous (red) "halo". These ulcers heal with no permanent scarring in about 7–10 days. Ulcers recur at intervals of about 1–4 months. Major aphthous ulceration is less common than the minor type, but produces more severe lesions and symptoms. Major aphthous ulceration presents with larger (>1 cm diameter) ulcers that take much longer to heal (10–40 days) and may leave scarring. The minor and major subtypes of aphthous stomatitis usually produce lesions on the non-keratinized oral mucosa (i.e. the inside of the cheeks, lips, underneath the tongue and the floor of mouth), but less commonly major aphthous ulcers may occur in other parts of the mouth on keratinized mucosal surfaces. The least common type is herpetiform ulceration, so named because the condition resembles primary herpetic gingivostomatitis. Herpetiform ulcers begin as small blisters (vesicles) which break down into 2–3 mm sized ulcers. Herpetiform ulcers appear in "crops", sometimes hundreds in number, which can coalesce to form larger areas of ulceration. This subtype may cause extreme pain, heals with scarring and may recur frequently.
The exact cause of aphthous stomatitis is unknown, but there may be a genetic predisposition in some people. Other possible causes include hematinic deficiency (folate, vitamin B, iron), stopping smoking, stress, menstruation, trauma, food allergies or hypersensitivity to sodium lauryl sulphate (found in many brands of toothpaste). Aphthous stomatitis has no clinically detectable signs or symptoms outside the mouth, but the recurrent ulceration can cause much discomfort to those affected. Treatment is aimed at reducing the pain and swelling and speeding healing, and may involve systemic or topical steroids, analgesics (pain killers), antiseptics, anti-inflammatories or barrier pastes to protect the raw area(s).
Infection
Many infections can cause oral ulceration (see table). The most common are herpes simplex virus (herpes labialis, primary herpetic gingivostomatitis), varicella zoster (chicken pox, shingles), and coxsackie A virus (hand, foot and mouth disease). Human immunodeficiency virus (HIV) creates immunodeficiencies which allow opportunistic infections or neoplasms to proliferate. Bacterial processes leading to ulceration can be caused by Mycobacterium tuberculosis (tuberculosis) and Treponema pallidum (syphilis).
Opportunistic activity by combinations of otherwise normal bacterial flora, such as aerobic streptococci, Neisseria, Actinomyces, spirochetes, and Bacteroides species can prolong the ulcerative process. Fungal causes include Coccidioides immitis (valley fever), Cryptococcus neoformans (cryptococcosis), and Blastomyces dermatitidis ("North American Blastomycosis"). Entamoeba histolytica, a parasitic protozoan, is sometimes known to cause mouth ulcers through formation of cysts. Epstein-Barr virus-positive mucocutaneous ulcer is a rare form of the Epstein-Barr virus-associated lymphoproliferative diseases in which infiltrating, Epstein-Barr virus (i.e. EBV)-infected B cells cause solitary, well-circumscribed ulcers in mucous membranes and skin.
Drug-induced
Many drugs can cause mouth ulcers as a side effect. Common examples are alendronate (a bisphosphonate, commonly prescribed for osteoporosis), cytotoxic drugs (e.g. methotrexate, i.e. chemotherapy), non-steroidal anti-inflammatory drugs, nicorandil (may be prescribed for angina) and propylthiouracil (e.g. used for hyperthyroidism). Some recreational drugs can cause ulceration, e.g. cocaine.
Malignancy
Rarely, a persistent, non-healing mouth ulcer may be a cancerous lesion. Malignancies in the mouth are usually carcinomas, but lymphomas, sarcomas and others may also be possible. Either the tumor arises in the mouth, or it may grow to involve the mouth, e.g. from the maxillary sinus, salivary glands, nasal cavity or peri-oral skin. The most common type of oral cancer is squamous cell carcinoma. The main risk factors are long-term smoking and alcohol consumption (particularly when combined) and betel use.
Common sites of oral cancer are the lower lip, the floor of the mouth, and the sides, underside of the tongue and mandibular alveolar ridge, but it is possible to have a tumor anywhere in the mouth. Appearances vary greatly, but a typical malignant ulcer would be a persistent, expanding lesion that is totally red (erythroplasia) or speckled red and white (erythroleukoplakia). Malignant lesions also typically feel indurated (hardened) and attached to adjacent structures, with "rolled" margins or a punched out appearance and bleeds easily on gentle manipulation. If someone has an unexplained mouth ulcer persisting for more than 3 weeks this may indicate a need for a referral from the GDP or GP to hospital to exclude oral cancer.
Vesiculobullous diseases
Some of the viral infections mentioned above are also classified as vesiculobullous diseases. Other example vesiculobullous diseases include pemphigus vulgaris, mucous membrane pemphigoid, bullous pemphigoid, dermatitis herpetiformis, linear IgA disease, and epidermolysis bullosa.
Allergy
Rarely, allergic reactions of the mouth and lips may manifest as erosions; however, such reactions usually do not produce frank ulceration. An example of one common allergen is Balsam of Peru. If individuals allergic to this substance have oral exposure they may experience stomatitis and cheilitis (inflammation, rash, or painful erosion of the lips, oropharyngeal mucosa, or angles of their mouth). Balsam of Peru is used in foods and drinks for flavoring, in perfumes and toiletries for fragrance, and in medicine and pharmaceutical items for healing properties.
Other causes
A wide range of other diseases may cause mouth ulcers. Hematological causes include anemia, hematinic deficiencies, neutropenia, hypereosinophilic syndrome, leukemia, myelodysplastic syndromes, other white cell dyscrasias, and gammopathies. Gastrointestinal causes include celiac disease, Crohn's disease (orofacial granulomatosis), and ulcerative colitis. Dermatological causes include chronic ulcerative stomatitis, erythema multiforme (Stevens-Johnson syndrome), angina bullosa haemorrhagica and lichen planus. Other examples of systemic disease capable of causing mouth ulcers include lupus erythematosus, Sweet syndrome, reactive arthritis, Behçet syndrome, granulomatosis with polyangiitis, periarteritis nodosa, giant cell arteritis, diabetes, glucagonoma, sarcoidosis and periodic fever, aphthous stomatitis, pharyngitis and adenitis.
The conditions eosinophilic ulcer and necrotizing sialometaplasia may present as oral ulceration.
Macroglossia, an abnormally large tongue, can be associated with ulceration if the tongue protrudes constantly from the mouth. Caliber persistent artery describes a common vascular anomaly where a main arterial branch extends into superficial submucosal tissues without a reduction of diameter. This commonly occurs in elderly people on the lip and may be associated with ulceration.
Treatment
Treatment is cause-related, but also symptomatic if the underlying cause is unknown or not correctable. It is also important to note that most ulcers will heal completely without any intervention. Treatment can range from:
Smoothing or removing a local cause of trauma
Addressing dry mouth
Substituting a problem medication or switching to SLS-free toothpaste
Maintaining good oral hygiene and use of an antiseptic mouthwash or spray (e.g. chlorhexidine), which can prevent secondary infection and therefore hasten healing
A topical analgesic (e.g. benzydamine mouthwash) to reduce pain
Topical (gels, creams or inhalers) or systemic steroids may be used to reduce inflammation
An antifungal drug may be used to prevent oral candidiasis developing in those who use prolonged steroids
People with mouth ulcers may prefer to avoid hot or spicy foods, which can increase the pain
Self-inflicted ulceration can be difficult to manage, and psychiatric input may be required in some people
For recurrent ulcers, vitamin B12 has been shown to be effective
Epidemiology
Oral ulceration is a common reason for people to seek medical or dental advice. A breach of the oral mucosa probably affects most people at various times during life. For a discussion of the epidemiology of aphthous stomatitis, see the epidemiology of aphthous stomatitis.
| Biology and health sciences | Types | Health |
245982 | https://en.wikipedia.org/wiki/Buoyancy | Buoyancy | Buoyancy, or upthrust, is a net upward force exerted by a fluid that opposes the weight of a partially or fully immersed object. In a column of fluid, pressure increases with depth as a result of the weight of the overlying fluid. Thus, the pressure at the bottom of a column of fluid is greater than at the top of the column. Similarly, the pressure at the bottom of an object submerged in a fluid is greater than at the top of the object. The pressure difference results in a net upward force on the object. The magnitude of the force is proportional to the pressure difference, and (as explained by Archimedes' principle) is equivalent to the weight of the fluid that would otherwise occupy the submerged volume of the object, i.e. the displaced fluid.
For this reason, an object whose average density is greater than that of the fluid in which it is submerged tends to sink. If the object is less dense than the liquid, the force can keep the object afloat. This can occur only in a non-inertial reference frame, which either has a gravitational field or is accelerating due to a force other than gravity defining a "downward" direction.
Buoyancy also applies to fluid mixtures, and is the most common driving force of convection currents. In these cases, the mathematical modelling is altered to apply to continua, but the principles remain the same. Examples of buoyancy driven flows include the spontaneous separation of air and water or oil and water.
Buoyancy is a function of the force of gravity or other source of acceleration on objects of different densities, and for that reason is considered an apparent force, in the same way that centrifugal force is an apparent force as a function of inertia. Buoyancy can exist without gravity in the presence of an inertial reference frame, but without an apparent "downward" direction of gravity or other source of acceleration, buoyancy does not exist.
The center of buoyancy of an object is the center of gravity of the displaced volume of fluid.
Archimedes' principle
Archimedes' principle is named after Archimedes of Syracuse, who first discovered this law in 212 BC. For objects, floating and sunken, and in gases as well as liquids (i.e. a fluid), Archimedes' principle may be stated thus in terms of forces: any object, wholly or partially immersed in a fluid, is buoyed up by a force equal to the weight of the fluid displaced by the object
—with the clarifications that for a sunken object the volume of displaced fluid is the volume of the object, and for a floating object on a liquid, the weight of the displaced liquid is the weight of the object.
More tersely: buoyant force = weight of displaced fluid.
Archimedes' principle does not consider the surface tension (capillarity) acting on the body, but this additional force modifies only the amount of fluid displaced and the spatial distribution of the displacement, so the principle that buoyancy = weight of displaced fluid remains valid.
The weight of the displaced fluid is directly proportional to the volume of the displaced fluid (if the surrounding fluid is of uniform density). In simple terms, the principle states that the buoyancy force on an object is equal to the weight of the fluid displaced by the object, or the density of the fluid multiplied by the submerged volume times the gravitational acceleration, g. Thus, among completely submerged objects with equal masses, objects with greater volume have greater buoyancy. This is also known as upthrust.
Suppose a rock's weight is measured as 10 newtons when suspended by a string in a vacuum with gravity acting upon it. Suppose that when the rock is lowered into water, it displaces water of weight 3 newtons. The force it then exerts on the string from which it hangs would be 10 newtons minus the 3 newtons of buoyancy force: 10 − 3 = 7 newtons. Buoyancy reduces the apparent weight of objects that have sunk completely to the sea floor. It is generally easier to lift an object up through the water than it is to pull it out of the water.
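The same example can be worked through explicitly; the values of g and the water density below are assumed for illustration, as the original example does not state them:

```latex
W_{\text{apparent}} = W - B = 10\ \text{N} - 3\ \text{N} = 7\ \text{N},
\qquad
V_{\text{disp}} = \frac{B}{\rho_f\, g}
= \frac{3\ \text{N}}{1000\ \text{kg/m}^3 \times 9.8\ \text{m/s}^2}
\approx 3.1 \times 10^{-4}\ \text{m}^3 .
```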
Assuming Archimedes' principle to be reformulated as follows,
$\text{apparent immersed weight} = \text{weight of object} - \text{weight of displaced fluid},$
then inserted into the quotient of weights, which has been expanded by the mutual volume,
yields the formula below. The density of the immersed object relative to the density of the fluid can easily be calculated without measuring any volumes:
$\dfrac{\rho_{\text{object}}}{\rho_{\text{fluid}}} = \dfrac{\text{weight}}{\text{weight} - \text{apparent immersed weight}}$
(This formula is used for example in describing the measuring principle of a dasymeter and of hydrostatic weighing.)
Example: If you drop wood into water, buoyancy will keep it afloat.
Example: A helium balloon in a moving car. During a period of increasing speed, the air mass inside the car moves in the direction opposite to the car's acceleration (i.e., towards the rear). The balloon is also pulled this way. However, because the balloon is buoyant relative to the air, it ends up being pushed "out of the way", and will actually drift in the same direction as the car's acceleration (i.e., forward). If the car slows down, the same balloon will begin to drift backward. For the same reason, as the car goes round a curve, the balloon will drift towards the inside of the curve.
Forces and equilibrium
The equation to calculate the pressure inside a fluid in equilibrium is:
$\mathbf{f} + \operatorname{div}\,\sigma = 0$
where f is the force density exerted by some outer field on the fluid, and σ is the Cauchy stress tensor. In this case the stress tensor is proportional to the identity tensor:
$\sigma_{ij} = -p\,\delta_{ij}$
Here δij is the Kronecker delta. Using this the above equation becomes:
$\mathbf{f} = \nabla p$
Assuming the outer force field is conservative, that is it can be written as the negative gradient of some scalar valued function:
$\mathbf{f} = -\nabla\Phi$
Then:
$\nabla(p + \Phi) = 0 \quad\Longrightarrow\quad p + \Phi = \text{constant}$
Therefore, the shape of the open surface of a fluid equals the equipotential plane of the applied outer conservative force field. Let the z-axis point downward. In this case the field is gravity, so Φ = −ρfgz where g is the gravitational acceleration, ρf is the mass density of the fluid. Taking the pressure as zero at the surface, where z is zero, the constant will be zero, so the pressure inside the fluid, when it is subject to gravity, is
$p = \rho_f\, g\, z$
So pressure increases with depth below the surface of a liquid, as z denotes the distance from the surface of the liquid into it. Any object with a non-zero vertical depth will have different pressures on its top and bottom, with the pressure on the bottom being greater. This difference in pressure causes the upward buoyancy force.
The buoyancy force exerted on a body can now be calculated easily, since the internal pressure of the fluid is known. The force exerted on the body can be calculated by integrating the stress tensor over the surface of the body which is in contact with the fluid:
$\mathbf{B} = \oint \sigma \, d\mathbf{A}$
The surface integral can be transformed into a volume integral with the help of the Gauss theorem:
$\mathbf{B} = \int_V \operatorname{div}\sigma \, dV = -\int_V \mathbf{f}\, dV = -\rho_f\, \mathbf{g}\, V$
where V is the measure of the volume in contact with the fluid, that is the volume of the submerged part of the body, since the fluid does not exert force on the part of the body which is outside of it.
The magnitude of buoyancy force may be appreciated a bit more from the following argument. Consider any object of arbitrary shape and volume V surrounded by a liquid. The force the liquid exerts on an object within the liquid is equal to the weight of the liquid with a volume equal to that of the object. This force is applied in a direction opposite to gravitational force, that is of magnitude:
$B = \rho_f\, V_{\text{disp}}\, g$
where ρf is the density of the fluid, Vdisp is the volume of the displaced body of liquid, and g is the gravitational acceleration at the location in question.
If this volume of liquid is replaced by a solid body of exactly the same shape, the force the liquid exerts on it must be exactly the same as above. In other words, the "buoyancy force" on a submerged body is directed in the opposite direction to gravity and is equal in magnitude to
$B = \rho_f\, V\, g$
Though the above derivation of Archimedes principle is correct, a recent paper by the Brazilian physicist Fabio M. S. Lima brings a more general approach for the evaluation of the buoyant force exerted by any fluid (even non-homogeneous) on a body with arbitrary shape. Interestingly, this method leads to the prediction that the buoyant force exerted on a rectangular block touching the bottom of a container points downward! Indeed, this downward buoyant force has been confirmed experimentally.
The net force on the object must be zero if it is to be a situation of fluid statics such that Archimedes' principle is applicable, and is thus the sum of the buoyancy force and the object's weight:
$F_{\text{net}} = B - W = \rho_f\, V_{\text{disp}}\, g - m g = 0$
If the buoyancy of an (unrestrained and unpowered) object exceeds its weight, it tends to rise. An object whose weight exceeds its buoyancy tends to sink. Calculation of the upwards force on a submerged object during its accelerating period cannot be done by the Archimedes principle alone; it is necessary to consider dynamics of an object involving buoyancy. Once it fully sinks to the floor of the fluid or rises to the surface and settles, Archimedes principle can be applied alone. For a floating object, only the submerged volume displaces water. For a sunken object, the entire volume displaces water, and there will be an additional force of reaction from the solid floor.
In order for Archimedes' principle to be used alone, the object in question must be in equilibrium (the sum of the forces on the object must be zero), therefore
$\rho_f\, V_{\text{disp}}\, g = m g$
and therefore
$V_{\text{disp}} = \frac{m}{\rho_f}$
showing that the depth to which a floating object will sink, and the volume of fluid it will displace, is independent of the gravitational field regardless of geographic location.
(Note: If the fluid in question is seawater, it will not have the same density (ρ) at every location, since the density depends on temperature and salinity. For this reason, a ship may display a Plimsoll line.)
It can be the case that forces other than just buoyancy and gravity come into play. This is the case if the object is restrained or if the object sinks to the solid floor. An object which tends to float requires a tension restraint force T in order to remain fully submerged. An object which tends to sink will eventually have a normal force of constraint N exerted upon it by the solid floor. The constraint force can be tension in a spring scale measuring its weight in the fluid, and is how apparent weight is defined.
If the object would otherwise float, the tension to restrain it fully submerged is:
$T = \rho_f\, V\, g - m g$
When a sinking object settles on the solid floor, it experiences a normal force of:
$N = m g - \rho_f\, V\, g$
Another possible way of calculating the buoyancy of an object is to find its apparent weight in air (in newtons) and its apparent weight when immersed in the fluid (in newtons). The buoyant force then follows from:
Buoyancy force = weight of object in empty space − weight of object immersed in fluid
The result is measured in newtons.
Air's density is very small compared to most solids and liquids. For this reason, the weight of an object in air is approximately the same as its true weight in a vacuum. The buoyancy of air is neglected for most objects during a measurement in air because the error is usually insignificant (typically less than 0.1% except for objects of very low average density such as a balloon or light foam).
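A hedged illustration of why this correction is usually negligible, assuming an air density of about 1.2 kg/m³ and, say, an aluminium object of density about 2700 kg/m³ (both typical textbook values):

```latex
\frac{B_{\text{air}}}{W}
= \frac{\rho_{\text{air}}\, V\, g}{\rho_{\text{object}}\, V\, g}
= \frac{\rho_{\text{air}}}{\rho_{\text{object}}}
\approx \frac{1.2\ \text{kg/m}^3}{2700\ \text{kg/m}^3} \approx 0.04\%,
```

which is well below the 0.1% level mentioned above; for a low-density object such as a balloon or light foam the ratio becomes much larger and the correction can no longer be ignored.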
Simplified model
A simplified explanation for the integration of the pressure over the contact area may be stated as follows:
Consider a cube immersed in a fluid with the upper surface horizontal.
The sides are identical in area, and have the same depth distribution, therefore they also have the same pressure distribution, and consequently the same total force resulting from hydrostatic pressure, exerted perpendicular to the plane of the surface of each side.
There are two pairs of opposing sides, therefore the resultant horizontal forces balance in both orthogonal directions, and the resultant force is zero.
The upward force on the cube is the pressure on the bottom surface integrated over its area. The surface is at constant depth, so the pressure is constant. Therefore, the integral of the pressure over the area of the horizontal bottom surface of the cube is the hydrostatic pressure at that depth multiplied by the area of the bottom surface.
Similarly, the downward force on the cube is the pressure on the top surface integrated over its area. The surface is at constant depth, so the pressure is constant. Therefore, the integral of the pressure over the area of the horizontal top surface of the cube is the hydrostatic pressure at that depth multiplied by the area of the top surface.
As this is a cube, the top and bottom surfaces are identical in shape and area, and the pressure difference between the top and bottom of the cube is directly proportional to the depth difference, and the resultant force difference is exactly equal to the weight of the fluid that would occupy the volume of the cube in its absence.
This means that the resultant upward force on the cube is equal to the weight of the fluid that would fit into the volume of the cube, and the downward force on the cube is its weight, in the absence of external forces.
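The same argument can be written in symbols. For a cube of side s whose top face lies at depth z_t and bottom face at depth z_b = z_t + s, using the hydrostatic pressure derived above:

```latex
F_{\text{up}} - F_{\text{down}} = (p_b - p_t)\, s^2
= \rho_f\, g\,(z_b - z_t)\, s^2 = \rho_f\, g\, s^3 = \rho_f\, g\, V ,
```

which is exactly the weight of the fluid that would occupy the cube's volume.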
This analogy is valid for variations in the size of the cube.
If two cubes are placed alongside each other with a face of each in contact, the pressures and resultant forces on the sides or parts thereof in contact are balanced and may be disregarded, as the contact surfaces are equal in shape, size and pressure distribution, therefore the buoyancy of two cubes in contact is the sum of the buoyancies of each cube. This analogy can be extended to an arbitrary number of cubes.
An object of any shape can be approximated as a group of cubes in contact with each other, and as the size of the cube is decreased, the precision of the approximation increases. The limiting case for infinitely small cubes is the exact equivalence.
Angled surfaces do not nullify the analogy as the resultant force can be split into orthogonal components and each dealt with in the same way.
Static stability
A floating object is stable if it tends to restore itself to an equilibrium position after a small displacement. For example, floating objects will generally have vertical stability, as if the object is pushed down slightly, this will create a greater buoyancy force, which, unbalanced by the weight force, will push the object back up.
Rotational stability is of great importance to floating vessels. Given a small angular displacement, the vessel may return to its original position (stable), move away from its original position (unstable), or remain where it is (neutral).
Rotational stability depends on the relative lines of action of forces on an object. The upward buoyancy force on an object acts through the center of buoyancy, being the centroid of the displaced volume of fluid. The weight force on the object acts through its center of gravity. A buoyant object will be stable if the center of gravity is beneath the center of buoyancy because any angular displacement will then produce a 'righting moment'.
The stability of a buoyant object at the surface is more complex, and it may remain stable even if the center of gravity is above the center of buoyancy, provided that when disturbed from the equilibrium position, the center of buoyancy moves further to the same side that the center of gravity moves, thus providing a positive righting moment. If this occurs, the floating object is said to have a positive metacentric height. This situation is typically valid for a range of heel angles, beyond which the center of buoyancy does not move enough to provide a positive righting moment, and the object becomes unstable. It is possible to shift from positive to negative or vice versa more than once during a heeling disturbance, and many shapes are stable in more than one position.
Fluids and objects
As a submarine expels water from its buoyancy tanks, it rises because its volume is constant (the volume of water it displaces if it is fully submerged) while its mass is decreased.
Compressible objects
As a floating object rises or falls, the forces external to it change and, as all objects are compressible to some extent or another, so does the object's volume. Buoyancy depends on volume and so an object's buoyancy reduces if it is compressed and increases if it expands.
If an object at equilibrium has a compressibility less than that of the surrounding fluid, the object's equilibrium is stable and it remains at rest. If, however, its compressibility is greater, its equilibrium is then unstable, and it rises and expands on the slightest upward perturbation, or falls and compresses on the slightest downward perturbation.
Submarines
Submarines rise and dive by filling large ballast tanks with seawater. To dive, the tanks are opened to allow air to exhaust out the top of the tanks, while the water flows in from the bottom. Once the weight has been balanced so the overall density of the submarine is equal to the water around it, it has neutral buoyancy and will remain at that depth. Most military submarines operate with a slightly negative buoyancy and maintain depth by using the "lift" of the stabilizers with forward motion.
Balloons
The height to which a balloon rises tends to be stable. As a balloon rises it tends to increase in volume with reducing atmospheric pressure, but the balloon itself does not expand as much as the air on which it rides. The average density of the balloon decreases less than that of the surrounding air. The weight of the displaced air is reduced. A rising balloon stops rising when it and the displaced air are equal in weight. Similarly, a sinking balloon tends to stop sinking.
Divers
Underwater divers are a common example of the problem of unstable buoyancy due to compressibility. The diver typically wears an exposure suit which relies on gas-filled spaces for insulation, and may also wear a buoyancy compensator, which is a variable volume buoyancy bag which is inflated to increase buoyancy and deflated to decrease buoyancy. The desired condition is usually neutral buoyancy when the diver is swimming in mid-water, and this condition is unstable, so the diver is constantly making fine adjustments by control of lung volume, and has to adjust the contents of the buoyancy compensator if the depth varies.
Density
If the weight of an object is less than the weight of the displaced fluid when fully submerged, then the object has an average density that is less than the fluid and when fully submerged will experience a buoyancy force greater than its own weight. If the fluid has a surface, such as water in a lake or the sea, the object will float and settle at a level where it displaces the same weight of fluid as the weight of the object. If the object is immersed in the fluid, such as a submerged submarine or air in a balloon, it will tend to rise.
If the object has exactly the same density as the fluid, then its buoyancy equals its weight. It will remain submerged in the fluid, but it will neither sink nor float, although a disturbance in either direction will cause it to drift away from its position.
An object with a higher average density than the fluid will never experience more buoyancy than weight and it will sink.
A ship will float even though it may be made of steel (which is much denser than water), because it encloses a volume of air (which is much less dense than water), and the resulting shape has an average density less than that of the water.
| Physical sciences | Fluid mechanics | null |
245990 | https://en.wikipedia.org/wiki/Commutative%20algebra | Commutative algebra | Commutative algebra, first known as ideal theory, is the branch of algebra that studies commutative rings, their ideals, and modules over such rings. Both algebraic geometry and algebraic number theory build on commutative algebra. Prominent examples of commutative rings include polynomial rings; rings of algebraic integers, including the ordinary integers; and p-adic integers.
Commutative algebra is the main technical tool of algebraic geometry, and many results and concepts of commutative algebra are strongly related with geometrical concepts.
The study of rings that are not necessarily commutative is known as noncommutative algebra; it includes ring theory, representation theory, and the theory of Banach algebras.
Overview
Commutative algebra is essentially the study of the rings occurring in algebraic number theory and algebraic geometry.
Several concepts of commutative algebras have been developed in relation with algebraic number theory, such as Dedekind rings (the main class of commutative rings occurring in algebraic number theory), integral extensions, and valuation rings.
Polynomial rings in several indeterminates over a field are examples of commutative rings. Since algebraic geometry is fundamentally the study of the common zeros of these rings, many results and concepts of algebraic geometry have counterparts in commutative algebra, and their names often recall their geometric origin; for example "Krull dimension", "localization of a ring", "local ring", "regular ring".
An affine algebraic variety corresponds to a prime ideal in a polynomial ring, and the points of such an affine variety correspond to the maximal ideals that contain this prime ideal. The Zariski topology, originally defined on an algebraic variety, has been extended to the sets of the prime ideals of any commutative ring; for this topology, the closed sets are the sets of prime ideals that contain a given ideal.
The spectrum of a ring is a ringed space formed by the prime ideals equipped with the Zariski topology, and the localizations of the ring at the open sets of a basis of this topology. This is the starting point of scheme theory, a generalization of algebraic geometry introduced by Grothendieck, which is strongly based on commutative algebra, and has induced, in turn, many developments of commutative algebra.
History
The subject, first known as ideal theory, began with Richard Dedekind's work on ideals, itself based on the earlier work of Ernst Kummer and Leopold Kronecker. Later, David Hilbert introduced the term ring to generalize the earlier term number ring. Hilbert introduced a more abstract approach to replace the more concrete and computationally oriented methods grounded in such things as complex analysis and classical invariant theory. In turn, Hilbert strongly influenced Emmy Noether, who recast many earlier results in terms of an ascending chain condition, now known as the Noetherian condition. Another important milestone was the work of Hilbert's student Emanuel Lasker, who introduced primary ideals and proved the first version of the Lasker–Noether theorem.
The main figure responsible for the birth of commutative algebra as a mature subject was Wolfgang Krull, who introduced the fundamental notions of localization and completion of a ring, as well as that of regular local rings. He established the concept of the Krull dimension of a ring, first for Noetherian rings before moving on to expand his theory to cover general valuation rings and Krull rings. To this day, Krull's principal ideal theorem is widely considered the single most important foundational theorem in commutative algebra. These results paved the way for the introduction of commutative algebra into algebraic geometry, an idea which would revolutionize the latter subject.
Much of the modern development of commutative algebra emphasizes modules. Both ideals of a ring R and R-algebras are special cases of R-modules, so module theory encompasses both ideal theory and the theory of ring extensions. Though it was already incipient in Kronecker's work, the modern approach to commutative algebra using module theory is usually credited to Krull and Noether.
Main tools and results
Noetherian rings
A Noetherian ring, named after Emmy Noether, is a ring in which every ideal is finitely generated; that is, all elements of any ideal can be written as linear combinations of a finite set of elements, with coefficients in the ring.
Many commonly considered commutative rings are Noetherian, in particular every field, the ring of integers, and every polynomial ring in one or several indeterminates over them. The fact that polynomial rings over a field are Noetherian is called Hilbert's basis theorem.
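A standard contrast (a textbook example, not specific to this article) makes the definition concrete:

```latex
\mathbb{Z}\ \text{is Noetherian: every ideal is principal, } I=(n). \qquad
k[x_1,x_2,x_3,\dots]\ \text{is not: the chain } (x_1)\subsetneq(x_1,x_2)\subsetneq(x_1,x_2,x_3)\subsetneq\cdots\ \text{never stabilizes.}
```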
Moreover, many ring constructions preserve the Noetherian property. In particular, if a commutative ring is Noetherian, the same is true for every polynomial ring over it, and for every quotient ring, localization, or completion of the ring.
The importance of the Noetherian property lies in its ubiquity, and also in the fact that many important theorems of commutative algebra require that the involved rings are Noetherian. This is the case, in particular, for the Lasker–Noether theorem, the Krull intersection theorem, and Nakayama's lemma.
Furthermore, if a ring is Noetherian, then it satisfies the descending chain condition on prime ideals, which implies that every Noetherian local ring has a finite Krull dimension.
Primary decomposition
An ideal Q of a ring is said to be primary if Q is proper and whenever xy ∈ Q, either x ∈ Q or y^n ∈ Q for some positive integer n. In Z, the primary ideals are precisely the ideals of the form (p^e) where p is prime and e is a positive integer. Thus, a primary decomposition of (n) corresponds to representing (n) as the intersection of finitely many primary ideals.
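For instance, in Z the decomposition mirrors the prime factorization 12 = 2² · 3:

```latex
(12) = (4) \cap (3) = (2^2) \cap (3) \subset \mathbb{Z},
```

with radicals Rad((4)) = (2) and Rad((3)) = (3).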
The Lasker–Noether theorem may be seen as a certain generalization of the fundamental theorem of arithmetic: it states that every ideal of a Noetherian ring can be written as a finite intersection of primary ideals.
For any primary decomposition of I, the set of all radicals, that is, the set {Rad(Q1), ..., Rad(Qt)} remains the same by the Lasker–Noether theorem. In fact, it turns out that (for a Noetherian ring) the set is precisely the assassinator of the module R/I; that is, the set of all annihilators of R/I (viewed as a module over R) that are prime.
Localization
The localization is a formal way to introduce the "denominators" to a given ring or a module. That is, it introduces a new ring/module out of an existing one so that it consists of fractions
$\frac{m}{s},$
where the denominators s range in a given subset S of R. The archetypal example is the construction of the ring Q of rational numbers from the ring Z of integers.
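As a further standard example (not drawn from this article), localizing Z at the complement of the prime ideal (2), i.e. allowing only odd denominators, gives the local ring

```latex
\mathbb{Z}_{(2)} = \left\{ \tfrac{a}{b} \in \mathbb{Q} : b\ \text{odd} \right\},
```

whose unique maximal ideal consists of the fractions whose numerator (in lowest terms) is even.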
Completion
A completion is any of several related functors on rings and modules that result in complete topological rings and modules. Completion is similar to localization, and together they are among the most basic tools in analysing commutative rings. Complete commutative rings have simpler structure than the general ones and Hensel's lemma applies to them.
Zariski topology on prime ideals
The Zariski topology defines a topology on the spectrum of a ring (the set of prime ideals). In this formulation, the Zariski-closed sets are taken to be the sets
$V(I) = \{\, P \in \operatorname{Spec}(A) : P \supseteq I \,\}$
where A is a fixed commutative ring and I is an ideal. This is defined in analogy with the classical Zariski topology, where closed sets in affine space are those defined by polynomial equations. To see the connection with the classical picture, note that for any set S of polynomials (over an algebraically closed field), it follows from Hilbert's Nullstellensatz that the points of V(S) (in the old sense) are exactly the tuples (a1, ..., an) such that the ideal (x1 − a1, ..., xn − an) contains S; moreover, these are maximal ideals and by the "weak" Nullstellensatz, an ideal of any affine coordinate ring is maximal if and only if it is of this form. Thus, V(S) is "the same as" the maximal ideals containing S. Grothendieck's innovation in defining Spec was to replace maximal ideals with all prime ideals; in this formulation it is natural to simply generalize this observation to the definition of a closed set in the spectrum of a ring.
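A small example of these closed sets (a standard computation, not taken from the article): in Spec(Z) the prime ideals are (0) and (p) for primes p, so

```latex
V\big((12)\big) = \{\, P \in \operatorname{Spec}(\mathbb{Z}) : P \supseteq (12) \,\} = \{(2),\,(3)\}.
```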
Connections with algebraic geometry
Commutative algebra (in the form of polynomial rings and their quotients, used in the definition of algebraic varieties) has always been a part of algebraic geometry. However, in the late 1950s, algebraic varieties were subsumed into Alexander Grothendieck's concept of a scheme. Their local objects are affine schemes or prime spectra, which are locally ringed spaces, which form a category that is antiequivalent (dual) to the category of commutative unital rings, extending the duality between the category of affine algebraic varieties over a field k, and the category of finitely generated reduced k-algebras. The gluing is along the Zariski topology; one can glue within the category of locally ringed spaces, but also, using the Yoneda embedding, within the more abstract category of presheaves of sets over the category of affine schemes. The Zariski topology in the set-theoretic sense is then replaced by a Zariski topology in the sense of Grothendieck topology. Grothendieck introduced Grothendieck topologies having in mind more exotic but geometrically finer and more sensitive examples than the crude Zariski topology, namely the étale topology, and the two flat Grothendieck topologies: fppf and fpqc. Nowadays some other examples have become prominent, including the Nisnevich topology. Sheaves can be furthermore generalized to stacks in the sense of Grothendieck, usually with some additional representability conditions, leading to Artin stacks and, even finer, Deligne–Mumford stacks, both often called algebraic stacks.
| Mathematics | Algebra | null |
246018 | https://en.wikipedia.org/wiki/Alluvial%20fan | Alluvial fan | An alluvial fan is an accumulation of sediments that fans outwards from a concentrated source of sediments, such as a narrow canyon emerging from an escarpment. They are characteristic of mountainous terrain in arid to semiarid climates, but are also found in more humid environments subject to intense rainfall and in areas of modern glaciation. They range in area from less than to almost .
Alluvial fans typically form where flow emerges from a confined channel and is free to spread out and infiltrate the surface. This reduces the carrying capacity of the flow and results in deposition of sediments. The flow can take the form of infrequent debris flows or one or more ephemeral or perennial streams.
Alluvial fans are common in the geologic record, such as in the Triassic basins of eastern North America and the New Red Sandstone of south Devon. Such fan deposits likely contain the largest accumulations of gravel in the geologic record. Alluvial fans have also been found on Mars and Titan, showing that fluvial processes have occurred on other worlds.
Some of the largest alluvial fans are found along the Himalaya mountain front on the Indo-Gangetic plain. A shift of the feeder channel (a nodal avulsion) can lead to catastrophic flooding, as occurred on the Kosi River fan in 2008.
Description
An alluvial fan is an accumulation of sediments that fans out from a concentrated source of sediments, such as a narrow canyon emerging from an escarpment. This accumulation is shaped like a section of a shallow cone, with its apex at the source of sediments.
Alluvial fans vary greatly in size, from only a few meters across at the base to as much as 150 kilometers across, with a slope of 1.5 to 25 degrees. Some giant alluvial fans have areas of almost . The slope measured from the apex is generally concave, with the steepest slope near the apex (the proximal fan or fanhead) and becoming less steep further out (the medial fan or midfan) and shallowing at the edges of the fan (the distal fan or outer fan). Sieve deposits, which are lobes of coarse gravel, may be present on the proximal fan. The sediments in an alluvial fan are usually coarse and poorly sorted, with the coarsest sediments found on the proximal fan.
When there is enough space in the alluvial plain for all of the sediment deposits to fan out without contacting other valley walls or rivers, an unconfined alluvial fan develops. Unconfined alluvial fans allow sediments to naturally fan out, and the shape of the fan is not influenced by other topographic features. When the alluvial plain is more restricted, so that the fan comes into contact with topographic barriers, a confined fan is formed.
Wave or channel erosion of the edge of the fan (lateral erosion) sometimes produces a "toe-trimmed" fan, in which the edge of the fan is marked by a small escarpment. Toe-trimmed fans may record climate changes or tectonic processes, and the process of lateral erosion may enhance the aquifer or petroleum reservoir potential of the fan. Toe-trimmed fans on the planet Mars provide evidence of past river systems.
When numerous rivers and streams exit a mountain front onto a plain, the fans can combine to form a continuous apron. This is referred to as a bajada or piedmont alluvial plain.
Formation
Alluvial fans usually form where a confined feeder channel exits a mountain front or a glacier margin. As the flow exits the feeder channel onto the fan surface, it is able to spread out into wide, shallow channels or to infiltrate the surface. This reduces the carrying power of the flow and results in deposition of sediments.
Flow in the proximal fan, where the slope is steepest, is usually confined to a single channel (a fanhead trench), which may be up to deep. This channel is subject to blockage by accumulated sediments or debris flows, which causes flow to periodically break out of its old channel (nodal avulsion) and shift to a part of the fan with a steeper gradient, where deposition resumes. As a result, normally only part of the fan is active at any particular time, and the bypassed areas may undergo soil formation or erosion.
Alluvial fans can be dominated by debris flows (debris flow fans) or stream flow (fluvial fans). Which kind of fan is formed is controlled by climate, tectonics, and the type of bedrock in the area feeding the flow onto the fan.
Debris flow
Debris flow fans receive most of their sediments in the form of debris flows. Debris flows are slurry-like mixtures of water and particles of all sizes, from clay to boulders, that resemble wet concrete. They are characterized by having a yield strength, meaning that they are highly viscous at low flow velocities but become less viscous as the flow velocity increases. This means that a debris flow can come to a halt while still on moderately tilted ground. The flow then becomes consolidated under its own weight.
Debris flow fans occur in all climates but are more common where the source rock is mudstone or matrix-rich saprolite rather than coarser, more permeable regolith. The abundance of fine-grained sediments encourages the initial hillslope failure and subsequent cohesive flow of debris. Saturation of clay-rich colluvium by locally intense thunderstorms initiates slope failure. The resulting debris flow travels down the feeder channel and onto the surface of the fan.
Debris flow fans have a network of mostly inactive distributary channels in the upper fan that gives way to mid- to lower-level lobes. The channels tend to be filled by subsequent cohesive debris flows. Usually only one lobe is active at a time, and inactive lobes may develop desert varnish or develop a soil profile from eolian dust deposition, on time scales of 1,000 to 10,000 years. Because of their high viscosity, debris flows tend to be confined to the proximal and medial fan even in a debris-flow-dominated alluvial fan, and streamfloods dominate the distal fan. However, some debris-flow-dominated fans in arid climates consist almost entirely of debris flows and lag gravels from eolian winnowing of debris flows, with no evidence of sheetflood or sieve deposits. Debris-flow-dominated fans tend to be steep and poorly vegetated.
Fluvial
Fluvial fans (streamflow-dominated fans) receive most of their sediments in the form of stream flow rather than debris flows. They are less sharply distinguished from ordinary fluvial deposits than are debris flow fans.
Fluvial fans occur where there is perennial, seasonal, or ephemeral stream flow that feeds a system of distributary channels on the fan. In arid or semiarid climates, deposition is dominated by infrequent but intense rainfall that produces flash floods in the feeder channel. This results in sheetfloods on the alluvial fan, where sediment-laden water leaves its channel confines and spreads across the fan surface. These may include hyperconcentrated flows containing 20% to 45% sediments, which are intermediate between sheetfloods having 20% or less of sediments and debris flows with more than 45% sediments. As the flood recedes, it often leaves a lag of gravel deposits that have the appearance of a network of braided streams.
Where the flow is more continuous, as with spring snow melt, incised-channel flow in channels high takes place in a network of braided streams. Such alluvial fans tend to have a shallower slope but can become enormous. The Kosi and other fans along the Himalaya mountain front in the Indo-Gangetic plain are examples of gigantic stream-flow-dominated alluvial fans, sometimes described as megafans. Here, continued movement on the Main Boundary Thrust over the last ten million years has focused the drainage of of mountain frontage into just three enormous fans.
Geologic record
Alluvial fans are common in the geologic record, but may have been particularly important before the evolution of land plants in the mid-Paleozoic. They are characteristic of fault-bounded basins and can be or thicker due to tectonic subsidence of the basin and uplift of the mountain front. Most are red from hematite produced by diagenetic alteration of iron-rich minerals in a shallow, oxidizing environment. Examples of paleofans include the Triassic basins of eastern North America and the New Red Sandstone of south Devon, the Devonian Hornelen Basin of Norway, and the Devonian-Carboniferous in the Gaspé Peninsula of Canada. Such fan deposits likely contain the largest accumulations of gravel in the geologic record.
Depositional facies
Several kinds of sediment deposits (facies) are found in alluvial fans.
Alluvial fans are characterized by coarse sedimentation, though the sediments making up the fan become less coarse further from the apex. Gravels show well-developed imbrication with the pebbles dipping towards the apex. Fan deposits typically show well-developed reverse grading caused by outbuilding of the fan: Finer sediments are deposited at the edge of the fan, but as the fan continues to grow, increasingly coarse sediments are deposited on top of the earlier, less coarse sediments. However, a few fans show normal grading indicating inactivity or even fan retreat, so that increasingly fine sediments are deposited on earlier coarser sediments. Normal or reverse grading sequences can be hundreds to thousands of meters in thickness. Depositional facies that have been reported for alluvial fans include debris flows, sheet floods and upper regime stream floods, sieve deposits, and braided stream flows, each leaving their own characteristic sediment deposits that can be identified by geologists.
Debris flow deposits are common in the proximal and medial fan. These deposits lack sedimentary structure, other than occasional reverse-graded bedding towards the base, and they are poorly sorted. The proximal fan may also include gravel lobes that have been interpreted as sieve deposits, where runoff rapidly infiltrates and leaves behind only the coarse material. However, the gravel lobes have also been interpreted as debris flow deposits. Conglomerate originating as debris flows on alluvial fans is described as fanglomerate.
Stream flow deposits tend to be sheetlike, better sorted than debris flow deposits, and sometimes show well-developed sedimentary structures such as cross-bedding. These are more prevalent in the medial and distal fan. In the distal fan, where channels are very shallow and braided, stream flow deposits consist of sandy interbeds with planar and trough cross-stratification. The medial fan of a streamflow-dominated alluvial fan shows nearly the same depositional facies as ordinary fluvial environments, so that identification of ancient alluvial fans must be based on radial paleomorphology in a piedmont setting.
Occurrences
Alluvial fans are characteristic of mountainous terrain in arid to semiarid climates, but are also found in more humid environments subject to intense rainfall and in areas of modern glaciation. They have also been found on other bodies of the Solar System.
Terrestrial
Alluvial fans are built in response to erosion induced by tectonic uplift. The upwards coarsening of the beds making up the fan reflects cycles of erosion in the highlands that feed sediments to the fan. However, climate and changes in base level may be as important as tectonic uplift. For example, alluvial fans in the Himalayas show older fans entrenched and overlain by younger fans. The younger fans, in turn, are cut by deep incised valleys showing two terrace levels. Dating via optically stimulated luminescence suggests a hiatus of 70,000 to 80,000 years between the old and new fans, with evidence of tectonic tilting at 45,000 years ago and an end to fan deposition 20,000 years ago. Both the hiatus and the more recent end to fan deposition are thought to be connected to periods of enhanced southwest monsoon precipitation. Climate has also influenced fan formation in Death Valley, California, US, where dating of beds suggests that peaks of fan deposition during the last 25,000 years occurred during times of rapid climate change, both from wet to dry and from dry to wet.
Alluvial fans are often found in desert areas, which are subjected to periodic flash floods from nearby thunderstorms in local hills. The typical watercourse in an arid climate has a large, funnel-shaped basin at the top, leading to a narrow defile, which opens out into an alluvial fan at the bottom. Multiple braided streams are usually present and active during water flows. Phreatophytes (plants with long tap roots capable of reaching a deep water table) are sometimes found in sinuous lines radiating from arid climate fan toes. These fan-toe phreatophyte strips trace buried channels of coarse sediments from the fan that have interfingered with impermeable playa sediments.
Alluvial fans also develop in wetter climates when high-relief terrain is located adjacent to low-relief terrain. In Nepal, the Koshi River has built a megafan covering some below its exit from Himalayan foothills onto the nearly level plains where the river traverses into India before joining the Ganges.
Along the upper Koshi tributaries, tectonic forces elevate the Himalayas several millimeters annually. Uplift is approximately in equilibrium with erosion, so the river annually carries some of sediment as it exits the mountains. Deposition of this magnitude over millions of years is more than sufficient to account for the megafan.
In North America, streams flowing into California's Central Valley have deposited smaller but still extensive alluvial fans, such as that of the Kings River flowing out of the Sierra Nevada. Like the Himalayan megafans, these are streamflow-dominated fans.
Extraterrestrial
Mars
Alluvial fans are also found on Mars. Unlike alluvial fans on Earth, those on Mars are rarely associated with tectonic processes, but are much more common on crater rims. The crater rim alluvial fans appear to have been deposited by sheetflow rather than debris flows.
Three alluvial fans have been found in Saheki Crater. These fans confirmed past fluvial flow on the planet and further supported the theory that liquid water was once present in some form on the Martian surface. In addition, observations of fans in Gale crater made by satellites from orbit have now been confirmed by the discovery of fluvial sediments by the Curiosity rover. Alluvial fans in Holden crater have toe-trimmed profiles attributed to fluvial erosion.
The few alluvial fans associated with tectonic processes include those at Coprates Chasma and Juventae Chasma, which are part of the Valles Marineris canyon system. These provide evidence of the existence and nature of faulting in this region of Mars.
Titan
Alluvial fans have been observed by the Cassini-Huygens mission on Titan using the Cassini orbiter's synthetic aperture radar instrument. These fans are more common in the drier mid-latitudes at the end of methane/ethane rivers where it is thought that frequent wetting and drying occur due to precipitation, much like arid fans on Earth. Radar imaging suggests that fan material is most likely composed of round grains of water ice or solid organic compounds about two centimeters in diameter.
Impact on humans
Alluvial fans are the most important groundwater reservoirs in many regions. Many urban, industrial, and agricultural areas are located on alluvial fans, including the conurbations of Los Angeles, California; Salt Lake City, Utah; and Denver, Colorado, in the western United States, and in many other parts of the world. However, flooding on alluvial fans poses unique problems for disaster prevention and preparation.
Aquifers
The beds of coarse sediments associated with alluvial fans form aquifers that are the most important groundwater reservoirs in many regions. These include both arid regions, such as Egypt or Iraq, and humid regions, such as central Europe or Taiwan.
Flood hazards
Alluvial fans are subject to infrequent but often very damaging flooding, whose unusual characteristics distinguish alluvial fan floods from ordinary riverbank flooding. These include great uncertainty in the likely flood path, the likelihood of abrupt deposition and erosion of sediments carried by the flood from upstream sources, and a combination of the availability of sediments and of the slope and topography of the fan that creates extraordinary hazards. These hazards cannot reliably be mitigated by elevation on fill (raising existing buildings up to a meter (three feet) and building new foundations beneath them). At a minimum, major structural flood control measures are required to mitigate risk, and in some cases, the only alternative is to restrict development on the fan surface. Such measures can be politically controversial, particularly since the hazard is not obvious to property owners. In the United States, areas at risk of alluvial fan flooding are marked as Zone AO on flood insurance rate maps.
Alluvial fan flooding commonly takes the form of short (several hours) but energetic flash floods that occur with little or no warning. They typically result from heavy and prolonged rainfall, and are characterized by high velocities and capacity for sediment transport. Flows cover the range from floods through hyperconcentrated flows to debris flows, depending on the volume of sediments in the flow. Debris flows resemble freshly poured concrete, consisting mostly of coarse debris. Hyperconcentrated flows are intermediate between floods and debris flows, with a water content between 40 and 80 weight percent. Floods may transition to hyperconcentrated flows as they entrain sediments, while debris flows may become hyperconcentrated flows if they are diluted by water. Because flooding on alluvial fans carries large quantities of sediment, channels can rapidly become blocked, creating great uncertainty about flow paths that magnifies the dangers.
Alluvial fan flooding in the Apennine Mountains of Italy has resulted in repeated loss of life. A flood on 1 October 1581 at Piedimonte Matese resulted in the loss of 400 lives. Loss of life from alluvial fan floods continued into the 19th century, and the hazard of alluvial fan flooding remains a concern in Italy.
On January 1, 1934, record rainfall in a recently burned area of the San Gabriel Mountains, California, caused severe flooding of the alluvial fan on which the towns of Montrose and Glendale were built. The floods caused significant loss of life and property.
The Koshi River in India has built up a megafan where it exits the Himalayas onto the Ganges plain. The river has a history of frequently and capriciously changing its course, so that it has been called the Sorrow of Bihar for contributing disproportionately to India's death tolls in flooding. These exceed those of all countries except Bangladesh. Over the last few hundred years, the river had generally shifted westward across its fan, and by 2008, the main river channel was located on the extreme western part of the megafan. In August 2008, high monsoon flows breached the embankment of the Koshi River. This diverted most of the river into an unprotected ancient channel and flooded the central part of the megafan. This was an area with a high population density that had been stable for over 200 years. Over a million people were rendered homeless, about a thousand lost their lives and thousands of hectares of crops were destroyed.
Petroleum reservoirs
Buried alluvial fans are sometimes found at the margins of petroleum basins. Debris flow fans make poor petroleum reservoirs, but fluvial fans are potentially significant reservoirs. Though fluvial fans are typically of poorer quality than reservoirs closer to the basin center, due to their complex structure, the episodic flooding channels of the fans are potentially lucrative targets for petroleum exploration. Alluvial fans that experience toe-trimming (lateral erosion) by an axial river (a river running the length of an escarpment-bounded basin) may have increased potential as reservoirs. The river deposits relatively porous, permeable axial river sediments that alternate with fan sediment beds.
| Physical sciences | Fluvial landforms | null |
246019 | https://en.wikipedia.org/wiki/Elastic-rebound%20theory | Elastic-rebound theory |
In geology, the elastic-rebound theory is an explanation for how energy is released during an earthquake.
As the Earth's crust deforms, the rocks which span the opposing sides of a fault are subjected to shear stress. Slowly they deform, until their internal rigidity is exceeded. Then they separate with a rupture along the fault; the sudden movement releases accumulated energy, and the rocks snap back almost to their original shape. The previously solid mass is divided between the two slowly moving plates, and the energy is released through the surroundings as a seismic wave.
Theory
After the great 1906 San Francisco earthquake, geophysicist Harry Fielding Reid examined the displacement of the ground surface along the San Andreas Fault in the 50 years before the earthquake. He found evidence for 3.2 m of bending during that period. He concluded that the quake must have been the result of the elastic rebound of the strain energy stored in the rocks on either side of the fault. Later measurements using the global positioning system largely support Reid's theory as the basis of seismic movement.
Explanation
The two sides of an active but locked fault are slowly moving in different directions, where elastic strain energy builds up in any rock mass that adjoins them. Thus, if a road is built straight across the fault as in Time 1 of the figure panel, it is perpendicular to the fault trace at point E, where the fault is locked. The overall fault movement (large arrows) causes the rocks across the locked fault to accrue elastic deformation, as in Time 2. This deformation may build at the rate of a few centimeters per year. When the accumulated strain is great enough to overcome the strength of the rocks, the result is a sudden break, or a springing back to the original shape as much as possible, a jolt which is felt on the surface as an earthquake. This sudden movement results in the shift of the roadway's surface, as shown in Time 3. The stored energy is released partly as heat, partly in alteration of the rock, and partly as a seismic wave.
| Physical sciences | Seismology | Earth science |
246074 | https://en.wikipedia.org/wiki/Forecasting | Forecasting | Forecasting is the process of making predictions based on past and present data. Later these can be compared with what actually happens. For example, a company might estimate their revenue in the next year, then compare it against the actual results creating a variance actual analysis. Prediction is a similar but more general term. Forecasting might refer to specific formal statistical methods employing time series, cross-sectional or longitudinal data, or alternatively to less formal judgmental methods or the process of prediction and assessment of its accuracy. Usage can vary between areas of application: for example, in hydrology the terms "forecast" and "forecasting" are sometimes reserved for estimates of values at certain specific future times, while the term "prediction" is used for more general estimates, such as the number of times floods will occur over a long period.
Risk and uncertainty are central to forecasting and prediction; it is generally considered a good practice to indicate the degree of uncertainty attaching to forecasts. In any case, the data must be up to date in order for the forecast to be as accurate as possible. In some cases the data used to predict the variable of interest is itself forecast. A forecast is not to be confused with a budget; budgets are more specific, fixed-term financial plans used for resource allocation and control, while forecasts provide estimates of future financial performance, allowing for flexibility and adaptability to changing circumstances. Both tools are valuable in financial planning and decision-making, but they serve different functions.
Applications
Forecasting has applications in a wide range of fields where estimates of future conditions are useful. Depending on the field, accuracy varies significantly. If the factors that relate to what is being forecast are known and well understood and there is a significant amount of data that can be used, it is likely the final value will be close to the forecast. If this is not the case or if the actual outcome is affected by the forecasts, the reliability of the forecasts can be significantly lower.
Climate change and increasing energy prices have led to the use of Egain Forecasting for buildings. This attempts to reduce the energy needed to heat the building, thus reducing the emission of greenhouse gases. Forecasting is used in customer demand planning in everyday business for manufacturing and distribution companies.
While the veracity of predictions for actual stock returns are disputed through reference to the efficient-market hypothesis, forecasting of broad economic trends is common. Such analysis is provided by both non-profit groups as well as by for-profit private institutions.
Forecasting foreign exchange movements is typically achieved through a combination of historical and current data (summarized in charts) and fundamental analysis. An essential difference between chart analysis and fundamental economic analysis is that chartists study only the price action of a market, whereas fundamentalists attempt to look to the reasons behind the action. Financial institutions assimilate the evidence provided by their fundamental and chartist researchers into one note to provide a final projection on the currency in question.
Forecasting has also been used to predict the development of conflict situations. Forecasters perform research that uses empirical results to gauge the effectiveness of certain forecasting models. However research has shown that there is little difference between the accuracy of the forecasts of experts knowledgeable in the conflict situation and those by individuals who knew much less. Similarly, experts in some studies argue that role thinking — standing in other people's shoes to forecast their decisions — does not contribute to the accuracy of the forecast.
An important, albeit often ignored, aspect of forecasting is the relationship it holds with planning. Forecasting can be described as predicting what the future will look like, whereas planning predicts what the future should look like. There is no single right forecasting method to use. Selection of a method should be based on the objectives and the conditions at hand (the available data, the forecast horizon, and so on); a selection tree can be used to guide the choice of method.
Forecasting has application in many situations:
Supply chain management and customer demand planning — Forecasting can be used in supply chain management to ensure that the right product is at the right place at the right time. Accurate forecasting will help retailers reduce excess inventory and thus increase profit margin. Accurate forecasting will also help them meet consumer demand. The discipline of demand planning, also sometimes referred to as supply chain forecasting, embraces both statistical forecasting and a consensus process. Studies have shown that extrapolations are the least accurate, while company earnings forecasts are the most reliable.
Economic forecasting
Earthquake prediction
Egain forecasting
Energy forecasting for renewable power integration
Finance against risk of default via credit ratings and credit scores
Land use forecasting
Player and team performance in sports
Political forecasting
Product forecasting
Sales forecasting
Technology forecasting
Telecommunications forecasting
Transport planning and forecasting
Weather forecasting, flood forecasting and meteorology
Forecasting as training, betting and futarchy
In several cases, the forecast is either more or less than a prediction of the future.
In Philip E. Tetlock's Superforecasting: The Art and Science of Prediction, he discusses forecasting as a method of improving the ability to make decisions. A person can become better calibrated — i.e. having things they give 10% credence to happening 10% of the time. Or they can forecast things more confidently — coming to the same conclusion but earlier. Some have claimed that forecasting is a transferable skill with benefits to other areas of discussion and decision making.
Betting on sports or politics is another form of forecasting. Rather than being used as advice, bettors are paid based on if they predicted correctly. While decisions might be made based on these bets (forecasts), the main motivation is generally financial.
Finally, futarchy is a form of government where forecasts of the impact of government action are used to decide which actions are taken. Rather than advice, in futarchy's strongest form, the action with the best forecasted result is automatically taken.
Forecast improvements
Forecast improvement projects have been operated in a number of sectors: the National Hurricane Center's Hurricane Forecast Improvement Project (HFIP) and the Wind Forecast Improvement Project sponsored by the US Department of Energy are examples. In relation to supply chain management, the Du Pont model has been used to show that an increase in forecast accuracy can generate increases in sales and reductions in inventory, operating expenses and commitment of working capital. The Groceries Code Adjudicator in the United Kingdom, which regulates supply chain management practices in the groceries retail industry, has observed that all the retailers who fall within the scope of his regulation "are striving for continuous improvement in forecasting practice and activity in relation to promotions".
Categories of forecasting methods
Qualitative vs. quantitative methods
Qualitative forecasting techniques are subjective, based on the opinion and judgment of consumers and experts; they are appropriate when past data are not available.
They are usually applied to intermediate- or long-range decisions. Examples of qualitative forecasting methods are informed opinion and judgment, the Delphi method, market research, and historical life-cycle analogy.
Quantitative forecasting models are used to forecast future data as a function of past data. They are appropriate to use when past numerical data is available and when it is reasonable to assume that some of the patterns in the data are expected to continue into the future.
These methods are usually applied to short- or intermediate-range decisions. Examples of quantitative forecasting methods are last period demand, simple and weighted N-period moving averages, simple exponential smoothing, Poisson process model based forecasting and multiplicative seasonal indexes. Previous research shows that different methods may lead to different levels of forecasting accuracy. For example, GMDH neural networks have been found to have better forecasting performance than classical forecasting algorithms such as single exponential smoothing, double exponential smoothing, ARIMA and back-propagation neural networks.
Average approach
In this approach, the predictions of all future values are equal to the mean of the past data. This approach can be used with any sort of data where past data is available. In time series notation:
$\hat{y}_{T+h|T} = \bar{y} = \frac{y_1 + y_2 + \cdots + y_T}{T},$
where $y_1, \dots, y_T$ are the past data and $\hat{y}_{T+h|T}$ denotes the forecast of the value $h$ periods after time $T$, made using the data up to time $T$.
Although the time series notation has been used here, the average approach can also be used for cross-sectional data (when we are predicting unobserved values; values that are not included in the data set). Then, the prediction for unobserved values is the average of the observed values.
Naïve approach
Naïve forecasts are the most cost-effective forecasting model, and provide a benchmark against which more sophisticated models can be compared. This forecasting method is only suitable for time series data. Using the naïve approach, forecasts are produced that are equal to the last observed value. This method works quite well for economic and financial time series, which often have patterns that are difficult to reliably and accurately predict. If the time series is believed to have seasonality, the seasonal naïve approach may be more appropriate where the forecasts are equal to the value from last season. In time series notation:
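In a common textbook notation (assumed here, since the article's own symbols are not shown), with $y_T$ the last observed value and $\hat{y}_{T+h|T}$ the forecast of the value $h$ periods ahead made at time $T$, the naïve forecast is simply
$\hat{y}_{T+h|T} = y_T.$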
Drift method
A variation on the naïve method is to allow the forecasts to increase or decrease over time, where the amount of change over time (called the drift) is set to be the average change seen in the historical data. So the forecast for time $T+h$ is given by
$\hat{y}_{T+h|T} = y_T + h\left(\frac{y_T - y_1}{T - 1}\right).$
This is equivalent to drawing a line between the first and last observation, and extrapolating it into the future.
Seasonal naïve approach
The seasonal naïve method accounts for seasonality by setting each prediction to be equal to the last observed value of the same season. For example, the prediction value for all subsequent months of April will be equal to the previous value observed for April. The forecast for time is
where =seasonal period and is the smallest integer greater than .
The seasonal naïve method is particularly useful for data that has a very high level of seasonality.
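One standard way of writing the seasonal naïve forecast (in the same assumed notation, with $m$ the seasonal period) is
$\hat{y}_{T+h|T} = y_{T+h-m(k+1)},$
where $k$ is the integer part of $(h-1)/m$, that is, the number of complete seasonal cycles in the forecast period before time $T+h$.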
Deterministic approach
A deterministic approach is when there is no stochastic variable involved and the forecasts depend on the selected functions and parameters. For example, given the function
the short-term behaviour and the medium-to-long-term trend are
where are some parameters.
This approach has been proposed to simulate bursts of seemingly stochastic activity, interrupted by quieter periods. The assumption is that the presence of a strong deterministic ingredient is hidden by noise. The deterministic approach is noteworthy as it can reveal the underlying dynamical systems structure, which can be exploited for steering the dynamics into a desired regime.
Time series methods
Time series methods use historical data as the basis of estimating future outcomes. They are based on the assumption that past demand history is a good indicator of future demand.
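The following is a minimal sketch of one of the methods listed below, simple exponential smoothing, assuming the observations are given as a plain Python list; the function name, the value of the smoothing parameter alpha, and the initialization with the first observation are illustrative choices rather than prescriptions from this article.

```python
def simple_exponential_smoothing(series, alpha=0.3):
    """One-step-ahead forecast by simple exponential smoothing.

    Each new level is a weighted average of the latest observation and the
    previous level; the final level is the forecast for the next period.
    """
    level = series[0]                       # initialize with the first observation
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

# Illustrative (made-up) monthly demand figures
demand = [120, 132, 101, 134, 90, 110, 125]
print(simple_exponential_smoothing(demand, alpha=0.3))
```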
Moving average
Weighted moving average
Exponential smoothing
Autoregressive moving average (ARMA) (forecasts depend on past values of the variable being forecast and on past prediction errors)
Autoregressive integrated moving average (ARIMA) (ARMA on the period-to-period change in the forecast variable)
e.g. Box–Jenkins
Seasonal ARIMA or SARIMA or ARIMARCH,
Extrapolation
Linear prediction
Trend estimation (predicting the variable as a linear or polynomial function of time)
Growth curve (statistics)
Recurrent neural network
Relational methods
Some forecasting methods try to identify the underlying factors that might influence the variable that is being forecast. For example, including information about climate patterns might improve the ability of a model to predict umbrella sales. Forecasting models often take account of regular seasonal variations. In addition to climate, such variations can also be due to holidays and customs: for example, one might predict that sales of college football apparel will be higher during the football season than during the off season.
Several informal methods used in causal forecasting do not rely solely on the output of mathematical algorithms, but instead use the judgment of the forecaster. Some forecasts take account of past relationships between variables: if one variable has, for example, been approximately linearly related to another for a long period of time, it may be appropriate to extrapolate such a relationship into the future, without necessarily understanding the reasons for the relationship.
Causal methods include:
Regression analysis includes a large group of methods for predicting future values of a variable using information about other variables. These methods include both parametric (linear or non-linear) and non-parametric techniques.
Autoregressive moving average with exogenous inputs (ARMAX)
Quantitative forecasting models are often judged against each other by comparing their in-sample or out-of-sample mean square error, although some researchers have advised against this. Different forecasting approaches have different levels of accuracy. For example, it was found in one context that GMDH has higher forecasting accuracy than traditional ARIMA.
Judgmental methods
Judgmental forecasting methods incorporate intuitive judgement, opinions and subjective probability estimates. Judgmental forecasting is used in cases where there is a lack of historical data or during completely new and unique market conditions.
Judgmental methods include:
Composite forecasts
Cooke's method
Delphi method
Forecast by analogy
Scenario building
Statistical surveys
Technology forecasting
Artificial intelligence methods
Artificial neural networks
Group method of data handling
Support vector machines
Often these are done today by specialized programs loosely labeled
Data mining
Machine learning
Pattern recognition
Geometric extrapolation with error prediction
It can be created with three points of a sequence and the "moment" or "index". This type of extrapolation predicts with 100% accuracy a large proportion of the sequences in known series databases (such as the OEIS).
Other methods
Granger causality
Simulation
Demand forecasting
Probabilistic forecasting and Ensemble forecasting
Forecasting accuracy
The forecast error (also known as a residual) is the difference between the actual value and the forecast value for the corresponding period:
$E_t = Y_t - F_t,$
where $E_t$ is the forecast error at period $t$, $Y_t$ is the actual value at period $t$, and $F_t$ is the forecast for period $t$.
A good forecasting method will yield residuals that are uncorrelated. If there are correlations between residual values, then there is information left in the residuals which should be used in computing forecasts. This can be accomplished by computing the expected value of a residual as a function of the known past residuals, and adjusting the forecast by the amount by which this expected value differs from zero.
A good forecasting method will also have zero mean. If the residuals have a mean other than zero, then the forecasts are biased and can be improved by adjusting the forecasting technique by an additive constant that equals the mean of the unadjusted residuals.
Measures of aggregate error:
Scaled-dependent errors
The forecast error, E, is on the same scale as the data; as such, these accuracy measures are scale-dependent and cannot be used to make comparisons between series on different scales.
Mean absolute error (MAE) or mean absolute deviation (MAD):
Mean squared error (MSE) or mean squared prediction error (MSPE):
Root mean squared error (RMSE):
Average of Errors (E):
Percentage errors
These are more frequently used to compare forecast performance between different data sets because they are scale-independent. However, they have the disadvantage of being extremely large or undefined if Y is close to or equal to zero.
Mean absolute percentage error (MAPE):
Mean absolute percentage deviation (MAPD):
Scaled errors
Hyndman and Koehler (2006) proposed using scaled errors as an alternative to percentage errors.
Mean absolute scaled error (MASE):
where m is the seasonal period, or 1 for non-seasonal data
Other measures
Forecast skill (SS):
Business forecasters and practitioners sometimes use different terminology. They refer to the PMAD as the MAPE, although they compute this as a volume weighted MAPE. For more information, see Calculating demand forecast accuracy.
When comparing the accuracy of different forecasting methods on a specific data set, the measures of aggregate error are compared with each other and the method that yields the lowest error is preferred.
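The following sketch shows how some of the error measures above can be computed, assuming the actual and forecast values are NumPy arrays; the numbers are illustrative only, and the MASE is scaled here by the in-sample mean absolute error of the naïve (last-value) forecast on the same short series.

```python
import numpy as np

actual   = np.array([102.0, 110.0, 95.0, 120.0, 130.0])  # illustrative actual values
forecast = np.array([100.0, 112.0, 98.0, 115.0, 128.0])  # illustrative forecasts

errors = actual - forecast                      # forecast errors (residuals)

mae  = np.mean(np.abs(errors))                  # mean absolute error (scale-dependent)
mse  = np.mean(errors ** 2)                     # mean squared error (scale-dependent)
rmse = np.sqrt(mse)                             # root mean squared error
mape = 100 * np.mean(np.abs(errors / actual))   # mean absolute percentage error

# Mean absolute scaled error: scale by the MAE of the naive (last-value)
# forecast computed on the same data (non-seasonal case, m = 1).
naive_mae = np.mean(np.abs(np.diff(actual)))
mase = mae / naive_mae

print(mae, rmse, mape, mase)
```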
Training and test sets
When evaluating the quality of forecasts, it is invalid to look at how well a model fits the historical data; the accuracy of forecasts can only be determined by considering how well a model performs on new data that were not used when fitting the model. When choosing models, it is common to use a portion of the available data for fitting, and use the rest of the data for testing the model, as was done in the above examples.
Cross-validation
Cross-validation is a more sophisticated version of the training and test set approach.
For cross-sectional data, one approach to cross-validation works as follows:
Select observation i for the test set, and use the remaining observations in the training set. Compute the error on the test observation.
Repeat the above step for i = 1,2,..., N where N is the total number of observations.
Compute the forecast accuracy measures based on the errors obtained.
This makes efficient use of the available data, as only one observation is omitted at each step.
For time series data, the training set can only include observations prior to the test set. Therefore, no future observations can be used in constructing the forecast. Suppose k observations are needed to produce a reliable forecast; then the process works as follows:
Starting with i=1, select the observation k + i for the test set, and use the observations at times 1, 2, ..., k+i–1 to estimate the forecasting model. Compute the error on the forecast for k+i.
Repeat the above step for i = 2,...,T–k where T is the total number of observations.
Compute the forecast accuracy over all errors.
This procedure is sometimes known as a "rolling forecasting origin" because the "origin" (k+i -1) at which the forecast is based rolls forward in time. Further, two-step-ahead or in general p-step-ahead forecasts can be computed by first forecasting the value immediately after the training set, then using this value with the training set values to forecast two periods ahead, etc.
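A minimal sketch of this rolling forecasting origin procedure, using the naïve (last observed value) method as the forecasting model; the series values and the choice of k are illustrative.

```python
def rolling_origin_errors(series, k):
    """One-step-ahead forecast errors with a rolling forecasting origin.

    For each origin, the "model" is fit on observations 1..k+i-1 and its
    forecast is compared with observation k+i.  Here the model is the
    naive method: forecast the last value seen in the training data.
    """
    errors = []
    for i in range(1, len(series) - k + 1):
        train = series[:k + i - 1]         # observations available at the origin
        test_value = series[k + i - 1]     # the next, held-out observation
        forecast = train[-1]               # naive forecast
        errors.append(test_value - forecast)
    return errors

series = [10, 12, 13, 12, 15, 16, 18, 17]  # illustrative data
errs = rolling_origin_errors(series, k=3)
print(errs, sum(abs(e) for e in errs) / len(errs))
```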
| Physical sciences | Science basics | Basics and measurement |
246160 | https://en.wikipedia.org/wiki/Summation | Summation | In mathematics, summation is the addition of a sequence of numbers, called addends or summands; the result is their sum or total. Beside numbers, other types of values can be summed as well: functions, vectors, matrices, polynomials and, in general, elements of any type of mathematical objects on which an operation denoted "+" is defined.
Summations of infinite sequences are called series. They involve the concept of limit, and are not considered in this article.
The summation of an explicit sequence is denoted as a succession of additions. For example, summation of is denoted , and results in 9, that is, . Because addition is associative and commutative, there is no need for parentheses, and the result is the same irrespective of the order of the summands. Summation of a sequence of only one summand results in the summand itself. Summation of an empty sequence (a sequence with no elements), by convention, results in 0.
Very often, the elements of a sequence are defined, through a regular pattern, as a function of their place in the sequence. For simple patterns, summation of long sequences may be represented with most summands replaced by ellipses. For example, summation of the first 100 natural numbers may be written as . Otherwise, summation is denoted by using Σ notation, where is an enlarged capital Greek letter sigma. For example, the sum of the first natural numbers can be denoted as
For long summations, and summations of variable length (defined with ellipses or Σ notation), it is a common problem to find closed-form expressions for the result. For example, $\sum_{i=1}^{n} i = \frac{n(n+1)}{2}.$
Although such formulas do not always exist, many summation formulas have been discovered—with some of the most common and elementary ones being listed in the remainder of this article.
Notation
Capital-sigma notation
Mathematical notation uses a symbol that compactly represents summation of many similar terms: the summation symbol, , an enlarged form of the upright capital Greek letter sigma. This is defined as
where is the index of summation; is an indexed variable representing each term of the sum; is the lower bound of summation, and is the upper bound of summation. The "" under the summation symbol means that the index starts out equal to . The index, , is incremented by one for each successive term, stopping when .
This is read as "sum of , from to ".
Here is an example showing the summation of squares:
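For instance, taking lower bound 3 and upper bound 6 (an illustrative choice of bounds):
$\sum_{i=3}^{6} i^2 = 3^2 + 4^2 + 5^2 + 6^2 = 9 + 16 + 25 + 36 = 86.$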
In general, while any variable can be used as the index of summation (provided that no ambiguity is incurred), some of the most common ones include letters such as , , , and ; the latter is also often used for the upper bound of a summation.
Alternatively, index and bounds of summation are sometimes omitted from the definition of summation if the context is sufficiently clear. This applies particularly when the index runs from 1 to n. For example, one might write that:
Generalizations of this notation are often used, in which an arbitrary logical condition is supplied, and the sum is intended to be taken over all values satisfying the condition. For example:
is an alternative notation for the sum of over all (integers) in the specified range. Similarly,
is the sum of over all elements in the set , and
is the sum of over all positive integers dividing .
There are also ways to generalize the use of many sigma signs. For example,
is the same as
A similar notation is used for the product of a sequence, where , an enlarged form of the Greek capital letter pi, is used instead of
Special cases
It is possible to sum fewer than 2 numbers:
If the summation has one summand , then the evaluated sum is .
If the summation has no summands, then the evaluated sum is zero, because zero is the identity for addition. This is known as the empty sum.
These degenerate cases are usually only used when the summation notation gives a degenerate result in a special case.
For example, if in the definition above, then there is only one term in the sum; if , then there is none.
Algebraic sum
The phrase 'algebraic sum' refers to a sum of terms which may have positive or negative signs. Terms with positive signs are added, while terms with negative signs are subtracted.
Formal definition
Summation may be defined recursively as follows:
$\sum_{i=a}^{b} g(i) = 0$, for $b < a$;
$\sum_{i=a}^{b} g(i) = g(b) + \sum_{i=a}^{b-1} g(i)$, for $b \geq a$.
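The recursive definition translates directly into code; the following sketch is illustrative, with the function and argument names chosen here rather than taken from the article.

```python
def summation(g, a, b):
    """Sum of g(i) for i from a to b, following the recursive definition:
    the empty sum is 0, and otherwise the sum is the last term plus the
    sum of the remaining terms."""
    if b < a:
        return 0                           # empty sum
    return g(b) + summation(g, a, b - 1)

print(summation(lambda i: i * i, 1, 4))    # 1 + 4 + 9 + 16 = 30
```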
Measure theory notation
In the notation of measure and integration theory, a sum can be expressed as a definite integral,
where is the subset of the integers from to , and where is the counting measure over the integers.
Calculus of finite differences
Given a function that is defined over the integers in the interval , the following equation holds:
This is known as a telescoping series and is the analogue of the fundamental theorem of calculus in calculus of finite differences, which states that:
where
is the derivative of .
An example of application of the above equation is the following:
Using binomial theorem, this may be rewritten as:
The above formula is more commonly used for inverting of the difference operator , defined by:
where is a function defined on the nonnegative integers.
Thus, given such a function , the problem is to compute the antidifference of , a function such that . That is,
This function is defined up to the addition of a constant, and may be chosen as
There is not always a closed-form expression for such a summation, but Faulhaber's formula provides a closed form in the case where and, by linearity, for every polynomial function of .
Approximation by definite integrals
Many such approximations can be obtained by the following connection between sums and integrals, which holds for any increasing function f:
and for any decreasing function f:
For more general approximations, see the Euler–Maclaurin formula.
For summations in which the summand is given (or can be interpolated) by an integrable function of the index, the summation can be interpreted as a Riemann sum occurring in the definition of the corresponding definite integral. One can therefore expect that for instance
since the right-hand side is by definition the limit for of the left-hand side. However, for a given summation n is fixed, and little can be said about the error in the above approximation without additional assumptions about f: it is clear that for wildly oscillating functions the Riemann sum can be arbitrarily far from the Riemann integral.
Identities
The formulae below involve finite sums; for infinite summations or finite summations of expressions involving trigonometric functions or other transcendental functions, see list of mathematical series.
General identities
(distributivity)
(commutativity and associativity)
(index shift)
for a bijection from a finite set onto a set (index change); this generalizes the preceding formula.
(splitting a sum, using associativity)
(a variant of the preceding formula)
(the sum from the first term up to the last is equal to the sum from the last down to the first)
(a particular case of the formula above)
(commutativity and associativity, again)
(another application of commutativity and associativity)
(splitting a sum into its odd and even parts, for even indexes)
(splitting a sum into its odd and even parts, for odd indexes)
(distributivity)
(distributivity allows factorization)
(the logarithm of a product is the sum of the logarithms of the factors)
(the exponential of a sum is the product of the exponential of the summands)
for any function from .
Powers and logarithm of arithmetic progressions
$\sum_{i=1}^{n} c = nc$, for every $c$ that does not depend on $i$
$\sum_{i=1}^{n} i = \frac{n(n+1)}{2}$ (Sum of the simplest arithmetic progression, consisting of the first n natural numbers.)
$\sum_{i=1}^{n} (2i-1) = n^2$ (Sum of first odd natural numbers)
$\sum_{i=1}^{n} 2i = n(n+1)$ (Sum of first even natural numbers)
$\sum_{i=1}^{n} \log x_i = \log \prod_{i=1}^{n} x_i$ (A sum of logarithms is the logarithm of the product)
$\sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6}$ (Sum of the first squares, see square pyramidal number.)
$\sum_{i=1}^{n} i^3 = \left(\frac{n(n+1)}{2}\right)^2 = \left(\sum_{i=1}^{n} i\right)^2$ (Nicomachus's theorem)
More generally, one has Faulhaber's formula for
where denotes a Bernoulli number, and is a binomial coefficient.
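One common way of writing Faulhaber's formula, using the convention in which the first Bernoulli number is $B_1 = +\tfrac{1}{2}$ (conventions differ between sources), is
$\sum_{k=1}^{n} k^{p} = \frac{1}{p+1} \sum_{j=0}^{p} \binom{p+1}{j} B_j\, n^{p+1-j}.$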
Summation index in exponents
In the following summations, is assumed to be different from 1.
(sum of a geometric progression)
(special case for )
( times the derivative with respect to of the geometric progression)
(sum of an arithmetico–geometric sequence)
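For example, in one standard notation (assumed here), the sum of a finite geometric progression with first term $a$ and common ratio $r \neq 1$ is
$\sum_{i=0}^{n-1} a r^{i} = a\,\frac{1 - r^{n}}{1 - r}.$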
Binomial coefficients and factorials
There exist very many summation identities involving binomial coefficients (a whole chapter of Concrete Mathematics is devoted to just the basic techniques). Some of the most basic ones are the following.
Involving the binomial theorem
the binomial theorem
the special case where
, the special case where , which, for expresses the sum of the binomial distribution
the value at of the derivative with respect to of the binomial theorem
the value at of the antiderivative with respect to of the binomial theorem
Involving permutation numbers
In the following summations, is the number of -permutations of .
, where and denotes the floor function.
Others
Harmonic numbers
(the th harmonic number)
(a generalized harmonic number)
Growth rates
The following are useful approximations (using theta notation):
for real c greater than −1
(See Harmonic number)
for real c greater than 1
for non-negative real c
for non-negative real c, d
for non-negative real b > 1, c, d
History
In 1675, Gottfried Wilhelm Leibniz, in a letter to Henry Oldenburg, suggests the symbol ∫ to mark the sum of differentials (Latin: calculus summatorius), hence the S-shape. The renaming of this symbol to integral arose later in exchanges with Johann Bernoulli.
In 1755, the summation symbol Σ is attested in Leonhard Euler's Institutiones calculi differentialis. Euler uses the symbol in expressions like:
In 1772, usage of Σ and Σn is attested by Lagrange.
In 1823, the capital letter S is attested as a summation symbol for series. This usage was apparently widespread.
In 1829, the summation symbol Σ is attested by Fourier and C. G. J. Jacobi. Fourier's use includes lower and upper bounds, for example:
| Mathematics | Sequences and series | null |
246173 | https://en.wikipedia.org/wiki/Phenolphthalein | Phenolphthalein | Phenolphthalein ( ) is a chemical compound with the formula C20H14O4 and is often written as "HIn", "HPh", "phph" or simply "Ph" in shorthand notation. Phenolphthalein is often used as an indicator in acid–base titrations. For this application, it turns colorless in acidic solutions and pink in basic solutions. It belongs to the class of dyes known as phthalein dyes.
Phenolphthalein is slightly soluble in water and is usually dissolved in alcohols for use in experiments. It is a weak acid, which can lose H+ ions in solution. The nonionized phenolphthalein molecule is colorless and the doubly deprotonated phenolphthalein ion is fuchsia. Further proton loss at higher pH occurs slowly and leads to a colorless form. The phenolphthalein ion in concentrated sulfuric acid is orange-red due to sulfonation.
Uses
pH indicator
Phenolphthalein's common use is as an indicator in acid-base titrations. It also serves as a component of universal indicator, together with methyl red, bromothymol blue, and thymol blue.
Phenolphthalein adopts different forms in aqueous solution depending on the pH of the solution. Inconsistency exists in the literature about the hydrated forms of the compound and its color in concentrated sulfuric acid. Wittke reported in 1983 that it exists in protonated form (H3In+) under strongly acidic conditions, providing an orange coloration. However, a later paper suggested that this color is due to sulfonation to phenolsulfonphthalein.
The lactone form (H2In) is colorless between strongly acidic and slightly basic conditions. The doubly deprotonated (In2-) phenolate form (the anion form of phenol) gives the familiar pink color. In strongly basic solutions, phenolphthalein is converted to its In(OH)3− form, and its pink color undergoes a rather slow fading reaction and becomes completely colorless when pH is greater than 13.
The pKa values of phenolphthalein were found to be 9.05, 9.50 and 12 while those of phenolsulfonphthalein are 1.2 and 7.70. The pKa for the color change is 9.50.
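As with any acid–base indicator, the fraction of phenolphthalein in the colored form at a given pH can be estimated from the Henderson–Hasselbalch relation, here using the reported pKa of 9.50 for the color change (a standard calculation, not taken from the measurements above):
$\mathrm{pH} = \mathrm{p}K_a + \log_{10}\frac{[\mathrm{In}^{2-}]}{[\mathrm{H_2In}]},$
so that at pH 8.5 roughly one molecule in eleven is in the pink form, while at pH 10.5 roughly ten in eleven are.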
Carbonation of concrete
Phenolphthalein's pH sensitivity is exploited in other applications: concrete has naturally high pH due to the calcium hydroxide formed when Portland cement reacts with water. As the concrete reacts with carbon dioxide in the atmosphere, pH decreases to 8.5–9. When a 1% phenolphthalein solution is applied to normal concrete, it turns bright pink. However, if it remains colorless, it shows that the concrete has undergone carbonation. In a similar application, some spackling used to repair holes in drywall contains phenolphthalein. When applied, the basic spackling material retains a pink color; when the spackling has cured by reaction with atmospheric carbon dioxide, the pink color fades.
Education
In a highly basic solution, phenolphthalein's slow change from pink to colorless as it is converted to its In(OH)3− form is used in chemistry classes for the study of reaction kinetics.
Entertainment
Phenolphthalein is used in toys, for example as a component of disappearing inks, or disappearing dye on the "Hollywood Hair" Barbie hair. In the ink, it is mixed with sodium hydroxide, which reacts with carbon dioxide in the air. This reaction leads to the pH falling below the color change threshold as hydrogen ions are released by the reaction:
OH−(aq) + CO2(g) → CO32−(aq) + H+(aq).
To develop the hair and "magic" graphical patterns, the ink is sprayed with a solution of hydroxide, which leads to the appearance of the hidden graphics by the same mechanism described above for color change in alkaline solution. The pattern will eventually disappear again because of the reaction with carbon dioxide. Thymolphthalein is used for the same purpose and in the same way, when a blue color is desired.
Detection of blood
A reduced form of phenolphthalein, phenolphthalin, which is colorless, is used in a test to identify substances thought to contain blood, commonly known as the Kastle–Meyer test. A dry sample is collected with a swab or filter paper. A few drops of alcohol, then a few drops of phenolphthalein, and finally a few drops of hydrogen peroxide are dripped onto the sample. If the sample contains hemoglobin, it will turn pink immediately upon addition of the peroxide, because of the generation of phenolphthalein. A positive test indicates the sample contains hemoglobin and, therefore, is likely blood. A false positive can result from the presence of substances with catalytic activity similar to hemoglobin. This test is not destructive to the sample; it can be kept and used in further tests. This test has the same reaction with blood from any animal whose blood contains hemoglobin, including almost all vertebrates; further testing would be required to determine whether it originated from a human.
Laxative
Phenolphthalein has been used for over a century as a laxative, but is now being removed from over-the-counter laxatives over concerns of carcinogenicity. Laxative products formerly containing phenolphthalein have often been reformulated with alternative active ingredients: Feen-a-Mint switched to bisacodyl, and Ex-Lax was switched to a senna extract.
Thymolphthalein is a related laxative made from thymol.
Despite concerns regarding its carcinogenicity based on rodent studies, the use of phenolphthalein as a laxative is unlikely to cause ovarian cancer. Some studies suggest a weak association with colon cancer, while others show none at all.
Phenolphthalein is described as a stimulant laxative. In addition, it has been found to inhibit human cellular calcium influx via store-operated calcium entry (SOCE) in vivo. This is effected by its inhibiting thrombin and thapsigargin, two activators of SOCE that increase intracellular free calcium.
Phenolphthalein has been added to the European Chemicals Agency's candidate list of substances of very high concern (SVHC). It is on the IARC Group 2B list of substances "possibly carcinogenic to humans".
The discovery of phenolphthalein's laxative effect was due to an attempt by the Hungarian government to label genuine local white wine with the substance in 1900. Phenolphthalein did not change the taste of the wine and would change color when a base is added, making it a good label in principle. However, it was found that ingestion of the substance led to diarrhea. Max Kiss, a Hungarian-born pharmacist residing in New York, heard about the news and launched Ex-Lax in 1906.
Synthesis
Phenolphthalein can be synthesized by condensation of phthalic anhydride with two equivalents of phenol under acidic conditions. It was discovered in 1871 by Adolf von Baeyer.
| Physical sciences | Chemical methods | Chemistry |
246223 | https://en.wikipedia.org/wiki/Component%20%28graph%20theory%29 | Component (graph theory) | In graph theory, a component of an undirected graph is a connected subgraph that is not part of any larger connected subgraph. The components of any graph partition its vertices into disjoint sets, and are the induced subgraphs of those sets. A graph that is itself connected has exactly one component, consisting of the whole graph. Components are sometimes called connected components.
The number of components in a given graph is an important graph invariant, and is closely related to invariants of matroids, topological spaces, and matrices. In random graphs, a frequently occurring phenomenon is the incidence of a giant component, one component that is significantly larger than the others; and of a percolation threshold, an edge probability above which a giant component exists and below which it does not.
The components of a graph can be constructed in linear time, and a special case of the problem, connected-component labeling, is a basic technique in image analysis. Dynamic connectivity algorithms maintain components as edges are inserted or deleted in a graph, in low time per change. In computational complexity theory, connected components have been used to study algorithms with limited space complexity, and sublinear time algorithms can accurately estimate the number of components.
Definitions and examples
A component of a given undirected graph may be defined as a connected subgraph that is not part of any larger connected subgraph. For instance, the graph shown in the first illustration has three components. Every vertex of a graph belongs to one of the graph's components, which may be found as the induced subgraph of the set of vertices reachable from that vertex. Every graph is the disjoint union of its components. Additional examples include the following special cases:
In an empty graph, each vertex forms a component with one vertex and zero edges. More generally, a component of this type is formed for every isolated vertex in any graph.
In a connected graph, there is exactly one component: the whole graph.
In a forest, every component is a tree.
In a cluster graph, every component is a maximal clique. These graphs may be produced as the transitive closures of arbitrary undirected graphs, for which finding the transitive closure is an equivalent formulation of identifying the connected components.
Another definition of components involves the equivalence classes of an equivalence relation defined on the graph's vertices.
In an undirected graph, a vertex v is reachable from a vertex u if there is a path from u to v, or equivalently a walk (a path allowing repeated vertices and edges).
Reachability is an equivalence relation, since:
It is reflexive: There is a trivial path of length zero from any vertex to itself.
It is symmetric: If there is a path from u to v, the same edges in the reverse order form a path from v to u.
It is transitive: If there is a path from u to v and a path from v to w, the two paths may be concatenated together to form a walk from u to w.
The equivalence classes of this relation partition the vertices of the graph into disjoint sets, subsets of vertices that are all reachable from each other, with no additional reachable pairs outside of any of these subsets. Each vertex belongs to exactly one equivalence class. The components are then the induced subgraphs formed by each of these equivalence classes. Alternatively, some sources define components as the sets of vertices rather than as the subgraphs they induce.
Similar definitions involving equivalence classes have been used to define components for other forms of graph connectivity, including the weak components and strongly connected components of directed graphs and the biconnected components of undirected graphs.
Number of components
The number of components of a given finite graph can be used to count the number of edges in its spanning forests: In a graph with n vertices and c components, every spanning forest will have exactly n − c edges. This number is the matroid-theoretic rank of the graph, and the rank of its graphic matroid. The rank of the dual cographic matroid equals the circuit rank of the graph, the minimum number of edges that must be removed from the graph to break all its cycles. In a graph with m edges, n vertices, and c components, the circuit rank is m − n + c.
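As a small worked example of these relations (the graph here is chosen purely for illustration and does not come from the article), consider two disjoint triangles together with one isolated vertex:

n = 7   # vertices: three in each triangle, plus one isolated vertex
m = 6   # edges: three in each triangle
c = 3   # components: the two triangles and the isolated vertex

spanning_forest_edges = n - c      # rank of the graphic matroid: 7 - 3 = 4
circuit_rank = m - n + c           # 6 - 7 + 3 = 2, one independent cycle per triangle
print(spanning_forest_edges, circuit_rank)   # prints: 4 2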
A graph can be interpreted as a topological space in multiple ways, for instance by placing its vertices as points in general position in three-dimensional Euclidean space and representing its edges as line segments between those points. The components of a graph can be generalized through these interpretations as the topological connected components of the corresponding space; these are equivalence classes of points that cannot be separated by pairs of disjoint closed sets. Just as the number of connected components of a topological space is an important topological invariant, the zeroth Betti number, the number of components of a graph is an important graph invariant, and in topological graph theory it can be interpreted as the zeroth Betti number of the graph.
The number of components arises in other ways in graph theory as well. In algebraic graph theory it equals the multiplicity of 0 as an eigenvalue of the Laplacian matrix of a finite graph. It is also the index of the first nonzero coefficient of the chromatic polynomial of the graph, and the chromatic polynomial of the whole graph can be obtained as the product of the polynomials of its components. Numbers of components play a key role in the Tutte theorem characterizing finite graphs that have perfect matchings and the associated Tutte–Berge formula for the size of a maximum matching, and in the definition of graph toughness.
Algorithms
It is straightforward to compute the components of a finite graph in linear time (in terms of the numbers of the vertices and edges of the graph) using either breadth-first search or depth-first search. In either case, a search that begins at some particular vertex will find the entire component containing that vertex (and no more) before returning. All components of a graph can be found by looping through its vertices, starting a new breadth-first or depth-first search whenever the loop reaches a vertex that has not already been included in a previously found component. Hopcroft and Tarjan (1973) describe essentially this algorithm, and state that it was already "well known".
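A minimal Python sketch of this search-based procedure follows; the function name and the dictionary-of-neighbours graph representation are choices made for this illustration rather than anything prescribed by the sources.

from collections import deque

def connected_components(adjacency):
    """Return the components of an undirected graph given as a dict
    mapping each vertex to an iterable of its neighbours."""
    seen = set()
    components = []
    for start in adjacency:                 # loop over all vertices
        if start in seen:
            continue                        # already placed in an earlier component
        # a breadth-first search from an unvisited vertex finds exactly one component
        queue = deque([start])
        seen.add(start)
        component = []
        while queue:
            v = queue.popleft()
            component.append(v)
            for w in adjacency[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        components.append(component)
    return components

# Example: a path a-b-c plus an isolated vertex d gives two components.
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"], "d": []}
print(connected_components(graph))   # [['a', 'b', 'c'], ['d']]

Each vertex and edge is handled a constant number of times, which is where the linear time bound comes from.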
Connected-component labeling, a basic technique in computer image analysis, involves the construction of a graph from the image and component analysis on the graph.
The vertices are the subset of the pixels of the image, chosen as being of interest or as likely to be part of depicted objects. Edges connect adjacent pixels, with adjacency defined either orthogonally according to the Von Neumann neighborhood, or both orthogonally and diagonally according to the Moore neighborhood. Identifying the connected components of this graph allows additional processing to find more structure in those parts of the image or identify what kind of object is depicted. Researchers have developed component-finding algorithms specialized for this type of graph, allowing it to be processed in pixel order rather than in the more scattered order that would be generated by breadth-first or depth-first searching. This can be useful in situations where sequential access to the pixels is more efficient than random access, either because the image is represented in a hierarchical way that does not permit fast random access or because sequential access produces better memory access patterns.
There are also efficient algorithms to dynamically track the components of a graph as vertices and edges are added, by using a disjoint-set data structure to keep track of the partition of the vertices into equivalence classes, replacing any two classes by their union when an edge connecting them is added. These algorithms take amortized O(α(n)) time per operation, where adding vertices and edges and determining the component in which a vertex falls are both operations, and α(n) is a very slowly growing inverse of the very quickly growing Ackermann function. One application of this sort of incremental connectivity algorithm is in Kruskal's algorithm for minimum spanning trees, which adds edges to a graph in sorted order by length and includes an edge in the minimum spanning tree only when it connects two different components of the previously-added subgraph. When both edge insertions and edge deletions are allowed, dynamic connectivity algorithms can still maintain the same information, in amortized O(log² n) time per change and O(log n / log log n) time per connectivity query, or in near-logarithmic randomized expected time.
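The following sketch shows a disjoint-set (union-find) structure of the kind described above, together with a Kruskal-style loop that keeps an edge only when it joins two different components. The class layout, the union-by-size rule, and the tiny edge list are illustrative assumptions of this sketch.

class DisjointSet:
    """Union-find with path compression and union by size (one standard
    variant; union by rank is an equally common alternative)."""
    def __init__(self, vertices):
        self.parent = {v: v for v in vertices}
        self.size = {v: 1 for v in vertices}

    def find(self, v):
        # follow parent pointers to the set representative, compressing the path
        root = v
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[v] != root:
            self.parent[v], v = root, self.parent[v]
        return root

    def union(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return False          # u and v were already in the same component
        if self.size[ru] < self.size[rv]:
            ru, rv = rv, ru
        self.parent[rv] = ru      # merge the smaller tree into the larger one
        self.size[ru] += self.size[rv]
        return True

# Kruskal-style use: process edges in order of length and keep an edge only
# when it connects two different components (illustrative edge list).
edges = [(1, "a", "b"), (2, "b", "c"), (3, "a", "c"), (4, "c", "d")]
dsu = DisjointSet("abcd")
tree = [(u, v) for w, u, v in sorted(edges) if dsu.union(u, v)]
print(tree)   # [('a', 'b'), ('b', 'c'), ('c', 'd')]; the edge (a, c) would close a cycle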
Components of graphs have been used in computational complexity theory to study the power of Turing machines that have a working memory limited to a logarithmic number of bits, with the much larger input accessible only through read access rather than being modifiable. The problems that can be solved by machines limited in this way define the complexity class L. It was unclear for many years whether connected components could be found in this model, when formalized as a decision problem of testing whether two vertices belong to the same component, and in 1982 a related complexity class, SL, was defined to include this connectivity problem and any other problem equivalent to it under logarithmic-space reductions. It was finally proven in 2008 that this connectivity problem can be solved in logarithmic space, and therefore that SL = L.
In a graph represented as an adjacency list, with random access to its vertices, it is possible to estimate the number of connected components, with constant probability of obtaining additive (absolute) error at most εn, in sublinear time.
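One way such estimators can work, sketched below in Python, relies on the identity that the number of components equals the sum over all vertices v of 1/|C(v)|, where C(v) is the component containing v; the sum is estimated by sampling vertices and truncating each search. This is a simplified illustration of the general idea, not the specific algorithm or running-time bound referred to above, and the function name and parameter choices are assumptions of the sketch.

import random
from collections import deque

def estimate_components(adjacency, eps, samples=None, seed=0):
    """Sampling-based estimate of the number of components of an undirected
    graph given as a dict of neighbour lists.  Each bounded breadth-first
    search is cut off after about 2/eps vertices, so a single sample costs
    only a bounded amount of work (illustrative sketch only)."""
    rng = random.Random(seed)
    vertices = list(adjacency)
    n = len(vertices)
    cap = max(1, int(2 / eps))               # truncation threshold for each search
    s = samples or max(1, int(4 / eps ** 2))  # number of sampled vertices (heuristic)
    total = 0.0
    for _ in range(s):
        start = rng.choice(vertices)
        seen = {start}
        queue = deque([start])
        while queue and len(seen) < cap:      # bounded breadth-first search
            v = queue.popleft()
            for w in adjacency[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        total += 1 / min(len(seen), cap)      # truncated contribution of this vertex
    return n * total / s                      # scale the sample mean up to all n vertices

graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"], "d": []}
print(estimate_components(graph, eps=0.5, samples=200))   # typically close to 2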
In random graphs
In random graphs the sizes of components are given by a random variable, which, in turn, depends on the specific model of how random graphs are chosen.
In the G(n, p) version of the Erdős–Rényi–Gilbert model, a graph on n vertices is generated by choosing randomly and independently for each pair of vertices whether to include an edge connecting that pair, with probability p of including an edge and probability 1 − p of leaving those two vertices without an edge connecting them. The connectivity of this model depends on p, and there are three ranges of p with very different behavior from each other. In the analysis below, all outcomes occur with high probability, meaning that the probability of the outcome is arbitrarily close to one for sufficiently large values of n. The analysis depends on a parameter ε, a positive constant independent of n that can be arbitrarily close to zero.
Subcritical
In this range of p, with p < (1 − ε)/n, all components are simple and very small. The largest component has logarithmic size. The graph is a pseudoforest. Most of its components are trees: the number of vertices in components that have cycles grows more slowly than any unbounded function of the number of vertices. Every tree of fixed size occurs linearly many times.
Critical
At the critical point p ≈ 1/n, the largest connected component has a number of vertices proportional to n^(2/3). There may exist several other large components; however, the total number of vertices in non-tree components is again proportional to n^(2/3).
Supercritical
In the supercritical range, with p > (1 + ε)/n, there is a single giant component containing a linear number of vertices. For large values of p its size approaches that of the whole graph: the giant component contains approximately yn vertices, where y is the positive solution to the equation e^(−pny) = 1 − y. The remaining components are small, with logarithmic size.
In the same model of random graphs, there will exist multiple connected components with high probability for values of p below a significantly higher threshold, p < (1 − ε)(ln n)/n, and a single connected component for values above the threshold, p > (1 + ε)(ln n)/n. This phenomenon is closely related to the coupon collector's problem: in order to be connected, a random graph needs enough edges for each vertex to be incident to at least one edge. More precisely, if random edges are added one by one to a graph, then with high probability the first edge whose addition connects the whole graph touches the last isolated vertex.
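These thresholds can be observed in a direct simulation. The brute-force Python sketch below samples one G(n, p) graph for several values of p and reports the fraction of vertices in the largest component; the choice of n, the specific probabilities, and the helper names are illustrative assumptions, and the quadratic edge-sampling loop is only practical for small n.

import math
import random
from collections import defaultdict, deque

def gnp_largest_fraction(n, p, seed=0):
    """Generate one G(n, p) graph and return the fraction of vertices in its
    largest component (brute-force: every vertex pair is sampled)."""
    rng = random.Random(seed)
    adj = defaultdict(list)
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    seen, best = set(), 0
    for s in range(n):
        if s in seen:
            continue
        queue, size = deque([s]), 0
        seen.add(s)
        while queue:
            x = queue.popleft()
            size += 1
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    queue.append(y)
        best = max(best, size)
    return best / n

n = 2000
for label, p in [("subcritical", 0.5 / n),
                 ("critical", 1.0 / n),
                 ("supercritical", 2.0 / n),
                 ("connectivity threshold", math.log(n) / n)]:
    # the reported fraction typically grows from well under 10% of the vertices
    # toward nearly the whole graph as p moves through these ranges
    print(label, round(gnp_largest_fraction(n, p), 3))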
For different models including the random subgraphs of grid graphs, the connected components are described by percolation theory. A key question in this theory is the existence of a percolation threshold, a critical probability above which a giant component (or infinite component) exists and below which it does not.
| Mathematics | Graph theory | null |
246267 | https://en.wikipedia.org/wiki/Magnesium%20sulfate | Magnesium sulfate | Magnesium sulfate or magnesium sulphate is a chemical compound, a salt with the formula MgSO4, consisting of magnesium cations Mg2+ (20.19% by mass) and sulfate anions SO42−. It is a white crystalline solid, soluble in water but not in ethanol.
Magnesium sulfate is usually encountered in the form of a hydrate MgSO4·nH2O, for various values of n between 1 and 11. The most common is the heptahydrate MgSO4·7H2O, known as Epsom salt, which is a household chemical with many traditional uses, including bath salts.
The main use of magnesium sulfate is in agriculture, to correct soils deficient in magnesium (an essential plant nutrient because of the role of magnesium in chlorophyll and photosynthesis). The monohydrate is favored for this use; by the mid 1970s, its production was 2.3 million tons per year. The anhydrous form and several hydrates occur in nature as minerals, and the salt is a significant component of the water from some springs.
Hydrates
Magnesium sulfate can crystallize as several hydrates, including:
Anhydrous, MgSO4; unstable in nature, hydrates to form epsomite.
Monohydrate, MgSO4·H2O; kieserite, monoclinic.
Monohydrate, MgSO4·H2O; triclinic.
MgSO4·1.25H2O or 4MgSO4·5H2O.
Dihydrate, MgSO4·2H2O; orthorhombic.
MgSO4·2.5H2O or 2MgSO4·5H2O.
Trihydrate, MgSO4·3H2O.
Tetrahydrate, MgSO4·4H2O; starkeyite, monoclinic.
Pentahydrate, MgSO4·5H2O; pentahydrite, triclinic.
Hexahydrate, MgSO4·6H2O; hexahydrite, monoclinic.
Heptahydrate, MgSO4·7H2O ("Epsom salt"); epsomite, orthorhombic.
Enneahydrate, MgSO4·9H2O, monoclinic.
Decahydrate, MgSO4·10H2O.
Undecahydrate, MgSO4·11H2O; meridianiite, triclinic.
As of 2017, the existence of the decahydrate apparently has not been confirmed.
All the hydrates lose water upon heating. Above 320 °C, only the anhydrous form is stable. It decomposes without melting at 1124 °C into magnesium oxide (MgO) and sulfur trioxide (SO3).
Heptahydrate
The heptahydrate takes its common name "Epsom salt" from a bitter saline spring in Epsom in Surrey, England, where the salt was produced from the springs that arise where the porous chalk of the North Downs meets the impervious London clay.
The heptahydrate readily loses one equivalent of water to form the hexahydrate.
It is a natural source of both magnesium and sulfur. Epsom salts are commonly used in bath salts, exfoliants, muscle relaxers and pain relievers. However, these cosmetic preparations differ from the Epsom salts used for gardening, as they contain added aromas and perfumes that are not suitable for plants.
Monohydrate
Magnesium sulfate monohydrate, or kieserite, can be prepared by heating the heptahydrate to 120 °C. Further heating to 250 °C gives anhydrous magnesium sulfate. Kieserite exhibits monoclinic symmetry at pressures below 2.7 GPa, above which it transforms to a phase with triclinic symmetry.
Undecahydrate
The undecahydrate MgSO4·11H2O, meridianiite, is stable at atmospheric pressure only below 2 °C. Above that temperature, it liquefies into a mix of solid heptahydrate and a saturated solution. It has a eutectic point with water at −3.9 °C and 17.3% (mass) of MgSO4. Large crystals can be obtained from solutions of the proper concentration kept at 0 °C for a few days.
At pressures of about 0.9 GPa and at 240 K, meridianiite decomposes into a mixture of ice VI and the enneahydrate MgSO4·9H2O.
Enneahydrate
The enneahydrate MgSO4·9H2O was identified and characterized only recently, even though it seems easy to produce (by cooling a solution of MgSO4 and sodium sulfate in suitable proportions).
The structure is monoclinic, with unit-cell parameters at 250 K: a = 0.675 nm, b = 1.195 nm, c = 1.465 nm, β = 95.1°, V = 1.177 nm3 with Z = 4. The most probable space group is P21/c. Magnesium selenate also forms an enneahydrate, MgSeO4·9H2O, but with a different crystal structure.
Natural occurrence
As Mg2+ and SO42− ions are respectively the second most abundant cation and anion present in seawater after Na+ and Cl−, magnesium sulfates are common minerals in geological environments. Their occurrence is mostly connected with supergene processes. Some of them are also important constituents of evaporitic potassium-magnesium (K-Mg) salt deposits.
Bright spots observed by the Dawn Spacecraft in Occator Crater on the dwarf planet Ceres are most consistent with reflected light from magnesium sulfate hexahydrate.
Almost all known mineralogical forms of MgSO4 are hydrates. Epsomite is the natural analogue of "Epsom salt". Meridianiite, MgSO4·11H2O, has been observed on the surface of frozen lakes and is thought to also occur on Mars. Hexahydrite is the next lower hydrate. Three next lower hydrates – pentahydrite, starkeyite, and especially sanderite – are rare. Kieserite is a monohydrate and is common among evaporitic deposits. Anhydrous magnesium sulfate was reported from some burning coal dumps.
Preparation
Magnesium sulfate is usually obtained directly from dry lake beds and other natural sources. It can also be prepared by reacting magnesite (magnesium carbonate, MgCO3) or magnesia (the oxide, MgO) with sulfuric acid (H2SO4):
MgCO3 + H2SO4 → MgSO4 + CO2 + H2O
MgO + H2SO4 → MgSO4 + H2O
Another possible method is to treat seawater or magnesium-containing industrial wastes so as to precipitate magnesium hydroxide and react the precipitate with sulfuric acid.
Also, magnesium sulfate heptahydrate (epsomite, MgSO4·7H2O) is manufactured by dissolution of magnesium sulfate monohydrate (kieserite, MgSO4·H2O) in water and subsequent crystallization of the heptahydrate.
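As a small stoichiometric illustration of the magnesia route above (not a procedure from the article; the 100 g input and the variable names are arbitrary assumptions of this sketch), the following Python lines estimate how much anhydrous salt and Epsom salt a given mass of MgO corresponds to:

# Illustrative stoichiometry for MgO + H2SO4 -> MgSO4 + H2O, using standard atomic masses.
M = {"Mg": 24.305, "S": 32.06, "O": 15.999, "H": 1.008}

m_MgO = M["Mg"] + M["O"]                          # about 40.30 g/mol
m_MgSO4 = M["Mg"] + M["S"] + 4 * M["O"]           # about 120.37 g/mol
m_hepta = m_MgSO4 + 7 * (2 * M["H"] + M["O"])     # MgSO4·7H2O, about 246.47 g/mol

grams_MgO = 100.0                                 # example amount of magnesia (assumed)
moles = grams_MgO / m_MgO                         # 1:1 mole ratio with MgSO4
print(f"{moles * m_MgSO4:.1f} g anhydrous MgSO4")              # about 298.7 g
print(f"{moles * m_hepta:.1f} g Epsom salt (heptahydrate)")    # about 611.6 g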
Physical properties
Magnesium sulfate relaxation is the primary mechanism that causes the absorption of sound in seawater at frequencies above 10 kHz (acoustic energy is converted to thermal energy). Lower frequencies are less absorbed by the salt, so that low frequency sound travels farther in the ocean. Boric acid and magnesium carbonate also contribute to absorption.
Uses
Medical
Magnesium sulfate is used both externally (as Epsom salt) and internally.
The main external use is the formulation as bath salts, especially for foot baths to soothe sore feet. Such baths have been claimed to also soothe and hasten recovery from muscle pain, soreness, or injury. Potential health effects of magnesium sulfate are reflected in medical studies on the impact of magnesium on resistant depression
and as an analgesic for migraine and chronic pain.
Magnesium sulfate has been studied in the treatment of asthma,
preeclampsia and eclampsia.
Magnesium sulfate is usually the main component of the concentrated salt solution used in isolation tanks to increase its specific gravity to approximately 1.25–1.26. This high density allows an individual to float effortlessly on the surface of water in the closed tank, eliminating stimulation of as many of the external senses as possible.
In the UK, a medication containing magnesium sulfate and phenol, called "drawing paste", is useful for small boils or localized infections and removing splinters.
Internally, magnesium sulfate may be administered by oral, respiratory, or intravenous routes. Internal uses include replacement therapy for magnesium deficiency, treatment of acute and severe arrhythmias, as a bronchodilator in the treatment of asthma, preventing eclampsia and cerebral palsy, a tocolytic agent, and as an anticonvulsant. The effectiveness and safety of magnesium sulfate for treating acute bronchiolitis in children under the age of 2 years old is not well understood.
It also may be used as a laxative.
Agriculture
In agriculture, magnesium sulfate is used to increase magnesium or sulfur content in soil. It is most commonly applied to potted plants, or to magnesium-hungry crops such as potatoes, tomatoes, carrots, peppers, lemons, and roses. The advantage of magnesium sulfate over other magnesium soil amendments (such as dolomitic lime) is its high solubility, which also allows the option of foliar feeding. Solutions of magnesium sulfate are also nearly pH neutral, compared with the slightly alkaline salts of magnesium found in limestone; therefore, the use of magnesium sulfate as a magnesium source for soil does not significantly change the soil pH. Contrary to popular belief, magnesium sulfate does not control pests or slugs, help seed germination, produce more flowers, improve nutrient uptake, or offer any particular environmental benefit; its only substantiated use is correcting magnesium deficiency in soils. Magnesium sulfate can even pollute water if used in excessive amounts.
Magnesium sulfate was historically used as a treatment for lead poisoning prior to the development of chelation therapy, as it was hoped that any lead ingested would be precipitated out by the magnesium sulfate and subsequently purged from the digestive system. This application saw particularly widespread use among veterinarians during the early-to-mid 20th century; Epsom salt was already available on many farms for agricultural use, and it was often prescribed in the treatment of farm animals that had inadvertently ingested lead.
Food preparation
Magnesium sulfate is used as:
Brewing salt in making beer
Coagulant for making tofu
Salt substitute
A food additive to add taste to bottled water.
Chemistry
Anhydrous magnesium sulfate is commonly used as a desiccant in organic synthesis owing to its affinity for water and compatibility with most organic compounds. During work-up, an organic phase is treated with anhydrous magnesium sulfate. The hydrated solid is then removed by filtration, decantation, or by distillation (if the boiling point is low enough). Other inorganic sulfate salts such as sodium sulfate and calcium sulfate may be used in the same way.
Construction
Magnesium sulfate is used to prepare specific cements by the reaction between magnesium oxide and magnesium sulfate solution; these cements have good binding ability and greater resistance than Portland cement. This cement is mainly utilized in the production of lightweight insulation panels, although its poor water resistance limits its usage.
Magnesium (or sodium) sulfate is also used for testing aggregates for soundness in accordance with ASTM C88 standard, when there are no service records of the material exposed to actual weathering conditions. The test is accomplished by repeated immersion in saturated solutions followed by oven drying to dehydrate the salt precipitated in permeable pore spaces. The internal expansive force, derived from the rehydration of the salt upon re-immersion, simulates the expansion of water on freezing.
Magnesium sulfate is also used to test the resistance of concrete to external sulfate attack (ESA).
Aquaria
Magnesium sulfate heptahydrate is also used to maintain the magnesium concentration in marine aquaria which contain large amounts of stony corals, as it is slowly depleted in their calcification process. In a magnesium-deficient marine aquarium, calcium and alkalinity concentrations are very difficult to control because not enough magnesium is present to stabilize these ions in the saltwater and prevent their spontaneous precipitation into calcium carbonate.
Double salts
Double salts containing magnesium sulfate exist. There are several known as sodium magnesium sulfates and potassium magnesium sulfates. A mixed copper-magnesium sulfate heptahydrate was found to occur in mine tailings and was given the mineral name alpersite.
| Physical sciences | Salts | null |
246275 | https://en.wikipedia.org/wiki/Earwig | Earwig | Earwigs make up the insect order Dermaptera. With about 2,000 species in 12 families, they are one of the smaller insect orders. Earwigs have characteristic cerci, a pair of forceps-like pincers on their abdomen, and membranous wings folded underneath short, rarely used forewings, hence the scientific order name, "skin wings". Some groups are tiny parasites on mammals and lack the typical pincers. Earwigs are found on all continents except Antarctica.
Earwigs are mostly nocturnal and often hide in small, moist crevices during the day, and are active at night, feeding on a wide variety of insects and plants. Damage to foliage, flowers, and various crops is commonly blamed on earwigs, especially the common earwig Forficula auricularia.
Earwigs have five molts in the year before they become adults. Many earwig species display maternal care, which is uncommon among insects. Female earwigs may care for their eggs; the ones that do will continue to watch over nymphs until their second molt. As the nymphs molt, sexual dimorphism such as differences in pincer shapes begins to show.
Extant Dermaptera belong to the suborder Neodermaptera, which first appeared during the Cretaceous. Some earwig specimen fossils are placed with extinct suborders Archidermaptera or Eodermaptera, the former dating to the Late Triassic and the latter to the Middle Jurassic. Dermaptera belongs to the major grouping Polyneoptera, and are amongst the earliest diverging members of the group, alongside angel insects (Zoraptera), and stoneflies (Plecoptera), but the exact relationship among the three groups is uncertain.
Etymology
The scientific name for the order, Dermaptera, is Greek in origin, stemming from the words derma, meaning "skin", and pteron (plural ptera), meaning "wing". It was coined by Charles De Geer in 1773. The common term, earwig, is derived from the Old English ēare, which means "ear", and wicga, which means "insect" or "beetle", or literally, "ear insect". Entomologists suggest that the origin of the name is a reference to the appearance of the hindwings, which are unique and distinctive among insects, and resemble a human ear when unfolded. The name is more popularly thought to be related to the old wives' tale that earwigs burrowed into the brains of humans through the ear and laid their eggs there. Earwigs are not known to purposefully climb into ear canals, but there has been at least one anecdotal report of earwigs being found in the ear.
Distribution
Earwigs are abundant and can be found throughout the Americas and Eurasia. The common earwig was introduced into North America in 1907 from Europe, but tends to be more common in the southern and southwestern parts of the United States. The only native species of earwig found in the north of the United States is the spine-tailed earwig (Doru aculeatum), found as far north as Canada, where it hides in the leaf axils of emerging plants in southern Ontario wetlands. However, other families can be found in North America, including Forficulidae (Doru and Forficula being found there), Spongiphoridae, Anisolabididae, and Labiduridae.
Few earwigs survive winter outdoors in cold climates. They can be found in tight crevices in woodland, fields and gardens. Out of about 1,800 species, about 25 occur in North America, 45 in Europe (including 7 in Great Britain), and 60 in Australia.
Morphology
Most earwigs are flattened (which allows them to fit inside tight crevices, such as under bark) with an elongated body generally long. The largest extant species is the Australian giant earwig (Titanolabis colossea) which is approximately long, while the possibly extinct (declared extinct in 2014) Saint Helena earwig (Labidura herculeana) reached . Earwigs are characterized by the cerci, or the pair of forceps-like pincers on their abdomen; male earwigs generally have more curved pincers than females. These pincers are used to capture prey, defend themselves and fold their wings under the short tegmina. The antennae are thread-like with at least 10 segments.
Males in the six families Karschiellidae, Pygidicranidae, Diplatyidae, Apachyidae, Anisolabididae and Labiduridae have paired penises, while the males in the remaining groups have a single penis. Both penises are symmetrical in Pygidicranidae and Diplatyidae, but in Karschiellidae the left one is strongly reduced. Apachyidae, Anisolabididae, and Labiduridae have an asymmetrical pair, with the left and right penises pointing in opposite directions when not in use. The females have just a single genital opening, so only one of the paired penises is ever used during copulation.
The forewings are short oblong leathery plates used to cover the hindwings like the elytra of a beetle, rather than to fly. Most species have short and leather-like forewings with very thin hindwings, though species in the former suborders Arixeniina and Hemimerina (epizoic species, sometimes considered as ectoparasites) are wingless and blind with filiform segmented cerci (today these are both included merely as families in the suborder Neodermaptera). The hindwing is a very thin membrane that expands like a fan, radiating from one point folded under the forewing. Even though most earwigs have wings and are capable of flight, they are rarely seen in flight. These wings are unique in venation and in the pattern of folding that requires the use of the cerci.
Internal
The neuroendocrine system is typical of insects. There is a brain, a subesophageal ganglion, three thoracic ganglia, and six abdominal ganglia. Strong neuron connections connect the neurohemal corpora cardiaca to the brain and frontal ganglion, where the closely related median corpus allatum produces juvenile hormone III in close proximity to the neurohemal dorsal aorta. The digestive system of earwigs is like that of all other insects, consisting of a fore-, mid-, and hindgut, but earwigs lack the gastric caeca that are specialized for digestion in many species of insect. Long, slender (excretory) Malpighian tubules can be found at the junction of the mid- and hindgut.
The reproductive system of females consists of paired ovaries, lateral oviducts, a spermatheca, and a genital chamber. The lateral ducts are where the eggs leave the body, while the spermatheca is where sperm is stored. Unlike in other insects, the gonopore, or genital opening, is behind the seventh abdominal segment. The ovaries are primitive in that they are polytrophic (the nurse cells and oocytes alternate along the length of the ovariole). In some species these long ovarioles branch off the lateral duct, while in others, short ovarioles appear around the duct.
Life cycle and reproduction
Earwigs are hemimetabolous, meaning they undergo incomplete metamorphosis, developing through a series of four to six molts. The developmental stages between molts are called instars. Earwigs live for about a year from hatching. They start mating in the autumn, and can be found together in the autumn and winter. The male and female will live in a chamber in debris, crevices, or soil. After mating, the sperm may remain in the female for months before the eggs are fertilized. From midwinter to early spring, the male will leave, or be driven out by the female. Afterward the female will begin to lay 20 to 80 pearly white eggs in two days. Some earwigs, those parasitic in the suborders Arixeniina and Hemimerina, are viviparous (give birth to live young); the developing young are nourished by a sort of placenta. When first laid, the eggs are white or cream-colored and oval-shaped, but right before hatching they become kidney-shaped and brown. Each egg is approximately tall and wide.
Earwigs are among the few non-social insect species that show maternal care. The mother pays close attention to the needs of her eggs, such as warmth and protection. She faithfully defends the eggs from predators, not leaving them even to eat unless the clutch goes bad. She also continuously cleans the eggs to protect them from fungi. Studies have found that the urge to clean the eggs persists for only a few days after they are removed, and does not return even if the eggs are replaced; however, when the eggs were continuously replaced after hatching, the mother continued to clean the new eggs for up to three months.
Studies have also shown that the mother does not immediately recognize her own eggs. After laying them, she gathers them together, and studies have found mothers to pick up small egg-shaped wax balls or stones by accident. Eventually, the impostor eggs were rejected for not having the proper scent.
The eggs hatch in about seven days. The mother may assist the nymphs in hatching. When the nymphs hatch, they eat the egg casing and continue to live with the mother. The nymphs look similar to their parents, only smaller, and will nest under their mother and she will continue to protect them until their second molt. The nymphs feed on food regurgitated by the mother, and on their own molts. If the mother dies before the nymphs are ready to leave, the nymphs may eat her.
After five to six instars, the nymphs will molt into adults. The male's forceps will become curved, while the females' forceps remain straight. They will also develop their natural color, which can be anything from a light brown (as in the tawny earwig) to a dark black (as in the ringlegged earwig). In species of winged earwigs, the wings will start to develop at this time. The forewings of an earwig are sclerotized to serve as protection for the membranous hindwings.
Behavior
Most earwigs are nocturnal and inhabit small crevices, living among small amounts of debris in various forms, such as bark and fallen logs. Some species are blind and cavernicolous (cave-dwelling); such earwigs have been reported from the island of Hawaii and from South Africa. Food typically consists of a wide array of living and dead plant and animal matter. For protection from predators, the earwig Doru taeniatum can squirt a foul-smelling yellow liquid in jets from scent glands on the dorsal side of the third and fourth abdominal segments. It aims the discharges by revolving the abdomen, a maneuver that enables it simultaneously to use its pincers in defense.
Under exceptional circumstances, earwigs form swarms and can take over significant areas of a district. In August 1755 they appeared in vast numbers near Stroud, Gloucestershire, UK, especially in the cracks and crevices of "old wooden buildings...so that they dropped out oftentimes in such multitudes as to literally cover the floor". A similar "plague" occurred in 2006, in and around a woodland cabin near the Blue Ridge Mountains of the eastern United States; it persisted through winter and lasted at least two years.
Ecology
Earwigs are mostly scavengers, but some are omnivorous or predatory. The abdomen of the earwig is flexible and muscular. It is capable of maneuvering as well as opening and closing the forceps. The forceps are used for a variety of purposes. In some species, the forceps have been observed in use for holding prey, and in copulation. The forceps tend to be more curved in males than in females.
The common earwig is an omnivore, eating plants and ripe fruit as well as actively hunting arthropods. To a large extent, this species is also a scavenger, feeding on decaying plant and animal matter if given the chance. Observed prey include largely plant lice, but also large insects such as bluebottle flies and woolly aphids. Plants that they feed on typically include clover, dahlias, zinnias, butterfly bush, hollyhock, lettuce, cauliflower, strawberry, blackberry, sunflowers, celery, peaches, plums, grapes, potatoes, roses, seedling beans and beets, and tender grass shoots and roots; they have also been known to eat corn silk, damaging the crop.
Species of the suborders Arixeniina and Hemimerina are generally considered epizoic, or living on the outside of other animals, mainly mammals. In the Arixeniina, family Arixeniidae, species of the genus Arixenia are normally found deep in the skin folds and gular pouch of Malaysian hairless bulldog bats (Cheiromeles torquatus), apparently feeding on bats' body or glandular secretions. On the other hand, species in the genus Xeniaria (still of the suborder Arixeniina) are believed to feed on the guano and possibly the guanophilous arthropods in the bat's roost, where it has been found. Hemimerina includes Araeomerus found in the nest of long-tailed pouch rats (Beamys), and Hemimerus which are found on giant Cricetomys rats.
Earwigs are generally nocturnal, and typically hide in small, dark, and often moist areas in the daytime. They can usually be seen on household walls and ceilings. Interaction with earwigs at this time results in a defensive free-fall to the ground followed by a scramble to a nearby cleft or crevice. During the summer they can be found around damp areas such as near sinks and in bathrooms. Earwigs tend to gather in shady cracks or openings or anywhere that they can remain concealed during daylight. Picnic tables, compost and waste bins, patios, lawn furniture, window frames, or anything with minute spaces (even artichoke blossoms) can potentially harbour them.
Predators and parasites
Earwigs are regularly preyed upon by birds, and like many other insect species they are prey for insectivorous mammals, amphibians, lizards, centipedes, assassin bugs, and spiders. European naturalists have observed bats preying upon earwigs. Their primary insect predators are parasitic species of Tachinidae, or tachinid flies, whose larvae are endoparasites. One species of tachinid fly, Triarthria setipennis, has been demonstrated to be successful as a biological control of earwigs for almost a century. Another tachinid fly and parasite of earwigs, Ocytata pallipes, has shown promise as a biological control agent as well. The common predatory wasp, the yellow jacket (Vespula maculifrons), preys upon earwigs when abundant. A small species of roundworm, Mermis nigrescens, is known to occasionally parasitize earwigs that have consumed roundworm eggs with plant matter. At least 26 species of parasitic fungus from the order Laboulbeniales have been found on earwigs. The eggs and nymphs are also cannibalized by other earwigs. A species of tyroglyphoid mite, Histiostoma polypori (Histiostomatidae, Astigmata), are observed on common earwigs, sometimes in great densities; however, this mite feeds on earwig cadavers and not its live earwig transportation. Hippolyte Lucas observed scarlet acarine mites on European earwigs.
Evolution
The fossil record of the Dermaptera starts in the Late Triassic to Early Jurassic period in England and Australia, and comprises about 70 specimens in the extinct suborder Archidermaptera. Some of the traits believed by neontologists to belong to modern earwigs are not found in the earliest fossils, but adults had five-segmented tarsi (the final segment of the leg), well developed ovipositors, veined tegmina (forewings) and long segmented cerci; in fact the pincers would not have been curled or used as they are now. The theorized stem group of the Dermaptera are the Protelytroptera, which resemble modern Blattodea (cockroaches) in having shell-like forewings and a large, unequal anal fan; they are known from the Permian of North America, Europe and Australia. No fossils from the Triassic, during which Dermaptera would have evolved from Protelytroptera, have been found. The order of insects most frequently suggested as the closest relative of Dermaptera is Notoptera, a relationship proposed by Giles in 1963. However, other authors have argued for links to Phasmatodea, Embioptera, Plecoptera, and Dictyoptera. A 2012 mitochondrial DNA study suggested that this order is the sister to stoneflies of the order Plecoptera. A 2018 phylogenetic analysis found that their closest living relatives were angel insects of the order Zoraptera, with very high support.
Archidermaptera is believed to be sister to the remaining earwig groups, the extinct Eodermaptera and the living suborder Neodermaptera (= former suborders Forficulina, Hemimerina, and Arixeniina). The extinct suborders have tarsi with five segments (unlike the three found in Neodermaptera) as well as unsegmented cerci. No fossil Hemimeridae and Arixeniidae are known. Species in Hemimeridae were at one time placed in their own order, called Diploglossata, Dermodermaptera, or Hemimerina. Like most other epizoic species, they have no fossil record, but they are probably no older than late Tertiary.
Some evidence of early evolutionary history is the structure of the antennal heart, a separate circulatory organ consisting of two ampullae, or vesicles, that are attached to the frontal cuticle near the bases of the antennae. These features have not been found in other insects. An independent organ exists for each antenna, consisting of an ampulla, attached to the frontal cuticle medial to the antenna base and forming a thin-walled sac with a valved ostium on its ventral side. They pump blood by elastic connective tissue, rather than muscle.
Taxonomy
Distinguishing characteristics
The characteristics which distinguish the order Dermaptera from other insect orders are:
General body shape: Elongate; dorso-ventrally flattened.
Head: Prognathous. Antennae are segmented. Biting-type mouthparts. Ocelli absent. Compound eyes in most species, reduced or absent in some taxa.
Appendages: Two pairs of wings normally present. The forewings are modified into short smooth, veinless tegmina. Hindwings are membranous and semicircular with veins radiating outwards.
Abdomen: Cerci are unsegmented and resemble forceps. The ovipositor in females is reduced or absent.
The overwhelming majority of earwig species are in the former suborder Forficulina, grouped into nine families of 180 genera, including Forficula auricularia, the common European Earwig. Species within Forficulina are free-living, have functional wings and are not parasites. The cerci are unsegmented and modified into large, forceps-like structures.
The first epizoic species of earwig was discovered by a London taxidermist on the body of a Malaysian hairless bulldog bat in 1909, then described by Karl Jordan. By the 1950s, the two suborders Arixeniina and Hemimerina had been added to Dermaptera. These were subsequently demoted to family Arixeniidae and superfamily Hemimeroidea (with sole family Hemimeridae), respectively. They are now grouped together with the former Forficulina in the new suborder Neodermaptera.
Arixeniidae represents two genera, Arixenia and Xeniaria, with a total of five species in them. As with Hemimeridae, they are blind and wingless, with filiform segmented cerci. Hemimeridae are viviparous ectoparasites, preferring the fur of African rodents in either Cricetomys or Beamys genera. Hemimerina also has two genera, Hemimerus and Araeomerus, with a total of 11 species.
Phylogeny
Dermaptera is relatively small compared to the other orders of Insecta, with only about 2,000 species, 3 suborders and 15 families, including the extinct suborders Archidermaptera and Eodermaptera with their extinct families Protodiplatyidae, Dermapteridae, Semenoviolidae, and Turanodermatidae. The phylogeny of the Dermaptera is still debated. The extant Dermaptera appear to be monophyletic and there is support for the monophyly of the families Forficulidae, Chelisochidae, Labiduridae and Anisolabididae, however evidence has supported the conclusion that the former suborder Forficulina was paraphyletic through the exclusion of Hemimerina and Arixeniina which should instead be nested within the Forficulina. Thus, these former suborders were eliminated in the most recent higher classification.
Relationship with humans
Earwigs are fairly abundant and are found in many areas around the world. There is no evidence that they transmit diseases to humans or other animals. Their pincers are commonly believed to be dangerous, but in reality, even the curved pincers of males cause little or no harm to humans. Earwigs have been rarely known to crawl into the ears of humans, and they do not lay eggs inside the human body or human brain as is often claimed.
There is debate about whether earwigs are harmful or beneficial to crops, as they eat both the foliage and the insects that eat such foliage, such as aphids, though it would take a large population to do considerable damage. The common earwig eats a wide variety of plants, and also a wide variety of foliage, including the leaves and petals. They have been known to cause economic losses in fruit and vegetable crops. Some examples are the flower, hop, red raspberry, and corn crops in Germany, and in the south of France, earwigs have been observed feeding on peaches and apricots, attacking mature plants and leaving cup-shaped bite marks.
In literature and folklore
One of the primary characters of James Joyce's experimental novel Finnegans Wake is referred to by the initials "HCE," which primarily stand for "Humphrey Chimpden Earwicker," a reference to earwigs. Earwig imagery is found throughout the book, and also occurs in the author's Ulysses in the Laestrygonians chapter.
Oscar Cook wrote the short story "Boomerang" (appearing in Switch On The Light, April 1931; A Century Of Creepy Stories, 1934; and Pan Horror 2, 1960), which was later adapted by Rod Serling for the Night Gallery TV series episode "The Caterpillar". It tells the tale of an earwig used as a murder instrument by a man obsessed with the wife of an associate.
Thomas Hood discusses the myth of earwigs finding shelter in the human ear in the poem "Love Lane" by saying the following: 'Tis vain to talk of hopes and fears, / And hope the least reply to win, / From any maid that stops her ears / In dread of earwigs creeping in!"
In some parts of rural England the earwig is called "battle-twig", which is present in Alfred, Lord Tennyson's poem The Spinster's Sweet-Arts: 'Twur as bad as battle-twig 'ere i' my oan blue chamber to me."
In some regions of Japan, earwigs are called "Chinpo-Basami" or "Chinpo-Kiri", which means "penis cutter". Kenta Takada, a Japanese cultural entomologist, has inferred that these names may be derived from the fact that earwigs were seen around old Japanese-style toilets.
In Roald Dahl's children's book George's Marvellous Medicine, George's Grandma encourages him to eat unwashed celery with beetles and earwigs still on them. "A big fat earwig is very tasty," Grandma said, licking her lips. "But you've got to be very quick, my dear, when you put one of those in your mouth. It has a pair of sharp nippers on its back end and if it grabs your tongue with those, it never lets go. So you've got to bite the earwig first, chop chop, before it bites you."
| Biology and health sciences | Insects and other hexapods | null |
246333 | https://en.wikipedia.org/wiki/Mayfly | Mayfly | Mayflies (also known as shadflies or fishflies in Canada and the upper Midwestern United States, as Canadian soldiers in the American Great Lakes region, and as up-winged flies in the United Kingdom) are aquatic insects belonging to the order Ephemeroptera. This order is part of an ancient group of insects termed the Palaeoptera, which also contains dragonflies and damselflies. Over 3,000 species of mayfly are known worldwide, grouped into over 400 genera in 42 families.
Mayflies have ancestral traits that were probably present in the first flying insects, such as long tails and wings that do not fold flat over the abdomen. Their immature stages are aquatic fresh water forms (called "naiads" or "nymphs"), whose presence indicates a clean, unpolluted and highly oxygenated aquatic environment. They are unique among insect orders in having a fully winged terrestrial preadult stage, the subimago, which moults into a sexually mature adult, the imago.
Mayflies "hatch" (emerge as adults) from spring to autumn, not necessarily in May, in enormous numbers. Some hatches attract tourists. Fly fishermen make use of mayfly hatches by choosing artificial fishing flies that resemble them. One of the most famous English mayflies is Rhithrogena germanica, the fisherman's "March brown mayfly".
The brief lives of mayfly adults have been noted by naturalists and encyclopaedists since Aristotle and Pliny the Elder in classical antiquity. The German engraver Albrecht Dürer included a mayfly in his 1495 engraving The Holy Family with the Mayfly to suggest a link between heaven and earth. The English poet George Crabbe compared the brief life of a daily newspaper with that of a mayfly in the satirical poem "The Newspaper" (1785), both being known as "ephemera".
Description
Nymph
Immature mayflies are aquatic and are referred to as nymphs or naiads. In contrast to their short lives as adults, they may live for several years in the water. They have an elongated, cylindrical or somewhat flattened body that passes through a number of instars (stages), moulting and increasing in size each time. Nymphs ready to emerge from the water vary in length, depending on the species. The head has a tough outer covering of sclerotin, often with various hard ridges and projections; it points either forwards or downwards, with the mouth at the front. There are two large compound eyes, three ocelli (simple eyes) and a pair of antennae of variable lengths, set between or in front of the eyes. The mouthparts are designed for chewing and consist of a flap-like labrum, a pair of strong mandibles, a pair of maxillae, a membranous hypopharynx and a labium.
The thorax consists of three segments – the hindmost two, the mesothorax and metathorax, being fused. Each segment bears a pair of legs which usually terminate in a single claw. The legs are robust and often clad in bristles, hairs or spines. Wing pads develop on the mesothorax, and in some species, hindwing pads develop on the metathorax.
The abdomen consists of ten segments, some of which may be obscured by a large pair of operculate gills, a thoracic shield (expanded part of the prothorax) or the developing wing pads. In most taxa up to seven pairs of gills arise from the top or sides of the abdomen, but in some species they are under the abdomen, and in a very few species the gills are instead located on the coxae of the legs, or the bases of the maxillae. The abdomen terminates in slender thread-like projections, consisting of a pair of cerci, with or without a third central caudal filament.
Subimago
The final moult of the nymph is not to the full adult form, but to a winged stage called a subimago that physically resembles the adult, but which is usually sexually immature and duller in colour. The subimago, or dun, often has partially cloudy wings fringed with minute hairs known as microtrichia; its eyes, legs and genitalia are not fully developed. Females of some mayflies (subfamily Palingeniinae) do not moult from a subimago state into an adult stage and are sexually mature while appearing like a subimago with microtrichia on the wing membrane. Oligoneuriine mayflies form another exception in retaining microtrichia on their wings but not on their bodies. Subimagos are generally poor fliers, have shorter appendages, and typically lack the colour patterns used to attract mates. In males of Ephoron leukon, the subimago has forelegs that are short and compressed, with accordion-like folds, and these expand to more than double their length after moulting. After a period, usually lasting one or two days but in some species only a few minutes, the subimago moults to the full adult form, making mayflies the only insects in which a winged form undergoes a further moult.
Imago
Adult mayflies, or imagos, are relatively primitive in structure, exhibiting traits that were probably present in the first flying insects. These include long tails and wings that do not fold flat over the abdomen. Mayflies are delicate-looking insects with one or two pairs of membranous, triangular wings, which are extensively covered with veins. At rest, the wings are held upright, like those of a butterfly. The hind wings are much smaller than the forewings and may be vestigial or absent. The second segment of the thorax, which bears the forewings, is enlarged to hold the main flight muscles. Adults have short, flexible antennae, large compound eyes, three ocelli and non-functional mouthparts. In most species, the males' eyes are large and the front legs unusually long, for use in locating and grasping females during the mid-air mating. In the males of some families, there are two large cylindrical "turban" eyes (also known as turbanate or turbinate eyes) that face upwards in addition to the lateral eyes. They are capable of detecting ultraviolet light and are thought to be used during courtship to detect females flying above them. In some species all the legs are functionless, apart from the front pair in males. The abdomen is long and roughly cylindrical, with ten segments and two or three long cerci (tail-like appendages) at the tip. Like Entognatha, Archaeognatha and Zygentoma, the spiracles on the abdomen do not have closing muscles. Uniquely among insects, mayflies possess paired genitalia, with the male having two aedeagi (penis-like organs) and the female two gonopores (sexual openings).
Biology
Reproduction and life cycle
Mayflies are hemimetabolous (they have "incomplete metamorphosis"). They are unique among insects in that they moult one more time after acquiring functional wings; this last-but-one winged (alate) instar usually lives a very short time and is known as a subimago, or to fly fishermen as a dun. Mayflies at the subimago stage are a favourite food of many fish, and many fishing flies are modelled to resemble them. The subimago stage does not survive for long, rarely for more than 24 hours. In some species, it may last for just a few minutes, while the mayflies in the family Palingeniidae have sexually mature subimagos and no true adult form at all.
Often, all the individuals in a population mature at once (a hatch), and for a day or two in the spring or autumn, mayflies are extremely abundant, dancing around each other in large groups, or resting on every available surface. In many species the emergence is synchronised with dawn or dusk, and light intensity seems to be an important cue for emergence, but other factors may also be involved. Baetis intercalaris, for example, usually emerges just after sunset in July and August, but in one year, a large hatch was observed at midday in June. The soft-bodied subimagos are very attractive to predators. Synchronous emergence is probably an adaptive strategy that reduces the individual's risk of being eaten. The lifespan of an adult mayfly is very short, varying with the species. The primary function of the adult is reproduction; adults do not feed and have only vestigial mouthparts, while their digestive systems are filled with air. Dolania americana has the shortest adult lifespan of any mayfly: the adult females of the species live for less than five minutes.
Male adults may patrol individually, but most congregate in swarms a few metres above water with clear open sky above it, and perform a nuptial or courtship dance. Each insect has a characteristic up-and-down pattern of movement; strong wingbeats propel it upwards and forwards with the tail sloping down; when it stops moving its wings, it falls passively with the abdomen tilted upwards. Females fly into these swarms, and mating takes place in the air. A rising male clasps the thorax of a female from below using his front legs bent upwards, and inseminates her. Copulation may last just a few seconds, but occasionally a pair remains in tandem and flutters to the ground. Males may spend the night in vegetation and return to their dance the following day. Although they do not feed, some briefly touch the surface to drink a little water before flying off.
Females typically lay between four hundred and three thousand eggs. The eggs are often dropped onto the surface of the water; sometimes the female deposits them by dipping the tip of her abdomen into the water during flight, releasing a small batch of eggs each time, or deposits them in bulk while standing next to the water. In a few species, the female submerges and places the eggs among plants or in crevices underwater, but in general, they sink to the bottom. The incubation time is variable, depending at least in part on temperature, and may be anything from a few days to nearly a year. Eggs can go into a quiet dormant phase or diapause. The larval growth rate is also temperature-dependent, as is the number of moults. At anywhere between ten and fifty, these post-embryonic moults are more numerous in mayflies than in most other insect orders. The nymphal stage of mayflies may last from several months to several years, depending on species and environmental conditions.
Around half of all mayfly species whose reproductive biology has been described are parthenogenetic (able to asexually reproduce), including both partially and exclusively parthenogenetic populations and species.
Many species breed in moving water, where there is a tendency for the eggs and nymphs to get washed downstream. To counteract this, females may fly upriver before depositing their eggs. For example, the female Tisza mayfly, the largest European species, flies a considerable distance upstream before depositing eggs on the water surface. These sink to the bottom and hatch after 45 days, the nymphs burrowing their way into the sediment where they spend two or three years before hatching into subimagos.
When ready to emerge, several different strategies are used. In some species, the transformation of the nymph occurs underwater and the subimago swims to the surface and launches itself into the air. In other species, the nymph rises to the surface, bursts out of its skin, remains quiescent for a minute or two resting on the exuviae (cast skin) and then flies upwards, and in some, the nymph climbs out of the water before transforming.
Ecology
Nymphs live primarily in streams under rocks, in decaying vegetation or in sediments. Few species live in lakes, but they are among the most prolific. For example, the emergence of one species of Hexagenia was recorded on Doppler weather radar by the shoreline of Lake Erie in 2003. In the nymphs of most mayfly species, the paddle-like gills do not function as respiratory surfaces because sufficient oxygen is absorbed through the integument, instead serving to create a respiratory current. However, in low-oxygen environments such as the mud at the bottom of ponds in which Ephemera vulgata burrows, the filamentous gills act as true accessory respiratory organs and are used in gaseous exchange.
In most species, the nymphs are herbivores or detritivores, feeding on algae, diatoms or detritus, but in a few species, they are predators of chironomid and other small insect larvae and nymphs. Nymphs of Povilla burrow into submerged wood and can be a problem for boat owners in Asia. Some are able to shift from one feeding group to another as they grow, thus enabling them to utilise a variety of food resources. They process a great quantity of organic matter as nymphs and transfer a lot of phosphates and nitrates to terrestrial environments when they emerge from the water, thus helping to remove pollutants from aqueous systems. Along with caddisfly larvae and gastropod molluscs, the grazing of mayfly nymphs has a significant impact on the primary producers, the plants and algae, on the bed of streams and rivers.
The nymphs are eaten by a wide range of predators and form an important part of the aquatic food chain. Fish are among the main predators, picking nymphs off the bottom or ingesting them in the water column, and feeding on emerging nymphs and adults on the water surface. Carnivorous stonefly, caddisfly, alderfly and dragonfly larvae feed on bottom-dwelling mayfly nymphs, as do aquatic beetles, leeches, crayfish and amphibians. Besides the direct mortality caused by these predators, the behaviour of their potential prey is also affected, with the nymphs' growth rate being slowed by the need to hide rather than feed. The nymphs are highly susceptible to pollution and can be useful in the biomonitoring of water bodies. Once they have emerged, large numbers are preyed on by birds, bats and by other insects, such as Rhamphomyia longicauda.
Mayfly nymphs may serve as hosts for parasites such as nematodes and trematodes. Some of these affect the nymphs' behaviour in such a way that they become more likely to be predated. Other nematodes turn adult male mayflies into quasi-females which haunt the edges of streams, enabling the parasites to break their way out into the aqueous environment they need to complete their life cycles. The nymphs can also serve as intermediate hosts for the horsehair worm Paragordius varius, which causes its definitive host, a grasshopper, to jump into water and drown.
Effects on ecosystem functioning
Mayflies are involved in both primary production and bioturbation. A study in laboratory-simulated streams revealed that the mayfly genus Centroptilum increased the export of periphyton, thus indirectly affecting primary production positively, which is an essential process for ecosystems. Mayflies can also reallocate and alter nutrient availability in aquatic habitats through the process of bioturbation. By burrowing in the bottom of lakes and redistributing nutrients, mayflies indirectly regulate phytoplankton and epibenthic primary production. Once burrowed into the lake bottom, mayfly nymphs begin to billow their respiratory gills. This motion creates a current that carries food particles through the burrow and allows the nymph to filter feed. Other mayfly nymphs possess elaborate filter-feeding mechanisms, like those of the genus Isonychia: the nymphs have forelegs bearing long bristle-like structures with two rows of hairs, and the interlocking hairs form the filter with which the insect traps food particles. Filter feeding has a minor impact on water purification but a larger impact on the convergence of small particulate matter into more complex forms that go on to benefit consumers later in the food chain.
Distribution
Mayflies are distributed all over the world in clean freshwater habitats, though absent from Antarctica. They tend to be absent from oceanic islands or represented by one or two species that have dispersed from nearby mainland. Female mayflies may be dispersed by wind, and eggs may be transferred by adhesion to the legs of waterbirds. The greatest generic diversity is found in the Neotropical realm, while the Holarctic has a smaller number of genera but a high degree of speciation. Some thirteen families are restricted to a single bioregion. The main families have some general habitat preferences: the Baetidae favour warm water; the Heptageniidae live under stones and prefer fast-flowing water; and the relatively large Ephemeridae make burrows in sandy lake or river beds.
Conservation
The nymph is the dominant life history stage of the mayfly. Different insect species vary in their tolerance to water pollution, but in general, the larval stages of mayflies, stoneflies (Plecoptera) and caddisflies (Trichoptera) are susceptible to a number of pollutants including sewage, pesticides and industrial effluent. In general, mayflies are particularly sensitive to acidification, but tolerances vary, and certain species are exceptionally tolerant to heavy metal contamination and to low pH levels. Ephemerellidae are among the most tolerant groups and Siphlonuridae and Caenidae the least. The adverse effects of pollution on the insects may be either lethal or sub-lethal, in the latter case resulting in altered enzyme function, poor growth, changed behaviour or lack of reproductive success. Because mayflies are important parts of the food chain, pollution can cause knock-on effects for other organisms: a dearth of herbivorous nymphs can cause overgrowth of algae, and a scarcity of predacious nymphs can result in an over-abundance of their prey species. Fish that feed on mayfly nymphs that have bioaccumulated heavy metals are themselves at risk. Adult female mayflies find water by detecting the polarization of reflected light, and are easily fooled by other polished surfaces, which can act as traps for swarming mayflies.
The threat to mayflies applies also to their eggs. "Modest levels" of pollution in rivers in England are sufficient to kill 80% of mayfly eggs, which are as vulnerable to pollutants as other life-cycle stages; numbers of the blue-winged olive mayfly (Baetis) have fallen dramatically, almost to none in some rivers. The major pollutants thought to be responsible are fine sediment and phosphate from agriculture and sewage.
The status of many species of mayflies is unknown because they are known from only the original collection data. Four North American species are believed to be extinct. Among these, Pentagenia robusta was originally collected from the Ohio River near Cincinnati, but this species has not been seen since its original collection in the 1800s. Ephemera compar is known from a single specimen, collected from the "foothills of Colorado" in 1873, but despite intensive surveys of the Colorado mayflies reported in 1984, it has not been rediscovered.
The International Union for Conservation of Nature (IUCN) red list of threatened species includes one mayfly: Tasmanophlebia lacuscoerulei, the large blue lake mayfly, which is a native of Australia and is listed as endangered because its alpine habitat is vulnerable to climate change.
Taxonomy and phylogeny
Ephemeroptera was defined by Alpheus Hyatt and Jennie Maria Arms Sheldon in 1890–1. The taxonomy of the Ephemeroptera was reworked by George F. Edmunds and Jay R. Traver, starting in 1954. Traver contributed to the 1935 work The Biology of Mayflies, and has been called "the first Ephemeroptera specialist in North America".
As of 2012, over 3,000 species of mayfly in 42 families and over 400 genera are known worldwide, including about 630 species in North America. Mayflies are an ancient group of winged (pterygote) insects. Putative fossil stem group representatives (e.g. the Syntonopteroidea-like Lithoneura lameerei) are already known from the late Carboniferous. The name Ephemeroptera is from the Greek ἐφήμερος, "short-lived" (literally "lasting a day", cf. English "ephemeral"), and πτερόν (pteron), "wing", referring to the brief lifespan of adults. The English common name refers to the insect's emergence in or around the month of May in the UK. The name shadfly is from the Atlantic fish the shad, which runs up American East Coast rivers at the same time as many mayflies emerge.
From the Permian, numerous stem group representatives of mayflies are known; these are often lumped into a separate taxon, Permoplectoptera (including, for example, Protereisma permianum in the Protereismatidae, and the Misthodotidae). The larvae of Permoplectoptera still had nine pairs of abdominal gills, and the adults still had long hindwings. The fossil family Cretereismatidae from the Lower Cretaceous Crato Formation of Brazil may also belong to Permoplectoptera as its last offshoot. The Crato outcrops otherwise yielded fossil specimens of modern mayfly families or of the extinct (but modern) family Hexagenitidae. However, from the same locality the strange larvae and adults of the extinct family Mickoleitiidae (order Coxoplectoptera) have been described, which represents the fossil sister group of modern mayflies, even though they had very peculiar adaptations such as raptorial forelegs.
The oldest mayfly inclusion in amber is Cretoneta zherichini (Leptophlebiidae) from the Lower Cretaceous of Siberia. In the much younger Baltic amber numerous inclusions of several modern families of mayflies have been found (Ephemeridae, Potamanthidae, Leptophlebiidae, Ametropodidae, Siphlonuridae, Isonychiidae, Heptageniidae, and Ephemerellidae). The modern genus Neoephemera is represented in the fossil record by the Ypresian species N. antiqua from Washington state.
Grimaldi and Engel, reviewing the phylogeny in 2005, commented that many cladistic studies had been made with no stability in Ephemeroptera suborders and infraorders; the traditional division into Schistonota and Pannota was wrong because Pannota is derived from the Schistonota.
The phylogeny of the Ephemeroptera was first studied using molecular analysis by Ogden and Whiting in 2005. They recovered the Baetidae as sister to the other clades.
Mayfly phylogeny was further studied using morphological and molecular analyses by Ogden and others in 2009. They found that the Asian genus Siphluriscus was sister to all other mayflies. Some existing lineages such as Ephemeroidea, and families such as Ameletopsidae, were found not to be monophyletic, through convergence among nymphal features.
The following traditional classification, with two suborders Pannota and Schistonota, was introduced in 1979 by W. P. McCafferty and George F. Edmunds. The list is based on Peters and Campbell (1991), in Insects of Australia.
Suborder Pannota
Superfamily Ephemerelloidea
Ephemerellidae
Leptohyphidae
Tricorythidae
Superfamily Caenoidea
Neoephemeridae
Baetiscidae
Caenidae
Prosopistomatidae
Suborder Schistonota
Superfamily Baetoidea
Siphlonuridae
Baetidae
Oniscigastridae
Ameletopsidae
Ametropodidae
Superfamily Heptagenioidea
Coloburiscidae
Oligoneuriidae
Isonychiidae
Heptageniidae
Superfamily Leptophlebioidea
Leptophlebiidae
Superfamily Ephemeroidea
Behningiidae
Potamanthidae
Euthyplociidae
Polymitarcyidae
Ephemeridae
Palingeniidae
Phylogeny
In human culture
In art
The Dutch Golden Age author Augerius Clutius (Outgert Cluyt) illustrated some mayflies in his 1634 De Hemerobio ("On the Mayfly"), the earliest book written on the group. Maerten de Vos similarly illustrated a mayfly in his 1587 depiction of the fifth day of creation, amongst an assortment of fish and water birds.
In 1495 Albrecht Dürer included a mayfly in his engraving The Holy Family with the Mayfly. The critics Larry Silver and Pamela H. Smith argue that the image provides "an explicit link between heaven and earth ... to suggest a cosmic resonance between sacred and profane, celestial and terrestrial, macrocosm and microcosm."
In literature
The Ancient Greek biologist and philosopher Aristotle wrote in his History of Animals that
The Ancient Roman encyclopaedist Pliny the Elder described the mayfly as the "hemerobius" in his Natural History:
The Roman lawyer Cicero wrote philosophically of them in his Tusculan Disputations:
In his 1789 book The Natural History and Antiquities of Selborne, Gilbert White described in the entry for "June 10th, 1771" how
The mayfly has come to symbolise the transitoriness and brevity of life. The English poet George Crabbe, known to have been interested in insects, compared the brief life of a newspaper with that of mayflies, both being known as "Ephemera", things that live for a day:
The theme of brief life is echoed in the artist Douglas Florian's 1998 poem, "The Mayfly". The American Poet Laureate Richard Wilbur's 2005 poem "Mayflies" includes the lines "I saw from unseen pools a mist of flies, In their quadrillions rise, And animate a ragged patch of glow, With sudden glittering".
Another literary reference to mayflies is seen in The Epic of Gilgamesh, one of the earliest surviving great works of literature, in which the briefness of Gilgamesh's life is compared to that of the adult mayfly. In Szeged, Hungary, mayflies are celebrated in a monument near the Belvárosi bridge, the work of local sculptor Pál Farkas, depicting the courtship dance of mayflies. The American playwright David Ives wrote a short comedic play, Time Flies, in 2001, about what two mayflies might discuss during their one day of existence.
In fly fishing
Mayflies are the primary source of models for artificial flies, hooks tied with coloured materials such as threads and feathers, used in fly fishing. These are based on different life-cycle stages of mayflies. For example, the flies known as "emergers" in North America are designed by fly fishermen to resemble subimago mayflies, and are intended to lure freshwater trout. In 1983, Patrick McCafferty recorded that artificial flies had been based on 36 genera of North American mayfly, from a total of 63 western species and 103 eastern/central species. A large number of these species have common names among fly fishermen, who need to develop a substantial knowledge of mayfly "habitat, distribution, seasonality, morphology and behavior" in order to match precisely the look and movements of the insects that the local trout are expecting.
Izaak Walton describes the use of mayflies for catching trout in his 1653 book The Compleat Angler; for example, he names the "Green-drake" for use as a natural fly, and "duns" (mayfly subimagos) as artificial flies. These include for example the "Great Dun" and the "Great Blue Dun" in February; the "Whitish Dun" in March; the "Whirling Dun" and the "Yellow Dun" in April; the "Green-drake", the "Little Yellow May-Fly" and the "Grey-Drake" in May; and the "Black-Blue Dun" in July. Nymph or "wet fly" fishing was restored to popularity on the chalk streams of England by G. E. M. Skues with his 1910 book Minor Tactics of the Chalk Stream. In the book, Skues discusses the use of duns to catch trout. The March brown is "probably the most famous of all British mayflies", having been copied by anglers to catch trout for over 500 years.
Some English public houses beside trout streams such as the River Test in Hampshire are named "The Mayfly".
As a spectacle
The hatch of the giant mayfly Palingenia longicauda on the Tisza and Maros Rivers in Hungary and Serbia, known as "Tisza blooming", is a tourist attraction. The 2014 hatch of the large black-brown mayfly Hexagenia bilineata on the Mississippi River in the US was imaged on weather radar; the swarm flew up to 760 m (2,500 feet) above the ground near La Crosse, Wisconsin, creating a radar signature that resembled a "significant rain storm", and the mass of dead insects covering roads, cars and buildings caused a "slimy mess".
During the weekend of 13–14 June 2015, a large swarm of mayflies caused several vehicular accidents on the Columbia–Wrightsville Bridge, carrying Pennsylvania Route 462 across the Susquehanna River between Columbia and Wrightsville, Pennsylvania. The bridge had to be closed to traffic twice during that period due to impaired visibility and obstructions posed by piles of dead insects.
As food
Mayflies are consumed in several cultures and are estimated to contain the highest raw protein content of any edible insect by dry weight. In Malawi, kungu, a paste of mayflies (Caenis kungu) and mosquitoes, is made into a cake for eating. Adult mayflies are collected and eaten in many parts of China and Japan. Near Lake Victoria, Povilla mayflies are collected, dried and preserved for use in food preparations.
As a name for ships and aircraft
"Mayfly" was the crew's nickname for His Majesty's Airship No. 1, an aerial scout airship built by Vickers but wrecked by strong winds in 1911 before her trial flights.
Two vessels of the Royal Navy were named HMS Mayfly: a torpedo boat launched in January 1907, and a Fly-class river gunboat constructed in sections at Yarrow in 1915.
The Seddon Mayfly, constructed in 1908, was an early aircraft that failed to fly successfully. The first aircraft designed by a woman, Lillian Bland, was named the Bland Mayfly.
Other human uses
In pre-1950s France, "chute de manne" was obtained by pressing mayflies into cakes, which were used as bird food and fish bait. From an economic standpoint, mayflies also provide fisheries with an excellent diet for fish. Mayflies could find uses in the biomedical, pharmaceutical, and cosmetic industries: their exoskeleton contains chitin, which has applications in all three.
Research on gene expression in the mayfly Cloeon dipterum has provided insights into the evolution of the insect wing, lending support to the so-called gill theory, which suggests that the ancestral insect wing may have evolved from the larval gills of aquatic insects such as mayflies.
Mayfly larvae do not survive in polluted aquatic habitats and, thus, have been chosen as bioindicators, markers of water quality in ecological assessments.
In marketing, Nike produced a line of running shoes in 2003 titled "Mayfly". The shoes were designed with a wing venation pattern like the mayfly and were also said to have a finite lifetime. The telecommunication company Vodafone featured mayflies in a 2006 branding campaign, telling consumers to "make the most of now".
| Biology and health sciences | Insects: General | Animals |
246449 | https://en.wikipedia.org/wiki/Axiom%20of%20infinity | Axiom of infinity | In axiomatic set theory and the branches of mathematics and philosophy that use it, the axiom of infinity is one of the axioms of Zermelo–Fraenkel set theory. It guarantees the existence of at least one infinite set, namely a set containing the natural numbers. It was first published by Ernst Zermelo as part of his set theory in 1908.
Formal statement
In the formal language of the Zermelo–Fraenkel axioms, the axiom is expressed as follows:
∃I (∃o (o ∈ I ∧ ∀n ¬(n ∈ o)) ∧ ∀x (x ∈ I → ∃y (y ∈ I ∧ ∀a (a ∈ y ↔ (a ∈ x ∨ a = x)))))
In technical language, this formal expression is interpreted as "there exists a set 𝐼 (the set that is postulated to be infinite) such that the empty set is an element of it and, for every element x of 𝐼, there exists an element of 𝐼 consisting of just the elements of x together with x itself."
This formula can be abbreviated as:
∃I (∅ ∈ I ∧ ∀x (x ∈ I → (x ∪ {x}) ∈ I))
Some mathematicians may call a set built this way an inductive set.
Interpretation and consequences
This axiom is closely related to the von Neumann construction of the natural numbers in set theory, in which the successor of x is defined as x ∪ {x}. If x is a set, then it follows from the other axioms of set theory that this successor is also a uniquely defined set. Successors are used to define the usual set-theoretic encoding of the natural numbers. In this encoding, zero is the empty set:
0 = {}.
The number 1 is the successor of 0:
1 = 0 ∪ {0} = {} ∪ {0} = {0} = {{}}.
Likewise, 2 is the successor of 1:
2 = 1 ∪ {1} = {0} ∪ {1} = {0, 1} = { {}, {{}} },
and so on:
3 = {0, 1, 2} = { {}, {{}}, {{}, {{}}} };
4 = {0, 1, 2, 3} = { {}, {{}}, { {}, {{}} }, { {}, {{}}, {{}, {{}}} } }.
A consequence of this definition is that every natural number is equal to the set of all preceding natural numbers. The count of elements in each set, at the top level, is the same as the represented natural number, and the nesting depth of the most deeply nested empty set {}, including its nesting in the set that represents the number of which it is a part, is also equal to the natural number that the set represents.
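As an illustrative aside (not from the article itself), this construction can be checked mechanically in Python using frozensets, which are hashable and so can be elements of other sets; the function name von_neumann is just a label for this sketch.

def von_neumann(n):
    """Return the von Neumann encoding of the natural number n as a frozenset."""
    number = frozenset()              # 0 = {}
    for _ in range(n):
        number = number | {number}    # successor of x is x ∪ {x}
    return number

for k in range(5):
    encoded = von_neumann(k)
    # The top-level element count equals the number represented.
    assert len(encoded) == k
    # Each number contains the encodings of all smaller numbers.
    assert all(von_neumann(j) in encoded for j in range(k))
print("von Neumann encoding checks passed for 0..4")

The assertions verify the two properties described above: each encoded number has exactly n top-level elements, and it contains the encoding of every smaller number.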
This construction forms the natural numbers. However, the other axioms are insufficient to prove the existence of the set of all natural numbers, N. Therefore, its existence is taken as an axiom – the axiom of infinity. This axiom asserts that there is a set I that contains 0 and is closed under the operation of taking the successor; that is, for each element of I, the successor of that element is also in I.
Thus the essence of the axiom is:
There is a set, I, that includes all the natural numbers.
The axiom of infinity is also one of the von Neumann–Bernays–Gödel axioms.
Extracting the natural numbers from the infinite set
The infinite set I is a superset of the natural numbers. To show that the natural numbers themselves constitute a set, the axiom schema of specification can be applied to remove unwanted elements, leaving the set N of all natural numbers. This set is unique by the axiom of extensionality.
To extract the natural numbers, we need a definition of which sets are natural numbers. The natural numbers can be defined in a way that does not assume any axioms except the axiom of extensionality and the axiom of induction—a natural number is either zero or a successor and each of its elements is either zero or a successor of another of its elements. In formal language, the definition says:
∀n (n ∈ N ↔ ([n = ∅ ∨ ∃k (n = k ∪ {k})] ∧ ∀m (m ∈ n → [m = ∅ ∨ ∃k (k ∈ n ∧ m = k ∪ {k})])))
Or, even more formally:
Alternative method
An alternative method is the following. Let Ind(x) be the formula that says "x is inductive"; i.e. Ind(x): (∅ ∈ x ∧ ∀y (y ∈ x → (y ∪ {y}) ∈ x)). Informally, what we will do is take the intersection of all inductive sets. More formally, we wish to prove the existence of a unique set W such that
(*)   ∀n (n ∈ W ↔ ∀J (Ind(J) → n ∈ J))
For existence, we will use the Axiom of Infinity combined with the Axiom schema of specification. Let I be an inductive set guaranteed by the Axiom of Infinity. Then we use the axiom schema of specification to define our set W = {n ∈ I : n is in every inductive set} – i.e. W is the set of all elements of I which also happen to be elements of every other inductive set. This clearly satisfies the hypothesis of (*), since if n ∈ W, then n is in every inductive set, and if n is in every inductive set, it is in particular in I, so it must also be in W.
For uniqueness, first note that any set that satisfies (*) is itself inductive, since 0 is in all inductive sets, and if an element is in all inductive sets, then by the inductive property so is its successor. Thus if there were another set M that satisfied (*), we would have M ⊆ W since W is inductive, and W ⊆ M since M is inductive. Thus W = M. Let ω denote this unique set.
This definition is convenient because the principle of induction immediately follows: if K ⊆ ω is inductive, then also ω ⊆ K, so that K = ω.
Both these methods produce systems that satisfy the axioms of second-order arithmetic, since the axiom of power set allows us to quantify over the power set of ω, as in second-order logic. Thus they both completely determine isomorphic systems, and since they are isomorphic under the identity map, they must in fact be equal.
An apparently weaker version
Some old texts use an apparently weaker version of the axiom of infinity, to wit:
∃x (∃y (y ∈ x) ∧ ∀y (y ∈ x → ∃z (z ∈ x ∧ y ⊆ z ∧ y ≠ z)))
This says that x is non-empty and for every element y of x there is another element z of x such that y is a subset of z and y is not equal to z. This implies that x is an infinite set without saying much about its structure. However, with the help of the other axioms of ZF, we can show that this implies the existence of ω. First, if we take the powerset of any infinite set x, then that powerset will contain elements that are subsets of x of every finite cardinality (among other subsets of x). Proving the existence of those finite subsets may require either the axiom of separation or the axioms of pairing and union. Then we can apply the axiom of replacement to replace each element of that powerset of x by the initial ordinal number of the same cardinality (or zero, if there is no such ordinal). The result will be an infinite set of ordinals. Then we can apply the axiom of union to that to get an ordinal greater than or equal to ω.
Independence
The axiom of infinity cannot be proved from the other axioms of ZFC if they are consistent. (To see why, note that ZFC proves Con(ZFC − Infinity) and use Gödel's second incompleteness theorem.)
The negation of the axiom of infinity cannot be derived from the rest of the axioms of ZFC, if they are consistent. (This is tantamount to saying that ZFC is consistent, if the other axioms are consistent.) Thus, ZFC − Infinity implies neither the axiom of infinity nor its negation and is compatible with either.
Indeed, using the von Neumann universe, we can build a model of ZFC − Infinity + (¬Infinity). It is Vω, the class of hereditarily finite sets, with the inherited membership relation. Note that if the axiom of the empty set is not taken as a part of this system (since it can be derived from ZF + Infinity), then the empty domain also satisfies ZFC − Infinity + ¬Infinity, as all of its axioms are universally quantified, and thus trivially satisfied if no set exists.
The cardinality of the set of natural numbers, aleph null (ℵ₀), has many of the properties of a large cardinal. Thus the axiom of infinity is sometimes regarded as the first large cardinal axiom, and conversely large cardinal axioms are sometimes called stronger axioms of infinity.
| Mathematics | Axiomatic systems | null |
246469 | https://en.wikipedia.org/wiki/Maxwell%20%28unit%29 | Maxwell (unit) | The maxwell (symbol: Mx) is the CGS (centimetre–gram–second) unit of magnetic flux (Φ).
History
The unit name honours James Clerk Maxwell, who presented a unified theory of electromagnetism. The maxwell was recommended as a CGS unit at the International Electrical Congress held in 1900 at Paris. This practical unit was previously called a line, reflecting Faraday's conception of the magnetic field as curved lines of magnetic force, which he designated as line of magnetic induction. Kiloline (103 line) and megaline (106 line) were sometimes used because 1 line was very small relative to the phenomena that it was used to measure.
The maxwell was affirmed again unanimously as the unit name for magnetic flux at the Plenary Meeting of the International Electrotechnical Commission (IEC) in July 1930 at Oslo. In 1933, the Electric and Magnetic Magnitudes and Units committee of the IEC recommended to adopt the metre–kilogram–second (MKS) system (Giorgi system), and the name weber was proposed for the practical unit of magnetic flux (), subject to approval of various national committees, which was achieved in 1935. The weber was thus adopted as a practical unit of magnetic flux by the IEC.
Definition
The maxwell is a non-SI unit.
1 maxwell = 1 gauss × (centimetre)²
That is, one maxwell is the total flux across a surface of one square centimetre perpendicular to a magnetic field of strength one gauss.
The weber is the related SI unit of magnetic flux, which was defined in 1946.
1 maxwell ≘ 10⁻⁴ tesla × (10⁻² metre)² = 10⁻⁸ weber
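As an illustrative aside (not part of any standards text), the conversion implied by this relation can be written as a small Python sketch; the constant and function names here are our own.

MAXWELL_PER_WEBER = 1e8   # from 1 Mx = 1 G·cm² = 1e-4 T × (1e-2 m)² = 1e-8 Wb

def maxwell_to_weber(flux_mx):
    """Convert a magnetic flux value from maxwells (CGS) to webers (SI)."""
    return flux_mx / MAXWELL_PER_WEBER

def weber_to_maxwell(flux_wb):
    """Convert a magnetic flux value from webers (SI) to maxwells (CGS)."""
    return flux_wb * MAXWELL_PER_WEBER

print(maxwell_to_weber(1.0))    # 1e-08  (one maxwell is a very small flux in SI terms)
print(weber_to_maxwell(1.0))    # 100000000.0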
| Physical sciences | Magnetic flux | Basics and measurement |
246471 | https://en.wikipedia.org/wiki/Crab-eating%20macaque | Crab-eating macaque | The crab-eating macaque (Macaca fascicularis), also known as the long-tailed macaque or cynomolgus macaque, is a cercopithecine primate native to Southeast Asia. As a synanthropic species, the crab-eating macaque thrives near human settlements and in secondary forest. Crab-eating macaques have developed attributes and roles assigned to them by humans, ranging from cultural perceptions as being smart and adaptive, to being sacred animals, being regarded as vermin and pests, and becoming resources in modern biomedical research. They have been described as a species on the edge, living on the edge of forests, rivers, and seas, at the edge of human settlements, and perhaps on the edge of rapid extinction.
Crab-eating macaques are omnivorous and frugivorous. They live in matrilineal groups ranging from 10 to 85 individuals, with groups exhibiting female philopatry and males emigrating from the natal group at puberty. Crab-eating macaques are the only Old World monkey known to use stone tools in their daily foraging, and they engage in a robbing and bartering behavior in some tourist locations.
The crab-eating macaque is the most traded primate species, the most culled primate species, the most persecuted primate species and also the most popular species used in scientific research. Due to these threats, the crab-eating macaque was listed as Endangered on the IUCN Red List in 2022.
Etymology
Macaca comes from the Portuguese word macaco, which was derived from makaku, a word in Ibinda, a language of Central Africa (kaku means monkey in Ibinda). The specific epithet fascicularis is Latin for a small band or stripe. Sir Thomas Raffles, who gave the animal its scientific name in 1821, did not specify what he meant by the use of this word.
In Indonesia and Malaysia, the crab-eating macaque and other macaque species are known generically as kera.
The crab-eating macaque has several common names. It is often referred to as the long-tailed macaque due to its tail, which is the length of its body and head combined. The name crab-eating macaque refers to its being seen foraging on beaches for crabs. Another common name for M. fascicularis, often used in laboratory settings, is the cynomolgus monkey, which derives from the Greek Kynamolgoi, meaning "dog milkers". It has also been suggested that cynomolgus refers to a race of humans with long hair and handsome beards who used dogs for hunting according to Aristophanes of Byzantium, who seemingly derived the etymology of the word cynomolgus from the Greek κύων, cyon, 'dog' (gen. cyno-s) and the verb ἀμέλγειν, 'to milk' (adj. amolg-os), by claiming that they milked female dogs.
Perceptions and terminology
Crab-eating macaques are understood and perceived in many ways: smart, pestiferous, exploited, sacred, vermin, invasive.
In 2000 the crab-eating macaque was placed on the list of 100 most invasive species. For example, it is considered an invasive alien species (IAS) on Mauritius, where articles argue that long-tailed macaques spread the seeds of invasive plants, compete with native species such as the Mauritian flying fox, and have a detrimental impact on threatened native species. Several authors have pointed out that present evidence indicates predation on birds by monkeys may have been overestimated. These accusations have also been addressed by noting that crab-eating macaques do not prefer primary forest, so it is unlikely that Mauritius macaques were ever a major source of indigenous forest destruction; the primary driver of bird extinction has been habitat destruction by humans. Sussman and Tattersall mention that the Dutch abandoned the island in 1710–12 due in part to monkeys and rats destroying plantations; they point out that the human population was low at this time and the crab-eating macaques would have had plenty of primary forest to exploit, yet they chose to brave the dangers of raiding plantations. They do not deny that macaques on Mauritius prey on bird eggs and disseminate the seeds of exotic plants, yet the major loss of species on Mauritius is due to habitat loss caused by humans – macaques are successful because they prefer secondary forest and disturbed habitats. This is significant because the perception of crab-eating macaques as invasive and destructive to "native" biodiversity is used as a justification for their use in biomedical research. It is important to be aware of perceptions, and of how we categorize other beings, because labels such as "pest" or "invasive" provide justification and moral comfort for killing those that do not "belong" – lives that come to be viewed as illegitimate, killable, and lacking grievability.
"Weed" and "non-weed" species are distinguished based on that species ability to thrive in close proximity and association with human settlements. This label was not intentionally proposed to disparage crab-eating macaques but this term, like pest and invasive, can affect how people perceive this species and can trigger negatives perceptions.
Taxonomy
Previously, ten subspecies of Macaca fascicularis were recognized, but the Philippine long-tailed macaque (M.f. philippinensis) is under dispute and has been tentatively removed from IUCN Red List assessments, with those individuals included in M.f. fascicularis.
M.f. fascicularis, common long-tailed macaque – Indonesia, Malaysia, Philippines, Thailand, Cambodia, Singapore, Vietnam
M.f. aurea, Burmese long-tailed macaque – Myanmar, Laos, western and southern Thailand near Myanmar border
M.f. atriceps, dark-crowned long-tailed macaque – Kram Yai Island, Thailand
M.f. condorensis, Con Song long-tailed macaque – Con Son Island, Hon Ba Island, Vietnam
M.f. karimondjawae, Karimunjawa long-tailed macaque – Karimunjawa Islands, Indonesia
M.f. umbrosa, Nicobar long-tailed macaque – Nicobar islands, India
M.f. fusca, Simeulue long-tailed macaque – Simeulue Island, Indonesia
M.f. lasiae, Lasia long-tailed macaque – Lasia island, Indonesia
M.f. tua, Maratua long-tailed macaque – Maratua Island, Indonesia
M.f. fascicularis has the largest range, followed by M.f. aurea. The other seven subspecies are isolated on small islands: M.f. atriceps, M.f. condorensis, and M.f. karimondjawae all populate small shallow-water fringing islands; M.f. umbrosa, M.f. fusca, M.f. lasiae, and M.f. tua all inhabit deep-water fringing islands.
Evolution
The macaque originated in northeastern Africa some 7 million years ago, spread through most of continental Asia, and subdivided into four groups (sylvanus, sinica, silenus, and fascicularis). The earliest split in the genus Macaca likely occurred ~4.5 mya between an ancestor of the silenus group and a fascicularis-like ancestor from which non-silenus species later evolved. The species of the fascicularis group (which includes M. fascicularis, M. mulatta, and M. fuscata) share a common ancestor that lived 2.5 mya. M. fascicularis is suggested to be the most plesiomorphic (ancestral) taxon in the fascicularis clade; thus it is argued that M. mulatta evolved from a fascicularis-like ancestor that reached the mainland from its homeland in Indonesia around 1 mya.
A phylogenetic analysis found evidence that the fascicularis group originated from an ancient hybridization between the sinica and silenus groups ~3.45–3.56 mya, soon after the initial separation of two parent lineages (proto-sinica and proto-silenus) ~3.86 mya. This divergence and subsequent hybridization occurred during rapid glacial-eustatic fluctuations in the early Pleistocene: high sea levels may have led to the initial separation of proto-sinica and proto-silenus while the subsequent lowering of sea levels facilitated the secondary contact needed for hybridization.
Known fossils indicate that crab-eating macaques have inhabited the Sunda Shelf since at least the early Pleistocene, ~1 mya. It is likely that crab-eating macaques were introduced to Timor and Flores (both on the east side of the Wallace line) by humans around 4,000–5,000 years ago. Crab-eating macaques are the only macaque species found on both sides of the Wallace line.
The following possible stages of crab-eating macaque evolution and dispersal have been proposed:
Stage 1: in the early Pleistocene, crab-eating macaques dispersed into the Sunda Shelf area. The earliest fossil record of crab-eating macaques was found in Java (a collection that also included H. erectus and leaf monkey species). They probably reached Java by dry land during a period of glacial advance and low sea levels
Stage 2: around 160 thousand years ago, dispersal and isolation of the progenitors of the strongly differentiated deep-water fringing island populations occurred. These include M.f. umbrosa, M.f. fusca, and M.f. tua [Fooden includes M.f. philippinensis, but their subspecies status is currently under debate]. It is thought that the progenitors of these subspecies reached deep-water habitats during the penultimate glacial maximum, when sea levels were lower than at present. These populations became isolated during the interglacial period around 120 kya
Stage 3: more than 18 thousand years ago, the differentiation of progenitors of populations of the Indochinese peninsula and northern part of the isthmus of Kra occurred. These subspecies include M.f. aurea and M.f. fascicularis. These two subspecies became differentiated before the last glacial maximum
Stage 4: 18 thousand years ago, the dispersal and isolation of progenitors of weakly differentiated deep water fringing island populations occurred (M. f. fascicularis)
Stage 5: less than 18 thousand years ago, the isolation of the progenitors of shallow water fringing island populations and populations in Penida and Lombok (deep water) occurred. These subspecies include M.f. karimondjawae, M.f. atriceps, M.f. condorensis, M.f. fascicularis
Stage 6: 4.5 thousand years ago, the dispersal and isolation of progenitors of populations in the eastern Lesser Sunda Islands (deep water) occurred (M.f. fascicularis).
Characteristics
Crab-eating macaques are sexually dimorphic; males weigh between 4.7 and 8.3 kg, and females weigh 2.5–5.7 kg. The height of an adult male is between 412 and 648 mm, and that of an adult female between 385 and 505 mm. Their tails are the length of their head and body combined. The dorsal pelage is generally greyish or brownish, with a white underbelly and black and white highlights around the crown and face. The facial skin is brownish to pinkish, except for the eyelids, which are white. Adults are usually bearded on and around the face, except around the snout and eyes. Older females have the fullest beards, with those of males being more whisker-like. Subspecies on small islands seem to have blackish pelage, while large-island and mainland subspecies are lighter.
Genetics
Hybridity
Along the northern part of their range, crab-eating macaques hybridize with rhesus macaques (M. mulatta). They have also been known to hybridize with southern pig-tailed macaques (M. nemestrina), and hybrids occur across subspecies too. Rhesus and crab-eating macaques hybridize within a contact zone where their ranges overlap, which has been proposed to lie between 15 and 20 degrees north and includes Thailand, Myanmar, Laos, and Vietnam. Their offspring are fertile, and they continue to mate, which leads to a broad range of admixture proportions. Introgression from rhesus to crab-eating macaque populations extends beyond Indochina and the Kra Isthmus, whereas introgression from crab-eating to rhesus macaques is more restricted. There appears to be rhesus-biased and male-biased gene flow between rhesus and crab-eating macaque populations, which has led to different degrees of genetic admixture in the two species.
Distribution and habitat
The crab-eating macaque's native range encompasses most of mainland Southeast Asia, through the Malay Peninsula and Singapore, the Maritime Southeast Asia islands of Sumatra, Java, and Borneo, offshore islands, the islands of the Philippines, and the Nicobar Islands in the Bay of Bengal. This primate is a rare example of a terrestrial mammal that violates the Wallace line, being found across the Lesser Sunda Islands. It lives in a wide variety of habitats, including primary lowland rainforests, disturbed and secondary rainforests, shrubland, and riverine and coastal forests of nipa palm and mangrove. It also easily adjusts to human settlements and is considered sacred at some Hindu temples and on some small islands, but as a pest around farms and villages. Typically, it prefers disturbed habitats and forest periphery.
Introduction to other regions
Humans have transported crab-eating macaques to at least five islands: Mauritius, West Papua, Ngeaur, Tinjil Island near Java, and Kabaena Island off of Sulawesi, and to Kowloon Hills of Hong Kong.
There was no indigenous human population on Mauritius. Early exploration of Mauritius by Phoenicians, Swahili people and Arab merchants has been suggested, but it was not until the early 16th century that there is hard evidence of a human presence on the island, with the Portuguese using it as a refreshment stop. The Dutch reached the island in 1598 and attempted a permanent settlement from 1638 to 1658, when they abandoned the island; they resettled again from 1664 to 1710, but abandoned the island once more, due in part to monkeys and rats destroying plantations. Crab-eating macaques were brought to Mauritius either by the Portuguese or the Dutch in the late 1500s to early 1600s. This founder population likely came from Java, although a mixed origin has been suggested.
From the mid-1980s to mid-1990s, the population of crab-eating macaques on Mauritius was estimated at 35,000 to 40,000. The present population is not known, but estimates indicate it may be as low as 8,000. This significant decline is likely correlated with the booming macaque breeding industry on Mauritius. Because crab-eating macaques are considered invasive and destructive, this is used to justify their use in biomedical research. On Mauritius, macaques are also perceived as sacred, a source of tourism, pets, pests, and food.
Crab-eating macaques first appeared on Ngeaur Island during German rule in the early 20th century. Population size has fluctuated between 400 and 800 individuals. The population has suffered losses due to eradication efforts, yet it has survived despite typhoons and WWII bombing of the island.
In the Kowloon Hills there are groups of differing macaque species and their hybrids, released there during the 1910s. Rhesus macaques and crab-eating macaques interbred and hybridized. Tibetan macaques were also released but did not interbreed. This location has become a popular tourist attraction.
The immunovaccine porcine zona pellucida (PZP), which causes infertility in females, is currently being tested in Hong Kong to investigate its use as potential population control.
Crab-eating macaques have been in West Papua for around 30 to 100 years, but this population has not expanded, remaining at around 60 to 70 individuals.
There is little known of the population on Kabaena Island, Sulawesi. These crab-eating macaques appear to have distinct morphology, which may suggest that they have been on the island for a long period of time.
Between 1988 and 1994, a total of 520 crab-eating macaques including 58 males and 462 females were released on Tinjil Island for the purpose of starting a natural habitat breeding facility. This may be a sustainable way of supplying monkeys for research, but it is in a legal gray area for trading regulations, using captive bred codes (F, C) rather than wild-caught (W).
Population size
Because crab-eating macaques are synanthropic, which enhances their visibility to humans, their population size tends to be overestimated. Researchers have been raising alarms about crab-eating macaque population decline at least since 1986. Many authors cite a 40% decline in the entire crab-eating macaque population between 1980 and 2006. This comes from a population estimate of 5 million in the 1980s–90s and a population estimate of 3 million in 2006; it is unclear how the 3 million estimate was reached.
Using a noninvasive probability model to estimate the maximum population abundance, it was estimated that the current population of crab-eating macaques is 1 million, which reflects a continuous decline in the population – an 80% reduction over 35 years. This study used a model that overestimates population size, so the true decline is probably even greater.
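As an illustrative aside, the percentages above follow from the cited estimates by simple arithmetic; the Python sketch below uses only the figures quoted in the text (5 million, 3 million, 1 million), and the implied annual rate is a derived quantity of our own, not one reported by the sources.

def percent_decline(start, end):
    """Percentage decline from a starting to an ending population estimate."""
    return (start - end) / start * 100

def implied_annual_rate(start, end, years):
    """Constant annual rate of change implied by the start and end estimates."""
    return (end / start) ** (1.0 / years) - 1.0

print(percent_decline(5_000_000, 3_000_000))   # 40.0  (1980s-90s estimate to 2006 estimate)
print(percent_decline(5_000_000, 1_000_000))   # 80.0  (decline over roughly 35 years)
print(round(implied_annual_rate(5_000_000, 1_000_000, 35) * 100, 1))  # about -4.5 (% per year, derived)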
A population viability analysis (PVA) for crab-eating macaques revealed that the presence and absence of females in a population are key to its short- and long-term viability. Anything that negatively targets females is likely to threaten population viability; for example, harvesting for biomedical research targets females.
Behavior and ecology
The crab-eating macaque is highly adaptive, living near and benefiting from humans and environmental modifications.
Group size and structure
Crab-eating macaques live in matrilineal groups ranging from 10 to 85 members, but most often fall in the range of 35–50. Group size varies greatly, especially between non-provisioned and provisioned groups. Large groups live in secondary forest, savanna and thorn scrub vegetation, and urban habitats and temples. Smaller groups live in primary forest, swamp and mangrove forests. Groups will break into subgroups during the day throughout their range. Composition of groups is multi-male/multi-female but females outnumber males with the sex ratio varying between 1:5–6 and 1:2. Groups exhibit female philopatry with males emigrating from natal group at puberty. Males leave natal group as late juveniles or subadults before the age of seven. On average, adult females and juveniles in groups are related at the level of cousins, whereas adult males are unrelated. Higher relatedness in females is expected due to female philopatry.
Social organization
Macaque social groups have a clear dominance hierarchy among females; these ranks are stable over a female's lifetime, and a matriline's rank may be sustained for generations. Matrilines create interesting group dynamics: for example, males are dominant to females at the individual level, but groups of closely related females can have some level of dominance over males. The position of the dominant male within a group is not often stable, and males probably change troops several times during their lives; rank below the dominant male is not consistent or stable either – males show sophisticated decision-making when it comes to transferring dominance.
Intergroup encounters
Direct encounters between adjacent non-provisioned troops are relatively rare which suggests mutual avoidance.
Interspecific behavior
Interactions have been reported between crab-eating and southern pig-tailed macaques, Colobinae species, proboscis monkey, gibbons and orangutans. Dusky leaf monkeys, crab-eating macaques and white-thighed surilis form tolerant foraging associations, with juveniles playing together. Crab-eating macaques have also been observed grooming Raffles' banded langurs in Malaysia.
Conflict
Group living in all species is dependent on the tolerance of other group members. In crab-eating macaques, successful social group living requires postconflict resolution. Usually, less dominant individuals lose to a higher-ranking individual when conflict arises. After the conflict has taken place, lower-ranking individuals tend to fear the winner of the conflict to a greater degree. In one study, this was seen in the ability to drink water together. Postconflict observations showed a staggered time between when the dominant individual begins to drink and the subordinate. Long-term studies reveal the gap in drinking time closes as the conflict moves further into the past.
Grooming and support in conflict among primates is considered to be an act of reciprocal altruism. In crab-eating macaques, an experiment was performed in which individuals were given the opportunity to groom one another under three conditions: after being groomed by the other, after grooming the other, and without prior grooming. After grooming took place, the individual that received the grooming was much more likely to support their groomer than one that had not previously groomed that individual.
Crab-eating macaques demonstrate two of the three forms of suggested postconflict behavior. In both captive and wild studies, they demonstrated reconciliation, or an affiliative interaction between former opponents, and redirection, or acting aggressively towards a third individual. Consolation was not seen in any study performed.
When crab-eating macaques are approached by others while foraging, they tend to move away.
Postconflict anxiety has been reported in crab-eating macaques that have acted as the aggressor. After a conflict within a group, the aggressor appears to scratch itself at a higher rate than before the conflict. Though the scratching behavior cannot definitely be termed as an anxious behavior, evidence suggests this is the case. An aggressor's scratching decreases significantly after reconciliation. This suggests reconciliation rather than a property of the conflict is the cause of the reduction in scratching behavior. Though these results seem counterintuitive, the anxiety of the aggressor appears to have a basis in the risks of ruining cooperative relationships with the opponent.
Kin altruism and spite
In a study, a group of crab-eating macaques was given ownership of a food object. Adult females favored their own offspring by passively, yet preferentially, allowing them to feed on the objects they held. When juveniles were in possession of an object, mothers robbed them and acted aggressively at an increased rate towards their own offspring compared to other juveniles. These observations suggest close proximity influences behavior in ownership, as a mother's kin are closer to her on average. When given a nonfood object and two owners, one being a kin and one not, the rival will choose the older individual to attack regardless of kinship. Though the hypothesis remains that mother-juvenile relationships may facilitate social learning of ownership, the combined results clearly point to aggression towards the least-threatening individual.
A study was conducted in which food was given to 11 females. They were then given a choice to share the food with kin or nonkin. The kin altruism hypothesis suggests the mothers would preferentially give food to their own offspring. Yet eight of the 11 females did not discriminate between kin and nonkin. The remaining three did, in fact, give more food to their kin. The results suggest it was not kin selection, but instead spite that fueled feeding kin preferentially. This is due to the observation that food was given to kin for a significantly longer period of time than needed. The benefit to the mother is decreased due to less food availability for herself and the cost remains great for nonkin due to not receiving food. If these results are correct, crab-eating macaques are unique in the animal kingdom, as they appear not only to behave according to the kin selection theory, but also act spitefully toward one another.
Reproduction
After a gestation period of 162–193 days, the female gives birth to one infant. The infant's weight at birth is about . Infants are born with black fur which will begin to turn to a grey or reddish-brown shade (depending on the subspecies) after about three months of age. This natal coat may indicate to others the status of the infant, and other group members treat infants with care and rush to their defense when distressed. Immigrant males sometimes kill infants not their own in order to shorten interbirth intervals. High-ranking females will sometimes kidnap the infants of lower-ranking females. These kidnappings can result in the death of the infants, as the other female is usually not lactating. A young juvenile stays mainly with its mother and relatives. As male juveniles get older, they become more peripheral to the group. Here they play together, forming crucial bonds that may help them when they leave their natal group. Males that emigrate with a partner are more successful than those that leave alone. Young females, though, stay with the group and become incorporated into the matriline into which they were born.
Male crab-eating macaques groom females to increase the chance of mating. A female is more likely to engage in sexual activity with a male that has recently groomed her than with one that has not.
Studies have found that the dominant male copulates more than other males in the group. DNA tests indicate that dominant males sire most of the offspring in natural crab-eating macaque troops. Reproductive success in females is also linked to dominance. High ranking females have more offspring over their life-time than low-ranking females – higher ranking females reproduce at a younger age and their offspring have a higher chance of survival.
Diet
Crab-eating macaques are omnivorous frugivores and eat fruits, leaves, flowers, shoots, roots, invertebrates, and small animals in variable quantities. They eat durians, such as Durio graveolens and D. zibethinus, and are a major seed disperser for the latter species.
They exhibit particularly low tolerance for swallowing seeds, but spit seeds out if larger than . This decision to spit seeds is thought to be adaptive; it avoids filling the monkey's stomach with wasteful bulky seeds that cannot be used for energy.
Fruit makes up 40% to over 80% of the diet of wild crab-eating macaques, except in highly provisioned populations or highly disturbed environments.
Crab-eating macaques can become synanthropic, living off human resources when feeding in cropfields on young dry rice, cassava leaves, rubber fruit, taro plants, coconuts, mangoes, and other crops, often causing significant losses to local farmers. In villages, towns, and cities, they frequently take food from garbage cans and refuse piles.
In Padangtegal, Bali, 70% of the macaques' diet is provisioned.
They become unafraid of humans in these conditions, which can lead to macaques directly taking food from people, both passively and aggressively.
Tool use
Crab-eating macaques are the only Old World monkey known to use stone tools in their daily foraging. This is mainly observed in populations along the coasts of Thailand and Myanmar (the M.f. aurea subspecies). An 1887 report described observations of tool use in a Myanmar population. Over 100 years later, the first modern published report appeared in 2007, describing crab-eating macaques in Thailand using axe-shaped stones to crack rock oysters, detached gastropods, bivalves, and swimming crabs. Also in Thailand, crab-eating macaques have been observed using tools to crack open oil palm nuts in abandoned plantations; the rapid uptake of oil palm nut-cracking shows the macaques' ability to take advantage of anthropogenic changes, and the recent establishment of this behavior indicates the potential for macaques to exhibit cultural tendencies. Unfortunately, human activities can negatively impact tool-using macaques, disrupting the persistence of these stone tool use traditions.
Another instance of tool use is washing and rubbing foods, such as sweet potatoes, cassava roots, and papaya leaves, before consumption. Crab-eating macaques either soak these foods in water or rub them through their hands as if to clean them. They also peel the sweet potatoes, using their incisors and canine teeth. Adolescents appear to acquire these behaviors by observational learning of older individuals.
Robbing and bartering
Robbing and bartering is a behavioral pattern in which free-ranging nonhuman primates spontaneously steal an object from a human and then hold onto that object until that or another human solicits an exchange by offering food. This behavior is seen in the crab-eating macaque population at Uluwatu in Bali, and is described as a population-specific behavioral practice, prevalent and persistent across generations and characterized by marked intergroup variation. Synchronized expression of robbing and bartering was socially influenced and, more specifically, explained by response facilitation. This result further supports the cultural nature of robbing and bartering.
Token-robbing and token/reward-bartering are cognitively challenging tasks for the Uluwatu macaques that have revealed unprecedented economic decision-making processes, namely value-based token selection and payoff maximization. This spontaneous, population-specific, prevalent, cross-generational, learned, and socially influenced practice may be the first example of a culturally maintained token economy in free-ranging animals.
Threats
The crab-eating macaque has been categorized as Endangered on the IUCN Red List; it is threatened by habitat loss due to rapid land use changes in the landscapes of Southeast Asia and the surging demand by the medical industry during the COVID-19 pandemic. A 2008 review of population trends suggested a need for better monitoring of populations due to increased wild trade and rising levels of human-macaque conflict, which continue to decrease overall population levels despite the species' wide distribution.
Each subspecies faces differing levels of threats, and too little information is available on some subspecies to assess their conditions. M. f. umbrosa is likely of important biological significance and has been recommended as a candidate for protection in the Nicobar Islands, where its small, native population has been seriously fragmented. It is listed as vulnerable on the IUCN Red List. The Philippine long-tailed macaque (M. f. philippensis) is listed as near threatened, and M. f. condorensis is vulnerable. All other subspecies are listed as data deficient and need further study, although recent work indicates that M. f. aurea and M. f. karimondjawae need increased protection.
Trade
The crab-eating macaque is one of the most widely traded mammal species listed on the CITES appendices. The international trade in crab-eating macaques is a multibillion-dollar industry. Individuals are sold for up to $20,000 to $24,000, and prices rise when supply falls. International crab-eating macaque trade does not appear to follow a particular trend but continues to change over time, although peak exports often correlate with declarations of public health emergencies.
In the 1970s, India was the largest supplier of macaques, mostly rhesus macaques, but banned exports when it became apparent that the monkeys were being used to test military weapons. After this ban, crab-eating macaques began to be used more in biomedical research, and imports of crab-eating macaques into the US and elsewhere began to increase during the worldwide reduction and subsequent ban of rhesus macaque exports from India.
In the 1980s, crab-eating macaques were introduced to China and began being bred in captive facilities. Since then, captive macaques have been favored in biomedical trade.
In the 1990s, four major commercial monkey farms operated by Chinese entrepreneurs began exporting wild-caught macaques as captive-bred, and monkeys smuggled from Laos and Cambodia were likely part of these transactions.
By 2001, China was exporting significantly more crab-eating macaques than rhesus macaques. Cambodia grants harvest permits to five monkey farms to breed crab-eating macaques for export. Crab-eating macaque harvesting began to accelerate as farms and holding areas were established near protected areas. At this time, international trade of crab-eating macaques expanded rapidly.
Between 2000 and 2018, the US was the largest importer of crab-eating macaques, accounting for 41.7% to 70.1% of imports. Other major importers were France (up to 17.1%), Great Britain (up to 15.9%), Japan (up to 37.9%), and China (up to 33.5%). During this time, China was the largest exporter of crab-eating macaques; other exporters include Mauritius, Laos, Cambodia, Thailand, Indonesia, and Vietnam. Between 2008 and 2019, at least 450,000 live crab-eating macaques and over 700,000 specimens were traded, with over 50,000 identified as wild-caught.
After 2018, Cambodia became the largest exporter of crab-eating macaques, contributing 59% of all macaques traded in 2019 and 2020. Between 2019 and 2020, Chinese crab-eating macaque trade decreased by 96%. China banned animal trade in January 2020 due to concerns over COVID-19, yet this cannot account for the significant decrease in crab-eating macaque exports in 2019; the drivers of this decline are still unclear.
Crab-eating macaques are one of the most commonly internationally traded mammals and are also the most common primates in domestic trade, most often sold as pets or food. Macaques are regularly sold and kept as pets in China, Vietnam, and Indonesia. In Indonesia, pet macaques are usually taken from the wild, which had been illegal since 2009, but in 2021 the Indonesian government lifted the harvest ban and reinstated a harvest quota. In Indonesia, crab-eating macaques and pig-tailed macaques are the only primates not included in the list of protected species. Often infants and juveniles are caught and sold in wildlife markets.
Laundering ring
In November 2022, following a five-year investigation by the DoJ and the US Fish and Wildlife Service, the DoJ indicted Cambodian government officials and the Cambodian owner and staff of Vanny Bio Research Corporation Ltd, a macaque breeding center in Cambodia, for their alleged involvement in laundering wild-caught monkeys as captive-bred. Charles River Laboratories is also under investigation. The crab-eating macaques imported by Charles River as part of the Cambodian smuggling ring are in limbo: they are ineligible for research, but they cannot be returned to the wild either. This laundering operates as a sophisticated trans-border wildlife trafficking network: crab-eating macaques are harvested in places such as Cambodia, Laos, and Myanmar, laundered through Vietnam, and illegally smuggled to places such as China.
Conservation
The crab-eating macaque is listed on CITES Appendix II.
Its IUCN Red List status was uplisted in 2020 and again in 2022 from its 2008 classification of Least Concern, as a result of population declines caused by hunting and troublesome interactions with humans, despite its wide range and ability to adapt to different habitats. These pressures include the skyrocketing demand for crab-eating macaques by the medical industry during the COVID-19 pandemic and the rapid development of the landscape in Southeast Asia.
The Long-Tailed Macaque Project and The Macaque Coalition are engaged in conservation of the crab-eating macaque through research and public engagement.
Relationship with humans
Crab-eating macaques extensively overlap with humans across their range in Southeast Asia, and consequently the two species live together in many locations. Some of these areas are associated with religious sites and local customs, such as the monkey forests and temples of Bali in Indonesia, Thailand, and Cambodia, while other areas are characterized by conflict as a result of habitat loss and competition over food and space. Humans and crab-eating macaques have shared environments since prehistoric times, and both tend to frequent forest and river edge habitats. Crab-eating macaques are occasionally used as a food source by some indigenous forest-dwelling peoples. In Mauritius, they are captured and sold to the pharmaceutical industry, and on Angaur Island in Palau, they are sold as pets. Macaques feed on sugarcane and other crops, affecting agriculture and livelihoods, and can be aggressive towards humans. Macaques may carry potentially fatal human diseases, including herpes B virus. In Singapore, they have adapted to the urban environment.
In places such as Thailand and Singapore, human-macaque conflict task forces have been created to try to resolve some of these conflicts.
In scientific research
M. fascicularis is also used extensively in medical experiments, in particular those connected with neuroscience and disease. Because their physiology is close to that of humans, they can share infections with humans. One case of concern was an isolated event of Reston ebolavirus in a captive-bred population shipped to the US from the Philippines; this strain of Ebola was later found to have no known pathological consequences in humans, unlike the African strains. Furthermore, they are a known carrier of monkey B virus (Herpesvirus simiae), a virus which has produced disease in some lab workers working mainly with rhesus macaques (M. mulatta). Plasmodium knowlesi, which causes malaria in M. fascicularis, can also infect humans. A few cases have been documented in humans, but how long humans have been getting infections of this malarial strain is unknown. It is therefore not possible to assess whether this is a newly emerging health threat or simply a newly discovered one owing to improved malarial detection techniques. Given the long history of humans and macaques living together in Southeast Asia, it is likely the latter.
Crab-eating macaques are among the most widely used species in scientific research. They are used primarily by the biotechnology and pharmaceutical industries to evaluate the pharmacokinetics, pharmacodynamics, efficacy, and safety of new biologics and drugs; they are also used in infectious disease (including tuberculosis and HIV/AIDS) and neuroscience studies.
The use of crab-eating macaques and other nonhuman primates in experimentation is controversial, with critics charging that the experiments are cruel, unnecessary, and lead to dubious findings. One of the most well-known examples of experiments on crab-eating macaques is the 1981 Silver Spring monkeys case.
In 2014, 21,768 crab-eating macaques were imported into the United States for use in experimentation.
Clones
On 24 January 2018, scientists in China reported in the journal Cell the creation of two crab-eating macaque clones, named Zhong Zhong and Hua Hua, using the somatic cell nuclear transfer method that produced Dolly the sheep.
Abuse scandal
In June 2023, the BBC exposed a global online network of sadists who shared videos of baby long-tailed macaques being tortured by caretakers in Indonesia. Torture methods ranged from teasing the primates with baby bottles to killing them in blenders, sawing them in half, or cutting off their tails and limbs. Enthusiasts would pay the caretakers to film videos of the macaques being tortured. The investigation has led to prison sentences and police searches in both Indonesia and the United States, where many of the torture enthusiasts were located.
| Biology and health sciences | Old World monkeys | Animals |
246472 | https://en.wikipedia.org/wiki/Monarch%20butterfly | Monarch butterfly | The monarch butterfly or simply monarch (Danaus plexippus) is a milkweed butterfly (subfamily Danainae) in the family Nymphalidae. Other common names, depending on region, include milkweed, common tiger, wanderer, and black-veined brown. It is among the most familiar of North American butterflies and an iconic pollinator, although it is not an especially effective pollinator of milkweeds. Its wings feature an easily recognizable black, orange, and white pattern, with a wingspan of . A Müllerian mimic, the viceroy butterfly, is similar in color and pattern, but is markedly smaller and has an extra black stripe across each hindwing.
The eastern North American monarch population is notable for its annual southward late-summer/autumn instinctive migration from the northern and central United States and southern Canada to Florida and Mexico. During the fall migration, monarchs cover thousands of miles, with a corresponding multigenerational return north in spring. The western North American population of monarchs west of the Rocky Mountains often migrates to sites in southern California, but individuals have been found in overwintering Mexican sites, as well. Non-migratory populations are found further south in the Americas, and in parts of Europe, Oceania, and Southeast Asia.
Etymology
The name "monarch" is believed to have been given in honor of King William III of England, as the butterfly's main color is that of the king's secondary title, Prince of Orange. The monarch was originally described by Carl Linnaeus in his Systema Naturae of 1758 and placed in the genus Papilio. In 1780, Jan Krzysztof Kluk used the monarch as the type species for a new genus, Danaus. Although works published between at least 1883 and 1944 identified the species as Anosia plexippus, Anosia was generally considered a subgenus of Danaus.
Danaus (Ancient Greek ), a great-grandson of Zeus, was a mythical king in Egypt or Libya, who founded Argos; Plexippus () was one of the 50 sons of Aegyptus, the twin brother of Danaus. In Homeric Greek, his name means "one who urges on horses", i.e., "rider" or "charioteer". In the tenth edition of Systema Naturae, at the bottom of page 467, Linnaeus wrote that the names of the Danai festivi, the division of the genus to which Papilio plexippus belonged, were derived from the sons of Aegyptus. Linnaeus divided his large genus Papilio, containing all known butterfly species, into what we would now call subgenera. The Danai festivi formed one of the "subgenera", containing colorful species, as opposed to the Danai candidi, containing species with bright white wings. Linnaeus wrote: "" (English: "The names of the Danai candidi have been derived from the daughters of Danaus, those of the Danai festivi from the sons of Aegyptus.").
Robert Michael Pyle suggested Danaus is a masculinized version of Danaë (Greek ), Danaus's great-great-granddaughter, to whom Zeus came as a shower of gold, which seemed to him a more appropriate source for the name of this butterfly.
Taxonomy
Monarchs belong to the subfamily Danainae of the family Nymphalidae; Danainae was formerly considered a separate family, Danaidae. The three species of monarch butterflies are:
D. plexippus, described by Linnaeus in 1758, is the species known most commonly as the monarch butterfly of North America. Its range actually extends worldwide, including Hawaii, Australia, New Zealand, Spain, and the Pacific Islands.
D. erippus, the southern monarch, was described by Pieter Cramer in 1775. This species is found in tropical and subtropical latitudes of South America, mainly in Brazil, Uruguay, Paraguay, Argentina, Bolivia, Chile, and southern Peru. The South American monarch and the North American monarch may have been one species at one time. Some researchers believe the southern monarch separated from the monarch population some two million years ago, at the end of the Pliocene, when sea levels were higher and the entire Amazon lowland was a vast expanse of brackish swamp that offered limited butterfly habitat.
D. cleophile, the Jamaican monarch, described by Jean-Baptiste Godart in 1819, ranges from Jamaica to Hispaniola.
Six subspecies and two color morphs of D. plexippus have been identified:
D. p. plexippus – nominate subspecies, described by Linnaeus in 1758, is the migratory subspecies known from most of North America.
D. p. p. "form nivosus", the white monarch commonly found on Oahu, Hawaii, and rarely in other locations.
D. p. p. (as yet unnamed) – a color morph lacking some wing vein markings.
D. p. nigrippus (Richard Haensch, 1909) – South America; originally described as a form, Danais archippus f. nigrippus. Hay-Roe et al. (2007) identified this taxon as a subspecies.
D. p. megalippe (Jacob Hübner, [1826]) – nonmigratory subspecies found from Florida and Georgia southwards, throughout the Caribbean and Central America to the Amazon River.
D. p. leucogyne (Arthur G. Butler, 1884) − St. Thomas
D. p. portoricensis Austin Hobart Clark, 1941 − Puerto Rico
D. p. tobagi Austin Hobart Clark, 1941 − Tobago
The frequency of the white morph on Oahu is nearing 10%. On other Hawaiian islands, the white morph occurs at a relatively low frequency. White monarchs (D. p. p. "form nivosus") have been found throughout the world, including in Australia, New Zealand, Indonesia, and the United States. However, some taxonomists disagree on these classifications.
Genome
The monarch was the first butterfly to have its genome sequenced. The 273-million-base pair draft sequence includes a set of 16,866 protein-coding genes. The genome provides researchers insights into migratory behavior, the circadian clock, juvenile hormone pathways, and microRNAs that are differentially expressed between summer and migratory monarchs. More recently, the genetic basis of monarch migration and warning coloration has been described.
No genetic differentiation exists between the migratory populations of eastern and western North America. Recent research has identified the specific areas in the monarch genome that regulate migration. No genetic difference is seen between migrating and nonmigrating monarchs, but the relevant gene is expressed in migrating monarchs and not in nonmigrating ones.
A 2015 publication identified genes from wasp bracoviruses in the genome of the North American monarch, leading to articles describing monarch butterflies as genetically modified organisms.
Life cycle
Like all Lepidoptera, monarchs undergo complete metamorphosis; their life cycle has four phases: egg, larva, pupa, and adult. Monarchs transition from eggs to adults during warm summer temperatures in as little as 25 days, extending to as many as seven weeks during cool spring conditions. During their development, both larvae and their milkweed hosts are vulnerable to weather extremes, predators, parasites, and diseases; commonly fewer than 10% of monarch eggs and caterpillars survive.
Egg
The egg is derived from materials ingested as a larva and from the spermatophores received from males during mating. Female monarchs lay eggs singly, most often on the underside of a young leaf of a milkweed plant during the spring and summer. Females secrete a small amount of glue to attach their eggs directly to the plant. They typically lay 300 to 500 eggs over a two- to five-week period.
Eggs are cream colored or light green, ovate to conical in shape, and about in size. The eggs weigh less than each and have raised ridges that form longitudinally from the point of the apex to the base. Although each egg is the mass of the female, she may lay up to her own mass in eggs. Females lay smaller eggs as they age, and larger females lay larger eggs. The number of eggs laid by a female, which may mate several times, can reach 1,180.
Eggs take three to eight days to develop and hatch into larvae or caterpillars. The offspring's consumption of milkweed benefits health and helps defend them against predators. Monarchs lay eggs along the southern migration route.
Larva
The larva (caterpillar) has five stages (instars), molting at the end of each instar. Instars last about 3 to 5 days, depending on factors such as temperature and food availability.
The first-instar caterpillar that emerges from the egg is pale green or grayish-white, shiny, and almost translucent, with a large, black head. It lacks banding coloration or tentacles. The larva, or caterpillar, eats its egg case and begins to feed on milkweed with a circular motion, often leaving a characteristic, arc-shaped hole in the leaf. Older first-instar larvae have dark stripes on a greenish background and develop small bumps that later become front tentacles. The first instar is usually between long.
The second-instar larva develops a characteristic pattern of white, yellow, and black transverse bands. The larva has a yellow triangle on the head and two sets of yellow bands around this central triangle. It is no longer translucent, and is covered in short setae. Pairs of black tentacles begin to grow, a larger pair on the thorax and a smaller pair on the abdomen. The second instar is usually between and long.
The third-instar larva has more distinct bands and the two pairs of tentacles become longer. Legs on the thorax differentiate into a smaller pair near the head and larger pairs further back. Third-instar larvae usually feed using a cutting motion on leaf edges. The third instar is usually between long. The fourth-instar larva has a different banding pattern. It develops white spots on the prolegs near its back, and is usually between long.
The fifth-instar larva has a more complex banding pattern and white dots on the prolegs, with front legs that are small and very close to the head. Its length ranges from .
The larvae typically chew through a latex vein to relieve the pressure and feed above it. Fifth-instar larvae often chew a notch in the petiole of the leaf they are eating, which relieves the latex pressure and causes the leaf to fall into a vertical position.
As the caterpillar completes its growth, it is long (large specimens can reach ) and wide, and weighs about , compared to the first instar, which is long and wide. Fifth-instar larvae greatly increase in size and weight. They then stop feeding and are often found far from milkweed plants as they seek a site for pupating. A monarch caterpillar can travel up to 10 meters from its milkweed plant to find a safe place to pupate.
In a laboratory setting, fourth- and fifth-instar caterpillars showed signs of aggressive behavior when food availability was low: caterpillars were attacked while feeding on milkweed leaves, and caterpillars attacked others while foraging for milkweed. This suggests that aggressive behavior in monarch caterpillars is driven by the availability of milkweed.
Pupa
To prepare for the pupal or chrysalis stage, the caterpillar chooses a safe place for pupation, where it spins a silk pad on a downward-facing horizontal surface. At this point, it turns around and securely latches on with its last pair of hind legs and hangs upside down, in the form of the letter J. After "J-hanging" for about 12–16 hours, it soon straightens out its body and goes into peristalsis some seconds before its skin splits behind its head. It then sheds its skin over a period of a few minutes, revealing a green chrysalis (video: https://www.youtube.com/watch?v=QLQmrIUILzc). At first, the chrysalis is long, soft, and somewhat amorphous, but over a few hours, it compacts into its distinct shape – an opaque, pale-green chrysalis with small golden dots near the bottom, and a gold-and-black rim around the dorsal side near the top. At first, its exoskeleton is soft and fragile, but it hardens and becomes more durable within about a day. At this point, it is about long and wide, weighing about . At normal summer temperatures, it matures in 8–15 days (usually 11–12 days). During this pupal stage, the adult butterfly forms inside. A day or so before emerging, the exoskeleton first becomes translucent and the chrysalis more bluish. Finally, within 12 hours or so, it becomes transparent, revealing the black and orange colors of the butterfly inside before it ecloses (emerges).
In 2009, monarchs were reared on the International Space Station, successfully emerging from pupae located in the station's Commercial Generic Bioprocessing Apparatus.
Adult
The adult emerges from its chrysalis after about two weeks of pupation (video: https://www.youtube.com/watch?v=tQDGizDHVs8 ). The emergent adult hangs upside down for several hours while it pumps fluids and air into its wings, which expand, dry, and stiffen. The butterfly then extends and retracts its wings. Once conditions allow, it flies and feeds on a variety of nectar plants. During the breeding season, adults reach sexual maturity in 4–5 days. However, the migrating generation does not reach maturity until overwintering is complete.
The adult's wingspan ranges from . The upper sides of the wings are tawny orange, the veins and margins are black, and two series of small white spots occur in the margins. Monarch forewings also have a few orange spots near their tips. Wing undersides are similar, but the tips of the forewings and hindwings are yellow brown instead of tawny orange and the white spots are larger. The shape and color of the wings change at the beginning of the migration; early migrants appear redder and more elongated than later migrants. Wing size and shape differ between migratory and nonmigratory monarchs: monarchs from eastern North America have larger and more angular forewings than those in the western population.
In eastern North American populations, overall wing size varies. Males tend to have larger wings than females and are typically heavier, while both sexes have similar thoracic dimensions. Females tend to have thicker wings, which is thought to convey greater tensile strength and reduce the likelihood of damage during migration. Additionally, females have lower wing loading than males, meaning they require less energy to fly.
Adults are sexually dimorphic. Males are slightly larger than females and have a black spot on a vein on each hindwing. The spots contain scales that produce pheromones that many Lepidoptera use during courtship. Females are often darker than males and have wider veins on their wings. The ends of the abdomens of males and females differ in shape.
The adult's thorax has six legs, but as in all of the Nymphalidae, the forelegs are small and held against the body. The butterfly uses only its middle and hindlegs when walking and clinging.
Adults typically live for 2–5 weeks during their breeding season. Larvae growing in high densities are smaller, have lower survival, and weigh less as adults compared with those growing in lower densities.
Vision
Physiological experiments suggest that monarch butterflies view the world through a tetrachromatic system. Like humans, their retinas contain three types of opsin proteins, expressed in distinct photoreceptor cells, each of which absorbs light at a different wavelength. Unlike humans, one of those photoreceptor cell types corresponds to a wavelength in the ultraviolet range; the other two correspond to blue and green.
In addition to these three photoreceptor cells in the main retina, monarch butterfly eyes contain orange filtering pigments that filter the light reaching some green-absorbing opsins, thereby making a fourth photoreceptor cell sensitive to longer-wavelength light. The combination of filtered and unfiltered green opsins permits the butterflies to distinguish yellow from orange colors. The ultraviolet opsin protein has also been detected in the dorsal rim region of monarch eyes. One study suggests that this allows the butterflies the ability to detect ultraviolet polarized skylight to orient themselves with the sun for their long migratory flight.
These butterflies are capable of distinguishing colors based on their wavelength only, and not based on intensity; this phenomenon is termed "true color vision". This is important for many butterfly behaviors, including seeking nectar for nourishment, choosing a mate, and finding milkweed on which to lay eggs. One study found that floral color is more easily recognized at a distance by butterflies searching for nectar than floral shape. This may be because flowers have highly contrasting colors to the green background of a vegetative landscape. On the other hand, leaf shape is important for oviposition so that the butterflies can ensure their eggs are being laid on milkweed.
Beyond the perception of color, the ability to remember certain colors is essential in the life of monarch butterflies. These insects can easily learn to associate color, and to a lesser extent, shape, with sugary food rewards. When searching for nectar, color is the first cue that draws the insect's attention toward a potential food source, and shape is a secondary characteristic that promotes the process. When searching for a place to lay its eggs, the roles of color and shape are switched. Also, a difference may exist between male and female butterflies from other species in terms of the ability to learn certain colors; however, no differences are noted between the sexes for monarch butterflies.
Courtship and mating
Monarch courtship occurs in two phases. During the aerial phase, a male pursues and often forces a female to the ground. During the ground phase, the butterflies copulate and remain attached for about 30 to 60 minutes. Only 30% of mating attempts end in copulation, suggesting that females may be able to avoid mating, though some have more success than others. During copulation, a male transfers his spermatophore to a female. Along with sperm, the spermatophore provides a female with nutrition, which aids her in laying eggs. An increase in spermatophore size increases the fecundity of female monarchs. Males that produce larger spermatophores also fertilize more females' eggs.
Females and males typically mate more than once. Females that mate several times lay more eggs. Mating for the overwintering populations occurs in the spring, prior to dispersion. Mating is less dependent on pheromones than in other species in its genus. Male search and capture strategies may influence copulatory success, and human-induced changes to the habitat can influence monarch mating activity at overwintering sites.
Distribution and habitat
The range of the western and eastern populations of D. p. plexippus expands and contracts depending upon the season. The range differs between breeding areas, migration routes, and winter roosts. However, no genetic differences between the western and eastern monarch populations exist; reproductive isolation has not led to subspeciation of these populations, as it has elsewhere within the species' range.
In the Americas, the monarch ranges from southern Canada through northern South America. It is also found in Bermuda, the Cook Islands, Hawaii, Cuba, and other Caribbean islands, the Solomons, New Caledonia, New Zealand, Papua New Guinea, Australia, the Azores, the Canary Islands, Madeira, continental Portugal, Gibraltar, the Philippines, and Morocco. It appears in the UK in some years as an accidental migrant.
Overwintering populations of D. p. plexippus are found in Mexico, California, along the Gulf Coast of the United States, year-round in Florida, and in Arizona where the habitat has the specific conditions necessary for their survival. On the East Coast of the United States, they have overwintered as far north as Lago Mar, Virginia Beach, Virginia. Their wintering habitat typically provides access to streams, plenty of sunlight (enabling body temperatures that allow flight), and appropriate roosting vegetation, and is relatively free of predators.
Overwintering, roosting butterflies have been seen on basswoods, elms, sumacs, locusts, oaks, osage-oranges, mulberries, pecans, willows, cottonwoods, and mesquites. While breeding, monarch habitats can be found in agricultural fields, pasture land, prairie remnants, urban and suburban residential areas, gardens, trees, and roadsides – anywhere there is access to larval host plants.
Larval host plants
The host plants used by the monarch caterpillar include:
Araujia sericifera – white bladderflower
Asclepias angustifolia – Arizona milkweed
Asclepias albicans – whitestem milkweed
Asclepias asperula – antelope horns milkweed
Asclepias californica – California milkweed
Asclepias cordifolia – heartleaf milkweed
Asclepias curassavica – blood flower
Asclepias eriocarpa – woollypod milkweed
Asclepias erosa – desert milkweed
Asclepias exaltata – poke milkweed
Asclepias fascicularis – Mexican whorled milkweed
Asclepias hirtella – tall green milkweed
Asclepias humistrata – sandhill/pinewoods milkweed
Asclepias incarnata – swamp milkweed
Asclepias lanceolata – fewflower milkweed
Asclepias linaria – pineneedle milkweed
Asclepias meadii – Mead's milkweed
Asclepias nivea – Caribbean milkweed
Asclepias oenotheroide – zizotes milkweed
Asclepias perennis – aquatic milkweed
Asclepias quadrifolia – four-leaved milkweed
Asclepias speciosa – showy milkweed
Asclepias subulata – rush milkweed
Asclepias sullivantii – prairie milkweed
Asclepias syriaca – common milkweed
Asclepias tuberosa – butterfly weed
Asclepias variegata – white milkweed
Asclepias verticillata – whorled milkweed
Asclepias vestita – woolly milkweed
Asclepias viridis – green antelopehorn milkweed
Calotropis gigantea – crown flower
Calotropis procera – giant milkweed
Cynanchum laeve – sand vine milkweed
Gomphocarpus fruticosus – swan plant
Gomphocarpus physocarpus – balloon plant
Sarcostemma clausa – white vine
The eastern monarch migration largely depends upon only three of these species: Asclepias syriaca, A. viridis, and A. asperula. However, Asclepias curassavica, or tropical milkweed, is often planted as an ornamental in butterfly gardens. Year-round plantings in the USA are controversial and criticized, as they may be the cause of new overwintering sites along the U.S. Gulf Coast, leading to year-round breeding of monarchs. This is thought to adversely affect migration patterns, and to cause a dramatic buildup of the dangerous parasite, Ophryocystis elektroscirrha. New research also has shown that monarch larvae reared on tropical milkweed show reduced migratory development (reproductive diapause), and when migratory adults are exposed to tropical milkweed, it stimulates reproductive tissue growth.
Adult food sources
Although larvae eat only milkweed, adult monarchs feed on the nectar of many plants, including:
Apocynum cannabinum – Indian hemp
Asclepias spp. – milkweed
Buddleja davidii – butterfly-bush
Cirsium sp. – thistle
Daucus carota – wild carrot
Dipsacus sylvestris – teasel
Echinacea sp. – coneflower
Erigeron canadensis – horseweed
Eupatorium maculatum – spotted Joe-Pye weed
Eupatorium perfoliatum – common boneset
Hesperis matronalis – dame's rocket
Liatris sp. – blazing stars
Medicago sativa – alfalfa
Solidago sp. – goldenrod
Symphyotrichum sp. – New World aster
Syringa vulgaris – lilac
Trifolium pratense – red clover
Vernonia altissima – tall ironweed
Monarchs obtain moisture and minerals from damp soil and wet gravel, a behavior known as mud-puddling. The monarch has also been noticed puddling at an oil stain on pavement.
Flight and migration
In North America, monarchs migrate both north and south on an annual basis, making long-distance journeys that are fraught with risks. This is a multi-generational migration, with individual monarchs only making part of the full journey. The population east of the Rocky Mountains attempts to migrate to the sanctuaries of the Mariposa Monarca Biosphere Reserve in the Mexican state of Michoacán and parts of Florida. The western population tries to reach overwintering destinations in various coastal sites in central and southern California. The populations east of the Rocky Mountains, which mostly overwinter in central Mexico, may return the following spring as far north as Texas and Oklahoma before producing offspring to carry the journey northward. The second, third, and fourth generations return to their northern locations in the United States and Canada later in the spring and far into the summer.
Captive-raised monarchs appear capable of migrating to overwintering sites in Mexico, though they have a much lower migratory success rate than do wild monarchs (see section on captive-rearing below). Monarch overwintering sites have been discovered recently in Arizona. Monarchs from the eastern US generally migrate longer distances than monarchs from the western US.
Since the 1800s, monarchs have spread throughout the world, and there are now many non-migratory populations globally.
Flight speeds of adults are around .
Interactions with predators
In both caterpillar and butterfly form, monarchs are aposematic, warding off predators with a bright display of contrasting colors that warns potential predators of their undesirable taste and poisonous characteristics. One monarch researcher emphasizes that predation on eggs, larvae, or adults is natural, since monarchs are part of the food chain, and that people should therefore not take steps to kill predators of monarchs.
Larvae feed exclusively on milkweed and consume protective cardiac glycosides. Toxin levels in Asclepias species vary. Not all monarchs are unpalatable; those that are not act as Batesian mimics or automimics of their toxic conspecifics. Cardiac glycoside levels are higher in the abdomen and wings, and some predators can differentiate between these parts and consume the most palatable ones.
Butterfly weed (A. tuberosa) lacks significant amounts of cardiac glycosides (cardenolides), but instead contains other types of toxic glycosides, including pregnanes. This difference may reduce the toxicity of monarchs whose larvae feed on that milkweed species and affect the butterfly's breeding choices, as a naturalist and others have reported that egg-laying monarchs do not favor the plant. Some other milkweeds have similar characteristics.
Types of predators
While monarchs have a wide range of natural predators, none of these is suspected of harming the overall population or of causing the long-term declines in winter colony sizes.
Several species of birds have acquired methods that allow them to ingest monarchs without experiencing the ill effects associated with the cardiac glycosides (cardenolides). The black-backed oriole is able to eat the monarch through an exaptation of its feeding behavior that gives it the ability to identify cardenolides by taste and reject them. The black-headed grosbeak, though, has developed an insensitivity to secondary plant poisons that allows it to ingest monarchs without vomiting. As a result, these orioles and grosbeaks periodically have high levels of cardenolides in their bodies, and they are forced to go on periods of reduced monarch consumption. This cycle effectively reduces potential predation of monarchs by 50% and indicates that monarch aposematism has a legitimate purpose. The black-headed grosbeak has also evolved resistance mutations in the molecular target of the heart poisons, the sodium pump. The specific mutations that evolved in one of the grosbeak's four copies of the sodium pump gene are the same as those found in some rodents that have also evolved to resist cardiac glycosides. Known bird predators include brown thrashers, grackles, robins, cardinals, sparrows, scrub jays, and pinyon jays.
The monarch's white morph appeared in Oahu after the 1965–1966 introduction of two bulbul bird species, Pycnonotus cafer and Pycnonotus jocosus. These are now the most common avian insectivores in Hawaii, and probably the only ones that eat insects as large as monarchs. Although Hawaiian monarchs have low cardiac glycoside levels, the birds may also be tolerant of that toxin. The two species hunt the larvae and some pupae from the branches and undersides of leaves in milkweed bushes. The bulbuls also eat resting and ovipositing adults, but rarely flying ones. Because of its color, the white morph has a higher survival rate than the orange one. This is either because of apostatic selection (i.e., the birds have learned the orange monarchs can be eaten), because of camouflage (the white morph matches the white pubescence of milkweed or the patches of light shining through foliage), or because the white morph does not fit the bird's search image of a typical monarch, so is thus avoided.
Some mice, particularly the black-eared mouse (Peromyscus melanotis), are, like all rodents, able to tolerate large doses of cardenolides and can eat monarchs. Overwintering adults become less toxic over time, making them more vulnerable to predators. In Mexico, about 14% of the overwintering monarchs are eaten by birds and mice, and black-eared mice can eat up to 40 monarchs per night.
In North America, eggs and first-instar larvae of the monarch are eaten by larvae and adults of the introduced Asian lady beetle (Harmonia axyridis). The Chinese mantis (Tenodera sinensis) will consume the larvae once the gut is removed, thus avoiding the cardenolides. Predatory wasps commonly consume larvae. Many hemipteran bugs, including predatory stink bugs in the subfamily Asopinae and assassin bugs in the family Reduviidae, eat monarchs. Larvae can sometimes avoid predation by dropping from the plant or by jerking their bodies.
Parasitoids, including tachinid flies and braconid wasps, develop inside monarch larvae, eventually killing the host and emerging from the larva or pupa. Non-insect parasites and infectious diseases (pathogens) also kill monarchs.
Aposematism
Monarchs are toxic and foul-tasting because of the presence of cardenolides in their bodies, which the caterpillars ingest as they feed on milkweed. Monarchs and other cardenolide-resistant insects rely on a resistant form of the Na+/K+-ATPase enzyme to tolerate significantly higher concentrations of cardenolides than nonresistant species. By ingesting a large amount of plants in the genus Asclepias, primarily milkweed, monarch caterpillars are able to sequester cardiac glycosides, or more specifically cardenolides, which are steroids that act in heart-arresting ways similar to digitalis. It has been found that monarchs sequester cardenolides most effectively from plants of intermediate cardenolide content rather than from those of high or low content. Three mutations that evolved in the monarch's Na+/K+-ATPase were found to be sufficient together to confer resistance to dietary cardiac glycosides. This was tested by swapping these mutations into the same gene in the fruit fly Drosophila melanogaster using CRISPR-Cas9 genome editing. The resulting "monarch flies" were completely resistant to dietary ouabain, a cardiac glycoside found in Apocynaceae, and even sequestered some through metamorphosis, like the monarch.
Different species of milkweed have different effects on the growth, virulence, and transmission of parasites. One species, Asclepias curassavica, appears to reduce the symptoms of Ophryocystis elektroscirrha (OE) infection. Two possible explanations are that the plant boosts overall monarch health and immune function, or that its chemicals have a direct negative effect on the OE parasites. A. curassavica does not cure or prevent infection with OE; it merely allows infected monarchs to live longer, which would allow them to spread OE spores for longer periods. For the average home butterfly garden, this scenario only adds more OE to the local population.
After the caterpillar becomes a butterfly, the toxins shift to different parts of the body. Since many birds attack the wings of the butterfly, having three times the cardiac glycosides in the wings leaves predators with a very foul taste and may prevent them from ever ingesting the body of the butterfly. To combat predators that remove the wings only to ingest the abdomen, monarchs keep the most potent cardiac glycosides in their abdomens.
Mimicry
Monarchs share the defense of noxious taste with the similar-appearing viceroy butterfly in what is perhaps one of the most well-known examples of mimicry. Though long purported to be an example of Batesian mimicry, the viceroy is actually more unpalatable than the monarch, making this a case of Müllerian mimicry.
Human interaction
The monarch is the state insect of Alabama, Idaho, Illinois, Minnesota, Texas, Vermont, and West Virginia. Legislation was introduced to make it the national insect of the United States, but this failed in 1989 and again in 1991.
Homeowners are increasingly establishing butterfly gardens; monarchs can be attracted by cultivating a butterfly garden with specific milkweed species and nectar plants. Efforts are underway to establish these monarch waystations.
A 2012 IMAX film, Flight of the Butterflies, tells the story of the Urquharts, Brugger, and Trail and their efforts to document the then-unknown monarch migration to Mexican overwintering areas.
Sanctuaries and reserves have been created at overwintering locations in Mexico and California to limit habitat destruction. These sites can generate significant tourism revenue. However, with less tourism, monarch butterflies may exhibit higher survival rates, as butterflies in areas isolated from tourists have shown increases in protein content, immune response, and oxidative defense.
Organizations and individuals participate in tagging programs. Tagging information is used to study migration patterns.
The 2012 novel by Barbara Kingsolver, Flight Behavior, deals with the fictional appearance of a large population in the Appalachians.
Captive rearing
Humans interact with monarchs when rearing them in captivity, which has become increasingly popular, although this controversial activity carries risks. On one hand, captive rearing has many positive aspects. Monarchs are bred in schools and used for butterfly releases at hospices, memorial events, and weddings. Memorial services for the September 11 attacks include the release of captive-bred monarchs. Monarchs are used in schools and nature centers for educational purposes. Many homeowners raise monarchs in captivity as a hobby and for educational purposes. Monarchs born in captivity are friendly to humans and can be handled (https://www.youtube.com/watch?v=3brZuY-yWh0). They may need to be taught how to feed on artificial food (https://www.youtube.com/shorts/5KH3NLDaRvU).
On the other hand, this practice becomes problematic when monarchs are "mass-reared". Stories in the Huffington Post in 2015 and Discover magazine in 2016 have summarized the controversy around this issue.
The frequent media reports of monarch declines have encouraged many homeowners to attempt to rear as many monarchs as possible in their homes and then release them to the wild in an effort to "boost the monarch population". Some individuals, such as one in Linn County, Iowa, have reared thousands of monarchs at the same time.
Some monarch scientists do not condone the practice of rearing "large" numbers of monarchs in captivity for release into the wild because of the risks of genetic issues and disease spread. One of the biggest concerns of mass rearing is the potential for spreading the monarch parasite Ophryocystis elektroscirrha into the wild. This parasite can rapidly build up in captive monarchs, especially if they are housed together. The spores of the parasite also can quickly contaminate all housing equipment, so that all subsequent monarchs reared in the same containers then become infected. One researcher stated that rearing more than 100 monarchs constitutes "mass rearing" and should not be done.
In addition to the disease risks, researchers believe these captive-reared monarchs are not as fit as wild ones, owing to the unnatural conditions in which they are raised. Homeowners often raise monarchs in plastic or glass containers in their kitchens, basements, porches, etc., and under artificial lighting and controlled temperatures. Such conditions would not mimic what the monarchs are used to in the wild, and may result in adults that are unsuited for the realities of their wild existence. In support of this, a recent study by a citizen scientist found that captive-reared monarchs have a lower migration success rate than wild monarchs do.
A 2019 study shed light on the fitness of captive-reared monarchs, by testing reared and wild monarchs on a tethered flight apparatus that assessed navigational ability. In that study, monarchs that were reared to adulthood in artificial conditions showed a reduction in navigational ability. This happened even with monarchs that were brought into captivity from the wild for a few days. A few captive-reared monarchs did show proper navigation. This study revealed the fragility of monarch development; if the conditions are not suitable, their ability to properly migrate could be impaired. The same study also examined the genetics of a collection of reared monarchs purchased from a butterfly breeder, and found they were dramatically different from wild monarchs, so much so that the lead author described them as "franken-monarchs".
An unpublished study in 2019 compared behavior of captive-reared versus wild monarch larvae. The study showed that reared larvae exhibited more defensive behavior than wild larvae. The reason for this is unknown, but it could relate to the fact that reared larvae are frequently handled and/or disturbed.
Threats
In February 2015, the United States Fish and Wildlife Service (USFWS) reported a study that showed that nearly a billion monarchs had vanished from the butterfly's overwintering sites since 1990. The agency attributed the monarch's decline in part to a loss of milkweed caused by herbicides that farmers and homeowners had used.
A 2017 report included mention of the new ethanol-in-gasoline standards as reducing the amount of acreage left fallow in the U.S. midwest: "Federal policies such as the Ethanol Fuel Standards (Renewable Fuel Standard), crop insurance, and waning Farm Bill support for CRP reduce support for integrated agro-ecological landscapes capable of sustaining both food production and monarch habitat, principally because these policies promote row crops over mixed, herbaceous perennial vegetation."
In 2018, a study correlated monarch butterfly decline to the fact that 95% of corn and soybean crops grown in the United States were using genetically modified seeds resistant to the herbicide glyphosate. This meant that instead of spreading the herbicide only prior to seed planting, now farmers could have the herbicide spread a second time by air when weeds had begun to challenge the crops. Air application of the herbicide meant that the unplowed margins between field and road that previously supported milkweed and a range of nectar flowers were now greatly diminished.
By 2024, the USFWS calculated that the eastern group of butterflies had declined by approximately 80 percent since the 1980s. The western population was more imperiled, as it had declined by 95 percent. According to the USFWS, the species faces a host of threats, including the loss and degradation of its breeding, migratory, and overwintering habitats, exposure to insecticides, and the growing impacts of climate change.
Western monarch populations
Based on a 2014 20-year comparison, the overwintering numbers west of the Rocky Mountains have dropped more than 50% since 1997 and the overwintering numbers east of the Rockies have declined by more than 90% since 1995. According to the Xerces Society, the monarch population in California decreased 86% in 2018, going from millions of butterflies to tens of thousands of butterflies.
The society's annual 2020–2021 winter count showed a significant decline in the California population. One Pacific Grove site did not have a single monarch butterfly. A primary explanation for this was the destruction of the butterfly's milkweed habitats. This particular population is believed to comprise fewer than 2,000 individuals.
Eastern and midwestern monarch populations
A 2016 study attributed the previous decade's 90% decline in overwintering numbers of the eastern monarch population primarily to the loss of breeding habitat and milkweed. The publication's authors stated that an 11%–57% probability existed that this population will become "quasi-extinct" over the next 20 years (i.e. unable to sustain a stable population). Other threats identified in the study include climate change, insecticides, and disease.
Chip Taylor, the director of Monarch Watch at the University of Kansas, stated in 2013 that the Midwest milkweed habitat "is virtually gone", with 120–150 million acres lost. However, he predicted in 2024 that in the immediate future, and perhaps into the next two decades, the eastern monarch butterfly population will be relatively stable because it is not presently on a continuous downward trend as it was from 2000 to 2006. To help fight the population decline, Monarch Watch encourages the planting of "Monarch Waystations".
Habitat loss due to herbicide use and genetically modified crops
Declines in milkweed abundance and monarch populations between 1999 and 2010 are correlated with the adoption of herbicide-tolerant genetically modified (GM) corn and soybeans, which now constitute 89% and 94% of these crops, respectively, in the U.S. GM corn and soybeans are resistant to the effect of the herbicide glyphosate. Some conservationists attribute the disappearance of milkweed to agricultural practices in the Midwest, where GM seeds are bred to resist herbicides that farmers use to kill unwanted plants that grow near their rows of food crops.
In 2015, the Natural Resources Defense Council filed a suit against the United States Environmental Protection Agency (EPA). The Council argued that the agency ignored warnings about the dangers of glyphosate usage for monarchs. However, a 2018 study has suggested that the decline in milkweed predates the arrival of GM crops.
Losses during migration
Eastern and midwestern monarchs are apparently experiencing problems reaching Mexico. A number of monarch researchers have cited recent evidence obtained from long-term citizen science data that show that the number of breeding (adult) monarchs has not declined in the last two decades.
The lack of long-term declines in the numbers of breeding and migratory monarchs, yet the clear declines in overwintering numbers, suggests a growing disconnect exists between these life stages. One researcher has suggested that mortality from car strikes constitutes an increasing threat to migrating monarchs. A study of road mortality in northern Mexico, published in 2019, showed very high mortality from just two "hotspots" each year, amounting to 200,000 monarchs killed.
Importance of overwintering habitat
The area of Mexican forest to which eastern and midwestern monarchs migrate reached its lowest level in two decades in 2013. The decline was expected to increase during the 2013–2014 season. Mexican environmental authorities continue to monitor illegal logging of the oyamel fir trees; however, organized criminals have repeatedly crushed such efforts in the name of very-short-term financial gain. The oyamel is a major species of evergreen on which the overwintering butterflies spend a significant time during their winter diapause, or suspended development.
A 2014 study acknowledged that while "the protection of overwintering habitat has no doubt gone a long way towards conserving monarchs that breed throughout eastern North America", their research indicates that habitat loss on breeding grounds in the United States is the main cause of both recent and projected population declines.
Western monarch populations have rebounded slightly since 2014, with the Western Monarch Thanksgiving Count tallying 335,479 monarchs in 2022. The population still has a long way to go before a full recovery.
Parasites
Parasites include the tachinid flies Sturmia convergens, Compsilura concinnata, Madremyia saundersii, Hyphantrophaga virilis, Nilea erecta, and Lespesia archippivora. Butterfly larvae parasitized by Lespesia suspend themselves normally but die prior to pupation. The fly's maggot then lowers itself to the ground, forms a brown puparium, and emerges as an adult.
Pteromalid wasps, specifically Pteromalus cassotis, parasitize monarch pupae. These wasps lay their eggs in the pupae while the chrysalis is still soft. Up to 400 adults emerge from the chrysalis after 14–20 days, killing the monarch.
The bacterium Micrococcus flacidifex danai also infects larvae. Just before pupation, the larvae migrate to a horizontal surface and die a few hours later, attached only by one pair of prolegs, with the thorax and abdomen hanging limp. The body turns black shortly thereafter. The bacterium Pseudomonas aeruginosa has no invasive powers, but causes secondary infections in weakened insects. It is a common cause of death in laboratory-reared insects.
Ophryocystis elektroscirrha is another parasite of the monarch. It infects the subcutaneous tissues and propagates by spores formed during the pupal stage. The spores are found over all of the body of infected butterflies, with the greatest number on the abdomen. These spores are passed from female to caterpillar when they rub off during egg laying and are then ingested by the caterpillars. Severely infected individuals are weak, unable to expand their wings, or unable to eclose, and have shortened lifespans, but parasite levels vary in populations. This is not the case in laboratory rearing, where after a few generations, all individuals can be infected.
Infection with O. elektroscirrha creates an effect known as culling, whereby migrating monarchs that are infected are less likely to complete the migration. This results in overwintering populations with lower parasite loads. Owners of commercial butterfly-breeding operations claim that they take steps to control this parasite in their practices, although this claim is doubted by many scientists who study monarchs.
Confusion of host plants
The black swallow-wort (Cynanchum louiseae) and pale swallow-wort (Cynanchum rossicum) plants are problematic for monarchs in North America. Monarchs lay their eggs on these relatives of native vining milkweed (Cynanchum laeve) because they produce stimuli similar to milkweed. Once the eggs hatch, the caterpillars are poisoned by the toxicity of this invasive plant from Europe.
Climate
Climate variations during the fall and summer affect butterfly reproduction. Rainfall and freezing temperatures affect milkweed growth. Omar Vidal, director general of WWF-Mexico, said, "The monarch's lifecycle depends on the climatic conditions in the places where they breed. Eggs, larvae, and pupae develop more quickly in milder conditions. Temperatures above can be lethal for larvae, and eggs dry out in hot, arid conditions, causing a drastic decrease in hatch rate." If a monarch's body temperature is below , it cannot fly. To warm up, monarchs sit in the sun or rapidly shiver their wings.
Climate change may dramatically affect the monarch migration. A study from 2015 examined the impact of warming temperatures on the breeding range of the monarch, and showed that in the next 50 years the monarch host plant will expand its range further north into Canada, and that the monarchs will follow this. While this will expand the breeding locations of the monarch, it will also have the effect of increasing the distance that monarchs must travel to reach their overwintering destination in Mexico, which could result in greater mortality during the migration.
Milkweeds grown at increased temperatures have been shown to contain higher cardenolide concentrations, making the leaves too toxic for the monarch caterpillars. However, these increased concentrations are likely in response to increased insect herbivory, which is also caused by the increased temperatures. Whether increased temperatures make milkweed too toxic for monarch caterpillars when other factors are not present is unknown. Additionally, milkweed grown at carbon dioxide levels of 760 parts per million was found to produce a different mix of the toxic cardenolides, one of which was less effective against monarch parasites.
Conservation status
The number of monarchs overwintering in Mexico has shown a long-term downward trend. Since 1995, coverage has been as high as during the winter of 1996–1997, but on average about . Coverage declined to its lowest point to date () during the winter of 2013–2014, but rebounded to in 2015–2016. The population of monarchs in 2016 was estimated at 200 million; historically, the average has been about 300 million. The 2016 increase was attributed to favorable breeding conditions in the summer of 2015. However, coverage declined by 27% to during the winter of 2016–2017. Some attributed this to a storm that had occurred during March 2016 in the monarchs' previous overwintering season, but this seems unlikely, since most current research shows that overwintering colony sizes do not predict the size of the next summer breeding population.
On July 20, 2022, the International Union for Conservation of Nature added the migratory monarch butterfly (the subspecies common in North America) to its red list as an endangered species. However, a petition in 2023 resulted in its status being changed to "vulnerable".
The monarch butterfly is not currently listed under the Convention on International Trade in Endangered Species of Wild Fauna and Flora or protected specifically under U.S. domestic laws.
On August 14, 2014, the Center for Biological Diversity and the Center for Food Safety petitioned the United States Secretary of the Interior through the USFWS to protect the Danaus plexippus plexippus subspecies of the monarch butterfly as a threatened species under the Endangered Species Act. On December 31, 2014, the USFWS initiated a review of the status of the butterfly to determine whether the petitioned action was warranted, with a due date for the submission of information of March 3, 2015, later extended to December 15, 2020.
On December 17, 2020, the USFWS published in the Federal Register a notice in which it stated that adding the butterfly to the list of threatened and endangered species was "warranted-but-precluded" because budgetary limitations required it to devote its resources to species with higher priorities for listing. The notice stated that the USFWS had 422 12-month petition findings for domestic species yet to be initiated and completed at the beginning of Fiscal Year 2020 (October 1, 2019).
On June 27, 2023, the USFWS published in the Federal Register a Notice of Proposed Rulemaking for a rule that would give the butterfly a listing priority number (LPN) of 8 for adding its species to that list. LPNs range from 1 to 12 (the lower the LPN, the higher the listing priority).
On December 12, 2024, the USFWS published in the Federal Register a proposed rule that would list the butterfly as a threatened species and would designate the butterfly's critical habitat in accordance with the provisions of the Endangered Species Act. The USFWS estimated in the proposed rule that the probability of extinction in the foreseeable future (60 years) is 56-74 percent for the eastern monarch migratory population and 99 percent for the western migratory population. The proposed rule designated seven areas near California's Pacific coast as "critical habitat units" for monarch butterflies. The USFWS will accept comments on the proposed rule until March 12, 2025.
In Ontario, Canada, the monarch butterfly is listed as a species of special concern. In fall 2016, the Committee on the Status of Endangered Wildlife in Canada proposed that the monarch be listed as endangered in Canada, as opposed to its current listing as a "species of concern" in that country. This move, once enacted, would protect critical monarch habitat in Canada, such as major fall accumulation areas in southern Ontario, but it would also have implications for citizen scientists who work with monarchs, and for classroom activities. If the monarch were federally protected in Canada, these activities could be limited, or require federal permits.
In Nova Scotia, the monarch is listed as endangered at the provincial level. This decision (as well as the Ontario decision) apparently is based on a presumption that the overwintering colony declines in Mexico create declines in the breeding range in Canada. Two recent studies have examined long-term trends in monarch abundance in Canada, using either butterfly atlas records or citizen science butterfly surveys, and neither shows evidence of a population decline in Canada.
Conservation efforts
Although numbers of breeding monarchs in eastern North America have apparently not decreased, reports of declining numbers of overwintering butterflies have inspired efforts to conserve the species.
Federal actions
On June 20, 2014, President Barack Obama issued a presidential memorandum entitled "Creating a Federal Strategy to Promote the Health of Honey Bees and Other Pollinators". The memorandum established a Pollinator Health Task Force, to be co-chaired by the Secretary of Agriculture and the Administrator of the Environmental Protection Agency.
The U.S. General Services Administration (GSA) publishes sets of landscape performance requirements in its P100 documents, which mandate standards for the GSA's Public Buildings Service. Beginning in March 2015, those performance requirements and their updates have included four primary aspects for planting designs that are intended to provide adequate on-site foraging opportunities for targeted pollinators. The targeted pollinators include bees, butterflies, and other beneficial insects.
In May 2015, the Pollinator Health Task Force issued a "National Strategy to Promote the Health of Honey Bees and Other Pollinators", which laid out federal actions to achieve three goals.
Many of the priority projects that the National Strategy identified focused on the I-35 corridor, which extends from Texas to Minnesota. The area through which that highway travels provides spring and summer breeding habitats in the United States' key monarch migration corridor.
The Task Force simultaneously issued a "Pollinator Research Action Plan". The Plan outlined five main action areas, covered in ten subject-specific chapters. The action areas were: (1) Setting a Baseline; (2) Assessing Environmental Stressors; (3) Restoring Habitat; (4) Understanding and Supporting Stakeholders; (5) Curating and Sharing Knowledge.
In May 2015, the U.S. Department of Agriculture (USDA) and the U.S. Department of the Interior (USDI) issued a 52-page document entitled "Pollinator-Friendly Best Management Practices for Federal Lands". The document consolidated general information about the practices and procedures to use when considering pollinator needs in project development and management of Federal lands that are managed for native diversity and multiple uses. The document also contained a series of actions to be considered when determining those lands best suited for restoration and rehabilitation of monarch habitat. These included an assurance that native wildflowers are available, diverse and abundant to provide nectar for monarchs, and an assurance that milkweed species that female monarchs prefer for egg laying are available or will be planted. The document identified those milkweed species for each of seven regions within the United States.
On December 4, 2015, President Obama signed into law the Fixing America's Surface Transportation (FAST) Act (Pub. L. 114-94). The FAST Act placed a new emphasis on efforts to support pollinators. To accomplish this, the FAST Act amended Title 23 (Highways) of the United States Code. The amendment directed the United States Secretary of Transportation, when carrying out programs under that title in conjunction with willing states, to:
encourage integrated vegetation management practices on roadsides and other transportation rights-of-way, including reduced mowing; and
encourage the development of habitat and forage for Monarch butterflies, other native pollinators, and honey bees through plantings of native forbs and grasses, including noninvasive, native milkweed species that can serve as migratory way stations for butterflies and facilitate migrations of other pollinators.
The FAST Act also stated that activities to establish and improve pollinator habitat, forage, and migratory way stations may be eligible for Federal funding if related to transportation projects funded under Title 23.
In February 2016, the Office of the U.S. Secretary of the Interior issued a memorandum containing an attachment entitled "Strategy for Implementing Pollinator-Friendly Landscaping Design and Maintenance at Department of the Interior Sites". The attachment described specific actions that would address the incorporation of pollinator-friendly landscaping design and maintenance into new construction and major renovations, existing sites, contracts, leases and occupancy agreements, and education/outreach programs. The memorandum containing the attachment directed the USDI's bureaus and offices (which include the Fish and Wildlife Service, the Bureau of Land Management, and the National Park Service) to implement those actions to the extent that they are appropriate for, and consistent with, the mission and function of the facility/site.
In June 2016, the Pollinator Health Task Force issued a "Pollinator Partnership Action Plan". That Plan provided examples of past, ongoing, and possible future collaborations between the federal government and non-federal institutions to support pollinator health under each of the National Strategy's goals.
The USDA's Farm Service Agency helps increase U.S. populations of monarch butterfly and other pollinators through its Conservation Reserve Program's State Acres for Wildlife Enhancement (SAFE) Initiative. The SAFE Initiative provides an annual rental payment to farmers who agree to remove environmentally sensitive land from agricultural production and who plant species that will improve environmental health and quality. Among other things, the initiative encourages landowners to establish wetlands, grasses, and trees to create habitats for species that the FWS has designated to be threatened or endangered.
As part of its targeted monarch butterfly effort, the USDA's Natural Resources Conservation Service (NRCS) works with agricultural producers in the midwest and southern Great Plains to combat the decline of monarch butterflies by planting milkweed and other nectar-rich plants on private lands. The NRCS also provides region-specific guides and plant lists that support populations of monarch butterflies and other pollinators in the Greater Appalachian Mountains Region, the Midwest Region, the Northern and Southern Great Plains, and the Western Coastal Plain.
Other actions
Agriculture companies and other organizations are being asked to set aside areas that remain unsprayed so that monarchs can breed. In addition, national and local initiatives are underway to help establish and maintain pollinator habitats along corridors containing power lines and roadways. The Federal Highway Administration, state governments, and local jurisdictions are encouraging highway departments and others to limit their use of herbicides and to reduce mowing, to help milkweed grow and to encourage monarchs to reproduce within their rights-of-way.
National Cooperative Highway Research Program report
In 2020, the National Cooperative Highway Research Program (NCHRP) of the Transportation Research Board issued a 208-page report that described a project that had examined the potential for roadway corridors to provide habitat for monarch butterflies. A part of the project developed tools for roadside managers to optimize potential habitat for monarch butterflies in their road rights-of-way.
Such efforts are controversial because the risk of butterfly mortality near roads is high. Several studies have shown that motor vehicles kill millions of monarchs and other butterflies every year. Also, some evidence indicates that monarch larvae living near roads experience physiological stress conditions, as evidenced by elevations in their heart rate.
The NCHRP report acknowledged that, among other hazards, roads present a danger of traffic collisions for monarchs, stating that these effects appear to be more concentrated in particular funnel areas during migration.
Butterfly gardening and monarch waystations
The practice of butterfly gardening and creating "monarch waystations" is commonly thought to increase the populations of butterflies. Efforts to restore declining monarch populations by establishing butterfly gardens and monarch waystations require particular attention to the butterfly's food preferences and population cycles, as well as to the conditions needed to propagate and maintain milkweed.
For example, in the Washington, DC, area and elsewhere in the northeastern and midwestern United States, common milkweed (Asclepias syriaca) is among the most important food plants for monarch caterpillars. A U.S. Department of Agriculture conservation planting guide for Maryland recommends that, for optimum wildlife and pollinator habitat in mesic sites (especially for monarchs), a seed mix should contain 6.0% A. syriaca by weight and 2.0% by seed.
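As a purely illustrative aid (not part of the USDA guide), the short sketch below shows how the guide's 6.0%-by-weight figure could be turned into an amount of A. syriaca seed for a mix of a given total weight; the total mix weight used here is a hypothetical value.

```python
# Minimal illustrative sketch (hypothetical numbers, not from the USDA guide):
# amount of A. syriaca seed needed for a mix that is 6.0% A. syriaca by weight.

def milkweed_seed_weight(total_mix_lb: float, pct_by_weight: float = 6.0) -> float:
    """Return the pounds of A. syriaca seed for the given total mix weight."""
    return total_mix_lb * pct_by_weight / 100.0

if __name__ == "__main__":
    total_lb = 25.0  # hypothetical total weight of the seed mix, in pounds
    print(f"A {total_lb} lb mix at 6.0% by weight needs "
          f"{milkweed_seed_weight(total_lb):.2f} lb of A. syriaca seed")
```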
However, monarchs prefer to lay eggs on A. syriaca when its foliage is soft and fresh. Because monarch reproduction in those areas peaks in late summer, when milkweed foliage is old and tough, A. syriaca needs to be mowed or cut back in June through August so that it is regrowing rapidly when egg laying peaks. Similar conditions exist for showy milkweed (A. speciosa) in Michigan and for green antelopehorn milkweed (A. viridis) where it grows in the Southern Great Plains and the Western United States. Further, the seeds of A. syriaca and some other milkweeds need periods of cold treatment (cold stratification) before they will germinate.
To protect seeds from washing away during heavy rains and from seed-eating birds, one can cover them with a light fabric or with a layer of straw mulch. However, mulch acts as an insulator: thicker layers can prevent seeds from germinating if they keep soil temperatures from rising enough when winter ends. Further, few seedlings can push through a thick layer of mulch.
Although monarch caterpillars will feed on butterfly weed (A. tuberosa) in butterfly gardens, it is typically not a heavily used host plant for the species. The plant has rough leaves and a layer of trichomes, which may inhibit oviposition or decrease a female's ability to sense leaf chemicals. The plant's low levels of cardenolides may also deter monarchs from laying eggs on the plant.
While the colorful flowers of A. tuberosa provide nectar for many adult butterflies, the plant may be less suitable for use in butterfly gardens and monarch waystations than other milkweed species.
Breeding monarchs prefer to lay eggs on swamp milkweed (A. incarnata) in the midwest. However, A. incarnata is an early successional plant that usually grows at the margins of wetlands and in seasonally flooded areas. The plant is slow to spread via seeds, does not spread by runners, and tends to disappear as vegetative densities increase and habitats dry out. Although A. incarnata plants can survive for up to 20 years, most live only two to five years in gardens. The species is not shade-tolerant and is not a good vegetative competitor.
| Biology and health sciences | Lepidoptera | Animals |
246507 | https://en.wikipedia.org/wiki/Silage | Silage | Silage is fodder made from green foliage crops which have been preserved by fermentation to the point of souring. It is fed to cattle, sheep and other ruminants. The fermentation and storage process is called ensilage, ensiling, or silaging. The exact methods vary, depending on available technology, local tradition and prevailing climate.
Silage is usually made from grass crops including maize, sorghum or other cereals, using the entire green plant (not just the grain). Specific terms may be used for silage made from particular crops: oatlage for oats, haylage for alfalfa (haylage may also refer to high dry matter silage made from hay).
History
Using the same technique as the process for making sauerkraut, green fodder had been preserved for animals in parts of Germany since the start of the 19th century. This gained the attention of French agriculturist Auguste Goffart of Sologne, near Orléans, who published a book in 1877 describing his experiences of preserving green crops in silos. Goffart's work attracted considerable attention. The conditions of dairy farming in the United States suited the ensiling of green corn fodder, and the practice was soon adopted by New England farmers. Francis Morris of Maryland prepared the first silage produced in America in 1876. The favourable results obtained in the US led to the introduction of the system in the United Kingdom, where Thomas Kirby first introduced the process for British dairy herds.
The modern method of making silage, preserved with added acid and by preventing contact with air, was invented by Finnish academic and professor of chemistry Artturi Ilmari Virtanen, who was awarded the 1945 Nobel Prize in Chemistry "for his research and inventions in agricultural and nutrition chemistry, especially for his fodder preservation method".
Early silos were made of stone or concrete either above or below ground, but it is recognized that air may be sufficiently excluded in a tightly pressed stack, though in this case a few inches of the fodder around the sides is generally useless owing to mildew. In the US, structures were typically constructed of wooden cylinders to 35 or 40 ft. in depth.
In the early days of mechanized agriculture (late 1800s), stalks were cut and collected manually using a knife and horsedrawn wagon, and fed into a stationary machine called a "silo filler" that chopped the stalks and blew them up a narrow tube to the top of a tower silo.
Production
The crops most often used for ensilage are the ordinary grasses, clovers, alfalfa, vetches, oats, rye and maize. Many crops have ensilaging potential, including potatoes and various weeds, notably spurrey such as Spergula arvensis. Silage must be made from plant material with a suitable moisture content: about 50% to 60% depending on the means of storage, the degree of compression, and the amount of water that will be lost in storage, but not exceeding 75%. Weather during harvest need not be as fair and dry as when harvesting for drying. For corn, harvest begins when the whole-plant moisture is at a suitable level, ideally a few days before it is ripe. For pasture-type crops, the grass is mown and allowed to wilt for a day or so until the moisture content drops to a suitable level. Ideally the crop is mowed when in full flower, and deposited in the silo on the day of its cutting.
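The moisture figures above can be checked with a simple wet-weight calculation; the sketch below is only an illustration, and the sample weights in it are made-up values rather than measurements from the text.

```python
# Minimal illustrative sketch (hypothetical sample weights): estimate whole-plant
# moisture content and compare it with the ranges mentioned in the text.

def moisture_percent(fresh_weight_g: float, oven_dry_weight_g: float) -> float:
    """Moisture content as a percentage of fresh (wet) weight."""
    return (fresh_weight_g - oven_dry_weight_g) / fresh_weight_g * 100.0

def classify(moisture: float) -> str:
    if moisture > 75.0:
        return "too wet for ensiling (exceeds 75%)"
    if 50.0 <= moisture <= 60.0:
        return "within the typical 50-60% range"
    return "outside the typical range"

if __name__ == "__main__":
    fresh, dry = 1000.0, 420.0  # grams, hypothetical field sample
    m = moisture_percent(fresh, dry)
    print(f"Moisture: {m:.1f}% -- {classify(m)}")
```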
After harvesting, crops are shredded to pieces about long. The material is spread in uniform layers over the floor of the silo, and closely packed. When the silo is filled or the stack built, a layer of straw or some other dry porous substance may be spread over the surface. In the silo, the pressure of the material, when chaffed, excludes air from all but the top layer; in the case of the stack, extra pressure is applied by weights to prevent excessive heating.
Equipment
Forage harvesters collect and chop the plant material, and deposit it in trucks or wagons. These forage harvesters can be either tractor-drawn or self-propelled. Harvesters blow the chaff into the wagon through a chute at the rear or side of the machine. Chaff may also be emptied into a bagger, which puts the silage into a large plastic bag that is laid out on the ground.
In North America, Australia, northwestern Europe, and New Zealand it is common for silage to be placed in large heaps on the ground, rolled by tractor to push out the air, then covered with plastic sheets that are held down by used tires or tire ring walls. In New Zealand and Northern Europe, 'clamps' made of concrete or old wooden railway ties (sleepers) and built into the side of a bank are sometimes used. The chopped grass can then be dumped in at the top, to be drawn from the bottom in winter. This requires considerable effort to compress the stack in the silo to cure it properly. Again, the pit is covered with plastic sheet and weighed down with tires.
In an alternative method, the cut vegetation is formed into bales using a baler, making balage (North America) or silage bales (UK, Australia, New Zealand). The grass or other forage is cut and partly dried until it contains 30–40% moisture (much drier than bulk silage, but too damp to be stored as dry hay). It is then made into large bales which are wrapped tightly in plastic to exclude air. The plastic may wrap the whole of each cylindrical or cuboid bale, or be wrapped around only the curved sides of a cylindrical bale, leaving the ends uncovered. In this case, the bales are placed tightly end to end on the ground, making a long continuous "sausage" of silage, often at the side of a field. The wrapping may be performed by a bale wrapper, while the baled silage is handled using a bale handler or a front-loader, either impaling the bale on a flap, or by using a special grab. The flaps do not hole the bales.
In the UK, baled silage is most often made in round bales about , individually wrapped with four to six layers of "bale wrap plastic" (black, white or green 25-micrometre stretch film). The percentage of dry matter can vary from about 20% dry matter upwards. The continuous "sausage" referred to above is made with a special machine which wraps the bales as they are pushed through a rotating hoop which applies the bale wrap to the outside of the bales (round or square) in a continuous wrap. The machine places the bales on the ground after wrapping by moving forward slowly during the wrapping process.
Haylage
Haylage sometimes refers to high dry matter silage of around 40% to 60%, typically made from hay. Horse haylage is usually 60% to 70% dry matter, made in small bales or larger bales.
Handling of wrapped bales is most often done with some type of gripper that squeezes the plastic-covered bale between two metal parts to avoid puncturing the plastic. Simple fixed versions, made of two shaped pipes or tubes spaced far enough apart to slide under the sides of the bale but not let it slip through when lifted, are available for round bales. Often used on the tractor's loader as an attachment called a bale grabber, they incorporate a trip tipping mechanism which can flip the bales over onto the flat side or end for storage on the thickest plastic layers.
Fermentation
Silage undergoes anaerobic fermentation, which starts about 48 hours after the silo is filled, and converts sugars to acids. Fermentation is essentially complete after about two weeks.
Before anaerobic fermentation starts, there is an aerobic phase in which the trapped oxygen is consumed. How closely the fodder is packed determines the nature of the resulting silage by regulating the chemical reactions that occur in the stack. When closely packed, the supply of oxygen is limited, and the attendant acid fermentation brings about decomposition of the carbohydrates present into acetic, butyric and lactic acids. This product is named sour silage. If the fodder is unchaffed and loosely packed, or the silo is built gradually, oxidation proceeds more rapidly and the temperature rises; if the mass is compressed when the temperature is , the action ceases and sweet silage results. The nitrogenous ingredients of the fodder also change: in making sour silage, as much as one-third of the albuminoids may be converted into amino and ammonium compounds; in making sweet silage, a smaller proportion is changed, but they become less digestible. If the fermentation process is poorly managed, sour silage acquires an unpleasant odour due to excess production of ammonia or butyric acid (the latter is responsible for the smell of rancid butter).
In the past, the fermentation was conducted by indigenous microorganisms, but, today, some bulk silage is inoculated with specific microorganisms to speed fermentation or improve the resulting silage. Silage inoculants contain one or more strains of lactic acid bacteria, and the most common is Lactobacillus plantarum. Other bacteria used include Lactobacillus buchneri, Enterococcus faecium and Pediococcus species.
Ryegrasses have high sugar contents and respond to nitrogen fertiliser better than any other grass species. These two qualities have made ryegrass the most popular grass for silage-making for the last sixty years. Three ryegrasses are commonly available as seed and widely used: Italian, perennial, and hybrid.
Pollution and waste
The fermentation process of silo or pit silage releases liquid. Silo effluent is corrosive. It can also contaminate water sources unless collected and treated. The high nutrient content can lead to eutrophication (hypertrophication), the growth of bacterial or algal blooms.
Plastic sheeting used for sealing pit or baled silage needs proper disposal, and some areas have recycling schemes for it. Traditionally, farms have burned silage plastics; however odor and smoke concerns have led certain communities to restrict that practice.
Storing silage
Silage must be firmly packed to minimize the oxygen content, lest it spoil.
Silage goes through four major stages in a silo:
Presealing, during the first few days after the silo is filled: some respiration and dry matter (DM) loss occur, but then stop.
Fermentation, which occurs over a few weeks: pH drops, further DM is lost, and hemicellulose is broken down; aerobic respiration stops.
Infiltration, in which some oxygen enters, allowing limited microbial respiration: available carbohydrates (CHOs) are lost as heat and gas.
Emptying, which exposes the silage surface, causing additional loss; the rate of loss increases.
Safety
Silos are potentially hazardous: deaths may occur in the process of filling and maintaining them, and several safety precautions are necessary. There is a risk of injury by machinery or from falls. When a silo is filled, fine dust particles in the air can become explosive because of their large aggregate surface area. Also, fermentation presents respiratory hazards. The ensiling process produces "silo gas" during the early stages of the fermentation process. Silage gas contains nitric oxide (NO), which will react with oxygen (O2) in the air to form nitrogen dioxide (NO2), which is toxic. Lack of oxygen inside the silo can cause asphyxiation. Molds that grow when air reaches cured silage can cause organic dust toxic syndrome. Collapsing silage from large bunker silos has caused deaths.
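The conversion of nitric oxide to toxic nitrogen dioxide mentioned above is the standard oxidation reaction, which can be written as the balanced equation:

\[ 2\,\mathrm{NO} + \mathrm{O_2} \rightarrow 2\,\mathrm{NO_2} \]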
Silage itself poses no special danger; however, improvements in legislation regulating the animal feed industry have reduced problems with food-related human diseases by improving the hygienic quality of silage. Milk from cows fed silage containing clostridial spores can pose a risk in hard cheese production. Particular attention must be paid to zoonotic pathogens such as Listeria and E. coli bacteria, as well as to clostridia and mycotoxins, which can result from deficient hygiene in silage production and can end up in dairy products.
Nutrition
Ensilage can be substituted for root crops. Bulk silage is commonly fed to dairy cattle, while baled silage tends to be used for beef cattle, sheep and horses. The advantages of silage as animal feed are several:
During fermentation, the silage bacteria act on the cellulose and carbohydrates in the forage to produce volatile fatty acids (VFAs), such as acetic, propionic, lactic, and butyric acids. By lowering pH, these produce a hostile environment for competing bacteria that might cause spoilage. The VFAs thus act as natural preservatives, in the same way that the lactic acid in yogurt and cheese increases the preservability of what began as milk, or how vinegar (dilute acetic acid) preserves pickled vegetables. This preservative action is particularly important during winter in temperate regions, when green forage is unavailable.
When silage is prepared under optimal conditions, the modest acidity also has the effect of improving palatability, and provides a dietary contrast for the animal. (However, excessive production of acetic and butyric acids can reduce palatability: the mix of bacteria is ideally chosen so as to maximize lactic acid production.)
Several of the fermenting organisms produce vitamins: for example, Lactobacillus species produce folic acid and vitamin B12.
The fermentation process that produces VFA also yields energy that the bacteria use: some of the energy is released as heat. Silage is thus modestly lower in caloric content than the original forage, in the same way that yogurt has modestly fewer calories than milk. However, this loss of energy is offset by the preservation characteristics and improved digestibility of silage.
Anaerobic digestion
Silage may be used for anaerobic digestion.
Fish silage
Fish silage is a method used for conserving by-products from fishing for later use as feed in fish farming. In this way, the parts of the fish that are not used as human food, such as fish guts (entrails), heads, and trimmings, are utilized as ingredients in feed pellets. The ensiling is performed by first grinding the remains and mixing them with formic acid, then storing the mixture in a tank. The acid aids preservation as well as further dissolving the residues. Process tanks for fish silage can be aboard ships or on land.
| Technology | Agronomical techniques | null |
10406714 | https://en.wikipedia.org/wiki/Acacia%20penninervis | Acacia penninervis | Acacia penninervis, commonly known as mountain hickory wattle or blackwood, is a perennial shrub or tree in the genus Acacia, subgenus Phyllodineae, that is native to eastern Australia.
Description
The shrub or tree typically grows to a height of and has an erect to spreading habit. It has finely or deeply fissured bark that is usually a dark grey colour. The glabrous branchlets are more or less terete and occasionally covered in a fine white powdery coating. Like most species of Acacia, it has phyllodes rather than true leaves. The glabrous, evergreen phyllodes have a narrowly oblanceolate or narrowly elliptic shape and are straight to slightly curved, with a length of and a width of . They have a prominent midvein and marginal veins and are finely penniveined. The plant blooms throughout the year, producing pale yellow flowers.
Taxonomy
The species was first formally described by the botanist Augustin Pyramus de Candolle in 1825 as part of the work Leguminosae. Prodromus Systematis Naturalis Regni Vegetabilis. It was reclassified as Racosperma penninerve by Leslie Pedley in 1986, then transferred back to the genus Acacia in 2006. Other synonyms include: Acacia impressa, Acacia penninervis var. impressa and Acacia impressa var. impressa.
Varieties
Acacia penninervis var. longiracemosa
Acacia penninervis var. penninervis
Distribution
It occurs in the Australian Capital Territory and the Australian states of New South Wales, Queensland and Victoria, and as an introduced species on New Zealand's North Island and South Island. The variety A. p. var. penninervis occurs across the same Australian range. The variety A. p. var. longiracemosa occurs in coastal districts of southern Queensland and northern New South Wales.
Uses
The 1889 book The Useful Native Plants of Australia records that common names included "Hickory" and "Blackwood" and that "The bark (and, according to some, the leaves) of this tree was formerly used by the aboriginals [sic.] of southern New South Wales for catching fish. They would throw them into a waterhole when the fish would rise to the top and be easily caught. Neither the leaves nor bark contain strictly poisonous substances, but, like the other species of Acacia, they would be deleterious, owing to their astringency."
Its uses include environmental management. The tannin content of the bark is approximately 18%.
| Biology and health sciences | Fabales | Plants |
4672524 | https://en.wikipedia.org/wiki/Syngnathiformes | Syngnathiformes | The Syngnathiformes are an order of ray-finned fishes that includes the leafy seadragons, sea moths, trumpetfishes and seahorses, among others.
These fishes have generally elongate, narrow bodies surrounded by a series of bony rings, with small, tubular mouths. The shape of their mouth—at least, in syngnathids—allows for the ingestion of prey at close range via suction. Many species of Syngnathiformes also employ strategic camouflage (such as cryptic coloration and overall physical form) to hunt successfully and gain closer access to prey, as well as to protect themselves from larger predators. Several groups, for example, live among seaweed, not only swimming with their bodies aligned vertically (to blend in with the floating plant matter) but have also developed physical features that mimic the seaweed. The pygmy seahorses are among the smallest of all syngnathids, with most being so tiny—and mimicking the specific coral they spend their lives on—that they were only recently described by scientists.
The most defining characteristic of Syngnathiformes is their reproductive and sexual system, in which syngnathid males become "pregnant" and carry the embryonic fry. The males house the fertilized eggs in an osmo-regulated brood pouch, or (in some species) adhere them to their tail, until the eggs reach maturity.
The name Syngnathiformes means "conjoined-jaws". It is derived from Ancient Greek syn (συν, "together") + gnathos (γνάθος, "jaw"). The ending for orders, "-formes", is derived from Latin, and indicates "of similar form".
Fossil record
The earliest known syngnathiform is Gasteroramphosus from the Late Cretaceous (either Santonian or Campanian) of Italy, which is similar in form to Macroramphosus but has some characters suggestive of a relationship to the Gasterosteoidei. However, most recent studies have reaffirmed it as a syngnathiform. The second-oldest syngnathiform is the syngnathoid Eekaulostomus from the early Paleocene (Danian) of Mexico. Many fossil syngnathiform families are known from the Paleogene.
Systematics and taxonomy
In some models, these fishes are placed as the suborder Syngnathoidei of the order Gasterosteiformes together with the sticklebacks and their relatives. Better supported by the evidence now available is the traditional belief that they are better considered separate orders, and indeed among the Acanthopterygii, they might not be particularly close relatives at all.
In addition, the Pegasidae (dragonfishes and sea moths) are variously placed with the pipefish or the stickleback lineage. While the placement in Syngnathiformes seems to be correct for the latter, the former is possibly an actinopterygian order of its own. Following the convention of the major fish classification organizations (Fish Base, ITIS, Encyclopedia of Life), the Indostomidae are currently placed in the Gasterosteiformes.
Morphological traits uniting the flying gurnards (Dactylopteridae) and the Syngnathiformes have long been noted. Most authors, however, placed them with the Scorpaeniformes. However, DNA sequence data quite consistently support the belief that the latter are paraphyletic with the Gasterosteiformes sensu lato. As it seems, flying gurnards are particularly close to Aulostomidae and Fistulariidae, and probably should be included with these.
The order as set out in the 5th Edition of Fishes of the World is classified as follows:
Order Syngnathiformes
Suborder Syngnathoidei
Superfamily Pegasoidea
Family Pegasidae (seamoths)
Superfamily Syngnathoidea
Family Solenostomidae (ghost pipefishes)
Family Syngnathidae (pipefishes and seahorses)
Subfamily Syngnathinae (pipefishes)
Subfamily Hippocampinae (seahorses and pygmy pipehorses)
Suborder Aulostomoidei
Superfamily Aulostomoidea
Family Aulostomidae (trumpetfishes)
Family Fistulariidae (cornetfishes)
Superfamily Centriscoidea
Family Macroramphosidae (snipefishes)
Family Centriscidae (shrimpfishes)
Family Dactylopteridae (flying gurnards)
Other authorities are of the view that, without the inclusion of other taxa, Syngnathiformes as traditionally defined is paraphyletic. This wider order consists of a "long snouted" clade and a benthic clade, and this classification is as follows:
Order Syngnathiformes
"long snouted clade"
Suborder Syngnathoidei
Family Solenostomidae (ghost pipefishes)
Family Syngnathidae (pipefishes and seahorses)
Family Aulostomidae (trumpetfishes)
Family Fistulariidae (cornetfishes)
Family Centriscidae (shrimpfishes and snipefishes)
"Benthic clade"
Suborder Callionymoidei
Family Callionymidae (dragonets)
Family Draconettidae (slope dragonets)
Suborder Mulloidei
Family Mullidae (goatfishes)
Suborder Dactylopteroidei
Family Dactylopteridae (flying gurnards)
Family Pegasidae (seamoths)
In their study, Longo et al. (2017) found short distances between the groupings on the syngnathiform phylogenetic tree, supporting the hypothesis that there had been a rapid but ancient radiation among the basal Syngnathiformes.
Fossil families
The following fossil families are known:
Family †Aulorhamphidae
Family †Paraeoliscidae
Suborder Syngnathoidei
Family †Eekaulostomidae
Superfamily Aulostomoidea
Family †Fistularioididae
Family †Parasynarcualidae
Family †Urosphenidae
Superfamily Syngnathoidea
Family †Protosyngnathidae
Superfamily Centriscoidea
Family †Gerpegezhidae
Suborder Dactylopteroidei
Superfamily Pegasoidea
Family †Rhamphosidae
| Biology and health sciences | Acanthomorpha | Animals |
4673783 | https://en.wikipedia.org/wiki/Double%20fisherman%27s%20knot | Double fisherman's knot | The double fisherman's knot or grapevine knot is a bend. This knot and the triple fisherman's knot are the variations used most often in climbing, arboriculture, and search and rescue. The knot is formed by tying a double overhand knot, in its strangle knot form, with each end around the opposite line's standing part.
Usage
A primary use of this knot is to form high strength (round) slings of cord for connecting pieces of a climber's protection system.
Other uses
This knot, along with the basic fisherman's knot can be used to join the ends of a necklace cord. The two strangle knots are left separated, and in this way the length of the necklace can be adjusted without breaking or untying the strand.
Tying
Line form
Drop form
Security
Dyneema/Spectra's very high lubricity leads to poor knot-holding ability. This has led to the recommendation to use the triple fisherman's knot rather than the traditional double fisherman's knot in 6 mm Dyneema-core cord, to avoid a particular failure mechanism of the double fisherman's in which the sheath first fails at the knot and the core then slips through.
| Technology | Flexible components | null |
4675301 | https://en.wikipedia.org/wiki/Seymouria | Seymouria | Seymouria is an extinct genus of seymouriamorph from the Early Permian of North America and Europe. Although they were amphibians (in a biological sense), Seymouria were well-adapted to life on land, with many reptilian features—so many, in fact, that Seymouria was first thought to be a primitive reptile. It is primarily known from two species, Seymouria baylorensis and Seymouria sanjuanensis. The type species, S. baylorensis, is more robust and specialized, though its fossils have only been found in Texas. On the other hand, S. sanjuanensis is more abundant and widespread. This smaller species is known from multiple well-preserved fossils, including a block of six skeletons found in the Cutler Formation of New Mexico, and a pair of fully grown skeletons from the Tambach Formation of Germany, which were fossilized lying next to each other.
For the first half of the 20th century, Seymouria was considered one of the oldest and most "primitive" known reptiles. Paleontologists noted how the general body shape resembled that of early reptiles such as captorhinids, and that certain adaptations of the limbs, hip, and skull were also similar to those of early reptiles, rather than any species of modern or extinct amphibians known at the time. The strongly-built limbs and backbone also supported the idea that Seymouria was primarily terrestrial, spending very little time in the water. However, in the 1950s, fossilized tadpoles were discovered in Discosauriscus, which was a close relative of Seymouria in the group Seymouriamorpha. This shows that seymouriamorphs (including Seymouria) had a larval stage which lived in the water, therefore making Seymouria not a true reptile, but rather an amphibian (in the traditional, paraphyletic sense of the term). At that time, it was still thought to be closely related to reptiles, and many recent studies still support this hypothesis. If this hypothesis is correct, Seymouria is still an important transitional fossil documenting the acquisition of reptile-like skeletal features prior to the evolution of the amniotic egg, which characterizes amniotes (reptiles, mammals, and birds). However, under the alternative hypothesis that Seymouria is a stem-tetrapod, it has little relevance to the origin of amniotes.
History
Early history as a putative reptile
Fossils of Seymouria were first found near the town of Seymour, in Baylor County, Texas (hence the name of the type species, Seymouria baylorensis, referring to both the town and county). The earliest fossils of the species to be collected were a cluster of individuals acquired by C.H. Sternberg in 1882. However, these fossils would not be properly prepared and identified as Seymouria until 1930.
Various paleontologists from around the world recovered their own Seymouria baylorensis fossils in the late 19th century and early 20th century. Seymouria was formally named and described in 1904 based on a pair of incomplete skulls, one of which was associated with a few pectoral and vertebral elements. These fossils were described by German paleontologist Ferdinand Broili, and are now stored in Munich. American paleontologist S.W. Williston later described a nearly complete skeleton in 1911, and noted that "Desmospondylus anomalus", a taxon he had recently named from fragmentary limbs and vertebrae, likely represented juvenile or even embryonic individuals of Seymouria.
Likewise, English paleontologist D.M.S. Watson noted in 1918 that Conodectes, a dubious genus named by Edward Drinker Cope back in 1896, was likely synonymous with Seymouria. Robert Broom (1922) argued that the genus should be referred to as Conodectes since that name was published first, but Alfred Romer (1928) objected, noting that the name Seymouria was too well established within the scientific community to be replaced. During this time, Seymouria was generally seen as a very early reptile, part of an evolutionary grade known as "cotylosaurs", which also included many other stout-bodied Permian reptiles or reptile-like tetrapods.
Proposed amphibian affinities
Many paleontologists were uncertain about Seymouria's affinity with the reptiles, noting many similarities with the embolomeres, which were unquestionably "labyrinthodont" amphibians. This combination of features from reptiles (i.e. other "cotylosaurs") and amphibians (i.e. embolomeres) was evidence that Seymouria was central to the evolutionary transition between the two groups. Regardless, not enough was known about its biology to conclude which group it truly belonged to. Broom (1922) and Russian paleontologist Peter Sushkin (1925) supported a placement among the Amphibia, but most studies around this time tentatively considered it an extremely "primitive" reptile; these included a comprehensive redescription of material referred to the species, published by Theodore E. White in 1939.
However, indirect evidence that Seymouria was not biologically reptilian started to emerge by the 1940s. Around this time, several newly described genera were linked to Seymouria as part of the group Seymouriamorpha. Some seymouriamorphs, such as Kotlassia, had evidence of aquatic habits, and even Seymouria itself had occasionally been argued to possess lateral lines, sensory structures only usable underwater. Watson (1942) and Romer (1947) each reversed their stance on Seymouria's classification, placing it among the amphibians rather than the reptiles. Perhaps the most damning evidence came in 1952, when Czech paleontologist Zdeněk Špinar reported gills preserved in juvenile fossils of the seymouriamorph Discosauriscus. This unequivocally proved that seymouriamorphs had an aquatic larval stage, and thus were amphibians, biologically speaking. Nevertheless, the numerous similarities between Seymouria and reptiles supported the idea that seymouriamorphs were close to the ancestry of amniotes.
Additional species and fossils
In 1966, Peter Paul Vaughn described an assortment of Seymouria skulls from the Organ Rock Shale of Utah. These remains represented a new species, Seymouria sanjuanensis. Fossils of this species are now understood to be more abundant and widespread than those of Seymouria baylorensis. Several more species were later named by Paul E. Olson, although their validity has been more questionable than that of S. sanjuanensis. For example, Seymouria agilis (Olson, 1980), known from a nearly complete skeleton from the Chickasha Formation of Oklahoma, was reassigned by Michel Laurin and Robert R. Reisz to the parareptile Macroleter in 2001. Seymouria grandis, described a year earlier from a braincase found in Texas, has not been re-referred to any other tetrapod, but it remains poorly known. Langston (1963) reported a femur indistinguishable from that of S. baylorensis in Permian sediments at Prince Edward Island on the Eastern coast of Canada. Seymouria-like skeletal remains are also known from the Richards Spur Quarry in Oklahoma, as first described by Sullivan & Reisz (1999).
A block of sediment containing six S. sanjuanensis skeletons was found in the Cutler Formation of New Mexico, as described by Berman, Reisz, & Eberth (1987). In 1993, Berman & Martens reported the first Seymouria remains outside of North America, when they described S. sanjuanensis fossils from the Tambach Formation of Germany. The Tambach Formation has produced S. sanjuanensis fossils of a similar quality to those of the Cutler Formation. For example, in 2000 Berman and his colleagues described the "Tambach Lovers", two complete and fully articulated skeletons of S. sanjuanensis fossilized lying next to each other (though it cannot be determined whether they were a couple killed during courtship). The Tambach Formation has also produced the developmentally youngest known fossils of Seymouria, assisting comparisons to Discosauriscus, which is known primarily from juveniles.
Description
Seymouria individuals were robustly-built animals, with a large head, short neck, stocky limbs, and broad feet. Even the largest specimens were fairly small, only about 2 ft (60 cm) long. The skull was boxy and roughly triangular when seen from above, but it was lower and longer than that of most other seymouriamorphs. The vertebrae had broad, swollen neural arches (the portion above the spinal cord). As a whole the body shape was similar to that of contemporary reptiles and reptile-like tetrapods such as captorhinids, diadectomorphs, and parareptiles. Collectively these types of animals have been referred to as "cotylosaurs" in the past, although they do not form a clade (a natural, relations-based grouping).
Skull
The skull was composed of many smaller plate-like bones. The configuration of skull bones present in Seymouria was very similar to that of far more ancient tetrapods and tetrapod relatives. For example, it retains an intertemporal bone, which is the plesiomorphic ("primitive") condition present in animals like Ventastega and embolomeres. The skull bones were heavily textured, as was typical for ancient amphibians and captorhinid reptiles. In addition, the rear part of the skull had a large incision stretching along its side. This incision is termed an otic notch, and a similar incision in the same general area is common to most Paleozoic amphibians ("labyrinthodonts", as they are sometimes called), but unknown in amniotes. The lower edge of the otic notch was formed by the squamosal bone, while the upper edge was formed by downturned flanges of the supratemporal and tabular bones (known as otic flanges). The tabular also has a second downturned flange visible from the rear of the skull; this flange (known as an occipital flange) connected to the braincase and partially obscured the space between the braincase and the side of the skull. The development of the otic and occipital flanges is greater in Seymouria (particularly S. baylorensis) than in any other seymouriamorph.
The sensory apparatus of the skull also deserves mention for an array of unique features. The orbits (eye sockets) were about midway down the length of the skull, although they were a bit closer to the snout in juveniles. They were more rhomboidal than the circular orbits of other seymouriamorphs, with an acute front edge. Several authors have noted that a few specimens of Seymouria possessed indistinct grooves present in bones surrounding the orbits and in front of the otic notch. These grooves were likely remnants of a lateral line system, a web of pressure-sensing organs useful for aquatic animals, including the presumed larval stage of Seymouria. Many specimens do not retain any remnant of their lateral lines, not even juveniles. Near the middle of the parietal bones was a small hole known as a pineal foramen, which held a sensory organ known as a parietal eye. The pineal foramen is smaller in Seymouria than in other seymouriamorphs.
The stapes, a rod-like bone which lies between the braincase and the wall of the skull, was tapered. It connected the braincase to the upper edge of the otic notch, and likely served as a conduit of vibrations received by a tympanum (eardrum) which presumably lay within the otic notch. In this way it could transmit sound from the outside world to the brain. The configuration of the stapes is intermediate between non-amniote tetrapods and amniotes. On the one hand, its connection to the otic notch is unusual, since true reptiles and other amniotes have lost an otic notch, forcing the tympanum and stapes to shift downwards towards the quadrate bone of the jaw joint. On the other hand, the thin, sensitive structure of Seymouria's stapes is a specialization over most non-amniote tetrapods, which have a thick stapes better suited for reinforcing the skull rather than hearing. The inner ear of Seymouria baylorensis retains a cochlear recess located behind (rather than below) the vestibule, and its anterior semicircular canal was likely encompassed by a cartilaginous (rather than bony) supraoccipital. These features are more primitive than those of true reptiles and synapsids.
The palate (roof of the mouth) had some similarities with both amniote and non-amniote tetrapods. On the one hand, it retained a few isolated large fangs with maze-like internal enamel folding, as is characteristic for "labyrinthodont" amphibians. On the other hand, the vomer bones at the front of the mouth were fairly narrow, and the adjacent choanae (holes leading from the nasal cavity to the mouth) were large and close together, as in amniotes. The palate is generally solid bone, with only vestigial interpterygoid vacuities (a pair of holes adjacent to the midline) separated by a long and thin cultriform process (the front blade of the base of the braincase). Apart from the fangs, the palate is also covered with small denticles radiating out from the rear part of the pterygoid bones. Seymouria has a few amniote-like characteristics of the palate, such as the presence of a prong-like outer rear branch of the pterygoid (formally known as a transverse flange) as well as an epipterygoid bone which is separate from the pterygoid. However, these characteristics have been observed in various non-amniote tetrapods, so they do not signify its status as an amniote.
The lower jaw retained a few plesiomorphic characteristics. For example, the inner edge of the mandible possessed three coronoid bones. The mandible also retained at least one large hole along its inner edge known as a meckelian fenestra, although this feature was only confirmed during a 2005 re-investigation of one of the Cutler Formation specimens. Neither of these traits are the standard in amniotes. The braincase had a mosaic of features in common with various tetrapodomorphs. The system of grooves and nerve openings on the side of the braincase were unusually similar to those of the fish Megalichthys, and the cartilaginous base is another plesiomorphic feature. However, the internal carotid arteries perforate the braincase near the rear of the bone complex, a derived feature similar to amniotes.
Postcranial skeleton
The vertebral column is fairly short, with a total of 24 vertebrae between the hip and skull. The vertebrae are gastrocentrous, meaning that each vertebra has a larger, somewhat spool-shaped component known as a pleurocentrum, and a smaller, wedge-shaped (or crescent-shaped from the front) component known as an intercentrum. The neural arches, which lie above the pleurocentra, are swollen into broad structures with table-like zygapophyses (joint plates) about three times as wide as the pleurocentrum itself. Some vertebrae have neural spines which are partially subdivided down the middle, while others are oval-shaped in horizontal cross-section. The ribs of the dorsal vertebrae extend horizontally and attach to the vertebrae at two places: the intercentrum and the side of the neural arch. The neck is practically absent, only a few vertebrae long. The first neck vertebra, the atlas, had a small intercentrum as well as a reduced pleurocentrum which was only present in mature individuals. Although the atlantal pleurocentrum (when present) was wedged between the intercentrum of the atlas and intercentrum of the succeeding axis vertebra (as in amniotes), the low bone development in this area of the neck contrasts with the characteristic atlas-axis complex of amniotes. In addition, later studies found that the atlas intercentrum was divided into a left and right portion, more like that of amphibian-grade tetrapods. Unlike almost all other Paleozoic tetrapods (amniote or otherwise), Seymouria completely lacks any bony remnants of scales or scutes, not even the thin, circular belly scales of other seymouriamorphs.
The pectoral (shoulder) girdle has several reptile-like features. For example, the scapula and coracoid (bony plates which lie above and below the shoulder socket, respectively) are separate bones, rather than one large shoulder blade. Likewise, the interclavicle was flat and mushroom-shaped, with a long and thin "stem". The humerus (upper arm bone) was shaped like a boxy and slightly twisted L, with large areas for muscle attachment. This form, which has been described as "tetrahedral", is plesiomorphic for tetrapods and contrasts with the slender hourglass-shaped humerus of amniotes. On the other hand, the lower part of the humerus also has a reptile-like adaptation: a hole known as an entepicondylar foramen. The radius was narrowest at mid-length. The ulna is similar, but longer due to the possession of a pronounced olecranon process, as is common in terrestrial tetrapods but rare in amphibious or aquatic ones. The carpus (wrist) has ten bones, and the hand has five stout fingers. The carpal bones are fully developed and closely contact each other, another indication of terrestriality. The phalanges (finger bones) decrease in size towards the tip of the fingers, where they each end in a tiny, rounded segment, without a claw. The phalangeal formula (number of phalanges per finger, from thumb to little finger) is 2-3-4-4-3.
Two sacral (hip) vertebrae were present, though only the first one possessed a large, robust rib which contacted the ilium (upper blade of the hip). Some studies have argued that there was only one sacral vertebra, with the supposed second sacral actually being the first caudal due to having a shorter, more curved rib than the first sacral. Each ilium is low and teardrop-shaped when seen from the side, while the underside of the hip as a whole is formed by a single robust puboischiadic plate, which is rectangular when seen from below. Both the hip and shoulder sockets were directed at 45 degrees below the horizontal. The femur is equally stout as the humerus, and the tibia and fibula are robust, hourglass-shaped bones similar to the radius and ulna. The tarsus (ankle) incorporates 11 bones, intermediate between earlier tetrapods (which have 12) and amniotes (which have 8 or fewer). The five-toed feet are quite similar to the hands, with phalangeal formula 2-3-4-5-3.
There were only about 20 caudal (tail) vertebrae at most. Past the base of the tail, the caudals start to acquire bony spines along their underside, known as chevrons. These begin to appear in the vicinity of the third to sixth caudal, depending on the specimen. Ribs are only present within the first five or six caudals; they are long at the base of the tail but diminish soon afterwards and typically disappear around the same area the chevrons appear.
Differences between species
Seymouria baylorensis and Seymouria sanjuanensis can be distinguished from each other based on several differences in the shape and connections between the different bones of the skull. For example, the downturned flange of bone above the otic notch (sometimes termed the "tabular horn" or "otic process") is much more well-developed in S. baylorensis than in S. sanjuanensis. In the former species, it acquires a triangular shape (when seen from the side) as it extends downwards more extensively towards the rear of the skull. In S. sanjuanensis, the postfrontal bone contacts the parietal bone by means of an obtuse, wedge-like suture, while the connection between the two bones is completely straight in S. baylorensis.
Some authors have argued that the postparietals of S. baylorensis were smaller than those of S. sanjuanensis, but some specimens of S. sanjuanensis (for example, the "Tambach lovers") also had small postparietals. In addition, the "Tambach lovers" have a quadratojugal bone which is more similar to that of S. baylorensis rather than S. sanjuanensis. The combination of features from both species in these specimens may indicate that the two species are part of a continuous lineage, rather than two divergent evolutionary paths. Likewise, some differences relating to the proportions of the rear of the skull may be considered to be an artifact of the fact that most S. sanjuanensis specimens were not fully grown prior to the discovery of the "Tambach lovers", which were adult members of the species.
Nevertheless, several traits are still clearly differentiated between the two species. The lacrimal bone, in front of the eyes, only occupies the front edge of the orbit in S. baylorensis. Conversely, specimens of S. sanjuanensis have a branch of the lacrimal which extends a small distance under the orbit. In S. sanjuanensis, much of the rear edge of the orbit is formed by the chevron-shaped postorbital bone, which is more rectangular in S. baylorensis. The shape of the lacrimal and postorbital of S. sanjuanensis closely corresponds to the condition in other seymouriamorphs, while the condition in S. baylorensis is more unique and derived.
The tooth-bearing maxilla bone, which forms the side of the snout, is also distinctively unique in S. baylorensis. In S. sanjuanensis, the maxilla was low, with many sharp, closely spaced teeth extending along its length. This condition is similar to other seymouriamorphs. However, S. baylorensis has a taller snout, and its teeth are generally much larger, less numerous, and less homogenous in size. The palate is generally similar between the two species, although the ectopterygoids are more triangular in S. baylorensis and rectangular in S. sanjuanensis.
Paleobiology
Lifestyle
Romer (1928) was among the first authors to discuss the biological implications of Seymouria's skeleton. He argued that the robust limbs and wide-set body supported the idea that it was a strong, terrestrial animal with a sprawling gait. However, he also noted that Permian trackways generally support the idea that terrestrial tetrapods from this time period were not belly-draggers, but instead were strong enough to keep their bodies off of the ground. As with other paleontologists around the time, Romer assumed that Seymouria had a reptilian (or amniote) mode of reproduction, with eggs laid on dry land and protected from the elements by an amnion membrane.
White (1939) elaborated on biological implications. He noted that the presence of an otic notch reduces jaw strength by lowering the amount of surface area jaw muscles can attach to within the cranium. In addition, the skull would have been more fragile due to the presence of such a large incision. As a whole, he found it unlikely that Seymouria was capable of tackling large, active prey. Nevertheless, the sites for muscle attachment on the palate were more well-developed than those of contemporaneous amphibians. White extrapolated that Seymouria was a mostly carnivorous generalist and omnivore, feeding on invertebrates, small fish, and perhaps even some plant material. It may have even been cannibalistic according to his reckoning.
White also drew attention to the unusual swollen vertebrae, which would have facilitated lateral (side-to-side) movement but prohibited any torsion (twisting) of the backbone. This would have been beneficial, since Seymouria had low-slung limbs and a wide, top-heavy body that would have otherwise been vulnerable to torsion when it was walking. This may also explain the presence of this trait in captorhinids, diadectomorphs, and other "cotylosaurs". Perhaps swollen vertebrae were an interim strategy to prevent torsion, which would later be supplanted by strong hip muscles in later reptiles. The rather undeveloped hip muscles of Seymouria are in line with this hypothesis. Nevertheless, these vertebrae were inefficient at defending against torsion at any speed faster than a brisk walk, so Seymouria was probably not a quick-moving animal.
Although White considered Seymouria to be quite competent on land, he also discussed a few other lifestyles. He supposed that Seymouria was also a good swimmer, since he (erroneously) estimated that the animal had a deep and powerful tail similar to that of modern crocodilians. However, he also noted that it would have been vulnerable to semiaquatic or aquatic predators, and that Seymouria fossils were more common in terrestrial deposits as a result of its habitat preferences. Berman et al. (2000) supported this hypothesis, as the Tambach Formation preserved Seymouria fossils while also completely lacking aquatic animals. They also pointed out the well-developed wrist and ankle bones of the "Tambach lovers" as supportive of terrestrial affinities. Despite the strong musculature of the forelimbs, Romer (1928) and White (1939) found little evidence for burrowing adaptations in Seymouria.
Sexual dimorphism
Some authors have argued in favor of sexual dimorphism existing in Seymouria, but others are unconvinced by this hypothesis. White (1939) argued that some specimens of Seymouria baylorensis had chevrons (bony spines on the underside of the tail vertebrae) which first appeared on the third tail vertebra, while other specimens had them first appear on the sixth. He postulated that the later appearance of the chevrons in some specimens was indicative that they were males in need of more space to store their internal genitalia. This type of sexual differentiation has been reported in both turtles and crocodilians. Based on this, he also supported the idea that Seymouria females gave birth to large-yolked eggs on land, as with turtles and crocodilians. Vaughn (1966) later found a correlation between chevron acquisition and certain skull proportions in Seymouria sanjuanensis, and proposed that they too were examples of sexual dimorphism.
However, Berman, Reisz, & Eberth (1987) criticized the methodologies of White (1939) and Vaughn (1966). They argued that White's observations were probably unrelated to the sex of the animals. This was supported by the fact that some of the Cutler Formation specimens had chevrons which first appeared on their fifth tail vertebra. Although it was possible that genital size was variable among males to the extent of impacting the skeleton, the more likely explanation was that the differences White had observed were caused by individual skeletal variation, evolutionary divergence, or some other factor unrelated to sexual dimorphism. Likewise, they agreed that skull proportions supported Vaughn's (1966) proposal that dimorphism was present in Seymouria fossils, though they disagreed with how he linked it to sex using a fossil which was considered "female" under White's criteria. The discovery of fossilized larval seymouriamorphs has shown that Seymouria likely had an aquatic larval stage, debunking earlier hypotheses that Seymouria laid eggs on land.
Histology and development
Histological evidence from specimens found in Richards Spurs, Oklahoma has provided additional information on Seymouria's biology. A femur was found to have an internal structure characterized by a lamellar matrix pierced by numerous plexiform canals. Rest lines of slow growth are indistinct and closely spaced, but there is no evidence that growth ceased at any time during bone development. As in most lissamphibians, the medullary cavity is open and has a small amount of spongiosa bone. The development of spongiosa bone is slightly higher than that of Acheloma (a terrestrial amphibian), but is much less extensive than in aquatic amphibians such as Rhinesuchus and Trimerorhachis. Seymouria's vertebrae are more robust in shape compared to those of Discosauriscus, and have a low amount of cartilage despite a high amount of porosity. Seymouria is inferred to have undergone metamorphosis very early in life, likely due to environmental stresses from fluctuating wet and dry seasons.
| Biology and health sciences | Reptiliomorphs | Animals |
4677186 | https://en.wikipedia.org/wiki/Wien%20approximation | Wien approximation | Wien's approximation (also sometimes called Wien's law or the Wien distribution law) is a law of physics used to describe the spectrum of thermal radiation (frequently called the blackbody function). This law was first derived by Wilhelm Wien in 1896. The equation does accurately describe the short-wavelength (high-frequency) spectrum of thermal emission from objects, but it fails to accurately fit the experimental data for long-wavelength (low-frequency) emission.
Details
Wien derived his law from thermodynamic arguments, several years before Planck introduced the quantization of radiation.
Wien's original paper did not contain the Planck constant. In this paper, Wien took the wavelength of black-body radiation and combined it with the Maxwell–Boltzmann energy distribution for atoms. The exponential factor was created by the use of Euler's number e raised to a power proportional to the negative reciprocal of the wavelength multiplied by the temperature. Fundamental constants were later introduced by Max Planck.
The law may be written as

I(\nu, T) = \frac{2 h \nu^3}{c^2} \, e^{-\frac{h \nu}{k_\mathrm{B} T}}

(note the simple exponential frequency dependence of this approximation) or, by introducing natural Planck units,

I(\nu, T) = 2 \nu^3 e^{-\nu / T}

where I(ν, T) is the amount of energy per unit surface area per unit time per unit solid angle per unit frequency emitted at a frequency ν, T is the absolute temperature of the black body, h is the Planck constant, c is the speed of light, and k_B is the Boltzmann constant.
This equation may also be written as

I(\lambda, T) = \frac{2 h c^2}{\lambda^5} \, e^{-\frac{h c}{\lambda k_\mathrm{B} T}}

where I(λ, T) is the amount of energy per unit surface area per unit time per unit solid angle per unit wavelength emitted at a wavelength λ. Wien acknowledges Friedrich Paschen in his original paper as having supplied him with the same formula based on Paschen's experimental observations.
The peak value of this curve, as determined by setting the derivative of the equation equal to zero and solving, occurs at a wavelength

\lambda_{\max} = \frac{h c}{5 k_\mathrm{B} T}

and frequency

\nu_{\max} = \frac{3 k_\mathrm{B} T}{h}.
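These peak positions follow directly from the differentiation just described; a brief check, using the same symbols as above:

\frac{d}{d\lambda}\left[\lambda^{-5} e^{-h c / (\lambda k_\mathrm{B} T)}\right] = 0 \;\Rightarrow\; \frac{h c}{\lambda_{\max} k_\mathrm{B} T} = 5 \;\Rightarrow\; \lambda_{\max} = \frac{h c}{5 k_\mathrm{B} T}

\frac{d}{d\nu}\left[\nu^{3} e^{-h \nu / (k_\mathrm{B} T)}\right] = 0 \;\Rightarrow\; \frac{h \nu_{\max}}{k_\mathrm{B} T} = 3 \;\Rightarrow\; \nu_{\max} = \frac{3 k_\mathrm{B} T}{h}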
Relation to Planck's law
The Wien approximation was originally proposed as a description of the complete spectrum of thermal radiation, although it failed to accurately describe long-wavelength (low-frequency) emission. However, it was soon superseded by Planck's law, which accurately describes the full spectrum, derived by treating the radiation as a photon gas and accordingly applying Bose–Einstein in place of Maxwell–Boltzmann statistics. Planck's law may be given as

I(\nu, T) = \frac{2 h \nu^3}{c^2} \, \frac{1}{e^{h \nu / (k_\mathrm{B} T)} - 1}.

The Wien approximation may be derived from Planck's law by assuming h\nu \gg k_\mathrm{B} T. When this is true, then

\frac{1}{e^{h \nu / (k_\mathrm{B} T)} - 1} \approx e^{-h \nu / (k_\mathrm{B} T)}

and so the Wien approximation gets ever closer to Planck's law as the frequency increases.
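To make the size of that error concrete, a minimal numerical sketch in plain Python (the function names and the sample temperature are illustrative, not taken from this article) compares the two expressions; the ratio of the Wien form to the Planck form is exactly 1 − e^(−hν/kT), so it approaches 1 as hν/kT grows:

import math

h = 6.62607015e-34   # Planck constant, J*s
kB = 1.380649e-23    # Boltzmann constant, J/K
c = 2.99792458e8     # speed of light, m/s

def planck(nu, T):
    # spectral radiance per unit frequency from Planck's law
    return (2.0 * h * nu**3 / c**2) / math.expm1(h * nu / (kB * T))

def wien(nu, T):
    # Wien approximation, valid when h*nu >> kB*T
    return (2.0 * h * nu**3 / c**2) * math.exp(-h * nu / (kB * T))

T = 5000.0                      # an illustrative temperature in kelvin
for x in (0.5, 1.0, 3.0, 10.0): # x = h*nu / (kB*T)
    nu = x * kB * T / h
    print(x, wien(nu, T) / planck(nu, T))
# prints ratios of roughly 0.39, 0.63, 0.95 and 0.99995: the approximation is
# poor at low frequency but excellent once h*nu is several times kB*T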
Other approximations of thermal radiation
The Rayleigh–Jeans law developed by Lord Rayleigh may be used to accurately describe the long wavelength spectrum of thermal radiation but fails to describe the short wavelength spectrum of thermal emission.
| Physical sciences | Thermodynamics | Physics |
2525843 | https://en.wikipedia.org/wiki/Cardiac%20cycle | Cardiac cycle | The cardiac cycle is the performance of the human heart from the beginning of one heartbeat to the beginning of the next. It consists of two periods: one during which the heart muscle relaxes and refills with blood, called diastole, following a period of robust contraction and pumping of blood, called systole. After emptying, the heart relaxes and expands to receive another influx of blood returning from the lungs and other systems of the body, before again contracting to pump blood to the lungs and those systems.
Assuming a healthy heart and a typical rate of 70 to 75 beats per minute, each cardiac cycle, or heartbeat, takes about 0.8 seconds to complete. The duration of the cardiac cycle is inversely proportional to the heart rate.
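As a small illustration of that inverse relationship (a minimal sketch in plain Python; the heart rates chosen are arbitrary examples, not values from this article):

def cycle_length_s(heart_rate_bpm):
    # one cardiac cycle lasts 60 seconds divided by the heart rate in beats per minute
    return 60.0 / heart_rate_bpm

for rate in (60, 75, 100, 150):
    print(rate, "bpm ->", round(cycle_length_s(rate), 2), "s per cycle")
# 75 bpm gives 0.8 s per cycle, matching the figure above; doubling the rate halves the duration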
Description
There are two atrial and two ventricle chambers of the heart; they are paired as the left heart and the right heart—that is, the left atrium with the left ventricle, the right atrium with the right ventricle—and they work in concert to repeat the cardiac cycle continuously (see cycle diagram at right margin). At the start of the cycle, during ventricular diastole–early, the heart relaxes and expands while receiving blood into both ventricles through both atria; then, near the end of ventricular diastole–late, the two atria begin to contract (atrial systole), and each atrium pumps blood into the ventricle below it. During ventricular systole the ventricles contract and vigorously pulse (or eject) two separated blood supplies from the heart—one to the lungs and one to all other body organs and systems—while the two atria relax (atrial diastole). This precise coordination ensures that blood is efficiently collected and circulated throughout the body.
The mitral and tricuspid valves, also known as the atrioventricular, or AV valves, open during ventricular diastole to permit filling. Late in the filling period the atria begin to contract (atrial systole) forcing a final crop of blood into the ventricles under pressure—see cycle diagram. Then, prompted by electrical signals from the sinoatrial node, the ventricles start contracting (ventricular systole), and as back-pressure against them increases the AV valves are forced to close, which stops the blood volumes in the ventricles from flowing in or out; this is known as the isovolumic contraction stage.
Due to the contractions of the systole, pressures in the ventricles rise quickly, exceeding the pressures in the trunks of the aorta and the pulmonary arteries and causing the requisite valves (the aortic and pulmonary valves) to open—which results in separated blood volumes being ejected from the two ventricles. This is the ejection stage of the cardiac cycle; it is depicted (see circular diagram) as the ventricular systole–first phase followed by the ventricular systole–second phase. After ventricular pressures fall below their peak(s) and below those in the trunks of the aorta and pulmonary arteries, the aortic and pulmonary valves close again—see, at the right margin, Wiggers diagram, blue-line tracing.
Next is the isovolumic relaxation, during which pressure within the ventricles begins to fall significantly, and thereafter the atria begin refilling as blood returns to flow into the right atrium (from the venae cavae) and into the left atrium (from the pulmonary veins). As the ventricles begin to relax, the mitral and tricuspid valves open again, and the completed cycle returns to ventricular diastole and a new "Start" of the cardiac cycle.
Throughout the cardiac cycle, blood pressure increases and decreases. The movements of cardiac muscle are coordinated by a series of electrical impulses produced by specialized pacemaker cells found within the sinoatrial node and the atrioventricular node. Cardiac muscle is composed of myocytes which initiate their internal contractions without receiving signals from external nerves—with the exception of changes in the heart rate due to metabolic demand.
In an electrocardiogram, electrical systole begins at the P wave deflection of a steady signal, which initiates atrial systole and the contractions that follow.
Cardiac cycle and Wiggers diagram
The cardiac cycle involves four major stages of activity: 1) "isovolumic relaxation", 2) inflow, 3) "isovolumic contraction", 4) "ejection". Stages 1 and 2 together—"isovolumic relaxation" plus inflow (equals "rapid inflow", "diastasis", and "atrial systole")—comprise the ventricular diastole period, including atrial systole, during which blood returning to the heart flows through the atria into the relaxed ventricles. Stages 3 and 4 together—"isovolumic contraction" plus "ejection"—are the ventricular systole period, which is the simultaneous pumping of separate blood supplies from the two ventricles, one to the pulmonary artery and one to the aorta. Notably, near the end of the diastole, the atria begin contracting, then pump blood into the ventricles; this pressurized delivery during ventricular relaxation (ventricular diastole) is called the atrial systole.
The closure of the aortic valve causes a rapid change in pressure in the aorta called the incisura. This short sharp change in pressure is rapidly attenuated down the arterial tree. The pulse wave form is also reflected from branches in the arterial tree and gives rise to a dicrotic notch in main arteries. The summation of the reflected pulse wave and the systolic wave may increase pulse pressure and help tissue perfusion. With increasing age, the aorta stiffens and can become less elastic which will reduce peak pulse in the periphery.
Physiology
The heart is a four-chambered organ consisting of right and left halves, called the right heart and the left heart. The upper two chambers, the left and right atria, are entry points into the heart for blood-flow returning from the circulatory system, while the two lower chambers, the left and right ventricles, perform the contractions that eject the blood from the heart to flow through the circulatory system. Circulation is split into the pulmonary circulation, in which the right ventricle pumps oxygen-depleted blood to the lungs through the pulmonary trunk and arteries, and the systemic circulation, in which the left ventricle pumps newly oxygenated blood throughout the body via the aorta and all other arteries.
Heart electrical conduction system
In a healthy heart all activities and rests during each individual cardiac cycle, or heartbeat, are initiated and orchestrated by signals of the heart's electrical conduction system, which is the "wiring" of the heart that carries electrical impulses throughout the body of cardiomyocytes, the specialized muscle cells of the heart. These impulses ultimately stimulate heart muscle to contract and thereby to eject blood from the ventricles into the arteries and the cardiac circulatory system; and they provide a system of intricately timed and persistent signaling that controls the rhythmic beating of the heart muscle cells, especially the complex impulse-generation and muscle contractions in the atrial chambers.
The rhythmic sequence (or sinus rhythm) of this signaling across the heart is coordinated by two groups of specialized cells, the sinoatrial (SA) node, which is situated in the upper wall of the right atrium, and the atrioventricular (AV) node located in the lower wall of the right heart between the atrium and ventricle. The sinoatrial node, often known as the cardiac pacemaker, is the point of origin for producing a wave of electrical impulses that stimulates atrial contraction by creating an action potential across myocardium cells.
Impulses of the wave are delayed upon reaching the AV node, which acts as a gate to slow and to coordinate the electrical current before it is conducted below the atria and through the circuits known as the bundle of His and the Purkinje fibers—all which stimulate contractions of both ventricles. The programmed delay at the AV node also provides time for blood volume to flow through the atria and fill the ventricular chambers—just before the return of the systole (contractions), ejecting the new blood volume and completing the cardiac cycle. (See Wiggers diagram: "Ventricular volume" tracing (red), at "Systole" panel.)
Diastole and systole in the cardiac cycle
Cardiac diastole is the period of the cardiac cycle when, after contraction, the heart relaxes and expands while refilling with blood returning from the circulatory system. Both atrioventricular (AV) valves open to facilitate the 'unpressurized' flow of blood directly through the atria into both ventricles, where it is collected for the next contraction. This period is best viewed at the middle of the Wiggers diagram—see the panel labeled "diastole". Here it shows pressure levels in both atria and ventricles as near-zero during most of the diastole. (See gray and light-blue tracings labeled "atrial pressure" and "ventricular pressure"—Wiggers diagram.) Here also may be seen the red-line tracing of "Ventricular volume", showing an increase in blood volume from the low plateau of the "isovolumic relaxation" stage to the maximum volume occurring in the "atrial systole" sub-stage.
Atrial systole
Atrial systole is the contracting of cardiac muscle cells of both atria following electrical stimulation and conduction of electrical currents across the atrial chambers (see above, Physiology). While nominally a component of the heart's sequence of systolic contraction and ejection, atrial systole actually performs the vital role of completing the diastole, which is to finalize the filling of both ventricles with blood while they are relaxed and expanded for that purpose. Atrial systole overlaps the end of the diastole, occurring in the sub-period known as ventricular diastole–late (see cycle diagram). At this point, the atrial systole applies contraction pressure to 'topping-off' the blood volumes sent to both ventricles; this atrial contraction closes the diastole immediately before the heart again begins contracting and ejecting blood from the ventricles (ventricular systole) to the aorta and arteries.
Ventricular systole
Ventricular systole is the contractions, following electrical stimulations, of the ventricular syncytium of cardiac muscle cells in the left and right ventricles. Contractions in the right ventricle provide pulmonary circulation by pulsing oxygen-depleted blood through the pulmonary valve then through the pulmonary arteries to the lungs. Simultaneously, contractions of the left ventricular systole provide systemic circulation of oxygenated blood to all body systems by pumping blood through the aortic valve, the aorta, and all the arteries. (Blood pressure is routinely measured in the larger arteries off the left ventricle during the left ventricular systole).
| Biology and health sciences | Basics | Biology |
2526857 | https://en.wikipedia.org/wiki/Isotopes%20of%20uranium | Isotopes of uranium | Uranium (U) is a naturally occurring radioactive element (radioelement) with no stable isotopes. It has two primordial isotopes, uranium-238 and uranium-235, that have long half-lives and are found in appreciable quantity in Earth's crust. The decay product uranium-234 is also found. Other isotopes such as uranium-233 have been produced in breeder reactors. In addition to isotopes found in nature or nuclear reactors, many isotopes with far shorter half-lives have been produced, ranging from U to U (except for U). The standard atomic weight of natural uranium is .
Natural uranium consists of three main isotopes, 238U (99.2739–99.2752% natural abundance), 235U (0.7198–0.7202%), and 234U (0.0050–0.0059%). All three isotopes are radioactive (i.e., they are radioisotopes), and the most abundant and stable is uranium-238, with a half-life of 4.468×10^9 years (about the age of the Earth).
Uranium-238 is an alpha emitter, decaying through the 18-member uranium series into lead-206. The decay series of uranium-235 (historically called actino-uranium) has 15 members and ends in lead-207. The constant rates of decay in these series makes comparison of the ratios of parent-to-daughter elements useful in radiometric dating. Uranium-233 is made from thorium-232 by neutron bombardment.
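As a rough illustration of how such a parent-to-daughter ratio translates into an age (a minimal sketch in plain Python, assuming a closed system with no initial lead-206 and ignoring the short-lived intermediate members of the series; the 4.468-billion-year half-life of uranium-238 is the value quoted later in this article):

import math

HALF_LIFE_U238_YEARS = 4.468e9                        # half-life of uranium-238
DECAY_CONSTANT = math.log(2) / HALF_LIFE_U238_YEARS   # decay constant per year

def age_years(daughter_to_parent_ratio):
    # ratio of lead-206 atoms to surviving uranium-238 atoms in the sample
    return math.log(1.0 + daughter_to_parent_ratio) / DECAY_CONSTANT

print(round(age_years(0.1) / 1e9, 2))   # ~0.61 billion years
print(round(age_years(1.0) / 1e9, 2))   # ~4.47 billion years: equal parent and daughter after one half-life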
Uranium-235 is important for both nuclear reactors (energy production) and nuclear weapons because it is the only isotope existing in nature to any appreciable extent that is fissile in response to thermal neutrons, i.e., thermal neutron capture has a high probability of inducing fission. A chain reaction can be sustained with a large enough (critical) mass of uranium-235. Uranium-238 is also important because it is fertile: it absorbs neutrons to produce a radioactive isotope that decays into plutonium-239, which also is fissile.
List of isotopes
(Table of uranium isotopes omitted: the wikitable markup listing each nuclide from 214U through 242U and its isomers, with historic name, proton and neutron numbers, isotopic mass, half-life, decay modes, daughter nuclides, spin/parity, and natural abundance, did not survive extraction intact.)
Actinides vs fission products
Uranium-214
Uranium-214 is the lightest known isotope of uranium. It was discovered at the Spectrometer for Heavy Atoms and Nuclear Structure (SHANS) at the Heavy Ion Research Facility in Lanzhou, China in 2021, produced by firing argon-36 at tungsten-182. It alpha-decays with a half-life of .
Uranium-232
Uranium-232 has a half-life of 68.9 years and is a side product in the thorium cycle. It has been cited as an obstacle to nuclear proliferation using U, because the intense gamma radiation from Tl (a daughter of U, produced relatively quickly) makes U contaminated with it more difficult to handle. Uranium-232 is a rare example of an even-even isotope that is fissile with both thermal and fast neutrons.
Uranium-233
Uranium-233 is a fissile isotope that is bred from thorium-232 as part of the thorium fuel cycle. U was investigated for use in nuclear weapons and as a reactor fuel. It was occasionally tested but never deployed in nuclear weapons and has not been used commercially as a nuclear fuel. It has been used successfully in experimental nuclear reactors and has been proposed for much wider use as a nuclear fuel. It has a half-life of around 160,000 years.
Uranium-233 is produced by neutron irradiation of thorium-232. When thorium-232 absorbs a neutron, it becomes thorium-233, which has a half-life of only 22 minutes. Thorium-233 beta decays into protactinium-233. Protactinium-233 has a half-life of 27 days and beta decays into uranium-233; some proposed molten salt reactor designs attempt to physically isolate the protactinium from further neutron capture before beta decay can occur.
Uranium-233 usually fissions on neutron absorption but sometimes retains the neutron, becoming uranium-234. The capture-to-fission ratio is smaller than that of the other two major fissile fuels, uranium-235 and plutonium-239; it is also lower than that of short-lived plutonium-241, though it is bested by the very difficult-to-produce neptunium-236.
Uranium-234
U occurs in natural uranium as an indirect decay product of uranium-238, but makes up only 55 parts per million of the uranium because its half-life of 245,500 years is only about 1/18,000 that of U. The path of production of U is this: U alpha decays to thorium-234. Next, with a short half-life, Th beta decays to protactinium-234. Finally, Pa beta decays to U.
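The 55 ppm figure follows from secular equilibrium: in an old, undisturbed sample each member of the decay chain is destroyed as fast as it is produced, so the atom ratio of uranium-234 to uranium-238 settles at the ratio of their half-lives. A minimal check in plain Python, using the half-lives quoted above:

half_life_u234_years = 2.455e5
half_life_u238_years = 4.468e9

ratio = half_life_u234_years / half_life_u238_years
print(ratio)                # about 5.5e-5
print(round(ratio * 1e6))   # about 55 atoms of uranium-234 per million atoms of uranium-238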
U alpha decays to thorium-230, except for a small percentage of nuclei that undergo spontaneous fission.
Extraction of small amounts of U from natural uranium could be done using isotope separation, similar to normal uranium-enrichment. However, there is no real demand in chemistry, physics, or engineering for isolating U. Very small pure samples of U can be extracted via the chemical ion-exchange process, from samples of plutonium-238 that have aged somewhat to allow some alpha decay to U.
Enriched uranium contains more 234U than natural uranium as a byproduct of the uranium enrichment process aimed at obtaining uranium-235, a process which concentrates the lighter 234U even more strongly than it does 235U. The increased percentage of 234U in enriched natural uranium is acceptable in current nuclear reactors, but (re-enriched) reprocessed uranium might contain even higher fractions of 234U, which is undesirable. This is because 234U is not fissile, and tends to absorb slow neutrons in a nuclear reactor—becoming 235U.
234U has a neutron capture cross section of about 100 barns for thermal neutrons, and about 700 barns for its resonance integral—the average over neutrons having various intermediate energies. In a nuclear reactor, non-fissile isotopes capture neutrons, breeding fissile isotopes. 234U is converted to 235U more easily, and therefore at a greater rate, than uranium-238 is to plutonium-239 (via neptunium-239), because uranium-238 has a much smaller neutron-capture cross section of just 2.7 barns.
Uranium-235
Uranium-235 makes up about 0.72% of natural uranium. Unlike the predominant isotope uranium-238, it is fissile, i.e., it can sustain a fission chain reaction. It is the only fissile isotope that is a primordial nuclide or found in significant quantity in nature.
Uranium-235 has a half-life of 703.8 million years. It was discovered in 1935 by Arthur Jeffrey Dempster. Its (fission) nuclear cross section for slow thermal neutrons is about 504.81 barns. For fast neutrons it is on the order of 1 barn. At thermal energy levels, about 5 of 6 neutron absorptions result in fission and 1 of 6 result in neutron capture forming uranium-236. The fission-to-capture ratio improves for faster neutrons.
Uranium-236
Uranium-236 has a half-life of about 23 million years; and is neither fissile with thermal neutrons, nor very good fertile material, but is generally considered a nuisance and long-lived radioactive waste. It is found in spent nuclear fuel and in the reprocessed uranium made from spent nuclear fuel.
Uranium-237
Uranium-237 has a half-life of about 6.75 days. It decays into neptunium-237 by beta decay. It was discovered by Japanese physicist Yoshio Nishina in 1940, who in a near-miss discovery, inferred the creation of element 93, but was unable to isolate the then-unknown element or measure its decay properties.
Uranium-238
Uranium-238 (U or U-238) is the most common isotope of uranium in nature. It is not fissile, but is fertile: it can capture a slow neutron and after two beta decays become fissile plutonium-239. Uranium-238 is fissionable by fast neutrons, but cannot support a chain reaction because inelastic scattering reduces neutron energy below the range where fast fission of one or more next-generation nuclei is probable. Doppler broadening of U's neutron absorption resonances, increasing absorption as fuel temperature increases, is also an essential negative feedback mechanism for reactor control.
About 99.284% of natural uranium is uranium-238, which has a half-life of 1.41×10^17 seconds (4.468×10^9 years). Depleted uranium has an even higher concentration of 238U, and even low-enriched uranium (LEU) is still mostly 238U. Reprocessed uranium is also mainly 238U, with about as much uranium-235 as natural uranium, a comparable proportion of uranium-236, and much smaller amounts of other isotopes of uranium such as uranium-234, uranium-233, and uranium-232.
Uranium-239
Uranium-239 is usually produced by exposing U to neutron radiation in a nuclear reactor. U has a half-life of about 23.45 minutes and beta decays into neptunium-239, with a total decay energy of about 1.29 MeV. The most common gamma decay at 74.660 keV accounts for the difference in the two major channels of beta emission energy, at 1.28 and 1.21 MeV.
Np then, with a half-life of about 2.356 days, beta-decays to plutonium-239.
Uranium-241
In 2023, in a paper published in Physical Review Letters, a group of researchers based in Korea reported that they had found uranium-241 in an experiment involving U+Pt multinucleon transfer reactions.
Its half-life is about 40 minutes.
| Physical sciences | Actinides | Chemistry |
2527111 | https://en.wikipedia.org/wiki/Isotopes%20of%20oxygen | Isotopes of oxygen | There are three known stable isotopes of oxygen (8O): 16O, 17O, and 18O.
Radioactive isotopes ranging from 11O to 28O have also been characterized, all short-lived. The longest-lived radioisotope is 15O, with a half-life of about 122 seconds, while the shortest-lived known isotope is the unbound 11O, though half-lives have not been measured for the unbound heavy isotopes 27O and 28O.
List of isotopes
(Table of oxygen isotopes omitted: the wikitable markup listing each nuclide from 11O through 28O with proton and neutron numbers, mass, half-life, decay modes, daughter nuclides, spin/parity, and natural abundance did not survive extraction intact.)
Stable isotopes
Natural oxygen is made of three stable isotopes, 16O, 17O, and 18O, with 16O being the most abundant (99.762% natural abundance). Depending on the terrestrial source, the standard atomic weight varies within the range of [15.99903, 15.99977] (the conventional value is 15.999).
16O has high relative and absolute abundance because it is a principal product of stellar evolution and because it is a primary isotope, meaning it can be made by stars that were initially hydrogen only. Most 16O is synthesized at the end of the helium fusion process in stars; the triple-alpha process creates 12C, which captures an additional 4He nucleus to produce 16O. The neon burning process creates additional 16O.
Both 17O and 18O are secondary isotopes, meaning their synthesis requires seed nuclei. 17O is primarily made by burning hydrogen into helium in the CNO cycle, making it a common isotope in the hydrogen burning zones of stars. Most 18O is produced when 14N (made abundant from CNO burning) captures a 4He nucleus, becoming 18F. This quickly (half-life around 110 minutes) beta decays to 18O, making that isotope common in the helium-rich zones of stars. Temperatures on the order of 10^9 kelvins are needed to fuse oxygen into sulfur.
An atomic mass of 16 was assigned to oxygen prior to the definition of the unified atomic mass unit based on 12C. Since physicists referred to 16O only, while chemists meant the natural mix of isotopes, this led to slightly different mass scales.
Applications of various isotopes
Measurements of the 18O/16O ratio are often used to interpret changes in paleoclimate. Oxygen in Earth's air is mostly 16O, with small amounts of 18O and 17O. Water molecules containing the lighter isotope are slightly more likely to evaporate and less likely to fall as precipitation, so Earth's freshwater and polar ice have slightly less 18O than air or seawater. This disparity allows analysis of temperature patterns via historic ice cores.
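Such measurements are usually reported as a delta value, the per-mil deviation of a sample's 18O/16O ratio from a reference standard such as VSMOW; a minimal sketch in plain Python (the ice-core ratio below is a made-up illustrative number, while the VSMOW ratio is the commonly quoted reference value):

def delta_18O_permil(ratio_sample, ratio_standard):
    # per-mil deviation of the sample's 18O/16O ratio from the standard
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

R_VSMOW = 2005.2e-6   # 18O/16O ratio of the VSMOW reference water
R_ICE = 1935.0e-6     # hypothetical polar-ice sample, depleted in 18O
print(round(delta_18O_permil(R_ICE, R_VSMOW), 1))   # about -35.0 per mil, typical of polar ice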
Solid samples (organic and inorganic) for oxygen isotopic ratios are usually stored in silver cups and measured with pyrolysis and mass spectrometry. Researchers need to avoid improper or prolonged storage of the samples for accurate measurements.
Due to natural oxygen being mostly 16O, samples enriched with the other stable isotopes can be used for isotope labeling. For example, it was proven by isotope tracing experiments that the oxygen released in photosynthesis originates in the water consumed, rather than in the also-consumed carbon dioxide. The oxygen contained in the carbon dioxide is in turn used to make up the sugars formed by photosynthesis.
In heavy-water nuclear reactors the neutron moderator should preferably be low in 17O and 18O due to their higher neutron absorption cross sections compared to 16O. While this effect can also be observed in light-water reactors, ordinary hydrogen (protium) has a higher absorption cross section than any stable isotope of oxygen, and its number density in water is twice that of oxygen, so the effect is negligible. As some methods of isotope separation enrich not only heavier isotopes of hydrogen but also heavier isotopes of oxygen when producing heavy water, the concentration of 17O and 18O can be measurably higher. The 17O(n,α) reaction is a further undesirable result of an elevated concentration of heavier isotopes of oxygen. Therefore, facilities which remove tritium from heavy water used in nuclear reactors often also remove or at least reduce the amount of heavier isotopes of oxygen.
Oxygen isotopes are also used to trace the composition and temperature of the ocean waters that seafood comes from.
Radioisotopes
Thirteen radioisotopes have been characterized; the most stable are 15O, with a half-life of about 122 seconds, and 14O, with a half-life of about 71 seconds. All remaining radioisotopes have half-lives of less than a minute, and most have half-lives of less than 0.1 s. The four heaviest known isotopes (up to 28O) decay by neutron emission, ultimately reaching 24O. This isotope, along with 28Ne, has been used in models of reactions in the crust of neutron stars. The most common decay mode for isotopes lighter than the stable isotopes is β+ decay to nitrogen, and the most common mode for the heavier isotopes is β− decay to fluorine.
Oxygen-13
Oxygen-13 is an unstable isotope, with 8 protons and 5 neutrons. It has spin 3/2−, and half-life . Its atomic mass is . It decays to nitrogen-13 by electron capture, with a decay energy of . Its parent nuclide is fluorine-14.
Oxygen-14
Oxygen-14 is the second most stable radioisotope. Oxygen-14 ion beams are of interest to researchers of proton-rich nuclei; for example, one early experiment at the Facility for Rare Isotope Beams in East Lansing, Michigan, used a 14O beam to study the beta decay transition of this isotope to 14N.
Oxygen-15
Oxygen-15 is a radioisotope, often used in positron emission tomography (PET). It can be used in, among other things, water for PET myocardial perfusion imaging and for brain imaging. It has an atomic mass of about 15.003 u and a half-life of about 122 seconds. It is produced through deuteron bombardment of nitrogen-14 using a cyclotron.
14N + d → 15O + n
Oxygen-15 and nitrogen-13 are produced in air when gamma rays (for example from lightning) knock neutrons out of 16O and 14N:
16O + γ → 15O + n
14N + γ → 13N + n
15O decays to 15N, emitting a positron. The positron quickly annihilates with an electron, producing two gamma rays of about 511 keV. After a lightning bolt, this gamma radiation dies down with a half-life of 2 minutes, but these low-energy gamma rays travel on average only about 90 metres through the air. Together with the rays produced by positrons from nitrogen-13, they may only be detected for a minute or so as the "cloud" of 15O and 13N floats by, carried by the wind.
Oxygen-20
Oxygen-20 has a half-life of and decays by β− decay to 20F. It is one of the known cluster decay ejected particles, being emitted in the decay of 228Th with a branching ratio of about .
| Physical sciences | Group 16 | Chemistry |
2527114 | https://en.wikipedia.org/wiki/Isotopes%20of%20nitrogen | Isotopes of nitrogen | Natural nitrogen (7N) consists of two stable isotopes: the vast majority (99.6%) of naturally occurring nitrogen is nitrogen-14, with the remainder being nitrogen-15. Thirteen radioisotopes are also known, with atomic masses ranging from 9 to 23, along with three nuclear isomers. All of these radioisotopes are short-lived, the longest-lived being nitrogen-13 with a half-life of . All of the others have half-lives below 7.15 seconds, with most of these being below 620 milliseconds. Most of the isotopes with atomic mass numbers below 14 decay to isotopes of carbon, while most of the isotopes with masses above 15 decay to isotopes of oxygen. The shortest-lived known isotope is nitrogen-10, with a half-life of , though the half-life of nitrogen-9 has not been measured exactly.
List of isotopes
(Table of nitrogen isotopes omitted: the wikitable markup listing each nuclide from 9N through 23N and its isomers, with proton and neutron numbers, mass, half-life, decay modes, daughter nuclides, spin/parity, and natural abundance did not survive extraction intact.)
Nitrogen-13
Nitrogen-13 and oxygen-15 are produced in the atmosphere when gamma rays (for example from lightning) knock neutrons out of nitrogen-14 and oxygen-16:
14N + γ → 13N + n
16O + γ → 15O + n
The nitrogen-13 produced as a result decays with a half-life of to carbon-13, emitting a positron. The positron quickly annihilates with an electron, producing two gamma rays of about . After a lightning bolt, this gamma radiation dies down with a half-life of ten minutes, but these low-energy gamma rays go only about 90 metres through the air on average, so they may only be detected for a minute or so as the "cloud" of 13N and 15O floats by, carried by the wind.
Nitrogen-14
Nitrogen-14 makes up about 99.636% of natural nitrogen.
Nitrogen-14 is one of the very few stable nuclides with both an odd number of protons and an odd number of neutrons (seven each) and is the only one to make up a majority of its element. Each proton or neutron contributes a nuclear spin of plus or minus 1/2, giving the nucleus a total magnetic spin of one.
The original source of nitrogen-14 and nitrogen-15 in the Universe is believed to be stellar nucleosynthesis, where they are produced as part of the CNO cycle.
Nitrogen-14 is the source of naturally occurring, radioactive carbon-14. Some kinds of cosmic radiation cause a nuclear reaction with nitrogen-14 in the upper atmosphere of the Earth, creating carbon-14, which decays back to nitrogen-14 with a half-life of about 5,700 years.
Nitrogen-15
Nitrogen-15 is a rare stable isotope of nitrogen. Two sources of nitrogen-15 are the positron emission of oxygen-15 and the beta decay of carbon-15. Nitrogen-15 presents one of the lowest thermal neutron capture cross sections of all isotopes.
Nitrogen-15 is frequently used in NMR (Nitrogen-15 NMR spectroscopy). Unlike the more abundant nitrogen-14, which has an integer nuclear spin and thus a quadrupole moment, 15N has a fractional nuclear spin of one-half, which offers advantages for NMR such as narrower line width.
Nitrogen-15 tracing is a technique used to study the nitrogen cycle.
Nitrogen-16
The radioisotope 16N is the dominant radionuclide in the coolant of pressurised water reactors or boiling water reactors during normal operation. It is produced from 16O (in water) via an (n,p) reaction, in which the 16O atom captures a neutron and expels a proton. It has a short half-life of about 7.1 s, but its decay back to 16O produces high-energy gamma radiation (5 to 7 MeV). Because of this, access to the primary coolant piping in a pressurised water reactor must be restricted during reactor power operation. It is a sensitive and immediate indicator of leaks from the primary coolant system to the secondary steam cycle and is the primary means of detection for such leaks.
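A back-of-the-envelope sketch of why this activity matters only during operation: with the 7.1 s half-life quoted above, the 16N activity collapses within minutes of shutdown (illustrative calculation only).

import math

# Time for 16N activity to fall by a given factor, using the ~7.1 s half-life above.
HALF_LIFE_S = 7.1

def time_to_decay_by(factor: float) -> float:
    """Seconds required for the activity to drop by the given factor."""
    return HALF_LIFE_S * math.log(factor) / math.log(2)

for factor in (10, 1_000, 1_000_000):
    print(f"factor {factor:>9,}: about {time_to_decay_by(factor):.0f} s")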
Isotopic signatures
| Physical sciences | Group 15 | Chemistry |
2527115 | https://en.wikipedia.org/wiki/Isotopes%20of%20carbon | Isotopes of carbon | Carbon (6C) has 14 known isotopes, from to as well as , of which and are stable. The longest-lived radioisotope is , with a half-life of years. This is also the only carbon radioisotope found in nature, as trace quantities are formed cosmogenically by the reaction + → + . The most stable artificial radioisotope is , which has a half-life of . All other radioisotopes have half-lives under 20 seconds, most less than 200 milliseconds. The least stable isotope is , with a half-life of . Light isotopes tend to decay into isotopes of boron and heavy ones tend to decay into isotopes of nitrogen.
List of isotopes
|-id=Carbon-8
|
| style="text-align:right" | 6
| style="text-align:right" | 2
|
| []
| 2p
|
| 0+
|
|
|-id=Carbon-9
| rowspan=3|
| rowspan=3 style="text-align:right" | 6
| rowspan=3 style="text-align:right" | 3
| rowspan=3|
| rowspan=3|
| β+ ()
|
| rowspan=3|3/2−
| rowspan=3|
| rowspan=3|
|-
| β+α ()
|
|-
| β+p ()
|
|-id=Carbon-10
|
| style="text-align:right" | 6
| style="text-align:right" | 4
|
|
| β+
|
| 0+
|
|
|-id=Carbon-11
| rowspan=1|
| rowspan=1 style="text-align:right" | 6
| rowspan=1 style="text-align:right" | 5
| rowspan=1 |
| rowspan=1 |
| β+
|
| rowspan=1 |3/2−
| rowspan=1 |
| rowspan=1 |
|-id=Carbon-11m
| style="text-indent:1em" |
| colspan=3 style="text-indent:2em" |
|
| p ?
| ?
| 1/2+
|
|
|-id=Carbon-12
|
| style="text-align:right" | 6
| style="text-align:right" | 6
| 12 exactly
| colspan=3 align=center|Stable
| 0+
| [, ]
|-id=Carbon-13
|
| style="text-align:right" | 6
| style="text-align:right" | 7
|
| colspan=3 align=center|Stable
| 1/2−
| [, ]
|-id=Carbon-14
|
| style="text-align:right" | 6
| style="text-align:right" | 8
|
|
| β−
|
| 0+
| Trace
| < 10−12
|-id=Carbon-14m
| style="text-indent:1em" |
| colspan="3" style="text-indent:2em" |
|
| IT
|
| (2−)
|
|
|-id=Carbon-15
|
| style="text-align:right" | 6
| style="text-align:right" | 9
|
|
| β−
|
| 1/2+
|
|
|-id=Carbon-16
| rowspan=2|
| rowspan=2 style="text-align:right" | 6
| rowspan=2 style="text-align:right" | 10
| rowspan=2|
| rowspan=2|
| β−n ()
|
| rowspan=2|0+
| rowspan=2|
| rowspan=2|
|-
| β− ()
|
|-id=Carbon-17
| rowspan=3|
| rowspan=3 style="text-align:right" | 6
| rowspan=3 style="text-align:right" | 11
| rowspan=3|
| rowspan=3|
| β− ()
|
| rowspan=3|3/2+
| rowspan=3|
| rowspan=3|
|-
| β−n ()
|
|-
| β−2n ?
| ?
|-id=Carbon-18
| rowspan=3|
| rowspan=3 style="text-align:right" | 6
| rowspan=3 style="text-align:right" | 12
| rowspan=3|
| rowspan=3|
| β− ()
|
| rowspan=3|0+
| rowspan=3|
| rowspan=3|
|-
| β−n ()
|
|-
| β−2n ?
| ?
|-id=Carbon-19
| rowspan=3|
| rowspan=3 style="text-align:right" | 6
| rowspan=3 style="text-align:right" | 13
| rowspan=3|
| rowspan=3|
| β−n ()
|
| rowspan=3|1/2+
| rowspan=3|
| rowspan=3|
|-
| β− ()
|
|-
| β−2n ()
|
|-id=Carbon-20
| rowspan=3|
| rowspan=3 style="text-align:right" | 6
| rowspan=3 style="text-align:right" | 14
| rowspan=3|
| rowspan=3|
| β−n ()
|
| rowspan=3|0+
| rowspan=3|
| rowspan=3|
|-
| β−2n (< )
|
|-
| β− (> )
|
|-id=Carbon-22
| rowspan=3|
| rowspan=3 style="text-align:right" | 6
| rowspan=3 style="text-align:right" | 16
| rowspan=3|
| rowspan=3|
| β−n ()
|
| rowspan=3|0+
| rowspan=3|
| rowspan=3|
|-
| β−2n (< )
|
|-
| β− (> )
|
Carbon-11
Carbon-11 or is a radioactive isotope of carbon that decays to boron-11. This decay mainly occurs due to positron emission, with around 0.19–0.23% of decays instead occurring by electron capture. It has a half-life of .
11C → 11B + e+ + νe
11C + e− → 11B + νe
It is produced by hitting nitrogen with protons of around 16.5 MeV in a cyclotron. The protons cause the endothermic reaction
14N + p → 11C + 4He − 2.92 MeV
It can also be produced by fragmentation of by shooting high-energy at a target.
Carbon-11 is commonly used as a radioisotope for the radioactive labeling of molecules in positron emission tomography. Among the many molecules used in this context are the radioligands []DASB and []Cimbi-5.
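Decay correction of measured activities is a routine step when working with such short-lived PET tracers. The sketch below is hypothetical and assumes a half-life of about 20.4 minutes for carbon-11, since the article's own figure is not reproduced here.

# Hypothetical decay-correction helper for a carbon-11 PET tracer.
# The 20.4-minute half-life is an assumed illustrative value.
ASSUMED_HALF_LIFE_MIN = 20.4

def decay_corrected_activity(measured_mbq: float, minutes_since_calibration: float) -> float:
    """Back-correct a measured activity (MBq) to the earlier calibration time."""
    return measured_mbq * 2 ** (minutes_since_calibration / ASSUMED_HALF_LIFE_MIN)

print(decay_corrected_activity(100.0, 40.8))   # two assumed half-lives earlier: ~400 MBq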
Natural isotopes
There are three naturally occurring isotopes of carbon: 12, 13, and 14. and are stable, occurring in a natural proportion of approximately 93:1. is produced by thermal neutrons from cosmic radiation in the upper atmosphere, and is transported down to Earth to be absorbed by living biological material. Isotopically, it constitutes a negligible part; but, since it is radioactive with a half-life of years, it is radiometrically detectable. Since dead tissue does not absorb , measuring its remaining amount is one of the methods used in archeology for the radiometric dating of biological material.
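The dating principle can be sketched numerically. The following is an illustrative calculation only; a carbon-14 half-life of about 5,730 years is assumed here, since the article's own figure is not reproduced above.

import math

ASSUMED_HALF_LIFE_YR = 5730.0   # assumed for illustration

def radiocarbon_age(remaining_fraction: float) -> float:
    """Age in years implied by the surviving fraction of 14C relative to a living sample."""
    return -ASSUMED_HALF_LIFE_YR * math.log(remaining_fraction) / math.log(2)

print(round(radiocarbon_age(0.5)))    # one half-life: ~5730 years
print(round(radiocarbon_age(0.25)))   # two half-lives: ~11460 years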
Paleoclimate
and are measured as the isotope ratio δ13C in benthic foraminifera and used as a proxy for nutrient cycling and the temperature dependent air–sea exchange of CO2 (ventilation). Plants find it easier to use the lighter isotopes () when they convert sunlight and carbon dioxide into food. For example, large blooms of plankton (free-floating organisms) absorb large amounts of from the oceans. Originally, the was mostly incorporated into the seawater from the atmosphere. If the oceans that the plankton live in are stratified (meaning that there are layers of warm water near the top, and colder water deeper down), then the surface water does not mix very much with the deeper waters, so that when the plankton dies, it sinks and takes away from the surface, leaving the surface layers relatively rich in . Where cold waters well up from the depths (such as in the North Atlantic), the water carries back up with it; when the ocean was less stratified than today, there was much more in the skeletons of surface-dwelling species. Other indicators of past climate include the presence of tropical species and coral growth rings.
Tracing food sources and diets
The quantities of the different isotopes can be measured by mass spectrometry and compared to a standard; the result (the delta value, for example δ13C) is expressed as a parts-per-thousand (‰) divergence from the ratio of the standard:
δ13C = ( (13C/12C)sample / (13C/12C)standard − 1 ) × 1000 ‰
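A minimal sketch of the calculation defined by this formula is shown below; the isotope ratios used are invented for illustration and are not values from the article.

def delta13c_permil(ratio_sample: float, ratio_standard: float) -> float:
    """delta-13C in parts per thousand relative to the chosen standard."""
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

# Invented example: a sample slightly depleted in 13C relative to the standard.
print(round(delta13c_permil(0.0109, 0.0112), 1))   # about -26.8 per mil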
The usual standard is Pee Dee Belemnite, abbreviated "PDB", a fossil belemnite from the Pee Dee Formation. Because the original PDB material has been exhausted, the derived "Vienna PDB" (VPDB) standard is generally used today.
Stable carbon isotopes in carbon dioxide are utilized differentially by plants during photosynthesis. Plants of temperate climates that follow the C3 photosynthetic pathway (grasses such as barley, rice, wheat, rye, and oats, together with sunflower, potato, tomatoes, peanuts, cotton, sugar beet, roses, Kentucky bluegrass, and most trees and their nuts or fruits) yield δ13C values averaging about −26.5‰. Grasses of hot, arid climates (maize in particular, but also millet, sorghum, sugar cane, and crabgrass) follow the C4 photosynthetic pathway, which produces δ13C values averaging about −12.5‰.
It follows that eating these different plants will affect the δ13C values in the consumer's body tissues. If an animal (or human) eats only C3 plants, its δ13C values will range from −18.5 to −22.0‰ in bone collagen and be about −14.5‰ in the hydroxylapatite of its teeth and bones.
In contrast, C4 feeders will have bone collagen with a value of −7.5‰ and hydroxylapatite value of −0.5‰.
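These end-member values suggest a simple two-source mixing estimate. The sketch below uses the collagen figures quoted above (taking roughly −20‰ as a C3 midpoint); it ignores the fractionation corrections used in real dietary reconstructions.

C3_COLLAGEN = -20.0   # approximate midpoint of the C3 range quoted above, per mil
C4_COLLAGEN = -7.5    # C4 collagen value quoted above, per mil

def c4_fraction(collagen_delta13c: float) -> float:
    """Rough fraction of C4-derived carbon in the diet, clamped to [0, 1]."""
    frac = (collagen_delta13c - C3_COLLAGEN) / (C4_COLLAGEN - C3_COLLAGEN)
    return min(1.0, max(0.0, frac))

print(c4_fraction(-20.0))    # pure C3 diet -> 0.0
print(c4_fraction(-13.75))   # halfway between the end members -> 0.5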
In actual case studies, millet and maize eaters can easily be distinguished from rice and wheat eaters. Studying how these dietary preferences are distributed geographically through time can illuminate migration paths of people and dispersal paths of different agricultural crops. However, human groups have often mixed C3 and C4 plants (northern Chinese historically subsisted on wheat and millet), or mixed plant and animal groups together (for example, southeastern Chinese subsisting on rice and fish).
| Physical sciences | Group 14 | Chemistry |
1821971 | https://en.wikipedia.org/wiki/Mid-ocean%20ridge | Mid-ocean ridge | A mid-ocean ridge (MOR) is a seafloor mountain system formed by plate tectonics. It typically has a depth of about and rises about above the deepest portion of an ocean basin. This feature is where seafloor spreading takes place along a divergent plate boundary. The rate of seafloor spreading determines the morphology of the crest of the mid-ocean ridge and its width in an ocean basin.
The production of new seafloor and oceanic lithosphere results from mantle upwelling in response to plate separation. The melt rises as magma at the linear weakness between the separating plates, and emerges as lava, creating new oceanic crust and lithosphere upon cooling.
The first discovered mid-ocean ridge was the Mid-Atlantic Ridge, which is a spreading center that bisects the North and South Atlantic basins; hence the origin of the name 'mid-ocean ridge'. Most oceanic spreading centers are not in the middle of their hosting ocean basin but are nonetheless traditionally called mid-ocean ridges. Mid-ocean ridges around the globe are linked by plate tectonic boundaries and the trace of the ridges across the ocean floor appears similar to the seam of a baseball. The mid-ocean ridge system thus is the longest mountain range on Earth, reaching about .
Global system
The mid-ocean ridges of the world are connected and form the Ocean Ridge, a single global mid-oceanic ridge system that is part of every ocean, making it the longest mountain range in the world. The continuous mountain range is long (several times longer than the Andes, the longest continental mountain range), and the total length of the oceanic ridge system is long.
Description
Morphology
At the spreading center on a mid-ocean ridge, the depth of the seafloor is approximately . On the ridge flanks, the depth of the seafloor (or the height of a location on a mid-ocean ridge above a base-level) is correlated with its age (age of the lithosphere where depth is measured). The depth-age relation can be modeled by the cooling of a lithosphere plate or mantle half-space. A good approximation is that the depth of the seafloor at a location on a spreading mid-ocean ridge is proportional to the square root of the age of the seafloor. The overall shape of ridges results from Pratt isostasy: close to the ridge axis, there is a hot, low-density mantle supporting the oceanic crust. As the oceanic plate cools, away from the ridge axis, the oceanic mantle lithosphere (the colder, denser part of the mantle that, together with the crust, comprises the oceanic plates) thickens, and the density increases. Thus older seafloor is underlain by denser material and is deeper.
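The square-root depth-age relation can be made concrete with a small sketch. The coefficients below (about 2,500 m ridge-axis depth plus roughly 350 m per square root of age in millions of years) are commonly cited half-space-cooling values assumed here for illustration; they are not taken from this article.

import math

def seafloor_depth_m(age_myr: float) -> float:
    """Approximate seafloor depth from crustal age, half-space cooling style."""
    return 2500.0 + 350.0 * math.sqrt(age_myr)

for age in (0, 10, 40, 70):
    print(f"{age:3d} Myr old seafloor: about {seafloor_depth_m(age):.0f} m deep")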
Spreading rate is the rate at which an ocean basin widens due to seafloor spreading. Rates can be computed by mapping marine magnetic anomalies that span mid-ocean ridges. As crystallized basalt extruded at a ridge axis cools below Curie points of appropriate iron-titanium oxides, magnetic field directions parallel to the Earth's magnetic field are recorded in those oxides. The orientations of the field preserved in the oceanic crust comprise a record of directions of the Earth's magnetic field with time. Because the field has reversed directions at known intervals throughout its history, the pattern of geomagnetic reversals in the ocean crust can be used as an indicator of age; given the crustal age and distance from the ridge axis, spreading rates can be calculated.
Spreading rates range from approximately 10 to 200 mm/yr. For the same amount of time, and hence the same cooling and consequent bathymetric deepening, slow-spreading ridges such as the Mid-Atlantic Ridge have spread much less far than faster ridges such as the East Pacific Rise, which gives slow ridges a steeper profile and fast ridges a gentler one. Slow-spreading ridges (less than 40 mm/yr) generally have large rift valleys, sometimes as wide as 10–20 km (6.2–12.4 mi), and very rugged terrain at the ridge crest that can have relief of up to . By contrast, fast-spreading ridges (greater than 90 mm/yr) such as the East Pacific Rise lack rift valleys. The spreading rate of the North Atlantic Ocean is ~25 mm/yr, while in the Pacific region it is 80–145 mm/yr. The highest known rate is over 200 mm/yr, reached during the Miocene on the East Pacific Rise. Ridges that spread at rates below 20 mm/yr are referred to as ultraslow-spreading ridges (e.g., the Gakkel Ridge in the Arctic Ocean and the Southwest Indian Ridge).
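The rate calculation itself is simple: a magnetic anomaly of known age found at a known distance from the ridge axis gives the spreading rate directly. The numbers in the sketch below are invented for illustration.

def half_spreading_rate_mm_per_yr(distance_km: float, age_myr: float) -> float:
    """Distance from the ridge axis divided by crustal age gives the half (one-flank) rate."""
    return (distance_km * 1e6) / (age_myr * 1e6)   # millimetres divided by years

half_rate = half_spreading_rate_mm_per_yr(distance_km=50.0, age_myr=2.0)
full_rate = 2 * half_rate                           # both flanks move apart
print(half_rate, full_rate)                         # 25.0 mm/yr half rate, 50.0 mm/yr full rate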
The spreading center or axis commonly connects to a transform fault oriented at right angles to the axis. The flanks of mid-ocean ridges are in many places marked by the inactive scars of transform faults called fracture zones. At faster spreading rates the axes often display overlapping spreading centers that lack connecting transform faults. The depth of the axis changes in a systematic way with shallower depths between offsets such as transform faults and overlapping spreading centers dividing the axis into segments. One hypothesis for different along-axis depths is variations in magma supply to the spreading center. Ultra-slow spreading ridges form both magmatic and amagmatic (currently lack volcanic activity) ridge segments without transform faults.
Volcanism
Mid-ocean ridges exhibit active volcanism and seismicity. The oceanic crust is in a constant state of 'renewal' at the mid-ocean ridges by the processes of seafloor spreading and plate tectonics. New magma steadily emerges onto the ocean floor and intrudes into the existing ocean crust at and near rifts along the ridge axes. The rocks making up the crust below the seafloor are youngest along the axis of the ridge and age with increasing distance from that axis. New magma of basalt composition emerges at and near the axis because of decompression melting in the underlying Earth's mantle. As the solid mantle material upwells isentropically, it exceeds its solidus temperature and partially melts.
The crystallized magma forms a new crust of basalt known as MORB for mid-ocean ridge basalt, and gabbro below it in the lower oceanic crust. Mid-ocean ridge basalt is a tholeiitic basalt and is low in incompatible elements. Hydrothermal vents fueled by magmatic and volcanic heat are a common feature at oceanic spreading centers. A feature of the elevated ridges is their relatively high heat flow values, of about 1–10 μcal/cm2s, or roughly 0.04–0.4 W/m2.
Most crust in the ocean basins is less than 200 million years old, which is much younger than the 4.54 billion year age of Earth. This fact reflects the process of lithosphere recycling into the Earth's mantle during subduction. As the oceanic crust and lithosphere moves away from the ridge axis, the peridotite in the underlying mantle lithosphere cools and becomes more rigid. The crust and the relatively rigid peridotite below it make up the oceanic lithosphere, which sits above the less rigid and viscous asthenosphere.
Driving mechanisms
The oceanic lithosphere is formed at an oceanic ridge, while the lithosphere is subducted back into the asthenosphere at ocean trenches. Two processes, ridge-push and slab pull, are thought to be responsible for spreading at mid-ocean ridges. Ridge push refers to the gravitational sliding of the ocean plate that is raised above the hotter asthenosphere, thus creating a body force causing sliding of the plate downslope. In slab pull the weight of a tectonic plate being subducted (pulled) below an overlying plate at a subduction zone drags the rest of the plate along behind it. The slab pull mechanism is considered to be contributing more than the ridge push.
A process previously proposed to contribute to plate motion and the formation of new oceanic crust at mid-ocean ridges is the "mantle conveyor" due to deep convection. However, some studies have shown that the upper mantle (asthenosphere) is too plastic (flexible) to generate enough friction to pull the tectonic plate along. Moreover, mantle upwelling that causes magma to form beneath the ocean ridges appears to involve only its upper 400 km (250 mi), as deduced from seismic tomography and observations of the seismic discontinuity in the upper mantle at about 400 km (250 mi). On the other hand, some of the world's largest tectonic plates such as the North American plate and South American plate are in motion, yet are being subducted only in restricted locations such as the Lesser Antilles Arc and Scotia Arc, pointing to action by the ridge push body force on these plates. Computer modeling of the plates and mantle motions suggests that plate motion and mantle convection are not connected, and the main plate driving force is slab pull.
Impact on global sea level
Increased rates of seafloor spreading (i.e. the rate of expansion of the mid-ocean ridge) have caused the global (eustatic) sea level to rise over very long timescales (millions of years). Increased seafloor spreading means that the mid-ocean ridge will then expand and form a broader ridge with decreased average depth, taking up more space in the ocean basin. This displaces the overlying ocean and causes sea levels to rise.
Sea level change can be attributed to other factors (thermal expansion, ice melting, and mantle convection creating dynamic topography). Over very long timescales, however, it is the result of changes in the volume of the ocean basins, which are, in turn, affected by rates of seafloor spreading along the mid-ocean ridges.
The 100 to 170 meters higher sea level of the Cretaceous Period (144–65 Ma) is partly attributed to plate tectonics because thermal expansion and the absence of ice sheets only account for some of the extra sea level.
Impact on seawater chemistry and carbonate deposition
Seafloor spreading on mid-ocean ridges is a global scale ion-exchange system. Hydrothermal vents at spreading centers introduce various amounts of iron, sulfur, manganese, silicon, and other elements into the ocean, some of which are recycled into the ocean crust. Helium-3, an isotope that accompanies volcanism from the mantle, is emitted by hydrothermal vents and can be detected in plumes within the ocean.
Fast spreading rates will expand the mid-ocean ridge causing basalt reactions with seawater to happen more rapidly. The magnesium/calcium ratio will be lower because more magnesium ions are being removed from seawater and consumed by the rock, and more calcium ions are being removed from the rock and released into seawater. Hydrothermal activity at the ridge crest is efficient in removing magnesium. A lower Mg/Ca ratio favors the precipitation of low-Mg calcite polymorphs of calcium carbonate (calcite seas).
Slow spreading at mid-ocean ridges has the opposite effect and will result in a higher Mg/Ca ratio favoring the precipitation of aragonite and high-Mg calcite polymorphs of calcium carbonate (aragonite seas).
Experiments show that most modern high-Mg calcite organisms would have been low-Mg calcite in past calcite seas, meaning that the Mg/Ca ratio in an organism's skeleton varies with the Mg/Ca ratio of the seawater in which it was grown.
The mineralogy of reef-building and sediment-producing organisms is thus regulated by chemical reactions occurring along the mid-ocean ridge, the rate of which is controlled by the rate of sea-floor spreading.
History
Discovery
The first indications that a ridge bisects the Atlantic Ocean basin came from the results of the British Challenger expedition in the nineteenth century. Soundings from lines dropped to the seafloor were analyzed by oceanographers Matthew Fontaine Maury and Charles Wyville Thomson and revealed a prominent rise in the seafloor that ran down the Atlantic basin from north to south. Sonar echo sounders confirmed this in the early twentieth century.
It was not until after World War II, when the ocean floor was surveyed in more detail, that the full extent of mid-ocean ridges became known. The Vema, a ship of the Lamont–Doherty Earth Observatory of Columbia University, traversed the Atlantic Ocean, recording echo sounder data on the depth of the ocean floor. A team led by Marie Tharp and Bruce Heezen concluded that there was an enormous mountain chain with a rift valley at its crest, running up the middle of the Atlantic Ocean. Scientists named it the 'Mid-Atlantic Ridge'. Other research showed that the ridge crest was seismically active and fresh lavas were found in the rift valley. Also, crustal heat flow was higher here than elsewhere in the Atlantic Ocean basin.
At first, the ridge was thought to be a feature specific to the Atlantic Ocean. However, as surveys of the ocean floor continued around the world, it was discovered that every ocean contains parts of the mid-ocean ridge system. The German Meteor expedition traced the mid-ocean ridge from the South Atlantic into the Indian Ocean early in the twentieth century. Although the first-discovered section of the ridge system runs down the middle of the Atlantic Ocean, it was found that most mid-ocean ridges are located away from the center of other ocean basins.
Impact of discovery: seafloor spreading
Alfred Wegener proposed the theory of continental drift in 1912. He stated: "the Mid-Atlantic Ridge ... zone in which the floor of the Atlantic, as it keeps spreading, is continuously tearing open and making space for fresh, relatively fluid and hot sima [rising] from depth". However, Wegener did not pursue this observation in his later works and his theory was dismissed by geologists because there was no mechanism to explain how continents could plow through ocean crust, and the theory became largely forgotten.
Following the discovery of the worldwide extent of the mid-ocean ridge in the 1950s, geologists faced a new task: explaining how such an enormous geological structure could have formed. In the 1960s, geologists discovered and began to propose mechanisms for seafloor spreading. The discovery of mid-ocean ridges and the process of seafloor spreading allowed for Wegener's theory to be expanded so that it included the movement of oceanic crust as well as the continents. Plate tectonics was a suitable explanation for seafloor spreading, and the acceptance of plate tectonics by the majority of geologists resulted in a major paradigm shift in geological thinking.
It is estimated that along Earth's mid-ocean ridges every year of new seafloor is formed by this process. With a crustal thickness of , this amounts to about of new ocean crust formed every year.
List of mid-ocean ridges
(Mid-Arctic Ridge)
Ridge (between Greenland and Spitsbergen)
(south of Iceland)
List of ancient oceanic ridges
| Physical sciences | Volcanic landforms | null |
1822282 | https://en.wikipedia.org/wiki/Male | Male | Male (symbol: ♂) is the sex of an organism that produces the gamete (sex cell) known as sperm, which fuses with the larger female gamete, or ovum, in the process of fertilisation. A male organism cannot reproduce sexually without access to at least one ovum from a female, but some organisms can reproduce both sexually and asexually. Most male mammals, including male humans, have a Y chromosome, which codes for the production of larger amounts of testosterone to develop male reproductive organs.
In humans, the word male can also be used to refer to gender, in the social sense of gender role or gender identity.
Overview
The existence of separate sexes has evolved independently at different times and in different lineages, an example of convergent evolution. The repeated pattern is a progression from sexual reproduction in isogamous species, with two or more mating types whose gametes are identical in form and behavior (but different at the molecular level), to anisogamous species with gametes of male and female types, and finally to oogamous species, in which the female gamete is very much larger than the male's and has no ability to move. There is a good argument that this pattern was driven by the physical constraints on the mechanisms by which two gametes get together as required for sexual reproduction. In some species, however, males can reproduce by themselves asexually, for example via androgenesis.
Accordingly, sex is defined across species by the type of gametes produced (i.e.: spermatozoa vs. ova) and differences between males and females in one lineage are not always predictive of differences in another.
Male/female dimorphism between organisms or reproductive organs of different sexes is not limited to animals; male gametes are produced by chytrids, diatoms and land plants, among others. In land plants, female and male designate not only the female and male gamete-producing organisms and structures but also the structures of the sporophytes that give rise to male and female plants.
Evolution
The evolution of anisogamy led to the evolution of male and female function. Before the evolution of anisogamy, mating types in a species were isogamous: the same size and both could move, catalogued only as "+" or "-" types. In anisogamy, the mating types are called gametes. The male gamete is smaller than the female gamete, and usually mobile. Anisogamy remains poorly understood, as there is no fossil record of its emergence. Numerous theories exist as to why anisogamy emerged. Many share a common thread, in that larger female gametes are more likely to survive, and that smaller male gametes are more likely to find other gametes because they can travel faster. Current models often fail to account for why isogamy remains in a few species. Anisogamy appears to have evolved multiple times from isogamy; for example, female Volvocales (a type of green algae) evolved from the plus mating type. Although sexual evolution emerged at least 1.2 billion years ago, the lack of an anisogamous fossil record makes it hard to pinpoint when males evolved. One theory suggests that males evolved from the dominant mating type (called mating type minus).
Symbol, etymology, and usage
Symbol
A common symbol used to represent the male sex is the Mars symbol ♂, a circle with an arrow pointing northeast. The Unicode code-point is:
The symbol is identical to the planetary symbol of Mars. It was first used to denote sex by Carl Linnaeus in 1751. The symbol is sometimes seen as a stylized representation of the shield and spear of the Roman god Mars. According to William T. Stearn, however, this derivation is "fanciful" and all the historical evidence favours "the conclusion of the French classical scholar Claude de Saumaise (Salmasius, 1588–1653)" that it is derived from θρ, the contraction of a Greek name for the planet Mars, which is Thouros.
Etymology
The word male was borrowed from Old French masle, from Latin masculus ("masculine, male, worthy of a man"), a diminutive of mās ("male person or animal, male").
Usage
In humans, the word male can be used in the context of gender, such as for gender role or gender identity of a man or boy. For example, according to Merriam-Webster, "male" can refer to "having a gender identity that is the opposite of female". According to the Cambridge Dictionary, "male" can mean "belonging or relating to men".
Male can also refer to a shape of connectors.
Sex determination
The sex of a particular organism may be determined by a number of factors. These may be genetic or environmental, or may naturally change during the course of an organism's life. Although most species have only two sexes (either male or female), hermaphroditic animals, such as worms, have both male and female reproductive organs. Species that are divided into females and males are classified as gonochoric in animals, as dioecious in seed plants and as dioicous in cryptogams. Males can coexist with hermaphrodites, a sexual system called androdioecy. They can also coexist with females and hermaphrodites, a sexual system called trioecy.
Not all species share a common sex-determination system. In most animals, including humans, sex is determined genetically; however, species such as Cymothoa exigua change sex depending on the number of females present in the vicinity.
Genetic determination
Most mammals, including humans, are genetically determined as such by the XY sex-determination system where males have XY (as opposed to XX in females) sex chromosomes. It is also possible in a variety of species, including humans, to be XX male or have other karyotypes. During reproduction, a male can give either an X sperm or a Y sperm, while a female can only give an X egg. A Y sperm and an X egg produce a male, while an X sperm and an X egg produce a female.
The part of the Y-chromosome which is responsible for maleness is the sex-determining region of the Y-chromosome, the SRY. The SRY activates Sox9, which forms feedforward loops with FGF9 and PGD2 in the gonads, allowing the levels of these genes to stay high enough in order to cause male development; for example, Fgf9 is responsible for development of the spermatic cords and the multiplication of Sertoli cells, both of which are crucial to male sexual development.
The ZW sex-determination system, where males have ZZ (as opposed to ZW in females) sex chromosomes, may be found in birds and some insects (mostly butterflies and moths) and other organisms. Members of the insect order Hymenoptera, such as ants and bees, are often determined by haplodiploidy, where most males are haploid and females and some sterile males are diploid. However, fertile diploid males may still appear in some species, such as Cataglyphis cursor.
Environmental determination
In some species of reptiles, such as alligators, sex is determined by the temperature at which the egg is incubated. Other species, such as some snails, practice sex change: adults start out male, then become female. In tropical clown fish, the dominant individual in a group becomes female while the other ones are male.
Secondary sex characteristics
Male animals have evolved to use secondary sex characteristics as a way of displaying traits that signify their fitness. Sexual selection is believed to be the driving force behind the development of these characteristics. Differences in physical size and the ability to fulfill the requirements of sexual selection have contributed significantly to the outcome of secondary sex characteristics in each species.
In many species, males differ from females in more ways than just the production of sperm. For example, in some insects and fish, the male is smaller than the female. In seed plants, the sporophyte sex organ of a single organism includes both the male and female parts.
In mammals, including humans, males are typically larger than females. This is often attributed to the need for male mammals to be physically stronger and more competitive in order to win mating opportunities. In humans specifically, males have more body hair and muscle mass than females.
Birds often exhibit colorful plumage that attracts females. This is true for many species of birds where the male displays more vibrant colors than the female, making them more noticeable to potential mates. These characteristics have evolved over time as a result of sexual selection, as males who exhibited these traits were more successful in attracting mates and passing on their genes.
| Biology and health sciences | Biological reproduction | null |
1824992 | https://en.wikipedia.org/wiki/Corixidae | Corixidae | Corixidae is a family of aquatic insects in the order Hemiptera. They are found worldwide in virtually any freshwater habitat and a few species live in saline water. There are about 500 known species worldwide, in 55 genera, including the genus Sigara.
Members of the Corixidae are commonly known as lesser water boatmen: the term used in the United Kingdom to distinguish species such as Corixa punctata from Notonecta glauca, or greater water-boatman, an insect of a different family, Notonectidae.
Morphology and ecology
Corixidae generally have a long flattened body ranging from long. Many have extremely fine dark brown or black striations marking the wings. They tend to have four long rear legs and two short front ones. The forelegs are covered with hairs and shaped like oars, hence the name "water boatman". Their four hindmost legs have scoop- or oar-shaped tarsi to aid swimming. They also have a triangular head with short, triangular mouthparts. Corixidae dwell in slow rivers and ponds, as well as some household pools.
Unlike their relatives the backswimmers (Notonectidae), who swim upside down, Corixidae swim right side up. It is easy to tell the two types of insects apart simply by looking at the swimming position.
Corixidae are unusual among the aquatic Hemiptera in that some species are non-predatory, feeding on aquatic plants and algae instead of insects and other small animals. They use their straw-like mouthparts to inject enzymes into plants. The enzymes digest the plant material, letting the insect suck the liquified food back through its mouthparts and into its digestive tract. However, most species are not strictly herbivorous and can even be completely predatory, like those of the subfamily Cymatiainae. In fact, Corixidae have a broad range of feeding styles: carnivorous, detritivorous, herbivorous and omnivorous.
Some species within this family are preyed upon by a number of amphibians including the rough-skinned newt (Taricha granulosa).
The reproductive cycle of Corixidae is annual. Eggs are typically oviposited (deposited) on submerged plants, sticks, or rocks. In substrate limited waters (waters without many submerged oviposition sites), every bit of available substrate will be covered in eggs.
Genera
These 52 genera belong to the family Corixidae:
Acromocoris Bode, 1953 g
Agraptocorixa Kirkaldy, 1898 g
Archaecorixa Popov, 1968 g
Arctocorisa Wallengren, 1894 i c g b
Bakharia Popov, 1988 g
Bumbacorixa Popov, 1986 g
Callicorixa White, 1873 i c g b
Cenocorixa Hungerford, 1948 i c g b
Centrocorisa Lundblad, 1928 i c g
Corisella Lundblad, 1928 i c g b
Corixa Geoffroy, 1762 i c g
Corixalia Popov, 1986 g
Corixonecta Popov, 1986 g
Corixopsis Hong & Wang, 1990 g
Cristocorixa Popov, 1986 g
Cymatia Flor, 1860 i c g b
Dasycorixa Hungerford, 1948 i c g b
Diacorixa Popov, 1971 g
Diapherinus Popov, 1966 g
Diaprepocoris c g
Ectemnostegella Lundblad, 1928 g
Gazimuria Popov, 1971 g
Glaenocorisa Thomson, 1869 i c g b
Graptocorixa Hungerford, 1930 i c g b
Haenbea Popov, 1988 g
Heliocorisa Lundblad, 1928 g
Hesperocorixa Kirkaldy, 1908 i c g b
Liassocorixa Popov, Dolling & Whalley, 1994 g
Linicorixa Lin, 1980 g
Lufengnacta Lin, 1977 g
Mesocorixa Hong & Wang, 1990 g
Mesosigara Popov, 1971 g
Morphocorixa Jaczewski, 1931 i c g
Neocorixa Hungerford, 1925 i c g
Neosigara Lundblad, 1928 g
Palmacorixa Abbott, 1912 i c g
Palmocorixa b
Paracorixa Stichel, 1955 g
Parasigara Poisson, 1957 g
Pseudocorixa Jaczewski, 1931 i c g
Ramphocorixa Abbott, 1912 i c g b
Ratiticorixa Lin, 1980 g
Shelopuga Popov, 1988 g
Siculicorixa Lin, 1980 g
Sigara Fabricius, 1775 i c g b
Sigaretta Popov, 1971 g
Trichocorixa Kirkaldy, 1908 i c g b
Velocorixa Popov, 1986 g
Venacorixa Lin Qibin, 1986 g
Vulcanicorixa Lin, 1980 g
Xenocorixa Hungerford, 1947 g
Yanliaocorixa Hong, 1983 g
Data sources: i = ITIS, c = Catalogue of Life, g = GBIF, b = Bugguide.net
| Biology and health sciences | Hemiptera (true bugs) | Animals |
1826183 | https://en.wikipedia.org/wiki/False%20gharial | False gharial | The false gharial (Tomistoma schlegelii), also known by the names Malayan gharial, Sunda gharial and tomistoma is a freshwater crocodilian of the family Gavialidae native to Peninsular Malaysia, Borneo, Sumatra and Java. It is listed as Endangered on the IUCN Red List, as the global population is estimated at around 2,500 to 10,000 mature individuals.
The species name schlegelii honors Hermann Schlegel.
Characteristics
The false gharial is dark reddish-brown above with dark brown or black spots and cross-bands on the back and tail. Ventrals are grayish-white, with some lateral dark mottling. Juveniles are mottled with black on the sides of the jaws, body, and tail. The smooth and unornamented snout is extremely long and slender, parallel sided, with a length of 3.0 to 3.5 times the width at the base. All teeth are long and needle-like, interlocking on the insides of the jaws, and are individually socketed. The dorsal scales are broad at midbody and extend onto the sides of the body. The digits are webbed at the base. Integumentary sensory organs are present on the head and body scalation. Scales behind the head are frequently a slightly enlarged single pair. Some individuals bear a number of adjoining small keeled scales. Scalation is divided medially by soft granular skin. Three transverse rows of two enlarged nuchal scales are continuous with the dorsal scales, which consist of 22 transverse rows of six to eight scales, are broad at midbody and extend onto the sides of the body. Nuchal and dorsal rows equals a total of 22 to 23 rows. It has 18 double-crested caudal whorls and 17 single-crested caudal whorls. The flanks have one or two longitudinal rows of six to eight very enlarged scales on each side.
The false gharial has one of the slimmest snouts of any living crocodilian, comparable to that of the slender-snouted crocodile and the freshwater crocodile in slenderness; only that of the gharial is noticeably slimmer. Three mature males kept in captivity measured and weighed , while a female measured and weighed . Females are up to long. Males can grow up to in length and weigh up to . The false gharial apparently has the largest skull of any extant crocodilian, in part because of the great length of the slender snout. Out of the eight longest crocodilian skulls from existing species that could be found in museums around the world, six of these belonged to false gharials. The longest crocodilian skull belonging to an extant species was of this species and measured in length, with a mandibular length of . Most of the owners of these enormous skulls had no confirmed (or even anecdotal) total measurements for the animals, but based on the known skull-to-total length ratio for the species they would measure approximately in length.
Three individuals ranging from in length and weighing from had a bite force of .
Taxonomy
The scientific name Crocodilus (Gavialis) schlegelii was proposed by Salomon Müller in 1838 who described a specimen collected in Borneo. In 1846, he proposed to use the name Tomistoma schlegelii, if it needs to be placed in a distinct genus.
The genus Tomistoma potentially also contains several extinct species like T. cairense, T. lusitanicum, T. taiwanicus, and T. coppensi. However, these species may need to be reclassified to different genera, as evidence suggests they may be paraphyletic. The false gharial's snout broadens considerably towards the base and so is more similar to those of true crocodiles than to the gharial (Gavialis gangeticus), whose osteology indicated a distinct lineage from all other living crocodilians. However, although more morphologically similar to Crocodylidae based on skeletal features, recent molecular studies using DNA sequencing consistently indicate that the false gharial and by inference other related extinct forms traditionally viewed as belonging to the crocodylian subfamily Tomistominae actually belong to Gavialoidea and Gavialidae.
Fossils of extinct Tomistoma species have been found in deposits of Paleogene, Neogene, and Quaternary ages in Taiwan, Uganda, Italy, Portugal, Egypt and India, but nearly all of them are likely to be distinct genera due to older age compared to the false gharial.
The below cladogram of the major living crocodile groups is based on molecular studies and shows the false gharial's close relationships:
The following cladogram shows the false gharial's placement within the Gavialidae; it is based on a tip dating study, for which morphological, DNA sequencing and stratigraphic data were analysed:
Distribution and habitat
The false gharial is native to Peninsular Malaysia and the islands of Borneo and Sumatra; it is locally extinct in Singapore, Vietnam and Thailand.
It inhabits peat swamps and lowland swamp forests.
Prior to the 1950s, Tomistoma occurred in freshwater ecosystems along the entire length of Sumatra east of the Barisan Mountains. The current distribution in eastern Sumatra has been reduced by 30-40% due to hunting, logging, fires, and agriculture.
The population has been estimated to comprise less than 2,500 mature individuals as of 2010.
Ecology and behaviour
Diet
Until recently, very little was known about the diet or behaviour of the false gharial in the wild. Details are slowly being revealed. In the past, the false gharial was thought to have a diet of only fish and very small vertebrates, but more recent evidence indicates that it has a generalist diet despite its narrow snout. In addition to fish and smaller aquatic animals, mature adults prey on larger vertebrates, including proboscis monkeys, long-tailed macaques, deer, water birds, and reptiles. There is an eyewitness account of a false gharial attacking a cow in East Kalimantan.
The false gharial may be considered an ecological equivalent to Neotropical crocodiles such as the Orinoco and American crocodiles, which both have slender snouts but a broad diet.
Reproduction
The false gharial is a mound-nester. Females lay small clutches of 13–35 eggs per nest and appear to produce the largest eggs of living crocodilians. They attain sexual maturity at a length of around , which is large compared to other crocodilians.
Courtship coincides with periods of rainfall in November to February and from April to June.
Conflict
In 2008, a 4-m female false gharial attacked and ate a fisherman in central Kalimantan; his remains were found in the gharial's stomach. This was the first verified fatal human attack by a false gharial. However, by 2012, at least two more verified fatal attacks on humans by false gharials had occurred, perhaps indicating an increase in human-false gharial conflict, possibly correlated with declines in habitat, habitat quality, and natural prey numbers.
Threats
The false gharial is threatened by habitat loss in most of its range due to the drainage of freshwater swamps and conversion for commercial plantation of oil palms.
It is also hunted for its skin and meat, and its eggs are often harvested for human consumption.
Population surveys carried out in the mid 2000s indicated that the distribution of individuals is spotty and disconnected, with a risk of genetic isolation.
Some population units in unprotected areas do not contain viable numbers of breeding adults.
Conservation
The false gharial is listed on CITES Appendix I.
Steps have been taken by the Malaysian and Indonesian governments to prevent its extinction in the wild. There are reports of some populations rebounding in Indonesia, yet with this slight recovery, mostly irrational fears of attacks have surfaced amongst the local human population.
| Biology and health sciences | Crocodilia | Animals |
20646064 | https://en.wikipedia.org/wiki/Quantum | Quantum | In physics, a quantum (plural: quanta) is the minimum amount of any physical entity (physical property) involved in an interaction. In the case of electromagnetic radiation, a quantum is a discrete quantity of energy proportional in magnitude to the frequency of the radiation it represents. The fundamental notion that a property can be "quantized" is referred to as "the hypothesis of quantization". This means that the magnitude of the physical property can take on only discrete values consisting of integer multiples of one quantum. For example, a photon is a single quantum of light of a specific frequency (or of any other form of electromagnetic radiation). Similarly, the energy of an electron bound within an atom is quantized and can exist only in certain discrete values. Atoms and matter in general are stable because electrons can exist only at discrete energy levels within an atom. Quantization is one of the foundations of the much broader physics of quantum mechanics. Quantization of energy and its influence on how energy and matter interact (quantum electrodynamics) is part of the fundamental framework for understanding and describing nature.
Etymology and discovery
The word is the neuter singular of the Latin interrogative adjective quantus, meaning "how much". "", the neuter plural, short for "quanta of electricity" (electrons), was used in a 1902 article on the photoelectric effect by Philipp Lenard, who credited Hermann von Helmholtz for using the word in the area of electricity. However, the word quantum in general was well known before 1900, e.g. quantum was used in E. A. Poe's Loss of Breath. It was often used by physicians, such as in the term quantum satis, "the amount which is enough". Both Helmholtz and Julius von Mayer were physicians as well as physicists. Helmholtz used quantum with reference to heat in his article on Mayer's work, and the word quantum can be found in the formulation of the first law of thermodynamics by Mayer in his letter dated July 24, 1841.
In 1901, Max Planck used quanta to mean "quanta of matter and electricity", gas, and heat. In 1905, in response to Planck's work and the experimental work of Lenard (who explained his results by using the term quanta of electricity), Albert Einstein suggested that radiation existed in spatially localized packets which he called "quanta of light" ("Lichtquanta").
The concept of quantization of radiation was discovered in 1900 by Max Planck, who had been trying to understand the emission of radiation from heated objects, known as black-body radiation. By assuming that energy can be absorbed or released only in tiny, differential, discrete packets (which he called "bundles", or "energy elements"), Planck accounted for certain objects changing color when heated. On December 14, 1900, Planck reported his findings to the German Physical Society, and introduced the idea of quantization for the first time as a part of his research on black-body radiation. As a result of his experiments, Planck deduced the numerical value of h, known as the Planck constant, and reported more precise values for the unit of electrical charge and the Avogadro–Loschmidt number, the number of real molecules in a mole, to the German Physical Society. After his theory was validated, Planck was awarded the Nobel Prize in Physics for his discovery in 1918.
Quantization
While quantization was first discovered in electromagnetic radiation, it describes a fundamental aspect of energy not just restricted to photons.
In the attempt to bring theory into agreement with experiment, Max Planck postulated that electromagnetic energy is absorbed or emitted in discrete packets, or quanta.
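A numerical illustration of energy quantization, using the standard Planck relation E = hf for a single quantum of light (the relation and constant are textbook values, not quoted from this article):

PLANCK_CONSTANT = 6.626e-34   # joule-seconds

def photon_energy(frequency_hz: float) -> float:
    """Energy of one quantum (photon) at the given frequency, in joules."""
    return PLANCK_CONSTANT * frequency_hz

print(photon_energy(5.4e14))  # visible green light: roughly 3.6e-19 J per photon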
| Physical sciences | Quantum mechanics | Physics |
20646438 | https://en.wikipedia.org/wiki/Real%20number | Real number | In mathematics, a real number is a number that can be used to measure a continuous one-dimensional quantity such as a distance, duration or temperature. Here, continuous means that pairs of values can have arbitrarily small differences. Every real number can be almost uniquely represented by an infinite decimal expansion.
The real numbers are fundamental in calculus (and in many other branches of mathematics), in particular by their role in the classical definitions of limits, continuity and derivatives.
The set of real numbers, sometimes called "the reals", is traditionally denoted by a bold , often using blackboard bold, .
The adjective real, used in the 17th century by René Descartes, distinguishes real numbers from imaginary numbers such as the square roots of .
The real numbers include the rational numbers, such as the integer and the fraction . The rest of the real numbers are called irrational numbers. Some irrational numbers (as well as all the rationals) are the root of a polynomial with integer coefficients, such as the square root ; these are called algebraic numbers. There are also real numbers which are not, such as ; these are called transcendental numbers.
Real numbers can be thought of as all points on a line called the number line or real line, where the points corresponding to integers () are equally spaced.
Conversely, analytic geometry is the association of points on lines (especially axis lines) to real numbers such that geometric displacements are proportional to differences between corresponding numbers.
The informal descriptions above of the real numbers are not sufficient for ensuring the correctness of proofs of theorems involving real numbers. The realization that a better definition was needed, and the elaboration of such a definition was a major development of 19th-century mathematics and is the foundation of real analysis, the study of real functions and real-valued sequences. A current axiomatic definition is that real numbers form the unique (up to an isomorphism) Dedekind-complete ordered field. Other common definitions of real numbers include equivalence classes of Cauchy sequences (of rational numbers), Dedekind cuts, and infinite decimal representations. All these definitions satisfy the axiomatic definition and are thus equivalent.
Characterizing properties
Real numbers are completely characterized by their fundamental properties that can be summarized by saying that they form an ordered field that is Dedekind complete. Here, "completely characterized" means that there is a unique isomorphism between any two Dedekind complete ordered fields, and thus that their elements have exactly the same properties. This implies that one can manipulate real numbers and compute with them, without knowing how they can be defined; this is what mathematicians and physicists did during several centuries before the first formal definitions were provided in the second half of the 19th century. See Construction of the real numbers for details about these formal definitions and the proof of their equivalence.
Arithmetic
The real numbers form an ordered field. Intuitively, this means that methods and rules of elementary arithmetic apply to them. More precisely, there are two binary operations, addition and multiplication, and a total order that have the following properties (a compact symbolic summary is given after the lists below).
The addition of two real numbers and produces a real number denoted which is the sum of and .
The multiplication of two real numbers and produces a real number denoted or which is the product of and .
Addition and multiplication are both commutative, which means that and for all real numbers and .
Addition and multiplication are both associative, which means that and for all real numbers , and , and that parentheses may be omitted in both cases.
Multiplication is distributive over addition, which means that for all real numbers , and .
There is a real number called zero and denoted which is an additive identity, which means that for every real number .
There is a real number denoted which is a multiplicative identity, which means that for every real number .
Every real number has an additive inverse denoted This means that for every real number .
Every nonzero real number has a multiplicative inverse denoted or This means that for every nonzero real number .
The total order is denoted being that it is a total order means two properties: given two real numbers and , exactly one of or is true; and if and then one has also
The order is compatible with addition and multiplication, which means that implies for every real number , and is implied by and
Many other properties can be deduced from the above ones. In particular:
for every real number
for every nonzero real number
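The defining properties above, whose symbolic forms are omitted in the lists, can be restated compactly. The following summary is a standard rendering (with a, b, c denoting arbitrary real numbers), not a quotation of this article:

% Ordered-field axioms, compact form
\begin{align*}
&a+b=b+a, \qquad ab=ba                              && \text{(commutativity)}\\
&(a+b)+c=a+(b+c), \qquad (ab)c=a(bc)                && \text{(associativity)}\\
&a(b+c)=ab+ac                                       && \text{(distributivity)}\\
&a+0=a, \qquad a\cdot 1=a                           && \text{(identities)}\\
&a+(-a)=0, \qquad a\cdot a^{-1}=1 \ \ (a\neq 0)     && \text{(inverses)}\\
&a<b \ \Longrightarrow\ a+c<b+c                     && \text{(order and addition)}\\
&0<a,\ 0<b \ \Longrightarrow\ 0<ab                  && \text{(order and multiplication)}
\end{align*}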
Auxiliary operations
Several other operations are commonly used, which can be deduced from the above ones.
Subtraction: the subtraction of two real numbers and results in the sum of and the additive inverse of ; that is,
Division: the division of a real number by a nonzero real number is denoted or and defined as the multiplication of with the multiplicative inverse of ; that is,
Absolute value: the absolute value of a real number , denoted measures its distance from zero, and is defined as
Auxiliary order relations
The total order that is considered above is denoted and read as " is less than ". Three other order relations are also commonly used:
Greater than: read as " is greater than ", is defined as if and only if
Less than or equal to: read as " is less than or equal to " or " is not greater than ", is defined as or equivalently as
Greater than or equal to: read as " is greater than or equal to " or " is not less than ", is defined as or equivalently as
Integers and fractions as real numbers
The real numbers and are commonly identified with the natural numbers and . This allows identifying any natural number with the sum of real numbers equal to .
This identification can be pursued by identifying a negative integer (where is a natural number) with the additive inverse of the real number identified with Similarly a rational number (where and are integers and ) is identified with the division of the real numbers identified with and .
These identifications make the set of the rational numbers an ordered subfield of the real numbers The Dedekind completeness described below implies that some real numbers, such as are not rational numbers; they are called irrational numbers.
The above identifications make sense, since natural numbers, integers and real numbers are generally not defined by their individual nature, but by defining properties (axioms). So, the identification of natural numbers with some real numbers is justified by the fact that Peano axioms are satisfied by these real numbers, with the addition with taken as the successor function.
Formally, one has an injective homomorphism of ordered monoids from the natural numbers to the integers an injective homomorphism of ordered rings from to the rational numbers and an injective homomorphism of ordered fields from to the real numbers The identifications consist of not distinguishing the source and the image of each injective homomorphism, and thus to write
These identifications are formally abuses of notation (since, formally, a rational number is an equivalence class of pairs of integers, and a real number is an equivalence class of Cauchy sequences), and are generally harmless. It is only in very specific situations that one must avoid them and use the above homomorphisms explicitly. This is the case in constructive mathematics and computer programming. In the latter case, these homomorphisms are interpreted as type conversions that can often be done automatically by the compiler.
Dedekind completeness
The preceding properties do not distinguish real numbers from rational numbers. The distinction is provided by Dedekind completeness, which states that every nonempty set of real numbers with an upper bound admits a least upper bound. This means the following. A set of real numbers is bounded above if there is a real number such that for all ; such a is called an upper bound of . So, Dedekind completeness means that, if is nonempty and bounded above, it has an upper bound that is less than any other upper bound.
Dedekind completeness implies other sorts of completeness (see below), but also has some important consequences.
Archimedean property: for every real number , there is an integer such that (take, where is the least upper bound of the integers less than ).
Equivalently, if is a positive real number, there is a positive integer such that .
Every positive real number has a positive square root; that is, there exists a positive real number such that
Every univariate polynomial of odd degree with real coefficients has at least one real root (if the leading coefficient is positive, take the least upper bound of real numbers for which the value of the polynomial is negative).
The last two properties are summarized by saying that the real numbers form a real closed field. This implies the real version of the fundamental theorem of algebra, namely that every polynomial with real coefficients can be factored into polynomials with real coefficients of degree at most two.
Decimal representation
The most common way of describing a real number is via its decimal representation, a sequence of decimal digits each representing the product of an integer between zero and nine times a power of ten, extending to finitely many positive powers of ten to the left and infinitely many negative powers of ten to the right. For a number x whose decimal representation extends k places to the left, the standard notation is the juxtaposition of the digits b_k b_{k−1} ⋯ b_0 . a_1 a_2 ⋯, in descending order by power of ten, with non-negative and negative powers of ten separated by a decimal point, representing the infinite series
x = b_k 10^k + ⋯ + b_1 10 + b_0 + a_1/10 + a_2/10² + ⋯
For example, for the circle constant π = 3.14159⋯, k is zero and b_0 = 3, a_1 = 1, a_2 = 4, and so on.
More formally, a decimal representation for a nonnegative real number x consists of a nonnegative integer k and integers between zero and nine in the infinite sequence
b_k, …, b_0, a_1, a_2, …
(If k > 0, then by convention b_k ≠ 0.)
Such a decimal representation specifies the real number as the least upper bound of the decimal fractions that are obtained by truncating the sequence: given a positive integer n, the truncation of the sequence at the place n is the finite partial sum
D_n = b_k 10^k + ⋯ + b_0 + a_1/10 + ⋯ + a_n/10ⁿ.
The real number x defined by the sequence is the least upper bound of the D_n, which exists by Dedekind completeness.
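For example (a routine check added for illustration, not from the original text), for x = 1/3 the truncations are D_n = 0.33⋯3 with n threes, so

\[
D_n = \frac{1}{3} - \frac{1}{3\cdot 10^{\,n}}, \qquad \sup_n D_n = \frac{1}{3},
\]

and the least upper bound of the truncations recovers the number, as stated above.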
Conversely, given a nonnegative real number x, one can define a decimal representation of x by induction, as follows. Define b_k ⋯ b_0 as the decimal representation of the largest integer D_0 such that D_0 ≤ x (this integer exists because of the Archimedean property). Then, supposing by induction that the decimal fraction D_i has been defined for i < n, one defines a_n as the largest digit such that D_{n−1} + a_n/10ⁿ ≤ x, and one sets D_n = D_{n−1} + a_n/10ⁿ.
One can use the defining properties of the real numbers to show that x is the least upper bound of the D_n. So, the resulting sequence of digits is called a decimal representation of x.
Another decimal representation can be obtained by replacing "≤ x" with "< x" in the preceding construction. These two representations are identical, unless x is a decimal fraction of the form m/10^h. In this case, in the first decimal representation, all a_n are zero for n > h, and, in the second representation, they are all 9 (see 0.999... for details).
In summary, there is a bijection between the real numbers and the decimal representations that do not end with infinitely many trailing 9s.
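Read as an algorithm, the construction above extracts digits greedily. Below is a minimal Python sketch using exact rational inputs, so that the greedy choice is exact; the function name and the sample value are illustrative choices, not part of the original text.

    from fractions import Fraction

    def decimal_digits(x, n):
        """Integer part and first n digits after the point of a nonnegative
        rational x, chosen greedily as in the inductive construction above."""
        assert x >= 0
        d = int(x)              # D_0: the largest integer <= x
        digits = []
        frac = x - d            # remaining fractional part
        for _ in range(n):
            frac *= 10
            a = int(frac)       # largest digit a with D_{i-1} + a/10^i <= x
            digits.append(a)
            frac -= a
        return d, digits

    print(decimal_digits(Fraction(1, 7), 6))   # (0, [1, 4, 2, 8, 5, 7])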
The preceding considerations apply directly for every numeral base b ≥ 2, simply by replacing 10 with b and 9 with b − 1.
Topological completeness
A main reason for using real numbers is so that many sequences have limits. More formally, the reals are complete (in the sense of metric spaces or uniform spaces, which is a different sense than the Dedekind completeness of the order in the previous section):
A sequence (x_n) of real numbers is called a Cauchy sequence if for any ε > 0 there exists an integer N (possibly depending on ε) such that the distance |x_n − x_m| is less than ε for all n and m that are both greater than N. This definition, originally provided by Cauchy, formalizes the fact that the x_n eventually come and remain arbitrarily close to each other.
A sequence (x_n) converges to the limit x if its elements eventually come and remain arbitrarily close to x, that is, if for any ε > 0 there exists an integer N (possibly depending on ε) such that the distance |x_n − x| is less than ε for n greater than N.
Every convergent sequence is a Cauchy sequence, and the converse is true for real numbers; this means that the topological space of the real numbers is complete.
The set of rational numbers is not complete. For example, the sequence (1; 1.4; 1.41; 1.414; 1.4142; 1.41421; ...), where each term adds a digit of the decimal expansion of the positive square root of 2, is Cauchy but it does not converge to a rational number (in the real numbers, in contrast, it converges to the positive square root of 2).
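A small Python check of this example; the helper function below is illustrative, not part of the original text.

    import math

    # Decimal truncations of sqrt(2) = 1.41421356..., the sequence quoted above.
    def truncation(k):
        return math.floor(math.sqrt(2) * 10**k) / 10**k

    xs = [truncation(k) for k in range(8)]
    print(xs)   # [1.0, 1.4, 1.41, 1.414, 1.4142, ...]

    # Cauchy behaviour: consecutive terms differ by less than 10**(1 - n),
    # so the terms eventually stay arbitrarily close to each other.
    for n in range(1, 8):
        assert abs(xs[n] - xs[n - 1]) < 10.0 ** (1 - n)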
The completeness property of the reals is the basis on which calculus, and more generally mathematical analysis, are built. In particular, the test that a sequence is a Cauchy sequence allows proving that a sequence has a limit, without computing it, and even without knowing it.
For example, the standard series of the exponential function
exp(x) = 1 + x + x²/2! + x³/3! + ⋯ + xⁿ/n! + ⋯
converges to a real number for every x, because the sums of the terms between indices N and M,
x^N/N! + x^(N+1)/(N+1)! + ⋯ + x^M/M!,
can be made arbitrarily small (independently of M) by choosing N sufficiently large. This proves that the sequence of partial sums is Cauchy, and thus converges, showing that exp(x) is well defined for every x.
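A quick numerical illustration in Python; the function name and the sample value x = 5.0 are arbitrary choices made for this example.

    import math

    def exp_partial(x, n):
        # Partial sum 1 + x + x^2/2! + ... + x^n/n! of the exponential series.
        return sum(x**k / math.factorial(k) for k in range(n + 1))

    x = 5.0
    for n in (5, 10, 20, 30):
        print(n, exp_partial(x, n))          # the partial sums stabilize as n grows
    print("math.exp(5.0):", math.exp(x))     # approx. 148.4131591, the common limit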
"The complete ordered field"
The real numbers are often described as "the complete ordered field", a phrase that can be interpreted in several ways.
First, an order can be lattice-complete. It is easy to see that no ordered field can be lattice-complete, because it can have no largest element (given any element z, z + 1 is larger).
Additionally, an order can be Dedekind-complete, as defined in the section Axiomatic approach below. The uniqueness result at the end of that section justifies using the word "the" in the phrase "complete ordered field" when this is the sense of "complete" that is meant. This sense of completeness is most closely related to the construction of the reals from Dedekind cuts, since that construction starts from an ordered field (the rationals) and then forms the Dedekind-completion of it in a standard way.
These two notions of completeness ignore the field structure. However, an ordered group (in this case, the additive group of the field) defines a uniform structure, and uniform structures have a notion of completeness; the description in the section Topological completeness above is a special case. (We refer to the notion of completeness in uniform spaces rather than the related and better known notion for metric spaces, since the definition of metric space relies on already having a characterization of the real numbers.) It is not true that ℝ is the only uniformly complete ordered field, but it is the only uniformly complete Archimedean field, and indeed one often hears the phrase "complete Archimedean field" instead of "complete ordered field". Every uniformly complete Archimedean field must also be Dedekind-complete (and vice versa), justifying using "the" in the phrase "the complete Archimedean field". This sense of completeness is most closely related to the construction of the reals from Cauchy sequences (the construction carried out in full in this article), since it starts with an Archimedean field (the rationals) and forms the uniform completion of it in a standard way.
But the original use of the phrase "complete Archimedean field" was by David Hilbert, who meant still something else by it. He meant that the real numbers form the largest Archimedean field in the sense that every other Archimedean field is a subfield of ℝ. Thus ℝ is "complete" in the sense that nothing further can be added to it without making it no longer an Archimedean field. This sense of completeness is most closely related to the construction of the reals from surreal numbers, since that construction starts with a proper class that contains every ordered field (the surreals) and then selects from it the largest Archimedean subfield.
Cardinality
The set of all real numbers is uncountable, in the sense that while both the set of all natural numbers and the set of all real numbers are infinite sets, there exists no one-to-one function from the real numbers to the natural numbers. The cardinality of the set of all real numbers is denoted by 𝔠 and called the cardinality of the continuum. It is strictly greater than the cardinality of the set of all natural numbers (denoted ℵ₀ and called 'aleph-naught'), and equals the cardinality of the power set of the set of the natural numbers.
The statement that there is no subset of the reals with cardinality strictly greater than ℵ₀ and strictly smaller than 𝔠 is known as the continuum hypothesis (CH). It is neither provable nor refutable using the axioms of Zermelo–Fraenkel set theory including the axiom of choice (ZFC), the standard foundation of modern mathematics. In fact, some models of ZFC satisfy CH, while others violate it.
Other properties
As a topological space, the real numbers are separable. This is because the set of rationals, which is countable, is dense in the real numbers. The irrational numbers are also dense in the real numbers; however, they are uncountable and have the same cardinality as the reals.
The real numbers form a metric space: the distance between x and y is defined as the absolute value |x − y|. By virtue of being a totally ordered set, they also carry an order topology; the topology arising from the metric and the one arising from the order are identical, but yield different presentations for the topology: in the order topology as ordered intervals, in the metric topology as epsilon-balls. The Dedekind cuts construction uses the order topology presentation, while the Cauchy sequences construction uses the metric topology presentation. The reals form a contractible (hence connected and simply connected), separable and complete metric space of Hausdorff dimension 1. The real numbers are locally compact but not compact. There are various properties that uniquely specify them; for instance, all unbounded, connected, and separable order topologies are necessarily homeomorphic to the reals.
Every nonnegative real number has a square root in ℝ, although no negative number does. This shows that the order on ℝ is determined by its algebraic structure. Also, every polynomial of odd degree admits at least one real root: these two properties make ℝ the premier example of a real closed field. Proving this is the first half of one proof of the fundamental theorem of algebra.
The reals carry a canonical measure, the Lebesgue measure, which is the Haar measure on their structure as a topological group normalized such that the unit interval [0;1] has measure 1. There exist sets of real numbers that are not Lebesgue measurable, e.g. Vitali sets.
The supremum axiom of the reals refers to subsets of the reals and is therefore a second-order logical statement. It is not possible to characterize the reals with first-order logic alone: the Löwenheim–Skolem theorem implies that there exists a countable dense subset of the real numbers satisfying exactly the same sentences in first-order logic as the real numbers themselves. The set of hyperreal numbers satisfies the same first-order sentences as ℝ. Ordered fields that satisfy the same first-order sentences as ℝ are called nonstandard models of ℝ. This is what makes nonstandard analysis work; by proving a first-order statement in some nonstandard model (which may be easier than proving it in ℝ), we know that the same statement must also be true of ℝ.
The field ℝ of real numbers is an extension field of the field ℚ of rational numbers, and can therefore be seen as a vector space over ℚ. Zermelo–Fraenkel set theory with the axiom of choice guarantees the existence of a basis of this vector space: there exists a set B of real numbers such that every real number can be written uniquely as a finite linear combination of elements of this set, using rational coefficients only, and such that no element of B is a rational linear combination of the others. However, this existence theorem is purely theoretical, as such a basis has never been explicitly described.
The well-ordering theorem implies that the real numbers can be well-ordered if the axiom of choice is assumed: there exists a total order on ℝ with the property that every nonempty subset of ℝ has a least element in this ordering. (The standard ordering ≤ of the real numbers is not a well-ordering since e.g. an open interval does not contain a least element in this ordering.) Again, the existence of such a well-ordering is purely theoretical, as it has not been explicitly described. If V=L is assumed in addition to the axioms of ZF, a well ordering of the real numbers can be shown to be explicitly definable by a formula.
A real number may be either computable or uncomputable; either algorithmically random or not; and either arithmetically random or not.
History
Simple fractions were used by the Egyptians around 1000 BC; the Vedic "Shulba Sutras" ("The rules of chords") include what may be the first "use" of irrational numbers. The concept of irrationality was implicitly accepted by early Indian mathematicians such as Manava, who was aware that the square roots of certain numbers, such as 2 and 61, could not be exactly determined.
Around 500 BC, the Greek mathematicians led by Pythagoras also realized that the square root of 2 is irrational.
For Greek mathematicians, numbers were only the natural numbers. Real numbers were called "proportions", being the ratios of two lengths, or equivalently being measures of a length in terms of another length, called unit length. Two lengths are "commensurable", if there is a unit in which they are both measured by integers, that is, in modern terminology, if their ratio is a rational number. Eudoxus of Cnidus (c. 390−340 BC) provided a definition of the equality of two irrational proportions in a way that is similar to Dedekind cuts (introduced more than 2,000 years later), except that he did not use any arithmetic operation other than multiplication of a length by a natural number (see Eudoxus of Cnidus). This may be viewed as the first definition of the real numbers.
The Middle Ages brought about the acceptance of zero, negative numbers, integers, and fractional numbers, first by Indian and Chinese mathematicians, and then by Arabic mathematicians, who were also the first to treat irrational numbers as algebraic objects (the latter being made possible by the development of algebra). Arabic mathematicians merged the concepts of "number" and "magnitude" into a more general idea of real numbers. The Egyptian mathematician Abū Kāmil Shujā ibn Aslam was the first to accept irrational numbers as solutions to quadratic equations, or as coefficients in an equation (often in the form of square roots, cube roots, and fourth roots). In Europe, such numbers, not commensurable with the numerical unit, were called irrational or surd ("deaf").
In the 16th century, Simon Stevin created the basis for modern decimal notation, and insisted that there is no difference between rational and irrational numbers in this regard.
In the 17th century, Descartes introduced the term "real" to describe roots of a polynomial, distinguishing them from "imaginary" numbers.
In the 18th and 19th centuries, there was much work on irrational and transcendental numbers. Lambert (1761) gave a flawed proof that π cannot be rational; Legendre (1794) completed the proof and showed that π is not the square root of a rational number. Liouville (1840) showed that neither e nor e² can be a root of an integer quadratic equation, and then established the existence of transcendental numbers; Cantor (1873) extended and greatly simplified this proof. Hermite (1873) proved that e is transcendental, and Lindemann (1882) showed that π is transcendental. Lindemann's proof was much simplified by Weierstrass (1885), Hilbert (1893), Hurwitz, and Gordan.
The concept that many points existed between rational numbers, such as the square root of 2, was well known to the ancient Greeks. The existence of a continuous number line was considered self-evident, but the nature of this continuity, presently called completeness, was not understood. The rigor developed for geometry did not cross over to the concept of numbers until the 1800s.
Modern analysis
The developers of calculus used real numbers and limits without defining them rigorously. In his Cours d'Analyse (1821), Cauchy made calculus rigorous, but he used the real numbers without defining them, and assumed without proof that every Cauchy sequence has a limit and that this limit is a real number.
In 1854 Bernhard Riemann highlighted the limitations of calculus in the method of Fourier series, showing the need for a rigorous definition of the real numbers.
Beginning with Richard Dedekind in 1858, several mathematicians worked on the definition of the real numbers, including Hermann Hankel, Charles Méray, and Eduard Heine, leading to the publication in 1872 of two independent definitions of real numbers, one by Dedekind, as Dedekind cuts, and the other one by Georg Cantor, as equivalence classes of Cauchy sequences. Several problems were left open by these definitions, which contributed to the foundational crisis of mathematics. Firstly both definitions suppose that rational numbers and thus natural numbers are rigorously defined; this was done a few years later with Peano axioms. Secondly, both definitions involve infinite sets (Dedekind cuts and sets of the elements of a Cauchy sequence), and Cantor's set theory was published several years later. Thirdly, these definitions imply quantification on infinite sets, and this cannot be formalized in the classical logic of first-order predicates. This is one of the reasons for which higher-order logics were developed in the first half of the 20th century.
In 1874 Cantor showed that the set of all real numbers is uncountably infinite, but the set of all algebraic numbers is countably infinite. Cantor's first uncountability proof was different from his famous diagonal argument published in 1891.
Formal definitions
The real number system can be defined axiomatically up to an isomorphism, which is described hereinafter. There are also many ways to construct "the" real number system, and a popular approach involves starting from natural numbers, then defining rational numbers algebraically, and finally defining real numbers as equivalence classes of their Cauchy sequences or as Dedekind cuts, which are certain subsets of rational numbers. Another approach is to start from some rigorous axiomatization of Euclidean geometry (say of Hilbert or of Tarski), and then define the real number system geometrically. All these constructions of the real numbers have been shown to be equivalent, in the sense that the resulting number systems are isomorphic.
Axiomatic approach
Let ℝ denote the set of all real numbers. Then:
The set ℝ is a field, meaning that addition and multiplication are defined and have the usual properties.
The field ℝ is ordered, meaning that there is a total order ≥ such that for all real numbers x, y and z:
if x ≥ y, then x + z ≥ y + z;
if x ≥ 0 and y ≥ 0, then xy ≥ 0.
The order is Dedekind-complete, meaning that every nonempty subset S of ℝ with an upper bound in ℝ has a least upper bound (a.k.a. supremum) in ℝ.
The last property applies to the real numbers but not to the rational numbers (or to other more exotic ordered fields). For example, the set {x ∈ ℚ : x² < 2} has a rational upper bound (e.g., 1.42), but no least rational upper bound, because √2 is not rational.
These properties imply the Archimedean property (which is not implied by other definitions of completeness), which states that the set of integers has no upper bound in the reals. In fact, if this were false, then the integers would have a least upper bound N; then, N − 1 would not be an upper bound, and there would be an integer n such that n > N − 1, and thus n + 1 > N, which is a contradiction with the upper-bound property of N.
The real numbers are uniquely specified by the above properties. More precisely, given any two Dedekind-complete ordered fields ℝ₁ and ℝ₂, there exists a unique field isomorphism from ℝ₁ to ℝ₂. This uniqueness allows us to think of them as essentially the same mathematical object.
For another axiomatization of ℝ, see Tarski's axiomatization of the reals.
Construction from the rational numbers
The real numbers can be constructed as a completion of the rational numbers, in such a way that a sequence defined by a decimal or binary expansion like (3; 3.1; 3.14; 3.141; 3.1415; ...) converges to a unique real number, in this case π. For details and other constructions of real numbers, see Construction of the real numbers.
Applications and connections
Physics
In the physical sciences most physical constants, such as the universal gravitational constant, and physical variables, such as position, mass, speed, and electric charge, are modeled using real numbers. In fact the fundamental physical theories such as classical mechanics, electromagnetism, quantum mechanics, general relativity, and the standard model are described using mathematical structures, typically smooth manifolds or Hilbert spaces, that are based on the real numbers, although actual measurements of physical quantities are of finite accuracy and precision.
Physicists have occasionally suggested that a more fundamental theory would replace the real numbers with quantities that do not form a continuum, but such proposals remain speculative.
Logic
The real numbers are most often formalized using the Zermelo–Fraenkel axiomatization of set theory, but some mathematicians study the real numbers with other logical foundations of mathematics. In particular, the real numbers are also studied in reverse mathematics and in constructive mathematics.
The hyperreal numbers as developed by Edwin Hewitt, Abraham Robinson, and others extend the set of the real numbers by introducing infinitesimal and infinite numbers, allowing for building infinitesimal calculus in a way closer to the original intuitions of Leibniz, Euler, Cauchy, and others.
Edward Nelson's internal set theory enriches the Zermelo–Fraenkel set theory syntactically by introducing a unary predicate "standard". In this approach, infinitesimals are (non-"standard") elements of the set of the real numbers (rather than being elements of an extension thereof, as in Robinson's theory).
The continuum hypothesis posits that the cardinality of the set of the real numbers is ℵ₁; i.e. the smallest infinite cardinal number after ℵ₀, the cardinality of the integers. Paul Cohen proved in 1963 that it is an axiom independent of the other axioms of set theory; that is: one may choose either the continuum hypothesis or its negation as an axiom of set theory, without contradiction.
Computation
Electronic calculators and computers cannot operate on arbitrary real numbers, because finite computers cannot directly store infinitely many digits or other infinite representations. Nor do they usually even operate on arbitrary definable real numbers, which are inconvenient to manipulate.
Instead, computers typically work with finite-precision approximations called floating-point numbers, a representation similar to scientific notation. The achievable precision is limited by the data storage space allocated for each number, whether as fixed-point, floating-point, or arbitrary-precision numbers, or some other representation. Most scientific computation uses binary floating-point arithmetic, often a 64-bit representation with around 16 decimal digits of precision. Real numbers satisfy the usual rules of arithmetic, but floating-point numbers do not. The field of numerical analysis studies the stability and accuracy of numerical algorithms implemented with approximate arithmetic.
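A minimal Python illustration of the difference between exact real arithmetic and 64-bit binary floating point; the particular numbers are arbitrary textbook examples, not taken from the text above.

    import sys

    # 0.1, 0.2 and 0.3 have no exact binary floating-point representation,
    # so familiar identities of real arithmetic can fail.
    print(0.1 + 0.2 == 0.3)                        # False
    print(0.1 + 0.2)                               # 0.30000000000000004

    # Even associativity of addition is not guaranteed.
    print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False

    # Roughly how many decimal digits a 64-bit float can be trusted to carry.
    print(sys.float_info.dig)                      # 15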
Alternately, computer algebra systems can operate on irrational quantities exactly by manipulating symbolic formulas for them (such as √2 or π) rather than their rational or decimal approximation. But exact and symbolic arithmetic also have limitations: for instance, they are computationally more expensive; it is not in general possible to determine whether two symbolic expressions are equal (the constant problem); and arithmetic operations can cause exponential explosion in the size of representation of a single number (for instance, squaring a rational number roughly doubles the number of digits in its numerator and denominator, and squaring a polynomial roughly doubles its number of terms), overwhelming finite computer storage.
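The digit-doubling effect is easy to observe with Python's exact rational type; the chosen numerator and denominator are arbitrary coprime 21-digit numbers used only for this illustration.

    from fractions import Fraction

    q = Fraction(10**20 + 39, 10**20 + 37)   # both parts have 21 digits (and are coprime)
    q2 = q * q

    print(len(str(q.numerator)), len(str(q.denominator)))     # 21 21
    print(len(str(q2.numerator)), len(str(q2.denominator)))   # 41 41 -- roughly doubled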
A real number is called computable if there exists an algorithm that yields its digits. Because there are only countably many algorithms, but an uncountable number of reals, almost all real numbers fail to be computable. Moreover, the equality of two computable numbers is an undecidable problem. Some constructivists accept the existence of only those reals that are computable. The set of definable numbers is broader, but still only countable.
Set theory
In set theory, specifically descriptive set theory, the Baire space is used as a surrogate for the real numbers since the latter have some topological properties (connectedness) that are a technical inconvenience. Elements of Baire space are referred to as "reals".
Vocabulary and notation
The set of all real numbers is denoted ℝ (blackboard bold) or R (upright bold). As it is naturally endowed with the structure of a field, the expression field of real numbers is frequently used when its algebraic properties are under consideration.
The sets of positive real numbers and negative real numbers are often noted ℝ⁺ and ℝ⁻, respectively; ℝ₊ and ℝ₋ are also used. The non-negative real numbers can be noted ℝ≥0, but one often sees this set noted ℝ⁺ ∪ {0}. In French mathematics, the positive real numbers and negative real numbers commonly include zero, and these sets are noted respectively ℝ₊ and ℝ₋. In this understanding, the respective sets without zero are called strictly positive real numbers and strictly negative real numbers, and are noted ℝ₊* and ℝ₋*.
The notation ℝⁿ refers to the set of the n-tuples of elements of ℝ (real coordinate space), which can be identified to the Cartesian product of n copies of ℝ. It is an n-dimensional vector space over the field of the real numbers, often called the coordinate space of dimension n; this space may be identified to the n-dimensional Euclidean space as soon as a Cartesian coordinate system has been chosen in the latter. In this identification, a point of the Euclidean space is identified with the tuple of its Cartesian coordinates.
In mathematics real is used as an adjective, meaning that the underlying field is the field of the real numbers (or the real field). For example, real matrix, real polynomial and real Lie algebra. The word is also used as a noun, meaning a real number (as in "the set of all reals").
Generalizations and extensions
The real numbers can be generalized and extended in several different directions:
The complex numbers contain solutions to all polynomial equations and hence are an algebraically closed field unlike the real numbers. However, the complex numbers are not an ordered field.
The affinely extended real number system adds two elements +∞ and −∞. It is a compact space. It is no longer a field, or even an additive group, but it still has a total order; moreover, it is a complete lattice.
The real projective line adds only one value ∞. It is also a compact space. Again, it is no longer a field, or even an additive group. However, it allows division of a nonzero element by zero. It has cyclic order described by a separation relation.
The long real line pastes together ω₁* + ω₁ copies of the real line plus a single point (here ω₁* denotes the reversed ordering of ω₁) to create an ordered set that is "locally" identical to the real numbers, but somehow longer; for instance, there is an order-preserving embedding of ω₁ in the long real line but not in the real numbers. The long real line is the largest ordered set that is complete and locally Archimedean. As with the previous two examples, this set is no longer a field or additive group.
Ordered fields extending the reals are the hyperreal numbers and the surreal numbers; both of them contain infinitesimal and infinitely large numbers and are therefore non-Archimedean ordered fields.
Self-adjoint operators on a Hilbert space (for example, self-adjoint square complex matrices) generalize the reals in many respects: they can be ordered (though not totally ordered), they are complete, all their eigenvalues are real and they form a real associative algebra. Positive-definite operators correspond to the positive reals and normal operators correspond to the complex numbers.
Agricultural marketing
Agricultural marketing covers the services involved in moving an agricultural product from the farm to the consumer. These services involve the planning, organizing, directing and handling of agricultural produce in such a way as to satisfy farmers, intermediaries and consumers. Numerous interconnected activities are involved in doing this, such as planning production, growing and harvesting, grading, packing and packaging, transport, storage, agro- and food processing, provision of market information, distribution, advertising and sale. Effectively, the term encompasses the entire range of supply chain operations for agricultural products, whether conducted through ad hoc sales or through a more integrated chain, such as one involving contract farming.
Agricultural marketing development
Efforts to develop agricultural marketing have, particularly in developing countries, tended to concentrate on a number of areas, specifically infrastructure development; information provision; training of farmers and traders in marketing and post-harvest issues; and support to the development of an appropriate policy environment. In the past, efforts were made to develop government-run marketing bodies but these have tended to become less prominent over the years.
Agricultural market infrastructure
Efficient marketing infrastructure such as wholesale, retail and assembly markets and storage facilities is essential for cost-effective marketing, to minimize post-harvest losses and to reduce health risks. Markets play an important role in rural development, income generation, food security, and developing rural-market linkages. Experience shows that planners need to be aware of how to design markets that meet a community's social and economic needs and how to choose a suitable site for a new market. In many cases sites are chosen that are inappropriate and result in under-use or even no use of the infrastructure constructed. It is also not sufficient just to build a market: attention needs to be paid to how that market will be managed, operated and maintained.
Rural assembly markets are located in production areas and primarily serve as places where farmers can meet with traders to sell their products. These may be occasional (perhaps weekly) markets, such as haat bazaars in India and Nepal, or permanent. Terminal wholesale markets are located in major metropolitan areas, where produce is finally channelled to consumers through trade between wholesalers and retailers, caterers, etc. The characteristics of wholesale markets have changed considerably as retailing changes in response to urban growth, the increasing role of supermarkets and increased consumer spending capacity. These changes may require responses in the way in which traditional wholesale markets are organized and managed.
Retail marketing systems in western countries have broadly evolved from traditional street markets through to the modern hypermarket or out-of-town shopping center. In developing countries, there remains scope to improve agricultural marketing by constructing new retail markets, despite the growth of supermarkets, although municipalities often view markets primarily as sources of revenue rather than infrastructure requiring development. Effective regulation of markets is essential. Inside a market, both hygiene rules and revenue collection activities have to be enforced. Of equal importance, however, is the maintenance of order outside the market. Licensed traders in a market will not be willing to cooperate in raising standards if they face competition from unlicensed operators outside who do not pay any of the costs involved in providing a proper service.
Market information
Efficient market information can be shown to have positive benefits for farmers and traders. Up-to-date information on prices and other market factors enables farmers to negotiate with traders and also facilitates spatial distribution of products from rural areas to towns and between markets. Most governments in developing countries have tried to provide market information services to farmers, but these have tended to experience problems of sustainability. Moreover, even when they function, the service provided is often insufficient to allow commercial decisions to be made because of time lags between data collection and dissemination. Modern communications technologies open up the possibility for market information services to improve information delivery through SMS on cell phones and the rapid growth of FM radio stations in many developing countries offers the possibility of more localised information services. In the longer run, the internet may become an effective way of delivering information to farmers. However, problems associated with the cost and accuracy of data collection still remain to be addressed. Even when they have access to market information, farmers often require assistance in interpreting that information. For example, the market price quoted on the radio may refer to a wholesale selling price and farmers may have difficulty in translating this into a realistic price at their local assembly market. Various attempts have been made in developing countries to introduce commercial market information services but these have largely been targeted at traders, commercial farmers or exporters. It is not easy to see how small, poor farmers can generate sufficient income for a commercial service to be profitable although in India a service introduced by Thomson Reuters was reportedly used by over 100,000 farmers in its first year of operation. Esoko in West Africa attempts to subsidize the cost of such services to farmers by charging access to a more advanced feature set of mobile-based tools to businesses.
Marketing training
Farmers frequently consider marketing as being their major problem. However, while they are able to identify such problems as poor prices, lack of transport and high post-harvest losses, they are often poorly equipped to identify potential solutions. Successful marketing requires learning new skills, new techniques and new ways of obtaining information. Extension officers working with ministries of agriculture or NGOs are often well-trained in agricultural production techniques but usually lack knowledge of marketing or post-harvest handling.
Enabling environments
Agricultural marketing needs to be conducted within a supportive policy, legal, institutional, macro-economic, infrastructural and bureaucratic environment. Traders and others are generally reluctant to make investments in an uncertain policy climate, such as those that restrict imports and exports or internal produce movement. Businesses have difficulty functioning when their trading activities are hampered by excessive bureaucracy. Inappropriate law can distort and reduce the efficiency of the market, increase the costs of doing business and retard the development of a competitive private sector. Poor support institutions, such as agricultural extension services, municipalities that operate markets inefficiently and inadequate export promotion bodies, can be particularly damaging. Poor roads increase the cost of doing business, reduce payments to farmers and increase prices to consumers. Finally, corruption can increase the transaction costs faced by those in the marketing chain.
Agricultural marketing support
Most governments have at some stage made efforts to promote agricultural marketing improvements. In the United States the Agricultural Marketing Service (AMS) is a division of USDA and has programs that provide testing, support standardization and grading and offer market news services. AMS oversees marketing agreements and orders, as well as research and promotion programs. It also purchases commodities for federal food programs. USDA also provides support to agricultural marketing work at various universities. In the United Kingdom, support for marketing of some commodities was provided before and after the Second World War by boards such as the Milk Marketing Board and the Egg Marketing Board. These boards were closed down in the 1970s. As a colonial power, Britain established marketing boards in many countries, particularly in Africa. Some continue to exist although many were closed at the time of the introduction of structural adjustment measures in the 1990s.
Several developing countries have established government-sponsored marketing or agribusiness units. South Africa, for example, started the National Agricultural Marketing Council (NAMC), as a response to the deregulation of the agriculture industry and closure of marketing boards in the country. India has the long-established National Institute of Agricultural Marketing. These are primarily research and policy organizations, but other agencies provide facilitating services for marketing channels, such as the provision of infrastructure, market information and documentation support. Examples from the Caribbean include the National Agricultural Marketing Development Corporation, in Trinidad and Tobago and the New Guyana Marketing Corporation in Guyana.
Recent developments
New marketing linkages between agribusiness, large retailers and farmers are gradually being developed, e.g. through contract farming, group marketing and other forms of collective action.
Donors and NGOs are paying increasing attention to ways of promoting direct linkages between farmers and buyers within a value chain context. More attention is now being paid to the development of regional markets (e.g. East Africa) and to structured trading systems that should facilitate such developments. The growth of supermarkets, particularly in Latin America and East and South East Asia, is having a significant impact on marketing channels for horticultural, dairy and livestock products. Nevertheless, "spot" markets will continue to be important for many years, necessitating attention to infrastructure improvement such as for retail and wholesale markets.
Lightning rod
A lightning rod or lightning conductor (British English) is a metal rod mounted on a structure and intended to protect the structure from a lightning strike. If lightning hits the structure, it is most likely to strike the rod and be conducted to ground through a wire, rather than passing through the structure, where it could start a fire or even cause electrocution. Lightning rods are also called finials, air terminals, or strike termination devices.
In a lightning protection system, a lightning rod is a single component of the system. The lightning rod requires a connection to the earth to perform its protective function. Lightning rods come in many different forms, including hollow, solid, pointed, rounded, flat strips, or even bristle brush-like. The main attribute common to all lightning rods is that they are all made of conductive materials, such as copper and aluminum. Copper and its alloys are the most common materials used in lightning protection.
History
The first proper lightning rod was invented by Father Prokop Diviš, a Czech priest and scientist, who erected a grounded lightning rod in 1754. Diviš's design involved a vertical iron rod topped with a grounded wire, intended to attract lightning strikes and safely conduct them to the ground. His experimental apparatus, known as the "weather machine", predated Benjamin Franklin's more widely recognized experiments. Franklin, unaware of Diviš's work, independently developed and popularized his own lightning rod design, which became widely adopted across Europe and North America. Franklin's contribution significantly advanced the understanding and application of lightning protection systems, although Diviš's earlier conceptual work remains an important milestone in the history of electrical safety engineering.
British Empire
In what later became the United States, the pointed lightning rod conductor (not grounded), also called a lightning attractor or Franklin rod, was invented by Benjamin Franklin in 1752 as part of his groundbreaking exploration of electricity. Although not the first to suggest a correlation between electricity and lightning, Franklin was the first to propose a workable system for testing his hypothesis. Franklin speculated that, with an iron rod sharpened to a point, "The electrical fire would, I think, be drawn out of a cloud silently, before it could come near enough to strike." Franklin speculated about lightning rods for several years before his reported kite experiment.
In the 19th century, the lightning rod became a decorative motif. Lightning rods were embellished with ornamental glass balls (now prized by collectors). The ornamental appeal of these glass balls has been used in weather vanes. The main purpose of these balls, however, is to provide evidence of a lightning strike by shattering or falling off. If after a storm a ball is discovered missing or broken, the property owner should then check the building, rod, and grounding wire for damage.
Balls of solid glass occasionally were used in a method purported to prevent lightning strikes to ships and other objects. The idea was that glass objects, being non-conductors, are seldom struck by lightning. Therefore, goes the theory, there must be something about glass that repels lightning. Hence the best method for preventing a lightning strike to a wooden ship was to bury a small solid glass ball in the tip of the highest mast. The random behavior of lightning combined with observers' confirmation bias ensured that the method gained a good bit of credence even after the development of the marine lightning rod soon after Franklin's initial work.
The first lightning conductors on ships were supposed to be hoisted when lightning was anticipated, and had a low success rate. In 1820 William Snow Harris invented a successful system for fitting lightning protection to the wooden sailing ships of the day, but despite successful trials which began in 1830, the British Royal Navy did not adopt the system until 1842, by which time the Imperial Russian Navy had already adopted the system.
In the 1990s, when the Statue of Freedom atop the United States Capitol building in Washington, D.C. was restored, its 'lightning points' were replaced as originally constructed. The statue was designed with multiple devices that are tipped with platinum. The Washington Monument was also equipped with multiple lightning points, and the Statue of Liberty in New York Harbor is struck by lightning, which is shunted to ground.
Lightning protection system
A lightning protection system is designed to protect a structure from damage due to lightning strikes by intercepting such strikes and safely passing their extremely high currents to ground. A lightning protection system includes a network of air terminals, bonding conductors, and ground electrodes designed to provide a low impedance path to ground for potential strikes.
Lightning protection systems are used to prevent lightning strike damage to structures. Lightning protection systems mitigate the fire hazard which lightning strikes pose to structures. A lightning protection system provides a low-impedance path for the lightning current to lessen the heating effect of current flowing through flammable structural materials. If lightning travels through porous and water-saturated materials, these materials may explode if their water content is flashed to steam by heat produced from the high current. This is why trees are often shattered by lightning strikes.
Because of the high energy and current levels associated with lightning (currents can be in excess of 150,000 A), and the very rapid rise time of a lightning strike, no protection system can guarantee absolute safety from lightning. Lightning current will divide to follow every conductive path to ground, and even the divided current can cause damage. Secondary "side-flashes" can be enough to ignite a fire, blow apart brick, stone, or concrete, or injure occupants within a structure or building. However, the benefits of basic lightning protection systems have been evident for well over a century.
Laboratory-scale measurements of lightning effects do not scale to applications involving natural lightning. Field practice has mainly been derived from trial and error, informed by the best available laboratory research into a highly complex and variable phenomenon.
The parts of a lightning protection system are air terminals (lightning rods or strike termination devices), bonding conductors, ground terminals (ground or "earthing" rods, plates, or mesh), and all of the connectors and supports to complete the system. The air terminals are typically arranged at or along the upper points of a roof structure, and are electrically bonded together by bonding conductors (called "down conductors" or "downleads"), which are connected by the most direct route to one or more grounding or earthing terminals. Connections to the earth electrodes must not only have low resistance, but must have low self-inductance.
An example of a structure vulnerable to lightning is a wooden barn. When lightning strikes the barn, the wooden structure and its contents may be ignited by the heat generated by lightning current conducted through parts of the structure. A basic lightning protection system would provide a conductive path between an air terminal and earth, so that most of the lightning's current will follow the path of the lightning protection system, with substantially less current traveling through flammable materials.
Originally, scientists believed that such a lightning protection system of air terminals and "downleads" directed the current of the lightning down into the earth to be "dissipated". However, high speed photography has clearly demonstrated that lightning is actually composed of both a cloud component and an oppositely charged ground component. During "cloud-to-ground" lightning, these oppositely charged components usually "meet" somewhere in the atmosphere well above the earth to equalize previously unbalanced charges. The heat generated as this electric current flows through flammable materials is the hazard which lightning protection systems attempt to mitigate by providing a low-resistance path for the lightning circuit. No lightning protection system can be relied upon to "contain" or "control" lightning completely (nor thus far, to prevent lightning strikes entirely), but they do seem to help immensely on most occasions of lightning strikes.
Steel framed structures can bond the structural members to earth to provide lightning protection. A metal flagpole with its foundation in the earth is its own extremely simple lightning protection system. However, the flag(s) flying from the pole during a lightning strike may be completely incinerated.
The majority of lightning protection systems in use today are of the traditional Franklin design. The fundamental principle used in Franklin-type lightning protections systems is to provide a sufficiently low impedance path for the lightning to travel through to reach ground without damaging the building. This is accomplished by surrounding the building in a kind of Faraday cage. A system of lightning protection conductors and lightning rods are installed on the roof of the building to intercept any lightning before it strikes the building.
Russia
A lightning conductor may have been intentionally used in the Leaning Tower of Nevyansk. The spire of the tower is crowned with a metallic rod in the shape of a gilded sphere with spikes. This lightning rod is grounded through the rebar carcass, which pierces the entire building.
The Nevyansk Tower was built between 1721 and 1745, on the orders of industrialist Akinfiy Demidov. The Nevyansk Tower was built 28 years before Benjamin Franklin's experiment and scientific explanation. However, the true intent behind the metal rooftop and rebars remains unknown.
Europe
The church towers of many European cities, usually the highest structures in town, were likely to be hit by lightning. Peter Ahlwardts ("Reasonable and Theological Considerations about Thunder and Lightning", 1745) advised individuals seeking cover from lightning to go anywhere except in or around a church.
There is an ongoing debate over whether a "meteorological machine", invented by the Premonstratensian priest Prokop Diviš and erected in Brenditz (now Přímětice, part of Znojmo), Moravia (now the Czech Republic) in June 1754, counts as an independent invention of the lightning rod. Diviš's apparatus was, according to his private theories, aimed towards preventing thunderstorms altogether by constantly depriving the air of its superfluous electricity. The apparatus was, however, mounted on a free-standing pole and probably better grounded than Franklin's lightning rods at that time, so it served the purpose of a lightning rod. After local protests, Diviš had to cease his weather experiments around 1760.
Structure protectors
Lightning arrester
A lightning arrester is a device, essentially an air gap between an electric wire and ground, used on electric power systems and telecommunication systems to protect the insulation and conductors of the system from the damaging effects of lightning. The typical lightning arrester has a high-voltage terminal and a ground terminal.
In telegraphy and telephony, a lightning arrester is a device placed where wires enter a structure, in order to prevent damage to electronic instruments within and ensuring the safety of individuals near the structures. Smaller versions of lightning arresters, also called surge protectors, are devices that are connected between each electrical conductor in a power or communications system, and the ground. They help prevent the flow of the normal power or signal currents to ground, but provide a path over which high-voltage lightning current flows, bypassing the connected equipment. Arresters are used to limit the rise in voltage when a communications or power line is struck by lightning or is near to a lightning strike.
Protection of electric distribution systems
In overhead electric transmission systems, one or two lighter ground wires, not themselves used to send electricity through the grid, may be mounted to the top of the pylons, poles, or towers. These conductors, often referred to as "static", "pilot" or "shield" wires, are designed to be the point of lightning termination instead of the high-voltage lines themselves. These conductors are intended to protect the primary power conductors from lightning strikes.
These conductors are bonded to earth either through the metal structure of a pole or tower, or by additional ground electrodes installed at regular intervals along the line. As a general rule, overhead power lines with voltages below 50 kV do not have a "static" conductor, but most lines carrying more than 50 kV do. The ground conductor cable may also support fibre optic cables for data transmission.
Older lines may use surge arresters which insulate conducting lines from direct bonding with earth and may be used as low voltage communication lines. If the voltage exceeds a certain threshold, such as during a lightning termination to the conductor, it "jumps" the insulators and passes to earth.
Protection of electrical substations is as varied as lightning rods themselves, and is often proprietary to the electric company.
Lightning protection of mast radiators
Radio mast radiators may be insulated from the ground by a spark gap at the base. When lightning hits the mast, it jumps this gap. A small inductivity in the feed line between the mast and the tuning unit (usually one winding) limits the voltage increase, protecting the transmitter from dangerously high voltages.
The transmitter must be equipped with a device to monitor the antenna's electrical properties. This is very important, as a charge could remain after a lightning strike, damaging the gap or the insulators.
The monitoring device switches off the transmitter when the antenna shows incorrect behavior, e.g. as a result of undesired electrical charge. When the transmitter is switched off, these charges dissipate. The monitoring device makes several attempts to switch back on. If after several attempts the antenna continues to show improper behavior, possibly as a result of structural damage, the transmitter remains switched off.
Lightning conductors and grounding precautions
Ideally, the underground part of the assembly should reside in an area of high ground conductivity. If the underground cable is able to resist corrosion well, it can be covered in salt to improve its electrical connection with the ground. While the electrical resistance of the lightning conductor between the air terminal and the Earth is of significant concern, the inductive reactance of the conductor can be even more important. For this reason, the down conductor route is kept short, and any curves have a large radius. If these measures are not taken, lightning current may arc over a resistive or reactive obstruction that it encounters in the conductor. At the very least, the arc current will damage the lightning conductor and can easily find another conductive path, such as building wiring or plumbing, and cause fires or other disasters. Grounding systems without low resistivity to the ground can still be effective in protecting a structure from lightning damage. When ground soil has poor conductivity, is very shallow, or non-existent, a grounding system can be augmented by adding ground rods, a counterpoise (ground ring) conductor, or cable radials projecting away from the building, or a concrete building's reinforcing bars can be used as a ground conductor (Ufer ground). These additions, while still not reducing the resistance of the system in some instances, will allow the lightning to disperse into the earth without damage to the structure.
Additional precautions must be taken to prevent side-flashes between conductive objects on or in the structure and the lightning protection system. The surge of lightning current through a lightning protection conductor will create a voltage difference between it and any conductive objects that are near it. This voltage difference can be large enough to cause a dangerous side-flash (spark) between the two that can cause significant damage, especially on structures housing flammable or explosive materials. The most effective way to prevent this potential damage is to ensure the electrical continuity between the lightning protection system and any objects susceptible to a side-flash. Effective bonding will allow the voltage potential of the two objects to rise and fall simultaneously, thereby eliminating any risk of a side-flash.
Lightning protection system design
Considerable material is used to make up lightning protection systems, so it is prudent to consider carefully where an air terminal will provide the greatest protection. Historical understanding of lightning, from statements made by Ben Franklin, assumed that each lightning rod protected a cone of 45 degrees. This has been found to be unsatisfactory for protecting taller structures, as it is possible for lightning to strike the side of a building.
A modeling system based on a better understanding of the termination targeting of lightning, called the Rolling Sphere Method, was developed by Dr Tibor Horváth. It has become the standard by which traditional Franklin Rod systems are installed. To understand this requires knowledge of how lightning 'moves'. As the step leader of a lightning bolt jumps toward the ground, it steps toward the grounded objects nearest its path. The maximum distance that each step may travel is called the critical distance and is proportional to the electric current. Objects are likely to be struck if they are nearer to the leader than this critical distance. It is standard practice to approximate the sphere's radius as 46 m near the ground.
An object outside the critical distance is unlikely to be struck by the leader if there is a solidly grounded object within the critical distance. Locations that are considered safe from lightning can be determined by imagining a leader's potential paths as a sphere that travels from the cloud to the ground. For lightning protection, it suffices to consider all possible spheres as they touch potential strike points. To determine strike points, consider a sphere rolling over the terrain. At each point, a potential leader position is simulated. Lightning is most likely to strike where the sphere touches the ground. Points that the sphere cannot roll across and touch are safest from lightning. Lightning protectors should be placed where they will prevent the sphere from touching a structure. However, a weak point in most lightning diversion systems is transporting the captured discharge from the lightning rod to the ground. Lightning rods are typically installed around the perimeter of flat roofs, or along the peaks of sloped roofs at intervals of 6.1 m or 7.6 m, depending on the height of the rod. When a flat roof has dimensions greater than 15 m by 15 m, additional air terminals will be installed in the middle of the roof at intervals of 15 m or less in a rectangular grid pattern.
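As a rough illustration of the geometry (an added calculation, not part of the original text): for a single vertical rod of height h on flat, open ground, a sphere of radius R resting against the tip of the rod touches the ground at a horizontal distance

\[
d = \sqrt{2Rh - h^2}
\]

from the rod, so ground-level points closer than d cannot be reached by the rolling sphere and are treated as protected. With the standard R = 46 m and, say, a 6 m rod, d = √(2·46·6 − 36) ≈ 23 m.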
Rounded vis-à-vis pointed ends
The optimal shape for the tip of a lightning rod has been controversial since the 18th century. During the period of political confrontation between Britain and its American colonies, British scientists maintained that a lightning rod should have a ball on its end, while American scientists maintained that there should be a point. The controversy has never been completely resolved.
It is difficult to resolve the controversy because proper controlled experiments are nearly impossible, but work performed by Charles B. Moore, et al., in 2000 has shed some light on the issue, finding that moderately rounded or blunt-tipped lightning rods act as marginally better strike receptors. As a result, round-tipped rods are installed on most new systems in the United States, though most existing systems still have pointed rods.
The study also found that the height of the lightning protector relative to the structure, and relative to the Earth itself, both have an effect.
Charge transfer theory
The charge transfer theory states that a lightning strike to a protected structure can be prevented by reducing the electrical potential between the protected structure and the thundercloud. This is done by transferring electric charge (such as from the nearby Earth to the sky or vice versa). Transferring electric charge from the Earth to the sky is done by installing engineered products composed of many points above the structure. It is noted that pointed objects will indeed transfer charge to the surrounding atmosphere and that a considerable electric current can be measured through the conductors as ionization occurs at the point when an electric field is present, such as happens when thunderclouds are overhead.
In the United States, the National Fire Protection Association (NFPA) does not currently endorse a device that can prevent or reduce lightning strikes. The NFPA Standards Council, following a request for a project to address Dissipation Array[tm] Systems and Charge Transfer Systems, denied the request to begin forming standards on such technology (though the Council did not foreclose on future standards development after reliable sources demonstrating the validity of the basic technology and science were submitted).
Early streamer emission (ESE) theory
The theory of early streamer emission proposes that if a lightning rod has a mechanism producing ionization near its tip, then its lightning capture area is greatly increased. At first, small quantities of radioactive isotopes (radium-226 or americium-241) were used as sources of ionization between 1930 and 1980; these were later replaced with various electrical and electronic devices. According to an early patent, since most lightning protectors' ground potentials are elevated, the path distance from the source to the elevated ground point will be shorter, creating a stronger field (measured in volts per unit distance), making that structure more prone to ionization and breakdown.
AFNOR, the French national standardization body, issued a standard, NF C 17-102, covering this technology. The NFPA also investigated the subject and there was a proposal to issue a similar standard in the USA. Initially, an NFPA independent third-party panel stated that "the [Early Streamer Emission] lightning protection technology appears to be technically sound" and that there was an "adequate theoretical basis for the [Early Streamer Emission] air terminal concept and design from a physical viewpoint". The same panel also concluded that "the recommended [NFPA 781 standard] lightning protection system has never been scientifically or technically validated and the Franklin rod air terminals have not been validated in field tests under thunderstorm conditions".
In response, the American Geophysical Union concluded that "[t]he Bryan Panel reviewed essentially none of the studies and literature on the effectiveness and scientific basis of traditional lightning protection systems and was erroneous in its conclusion that there was no basis for the Standard". AGU did not attempt to assess the effectiveness of any proposed modifications to traditional systems in its report. The NFPA withdrew its proposed draft edition of standard 781 due to a lack of evidence of increased effectiveness of Early Streamer Emission-based protection systems over conventional air terminals.
Members of the Scientific Committee of the International Conference on Lightning Protection (ICLP) have issued a joint statement expressing their opposition to Early Streamer Emission technology. ICLP maintained a web page with information related to ESE and related technologies until 2016. Still, the number of buildings and structures equipped with ESE lightning protection systems is growing, as is the number of manufacturers of ESE air terminals in Europe, the Americas, the Middle East, Russia, China, South Korea, ASEAN countries, and Australia.
Analysis of strikes
Lightning strikes to a metallic structure can vary from leaving no evidence—except, perhaps, a small pit in the metal—to the complete destruction of the structure. When there is no evidence, analyzing the strikes is difficult. This means that a strike on an uninstrumented structure must be visually confirmed, and the random behavior of lightning renders such observations difficult. Inventors have patented lightning rockets to trigger strikes for study. While controlled experiments might eventually become feasible, very good contemporaneous data is obtained via specialized radio receivers which record the characteristic electrical "signature" of lightning strikes. Through extremely accurate timing and triangulation techniques, lightning strikes can be located with great precision, such that strikes on specific objects can often be pinpointed with a high degree of confidence.
The energy in a lightning strike is typically in the range of 1 to 10 billion joules. This energy is usually released in a small number of separate strokes, each with a duration of a few tens of microseconds (typically 30 to 50 microseconds), over a period of approximately one fifth of a second. The vast majority of the energy is dissipated as heat, light and sound in the atmosphere, with a minority conducted to ground (in both senses of "ground").
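As a rough illustration of these figures, the corresponding average power can be worked out as follows; the specific energy, stroke count and durations below are assumed values within the ranges just quoted.

```python
energy_j = 5e9                  # assumed flash energy, mid-range of the 1-10 billion joule figure
flash_duration_s = 0.2          # about one fifth of a second
strokes, stroke_s = 4, 40e-6    # assumed number of strokes and per-stroke duration

print(f"average power over the whole flash: {energy_j / flash_duration_s / 1e9:.0f} GW")
print(f"average power while current flows:  {energy_j / (strokes * stroke_s) / 1e12:.0f} TW")
```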
Aircraft protectors
Aircraft are protected by devices mounted to the aircraft structure and by the design of internal systems. Lightning usually enters and exits an aircraft through the outer surface of its airframe or through static wicks. The lightning protection system provides safe conductive paths between the entry and exit points to prevent damage to electronic equipment and to protect flammable fuel or cargo from sparks.
These paths are constructed of conductive materials. Electrical insulators are only effective in combination with a conductive path because blocked lightning can easily exceed the breakdown voltage of insulators. Composite materials are constructed with layers of wire mesh to make them sufficiently conductive and structural joints are protected by making an electrical connection across the joint.
Shielded cable and conductive enclosures provide the majority of protection to electronic systems. The lightning current emits a magnetic pulse which induces current through any loops formed by the cables. The current induced in the shield of a loop creates magnetic flux through the loop in the opposite direction. This decreases the total flux through the loop and the induced voltage around it.
The lightning-conductive path and conductive shielding carry the majority of current. The remainder is bypassed around sensitive electronics using transient voltage suppressors, and blocked using electronic filters once the let-through voltage is low enough. Filters, like insulators, are only effective when lightning and surge currents are able to flow through an alternate path.
Watercraft protectors
A lightning protection installation on a watercraft comprises a lightning protector mounted on the top of a mast or superstructure, and a grounding conductor in contact with the water. Electrical conductors attach to the protector and run down to the conductor. For a vessel with a conducting (iron or steel) hull, the grounding conductor is the hull. For a vessel with a non-conducting hull, the grounding conductor may be retractable, attached to the hull, or attached to a centerboard.
Risk assessment
Some structures are inherently more or less at risk of being struck by lightning. The risk for a structure is a function of its size (area), its height, and the number of lightning strikes per year per square mile for the region. For example, a small building will be less likely to be struck than a large one, and a building in an area with a high density of lightning strikes will be more likely to be struck than one in an area with a low density of lightning strikes. The National Fire Protection Association provides a risk assessment worksheet in its lightning protection standard.
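A simplified sketch of the kind of calculation such a worksheet performs is given below. The "collection area" expression follows the common convention of extending the building footprint by three times its height; all names and numbers are assumptions rather than values from any particular standard.

```python
import math

def expected_strikes_per_year(length_m, width_m, height_m, flashes_per_km2_year):
    # Equivalent collection area: footprint plus a border of three times the building height
    collection_area_m2 = (length_m * width_m
                          + 2 * 3 * height_m * (length_m + width_m)
                          + math.pi * (3 * height_m) ** 2)
    return flashes_per_km2_year * collection_area_m2 * 1e-6  # convert m^2 to km^2

# Example: a 30 m x 20 m building, 10 m tall, in a region with 4 flashes per km^2 per year
print(f"{expected_strikes_per_year(30, 20, 10, 4):.3f} expected strikes per year")
```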
The International Electrotechnical Commission (IEC) lightning risk-assessment comprises four parts: loss of living beings, loss of service to public, loss of cultural heritage, and loss of economic value. Loss of living beings is rated as the most important and is the only loss taken into consideration for many nonessential industrial and commercial applications.
Standards
The introduction of lightning protection systems into standards allowed various manufacturers to develop protector systems to a multitude of specifications. There are multiple international, national, corporate and military lightning protection standards.
NFPA-780: "Standard for the Installation of Lightning Protection Systems" (2014)
M440.1-1, Electrical Storms and Lightning Protection, Department of Energy
AFI 32-1065 – Grounding Systems, U. S. Air Force Space Command
FAA STD 019e, Lightning and Surge Protection, Grounding, Bonding and Shielding Requirements for Facilities and Electronic Equipment
UL standards for lightning protection
UL 96: "Standard of Lightning Protection Components" (5th Edition, 2005)
UL 96A: "Standard for Installation Requirements for Lightning Protection Systems" (Twelfth Edition, 2007)
UL 1449: "Standard for Surge Protective Devices" (Fourth Edition, 2014)
IEC standards
EN 61000-4-5/IEC 61000-4-5: "Electromagnetic compatibility (EMC) – Part 4-5: Testing and measurement techniques – Surge immunity test"
EN 62305/IEC 62305: "Protection against lightning"
EN 62561/IEC 62561: "Lightning Protection System Components (LPSC)"
ITU-T K Series recommendations: "Protection against interference"
IEEE standards for grounding
IEEE SA-142-2007: "IEEE Recommended Practice for Grounding of Industrial and Commercial Power Systems". (2007)
IEEE SA-1100-2005: "IEEE Recommended Practice for Powering and Grounding Electronic Equipment" (2005)
AFNOR NF C 17-102: "Lightning protection – Protection of structures and open areas against lightning using early streamer emission air terminals" (1995)
GB 50057-2010 Design Code for Lightning Protection of Buildings
AS / NZS 1768:2007: "Lightning protection"
| Technology | Electrical protective devices | null |
20646704 | https://en.wikipedia.org/wiki/Sewage | Sewage | Sewage (or domestic sewage, domestic wastewater, municipal wastewater) is a type of wastewater that is produced by a community of people. It is typically transported through a sewer system. Sewage consists of wastewater discharged from residences and from commercial, institutional and public facilities that exist in the locality. Sub-types of sewage are greywater (from sinks, bathtubs, showers, dishwashers, and clothes washers) and blackwater (the water used to flush toilets, combined with the human waste that it flushes away). Sewage also contains soaps and detergents. Food waste may be present from dishwashing, and food quantities may be increased where garbage disposal units are used. In regions where toilet paper is used rather than bidets, that paper is also added to the sewage. Sewage contains macro-pollutants and micro-pollutants, and may also incorporate some municipal solid waste and pollutants from industrial wastewater.
Sewage usually travels from a building's plumbing either into a sewer, which will carry it elsewhere, or into an onsite sewage facility. Collection of sewage from several households together usually takes places in either sanitary sewers or combined sewers. The former is designed to exclude stormwater flows whereas the latter is designed to also take stormwater. The production of sewage generally corresponds to the water consumption. A range of factors influence water consumption and hence the sewage flowrates per person. These include: Water availability (the opposite of water scarcity), water supply options, climate (warmer climates may lead to greater water consumption), community size, economic level of the community, level of industrialization, metering of household consumption, water cost and water pressure.
The main parameters in sewage that are measured to assess the sewage strength or quality as well as treatment options include: solids, indicators of organic matter, nitrogen, phosphorus, and indicators of fecal contamination. These can be considered to be the main macro-pollutants in sewage. Sewage contains pathogens which stem from fecal matter. The following four types of pathogens are found in sewage: pathogenic bacteria, viruses, protozoa (in the form of cysts or oocysts) and helminths (in the form of eggs). In order to quantify the organic matter, indirect methods are commonly used: mainly the Biochemical Oxygen Demand (BOD) and the Chemical Oxygen Demand (COD).
Management of sewage includes collection and transport for release into the environment, after a treatment level that is compatible with the local requirements for discharge into water bodies, onto soil or for reuse applications. Disposal options include dilution (self-purification of water bodies, making use of their assimilative capacity if possible), marine outfalls, land disposal and sewage farms. All disposal options may run risks of causing water pollution.
Terminology
Sewage and wastewater
Sewage (or domestic wastewater) consists of wastewater discharged from residences and from commercial, institutional and public facilities that exist in the locality. Sewage is a mixture of water (from the community's water supply), human excreta (feces and urine), used water from bathrooms, food preparation wastes, laundry wastewater, and other waste products of normal living.
Sewage from municipalities contains wastewater from commercial activities and institutions, e.g. wastewater discharged from restaurants, laundries, hospitals, schools, prisons, offices, stores and establishments serving the local area of larger communities.
Sewage can be distinguished into "untreated sewage" (also called "raw sewage") and "treated sewage" (also called "effluent" from a sewage treatment plant).
The term "sewage" is nowadays often used interchangeably with "wastewater" – implying "municipal wastewater" – in many textbooks, policy documents and the literature. To be precise, wastewater is a broader term, because it refers to any water after it has been used in a variety of applications. Thus it may also refer to "industrial wastewater", agricultural wastewater and other flows that are not related to household activities.
Blackwater
Greywater
Overall appearance
The overall appearance of sewage is as follows: The temperature tends to be slightly higher than in drinking water but is more stable than the ambient temperature. The color of fresh sewage is slightly grey, whereas older sewage (also called "septic sewage") is dark grey or black. The odor of fresh sewage is "oily" and relatively unpleasant, whereas older sewage has an unpleasant foul odor due to hydrogen sulfide gas and other decomposition by-products. Sewage can have high turbidity from suspended solids.
The pH value of sewage is usually near neutral, and can be in the range of 6.7–8.0.
Pollutants
Sewage consists primarily of water and usually contains less than one part of solid matter per thousand parts of water. In other words, one can say that sewage is composed of around 99.9% pure water, and the remaining 0.1% are solids, which can be in the form of either dissolved solids or suspended solids. The thousand-to-one ratio is an order of magnitude estimate rather than an exact percentage because, aside from variation caused by dilution, solids may be defined differently depending upon the mechanism used to separate those solids from the liquid fraction. Sludges of settleable solids removed by settling or suspended solids removed by filtration may contain significant amounts of entrained water, while dried solid material remaining after evaporation eliminates most of that water but includes dissolved minerals not captured by filtration or gravitational separation. The suspended and dissolved solids include organic and inorganic matter plus microorganisms.
About one-third of this solid matter is suspended by turbulence, while the remainder is dissolved or colloidal. For the situation in the United States in the 1950s it was estimated that the waste contained in domestic sewage is about half organic and half inorganic.
Organic matter
The organic matter in sewage can be classified in terms of form and size: Suspended (particulate) or dissolved (soluble). Secondly, it can be classified in terms of biodegradability: either inert or biodegradable. The organic matter in sewage consists of protein compounds (about 40%), carbohydrates (about 25–50%), oils and grease (about 10%) and urea, surfactants, phenols, pesticides and others (lower quantity). In order to quantify the organic matter content, it is common to use "indirect methods" which are based on the consumption of oxygen to oxidize the organic matter: mainly the Biochemical Oxygen Demand (BOD) and the Chemical Oxygen Demand (COD). These indirect methods are associated with the major impact of the discharge of organic matter into water bodies: the organic matter will be food for microorganisms, whose population will grow, and lead to the consumption of oxygen, which may then affect aquatic living organisms.
The mass load of organic content is calculated as the sewage flowrate multiplied with the concentration of the organic matter in the sewage.
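In code form, this is a simple flow-times-concentration product with a unit conversion; the following minimal sketch uses hypothetical figures.

```python
def organic_load_kg_per_day(flow_m3_per_day, concentration_mg_per_l):
    # 1 mg/L equals 1 g/m^3, so flow [m^3/d] x concentration [g/m^3] gives g/d
    return flow_m3_per_day * concentration_mg_per_l / 1000.0  # kg/d

# Example: 10,000 m^3/d of sewage at 300 mg/L BOD
print(organic_load_kg_per_day(10_000, 300), "kg BOD per day")  # 3000.0
```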
Typical values for the physical–chemical characteristics of raw sewage are provided further below.
Nutrients
Apart from organic matter, sewage also contains nutrients. The major nutrients of interest are nitrogen and phosphorus. If sewage is discharged untreated, its nitrogen and phosphorus content can lead to pollution of lakes and reservoirs via a process called eutrophication.
In raw sewage, nitrogen exists in the two forms of organic nitrogen or ammonia. The ammonia stems from the urea in urine. Urea is rapidly hydrolyzed and therefore not usually found in raw sewage.
Total phosphorus is mostly present in sewage in the form of phosphates. These are either inorganic (polyphosphates and orthophosphates), whose main source is detergents and other household chemical products, or organic, in which the phosphorus is bound to organic compounds.
Pathogens
Human feces in sewage may contain pathogens capable of transmitting diseases. The following four types of pathogens are found in sewage:
Bacteria like Salmonella, Shigella, Campylobacter, or Vibrio cholerae;
Viruses like hepatitis A, rotavirus, coronavirus, enteroviruses;
Protozoa like Entamoeba histolytica, Giardia lamblia, Cryptosporidium parvum; and
Helminths and their eggs including Ascaris (roundworm), Ancylostoma (hookworm), and Trichuris (whipworm)
In most practical cases, pathogenic organisms are not directly investigated in laboratory analyses. An easier way to assess the presence of fecal contamination is by assessing the most probable number of fecal coliforms (also called thermotolerant coliforms), especially Escherichia coli. Escherichia coli are intestinal bacteria excreted by all warm-blooded animals, including human beings, and tracking their presence in sewage is therefore easy because of their high concentrations (around 10 to 100 million per 100 mL).
Solid waste
The ability of a flush toilet to make things "disappear" is soon recognized by young children who may experiment with virtually anything they can carry to the toilet. Adults may be tempted to dispose of toilet paper, wet wipes, diapers, sanitary napkins, tampons, tampon applicators, condoms, and expired medications, even at the risk of causing blockages. The privacy of a toilet offers a clandestine means of removing embarrassing evidence by flushing such things as drug paraphernalia, pregnancy test kits, combined oral contraceptive pill dispensers, and the packaging for those devices. There may be reluctance to retrieve items like children's toys or toothbrushes which accidentally fall into toilets, and items of clothing may be found in sewage from prisons or other locations where occupants may be careless. Trash and garbage in streets may be carried to combined sewers by stormwater runoff.
Micro-pollutants
Sewage contains environmental persistent pharmaceutical pollutants. Trihalomethanes can also be present as a result of past disinfection. Sewage may contain microplastics such as polyethylene and polypropylene beads, or polyester and polyamide fragments from synthetic clothing and bedding fabrics abraded by wear and laundering, or from plastic packaging and plastic-coated paper products disintegrated by lift station pumps. Pharmaceuticals, endocrine disrupting compounds, and hormones may be excreted in urine or feces if not catabolized within the human body.
Some residential users tend to pour unwanted liquids like used cooking oil, lubricants, adhesives, paint, solvents, detergents, and disinfectants into their sewer connections. This behavior can result in problems for the treatment plant operation and is thus discouraged.
Typical sewage composition
Factors that determine composition
The composition of sewage varies with climate, social and economic situation and population habits. In regions where water use is low, the strength of the sewage (or pollutant concentrations) is much higher than in the United States, where water use per person is high. Household income and diet also play a role: for example, in the case of Brazil, it has been found that the higher the household income, the higher the BOD load per person and the lower the BOD concentration.
Concentrations and loads
Typical values for physical–chemical characteristics of raw sewage in developing countries have been published as follows: 180 g/person/d for total solids (or 1100 mg/L when expressed as a concentration), 50 g/person/d for BOD (300 mg/L), 100 g/person/d for COD (600 mg/L), 8 g/person/d for total nitrogen (45 mg/L), 4.5 g/person/d for ammonia-N (25 mg/L) and 1.0 g/person/d for total phosphorus (7 mg/L). The typical ranges for these values are: 120–220 g/person/d for total solids (or 700–1350 mg/L when expressed as a concentration), 40–60 g/person/d for BOD (250–400 mg/L), 80–120 g/person/d for COD (450–800 mg/L), 6–10 g/person/d for total nitrogen (35–60 mg/L), 3.5–6 g/person/d for ammonia-N (20–35 mg/L) and 0.7–2.5 g/person/d for total phosphorus (4–15 mg/L).
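The per-person loads and the concentrations quoted above are linked through the per-capita sewage flow; the short sketch below assumes the flow implied by these figures (roughly 167 L per person per day) purely for illustration.

```python
def concentration_mg_per_l(load_g_per_person_day, flow_l_per_person_day):
    # mass per volume: (g / L) x 1000 = mg/L
    return load_g_per_person_day / flow_l_per_person_day * 1000.0

# 50 g BOD/person/d at about 167 L of sewage per person per day gives roughly 300 mg/L
print(round(concentration_mg_per_l(50, 167)))  # 299
```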
For high income countries, the "per person organic matter load" has been found to be approximately 60 gram of BOD per person per day. This is called the population equivalent (PE) and is also used as a comparison parameter to express the strength of industrial wastewater compared to sewage.
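A short sketch of the population equivalent arithmetic, using the 60 g BOD per person per day figure quoted above (the industrial load in the example is hypothetical):

```python
BOD_PER_PERSON_G_PER_DAY = 60.0  # high-income-country figure quoted above

def population_equivalent(industrial_bod_kg_per_day):
    return industrial_bod_kg_per_day * 1000.0 / BOD_PER_PERSON_G_PER_DAY

# A factory discharging 120 kg of BOD per day loads a treatment plant like 2000 people
print(population_equivalent(120.0))  # 2000.0
```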
Values for households in the United States have been published as follows, whereby the estimates are based on the assumption that 25% of the homes have kitchen waste-food grinders (sewage from such households contains more waste): 95 g/person/d for total suspended solids (503 mg/L concentration), 85 g/person/d for BOD (450 mg/L), 198 g/person/d for COD (1050 mg/L), 13.3 g/person/d for the sum of organic nitrogen and ammonia nitrogen (70.4 mg/L), 7.8 g/person/d for ammonia-N (41.2 mg/L) and 3.28 g/person/d for total phosphorus (17.3 mg/L). The concentration values given here are based on a flowrate of 190 L per person per day.
A United States source published in 1972 estimated the daily dry weight of solid wastes per capita in sewage as the sum of contributions from feces, dissolved solids in urine, toilet paper, greywater solids, food solids (if garbage disposal units are used), and varying amounts of dissolved minerals depending upon the salinity of local water supplies, the volume of water use per capita, and the extent of water softener use.
Sewage contains urine and feces. The mass of feces varies with dietary fiber intake. An average person produces 128 grams of wet feces per day, or a median dry mass of 29 g/person/day. The median urine generation rate is about 1.42 L/person/day, as was determined by a global literature review.
Flowrates
The volume of domestic sewage produced per person (or "per capita", abbreviated as "cap") varies with the water consumption in the respective locality. A range of factors influence water consumption and hence the sewage flowrates per person. These include: Water availability (the opposite of water scarcity), water supply options, climate (warmer climates may lead to greater water consumption), community size, economic level of the community, level of industrialization, metering of household consumption, water cost and water pressure.
The production of sewage generally corresponds to the water consumption. However water used for landscape irrigation will not enter the sewer system, while groundwater and stormwater may enter the sewer system in addition to sewage. There are usually two peak flowrates of sewage arriving at a treatment plant per day: One peak is at the beginning of the morning and another peak is at the beginning of the evening.
With regards to water consumption, a design figure that can be regarded as "world average" is 35–90 L per person per day (data from 1992). The same publication listed the water consumption in China as 80 L per person per day, Africa as 15–35 L per person per day, Eastern Mediterranean in Europe as 40–85 L per person per day and Latin America and Caribbean as 70–190 L per person per day. Even inside a country, there may be large variations from one region to another due to the various factors that determine the water consumption as listed above.
A flowrate value of 200 liters of sewage per person per day is often used as an estimate in high income countries, and is used for example in the design of sewage treatment plants.
For comparison, typical sewage flowrates from urban residential sources in the United States are estimated as follows: 365 L/person/day (for one person households), 288 L/person/day (two person households), 200 L/person/day (four person households), 189 L/person/day (six person households). This means the overall range for this example would be approximately 189 to 365 L per person per day.
Analytical methods
General quality indicators
Specific organisms and substances
Sewage can be monitored for both disease-causing and benign organisms with a variety of techniques. Traditional techniques involve filtering, staining, and examining samples under a microscope. Much more sensitive and specific testing can be accomplished with DNA sequencing, such as when looking for rare organisms, attempting eradication, testing specifically for drug-resistant strains, or discovering new species. Sequencing DNA from an environmental sample is known as metagenomics.
Sewage has also been analyzed to determine relative rates of use of prescription and illegal drugs among municipal populations. General socioeconomic demographics may be inferred as well.
Collection
Sewage is commonly collected and transported in gravity sewers, either in a sanitary sewer or in a combined sewer. The latter also conveys urban runoff (stormwater) which means the sewage gets diluted during rain events.
Sanitary sewer
Combined sewer
Dilution in the sewer
Infiltration of groundwater into the sewerage system
Infiltration is groundwater entering sewer pipes through defective pipes, connections, joints or manholes. Contaminated or saline groundwater may introduce additional pollutants to the sewage. The amount of such infiltrated water depends on several parameters, such as the length of the collection network, pipeline diameters, drainage area, soil type, water table depth, topography and number of connections per unit area. Infiltration is increased by poor construction procedures, and tends to increase with the age of the sewer. The amount of infiltration varies with the depth of the sewer in comparison to the local groundwater table. Older sewer systems that are in need of rehabilitation may also exfiltrate sewage into groundwater from the leaking sewer joints and service connections. This can lead to groundwater pollution.
Stormwater
Combined sewers are designed to transport sewage and stormwater together. This means that sewage becomes diluted during rain events. There are other types of inflow that also dilute sewage, e.g. "water discharged from cellar and foundation drains, cooling-water discharges, and any direct stormwater runoff connections to the sanitary collection system". The "direct inflows" can result in peak sewage flowrates similar to combined sewers during wet weather events.
Industrial wastewater
Sewage from communities with industrial facilities may include some industrial wastewater, generated by industrial processes such as the production or manufacture of goods. Volumes of industrial wastewater vary widely with the type of industry. Industrial wastewater may contain very different pollutants at much higher concentrations than what is typically found in sewage. Pollutants may be toxic or non-biodegradable waste including pharmaceuticals, biocides, heavy metals, radionuclides, or thermal pollution.
An industry may treat its wastewater and discharge it into the environment (or even use the treated wastewater for specific applications), or, in case it is located in the urban area, it may discharge the wastewater into the public sewerage system. In the latter case, industrial wastewater may receive pre-treatment at the factories to reduce the pollutant load. Mixing industrial wastewater with sewage does nothing to reduce the mass of pollutants to be treated, but the volume of sewage lowers the concentration of pollutants unique to industrial wastewater, and the volume of industrial wastewater lowers the concentration of pollutants unique to sewage.
Disposal and dilution
Assimilative capacity of receiving water bodies or land
When wastewater is discharged into a water body (river, lakes, sea) or land, its relative impact will depend on the assimilative capacity of the water body or ecosystem. Water bodies have a self-purification capacity, so that the concentration of a pollutant may decrease along the distance from the discharge point. Furthermore, water bodies provide a dilution to the pollutants concentrations discharged, although it does not decrease their mass. In principle, the higher the dilution capacity (ratio of volume or flow of the receiving water and volume or flow of sewage discharged), the lower will be the concentration of pollutants in the receiving water, and probably the lower will be the negative impacts. But if the water body already arrives very polluted at the point of discharge, the dilution will be of limited value.
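The dilution described here is a simple mass balance between the receiving water and the effluent; the following sketch uses hypothetical flows and concentrations.

```python
def mixed_concentration(river_flow, river_conc, effluent_flow, effluent_conc):
    """Fully mixed downstream concentration (flows in any consistent unit, concentrations in mg/L)."""
    return ((river_flow * river_conc + effluent_flow * effluent_conc)
            / (river_flow + effluent_flow))

# Example: a 10 m3/s river at 2 mg/L BOD receiving 0.5 m3/s of effluent at 30 mg/L BOD
print(round(mixed_concentration(10.0, 2.0, 0.5, 30.0), 2), "mg/L")  # 3.33
```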
In several cases, a community may partially treat its sewage and still count on the assimilative capacity of the water body. However, this needs to be analyzed very carefully, taking into account the quality of the water in the receiving body before it receives the discharge of sewage, the resulting water quality after the discharge and the impact on the intended water uses after discharge. There are also specific legal requirements in each country. Different countries have different regulations regarding the quality of the sewage being discharged and the quality to be maintained in the receiving water body. The combination of treatment and disposal must comply with existing local regulations.
The assimilative capacity depends – among several factors – on the ability of the receiving water to sustain dissolved oxygen concentrations necessary to support organisms catabolizing organic waste. For example, fish may die if dissolved oxygen levels are depressed below 5 mg/L.
Application of sewage to land can be considered as a form of final disposal or of treatment, or both. Land disposal alternatives require consideration of land availability, groundwater quality, and possible soil deterioration.
Other disposal methods
Sewage may be discharged to an evaporation or infiltration basin. Groundwater recharge is used to reduce saltwater intrusion, or replenish aquifers used for agricultural irrigation. Treatment is usually required to sustain percolation capacity of infiltration basins, and more extensive treatment may be required for aquifers used as drinking water supplies.
Marine outfall
Global situation
Treatment
Sewage treatment is beneficial in reducing environmental pollution. Bar screens can remove large solid debris from sewage, and primary treatment can remove floating and settleable matter. Primary treated sewage usually contains less than half of the original solids content and approximately two-thirds of the BOD in the form of colloids and dissolved organic compounds. Secondary treatment can reduce the BOD of organic waste in undiluted sewage, but is less effective for dilute sewage. Water disinfection may be attempted to kill pathogens prior to disposal, and is increasingly effective after more elements of the foregoing treatment sequence have been completed.
Reuse and reclamation
An alternative to discharge into the environment is to reuse the sewage in a productive way (for agricultural, urban or industrial uses), in compliance with local regulations and requirements for each specific reuse application. Public health risks of sewage reuse in agriculture can be minimized by following a "multiple barrier approach" according to guidelines by the World Health Organization.
There is also the possibility of resource recovery which could make agriculture more sustainable by using carbon, nitrogen, phosphorus, water and energy recovered from sewage.
Sewage farm
Regulations
Sewage management includes collection and transport for release into the environment after a treatment level that is compatible with the local requirements for discharge into water bodies, onto soil, or for reuse applications. In most countries, uncontrolled discharges of wastewater to the environment are not permitted under law, and strict water quality requirements are to be met. For requirements in the United States, see Clean Water Act.
Sewage management regulations are often part of a country's broader sanitation policies. These may also include the management of human excreta (from non-sewered collection systems), solid waste and stormwater.
| Technology | Food, water and health | null |
20646772 | https://en.wikipedia.org/wiki/Vibration | Vibration | Vibration is a mechanical phenomenon whereby oscillations occur about an equilibrium point. Vibration may be deterministic if the oscillations can be characterised precisely (e.g. the periodic motion of a pendulum), or random if the oscillations can only be analysed statistically (e.g. the movement of a tire on a gravel road).
Vibration can be desirable: for example, the motion of a tuning fork, the reed in a woodwind instrument or harmonica, a mobile phone, or the cone of a loudspeaker.
In many cases, however, vibration is undesirable, wasting energy and creating unwanted sound. For example, the vibrational motions of engines, electric motors, or any mechanical device in operation are typically unwanted. Such vibrations could be caused by imbalances in the rotating parts, uneven friction, or the meshing of gear teeth. Careful designs usually minimize unwanted vibrations.
The studies of sound and vibration are closely related (both fall under acoustics). Sound, or pressure waves, are generated by vibrating structures (e.g. vocal cords); these pressure waves can also induce the vibration of structures (e.g. ear drum). Hence, attempts to reduce noise are often related to issues of vibration.
Machining vibrations are common in the process of subtractive manufacturing.
Types
Free vibration or natural vibration occurs when a mechanical system is set in motion with an initial input and allowed to vibrate freely. Examples of this type of vibration are pulling a child back on a swing and letting it go, or hitting a tuning fork and letting it ring. The mechanical system vibrates at one or more of its natural frequencies and damps down to motionlessness.
Forced vibration is when a time-varying disturbance (load, displacement, velocity, or acceleration) is applied to a mechanical system. The disturbance can be a periodic and steady-state input, a transient input, or a random input. The periodic input can be a harmonic or a non-harmonic disturbance. Examples of these types of vibration include a washing machine shaking due to an imbalance, transportation vibration caused by an engine or uneven road, or the vibration of a building during an earthquake. For linear systems, the frequency of the steady-state vibration response resulting from the application of a periodic, harmonic input is equal to the frequency of the applied force or motion, with the response magnitude being dependent on the actual mechanical system.
Damped vibration: When the energy of a vibrating system is gradually dissipated by friction and other resistances, the vibrations are said to be damped. The vibrations gradually reduce or change in frequency or intensity or cease and the system rests in its equilibrium position. An example of this type of vibration is the vehicular suspension dampened by the shock absorber.
Isolation
Testing
Vibration testing is accomplished by introducing a forcing function into a structure, usually with some type of shaker; alternatively, a device under test (DUT) is attached to the "table" of a shaker. Vibration testing is performed to examine the response of the DUT to a defined vibration environment. The measured response may be ability to function in the vibration environment, fatigue life, resonant frequencies or squeak and rattle sound output (NVH). Squeak and rattle testing is performed with a special type of quiet shaker that produces very low sound levels while under operation.
For relatively low frequency forcing (typically less than 100 Hz), servohydraulic (electrohydraulic) shakers are used. For higher frequencies (typically 5 Hz to 2000 Hz), electrodynamic shakers are used. Generally, one or more "input" or "control" points located on the DUT-side of a vibration fixture is kept at a specified acceleration. Other "response" points may experience higher vibration levels (resonance) or lower vibration level (anti-resonance or damping) than the control point(s). It is often desirable to achieve anti-resonance to keep a system from becoming too noisy, or to reduce strain on certain parts due to vibration modes caused by specific vibration frequencies.
The most common types of vibration testing services conducted by vibration test labs are sinusoidal and random. Sine (one-frequency-at-a-time) tests are performed to survey the structural response of the device under test (DUT). During the early history of vibration testing, vibration machine controllers were limited only to controlling sine motion so only sine testing was performed. Later, more sophisticated analog and then digital controllers were able to provide random control (all frequencies at once). A random (all frequencies at once) test is generally considered to more closely replicate a real world environment, such as road inputs to a moving automobile.
Most vibration testing is conducted in a 'single DUT axis' at a time, even though most real-world vibration occurs in various axes simultaneously. MIL-STD-810G, released in late 2008, Test Method 527, calls for multiple exciter testing. The vibration test fixture used to attach the DUT to the shaker table must be designed for the frequency range of the vibration test spectrum. It is difficult to design a vibration test fixture which duplicates the dynamic response (mechanical impedance) of the actual in-use mounting. For this reason, to ensure repeatability between vibration tests, vibration fixtures are designed to be resonance free within the test frequency range.
Generally for smaller fixtures and lower frequency ranges, the designer can target a fixture design that is free of resonances in the test frequency range. This becomes more difficult as the DUT gets larger and as the test frequency increases. In these cases multi-point control strategies can mitigate some of the resonances that may be present in the future.
Some vibration test methods limit the amount of crosstalk (movement of a response point in a mutually perpendicular direction to the axis under test) permitted to be exhibited by the vibration test fixture.
Devices specifically designed to trace or record vibrations are called vibroscopes.
Analysis
Vibration analysis (VA), applied in an industrial or maintenance environment aims to reduce maintenance costs and equipment downtime by detecting equipment faults. VA is a key component of a condition monitoring (CM) program, and is often referred to as predictive maintenance (PdM). Most commonly VA is used to detect faults in rotating equipment (Fans, Motors, Pumps, and Gearboxes etc.) such as imbalance, misalignment, rolling element bearing faults and resonance conditions.
VA can use the units of Displacement, Velocity and Acceleration displayed as a time waveform (TWF), but most commonly the spectrum is used, derived from a fast Fourier transform of the TWF. The vibration spectrum provides important frequency information that can pinpoint the faulty component.
The fundamentals of vibration analysis can be understood by studying the simple Mass-spring-damper model. Indeed, even a complex structure such as an automobile body can be modeled as a "summation" of simple mass–spring–damper models. The mass–spring–damper model is an example of a simple harmonic oscillator. The mathematics used to describe its behavior is identical to other simple harmonic oscillators such as the RLC circuit.
Note: This article does not include the step-by-step mathematical derivations, but focuses on major vibration analysis equations and concepts. Please refer to the references at the end of the article for detailed derivations.
Free vibration without damping
To start the investigation of the mass–spring–damper assume the damping is negligible and that there is no external force applied to the mass (i.e. free vibration). The force applied to the mass by the spring is proportional to the amount the spring is stretched "x" (assuming the spring is already compressed due to the weight of the mass). The proportionality constant, k, is the stiffness of the spring and has units of force/distance (e.g. lbf/in or N/m). The negative sign indicates that the force is always opposing the motion of the mass attached to it: F = −kx.
The force generated by the mass is proportional to the acceleration of the mass as given by Newton's second law of motion: F = ma = m(d²x/dt²).
The sum of the forces on the mass then generates this ordinary differential equation: m(d²x/dt²) + kx = 0.
Assuming that the initiation of vibration begins by stretching the spring by the distance of A and releasing, the solution to the above equation that describes the motion of mass is: x(t) = A cos(2π fn t).
This solution says that it will oscillate with simple harmonic motion that has an amplitude of A and a frequency of fn. The number fn is called the undamped natural frequency. For the simple mass–spring system, fn is defined as: fn = (1/(2π))·√(k/m).
Note: angular frequency ω (ω=2 π f) with the units of radians per second is often used in equations because it simplifies the equations, but is normally converted to ordinary frequency (units of Hz or equivalently cycles per second) when stating the frequency of a system. If the mass and stiffness of the system is known, the formula above can determine the frequency at which the system vibrates once set in motion by an initial disturbance. Every vibrating system has one or more natural frequencies that it vibrates at once disturbed. This simple relation can be used to understand in general what happens to a more complex system once we add mass or stiffness. For example, the above formula explains why, when a car or truck is fully loaded, the suspension feels "softer" than unloaded—the mass has increased, reducing the natural frequency of the system.
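A brief numerical sketch of this relation shows how added mass lowers the natural frequency; the stiffness and masses below are chosen only for illustration.

```python
import math

def natural_frequency_hz(stiffness_n_per_m, mass_kg):
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2 * math.pi)

k = 20_000.0  # N/m, hypothetical stiffness of one suspension corner
print(round(natural_frequency_hz(k, 300.0), 2), "Hz for an unloaded corner mass of 300 kg")  # ~1.3 Hz
print(round(natural_frequency_hz(k, 450.0), 2), "Hz for a loaded corner mass of 450 kg")    # ~1.06 Hz
```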
What causes the system to vibrate: from conservation of energy point of view
Vibrational motion could be understood in terms of conservation of energy. In the above example the spring has been extended by a value of x and therefore some potential energy () is stored in the spring. Once released, the spring tends to return to its un-stretched state (which is the minimum potential energy state) and in the process accelerates the mass. At the point where the spring has reached its un-stretched state all the potential energy that we supplied by stretching it has been transformed into kinetic energy (). The mass then begins to decelerate because it is now compressing the spring and in the process transferring the kinetic energy back to its potential. Thus oscillation of the spring amounts to the transferring back and forth of the kinetic energy into potential energy. In this simple model the mass continues to oscillate forever at the same magnitude—but in a real system, damping always dissipates the energy, eventually bringing the spring to rest.
Free vibration with damping
When a "viscous" damper is added to the model this outputs a force that is proportional to the velocity of the mass. The damping is called viscous because it models the effects of a fluid within an object. The proportionality constant c is called the damping coefficient and has units of Force over velocity (lbf⋅s/in or N⋅s/m).
Summing the forces on the mass results in the following ordinary differential equation: m(d²x/dt²) + c(dx/dt) + kx = 0.
The solution to this equation depends on the amount of damping. If the damping is small enough, the system still vibrates—but eventually, over time, stops vibrating. This case is called underdamping, which is important in vibration analysis. If damping is increased just to the point where the system no longer oscillates, the system has reached the point of critical damping. If the damping is increased past critical damping, the system is overdamped. The value that the damping coefficient must reach for critical damping in the mass-spring-damper model is: cc = 2√(km).
To characterize the amount of damping in a system a ratio called the damping ratio (also known as the damping factor and % critical damping) is used. This damping ratio is just the ratio of the actual damping to the amount of damping required to reach critical damping. The formula for the damping ratio (ζ) of the mass-spring-damper model is: ζ = c / (2√(km)).
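A brief numerical check of these definitions; the damper coefficient below is assumed, while the mass and stiffness match the frequency response example given later in the article.

```python
import math

m, k = 1.0, 1930.0   # kg and N/m (1.93 N/mm), as in the frequency response example below
c = 8.8              # N*s/m, an assumed damper coefficient

c_critical = 2 * math.sqrt(k * m)   # critical damping coefficient
zeta = c / c_critical               # damping ratio
print(round(c_critical, 1), "N*s/m critical damping,", round(zeta, 2), "damping ratio")  # ~87.9, ~0.1
```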
For example, metal structures (e.g., airplane fuselages, engine crankshafts) have damping factors less than 0.05, while automotive suspensions are in the range of 0.2–0.3. The solution to the underdamped system for the mass-spring-damper model is the following: x(t) = X e^(−ζωn t) cos(√(1 − ζ²) ωn t − φ), where ωn = 2π fn.
The value of X, the initial magnitude, and the phase shift, φ, are determined by the amount the spring is stretched. The formulas for these values can be found in the references.
Damped and undamped natural frequencies
The major points to note from the solution are the exponential term and the cosine function. The exponential term defines how quickly the system “damps” down – the larger the damping ratio, the quicker it damps to zero. The cosine function is the oscillating portion of the solution, but the frequency of the oscillations is different from the undamped case.
The frequency in this case is called the "damped natural frequency", fd, and is related to the undamped natural frequency by the following formula: fd = fn·√(1 − ζ²).
The damped natural frequency is less than the undamped natural frequency, but for many practical cases the damping ratio is relatively small and hence the difference is negligible. Therefore, the damped and undamped descriptions are often dropped when stating the natural frequency (e.g. with a 0.1 damping ratio, the damped natural frequency is only about 0.5% less than the undamped).
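The size of this correction can be checked directly for the damping ratios discussed here:

```python
import math

for zeta in (0.1, 0.3):
    reduction = (1 - math.sqrt(1 - zeta ** 2)) * 100  # percent below the undamped frequency
    print(f"damping ratio {zeta}: damped natural frequency is {reduction:.1f}% lower")
# 0.1 -> about 0.5%, 0.3 -> about 4.6%
```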
The plots to the side present how 0.1 and 0.3 damping ratios affect how the system "rings" down over time. What is often done in practice is to experimentally measure the free vibration after an impact (for example by a hammer) and then determine the natural frequency of the system by measuring the rate of oscillation, as well as the damping ratio by measuring the rate of decay. The natural frequency and damping ratio are not only important in free vibration, but also characterize how a system behaves under forced vibration.
Forced vibration with damping
The behavior of the spring mass damper model varies with the addition of a harmonic force. A force of this type could, for example, be generated by a rotating imbalance.
Summing the forces on the mass results in the following ordinary differential equation: m(d²x/dt²) + c(dx/dt) + kx = F0 cos(2π f t).
The steady state solution of this problem can be written as: x(t) = X cos(2π f t − φ).
The result states that the mass will oscillate at the same frequency, f, of the applied force, but with a phase shift φ.
The amplitude of the vibration "X" is defined by the following formula: X = (F0 / k) / √((1 − r²)² + (2ζr)²).
Where "r" is defined as the ratio of the harmonic force frequency over the undamped natural frequency of the mass–spring–damper model (r = f / fn).
The phase shift, φ, is defined by the following formula: φ = arctan(2ζr / (1 − r²)).
The plot of these functions, called "the frequency response of the system", presents one of the most important features in forced vibration. In a lightly damped system, when the forcing frequency nears the natural frequency (r ≈ 1) the amplitude of the vibration can get extremely high. This phenomenon is called resonance (subsequently the natural frequency of a system is often referred to as the resonant frequency). In rotor bearing systems any rotational speed that excites a resonant frequency is referred to as a critical speed.
If resonance occurs in a mechanical system it can be very harmful – leading to eventual failure of the system. Consequently, one of the major reasons for vibration analysis is to predict when this type of resonance may occur and then to determine what steps to take to prevent it from occurring. As the amplitude plot shows, adding damping can significantly reduce the magnitude of the vibration. Also, the magnitude can be reduced if the natural frequency can be shifted away from the forcing frequency by changing the stiffness or mass of the system. If the system cannot be changed, perhaps the forcing frequency can be shifted (for example, changing the speed of the machine generating the force).
The following are some other points in regards to the forced vibration shown in the frequency response plots.
At a given frequency ratio, the amplitude of the vibration, X, is directly proportional to the amplitude of the force (e.g. if you double the force, the vibration doubles)
With little or no damping, the vibration is in phase with the forcing frequency when the frequency ratio r < 1 and 180 degrees out of phase when the frequency ratio r > 1
When r ≪ 1 the amplitude is just the deflection of the spring under the static force F0. This deflection is called the static deflection, δst = F0/k. Hence, when r ≪ 1 the effects of the damper and the mass are minimal.
When r ≫ 1 the amplitude of the vibration is actually less than the static deflection δst. In this region the force generated by the mass (F = ma) is dominating because the acceleration seen by the mass increases with the frequency. Since the deflection seen in the spring, X, is reduced in this region, the force transmitted by the spring (F = kx) to the base is reduced. Therefore, the mass–spring–damper system is isolating the harmonic force from the mounting base – referred to as vibration isolation. More damping actually reduces the effects of vibration isolation when r ≫ 1 because the damping force (F = cv) is also transmitted to the base.
Whatever the damping is, the vibration is 90 degrees out of phase with the forcing frequency when the frequency ratio r = 1, which is very helpful when it comes to determining the natural frequency of the system.
Whatever the damping is, when r ≫ 1, the vibration is 180 degrees out of phase with the forcing frequency
Whatever the damping is, when r ≪ 1, the vibration is in phase with the forcing frequency
Resonance causes
Resonance is simple to understand if the spring and mass are viewed as energy storage elements – with the mass storing kinetic energy and the spring storing potential energy. As discussed earlier, when the mass and spring have no external force acting on them they transfer energy back and forth at a rate equal to the natural frequency. In other words, to efficiently pump energy into both mass and spring requires that the energy source feed the energy in at a rate equal to the natural frequency. Applying a force to the mass and spring is similar to pushing a child on a swing, where a push is needed at the correct moment to make the swing get higher and higher. As in the case of the swing, the force applied need not be high to get large motions, but must just add energy to the system.
The damper, instead of storing energy, dissipates energy. Since the damping force is proportional to the velocity, the more the motion, the more the damper dissipates the energy. Therefore, there is a point when the energy dissipated by the damper equals the energy added by the force. At this point, the system has reached its maximum amplitude and will continue to vibrate at this level as long as the force applied stays the same. If no damping exists, there is nothing to dissipate the energy and, theoretically, the motion will continue to grow into infinity.
Applying "complex" forces to the mass–spring–damper model
In a previous section only a simple harmonic force was applied to the model, but this can be extended considerably using two powerful mathematical tools. The first is the Fourier transform that takes a signal as a function of time (time domain) and breaks it down into its harmonic components as a function of frequency (frequency domain). For example, consider applying a force to the mass–spring–damper model that repeats the following cycle: a force equal to 1 newton for 0.5 second and then no force for 0.5 second. This type of force has the shape of a 1 Hz square wave.
The Fourier transform of the square wave generates a frequency spectrum that presents the magnitude of the harmonics that make up the square wave (the phase is also generated, but is typically of less concern and therefore is often not plotted). The Fourier transform can also be used to analyze non-periodic functions such as transients (e.g. impulses) and random functions. The Fourier transform is almost always computed using the fast Fourier transform (FFT) computer algorithm in combination with a window function.
In the case of our square wave force, the first component is actually a constant force of 0.5 newton and is represented by a value at 0 Hz in the frequency spectrum. The next component is a 1 Hz sine wave with an amplitude of 0.64. This is shown by the line at 1 Hz. The remaining components are at odd frequencies and it takes an infinite amount of sine waves to generate the perfect square wave. Hence, the Fourier transform allows you to interpret the force as a sum of sinusoidal forces being applied instead of a more "complex" force (e.g. a square wave).
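These magnitudes follow from the Fourier series of a square wave that alternates between 0 and 1 newton; a short check (magnitudes only, phases omitted):

```python
import math

print("0 Hz (constant) component: 0.5 N")        # the mean of a 0-1 N square wave
for n in (1, 3, 5, 7):                           # only odd harmonics are present
    print(f"{n} Hz component amplitude: {2 / (n * math.pi):.2f} N")
# 1 Hz -> 0.64 N, 3 Hz -> 0.21 N, 5 Hz -> 0.13 N, 7 Hz -> 0.09 N
```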
In the previous section, the vibration solution was given for a single harmonic force, but the Fourier transform in general gives multiple harmonic forces. The second mathematical tool, the superposition principle, allows the summation of the solutions from multiple forces if the system is linear. In the case of the spring–mass–damper model, the system is linear if the spring force is proportional to the displacement and the damping is proportional to the velocity over the range of motion of interest. Hence, the solution to the problem with a square wave is summing the predicted vibration from each one of the harmonic forces found in the frequency spectrum of the square wave.
Frequency response model
The solution of a vibration problem can be viewed as an input/output relation – where the force is the input and the output is the vibration. Representing the force and vibration in the frequency domain (magnitude and phase) allows the following relation: X(ω) = H(ω)·F(ω).
H(ω) is called the frequency response function (also referred to as the transfer function, although this is not technically accurate) and has both a magnitude and phase component (if represented as a complex number, a real and imaginary component). The magnitude of the frequency response function (FRF) was presented earlier for the mass–spring–damper system.
The phase of the FRF was also presented earlier as: φ = arctan(2ζr / (1 − r²)).
For example, consider calculating the FRF for a mass–spring–damper system with a mass of 1 kg, a spring stiffness of 1.93 N/mm and a damping ratio of 0.1. The values of the spring and mass give a natural frequency of 7 Hz for this specific system. Applying the 1 Hz square wave from earlier allows the calculation of the predicted vibration of the mass. The figure illustrates the resulting vibration. It happens in this example that the fourth harmonic of the square wave falls at 7 Hz. The frequency response of the mass–spring–damper therefore outputs a high 7 Hz vibration even though the input force had a relatively low 7 Hz harmonic. This example highlights that the resulting vibration is dependent on both the forcing function and the system that the force is applied to.
The figure also shows the time-domain representation of the resulting vibration. This is done by performing an inverse Fourier transform that converts frequency-domain data to the time domain. In practice, this is rarely done because the frequency spectrum provides all the necessary information.
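As a rough numerical sketch (an illustration under stated assumptions, not the article's own calculation), the response to the square wave can be approximated by evaluating the standard single degree of freedom frequency response function H(ω) = 1/(k − mω² + icω) at the first few square-wave harmonics and superposing the steady-state responses, using the 1 kg, 1.93 N/mm, ζ = 0.1 values quoted above.

```python
import numpy as np

# Superpose steady-state responses of the SDOF system (m = 1 kg,
# k = 1.93 N/mm = 1930 N/m, damping ratio zeta = 0.1) to the harmonics of the
# 1 Hz square-wave force. The FRF form assumed here is H(w) = 1/(k - m*w^2 + i*c*w).
m, k, zeta = 1.0, 1930.0, 0.1
c = 2 * zeta * np.sqrt(k * m)            # damping coefficient from the damping ratio
f_n = np.sqrt(k / m) / (2 * np.pi)       # natural frequency, roughly 7 Hz

def H(f_hz):
    w = 2 * np.pi * f_hz
    return 1.0 / (k - m * w**2 + 1j * c * w)

t = np.linspace(0.0, 2.0, 2000)          # two periods of the square wave
x = np.full_like(t, 0.5 / k)             # static deflection from the 0.5 N DC component
for n in (1, 3, 5, 7, 9):                # first few odd harmonics (amplitude 2/(n*pi) N)
    F_n = 2.0 / (n * np.pi)
    H_n = H(n)
    x += F_n * np.abs(H_n) * np.sin(2 * np.pi * n * t + np.angle(H_n))

# Per newton of input, the 7 Hz harmonic (near resonance) is amplified far more
# than the 1 Hz fundamental:
print(abs(H(7)) / abs(H(1)))             # roughly 5
```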
The frequency response function (FRF) does not necessarily have to be calculated from the knowledge of the mass, damping, and stiffness of the system—but can be measured experimentally. For example, if a known force over a range of frequencies is applied, and if the associated vibrations are measured, the frequency response function can be calculated, thereby characterizing the system. This technique is used in the field of experimental modal analysis to determine the vibration characteristics of a structure.
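In the same spirit, here is a hedged sketch of the experimental route: dividing the measured response spectrum by the measured force spectrum, bin by bin, gives a simple estimate of the FRF (real modal tests normally use averaged cross- and auto-spectra, such as the H1 estimator, to suppress noise). The function below is a hypothetical helper, not an API from any particular tool.

```python
import numpy as np

def estimate_frf(force_signal, response_signal, fs):
    """Estimate the FRF from simultaneously measured force and response records.

    Assumes the force excites the whole frequency band of interest, so that no
    force-spectrum bin is close to zero.
    """
    F = np.fft.rfft(force_signal)
    X = np.fft.rfft(response_signal)
    freqs = np.fft.rfftfreq(len(force_signal), d=1.0 / fs)
    return freqs, X / F          # complex FRF: magnitude and phase per frequency bin
```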
Multiple degrees of freedom systems and mode shapes
The simple mass–spring–damper model is the foundation of vibration analysis. The model described above is called a single degree of freedom (SDOF) model since the mass is assumed to only move up and down. In more complex systems, the system must be discretized into more masses that move in more than one direction, adding degrees of freedom. The major concepts of multiple degrees of freedom (MDOF) can be understood by looking at just a two degree of freedom model as shown in the figure.
The equations of motion of the 2DOF system are found to be:

$$m_1 \ddot{x}_1 + (c_1 + c_2)\dot{x}_1 - c_2 \dot{x}_2 + (k_1 + k_2) x_1 - k_2 x_2 = f_1$$
$$m_2 \ddot{x}_2 - c_2 \dot{x}_1 + (c_2 + c_3)\dot{x}_2 - k_2 x_1 + (k_2 + k_3) x_2 = f_2$$

This can be rewritten in matrix format:

$$\begin{bmatrix} m_1 & 0 \\ 0 & m_2 \end{bmatrix}\begin{Bmatrix} \ddot{x}_1 \\ \ddot{x}_2 \end{Bmatrix} + \begin{bmatrix} c_1 + c_2 & -c_2 \\ -c_2 & c_2 + c_3 \end{bmatrix}\begin{Bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{Bmatrix} + \begin{bmatrix} k_1 + k_2 & -k_2 \\ -k_2 & k_2 + k_3 \end{bmatrix}\begin{Bmatrix} x_1 \\ x_2 \end{Bmatrix} = \begin{Bmatrix} f_1 \\ f_2 \end{Bmatrix}$$

A more compact form of this matrix equation can be written as:

$$[M]\{\ddot{x}\} + [C]\{\dot{x}\} + [K]\{x\} = \{f\}$$

where $[M]$, $[C]$, and $[K]$ are symmetric matrices referred to, respectively, as the mass, damping, and stiffness matrices. The matrices are N×N square matrices where N is the number of degrees of freedom of the system.
The following analysis involves the case where there is no damping and no applied forces (i.e. free vibration):

$$[M]\{\ddot{x}\} + [K]\{x\} = 0$$

The solution of a viscously damped system is somewhat more complicated.

This differential equation can be solved by assuming the following type of solution:

$$\{x\} = \{X\} e^{i\omega t}$$

Note: using the exponential solution $e^{i\omega t}$ is a mathematical trick used to solve linear differential equations. Using Euler's formula and taking only the real part of the solution yields the same cosine solution as for the 1 DOF system. The exponential solution is only used because it is easier to manipulate mathematically.

The equation then becomes:

$$\left(-\omega^{2} [M] + [K]\right)\{X\} e^{i\omega t} = 0$$

Since $e^{i\omega t}$ cannot equal zero, the equation reduces to the following:

$$\left([K] - \omega^{2} [M]\right)\{X\} = 0$$
Eigenvalue problem
This is referred to as an eigenvalue problem in mathematics and can be put in the standard format by pre-multiplying the equation by $[M]^{-1}$:

$$\left([M]^{-1}[K] - \omega^{2} [I]\right)\{X\} = 0$$

and if $[A] = [M]^{-1}[K]$ and $\lambda = \omega^{2}$:

$$\left([A] - \lambda [I]\right)\{X\} = 0$$

The solution to the problem results in N eigenvalues (i.e. $\omega_1^2, \omega_2^2, \dots, \omega_N^2$), where N corresponds to the number of degrees of freedom. The eigenvalues provide the natural frequencies of the system. When these eigenvalues are substituted back into the original set of equations, the values of $\{X\}$ that correspond to each eigenvalue are called the eigenvectors. These eigenvectors represent the mode shapes of the system. The solution of an eigenvalue problem can be quite cumbersome (especially for problems with many degrees of freedom), but fortunately most math analysis programs have eigenvalue routines.
The eigenvalues and eigenvectors are often written in the following matrix format and describe the modal model of the system:

$$[\Lambda] = \begin{bmatrix} \omega_1^{2} & & \\ & \ddots & \\ & & \omega_N^{2} \end{bmatrix} \qquad \text{and} \qquad [\Psi] = \begin{bmatrix} \{\psi_1\} & \{\psi_2\} & \cdots & \{\psi_N\} \end{bmatrix}$$

where the diagonal spectral matrix $[\Lambda]$ holds the eigenvalues and the modal matrix $[\Psi]$ holds the corresponding mode shapes as columns.
A simple example using the 2 DOF model can help illustrate the concepts. Let both masses have a mass of 1 kg and the stiffness of all three springs equal 1000 N/m. The mass and stiffness matrices for this problem are then:

$$[M] = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\ \text{kg} \qquad \text{and} \qquad [K] = \begin{bmatrix} 2000 & -1000 \\ -1000 & 2000 \end{bmatrix}\ \text{N/m}$$

Then

$$[A] = [M]^{-1}[K] = \begin{bmatrix} 2000 & -1000 \\ -1000 & 2000 \end{bmatrix}$$

The eigenvalues for this problem given by an eigenvalue routine are:

$$\lambda_1 = \omega_1^{2} = 1000\ (\text{rad/s})^{2} \qquad \text{and} \qquad \lambda_2 = \omega_2^{2} = 3000\ (\text{rad/s})^{2}$$

The natural frequencies in the units of hertz are then (remembering $\omega = 2\pi f$):

$$f_1 = \frac{\sqrt{1000}}{2\pi} \approx 5.03\ \text{Hz} \qquad \text{and} \qquad f_2 = \frac{\sqrt{3000}}{2\pi} \approx 8.72\ \text{Hz}$$

The two mode shapes for the respective natural frequencies are given as:

$$\{\psi_1\} = \begin{Bmatrix} 1 \\ 1 \end{Bmatrix} \qquad \text{and} \qquad \{\psi_2\} = \begin{Bmatrix} 1 \\ -1 \end{Bmatrix}$$
Since the system is a 2 DOF system, there are two modes with their respective natural frequencies and shapes. The mode shape vectors are not the absolute motion; they only describe relative motion of the degrees of freedom. In our case the first mode shape vector indicates that the masses are moving together in phase since they have the same value and sign. In the case of the second mode shape vector, the masses are moving in opposite directions at the same rate.
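A quick numerical check of this example (an illustration, not taken from the article): NumPy's eigen-solver reproduces the eigenvalues, natural frequencies, and mode shapes quoted above.

```python
import numpy as np

M = np.eye(2)                                  # both masses are 1 kg
K = np.array([[2000.0, -1000.0],
              [-1000.0, 2000.0]])              # three 1000 N/m springs

# Solve [A]{X} = lambda {X} with [A] = inv(M) K (symmetric here, since M = I).
eigvals, eigvecs = np.linalg.eigh(np.linalg.inv(M) @ K)

print(eigvals)                           # ~[1000. 3000.]  (rad/s)^2
print(np.sqrt(eigvals) / (2 * np.pi))    # ~[5.03 8.72] Hz
print(eigvecs)                           # columns proportional to [1, 1] and [1, -1]
```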
Illustration of a multiple DOF problem
When there are many degrees of freedom, one method of visualizing the mode shapes is by animating them using structural analysis software such as Femap, ANSYS or VA One by ESI Group. An example of animating mode shapes is shown in the figure below for a cantilevered I-beam, as demonstrated using modal analysis in ANSYS. In this case, the finite element method was used to generate an approximation of the mass and stiffness matrices by meshing the object of interest in order to solve a discrete eigenvalue problem. Note that, in this case, the finite element method provides an approximation of the meshed surface (for which there exists an infinite number of vibration modes and frequencies). Therefore, this relatively simple model, which has over 100 degrees of freedom and hence as many natural frequencies and mode shapes, provides a good approximation for the first natural frequencies and modes. Generally, only the first few modes are important for practical applications.
Note that when performing a numerical approximation of any mathematical model, convergence of the parameters of interest must be ascertained.
Multiple DOF problem converted to a single DOF problem
The eigenvectors have very important properties called orthogonality properties. These properties can be used to greatly simplify the solution of multi-degree of freedom models. It can be shown that the eigenvectors have the following properties:

$$[\Psi]^{T}[M][\Psi] = \operatorname{diag}(m_r) \qquad \text{and} \qquad [\Psi]^{T}[K][\Psi] = \operatorname{diag}(k_r)$$

$\operatorname{diag}(m_r)$ and $\operatorname{diag}(k_r)$ are diagonal matrices that contain the modal mass and stiffness values for each one of the modes. (Note: since the eigenvectors (mode shapes) can be arbitrarily scaled, the orthogonality properties are often used to scale the eigenvectors so that the modal mass value for each mode is equal to 1. The modal mass matrix is therefore an identity matrix.)

These properties can be used to greatly simplify the solution of multi-degree of freedom models by making the following coordinate transformation:

$$\{x\} = [\Psi]\{q\}$$

Using this coordinate transformation in the original free vibration differential equation results in the following equation:

$$[M][\Psi]\{\ddot{q}\} + [K][\Psi]\{q\} = 0$$

Taking advantage of the orthogonality properties requires premultiplying this equation by $[\Psi]^{T}$:

$$[\Psi]^{T}[M][\Psi]\{\ddot{q}\} + [\Psi]^{T}[K][\Psi]\{q\} = 0$$

The orthogonality properties then simplify this equation to:

$$\operatorname{diag}(m_r)\{\ddot{q}\} + \operatorname{diag}(k_r)\{q\} = 0$$
This equation is the foundation of vibration analysis for multiple degree of freedom systems. A similar type of result can be derived for damped systems. The key is that the modal mass and stiffness matrices are diagonal matrices and therefore the equations have been "decoupled". In other words, the problem has been transformed from a large unwieldy multiple degree of freedom problem into many single degree of freedom problems that can be solved using the same methods outlined above.
Solving for x is replaced by solving for q, referred to as the modal coordinates or modal participation factors.
It may be clearer to understand if $\{x\} = [\Psi]\{q\}$ is written as:

$$\{x\} = q_1 \{\psi_1\} + q_2 \{\psi_2\} + \cdots + q_N \{\psi_N\}$$
Written in this form it can be seen that the vibration at each of the degrees of freedom is just a linear sum of the mode shapes. Furthermore, how much each mode "participates" in the final vibration is defined by q, its modal participation factor.
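Continuing the same illustrative 2 DOF example, the sketch below (an assumption-laden illustration, not part of the article) checks the decoupling numerically: the mode-shape matrix diagonalizes both the mass and stiffness matrices, so each modal coordinate obeys its own single degree of freedom equation.

```python
import numpy as np

M = np.eye(2)
K = np.array([[2000.0, -1000.0],
              [-1000.0, 2000.0]])

_, Psi = np.linalg.eigh(K)            # columns are the mode shapes (mass-normalized, since M = I)

print(np.round(Psi.T @ M @ Psi, 6))   # ~identity: modal mass of 1 for each mode
print(np.round(Psi.T @ K @ Psi, 6))   # ~diag(1000, 3000): modal stiffnesses equal the eigenvalues

# Each decoupled equation is q_r'' + (k_r / m_r) q_r = 0, and the physical motion
# is recovered as x(t) = Psi @ q(t): a weighted sum of the mode shapes.
```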
Rigid-body mode
An unrestrained multi-degree of freedom system experiences rigid-body translation and/or rotation as well as vibration. The existence of a rigid-body mode results in a zero natural frequency. The corresponding mode shape is called the rigid-body mode.
| Physical sciences | Classical mechanics | Physics |
20646971 | https://en.wikipedia.org/wiki/Waste | Waste | Waste (or wastes) are unwanted or unusable materials. Waste is any substance discarded after primary use, or that is worthless, defective and of no use. A by-product, by contrast, is a joint product of relatively minor economic value. A waste product may become a by-product, joint product or resource through an invention that raises a waste product's value above zero.
Examples include municipal solid waste (household trash/refuse), hazardous waste, wastewater (such as sewage, which contains bodily wastes (feces and urine) and surface runoff), radioactive waste, and others.
Definitions
What constitutes waste depends on the eye of the beholder; one person's waste can be a resource for another person. Though waste is a physical object, its generation is a physical and psychological process. The definitions used by various agencies are as below.
United Nations Environment Program
According to the Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and Their Disposal of 1989, Art. 2(1), "'Wastes' are substances or objects, which are disposed of or are intended to be disposed of or are required to be disposed of by the provisions of national law".
United Nations Statistics Division
The UNSD Glossary of Environment Statistics describes waste as "materials that are not prime products (that is, products produced for the market) for which the generator has no further use in terms of his/her own purposes of production, transformation or consumption, and of which he/she wants to dispose. Wastes may be generated during the extraction of raw materials, the processing of raw materials into intermediate and final products, the consumption of final products, and other human activities. Residuals recycled or reused at the place of generation are excluded."
European Union
Under the Waste Framework Directive 2008/98/EC, Art. 3(1), the European Union defines waste as "an object the holder discards, intends to discard or is required to discard." For a more structural description of the Waste Directive, see the European Commission's summary.
Types of waste
Metabolic waste
Municipal waste
The Organization for Economic Co-operation and Development, also known as the OECD, defines municipal solid waste (MSW) as "waste collected and treated by or for municipalities". Typically this type of waste includes household waste, commercial waste, and demolition or construction waste. In 2018, the Environmental Protection Agency concluded that 292.4 million tons of municipal waste were generated, which equated to about 4.9 pounds per day per person. Of the 292.4 million tons, approximately 69 million tons were recycled and 25 million tons were composted.
Household waste and commercial waste
Household waste, more commonly known as trash or garbage, consists of items that are typically thrown away daily from ordinary households. Items often included in this category include product packaging, yard waste, clothing, food scraps, appliances, paints, and batteries. Most of the items that are collected by municipalities end up in landfills across the world. In the United States, it is estimated that 11.3 million tons of textile waste is generated. On an individual level, it is estimated that the average American throws away 81.5 pounds of clothes each year. As online shopping becomes more prevalent, items such as cardboard, bubble wrap, and shipping envelopes are ending up in landfills across the United States. The EPA has estimated that approximately 10.1 million tons of plastic containers and packaging ended up in landfills in 2018. The EPA noted that only 30.5% of plastic containers and packaging was recycled or combusted as an energy source. Additionally, approximately 940,000 pounds of cardboard ends up in landfills each year.
Commercial waste is very similar to household waste. To be considered commercial waste, it must come from a business or commercial occupancy, such as a restaurant, retail occupancy, manufacturing occupancy or similar business. Typically, commercial waste contains similar items such as food scraps, cardboard, paper, and shipping materials. Generally speaking, a commercial occupancy generates more waste than a household on a per-location basis.
Construction and demolition waste
The EPA defines this type of waste as "Construction and Demolition (C&D) debris is a type of waste that is not included in municipal solid waste (MSW)." Items typically found in C&D debris include, but are not limited to, steel, wood products, drywall and plaster, brick and clay tile, asphalt shingles, concrete, and asphalt. Generally speaking, construction and demolition waste can be categorized as any components needed to build infrastructure. In 2018, the EPA estimated that the US generated approximately 600 million tons of C&D waste. The waste generated by construction and demolition is often intended to be reused or is sent to the landfill. Examples of reuse include milled asphalt, which can be used again in asphalt mixtures, and fill dirt, which can be used to level a grade.
Hazardous waste
The EPA defines hazardous waste as "a waste with properties that make it dangerous or capable of having a harmful effect on human health or the environment." Hazardous waste falls under the Resource Conservation and Recovery Act (RCRA). Under the RCRA, the EPA has the authority to control hazardous waste during its entire lifecycle, from the point of creation to the point where it has been properly disposed of. The life cycle of hazardous waste includes generation, transportation, treatment, storage, and disposal, all of which are covered by the RCRA. Some forms of hazardous waste include radioactive waste, explosive waste, and electronic waste.
Radioactive waste
Radioactive waste, often referred to as nuclear waste, is produced by various industries such as nuclear power plants, nuclear reactors, hospitals, research centers, and mining facilities. Any activity that involves radioactive material can generate radioactive waste. Furthermore, such waste emits radioactive particles, which if not handled correctly, can be both an environmental hazard as well as a human health hazard. When dealing with radioactive waste, it is extremely important to understand the necessary protocols and follow the correct precautions. Failure to handle and recycle these materials can have catastrophic consequences and potentially damage the site's ecosystems for years to come.
Radioactive waste is monitored and regulated by multiple governmental agencies such as Nuclear Regulatory Commission (NRC), Department of Energy (DOE), Environmental Protection Agency (EPA), Department of Transportation (DOT), and Department of the Interior (DOI). Each agency plays an important role in creating, handling, and properly disposing of radioactive waste. A brief description of each agency's role can be found below.
NRC: "Licenses and regulates the receipt and possession of high-level waste at privately owned facilities and at certain DOE facilities."
DOE: "Plans and carries out programs for safe handling of DOE-generated radioactive wastes, develops waste disposal technologies, and will design, construct and operate disposal facilities for DOE-generated and commercial high-level wastes."
EPA: "Develops environmental standards and federal radiation protection guidance for offsite radiation due to the disposal of spent nuclear fuel and high-level and transuranic radioactive wastes."
DOT: "Regulates both the packaging and carriage of all hazardous materials including radioactive waste."
DOI: "Through the U.S. Geological Survey, conducts laboratory and field geologic investigations in support of DOE's waste disposal programs and collaborates with DOE on earth science technical activities."
The US currently defines five types of radioactive waste, as shown below.
High-level Waste: This type of radioactive waste is generated from nuclear reactors or reprocessing spent nuclear fuel.
Transuranic Waste: This type of radioactive waste is man-made and contains elements with an atomic number greater than 92 (i.e., heavier than uranium).
Uranium or thorium mill tailings: This type of radioactive waste is the residue left after the mining or milling of uranium or thorium ore.
Low-level waste: This type of radioactive waste is radioactively contaminated waste. It is typically generated from industrial processes or research. Examples of these items include paper, protective clothing, bags, and cardboard.
Technologically enhanced naturally-occurring radioactive material (TENORM): This type of radioactive waste is created through human activity such as mining, oil and gas drilling, and water treatment where naturally-occurring radiological material (NORM) becomes concentrated.
Energetic hazardous waste
The EPA defines energetic hazardous waste as "wastes that have the potential to detonate and bulk military propellants which cannot safely be disposed of through other modes of treatment." The items which typically fall under this category include munitions, fireworks, flares, hobby rockets, and automobile airbag propellants.
Munitions
Munitions were addressed under hazardous waste regulation in 1997, when the EPA finalized a special rule under RCRA to address munitions in waste. This rule is commonly referred to as the Military Munitions Rule. The EPA defines military munitions as "all types of both conventional and chemical ammunition products and their components, produced by or for the military for national defense and security (including munitions produced by other parties under contract to or acting as an agent for DOD—in the case of Government Owned/Contractor Operated [GOCO] operations)." While a large percentage of munitions waste is generated by the government or governmental contractors, residents also throw away expired or faulty ammunition in their household waste.
Fireworks, flares, and hobby rockets
Every year, the US generates this type of waste from both commercial and consumer sources. This waste often consists of fireworks, signal flares and hobby rockets which have been damaged, have failed to operate, or are otherwise unusable. Due to their chemical properties, these types of devices are extremely dangerous.
Automobile airbag propellants
While automobile airbag propellants are not as common as munitions and fireworks, they share similar properties which make them extremely hazardous. The reactivity and ignitability of airbag propellants are the characteristics that qualify them as hazardous waste. Disposing of an airbag undeployed leaves these two hazardous characteristics intact. To properly dispose of these items, they must be safely deployed, which removes these hazardous characteristics.
The EPA includes the waste of automobile airbag propellants under the RCRA. In 2018, the EPA issued a final rule on the handling of automobile airbag propellants. The "interim final rule" provides an exemption for entities which install and remove airbags, including automobile dealerships, salvage yards, automobile repair facilities and collision centers. The handler and transporter are exempt from RCRA, but the airbag waste collection facility is not. Once the airbags have reached the collection facility, they are classified as RCRA hazardous waste and must be disposed of or recycled at a RCRA disposal facility.
Electronic waste
Electronic waste, often referred to as "E-Waste" or "E-Scrap," consists of discarded electronic devices, which are either thrown away or sent to a recycler. E-Waste continues to end up in landfills across the world. The EPA estimates that in 2009, 2.37 million tons of televisions, computers, cell phones, printers, scanners, and fax machines were discarded by US consumers. Only 25% of these devices were recycled; the remainder ended up in landfills across the US.
E-Waste contains many elements that can be recycled or re-used. Typically speaking, electronics are encased in a plastic or light metal enclosure. Items such as computer boards, wiring, capacitors, and small motor items are common types of E-waste. Of these items, the internal components include iron, gold, palladium, platinum, and copper, all of which are mined from the earth. It requires energy to operate the equipment to mine these metals, which emits greenhouse gases into the atmosphere. Donating e-waste to recycling centers or refurbishing this equipment can reduce the greenhouse gases emitted through the mining process as well as decrease the use of natural resources to ensure future generations will have sufficient access to these resources.
As this issue continued to grow, President Obama established the Interagency Task Force on Electronics Stewardship in November 2010. The overall goal for this task was to develop a national strategy for handling and proper disposal of electronic waste. The task force would work with the White House Council on Environmental Quality (CEQ), EPA, and the US General Services Administration (GSA). The task force released its final product, the National Strategy for Electronics Stewardship report. The report focuses on four goals of the federal government's plan to enhance the management of electronics:
1. Incentivizing greener design of electronics
2. Leading by example
3. Increasing domestic recycling
4. Reducing harmful exports of e-waste and building capacity in developing countries.
E-Waste is not only a problem in the US, but also a global issue. Tackling this issue requires collaboration from multiple agencies across the world. Some agencies involved in this include U.S. EPA, Taiwan Environmental Protection Administration (Taiwan EPA), International E-Waste Management Network (IEMN), and environmental offices from Asia, Latin America, the Caribbean, Africa, and North America.
Mixed waste
Mixed waste is a term that has different definitions based on its context. Most commonly, mixed waste refers to hazardous waste which contains radioactive material. In this context, the management of mixed waste is regulated by the EPA and RCRA and Atomic Energy Act. The hazardous materials content is regulated by RCRA while the radiological component is regulated by the Department of Energy (DOE) and Nuclear Regulatory Commission (NRC).
Mixed waste can also be defined as a type of waste which includes recyclable materials and organic materials. Some examples of mixed waste in this context include a combination of broken glassware, floor sweepings, non-repairable household goods, non-recyclable plastic and metal, clothing, and furnishings. Additionally, ashes, soot, and residential renovation waste materials are also included under this definition.
Medical waste
This type of waste is typically generated from hospitals, physicians' offices, dental practices, blood banks, veterinary offices, and research facilities. This waste has often been contaminated with bodily fluids from humans or animals, such as blood, vomit, and urine. Concern grew when medical waste began appearing on east coast beaches in the 1980s, which prompted Congress to pass the Medical Waste Tracking Act. This act was only in effect for approximately 3 years, after which the EPA concluded that the "disease-causing potential of medical waste was greatest at the point of generation and naturally tapers off after that point."
Prior to the Hospital Medical Infectious Waste Incinerator (HMIWI) standard, approximately 90% of infectious waste was incinerated before 1997. Due to the potential to negatively affect air quality, alternative treatment and disposal technologies for medical waste were developed. These new alternatives include:
Thermal Treatment, such as microwave technologies
Steam sterilization, such as autoclaving
Electropyrolysis
Chemical mechanical systems
Reporting
There are many issues that surround reporting waste. It is most commonly measured by size or weight, and there is a stark difference between the two. For example, organic waste is much heavier when it is wet, and plastic or glass bottles can have different weights but be the same size. On a global scale it is difficult to report waste because countries have different definitions of waste and what falls into waste categories, as well as different ways of reporting. Based on incomplete reports from its parties, the Basel Convention estimated 338 million tonnes of waste was generated in 2001. For the same year, OECD estimated 4 billion tonnes from its member countries. Despite these inconsistencies, waste reporting is still useful on a small and large scale to determine key causes and locations, and to find ways of preventing, minimizing, recovering, treating, and disposing of waste.
Costs
Environmental costs
Inappropriately managed waste can attract rodents and insects, which can harbor gastrointestinal parasites, yellow fever, worms, and other diseases and conditions affecting humans, and exposure to hazardous wastes, particularly when they are burned, can cause various other diseases including cancers. Toxic waste materials can contaminate surface water, groundwater, soil, and air, which causes more problems for humans, other species, and ecosystems. A form of waste disposal involving combustion creates a significant amount of greenhouse gases. When the burned waste contains metals, it can create toxic gases; when the waste contains plastics, the gases produced contain CO2. As global warming and CO2 emissions increase, soil becomes a larger carbon sink and will become increasingly valuable for plant life.
Social costs
Waste management is a significant environmental justice issue. Many of the environmental burdens cited above are more often borne by marginalized groups, such as racial minorities, women, and residents of developing nations. NIMBY (not in my back yard) is the opposition of residents to a proposal for a new development because it is close to them. However, the need for expansion and siting of waste treatment and disposal facilities is increasing worldwide. There is now a growing market in the transboundary movement of waste, and although most waste that flows between countries goes between developed nations, a significant amount of waste is moved from developed to developing nations.
Economic costs
The economic costs of managing waste are high, and are often paid for by municipal governments; money can often be saved with more efficiently designed collection routes, modified vehicles, and public education. Environmental policies such as pay as you throw can reduce the cost of management and reduce waste quantities. Waste recovery (that is, recycling and reuse) can curb economic costs because it avoids extracting raw materials and often cuts transportation costs. The location of waste treatment and disposal facilities often reduces property values due to noise, dust, pollution, unsightliness, and negative stigma. The informal waste sector consists mostly of waste pickers who scavenge for metals, glass, plastic, textiles, and other materials and then trade them for a profit. This sector can significantly alter or reduce waste in a particular system, but other negative economic effects come with the disease, poverty, exploitation, and abuse of its workers.
Affecting communities
People in developing countries suffer from contaminated water and landfills caused by unlawful government policies that allow first-world countries and companies to transport their trash to these communities, oftentimes near bodies of water. Those same governments do not use any waste trade profits to create ways to manage landfills or clean water sources. Photographer Kevin McElvaney documents the world's biggest e-waste dump, called Agbogbloshie, in Accra, Ghana, which used to be a wetland. The young men and children who work in Agbogbloshie smash devices to get to the metals, and they suffer burns, eye damage, lung and back problems, chronic nausea, debilitating headaches, and respiratory problems; most workers die from cancer in their 20s (McElvaney). McElvaney's photos show children in fields burning refrigerators and computers, with blackened hands and trashed clothes, and animals, such as cows with open wounds, in the dumpsite. There are piles of waste used as makeshift bridges over lakes, with metals and chemicals seeping into the water and groundwater that could be linked to homes' water systems. The same unfortunate situation and dumps/landfills can be seen in similar countries considered part of the third world, such as other West African countries and China. Many are advocating for waste management, a stop to the waste trade, the creation of wastewater treatment facilities, and the provision of a clean and accessible water source. The health of all these people living near landfills and contaminated water is a human necessity and right that is being taken away.
Management
Wastewater facilities
Wastewater treatment facilities remove pollutants and contaminants physically and chemically so that cleaned water can be returned to society. The South Gippsland Water Organization breaks waste-water treatment down into three steps. Primary treatment sifts through the water to remove large solids, leaving oils and small particles in the water. Secondary treatment dissolves or removes oils, particles, and micro-organisms from the water, preparing it for tertiary treatment, in which the water is chemically disinfected with chlorine or with UV light. "For most industrial applications, a 150,000 GPD capacity WWTS would cost an estimated $500,000 to $1.5 million inclusive of all necessary design, engineering, equipment, installation, and startup". With such a simple solution that has been proven to clean water for reuse and is relatively inexpensive, there is no excuse why there should not be a waste-water treatment facility in every country, every state, and every town.
Benefits
"Right now, according to a NASA-led study, many of the world's freshwater sources are being drained faster than they are being replenished. The water table is dropping all over the world. There's not an infinite supply of water". There is a need to preserve every resource, every finite water source that we have left, to maintain our lives and lifestyles. Able countries helping under-developed countries to create wastewater treatment facilities benefits society. Another cost of not building wastewater treatment facilities is that people have no choice but to clean with, cook with, or drink contaminated water, which has caused millions of cases of disease and death. "Between 400,000 and 1 million people die each year in developing countries because of diseases caused by mismanaged waste, estimates poverty charity Tearfund". Society has the means to decrease or even eliminate this cause of death and save millions of lives by providing the simple human necessity of clean water.
Utilization
Resource recovery
Energy recovery
Energy recovery from waste is the use of non-recyclable waste materials to extract heat, electricity, or other usable energy through a variety of processes, including combustion, gasification, pyrolyzation, and anaerobic digestion. This process is referred to as waste-to-energy.
There are several ways to recover energy from waste. Anaerobic digestion is a naturally occurring process of decomposition where organic matter is reduced to simpler chemical components in the absence of oxygen. Incineration, or direct controlled burning of municipal solid waste, reduces waste and makes energy. Secondary recovered fuel is the energy recovery from waste that cannot be reused or recycled from mechanical and biological treatment activities. Pyrolysis involves heating waste in the absence of oxygen to high temperatures to break down any carbon content into a mixture of gaseous and liquid fuels and solid residue. Gasification is the conversion of carbon-rich material through high temperature with partial oxidation into a gas stream. Plasma arc heating is the very high-temperature heating of municipal solid waste, to temperatures ranging from 3,000 to 10,000 °C, where energy is released by an electrical discharge in an inert atmosphere.
Using waste as fuel can offer important environmental benefits. It can provide a safe and cost-effective option for wastes that would normally have to be dealt with through disposal. It can help reduce carbon dioxide emissions by diverting energy use from fossil fuels, while also generating energy, and it can reduce the methane emissions generated in landfills by diverting waste from landfills.
There is some debate in the classification of certain biomass feedstock as wastes. Crude Tall Oil (CTO), a co-product of the pulp and papermaking process, is defined as a waste or residue in some European countries when in fact it is produced “on purpose” and has significant value add potential in industrial applications. Several companies use CTO to produce fuel, while the pine chemicals industry maximizes it as a feedstock “producing low-carbon, bio-based chemicals” through cascading use.
Education and awareness
Education and awareness in the area of waste and waste management is increasingly important from a global perspective of resource management. The Talloires Declaration is a declaration for sustainability concerned about the unprecedented scale and speed of environmental pollution and degradation, and the depletion of natural resources. Local, regional, and global air pollution; accumulation and distribution of toxic wastes; destruction and depletion of forests, soil, and water; depletion of the ozone layer and emission of "greenhouse" gases threaten the survival of humans and thousands of other living species, the integrity of the earth and its biodiversity, the security of nations, and the heritage of future generations. Several universities have implemented the Talloires Declaration by establishing environmental management and waste management programs, e.g. the waste management university project. University and vocational education are promoted by various organizations, e.g. WAMITAB and the Chartered Institution of Wastes Management.
Gallery
| Technology | Basics_6 | null |
20647050 | https://en.wikipedia.org/wiki/Temperature | Temperature | Temperature is a physical quantity that quantitatively expresses the attribute of hotness or coldness. Temperature is measured with a thermometer. It reflects the average kinetic energy of the vibrating and colliding atoms making up a substance.
Thermometers are calibrated in various temperature scales that historically have relied on various reference points and thermometric substances for definition. The most common scales are the Celsius scale with the unit symbol °C (formerly called centigrade), the Fahrenheit scale (°F), and the Kelvin scale (K), with the third being used predominantly for scientific purposes. The kelvin is one of the seven base units in the International System of Units (SI).
Absolute zero, i.e., zero kelvin or −273.15 °C, is the lowest point in the thermodynamic temperature scale. Experimentally, it can be approached very closely but not actually reached, as recognized in the third law of thermodynamics. It would be impossible to extract energy as heat from a body at that temperature.
Temperature is important in all fields of natural science, including physics, chemistry, Earth science, astronomy, medicine, biology, ecology, material science, metallurgy, mechanical engineering and geography as well as most aspects of daily life.
Effects
Many physical processes are related to temperature; some of them are given below:
the physical properties of materials including the phase (solid, liquid, gaseous or plasma), density, solubility, vapor pressure, electrical conductivity, hardness, wear resistance, thermal conductivity, corrosion resistance, strength
the rate and extent to which chemical reactions occur
the amount and properties of thermal radiation emitted from the surface of an object
air temperature affects all living organisms
the speed of sound, which in a gas is proportional to the square root of the absolute temperature
Scales
Temperature scales need two values for definition: the point chosen as zero degrees and the magnitudes of the incremental unit of temperature.
The Celsius scale (°C) is used for common temperature measurements in most of the world. It is an empirical scale that developed historically, which led to its zero point, 0 °C, being defined as the freezing point of water, and 100 °C as the boiling point of water, both at atmospheric pressure at sea level. It was called a centigrade scale because of the 100-degree interval. Since the standardization of the kelvin in the International System of Units, it has subsequently been redefined in terms of the equivalent fixing points on the Kelvin scale, so that a temperature increment of one degree Celsius is the same as an increment of one kelvin, though numerically the scales differ by an exact offset of 273.15.
The Fahrenheit scale is in common use in the United States. Water freezes at 32 °F and boils at 212 °F at sea-level atmospheric pressure.
Absolute zero
At the absolute zero of temperature, no energy can be removed from matter as heat, a fact expressed in the third law of thermodynamics. At this temperature, matter contains no macroscopic thermal energy, but still has quantum-mechanical zero-point energy as predicted by the uncertainty principle, although this does not enter into the definition of absolute temperature. Experimentally, absolute zero can be approached only very closely; it can never be reached (the lowest temperature attained by experiment is 38 pK). Theoretically, in a body at a temperature of absolute zero, all classical motion of its particles has ceased and they are at complete rest in this classical sense. Absolute zero, defined as 0 K, is exactly equal to −273.15 °C, or −459.67 °F.
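As a small aside (not part of the article), the fixed relations among the three scales can be written down directly; the helper names below are illustrative assumptions.

```python
# Minimal sketch of the fixed offsets relating the Kelvin, Celsius, and Fahrenheit scales.
def celsius_to_kelvin(t_c: float) -> float:
    return t_c + 273.15            # exact offset by definition

def celsius_to_fahrenheit(t_c: float) -> float:
    return t_c * 9 / 5 + 32        # a 1 degC increment equals a 1.8 degF increment

print(celsius_to_kelvin(-273.15))                 # 0.0 -> absolute zero
print(round(celsius_to_fahrenheit(-273.15), 2))   # -459.67 degF
print(celsius_to_kelvin(0.0))                     # 273.15 K, freezing point of water
```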
Absolute scales
Referring to the Boltzmann constant, to the Maxwell–Boltzmann distribution, and to the Boltzmann statistical mechanical definition of entropy, as distinct from the Gibbs definition, for independently moving microscopic particles, disregarding interparticle potential energy, by international agreement, a temperature scale is defined and said to be absolute because it is independent of the characteristics of particular thermometric substances and thermometer mechanisms. Apart from absolute zero, it does not have a reference temperature. It is known as the Kelvin scale, widely used in science and technology. The kelvin (the unit name is spelled with a lower-case 'k') is the unit of temperature in the International System of Units (SI). The temperature of a body in a state of thermodynamic equilibrium is always positive relative to absolute zero.
Besides the internationally agreed Kelvin scale, there is also a thermodynamic temperature scale, invented by Lord Kelvin, also with its numerical zero at the absolute zero of temperature, but directly relating to purely macroscopic thermodynamic concepts, including the macroscopic entropy, though microscopically referable to the Gibbs statistical mechanical definition of entropy for the canonical ensemble, that takes interparticle potential energy into account, as well as independent particle motion so that it can account for measurements of temperatures near absolute zero. This scale has a reference temperature at the triple point of water, the numerical value of which is defined by measurements using the aforementioned internationally agreed Kelvin scale.
Kelvin scale
Many scientific measurements use the Kelvin temperature scale (unit symbol: K), named in honor of the physicist who first defined it. It is an absolute scale. Its numerical zero point, 0 K, is at the absolute zero of temperature. Since May 2019, the kelvin has been defined through particle kinetic theory and statistical mechanics. In the International System of Units (SI), the magnitude of the kelvin is defined in terms of the Boltzmann constant, the value of which is defined as fixed by international convention.
Statistical mechanical versus thermodynamic temperature scales
Since May 2019, the magnitude of the kelvin is defined in relation to microscopic phenomena, characterized in terms of statistical mechanics. Previously, from 1954 until May 2019, the International System of Units defined a scale and unit for the kelvin as a thermodynamic temperature, by using the reliably reproducible temperature of the triple point of water as a second reference point, the first reference point being at absolute zero.
Historically, the temperature of the triple point of water was defined as exactly 273.16 K. Today it is an empirically measured quantity. The freezing point of water at sea-level atmospheric pressure occurs very close to 273.15 K (0 °C).
Classification of scales
There are various kinds of temperature scale. It may be convenient to classify them as empirically and theoretically based. Empirical temperature scales are historically older, while theoretically based scales arose in the middle of the nineteenth century.
Empirical scales
Empirically based temperature scales rely directly on measurements of simple macroscopic physical properties of materials. For example, the length of a column of mercury, confined in a glass-walled capillary tube, is dependent largely on temperature and is the basis of the very useful mercury-in-glass thermometer. Such scales are valid only within convenient ranges of temperature. For example, above the boiling point of mercury, a mercury-in-glass thermometer is impracticable. Most materials expand with temperature increase, but some materials, such as water, contract with temperature increase over some specific range, and then they are hardly useful as thermometric materials. A material is of no use as a thermometer near one of its phase-change temperatures, for example, its boiling-point.
In spite of these limitations, most generally used practical thermometers are of the empirically based kind. Empirical thermometry was especially important for calorimetry, which contributed greatly to the discovery of thermodynamics. Nevertheless, empirical thermometry has serious drawbacks when judged as a basis for theoretical physics. Empirically based thermometers, beyond their base as simple direct measurements of ordinary physical properties of thermometric materials, can be re-calibrated, by use of theoretical physical reasoning, and this can extend their range of adequacy.
Theoretical scales
Theoretically based temperature scales are based directly on theoretical arguments, especially those of kinetic theory and thermodynamics. They are more or less ideally realized in practically feasible physical devices and materials. Theoretically based temperature scales are used to provide calibrating standards for practical empirically based thermometers.
Microscopic statistical mechanical scale
In physics, the internationally agreed conventional temperature scale is called the Kelvin scale. It is calibrated through the internationally agreed and prescribed value of the Boltzmann constant, referring to motions of microscopic particles, such as atoms, molecules, and electrons, constituent in the body whose temperature is to be measured. In contrast with the thermodynamic temperature scale invented by Kelvin, the presently conventional Kelvin temperature is not defined through comparison with the temperature of a reference state of a standard body, nor in terms of macroscopic thermodynamics.
Apart from the absolute zero of temperature, the Kelvin temperature of a body in a state of internal thermodynamic equilibrium is defined by measurements of suitably chosen of its physical properties, such as have precisely known theoretical explanations in terms of the Boltzmann constant. That constant refers to chosen kinds of motion of microscopic particles in the constitution of the body. In those kinds of motion, the particles move individually, without mutual interaction. Such motions are typically interrupted by inter-particle collisions, but for temperature measurement, the motions are chosen so that, between collisions, the non-interactive segments of their trajectories are known to be accessible to accurate measurement. For this purpose, interparticle potential energy is disregarded.
In an ideal gas, and in other theoretically understood bodies, the Kelvin temperature is defined to be proportional to the average kinetic energy of non-interactively moving microscopic particles, which can be measured by suitable techniques. The proportionality constant is a simple multiple of the Boltzmann constant. If molecules, atoms, or electrons are emitted from material and their velocities are measured, the spectrum of their velocities often nearly obeys a theoretical law called the Maxwell–Boltzmann distribution, which gives a well-founded measurement of temperatures for which the law holds. There have not yet been successful experiments of this same kind that directly use the Fermi–Dirac distribution for thermometry, but perhaps that will be achieved in the future.
The speed of sound in a gas can be calculated theoretically from the gas's molecular character, temperature, pressure, and the Boltzmann constant. For a gas of known molecular character and pressure, this provides a relation between temperature and the Boltzmann constant. Those quantities can be known or measured more precisely than can the thermodynamic variables that define the state of a sample of water at its triple point. Consequently, taking the value of the Boltzmann constant as a primarily defined reference of exactly defined value, a measurement of the speed of sound can provide a more precise measurement of the temperature of the gas.
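A hedged numerical illustration of this idea (the gas, its molar mass, and the function name below are assumptions, not from the article): for an ideal monatomic gas the speed of sound is c = sqrt(γ k_B T / m), so a measured c yields the temperature once the Boltzmann constant is taken as exactly defined.

```python
import math

# Acoustic gas thermometry sketch: infer temperature from the measured speed of sound
# in an assumed monatomic ideal gas (argon), using the exact Boltzmann constant.
k_B = 1.380649e-23          # J/K, exact by definition since May 2019
N_A = 6.02214076e23         # 1/mol, exact

gamma = 5.0 / 3.0           # heat-capacity ratio of a monatomic gas
m = 39.948e-3 / N_A         # mass of one argon atom in kg (assumed molar mass)

def temperature_from_speed_of_sound(c: float) -> float:
    """Ideal-gas estimate of temperature (K) from the measured speed of sound (m/s)."""
    return m * c**2 / (gamma * k_B)

print(temperature_from_speed_of_sound(319.0))   # ~293 K for argon near room temperature
```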
Measurement of the spectrum of electromagnetic radiation from an ideal three-dimensional black body can provide an accurate temperature measurement because the frequency of maximum spectral radiance of black-body radiation is directly proportional to the temperature of the black body; this is known as Wien's displacement law and has a theoretical explanation in Planck's law and the Bose–Einstein law.
Measurement of the spectrum of noise-power produced by an electrical resistor can also provide accurate temperature measurement. The resistor has two terminals and is in effect a one-dimensional body. The Bose-Einstein law for this case indicates that the noise-power is directly proportional to the temperature of the resistor and to the value of its resistance and to the noise bandwidth. In a given frequency band, the noise-power has equal contributions from every frequency and is called Johnson noise. If the value of the resistance is known then the temperature can be found.
Macroscopic thermodynamic scale
Historically, till May 2019, the definition of the Kelvin scale was that invented by Kelvin, based on a ratio of quantities of energy in processes in an ideal Carnot engine, entirely in terms of macroscopic thermodynamics. That Carnot engine was to work between two temperatures, that of the body whose temperature was to be measured, and a reference, that of a body at the temperature of the triple point of water. Then the reference temperature, that of the triple point, was defined to be exactly 273.16 K. Since May 2019, that value has not been fixed by definition but is to be measured through microscopic phenomena, involving the Boltzmann constant, as described above. The microscopic statistical mechanical definition does not have a reference temperature.
Ideal gas
A material on which a macroscopically defined temperature scale may be based is the ideal gas. The pressure exerted by a fixed volume and mass of an ideal gas is directly proportional to its temperature. Some natural gases show so nearly ideal properties over a suitable temperature range that they can be used for thermometry; this was important during the development of thermodynamics and is still of practical importance today. The ideal gas thermometer is, however, not theoretically perfect for thermodynamics. This is because the entropy of an ideal gas at its absolute zero of temperature is not a positive semi-definite quantity, which puts the gas in violation of the third law of thermodynamics. In contrast to real materials, the ideal gas does not liquefy or solidify, no matter how cold it is. Viewed another way, the ideal gas law refers to the limit of infinitely high temperature and zero pressure; these conditions guarantee non-interactive motions of the constituent molecules.
Kinetic theory approach
The magnitude of the kelvin is now defined in terms of kinetic theory, derived from the value of the Boltzmann constant.
Kinetic theory provides a microscopic account of temperature for some bodies of material, especially gases, based on macroscopic systems' being composed of many microscopic particles, such as molecules and ions of various species, the particles of a species being all alike. It explains macroscopic phenomena through the classical mechanics of the microscopic particles. The equipartition theorem of kinetic theory asserts that each classical degree of freedom of a freely moving particle has an average kinetic energy of $\tfrac{1}{2} k_\mathrm{B} T$, where $k_\mathrm{B}$ denotes the Boltzmann constant. The translational motion of the particle has three degrees of freedom, so that, except at very low temperatures where quantum effects predominate, the average translational kinetic energy of a freely moving particle in a system with temperature $T$ will be $\tfrac{3}{2} k_\mathrm{B} T$.
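As a simple worked illustration of that statement (values and names assumed, not taken from the article):

```python
# Equipartition estimate: each translational degree of freedom carries (1/2) k_B T
# on average, so a free particle at temperature T has mean translational kinetic
# energy (3/2) k_B T.
k_B = 1.380649e-23                      # J/K, exact by definition

def mean_translational_ke(T: float) -> float:
    return 1.5 * k_B * T                # joules per particle

print(mean_translational_ke(300.0))     # ~6.2e-21 J at room temperature
```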
Molecules, such as oxygen (O2), have more degrees of freedom than single spherical atoms: they undergo rotational and vibrational motions as well as translations. Heating results in an increase of temperature due to an increase in the average translational kinetic energy of the molecules. Heating will also cause, through equipartitioning, the energy associated with vibrational and rotational modes to increase. Thus a diatomic gas will require more energy input to increase its temperature by a certain amount, i.e. it will have a greater heat capacity than a monatomic gas.
As noted above, the speed of sound in a gas can be calculated from the gas's molecular character, temperature, pressure, and the Boltzmann constant. Taking the value of the Boltzmann constant as a primarily defined reference of exactly defined value, a measurement of the speed of sound can provide a more precise measurement of the temperature of the gas.
It is possible to measure the average kinetic energy of constituent microscopic particles if they are allowed to escape from the bulk of the system, through a small hole in the containing wall. The spectrum of velocities has to be measured, and the average calculated from that. It is not necessarily the case that the particles that escape and are measured have the same velocity distribution as the particles that remain in the bulk of the system, but sometimes a good sample is possible.
Thermodynamic approach
Temperature is one of the principal quantities in the study of thermodynamics. Formerly, the magnitude of the kelvin was defined in thermodynamic terms, but nowadays, as mentioned above, it is defined in terms of kinetic theory.
The thermodynamic temperature is said to be absolute for two reasons. One is that its formal character is independent of the properties of particular materials. The other reason is that its zero is, in a sense, absolute, in that it indicates absence of microscopic classical motion of the constituent particles of matter, so that they have a limiting specific heat of zero for zero temperature, according to the third law of thermodynamics. Nevertheless, a thermodynamic temperature does in fact have a definite numerical value that has been arbitrarily chosen by tradition and is dependent on the property of particular materials; it is simply less arbitrary than relative "degrees" scales such as Celsius and Fahrenheit. Being an absolute scale with one fixed point (zero), there is only one degree of freedom left to arbitrary choice, rather than two as in relative scales. For the Kelvin scale since May 2019, by international convention, the choice has been made to use knowledge of modes of operation of various thermometric devices, relying on microscopic kinetic theories about molecular motion. The numerical scale is settled by a conventional definition of the value of the Boltzmann constant, which relates macroscopic temperature to average microscopic kinetic energy of particles such as molecules. Its numerical value is arbitrary, and an alternate, less widely used absolute temperature scale exists called the Rankine scale, made to be aligned with the Fahrenheit scale as Kelvin is with Celsius.
The thermodynamic definition of temperature is due to Kelvin. It is framed in terms of an idealized device called a Carnot engine, imagined to run in a fictive continuous cycle of successive processes that traverse a cycle of states of its working body. The engine takes in a quantity of heat $q_1$ from a hot reservoir and passes out a lesser quantity of waste heat $q_2$ to a cold reservoir. The net heat energy absorbed by the working body is passed, as thermodynamic work, to a work reservoir, and is considered to be the output of the engine. The cycle is imagined to run so slowly that at each point of the cycle the working body is in a state of thermodynamic equilibrium. The successive processes of the cycle are thus imagined to run reversibly with no entropy production. Then the quantity of entropy taken in from the hot reservoir when the working body is heated is equal to that passed to the cold reservoir when the working body is cooled. Then the absolute or thermodynamic temperatures, $T_1$ and $T_2$, of the reservoirs are defined such that

$$\frac{T_1}{T_2} = \frac{|q_1|}{|q_2|}. \qquad (1)$$
The zeroth law of thermodynamics allows this definition to be used to measure the absolute or thermodynamic temperature of an arbitrary body of interest, by making the other heat reservoir have the same temperature as the body of interest.
Kelvin's original work postulating absolute temperature was published in 1848. It was based on the work of Carnot, before the formulation of the first law of thermodynamics. Carnot had no sound understanding of heat and no specific concept of entropy. He wrote of 'caloric' and said that all the caloric that passed from the hot reservoir was passed into the cold reservoir. Kelvin wrote in his 1848 paper that his scale was absolute in the sense that it was defined "independently of the properties of any particular kind of matter". His definitive publication, which sets out the definition just stated, was printed in 1853, a paper read in 1851.
Numerical details were formerly settled by making one of the heat reservoirs a cell at the triple point of water, which was defined to have an absolute temperature of 273.16 K. Nowadays, the numerical value is instead obtained from measurement through the microscopic statistical mechanical international definition, as above.
Intensive variability
In thermodynamic terms, temperature is an intensive variable because it is equal to a differential coefficient of one extensive variable with respect to another, for a given body. It thus has the dimensions of a ratio of two extensive variables. In thermodynamics, two bodies are often considered as connected by contact with a common wall, which has some specific permeability properties. Such specific permeability can be referred to a specific intensive variable. An example is a diathermic wall that is permeable only to heat; the intensive variable for this case is temperature. When the two bodies have been connected through the specifically permeable wall for a very long time, and have settled to a permanent steady state, the relevant intensive variables are equal in the two bodies; for a diathermal wall, this statement is sometimes called the zeroth law of thermodynamics.
In particular, when the body is described by stating its internal energy $U$, an extensive variable, as a function of its entropy $S$, also an extensive variable, and other state variables $V, N$, with $U = U(S, V, N)$, then the temperature is equal to the partial derivative of the internal energy with respect to the entropy:

$$T = \left(\frac{\partial U}{\partial S}\right)_{V, N}. \qquad (2)$$
Likewise, when the body is described by stating its entropy $S$ as a function of its internal energy $U$, and other state variables $V, N$, with $S = S(U, V, N)$, then the reciprocal of the temperature is equal to the partial derivative of the entropy with respect to the internal energy:

$$\frac{1}{T} = \left(\frac{\partial S}{\partial U}\right)_{V, N}. \qquad (3)$$
The above definition, equation (1), of the absolute temperature, is due to Kelvin. It refers to systems closed to the transfer of matter and has a special emphasis on directly experimental procedures. A presentation of thermodynamics by Gibbs starts at a more abstract level and deals with systems open to the transfer of matter; in this development of thermodynamics, the equations (2) and (3) above are actually alternative definitions of temperature.
Local thermodynamic equilibrium
Real-world bodies are often not in thermodynamic equilibrium and not homogeneous. For the study by methods of classical irreversible thermodynamics, a body is usually spatially and temporally divided conceptually into 'cells' of small size. If classical thermodynamic equilibrium conditions for matter are fulfilled to good approximation in such a 'cell', then it is homogeneous and a temperature exists for it. If this is so for every 'cell' of the body, then local thermodynamic equilibrium is said to prevail throughout the body.
It makes good sense, for example, to say of the extensive variable internal energy U, or of the extensive variable entropy S, that it has a density per unit volume or a quantity per unit mass of the system, but it makes no sense to speak of a density of temperature per unit volume or a quantity of temperature per unit mass of the system. On the other hand, it makes no sense to speak of the internal energy at a point, while when local thermodynamic equilibrium prevails, it makes good sense to speak of the temperature at a point. Consequently, the temperature can vary from point to point in a medium that is not in global thermodynamic equilibrium, but in which there is local thermodynamic equilibrium.
Thus, when local thermodynamic equilibrium prevails in a body, the temperature can be regarded as a spatially varying local property in that body, and this is because the temperature is an intensive variable.
Basic theory
Temperature is a measure of a quality of a state of a material. The quality may be regarded as a more abstract entity than any particular temperature scale that measures it, and is called hotness by some writers. The quality of hotness refers to the state of material only in a particular locality, and in general, apart from bodies held in a steady state of thermodynamic equilibrium, hotness varies from place to place. It is not necessarily the case that a material in a particular place is in a state that is steady and nearly homogeneous enough to allow it to have a well-defined hotness or temperature. Hotness may be represented abstractly as a one-dimensional manifold. Every valid temperature scale has its own one-to-one map into the hotness manifold.
When two systems in thermal contact are at the same temperature, no heat transfers between them. When a temperature difference does exist, heat flows spontaneously from the warmer system to the colder system until they are in thermal equilibrium. Such heat transfer occurs by conduction or by thermal radiation.
Experimental physicists, for example Galileo and Newton, found that there are indefinitely many empirical temperature scales. Nevertheless, the zeroth law of thermodynamics says that they all measure the same quality. This means that for a body in its own state of internal thermodynamic equilibrium, every correctly calibrated thermometer, of whatever kind, that measures the temperature of the body, records one and the same temperature. For a body that is not in its own state of internal thermodynamic equilibrium, different thermometers can record different temperatures, depending respectively on the mechanisms of operation of the thermometers.
Bodies in thermodynamic equilibrium
For experimental physics, hotness means that, when comparing any two given bodies in their respective separate thermodynamic equilibria, any two suitably given empirical thermometers with numerical scale readings will agree as to which is the hotter of the two given bodies, or that they have the same temperature. This does not require the two thermometers to have a linear relation between their numerical scale readings, but it does require that the relation between their numerical readings shall be strictly monotonic. A definite sense of greater hotness can be had, independently of calorimetry, of thermodynamics, and of properties of particular materials, from Wien's displacement law of thermal radiation: the temperature of a bath of thermal radiation is proportional, by a universal constant, to the frequency of the maximum of its frequency spectrum; this frequency is always positive, but can have values that tend to zero. Thermal radiation is initially defined for a cavity in thermodynamic equilibrium. These physical facts justify a mathematical statement that hotness exists on an ordered one-dimensional manifold. This is a fundamental character of temperature and thermometers for bodies in their own thermodynamic equilibrium.
Except for a system undergoing a first-order phase change such as the melting of ice, as a closed system receives heat, without a change in its volume and without a change in external force fields acting on it, its temperature rises. For a system undergoing such a phase change so slowly that departure from thermodynamic equilibrium can be neglected, its temperature remains constant as the system is supplied with latent heat. Conversely, a loss of heat from a closed system, without phase change, without change of volume, and without a change in external force fields acting on it, decreases its temperature.
Bodies in a steady state but not in thermodynamic equilibrium
While for bodies in their own thermodynamic equilibrium states, the notion of temperature requires that all empirical thermometers must agree as to which of two bodies is the hotter or that they are at the same temperature, this requirement is not safe for bodies that are in steady states though not in thermodynamic equilibrium. It can then well be that different empirical thermometers disagree about which is hotter, and if this is so, then at least one of the bodies does not have a well-defined absolute thermodynamic temperature. Nevertheless, any one given body and any one suitable empirical thermometer can still support notions of empirical, non-absolute, hotness, and temperature, for a suitable range of processes. This is a matter for study in non-equilibrium thermodynamics.
Bodies not in a steady state
When a body is not in a steady state, the notion of temperature becomes even less safe than for a body in a steady state not in thermodynamic equilibrium. This is also a matter for study in non-equilibrium thermodynamics.
Thermodynamic equilibrium axiomatics
For the axiomatic treatment of thermodynamic equilibrium, since the 1930s, it has become customary to refer to a zeroth law of thermodynamics. The customarily stated minimalist version of such a law postulates only that all bodies, which when thermally connected would be in thermal equilibrium, should be said to have the same temperature by definition, but by itself does not establish temperature as a quantity expressed as a real number on a scale. A more physically informative version of such a law views empirical temperature as a chart on a hotness manifold. While the zeroth law permits the definitions of many different empirical scales of temperature, the second law of thermodynamics selects the definition of a single preferred, absolute temperature, unique up to an arbitrary scale factor, whence called the thermodynamic temperature. If internal energy is considered as a function of the volume and entropy of a homogeneous system in thermodynamic equilibrium, thermodynamic absolute temperature appears as the partial derivative of internal energy with respect to the entropy at constant volume. Its natural, intrinsic origin or null point is absolute zero at which the entropy of any system is at a minimum. Although this is the lowest absolute temperature described by the model, the third law of thermodynamics postulates that absolute zero cannot be attained by any physical system.
Heat capacity
When an energy transfer to or from a body is only as heat, the state of the body changes. Depending on the surroundings and the walls separating them from the body, various changes are possible in the body. They include chemical reactions, increase of pressure, increase of temperature and phase change. For each kind of change under specified conditions, the heat capacity is the ratio of the quantity of heat transferred to the magnitude of the change.
For example, if the change is an increase in temperature at constant volume, with no phase change and no chemical change, then the temperature of the body rises and its pressure increases. The quantity of heat transferred, ΔQ, divided by the observed temperature change, ΔT, is the body's heat capacity at constant volume:

C_V = ΔQ / ΔT .
If heat capacity is measured for a well-defined amount of substance, the specific heat is the measure of the heat required to increase the temperature of such a unit quantity by one unit of temperature. For example, raising the temperature of water by one kelvin (equal to one degree Celsius) requires 4186 joules per kilogram, i.e. a specific heat of 4186 J/(kg·K).
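The specific-heat figure quoted above can be used directly to estimate heating energies as Q = m·c·ΔT. The mass and temperature rise in the sketch below are illustrative values only.

```python
# Energy needed to warm a mass of water by a temperature difference, Q = m * c * dT.
C_WATER = 4186.0  # J/(kg*K), specific heat of liquid water

def heat_required(mass_kg, delta_t_kelvin, c=C_WATER):
    return mass_kg * c * delta_t_kelvin  # joules

print(heat_required(1.5, 80.0))  # ~502,000 J to heat 1.5 kg of water by 80 K
```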
Measurement
Temperature measurement using modern scientific thermometers and temperature scales goes back at least as far as the early 18th century, when Daniel Gabriel Fahrenheit adapted a thermometer (switching to mercury) and a scale both developed by Ole Christensen Rømer. Fahrenheit's scale is still in use in the United States for non-scientific applications.
Temperature is measured with thermometers that may be calibrated to a variety of temperature scales. In most of the world (except for Belize, Myanmar, Liberia and the United States), the Celsius scale is used for most temperature measuring purposes. Most scientists measure temperature using the Celsius scale and thermodynamic temperature using the Kelvin scale, which is the Celsius scale offset so that its null point is 0 K = −273.15 °C, or absolute zero. Many engineering fields in the US, notably high-tech and US federal specifications (civil and military), also use the Kelvin and Celsius scales. Other engineering fields in the US also rely upon the Rankine scale (a shifted Fahrenheit scale) when working in thermodynamic-related disciplines such as combustion.
Units
The basic unit of temperature in the International System of Units (SI) is the kelvin. It has the symbol K.
For everyday applications, it is often convenient to use the Celsius scale, in which 0 °C corresponds very closely to the freezing point of water and 100 °C is its boiling point at sea level. Because liquid droplets commonly exist in clouds at sub-zero temperatures, 0 °C is better defined as the melting point of ice. In this scale, a temperature difference of 1 degree Celsius is the same as a 1 kelvin increment, but the scale is offset by the temperature at which ice melts (273.15 K).
By international agreement, until May 2019, the Kelvin and Celsius scales were defined by two fixing points: absolute zero and the triple point of Vienna Standard Mean Ocean Water, which is water specially prepared with a specified blend of hydrogen and oxygen isotopes. Absolute zero was defined as precisely 0 K and −273.15 °C. It is the temperature at which all classical translational motion of the particles comprising matter ceases and they are at complete rest in the classical model. Quantum-mechanically, however, zero-point motion remains and has an associated energy, the zero-point energy. Matter is in its ground state, and contains no thermal energy. The temperatures 273.16 K and 0.01 °C were defined as those of the triple point of water. This definition served the following purposes: it fixed the magnitude of the kelvin as being precisely 1 part in 273.16 parts of the difference between absolute zero and the triple point of water; it established that one kelvin has precisely the same magnitude as one degree on the Celsius scale; and it established the difference between the null points of these scales as being 273.15 K (0 K = −273.15 °C and 273.16 K = 0.01 °C). Since 2019, there has been a new definition based on the Boltzmann constant, but the scales are scarcely changed.
In the United States, the Fahrenheit scale is the most widely used. On this scale the freezing point of water corresponds to 32 °F and the boiling point to 212 °F. The Rankine scale, still used in fields of chemical engineering in the US, is an absolute scale based on the Fahrenheit increment.
Historical scales
The following temperature scales are in use or have historically been used for measuring temperature:
Kelvin scale
Celsius scale
Fahrenheit scale
Rankine scale
Delisle scale
Newton scale
Réaumur scale
Rømer scale
Plasma physics
The field of plasma physics deals with phenomena of electromagnetic nature that involve very high temperatures. It is customary to express temperature as energy in a unit related to the electronvolt or kiloelectronvolt (eV/kB or keV/kB). The corresponding energy, which is dimensionally distinct from temperature, is then calculated as the product of the Boltzmann constant and temperature, E = kB T. Then, 1 eV/kB is about 11,605 K. In the study of QCD matter one routinely encounters temperatures of the order of a few hundred MeV/kB, equivalent to about 10^12 K.
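The conversion between "temperature in electronvolts" and kelvins follows directly from T[K] = E[eV]·e/kB. A small sketch, using CODATA values for the constants and illustrative input temperatures:

```python
# Plasma-physics convention: a temperature quoted in eV corresponds to
# T[K] = (E[eV] * e) / k_B.
E_CHARGE = 1.602176634e-19  # J per eV
K_B = 1.380649e-23          # J/K

def ev_to_kelvin(t_ev):
    return t_ev * E_CHARGE / K_B

print(ev_to_kelvin(1.0))    # ~11,605 K
print(ev_to_kelvin(200e6))  # ~2.3e12 K for a QCD-scale "temperature" of 200 MeV
```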
Continuous or discrete
When one measures the variation of temperature across a region of space or time, are the temperature measurements continuous or discrete? There is a widely held misconception that such measurements must always be continuous. This misconception partly originates from the historical view of the continuity of classical physical quantities, which holds that a physical quantity must pass through every intermediate value between a starting value and a final value. However, the classical picture holds only when temperature is measured in a system that is in equilibrium; temperature may not be continuous outside these conditions. For systems outside equilibrium, such as at interfaces between materials (e.g., a metal/non-metal interface or a liquid–vapour interface), temperature measurements may show steep discontinuities in time and space. For instance, Fang and Ward were among the first authors to report temperature discontinuities of as much as 7.8 K at the surface of evaporating water droplets. This was reported at inter-molecular scales, or at the scale of the mean free path of molecules, which is typically of the order of a few micrometres in gases at room temperature. Generally speaking, temperature discontinuities are considered the norm rather than the exception in cases of interfacial heat transfer. This is due to the abrupt change in the vibrational or thermal properties of the materials across such interfaces, which prevents the instantaneous transfer of heat and the establishment of thermal equilibrium (a prerequisite for a uniform equilibrium temperature across the interface). Further, temperature measurements at the macro-scale (the typical observational scale) may be too coarse-grained, as they average out the microscopic thermal information over the representative sample volume of the control system, so temperature discontinuities at the micro-scale are likely to be overlooked in such averages. Such averaging may even produce incorrect or misleading results in many cases of temperature measurement, even at macro-scales, and it is therefore prudent to examine the micro-physical information carefully before averaging or smoothing out any potential temperature discontinuities in a system, as such discontinuities cannot always be averaged or smoothed out. Rather than being mere anomalies, temperature discontinuities have substantially improved our understanding of, and ability to predict, heat transfer at small scales.
Theoretical foundation
Historically, there are several scientific approaches to the explanation of temperature: the classical thermodynamic description based on macroscopic empirical variables that can be measured in a laboratory; the kinetic theory of gases which relates the macroscopic description to the probability distribution of the energy of motion of gas particles; and a microscopic explanation based on statistical physics and quantum mechanics. In addition, rigorous and purely mathematical treatments have provided an axiomatic approach to classical thermodynamics and temperature. Statistical physics provides a deeper understanding by describing the atomic behavior of matter and derives macroscopic properties from statistical averages of microscopic states, including both classical and quantum states. In the fundamental physical description, the temperature may be measured directly in units of energy. However, in the practical systems of measurement for science, technology, and commerce, such as the modern metric system of units, the macroscopic and the microscopic descriptions are interrelated by the Boltzmann constant, a proportionality factor that scales temperature to the microscopic mean kinetic energy.
The microscopic description in statistical mechanics is based on a model that analyzes a system into its fundamental particles of matter or into a set of classical or quantum-mechanical oscillators and considers the system as a statistical ensemble of microstates. For a collection of classical material particles, the temperature is a measure of the mean energy of motion, called translational kinetic energy, of the particles, whether in solids, liquids, gases, or plasmas. The kinetic energy, a concept of classical mechanics, is half the mass of a particle times its speed squared. In this mechanical interpretation of thermal motion, the kinetic energies of material particles may reside in the velocity of the particles of their translational or vibrational motion or in the inertia of their rotational modes. In monatomic perfect gases and, approximately, in most gases and in simple metals, the temperature is a measure of the mean particle translational kinetic energy, 3/2 kBT. It also determines the probability distribution function of energy. In condensed matter, and particularly in solids, this purely mechanical description is often less useful and the oscillator model provides a better description to account for quantum mechanical phenomena. Temperature determines the statistical occupation of the microstates of the ensemble. The microscopic definition of temperature is only meaningful in the thermodynamic limit, meaning for large ensembles of states or particles, to fulfill the requirements of the statistical model.
Kinetic energy is also considered as a component of thermal energy. The thermal energy may be partitioned into independent components attributed to the degrees of freedom of the particles or to the modes of oscillators in a thermodynamic system. In general, the number of these degrees of freedom that are available for the equipartitioning of energy depends on the temperature, i.e. the energy region of the interactions under consideration. For solids, the thermal energy is associated primarily with the vibrations of its atoms or molecules about their equilibrium position. In an ideal monatomic gas, the kinetic energy is found exclusively in the purely translational motions of the particles. In other systems, vibrational and rotational motions also contribute degrees of freedom.
Kinetic theory of gases
Maxwell and Boltzmann developed a kinetic theory that yields a fundamental understanding of temperature in gases.
This theory also explains the ideal gas law and the observed heat capacity of monatomic (or 'noble') gases.
The ideal gas law is based on observed empirical relationships between pressure (p), volume (V), and temperature (T), and was recognized long before the kinetic theory of gases was developed (see Boyle's and Charles's laws). The ideal gas law states:

pV = nRT ,

where n is the number of moles of gas and R is the gas constant.
This relationship gives us our first hint that there is an absolute zero on the temperature scale, because it only holds if the temperature is measured on an absolute scale such as Kelvin's. The ideal gas law allows one to measure temperature on this absolute scale using the gas thermometer. The temperature in kelvins can be defined as the pressure in pascals of one mole of gas in a container of one cubic meter, divided by the gas constant.
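The gas-thermometer reading described above is simply T = pV/(nR). The pressure and volume in the sketch below are illustrative values chosen to reproduce a familiar result:

```python
# Gas-thermometer relation from the ideal gas law: T = pV / (nR).
R = 8.314462618  # J/(mol*K), gas constant

def ideal_gas_temperature(p_pascal, v_m3, n_mol):
    return p_pascal * v_m3 / (n_mol * R)

# One mole of gas at ~101,325 Pa occupying 22.4 litres:
print(ideal_gas_temperature(101325.0, 0.0224, 1.0))  # ~273 K
```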
Although it is not a particularly convenient device, the gas thermometer provides an essential theoretical basis by which all thermometers can be calibrated. As a practical matter, it is not possible to use a gas thermometer to measure absolute zero temperature since the gases condense into a liquid long before the temperature reaches zero. It is possible, however, to extrapolate to absolute zero by using the ideal gas law, as shown in the figure.
The kinetic theory assumes that pressure is caused by the force associated with individual atoms striking the walls, and that all energy is translational kinetic energy. Using a sophisticated symmetry argument, Boltzmann deduced what is now called the Maxwell–Boltzmann probability distribution function for the velocity of particles in an ideal gas. From that probability distribution function, the average kinetic energy (per particle) of a monatomic ideal gas is

E_k = (1/2) m v_rms² = (3/2) k_B T ,
where the Boltzmann constant k_B is the ideal gas constant divided by the Avogadro number, and v_rms is the root-mean-square speed. This direct proportionality between temperature and mean molecular kinetic energy is a special case of the equipartition theorem, and holds only in the classical limit of a perfect gas. It does not hold exactly for most substances.
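The relation above can be inverted to give the root-mean-square speed, v_rms = sqrt(3 k_B T / m). The sketch below evaluates it for helium as an illustrative case:

```python
# Kinetic-theory relation (3/2) k_B T = (1/2) m <v^2>, i.e. v_rms = sqrt(3 k_B T / m).
import math

K_B = 1.380649e-23        # J/K
M_HELIUM = 6.6464731e-27  # kg, mass of one helium-4 atom

def v_rms(temperature_k, particle_mass_kg):
    return math.sqrt(3 * K_B * temperature_k / particle_mass_kg)

print(v_rms(300.0, M_HELIUM))  # ~1370 m/s at room temperature
```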
Zeroth law of thermodynamics
When two otherwise isolated bodies are connected together by a rigid physical path impermeable to matter, there is the spontaneous transfer of energy as heat from the hotter to the colder of them. Eventually, they reach a state of mutual thermal equilibrium, in which heat transfer has ceased, and the bodies' respective state variables have settled to become unchanging.
One statement of the zeroth law of thermodynamics is that if two systems are each in thermal equilibrium with a third system, then they are also in thermal equilibrium with each other.
This statement helps to define temperature but it does not, by itself, complete the definition. An empirical temperature is a numerical scale for the hotness of a thermodynamic system. Such hotness may be defined as existing on a one-dimensional manifold, stretching between hot and cold. Sometimes the zeroth law is stated to include the existence of a unique universal hotness manifold, and of numerical scales on it, so as to provide a complete definition of empirical temperature. To be suitable for empirical thermometry, a material must have a monotonic relation between hotness and some easily measured state variable, such as pressure or volume, when all other relevant coordinates are fixed. An exceptionally suitable system is the ideal gas, which can provide a temperature scale that matches the absolute Kelvin scale. The Kelvin scale is defined on the basis of the second law of thermodynamics.
Second law of thermodynamics
As an alternative to considering or defining the zeroth law of thermodynamics, it was the historical development in thermodynamics to define temperature in terms of the second law of thermodynamics which deals with entropy. The second law states that any process will result in either no change or a net increase in the entropy of the universe. This can be understood in terms of probability.
For example, in a series of coin tosses, a perfectly ordered system would be one in which either every toss comes up heads or every toss comes up tails; the outcome of every toss is then the same. In contrast, many mixed (disordered) outcomes are possible, and their number increases with each toss. Eventually, the combinations of ~50% heads and ~50% tails dominate, and obtaining an outcome significantly different from 50/50 becomes increasingly unlikely. Thus the system naturally progresses to a state of maximum disorder or entropy.
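A small counting sketch makes the coin-toss argument concrete: the number of sequences with k heads out of n tosses is the binomial coefficient C(n, k), which is overwhelmingly concentrated near k = n/2. The choice of n = 100 tosses is illustrative.

```python
# Counting microstates for the coin-toss analogy.
from math import comb

n = 100
print(comb(n, 50) / 2**n)   # probability of exactly 50 heads (~0.08)
print(comb(n, 100) / 2**n)  # probability of all heads (~8e-31)
print(sum(comb(n, k) for k in range(40, 61)) / 2**n)  # P(40..60 heads), ~0.96
```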
As temperature governs the transfer of heat between two systems and the universe tends to progress toward a maximum of entropy, it is expected that there is some relationship between temperature and entropy. A heat engine is a device for converting thermal energy into mechanical energy, resulting in the performance of work. An analysis of the Carnot heat engine provides the necessary relationships. According to energy conservation and energy being a state function that does not change over a full cycle, the work from a heat engine over a full cycle is equal to the net heat, i.e. the sum of the heat put into the system at high temperature, qH > 0, and the waste heat given off at the low temperature, qC < 0.
The efficiency is the work divided by the heat input:

efficiency = w_cy / q_H = (q_H + q_C) / q_H = 1 + q_C / q_H ,    (4)
where wcy is the work done per cycle. The efficiency depends only on |qC|/qH. Because qC and qH correspond to heat transfer at the temperatures TC and TH, respectively, |qC|/qH should be some function of these temperatures:

|q_C| / q_H = f(T_H, T_C) .    (5)
Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, a heat engine operating between T1 and T3 must have the same efficiency as one consisting of two cycles, one between T1 and T2, and the second between T2 and T3. This can only be the case if

q_3 / q_1 = (q_2 / q_1) (q_3 / q_2) ,
which implies

f(T_1, T_3) = f(T_1, T_2) f(T_2, T_3) .
Since the first function is independent of T2, this temperature must cancel on the right side, meaning f(T1, T3) is of the form g(T1)/g(T3) (i.e. f(T1, T3) = f(T1, T2) f(T2, T3) = [g(T1)/g(T2)]·[g(T2)/g(T3)] = g(T1)/g(T3)), where g is a function of a single temperature. A temperature scale can now be chosen with the property that

|q_C| / q_H = T_C / T_H .    (6)
Substituting (6) back into (4) gives a relationship for the efficiency in terms of temperature:

efficiency = 1 + q_C / q_H = 1 − T_C / T_H .    (7)
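Relation (7) is easy to evaluate numerically. The reservoir temperatures in the sketch below are illustrative values only:

```python
# Carnot efficiency of a reversible engine: efficiency = 1 - T_C / T_H.
def carnot_efficiency(t_hot_k, t_cold_k):
    return 1.0 - t_cold_k / t_hot_k

print(carnot_efficiency(600.0, 300.0))  # 0.5: half the input heat becomes work
print(carnot_efficiency(600.0, 0.0))    # 1.0: the limiting case T_C = 0 K
```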
For TC = 0 K the efficiency is 100%, and the efficiency would become greater than 100% below 0 K. Since an efficiency greater than 100% violates the first law of thermodynamics, this implies that 0 K is the minimum possible temperature. In fact, the lowest temperature ever obtained in a macroscopic system was 20 nK, which was achieved in 1995 at NIST. Subtracting the right hand side of (5) from the middle portion and rearranging gives

q_H / T_H + q_C / T_C = 0 ,
where the negative sign indicates heat ejected from the system. This relationship suggests the existence of a state function, S, whose change characteristically vanishes for a complete cycle if it is defined by

dS = dq_rev / T ,    (8)
where the subscript indicates a reversible process. This function corresponds to the entropy of the system, which was described previously. Rearranging (8) gives a formula for temperature in terms of fictive infinitesimal quasi-reversible elements of entropy and heat:

T = dq_rev / dS .    (9)
For a constant-volume system where entropy S(E) is a function of its energy E, dE = dq_rev and (9) gives

1/T = dS/dE ,    (10)
i.e. the reciprocal of the temperature is the rate of increase of entropy with respect to energy at constant volume.
Definition from statistical mechanics
Statistical mechanics defines temperature based on a system's fundamental degrees of freedom. Eq. (10) is the defining relation of temperature, where the entropy is defined (up to a constant) by the logarithm of the number of microstates of the system in the given macrostate (as specified in the microcanonical ensemble):

S = k_B ln W ,
where k_B is the Boltzmann constant and W is the number of microstates with the energy E of the system (degeneracy).
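The Boltzmann entropy can be evaluated directly for a toy system. The sketch below assumes a hypothetical collection of n independent two-state particles, where the macrostate "k particles excited" contains W = C(n, k) microstates; the numbers are illustrative.

```python
# Microcanonical entropy S = k_B ln W for a toy two-state system.
import math

K_B = 1.380649e-23  # J/K

def boltzmann_entropy(n, k):
    W = math.comb(n, k)   # number of microstates in the macrostate "k excited"
    return K_B * math.log(W)

print(boltzmann_entropy(100, 50))  # largest for the evenly mixed macrostate
print(boltzmann_entropy(100, 5))   # far smaller for a highly ordered macrostate
```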
When two systems with different temperatures are put into purely thermal connection, heat will flow from the higher temperature system to the lower temperature one; thermodynamically this is understood by the second law of thermodynamics: The total change in entropy following a transfer of energy ΔE from system 1 to system 2 is:

ΔS = −ΔE/T_1 + ΔE/T_2 = ΔE (1/T_2 − 1/T_1) ,
and is thus positive if T_1 > T_2.
From the point of view of statistical mechanics, the total number of microstates in the combined system 1 + system 2 is W_1 · W_2, the logarithm of which (times the Boltzmann constant) is the sum of their entropies; thus a flow of heat from high to low temperature, which brings an increase in total entropy, is more likely than any other scenario (normally it is much more likely), as there are more microstates in the resulting macrostate.
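The entropy bookkeeping above reduces to a one-line check: moving a small amount of energy dE from system 1 (at T1) to system 2 (at T2) changes the total entropy by dE·(1/T2 − 1/T1), positive whenever T1 > T2. The temperatures below are illustrative:

```python
# Total entropy change for a small energy transfer from system 1 to system 2.
def entropy_change(dE, T1, T2):
    return dE * (1.0 / T2 - 1.0 / T1)

print(entropy_change(1.0, 400.0, 300.0))  # > 0: heat flowing hot -> cold
print(entropy_change(1.0, 300.0, 400.0))  # < 0: the reverse would lower total entropy
```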
Generalized temperature from single-particle statistics
It is possible to extend the definition of temperature even to systems of few particles, like in a quantum dot. The generalized temperature is obtained by considering time ensembles instead of configuration-space ensembles given in statistical mechanics in the case of thermal and particle exchange between a small system of fermions (N even less than 10) with a single/double-occupancy system. The finite quantum grand canonical ensemble, obtained under the hypothesis of ergodicity and orthodicity, allows expressing the generalized temperature from the ratio of the average time of occupation and of the single/double-occupancy system:
where EF is the Fermi energy. This generalized temperature tends to the ordinary temperature when N goes to infinity.
Negative temperature
On the empirical temperature scales that are not referenced to absolute zero, a negative temperature is one below the zero point of the scale used. For example, dry ice has a sublimation temperature of −78.5 °C, which is equivalent to −109.3 °F. On the absolute Kelvin scale this temperature is 194.6 K. No body of matter can be brought to exactly 0 K (the temperature of the ideally coldest possible body) by any finite practicable process; this is a consequence of the third law of thermodynamics.
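The dry-ice example is just a matter of scale conversion, sketched below with the standard Celsius-to-Fahrenheit and Celsius-to-Kelvin formulas:

```python
# Temperature-scale conversions for the dry-ice example.
def c_to_f(t_c):
    return t_c * 9.0 / 5.0 + 32.0

def c_to_k(t_c):
    return t_c + 273.15

print(c_to_f(-78.5))  # ~-109.3 degrees Fahrenheit
print(c_to_k(-78.5))  # ~194.7 K: negative on Celsius and Fahrenheit, positive in kelvins
```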
The kinetic theory holds that the temperature of a body of matter cannot take negative values. The thermodynamic temperature scale, however, is not so constrained.
A subsystem of a body of matter can sometimes be conceptually defined in terms of microscopic degrees of freedom, namely particle spins, with a temperature other than that of the whole body. When the body is in its own state of internal thermodynamic equilibrium, the temperatures of the whole body and of the subsystem must be the same. The two temperatures can differ when, by work through externally imposed force fields, energy can be transferred to and from the subsystem, separately from the rest of the body; then the whole body is not in its own state of internal thermodynamic equilibrium. There is an upper limit to the energy such a spin subsystem can attain.
Considering the subsystem to be in a temporary state of virtual thermodynamic equilibrium, obtaining a negative temperature on the thermodynamic scale is possible. Thermodynamic temperature is the inverse of the derivative of the subsystem's entropy for its internal energy. As the subsystem's internal energy increases, the entropy increases for some range but eventually attains a maximum value and then begins to decrease as the highest energy states begin to fill. At the point of maximum entropy, the temperature function shows the behavior of a singularity because the slope of the entropy as a function of energy decreases to zero and then turns negative. As the subsystem's entropy reaches its maximum, its thermodynamic temperature goes to positive infinity, switching to negative infinity as the slope turns negative. Such negative temperatures are hotter than any positive temperature. Over time, when the subsystem is exposed to the rest of the body, which has a positive temperature, energy is transferred as heat from the negative temperature subsystem to the positive temperature system. The kinetic theory temperature is not defined for such subsystems.
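The sign change described above can be made concrete with a toy model. The sketch below assumes a hypothetical subsystem of n two-level spins, each with excitation energy EPS (an illustrative value), so that S = k_B ln C(n, k) for k excited spins; 1/T = dS/dE turns negative once more than half the spins are excited.

```python
# Two-level spin subsystem: temperature from a finite-difference estimate of dS/dE.
import math

K_B = 1.380649e-23  # J/K
EPS = 1e-21         # J, energy of one excited spin (illustrative value)

def temperature(n, k):
    # E(k) = k * EPS, so the energy step from k-1 to k+1 is 2 * EPS;
    # T = 1 / (dS/dE) estimated by central differences of S = k_B ln C(n, k).
    dS = K_B * (math.log(math.comb(n, k + 1)) - math.log(math.comb(n, k - 1)))
    return (2 * EPS) / dS

n = 1000
print(temperature(n, 200))  # positive T: fewer than half the spins excited
print(temperature(n, 800))  # negative T: population inversion
```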
Examples
| Physical sciences | Physics | null |
20647108 | https://en.wikipedia.org/wiki/Red%20fox | Red fox | The red fox (Vulpes vulpes) is the largest of the true foxes and one of the most widely distributed members of the order Carnivora, being present across the entire Northern Hemisphere including most of North America, Europe and Asia, plus parts of North Africa. It is listed as least concern on the IUCN Red List. Its range has increased alongside human expansion, having been introduced to Australia, where it is considered harmful to native small and medium-sized rodents and marsupials. Due to its impact on native species, it is included on the list of the "world's 100 worst invasive species".
The red fox originated in Eurasia during the Middle Pleistocene at least 400,000 years ago and later colonised North America sometime prior to 130,000 years ago. Among the true foxes, the red fox represents a more progressive form in the direction of carnivory. Apart from its large size, the red fox is distinguished from other fox species by its ability to adapt quickly to new environments. Despite its name, the species often produces individuals with other colourings, including leucistic and melanistic individuals. Forty-five subspecies are currently recognised, which are divided into two categories: the large northern foxes and the small, basal southern grey desert foxes of Asia and North Africa.
Red foxes are usually found in pairs or small groups consisting of families, such as a mated pair and their young, or a male with several females having kinship ties. The young of the mated pair remain with their parents to assist in caring for new kits. The species primarily feeds on small rodents, though it may also target rabbits, squirrels, game birds, reptiles, invertebrates and young ungulates. Fruit and vegetable matter is also eaten sometimes. Although the red fox tends to kill smaller predators, including other fox species, it is vulnerable to attack from larger predators, such as wolves, coyotes, golden jackals, large predatory birds such as golden eagles and Eurasian eagle owls, and medium- and large-sized felids.
The species has a long history of association with humans, having been extensively hunted as a pest and furbearer for many centuries, as well as being represented in human folklore and mythology. Because of its widespread distribution and large population, the red fox is one of the most important furbearing animals harvested for the fur trade. Too small to pose a threat to humans, it has extensively benefited from the presence of human habitation, and has successfully colonised many suburban and urban areas. Domestication of the red fox is also underway in Russia, and has resulted in the domesticated silver fox.
Terminology
Males are called tods or dogs, females are called vixens, and young are known as cubs or kits. Although the Arctic fox has a small native population in northern Scandinavia, and while the corsac fox's range extends into European Russia, the red fox is the only fox native to Western Europe, and so is simply called "the fox" in colloquial British English.
Etymology
The word "fox" comes from Old English, which derived from Proto-Germanic *fuhsaz. Compare with West Frisian foks, Dutch , and German . This, in turn, derives from Proto-Indo-European *puḱ- 'thick-haired; tail'. Compare to the Hindi pū̃ch 'tail', Tocharian B päkā 'tail; chowrie', and Lithuanian 'fur / fluff'. The bushy tail also forms the basis for the fox's Welsh name, , literally 'bushy', from 'bush'. Likewise, from rabo 'tail', Lithuanian from uodegà 'tail', and Ojibwe waagosh from waa, which refers to the up and down "bounce" or flickering of an animal or its tail.
The scientific term vulpes derives from the Latin word for fox, and gives the adjectives vulpine and vulpecular.
Evolution
The red fox is considered to be a more specialised form of Vulpes than the Afghan, corsac and Bengal fox species, in regards to their overall size and adaptation to carnivory; the skull displays far fewer neotenous traits than in other foxes, and its facial area is more developed. It is, however, not as adapted for a purely carnivorous diet as the Tibetan fox.
The sister lineage to the red fox is the Rüppell's fox, but the two species are surprisingly closely related through mitochondrial DNA markers, with Rüppell's fox nested inside the lineages of red foxes. Such a nesting of one species within another is called paraphyly. Several hypotheses have been suggested to explain this, including (1) recent divergence of Rüppell's fox from a red fox lineage, (2) incomplete lineage sorting, or (3) introgression of mtDNA between the two species. Based on fossil record evidence, the last scenario seems most likely, which is further supported by the clear ecological and morphological differences between the two species.
Origins
The species is Eurasian in origin, and may have evolved from either Vulpes alopecoides or the related Chinese V. chikushanensis, both of which lived during the Middle Villafranchian of the Pleistocene Epoch. The earliest fossil specimens of V. vulpes were uncovered in Baranya County, Hungary, dating from 3.4 to 1.8 million years ago. The ancestral red fox was likely smaller than living foxes, as the earliest red fox fossils show a smaller build than living specimens. The earliest fossil remains of the modern species date back to the mid-Pleistocene, found in association with middens and refuse left by early human settlements. This has led to the theory that the red fox was hunted by primitive humans (as both a source of food and pelts); the possibility also exists of red foxes scavenging from middens or butchered animal carcasses.
Colonisation of North America
Red foxes colonised the North American continent in two waves: before and during the Illinoian glaciation, and during the Wisconsinan glaciation. Gene mapping demonstrates that red foxes in North America have been isolated from their Old World counterparts for over 400,000 years, thus raising the possibility that speciation has occurred, and that the previous binomial name of Vulpes fulva may be valid. In the far north, red fox fossils have been found in Sangamonian Stage deposits near the Fairbanks District, Alaska, and Medicine Hat, Alberta. Fossils dating from the Wisconsinan are present in 25 sites across Arkansas, California, Colorado, Idaho, Missouri, New Mexico, Ohio, Tennessee, Texas, Virginia, and Wyoming. Although they ranged far south during the Wisconsinan, the onset of warm conditions shrank their range toward the north, and they have only recently reclaimed their former North American ranges because of human-induced environmental changes. Genetic testing indicates that two distinct red fox refugia exist in North America, which have been separated since the Wisconsinan. The northern (or boreal) refugium occurs in Alaska and western Canada, and consists of the larger subspecies V. v. alascensis, V. v. abietorum, V. v. regalis, and V. v. rubricosa. The southern (or montane) refugium occurs in the subalpine parklands and alpine meadows of the west, from the Rocky Mountains to the Cascades and the Sierra Nevada ranges, consisting of the smaller subspecies V. v. cascadensis, V. v. macroura, V. v. necator, and V. v. patwin. The latter clade has been separated from all other red fox populations since at least the last glacial maximum, and may possess unique ecological or physiological adaptations.
Although European foxes (V. v. crucigera) were introduced to portions of the United States in the 1900s, recent genetic investigation indicates an absence of European fox mitochondrial haplotypes in any North American populations. Additionally, introduced eastern North American red foxes have colonised most of inland California, from Southern California to the San Joaquin Valley, Monterey and north-coastal San Francisco Bay Area (including urban San Francisco and adjacent cities). In spite of the red fox's adaptability to city life, they are still found in somewhat greater numbers in the northern portions of California (north of the Bay Area) than in the south, as the wilderness is more alpine and isolated. The eastern red foxes appear to have mixed with the Sacramento Valley red fox (V. v. patwin) only in a narrow hybrid zone. In addition, no evidence is seen of interbreeding of eastern American red foxes in California with the montane Sierra Nevada red fox (V. v. necator) or other populations in the Intermountain West (between the Rocky Mountains to the east and the Cascade and Sierra Nevada Mountains to the west).
Subspecies
The 3rd edition of Mammal Species of the World listed 45 subspecies as valid. In 2010, a distinct 46th subspecies, the Sacramento Valley red fox (V. v. patwin), which inhabits the grasslands of the Sacramento Valley, was identified through mitochondrial haplotype studies. Castello (2018) recognized 30 subspecies of the Old World red fox and nine subspecies of the North American red fox as valid.
Substantial gene pool mixing between different subspecies is known; British red foxes have crossbred extensively with red foxes imported from Germany, France, Belgium, Sardinia and possibly Siberia and Scandinavia. However, genetic studies suggest very little differences between red foxes sampled across Europe. Lack of genetic diversity is consistent with the red fox being a highly vagile species, with one red fox covering in under a year's time.
Red fox subspecies in Eurasia and North Africa are divided into two categories:
Northern foxes are large and brightly coloured.
Southern grey desert foxes include the Asian subspecies V. v. griffithi, V. v. pusilla, and V. v. flavescens. These foxes display transitional features between the northern foxes and other, smaller fox species; their skulls possess more primitive, neotenous traits than the northern foxes and they are much smaller; the maximum sizes attained by southern grey desert foxes are invariably less than the average sizes of northern foxes. Their limbs are also longer and their ears larger.
Red foxes living in Middle Asia show physical traits intermediate to the northern foxes and southern grey desert foxes.
Description
Build
The red fox has an elongated body and relatively short limbs. The tail, which is longer than half the body length (70 percent of head and body length), is fluffy and reaches the ground when in a standing position. Their pupils are oval and vertically oriented. Nictitating membranes are present, but move only when the eyes are closed. The forepaws have five digits, while the hind feet have only four and lack dewclaws. They are very agile, being capable of jumping over high fences, and swim well. Vixens normally have four pairs of teats, though vixens with seven, nine, or ten teats are not uncommon. The testes of males are smaller than those of Arctic foxes.
Their skulls are fairly narrow and elongated, with small braincases. Their canine teeth are relatively long. Sexual dimorphism of the skull is more pronounced than in corsac foxes, with female red foxes tending to have smaller skulls than males, with wider nasal regions and hard palates, as well as having larger canines. Their skulls are distinguished from those of dogs by their narrower muzzles, less crowded premolars, more slender canine teeth, and concave rather than convex profiles.
Dimensions
Red foxes are the largest species of the genus Vulpes. However, relative to dimensions, red foxes are much lighter than similarly sized dogs of the genus Canis. Their limb bones, for example, weigh 30 percent less per unit area of bone than expected for similarly sized dogs. They display significant individual, sexual, age and geographical variation in size. On average, adults measure high at the shoulder and in body length with tails measuring . The ears measure and the hind feet . Weights range from , with vixens typically weighing 15–20% less than males. Adult red foxes have skulls measuring , while those of vixens measure . The forefoot print measures in length and in width, while the hind foot print measures long and wide. They trot at a speed of , and have a maximum running speed of . They have a stride of when walking at a normal pace. North American red foxes are generally lightly built, with comparatively long bodies for their mass and have a high degree of sexual dimorphism. British red foxes are heavily built, but short, while continental European red foxes are closer to the general average among red fox populations. The largest red fox on record in Great Britain was a long male, that weighed , killed in Aberdeenshire, Scotland, in early 2012.
Fur
The winter fur is dense, soft, silky and relatively long. For the northern foxes, the fur is very long, dense and fluffy, but it is shorter, sparser and coarser in southern forms. Among northern foxes, the North American varieties generally have the silkiest guard hairs, while most Eurasian red foxes have coarser fur. The fur in "thermal windows" areas such as the head and the lower legs is kept dense and short all year round, while fur in other areas changes with the seasons. The foxes actively control the peripheral vasodilation and peripheral vasoconstriction in these areas to regulate heat loss. There are three main colour morphs; red, silver/black and cross (see Mutations). In the typical red morph, their coats are generally bright reddish-rusty with yellowish tints. A stripe of weak, diffuse patterns of many brown-reddish-chestnut hairs occurs along the spine. Two additional stripes pass down the shoulder blades, which, together with the spinal stripe, form a cross. The lower back is often a mottled silvery colour. The flanks are lighter coloured than the back, while the chin, lower lips, throat and front of the chest are white. The remaining lower surface of the body is dark, brown or reddish. During lactation, the belly fur of vixens may turn brick red. The upper parts of the limbs are rusty reddish, while the paws are black. The frontal part of the face and upper neck is bright brownish-rusty red, while the upper lips are white. The backs of the ears are black or brownish-reddish, while the inner surface is whitish. The top of the tail is brownish-reddish, but lighter in colour than the back and flanks. The underside of the tail is pale grey with a straw-coloured tint. A black spot, the location of the supracaudal gland, is usually present at the base of the tail. The tip of the tail is white.
Colour morphs
Atypical colouration in the red fox usually represents stages toward full melanism, and mostly occurs in cold regions.
Senses
Red foxes have binocular vision, but their sight reacts mainly to movement. Their auditory perception is acute, being able to hear black grouse changing roosts at 600 paces, the flight of crows at and the squeaking of mice at about . They are capable of locating sounds to within one degree at 700–3,000 Hz, though less accurately at higher frequencies. Their sense of smell is good, but weaker than that of specialised dogs.
Scent glands
Red foxes have a pair of anal sacs lined by sebaceous glands, both of which open through a single duct. The size and volume of the anal sacs increases with age, ranging in size from 5–40mm in length, 1–3mm in diameter, and with a capacity of 1–5 mL. The anal sacs act as fermentation chambers in which aerobic and anaerobic bacteria convert sebum into odorous compounds, including aliphatic acids. The oval-shaped caudal gland is long and wide, and reportedly smells of violets. The presence of foot glands is equivocal. The interdigital cavities are deep, with a reddish tinge and smell strongly. Sebaceous glands are present on the angle of the jaw and mandible.
Distribution and habitat
The red fox is a wide-ranging species. Its range covers nearly including as far north as the Arctic Circle. It occurs all across Europe, in Africa north of the Sahara Desert, throughout Asia apart from extreme Southeast Asia, and across North America apart from most of the southwestern United States and Mexico. It is absent in the Arctic islands, the most northern parts of central Siberia, and in extreme deserts.
It is not present in New Zealand and is classed as a "prohibited new organism" under the Hazardous Substances and New Organisms Act 1996, which does not allow import.
Australia
In Australia, estimates in 2012 indicated that there were more than 7.2 million red foxes, with a range extending throughout most of the continental mainland. They became established in Australia through successive introductions in the 1830s and 1840s, by settlers in the British colonies of Van Diemen's Land (as early as 1833) and the Port Phillip District of New South Wales (as early as 1845), who wanted to foster the traditional English sport of fox hunting. A permanent red fox population did not establish itself on the island of Tasmania, and it is widely held that foxes were out-competed by the Tasmanian devil. On the mainland, however, the species was successful as an apex predator. The fox is generally less common in areas where the dingo is more prevalent, but it has, primarily through its burrowing behaviour, achieved niche differentiation with both the feral dog and the feral cat. Consequently, the fox has become one of the continent's most destructive invasive species.
The red fox has been implicated in the extinction or decline of several native Australian species, particularly those of the family Potoroidae, including the desert rat-kangaroo. The spread of red foxes across the southern part of the continent has coincided with the spread of rabbits in Australia, and corresponds with declines in the distribution of several medium-sized ground-dwelling mammals, including brush-tailed bettongs, burrowing bettongs, rufous bettongs, bilbies, numbats, bridled nail-tail wallabies and quokkas. Most of those species are now limited to areas (such as islands) where red foxes are absent or rare. Local fox eradication programs exist, although elimination has proven difficult due to the fox's denning behaviour and nocturnal hunting, so the focus is on management, including the introduction of state bounties. According to the Tasmanian government, red foxes were accidentally introduced to the previously fox-free island of Tasmania in 1999 or 2000, posing a significant threat to native wildlife, including the eastern bettong, and an eradication program was initiated, conducted by the Tasmanian Department of Primary Industries and Water.
Sardinia, Italy
The origin of the ichnusae subspecies in Sardinia, Italy is uncertain, as it is absent from Pleistocene deposits in their current homeland. It is possible it originated during the Neolithic following its introduction to the island by humans. It is likely then that Sardinian fox populations stem from repeated introductions of animals from different localities in the Mediterranean. This latter theory may explain the subspecies' phenotypic diversity.
Behaviour
Social and territorial behaviour
Red foxes either establish stable home ranges within particular areas or are itinerant with no fixed abode. They use their urine to mark their territories. A male fox raises one hind leg and his urine is sprayed forward in front of him, whereas a female fox squats down so that the urine is sprayed in the ground between the hind legs. Urine is also used to mark empty cache sites, used to store found food, as reminders not to waste time investigating them. Males generally have higher urine marking rates during late summer and autumn, but the rest of the year the rates between male and female are similar. The use of up to 12 different urination postures allows them to precisely control the position of the scent mark. Red foxes live in family groups sharing a joint territory. In favourable habitats and/or areas with low hunting pressure, subordinate foxes may be present in a range. Subordinate foxes may number one or two, sometimes up to eight in one territory. These subordinates could be formerly dominant animals, but are mostly young from the previous year, who act as helpers in rearing the breeding vixen's kits. Alternatively, their presence has been explained as being in response to temporary surpluses of food unrelated to assisting reproductive success. Non-breeding vixens will guard, play, groom, provision and retrieve kits, an example of kin selection. Red foxes may leave their families once they reach adulthood if the chances of winning a territory of their own are high. If not, they will stay with their parents, at the cost of postponing their own reproduction.
Reproduction and development
Red foxes reproduce once a year in spring. Two months prior to oestrus (typically December), the reproductive organs of vixens change shape and size. By the time they enter their oestrus period, their uterine horns double in size, and their ovaries grow 1.5–2 times larger. Sperm formation in males begins in August–September, with the testicles attaining their greatest weight in December–February. The vixen's oestrus period lasts three weeks, during which the dog-foxes mate with the vixens for several days, often in burrows. The male's bulbus glandis enlarges during copulation, forming a copulatory tie which may last for more than an hour. The gestation period lasts 49–58 days. Though foxes are largely monogamous, DNA evidence from one population indicated large levels of polygyny, incest and mixed paternity litters. Subordinate vixens may become pregnant, but usually fail to whelp, or have their kits killed postpartum by either the dominant female or other subordinates.
The average litter size consists of four to six kits, though litters of up to 13 kits have occurred. Large litters are typical in areas where fox mortality is high. Kits are born blind, deaf and toothless, with dark brown fluffy fur. At birth, they weigh and measure in body length and in tail length. At birth, they are short-legged, large-headed and have broad chests. Mothers remain with the kits for 2–3 weeks, as they are unable to thermoregulate. During this period, the fathers or barren vixens feed the mothers. Vixens are very protective of their kits, and have been known to even fight off terriers in their defence. If the mother dies before the kits are independent, the father takes over as their provider. The kits' eyes open after 13–15 days, during which time their ear canals open and their upper teeth erupt, with the lower teeth emerging 3–4 days later. Their eyes are initially blue, but change to amber at 4–5 weeks. Coat colour begins to change at three weeks of age, when the black eye streak appears. By one month, red and white patches are apparent on their faces. During this time, their ears erect and their muzzles elongate. Kits begin to leave their dens and experiment with solid food brought by their parents at the age of 3–4 weeks. The lactation period lasts 6–7 weeks. Their woolly coats begin to be coated by shiny guard hairs after 8 weeks. By the age of 3–4 months, the kits are long-legged, narrow-chested and sinewy. They reach adult proportions at the age of 6–7 months. Some vixens may reach sexual maturity at the age of 9–10 months, thus bearing their first litters at one year of age. In captivity, their longevity can be as long as 15 years, though in the wild they typically do not survive past 5 years of age.
Denning behaviour
Outside the breeding season, most red foxes favour living in the open, in densely vegetated areas, though they may enter burrows to escape bad weather. Their burrows are often dug on hill or mountain slopes, ravines, bluffs, steep banks of water bodies, ditches, depressions, gutters, in rock clefts and neglected human environments. Red foxes prefer to dig their burrows on well drained soils. Dens built among tree roots can last for decades, while those dug on the steppes last only several years. They may permanently abandon their dens during mange outbreaks, possibly as a defence mechanism against the spread of disease. In the Eurasian desert regions, foxes may use the burrows of wolves, porcupines and other large mammals, as well as those dug by gerbil colonies. Compared to burrows constructed by Arctic foxes, badgers, marmots and corsac foxes, red fox dens are not overly complex. Red fox burrows are divided into a den and temporary burrows, which consist only of a small passage or cave for concealment. The main entrance of the burrow leads downwards (40–45°) and broadens into a den, from which numerous side tunnels branch. Burrow depth ranges from , rarely extending to ground water. The main passage can reach in length, standing an average of . In spring, red foxes clear their dens of excess soil through rapid movements, first with the forepaws then with kicking motions with their hind legs, throwing the discarded soil over from the burrow. When kits are born, the discarded debris is trampled, thus forming a spot where the kits can play and receive food. They may share their dens with woodchucks or badgers. Unlike badgers, which fastidiously clean their earths and defecate in latrines, red foxes habitually leave pieces of prey around their dens. The average sleep time of a captive red fox is 9.8 hours per day.
Communication
Body language
Red fox body language consists of movements of the ears, tail and postures, with their body markings emphasising certain gestures. Postures can be divided into aggressive/dominant and fearful/submissive categories. Some postures may blend the two together.
Inquisitive foxes will rotate and flick their ears whilst sniffing. Playful individuals will perk their ears and rise on their hind legs. Male foxes courting females, or after successfully evicting intruders, will turn their ears outwardly, and raise their tails in a horizontal position, with the tips raised upward. When afraid, red foxes grin in submission, arching their backs, curving their bodies, crouching their legs and lashing their tails back and forth with their ears pointing backwards and pressed against their skulls. When merely expressing submission to a dominant animal, the posture is similar, but without arching the back or curving the body. Submissive foxes will approach dominant animals in a low posture, so that their muzzles reach up in greeting. When two evenly matched foxes confront each other over food, they approach each other sideways and push against each other's flanks, betraying a mixture of fear and aggression through lashing tails and arched backs without crouching and pulling their ears back without flattening them against their skulls. When launching an assertive attack, red foxes approach directly rather than sideways, with their tails aloft and their ears rotated sideways. During such fights, red foxes will stand on each other's upper bodies with their forelegs, using open mouthed threats. Such fights typically only occur among juveniles or adults of the same sex.
Vocalisations
Red foxes have a wide vocal range, and produce different sounds spanning five octaves, which grade into each other. Recent analyses identify 12 different sounds produced by adults and 8 by kits. The majority of sounds can be divided into "contact" and "interaction" calls. The former vary according to the distance between individuals, while the latter vary according to the level of aggression.
Contact calls: The most commonly heard contact call is a three to five syllable barking "wow wow wow" sound, which is often made by two foxes approaching one another. This call is most frequently heard from December to February (when they can be confused with the territorial calls of tawny owls). The "wow wow wow" call varies according to individual; captive foxes have been recorded to answer pre-recorded calls of their pen-mates, but not those of strangers. Kits begin emitting the "wow wow wow" call at the age of 19 days, when craving attention. When red foxes draw close together, they emit trisyllabic greeting warbles similar to the clucking of chickens. Adults greet their kits with gruff huffing noises.
Interaction calls: When greeting one another, red foxes emit high pitched whines, particularly submissive animals. A submissive fox approached by a dominant animal will emit a ululating siren-like shriek. During aggressive encounters with conspecifics, they emit a throaty rattling sound, similar to a ratchet, called "gekkering". Gekkering occurs mostly during the courting season from rival males or vixens rejecting advances.
Another call that does not fit into the two categories is a long, drawn-out, monosyllabic "waaaaah" sound. As it is commonly heard during the breeding season, it is thought to be emitted by vixens summoning males. When danger is detected, foxes emit a monosyllabic bark. At close quarters, it is a muffled cough, while at long distances it is sharper. Kits make warbling whimpers when nursing, these calls being especially loud when they are dissatisfied.
Ecology
Diet, hunting and feeding behaviour
Red foxes are omnivores with a highly varied diet. Research conducted in the former Soviet Union showed red foxes consuming over 300 animal species and a few dozen species of plants. They primarily feed on small rodents like voles, mice, ground squirrels, hamsters, gerbils, woodchucks, pocket gophers and deer mice. Secondary prey species include birds (with Passeriformes, Galliformes and waterfowl predominating), leporids, porcupines, raccoons, opossums, reptiles, insects, other invertebrates, flotsam (marine mammals, fish and echinoderms) and carrion. On very rare occasions, foxes may attack young or small ungulates. They typically target mammals up to about in weight, and they require of food daily. Red foxes readily eat plant material and in some areas fruit can amount to 100% of their diet in autumn. Commonly consumed fruits include blueberries, blackberries, raspberries, cherries, persimmons, mulberries, apples, plums, grapes and acorns. Other plant material includes grasses, sedges and tubers.
Red foxes are implicated in the predation of game and song birds, hares, rabbits, muskrats and young ungulates, particularly in preserves, reserves and hunting farms where ground-nesting birds are protected and raised, as well as in poultry farms.
While the popular consensus is that olfaction is very important for hunting, two studies that experimentally investigated the role of olfactory, auditory and visual cues found that visual cues are the most important ones for hunting in red foxes and coyotes.
Red foxes prefer to hunt in the early morning hours before sunrise and late evening. Although they typically forage alone, they may aggregate in resource-rich environments. When hunting mouse-like prey, they first pinpoint their prey's location by sound, then leap, sailing high above their quarry, steering in mid-air with their tails, before landing on target up to away. They typically only feed on carrion in the late evening hours and at night. They are extremely possessive of their food and will defend their catches from even dominant animals. Red foxes may occasionally commit acts of surplus killing; during one breeding season, four red foxes were recorded to have killed around 200 black-headed gulls each, with peaks during dark, windy hours when flying conditions were unfavourable. Losses to poultry and penned game birds can be substantial because of this. Red foxes seem to dislike the taste of moles, but will nonetheless catch them alive and present them to their kits as playthings.
A 2008–2010 study of 84 red foxes in the Czech Republic and Germany found that successful hunting in long vegetation or under snow appeared to involve an alignment of the red fox with the Earth's magnetic field.
Enemies and competitors
Red foxes typically dominate other fox species. Arctic foxes generally escape competition from red foxes by living farther north, where food is too scarce to support the larger-bodied red species. Although the red species' northern limit is linked to the availability of food, the Arctic species' southern range is limited by the presence of the former. Red and Arctic foxes were both introduced to almost every island from the Aleutian Islands to the Alexander Archipelago during the 1830s–1930s by fur companies. The red foxes invariably displaced the Arctic foxes, with one male red fox having been reported to have killed off all resident Arctic foxes on a small island in 1866. Where they are sympatric, Arctic foxes may also escape competition by feeding on lemmings and flotsam rather than voles, as favoured by red foxes. Both species will kill each other's kits, given the opportunity. Red foxes are serious competitors of corsac foxes, as they hunt the same prey all year. The red species is also stronger, is better adapted to hunting in snow deeper than and is more effective in hunting and catching medium-sized to large rodents. Corsac foxes seem to only outcompete red foxes in semi-desert and steppe areas. In Israel, Blanford's foxes escape competition with red foxes by restricting themselves to rocky cliffs and actively avoiding the open plains inhabited by red foxes. Red foxes dominate kit and swift foxes. Kit foxes usually avoid competition with their larger cousins by living in more arid environments, though red foxes have been increasing in ranges formerly occupied by kit foxes due to human-induced environmental changes. Red foxes will kill both species and compete with them for food and den sites. Grey foxes are exceptional, as they dominate red foxes wherever their ranges meet. Historically, interactions between the two species were rare, as grey foxes favoured heavily wooded or semiarid habitats as opposed to the open and mesic ones preferred by red foxes. However, interactions have become more frequent due to deforestation, allowing red foxes to colonise grey fox-inhabited areas.
Wolves may kill and eat red foxes in disputes over carcasses. In areas in North America where red fox and coyote populations are sympatric, red fox ranges tend to be located outside coyote territories. The principal cause of this separation is believed to be active avoidance of coyotes by the red foxes. Interactions between the two species vary in nature, ranging from active antagonism to indifference. The majority of aggressive encounters are initiated by coyotes, and there are few reports of red foxes acting aggressively toward coyotes except when attacked or when their kits were approached. Foxes and coyotes have sometimes been seen feeding together. In Israel, red foxes share their habitat with golden jackals. Where their ranges meet, the two canids compete due to near-identical diets. Red foxes ignore golden jackal scents or tracks in their territories and avoid close physical proximity with golden jackals themselves. In areas where golden jackals become very abundant, the population of red foxes decreases significantly, apparently because of competitive exclusion. However, there is one record of multiple red foxes interacting peacefully with a golden jackal in southwestern Germany.
Red foxes dominate raccoon dogs, sometimes killing their kits or biting adults to death. Cases are known of red foxes killing raccoon dogs after entering their dens. Both species compete for mouse-like prey. This competition reaches a peak during early spring when food is scarce. In Tatarstan, red fox predation accounted for 11.1% of deaths among 54 raccoon dogs and amounted to 14.3% of 186 raccoon dog deaths in northwestern Russia.
Red foxes may kill small mustelids like weasels, stone martens, pine martens (Martes martes), stoats, Siberian weasels, polecats and young sables. Eurasian badgers may live alongside red foxes in isolated sections of large burrows. It is possible that the two species tolerate each other out of mutualism; red foxes provide Eurasian badgers with food scraps, while Eurasian badgers maintain the shared burrow's cleanliness. However, cases are known of Eurasian badgers driving vixens from their dens and destroying their litters without eating them. Wolverines may kill red foxes, often while the latter are sleeping or near carrion. Red foxes, in turn, may kill young wolverines.
Red foxes may compete with striped hyenas on large carcasses. Red foxes may give way to striped hyenas on unopened carcasses, as the latter's stronger jaws can easily tear open flesh that is too tough for red foxes. Red foxes may harass striped hyenas, using their smaller size and greater speed to avoid the hyena's attacks. Sometimes, red foxes seem to deliberately torment striped hyenas even when there is no food at stake. Some red foxes may mistime their attacks and are killed. Red fox remains are often found in striped hyena dens and striped hyenas may steal red foxes from traps.
In Eurasia, red foxes may be preyed upon by leopards, caracals and Eurasian lynxes. The Eurasian lynxes chase red foxes into deep snow, where their long legs and larger paws give them an advantage over red foxes, especially when the depth of the snow exceeds one meter. In the Velikoluksky District in Russia, red foxes are absent or are seen only occasionally where Eurasian lynxes establish permanent territories. Researchers consider Eurasian lynxes to represent considerably less danger to red foxes than wolves do. North American felid predators of red foxes include cougars, Canada lynxes and bobcats.
Red foxes compete with various birds of prey such as common buzzards (Buteo buteo) and northern goshawks (Accipiter gentilis) and even steal their kills. In turn, golden eagles (Aquila chrysaetos) regularly take young red foxes and prey on adults if needed. Other large eagles such as wedge-tailed eagles (Aquila audax), eastern imperial eagles (Aquila heliaca), white-tailed eagles (Haliaeetus albicilla), and Steller's sea eagles (Haliaeetus pelagicus) have also been known to kill red foxes, though less frequently. Additionally, large owls such as Eurasian eagle-owls (Bubo bubo) and snowy owls (Bubo scandiacus) will prey on young foxes, and adults on exceptional occasions.
Diseases and parasites
Red foxes are the most important rabies vector in Europe. In London, arthritis is common in foxes, being particularly frequent in the spine. Foxes may be infected with leptospirosis and tularemia, though they are not overly susceptible to the latter. They may also fall ill from listeriosis and spirochetosis, as well as acting as vectors in spreading erysipelas, brucellosis and tick-borne encephalitis. A mysterious fatal disease near Lake Sartlan in the Novosibirsk Oblast was noted among local red foxes, but the cause was undetermined. The possibility was considered that it was caused by an acute form of encephalomyelitis, which was first observed in captive-bred silver foxes. Individual cases of foxes infected with Yersinia pestis are known.
Red foxes are not readily prone to infestation with fleas. Species like Spilopsyllus cuniculi are probably only caught from the fox's prey species, while others like Archaeopsylla erinacei are caught whilst traveling. Fleas that feed on red foxes include Pulex irritans, Ctenocephalides canis and Paraceras melis. Ticks such as Ixodes ricinus and I. hexagonus are not uncommon in red foxes, and are typically found on nursing vixens and kits still in their earths. The louse Trichodectes vulpis specifically targets red foxes, but is found infrequently. The mite Sarcoptes scabiei is the most important cause of mange in red foxes. It causes extensive hair loss, starting from the base of the tail and hindfeet, then the rump, before moving on to the rest of the body. In the final stages of the condition, red foxes can lose most of their fur and 50% of their body weight, and may gnaw at infected extremities. In the epizootic phase of the disease, it usually takes red foxes four months to die after infection. Other parasites include Demodex folliculorum, Notoedres, Otodectes cynotis (which is frequently found in the ear canal), Linguatula serrata (which infects the nasal passages) and ringworm.
Up to 60 helminth species are known to infect captive-bred foxes in fur farms, while 20 are known in the wild. Several coccidian species of the genera Isospora and Eimeria are also known to infect them. The most common nematode species found in red fox guts are Toxocara canis, Uncinaria stenocephala, Capillaria aerophila and Crenosoma vulpis; the latter two infect their lungs and trachea. Capillaria plica infects the red fox's bladder. Trichinella spiralis rarely affects them. The most common tapeworm species in red foxes are Taenia spiralis and T. pisiformis. Others include Echinococcus granulosus and E. multilocularis. Eleven trematode species infect red foxes, including Metorchis conjunctus. Red foxes have also been found to host intestinal acanthocephalan worms: Pachysentis canicola in Bushehr Province, Iran, and Pachysentis procumbens and Pachysentis ehrenbergi in Egypt.
Relationships with humans
In folklore, religion and mythology
Red foxes feature prominently in the folklore and mythology of human cultures with which they are sympatric. In Greek mythology, the Teumessian fox, or Cadmean vixen, was a gigantic fox that was destined never to be caught. The fox was one of the children of Echidna.
In Celtic mythology, the red fox is a symbolic animal. In the Cotswolds, witches were thought to take the shape of foxes to steal butter from their neighbours. In later European folklore, the figure of Reynard the Fox symbolises trickery and deceit. He originally appeared (then under the name of "Reinardus") as a secondary character in the 1150 poem "Ysengrimus". He reappeared in 1175 in Pierre Saint Cloud's Le Roman de Renart, and made his debut in England in Geoffrey Chaucer's The Nun's Priest's Tale. Many of Reynard's adventures may stem from actual observations on fox behaviour; he is an enemy of the wolf and has a fondness for blackberries and grapes.
Chinese folk tales tell of fox-spirits called huli jing that may have up to nine tails, or kumiho as they are known in Korea. In Japanese mythology, the kitsune are fox-like spirits possessing magical abilities that increase with their age and wisdom. Foremost among these is the ability to assume human form. While some folktales speak of kitsune employing this ability to trick others, other stories portray them as faithful guardians, friends, lovers, and wives. In Arab folklore, the fox is considered a cowardly, weak, deceitful, and cunning animal, said to feign death by filling its abdomen with air to appear bloated, then lies on its side, awaiting the approach of unwitting prey. The animal's cunning was noted by the authors of the Bible who applied the word "fox" to false prophets (Ezekiel 13:4) and the hypocrisy of Herod Antipas (Luke 13:32).
The cunning Fox is commonly found in Native American mythology, where it is portrayed as an almost constant companion to Coyote. Fox, however, is a deceitful companion that often steals Coyote's food. In the Achomawi creation myth, Fox and Coyote are the co-creators of the world, that leave just before the arrival of humans. The Yurok tribe believed that Fox, in anger, captured the Sun, and tied him to a hill, causing him to burn a great hole in the ground. An Inuit story tells of how Fox, portrayed as a beautiful woman, tricks a hunter into marrying her, only to resume her true form and leave after he offends her. A Menominee story tells of how Fox is an untrustworthy friend to Wolf.
Hunting
The earliest historical records of fox hunting come from the 4th century BC; Alexander the Great is known to have hunted foxes and a seal dated from 350 BC depicts a Persian horseman in the process of spearing a fox. Xenophon, who viewed hunting as part of a cultured man's education, advocated the killing of foxes as pests, as they distracted hounds from hares. The Romans were hunting foxes by AD 80. During the Dark Ages in Europe, foxes were considered secondary quarries, but gradually grew in importance. Cnut the Great re-classed foxes as Beasts of the Chase, a lower category of quarry than Beasts of Venery. Foxes were gradually hunted less as vermin and more as Beasts of the Chase, to the point that by the late 1200s, Edward I had a royal pack of foxhounds and a specialised fox huntsman. In this period, foxes were increasingly hunted above ground with hounds, rather than underground with terriers. Edward, Second Duke of York assisted the climb of foxes as more prestigious quarries in his The Master of Game. By the Renaissance, fox hunting became a traditional sport of the nobility. After the English Civil War caused a drop in deer populations, fox hunting grew in popularity. By the mid-1600s, Great Britain was divided into fox hunting territories, with the first fox hunting clubs being formed (the first was the Charlton Hunt Club in 1737). The popularity of fox hunting in Great Britain reached a peak during the 1700s. Although already native to North America, red foxes from England were imported for sporting purposes to Virginia and Maryland in 1730 by prosperous tobacco planters. These American fox hunters considered the red fox more sporting than the grey fox.
Red foxes are still widely persecuted as pests, with human-caused deaths among the highest causes of mortality in the species. Annual red fox kills are: UK 21,500–25,000 (2000); Germany 600,000 (2000–2001); Austria 58,000 (2000–2001); Sweden 58,000 (1999–2000); Finland 56,000 (2000–2001); Denmark 50,000 (1976–1977); Switzerland 34,832 (2001); Norway 17,000 (2000–2001); Saskatchewan (Canada) 2,000 (2000–2001); Nova Scotia (Canada) 491 (2000–2001); Minnesota (US) 4,000–8,000 (average annual trapping harvest 2002–2009); New Mexico (US) 69 (1999–2000).
Fur use
Red foxes are among the most important fur-bearing animals harvested by the fur trade. Their pelts are used for trimmings, scarfs, muffs, jackets and coats. They are principally used as trimming for both cloth coats and fur garments, including evening wraps. The pelts of silver foxes are popular as capes, while cross foxes are mostly used for scarves and rarely for trimming. The number of sold fox scarves exceeds the total number of scarves made from other fur-bearers. However, this amount is overshadowed by the total number of red fox pelts used for trimming purposes. The silver colour morphs are the most valued by furriers, followed by the cross colour morphs and the red colour morphs, respectively. In the early 1900s, over 1,000 American red fox skins were imported to Great Britain annually, while 500,000 were exported annually from Germany and Russia. The total worldwide trade of wild red foxes in 1985–86 was 1,543,995 pelts. Red foxes amounted to 45% of U.S. wild-caught pelts worth $50 million. Pelt prices are increasing, with 2012 North American wholesale auction prices averaging $39 and 2013 prices averaging $65.78.
North American red foxes, particularly those of northern Alaska, are the most valued for their fur, as they have guard hairs of a silky texture which, after dressing, allow the wearer unrestricted mobility. Red foxes living in southern Alaska's coastal areas and the Aleutian Islands are an exception, as they have extremely coarse pelts that rarely exceed one-third of the price of their northern Alaskan cousins. Most European peltries have coarse-textured fur compared to North American varieties. The only exceptions are the Nordic and Far Eastern Russian peltries, but they are still inferior to North American peltries in terms of silkiness.
Livestock and pet predation
Red foxes may on occasion prey on lambs. Lambs targeted by red foxes tend to be physically weakened specimens, though not always. Lambs belonging to small breeds, such as the Scottish Blackface, are more vulnerable than larger breeds, such as the Merino. Twins may be more vulnerable to red foxes than singlets, as ewes cannot effectively defend both simultaneously. Crossbreeding small, upland ewes with larger, lowland rams can cause difficult and prolonged labour for ewes due to the heaviness of the resulting offspring, thus putting the lambs at greater risk of red fox predation. Lambs born from gimmers (ewes breeding for the first time) are more often killed by red foxes than those of experienced mothers, who stick closer to their young.
Red foxes may prey on domestic rabbits and guinea pigs if they are kept in open runs or are allowed to range freely in gardens. This problem is usually averted by housing them in robust hutches and runs. Urban red foxes frequently encounter cats and may feed alongside them. In physical confrontations, the cats usually have the upper hand. Authenticated cases of red foxes killing cats usually involve kittens. Although most red foxes do not prey on cats, some may do so and may treat them more as competitors rather than food.
Taming and domestication
In their unmodified wild state, red foxes are generally unsuitable as pets. Many supposedly abandoned kits are adopted by well-meaning people during the spring period, though it is unlikely that vixens would abandon their young. Actual orphans are rare and the ones that are adopted are likely kits that simply strayed from their den sites. Kits require almost constant supervision; when still suckling, they require milk at four-hour intervals day and night. Once weaned, they may become destructive to leather objects, furniture and electric cables. Though generally friendly toward people when young, captive red foxes become fearful of humans, save for their handlers, once they reach 10 weeks of age. They maintain their wild counterparts' strong instinct of concealment and may pose a threat to domestic birds, even when well-fed. Although suspicious of strangers, they can form bonds with cats and dogs, even ones bred for fox hunting. Tame red foxes were once used to draw ducks close to hunting blinds.
White to black individual red foxes have been selected and raised on fur farms as "silver foxes". In the second half of the 20th century, a lineage of domesticated silver foxes was developed by Russian geneticist Dmitry Belyayev who, over a 40-year period, bred several generations selecting only those individuals that showed the least fear of humans. Eventually, Belyayev's team selected only those that showed the most positive response to humans, thus resulting in a population of silver foxes whose behaviour and appearance was significantly changed. After about 10 generations of controlled breeding, these foxes no longer showed any fear of humans and often wagged their tails and licked their human caretakers to show affection. These behavioural changes were accompanied by physical alterations, which included piebald coats, floppy ears in kits and curled tails, similar to the traits that distinguish domestic dogs from grey wolves.
Urban red foxes
Distribution
Red foxes have been exceedingly successful in colonising built-up environments, especially lower-density suburbs, although many have also been sighted in dense urban areas far from the countryside. Throughout the 20th century, they have established themselves in many Australian, European, Japanese and North American cities. The species first colonised British cities during the 1930s, entering Bristol and London during the 1940s, and later established themselves in Cambridge and Norwich. In Ireland, they are now common in suburban Dublin. In Australia, red foxes were recorded in Melbourne as early as the 1930s, while in Zurich, Switzerland, they only started appearing in the 1980s. Urban red foxes are most common in residential suburbs consisting of privately owned, low-density housing. They are rare in areas where industry, commerce or council-rented houses predominate. In these latter areas, the distribution is of a lower average density because they rely less on human resources; the home range of these foxes average from , whereas those in more residential areas average from .
In 2006, it was estimated that there were 10,000 red foxes in London. City-dwelling red foxes may have the potential to consistently grow larger than their rural counterparts as a result of abundant scraps and a relative lack of predators. In cities, red foxes may scavenge food from litter bins and bin bags, although much of their diet is similar to rural red foxes.
Behaviour
Urban red foxes are most active at dusk and dawn, doing most of their hunting and scavenging at these times. It is uncommon to spot them during the day, but they can be caught sunbathing on roofs of houses or sheds. Urban red foxes will often make their homes in hidden and undisturbed spots in urban areas as well as on the edges of a city, visiting at night for sustenance. They sleep at night in dens.
While urban red foxes will scavenge successfully in the city (and the red foxes tend to eat anything that humans eat) some urban residents will deliberately leave food out for the animals, finding them endearing. Doing this regularly can attract urban red foxes to one's home; they can become accustomed to human presence, warming up to their providers by allowing themselves to be approached and in some cases even played with, particularly young kits.
Urban red fox control
Urban red foxes can cause problems for local residents. They have been known to steal chickens, disrupt rubbish bins and damage gardens. Most complaints about urban red foxes made to local authorities occur during the breeding season in late January/early February or from late April to August when the new kits are developing.
In the U.K., hunting red foxes in urban areas is banned and shooting them in an urban environment is not suitable. One alternative to hunting urban red foxes has been to trap them, which appears to be a more viable method. However, killing red foxes has little effect on the population in an urban area; those that are killed are very soon replaced, either by new kits during the breeding season or by other red foxes moving into the territory of those that were killed. A more effective method of urban red fox control is to deter them from the specific areas they inhabit. Deterrents such as creosote, diesel oil, or ammonia can be used. Cleaning up and blocking access to den locations can also discourage an urban red fox's return.
Relationship between urban and rural red foxes
In January 2014 it was reported that "Fleet", a relatively tame urban red fox tracked as part of a wider study by the University of Brighton in partnership with the BBC TV series Winterwatch, had unexpectedly traveled 195 miles in 21 days from his neighbourhood in Hove at the western edge of East Sussex across rural countryside as far as the town of Rye, near the eastern edge of the county. He was still continuing his journey when the GPS collar stopped transmitting due to suspected water damage. Along with setting a record for the longest journey undertaken by a tracked red fox in the United Kingdom, his travels have highlighted the fluidity of movement between rural and urban red fox populations.
| Biology and health sciences | Canines | Animals |
20647197 | https://en.wikipedia.org/wiki/Modem | Modem | A modulator-demodulator, commonly referred to as a modem, is a computer hardware device that converts data from a digital format into a format suitable for an analog transmission medium such as telephone or radio. A modem transmits data by modulating one or more carrier wave signals to encode digital information, while the receiver demodulates the signal to recreate the original digital information. The goal is to produce a signal that can be transmitted easily and decoded reliably. Modems can be used with almost any means of transmitting analog signals, from LEDs to radio.
Early modems were devices that used audible sounds suitable for transmission over traditional telephone systems and leased lines. These generally operated at 110 or 300 bits per second (bit/s), and the connection between devices was normally manual, using an attached telephone handset. By the 1970s, higher speeds of 1,200 and 2,400 bit/s for asynchronous dial connections, 4,800 bit/s for synchronous leased line connections and 35 kbit/s for synchronous conditioned leased lines were available. By the 1980s, less expensive 1,200 and 2,400 bit/s dialup modems were being released, and modems working on radio and other systems were available. As device sophistication grew rapidly in the late 1990s, telephone-based modems quickly exhausted the available bandwidth, reaching 56 kbit/s.
The rise of public use of the internet during the late 1990s led to demands for much higher performance, leading to the move away from audio-based systems to entirely new encodings on cable television lines and short-range signals in subcarriers on telephone lines. The move to cellular telephones, especially in the late 1990s, and the emergence of smartphones in the 2000s led to the development of ever-faster radio-based systems. Today, modems are ubiquitous and largely invisible, included in almost every mobile computing device in one form or another, and generally capable of speeds on the order of tens or hundreds of megabits per second.
Speeds
Modems are frequently classified by the maximum amount of data they can send in a given unit of time, usually expressed in bits per second (symbol bit/s, sometimes abbreviated "bps") or rarely in bytes per second (symbol B/s). Modern broadband modem speeds are typically expressed in megabits per second (Mbit/s).
Historically, modems were often classified by their symbol rate, measured in baud. The baud unit denotes symbols per second, or the number of times per second the modem sends a new signal. For example, the ITU-T V.21 standard used audio frequency-shift keying with two possible frequencies, corresponding to two distinct symbols (or one bit per symbol), to carry 300 bits per second using 300 baud. By contrast, the original ITU-T V.22 standard, which could transmit and receive four distinct symbols (two bits per symbol), transmitted 1,200 bits by sending 600 symbols per second (600 baud) using phase-shift keying.
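As a concrete illustration of the distinction, the bit rate is simply the symbol rate multiplied by the number of bits carried per symbol. A minimal sketch, using the V.21 and V.22 figures quoted above:

```python
# Bit rate = symbol rate (baud) x bits encoded per symbol.
def bit_rate(baud: int, bits_per_symbol: int) -> int:
    """Return the line rate in bit/s."""
    return baud * bits_per_symbol

print(bit_rate(300, 1))  # V.21: 300 baud, 1 bit/symbol (two FSK tones) -> 300 bit/s
print(bit_rate(600, 2))  # V.22: 600 baud, 2 bits/symbol (PSK)         -> 1200 bit/s
```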
Many modems are variable-rate, permitting them to be used over a medium with less than ideal characteristics, such as a telephone line that is of poor quality or is too long. This capability is often adaptive so that a modem can discover the maximum practical transmission rate during the connect phase, or during operation.
Overall history
Modems grew out of the need to connect teleprinters over ordinary phone lines instead of the more expensive leased lines which had previously been used for current loop–based teleprinters and automated telegraphs. The earliest devices which satisfy the definition of a modem may have been the multiplexers used by news wire services in the 1920s.
In 1941, the Allies developed a voice encryption system called SIGSALY, which used a vocoder to digitize speech, encrypted the speech with a one-time pad, and encoded the digital data as tones using frequency-shift keying. Since this involved modulating digital data onto an audio carrier, SIGSALY can be regarded as an early modem.
Commercial modems largely did not become available until the late 1950s, when the rapid development of computer technology created demand for a method of connecting computers together over long distances, resulting in the Bell Company and then other businesses producing an increasing number of computer modems for use over both switched and leased telephone lines.
Later developments would produce modems that operated over cable television lines, power lines, and various radio technologies, as well as modems that achieved much higher speeds over telephone lines.
Dial-up
A dial-up modem transmits computer data over an ordinary switched telephone line that has not been designed for data use. It was once a widely known technology, as dial-up internet access was mass-marketed globally. In the 1990s, tens of millions of people in the United States alone used dial-up modems for internet access.
Dial-up service has since been largely superseded by broadband internet, such as DSL.
History
1950s
Mass production of telephone line modems in the United States began as part of the SAGE air-defense system in 1958, connecting terminals at various airbases, radar sites, and command-and-control centers to the SAGE director centers scattered around the United States and Canada.
Shortly afterwards in 1959, the technology in the SAGE modems was made available commercially as the Bell 101, which provided 110 bit/s speeds. Bell called this and several other early modems "datasets".
1960s
Some early modems were based on touch-tone frequencies, such as Bell 400-style touch-tone modems.
The Bell 103A standard was introduced by AT&T in 1962. It provided full-duplex service at 300 bit/s over normal phone lines. Frequency-shift keying was used, with the call originator transmitting at 1,070 or 1,270 Hz and the answering modem transmitting at 2,025 or 2,225 Hz.
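As an illustration of how frequency-shift keying turns bits into tones, the sketch below generates an audio waveform for a short bit sequence using the Bell 103 originate-side frequencies quoted above (conventionally 1,070 Hz for a 0, or "space", and 1,270 Hz for a 1, or "mark"). It is a simplified illustration only, not an interoperable implementation of the standard.

```python
import numpy as np

SAMPLE_RATE = 9600            # samples per second, chosen so each bit spans a whole number of samples
BAUD = 300                    # Bell 103 signalling rate
F_SPACE, F_MARK = 1070, 1270  # originate-side tones in Hz: 0 -> space, 1 -> mark

def fsk_waveform(bits):
    """Generate a continuous-phase FSK waveform for a sequence of 0/1 bits."""
    samples_per_bit = SAMPLE_RATE // BAUD   # 32 samples per bit
    samples = []
    phase = 0.0
    for bit in bits:
        freq = F_MARK if bit else F_SPACE
        for _ in range(samples_per_bit):
            samples.append(np.sin(phase))
            phase += 2 * np.pi * freq / SAMPLE_RATE
    return np.array(samples)

waveform = fsk_waveform([1, 0, 1, 1, 0])  # five bits, about 16.7 ms of audio
```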
The 103 modem would eventually become a de facto standard once third-party (non-AT&T) modems reached the market, and throughout the 1970s, independently made modems compatible with the Bell 103 de facto standard were commonplace. Example models included the Novation CAT and the Anderson-Jacobson. A lower-cost option was the Pennywhistle modem, designed to be built using readily available parts.
Teletype machines were granted access to remote networks such as the Teletypewriter Exchange using the Bell 103 modem. AT&T also produced reduced-cost units, the originate-only 113D and the answer-only 113B/C modems.
1970s
The 201A Data-Phone was a synchronous modem using two-bit-per-symbol phase-shift keying (PSK) encoding, achieving 2,000 bit/s half-duplex over normal phone lines. In this system the two tones for any one side of the connection are sent at similar frequencies as in the 300 bit/s systems, but slightly out of phase.
In early 1973, Vadic introduced the VA3400 which performed full-duplex at 1,200 bit/s over a normal phone line.
In November 1976, AT&T introduced the 212A modem, similar in design but using the lower frequency set for transmission. It was not compatible with the VA3400, but it would operate with 103A modems at 300 bit/s.
In 1977, Vadic responded with the VA3467 triple modem, an answer-only modem sold to computer center operators that supported Vadic's 1,200-bit/s mode, AT&T's 212A mode, and 103A operation.
1980s
A significant advance in modems was the Hayes Smartmodem, introduced in 1981. The Smartmodem was an otherwise standard 103A 300 bit/s direct-connect modem, but it introduced a command language which allowed the computer to make control requests, such as commands to dial or answer calls, over the same RS-232 interface used for the data connection. The command set used by this device became a de facto standard, the Hayes command set, which was integrated into devices from many other manufacturers.
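The command language consists of short text commands prefixed with "AT", sent over the same serial link used for data. The sketch below shows the general pattern using Python with the pySerial library; the port name and phone number are hypothetical, and a real modem's responses will vary.

```python
import serial  # pySerial (third-party library), assumed to be installed

# Hypothetical serial port; a real modem may appear under a different name.
with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=2) as port:

    def at_command(cmd: str) -> str:
        """Send one AT command and return the first response line."""
        port.write((cmd + "\r").encode("ascii"))
        return port.readline().decode("ascii", errors="replace").strip()

    print(at_command("ATZ"))          # reset to the stored profile; expect "OK"
    print(at_command("ATDT5551234"))  # tone-dial a fictitious number
```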
Automatic dialing was not a new capability; it had been available via separate Automatic Calling Units and via modems using the X.21 interface. The Smartmodem, however, made it available in a single device that could be used with even the most minimal implementations of the ubiquitous RS-232 interface, making this capability accessible from virtually any system or language.
The introduction of the Smartmodem made communications much simpler and more easily accessed. This provided a growing market for other vendors, who licensed the Hayes patents and competed on price or by adding features. This eventually led to legal action over use of the patented Hayes command language.
Dial modems generally remained at 300 and 1,200 bit/s (eventually becoming standards such as V.21 and V.22) into the mid-1980s.
Commodore's 1982 VicModem for the VIC-20 was the first modem to be sold for under $100, and the first modem to sell a million units.
In 1984, V.22bis was created, a 2,400-bit/s system similar in concept to the 1,200-bit/s Bell 212. This bit rate increase was achieved by defining four or sixteen distinct symbols, which allowed the encoding of two or four bits per symbol instead of only one. By the late 1980s, many modems could support improved standards like this, and 2,400-bit/s operation was becoming common.
Increasing modem speed greatly improved the responsiveness of online systems and made file transfer practical. This led to rapid growth of online services with large file libraries, which in turn gave more reason to own a modem. The rapid update of modems led to a similar rapid increase in BBS use.
The introduction of microcomputer systems with internal expansion slots made small internal modems practical. This led to a series of popular modems for the S-100 bus and Apple II computers that could directly dial out, answer incoming calls, and hang up entirely from software, the basic requirements of a bulletin board system (BBS). The seminal CBBS for instance was created on an S-100 machine with a Hayes internal modem, and a number of similar systems followed.
Echo cancellation became a feature of modems in this period, which allowed both modems to ignore their own reflected signals. This way both modems can simultaneously transmit and receive over the full spectrum of the phone line, improving the available bandwidth.
Additional improvements were introduced by quadrature amplitude modulation (QAM) encoding, which increased the number of bits per symbol to four through a combination of phase shift and amplitude.
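In QAM, each group of bits selects one point in a grid of combined amplitude and phase values (a "constellation"). The sketch below maps four bits onto a generic square 16-point constellation; it illustrates the idea only and is not the specific constellation or trellis coding defined by V.32 or any other ITU-T standard.

```python
# Map 4 bits to one 16-QAM symbol, represented as a complex number whose real
# part is the in-phase (I) amplitude and whose imaginary part is the
# quadrature (Q) amplitude. Generic square constellation, for illustration only.
LEVELS = [-3, -1, 1, 3]  # four amplitude levels per axis

def qam16_symbol(bits):
    """bits: a sequence of four 0/1 values -> one constellation point."""
    i = LEVELS[bits[0] * 2 + bits[1]]  # first two bits choose the I level
    q = LEVELS[bits[2] * 2 + bits[3]]  # last two bits choose the Q level
    return complex(i, q)

print(qam16_symbol([1, 0, 0, 1]))  # (1-1j); 2,400 symbols/s x 4 bits = 9,600 bit/s
```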
Transmitting at 1,200 baud produced the 4,800 bit/s V.27ter standard, and at 2,400 baud the 9,600 bit/s V.32. The carrier frequency was 1,650 Hz in both systems.
The introduction of these higher-speed systems also led to the development of the digital fax machine during the 1980s. While early fax technology also used modulated signals on a phone line, digital fax used the now-standard digital encoding used by computer modems. This eventually allowed computers to send and receive fax images.
1990s
In the early 1990s, V.32 modems operating at 9,600 bit/s were introduced, but were expensive and were only starting to enter the market when V.32bis was standardized, which operated at 14,400 bit/s.
Rockwell International's chip division developed a new driver chip set incorporating the V.32bis standard and aggressively priced it. Supra, Inc. arranged a short-term exclusivity arrangement with Rockwell, and developed the SupraFAXModem 14400 based on it. Introduced in January 1992 at (or less), it was half the price of the slower V.32 modems already on the market. This led to a price war, and by the end of the year V.32 was dead, never having been really established, and V.32bis modems were widely available for .
V.32bis was so successful that the older high-speed standards offered little advantage. USRobotics (USR) fought back with a 16,800 bit/s version of HST, while AT&T introduced a one-off 19,200 bit/s method they referred to as V.32ter, but neither non-standard modem sold well.
Consumer interest in these proprietary improvements waned during the lengthy introduction of the V.34 standard. Rather than wait, several companies released hardware early, introducing modems they referred to as V.Fast.
In order to guarantee compatibility with V.34 modems once a standard was ratified (1994), manufacturers used more flexible components, generally a DSP and microcontroller, as opposed to purpose-designed ASIC modem chips. This would allow later firmware updates to conform with the standards once ratified.
The ITU standard V.34 represents the culmination of these joint efforts. It employed the most powerful coding techniques available at the time, including channel encoding and shape encoding. From the mere four bits per symbol (9.6 kbit/s at 2,400 baud), the new standards used the functional equivalent of 6 to 10 bits per symbol, plus increasing baud rates from 2,400 to 3,429, to create 14.4, 28.8, and 33.6 kbit/s modems. This rate is near the theoretical Shannon limit of a phone line.
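The Shannon limit referred to here follows from the Shannon–Hartley theorem. A rough worked example, assuming a usable bandwidth of about 3,100 Hz and a signal-to-noise ratio of roughly 35 dB (representative figures for a good analog telephone channel, not values taken from the standard):

```latex
C = B \log_2\!\left(1 + \tfrac{S}{N}\right)
  \approx 3100 \times \log_2\!\left(1 + 10^{3.5}\right)
  \approx 3100 \times 11.6
  \approx 36\,000 \text{ bit/s}
```

Under these assumed figures the channel capacity works out to roughly 36 kbit/s, which is why the 33.6 kbit/s rate is described as approaching the theoretical limit of an analog telephone channel.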
56 kbit/s technologies
While 56 kbit/s speeds had been available for leased-line modems for some time, they did not become available for dial-up modems until the late 1990s.
In the late 1990s, technologies to achieve speeds above 33.6 kbit/s began to be introduced. Several approaches were used, but all of them began as solutions to a single fundamental problem with phone lines.
By the time technology companies began to investigate speeds above this limit, telephone companies had switched almost entirely to all-digital networks. As soon as a phone line reached a local central office, a line card converted the analog signal from the subscriber to a digital one, and vice versa. While digitally encoded telephone lines notionally provide the same bandwidth as the analog systems they replaced, the digitization itself placed constraints on the types of waveforms that could be reliably encoded.
The first problem was that the process of analog-to-digital conversion is intrinsically lossy; the second, and more important, problem was that the digital signals used by the telcos were not "linear": they did not encode all amplitudes the same way, instead utilizing a nonlinear encoding (μ-law and A-law) meant to favor the nonlinear response of the human ear to voice signals. This made it very difficult to find a 56 kbit/s encoding that could survive the digitizing process.
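The μ-law curve is a companding function: it expands small amplitudes and compresses large ones so that quiet speech is encoded with relatively more precision. A minimal sketch of the continuous μ-law formula with μ = 255 (the value used by G.711), illustrating the nonlinearity rather than the exact 8-bit quantizer used in the network:

```python
import math

MU = 255  # companding parameter used by G.711 mu-law

def mu_law_compress(x: float) -> float:
    """Map a linear sample in [-1, 1] to a mu-law-compressed value in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

print(mu_law_compress(0.01))  # ~0.23: small amplitudes are expanded
print(mu_law_compress(0.5))   # ~0.88: large amplitudes are squeezed together
```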
Modem manufacturers discovered that, while the analog-to-digital conversion could not preserve higher speeds, digital-to-analog conversion could. Because it was possible for an ISP to obtain a direct digital connection to a telco, a digital modem (one that connects directly to a digital telephone network interface, such as a T1 or PRI) could send a signal that utilized every bit of bandwidth available in the system. While that signal still had to be converted back to analog at the subscriber end, that conversion would not distort the signal in the way that the opposite direction did.
Early 56k dial-up products
The first 56k (56 kbit/s) dial-up option was a proprietary design from USRobotics, which they called "X2" because 56k was twice the speed (×2) of 28k modems.
At that time, USRobotics held a 40% share of the retail modem market, while Rockwell International held an 80% share of the modem chipset market. Concerned with being shut out, Rockwell began work on a rival 56k technology. They joined with Lucent and Motorola to develop what they called "K56Flex" or just "Flex".
Both technologies reached the market around February 1997; although problems with K56Flex modems were noted in product reviews through July, within six months the two technologies worked equally well, with variations dependent largely on local connection characteristics.
The retail price of these early 56k modems was about , compared to for standard 33k modems. Compatible equipment was also required at the Internet service providers (ISPs) end, with costs varying depending on whether their current equipment could be upgraded. About half of all ISPs offered 56k support by October 1997. Consumer sales were relatively low, which USRobotics and Rockwell attributed to conflicting standards.
Standardized 56k (V.90/V.92)
In February 1998, the International Telecommunication Union (ITU) announced the draft of a new standard, V.90, with strong industry support. Incompatible with either existing standard, it was an amalgam of both, but was designed so that both existing types of modem could support it through a firmware upgrade. The V.90 standard was approved in September 1998 and widely adopted by ISPs and consumers.
The ITU-T V.92 standard was approved by the ITU in November 2000 and utilized digital PCM technology to increase the upload speed to a maximum of 48 kbit/s.
The high upload speed was a tradeoff. Use of the full digital upstream rate would reduce the downstream rate because of echo effects on the line. To avoid this problem, V.92 modems offer the option to turn off the digital upstream and instead use a plain 33.6 kbit/s analog upstream connection in order to maintain a high digital downstream.
V.92 also added two other features. The first is the ability for users who have call waiting to put their dial-up Internet connection on hold for extended periods of time while they answer a call. The second feature is the ability to quickly connect to one's ISP, achieved by remembering the analog and digital characteristics of the telephone line and using this saved information when reconnecting.
Evolution of dial-up speeds
These values are maximum values, and actual values may be slower under certain conditions (for example, noisy phone lines). For a complete list see the companion article list of device bandwidths. A baud is one symbol per second; each symbol may encode one or more data bits.
Compression
Many dial-up modems implement standards for data compression to achieve higher effective throughput for the same bitrate. V.44 is an example used in conjunction with V.92 to achieve speeds greater than 56k over ordinary phone lines.
As telephone-based 56k modems began losing popularity, some Internet service providers such as Netzero/Juno, Netscape, and others started using pre-compression to increase apparent throughput. This server-side compression can operate much more efficiently than the on-the-fly compression performed within modems, because the compression techniques are content-specific (JPEG, text, EXE, etc.). The drawback is a loss in quality, as such services use lossy compression, which causes images to become pixelated and smeared. ISPs employing this approach often advertised it as "accelerated dial-up".
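A minimal sketch of the kind of server-side lossy image recompression such services performed, assuming the Pillow imaging library is available; the file names and quality setting are illustrative only:

```python
from PIL import Image  # Pillow, assumed to be installed

def recompress(src_path: str, dst_path: str, quality: int = 30) -> None:
    """Re-save an image as a low-quality JPEG to reduce transfer size."""
    with Image.open(src_path) as img:
        img.convert("RGB").save(dst_path, format="JPEG", quality=quality)

# Hypothetical file names; a lower quality gives a smaller but more artifact-prone image.
recompress("photo.png", "photo_small.jpg", quality=25)
```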
These accelerated downloads are integrated into the Opera and Amazon Silk web browsers, using their own server-side text and image compression requiring all data to pass through their own servers before reaching the user.
Methods of attachment
Dial-up modems can attach in two different ways: with an acoustic coupler, or with a direct electrical connection.
Directly connected modems
The case Hush-A-Phone Corp. v. United States, which legalized acoustic couplers, applied only to mechanical connections to a telephone set, not electrical connections to the telephone line. The Carterfone decision of 1968, however, permitted customers to attach devices directly to a telephone line as long as they followed stringent Bell-defined standards for non-interference with the phone network. This opened the door to independent (non-AT&T) manufacture of direct-connect modems, which plugged directly into the phone line rather than connecting via an acoustic coupler.
While Carterfone required AT&T to permit connection of devices, AT&T successfully argued that they should be allowed to require the use of a special device to protect their network, placed in between the third-party modem and the line, called a Data Access Arrangement or DAA. The use of DAAs was mandatory from 1969 to 1975 when the new FCC Part 68 rules allowed the use of devices without a Bell-provided DAA, subject to equivalent circuitry being included in the third-party device.
Virtually all modems produced after the 1980s are direct-connect.
Acoustic couplers
While Bell (AT&T) provided modems that attached via direct wire connection to the phone network as early as 1958, their regulations at the time did not permit the direct electrical connection of any non-Bell device to a telephone line. However, the Hush-a-Phone ruling allowed customers to attach any device to a telephone set as long as it did not interfere with its functionality. This allowed third-party (non-Bell) manufacturers to sell modems utilizing an acoustic coupler.
With an acoustic coupler, an ordinary telephone handset was placed in a cradle containing a speaker and microphone positioned to match up with those on the handset. The tones used by the modem were transmitted and received into the handset, which then relayed them to the phone line.
Because the modem was not electrically connected, it was incapable of picking up, hanging up or dialing, all of which required direct control of the line. Touch-tone dialing would have been possible, but touch-tone was not universally available at this time. Consequently, the dialing process was executed by the user lifting the handset, dialing, then placing the handset on the coupler. To accelerate this process, a user could purchase a dialer or Automatic Calling Unit.
Automatic calling units
Early modems could not place or receive calls on their own, but required human intervention for these steps.
As early as 1964, Bell provided automatic calling units that connected separately to a second serial port on a host machine and could be commanded to open the line, dial a number, and even ensure the far end had successfully connected before transferring control to the modem. Later on, third-party models would become available, sometimes known simply as dialers, offering features such as the ability to automatically sign in to time-sharing systems.
Eventually this capability would be built into modems and no longer require a separate device.
Controller-based modems vs. soft modems
Prior to the 1990s, modems contained all the electronics and intelligence to convert data in discrete form to an analog (modulated) signal and back again, and to handle the dialing process, as a mix of discrete logic and special-purpose chips. This type of modem is sometimes referred to as controller-based.
In 1993, Digicom introduced the Connection 96 Plus, a modem which replaced the discrete and custom components with a general purpose digital signal processor, which could be reprogrammed to upgrade to newer standards.
Subsequently, USRobotics released the Sportster Winmodem, a similarly upgradable DSP-based design.
As this design trend spread, both terms – soft modem and Winmodem – obtained a negative connotation in non-Windows-based computing circles because the drivers were either unavailable for non-Windows platforms, or were only available as unmaintainable closed-source binaries, a particular problem for Linux users.
Later in the 1990s, software-based modems became available. These are essentially sound cards, and in fact a common design uses the AC'97 audio codec, which provides multichannel audio to a PC and includes three audio channels for modem signals.
The audio sent and received on the line by a modem of this type is generated and processed entirely in software, often in a device driver. There is little functional difference from the user's perspective, but this design reduces the cost of a modem by moving most of the processing power into inexpensive software instead of expensive hardware DSPs or discrete components.
Soft modems of both types either are internal cards or connect over external buses such as USB. They never utilize RS-232 because they require high bandwidth channels to the host computers to carry the raw audio signals generated (sent) or analyzed (received) by software.
Since the interface is not RS-232, there is no standard for communication with the device directly. Instead, soft modems come with drivers which create an emulated RS-232 port, which standard modem software (such as an operating system dialer application) can communicate with.
Voice/fax modems
"Voice" and "fax" are terms added to describe any dial modem that is capable of recording/playing audio or transmitting/receiving faxes. Some modems are capable of all three functions.
Voice modems are used for computer telephony integration applications as simple as placing/receiving calls directly through a computer with a headset, and as complex as fully automated robocalling systems.
Fax modems can be used for computer-based faxing, in which faxes are sent and received without ever needing to be printed on paper. This differs from efax, in which faxing occurs over the internet, in some cases involving no phone lines whatsoever.
Modem Over IP (Modem Relay)
The ITU-T V.150.1 Recommendation defines procedures for the inter-operation of PSTN to IP gateways. In a classic example of this setup, each dial-up modem would connect to a modem relay gateway. The gateways are then connected to an IP network (such as the Internet). The analog connection from the modem is terminated at the gateway and the signal is demodulated. The demodulated control signals are transported over the IP network in an RTP packet type defined as State Signaling Events (SSEs). The data from the demodulated signal is sent over the IP network via a transport protocol (also defined as an RTP payload) called Simple Packet Relay Transport (SPRT). Both the SSE and SPRT packet formats are defined in the V.150.1 Recommendation (Annex C and Annex B respectively). The gateway at the remote end that receives the packets uses the information to re-modulate the signal for the modem connected at that end.
While the V.150.1 Recommendation is not widely deployed, a pared down version of the recommendation called "Minimum Essential Requirements (MER) for V.150.1 Gateways" (SCIP-216) is used in Secure Telephony applications.
Cloud-based Modems
While traditionally a hardware device, fully software-based modems with the ability to be deployed in a cloud environment (such as Microsoft Azure or AWS) do exist. Leveraging a Voice-over-IP (VoIP) connection through a SIP Trunk, the modulated audio samples are generated and sent over an IP network via RTP and an uncompressed audio codec (such as G.711 μ-law or a-law).
Popularity
A 1994 Software Publishers Association study found that although 60% of computers in US households had a modem, only 7% of households went online. A CEA study in 2006 found that dial-up Internet access was declining in the US. In 2000, dial-up Internet connections accounted for 74% of all US residential Internet connections. The United States demographic pattern for dial-up modem users per capita has been more or less mirrored in Canada and Australia for the past 20 years.
Dial-up modem use in the US had dropped to 60% by 2003, and stood at 36% in 2006. Voiceband modems were once the most popular means of Internet access in the US, but with the advent of new ways of accessing the Internet, the traditional 56K modem was losing popularity. The dial-up modem is still widely used by customers in rural areas where DSL, cable, wireless broadband, satellite, or fiber optic service are either not available or they are unwilling to pay what the available broadband companies charge. In its 2012 annual report, AOL showed it still collected around $700 million in fees from about three million dial-up users.
TTY/TDD
TDD devices are a subset of the teleprinter intended for use by the deaf or hard of hearing, essentially a small teletype with a built-in dial-up modem and acoustic coupler. The first models produced in 1964 utilized FSK modulation much like early computer modems.
Leased-line modems
A leased line modem also uses ordinary phone wiring, like dial-up and DSL, but does not use the same network topology. While dial-up uses a normal phone line and connects through the telephone switching system, and DSL uses a normal phone line but connects to equipment at the telco central office, leased lines do not terminate at the telco.
Leased lines are pairs of telephone wire that have been connected together at one or more telco central offices so that they form a continuous circuit between two subscriber locations, such as a business' headquarters and a satellite office. They provide no power or dialtone; they are simply a pair of wires connected at two distant locations.
A dialup modem will not function across this type of line, because it does not provide the power, dialtone and switching that those modems require. However, a modem with leased-line capability can operate over such a line, and in fact can have greater performance because the line is not passing through the telco switching equipment, the signal is not filtered, and therefore greater bandwidth is available.
Leased-line modems can operate in 2-wire or 4-wire mode. The former uses a single pair of wires and can only transmit in one direction at a time, while the latter uses two pairs of wires and can transmit in both directions simultaneously. When two pairs are available, bandwidth can be as high as 1.5 Mbit/s, a full data T1 circuit.
Slower leased-line modems typically used serial interfaces such as RS-232, while the faster wideband modems used interfaces such as V.35.
Broadband
The term broadband was previously used to describe communications faster than what was available on voice grade channels.
The term broadband gained widespread adoption in the late 1990s to describe internet access technology exceeding the 56 kilobit/s maximum of dialup. There are many broadband technologies, such as various DSL (digital subscriber line) technologies and cable broadband.
DSL technologies such as ADSL, HDSL, and VDSL use telephone lines (wires that were installed by a telephone company and originally intended for use by a telephone subscriber) but do not utilize most of the rest of the telephone system. Their signals are not sent through ordinary phone exchanges, but are instead received by special equipment (a DSLAM) at the telephone company central office.
Because the signal does not pass through the telephone exchange, no "dialing" is required, and the bandwidth constraints of an ordinary voice call are not imposed. This allows much higher frequencies, and therefore much faster speeds. ADSL in particular is designed to permit voice calls and data usage over the same line simultaneously.
Similarly, cable modems use infrastructure originally intended to carry television signals, and like DSL, typically permit receiving television signals at the same time as broadband internet service.
Other broadband modems include FTTx modems, satellite modems, and power line modems.
Terminology
Different terms are used for broadband modems, because they frequently contain more than just a modulation/demodulation component.
Because high-speed connections are frequently used by multiple computers at once, many broadband modems do not have direct (e.g. USB) PC connections. Rather they connect over a network such as Ethernet or Wi-Fi. Early broadband modems offered Ethernet handoff allowing the use of one or more public IP addresses, but no other services such as NAT and DHCP that would allow multiple computers to share one connection. This led to many consumers purchasing separate "broadband routers," placed between the modem and their network, to perform these functions.
Eventually, ISPs began providing residential gateways which combined the modem and broadband router into a single package that provided routing, NAT, security features, and even Wi-Fi access in addition to modem functionality, so that subscribers could connect their entire household without purchasing any extra equipment. Even later, these devices were extended to provide "triple play" features such as telephony and television service. Nonetheless, these devices are still often referred to simply as "modems" by service providers and manufacturers.
Consequently, the terms "modem", "router", and "gateway" are now used interchangeably in casual speech, but in a technical context "modem" may carry a specific connotation of basic functionality with no routing or other features, while the others describe a device with features such as NAT.
Broadband modems may also handle authentication such as PPPoE. While it is often possible to authenticate a broadband connection from a user's PC, as was the case with dial-up internet service, moving this task to the broadband modem allows it to establish and maintain the connection itself, which makes sharing access between PCs easier since each one does not have to authenticate separately. Broadband modems typically remain authenticated to the ISP as long as they are powered on.
Radio
Any communication technology sending digital data wirelessly involves a modem. This includes direct broadcast satellite, WiFi, WiMax, mobile phones, GPS, Bluetooth and NFC.
Modern telecommunications and data networks also make extensive use of radio modems where long distance data links are required. Such systems are an important part of the PSTN, and are also in common use for high-speed computer network links to outlying areas where fiber optic is not economical.
Wireless modems come in a variety of types, bandwidths, and speeds. Wireless modems are often referred to as transparent or smart. They transmit information that is modulated onto a carrier frequency to allow many wireless communication links to work simultaneously on different frequencies.
Transparent modems operate in a manner similar to their phone line modem cousins. Typically, they were half duplex, meaning that they could not send and receive data at the same time. Typically, transparent modems are polled in a round robin manner to collect small amounts of data from scattered locations that do not have easy access to wired infrastructure. Transparent modems are most commonly used by utility companies for data collection.
Smart modems come with media access controllers inside, which prevents random data from colliding and resends data that is not correctly received. Smart modems typically require more bandwidth than transparent modems, and typically achieve higher data rates. The IEEE 802.11 standard defines a short range modulation scheme that is used on a large scale throughout the world.
Mobile broadband
Modems which use a mobile telephone system (GPRS, UMTS, HSPA, EVDO, WiMax, 5G etc.), are known as mobile broadband modems (sometimes also called wireless modems). Wireless modems can be embedded inside a laptop, mobile phone or other device, or be connected externally. External wireless modems include connect cards, USB modems, and cellular routers.
Most GSM wireless modems come with an integrated SIM card holder (e.g., the Huawei E220 and Sierra 881). Some models also provide a microSD memory slot and/or a jack for an additional external antenna (e.g., the Huawei E1762 and Sierra Compass 885).
The CDMA (EVDO) versions typically do not use R-UIM cards, but use an Electronic Serial Number (ESN) instead.
As of the end of April 2011, worldwide shipments of USB modems surpassed embedded 3G and 4G modules by 3:1 because USB modems could be easily discarded. Embedded modems were expected to overtake separate modems as tablet sales grew and the incremental cost of the modems shrank, with the ratio forecast to approach 1:1 by 2016.
Like mobile phones, mobile broadband modems can be SIM locked to a particular network provider. Unlocking a modem is achieved the same way as unlocking a phone, by using an 'unlock code'.
Optical modem
A device that connects to a fiber optic network is known as an optical network terminal (ONT) or optical network unit (ONU). These are commonly used in fiber to the home installations, installed inside or outside a house to convert the optical medium to a copper Ethernet interface, after which a router or gateway is often installed to perform authentication, routing, NAT, and other typical consumer internet functions, in addition to "triple play" features such as telephony and television service. They are not modems, although they perform a similar function and are sometimes referred to as such.
Fiber optic systems can use quadrature amplitude modulation to maximize throughput. 16QAM uses a 16-point constellation to send four bits per symbol, with speeds on the order of 200 or 400 gigabits per second. 64QAM uses a 64-point constellation to send six bits per symbol, with speeds up to 65 terabits per second. Although this technology has been announced, it may not yet be commonly used.
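As a toy illustration of the constellation idea (not tied to any particular optical or modem standard), a Gray-coded 16-QAM mapper converts four bits into one of sixteen complex symbols:

```python
# Gray-coded 4-level mapping used independently on the I and Q axes,
# giving a 16-point constellation: 2 bits -> one of {-3, -1, +1, +3}.
LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def qam16_map(bits):
    """Map a bit sequence (length divisible by 4) to complex 16-QAM symbols."""
    symbols = []
    for i in range(0, len(bits), 4):
        b0, b1, b2, b3 = bits[i:i + 4]
        symbols.append(complex(LEVELS[(b0, b1)], LEVELS[(b2, b3)]))
    return symbols

print(qam16_map([1, 0, 0, 1, 0, 0, 1, 1]))   # [(3-1j), (-3+1j)]
```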
Home networking
Although the name modem is seldom used, some high-speed home networking applications do use modems, such as powerline ethernet. The G.hn standard for instance, developed by ITU-T, provides a high-speed (up to 1 Gbit/s) local area network using existing home wiring (power lines, phone lines, and coaxial cables). G.hn devices use orthogonal frequency-division multiplexing (OFDM) to modulate a digital signal for transmission over the wire.
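A minimal sketch of the OFDM step, assuming NumPy is available: QAM symbols are placed on sub-carriers, an inverse FFT produces the time-domain waveform, and a cyclic prefix is prepended. Real G.hn framing, pilot tones, and per-carrier bit loading are omitted.

```python
import numpy as np

def ofdm_modulate(symbols: np.ndarray, n_carriers: int = 64, cp_len: int = 16) -> np.ndarray:
    """Very simplified OFDM: place symbols on sub-carriers, take an IFFT,
    and prepend a cyclic prefix (last cp_len samples repeated up front)."""
    assert len(symbols) == n_carriers
    time_domain = np.fft.ifft(symbols, n_carriers)   # one OFDM symbol in the time domain
    return np.concatenate([time_domain[-cp_len:], time_domain])

# Example: 64 random QPSK symbols -> one 80-sample OFDM symbol
rng = np.random.default_rng(0)
qpsk = (2 * rng.integers(0, 2, 64) - 1) + 1j * (2 * rng.integers(0, 2, 64) - 1)
print(ofdm_modulate(qpsk).shape)   # (80,)
```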
As described above, technologies like Wi-Fi and Bluetooth also use modems to communicate over radio at short distances.
Null modem
A null modem cable is a specially wired cable connected between the serial ports of two devices, with the transmit and receive lines reversed. It is used to connect two devices directly without a modem. The same software or hardware typically used with modems (such as Procomm or Minicom) could be used with this type of connection.
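For illustration, one common full-handshake wiring for a DE-9 null-modem cable is written out below as a pin map; several wiring variants exist, so this is a typical example rather than a universal standard.

```python
# One common full-handshake DE-9 null-modem wiring (variants exist).
# Keys are pins on connector A; values are the pin(s) they reach on connector B.
NULL_MODEM_WIRING = {
    3: (2,),     # TxD -> RxD   (transmit and receive crossed)
    2: (3,),     # RxD <- TxD
    7: (8,),     # RTS -> CTS   (hardware flow control crossed)
    8: (7,),     # CTS <- RTS
    4: (6, 1),   # DTR -> DSR + DCD
    6: (4,),     # DSR <- DTR
    5: (5,),     # Signal ground, straight through
}

for pin_a, pins_b in NULL_MODEM_WIRING.items():
    print(f"A pin {pin_a} -> B pin(s) {pins_b}")
```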
A null modem adapter is a small device with plugs at both ends which is placed on the termination of a normal "straight-through" serial cable to convert it into a null-modem cable.
Short-haul modem
A "short haul modem" is a device that bridges the gap between leased-line and dial-up modems. Like a leased-line modem, they transmit over "bare" lines with no power or telco switching equipment, but are not intended for the same distances that leased lines can achieve. Ranges up to several miles are possible, but significantly, short-haul modems can be used for medium distances, greater than the maximum length of a basic serial cable but still relatively short, such as within a single building or campus. This allows a serial connection to be extended for perhaps only several hundred to several thousand feet, a case where obtaining an entire telephone or leased line would be overkill.
While some short-haul modems do in fact use modulation, low-end devices (for reasons of cost or power consumption) are simple "line drivers" that increase the level of the digital signal but do not modulate it. These are not technically modems, but the same terminology is used for them.
| Technology | Computer hardware | null |
20647689 | https://en.wikipedia.org/wiki/Irrational%20number | Irrational number | In mathematics, the irrational numbers (in- + rational) are all the real numbers that are not rational numbers. That is, irrational numbers cannot be expressed as the ratio of two integers. When the ratio of lengths of two line segments is an irrational number, the line segments are also described as being incommensurable, meaning that they share no "measure" in common, that is, there is no length ("the measure"), no matter how short, that could be used to express the lengths of both of the two given segments as integer multiples of itself.
Among irrational numbers are the ratio of a circle's circumference to its diameter, Euler's number e, the golden ratio φ, and the square root of two. In fact, all square roots of natural numbers, other than of perfect squares, are irrational.
Like all real numbers, irrational numbers can be expressed in positional notation, notably as a decimal number. In the case of irrational numbers, the decimal expansion does not terminate, nor end with a repeating sequence. For example, the decimal representation of π starts with 3.14159, but no finite number of digits can represent π exactly, nor does it repeat. Conversely, a decimal expansion that terminates or repeats must be a rational number. These are provable properties of rational numbers and positional number systems and are not used as definitions in mathematics.
Irrational numbers can also be expressed as non-terminating continued fractions (which in some cases are periodic), and in many other ways.
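As an informal illustration, the partial quotients of a simple continued fraction can be generated by repeatedly taking the integer part and inverting the fractional remainder. This floating-point sketch is only approximate and the helper name is illustrative:

```python
from math import floor, sqrt

def continued_fraction(x: float, terms: int = 8):
    """First few partial quotients of the simple continued fraction of x."""
    cf = []
    for _ in range(terms):
        a = floor(x)
        cf.append(a)
        frac = x - a
        if frac < 1e-12:          # effectively rational at float precision
            break
        x = 1 / frac
    return cf

print(continued_fraction(sqrt(2)))            # [1, 2, 2, 2, 2, 2, 2, 2]
print(continued_fraction((1 + sqrt(5)) / 2))  # [1, 1, 1, 1, 1, 1, 1, 1]  (golden ratio)
```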
As a consequence of Cantor's proof that the real numbers are uncountable and the rationals countable, it follows that almost all real numbers are irrational.
History
Ancient Greece
The first proof of the existence of irrational numbers is usually attributed to a Pythagorean (possibly Hippasus of Metapontum), who probably discovered them while identifying sides of the pentagram.
The Pythagorean method would have claimed that there must be some sufficiently small, indivisible unit that could fit evenly into one of these lengths as well as the other. Hippasus in the 5th century BC, however, was able to deduce that there was no common unit of measure, and that the assertion of such an existence was a contradiction. He did this by demonstrating that if the hypotenuse of an isosceles right triangle was indeed commensurable with a leg, then one of those lengths measured in that unit of measure must be both odd and even, which is impossible. His reasoning is as follows:
Start with an isosceles right triangle with side lengths of integers a, b, and c (a = b since it is isosceles). The ratio of the hypotenuse to a leg is represented by c:b.
Assume a, b, and c are in the smallest possible terms (i.e. they have no common factors).
By the Pythagorean theorem: c² = a² + b² = b² + b² = 2b². (Since the triangle is isosceles, a = b.)
Since c² = 2b², c² is divisible by 2, and therefore even.
Since c² is even, c must be even.
Since c is even, dividing c by 2 yields an integer. Let y be this integer (c = 2y).
Squaring both sides of c = 2y yields c² = (2y)², or c² = 4y².
Substituting 4y² for c² in the first equation (c² = 2b²) gives us 4y² = 2b².
Dividing by 2 yields 2y² = b².
Since y is an integer, and 2y² = b², b² is divisible by 2, and therefore even.
Since b² is even, b must be even.
We have just shown that both b and c must be even. Hence they have a common factor of 2. However, this contradicts the assumption that they have no common factors. This contradiction proves that c and b cannot both be integers and thus the existence of a number that cannot be expressed as a ratio of two integers.
Greek mathematicians termed this ratio of incommensurable magnitudes alogos, or inexpressible. Hippasus, however, was not lauded for his efforts: according to one legend, he made his discovery while out at sea, and was subsequently thrown overboard by his fellow Pythagoreans 'for having produced an element in the universe which denied the... doctrine that all phenomena in the universe can be reduced to whole numbers and their ratios.' Another legend states that Hippasus was merely exiled for this revelation. Whatever the consequence to Hippasus himself, his discovery posed a very serious problem to Pythagorean mathematics, since it shattered the assumption that numbers and geometry were inseparable; a foundation of their theory.
The discovery of incommensurable ratios was indicative of another problem facing the Greeks: the relation of the discrete to the continuous. This was brought to light by Zeno of Elea, who questioned the conception that quantities are discrete and composed of a finite number of units of a given size. Past Greek conceptions dictated that they necessarily must be, for "whole numbers represent discrete objects, and a commensurable ratio represents a relation between two collections of discrete objects", but Zeno found that in fact "[quantities] in general are not discrete collections of units; this is why ratios of incommensurable [quantities] appear... .[Q]uantities are, in other words, continuous". What this means is that contrary to the popular conception of the time, there cannot be an indivisible, smallest unit of measure for any quantity. In fact, these divisions of quantity must necessarily be infinite. For example, consider a line segment: this segment can be split in half, that half split in half, the half of the half in half, and so on. This process can continue infinitely, for there is always another half to be split. The more times the segment is halved, the closer the unit of measure comes to zero, but it never reaches exactly zero. This is just what Zeno sought to prove. He sought to prove this by formulating four paradoxes, which demonstrated the contradictions inherent in the mathematical thought of the time. While Zeno's paradoxes accurately demonstrated the deficiencies of contemporary mathematical conceptions, they were not regarded as proof of the alternative. In the minds of the Greeks, disproving the validity of one view did not necessarily prove the validity of another, and therefore, further investigation had to occur.
The next step was taken by Eudoxus of Cnidus, who formalized a new theory of proportion that took into account commensurable as well as incommensurable quantities. Central to his idea was the distinction between magnitude and number. A magnitude "...was not a number but stood for entities such as line segments, angles, areas, volumes, and time which could vary, as we would say, continuously. Magnitudes were opposed to numbers, which jumped from one value to another, as from 4 to 5". Numbers are composed of some smallest, indivisible unit, whereas magnitudes are infinitely reducible. Because no quantitative values were assigned to magnitudes, Eudoxus was then able to account for both commensurable and incommensurable ratios by defining a ratio in terms of its magnitude, and proportion as an equality between two ratios. By taking quantitative values (numbers) out of the equation, he avoided the trap of having to express an irrational number as a number. "Eudoxus' theory enabled the Greek mathematicians to make tremendous progress in geometry by supplying the necessary logical foundation for incommensurable ratios". This incommensurability is dealt with in Euclid's Elements, Book X, Proposition 9. It was not until Eudoxus developed a theory of proportion that took into account irrational as well as rational ratios that a strong mathematical foundation of irrational numbers was created.
As a result of the distinction between number and magnitude, geometry became the only method that could take into account incommensurable ratios. Because previous numerical foundations were still incompatible with the concept of incommensurability, Greek focus shifted away from numerical conceptions such as algebra and focused almost exclusively on geometry. In fact, in many cases, algebraic conceptions were reformulated into geometric terms. This may account for why we still conceive of x² and x³ as x squared and x cubed instead of x to the second power and x to the third power. Also crucial to Zeno's work with incommensurable magnitudes was the fundamental focus on deductive reasoning that resulted from the foundational shattering of earlier Greek mathematics. The realization that some basic conception within the existing theory was at odds with reality necessitated a complete and thorough investigation of the axioms and assumptions that underlie that theory. Out of this necessity, Eudoxus developed his method of exhaustion, a kind of reductio ad absurdum that "...established the deductive organization on the basis of explicit axioms..." as well as "...reinforced the earlier decision to rely on deductive reasoning for proof". This method of exhaustion is the first step in the creation of calculus.
Theodorus of Cyrene proved the irrationality of the surds of whole numbers up to 17, but stopped there probably because the algebra he used could not be applied to the square root of 17.
India
Geometrical and mathematical problems involving irrational numbers such as square roots were addressed very early during the Vedic period in India. There are references to such calculations in the Samhitas, Brahmanas, and the Shulba Sutras (800 BC or earlier).
It is suggested that the concept of irrationality was implicitly accepted by Indian mathematicians since the 7th century BC, when Manava (c. 750 – 690 BC) believed that the square roots of numbers such as 2 and 61 could not be exactly determined. Historian Carl Benjamin Boyer, however, writes that "such claims are not well substantiated and unlikely to be true".
Later, in their treatises, Indian mathematicians wrote on the arithmetic of surds including addition, subtraction, multiplication, rationalization, as well as separation and extraction of square roots.
Mathematicians like Brahmagupta (in 628 AD) and Bhāskara I (in 629 AD) made contributions in this area as did other mathematicians who followed. In the 12th century Bhāskara II evaluated some of these formulas and critiqued them, identifying their limitations.
During the 14th to 16th centuries, Madhava of Sangamagrama and the Kerala school of astronomy and mathematics discovered the infinite series for several irrational numbers such as π and certain irrational values of trigonometric functions. Jyeṣṭhadeva provided proofs for these infinite series in the Yuktibhāṣā.
Islamic World
In the Middle Ages, the development of algebra by Muslim mathematicians allowed irrational numbers to be treated as algebraic objects. Middle Eastern mathematicians also merged the concepts of "number" and "magnitude" into a more general idea of real numbers, criticized Euclid's idea of ratios, developed the theory of composite ratios, and extended the concept of number to ratios of continuous magnitude. In his commentary on Book 10 of the Elements, the Persian mathematician Al-Mahani (d. 874/884) examined and classified quadratic irrationals and cubic irrationals. He provided definitions for rational and irrational magnitudes, which he treated as irrational numbers. He dealt with them freely but explained them in geometric terms.
In contrast to Euclid's concept of magnitudes as lines, Al-Mahani considered integers and fractions as rational magnitudes, and square roots and cube roots as irrational magnitudes. He also introduced an arithmetical approach to the concept of irrationality, attributing arithmetical properties to irrational magnitudes.
The Egyptian mathematician Abū Kāmil Shujā ibn Aslam (c. 850 – 930) was the first to accept irrational numbers as solutions to quadratic equations or as coefficients in an equation in the form of square roots and fourth roots. In the 10th century, the Iraqi mathematician Al-Hashimi provided general proofs (rather than geometric demonstrations) for irrational numbers, as he considered multiplication, division, and other arithmetical functions.
Many of these concepts were eventually accepted by European mathematicians sometime after the Latin translations of the 12th century. Al-Hassār, a Moroccan mathematician from Fez specializing in Islamic inheritance jurisprudence during the 12th century, first mentions the use of a fractional bar, where numerators and denominators are separated by a horizontal bar. In his discussion he writes, "..., for example, if you are told to write three-fifths and a third of a fifth, write thus, ." This same fractional notation appears soon after in the work of Leonardo Fibonacci in the 13th century.
Modern period
The 17th century saw imaginary numbers become a powerful tool in the hands of Abraham de Moivre, and especially of Leonhard Euler. The completion of the theory of complex numbers in the 19th century entailed the differentiation of irrationals into algebraic and transcendental numbers, the proof of the existence of transcendental numbers, and the resurgence of the scientific study of the theory of irrationals, largely ignored since Euclid. The year 1872 saw the publication of the theories of Karl Weierstrass (by his pupil Ernst Kossak), Eduard Heine (Crelle's Journal, 74), Georg Cantor (Annalen, 5), and Richard Dedekind. Méray had taken in 1869 the same point of departure as Heine, but the theory is generally referred to the year 1872. Weierstrass's method has been completely set forth by Salvatore Pincherle in 1880, and Dedekind's has received additional prominence through the author's later work (1888) and the endorsement by Paul Tannery (1894). Weierstrass, Cantor, and Heine base their theories on infinite series, while Dedekind founds his on the idea of a cut (Schnitt) in the system of all rational numbers, separating them into two groups having certain characteristic properties. The subject has received later contributions at the hands of Weierstrass, Leopold Kronecker (Crelle, 101), and Charles Méray.
Continued fractions, closely related to irrational numbers (and due to Cataldi, 1613), received attention at the hands of Euler, and at the opening of the 19th century were brought into prominence through the writings of Joseph-Louis Lagrange. Dirichlet also added to the general theory, as have numerous contributors to the applications of the subject.
Johann Heinrich Lambert proved (1761) that π cannot be rational, and that e^n is irrational if n is rational (unless n = 0). While Lambert's proof is often called incomplete, modern assessments support it as satisfactory, and in fact for its time it is unusually rigorous. Adrien-Marie Legendre (1794), after introducing the Bessel–Clifford function, provided a proof to show that π² is irrational, whence it follows immediately that π is irrational also. The existence of transcendental numbers was first established by Liouville (1844, 1851). Later, Georg Cantor (1873) proved their existence by a different method, which showed that every interval in the reals contains transcendental numbers. Charles Hermite (1873) first proved e transcendental, and Ferdinand von Lindemann (1882), starting from Hermite's conclusions, showed the same for π. Lindemann's proof was much simplified by Weierstrass (1885), still further by David Hilbert (1893), and was finally made elementary by Adolf Hurwitz and Paul Gordan.
Examples
Square roots
The square root of 2 was likely the first number proved irrational. The golden ratio is another famous quadratic irrational number. The square roots of all natural numbers that are not perfect squares are irrational and a proof may be found in quadratic irrationals.
General roots
The proof for the irrationality of the square root of two can be generalized using the fundamental theorem of arithmetic. This asserts that every integer has a unique factorization into primes. Using it we can show that if a rational number is not an integer then no integral power of it can be an integer, as in lowest terms there must be a prime in the denominator that does not divide into the numerator whatever power each is raised to. Therefore, if an integer is not an exact kth power of another integer, then that first integer's kth root is irrational.
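A short sketch of this criterion: for a positive integer, the k-th root is rational exactly when the integer is a perfect k-th power, which can be tested with an integer binary search (helper names are illustrative):

```python
def integer_kth_root(n: int, k: int) -> int:
    """Largest integer r with r**k <= n, found by binary search."""
    lo, hi = 0, max(1, n)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** k <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def kth_root_is_rational(n: int, k: int) -> bool:
    """For a positive integer n, n**(1/k) is rational iff n is a perfect k-th power."""
    r = integer_kth_root(n, k)
    return r ** k == n

print(kth_root_is_rational(8, 3))   # True  (2**3 = 8, so the cube root is 2)
print(kth_root_is_rational(2, 2))   # False (the square root of 2 is irrational)
```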
Logarithms
Perhaps the numbers most easy to prove irrational are certain logarithms. Here is a proof by contradiction that log2 3 is irrational (log2 3 ≈ 1.58 > 0).
Assume log2 3 is rational. For some positive integers m and n, we have log2 3 = m/n.
It follows that 2^(m/n) = 3, and hence 2^m = 3^n.
The number 2 raised to any positive integer power must be even (because it is divisible by 2) and the number 3 raised to any positive integer power must be odd (since none of its prime factors will be 2). Clearly, an integer cannot be both odd and even at the same time: we have a contradiction. The only assumption we made was that log2 3 is rational (and so expressible as a quotient of integers m/n with n ≠ 0). The contradiction means that this assumption must be false, i.e. log2 3 is irrational, and can never be expressed as a quotient of integers m/n with n ≠ 0.
Cases such as log10 2 can be treated similarly.
Types
An irrational number may be algebraic, that is a real root of a polynomial with integer coefficients. Those that are not algebraic are transcendental.
Algebraic
The real algebraic numbers are the real solutions of polynomial equations
p(x) = a_n x^n + a_(n-1) x^(n-1) + ... + a_1 x + a_0 = 0,
where the coefficients a_0, ..., a_n are integers and a_n ≠ 0. An example of an irrational algebraic number is x0 = (√2 + 1)^(1/3). It is clearly algebraic since it is the root of an integer polynomial, (x0³ - 1)² = 2, which is equivalent to x0⁶ - 2x0³ - 1 = 0. This polynomial has no rational roots, since the rational root theorem shows that the only possibilities are ±1, but x0 is greater than 1. So x0 is an irrational algebraic number. There are countably many algebraic numbers, since there are countably many integer polynomials.
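As a quick sanity check of the example above (assuming SymPy is available), one can verify symbolically that x0 satisfies x^6 - 2x^3 - 1 = 0 and that the rational-root candidates ±1 do not; this only confirms the algebraic relation, and is not by itself a proof of irrationality:

```python
import sympy as sp

x = sp.Symbol('x')
x0 = sp.root(sp.sqrt(2) + 1, 3)        # (sqrt(2) + 1)**(1/3)
p = x**6 - 2*x**3 - 1
print(sp.simplify(p.subs(x, x0)))      # 0  -> x0 is a root of x**6 - 2*x**3 - 1
print(p.subs(x, 1), p.subs(x, -1))     # -2 2  -> the rational candidates +/-1 are not roots
```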
Transcendental
Almost all irrational numbers are transcendental. Examples are e^r and π^r, which are transcendental for all nonzero rational r.
Because the algebraic numbers form a subfield of the real numbers, many irrational real numbers can be constructed by combining transcendental and algebraic numbers. For example, 3π + 2, π + √2 and e√3 are irrational (and even transcendental).
Decimal expansions
The decimal expansion of an irrational number never repeats (meaning the decimal expansion does not repeat the same number or sequence of numbers) or terminates (this means there is not a finite number of nonzero digits), unlike any rational number. The same is true for binary, octal or hexadecimal expansions, and in general for expansions in every positional notation with natural bases.
To show this, suppose we divide integers n by m (where m is nonzero). When long division is applied to the division of n by m, there can never be a remainder greater than or equal to m. If 0 appears as a remainder, the decimal expansion terminates. If 0 never occurs, then the algorithm can run at most m − 1 steps without using any remainder more than once. After that, a remainder must recur, and then the decimal expansion repeats.
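The argument can be illustrated with a small long-division sketch that records remainders until one repeats (the expansion is periodic) or the remainder reaches zero (the expansion terminates); the function name is illustrative:

```python
def decimal_expansion(n: int, m: int):
    """Digits of n/m (0 < n < m) by long division; stops when a remainder repeats or becomes 0."""
    digits, seen, r = [], {}, n % m
    while r != 0 and r not in seen:
        seen[r] = len(digits)      # remember where this remainder first occurred
        r *= 10
        digits.append(r // m)
        r %= m
    cycle_start = seen.get(r)      # None means the expansion terminated
    return digits, cycle_start

print(decimal_expansion(1, 7))   # ([1, 4, 2, 8, 5, 7], 0)  -> 0.(142857) repeating
print(decimal_expansion(1, 8))   # ([1, 2, 5], None)        -> 0.125 terminates
```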
Conversely, suppose we are faced with a repeating decimal; we can prove that it is a fraction of two integers. For example, consider:
A = 0.7162162162...
Here the repetend is 162 and the length of the repetend is 3. First, we multiply by an appropriate power of 10 to move the decimal point to the right so that it is just in front of a repetend. In this example we would multiply by 10 to obtain:
10A = 7.162162162...
Now we multiply this equation by 10^r where r is the length of the repetend. This has the effect of moving the decimal point to be in front of the "next" repetend. In our example, multiply by 10³:
10,000A = 7,162.162162162...
The result of the two multiplications gives two different expressions with exactly the same "decimal portion", that is, the tail end of 10,000A matches the tail end of 10A exactly. Here, both 10,000A and 10A have .162162... after the decimal point.
Therefore, when we subtract the 10A equation from the 10,000A equation, the tail end of 10A cancels out the tail end of 10,000A, leaving us with:
9,990A = 7,155.
Then
A = 7155/9990 = 53/74
is a ratio of integers and therefore a rational number.
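The same subtraction trick can be carried out mechanically. The sketch below (hypothetical helper name) converts a decimal of the form 0.<prefix><repetend repeating> into an exact fraction using Python's Fraction type:

```python
from fractions import Fraction

def repeating_decimal_to_fraction(prefix: str, repetend: str) -> Fraction:
    """Convert 0.<prefix><repetend repeating> to an exact fraction.
    E.g. prefix='7', repetend='162' represents 0.7162162162...
    """
    p, r = len(prefix), len(repetend)
    a1 = int(prefix or "0")               # integer part of 10**p * A
    a2 = int((prefix or "") + repetend)   # integer part of 10**(p+r) * A
    # Subtracting the two shifted copies cancels the repeating tails,
    # leaving (10**(p+r) - 10**p) * A equal to an integer.
    return Fraction(a2 - a1, 10 ** (p + r) - 10 ** p)

print(repeating_decimal_to_fraction("7", "162"))   # 53/74
```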
Irrational powers
Dov Jarden gave a simple non-constructive proof that there exist two irrational numbers a and b, such that a^b is rational:
Consider √2^√2; if this is rational, then take a = b = √2. Otherwise, take a to be the irrational number √2^√2 and b = √2. Then a^b = (√2^√2)^√2 = √2^(√2·√2) = √2² = 2, which is rational.
Although the above argument does not decide between the two cases, the Gelfond–Schneider theorem shows that √2^√2 is transcendental, hence irrational. This theorem states that if a and b are both algebraic numbers, and a is not equal to 0 or 1, and b is not a rational number, then any value of a^b is a transcendental number (there can be more than one value if complex number exponentiation is used).
An example that provides a simple constructive proof is
(√2)^(log_√2 3) = 3.
The base of the left side is irrational and the right side is rational, so one must prove that the exponent on the left side, log_√2 3, is irrational. This is so because, by the formula relating logarithms with different bases,
log_√2 3 = log2 3 / log2 √2 = log2 3 / (1/2) = 2 log2 3,
which we can assume, for the sake of establishing a contradiction, equals a ratio m/n of positive integers. Then log2 3 = m/(2n), hence 2^(m/(2n)) = 3, hence 2^m = 3^(2n), which is a contradictory pair of prime factorizations and hence violates the fundamental theorem of arithmetic (unique prime factorization).
A stronger result is the following: Every rational number in the interval ((1/e)^(1/e), ∞) can be written either as a^a for some irrational number a or as n^n for some natural number n. Similarly, every positive rational number can be written either as a^(a^a) for some irrational number a or as n^(n^n) for some natural number n.
Open questions
Various combinations of π, e, and elementary functions of them (such as π + e, π - e, πe, π/e, π^π, and e^e) are not known to be irrational, in part because π and e are not known to be algebraically independent. Schanuel's conjecture would imply that all of the above numbers are irrational and even transcendental.
The question about the irrationality of Euler's constant γ is a long-standing open problem in number theory.
Other important numbers which are not known to be irrational include the odd zeta constants ζ(2n + 1) for n ≥ 2 and Catalan's constant G.
In constructive mathematics
In constructive mathematics, excluded middle is not valid, so it is not true that every real number is rational or irrational. Thus, the notion of an irrational number bifurcates into multiple distinct notions. One could take the traditional definition of an irrational number as a real number that is not rational. However, there is a second definition of an irrational number used in constructive mathematics, that a real number is an irrational number if it is apart from every rational number, or equivalently, if the distance between it and every rational number is positive. This definition is stronger than the traditional definition of an irrational number. This second definition is used in Errett Bishop's proof that the square root of 2 is irrational.
Set of all irrationals
Since the reals form an uncountable set, of which the rationals are a countable subset, the complementary set of irrationals is uncountable.
Under the usual (Euclidean) distance function d(x, y) = |x - y|, the real numbers are a metric space and hence also a topological space. Restricting the Euclidean distance function gives the irrationals the structure of a metric space. Since the subspace of irrationals is not closed, the induced metric is not complete. Being a G-delta set—i.e., a countable intersection of open subsets—in a complete metric space, the space of irrationals is completely metrizable: that is, there is a metric on the irrationals inducing the same topology as the restriction of the Euclidean metric, but with respect to which the irrationals are complete. One can see this without knowing the aforementioned fact about G-delta sets: the continued fraction expansion of an irrational number defines a homeomorphism from the space of irrationals to the space of all sequences of positive integers, which is easily seen to be completely metrizable.
Furthermore, the set of all irrationals is a disconnected metrizable space. In fact, the irrationals equipped with the subspace topology have a basis of clopen sets, so the space is zero-dimensional.
| Mathematics | Basics | null |
20647810 | https://en.wikipedia.org/wiki/Sunburn | Sunburn | Sunburn is a form of radiation burn that affects living tissue, such as skin, that results from an overexposure to ultraviolet (UV) radiation, usually from the Sun. Common symptoms in humans and other animals include red or reddish skin that is hot to the touch or painful, general fatigue, and mild dizziness. Other symptoms include blistering, peeling skin, swelling, itching, and nausea. Excessive UV radiation is the leading cause of (primarily) non-malignant skin tumors, which in extreme cases can be life-threatening. Sunburn is an inflammatory response in the tissue triggered by direct DNA damage by UV radiation. When the cells' DNA is overly damaged by UV radiation, type I cell-death is triggered and the tissue is replaced.
Sun protective measures like sunscreen and sun protective clothing are widely accepted to prevent sunburn and some types of skin cancer. Special populations, including children, are especially susceptible to sunburn and protective measures should be used to prevent damage.
Signs and symptoms
Typically, there is initial redness, followed by varying degrees of pain, the severity of which correlates with the duration and intensity of sun exposure.
Other symptoms can include blistering, swelling (edema), itching (pruritus), peeling skin, rash, nausea, fever, chills, and fainting (syncope). Also, heat is produced from capillaries close to the skin surface, therefore the affected area feels warm to touch. Sunburns may be classified as superficial or partial-thickness burns. Blistering is a sign of second-degree sunburn.
Variations
Minor sunburns typically cause nothing more than slight redness and tenderness to the affected areas. In more serious cases, blistering can occur. Extreme sunburns can be painful to the point of debilitation and may require hospital care.
Duration
Sunburn can occur in less than 15 minutes in response to sun exposure and in seconds when exposed to non-shielded welding arcs or other sources of intense ultraviolet light. Nevertheless, the inflicted harm is often not immediately obvious.
After sun exposure, the skin may turn red in as little as 30 minutes, but sunburn usually takes 2 to 6 hours. Pain is usually strongest 6 to 48 hours after exposure. The burn continues to develop for 1 to 3 days, occasionally followed by peeling skin after 3 to 8 days. Some peeling and itching may continue for several weeks.
Skin cancer
Ultraviolet radiation causes sunburns and increases the risk of three types of skin cancer: melanoma, basal-cell carcinoma and squamous-cell carcinoma. Of greatest concern is that melanoma risk increases in a dose-dependent manner with the number of a person's lifetime cumulative episodes of sunburn. An estimated 1/3 of melanomas in the United States and Australia could be prevented with regular sunscreen use.
Causes
Sunburn is caused by UV radiation from the Sun but may also result from artificial sources, such as tanning lamps, welding arcs, or ultraviolet germicidal irradiation. It is the body's reaction to direct DNA damage from UVB light. This damage is mainly the formation of a thymine dimer. The damage is recognized by the body, which then triggers several defense mechanisms, including DNA repair to revert the damage, apoptosis and peeling to remove irreparably damaged skin cells, and increased melanin production to prevent future damage.
Melanin readily absorbs UV wavelength light, acting as a photoprotectant. By preventing UV photons from disrupting chemical bonds, melanin inhibits both the direct alteration of DNA, as well as the generation of free radicals, to prevent them from indirectly damaging DNA. However, human melanocytes contain over 2,000 genomic sites that are highly sensitive to UV, and such sites can be up to 170-fold more sensitive to UV induction of cyclobutane pyrimidine dimers than the average site. These sensitive sites often occur at biologically significant locations near genes.
Sunburn causes an inflammation process that includes the production of prostanoids and bradykinin. These chemical compounds increase sensitivity to heat by reducing the threshold of heat receptor (TRPV1) activation from 109 °F (43 °C) to 85 °F (29 °C). The pain may be caused by the overproduction of a protein called CXCL5, which activates nerve fibers.
Skin type determines the ease of sunburn. People with lighter skin tones and limited capacity to develop a tan after UV radiation exposure have a greater risk of sunburn. Fitzpatrick's Skin phototypes classification describes the normal variations of skin responses to UV radiation. Persons with type I skin have the greatest capacity to sunburn, and type VI have the least capacity to burn. However, all skin types can develop sunburn.
Fitzpatrick's skin phototypes:
Type 0: Albino
Type I: Pale white skin, burns easily, does not tan
Type II: White skin, burns easily, tans with difficulty
Type III: White skin, may burn, but eventually tans easily
Type IV: Light brown/olive skin, hardly burns, tans easily
Type V: Brown skin, usually does not burn, tans easily
Type VI: Black skin, very unlikely to burn, becomes darker with UV radiation exposure
Age also affects how skin reacts to the sun. Children younger than six and adults older than sixty are more sensitive to sunlight.
Certain genetic conditions, for example, xeroderma pigmentosum, increase a person's susceptibility to sunburn and subsequent skin cancers. These conditions involve defects in DNA repair mechanisms which decrease the ability to repair DNA damaged by UV radiation.
Medications
The risk of sunburn can be increased by pharmaceutical products that sensitize users to UV radiation. Certain antibiotics, oral contraceptives, antidepressants, acne medications, and tranquillizers have this effect.
UV intensity
The UV Index indicates the risk of sunburn at a given time and location. Contributing factors include:
The time of day. In most locations, the sun's rays are strongest between approximately 10 am and 4 pm daylight saving time.
Cloud cover. Clouds partially block UV, but even on an overcast day, a significant percentage of the sun's damaging UV radiation can pass through clouds.
Proximity to reflective surfaces, such as water, sand, concrete, snow, and ice. All of these reflect the sun's rays and can cause sunburns.
The season of the year. The Sun's position in late spring and early summer can cause a more-severe sunburn.
Altitude. At a higher altitude, it is easier to become burnt, because there is less of the Earth's atmosphere to block the sunlight. UV exposure increases about 4% for every 1000 ft (305 m) gain in elevation.
Proximity to the equator (latitude). Between the polar and tropical regions, the closer to the equator, the more direct sunlight passes through the atmosphere over a year. For example, the southern United States gets fifty percent more sunlight than the northern United States.
Because of variations in the intensity of UV radiation passing through the atmosphere, the risk of sunburn increases with proximity to the tropic latitudes, located between 23.5° north and south latitude. All else being equal (e.g., cloud cover, ozone layer, terrain, etc.), each location within the tropic or polar regions receives approximately the same amount of UV radiation over a year. In the temperate zones between 23.5° and 66.5°, UV radiation varies substantially by latitude and season. The higher the latitude, the lower the intensity of the UV rays. Sun intensity in the northern hemisphere is greatest during May, June and July, and in the southern hemisphere during November, December and January. On a minute-by-minute basis, the amount of UV radiation depends on the Sun's angle. The intensity of ultraviolet radiation can be estimated from the ratio of an object's height to the length of its shadow. Height is measured parallel to the Earth's gravitational field and the projected shadow is measured on a flat, level surface. For objects wider than skulls or poles, the height and length are best measured relative to the same occluding edge. The most significant risk is at solar noon when shadows are at their minimum, and the Sun's radiation passes most directly through the atmosphere. Regardless of one's latitude (assuming no other variables), equal shadow lengths mean equal amounts of UV radiation.
The skin and eyes are most sensitive to damage by UV at 265–275 nm wavelength, which is in the lower UVC band that is rarely encountered except from artificial sources like welding arcs. Longer wavelengths of UV radiation cause most sunburn because those wavelengths are more prevalent in ground-level sunlight.
Ozone depletion
In recent decades, the incidence and severity of sunburn have increased worldwide, partly because of chemical damage to the atmosphere's ozone layer. Between the 1970s and the 2000s, average stratospheric ozone decreased by approximately 4%, contributing an approximate 4% increase to the average UV intensity at the Earth's surface. Ozone depletion and the seasonal "ozone hole" have led to much larger changes in some locations, especially in the southern hemisphere.
Tanning
Suntans, which naturally develop in some individuals as a protective mechanism against the sun, are viewed by most in the Western world as desirable. Tanning has led to an increased exposure to UV radiation from both the natural sun and tanning lamps. Suntans can provide a modest sun protection factor (SPF) of 3, meaning that tanned skin would tolerate up to three times the UV exposure as pale skin.
Sunburns associated with indoor tanning can be severe.
The World Health Organization, American Academy of Dermatology, and the Skin Cancer Foundation have recommended avoiding artificial UV sources such as tanning beds. Suntans are not recommended as a form of sun protection.
Diagnosis
Differential diagnosis
The differential diagnosis of sunburn includes other skin pathology induced by UV radiation, including photoallergic reactions, phototoxic reactions to topical or systemic medications, and other dermatologic disorders that are aggravated by exposure to sunlight. Considerations for diagnosis include duration and intensity of UV exposure, topical or systemic medication use, history of dermatologic disease, and nutritional status.
Phototoxic reactions: Non-immunological response to sunlight interacting with certain drugs and chemicals in the skin which resembles an exaggerated sunburn. Common medications that may cause a phototoxic reaction include amiodarone, dacarbazine, fluoroquinolones, 5-fluorouracil, furosemide, nalidixic acid, phenothiazines, psoralens, retinoids, sulfonamides, sulfonylureas, tetracyclines, thiazides, and vinblastine.
Photoallergic reactions: Uncommon immunological response to sunlight interacting with certain drugs and chemicals in the skin. When in an excited state by UVR, these drugs and chemicals form free radicals that react to form functional antigens and induce a Type IV hypersensitivity reaction. These drugs include 6-methylcoumarin, aminobenzoic acid and esters, chlorpromazine, promethazine, diclofenac, sulfonamides, and sulfonylureas. Unlike phototoxic reactions which resemble exaggerated sunburns, photoallergic reactions can cause intense itching and can lead to thickening of the skin.
Phytophotodermatitis: UV radiation induces skin inflammation after contact with certain plants (including limes, celery, and meadow grass). Causes pain, redness, and blistering of the skin in the distribution of plant exposure.
Polymorphic light eruption: Recurrent abnormal reactions to UVR present in various ways, including pink-to-red bumps, blisters, plaques and urticaria.
Solar urticaria: A rare allergic reaction to the sun that occurs within minutes of exposure and fades within hours.
Other skin diseases exacerbated by sunlight: Several dermatologic conditions can increase in severity with exposure to UVR. These include systemic lupus erythematosus (SLE), dermatomyositis, acne, atopic dermatitis, and rosacea.
Additionally, since sunburn is a type of radiation burn, it can initially hide a severe exposure to radioactivity. Excess radiation exposure may result in acute radiation syndrome or other radiation-induced illnesses, especially in sunny conditions. For instance, the difference between the erythema caused by sunburn and other radiation burns is not immediately obvious. Symptoms common to heat illness and the prodromic stage of acute radiation syndrome like nausea, vomiting, fever, weakness/fatigue, dizziness or seizure can add to further diagnostic confusion.
Prevention
The most effective way to prevent sunburn is to reduce the amount of UV radiation reaching the skin. The World Health Organization, American Academy of Dermatology, and Skin Cancer Foundation recommend the following measures to prevent excessive UV exposure and skin cancer:
Limiting sun exposure between the hours of 10 am and 4 pm, when UV rays are the strongest
Seeking shade when UV rays are most intense
Wearing sun-protective clothing, including a wide-brim hat, sunglasses, and tightly woven, loose-fitting clothing
Using sunscreen
Avoiding tanning beds and artificial UV exposure
UV intensity
The strength of sunlight is published in many locations as a UV Index. Sunlight is generally strongest when the Sun is close to the highest point in the sky. Due to time zones and daylight saving time, this is not necessarily at 12 pm, but often one to two hours later. Seeking shade using umbrellas and canopies can reduce UV exposure, but does not block all UV rays. The WHO recommends following the shadow rule: "Watch your shadow – Short shadow, seek shade!"
Sunscreen
Commercial preparations that block UV light are known as sunscreens or sunblocks. They have a sun protection factor (SPF) rating based on the sunblock's ability to suppress sunburn: The higher the SPF rating, the lower the amount of direct DNA damage. The stated protection factors are correct only if 2 mg of sunscreen is applied per square centimeter of exposed skin; this translates into about 28 mL (1 oz) to cover the whole body of an adult male. The recommended dose is much more than many people use in practice. Sunscreens function by means of chemicals such as oxybenzone and dioxybenzone (organic sunscreens) or opaque materials such as zinc oxide or titanium oxide (inorganic sunscreens) that mainly absorb UV radiation. Chemical and mineral sunscreens vary in the wavelengths of UV radiation blocked. Broad-spectrum sunscreens contain filters that protect against UVA radiation as well as UVB. Although UVA radiation does not primarily cause sunburn, it contributes to skin aging and increases skin cancer risk.
Sunscreen is effective and thus recommended for preventing melanoma and squamous cell carcinoma. There is little evidence that it is effective in preventing basal cell carcinoma. Typical use of sunscreen does not usually result in vitamin D deficiency, but extensive usage may.
Recommendations
Research has shown that the best sunscreen protection is achieved by application 15 to 30 minutes before exposure, followed by one reapplication 15 to 30 minutes after exposure begins. Further reapplication is necessary after activities such as swimming, sweating, and rubbing. Recommendations are product dependent varying from 80 minutes in water to hours based on the indications and protection shown on the label. The American Academy of Dermatology recommends the following criteria in selecting a sunscreen:
Broad spectrum: protects against both UVA and UVB rays
SPF 30 or higher
Water resistant: sunscreens are classified as water resistant based on time, either 40 minutes, 80 minutes, or not water resistant
Eyes
The eyes are also sensitive to sun exposure at about the same UV wavelengths as skin; snow blindness is sunburn of the cornea. Wrap-around sunglasses or the use by spectacle-wearers of glasses that block UV light reduce harmful radiation. UV light has been implicated in the development of age-related macular degeneration, pterygium and cataracts. Concentrated clusters of melanin, commonly known as freckles, are often found within the iris.
The tender skin of the eyelids can also become sunburned and can be especially irritating.
Lips
The lips can become chapped (cheilitis) by sun exposure. Sunscreen on the lips does not have a pleasant taste and might be removed by saliva. Some lip balms (ChapSticks) have SPF ratings and contain sunscreens.
Feet
The skin of the feet is often tender and protected, so sudden prolonged exposure to UV radiation can be particularly painful and damaging to the top of the foot. Protective measures include sunscreen, socks, or swimwear that covers the foot.
Diet
Dietary factors influence susceptibility to sunburn, recovery from sunburn, and risk of secondary complications. Several dietary antioxidants, including essential vitamins, are effective in protecting against sunburn and skin damage associated with ultraviolet radiation, in both human and animal studies. Supplementation with Vitamin C and Vitamin E was shown in one study to reduce the amount of sunburn after a controlled amount of UV exposure.
A review of scientific literature through 2007 found that beta carotene (Vitamin A) supplementation had a protective effect against sunburn. The effects of beta carotene were only evident in the long-term, with studies of supplementation for periods less than ten weeks in duration failing to show any effects. There is also evidence that common foods may have some protective ability against sunburn if taken for a period before exposure.
Protecting children
Babies and children are particularly susceptible to UV damage which increases their risk of both melanoma and non-melanoma skin cancers later in life. Children should not sunburn at any age, and protective measures can reduce their future risk of skin cancer.
Infants 0–6 months: Children under 6 months generally have skin too sensitive for sunscreen, and protective measures should focus on avoiding excessive UV exposure by using window mesh covers, wide-brim hats, loose clothing that covers the skin, and reducing UV exposure between the hours of 10 am and 4 pm.
Infants 6–12 months: Sunscreen can safely be used on infants this age. It is recommended to apply a broad-spectrum, water-resistant SPF 30+ sunscreen to exposed areas and avoid excessive UV exposure by using wide-brim hats and protective clothing.
Toddlers and Preschool-aged children: Apply a broad-spectrum, water-resistant SPF 30+ sunscreen to exposed areas, use wide-brim hats and sunglasses, avoid peak UV intensity hours of 10 am - 4 pm and seek shade. Sun-protective clothing with an SPF rating can also provide additional protection.
Artificial UV exposure
The WHO recommends that artificial UV exposure, including tanning beds, should be avoided as no safe dose has been established. Special protective clothing (for example, welding helmets/shields) should be worn when exposed to any artificial source of occupational UV. Such sources can produce UVC, an extremely carcinogenic wavelength of UV, which ordinarily is not present in normal sunlight, having been filtered out by the atmosphere.
Treatment
The primary measure of treatment is avoiding further exposure to the sun. The best treatment for most sunburns is time; most sunburns heal completely within a few weeks.
The American Academy of Dermatology recommends the following for the treatment of sunburn:
For pain relief, take cool baths or showers frequently.
Use soothing moisturizers that contain aloe vera or soy.
Anti-inflammatory medications such as ibuprofen or aspirin can help with pain.
Keep hydrated and drink extra water.
Do not pop blisters on a sunburn; let them heal on their own instead.
Protect sunburned skin (see: Sun protective clothing and Sunscreen) with loose clothing when going outside to prevent further damage while not irritating the sunburn.
Non-steroidal anti-inflammatory drugs (NSAIDs; such as ibuprofen or naproxen), and aspirin may decrease redness and pain. Local anesthetics such as benzocaine, however, are contraindicated. Schwellnus et al. state that topical steroids (such as hydrocortisone cream) do not help with sunburns, although the American Academy of Dermatology says they can be used on especially sore areas. While lidocaine cream (a local anesthetic) is often used as a sunburn treatment, there is little evidence for the effectiveness of such use.
A home treatment that may help the discomfort is using cool and wet cloths on the sunburned areas. Applying soothing lotions that contain aloe vera to sunburned areas was supported by multiple studies. However, others have found aloe vera to have no effect. Note that aloe vera cannot protect people from new or further sunburn. Another home treatment is using a moisturizer that contains soy. Furthermore, sunburn draws fluid to the skin's surface and away from the rest of the body. Drinking extra water is recommended to help prevent dehydration.
| Biology and health sciences | Types | Health |
20647902 | https://en.wikipedia.org/wiki/Escalator | Escalator | An escalator is a moving staircase which carries people between floors of a building or structure. It consists of a motor-driven chain of individually linked steps on a track which cycle on a pair of tracks which keep the step tread horizontal.
Escalators are often used around the world in places where lifts would be impractical, or they can be used in conjunction with them. Principal areas of usage include department stores, shopping malls, airports, transit systems (railway/railroad stations), convention centers, hotels, arenas, stadiums and public buildings.
Escalators have the capacity to move large numbers of people. They have no waiting interval (except during very heavy traffic). They can be used to guide people toward main exits or special exhibits and may be weatherproofed for outdoor use. A non-functional escalator can function as a normal staircase, whereas many other methods of transport become useless when they break down or lose power.
History
Inventors and manufacturers
Nathan Ames, a patent attorney from Saugus, Massachusetts, is credited with patenting the first "escalator" in 1859, even though no working model of his design was ever built. His invention, the "revolving stairs", is largely speculative and the patent specifications indicate that he had no preference for materials or potential use (he noted that steps could be upholstered or made of wood, and suggested that the units might benefit the infirm within a household use). The suggested motive power was either manual or hydraulic.
In 1889, Leamon Souder successfully patented the "stairway", an analogous device that featured a "series of steps and links jointed to each other". No model was ever built. This was the first of at least four escalator-style patents issued to Souder, including two for spiral designs.
On March 15, 1892, Jesse W. Reno patented the "Endless Conveyor or Elevator." A few months after Reno's patent was approved, George A. Wheeler patented his ideas for a more recognizable moving staircase, though it was never built. Wheeler's patents were bought by Charles Seeberger; some features of Wheeler's designs were incorporated in Seeberger's prototype that was built by the Otis Elevator Company in 1899. Reno, a graduate of Lehigh University, produced the first working escalator (called the "inclined elevator") and installed it alongside the Old Iron Pier at Coney Island, New York City in 1896. This particular device was little more than an inclined belt with cast-iron slats or cleats on the surface for traction, and traveled along a 25 degree incline. A few months later, the same prototype was used for a month-long trial period on the Manhattan side of the Brooklyn Bridge. Reno eventually joined forces with Otis and retired once he had sold his patents. Some Reno-type escalators were still being used in the Boston subway until construction for the Big Dig precipitated their removal. The Smithsonian Institution considered re-assembling one of these historic units from 1914 in their collection of Americana, but "logistics and reassembly costs won out over nostalgia", and the project was discarded.
Around May 1895, Charles Seeberger began drawings on a form of escalator similar to those patented by Wheeler in 1892. This device consisted of flat, moving stairs, not unlike the escalators of today, except for one important detail: the step surface was smooth, with no comb effect to safely guide the rider's feet off at the ends. Instead, the passenger had to step off sideways. To facilitate this, at the top or bottom of the escalator the steps continued moving horizontally beyond the end of the handrail (like a miniature moving sidewalk) until they disappeared under a triangular "divider" which guided the passenger to either side. Seeberger teamed with Otis in 1899, and together they produced the first commercial escalator. It won first prize at the 1900 Paris Exposition Universelle. Also on display at the Exposition were Reno's inclined elevator, a similar model by James M. Dodge and the Link Belt Machinery Co., and two different devices by the French manufacturers Hallé and Piat.
Piat installed its "stepless" escalator in Harrods Knightsbridge store on Wednesday, November 16, 1898, though the company relinquished its patent rights to the department store. Noted by Bill Lancaster in The Department Store: a Social History, "customers unnerved by the experience were revived by shopmen dispensing free smelling salts and cognac." The Harrods unit was a continuous leather belt made of "224 pieces... strongly linked together traveling in an upward direction", and was the first "moving staircase" in England.
Hocquardt received European patent rights for the Fahrtreppe in 1906. After the Exposition, Hallé continued to sell its escalator device in Europe but was eventually eclipsed in sales by other major manufacturers.
In the first half of the twentieth century, several manufacturers developed their own escalator products, though they had to market their devices under different names, due to Otis’ hold on the trademark rights to the word "escalator." New York-based Peelle Company called their models the Motorstair, while Westinghouse called their model an Electric Stairway. The Toledo-based Haughton Elevator company referred to their product as simply Moving Stairs. The Otis trademark is no longer in effect.
Kone and Schindler introduced their first escalator models several decades after the Otis Elevator Co., but grew to dominate the field over time. Today, Mitsubishi and ThyssenKrupp are Otis's primary rivals. Kone expanded internationally by acquisition in the 1970s, buying out Swedish elevator manufacturer Asea-Graham, and purchasing other minor French, German and Austrian elevator makers before assuming control of Westinghouse's European elevator business. As the last of the "big four" manufacturers to emerge onto the global market, Kone first acquired the Montgomery Elevator company, then took control of Germany's Orenstein & Koppel Rolltreppen.
In the twenty-first century Schindler became the largest maker of escalators and second largest maker of elevators in the world, though their first escalator installation did not occur until 1936. In 1979, the company entered the United States market by purchasing the Haughton Elevator company. A decade later, Schindler assumed control of the North American escalator/elevator operations of Westinghouse, forming Schindler's American division.
Extant historic escalator models
Notable examples of historic escalators still in operation include:
St Anna Pedestrian Tunnel underneath the Scheldt river in Antwerp, Belgium, opened 1933.
Maastunnel's bicycle/pedestrian tunnel, adjacent to its car tunnel in Rotterdam, The Netherlands, opened 1942.
Tyne Cyclist and Pedestrian Tunnel, Tyne and Wear, England, constructed 1951.
Macy's Herald Square department store upwards escalators, New York, U.S., opened 1920s.
Etymology
Authors and historians have offered multiple interpretations of the source of the word "escalator", and some degree of misinformation then proliferated. For reference, contradictory citations by seven separate individuals, including the Otis Elevator Company itself, are provided below.
Seeberger trademarked the word "escalator" in 1900, to coincide with his device's debut at the Exposition universelle. According to his own account, in 1895, his legal counsel advised him to name his new invention, and he then set out to devise a title for it. As evidenced in Seeberger's handwritten documents, the inventor consulted "a Latin lexicon" and "adopted as the root of the new word, 'Scala'; as a prefix, 'E' and as a suffix, 'Tor.'" His own rough translation of the word thus created was "means of traversing from", and he intended for the word to be pronounced (). By 1906, Seeberger noted that the public had instead come to pronounce it ().
"Escalator" was not a combination of other French or Greek words, and was never a derivative of "elevator" in the original sense, which means "one who raises up, a deliverer" in Latin. Similarly, the root word "scala" does not mean "a flight of steps", but is the singular form of the plural noun "scalae", which can denote any of: "a flight of steps or stairs, a staircase; a ladder, [or] a scaling-ladder."
The alleged intended capitalization of "escalator" is likewise a topic of debate. Seeberger's trademark application lists the word not only with the "E", but also with all of the letters capitalized (in two different instances), and he specifies that "any other form and character of type may be employed... without altering in any essential manner the character of [the] trade-mark." Otis Elevator Co. advertisements also frequently capitalized all of the letters in the word.
In 1950, the landmark case Haughton Elevator Co. v. Seeberger precipitated the end of Otis's exclusive reign over the word "escalator", and simultaneously created a cautionary study for companies and individuals interested in trademark retention. Confirming the contention of the Examiner of Trademark Interferences, Assistant Commissioner of Patents Murphy's decision rejected Otis’ appeal to keep their trademark intact, and noted that "the term 'escalator' is recognized by the general public as the name for a moving stairway and not the source thereof", observing that Otis had "used the term as a generic descriptive term... in a number of patents which [had] been issued to them and... in their advertising matter." All trademark protections were removed from the word "escalator", the term was officially genericized, and it fell into the public domain.
Design
Design factors include innovative technology, physical requirements, location, traffic patterns, safety considerations, and aesthetics. Physical factors such as the distance to be spanned determine the length and pitch of the escalator, while factors such as the infrastructure's ability to provide support and power must also be considered. The separation of upward and downward traffic and the layout of loading and unloading areas are other important considerations.
Temporal traffic patterns must be anticipated. Some escalators need only to move people from one floor to another, but others may have specific requirements, such as funneling visitors towards exits or exhibits. The visibility and accessibility of the escalator to traffic is relevant. Designers need to account for the projected traffic volumes. For example, a single-width escalator traveling at a typical speed can move about 2,000 people per hour, assuming that passengers ride single file. The carrying capacity of an escalator system is typically matched to the expected peak traffic demand. For example, escalators at transit stations must be designed to cater for the peak traffic flow discharged from a train, without excessive bunching at the escalator entrance. In this regard, escalators help manage the flow of people. For example, at many airports an unpaired escalator delivers passengers to an exit, with no means for anyone entering at the exit to access the concourse.
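As a rough illustration of how such capacity figures can be estimated, the short Python sketch below multiplies an assumed belt speed, step depth and average occupancy; every number in it is an illustrative assumption rather than a value taken from any standard or manufacturer.

def passengers_per_hour(belt_speed_m_s=0.5, step_depth_m=0.4,
                        passengers_per_step=1.0, occupancy=0.45):
    # Steps passing a fixed point each second, times riders per step,
    # times the fraction of steps actually occupied, over one hour.
    steps_per_second = belt_speed_m_s / step_depth_m
    return steps_per_second * passengers_per_step * occupancy * 3600

print(round(passengers_per_hour()))   # 2025 -- roughly 2,000 people per hour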
Escalators are often built next to or around staircases that allow alternative travel between the same two floors. Elevators are necessary for disability access to floors serviced by escalators.
Escalators typically rise at an angle of 30 or 35 degrees from the ground. Like moving walkways, they move at a constant speed and may traverse considerable vertical distances. Most modern escalators have single-piece aluminum or stainless steel steps that move on a system of tracks in a continuous loop.
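The relationship between the incline angle and the space an escalator needs follows from elementary trigonometry. The sketch below uses an assumed 4.5 m floor-to-floor rise purely as a worked example; neither figure is taken from any particular installation.

import math

rise_m = 4.5        # assumed floor-to-floor height (example value only)
angle_deg = 30      # typical incline quoted above

incline_length_m = rise_m / math.sin(math.radians(angle_deg))   # length of inclined track
horizontal_run_m = rise_m / math.tan(math.radians(angle_deg))   # floor space spanned
print(round(incline_length_m, 1), round(horizontal_run_m, 1))   # 9.0 7.8 (metres)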
Different types of escalator planning include:
Parallel (up and down escalators adjacent or nearby, often seen in perpendicular areas, metro stations and multilevel movie theaters);
Multiple parallel (banks of more than one escalator going in the same direction parallel to banks going the other direction);
Crisscross (escalators going in one direction "stacked" with escalators going the opposite direction oriented adjacent but perpendicular, frequently used in department stores or shopping centers).
Most countries require escalators to have moving handrails that keep pace with the movement of the steps as a safety measure. This helps riders steady themselves, especially when stepping onto the moving stairs. Occasionally a handrail moves at a slightly different speed from the steps, causing it to "creep" slowly forward or backward relative to the steps; such loss of synchronicity is caused by slippage and normal wear, not by design.
The direction of escalator movement (up or down) can be permanently set, controlled manually depending on the predominant flow of the crowd, or controlled automatically. In some setups, the direction is controlled by whoever arrives first.
Components
Landing platforms are the two platforms (at the two ends) that house the curved sections of the tracks, as well as the gears and motors that drive the stairs. The top platform usually contains the motor assembly and the main drive gear, while the bottom holds the return gear. These sections also anchor the ends of the escalator truss. Each platform also has a floor plate and a comb bearer. The floor plate, which is flush with the rest of the floor and removable to allow engineers easy access, provides a place for passengers to stand before they step onto the moving stairs. The comb bearer sits between the stationary floor plate and the moving step; it is so named because the cleats on its edge mesh with the matching cleats on each step (and resemble a comb). The comb plates, which bolt to the comb bearer (usually 4 or 5 depending on the width of the machine), help to minimize the gap between the stairs and the landing, preventing objects or persons from becoming caught in it. Depending on the brand of escalator, the comb bearer can push back and/or up and activate limit switches if something jams in the combs, typically stones, screws, popcorn, or a rider's shoe or loose clothing.
The truss is the hollow metal structure that bridges the lower and upper landings, composed of two side sections joined with cross braces across the bottom and just below the top. The ends of the truss are attached to the top and bottom landing platforms via steel or concrete supports. It carries all the straight track sections connecting the upper and lower sections.
The balustrade is composed of handrails, balustrade panels, and skirt panels.
The handrail provides a handhold for passengers while they are riding the escalator. The handrail is pulled along its own track by a chain that is connected to the main drive gear by a series of pulleys, keeping it at the same speed as the steps. Four distinct sections make up the rail: at its center is a "slider", also known as a "glider ply", a layer of cotton or synthetic textile that allows the rail to move smoothly along its track. The "tension member" lies on the slider and consists of either steel cable or flat steel tape, providing the handrail with tensile strength and flexibility. The inner components, on top of the tension member, are made of chemically treated rubber designed to prevent the layers from separating. Finally, the outer layer, the part that passengers see, is the cover, typically a blend of synthetic polymers and rubber. Covers are designed to resist degradation from environmental conditions, mechanical wear and tear, and vandalism. In a factory, handrails are constructed by feeding rubber through an extrusion machine to produce layers of the required size and type to match specific orders. The component layers of fabric, rubber and steel are shaped by workers before being fed into the presses which fuse them together. In the mid-twentieth century, some handrail designs consisted of a rubber bellows, with rings of smooth metal cladding called "bracelets" between each coil. This gave the handrail a rigid yet flexible feel. Additionally, each bellows section was no more than around a metre long, so if part of the handrail was damaged, only the bad segment needed to be replaced. These forms of handrail have largely been replaced with fabric-and-rubber railings.

The balustrade panel, made of metal, sandwich panel, or glass, supports the handrails of the escalator. It also provides additional protection for the handrail and passengers. Some escalators have direction arrows on the ends of the balustrade, and escalators' on/off buttons are frequently located there as well. Moving walkways often use balustrades in the same way.

The bottom of the balustrade is called the skirt panel. It is notorious as a frequent site of injuries and failures, due to the possible entrapment of materials (including body parts) in the machinery. Multiple solutions have been suggested for this issue, including coating with a low-friction material, employing bristles, and others.
The track system is built into the truss to guide the step chain, which continuously pulls the steps from the bottom platform and back to the top in an endless loop. One track guides the front wheels of the steps (called the step-wheel track) and another guides the back wheels of the steps (called the trailer-wheel track). The relative positions of these tracks cause the steps to form a staircase as they move out from under the comb plate. Along the straight section of the truss the tracks are at their maximum distance apart. This configuration forces the back of one step to be at a 90-degree angle relative to the step behind it. This right angle forces the steps into a shape resembling a staircase. At the top and bottom of the escalator, the two tracks converge so that the front and back wheels of the steps are almost in a straight line. This causes the stairs to lay in a flat sheetlike arrangement, one after another, so they can easily travel around the bend in the curved section of track. The tracks carry the steps down along the underside of the truss until they reach the bottom landing, where they pass through another curved section of track before exiting the bottom landing. At this point, the tracks separate and the steps once again assume a staircase configuration. This cycle is repeated continually as the steps are pulled from bottom to top and back to the bottom again.
The steps themselves are solid, one piece, die-cast aluminium or steel. Yellow demarcation lines are sometimes added to indicate their edges. In most escalator models manufactured after 1950, both the riser and the tread of each step is cleated (given a ribbed appearance) with comb-like protrusions that mesh with the comb plates on the top and bottom platforms and the succeeding steps in the chain. Seeberger escalators featured flat treads and smooth risers; other escalator models have cleated treads and smooth risers. The steps are linked by a continuous metal chain that forms a closed loop. The front and back edges of the steps each have two wheels, the rear of which are set further apart and fit into the trailer-wheel track while the front set have narrower axles and fit the step-wheel track.
Alternative designs
Jesse Reno also designed the first escalators installed in any underground subway system in the form of a helical escalator at Holloway Road tube station in London in 1906. The experimental device never saw public use and its remains are now in the London Transport Museum's depot in Acton.
Although the first fully operational spiral escalator, Reno's design was nonetheless only one in a series of similar proposed contraptions. Souder patented two helical designs, while Wheeler drafted helical stairway plans in 1905. Seeberger devised at least two helical designs between 1906 and 1911 (including an unrealized arrangement for the London Underground), and Gilbert Luna obtained West German, Japanese, and United States patents for his version of a spiral escalator by 1973. When interviewed for the Los Angeles Times that year, Luna was in the process of soliciting major firms for the acquisition of his patents and company, but statistics are unclear on the outcome of these endeavors. Karl-Heinz Pahl received a European and a US patent for a spiral escalator in 1992.
The Mitsubishi Electric Corporation was most successful in its development of spiral or helical escalators, and it alone has sold them since the mid-1980s. The world's first practical spiral escalator—a Mitsubishi model—was installed in Osaka, Japan, in 1985. Helixator, an experimental helical escalator design that currently exists as a prototype scale model, could further reduce floor space demands. Its design has several innovations that allow a continuous helix; driven by a linear motor instead of a chain system, it spreads force evenly along the escalator path, avoiding excessive force on the top chain links and hence avoiding the geometry, length, and height limits of standard escalators. The San Francisco Centre in San Francisco, California, United States, houses the first spiral escalator in the Western Hemisphere.
Levytator, a design originating at City University in London, can move in straight lines or curves with or without rising or descending. The returning steps do not move underneath the in-use steps: rather, they provide steps for travel in the opposite direction, as in the Pahl spiral escalator patent.
Safety
Safety is a major concern in escalator design, as escalators are powerful machines that can become entangled with clothing and other items. Such entanglements can injure or kill riders. In India many women wear saris, increasing the likelihood of entangling the clothing's loose end. To prevent this, sari guards are built into most escalators in India.
Children wearing footwear such as Crocs and flip-flops are especially at risk of being caught in escalator mechanisms. The softness of the shoe's material combined with the smaller size of children's feet makes this sort of accident especially common.
Escalators sometimes include fire protection systems including automatic fire detection and suppression systems within the dust collection and engineer pit. To limit the danger caused by overheating, spaces that contain motors and gears typically include additional ventilation. Small, targeted clean agent automatic extinguishing systems are sometimes installed in these areas. Fire protection of an escalator floor opening is also sometimes provided by adding automatic sprinklers or fireproof shutters to the opening, or by installing the escalator in an enclosed fire-protected space.
The King's Cross fire of 1987 illustrated the demanding nature of escalator upkeep and the devices' propensity to collect "fluff" and other small debris when not properly maintained. The official inquiry determined that the fire started slowly, smoldering virtually undetected for a time, and then exploded into the ticket hall above in a previously unrecognised phenomenon now known as the "trench effect". In the escalators' undercarriage, an accumulation of detritus acted as a wick to a neglected buildup of interior lubricants; wood veneers, paper and plastic advertisements, solvent-based paint, plywood in the ticket hall, and melamine combustion added to the impact of the calamity. Following the report, older wooden escalators were removed from service in the London Underground. Additionally, sections of the London Underground that were actually below ground were made non-smoking; ultimately, the whole system became a smoke-free zone.
Some of the longest and fastest escalators in Europe are found in Prague; they are set to be replaced with slower versions in order to meet modern safety standards.
Legislation
In the 1930s, at least one suit was filed against a department store, alleging that its escalators posed an attractive nuisance, responsible for a child's injury.
Despite their considerable scope, the two Congressional Acts regarding accessibility (the Rehabilitation Act of 1973 and the Americans with Disabilities Act of 1990 (ADA)) did not directly affect escalators or their public installations. Since Section 504 of the Rehabilitation Act included public transportation systems, for a few years the United States Department of Transportation considered designs to retrofit existing escalators for wheelchair access. Nonetheless, Foster-Miller Associates' 1980 plan, Escalator Modification for the Handicapped, was ultimately ignored in favor of increased elevator installations in subway systems. Likewise, the ADA provided more accessibility options, but expressly excluded escalators as "accessible means of egress", advocating neither their removal nor their retention in public structures.
In the United States and Canada, new escalators must abide by ASME A17.1 standards, and old/historic escalators must conform to the safety guidelines of ASME A17.3. In Europe, the escalator safety code is EN 115.
Etiquette
In most major countries, the expectation is that escalator users wishing to stand keep to one side to allow others to climb past them on the other. For historical reasons, riders in Canada, Germany, Hong Kong, Taiwan, the United Kingdom, France and the United States are expected to stand on the right and walk on the left, while in Australia and New Zealand the opposite is the case. Practice may differ from city to city within countries: in Osaka, riders stand on the right, whereas in Tokyo (and most other Japanese cities), riders stand on the left.
In certain high-traffic systems, including the East Japan Railway Company and the Prague metro, escalator users are encouraged to stand on whichever side they choose, with the aim of preventing wear and tear and asymmetrical burdening. All Tokyo metro stations also have posters next to the escalators that ask users not to walk but instead to stand on either side.
The practice of standing on one side and walking on the other may cause uneven wear on escalator mechanisms.
Transport for London trialed standing on both sides (no walking) for a period of several months in 2016. This increased capacity and eliminated queues approaching the escalator during peak travel times. A follow-up report was released several months later with no recommendation to continue the practice.
| Technology | Architectural elements | null |
20648024 | https://en.wikipedia.org/wiki/Isomer | Isomer | In chemistry, isomers are molecules or polyatomic ions with identical molecular formula – that is, the same number of atoms of each element – but distinct arrangements of atoms in space. Isomerism refers to the existence or possibility of isomers.
Isomers do not necessarily share similar chemical or physical properties. Two main forms of isomerism are structural (or constitutional) isomerism, in which bonds between the atoms differ; and stereoisomerism (or spatial isomerism), in which the bonds are the same but the relative positions of the atoms differ.
Isomeric relationships form a hierarchy. Two chemicals might be the same constitutional isomer, but upon deeper analysis be stereoisomers of each other. Two molecules that are the same stereoisomer as each other might be in different conformational forms or be different isotopologues. The depth of analysis depends on the field of study or the chemical and physical properties of interest.
The English word "isomer" is a back-formation from "isomeric", which was borrowed through German isomerisch from Swedish; the word was in turn coined from Greek ἰσόμερος (isómeros), with roots ísos = "equal" and méros = "part".
Structural isomers
Structural isomers have the same number of atoms of each element (hence the same molecular formula), but the atoms are connected in distinct ways.
Example:
For example, there are three distinct compounds with the molecular formula C3H8O:
The first two isomers shown of C3H8O are propanols, that is, alcohols derived from propane. Both have a chain of three carbon atoms connected by single bonds, with the remaining carbon valences being filled by seven hydrogen atoms and by a hydroxyl group -OH comprising the oxygen atom bound to a hydrogen atom. These two isomers differ on which carbon the hydroxyl is bound to: either to an extremity of the carbon chain propan-1-ol (1-propanol, n-propyl alcohol, n-propanol; I) or to the middle carbon propan-2-ol (2-propanol, isopropyl alcohol, isopropanol; II). These can be described by the condensed structural formulas H3C-CH2-CH2OH and H3C-CH(OH)-CH3.
The third isomer of C3H8O is the ether methoxyethane (ethyl-methyl-ether; III). Unlike the other two, it has the oxygen atom connected to two carbons, and all eight hydrogens bonded directly to carbons. It can be described by the condensed formula H3C-CH2-O-CH3.
The alcohol "3-propanol" is not another isomer, since the difference between it and 1-propanol is not real; it is only the result of an arbitrary choice in the direction of numbering the carbons along the chain. For the same reason, "ethoxymethane" is the same molecule as methoxyethane, not another isomer.
1-Propanol and 2-propanol are examples of positional isomers, which differ by the position at which certain features, such as double bonds or functional groups, occur on a "parent" molecule (propane, in that case).
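This equivalence of renumbered drawings can be checked mechanically by converting each structure to a canonical form. The following sketch assumes the open-source RDKit toolkit is installed; it is an illustration of the idea, not part of any formal definition of isomerism.

from rdkit import Chem

def canonical(smiles):
    # Parse a SMILES string and return RDKit's canonical SMILES for the molecule.
    return Chem.MolToSmiles(Chem.MolFromSmiles(smiles))

print(canonical("CCCO") == canonical("OCCC"))   # True: "3-propanol" is just 1-propanol
print(canonical("CCOC") == canonical("COCC"))   # True: "ethoxymethane" is methoxyethane
print(len({canonical(s) for s in ["CCCO", "CC(O)C", "CCOC"]}))   # 3 distinct C3H8O isomers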
Example:
There are also three structural isomers of the hydrocarbon C3H4:
In two of the isomers, the three carbon atoms are connected in an open chain, but in one of them (propadiene or allene; I) the carbons are connected by two double bonds, while in the other (propyne or methylacetylene; II) they are connected by a single bond and a triple bond. In the third isomer (cyclopropene; III) the three carbons are connected into a ring by two single bonds and a double bond. In all three, the remaining valences of the carbon atoms are satisfied by the four hydrogens.
Again, note that there is only one structural isomer with a triple bond, because the other possible placement of that bond is just drawing the three carbons in a different order. For the same reason, there is only one cyclopropene, not three.
Tautomers
Tautomers are structural isomers which readily interconvert, so that two or more species co-exist in equilibrium such as
H–X–Y=Z ⇌ X=Y–Z–H.
Important examples are keto-enol tautomerism and the equilibrium between neutral and zwitterionic forms of an amino acid.
Stereoisomers
Stereoisomers have the same atoms or isotopes connected by bonds of the same type, but differ in the relative positions of those atoms in space. Two broad types of stereoisomers exist: enantiomers and diastereomers. Enantiomers have identical physical properties but diastereomers do not.
Enantiomers
Two compounds are said to be enantiomers if their molecules are mirror images of each other that cannot be made to coincide by rotations or translations alone – like a left hand and a right hand. The two shapes are said to be chiral.
A classical example is bromochlorofluoromethane (CHFClBr). The two enantiomers can be distinguished, for example, by whether the path F→Cl→Br turns clockwise or counterclockwise as seen from the hydrogen atom. In order to change one conformation to the other, at some point those four atoms would have to lie on the same plane – which would require severely straining or breaking their bonds to the carbon atom. The corresponding energy barrier between the two conformations is so high that there is practically no conversion between them at room temperature, and they can be regarded as different configurations.
The compound chlorofluoromethane CH2ClF, in contrast, is not chiral: the mirror image of its molecule is also obtained by a half-turn about a suitable axis.
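The contrast between the two molecules can also be seen by checking whether any carbon carries four different substituents. The sketch below again assumes RDKit is available and simply asks it to list potential stereocentres; it is illustrative only.

from rdkit import Chem

for name, smiles in [("CHFClBr", "FC(Cl)Br"), ("CH2ClF", "FCCl")]:
    mol = Chem.MolFromSmiles(smiles)
    # includeUnassigned=True also reports stereocentres whose handedness is unspecified.
    centres = Chem.FindMolChiralCenters(mol, includeUnassigned=True)
    print(name, centres)
# Expected: CHFClBr reports one (unassigned) stereocentre, so two enantiomers exist;
# CH2ClF reports none, because the molecule coincides with its mirror image.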
Another example of a chiral compound is 2,3-pentadiene H3C-CH=C=CH-CH3 a hydrocarbon that contains two overlapping double bonds. The double bonds are such that the three middle carbons are in a straight line, while the first three and last three lie on perpendicular planes. The molecule and its mirror image are not superimposable, even though the molecule has an axis of symmetry. The two enantiomers can be distinguished, for example, by the right-hand rule. This type of isomerism is called axial isomerism.
Enantiomers behave identically in chemical reactions, except when reacted with chiral compounds or in the presence of chiral catalysts, such as most enzymes. For this latter reason, the two enantiomers of most chiral compounds usually have markedly different effects and roles in living organisms. In biochemistry and food science, the two enantiomers of a chiral molecule – such as glucose – are usually identified, and treated as very different substances.
Each enantiomer of a chiral compound typically rotates the plane of polarized light that passes through it. The rotation has the same magnitude but opposite senses for the two isomers, and can be a useful way of distinguishing and measuring their concentration in a solution. For this reason, enantiomers were formerly called "optical isomers". However, this term is ambiguous and is discouraged by the IUPAC.
Some enantiomer pairs (such as those of trans-cyclooctene) can be interconverted by internal motions that change bond lengths and angles only slightly. Other pairs (such as CHFClBr) cannot be interconverted without breaking bonds, and therefore are different configurations.
Diastereomers
Stereoisomers that are not enantiomers are called diastereomers. Some diastereomers contain chiral centers, others do not.
Cis-trans isomerism
A double bond between two carbon atoms forces the remaining four bonds (if they are single) to lie on the same plane, perpendicular to the plane of the bond as defined by its π orbital. If the two bonds on each carbon connect to different atoms, two distinct conformations are possible, that differ from each other by a twist of 180 degrees of one of the carbons about the double bond.
The classical example is dichloroethene C2H2Cl2, specifically the structural isomer Cl-HC=CH-Cl that has one chlorine bonded to each carbon. It has two conformational isomers, with the two chlorines on the same side or on opposite sides of the double bond's plane. They are traditionally called cis (from Latin meaning "on this side of") and trans ("on the other side of"), respectively; or Z and E in the IUPAC recommended nomenclature. Conversion between these two forms usually requires temporarily breaking bonds (or turning the double bond into a single bond), so the two are considered different configurations of the molecule.
More generally, cis–trans isomerism (formerly called "geometric isomerism") occurs in molecules where the relative orientation of two distinguishable functional groups is restricted by a somewhat rigid framework of other atoms.
For example, in the cyclic alcohol inositol (CHOH)6 (a six-fold alcohol of cyclohexane), the six-carbon cyclic backbone largely prevents the hydroxyl -OH and the hydrogen -H on each carbon from switching places. Therefore, one has different configurational isomers depending on whether each hydroxyl is on "this side" or "the other side" of the ring's mean plane. Discounting isomers that are equivalent under rotations, there are nine isomers that differ by this criterion, and behave as different stable substances (two of them being enantiomers of each other). The most common one in nature (myo-inositol) has the hydroxyls on carbons 1, 2, 3 and 5 on the same side of that plane, and can therefore be called cis-1,2,3,5-trans-4,6-cyclohexanehexol. And each of these cis-trans isomers can possibly have stable "chair" or "boat" conformations (although the barriers between these are significantly lower than those between different cis-trans isomers).
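The figure of nine distinct ring arrangements can be verified by brute force, treating each hydroxyl as lying either above or below the mean ring plane and identifying patterns related by physical rotations of the ring (flipping the ring over reverses the carbon order and swaps above and below). The Python sketch below is only a combinatorial check of that count, not a chemical model.

from itertools import product

def orbit(pattern):
    # Every pattern reachable from `pattern` by rotating the six-membered ring,
    # or by flipping it over (reverse the order and swap above/below).
    images = set()
    for p in (pattern, tuple(1 - x for x in reversed(pattern))):
        for k in range(6):
            images.add(p[k:] + p[:k])
    return images

distinct = {min(orbit(p)) for p in product((0, 1), repeat=6)}
print(len(distinct))   # 9 -- matching the nine cis-trans isomers described above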
Cis and trans isomers also occur in inorganic coordination compounds, such as square planar MX2Y2 complexes and octahedral MX4Y2 complexes.
For more complex organic molecules, the cis and trans labels can be ambiguous. In such cases, a more precise labeling scheme is employed based on the Cahn-Ingold-Prelog priority rules.
Isotopes and spin
Isotopomers
Different isotopes of the same element can be considered as different kinds of atoms when enumerating isomers of a molecule or ion. The replacement of one or more atoms by their isotopes can create multiple structural isomers and/or stereoisomers from a single isomer.
For example, replacing two atoms of common hydrogen (1H) by deuterium (2H, or D) on an ethane molecule yields two distinct structural isomers, depending on whether the substitutions are both on the same carbon (1,1-dideuteroethane, HD2C-CH3) or one on each carbon (1,2-dideuteroethane, DH2C-CDH2), just as if the substituent were chlorine instead of deuterium. The two molecules do not interconvert easily and have different properties, such as their microwave spectrum.
Another example would be substituting one atom of deuterium for one of the hydrogens in chlorofluoromethane (CH2ClF). While the original molecule is not chiral and has a single isomer, the substitution creates a pair of chiral enantiomers of CHDClF, which could be distinguished (at least in theory) by their optical activity.
When two isomers would be identical if all isotopes of each element were replaced by a single isotope, they are described as isotopomers or isotopic isomers. In the above two examples if all D were replaced by H, the two dideuteroethanes would both become ethane and the two deuterochlorofluoromethanes would both become CH2ClF.
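Because isotope labels are part of a molecule's identity, isotopomers can likewise be told apart by canonical forms. The sketch below, again assuming RDKit is installed, shows that the two dideuteroethanes are distinct, whereas rewriting one of them from the other end of the chain does not give a new isomer.

from rdkit import Chem

def canonical(smiles):
    # Canonical SMILES in RDKit retains isotope labels such as [2H].
    return Chem.MolToSmiles(Chem.MolFromSmiles(smiles))

d_1_1 = canonical("[2H]C([2H])C")   # 1,1-dideuteroethane (both D on one carbon)
d_1_2 = canonical("[2H]CC[2H]")     # 1,2-dideuteroethane (one D on each carbon)
print(d_1_1 == d_1_2)                       # False: two different isotopomers
print(d_1_2 == canonical("C([2H])C[2H]"))   # True: the same molecule, written differently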
The concept of isotopomers is different from that of isotopologues or isotopic homologues, which differ in their isotopic composition. For example, C2H5D and C2H4D2 are isotopologues and not isotopomers, and are therefore not isomers of each other.
Spin isomers
Another type of isomerism based on nuclear properties is spin isomerism, where molecules differ only in the relative spin magnetic quantum numbers ms of the constituent atomic nuclei. This phenomenon is significant for molecular hydrogen, which can be partially separated into two long-lived states described as spin isomers or nuclear spin isomers: parahydrogen, with the spins of the two nuclei pointing in opposite directions, and orthohydrogen, where the spins point in the same direction.
Applications
Isomers having distinct biological properties are common; even the placement of a methyl group can matter. In substituted xanthines, theobromine, found in chocolate, is a vasodilator with some effects in common with caffeine; but, if one of the two methyl groups is moved to a different position on the two-ring core, the isomer is theophylline, which has a variety of effects, including bronchodilation and anti-inflammatory action. Another example of this occurs in the phenethylamine-based stimulant drugs. Phentermine is a non-chiral compound with a weaker effect than that of amphetamine. It is used as an appetite-reducing medication and has mild or no stimulant properties. However, an alternate atomic arrangement gives dextromethamphetamine, which is a stronger stimulant than amphetamine.
In medicinal chemistry and biochemistry, enantiomers are a special concern because they may possess distinct biological activity. Many preparative procedures afford a mixture of equal amounts of both enantiomeric forms. In some cases, the enantiomers are separated by chromatography using chiral stationary phases. They may also be separated through the formation of diastereomeric salts. In other cases, enantioselective syntheses have been developed.
As an inorganic example, cisplatin is an important drug used in cancer chemotherapy, whereas the trans isomer (transplatin) has no useful pharmacological activity.
History
Isomerism was first observed in 1827, when Friedrich Wöhler prepared silver cyanate and discovered that, although its elemental composition of AgCNO was identical to silver fulminate (prepared by Justus von Liebig the previous year), its properties were distinct. This finding challenged the prevailing chemical understanding of the time, which held that chemical compounds could be distinct only when their elemental compositions differ. (We now know that the bonding structures of fulminate and cyanate can be approximately described as ⁻O–N⁺≡C⁻ and O=C=N⁻, respectively.)
Additional examples were found in succeeding years, such as Wöhler's 1828 discovery that urea has the same atomic composition (CH4N2O) as the chemically distinct ammonium cyanate. (Their structures are now known to be (H2N-)2C=O and [NH4]⁺[O=C=N]⁻, respectively.) In 1830 Jöns Jacob Berzelius introduced the term isomerism to describe the phenomenon.
In 1848, Louis Pasteur observed that tartaric acid crystals came in two kinds of shapes that were mirror images of each other. Separating the crystals by hand, he obtained two versions of tartaric acid, each of which would crystallize in only one of the two shapes, and which rotated the plane of polarized light to the same degree but in opposite directions. In 1860, Pasteur explicitly hypothesized that the molecules of isomers might have the same composition but different arrangements of their atoms.
| Physical sciences | Substance | Chemistry |
20648143 | https://en.wikipedia.org/wiki/Cestoda | Cestoda | Cestoda is a class of parasitic worms in the flatworm phylum (Platyhelminthes). Most of the species—and the best-known—are those in the subclass Eucestoda; they are ribbon-like worms as adults, commonly known as tapeworms. Their bodies consist of many similar units known as proglottids—essentially packages of eggs which are regularly shed into the environment to infect other organisms. Species of the other subclass, Cestodaria, are mainly fish infecting parasites.
All cestodes are parasitic; many have complex life histories, including a stage in a definitive (main) host in which the adults grow and reproduce, often for years, and one or two intermediate stages in which the larvae develop in other hosts. Typically the adults live in the digestive tracts of vertebrates, while the larvae often live in the bodies of other animals, either vertebrates or invertebrates. For example, Diphyllobothrium has at least two intermediate hosts, a crustacean and then one or more freshwater fish; its definitive host is a mammal. Some cestodes are host-specific, while others are parasites of a wide variety of hosts. Some six thousand species have been described; probably all vertebrates can host at least one species.
The adult tapeworm has a scolex (head), a short neck, and a strobila (segmented body) formed of proglottids. Tapeworms anchor themselves to the inside of the intestine of their host using their scolex, which typically has hooks, suckers, or both. They have no mouth, but absorb nutrients directly from the host's gut. The neck continually produces proglottids, each one containing a reproductive tract; mature proglottids are full of eggs, and fall off to leave the host, either passively in the feces or actively moving. All tapeworms are hermaphrodites, with each individual having both male and female reproductive organs.
Humans are subject to infection by several species of tapeworms if they eat undercooked meat such as pork (Taenia solium), beef (T. saginata), and fish (Diphyllobothrium), or if they live in, or eat food prepared in, conditions of poor hygiene (Hymenolepis or Echinococcus species). The unproven concept of using tapeworms as a slimming aid has been touted since around 1900.
Diversity and habitat
All 6,000 species of Cestoda are parasites, mainly intestinal; their definitive hosts are vertebrates, both terrestrial and marine, while their intermediate hosts include insects, crustaceans, molluscs, and annelids as well as other vertebrates.
T. saginata, the beef tapeworm, can grow up to 20 m (65 ft); the largest species, the whale tapeworm Tetragonoporus calyptocephalus, can grow to over 30 m (100 ft). Species with small hosts tend to be small; vole and lemming tapeworms, for example, are short, and those parasitizing shrews shorter still.
Anatomy
Cestodes have no gut or mouth and absorb nutrients from the host's alimentary tract through their specialised neodermal cuticle, or tegument, through which gas exchange also takes place. The tegument also protects the parasite from the host's digestive enzymes and allows it to transfer molecules back to the host.
The body form of adult eucestodes is simple, with a scolex, or grasping head, adapted for attachment to the definitive host, a short neck, and a strobila, or segmented trunk formed of proglottids, which makes up the worm's body. Members of the subclass Cestodaria, the Amphilinidea and Gyrocotylidea, are wormlike but not divided into proglottids. Amphilinids have a muscular proboscis at the front end; Gyrocotylids have a sucker or proboscis which they can pull inside or push outside at the front end, and a holdfast rosette at the posterior end.
The Cestodaria have 10 larval hooks while Eucestoda have 6 larval hooks.
Scolex
The scolex, which attaches to the intestine of the definitive host, is often minute in comparison with the proglottids. It is typically a four-sided knob, armed with suckers or hooks or both. In some species, the scolex is dominated by bothria, or "sucking grooves" that function like suction cups. Cyclophyllid cestodes can be identified by the presence of four suckers on their scolices. Other species have ruffled or leaflike scolices, and there may be other structures to aid attachment.
In the larval stage the scolex is similarly shaped and is known as the protoscolex.
Body systems
Circular and longitudinal muscles lie under the neodermis, beneath which further longitudinal, dorso-ventral and transverse muscles surround the central parenchyma. Protonephridial cells drain into the parenchyma. There are four longitudinal collection canals, two dorso-lateral and two ventro-lateral, running along the length of the worm, with a transverse canal linking the ventral ones at the posterior of each segment. When the proglottids begin to detach, these canals open to the exterior through the terminal segment.
The main nerve centre of a cestode is a cerebral ganglion in its scolex. Nerves emanate from the ganglion to supply the general body muscular and sensory endings, with two lateral nerve cords running the length of the strobila. The cirrus and vagina are innervated, and sensory endings around the genital pore are more plentiful than in other areas. Sensory function includes both tactoreception (touch) and chemoreception (smell or taste).
Proglottids
Once anchored to the host's intestinal wall, tapeworms absorb nutrients through their surface as their food flows past them. Cestodes are unable to synthesise lipids, which they use for reproduction, and are therefore entirely dependent on their hosts.
The tapeworm body is composed of a series of segments called proglottids. These are produced from the neck by mitotic growth, which is followed by transverse constriction. The segments become larger and more mature as they are displaced backwards by newer segments. Each proglottid contains an independent reproductive tract, and like some other flatworms, cestodes excrete waste through flame cells (protonephridia) located in the proglottids. The sum of the proglottids is called a strobila, which is thin and resembles a strip of tape; from this is derived the common name "tapeworm". Proglottids are continually being produced by the neck region of the scolex, as long as the scolex is attached and alive.
Mature proglottids are essentially bags of eggs, each of which is infective to the proper intermediate host. They are released and leave the host in feces, or migrate outwards as independent motile proglottids. The number of proglottids forming the tapeworm ranges from three to four thousand. Their layout comes in two forms: craspedote, meaning any given proglottid is overlapped by the previous proglottid, or acraspedote, indicating the proglottids do not overlap.
Reproduction
Cestodes are exclusively hermaphrodites, with both male and female reproductive systems in each body. The reproductive system includes one or more testes, cirri, vas deferens, and seminal vesicles as male organs, and a single lobed or unlobed ovary with the connecting oviduct and uterus as female organs. The common external opening for both male and female reproductive systems is known as the genital pore, which is situated at the surface opening of the cup-shaped atrium. Though they are sexually hermaphroditic and cross-fertilization is the norm, self-fertilization sometimes occurs and makes possible the reproduction of a worm when it is the only individual in its host's gut. During copulation, the cirri of one individual connect with those of the other through the genital pore, and then spermatozoa are exchanged.
Life cycle
Cestodes are parasites of vertebrates, with each species infecting a single definitive host or group of closely related host species. All but amphilinids and gyrocotylids (which burrow through the gut or body wall to reach the coelom) are intestinal, though some life cycle stages rest in muscle or other tissues. The definitive host is always a vertebrate but in nearly all cases, one or more intermediate hosts are involved in the life cycle, typically arthropods or other vertebrates. Infections can be long-lasting; in humans, tapeworm infection may last as much as 30 years. No asexual phases occur in the life cycle, as they do in other flatworms, but the life cycle pattern has been a crucial criterion for assessing evolution among Platyhelminthes.
Cestodes produce large numbers of eggs, but each one has a low probability of finding a host. To increase their chances, different species have adopted various strategies of egg release. In the Pseudophyllidea, many eggs are released in the brief period when their aquatic intermediate hosts are abundant (semelparity). In contrast, in the terrestrial Cyclophyllidea, proglottids are released steadily over a period of years, or as long as their host lives (iteroparity). Another strategy is to have very long-lived larvae; for example, in Echinococcus, the hydatid larvae can survive for ten years or more in humans and other vertebrate hosts, giving the tapeworm an exceptionally long time window in which to find another host.
Many tapeworms have a two-phase life cycle with two types of host. The adult Taenia saginata lives in the gut of a primate such as a human, its definitive host. Proglottids leave the body through the anus and fall to the ground, where they may be eaten with grass by a grazing animal such as a cow. This animal then becomes an intermediate host, the oncosphere boring through the gut wall and migrating to another part of the body such as the muscle. Here it encysts, forming a cysticercus. The parasite completes its life cycle when the intermediate host passes on the parasite to the definitive host, usually when the definitive host eats contaminated parts of the intermediate host, for example a human eating raw or undercooked meat. Another two-phase life cycle is exhibited by Anoplocephala perfoliata, the definitive host being an equine and the intermediate host an oribatid mite.
Diphyllobothrium exhibits a more complex, three-phase life cycle. If the eggs are laid in water, they develop into free-swimming oncosphere larvae. After ingestion by a suitable freshwater crustacean such as a copepod, the first intermediate host, they develop into procercoid larvae. When the copepod is eaten by a suitable second intermediate host, typically a minnow or other small freshwater fish, the procercoid larvae migrate into the fish's flesh where they develop into plerocercoid larvae. These are the infective stages for the mammalian definitive host. If the small fish is eaten by a predatory fish, its muscles too can become infected.
Schistocephalus solidus is another three-phase example. The intermediate hosts are copepods and small fish, and the definitive hosts are waterbirds. This species has been used to demonstrate that cross-fertilisation produces a higher infective success rate than self-fertilisation.
Host immunity
Hosts can become immune to infection by a cestode if the lining, the mucosa, of the gut is damaged. This exposes the host's immune system to cestode antigens, enabling the host to mount an antibody defence. Host antibodies can kill or limit cestode infection by damaging their digestive enzymes, which reduces their ability to feed and therefore to grow and to reproduce; by binding to their bodies; and by neutralising toxins that they produce. When cestodes feed passively in the gut, they do not provoke an antibody reaction.
Evolution and phylogeny
Fossil history
Parasite fossils are rare, but recognizable clusters of cestode eggs, some with an operculum (lid) indicating that they had not erupted, one with a developing larva, have been discovered in fossil shark coprolites dating to the Permian, some 270 million years ago.
The fossil Rugosusivitta, which was found in China at the base of the Cambrian deposits in Yunnan, just above the Ediacaran–Cambrian boundary, has great similarities to present-day cestodes. If correctly assigned, it would be the earliest example of a platyzoan and also one of the earliest bilaterian body fossils, and might thus provide insight into the mode of life of cestodes before they became specialized parasites.
External
The position of the Cestoda within the Platyhelminthes and other Spiralian phyla based on genomic analysis is shown in the phylogenetic tree. The non-parasitic flatworms, traditionally grouped as the "Turbellaria", are paraphyletic, as the parasitic Neodermata, including the Cestoda, arose within that grouping. The approximate times when major groups first appeared are shown in millions of years ago.
Internal
The evolutionary history of the Cestoda has been studied using ribosomal RNA, mitochondrial and other DNA, and morphological analysis and continues to be revised. "Tetraphyllidea" is seen to be paraphyletic; "Pseudophyllidea" has been broken up into two orders, Bothriocephalidea and Diphyllobothriidea. Hosts, whose phylogeny often mirrors that of the parasites (Fahrenholz's rule), are indicated in italics and parentheses, the life-cycle sequence (where known) shown by arrows as (intermediate host1 [→ intermediate host2 ] → definitive host). Alternatives, generally for different species within an order, are shown in square brackets.
The Taeniidae, including species such as the pork tapeworm and the beef tapeworm that often infect humans, may be the most basal of the 12 orders of the Cyclophyllidea.
Interactions with humans
Infection and treatment
Like other species of mammal, humans can become infected with tapeworms. There may be few or no symptoms, and the first indication of the infection may be the presence of one or more proglottids in the stools. The proglottids appear as flat, rectangular, whitish objects about the size of a grain of rice, which may change size or move about. Bodily symptoms which are sometimes present include abdominal pain, nausea, diarrhea, increased appetite and weight loss.
There are several classes of anthelminthic drugs, some effective against many kinds of parasite, others more specific; these can be used both preventatively and to treat infections. For example, praziquantel is an effective treatment for tapeworm infection, and is preferred over the older niclosamide. While accidental tapeworm infections in developed countries are quite rare, such infections are more likely to occur in countries with poor sanitation facilities or where food hygiene standards are low.
History and culture
In Ancient Greece, the comic playwright Aristophanes and philosopher Aristotle described the lumps that form during cysticercosis as "hailstones". In Medieval times, in The Canon of Medicine, completed in 1025, the Persian physician Avicenna recorded parasites including tapeworms. In the Early Modern period, Francesco Redi described and illustrated many parasites, and was the first to identify the cysts of Echinococcus granulosus seen in dogs and sheep as parasitic in origin; a century later, in 1760, Peter Simon Pallas correctly suggested that these were the larvae of tapeworms.
Tapeworms have occasionally appeared in fiction. Peter Marren and Richard Mabey in Bugs Britannica write that Irvine Welsh's sociopathic policeman in his 1998 novel Filth owns a talking tapeworm, which they call "the most attractive character in the novel"; it becomes the policeman's alter ego and better self. Mira Grant's 2013 novel Parasite envisages a world where people's immune systems are maintained by genetically engineered tapeworms. Tapeworms are prominently mentioned in the System of a Down song "Needles": their inclusion within the song resulted in a lyrical dispute among band members.
There are unproven claims that, around 1900, tapeworm eggs were marketed to the public as slimming tablets. A full-page coloured image, purportedly from a women's magazine of that period, reads "Fat: the enemy ... that is banished! How? With sanitized tape worms. Jar packed. No ill effects!" When television presenter Michael Mosley deliberately infected himself with tapeworms, he gained weight due to increased appetite. Dieters still sometimes risk intentional infection, as evidenced by a 2013 warning on American television.
| Biology and health sciences | Platyzoa | null |
3472480 | https://en.wikipedia.org/wiki/Carex | Carex | Carex is a vast genus of over 2,000 species of grass-like plants in the family Cyperaceae, commonly known as sedges (or seg, in older books). Other members of the family Cyperaceae are also called sedges; however, those of the genus Carex may be called true sedges, and it is the most species-rich genus in the family. The study of Carex is known as caricology.
Description
All species of Carex are perennial, although some species, such as C. bebbii and C. viridula, can fruit in their first year of growth and may not survive longer. They typically have rhizomes, stolons or short rootstocks, but some species grow in tufts (caespitose). The culm – the flower-bearing stalk – is unbranched and usually erect. It is usually distinctly triangular in section.
The leaves of Carex comprise a blade, which extends away from the stalk, and a sheath, which encloses part of the stalk. The blade is normally long and flat, but may be folded, inrolled, channelled or absent. The leaves have parallel veins and a distinct midrib. Where the blade meets the culm there is a structure called the ligule. The colour of foliage may be green, red or brown, and "ranges from fine and hair-like, sometimes with curled tips, to quite broad with a noticeable midrib and sometimes razor sharp edges".
The flowers of Carex are small and are combined into spikes, which are themselves combined into a larger inflorescence. The spike typically contains many flowers, but can hold as few as one in some species. Almost all Carex species are monoecious; each flower is either male (staminate) or female (pistillate). A few species are dioecious. Sedges exhibit diverse arrangements of male and female flowers. Often, the lower spikes are entirely pistillate and upper spikes staminate, with one or more spikes in between having pistillate flowers near the base and staminate flowers near the tip. In other species, all spikes are similar. In that case, they may have male flowers above and female flowers below (androgynous) or female flowers above and male flowers below (gynecandrous). In relatively few species, the arrangement of flowers is irregular.
The defining structure of the genus Carex is the bottle-shaped bract surrounding each female flower. This structure is called the perigynium or utricle, a modified prophyll. It is typically extended into a "rostrum" or beak, which is often divided at the tip (bifid) into two teeth. The shape, venation, and vestiture (hairs) of the perigynium are important structures for distinguishing Carex species.
The fruit of Carex is a dry, one-seeded indehiscent achene or nut which grows within the perigynium. Perigynium features aid in fruit dispersal.
Ecology and distribution
Carex species are found across most of the world, albeit with few species in tropical lowlands, and relatively few in sub-Saharan Africa. Most (but not all) sedges are found in wetlands – such as marshes, calcareous fens, bogs and other peatlands, pond and stream banks, riparian zones, and even ditches. They are one of the dominant plant groups in arctic and alpine tundra, and in shallow-water wetland habitats.
Taxonomy and cytogenetics
The genus Carex was established by Carl Linnaeus in his work Species Plantarum in 1753, and it is one of the largest genera of flowering plants. Estimates of the number of species vary from about 1100 to almost 2000. Carex displays the most dynamic chromosome evolution of all flowering plants. Chromosome numbers range from n = 6 to n = 66, and over 100 species are known to show variation in chromosome number within the species, with differences of up to 10 chromosomes between populations.
The genomes of Carex kokanica, Carex parvula and Carex littledalei have been sequenced.
Carex has been divided into subgenera in a number of ways. The most influential was Georg Kükenthal's classification using four subgenera – Carex, Vignea, Indocarex and Primocarex – based primarily on the arrangement of the male and female flowers. There has been considerable debate about the status of these four groups, with some species being transferred between groups and some authors, such as Kenneth Kent Mackenzie, eschewing the subgenera altogether and dividing the genus directly into sections. The genus is now divided into around four subgenera, some of which may not, however, be monophyletic:
Carex subg. Carex – 1450 species, distributed globally
Carex subg. Psyllophora (Degl.) Peterm. (equivalent to Kükenthal's "Primocarex") – 70 species
Carex subg. Vignea (P. Beauv. ex T. Lestib.) Peterm. – 350 species, cosmopolitan
Carex subg. Vigneastra (Tuckerman) Kükenthal (equivalent to Kükenthal's "Indocarex") – 100 species, tropical and subtropical Asia
Fossil record
Several fossil fruits of two Carex species have been described from middle Miocene strata of the Fasterholt area near Silkeborg in Central Jutland, Denmark.
Uses
Ornamental
Carex species and cultivars are popular in horticulture, particularly in shady positions. Native species are used in wildland habitat restoration projects, natural landscaping, and in sustainable landscaping as drought-tolerant grass replacements for lawns and garden meadows. Some require damp or wet conditions, others are relatively drought-tolerant. Propagation is by seed or division in spring.
The cultivars Carex elata 'Aurea' (Bowles' golden sedge) and Carex oshimensis 'Evergold' have received the Royal Horticultural Society's Award of Garden Merit.
Other uses
A mix of dried specimens of several species of Carex (including Carex vesicaria) has a history of being used as thermal insulation in footwear (such as nutukas used by Sámi people). Sennegrass is one of the names for such mixes. During the first human expedition to the South Pole in 1911, such a mix was used in skaller, when camps had been set (after each stretch of travelling had been completed). Carsten Borchgrevink of the British Antarctic Expedition 1898-1900 reported "I found the Lapps method of never using socks in their Finn boots answered well. Socks are never used in Finnmarken in winter time, but 'senne grass' which they, of course, had a special method of arranging in the 'komager' (Finn boots) … if you get wet feet while wearing the grass in the 'komager' you will be warmer than ever, as the fresh grass will, by the moisture and the heat of your feet, in a way start to burn or produce its own heat by spontaneous combustion. The great thing seems to be to arrange the grass properly in the boots, and although we all tried to imitate the Finns in their skill at this work, none of us felt as warm on our feet as when they had helped us."
Species serve as a food source for numerous animals, and some are used as a livestock hay.
Use by Native Americans
The Blackfoot put carex in moccasins to protect the feet during winter. The Cherokee use an infusion of the leaf to "check bowels". The Ohlone use the roots of many species for basketry. The Goshute use the root as medicine. The Jemez consider the plant sacred and use it in the kiva. The Klamath people weave the leaves into mats, use the juice of the pith as a beverage, eat the fresh stems for food and use the tuberous base of the stem for food. The indigenous people of Mendocino County, California use the rootstocks to make baskets and rope. The indigenous people of Montana also weave the leaves into mats and use the young stems as food. The Navajo of Kayenta, Arizona grind the seeds into mush and eat them. The Oregon Paiute weave it to make spoons. The Pomo use the roots to make baskets, and use it to tend fishing traps. They also use it to make torches. The Coast Salish use the leaves to make baskets and twine. The Songhees eat the leaves to induce abortions. The Nlaka'pamux use the leaves as brushes for cleaning things and as forage for their livestock. The Wailaki weave the roots and leaves into baskets and use the leaves to weave mats. The Yuki people use the large roots to make baskets.
| Biology and health sciences | Poales | Plants |
19461794 | https://en.wikipedia.org/wiki/Porosity | Porosity | Porosity or void fraction is a measure of the void (i.e. "empty") spaces in a material, and is a fraction of the volume of voids over the total volume, between 0 and 1, or as a percentage between 0% and 100%. Strictly speaking, some tests measure the "accessible void", the total amount of void space accessible from the surface (cf. closed-cell foam).
There are many ways to test porosity in a substance or part, such as industrial CT scanning.
The term porosity is used in multiple fields including pharmaceutics, ceramics, metallurgy, materials, manufacturing, petrophysics, hydrology, earth sciences, soil mechanics, rock mechanics, and engineering.
Void fraction in two-phase flow
In gas-liquid two-phase flow, the void fraction is defined as the fraction of the flow-channel volume that is occupied by the gas phase or, alternatively, as the fraction of the cross-sectional area of the channel that is occupied by the gas phase.
Void fraction usually varies from location to location in the flow channel (depending on the two-phase flow pattern). It fluctuates with time and its value is usually time averaged. In separated (i.e., non-homogeneous) flow, it is related to volumetric flow rates of the gas and the liquid phase, and to the ratio of the velocity of the two phases (called slip ratio).
Porosity in earth sciences and construction
Used in geology, hydrogeology, soil science, and building science, the porosity of a porous medium (such as rock or sediment) describes the fraction of void space in the material, where the void may contain, for example, air or water. It is defined by the ratio:

\phi = \frac{V_V}{V_T}

where V_V is the volume of void-space (such as fluids) and V_T is the total or bulk volume of material, including the solid and void components. Both the mathematical symbols \phi and n are used to denote porosity.
Porosity is a fraction between 0 and 1, typically ranging from less than 0.005 for solid granite to more than 0.5 for peat and clay.
The porosity of a rock, or sedimentary layer, is an important consideration when attempting to evaluate the potential volume of water or hydrocarbons it may contain. Sedimentary porosity is a complicated function of many factors, including but not limited to: rate of burial, depth of burial, the nature of the connate fluids, the nature of overlying sediments (which may impede fluid expulsion). One commonly used relationship between porosity and depth is the decreasing exponential function given by the Athy (1930) equation:
\phi(z) = \phi_0 e^{-kz}

where \phi(z) is the porosity of the sediment at a given depth z (m), \phi_0 is the initial porosity of the sediment at the surface of soil (before its burial), and k is the compaction coefficient (m−1). The factor e^{-kz} is a decreasing exponential, so the porosity of the sediment decreases exponentially with depth as it is compacted.
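As an illustration of how an Athy-type compaction curve behaves, the following Python sketch evaluates the exponential decline of porosity with burial depth; the surface porosity and compaction coefficient used here are illustrative values, not figures from any particular basin.

```python
import math

def athy_porosity(depth_m, phi0=0.5, k_per_m=0.0005):
    """Porosity at a given burial depth from Athy's exponential law.

    phi0     -- initial (surface) porosity of the sediment
    k_per_m  -- compaction coefficient in 1/m
    Both default values are illustrative, not measured data.
    """
    return phi0 * math.exp(-k_per_m * depth_m)

# Porosity falls by a factor of e for every 1/k metres of burial.
for depth in (0, 500, 1000, 2000, 4000):
    print(f"{depth:>5} m : porosity = {athy_porosity(depth):.3f}")
```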
A value for porosity can alternatively be calculated from the bulk density \rho_\text{bulk}, the saturating fluid density \rho_\text{fluid} and the particle density \rho_\text{particle}:

\phi = \frac{\rho_\text{particle} - \rho_\text{bulk}}{\rho_\text{particle} - \rho_\text{fluid}}

If the void space is filled with air, the following simpler form may be used:

\phi = 1 - \frac{\rho_\text{bulk}}{\rho_\text{particle}}
A mean normal particle density can be taken as approximately 2.65 g/cm3 (silica, siliceous sediments or aggregates), or 2.70 g/cm3 (calcite, carbonate sediments or aggregates), although a better estimation can be obtained by examining the lithology of the particles.
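As a minimal sketch of the density-based calculation above (assuming the standard dry and saturated relationships reconstructed there), the snippet below computes porosity for a dry sandy soil using the silica particle density of 2.65 g/cm3 and the typical bulk densities quoted later in this article; the function names are ad hoc, not from any particular library.

```python
def porosity_dry(bulk_density, particle_density):
    """Porosity of an air-filled sample: phi = 1 - rho_bulk / rho_particle."""
    return 1.0 - bulk_density / particle_density

def porosity_saturated(bulk_density, fluid_density, particle_density):
    """Porosity of a fluid-saturated sample:
    phi = (rho_particle - rho_bulk) / (rho_particle - rho_fluid)."""
    return (particle_density - bulk_density) / (particle_density - fluid_density)

# Dry sandy soil with silica particles (2.65 g/cm^3), bulk density 1.5-1.7 g/cm^3:
print(round(porosity_dry(1.5, 2.65), 2))   # ~0.43
print(round(porosity_dry(1.7, 2.65), 2))   # ~0.36
```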
Porosity and hydraulic conductivity
Porosity can be proportional to hydraulic conductivity; for two similar sandy aquifers, the one with a higher porosity will typically have a higher hydraulic conductivity (more open area for the flow of water), but there are many complications to this relationship. The principal complication is that there is not a direct proportionality between porosity and hydraulic conductivity but rather an inferred proportionality. There is a clear proportionality between pore throat radii and hydraulic conductivity. Also, there tends to be a proportionality between pore throat radii and pore volume. If the proportionality between pore throat radii and porosity exists then a proportionality between porosity and hydraulic conductivity may exist. However, as grain size or sorting decreases the proportionality between pore throat radii and porosity begins to fail and therefore so does the proportionality between porosity and hydraulic conductivity. For example: clays typically have very low hydraulic conductivity (due to their small pore throat radii) but also have very high porosities (due to the structured nature of clay minerals), which means clays can hold a large volume of water per volume of bulk material, but they do not release water rapidly and therefore have low hydraulic conductivity.
Sorting and porosity
Well sorted (grains of approximately all one size) materials have higher porosity than similarly sized poorly sorted materials (where smaller particles fill the gaps between larger particles). The graphic illustrates how some smaller grains can effectively fill the pores (where all water flow takes place), drastically reducing porosity and hydraulic conductivity, while only being a small fraction of the total volume of the material. For tables of common porosity values for earth materials, see the "further reading" section in the Hydrogeology article.
Porosity of rocks
Consolidated rocks (e.g., sandstone, shale, granite or limestone) potentially have more complex "dual" porosities, as compared with alluvial sediment. This can be split into connected and unconnected porosity. Connected porosity is more easily measured through the volume of gas or liquid that can flow into the rock, whereas fluids cannot access unconnected pores.
Porosity is the ratio of pore volume to its total volume. Porosity is controlled by: rock type, pore distribution, cementation, diagenetic history and composition. Porosity is not controlled by grain size, as the volume of between-grain space is related only to the method of grain packing.
Rocks normally decrease in porosity with age and depth of burial. Tertiary age Gulf Coast sandstones are in general more porous than Cambrian age sandstones. There are exceptions to this rule, usually because of the depth of burial and thermal history.
Porosity of soil
Porosity of surface soil typically decreases as particle size increases. This is due to soil aggregate formation in finer textured surface soils when subject to soil biological processes. Aggregation involves particulate adhesion and higher resistance to compaction. Typical bulk density of sandy soil is between 1.5 and 1.7 g/cm3. This calculates to a porosity between 0.43 and 0.36. Typical bulk density of clay soil is between 1.1 and 1.3 g/cm3. This calculates to a porosity between 0.58 and 0.51. This seems counterintuitive because clay soils are termed heavy, implying lower porosity. Heavy apparently refers to a gravitational moisture content effect in combination with terminology that harkens back to the relative force required to pull a tillage implement through the clayey soil at field moisture content as compared to sand.
Porosity of subsurface soil is lower than in surface soil due to compaction by gravity. Porosity of 0.20 is considered normal for unsorted gravel size material at depths below the biomantle. Porosity in finer material below the aggregating influence of pedogenesis can be expected to approximate this value.
Soil porosity is complex. Traditional models regard porosity as continuous. This fails to account for anomalous features and produces only approximate results. Furthermore, it cannot help model the influence of environmental factors which affect pore geometry. A number of more complex models have been proposed, including fractals, bubble theory, cracking theory, Boolean grain process, packed sphere, and numerous other models. The characterisation of pore space in soil is an associated concept.
Types of geologic porosities
Primary porosity The main or original porosity system in a rock or unconfined alluvial deposit.
Secondary porosity A subsequent or separate porosity system in a rock, often enhancing overall porosity of a rock. This can be a result of chemical leaching of minerals or the generation of a fracture system. This can replace the primary porosity or coexist with it (see dual porosity below).
Fracture porosity This is porosity associated with a fracture system or faulting. This can create secondary porosity in rocks that otherwise would not be reservoirs for hydrocarbons due to their primary porosity being destroyed (for example due to depth of burial) or of a rock type not normally considered a reservoir (for example igneous intrusions or metasediments).
Vuggy porosity This is secondary porosity generated by dissolution of large features (such as macrofossils) in carbonate rocks leaving large holes, vugs, or even caves.
Effective porosity (also called open porosity) Refers to the fraction of the total volume in which fluid flow is effectively taking place; it includes catenary and dead-end pores (these pores cannot be flushed, but they can cause fluid movement by release of pressure, as in gas expansion) and excludes closed pores (or non-connected cavities). This is very important for groundwater and petroleum flow, as well as for solute transport.
Ineffective porosity (also called closed porosity) Refers to the fraction of the total volume in which fluids or gases are present but in which fluid flow can not effectively take place and includes the closed pores. Understanding the morphology of the porosity is thus very important for groundwater and petroleum flow.
Dual porosity Refers to the conceptual idea that there are two overlapping reservoirs which interact. In fractured rock aquifers, the rock mass and fractures are often simulated as being two overlapping but distinct bodies. Delayed yield, and leaky aquifer flow solutions are both mathematically similar solutions to that obtained for dual porosity; in all three cases water comes from two mathematically different reservoirs (whether or not they are physically different).
Macroporosity In solids (i.e. excluding aggregated materials such as soils), the term 'macroporosity' refers to pores greater than 50 nm in diameter. Flow through macropores is described by bulk diffusion.
Mesoporosity In solids (i.e. excluding aggregated materials such as soils), the term 'mesoporosity' refers to pores greater than 2 nm and less than 50 nm in diameter. Flow through mesopores is described by Knudsen diffusion.
Microporosity In solids (i.e. excluding aggregated materials such as soils), the term 'microporosity' refers to pores smaller than 2 nm in diameter. Movement in micropores is activated by diffusion.
Porosity of fabric or aerodynamic porosity
The ratio of holes to solid that the wind "sees". Aerodynamic porosity is less than visual porosity, by an amount that depends on the constriction of holes.
Die casting porosity
Casting porosity is a consequence of one or more of the following: gasification of contaminants at molten-metal temperatures; shrinkage that takes place as molten metal solidifies; and unexpected or uncontrolled changes in temperature or humidity.
While porosity is inherent in die casting manufacturing, its presence may lead to component failure where pressure integrity is a critical characteristic. Porosity may take on several forms from interconnected micro-porosity, folds, and inclusions to macro porosity visible on the part surface. The end result of porosity is the creation of a leak path through the walls of a casting that prevents the part from holding pressure. Porosity may also lead to out-gassing during the painting process, leaching of plating acids and tool chatter in machining pressed metal components.
Measuring porosity
Several methods can be employed to measure porosity:
Direct methods (determining the bulk volume of the porous sample, and then determining the volume of the skeletal material with no pores; pore volume = total volume − material volume).
Optical methods (e.g., determining the area of the material versus the area of the pores visible under the microscope). The "areal" and "volumetric" porosities are equal for porous media with random structure.
Computed tomography method (using industrial CT scanning to create a 3D rendering of external and internal geometry, including voids. Then implementing a defect analysis utilizing computer software)
Imbibition methods, i.e., immersion of the porous sample, under vacuum, in a fluid that preferentially wets the pores.
Water saturation method (pore volume = total volume of water − volume of water left after soaking).
Water evaporation method (pore volume = (weight of saturated sample − weight of dried sample)/density of water)
Mercury intrusion porosimetry (several non-mercury intrusion techniques have been developed due to toxicological concerns, and the fact that mercury tends to form amalgams with several metals and alloys).
Gas expansion method. A sample of known bulk volume is enclosed in a container of known volume. It is connected to another container with a known volume which is evacuated (i.e., near vacuum pressure). When a valve connecting the two containers is opened, gas passes from the first container to the second until a uniform pressure distribution is attained. Using the ideal gas law, the volume of the pores is calculated as

V_V = V_T - V_a - V_b \frac{P_2}{P_2 - P_1},
where
VV is the effective volume of the pores,
VT is the bulk volume of the sample,
Va is the volume of the container containing the sample,
Vb is the volume of the evacuated container,
P1 is the initial pressure in the volume Va and VV, and
P2 is final pressure present in the entire system.
The porosity follows straightforwardly by its proper definition

\phi = \frac{V_V}{V_T}.
Note that this method assumes that gas communicates between the pores and the surrounding volume. In practice, this means that the pores must not be closed cavities.
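The sketch below works through the gas-expansion calculation described above, applying Boyle's law exactly as in the formula; the chamber volumes and pressures are invented illustrative numbers, not data from a real porosimeter.

```python
def gas_expansion_porosity(v_t, v_a, v_b, p1, p2):
    """Pore volume and porosity from a two-chamber gas-expansion test.

    v_t -- bulk volume of the sample
    v_a -- volume of the chamber containing the sample
    v_b -- volume of the initially evacuated chamber
    p1  -- initial pressure in the sample chamber
    p2  -- final equilibrium pressure of the whole system
    """
    v_v = v_t - v_a - v_b * p2 / (p2 - p1)   # effective (connected) pore volume
    return v_v, v_v / v_t                    # pore volume and porosity

# Illustrative numbers: 100 cm^3 sample chamber, 40 cm^3 evacuated chamber,
# 80 cm^3 sample, pressure falling from 200 kPa to 100 kPa on expansion.
v_v, phi = gas_expansion_porosity(v_t=80.0, v_a=100.0, v_b=40.0, p1=200.0, p2=100.0)
print(f"pore volume = {v_v:.1f} cm^3, porosity = {phi:.2f}")  # 20.0 cm^3, 0.25
```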
Thermoporosimetry and cryoporometry. A small crystal of a liquid melts at a lower temperature than the bulk liquid, as given by the Gibbs-Thomson equation. Thus if a liquid is imbibed into a porous material, and frozen, the melting temperature will provide information on the pore-size distribution. The detection of the melting can be done by sensing the transient heat flows during phase-changes using differential scanning calorimetry – (DSC thermoporometry), measuring the quantity of mobile liquid using nuclear magnetic resonance – (NMR cryoporometry) or measuring the amplitude of neutron scattering from the imbibed crystalline or liquid phases – (ND cryoporometry).
| Physical sciences | Petrology | Earth science |
19463014 | https://en.wikipedia.org/wiki/Atomic%20emission%20spectroscopy | Atomic emission spectroscopy | Atomic emission spectroscopy (AES) is a method of chemical analysis that uses the intensity of light emitted from a flame, plasma, arc, or spark at a particular wavelength to determine the quantity of an element in a sample. The wavelength of the atomic spectral line in the emission spectrum gives the identity of the element while the intensity of the emitted light is proportional to the number of atoms of the element. The sample may be excited by various methods.
Atomic emission spectroscopy allows us to measure interactions between electromagnetic radiation and physical atoms and molecules. This interaction is measured in the form of electromagnetic waves representing the changes in energy between atomic energy levels. When elements are burned in a flame, they emit electromagnetic radiation that can be recorded in the form of spectral lines. Each element produces its own unique set of spectral lines because each element has a different electronic structure, so this method is an important tool for identifying the makeup of materials. Robert Bunsen and Gustav Kirchhoff were the first to establish atomic emission spectroscopy as a tool in chemistry.
When an element is burned in a flame, its atoms move from the ground electronic state to the excited electronic state. As atoms in the excited state move back down into the ground state, they emit light. The Boltzmann expression is used to relate temperature to the number of atoms in the excited state where larger temperatures indicate a larger population of excited atoms. This relationship is written as:
\frac{n_\text{upper}}{n_\text{lower}} = \frac{g_\text{upper}}{g_\text{lower}} \exp\left(-\frac{\varepsilon_\text{upper} - \varepsilon_\text{lower}}{kT}\right)

where n_upper and n_lower are the number of atoms in the higher and lower energy levels, g_upper and g_lower are the degeneracies of the higher and lower energy levels, ε_upper and ε_lower are the energies of the higher and lower energy levels, k is the Boltzmann constant, and T is the temperature. The wavelengths of this light can be dispersed and measured by a monochromator, and the intensity of the light can be used to determine the number of atoms in the excited state. For atomic emission spectroscopy, the radiation emitted by atoms in the excited state is measured specifically after they have already been excited.
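To get a feel for what the Boltzmann expression implies in practice, the short sketch below compares the excited-state population fraction at a flame-like temperature and at a plasma-like temperature for a hypothetical transition; the 2.1 eV energy gap and the degeneracy ratio of 2 are illustrative stand-ins, not values for any specific element.

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def boltzmann_ratio(delta_e_ev, g_upper, g_lower, temperature_k):
    """n_upper / n_lower from the Boltzmann expression."""
    return (g_upper / g_lower) * math.exp(-delta_e_ev / (K_B_EV * temperature_k))

# Hypothetical visible transition: 2.1 eV gap, degeneracy ratio 2 (illustrative).
for label, temp in (("flame, ~2500 K", 2500), ("plasma, ~7000 K", 7000)):
    ratio = boltzmann_ratio(2.1, g_upper=2, g_lower=1, temperature_k=temp)
    print(f"{label}: n_upper/n_lower = {ratio:.1e}")
```

The much larger excited-state fraction at the higher temperature is consistent with the later observation that plasmas, operating far hotter than flames, give a higher population of excited states.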
Much information can be obtained from atomic emission spectroscopy by interpreting the spectral lines produced when an atom is excited. The width of spectral lines can provide information about an atom’s kinetic temperature and electron density. Comparing the intensities of spectral lines is useful for determining the chemical makeup of mixtures and materials. Atomic emission spectroscopy is mainly used for determining the composition of mixtures because each element has its own unique spectrum.
Flame
The sample of a material (analyte) is brought into the flame as a gas, sprayed solution, or directly inserted into the flame by use of a small loop of wire, usually platinum. The heat from the flame evaporates the solvent and breaks intramolecular bonds to create free atoms. The thermal energy also excites the atoms into excited electronic states that subsequently emit light when they return to the ground electronic state. Each element emits light at a characteristic wavelength, which is dispersed by a grating or prism and detected in the spectrometer.
A frequent application of the emission measurement with the flame is the regulation of alkali metals for pharmaceutical analytics.
Inductively coupled plasma
Inductively coupled plasma atomic emission spectroscopy (ICP-AES) uses an inductively coupled plasma to produce excited atoms and ions that emit electromagnetic radiation at wavelengths characteristic of a particular element.
Advantages of ICP-AES are the excellent limit of detection and linear dynamic range, multi-element capability, low chemical interference and a stable and reproducible signal. Disadvantages are spectral interferences (many emission lines), cost and operating expense and the fact that samples typically must be in a liquid solution.
An inductively coupled plasma (ICP) emission source consists of an induction coil and a plasma. An induction coil is a coil of wire that has an alternating current flowing through it. This current induces a magnetic field inside the coil, coupling a great deal of energy to the plasma contained in a quartz tube inside the coil. A plasma is a collection of charged particles (cations and electrons) capable, by virtue of their charge, of interacting with a magnetic field. The plasmas used in atomic emission are formed by ionizing a flowing stream of argon gas. The plasma's high temperature results from resistive heating as the charged particles move through the gas. Because plasmas operate at much higher temperatures than flames, they provide better atomization and a higher population of excited states.
The predominant form of sample matrix in ICP-AES today is a liquid sample: acidified water or solids digested into aqueous forms. Liquid samples are pumped into the nebulizer and sample chamber via a peristaltic pump. Then the samples pass through a nebulizer that creates a fine mist of liquid particles. Larger water droplets condense on the sides of the spray chamber and are removed via the drain, while finer water droplets move with the argon flow and enter the plasma. With plasma emission, it is possible to analyze solid samples directly. These procedures include incorporating electrothermal vaporization, laser and spark ablation, and glow-discharge vaporization.
Spark and arc
Spark or arc atomic emission spectroscopy is used for the analysis of metallic elements in solid samples. For non-conductive materials, the sample is ground with graphite powder to make it conductive. In traditional arc spectroscopy methods, a sample of the solid was commonly ground up and destroyed during analysis. An electric arc or spark is passed through the sample, heating it to a high temperature to excite the atoms within it. The excited analyte atoms emit light at characteristic wavelengths that can be dispersed with a monochromator and detected. In the past, the spark or arc conditions were typically not well controlled, so the analysis of the elements in the sample was only qualitative. However, modern spark sources with controlled discharges can be considered quantitative. Both qualitative and quantitative spark analysis are widely used for production quality control in foundry and metal casting facilities.
| Physical sciences | Spectroscopy | Chemistry |
19467352 | https://en.wikipedia.org/wiki/Richter%20scale | Richter scale | The Richter scale (), also called the Richter magnitude scale, Richter's magnitude scale, and the Gutenberg–Richter scale, is a measure of the strength of earthquakes, developed by Charles Richter in collaboration with Beno Gutenberg, and presented in Richter's landmark 1935 paper, where he called it the "magnitude scale". This was later revised and renamed the local magnitude scale, denoted as ML or .
Because of various shortcomings of the original scale, most seismological authorities now use other similar scales such as the moment magnitude scale (Mw) to report earthquake magnitudes, but much of the news media still erroneously refers to these as "Richter" magnitudes. All magnitude scales retain the logarithmic character of the original and are scaled to have roughly comparable numeric values (typically in the middle of the scale). Due to the variance in earthquakes, it is essential to understand that the Richter scale uses common logarithms simply to make the measurements manageable (i.e., a magnitude 3 quake corresponds to a factor of 10³ while a magnitude 5 quake corresponds to a factor of 10⁵ and has seismometer readings 100 times larger).
Richter magnitudes
The Richter magnitude of an earthquake is determined from the logarithm of the amplitude of waves recorded by seismographs. Adjustments are included to compensate for the variation in the distance between the various seismographs and the epicenter of the earthquake. The original formula is:
M_\mathrm{L} = \log_{10} A - \log_{10} A_0(\delta) = \log_{10}[A / A_0(\delta)],

where A is the maximum excursion of the Wood-Anderson seismograph and the empirical function A_0 depends only on the epicentral distance of the station, \delta. In practice, readings from all observing stations are averaged after adjustment with station-specific corrections to obtain the M_L value.
Because of the logarithmic basis of the scale, each whole number increase in magnitude represents a tenfold increase in measured amplitude. In terms of energy, each whole number increase corresponds to an increase of about 31.6 times the amount of energy released, and each increase of 0.2 corresponds to approximately a doubling of the energy released.
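A short sketch of the scaling just described: it simply evaluates the powers of ten implied by the logarithmic definition, with no seismological data involved.

```python
def amplitude_ratio(delta_magnitude):
    """Ratio of measured wave amplitudes for a given magnitude difference."""
    return 10.0 ** delta_magnitude

def energy_ratio(delta_magnitude):
    """Approximate ratio of radiated energies (energy grows as ~10^(1.5 * dM))."""
    return 10.0 ** (1.5 * delta_magnitude)

for dm in (0.2, 1.0, 2.0):
    print(f"dM = {dm}: amplitude x{amplitude_ratio(dm):.1f}, "
          f"energy x{energy_ratio(dm):.1f}")
# dM = 0.2 roughly doubles the energy; dM = 1.0 gives ~31.6x; dM = 2.0 gives ~1000x.
```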
Events with magnitudes greater than 4.5 are strong enough to be recorded by a seismograph anywhere in the world, so long as its sensors are not located in the earthquake's shadow.
The following describes the typical effects of earthquakes of various magnitudes near the epicenter. The values are typical and may not be exact in a future event because intensity and ground effects depend not only on the magnitude but also on (1) the distance to the epicenter, (2) the depth of the earthquake's focus beneath the epicenter, (3) the location of the epicenter, and (4) geological conditions.
(Based on U.S. Geological Survey documents.)
The intensity and death toll depend on several factors (earthquake depth, epicenter location, and population density, to name a few) and can vary widely.
Millions of minor earthquakes occur every year worldwide, equating to hundreds every hour every day. On the other hand, earthquakes of magnitude ≥8.0 occur about once a year, on average. The largest recorded earthquake was the Great Chilean earthquake of May 22, 1960, which had a magnitude of 9.5 on the moment magnitude scale.
Seismologist Susan Hough has suggested that a magnitude 10 quake may represent a very approximate upper limit for what the Earth's tectonic zones are capable of, which would be the result of the largest known continuous belt of faults rupturing together (along the Pacific coast of the Americas). Research at Tohoku University in Japan found that a magnitude 10 earthquake was theoretically possible if a combined stretch of faults from the Japan Trench to the Kuril–Kamchatka Trench ruptured together and moved by a sufficiently large distance (or if a similar large-scale rupture occurred elsewhere). Such an earthquake would cause ground motions for up to an hour, with tsunamis hitting shores while the ground is still shaking, and if this kind of earthquake occurred, it would probably be a 1-in-10,000-year event.
Development
Prior to the development of the magnitude scale, the only measure of an earthquake's strength or "size" was a subjective assessment of the intensity of shaking observed near the epicenter of the earthquake, categorized by various seismic intensity scales such as the Rossi–Forel scale. ("Size" is used in the sense of the quantity of energy released, not the size of the area affected by shaking, though higher-energy earthquakes do tend to affect a wider area, depending on the local geology.) In 1883, John Milne surmised that the shaking of large earthquakes might generate waves detectable around the globe, and in 1899 E. Von Rehbur Paschvitz observed in Germany seismic waves attributable to an earthquake in Tokyo. In the 1920s, Harry O. Wood and John A. Anderson developed the Wood–Anderson seismograph, one of the first practical instruments for recording seismic waves. Wood then built, under the auspices of the California Institute of Technology and the Carnegie Institute, a network of seismographs stretching across Southern California. He also recruited the young and unknown Charles Richter to measure the seismograms and locate the earthquakes generating the seismic waves.
In 1931, Kiyoo Wadati showed how he had measured, for several strong earthquakes in Japan, the amplitude of the shaking observed at various distances from the epicenter. He then plotted the logarithm of the amplitude against the distance and found a series of curves that showed a rough correlation with the estimated magnitudes of the earthquakes. Richter resolved some difficulties with this method and then, using data collected by his colleague Beno Gutenberg, he produced similar curves, confirming that they could be used to compare the relative magnitudes of different earthquakes.
Additional developments were required to produce a practical method of assigning an absolute measure of magnitude. First, to span the wide range of possible values, Richter adopted Gutenberg's suggestion of a logarithmic scale, where each step represents a tenfold increase of magnitude, similar to the magnitude scale used by astronomers for star brightness. Second, he wanted a magnitude of zero to be around the limit of human perceptibility. Third, he specified the Wood–Anderson seismograph as the standard instrument for producing seismograms. Magnitude was then defined as "the logarithm of the maximum trace amplitude, expressed in microns", measured at a distance of 100 km. The scale was calibrated by defining a magnitude 0 shock as one that produces (at a distance of 100 km) a maximum amplitude of 1 micron (1 μm, or 0.001 millimeters) on a seismogram recorded by a Wood-Anderson torsion seismometer. Finally, Richter calculated a table of distance corrections, since for distances less than 200 kilometers the attenuation is strongly affected by the structure and properties of the regional geology.
When Richter presented the resulting scale in 1935, he called it (at the suggestion of Harry Wood) simply a "magnitude" scale. "Richter magnitude" appears to have originated when Perry Byerly told the press that the scale was Richter's and "should be referred to as such." In 1956, Gutenberg and Richter, while still referring to "magnitude scale", labelled it "local magnitude", with the symbol , to distinguish it from two other scales they had developed, the surface-wave magnitude (MS) and body wave magnitude (MB) scales.
Details
The Richter scale was defined in 1935 for particular circumstances and instruments; the particular circumstances refer to it being defined for Southern California and "implicitly incorporates the attenuative properties of Southern California crust and mantle." The particular instrument used would become saturated by strong earthquakes and unable to record high values. The scale was replaced in the 1970s by the moment magnitude scale (MMS, symbol Mw); for earthquakes adequately measured by the Richter scale, numerical values are approximately the same. Although the values measured for earthquakes now are Mw, they are frequently reported by the press as Richter values, even for earthquakes of magnitude over 8, when the Richter scale becomes meaningless.
The Richter and MMS scales measure the energy released by an earthquake; another scale, the Mercalli intensity scale, classifies earthquakes by their effects, from detectable by instruments but not noticeable, to catastrophic. The energy and effects are not necessarily strongly correlated; a shallow earthquake in a populated area with soil of certain types can be far more intense in impact than a much more energetic deep earthquake in an isolated area.
Several scales have historically been described as the "Richter scale", especially the local magnitude ML and the surface wave magnitude MS scale. In addition, the body wave magnitude, mb, and the moment magnitude, Mw, abbreviated MMS, have been widely used for decades. Seismologists are developing a couple of new techniques to measure magnitude.
All magnitude scales have been designed to give numerically similar results. This goal has been achieved well for ML, MS, and Mw. The mb scale gives somewhat different values than the other scales. The reason for so many different ways to measure the same thing is that at different distances, for different hypocentral depths, and for different earthquake sizes, the amplitudes of different types of elastic waves must be measured.
ML is the scale used for the majority of earthquakes reported (tens of thousands) by local and regional seismological observatories. For large earthquakes worldwide, the moment magnitude scale (MMS) is most common, although MS is also reported frequently.
The seismic moment, M0, is proportional to the area of the rupture times the average slip that took place in the earthquake, thus it measures the physical size of the event. Mw is derived from it empirically as a quantity without units, just a number designed to conform to the MS scale. A spectral analysis is required to obtain M0. In contrast, the other magnitudes are derived from a simple measurement of the amplitude of a precisely defined wave.
All scales, except Mw, saturate for large earthquakes, meaning they are based on the amplitudes of waves that have a wavelength shorter than the rupture length of the earthquakes. These short waves (high-frequency waves) are too short a yardstick to measure the extent of the event. The resulting effective upper limit of measurement for ML is about 7 and about 8.5 for MS.
New techniques to avoid the saturation problem and to measure magnitudes rapidly for very large earthquakes are being developed. One of these is based on the long-period P-wave; the other is based on a recently discovered channel wave.
The energy release of an earthquake, which closely correlates to its destructive power, scales with the 3/2 power of the shaking amplitude (see Moment magnitude scale for an explanation). Thus, a difference in magnitude of 1.0 is equivalent to a factor of 31.6 (= 10^1.5) in the energy released; a difference in magnitude of 2.0 is equivalent to a factor of 1000 (= 10^3) in the energy released. The elastic energy radiated is best derived from an integration of the radiated spectrum, but an estimate can be based on mb because most energy is carried by the high-frequency waves.
Magnitude empirical formulae
These formulae for Richter magnitude are alternatives to using Richter correlation tables based on the Richter standard seismic event. In the formulas below, the epicentral distance is expressed either in kilometers or, where noted, as the equivalent sea-level great circle distance in degrees.
The Lillie empirical formula is:
where is the amplitude (maximum ground displacement) of the P wave, in micrometers (μm), measured at 0.8 Hz.
Lahr's empirical formula proposal is:
where
is seismograph signal amplitude in mm and
is in km, for distances under 200 km .
and
where is in km, for distances between 200 km and 600 km .
The Bisztricsany empirical formula (1958) for epicentre distances between 4° and 160° is:
where
is the duration of the surface wave in seconds, and
is in degrees.
is mainly between 5 and 8.
The Tsumura empirical formula is:
where
is the total duration of oscillation in seconds.
mainly takes on values between 3 and 5.
The Tsuboi (University of Tokyo) empirical formula is:
where is the amplitude in μm.
| Physical sciences | Seismology | Earth science |
19468046 | https://en.wikipedia.org/wiki/Autoimmune%20disease | Autoimmune disease | An autoimmune disease is a condition that results from an anomalous response of the adaptive immune system, wherein it mistakenly targets and attacks healthy, functioning parts of the body as if they were foreign organisms. It is estimated that there are more than 80 recognized autoimmune diseases, with recent scientific evidence suggesting the existence of potentially more than 100 distinct conditions. Nearly any body part can be involved.
Autoimmune diseases are a separate class from autoinflammatory diseases. Both are characterized by an immune system malfunction which may cause similar symptoms, such as rash, swelling, or fatigue, but the cardinal cause or mechanism of the diseases are different. A key difference is a malfunction of the innate immune system in autoinflammatory diseases, whereas in autoimmune diseases there is a malfunction of the adaptive immune system.
Symptoms of autoimmune diseases can significantly vary, primarily based on the specific type of the disease and the body part that it affects. Symptoms are often diverse and can be fleeting, fluctuating from mild to severe, and typically comprise low-grade fever, fatigue, and general malaise. However, some autoimmune diseases may present with more specific symptoms such as joint pain, skin rashes (e.g., urticaria), or neurological symptoms.
The exact causes of autoimmune diseases remain unclear and are likely multifactorial, involving both genetic and environmental influences. While some diseases like lupus exhibit familial aggregation, suggesting a genetic predisposition, other cases have been associated with infectious triggers or exposure to environmental factors, implying a complex interplay between genes and environment in their etiology.
Some of the most common diseases that are generally categorized as autoimmune include coeliac disease, type 1 diabetes, Graves' disease, inflammatory bowel diseases (such as Crohn's disease and ulcerative colitis), multiple sclerosis, alopecia areata, Addison's disease, pernicious anemia, psoriasis, rheumatoid arthritis, and systemic lupus erythematosus. Diagnosing autoimmune diseases can be challenging due to their diverse presentations and the transient nature of many symptoms.
Treatment modalities for autoimmune diseases vary based on the type of disease and its severity. Therapeutic approaches primarily aim to manage symptoms, reduce immune system activity, and maintain the body's ability to fight diseases. Nonsteroidal anti-inflammatory drugs (NSAIDs) and immunosuppressants are commonly used to reduce inflammation and control the overactive immune response. In certain cases, intravenous immunoglobulin may be administered to regulate the immune system. Despite these treatments often leading to symptom improvement, they usually do not offer a cure and long-term management is often required.
In terms of prevalence, a UK study found that 10% of the population were affected by an autoimmune disease. Women are more commonly affected than men. Autoimmune diseases predominantly begin in adulthood, although they can start at any age. The initial recognition of autoimmune diseases dates back to the early 1900s, and since then, advancements in understanding and management of these conditions have been substantial, though much more is needed to fully unravel their complex etiology and pathophysiology.
Signs and symptoms
Autoimmune diseases represent a vast and diverse category of disorders that, despite their differences, share some common symptomatic threads. These shared symptoms occur as a result of the body's immune system mistakenly attacking its own cells and tissues, causing inflammation and damage. However, due to the broad range of autoimmune diseases, the specific presentation of symptoms can significantly vary based on the type of disease, the organ systems affected, and individual factors such as age, sex, hormonal status, and environmental influences.
An individual may simultaneously have more than one autoimmune disease (known as polyautoimmunity), further complicating the symptomatology.
Common symptoms
Symptoms that are commonly associated with autoimmune diseases include:
fatigue. This is the most common complaint of people with autoimmune disease. A 2015 US survey found that 98% of people with autoimmune diseases experienced fatigue, 89% said it was a "major issue", 68% said "fatigue is anything but normal. It is profound and prevents [them] from doing the simplest everyday tasks." and 59% said it was "probably the most debilitating symptom of having an [autoimmune disease]."
low-grade fever
malaise (a general feeling of discomfort or unease)
muscle aches
joint pain
skin rashes
Specific autoimmune diseases have a wide range of other symptoms, with examples including dry mouth, dry eyes, tingling and numbness in parts of the body, unexpected weight loss or gain, and diarrhoea.
Patterns of symptom occurrence
These symptoms often reflect the body's systemic inflammatory response. However, their occurrence and intensity can fluctuate over time, leading to periods of heightened disease activity, referred to as flare-ups, and periods of relative inactivity, known as remissions.
The specific presentation of symptoms largely depends on the location and type of autoimmune response. For instance, in rheumatoid arthritis, an autoimmune disease primarily affecting the joints, symptoms typically include joint pain, swelling, and stiffness. On the other hand, type 1 diabetes, which results from an autoimmune attack on the insulin-producing cells of the pancreas, primarily presents with symptoms related to high blood sugar, such as increased thirst, frequent urination, and unexplained weight loss.
Commonly affected body areas
Commonly affected areas in autoimmune diseases include blood vessels, connective tissues, joints, muscles, red blood cells, skin, and endocrine glands such as the thyroid gland (in diseases like Hashimoto's thyroiditis and Graves' disease) and the pancreas (in type 1 diabetes). The impacts of these diseases can range from localized damage to certain tissues, alteration in organ growth and function, to more systemic effects when multiple tissues throughout the body are affected.
Value of tracking symptom occurrence
The appearance of these signs and symptoms can not only provide clues for the diagnosis of an autoimmune condition, often in conjunction with tests for specific biological markers, but also help monitor disease progression and response to treatment. Ultimately, due to the diverse nature of autoimmune diseases, a multidimensional approach is often needed for the management of these conditions, taking into consideration the variety of symptoms and their impacts on individuals' lives.
Types
While it is estimated that over 80 recognized types of autoimmune diseases exist, this section provides an overview of some of the most common and well-studied forms.
Coeliac disease
Coeliac disease is an immune reaction to eating gluten, a protein found in wheat, barley, and rye. For those with the disease, eating gluten triggers an immune response in the small intestine, leading to damage on the villi, small fingerlike projections that line the small intestine and promote nutrient absorption. This explains the increased risk of gastrointestinal cancers, as the gastrointestinal tract includes the esophagus, stomach, small intestine, large intestine, rectum, and anus, all areas that the ingested gluten would traverse in digestion. The incidence of gastrointestinal cancer can be partially reduced or eliminated if a patient removes gluten from their diet. Additionally, coeliac disease is correlated with lymphoproliferative disorders.
Graves' disease
Graves' disease is a condition characterized by development of autoantibodies to thyroid-stimulating hormone receptors. The binding of the autoantibodies to the receptors results in unregulated production and release of thyroid hormone, which can lead to stimulatory effects such as rapid heart rate, weight loss, nervousness, and irritability. Other symptoms more specific to Graves' disease include bulging eyes and swelling of the lower legs.
Inflammatory bowel disease
Inflammatory bowel disease encompasses conditions characterized by chronic inflammation of the digestive tract, including Crohn's disease and ulcerative colitis. In both cases, individuals lose immune tolerance for normal bacteria present in the gut microbiome. Symptoms include severe diarrhea, abdominal pain, fatigue, and weight loss. Inflammatory bowel disease is associated with cancers of the gastrointestinal tract and some lymphoproliferative cancers.
Multiple sclerosis
Multiple sclerosis (MS) is a neurodegenerative disease in which the immune system attacks myelin, a protective covering of nerve fibers in the central nervous system, causing communication problems between the brain and the rest of the body. Symptoms can include fatigue, difficulty walking, numbness or tingling, muscle weakness, and problems with coordination and balance. MS is associated with an increased risk of central nervous system cancer, primarily in the brain.
Rheumatoid arthritis
Rheumatoid arthritis (RA) primarily targets the joints, causing persistent inflammation that results in joint damage and pain. It is often symmetrical, meaning that if one hand or knee has it, the other one does too. RA can also affect the heart, lungs, and eyes. Additionally, the chronic inflammation and over-activation of the immune system creates an environment that favors further malignant transformation of other cells, perhaps explaining the associations with cancer of the lungs and skin as well as the increased risk of other hematologic cancers, none of which are directly affected by the inflammation of joints.
Psoriasis and psoriatic arthritis
Psoriasis is a skin condition characterized by the rapid buildup of skin cells, leading to scaling on the skin's surface. Inflammation and redness around the scales is common. Some individuals with psoriasis also develop psoriatic arthritis, which causes joint pain, stiffness, and swelling.
Sjögren's syndrome
Sjögren syndrome is a long-term autoimmune disease that affects the body's moisture-producing glands (lacrimal and salivary), and often seriously affects other organ systems, such as the lungs, kidneys, and nervous system.
Systemic lupus erythematosus
Systemic lupus erythematosus, referred to simply as lupus, is a systemic autoimmune disease that affects multiple organs, including the skin, joints, kidneys, and the nervous system. It is characterized by a widespread loss of immune tolerance. The disease is characterized by periods of flares and remissions, and symptoms range from mild to severe. Women, especially those of childbearing age, are disproportionately affected.
Type 1 diabetes
Type 1 diabetes is a condition resulting from the immune system attacking insulin-producing beta cells in the pancreas, leading to high blood sugar levels. Symptoms include increased thirst, frequent urination, and unexplained weight loss. It is most commonly diagnosed in children and young adults.
Undifferentiated connective tissue disease
Undifferentiated connective tissue disease occurs when people have features of connective tissue disease, such as blood test results and external characteristics, but do not fulfill the diagnostic criteria established for any one connective tissue disease. Some 30–40% transition to a specific connective tissue disease over time.
Causes
The exact causes of autoimmune diseases remain largely unknown; however, research has suggested that a combination of genetic, environmental, and hormonal factors, as well as certain infections, may contribute to the development of these disorders.
The human immune system is equipped with several mechanisms to maintain a delicate balance between defending against foreign invaders and protecting its own cells. To achieve this, it generates both T cells and B cells, which are capable of reacting with self-proteins. However, in a healthy immune response, self-reactive cells are generally either eliminated before they become active, rendered inert via a process called anergy, or their activities are suppressed by regulatory cells.
Genetics
A familial tendency to develop autoimmune diseases suggests a genetic component. Some conditions, like lupus and multiple sclerosis, often occur in several members of the same family, indicating a potential hereditary link. Additionally, certain genes have been identified that increase the risk of developing specific autoimmune diseases.
Genetic predisposition
Evidence suggests a strong genetic component in the development of autoimmune diseases. For instance, conditions such as lupus and multiple sclerosis frequently appear in multiple members of the same family, signifying a potential hereditary link. Furthermore, certain genes have been identified that augment the risk of developing specific autoimmune diseases.
Experimental methods like genome-wide association studies have proven instrumental in pinpointing genetic risk variants potentially responsible for autoimmune diseases. For example, these studies have been used to identify risk variants for diseases such as type 1 diabetes and rheumatoid arthritis.
In twin studies, autoimmune diseases consistently demonstrate a higher concordance rate among identical twins compared with fraternal twins. For instance, the rate in multiple sclerosis is 35% in identical twins compared to 6% in fraternal twins.
Balancing infection and autoimmunity
There is increasing evidence that certain genes selected during evolution offer a balance between susceptibility to infection and the capacity to avoid autoimmune diseases. For example, variants in the ERAP2 gene provide some resistance to infection even though they increase the risk of autoimmunity (positive selection). In contrast, variants in the TYK2 gene protect against autoimmune diseases but increase the risk of infection (negative selection). This suggests the benefits of infection resistance may outweigh the risks of autoimmune diseases, particularly given the historically high risk of infection.
Several experimental methods such as the genome-wide association studies have been used to identify genetic risk variants that may be responsible for diseases such as type 1 diabetes and rheumatoid arthritis.
Environmental factors
A significant number of environmental factors have been implicated in the development and progression of various autoimmune diseases, either directly or as catalysts. Current research suggests that up to seventy percent of autoimmune diseases could be attributed to environmental influences, which encompass an array of elements such as chemicals, infectious agents, dietary habits, and gut dysbiosis. However, a unifying theory that definitively explains the onset of autoimmune diseases remains elusive, emphasizing the complexity and multifaceted nature of these conditions.
Various environmental triggers are identified, some of which include:
Impaired oral tolerance
Gut dysbiosis
Increased gut permeability
Heightened immune reactivity
Chemicals, which are either a part of the immediate environment or found in drugs, are key players in this context. Examples of such chemicals include hydrazines, hair dyes, trichloroethylene, tartrazines, hazardous wastes, and industrial emissions.
Ultraviolet radiation has been implicated as a potential causative factor in the development of autoimmune diseases, such as dermatomyositis. Furthermore, exposure to pesticides has been linked with an increased risk of developing rheumatoid arthritis. Vitamin D, on the other hand, appears to play a protective role, particularly in older populations, by preventing immune dysfunctions.
Infectious agents are also increasingly recognized for their role as T cell activators, a crucial step in triggering autoimmune diseases. The exact mechanisms by which they contribute to disease onset remain to be fully understood. For instance, certain autoimmune conditions like Guillain–Barré syndrome and rheumatic fever are thought to be triggered by infections. Furthermore, analysis of large-scale data has revealed a significant link between SARS-CoV-2 infection (the causative agent of COVID-19) and an increased risk of developing a wide range of new-onset autoimmune diseases.
Sex
Women typically make up some 80% of autoimmune disease patients. Whilst many proposals have been made for the cause of this high weighting, no clear explanation is available. A possible role for hormonal factors has been suggested. For example, some autoimmune diseases tend to flare during pregnancy (possibly as an evolutionary mechanism to increase health protection for the child), when hormone levels are high, and improve after menopause, when hormone levels decrease. Women may also naturally have autoimmune disease trigger events in puberty and pregnancy. Under-reporting by men may also be a factor, as men may interact less with the health system than women.
Infections
Certain viral and bacterial infections have been linked to autoimmune diseases. For instance, research suggests that the bacterium that causes strep throat, Streptococcus pyogenes, might trigger rheumatic fever, an autoimmune response affecting the heart. Similarly, some studies propose a link between the Epstein–Barr virus, responsible for mononucleosis, and the subsequent development of multiple sclerosis or lupus.
Dysregulated immune response
Another area of interest is the immune system's ability to distinguish between self and non-self, a function that is compromised in autoimmune diseases. In healthy individuals, immune tolerance prevents the immune system from attacking the body's own cells. When this process fails, the immune system may produce antibodies against its own tissues, leading to an autoimmune response.
Negative selection and the role of the thymus
The elimination of self-reactive T cells occurs primarily through a mechanism known as "negative selection" within the thymus, an organ responsible for the maturation of T cells. This process serves as a key line of defense against autoimmunity. If these protective mechanisms fail, a pool of self-reactive cells can become functional within the immune system, contributing to the development of autoimmune diseases.
Molecular mimicry
Some infectious agents, like Campylobacter jejuni, bear antigens that resemble, but are not identical to, the body's self-molecules. This phenomenon, known as molecular mimicry, can lead to cross-reactivity, where the immune response to such infections inadvertently results in the production of antibodies that also react with self-antigens. An example of this is Guillain–Barré syndrome, in which antibodies generated in response to a C. jejuni infection also react with the gangliosides in the myelin sheath of peripheral nerve axons.
Diagnosis
Diagnosing autoimmune disorders can be complex due to the wide range of diseases within this category and their often overlapping symptoms. Accurate diagnosis is crucial for determining appropriate treatment strategies. Generally, the diagnostic process involves a combination of medical history evaluation, physical examination, laboratory tests, and, in some cases, imaging or biopsies.
Medical history and examination
The first step in diagnosing autoimmune disorders typically involves a thorough evaluation of the patient's medical history and a comprehensive physical examination. Clinicians often pay close attention to the patient's symptoms, family history of autoimmune diseases, and any exposure to environmental factors that might trigger an autoimmune response. The physical examination can reveal signs of inflammation or organ damage, which are common features of autoimmune disorders.
Laboratory tests
Laboratory testing plays a pivotal role in the diagnosis of autoimmune diseases. These tests can identify the presence of certain autoantibodies or other immune markers that indicate a self-directed immune response.
Autoantibody testing: Many autoimmune diseases are characterized by the presence of autoantibodies. Blood tests can identify these antibodies, which are directed against the body's own tissues. For example, antinuclear antibody (ANA) testing is commonly used in the diagnosis of systemic lupus erythematosus and other autoimmune diseases.
Complete Blood Count: Blood counts can provide valuable information about the number and characteristics of different blood cells, which can be affected in some autoimmune diseases.
C-Reactive Protein and Erythrocyte Sedimentation Rate: These tests measure the levels of inflammation in the body, which is often elevated in autoimmune disorders.
Organ-specific tests: Certain autoimmune diseases target specific organs, so tests to evaluate the function of these organs can aid in diagnosis. For example, thyroid function tests are used in diagnosing autoimmune thyroid disorders, while a biopsy can diagnose coeliac disease by identifying damage to the small intestine.
Imaging studies
In some cases, imaging studies may be used to assess the extent of organ involvement and damage. For example, chest x-rays or CT scans can identify lung involvement in diseases like rheumatoid arthritis or systemic lupus erythematosus, while an MRI can reveal inflammation or damage in the brain and spinal cord in multiple sclerosis.
Differential diagnosis
Given the variety and nonspecific nature of symptoms that can be associated with autoimmune diseases, differential diagnosis—determining which of several diseases with similar symptoms is causing a patient's illness—is an important part of the diagnostic process. This often involves ruling out other potential causes of symptoms, such as infections, malignancies, or genetic disorders.
Multidisciplinary approach
Given the systemic nature of many autoimmune disorders, a multidisciplinary approach may be necessary for their diagnosis and management. This can involve rheumatologists, endocrinologists, gastroenterologists, neurologists, dermatologists, and other specialists, depending on the organs or systems affected by the disease.
In summary, the diagnosis of autoimmune disorders is a complex process that requires a thorough evaluation of clinical, laboratory, and imaging data. Due to the diverse nature of these diseases, an individualized approach, often involving multiple specialists, is crucial for an accurate diagnosis.
Treatment
Treatment depends on the type and severity of the condition. Most autoimmune diseases are chronic and there is no definitive cure, but symptoms can be alleviated and controlled with treatment.
Standard treatment methods include:
Vitamin or hormone supplements for what the body is lacking due to the disease (insulin, vitamin B12, thyroid hormone, etc.)
Blood transfusions if the disease is blood related
Physical therapy if the disease impacts bones, joints, or muscles
Traditional treatment options include immunosuppressant drugs to reduce the immune response against the body's own tissues, such as:
Non-steroidal anti-inflammatory drugs (NSAIDs) to reduce inflammation
Glucocorticoids to reduce inflammation
Disease-modifying anti-rheumatic drugs (DMARDs) to reduce tissue and organ damage caused by the inflammatory autoimmune response
Because immunosuppressants weaken the overall immune response, relief of symptoms must be balanced with preserving the patient's ability to combat infections, which could potentially be life-threatening.
Non-traditional treatments are being researched, developed, and used, especially when traditional treatments fail. These methods aim to either block the activation of pathogenic cells in the body, or alter the pathway that suppresses these cells naturally. These treatments aim to be less toxic to the patient and have more specific targets. Such options include:
Monoclonal antibodies that can be used to block pro-inflammatory cytokines
Antigen-specific immunotherapy which allows immune cells to specifically target the abnormal cells that cause autoimmune disease
Co-stimulatory blockade that works to block the pathway that leads to the autoimmune response
Regulatory T cell therapy that utilizes this special type of T cell to suppress the autoimmune response
Thymoquinone, a compound found in the plant Nigella sativa, has been studied for its potential in treating several autoimmune diseases due to its effects on inflammation.
Epidemiology
The first estimate of US prevalence for autoimmune diseases as a group was published in 1997 by Jacobson, et al. They reported US prevalence to be around 9 million, applying prevalence estimates for 24 diseases to a US population of 279 million. Jacobson's work was updated by Hayter & Cook in 2012. This study used Witebsky's postulates, as revised by Rose & Bona, to extend the list to 81 diseases and estimated overall cumulative US prevalence for the 81 autoimmune diseases at 5.0%, with 3.0% for males and 7.1% for females.
The estimated community prevalence, which takes into account the observation that many people have more than one autoimmune disease, was 4.5% overall, with 2.7% for males and 6.4% for females.
A 2024 estimate was that 1 in 15 people in the U.S. had at least one autoimmune disease.
Research
In both autoimmune and inflammatory diseases, the condition arises through aberrant reactions of the human adaptive or innate immune systems. In autoimmunity, the patient's immune system is activated against the body's own proteins. In chronic inflammatory diseases, neutrophils and other leukocytes are constitutively recruited by cytokines and chemokines, resulting in tissue damage.
Mitigation of inflammation by activation of anti-inflammatory genes and the suppression of inflammatory genes in immune cells is a promising therapeutic approach. There is a body of evidence that once the production of autoantibodies has been initiated, autoantibodies have the capacity to maintain their own production.
Stem-cell therapy
Stem cell transplantation is being studied and has shown promising results in certain cases.
Medical trials to replace the pancreatic β cells that are destroyed in type 1 diabetes are in progress.
Altered glycan theory
According to this theory, the effector function of the immune response is mediated by the glycans (polysaccharides) displayed by the cells and humoral components of the immune system. Individuals with autoimmunity have alterations in their glycosylation profile such that a proinflammatory immune response is favored. It is further hypothesized that individual autoimmune diseases will have unique glycan signatures.
Hygiene hypothesis
According to the hygiene hypothesis, high levels of cleanliness expose children to fewer antigens than in the past, causing their immune systems to become overactive and more likely to misidentify the body's own tissues as foreign, resulting in autoimmune or allergic conditions such as asthma.
Vitamin D influence on immune response
Vitamin D is known as an immune regulator that assists in the adaptive and innate immune response. A deficiency in vitamin D, whether of hereditary or environmental origin, can lead to a weaker, less efficient immune response and is seen as a contributing factor in the development of autoimmune diseases. The active form of the vitamin, 1,25-(OH)2D3, acts through vitamin D response elements, DNA sequences that regulate the genes associated with pattern recognition receptor responses. Macrophages, dendritic cells, T-cells, and B-cells can produce and respond to 1,25-(OH)2D3. In its presence, the immune system's production of inflammatory cytokines is suppressed and more tolerogenic regulatory T-cells are generated. This is due to vitamin D's influence on cell maturation, particularly of T-cells, and on their phenotype expression. Lack of 1,25-(OH)2D3 can lead to less tolerant regulatory T-cells, greater presentation of antigens to less tolerant T-cells, and an increased inflammatory response.
| Biology and health sciences | Non-infectious disease | null |
19468696 | https://en.wikipedia.org/wiki/Fundamental%20theorem%20of%20calculus | Fundamental theorem of calculus | The fundamental theorem of calculus is a theorem that links the concept of differentiating a function (calculating its slopes, or rate of change at each point in time) with the concept of integrating a function (calculating the area under its graph, or the cumulative effect of small contributions). Roughly speaking, the two operations can be thought of as inverses of each other.
The first part of the theorem, the first fundamental theorem of calculus, states that for a continuous function , an antiderivative or indefinite integral can be obtained as the integral of over an interval with a variable upper bound.
Conversely, the second part of the theorem, the second fundamental theorem of calculus, states that the integral of a function over a fixed interval is equal to the change of any antiderivative between the ends of the interval. This greatly simplifies the calculation of a definite integral provided an antiderivative can be found by symbolic integration, thus avoiding numerical integration.
History
The fundamental theorem of calculus relates differentiation and integration, showing that these two operations are essentially inverses of one another. Before the discovery of this theorem, it was not recognized that these two operations were related. Ancient Greek mathematicians knew how to compute area via infinitesimals, an operation that we would now call integration. The origins of differentiation likewise predate the fundamental theorem of calculus by hundreds of years; for example, in the fourteenth century the notions of continuity of functions and motion were studied by the Oxford Calculators and other scholars. The historical relevance of the fundamental theorem of calculus is not the ability to calculate these operations, but the realization that the two seemingly distinct operations (calculation of geometric areas, and calculation of gradients) are actually closely related.
With the conjecture and proof of the fundamental theorem of calculus, calculus began to develop as a unified theory of integration and differentiation. The first published statement and proof of a rudimentary form of the fundamental theorem, strongly geometric in character, was by James Gregory (1638–1675). Isaac Barrow (1630–1677) proved a more generalized version of the theorem, while his student Isaac Newton (1642–1727) completed the development of the surrounding mathematical theory. Gottfried Leibniz (1646–1716) systematized the knowledge into a calculus for infinitesimal quantities and introduced the notation used today.
Geometric meaning/Proof
The first fundamental theorem may be interpreted as follows. Given a continuous function whose graph is plotted as a curve, one defines a corresponding "area function" such that is the area beneath the curve between and . The area may not be easily computable, but it is assumed to be well defined.
The area under the curve between and could be computed by finding the area between and , then subtracting the area between and . In other words, the area of this "strip" would be .
There is another way to estimate the area of this same strip. As shown in the accompanying figure, is multiplied by to find the area of a rectangle that is approximately the same size as this strip. So:
Dividing by h on both sides, we get:
This estimate becomes a perfect equality when h approaches 0:
That is, the derivative of the area function exists and is equal to the original function , so the area function is an antiderivative of the original function.
Thus, the derivative of the integral of a function (the area) is the original function, so that derivative and integral are inverse operations which reverse each other. This is the essence of the Fundamental Theorem.
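In symbols (the notation here is assumed for illustration, since the article's own symbols were not preserved): writing A(x) for the area under the curve between a fixed starting point and x, the strip argument above reads
A(x+h) - A(x) \approx f(x)\,h, \qquad \frac{A(x+h) - A(x)}{h} \approx f(x),
and letting h tend to zero gives
A'(x) = \lim_{h \to 0} \frac{A(x+h) - A(x)}{h} = f(x).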
Physical intuition
Intuitively, the fundamental theorem states that integration and differentiation are inverse operations which reverse each other.
The second fundamental theorem says that the sum of infinitesimal changes in a quantity (the integral of the derivative of the quantity) adds up to the net change in the quantity. To visualize this, imagine traveling in a car and wanting to know the distance traveled (the net change in position along the highway). You can see the velocity on the speedometer but cannot look out to see your location. Each second, you can find how far the car has traveled by multiplying the current speed (in kilometers or miles per hour) by the length of the time interval (1 second = 1/3600 hour). By summing up all these small steps, you can approximate the total distance traveled, in spite of not looking outside the car. As the time interval becomes infinitesimally small, the summing up corresponds to integration. Thus, the integral of the velocity function (the derivative of position) computes how far the car has traveled (the net change in position).
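A short numerical sketch of this idea (illustrative only; the velocity function, time step, and variable names below are assumptions, not taken from the article) sums speed times a one-second interval and compares the result with the exact net change in position:

    import math

    def velocity(t):
        # speed in km/h at time t (in hours); an arbitrary illustrative choice
        return 60 + 20 * math.sin(t)

    def position(t):
        # an antiderivative of velocity, giving the exact position in km
        return 60 * t - 20 * math.cos(t)

    t_start, t_end = 0.0, 2.0      # a two-hour trip
    dt = 1.0 / 3600.0              # one second, expressed in hours

    distance = 0.0
    t = t_start
    while t < t_end:
        distance += velocity(t) * dt   # each second contributes speed * dt
        t += dt

    exact = position(t_end) - position(t_start)
    print(f"summed steps: {distance:.4f} km, exact change: {exact:.4f} km")

The two printed numbers agree to within a small error that shrinks as dt is made smaller, which is the content of the second fundamental theorem.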
The first fundamental theorem says that the value of any function is the rate of change (the derivative) of its integral from a fixed starting point up to any chosen end point. Continuing the above example using a velocity as the function, you can integrate it from the starting time up to any given time to obtain a distance function whose derivative is that velocity. (To obtain your highway-marker position, you would need to add your starting position to this integral and to take into account whether your travel was in the direction of increasing or decreasing mile markers.)
Formal statements
There are two parts to the theorem. The first part deals with the derivative of an antiderivative, while the second part deals with the relationship between antiderivatives and definite integrals.
First part
This part is sometimes referred to as the first fundamental theorem of calculus.
Let be a continuous real-valued function defined on a closed interval . Let be the function defined, for all in , by
Then is uniformly continuous on and differentiable on the open interval , and
for all in so is an antiderivative of .
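A standard formulation in conventional notation (the letters f, F, a, b, and x are assumed here, since the article's own symbols were not preserved) is
F(x) = \int_a^x f(t)\,dt \quad \text{for } x \in [a,b], \qquad F'(x) = f(x) \quad \text{for all } x \in (a,b).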
Corollary
The fundamental theorem is often employed to compute the definite integral of a function for which an antiderivative is known. Specifically, if is a real-valued continuous function on and is an antiderivative of in , then
The corollary assumes continuity on the whole interval. This result is strengthened slightly in the following part of the theorem.
Second part
This part is sometimes referred to as the second fundamental theorem of calculus or the Newton–Leibniz theorem.
Let be a real-valued function on a closed interval and a continuous function on which is an antiderivative of in :
If is Riemann integrable on then
The second part is somewhat stronger than the corollary because it does not assume that is continuous.
When an antiderivative of exists, then there are infinitely many antiderivatives for , obtained by adding an arbitrary constant to . Also, by the first part of the theorem, antiderivatives of always exist when is continuous.
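In the same assumed notation, the corollary and the second part state that for any antiderivative F of f on [a, b],
\int_a^b f(x)\,dx = F(b) - F(a).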
Proof of the first part
For a given function , define the function as
For any two numbers and in , we have
the latter equality resulting from the basic properties of integrals and the additivity of areas.
According to the mean value theorem for integration, there exists a real number such that
It follows that
and thus that
Taking the limit as and keeping in mind that one gets
that is,
according to the definition of the derivative, the continuity of , and the squeeze theorem.
Proof of the corollary
Suppose is an antiderivative of , with continuous on . Let
By the first part of the theorem, we know is also an antiderivative of . Since the mean value theorem implies that is a constant function, that is, there is a number such that for all in . Letting , we have
which means . In other words, , and so
Proof of the second part
This is a limit proof by Riemann sums.
To begin, we recall the mean value theorem. Stated briefly, if is continuous on the closed interval and differentiable on the open interval , then there exists some in such that
Let be (Riemann) integrable on the interval , and let admit an antiderivative on such that is continuous on . Begin with the quantity . Let there be numbers such that
It follows that
Now, we add each along with its additive inverse, so that the resulting quantity is equal:
The above quantity can be written as the following sum:
The function is differentiable on the interval and continuous on the closed interval ; therefore, it is also differentiable on each interval and continuous on each interval . According to the mean value theorem (above), for each there exists a in such that
Substituting the above into (), we get
The assumption implies Also, can be expressed as of partition .
We are describing the area of a rectangle, as the width times the height, and we are adding the areas together. Each rectangle, by virtue of the mean value theorem, describes an approximation of the curve section it is drawn over. The widths need not be the same for every rectangle; in other words, the rectangles may differ in width. What we have to do is approximate the curve with rectangles. Now, as the partitions become finer and their number increases, so that more rectangles cover the space, we get closer and closer to the actual area under the curve.
By taking the limit of the expression as the norm of the partitions approaches zero, we arrive at the Riemann integral. We know that this limit exists because was assumed to be integrable. That is, we take the limit as the largest of the partitions approaches zero in size, so that all other partitions are smaller and the number of partitions approaches infinity.
So, we take the limit on both sides of (). This gives us
Neither nor is dependent on , so the limit on the left side remains .
The expression on the right side of the equation defines the integral over from to . Therefore, we obtain
which completes the proof.
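The convergence used in this proof can be checked numerically. The sketch below (with an arbitrary illustrative integrand, not one taken from the article) shows a Riemann sum of f over [a, b] approaching F(b) − F(a) as the partition becomes finer:

    def f(x):
        return 3 * x ** 2      # the integrand; an arbitrary illustrative choice

    def F(x):
        return x ** 3          # an antiderivative of f, so F' = f

    a, b = 0.0, 2.0
    for n in (10, 100, 1000, 10000):
        dx = (b - a) / n
        # midpoint Riemann sum over n equal subintervals
        riemann = sum(f(a + (i + 0.5) * dx) * dx for i in range(n))
        print(n, riemann, F(b) - F(a))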
Relationship between the parts
As discussed above, a slightly weaker version of the second part follows from the first part.
Similarly, it almost looks like the first part of the theorem follows directly from the second. That is, suppose is an antiderivative of . Then by the second theorem, . Now, suppose . Then has the same derivative as , and therefore . This argument only works, however, if we already know that has an antiderivative, and the only way we know that all continuous functions have antiderivatives is by the first part of the Fundamental Theorem.
For example, if , then has an antiderivative, namely
and there is no simpler expression for this function. It is therefore important not to interpret the second part of the theorem as the definition of the integral. Indeed, there are many functions that are integrable but lack elementary antiderivatives, and discontinuous functions can be integrable but lack any antiderivatives at all. Conversely, many functions that have antiderivatives are not Riemann integrable (see Volterra's function).
Examples
Computing a particular integral
Suppose the following is to be calculated:
Here, and we can use as the antiderivative. Therefore:
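The specific integrand of this example was not preserved in the text, so as a stand-in illustration (an assumption, not the article's own example), the same computation for f(x) = x^2 on [0, 1], with antiderivative F(x) = x^3/3, reads
\int_0^1 x^2\,dx = F(1) - F(0) = \frac{1}{3} - 0 = \frac{1}{3}.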
Using the first part
Suppose
is to be calculated. Using the first part of the theorem with gives
This can also be checked using the second part of the theorem. Specifically, is an antiderivative of , so
An integral where the corollary is insufficient
Suppose
Then is not continuous at zero. Moreover, this is not just a matter of how is defined at zero, since the limit as of does not exist. Therefore, the corollary cannot be used to compute
But consider the function
Notice that is continuous on (including at zero by the squeeze theorem), and is differentiable on with Therefore, part two of the theorem applies, and
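The specific functions of this example were not preserved in the text; the standard example fitting this description (offered here as an assumption) takes
F(x) = x^2 \sin\tfrac{1}{x} \ (x \neq 0), \quad F(0) = 0, \qquad f(x) = F'(x) = 2x \sin\tfrac{1}{x} - \cos\tfrac{1}{x} \ (x \neq 0), \quad f(0) = 0.
Here f has no limit at zero (because of the \cos\tfrac{1}{x} term), yet F is differentiable everywhere with F' = f, so part two gives \int_0^1 f(x)\,dx = F(1) - F(0) = \sin 1.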
Theoretical example
The theorem can be used to prove that
Since,
the result follows from,
Generalizations
The function does not have to be continuous over the whole interval. Part I of the theorem then says: if is any Lebesgue integrable function on and is a number in such that is continuous at , then
is differentiable for with . We can relax the conditions on still further and suppose that it is merely locally integrable. In that case, we can conclude that the function is differentiable almost everywhere and almost everywhere. On the real line this statement is equivalent to Lebesgue's differentiation theorem. These results remain true for the Henstock–Kurzweil integral, which allows a larger class of integrable functions.
In higher dimensions Lebesgue's differentiation theorem generalizes the Fundamental theorem of calculus by stating that for almost every , the average value of a function over a ball of radius centered at tends to as tends to 0.
Part II of the theorem is true for any Lebesgue integrable function , which has an antiderivative (not all integrable functions do, though). In other words, if a real function on admits a derivative at every point of and if this derivative is Lebesgue integrable on , then
This result may fail for continuous functions that admit a derivative at almost every point , as the example of the Cantor function shows. However, if is absolutely continuous, it admits a derivative at almost every point , and moreover is integrable, with equal to the integral of on . Conversely, if is any integrable function, then as given in the first formula will be absolutely continuous with almost everywhere.
The conditions of this theorem may again be relaxed by considering the integrals involved as Henstock–Kurzweil integrals. Specifically, if a continuous function admits a derivative at all but countably many points, then is Henstock–Kurzweil integrable and is equal to the integral of on . The difference here is that the integrability of does not need to be assumed.
The version of Taylor's theorem that expresses the error term as an integral can be seen as a generalization of the fundamental theorem.
There is a version of the theorem for complex functions: suppose is an open set in and is a function that has a holomorphic antiderivative on . Then for every curve , the curve integral can be computed as
The fundamental theorem can be generalized to curve and surface integrals in higher dimensions and on manifolds. One such generalization offered by the calculus of moving surfaces is the time evolution of integrals. The most familiar extensions of the fundamental theorem of calculus in higher dimensions are the divergence theorem and the gradient theorem.
One of the most powerful generalizations in this direction is the generalized Stokes theorem (sometimes known as the fundamental theorem of multivariable calculus): Let be an oriented piecewise smooth manifold of dimension and let be a smooth compactly supported -form on . If denotes the boundary of given its induced orientation, then
Here is the exterior derivative, which is defined using the manifold structure only.
The theorem is often used in situations where is an embedded oriented submanifold of some bigger manifold (e.g. ) on which the form is defined.
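In the usual notation (symbols assumed here), the generalized Stokes theorem reads
\int_\Omega d\omega = \int_{\partial\Omega} \omega,
where \Omega is the oriented piecewise smooth manifold of dimension n, \partial\Omega its boundary with the induced orientation, \omega a smooth compactly supported (n-1)-form, and d the exterior derivative.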
The fundamental theorem of calculus allows us to pose a definite integral as a first-order ordinary differential equation.
can be posed as
with as the value of the integral.
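In assumed notation, the definite integral \int_a^b f(x)\,dx can be posed as the initial value problem
y'(x) = f(x), \qquad y(a) = 0,
whose solution evaluated at the right endpoint, y(b), is the value of the integral.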
| Mathematics | Calculus and analysis | null |
1215420 | https://en.wikipedia.org/wiki/Trajan%27s%20Bridge | Trajan's Bridge | Trajan's Bridge, also called Bridge of Apollodorus over the Danube, was a Roman segmental arch bridge, the first bridge to be built over the lower Danube and considered one of the greatest achievements in Roman architecture. Though it was only functional for 165 years, it is often considered to have been the longest arch bridge in both total span and length for more than 1,000 years.
The bridge was completed in 105 AD and designed by Emperor Trajan's architect Apollodorus of Damascus before the Second Dacian War to allow Roman troops to cross the river. Fragmentary ruins of the bridge's piers are still in existence.
The site
The bridge was situated east of the Iron Gates, near the present-day cities of Drobeta-Turnu Severin in Romania and Kladovo in Serbia. Its construction was ordered by the Emperor Trajan as a supply route for the Roman legions fighting in Dacia.
Construction of the bridge was part of a wider project, which included the digging of side canals to bypass whitewater rapids and make the Danube safer for navigation, an effective river fleet, a string of defense posts, and the development of an intelligence service on the border.
The remains of the embankment which protected the area during the construction of the canal (in a loop to the south of the Danube) show the magnitude of the works. The long canal bypassed the problematic section of the river in an arch-like style. Former canals eventually filled with sand, and empty shells are regularly found in the ground.
All these works, especially the bridge, served the purpose of preparing for the Roman invasion of Dacia, which ended with Roman victory in 106 AD. The effect of finally defeating the Dacians and acquiring their gold mines was so great that Roman games celebrating the conquest lasted for 123 days, with 10,000 gladiators engaging in fights and 11,000 wild animals being killed during that period.
The bridge was long (the Danube is now wide in that area), wide, and high, measured from the surface of the river. At each end was a Roman fort so that crossing the bridge was only possible through the camps.
On the south bank, at the modern village of Kostol near Kladovo, the Pontes fort was built in 103, concurrently with the bridge, occupying several hectares. Remnants of the long castrum with thick ramparts are still visible today. A vicus (civilian settlement) grew up around it later. A bronze head of Emperor Trajan has been discovered in Pontes, part of a statue which was erected at the bridge entrance and is today kept in the National Museum in Belgrade.
On the north bank is the Drobeta fort. It also had a bronze statue of Trajan.
Design and construction
Apollodorus used wooden arches, each spanning , set on twenty masonry pillars made of bricks, mortar, and pozzolana cement. It was built unusually quickly (between 103 and 105), employing the construction of a wooden caisson for each pier.
Apollodorus applied the technique of river flow relocation, using the principles set by Thales of Miletus some six centuries beforehand. Engineers waited for a low water level to dig a canal, west of the modern downtown of Kladovo. The water was redirected downstream from the construction site, through the lowland of , to the location of the modern village of Mala Vrbica. Wooden pillars were driven into the river bed in a rectangular layout, which served as the foundation for the supporting piers, which were coated with clay. The hollow piers were filled with stones held together by mortar, while from the outside they were built around with Roman bricks. The bricks can still be found around the village of Kostol, retaining the same physical properties that they had 2 millennia ago. The piers were tall, wide and apart. It is considered today that the bridge construction was assembled on the land and then installed on the pillars. A mitigating circumstance was that the year the relocating canals were dug was very dry and the water level was quite low. The river bed was almost completely drained when the foundation of the pillars began. There were 20 pillars in total in an interval of . Oak wood was used and the bridge was high enough to allow ship transport on the Danube.
The bricks also have a historical value, as the members of the Roman legions and cohorts which participated in the construction of the bridge carved the names of their units into the bricks. Thus, it is known that work was done by the legions of IV Flavia Felix, VII Claudia, V Macedonica and XIII Gemina and the cohorts of I Cretum, II Hispanorum, III Brittonum and I Antiochensium.
Tabula Traiana
A Roman memorial plaque ("Tabula Traiana"), 4 metres wide and 1.75 metres high, commemorating the completion of Trajan's military road is located on the Serbian side facing Romania near Ogradina, 29 km west of the bridge. In 1972, when the Iron Gate I Hydroelectric Power Station was built (causing the water level to rise by about 35 m), the plaque was moved from its original location, and lifted to the present place. It reads:
IMP. CAESAR. DIVI. NERVAE. FNERVA TRAIANVS. AVG. GERMPONTIF MAXIMUS TRIB POT IIII PATER PATRIAE COS IIIMONTIBVS EXCISI(s) ANCO(ni)BVSSVBLAT(i)S VIA(m) F(ecit)
The text was interpreted by Otto Benndorf to mean:
Emperor Caesar son of the divine Nerva, Nerva Trajan, the Augustus, Germanicus, Pontifex Maximus, invested for the fourth time as Tribune, Father of the Fatherland, Consul for the third time, excavating mountain rocks and using wood beams has made this road.
The Tabula Traiana was declared a Monument of Culture of Exceptional Importance in 1979, and is protected by the Republic of Serbia.
Relocation
When the plan for the future hydro plant and its reservoir was made in 1965, it was clear that numerous settlements along the banks would be flooded in both Yugoslavia and Romania, and that historical remains, including the plaque, would also be affected. Serbian Academy of Sciences and Arts urged for the plaque to be preserved and the government accepted the motion. The enterprise entrusted with the task of relocation was the mining company "Venčac" as its experts previously participated in the relocation of the Abu Simbel temple in Egypt.
First idea was to leave the plaque at its position and to build the caisson around it but the calculations showed this wouldn't work. The idea of cutting the plaque in several smaller pieces in order to be moved was abandoned due to the quality of the rock of which it was made. The proposition of lifting it with the floating elevator "Veli Jože" was discarded, too. The motion of cutting the table in one piece and placing it somewhere else was rejected as the plaque would lose its authenticity.
In the end it was decided to dig in a new bed into the rock above the plaque's original location. The plaque was then cut in one piece with the parts of the surrounding rock and road. After being cut with the cable saws, the 350 tons heavy chunk was lifted to the new bed. Works began in September 1967 and were finished in 1969.
Destruction and remains
The wooden superstructure of the bridge was dismantled by Trajan's successor, Hadrian, presumably in order to protect the empire from barbarian invasions from the north. The superstructure was destroyed by fire.
The remains of the bridge reappeared in 1858 when the level of the Danube hit a record low due to the extensive drought. The twenty pillars were still visible.
In 1906, the Commission of the Danube decided to destroy two of the pillars that were obstructing navigation.
In 1932, there were 16 pillars remaining underwater, but in 1982 only 12 were mapped by archaeologists; the other four had probably been swept away by water. Only the entrance pillars are now visible on either bank of the Danube, one in Romania and one in Serbia.
In 1979, Trajan's Bridge was added to the Monument of Culture of Exceptional Importance, and in 1983 on Archaeological Sites of Exceptional Importance list, and by that it is protected by the Republic of Serbia.
| Technology | Bridges | null |
1216060 | https://en.wikipedia.org/wiki/Nitride | Nitride | In chemistry, a nitride is a chemical compound of nitrogen. Nitrides can be inorganic or organic, ionic or covalent. The nitride anion, N3−, is very elusive, but compounds of nitride are numerous, although rarely naturally occurring. Some nitrides have found applications, such as wear-resistant coatings (e.g., titanium nitride, TiN), hard ceramic materials (e.g., silicon nitride, Si3N4), and semiconductors (e.g., gallium nitride, GaN). The development of GaN-based light-emitting diodes was recognized by the 2014 Nobel Prize in Physics. Metal nitrido complexes are also common.
Synthesis of inorganic metal nitrides is challenging because nitrogen gas (N2) is not very reactive at low temperatures, but it becomes more reactive at higher temperatures. Therefore, a balance must be achieved between the low reactivity of nitrogen gas at low temperatures and the entropy-driven formation of N2 at high temperatures. However, synthetic methods for nitrides are growing more sophisticated and the materials are of increasing technological relevance.
Uses of nitrides
Like carbides, nitrides are often refractory materials owing to their high lattice energy, which reflects the strong bonding of "N3−" to metal cation(s). Thus, cubic boron nitride, titanium nitride, and silicon nitride are used as cutting materials and hard coatings. Hexagonal boron nitride, which adopts a layered structure, is a useful high-temperature lubricant akin to molybdenum disulfide. Nitride compounds often have large band gaps, thus nitrides are usually insulators or wide-bandgap semiconductors; examples include boron nitride and silicon nitride. The wide-band gap material gallium nitride is prized for emitting blue light in LEDs. Like some oxides, nitrides can absorb hydrogen and have been discussed in the context of hydrogen storage, e.g. lithium nitride.
Examples
Classification of such a varied group of compounds is somewhat arbitrary. Compounds where nitrogen is not assigned −3 oxidation state are not included, such as nitrogen trichloride where the oxidation state is +3; nor are ammonia and its many organic derivatives.
Nitrides of the s-block elements
Only one alkali metal nitride is stable, the purple-reddish lithium nitride (Li3N), which forms when lithium burns in an atmosphere of N2. Sodium nitride and potassium nitride have been generated, but remain laboratory curiosities. The nitrides of the alkaline earth metals, which have the formula M3N2, are however numerous. Examples include beryllium nitride (Be3N2), magnesium nitride (Mg3N2), calcium nitride (Ca3N2), and strontium nitride (Sr3N2). The nitrides of electropositive metals (including Li, Zn, and the alkaline earth metals) readily hydrolyze upon contact with water, including the moisture in the air:
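As a representative example (the article's own equation is not reproduced in the text), lithium nitride hydrolyzes to the hydroxide and ammonia:
Li3N + 3 H2O → 3 LiOH + NH3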
Nitrides of the p-block elements
Boron nitride exists as several forms (polymorphs). Nitrides of silicon and phosphorus are also known, but only the former is commercially important. The nitrides of aluminium, gallium, and indium adopt the hexagonal wurtzite structure in which each atom occupies tetrahedral sites. For example, in aluminium nitride, each aluminium atom has four neighboring nitrogen atoms at the corners of a tetrahedron and similarly each nitrogen atom has four neighboring aluminium atoms at the corners of a tetrahedron. This structure is like hexagonal diamond (lonsdaleite) where every carbon atom occupies a tetrahedral site (however wurtzite differs from sphalerite and diamond in the relative orientation of tetrahedra). Thallium(I) nitride () is known, but thallium(III) nitride (TlN) is not.
Transition metal nitrides
Most metal-rich transition metal nitrides adopt a relatively ordered face-centered cubic or hexagonal close-packed crystal structure, with octahedral coordination. Sometimes these materials are called "interstitial nitrides". They are essential for industrial metallurgy, because they are typically much harder and less ductile than their parent metal, and resist air-oxidation. For the group 3 metals, ScN and YN are both known. Group 4, 5, and 6 transition metals (the titanium, vanadium and chromium groups) all form chemically stable, refractory nitrides with high melting point. Thin films of titanium nitride, zirconium nitride, and tantalum nitride protect many industrial surfaces.
Nitrides of the group 7 and 8 transition metals tend to be nitrogen-poor and decompose readily at elevated temperatures. For example, iron nitride decomposes at 200 °C. Platinum nitride and osmium nitride may contain N2 units, and as such should not be called nitrides.
Nitrides of the heavier members of groups 11 and 12 are less stable than copper nitride (Cu3N) and zinc nitride (Zn3N2): dry silver nitride (Ag3N) is a contact explosive which may detonate at the slightest touch, even a falling water droplet.
Nitrides of the lanthanides and actinides
Nitride containing species of the lanthanides and actinides are of scientific interest as they can provide a useful handle for determining covalency of bonding. Nuclear magnetic resonance (NMR) spectroscopy along with quantum chemical analysis has often been used to determine the degree to which metal nitride bonds are ionic or covalent in character. One example, a uranium nitride, has the highest known nitrogen-15 chemical shift.
Molecular nitrides
Many metals form molecular nitrido complexes, as discussed in the specialized article. The main group elements also form some molecular nitrides. Cyanogen ((CN)2) and tetrasulfur tetranitride (S4N4) are rare examples of molecular binary nitrides (containing only one element aside from nitrogen). They dissolve in nonpolar solvents. Both undergo polymerization. S4N4 is also unstable with respect to the elements, but less so than the isostructural Se4N4. Heating S4N4 gives a polymer, and a variety of molecular sulfur nitride anions and cations are also known.
Related to, but distinct from, nitride are the pernitride diatomic anion and the azide triatomic anion (N3−).
| Physical sciences | Nitride salts | Chemistry |
1216077 | https://en.wikipedia.org/wiki/Electrophilic%20addition | Electrophilic addition | In organic chemistry, an electrophilic addition (AE) reaction is an addition reaction where a chemical compound containing a double or triple bond has a π bond broken, with the formation of two new σ bonds.
The driving force for this reaction is the formation of an electrophile X+ that forms a covalent bond with an electron-rich, unsaturated C=C bond. The positive charge on X is transferred to the carbon-carbon bond, forming a carbocation during the formation of the C-X bond.
In the second step of an electrophilic addition, the positively charged intermediate combines with an electron-rich species to form the second covalent bond. The second step is the same nucleophilic attack process found in an SN1 reaction. The exact nature of the electrophile and the nature of the positively charged intermediate are not always clear and depend on the reactants and reaction conditions.
In all asymmetric addition reactions to carbon, regioselectivity is important and often determined by Markovnikov's rule. Organoborane compounds give anti-Markovnikov additions. Electrophilic attack to an aromatic system results in electrophilic aromatic substitution rather than an addition reaction.
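As a textbook illustration (not an example drawn from this article), the Markovnikov-selective addition of hydrogen bromide to propene proceeds through the more stable secondary carbocation, placing the bromine on the more substituted carbon:
CH3–CH=CH2 + H+ → CH3–CH+–CH3 (secondary carbocation)
CH3–CH+–CH3 + Br− → CH3–CHBr–CH3 (2-bromopropane, the major product)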
Typical electrophilic additions
Typical electrophilic additions to alkenes with reagents are:
Halogen addition reactions: X2
Hydrohalogenations: HX
Hydration reactions: H2O
Hydrogenations: H2
Oxymercuration reactions: mercuric acetate, water
Hydroboration-oxidation reactions: diborane
the Prins reaction: formaldehyde, water
| Physical sciences | Organic reactions | Chemistry |
1218564 | https://en.wikipedia.org/wiki/Paratransit | Paratransit | Paratransit (the term used in North America) or intermediate public transport (also known by other names such as community transport (UK)), is a type of transportation service that supplements fixed-route mass transit by providing individualized rides without fixed routes or timetables. Paratransit services may vary considerably on the degree of flexibility they provide their customers. At their simplest they may consist of a taxi or small bus that will run along a more or less defined route and then stop to pick up or discharge passengers on request. At the other end of the spectrum—fully demand-responsive transport—the most flexible paratransit systems offer on-demand call-up door-to-door service from any origin to any destination in a service area. In addition to public transit agencies, paratransit services may be operated by community groups or not-for-profit organizations, and for-profit private companies or operators.
The concept of intermediate public transport (IPT) or paratransit, exhibits considerable variation between developed and developing nations. In developed countries, it is typically a flexible, demand-responsive form of public transportation designed to provide point-to-point service. These systems are generally well-structured and organized. On the other hand, in developing countries, IPT often operates as an informal, cost-effective alternative to formal transportation modes. It tends to be unorganized and subject to minimal government regulation, serving as a prevalent form of spontaneous public transport that facilitates quick and convenient travel.
The importance of IPT may extend beyond mobility, as it can also contribute to the economic well-being of those who operate these services. In some cases, drivers of vehicles such as tempos and autorickshaws can earn a substantial daily income, which supports their livelihoods.
Typically, minibuses are used to provide paratransit service in the United States. Most paratransit vehicles are equipped with wheelchair lifts or ramps to facilitate access.
In the United States, private transportation companies often provide paratransit service in cities and metropolitan areas under contract to local public transportation agencies.
Terminology
The use of "paratransit" ("para transit", "para-transit") has evolved and taken on two somewhat separate broad sets of meaning and application in North America; the term is rarely used in the rest of the world.
The more general meaning includes any transit service operating alongside conventional fixed-route services, including airport limousines and carpools. Since the early 1980s, particularly in North America, the term began to be used increasingly to describe the second meaning: special transport services for people with disabilities. In this respect, paratransit has become a subsector and business in its own right.
Projects in the broader sense were documented by the Urban Institute in the 1974 book Para-transit: Neglected options for urban mobility, followed a year later by the first international overview, Paratransit: Survey of International Experience and Prospects. Robert Cervero's 1997 book, Paratransit in America: Redefining Mass Transportation, embraced this wider definition of paratransit, arguing that America's mass transit sector should enlarge to include micro-vehicles, minibuses, and shared-taxi services found in many developing cities. Paratransit, as an alternative mode of flexible passenger transportation that does not follow fixed routes or schedules, are common and often offer the only mechanized mobility options for the poor in many parts of the developing world.
Diversion to taxis and ride-hailing services
Some paratransit systems have begun subsidizing private taxi or ride-hailing trips as an alternative to the government-run or government-contracted system. For example, in 2010, Solano County, California dissolved Solano Paratransit and allowed paratransit-eligible passengers to buy $100 worth of taxi scrip for $15. This eliminated the need for passengers to book a week in advance, and reduced the cost to the county from $81 to about $30.
In 2016, the Massachusetts Bay Transportation Authority began a pilot program which has subsidized paratransit passengers on Uber, Lyft, and Curb, up to a cap of $42 per ride. This retained the ability to book by phone, lowered the fare for riders, eliminated the need to book the trip a day in advance, eliminated shared trips, reduced in-transit time, and reduced the pickup wait time from 30 minutes to as low as 5 minutes in the urban core. With the subsidy cap initially set at $13, the MBTA reduced the average cost of a paratransit trip from $35 to $9. Pilot participants on average substantially increased the number of trips they took, but still at a lower overall cost to the agency. Availability of wheelchair-accessible vehicles remained an occasional problem, but these were only needed by about 20% of paratransit riders.
In the United States
Rehabilitation Act of 1973
Before passage of the Americans with Disabilities Act of 1990 (ADA), paratransit was provided by not-for-profit human service agencies and public transit agencies in response to the requirements in Section 504 of the Rehabilitation Act of 1973. Section 504 prohibited the exclusion of disabled people from "any program or activity receiving federal financial assistance". In Title 49 Part 37 (49 CFR 37) of the Code of Federal Regulations, the Federal Transit Administration defined requirements for making buses accessible or providing complementary paratransit services within public transit service areas.
Most transit agencies did not see fixed-route accessibility as desirable and opted for a flexible system of small paratransit vehicles operating parallel to a system of larger, fixed-route buses. The expectation was that paratransit services would not be heavily used, making a flexible system of small vehicles a less expensive route to accessibility than options with larger, fixed-route vehicles. This, however, did not turn out to be the case: paratransit services were often filled to capacity. Because disabled people who could have used fixed-route vehicles also relied on paratransit, individuals who genuinely needed its door-to-door service were in some cases unable to use it.
Americans with Disabilities Act of 1990
With the passage of the ADA, Section 504 of the Rehabilitation Act was extended to include all activities of state and local government. Its provisions were not limited to programs receiving federal funds and applied to all public transit services, regardless of how the services were funded or managed. Title II of the ADA also more clearly defined a disabled person's right to equal participation in transit programs, and the provider's responsibility to make that participation possible.
In revisions to Title 49 Part 37, the Federal Transit Administration defined the combined requirements of the ADA and the Rehabilitation Act for transit providers. These requirements included "complementary" paratransit to destinations within 3/4 mile of all fixed routes (49 CFR 37.131) and submission of a plan for complying with complementary paratransit service regulations (49 CFR 37.135). Paratransit service is an unfunded mandate.
Under the ADA, complementary paratransit service is required for passengers who are 1) Unable to navigate the public bus system, 2) unable to get to a point from which they could access the public bus system, or 3) have a temporary need for these services because of injury or some type of limited duration cause of disability (49 CFR 37.123). Title 49 Part 37 details the eligibility rules along with requirements governing how the service must be provided and managed. In the United States, paratransit service is now highly regulated and closely monitored for compliance with standards set by the Federal Transit Administration (FTA).
As the ADA came into effect in 1992 (49 CFR 37.135), the FTA required transit systems in the United States to plan and begin implementing ADA compliant services, with full implementation by 1997 (49 CFR 37.139). During this period, paratransit demand and services rapidly expanded. This growth led to many new approaches to manage and provide these services. Computerized reservation, scheduling and dispatching for paratransit have also evolved substantially and are now arguably among the most sophisticated management systems available in the world of rubber tire transit (land-based non-rail public transit).
Since the passage of the ADA, paratransit service has grown rapidly as a mode of public transit in the United States. Continued growth can be expected due to the aging of baby boomers and the needs of disabled Iraq War veterans. The growing number of people requiring paratransit has increased the cost of maintaining these services. In response to these rising costs, the paratransit industry has tried to encourage individuals to move from a reliance on paratransit vehicles to fixed-route vehicles. Because paratransit had been promoted as the main method of transportation for disabled individuals prior to the passage of the ADA, the industry is finding it hard to get individuals to switch to fixed-route transportation.
Statistics
Beginning in 2004, the bus, rail and motor coach trade magazine Metro Magazine began conducting annual surveys of public and private paratransit providers.
The US Government Accountability Office (GAO) released a report in November 2012 for the Federal Transit Administration which "examined: (1) the extent of compliance with ADA paratransit requirements, (2) changes in ADA paratransit demand and costs since 2007, and (3) actions transit agencies are taking to help address changes in the demand for and costs of ADA paratransit service." The report found that the "average number of annual ADA paratransit trips provided by a transit agency increased 7 percent from 2007 to 2010" and that the average cost of providing a paratransit trip is "an estimated three and a half times more expensive than the average cost of $8.15 to provide a fixed-route trip."
The Maryland Transit Administration reported paratransit ridership increases of 15% in fiscal 2012, with double-digit increases expected in fiscal 2013 and 2014. The cost of providing paratransit service is considerably higher than traditional fixed-route bus service, with Maryland's Mobility service reporting per-passenger costs of over $40 per trip in 2010.
Paratransit ridership growth of more than 10% per year was reported in the District of Columbia metropolitan area for 2006 through 2009.
Washington Metropolitan Area Transit Authority's MetroAccess service in Washington, D.C. conducted a peer review of large urban paratransit systems in the US in 2009.
In response to increasing ridership and costs of providing paratransit service, WMATA made two significant changes beginning in 2010: the paratransit service area was reduced from jurisdictional boundaries to the ADA requirement of within a 3/4 mile corridor of fixed-route services; and, fares were linked to WMATA's fixed route services and charged to the ADA allowable maximum of two times the fastest equivalent bus or rail fare.
These changes helped result in the first-ever reduction in the number of year-over-year trips between 2011 and 2012.
Annually, the Canadian Urban Transit Association publishes a fact book providing statistics for all of the Ontario specialized public transit services; as of 2015 there were 79 in operation.
Technology
The complicated nature of providing paratransit service in accordance with ADA guidelines led to the development of sophisticated software for the industry.
Intelligent transportation systems technologies, primarily GPS, mobile data terminals, digital mobile radios, and cell phones, together with scheduling, dispatching, and call reservation software, are now increasingly in use in North America and Europe. Interactive voice response systems and web-based initiatives are the next technology innovation anticipated for paratransit services.
Advanced analytics is another field being applied to paratransit operations. Some companies are beginning to integrate cloud computing models to find operational efficiencies and cost savings for smaller paratransit service providers.
Canada
There is no legislation providing details on paratransit standards, but the Canadian Urban Transit Association has provided voluntary guidelines for member transit agencies to use to determine paratransit needs and standards. Various operators, including the TTC, BC Transit, OC Transpo and TransLink, offer the service, and in the province of British Columbia paratransit is referred to as HandyDART by both major transit operators.
Developing countries
Paratransit systems in many developing world cities are operated by individuals and small businesses. The fragmented, intensely competitive nature of the industry makes government regulation and control much harder than for traditional public transport. Government authorities have cited problems with unsafe vehicles and drivers as justifying efforts to regulate and "formalize" paratransit operations. However, these efforts have been limited by ignorance on the part of regulatory authorities and mistrust between authorities and operators.
Sub-Saharan Africa
In sub-Saharan Africa, this form of transport (called "transport artisanal" in French) serves more than 70% of commuters; it evolved organically and replaced formal transit after independence. Paratransit can take many forms, from 16-seater minibus taxis (see share taxi) to motorbikes (boda boda).
Other areas
In the United Kingdom, services are called community transport and are provided locally. The Community Transport Association is a central organization recognized by the government which "promotes excellence through training, publications, advice, events and project support on voluntary, community and accessible transport."
In Zagreb, Croatia, the municipal mass transit operator ZET operates a fleet of minibuses equipped with several seats and lift for wheelchairs for on-demand transport of disabled persons.
In Hong Kong, Rehabus service is provided by the Hong Kong Society for Rehabilitation.
The New Zealand Transport Agency provides a comprehensive list of options in the country, including Total Mobility (TM) in Auckland.
In Australia, Disability Standards for Accessible Public Transport under subsection 31 (1) of the Disability Discrimination Act of 1992 mandated that as of 2002 "all new public transport conveyances, premises and infrastructure must comply with the transport standards. Facilities already in operation at that time have between five and thirty years to comply with the standards."
In some parts of the world, transportation services for the elderly and disabled are obtainable through share taxi options, often without formal government involvement.
Specific services
See Demand-responsive transport for examples
| Technology | Specific-purpose transportation | null |
1218763 | https://en.wikipedia.org/wiki/Sivapithecus | Sivapithecus | Sivapithecus (syn: Ramapithecus) is a genus of extinct apes. Fossil remains of animals now assigned to this genus, dated to about 12.2 million years ago in the Miocene, have been found since the 19th century in the Sivalik Hills of the Indian subcontinent as well as in Kutch. Any one of the species in this genus may have been the ancestor of the modern orangutans.
Some early discoveries were given the separate names Ramapithecus (Rama's Ape) and Bramapithecus (Brahma's Ape), and were thought to be possible ancestors of humans.
Discovery
The first incomplete specimens of Sivapithecus were found in northern India in the late 19th century.
Another find was made in 1932 in Nepal, on the bank of the Tinau River in Palpa District, in the western part of the country. This find was named "Ramapithecus". The discoverer, G. Edward Lewis, claimed that it was distinct from Sivapithecus, as the jaw was more like a human's than any other fossil ape then known, a claim revived in the 1960s. At that time, it was believed that the ancestors of humans had diverged from other apes 14 million years ago. Biochemical studies upset this view, suggesting that there was an early split between orangutan ancestors and the common ancestors of chimpanzees, gorillas and humans.
Meanwhile, more complete specimens of Ramapithecus were found in 1975 and 1976, which showed that it was less human-like than had been thought. It began to look more and more like Sivapithecus, meaning that the older name must take priority. It is also possible that fossils assigned to Ramapithecus belonged to the female form of Sivapithecus. They were definitely members of the same genus. It is also likely that they were already separate from the common ancestor of chimpanzees, gorillas and humans, which may be represented by the prehistoric great ape Nakalipithecus nakayamai. Siwalik specimens once assigned to the genus Ramapithecus are now considered by most researchers to belong to one or more species of Sivapithecus. Ramapithecus is no longer regarded as a likely ancestor of humans.
In 1982, David Pilbeam published a description of a significant fossil find from Potwar Plateau, Pakistan, formed by a large part of the face and jaw of a Sivapithecus. The partial skull was likely scavenged after death. The specimen (GSP 15000) bore many similarities to the orangutan skull and strengthened the theory (previously suggested by others) that Sivapithecus was closely related to orangutans.
In 2011, a 10.8 million-year-old (Neogene period) upper jawbone of Sivapithecus was found in the Kutch district of Gujarat, India. The find significantly extended the southern range of Sivapithecus in the Indian subcontinent. The species could not be identified.
Description
Sivapithecus was about in body length, similar in size to a modern orangutan. In most respects, it would have resembled a chimpanzee, but its face was closer to that of an orangutan. The shape of its wrists and general body proportions suggest that it spent a significant amount of its time on the ground, as well as in trees. It had large canine teeth, and heavy molars, suggesting a diet of relatively tough food, such as seeds and savannah grasses.
Similarities to orangutans, seen chiefly in jaw and partial skull fossils, include a concave face with large zygomatic arch bones, narrow spacing of the eyes, a smooth nasal floor, and enlarged central incisors. However, Sivapithecus' "dental characteristics and postcranial skeleton do not confirm this phylogenetic position", according to Yaowalak Chaimanee of the Paleontology section of Thailand's Department of Mineral Resources and colleagues, reporting a find in 2003, so its affinities remain unresolved.
Species
Currently three species are generally recognized:
Sivapithecus indicus: fossils date from about 12.5 million to 10.5 million years ago.
Sivapithecus sivalensis: lived from 9.5 million to 8.5 million years ago. It was found at the Pothwar plateau in Pakistan as well as in parts of India. The animal was about the size of a chimpanzee but had the facial morphology of an orangutan; it ate soft fruit (detected in the toothwear pattern) and was probably mainly arboreal.
Sivapithecus parvada: described in 1988, this species is significantly larger and is dated to about 10 million years ago.
| Biology and health sciences | Apes | Animals |
8073009 | https://en.wikipedia.org/wiki/Tonsil | Tonsil | The tonsils are a set of lymphoid organs facing into the aerodigestive tract, which is known as Waldeyer's tonsillar ring and consists of the adenoid tonsil (or pharyngeal tonsil), two tubal tonsils, two palatine tonsils, and the lingual tonsils. These organs play an important role in the immune system.
When used unqualified, the term most commonly refers specifically to the palatine tonsils, which are two lymphoid organs situated at either side of the back of the human throat. The palatine tonsils and the adenoid tonsil are organs consisting of lymphoepithelial tissue located near the oropharynx and nasopharynx (parts of the throat).
Structure
Humans are born with four types of tonsils: the pharyngeal tonsil, two tubal tonsils, two palatine tonsils and the lingual tonsils.
Development
The palatine tonsils tend to reach their largest size in puberty, and they gradually undergo atrophy thereafter. However, they are largest relative to the diameter of the throat in young children. In adults, each palatine tonsil normally measures up to 2.5 cm in length, 2.0 cm in width and 1.2 cm in thickness.
The adenoid grows until the age of 5, starts to shrink at the age of 7 and becomes small in adulthood.
Function
The tonsils are immunocompetent organs that serve as the immune system's first line of defense against ingested or inhaled foreign pathogens, and as such frequently engorge with blood to assist in immune responses to common illnesses such as the common cold. Their surface contains specialized antigen capture cells called microfold cells (M cells) that allow for the uptake of antigens produced by pathogens. These M cells then alert the B cells and T cells in the tonsil that a pathogen is present and an immune response is stimulated. B cells are activated and proliferate in areas called germinal centers in the tonsil. These germinal centers are places where B memory cells are created and secretory antibody (IgA) is produced.
Clinical significance
The palatine tonsils can become enlarged (adenotonsillar hyperplasia) or inflamed (tonsillitis). The most common way to treat tonsillitis is with anti-inflammatory drugs such as ibuprofen, or if bacterial in origin, antibiotics, e.g. amoxicillin and azithromycin. Surgical removal (tonsillectomy) may be advised if the tonsils obstruct the airway or interfere with swallowing, or in patients with severe or recurrent tonsillitis. However, different mechanisms of pathogenesis for these two subtypes of tonsillar hypertrophy have been described, and may have different responses to identical therapeutic efforts. In older patients, asymmetric tonsils (also known as asymmetric tonsil hypertrophy) may be an indicator of virally infected tonsils, or tumors such as lymphoma or squamous cell carcinoma.
A tonsillolith (also known as a "tonsil stone") is material that accumulates on the palatine tonsil. This can reach the size of a blueberry and is white or cream in color. The main substance is mostly calcium, but it has a strong unpleasant odor because of hydrogen sulfide and methyl mercaptan and other chemicals.
Palatine tonsil enlargement can affect speech, making it hypernasal and giving it the sound of velopharyngeal incompetence (when space in the mouth is not fully separated from the nose's air space). Tonsil size may have a more significant impact on upper airway obstruction for obese children than for those of average weight.
As mucosal lymphatic tissue of the aerodigestive tract, the palatine tonsils are viewed in some classifications as belonging to both the gut-associated lymphoid tissue (GALT) and the mucosa-associated lymphoid tissue (MALT). Other viewpoints treat them (and the spleen and thymus) as large lymphatic organs contradistinguished from the smaller tissue loci of GALT and MALT.
| Biology and health sciences | Human anatomy | Health |
20653168 | https://en.wikipedia.org/wiki/Earth%20science | Earth science | Earth science or geoscience includes all fields of natural science related to the planet Earth. This is a branch of science dealing with the physical, chemical, and biological complex constitutions and synergistic linkages of Earth's four spheres: the biosphere, hydrosphere/cryosphere, atmosphere, and geosphere (or lithosphere). Earth science can be considered to be a branch of planetary science but with a much older history.
Geology
Geology is broadly the study of Earth's structure, substance, and processes. Geology is largely the study of the lithosphere, or Earth's surface, including the crust and rocks. It includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. It incorporates aspects of chemistry, physics, and biology as elements of geology interact. Historical geology is the application of geology to interpret Earth history and how it has changed over time.
Geochemistry studies the chemical components and processes of the Earth. Geophysics studies the physical properties of the Earth. Paleontology studies fossilized biological material in the lithosphere. Planetary geology studies geoscience as it pertains to extraterrestrial bodies. Geomorphology studies the origin of landscapes. Structural geology studies the deformation of rocks to produce mountains and lowlands. Resource geology studies how energy resources can be obtained from minerals. Environmental geology studies how pollution and contaminants affect soil and rock. Mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. Petrology is the study of rocks, including the formation and composition of rocks. Petrography is a branch of petrology that studies the typology and classification of rocks.
Earth's interior
Plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the Earth's crust. Beneath the Earth's crust lies the mantle which is heated by the radioactive decay of heavy elements. The mantle is not quite solid and consists of magma which is in a state of semi-perpetual convection. This convection process causes the lithospheric plates to move, albeit slowly. The resulting process is known as plate tectonics. Areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the Earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform (or conservative) boundaries. Earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction.
Plate tectonics might be thought of as the process by which the Earth is resurfaced. As the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. Through subduction, oceanic crust and lithosphere returns to the convecting mantle. Volcanoes result primarily from the melting of subducted crust material. Crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface, giving birth to volcanoes.
Atmospheric science
Atmospheric science initially developed in the late-19th century as a means to forecast the weather through meteorology, the study of weather. Atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. Climatology studies the climate and climate change.
The troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up Earth's atmosphere. 75% of the mass in the atmosphere is located within the troposphere, the lowest layer. In all, the atmosphere is made up of about 78.0% nitrogen, 20.9% oxygen, and 0.92% argon, and small amounts of other gases including CO2 and water vapor. Water vapor and CO2 cause the Earth's atmosphere to catch and hold the Sun's energy through the greenhouse effect. This makes Earth's surface warm enough for liquid water and life. In addition to trapping heat, the atmosphere also protects living organisms by shielding the Earth's surface from cosmic rays. The magnetic field—created by the internal motions of the core—produces the magnetosphere which protects Earth's atmosphere from the solar wind. As the Earth is 4.5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere.
Earth's magnetic field
Hydrology
Hydrology is the study of the hydrosphere and the movement of water on Earth. It emphasizes the study of how humans use and interact with freshwater supplies. Study of water's movement is closely related to geomorphology and other branches of Earth science. Applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. Subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. Oceanography is the study of oceans. Hydrogeology is the study of groundwater. It includes the mapping of groundwater supplies and the analysis of groundwater contaminants. Applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. The earliest exploitation of groundwater resources dates back to 3000 BC, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. Ecohydrology is the study of ecological systems in the hydrosphere. It can be divided into the physical study of aquatic ecosystems and the biological study of aquatic organisms. Ecohydrology includes the effects that organisms and aquatic ecosystems have on one another as well as how these ecosystems are affected by humans. Glaciology is the study of the cryosphere, including glaciers and coverage of the Earth by ice and snow. Concerns of glaciology include access to glacial freshwater, mitigation of glacial hazards, obtaining resources that exist beneath frozen land, and addressing the effects of climate change on the cryosphere.
Ecology
Ecology is the study of the biosphere. This includes the study of nature and of how living things interact with the Earth and one another and the consequences of that. It considers how living things use resources such as oxygen, water, and nutrients from the Earth to sustain themselves. It also considers how humans and other living creatures cause changes to nature.
Physical geography
Physical geography is the study of Earth's systems and how they interact with one another as part of a single self-contained system. It incorporates astronomy, mathematical geography, meteorology, climatology, geology, geomorphology, biology, biogeography, pedology, and soils geography. Physical geography is distinct from human geography, which studies the human populations on Earth, though it does include human effects on the environment.
Methodology
Methodologies vary depending on the nature of the subjects being studied. Studies typically fall into one of three categories: observational, experimental, or theoretical. Earth scientists often conduct sophisticated computer analysis or visit an interesting location to study earth phenomena (e.g. Antarctica or hot spot island chains).
A foundational idea in Earth science is the notion of uniformitarianism, which states that "ancient geologic features are interpreted by understanding active processes that are readily observed." In other words, any geologic processes at work in the present have operated in the same ways throughout geologic time. This enables those who study Earth history to apply knowledge of how the Earth's processes operate in the present to gain insight into how the planet has evolved and changed throughout long history.
Earth's spheres
In Earth science, it is common to conceptualize the Earth's surface as consisting of several distinct layers, often referred to as spheres: the lithosphere, the hydrosphere, the atmosphere, and the biosphere, corresponding to rocks, water, air, and life respectively. This concept of spheres is a useful tool for understanding the Earth's surface and its various processes. Also included by some are the cryosphere (corresponding to ice) as a distinct portion of the hydrosphere and the pedosphere (corresponding to soil) as an active and intermixed sphere.
The following fields of science are generally categorized within the Earth sciences:
Geology describes the rocky parts of the Earth's crust (or lithosphere) and its historic development. Major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology.
Physical geography focuses on geography as an Earth science. Physical geography is the study of Earth's seasons, climate, atmosphere, soil, streams, landforms, and oceans. Physical geography can be divided into several branches or related fields, as follows: geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology.
Geophysics and geodesy investigate the shape of the Earth, its reaction to forces and its magnetic and gravity fields. Geophysicists explore the Earth's core and mantle as well as the tectonic and seismic activity of the lithosphere. Geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. Seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity.
Geochemistry is defined as the study of the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. Geochemists use the tools and principles of chemistry to study the composition, structure, processes, and other physical aspects of the Earth. Major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry.
Soil science covers the outermost layer of the Earth's crust that is subject to soil formation processes (or pedosphere). Major subdivisions in this field of study include edaphology and pedology.
Ecology covers the interactions between organisms and their environment. This field of study differentiates the study of Earth from the study of other planets in the Solar System, Earth being its only planet teeming with life.
Hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of water, and involve all the components of the hydrologic cycle on the Earth and its atmosphere (or hydrosphere). "Sub-disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry."
Glaciology covers the icy parts of the Earth (or cryosphere).
Atmospheric sciences cover the gaseous parts of the Earth (or atmosphere) between the surface and the exosphere (about 1000 km). Major subdisciplines include meteorology, climatology, atmospheric chemistry, and atmospheric physics.
Earth science breakup
Atmosphere
Atmospheric chemistry
Geography
Climatology
Meteorology
Hydrometeorology
Paleoclimatology
Biosphere
Biogeochemistry
Biogeography
Ecology
Landscape ecology
Geoarchaeology
Geomicrobiology
Paleontology
Palynology
Micropaleontology
Hydrosphere
Hydrology
Hydrogeology
Limnology (freshwater science)
Oceanography (marine science)
Chemical oceanography
Physical oceanography
Biological oceanography (marine biology)
Geological oceanography (marine geology)
Paleoceanography
Lithosphere (geosphere)
Geology
Economic geology
Engineering geology
Environmental geology
Forensic geology
Historical geology
Quaternary geology
Planetary geology and planetary geography
Sedimentology
Stratigraphy
Structural geology
Geography
Human geography
Physical geography
Geochemistry
Geomorphology
Geophysics
Geochronology
Geodynamics (see also Tectonics)
Geomagnetism
Gravimetry (also part of Geodesy)
Seismology
Glaciology
Hydrogeology
Mineralogy
Crystallography
Gemology
Petrology
Petrophysics
Speleology
Volcanology
Pedosphere
Geography
Soil science
Edaphology
Pedology
Systems
Earth system science
Environmental science
Geography
Human geography
Physical geography
Gaia hypothesis
Systems ecology
Systems geology
Others
Geography
Cartography
Geoinformatics (GIScience)
Geostatistics
Geodesy and Surveying
Remote Sensing
Hydrography
Nanogeoscience
| Physical sciences | Earth science | null |
20656228 | https://en.wikipedia.org/wiki/Maize | Maize | Maize (Zea mays), also known as corn in North American English, is a tall stout grass that produces cereal grain. It was domesticated by indigenous peoples in southern Mexico about 9,000 years ago from wild teosinte. Native Americans planted it alongside beans and squashes in the Three Sisters polyculture. The leafy stalk of the plant gives rise to male inflorescences or tassels which produce pollen, and female inflorescences called ears. The ears yield grain, known as kernels or seeds. In modern commercial varieties, these are usually yellow or white; other varieties can be of many colors.
Maize relies on humans for its propagation. Since the Columbian exchange, it has become a staple food in many parts of the world, with the total production of maize surpassing that of wheat and rice. Much maize is used for animal feed, whether as grain or as the whole plant, which can either be baled or made into the more palatable silage. Sugar-rich varieties called sweet corn are grown for human consumption, while field corn varieties are used for animal feed, for uses such as cornmeal or masa, corn starch, corn syrup, pressing into corn oil, alcoholic beverages like bourbon whiskey, and as chemical feedstocks including ethanol and other biofuels.
Maize is cultivated throughout the world; a greater weight of maize is produced each year than any other grain. In 2020, world production was 1.1 billion tonnes. It is afflicted by many pests and diseases; two major insect pests, European corn borer and corn rootworms, have each caused annual losses of a billion dollars in the US. Modern plant breeding has greatly increased output and qualities such as nutrition, drought tolerance, and tolerance of pests and diseases. Much maize is now genetically modified.
As a food, maize is used to make a wide variety of dishes including Mexican tortillas and tamales, Italian polenta, and American hominy grits. Maize protein is low in some essential amino acids, and the niacin it contains only becomes available if freed by alkali treatment. In Mesoamerica, maize is deified as a maize god and depicted in sculptures.
History
Pre-Columbian development
Maize requires human intervention for its propagation. The kernels of its naturally-propagating teosinte ancestor fall off the cob on their own, while those of domesticated maize do not. All maize arose from a single domestication in southern Mexico about 9,000 years ago. The oldest surviving maize types are those of the Mexican highlands. Maize spread from this region to the lowlands and over the Americas along two major paths. The centre of domestication was most likely the Balsas River valley of south-central Mexico. Maize reached highland Ecuador at least 8000 years ago. It reached lower Central America by 7600 years ago, and the valleys of the Colombian Andes between 7000 and 6000 years ago.
The earliest maize plants grew a single, small ear per plant. The Olmec and Maya cultivated maize in numerous varieties throughout Mesoamerica; they cooked, ground and processed it through nixtamalization. By 3000 years ago, maize was central to Olmec culture, including their calendar, language, and myths.
The Mapuche people of south-central Chile cultivated maize along with quinoa and potatoes in pre-Hispanic times. Before the expansion of the Inca Empire, maize was traded and transported as far south as 40° S in Melinquina, Lácar Department, Argentina, probably brought across the Andes from Chile.
Columbian exchange
After the arrival of Europeans in 1492, Spanish settlers consumed maize, and explorers and traders carried it back to Europe. Spanish settlers much preferred wheat bread to maize. Maize flour could not be substituted for wheat for communion bread, since in Christian belief at that time only wheat could undergo transubstantiation and be transformed into the body of Christ.
Maize spread to the rest of the world because of its ability to grow in diverse climates. It was cultivated in Spain just a few decades after Columbus's voyages and then spread to Italy, West Africa and elsewhere. By the 17th century, it was a common peasant food in Southern Europe. By the 18th century, it was the chief food of the southern French and Italian peasantry, especially as polenta in Italy.
When maize was introduced into Western farming systems, it was welcomed for its productivity. However, a widespread problem of malnutrition soon arose wherever it had become a staple food. Indigenous Americans had learned to soak maize in alkali-water — made with ashes and lime — since at least 1200–1500 BC, creating the process of nixtamalization. They did this to liberate the corn hulls, but coincidentally it also liberated the B-vitamin niacin, the lack of which caused pellagra. Once alkali processing and dietary variety were understood and applied, pellagra disappeared in the developed world. The development of high-lysine maize and the promotion of a more balanced diet have contributed to its demise. Pellagra still exists in food-poor areas and refugee camps where people survive on donated maize.
Names
The name maize derives from the Spanish form of the Taíno . The Swedish botanist Carl Linnaeus used the common name maize as the species epithet in Zea mays. The name maize is preferred in formal, scientific, and international usage as a common name because it refers specifically to this one grain, unlike corn, which has a complex variety of meanings that vary by context and geographic region. Most countries primarily use the term maize, and the name corn is used mainly in the United States and a handful of other English-speaking countries. In countries that primarily use the term maize, the word corn may denote any cereal crop, varying geographically with the local staple, such as wheat in England and oats in Scotland or Ireland. The usage of corn for maize started as a shortening of "Indian corn" in 18th-century North America.
The historian of food Betty Fussell writes in an article on the history of the word corn in North America that "[t]o say the word corn is to plunge into the tragi-farcical mistranslations of language and history". Similar to the British usage, the Spanish referred to maize as , a generic term for cereal grains, as did Italians with the term . The British later referred to maize as Turkey wheat, Turkey corn, or Indian corn; Fussell comments that "they meant not a place but a condition, a savage rather than a civilized grain".
International groups such as the Centre for Agriculture and Bioscience International consider maize the preferred common name. The word maize is used by the UN's Food and Agriculture Organization, and in the names of the International Maize and Wheat Improvement Center of Mexico, the Indian Institute of Maize Research, the Maize Association of Australia, the National Maize Association of Nigeria, the National Maize Association of Ghana, the Maize Trust of South Africa, and the Zimbabwe Seed Maize Association.
Structure and physiology
Maize is a tall annual grass with a single stem, ranging in height from to . The long narrow leaves arise from the nodes or joints, alternately on opposite sides on the stalk. Maize is monoecious, with separate male and female flowers on the same plant. At the top of the stem is the tassel, an inflorescence of male flowers; their anthers release pollen, which is dispersed by wind. Like other pollen, it is an allergen, but most of it falls within a few meters of the tassel and the risk is largely restricted to farm workers.
The female inflorescence, some way down the stem from the tassel, is first seen as a silk, a bundle of soft tubular hairs, one for the carpel in each female flower; when pollinated, each carpel develops into a kernel (often called a seed, although botanically, as in all grasses, it is a fruit, fused with the seed coat to form a caryopsis). A whole female inflorescence develops into an ear or corncob, enveloped by multiple leafy layers or husks.
The ear leaf is the leaf most closely associated with a particular developing ear. This leaf and those above it contribute over three quarters of the carbohydrate (starch) that fills the grain.
The grains are usually yellow or white in modern varieties; other varieties have orange, red, brown, blue, purple, or black grains. They are arranged in 8 to 32 rows around the cob; there can be up to 1200 grains on a large cob. Yellow maizes derive their color from carotenoids; red maizes are colored by anthocyanins and phlobaphenes; and orange and green varieties may contain combinations of these pigments.
Maize has short-day photoperiodism, meaning that it requires nights of a certain length to flower. Flowering further requires enough warm days above . The control of flowering is set genetically; the physiological mechanism involves the phytochrome system. Tropical cultivars can be problematic if grown in higher latitudes, as the longer days can make the plants grow tall instead of setting seed before winter comes. On the other hand, growing tall rapidly could be convenient for producing biofuel.
Immature maize shoots accumulate a powerful antibiotic substance, 2,4-dihydroxy-7-methoxy-1,4-benzoxazin-3-one (DIMBOA), which provides a measure of protection against a wide range of pests. Because of its shallow roots, maize is susceptible to droughts, intolerant of nutrient-deficient soils, and prone to being uprooted by severe winds.
Genomics and genetics
Maize is diploid with 20 chromosomes. 83% of allelic variation within the genome derives from its teosinte ancestors, primarily due to the freedom of Zea species to outcross. Barbara McClintock used maize to validate her transposon theory of "jumping genes", for which she won the 1983 Nobel Prize in Physiology or Medicine. Maize remains an important model organism for genetics and developmental biology. The MADS-box motif is involved in the development of maize flowers.
The Maize Genetics and Genomics Database is funded by the US Department of Agriculture to support maize research. The International Maize and Wheat Improvement Center maintains a large collection of maize accessions tested and cataloged for insect resistance. In 2005, the US National Science Foundation, Department of Agriculture, and the Department of Energy formed a consortium to sequence the maize genome. The resulting DNA sequence data was deposited immediately into GenBank, a public repository for genome-sequence data. Sequencing of the maize genome was completed in 2008. In 2009, the consortium published results of its sequencing effort. The genome, 85% of which is composed of transposons, contains 32,540 genes. Much of it has been duplicated and reshuffled by helitrons, a group of transposable elements within maize's DNA.
Breeding
Conventional breeding
Maize breeding in prehistory resulted in large plants producing large ears. Modern breeding began with individuals who selected highly productive varieties in their fields and then sold seed to other farmers. James L. Reid was one of the earliest and most successful, developing Reid's Yellow Dent in the 1860s. These early efforts were based on mass selection (a row of plants is grown from seeds of one parent), and the choosing of plants after pollination (which means that only the female parents are known). Later breeding efforts included ear to row selection (C. G. Hopkins c. 1896), hybrids made from selected inbred lines (G. H. Shull, 1909), and the highly successful double cross hybrids using four inbred lines (D. F. Jones c. 1918, 1922). University-supported breeding programs were especially important in developing and introducing modern hybrids.
Since the 1940s, the best strains of maize have been first-generation hybrids made from inbred strains that have been optimized for specific traits, such as yield, nutrition, drought, pest and disease tolerance. Both conventional cross-breeding and genetic engineering have succeeded in increasing output and reducing the need for cropland, pesticides, water and fertilizer. There is conflicting evidence to support the hypothesis that maize yield potential has increased over the past few decades. This suggests that changes in yield potential are associated with leaf angle, lodging resistance, tolerance of high plant density, disease/pest tolerance, and other agronomic traits rather than increase of yield potential per individual plant.
Certain varieties of maize have been bred to produce many ears; these are the source of the "baby corn" used as a vegetable in Asian cuisine. A fast-flowering variety named mini-maize was developed to aid scientific research, as multiple generations can be obtained in a single year. One strain called olotón has evolved a symbiotic relationship with nitrogen-fixing microbes, which provides the plant with 29%–82% of its nitrogen. The International Maize and Wheat Improvement Center (CIMMYT) operates a conventional breeding program to provide optimized strains. The program began in the 1980s. Hybrid seeds are distributed in Africa by its Drought Tolerant Maize for Africa project.
Tropical landraces remain an important and underused source of resistance alleles, both those for disease and for herbivores. Such alleles can then be introgressed into productive varieties. Rare alleles for this purpose were discovered by Dao and Sood, both in 2014. In 2018, Zerka Rashid of CIMMYT used its association mapping panel, developed for tropical drought tolerance traits, to find new genomic regions providing sorghum downy mildew resistance, and to further characterize known differentially methylated regions.
Genetic engineering
Genetically modified maize was one of the 26 genetically engineered food crops grown commercially in 2016. The vast majority of this is Bt maize. Genetically modified maize has been grown since 1997 in the United States and Canada; by 2016, 92% of the US maize crop was genetically modified. As of 2011, herbicide-tolerant maize and insect-resistant maize varieties were each grown in over 20 countries.
In September 2000, up to $50 million worth of food products were recalled due to the presence of Starlink genetically modified corn, which had been approved only for animal consumption.
Origin
External phylogeny
The maize genus Zea is relatively closely related to sorghum, both being in the PACMAD clade of Old World grasses, and much more distantly to rice and wheat, which are in the other major group of grasses, the BOP clade. It is closely related to Tripsacum, gamagrass.
Maize and teosinte
Maize is the domesticated variant of the four species of teosintes, which are its crop wild relatives. The teosinte origin theory was proposed by the Russian botanist Nikolai Ivanovich Vavilov in 1931, and the American Nobel Prize-winner George Beadle in 1932. The two plants have dissimilar appearance, maize having a single tall stalk with multiple leaves and teosinte being a short, bushy plant. The difference between the two is largely controlled by differences in just two genes, called grassy tillers-1 (gt1, ) and teosinte branched-1 (tb1, ). In the late 1930s, Paul Mangelsdorf suggested that domesticated maize was the result of a hybridization event between an unknown wild maize and a species of Tripsacum, a related genus; this has been refuted by modern genetic testing.
In 2004, John Doebley identified Balsas teosinte, Zea mays subsp. parviglumis, native to the Balsas River valley in Mexico's southwestern highlands, as the crop wild relative genetically most similar to modern maize. The middle part of the short Balsas River valley is the likely location of early domestication. Stone milling tools with maize residue have been found in an 8,700 year old layer of deposits in a cave not far from Iguala, Guerrero. Doebley and colleagues showed in 2002 that maize had been domesticated only once, about 9,000 years ago, and then spread throughout the Americas.
Maize pollen dated to 7,300 years ago from San Andres, Tabasco has been found on the Caribbean coast. A primitive corn was being grown in southern Mexico, Central America, and northern South America 7,000 years ago. Archaeological remains of early maize ears, found at Guila Naquitz Cave in the Oaxaca Valley, are roughly 6,250 years old; the oldest ears from caves near Tehuacan, Puebla, are 5,450 years old.
Spreading to the north
Around 4,500 years ago, maize began to spread to the north. In the United States, maize was first cultivated at several sites in New Mexico and Arizona about 4,100 years ago. During the first millennium AD, maize cultivation spread more widely in the areas north. In particular, the large-scale adoption of maize agriculture and consumption in eastern North America took place about A.D. 900. Native Americans cleared large forest and grassland areas for the new crop. The rise in maize cultivation 500 to 1,000 years ago in what is now the southeastern United States corresponded with a decline of freshwater mussels, which are very sensitive to environmental changes.
Agronomy
Growing
Because it is cold-intolerant, in the temperate zones maize must be planted in the spring. Its root system is generally shallow, so the plant is dependent on soil moisture. As a plant that uses C4 carbon fixation, maize is a considerably more water-efficient crop than plants that use C3 carbon fixation such as alfalfa and soybeans. Maize is most sensitive to drought at the time of silk emergence, when the flowers are ready for pollination. In the United States, a good harvest was traditionally predicted if the maize was "knee-high by the Fourth of July", although modern hybrids generally exceed this growth rate. Maize used for silage is harvested while the plant is green and the fruit immature. Sweet corn is harvested in the "milk stage", after pollination but before starch has formed, between late summer and early to mid-autumn. Field maize is left in the field until very late in the autumn to thoroughly dry the grain, and may, in fact, sometimes not be harvested until winter or even early spring. The importance of sufficient soil moisture is shown in many parts of Africa, where periodic drought regularly causes maize crop failure and consequent famine. Although it is grown mainly in wet, hot climates, it can thrive in cold, hot, dry or wet conditions, meaning that it is an extremely versatile crop.
Maize was planted by the Native Americans in small hills of soil, in the polyculture system called the Three Sisters. Maize provided support for beans; the beans provided nitrogen derived from nitrogen-fixing rhizobia bacteria which live on the roots of beans and other legumes; and squashes provided ground cover to stop weeds and inhibit evaporation by providing shade over the soil.
Harvesting
Sweet corn, harvested earlier than maize grown for grain, grows to maturity in a period of from 60 to 100 days according to variety. An extended sweet corn harvest, picked at the milk stage, can be arranged either by planting a selection of varieties which ripen earlier and later, or by planting different areas at fortnightly intervals.
Maize harvested as a grain crop can be kept in the field a relatively long time, even months, after the crop is ready to harvest; it can be harvested and stored in the husk leaves if kept dry.
According to the U.S. Department of Agriculture, in the four decades from 1855 to 1894 the amount of labor required to produce one bushel of maize declined from four hours and thirty four minutes to only forty-one minutes. Before 1940 , most maize in North America was harvested by hand. This involved a large number of workers and associated social events (husking or shucking bees). From the 1850s onward, some machinery became available to partially mechanize the processes, such as one- and two-row mechanical pickers (picking the ear, leaving the stover) and corn binders, which are reaper-binders designed specifically for maize. The latter produce sheaves that can be shocked. By hand or mechanical picker, the entire ear is harvested, which requires a separate operation of a maize sheller to remove the kernels from the ear. Whole ears of maize were often stored in corn cribs, sufficient for some livestock feeding uses. Today corn cribs with whole ears, and corn binders, are less common because most modern farms harvest the grain from the field with a combine harvester and store it in bins. The combine with a corn head (with points and snap rolls instead of a reel) does not cut the stalk; it simply pulls the stalk down. The stalk continues downward and is crumpled into a mangled pile on the ground, where it usually is left to become organic matter for the soil. The ear of maize is too large to pass between slots in a plate as the snap rolls pull the stalk away, leaving only the ear and husk to enter the machinery. The combine separates the husk and the cob, keeping only the kernels.
Grain storage
Drying is vital to prevent or at least reduce damage by mould fungi, which contaminate the grain with mycotoxins. Aspergillus and Fusarium spp. are the most common mycotoxin sources, and accordingly important in agriculture. If the moisture content of the harvested grain is too high, grain dryers are used to reduce the moisture content by blowing heated air through the grain. This can require large amounts of energy in the form of combustible gases (propane or natural gas) and electricity to power the blowers.
Production
Maize is widely cultivated throughout the world, and a greater weight of maize is produced each year than any other grain. In 2020, total world production was 1.16 billion tonnes, led by the United States with 31.0% of the total (table). China produced 22.4% of the global total.
Pests
Many pests can affect maize growth and development, including invertebrates, weeds, and pathogens.
Maize is susceptible to a large number of fungal, bacterial, and viral plant diseases. Those of economic importance include diseases of the leaf, smuts such as corn smut, ear rots and stalk rots. Northern corn leaf blight damages maize throughout its range, whereas banded leaf and sheath blight is a problem in Asia. Some fungal diseases of maize produce potentially dangerous mycotoxins such as aflatoxin. In the United States, major diseases include tar spot, bacterial leaf streak, gray leaf spot, northern corn leaf blight, and Goss's wilt; in 2022, the most damaging disease was tar spot, which caused losses of 116.8 million bushels.
Maize sustains a billion dollars' worth of losses annually in the US from each of two major insect pests, namely the European corn borer or ECB (Ostrinia nubilalis) and the corn rootworms (Diabrotica spp): the western corn rootworm, northern corn rootworm, and southern corn rootworm. Another serious pest is the fall armyworm (Spodoptera frugiperda).
The maize weevil (Sitophilus zeamais) is a serious pest of stored grain. The Northern armyworm, Oriental armyworm or Rice ear-cutting caterpillar (Mythimna separata) is a major pest of maize in Asia.
Nematodes too are pests of maize. It is likely that every maize plant harbors some nematode parasites, and populations of Pratylenchus lesion nematodes in the roots can be "enormous". The effects on the plants include stunting, sometimes of whole fields, sometimes in patches, especially when there is also water stress and poor control of weeds.
Many plants, both monocots (grasses) such as Echinochloa crus-galli (barnyard grass) and dicots (forbs) such as Chenopodium and Amaranthus may compete with maize and reduce crop yields. Control may involve mechanical weed removal, flame weeding, or herbicides.
Uses
Culinary
Maize and cornmeal (ground dried maize) constitute a staple food in many regions of the world. Maize is used to produce the food ingredient cornstarch. Maize starch can be hydrolyzed and enzymatically treated to produce high fructose corn syrup, a sweetener. Maize may be fermented and distilled to produce Bourbon whiskey. Corn oil is extracted from the germ of the grain.
In prehistoric times, Mesoamerican women used a metate quern to grind maize into cornmeal. After ceramic vessels were invented the Olmec people began to cook maize together with beans, improving the nutritional value of the staple meal. Although maize naturally contains niacin, an important nutrient, it is not bioavailable without the process of nixtamalization. The Maya used nixtamal meal to make porridges and tamales.
Maize is a staple of Mexican cuisine. Masa (nixtamal) is the main ingredient for tortillas, atole and many other dishes of Central American food. It is the main ingredient of corn tortilla, tamales, atole and the dishes based on these.
The corn smut fungus, known as huitlacoche, which grows on maize, is a Mexican delicacy.
Coarse maize meal is made into a thick porridge in many cultures: from the polenta of Italy, the angu of Brazil, the mămăligă of Romania, to cornmeal mush in the US (or hominy grits in the Southern US) or the food called mieliepap in South Africa and sadza, nshima, ugali and other names in other parts of Africa. Introduced into Africa by the Portuguese in the 16th century, maize has become Africa's most important staple food crop.
Sweet corn, a genetic variety that is high in sugars and low in starch, is eaten in the unripe state as corn on the cob.
Nutritional value
Raw, yellow, sweet maize kernels are composed of 76% water, 19% carbohydrates, 3% protein, and 1% fat (table). In a 100-gram serving, maize kernels provide 86 calories and are a good source (10–19% of the Daily Value) of the B vitamins, thiamin, niacin (if freed), pantothenic acid (B5) and folate. Maize has suboptimal amounts of the essential amino acids tryptophan and lysine, which accounts for its lower status as a protein source. The proteins of beans and legumes complement those of maize.
Animal feed
Maize is a major source of animal feed. As a grain crop, the dried kernels are used as feed. They are often kept on the cob for storage in a corn crib, or they may be shelled off for storage in a grain bin. When the grain is used for feed, the rest of the plant (the corn stover) can be used later as fodder, bedding (litter), or soil conditioner. When the whole maize plant (grain plus stalks and leaves) is used for fodder, it is usually chopped and made into silage, as this is more digestible and more palatable to ruminants than the dried form. Traditionally, maize was gathered into shocks after harvesting, where it dried further. It could then be stored for months until fed to livestock. Silage can be made in silos or in silage wrappers. In the tropics, maize is harvested year-round and fed as green forage to the animals. Baled cornstalks offer an alternative to hay for animal feed, alongside direct grazing of maize grown for this purpose.
Chemicals
Starch from maize can be made into plastics, fabrics, adhesives, and many other chemical products. Corn steep liquor, a plentiful watery byproduct of maize wet milling process, is used in the biochemical industry and research as a culture medium to grow microorganisms.
Biofuel
Feed maize is being used for heating; specialized corn stoves (similar to wood stoves) use either feed maize or wood pellets to generate heat. Maize cobs can be used as a biomass fuel source. Home-heating furnaces which use maize kernels as a fuel have a large hopper that feeds the kernels into the fire. Maize is used as a feedstock for the production of ethanol fuel. The price of food is indirectly affected by the use of maize for biofuel production: use of maize for biofuel production increases the demand, and therefore the price of maize. A pioneering biomass gasification power plant in Strem, Burgenland, Austria, started operating in 2005. It would be possible to create diesel from the biogas by the Fischer Tropsch method.
In human culture
In Mesoamerica, maize is seen as a vital force, deified as a maize god, usually female. In the United States, maize ears are carved into column capitals in the United States Capitol building. The Corn Palace in Mitchell, South Dakota, uses cobs and ears of colored maize to implement a mural design that is recycled annually. The concrete Field of Corn sculpture in Dublin, Ohio depicts hundreds of ears of corn in a grassy field. A maize stalk with two ripe ears is depicted on the reverse of the Croatian 1 lipa coin, minted since 1993.
| Biology and health sciences | Food and drink | null |
20657585 | https://en.wikipedia.org/wiki/Scenic%20route | Scenic route | A scenic route, tourist road, tourist route, tourist drive, holiday route, theme route, or scenic byway is a specially designated road or waterway that travels through an area of natural or cultural beauty. It often passes by scenic viewpoints. The designation is usually determined by a governmental body, such as a Department of Transportation or a Ministry of Transport.
Tourist highway
A tourist highway or holiday route is a road that is marketed as being particularly suited for tourists. Tourist highways may be formed when existing roads are promoted with traffic signs and advertising material. Some tourist highways, such as the Blue Ridge Parkway, are built especially for tourism purposes. Others may be roadways enjoyed by local citizens in areas of unique or exceptional natural beauty, such as the Lake District. Still others, such as the Lincoln Highway in Illinois, are former main roads, only designated as "scenic" after most traffic bypasses them (termed scenic highway in the United States). Some tourist routes, such as Great West Way, can be described as 'multi-modal', able to be followed by a mix of transportation types, including road, waterway, rail, bicycle or on foot.
In Europe and other countries around the world, they are often marked with brown tourist signs with the individual route symbol or name, or both.
United States
In the United States, a scenic route may also refer to a type of special route of the U.S. highway system that travels through a particularly beautiful area. These special routes, which boast "Scenic" banners, are typically longer than the "parent route". There is only one route in the country that retains the official scenic designation: U.S. Route 40 Scenic in Maryland.
Scenic byways in the United States also include those designated under state programs, the National Scenic Byway program, National Forest Scenic Byways, and the Bureau of Land Management's Back Country Byways program, which designate roads or routes as scenic byways due to some unique characteristics.
National Parkways are scenic roads in the National Park System built for recreational driving through scenic or historic areas. Unlike most scenic routes, National Parkways are built with a buffer of park land along both sides of the roadway. They also may have large satellite parks or recreation areas built periodically along their length.
Most National Historic Trails are commemorative motor routes which follow historic pathways.
Theme routes
Theme routes are special theme-based tours, aimed at providing a visitor or tourist with a better insight on that theme. Being popular in Europe, they can cover anything from an individual city, a wine growing region, Dutch tulip fields, Swiss Mountains, to Norwegian Fjords. Subjects can be architectural, historical, or cultural.
Examples of theme routes:
Bergstraße
Bertha Benz Memorial Route
Castle Road
Cheese Route
Deutsche Fährstraße
European Route of Industrial Heritage
German Wine Route
Golden Ring of Russia of historical sites
Japan Romantic Road
Liberation Route Europe
Silver Ring of Russia of historical sites
Romantic Road
Scotland's Malt Whisky Trail
Silver Road
Trail of the Eagle's Nests, along a chain of medieval castles in Poland
Upper Swabian Baroque Route
Wild Atlantic Way
Great West Way
| Technology | Road infrastructure | null |
15537745 | https://en.wikipedia.org/wiki/Frequentist%20inference | Frequentist inference | Frequentist inference is a type of statistical inference based in frequentist probability, which treats “probability” in equivalent terms to “frequency” and draws conclusions from sample-data by means of emphasizing the frequency or proportion of findings in the data. Frequentist inference underlies frequentist statistics, in which the well-established methodologies of statistical hypothesis testing and confidence intervals are founded.
History of frequentist statistics
Frequentism is based on the presumption that statistics represent probabilistic frequencies. This view was primarily developed by Ronald Fisher and the team of Jerzy Neyman and Egon Pearson. Ronald Fisher contributed to frequentist statistics by developing the frequentist concept of "significance testing", which is the study of the significance of a measure of a statistic when compared to the hypothesis.
Neyman-Pearson extended Fisher's ideas to apply to multiple hypotheses. They posed that the ratio of probabilities of two given hypotheses, when maximizing the difference between them, leads to a maximization of the probability of exceeding a given p-value. This relationship serves as the basis of type I and type II errors and confidence intervals.
Definition
For statistical inference, the statistic about which we want to make inferences is y, where the random vector Y is a function of an unknown parameter, θ.
The parameter θ is, in turn, partitioned into (ψ, λ), where ψ is the parameter of interest and λ is the nuisance parameter. For concreteness, ψ might be the population mean, μ, and the nuisance parameter λ the standard deviation of the population mean, σ.
Thus, statistical inference is concerned with the expectation of the random vector Y, E(Y).
To construct areas of uncertainty in frequentist inference, a pivot is used which defines the area around ψ that can be used to provide an interval to estimate uncertainty. The pivot is a function p(t, ψ) of the data t and the parameter of interest ψ that is strictly increasing in ψ, where t is a realization of the random vector Y.
This allows that, for some 0 < c < 1, we can define Pr{p(t, ψ) ≤ k_c}, which is the probability that the pivot function is less than some well-defined value. This implies Pr{ψ ≤ q(t, c)} = 1 - c, where q(t, c) is a 1 - c upper limit for ψ.
Note that 1 - c is a probability level that defines a one-sided limit for ψ, and that 1 - 2c gives a two-sided limit for ψ, when we want to estimate a range of outcomes where ψ may occur. This rigorously defines the confidence interval, which is the range of outcomes about which we can make statistical inferences.
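To make the long-run reading of such an interval concrete, the following is a minimal Python sketch (not part of the original article; the normal model, sample size, and 95% level are assumptions chosen for illustration). It uses the studentized mean as the pivot, inverts its t distribution to get an interval, and checks by simulation that the interval covers the true mean in roughly 95% of repeated samples.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu_true, sigma_true, n, c = 10.0, 2.0, 25, 0.05    # assumed illustrative values
covered = 0
for _ in range(10_000):                            # notional repeated experiments
    x = rng.normal(mu_true, sigma_true, n)
    xbar, s = x.mean(), x.std(ddof=1)
    t_crit = stats.t.ppf(1 - c / 2, df=n - 1)      # invert the pivot's distribution
    half = t_crit * s / np.sqrt(n)
    covered += (xbar - half <= mu_true <= xbar + half)
print(covered / 10_000)                            # close to 1 - c = 0.95 in the long run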
Fisherian reduction and Neyman-Pearson operational criteria
Two complementary concepts in frequentist inference are the Fisherian reduction and the Neyman-Pearson operational criteria. Together these concepts illustrate a way of constructing frequentist intervals that define the limits for ψ. The Fisherian reduction is a method of determining the interval within which the true value of ψ may lie, while the Neyman-Pearson operational criteria is a decision rule about making a priori probability assumptions.
The Fisherian reduction is defined as follows:
Determine the likelihood function (this is usually just gathering the data);
Reduce to a sufficient statistic S of the same dimension as θ;
Find the function of S that has a distribution depending only on ψ;
Invert that distribution (this yields a cumulative distribution function or CDF) to obtain limits for ψ at an arbitrary set of probability levels;
Use the conditional distribution of the data given S = s, informally or formally, to assess the adequacy of the formulation.
Essentially, the Fisherian reduction is designed to find where the sufficient statistic can be used to determine the range of outcomes where ψ may occur on a probability distribution that defines all the potential values of ψ. This is necessary for formulating confidence intervals, where we can find a range of outcomes over which ψ is likely to occur in the long-run.
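As a rough illustration of those five steps, the sketch below (an assumption-laden Python example, not drawn from the article) applies them to a small normal sample, with the mean as the parameter of interest and the standard deviation as the nuisance parameter.

import numpy as np
from scipy import stats

data = np.array([4.9, 5.3, 5.1, 4.7, 5.0, 5.4, 4.8, 5.2])  # step 1: the data underlying the likelihood
n = data.size
xbar, s = data.mean(), data.std(ddof=1)                     # step 2: sufficient statistics for (mean, sd)
# step 3: the studentized mean (xbar - mean) / (s / sqrt(n)) has a t distribution
#         that depends only on the parameter of interest
# step 4: invert that distribution to obtain limits at arbitrary probability levels
for level in (0.90, 0.95, 0.99):
    t_crit = stats.t.ppf(1 - (1 - level) / 2, df=n - 1)
    half = t_crit * s / np.sqrt(n)
    print(level, xbar - half, xbar + half)
# step 5: informally assess adequacy, e.g. by inspecting the residuals (data - xbar)
#         for gross departures from the assumed normal model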
The Neyman-Pearson operational criteria is an even more specific understanding of the range of outcomes where the relevant statistic, ψ, can be said to occur in the long run. The Neyman-Pearson operational criteria defines the likelihood of that range actually being adequate or of the range being inadequate. The Neyman-Pearson criteria defines the range of the probability distribution that, if ψ exists in this range, is still below the true population statistic. For example, if the distribution from the Fisherian reduction exceeds a threshold that we consider to be a priori implausible, then the Neyman-Pearson evaluation of that distribution can be used to infer where looking purely at the Fisherian reduction's distributions can give us inaccurate results. Thus, the Neyman-Pearson reduction is used to find the probability of type I and type II errors. As a point of reference, the complement to this in Bayesian statistics is the minimum Bayes risk criterion.
Because of the reliance of the Neyman-Pearson criteria on our ability to find a range of outcomes where ψ is likely to occur, the Neyman-Pearson approach is only possible where a Fisherian reduction can be achieved.
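The following sketch illustrates the Neyman-Pearson view of type I and type II errors for a simple one-sided test of a normal mean; the hypothesized means, standard deviation, sample size, and α level are hypothetical values chosen only for illustration.

```python
# Minimal sketch of the Neyman-Pearson view: fix a type I error rate alpha for
# H0: mu = mu0 against H1: mu = mu1 (> mu0), then compute the type II error rate
# beta of the resulting decision rule. All numbers are illustrative assumptions.
import numpy as np
from scipy import stats

mu0, mu1, sigma, n, alpha = 0.0, 0.5, 1.0, 25, 0.05

# Decision rule: reject H0 when the sample mean exceeds this cutoff.
cutoff = mu0 + stats.norm.ppf(1 - alpha) * sigma / np.sqrt(n)

# Type II error: probability of NOT rejecting H0 when H1 is true.
beta = stats.norm.cdf(cutoff, loc=mu1, scale=sigma / np.sqrt(n))
print(f"cutoff = {cutoff:.3f}, type I error = {alpha}, type II error = {beta:.3f}")
```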
Experimental design and methodology
Frequentist inferences are associated with the application of frequentist probability to experimental design and interpretation, and specifically with the view that any given experiment can be considered one of an infinite sequence of possible repetitions of the same experiment, each capable of producing statistically independent results. In this view, the frequentist inference approach to drawing conclusions from data is effectively to require that the correct conclusion should be drawn with a given (high) probability, among this notional set of repetitions.
However, exactly the same procedures can be developed under a subtly different formulation. This is one where a pre-experiment point of view is taken. It can be argued that the design of an experiment should include, before undertaking the experiment, decisions about exactly what steps will be taken to reach a conclusion from the data yet to be obtained. These steps can be specified by the scientist so that there is a high probability of reaching a correct decision where, in this case, the probability relates to a yet to occur set of random events and hence does not rely on the frequency interpretation of probability. This formulation has been discussed by Neyman, among others. This is especially pertinent because the significance of a frequentist test can vary under model selection, a violation of the likelihood principle.
The statistical philosophy of frequentism
Frequentism is the study of probability with the assumption that results occur with a given frequency over some period of time or with repeated sampling. As such, frequentist analysis must be formulated with consideration to the assumptions of the problem frequentism attempts to analyze. This requires looking into whether the question at hand is concerned with understanding the variability of a statistic or locating the true value of a statistic. The difference between these assumptions is critical for interpreting a hypothesis test.
There are broadly two camps of statistical inference, the epistemic approach and the epidemiological approach. The epistemic approach is the study of variability; namely, how often do we expect a statistic to deviate from some observed value. The epidemiological approach is concerned with the study of uncertainty; in this approach, the value of the statistic is fixed but our understanding of that statistic is incomplete. For concreteness, imagine trying to measure the stock market quote versus evaluating an asset's price. The stock market fluctuates so greatly that trying to find exactly where a stock price is going to be is not useful: the stock market is better understood using the epistemic approach, where we can try to quantify its fickle movements. Conversely, the price of an asset might not change that much from day to day: it is better to locate the true value of the asset rather than find a range of prices and thus the epidemiological approach is better. The difference between these approaches is non-trivial for the purposes of inference.
For the epistemic approach, we formulate the problem as if we want to attribute probability to a hypothesis. This can only be done with Bayesian statistics, where the interpretation of probability is straightforward because Bayesian statistics is conditioned solely on the observed data, whereas frequentist testing is concerned with the whole experimental design. Frequentist statistics is conditioned not solely on the data but also on the experimental design. In frequentist statistics, the cutoff for understanding the frequency of occurrence is derived from the family of distributions used in the experimental design. For example, a binomial distribution and a negative binomial distribution can be used to analyze exactly the same data, but because their tail ends are different the frequentist analysis will yield different levels of statistical significance for the same data under the different assumed probability distributions. This difference does not occur in Bayesian inference. For more, see the likelihood principle, which frequentist statistics inherently violates.
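The binomial versus negative binomial contrast mentioned above can be checked numerically. In this hedged sketch, the hypothetical data are 9 successes and 3 failures under H0: p = 0.5; the same data yield different one-sided p-values depending on whether the stopping rule fixed the number of trials (binomial) or the number of failures (negative binomial).

```python
# Minimal sketch of why frequentist p-values depend on the experimental design.
# Hypothetical data: 9 successes and 3 failures, testing H0: p = 0.5 against p > 0.5.
from scipy import stats

# Design A: fix n = 12 trials (binomial). p-value = P(X >= 9 successes).
p_binom = stats.binom.sf(8, 12, 0.5)

# Design B: sample until 3 failures occur (negative binomial).
# p-value = P(9 or more successes before the 3rd failure).
p_nbinom = stats.nbinom.sf(8, 3, 0.5)

print(f"binomial design p-value:          {p_binom:.4f}")   # ~0.073
print(f"negative binomial design p-value: {p_nbinom:.4f}")  # ~0.033
```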
For the epidemiological approach, the central idea behind frequentist statistics must be discussed. Frequentist statistics is designed so that, in the long-run, the frequency of a statistic may be understood, and in the long-run the range of the true mean of a statistic can be inferred. This leads to the Fisherian reduction and the Neyman-Pearson operational criteria, discussed above. When we define the Fisherian reduction and the Neyman-Pearson operational criteria for any statistic, we are assessing, according to these authors, the likelihood that the true value of the statistic will occur within a given range of outcomes assuming a number of repetitions of our sampling method. This allows for inference where, in the long-run, a 95% confidence interval literally means that the true mean lies inside the interval in 95% of repeated constructions, but not that the mean lies in any particular confidence interval with 95% certainty. The latter is a popular misconception.
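A small simulation makes this long-run reading of a confidence interval explicit; the true mean, standard deviation, sample size, and number of repetitions below are arbitrary assumptions chosen for the demonstration.

```python
# Minimal sketch of the long-run reading of a 95% confidence interval: across
# repeated samples, about 95% of the intervals cover the true mean. The true
# mean, spread, and sample size below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mu, sigma, n, reps = 10.0, 2.0, 30, 10_000

covered = 0
for _ in range(reps):
    sample = rng.normal(true_mu, sigma, n)
    x_bar, s = sample.mean(), sample.std(ddof=1)
    half = stats.t.ppf(0.975, df=n - 1) * s / np.sqrt(n)
    covered += (x_bar - half <= true_mu <= x_bar + half)

print(f"empirical coverage: {covered / reps:.3f}")   # close to 0.95
```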
Very commonly the epistemic view and the epidemiological view are incorrectly regarded as interconvertible. First, the epistemic view is centered around Fisherian significance tests that are designed to provide inductive evidence against the null hypothesis, H0, in a single experiment, and is defined by the Fisherian p-value. Conversely, the epidemiological view, conducted with Neyman-Pearson hypothesis testing, is designed to minimize the Type II false acceptance errors in the long-run by providing error minimizations that work in the long-run. The difference between the two is critical because the epistemic view stresses the conditions under which we might find one value to be statistically significant; meanwhile, the epidemiological view defines the conditions under which long-run results present valid results. These are extremely different inferences, because one-time, epistemic conclusions do not inform long-run errors, and long-run errors cannot be used to certify whether one-time experiments are meaningful. Extrapolating one-time experiments to long-run occurrences is a misattribution, and attributing long-run trends to individual experiments is an example of the ecological fallacy.
Relationship with other approaches
Frequentist inferences stand in contrast to other types of statistical inferences, such as Bayesian inferences and fiducial inferences. While the "Bayesian inference" is sometimes held to include the approach to inferences leading to optimal decisions, a more restricted view is taken here for simplicity.
Bayesian inference
Bayesian inference is based in Bayesian probability, which treats “probability” as equivalent to “certainty”, so that the essential difference between frequentist inference and Bayesian inference is the same as the difference between the two interpretations of what a “probability” means. However, where appropriate, Bayesian inferences (meaning in this case an application of Bayes' theorem) are used by those employing frequency probability.
There are two major differences in the frequentist and Bayesian approaches to inference that are not included in the above consideration of the interpretation of probability:
In a frequentist approach to inference, unknown parameters are typically considered as being fixed, rather than as being random variates. In contrast, a Bayesian approach allows probabilities to be associated with unknown parameters, where these probabilities can sometimes have a frequency probability interpretation as well as a Bayesian one. The Bayesian approach allows these probabilities to have an interpretation as representing the scientist's belief that given values of the parameter are true (see Bayesian probability - Personal probabilities and objective methods for constructing priors).
The result of a Bayesian approach can be a probability distribution for what is known about the parameters given the results of the experiment or study. The result of a frequentist approach is either a decision from a significance test or a confidence interval.
| Mathematics | Statistics | null |
15548640 | https://en.wikipedia.org/wiki/Adrenaline | Adrenaline | Adrenaline, also known as epinephrine, is a hormone and medication which is involved in regulating visceral functions (e.g., respiration). It appears as a white microcrystalline granule. Adrenaline is normally produced by the adrenal glands and by a small number of neurons in the medulla oblongata. It plays an essential role in the fight-or-flight response by increasing blood flow to muscles, heart output by acting on the SA node, pupil dilation response, and blood sugar level. It does this by binding to alpha and beta receptors. It is found in many animals, including humans, and some single-celled organisms. It has also been isolated from the plant Scoparia dulcis found in Northern Vietnam.
Medical uses
As a medication, it is used to treat several conditions, including allergic reaction anaphylaxis, cardiac arrest, and superficial bleeding. Inhaled adrenaline may be used to improve the symptoms of croup. It may also be used for asthma when other treatments are not effective. It is given intravenously, by injection into a muscle, by inhalation, or by injection just under the skin. Common side effects include shakiness, anxiety, and sweating. A fast heart rate and high blood pressure may occur. Occasionally it may result in an abnormal heart rhythm. While the safety of its use during pregnancy and breastfeeding is unclear, the benefits to the mother must be taken into account.
A case has been made for the use of adrenaline infusion in place of the widely accepted treatment of inotropes for preterm infants with clinical cardiovascular compromise. Although sufficient data strongly recommends adrenaline infusions as a viable treatment, more trials are needed to conclusively determine that these infusions will successfully reduce morbidity and mortality rates among preterm, cardiovascularly compromised infants.
Epinephrine can also be used to treat open-angle glaucoma, as it has been found to increase the outflow of aqueous humor in the eye. This lowers the intraocular pressure in the eye and thus aids in treatment.
Physiological effects
The adrenal medulla is a minor contributor to total circulating catecholamines (L-DOPA is at a higher concentration in the plasma), though it contributes over 90% of circulating adrenaline. Little adrenaline is found in other tissues, mostly in scattered chromaffin cells and in a small number of neurons that use adrenaline as a neurotransmitter. Following adrenalectomy, adrenaline falls below the detection limit in the bloodstream.
Pharmacological doses of adrenaline stimulate α1, α2, β1, β2, and β3 adrenoceptors of the sympathetic nervous system. Sympathetic nerve receptors are classified as adrenergic, based on their responsiveness to adrenaline. The term "adrenergic" is often misinterpreted in that the main sympathetic neurotransmitter is noradrenaline, rather than adrenaline, as discovered by Ulf von Euler in 1946. Adrenaline has a β2 adrenoceptor-mediated effect on metabolism and the airway, with no direct neural connection from the sympathetic ganglia to the airway.
Walter Bradford Cannon originally proposed the concept of the adrenal medulla and the sympathetic nervous system being involved in the flight, fight, and fright response. But the adrenal medulla, in contrast to the adrenal cortex, is not required for survival. In adrenalectomized patients, hemodynamic and metabolic responses to stimuli such as hypoglycemia and exercise remain normal.
Exercise
One physiological stimulus to adrenaline secretion is exercise. This was first demonstrated by measuring the dilation of a (denervated) pupil of a cat on a treadmill, later confirmed using a biological assay of urine samples. Biochemical methods for measuring catecholamines in plasma were published from 1950 onwards. Although much valuable work has been published using fluorimetric assays to measure total catecholamine concentrations, the method is too non-specific and insensitive to accurately determine the very small quantities of adrenaline in plasma. The development of extraction methods and enzyme–isotope derivate radio-enzymatic assays (REA) transformed the analysis down to a sensitivity of 1 pg for adrenaline. Early REA plasma assays indicated that adrenaline and total catecholamines rise late in exercise, mostly when anaerobic metabolism commences.
During exercise, the adrenaline blood concentration rises partially from the increased secretion of the adrenal medulla and partly from the decreased metabolism of adrenaline due to reduced blood flow to the liver. Infusion of adrenaline to reproduce exercise circulating concentrations of adrenaline in subjects at rest has little hemodynamic effect other than a slight β2-mediated fall in diastolic blood pressure. Infusion of adrenaline well within the physiological range suppresses human airway hyper-reactivity sufficiently to antagonize the constrictor effects of inhaled histamine.
A link between the sympathetic nervous system and the lungs was shown in 1887 when Grossman showed that stimulation of cardiac accelerator nerves reversed muscarine-induced airway constriction. In experiments in the dog, where the sympathetic chain was cut at the level of the diaphragm, Jackson showed that there was no direct sympathetic innervation to the lung, but bronchoconstriction was reversed by the release of adrenaline from the adrenal medulla. An increased incidence of asthma has not been reported for adrenalectomized patients; those with a predisposition to asthma will have some protection from airway hyper-reactivity from their corticosteroid replacement therapy. Exercise induces progressive airway dilation in normal subjects that correlates with workload and is not prevented by beta-blockade. The progressive airway dilation with increasing exercise is mediated by a progressive reduction in resting vagal tone. Beta blockade with propranolol causes a rebound in airway resistance after exercise in normal subjects over the same time course as the bronchoconstriction seen with exercise-induced asthma. The reduction in airway resistance during exercise reduces the work of breathing.
Emotional responses
Every emotional response has a behavioral, an autonomic, and a hormonal component. The hormonal component includes the release of adrenaline, an adrenomedullary response to stress controlled by the sympathetic nervous system. The major emotion studied in relation to adrenaline is fear. In an experiment, subjects who were injected with adrenaline expressed more negative and fewer positive facial expressions to fear films compared to a control group. These subjects also reported a more intense fear from the films and greater mean intensity of negative memories than control subjects. The findings from this study demonstrate that there are learned associations between negative feelings and levels of adrenaline. Overall, the greater amount of adrenaline is positively correlated with an aroused state of negative emotions. These findings may be explained in part by the fact that adrenaline elicits physiological sympathetic responses, including an increased heart rate and knee shaking, which can be attributed to the feeling of fear regardless of the actual level of fear elicited from the video. Although studies have found a definite relation between adrenaline and fear, other emotions have not had such results. In the same study, subjects did not express a greater amusement to an amusement film nor greater anger to an anger film. Similar findings were also supported in a study that involved rodent subjects that either were able or unable to produce adrenaline. Findings support the idea that adrenaline has a role in facilitating the encoding of emotionally arousing events, contributing to higher levels of arousal due to fear.
Memory
It has been found that adrenergic hormones, such as adrenaline, can produce retrograde enhancement of long-term memory in humans. The release of adrenaline due to emotionally stressful events, which is endogenous adrenaline, can modulate memory consolidation of the events, ensuring memory strength that is proportional to memory importance. Post-learning adrenaline activity also interacts with the degree of arousal associated with the initial coding. There is evidence that suggests adrenaline does have a role in long-term stress adaptation and emotional memory encoding specifically. Adrenaline may also play a role in elevating arousal and fear memory under particular pathological conditions, including post-traumatic stress disorder. Overall, "Extensive evidence indicates that epinephrine (EPI) modulates memory consolidation for emotionally arousing tasks in animals and human subjects." Studies have also found that recognition memory involving adrenaline depends on a mechanism that depends on β adrenoceptors. Adrenaline does not readily cross the blood-brain barrier, so its effects on memory consolidation are at least partly initiated by β adrenoceptors in the periphery. Studies have found that sotalol, a β adrenoceptor antagonist that also does not readily enter the brain, blocks the enhancing effects of peripherally administered adrenaline on memory. These findings suggest that β adrenoceptors are necessary for adrenaline to have an impact on memory consolidation.
Pathology
Increased adrenaline secretion is observed in pheochromocytoma, hypoglycemia, myocardial infarction, and to a lesser degree, in essential tremor (also known as benign, familial, or idiopathic tremor). A general increase in sympathetic neural activity is usually accompanied by increased adrenaline secretion, but there is selectivity during hypoxia and hypoglycemia, when the ratio of adrenaline to noradrenaline is considerably increased. Therefore, there must be some autonomy of the adrenal medulla from the rest of the sympathetic system.
Myocardial infarction is associated with high levels of circulating adrenaline and noradrenaline, particularly in cardiogenic shock.
Benign familial tremor (essential tremor) (BFT) is responsive to peripheral β adrenergic blockers, and β2-stimulation is known to cause tremor. Patients with BFT were found to have increased plasma adrenaline but not noradrenaline.
Low or absent concentrations of adrenaline can be seen in autonomic neuropathy or following adrenalectomy. Failure of the adrenal cortex, as with Addison's disease, can suppress adrenaline secretion as the activity of the synthesizing enzyme, phenylethanolamine-N-methyltransferase, depends on the high concentration of cortisol that drains from the cortex to the medulla.
Terminology
In 1901, Jōkichi Takamine patented a purified extract from the adrenal glands, which was trademarked by Parke, Davis & Co in the US. The British Approved Name and European Pharmacopoeia term for this drug is hence adrenaline (from Latin ad, "on", and rēnālis, "of the kidney", from ren, "kidney").
However, the pharmacologist John Abel had already prepared an extract from adrenal glands as early as 1897, and he coined the name epinephrine to describe it (from Ancient Greek ἐπῐ́ (epí), "upon", and νεφρός (nephrós), "kidney"). As the term Adrenaline was a registered trademark in the US, and in the belief that Abel's extract was the same as Takamine's (a belief since disputed), epinephrine instead became the generic name used in the US and remains the pharmaceutical's United States Adopted Name and International Nonproprietary Name (though the name adrenaline is frequently used).
The terminology is now one of the few differences between the INN and BAN systems of names. Although European health professionals and scientists preferentially use the term adrenaline, the converse is true among American health professionals and scientists. Nevertheless, even among the latter, receptors for this substance are called adrenergic receptors or adrenoceptors, and pharmaceuticals that mimic its effects are often called adrenergics. The history of adrenaline and epinephrine is reviewed by Rao.
Mechanism of action
As a hormone, adrenaline acts on nearly all body tissues by binding to adrenergic receptors. Its effects on various tissues depend on the type of tissue and expression of specific forms of adrenergic receptors. For example, high levels of adrenaline cause smooth muscle relaxation in the airways but cause contraction of the smooth muscle that lines most arterioles.
Adrenaline is a nonselective agonist of all adrenergic receptors, including the major subtypes α1, α2, β1, β2, and β3. Adrenaline's binding to these receptors triggers a number of metabolic changes. Binding to α-adrenergic receptors inhibits insulin secretion by the pancreas, stimulates glycogenolysis in the liver and muscle, and stimulates glycolysis and inhibits insulin-mediated glycogenesis in muscle. β adrenergic receptor binding triggers glucagon secretion in the pancreas, increased adrenocorticotropic hormone (ACTH) secretion by the pituitary gland, and increased lipolysis by adipose tissue. Together, these effects increase blood glucose and fatty acids, providing substrates for energy production within cells throughout the body. Binding of β adrenergic receptor also increases the production of cyclic AMP.
Adrenaline causes liver cells to release glucose into the blood, acting through both alpha and beta-adrenergic receptors to stimulate glycogenolysis. Adrenaline binds to β2 receptors on liver cells, which changes the receptor's conformation and helps Gs, a heterotrimeric G protein, exchange GDP for GTP. This trimeric G protein dissociates into Gs alpha and Gs beta/gamma subunits. Gs alpha stimulates adenylyl cyclase, thus converting adenosine triphosphate into cyclic adenosine monophosphate (cAMP). Cyclic AMP activates protein kinase A. Protein kinase A phosphorylates and partially activates phosphorylase kinase. Adrenaline also binds to α1 adrenergic receptors, causing an increase in inositol trisphosphate, inducing calcium ions to enter the cytoplasm. Calcium ions bind to calmodulin, which leads to further activation of phosphorylase kinase. Phosphorylase kinase phosphorylates glycogen phosphorylase, which then breaks down glycogen, leading to the production of glucose.
Adrenaline also has significant effects on the cardiovascular system. It increases peripheral resistance via α1 receptor-dependent vasoconstriction and increases cardiac output by binding to β1 receptors. The goal of reducing peripheral circulation is to increase coronary and cerebral perfusion pressures and therefore increase oxygen exchange at the cellular level. While adrenaline does increase aortic, cerebral, and carotid circulation pressure, it lowers carotid blood flow and end-tidal CO2 or ETCO2 levels. It appears that adrenaline improves microcirculation at the expense of the capillary beds where perfusion takes place.
Measurement in biological fluids
Adrenaline may be quantified in blood, plasma, or serum as a diagnostic aid, to monitor therapeutic administration, or to identify the causative agent in a potential poisoning victim. Endogenous plasma adrenaline concentrations in resting adults usually are less than 10 ng/L, but they may increase by 10-fold during exercise and by 50-fold or more during times of stress. Pheochromocytoma patients often have plasma adrenaline levels of 1000–10,000 ng/L. Parenteral administration of adrenaline to acute-care cardiac patients can produce plasma concentrations of 10,000 to 100,000 ng/L.
Biosynthesis
In chemical terms, adrenaline is one of a group of monoamines called the catecholamines. Adrenaline is synthesized in the chromaffin cells of the adrenal gland's adrenal medulla and a small number of neurons in the medulla oblongata in the brain through a metabolic pathway that converts the amino acids phenylalanine and tyrosine into a series of metabolic intermediates and, ultimately, adrenaline. Tyrosine is first oxidized to L-DOPA by tyrosine hydroxylase; this is the rate-limiting step. Then it is subsequently decarboxylated to give dopamine by DOPA decarboxylase (aromatic L-amino acid decarboxylase). Dopamine is then converted to noradrenaline by dopamine beta-hydroxylase, which utilizes ascorbic acid (vitamin C) and copper. The final step in adrenaline biosynthesis is the methylation of the primary amine of noradrenaline. This reaction is catalyzed by the enzyme phenylethanolamine N-methyltransferase (PNMT), which utilizes S-adenosyl methionine (SAMe) as the methyl donor. While PNMT is found primarily in the cytosol of the endocrine cells of the adrenal medulla (also known as chromaffin cells), it has been detected at low levels in both the heart and brain.
Regulation
The major physiologic triggers of adrenaline release center upon stresses, such as physical threat, excitement, noise, bright lights, and high or low ambient temperature. All of these stimuli are processed in the central nervous system.
Adrenocorticotropic hormone (ACTH) and the sympathetic nervous system stimulate the synthesis of adrenaline precursors by enhancing the activity of tyrosine hydroxylase and dopamine β-hydroxylase, two key enzymes involved in catecholamine synthesis. ACTH also stimulates the adrenal cortex to release cortisol, which increases the expression of PNMT in chromaffin cells, enhancing adrenaline synthesis. This is most often done in response to stress. The sympathetic nervous system, acting via splanchnic nerves to the adrenal medulla, stimulates the release of adrenaline. Acetylcholine released by preganglionic sympathetic fibers of these nerves acts on nicotinic acetylcholine receptors, causing cell depolarization and an influx of calcium through voltage-gated calcium channels. Calcium triggers the exocytosis of chromaffin granules and, thus, the release of adrenaline (and noradrenaline) into the bloodstream. For noradrenaline to be acted upon by PNMT in the cytosol, it must first be shipped out of granules of the chromaffin cells. This may occur via the catecholamine-H+ exchanger VMAT1. VMAT1 is also responsible for transporting newly synthesized adrenaline from the cytosol back into chromaffin granules in preparation for release.
Unlike many other hormones, adrenaline (as with other catecholamines) does not exert negative feedback to down-regulate its own synthesis. Abnormal adrenaline levels can occur in various conditions, such as surreptitious adrenaline administration, pheochromocytoma, and other tumors of the sympathetic ganglia.
Its action is terminated with reuptake into nerve terminal endings, some minute dilution, and metabolism by monoamine oxidase and catechol-O-methyl transferase into 3,4-Dihydroxymandelic acid and Metanephrine.
History
Extracts of the adrenal gland were first obtained by Polish physiologist Napoleon Cybulski in 1895. These extracts, which he called nadnerczyna ("adrenalin"), contained adrenaline and other catecholamines. American ophthalmologist William H. Bates discovered adrenaline's usage for eye surgeries prior to 20 April 1896. In 1897, John Jacob Abel (1857–1938), the father of modern pharmacology, found a natural substance produced by the adrenal glands that he named epinephrine. The first hormone to be identified, it remains a crucial, first-line treatment for cardiac arrests, severe allergic reactions, and other conditions. In 1901, Jokichi Takamine successfully isolated and purified the hormone from the adrenal glands of sheep and oxen. Adrenaline was first synthesized in the laboratory by Friedrich Stolz and Henry Drysdale Dakin, independently, in 1904.
Although secretin is often mentioned as the first hormone, adrenaline has a claim to priority, since the activity of adrenal extract on blood pressure was observed in 1895, before that of secretin in 1902. In 1895, George Oliver (1841–1915), a general practitioner in North Yorkshire, and Edward Albert Schäfer (1850–1935), a physiologist at University College London, published a paper showing that the active component of adrenal gland extract that raised blood pressure and heart rate came from the medulla, not the cortex, of the adrenal gland. In 1897, John Jacob Abel (1857–1938) of Johns Hopkins University, the first chairman of the first US department of pharmacology, found a compound called epinephrine with the molecular formula C17H15NO4. Abel claimed that his principle from the adrenal gland extract was the active one.
In 1900, Jōkichi Takamine (1854–1922), a Japanese chemist, worked with his assistant, Keizo Uenaka (1876–1960), to purify a principle from the adrenal gland some 2000 times more active than epinephrine, which he named adrenaline, with the molecular formula C10H15NO3. Additionally, in 1900 Thomas Aldrich of Parke-Davis Scientific Laboratory also purified adrenaline independently. In 1901 both Takamine and Parke-Davis obtained patents for adrenaline. The dispute over terminology between adrenaline and epinephrine was not settled until the structure of adrenaline was first determined by Hermann Pauly (1870–1950) in 1903 and adrenaline was first synthesized by Friedrich Stolz (1860–1936), a German chemist, in 1904. They both believed that Takamine's compound was the active principle while Abel's compound was the inactive one. Stolz synthesized adrenaline from its ketone form (adrenalone).
Society and culture
Adrenaline junkie
An adrenaline junkie is someone who engages in sensation-seeking behavior through "the pursuit of novel and intense experiences without regard for physical, social, legal or financial risk". Such activities include extreme and risky sports, substance abuse, unsafe sex, and crime. The term relates to the increase in circulating levels of adrenaline during physiological stress. Such an increase in the circulating concentration of adrenaline is secondary to the activation of the sympathetic nerves innervating the adrenal medulla, as it is rapid and not present in animals where the adrenal gland has been removed. Although such stress triggers adrenaline release, it also activates many other responses within the central nervous system reward system, which drives behavioral responses; while the circulating adrenaline concentration is present, it may not drive behavior. Nevertheless, adrenaline infusion alone does increase alertness and has roles in the brain, including the augmentation of memory consolidation.
Strength
Adrenaline has been implicated in feats of great strength, often occurring in times of crisis. For example, there are stories of a parent lifting part of a car when their child is trapped underneath, showcasing the ability of the body to endure under stress and highlighting the significant effects of adrenaline in unlocking extraordinary physical abilities.
| Biology and health sciences | Biochemistry and molecular biology | null |
18455584 | https://en.wikipedia.org/wiki/Thyroid%20hormones | Thyroid hormones | Thyroid hormones are two hormones produced and released by the thyroid gland, triiodothyronine (T3) and thyroxine (T4). They are tyrosine-based hormones that are primarily responsible for regulation of metabolism. T3 and T4 are partially composed of iodine, derived from food. A deficiency of iodine leads to decreased production of T3 and T4, enlarges the thyroid tissue and will cause the disease known as simple goitre.
The major form of thyroid hormone in the blood is thyroxine (T4), whose half-life of around one week is longer than that of T3. In humans, the ratio of T4 to T3 released into the blood is approximately 14:1. T4 is converted to the active T3 (three to four times more potent than T4) within cells by deiodinases (5′-deiodinase). These are further processed by decarboxylation and deiodination to produce iodothyronamine (T1a) and thyronamine (T0a). All three isoforms of the deiodinases are selenium-containing enzymes, thus dietary selenium is essential for T3 production. Calcitonin, a peptide hormone produced and secreted by the thyroid, is usually not included in the meaning of "thyroid hormone".
Thyroid hormones are one of the factors responsible for the modulation of energy expenditure. This is achieved through several mechanisms, such as mitochondrial biogenesis and adaptive thermogenesis.
American chemist Edward Calvin Kendall was responsible for the isolation of thyroxine in 1915. In 2020, levothyroxine, a manufactured form of thyroxine, was the second most commonly prescribed medication in the United States, with more than 98 million prescriptions. Levothyroxine is on the World Health Organization's List of Essential Medicines.
Function
Thyroid hormones act on nearly every cell in the body. They act to increase the basal metabolic rate, affect protein synthesis, help regulate long bone growth (synergy with growth hormone) and neural maturation, and increase the body's sensitivity to catecholamines (such as adrenaline) by permissiveness. Thyroid hormones are essential to proper development and differentiation of all cells of the human body. These hormones also regulate protein, fat, and carbohydrate metabolism, affecting how human cells use energetic compounds. They also stimulate vitamin metabolism. Numerous physiological and pathological stimuli influence thyroid hormone synthesis.
Thyroid hormones lead to heat generation in humans. However, the thyronamines function via some unknown mechanism to inhibit neuronal activity; this plays an important role in the hibernation cycles of mammals and the moulting behaviour of birds. One effect of administering the thyronamines is a severe drop in body temperature.
Medical use
Both T3 and T4 are used to treat thyroid hormone deficiency (hypothyroidism). They are both absorbed well by the stomach, so can be given orally. Levothyroxine is the chemical name of the manufactured version of T4, which is metabolised more slowly than T3 and hence usually only needs once-daily administration. Natural desiccated thyroid hormones are derived from pig thyroid glands, and are a "natural" hypothyroid treatment containing 20% T3 and traces of T2, T1 and calcitonin.
Also available are synthetic combinations of T3/T4 in different ratios (such as liotrix) and pure-T3 medications (INN: liothyronine).
Levothyroxine Sodium is usually the first course of treatment tried. Some patients feel they do better on desiccated thyroid hormones; however, this is based on anecdotal evidence and clinical trials have not shown any benefit over the biosynthetic forms. Thyroid tablets are reported to have different effects, which can be attributed to the difference in torsional angles surrounding the reactive site of the molecule.
Thyronamines have no medical usages yet, though their use has been proposed for controlled induction of hypothermia, which causes the brain to enter a protective cycle, useful in preventing damage during ischemic shock.
Synthetic thyroxine was first successfully produced by Charles Robert Harington and George Barger in 1926.
Formulations
Most people are treated with levothyroxine, or a similar synthetic thyroid hormone. Different polymorphs of the compound have different solubilities and potencies. Additionally, natural thyroid hormone supplements from the dried thyroids of animals are still available. Levothyroxine contains T4 only and is therefore largely ineffective for patients unable to convert T4 to T3. These patients may choose to take natural thyroid hormone, as it contains a mixture of T4 and T3, or alternatively supplement with a synthetic T3 treatment. In these cases, synthetic liothyronine is preferred due to the potential differences between the natural thyroid products. Some studies show that the mixed therapy is beneficial to all patients, but the addition of liothyronine carries additional side effects and the medication should be evaluated on an individual basis. Some natural thyroid hormone brands are FDA approved, but some are not. Thyroid hormones are generally well tolerated. Thyroid hormones are usually not dangerous for pregnant women or nursing mothers, but should be given under a doctor's supervision. In fact, if a woman who is hypothyroid is left untreated, her baby is at a higher risk for birth defects. When pregnant, a woman with a low-functioning thyroid will also need to increase her dosage of thyroid hormone. One exception is that thyroid hormones may aggravate heart conditions, especially in older patients; therefore, doctors may start these patients on a lower dose and work up to a larger one to avoid risk of heart attack.
Thyroid metabolism
Central
Thyroid hormones (T4 and T3) are produced by the follicular cells of the thyroid gland and are regulated by TSH made by the thyrotropes of the anterior pituitary gland. The effects of T4 in vivo are mediated via T3 (T4 is converted to T3 in target tissues). T3 is three to five times more active than T4.
T4, Thyroxine (3,5,3′,5′-tetraiodothyronine), is produced by follicular cells of the thyroid gland. It is produced from the precursor thyroglobulin (this is not the same as thyroxine-binding globulin (TBG)), which is cleaved by enzymes to produce active T4.
The steps in this process are as follows:
The Na+/I− symporter transports two sodium ions across the basement membrane of the follicular cells along with an iodide ion. This is a secondary active transporter that utilises the concentration gradient of Na+ to move I− against its concentration gradient. This is called iodide trapping. Sodium is cotransported with iodide from the basolateral side of the membrane into the cell, and then concentrated in the thyroid follicles to about thirty times its concentration in the blood.
I− is moved across the apical membrane into the colloid of the follicle by pendrin.
Thyroperoxidase (TPO) oxidizes two I− to form I2. Iodide is non-reactive, and only the more reactive iodine is required for the next step.
Iodine is converted into HOI, which iodinates the tyrosyl residues of the thyroglobulin within the colloid to form 3-monoiodotyrosyl (MIT-yl) and 3,5-diiodotyrosyl (DIT-yl) residues – introducing iodine atoms at one or both locations ortho to the hydroxyls of tyrosine. The thyroglobulin was synthesised in the ER of the follicular cell and secreted into the colloid.
TPO also converts tyrosyl, MIT-yl, and DIT-yl residues into their free radical forms. These forms attack other MIT-yl and DIT-yl residues. When a DIT-yl radical attacks a DIT, T4-yl (peptidic T4) is formed. When a MIT-yl radical attacks a DIT, T3-yl is formed. Other reactions are possible, but do not form physiologically active products.
Iodinated thyroglobulin binds megalin for endocytosis back into the cell.
Thyroid-stimulating hormone (TSH) released from the anterior pituitary (also known as the adenohypophysis) binds the TSH receptor (a Gs protein-coupled receptor) on the basolateral membrane of the cell and stimulates the endocytosis of the colloid.
The endocytosed vesicles fuse with the lysosomes of the follicular cell. The lysosomal enzymes cleave any MIT, DIT, T3, T4 as well as the inactive analogues from the iodinated thyroglobulin.
The thyroid hormones cross the follicular cell membrane towards the blood vessels by an unknown mechanism. Textbooks have stated that diffusion is the main means of transport, but recent studies indicate that monocarboxylate transporter (MCT) 8 and 10 play major roles in the efflux of the thyroid hormones from the thyroid cells.
Thyroglobulin (Tg) is a 660 kDa, dimeric protein produced by the follicular cells of the thyroid and used entirely within the thyroid gland. Thyroxine is produced by attaching iodine atoms to the ring structures of this protein's tyrosine residues; thyroxine (T4) contains four iodine atoms, while triiodothyronine (T3), otherwise identical to T4, has one less iodine atom per molecule. The thyroglobulin protein accounts for approximately half of the protein content of the thyroid gland. Each thyroglobulin molecule contains approximately 100–120 tyrosine residues, a small number of which (<20) are subject to iodination catalysed by thyroperoxidase. The same enzyme then catalyses "coupling" of one modified tyrosine with another, via a free-radical-mediated reaction, and when these iodinated bicyclic molecules are released by hydrolysis of the protein, T3 and T4 are the result. Therefore, each thyroglobulin protein molecule ultimately yields very small amounts of thyroid hormone (experimentally observed to be on the order of 5–6 molecules of either T4 or T3 per original molecule of thyroglobulin).
Hydrolysis (cleavage to individual amino acids) of the modified protein by proteases then liberates T3 and T4, as well as the non-coupled tyrosine derivatives MIT and DIT. The hormones T4 and T3 are the biologically active agents central to metabolic regulation.
Peripheral
Thyroxine is believed to be a prohormone and a reservoir for the most active and main thyroid hormone T3. T4 is converted as required in the tissues by iodothyronine deiodinase. Deficiency of deiodinase can mimic hypothyroidism due to iodine deficiency. T3 is more active than T4, though it is present in smaller quantities than T4.
Initiation of production in fetuses
Thyrotropin-releasing hormone (TRH) is released from the hypothalamus by 6–8 weeks, thyroid-stimulating hormone (TSH) secretion from the fetal pituitary is evident by 12 weeks of gestation, and fetal production of thyroxine (T4) reaches a clinically significant level at 18–20 weeks. Fetal triiodothyronine (T3) remains low (less than 15 ng/dL) until 30 weeks of gestation, and increases to 50 ng/dL at term. Fetal self-sufficiency of thyroid hormones protects the fetus against e.g. brain development abnormalities caused by maternal hypothyroidism.
Iodine deficiency
If there is a deficiency of dietary iodine, the thyroid will not be able to make thyroid hormones. The lack of thyroid hormones will lead to decreased negative feedback on the pituitary, leading to increased production of thyroid-stimulating hormone, which causes the thyroid to enlarge (the resulting medical condition is called endemic colloid goitre; see goitre). This has the effect of increasing the thyroid's ability to trap more iodide, compensating for the iodine deficiency and allowing it to produce adequate amounts of thyroid hormone.
Circulation and transport
Plasma transport
Most of the thyroid hormone circulating in the blood is bound to transport proteins, and only a very small fraction is unbound and biologically active. Therefore, measuring concentrations of free thyroid hormones is important for diagnosis, while measuring total levels can be misleading.
Thyroid hormone in the blood is usually distributed as follows:
Despite being lipophilic, T3 and T4 cross the cell membrane via carrier-mediated transport, which is ATP-dependent.
T1a and T0a are positively charged and do not cross the membrane; they are believed to function via the trace amine-associated receptor TAAR1 (TAR1, TA1), a G-protein-coupled receptor located in the cytoplasm.
Another critical diagnostic tool is measurement of the amount of thyroid-stimulating hormone (TSH) that is present.
Membrane transport
Contrary to common belief, thyroid hormones cannot traverse cell membranes in a passive manner like other lipophilic substances. The iodine in o-position makes the phenolic OH-group more acidic, resulting in a negative charge at physiological pH. However, at least 10 different active, energy-dependent and genetically regulated iodothyronine transporters have been identified in humans. They guarantee that intracellular levels of thyroid hormones are higher than in blood plasma or interstitial fluids.
Intracellular transport
Little is known about intracellular kinetics of thyroid hormones. However, recently it could be demonstrated that the crystallin CRYM binds 3,5,3′-triiodothyronine in vivo.
Mechanism of action
The thyroid hormones function via a well-studied set of nuclear receptors, termed the thyroid hormone receptors. These receptors, together with corepressor molecules, bind DNA regions called thyroid hormone response elements (TREs) near genes. This receptor-corepressor-DNA complex can block gene transcription. Triiodothyronine (T3), which is the active form of thyroxine (T4), goes on to bind to receptors. The deiodinase catalyzed reaction removes an iodine atom from the 5′ position of the outer aromatic ring of thyroxine's (T4) structure. When triiodothyronine (T3) binds a receptor, it induces a conformational change in the receptor, displacing the corepressor from the complex. This leads to recruitment of coactivator proteins and RNA polymerase, activating transcription of the gene. Although this general functional model has considerable experimental support, there remain many open questions.
More recently genetic evidence has been obtained for a second mechanism of thyroid hormone action involving one of the same nuclear receptors, TRβ, acting rapidly in the cytoplasm through the PI3K. This mechanism is conserved in all mammals but not fish or amphibians, and regulates brain development and adult metabolism. The mechanism itself parallels the actions of the nuclear receptor in the nucleus: in the absence of hormone, TRβ binds to PI3K and inhibits its activity, but when hormone binds the complex dissociates, PI3K activity increases, and the hormone bound receptor diffuses into the nucleus.
Thyroxine, iodine and apoptosis
Thyroxine and iodine stimulate the apoptosis of the cells of the larval gills, tail and fins in amphibian metamorphosis, and stimulate the evolution of their nervous system, transforming the aquatic, vegetarian tadpole into the terrestrial, carnivorous frog. In fact, the amphibian frog Xenopus laevis serves as an ideal model system for the study of the mechanisms of apoptosis.
Effects of triiodothyronine
Effects of triiodothyronine (T3) which is the metabolically active form:
Increases cardiac output
Increases heart rate
Increases ventilation rate
Increases basal metabolic rate
Potentiates the effects of catecholamines (i.e. increases sympathetic activity)
Potentiates brain development
Thickens endometrium in females
Increases catabolism of proteins and carbohydrates
Measurement
Further information: Thyroid function tests
Triiodothyronine (T3) and thyroxine (T4) can be measured as free T3 and free T4, which are indicators of their activities in the body. They can also be measured as total T3 and total T4, which depend on the amount that is bound to thyroxine-binding globulin (TBG). A related parameter is the free thyroxine index, which is total T4 multiplied by thyroid hormone uptake, which, in turn, is a measure of the unbound TBG. Additionally, thyroid disorders can be detected prenatally using advanced imaging techniques and testing fetal hormone levels.
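As a minimal sketch of the free thyroxine index arithmetic described above, the values below are hypothetical and the units and conventions for expressing the uptake term vary by laboratory.

```python
# Minimal sketch of the free thyroxine index (FTI) arithmetic described above:
# FTI = total T4 multiplied by the thyroid hormone uptake. The values below are
# hypothetical, and units and reference ranges vary by laboratory.
total_t4 = 8.0      # hypothetical total T4, mcg/dL
t_uptake = 0.30     # hypothetical thyroid hormone uptake, expressed as a fraction

fti = total_t4 * t_uptake
print(f"free thyroxine index: {fti:.2f}")
```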
Related diseases
Both excess and deficiency of thyroxine can cause disorders.
Hyperthyroidism (an example is Graves' disease) is the clinical syndrome caused by an excess of circulating free thyroxine, free triiodothyronine, or both. It is a common disorder that affects approximately 2% of women and 0.2% of men. Thyrotoxicosis is often used interchangeably with hyperthyroidism, but there are subtle differences. Although thyrotoxicosis also refers to an increase in circulating thyroid hormones, it can be caused by the intake of thyroxine tablets or by an over-active thyroid, whereas hyperthyroidism refers solely to an over-active thyroid.
Hypothyroidism (an example is Hashimoto's thyroiditis) is the case where there is a deficiency of thyroxine, triiodothyronine, or both.
Clinical depression can sometimes be caused by hypothyroidism. Some research has shown that T3 is found in the junctions of synapses, and regulates the amounts and activity of serotonin, norepinephrine, and γ-aminobutyric acid (GABA) in the brain.
Hair loss can sometimes be attributed to a malfunction of T3 and T4. The normal hair growth cycle may be affected, disrupting hair growth.
Both thyroid excess and deficiency can cause cardiovascular disorders or make preexisting conditions worse. The link between thyroid hormone excess or deficiency and conditions such as arrhythmias, heart failure, and atherosclerotic vascular disease has been established for nearly 200 years.
Abnormal thyroid function—hypo- and hyperthyroidism—can manifest as myopathy with symptoms of exercise-induced muscle fatigue, cramping, muscle pain and may include proximal weakness or muscle hypertrophy (particularly of the calves). Prolonged hypo- and hyperthyroid myopathy leads to atrophy of type II (fast-twitch/glycolytic) muscle fibres, and a predominance of type I (slow-twitch/oxidative) muscle fibres. Muscle biopsy shows abnormal muscle glycogen: high accumulation in hypothyroidism and low accumulation in hyperthyroidism. Myopathy associated with hypothyroidism includes Kocher-Debre-Semelaigne syndrome (childhood-onset), Hoffman syndrome (adult-onset), myasthenic syndrome, and atrophic form. Myopathy associated with hyperthyroidism includes thyrotoxic myopathy, thyrotoxic periodic paralysis, and Graves' ophthalmopathy. In Graves' ophthalmopathy, the proptosis is secondary to extraocular muscle (EOM) enlargement and gross expansion of orbital fat.
Preterm infants can suffer neurodevelopmental disorders due to lack of maternal thyroid hormones, at a time when their own thyroid is unable to meet their postnatal needs. Also in normal pregnancies, adequate levels of maternal thyroid hormone are vital in order to ensure thyroid hormone availability for the foetus and its developing brain. Congenital hypothyroidism occurs in 1 in 1600–3400 newborns, with most being born asymptomatic and developing related symptoms weeks after birth.
Anti-thyroid drugs
Iodine uptake against a concentration gradient is mediated by a sodium–iodine symporter and is linked to a sodium-potassium ATPase. Perchlorate and thiocyanate are drugs that can compete with iodine at this point. Compounds such as goitrin, carbimazole, methimazole, propylthiouracil can reduce thyroid hormone production by interfering with iodine oxidation.
| Biology and health sciences | Animal hormones | Biology |
18457137 | https://en.wikipedia.org/wiki/Personal%20computer | Personal computer | A personal computer, often referred to as a PC or simply computer, is a computer designed for individual use. It is typically used for tasks such as word processing, internet browsing, email, multimedia playback, and gaming. Personal computers are intended to be operated directly by an end user, rather than by a computer expert or technician. Unlike large, costly minicomputers and mainframes, personal computers are not shared by many people at the same time through time-sharing. The term home computer has also been used, primarily in the late 1970s and 1980s. The advent of personal computers and the concurrent Digital Revolution have significantly affected the lives of people.
Institutional or corporate computer owners in the 1960s had to write their own programs to do any useful work with computers. While personal computer users may develop their applications, usually these systems run commercial software, free-of-charge software ("freeware"), which is most often proprietary, or free and open-source software, which is provided in ready-to-run, or binary form. Software for personal computers is typically developed and distributed independently from the hardware or operating system manufacturers. Many personal computer users no longer need to write their programs to make any use of a personal computer, although end-user programming is still feasible. This contrasts with mobile systems, where software is often available only through a manufacturer-supported channel, and end-user program development may be discouraged by lack of support by the manufacturer.
Since the early 1990s, Microsoft operating systems (first with MS-DOS and then with Windows) and CPUs based on Intel's x86 architecture – collectively called Wintel – have dominated the personal computer market, and today the term PC normally refers to the ubiquitous Wintel platform, or to Windows PCs in general (including those running ARM chips), to the point where software for Windows is marketed as "for PC". Alternatives to Windows occupy a minority share of the market; these include the Mac platform from Apple (running the macOS operating system), and free and open-source, Unix-like operating systems, such as Linux (including the Linux-derived ChromeOS). Other notable platforms until the 1990s were the Amiga from Commodore, the Atari ST, and the PC-98 from NEC.
Terminology
The term PC is an initialism for personal computer. While the IBM Personal Computer incorporated the designation into its model name, the term originally described personal computers of any brand. In some contexts, PC is used to contrast with Mac, an Apple Macintosh computer.
Since Apple's computers were neither mainframes nor time-sharing systems, they were all personal computers but not PC (brand) computers. In 1995, a CBS segment on the growing popularity of PC reported: "For many newcomers PC stands for Pain and Confusion."
History
Origins
In the history of computing, early experimental machines could be operated by a single attendant. For example, ENIAC which became operational in 1946 could be run by a single, albeit highly trained, person. This mode pre-dated the batch programming, or time-sharing modes with multiple users connected through terminals to mainframe computers. Computers intended for laboratory, instrumentation, or engineering purposes were built, and could be operated by one person in an interactive fashion. Examples include such systems as the Bendix G15 and LGP-30 of 1956, and the Soviet MIR series of computers developed from 1965 to 1969. By the early 1970s, people in academic or research institutions had the opportunity for single-person use of a computer system in interactive mode for extended durations, although these systems would still have been too expensive to be owned by a single person.
1960s
The personal computer was made possible by major advances in semiconductor technology. In 1959, the silicon integrated circuit (IC) chip was developed by Robert Noyce at Fairchild Semiconductor, and the metal–oxide–semiconductor (MOS) transistor was developed by Mohamed Atalla and Dawon Kahng at Bell Labs. The MOS integrated circuit was commercialized by RCA in 1964, and then the silicon-gate MOS integrated circuit was developed by Federico Faggin at Fairchild in 1968. Faggin later used silicon-gate MOS technology to develop the first single-chip microprocessor, the Intel 4004, in 1971. The first microcomputers, based on microprocessors, were developed during the early 1970s. Widespread commercial availability of microprocessors, from the mid-1970s onwards, made computers cheap enough for small businesses and individuals to own.
In what was later to be called the Mother of All Demos, SRI researcher Douglas Engelbart in 1968 gave a preview of features that would later become staples of personal computers: e-mail, hypertext, word processing, video conferencing, and the mouse. The demonstration required technical support staff and a mainframe time-sharing computer that were far too costly for individual business use at the time.
1970s
Early personal computers, generally called microcomputers, were often sold in a kit form and in limited volumes, and were of interest mostly to hobbyists and technicians. Minimal programming was done with toggle switches to enter instructions, and output was provided by front panel lamps. Practical use required adding peripherals such as keyboards, computer displays, disk drives, and printers.
Micral N was the earliest commercial, non-kit microcomputer based on a microprocessor, the Intel 8008. It was built starting in 1972, and a few hundred units were sold. This had been preceded by the Datapoint 2200 in 1970, for which the Intel 8008 had been commissioned, though not accepted for use. The CPU design implemented in the Datapoint 2200 became the basis for x86 architecture used in the original IBM PC and its descendants.
In 1973, the IBM Los Gatos Scientific Center developed a portable computer prototype called SCAMP (Special Computer APL Machine Portable) based on the IBM PALM processor with a Philips compact cassette drive, small CRT, and full function keyboard. SCAMP emulated an IBM 1130 minicomputer in order to run APL/1130. In 1973, APL was generally available only on mainframe computers, and most desktop sized microcomputers such as the Wang 2200 or HP 9800 offered only BASIC. Because SCAMP was the first to emulate APL/1130 performance on a portable, single user computer, PC Magazine in 1983 designated SCAMP a "revolutionary concept" and "the world's first personal computer". This seminal, single user portable computer now resides in the Smithsonian Institution, Washington, D.C.. Successful demonstrations of the 1973 SCAMP prototype led to the IBM 5100 portable microcomputer launched in 1975 with the ability to be programmed in both APL and BASIC for engineers, analysts, statisticians, and other business problem-solvers. In the late 1960s such a machine would have been nearly as large as two desks and would have weighed about half a ton.
Another desktop portable APL machine, the MCM/70, was demonstrated in 1973 and shipped in 1974. It used the Intel 8008 processor.
A seminal step in personal computing was the 1973 Xerox Alto, developed at Xerox's Palo Alto Research Center (PARC). It had a graphical user interface (GUI) which later served as inspiration for Apple's Macintosh, and Microsoft's Windows operating system. The Alto was a demonstration project, not commercialized, as the parts were too expensive to be affordable.
Also in 1973, Hewlett-Packard introduced fully BASIC-programmable microcomputers that fit entirely on top of a desk, including a keyboard, a small one-line display, and a printer. The Wang 2200 microcomputer of 1973 had a full-size cathode ray tube (CRT) and cassette tape storage. These were generally expensive specialized computers sold for business or scientific uses.
1974 saw the introduction of what is considered by many to be the first true personal computer, the Altair 8800 created by Micro Instrumentation and Telemetry Systems (MITS). Based on the 8-bit Intel 8080 Microprocessor, the Altair is widely recognized as the spark that ignited the microcomputer revolution as the first commercially successful personal computer. The computer bus designed for the Altair was to become a de facto standard in the form of the S-100 bus, and the first programming language for the machine was Microsoft's founding product, Altair BASIC.
In 1976, Steve Jobs and Steve Wozniak sold the Apple I computer circuit board, which was fully prepared and contained about 30 chips. The Apple I differed from the other kit-style hobby computers of the era. Paul Terrell, owner of the Byte Shop, gave Jobs and Wozniak their first purchase order, for 50 Apple I computers, on the condition that the computers be assembled and tested rather than supplied as kits. Terrell wanted computers he could sell to a wide range of users, not just experienced electronics hobbyists who had the soldering skills to assemble a computer kit. Even so, the Apple I as delivered was still technically a kit computer, since it lacked a power supply, case, and keyboard when it was delivered to the Byte Shop.
The first successfully mass-marketed personal computer to be announced was the Commodore PET, revealed in January 1977; however, it was back-ordered and not available until later that year. Three months later, in April, the Apple II (usually referred to as the Apple) was announced, with the first units shipped on 10 June 1977, and the TRS-80 from Tandy Corporation / Tandy Radio Shack followed in August 1977, selling over 100,000 units during its lifetime. Together, especially in the North American market, these three machines were referred to as the "1977 trinity". Mass-market, ready-assembled computers had arrived, and allowed a wider range of people to use computers, focusing more on software applications and less on development of the processor hardware.
In 1977 the Heath company introduced personal computer kits known as Heathkits, starting with the Heathkit H8 and followed by the Heathkit H89 in late 1979. Purchasers of the Heathkit H8 received the chassis and CPU card to assemble themselves; additional hardware, such as the H8-1 memory board containing 4 KB of RAM, could also be purchased in order to run software. The Heathkit H11, released in 1978, was one of the first 16-bit personal computers; however, owing to its high retail cost of $1,295, it was discontinued in 1982.
1980s
During the early 1980s, home computers were further developed for household use, with software for personal productivity, programming, and games. They typically could be used with a television already in the home as the computer display, with low-detail blocky graphics, a limited color range, and text about 40 characters wide by 25 characters tall. Sinclair Research, a UK company, produced the ZX Series: the ZX80 (1980), the ZX81 (1981), and the ZX Spectrum; the latter was introduced in 1982 and totaled 8 million units sold. It was followed by the Commodore 64, which totaled 17 million units sold, the Galaksija (1983) introduced in Yugoslavia, and the Amstrad CPC series (464–6128).
Also in 1982, the NEC PC-98 was introduced; it became a very popular personal computer line that sold more than 18 million units. Another famous personal computer, the revolutionary Amiga 1000, was unveiled by Commodore on 23 July 1985. The Amiga 1000 featured a multitasking, windowing operating system, color graphics with a 4096-color palette, stereo sound, a Motorola 68000 CPU, 256 KB of RAM, and an 880 KB 3.5-inch disk drive, for US$1,295.
IBM's first PC was introduced on 12 August 1981, setting what became a mass-market standard for PC architecture.
In 1982, "The Computer" was named Machine of the Year by Time magazine.
Somewhat larger and more expensive systems were aimed at office and small business use. These often featured 80-column text displays but might not have had graphics or sound capabilities. These microprocessor-based systems were still less costly than time-shared mainframes or minicomputers.
Workstations were characterized by high-performance processors and graphics displays, large-capacity local disk storage, networking capability, and multitasking operating systems. Eventually, owing to the influence of the IBM PC on the personal computer market, personal computers and home computers lost any technical distinction. Business computers acquired color graphics and sound, while users of home computers and game systems came to use the same processors and operating systems as office workers. Mass-market computers had graphics capabilities and memory comparable to dedicated workstations of a few years before. Even local area networking, originally a way to allow business computers to share expensive mass storage and peripherals, became a standard feature of personal computers used at home.
An increasingly important set of uses for personal computers relied on the ability of the computer to communicate with other computer systems, allowing interchange of information. Experimental public access to a shared mainframe computer system was demonstrated as early as 1973 in the Community Memory project, but bulletin board systems and online service providers became more commonly available after 1978. Commercial Internet service providers emerged in the late 1980s, giving public access to the rapidly growing network.
In 1984, Apple Computer launched the Macintosh with an advertisement during the Super Bowl. The Macintosh was the first successful mass-market mouse-driven computer with a graphical user interface, or 'WIMP' (Windows, Icons, Menus, and Pointers) interface. Based on the Motorola 68000 microprocessor, the Macintosh included many of the Lisa's features at a price of US$2,495. It was introduced with 128 KB of RAM, and later that year a 512 KB RAM model became available. To reduce costs compared to the Lisa, the year-younger Macintosh had a simplified motherboard design, no internal hard drive, and a single 3.5-inch floppy drive. Applications that came with the Macintosh included MacPaint, a bit-mapped graphics program, and MacWrite, which demonstrated WYSIWYG word processing.
The Macintosh remained a successful personal computer for years to come, due in particular to the introduction of desktop publishing in 1985 through Apple's partnership with Adobe, which brought the LaserWriter printer and Aldus PageMaker to personal computer users. During Steve Jobs's hiatus from Apple, a number of Macintosh models, including the Macintosh Plus and Macintosh II, were released to a great degree of success. The Macintosh line of computers was IBM's major competition up until the early 1990s.
1990s
In 1991, the World Wide Web was made available for public use. The combination of powerful personal computers with high-resolution graphics and sound, the infrastructure provided by the Internet, and the standardization of access methods by Web browsers established the foundation for a significant fraction of modern life, from bus timetables to unlimited distribution of free videos to online user-edited encyclopedias.
Types
Stationary
Workstation
A workstation is a high-end personal computer designed for technical, mathematical, or scientific applications. Intended primarily to be used by one person at a time, they are commonly connected to a local area network and run multi-user operating systems. Workstations are used for tasks such as computer-aided design, drafting and modeling, computation-intensive scientific and engineering calculations, image processing, architectural modeling, and computer graphics for animation and motion picture visual effects.
Desktop computer
Before the widespread use of PCs, a computer that could fit on a desk was remarkably small, leading to the desktop nomenclature. More recently, the phrase usually indicates a particular style of computer case. Desktop computers come in a variety of styles ranging from large vertical tower cases to small models which can be tucked behind or rest directly beneath (and support) LCD monitors.
While the term desktop often refers to a computer with a vertically aligned tower case, such cases typically rest on the floor or under a desk rather than on it. The term nonetheless covers both these vertical towers and the horizontally aligned models designed to sit, literally, on top of a desk; in practice both types are labelled desktops, differing only in physical arrangement. Both styles of case hold the system's hardware components, such as the motherboard, processor, and other internal parts. Desktop computers have an external monitor with a display screen and an external keyboard, which are plugged into ports on the back of the computer case. They are popular for home and business computing because they leave space on the desk for multiple monitors.
A gaming computer is a desktop computer that generally comprises a high-performance video card, processor and RAM, to improve the speed and responsiveness of demanding video games.
An all-in-one computer (also known as single-unit PCs) is a desktop computer that combines the monitor and processor within a single unit. A separate keyboard and mouse are standard input devices, with some monitors including touchscreen capability. The processor and other working components are typically reduced in size relative to standard desktops, located behind the monitor, and configured similarly to laptops.
A nettop computer was introduced by Intel in February 2008, characterized by low cost and lean functionality. These were intended to be used with an Internet connection to run Web browsers and Internet applications.
A Home theater PC (HTPC) combines the functions of a personal computer and a digital video recorder. It is connected to a TV set or an appropriately sized computer display, and is often used as a digital photo viewer, music and video player, TV receiver, and digital video recorder. HTPCs are also referred to as media center systems or media servers. The goal is to combine many or all components of a home theater setup into one box. HTPCs can also connect to services providing on-demand movies and TV shows. HTPCs can be purchased pre-configured with the required hardware and software needed to add television programming to the PC, or can be assembled from components.
Keyboard computers are computers inside of keyboards, generally still designed to be connected to an external computer monitor or television. Examples include the Atari ST, Amstrad CPC, BBC Micro, Commodore 64, MSX, Raspberry Pi 400, and the ZX Spectrum.
Portable
Luggable
The potential utility of portable computers was apparent early on. Alan Kay described the Dynabook in 1972, but no hardware was developed. The Xerox NoteTaker was produced in a very small experimental batch around 1978. In 1975, the IBM 5100 could be fit into a transport case, making it a portable computer, but it weighed about 50 pounds. Such early portable computers were termed luggables by journalists owing to their heft.
Before the introduction of the IBM PC, portable computers consisting of a processor, display, disk drives, and keyboard in a suitcase-style housing allowed users to bring a computer home from the office or to take notes in a classroom. Examples include the Osborne 1, the Kaypro, and the Commodore SX-64. These machines were AC-powered and included a small CRT display screen. The form factor was intended to allow these systems to be taken on board an airplane as carry-on baggage, though their high power demand meant that they could not be used in flight. The integrated CRT display made for a relatively heavy package, but these machines were more portable than their contemporary desktop equivalents. Some models had standard or optional connections to drive an external video monitor, allowing a larger screen or use with video projectors.
IBM PC-compatible suitcase format computers became available soon after the introduction of the PC, with the Compaq Portable being a leading example of the type. Later models included a hard drive to give roughly equivalent performance to contemporary desktop computers.
The development of thin plasma display and LCD screens permitted a somewhat smaller form factor, called the lunchbox computer. The screen formed one side of the enclosure, with a detachable keyboard and one or two half-height floppy disk drives, mounted facing the ends of the computer. Some variations included a battery, allowing operation away from AC outlets.
Laptop
A laptop computer is designed for portability with a clamshell design, in which the keyboard and computer components occupy one panel and a hinged second panel contains a flat display screen. Closing the laptop protects the screen and keyboard during transportation. Laptops generally have a rechargeable battery, enhancing their portability. To save power, weight, and space, laptop graphics chips are in many cases integrated into the CPU or chipset and use system RAM, resulting in reduced graphics performance compared to desktop machines, which more typically have a dedicated graphics card installed. For this reason, desktop computers are usually preferred over laptops for gaming purposes.
Unlike with desktop computers, only minor internal upgrades (such as memory and hard disk drive) are feasible, owing to the limited space and power available. Laptops have the same input and output ports as desktops for connecting to external displays, mice, cameras, storage devices, and keyboards. Laptops are also somewhat more expensive than comparable desktops, as their miniaturized components cost more to produce.
Notebook computers such as the TRS-80 Model 100 and Epson HX-20 had roughly the plan dimensions of a sheet of typing paper (ANSI A or ISO A4). These machines had a keyboard with slightly reduced dimensions compared to a desktop system, and a fixed LCD display screen coplanar with the keyboard. The displays were usually small, with 8 to 16 lines of text and sometimes a line length of only 40 columns, but these machines could operate for extended times on disposable or rechargeable batteries. Although they did not usually include internal disk drives, this form factor often included a modem for telephone communication and often had provisions for external cassette or disk storage. Later, clamshell-format laptop computers with similarly small plan dimensions were also called notebooks.
A desktop replacement computer is a portable computer that provides the full capabilities of a desktop computer. Such computers are currently large laptops. This class of computers usually includes more powerful components and a larger display than generally found in smaller portable computers, and may have limited battery capacity or no battery.
Netbooks, also called mini notebooks or subnotebooks, were a subgroup of laptops suited to general computing tasks and web-based applications. Initially, the defining characteristics of netbooks were the lack of an optical disc drive, smaller size, and lower performance than full-size laptops. By mid-2009 netbooks had been offered to users "free of charge" with the purchase of an extended cellular data service contract. Ultrabooks and Chromebooks have since filled the gap left by netbooks; unlike the generic netbook name, Ultrabook and Chromebook are specifications defined by Intel and Google, respectively.
Tablet
A tablet uses a touchscreen display, which can be controlled with either a stylus or a finger. Some tablets use a hybrid or convertible design, offering a keyboard that can be removed as an attachment or a screen that can be rotated and folded directly over the keyboard. Some tablets run a desktop-PC operating system such as Windows or Linux, while others run an operating system designed primarily for tablets. Many tablet computers have USB ports to which a keyboard or mouse can be connected.
Smartphone
Smartphones are often similar to tablet computers, the difference being that smartphones always have cellular integration. They are generally smaller than tablets, and may not have a slate form factor.
Ultra-mobile PC
The ultra-mobile PC (UMPC) is a small tablet computer. It was developed by Microsoft, Intel and Samsung, among others. Current UMPCs typically feature the Windows XP, Windows Vista, Windows 7, or Linux operating system, and low-voltage Intel Atom or VIA C7-M processors.
Pocket PC
A pocket PC is a hardware specification for a handheld-sized computer (personal digital assistant, PDA) that runs the Microsoft Windows Mobile operating system. It may have the capability to run an alternative operating system like NetBSD or Linux. Pocket PCs have many of the capabilities of desktop PCs. Numerous applications are available for handhelds adhering to the Microsoft Pocket PC specification, many of which are freeware. Microsoft-compliant Pocket PCs can also be used with many other add-ons like GPS receivers, barcode readers, RFID readers and cameras.
In 2007, with the release of Windows Mobile 6, Microsoft dropped the name Pocket PC in favor of a new naming scheme: devices without an integrated phone are called Windows Mobile Classic instead of Pocket PC, while devices with an integrated phone and a touch screen are called Windows Mobile Professional.
Palmtop and handheld PCs
Palmtop PCs were miniature pocket-sized computers running DOS that first appeared in the late 1980s, typically in a clamshell form factor with a keyboard. Non-x86 devices were often called palmtop computers as well, an example being the Psion Series 3. In later years, Microsoft released a hardware specification called Handheld PC for devices running the Windows CE operating system.
Hardware
Computer hardware is a comprehensive term for all physical and tangible parts of a computer, as distinguished from the data it contains or operates on, and the software that provides instructions for the hardware to accomplish tasks. Some sub-systems of a personal computer may contain processors that run a fixed program, or firmware, such as a keyboard controller. Firmware usually is not changed by the end user of the personal computer.
Most 2010s and 2020s-era personal computers require users only to plug in the power supply, monitor, and other cables. A typical desktop computer consists of a computer case (or tower), a metal chassis that holds the power supply, motherboard, a storage device such as a hard disk drive or solid-state drive, and often an optical disc drive. Most towers have empty space where users can add additional components. External devices such as a computer monitor or visual display unit, keyboard, and a pointing device (mouse) are usually found in a personal computer.
The motherboard connects the processor, memory, and peripheral devices together. The RAM, graphics card, and processor are in most cases mounted directly onto the motherboard: the central processing unit (microprocessor chip) plugs into a CPU socket, while the RAM modules plug into corresponding memory sockets. Some motherboards have the video display adapter, sound hardware, and other peripherals integrated onto the motherboard, while others use expansion slots for graphics cards, network cards, or other input/output devices. The graphics card or sound card may employ a breakout box to keep the analog parts away from the electromagnetic radiation inside the computer case. Disk drives, which provide mass storage, are connected to the motherboard with one cable and to the power supply through another cable. Usually, disk drives are mounted in the same case as the motherboard; expansion chassis are also made for additional disk storage.
For large amounts of data, a tape drive can be used or extra hard disks can be put together in an external case. The keyboard and the mouse are external devices plugged into the computer through connectors on an I/O panel on the back of the computer case. The monitor is also connected to the input/output (I/O) panel, either through an onboard port on the motherboard, or a port on the graphics card. Capabilities of the personal computer's hardware can sometimes be extended by the addition of expansion cards connected via an expansion bus. Standard peripheral buses often used for adding expansion cards in personal computers include PCI, PCI Express (PCIe), and AGP (a high-speed PCI bus dedicated to graphics adapters, found in older computers). Most modern personal computers have multiple physical PCI Express expansion slots, with some having PCI slots as well.
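As an illustrative aside (not part of the original article text), the short Python sketch below shows one way the devices attached to the PCI/PCIe expansion bus can be enumerated in software. It assumes a Linux system, where the kernel publishes each device's vendor and device identifiers under /sys/bus/pci/devices; other operating systems expose the bus differently.

# Minimal sketch: list PCI/PCIe devices by reading the Linux sysfs tree.
# Assumes a Linux host; other operating systems expose the bus differently.
import os

PCI_ROOT = "/sys/bus/pci/devices"   # kernel-provided view of the PCI bus

def list_pci_devices(root=PCI_ROOT):
    devices = []
    if not os.path.isdir(root):      # non-Linux system or no PCI bus exposed
        return devices
    for slot in sorted(os.listdir(root)):
        entry = {"slot": slot}
        for attr in ("vendor", "device", "class"):
            try:
                with open(os.path.join(root, slot, attr)) as f:
                    entry[attr] = f.read().strip()   # hex IDs such as 0x8086
            except OSError:
                entry[attr] = None
        devices.append(entry)
    return devices

if __name__ == "__main__":
    for dev in list_pci_devices():
        print(dev["slot"], dev["vendor"], dev["device"])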
A peripheral is "a device connected to a computer to provide communication (such as input and output) or auxiliary functions (such as additional storage)". Peripherals generally connect to the computer through USB ports or other inputs on the I/O panel. USB flash drives provide portable storage using flash memory, allowing users to access their files from any computer. Memory cards also provide portable storage and are commonly used in other electronics such as mobile phones and digital cameras; the information stored on these cards can be accessed using a memory card reader to transfer data between devices. Webcams, which are either built into computer hardware or connected via USB, are video cameras that record video in real time to be saved to the computer or streamed over the Internet. Game controllers can be plugged in via USB and used as input devices for video games as an alternative to the keyboard and mouse. Headphones and speakers can be connected via USB or through an auxiliary port on the I/O panel and allow users to listen to audio on their computer, although speakers may require an additional power source to operate. Microphones can be connected through an audio input port on the I/O panel and allow the computer to convert sound into an electrical signal to be used or transmitted by the computer.
Software
Computer software is any kind of computer program, procedure, or documentation that performs some task on a computer system. The term includes application software such as word processors that perform productive tasks for users, system software such as operating systems that interface with computer hardware to provide the necessary services for application software, and middleware that controls and co-ordinates distributed systems.
Software applications are common for word processing, Internet browsing, Internet faxing, e-mail and other digital messaging, multimedia playback, playing computer games, and computer programming. The user may have significant knowledge of the operating environment and application programs, but is not necessarily interested in programming, nor even able to write programs for the computer. Therefore, most software written primarily for personal computers tends to be designed with simplicity of use, or user-friendliness, in mind. However, the software industry continuously provides a wide range of new products for use in personal computers, targeted at both expert and non-expert users.
Operating system
An operating system (OS) manages computer resources and provides programmers with an interface used to access those resources. An operating system processes system data and user input, and responds by allocating and managing tasks and internal system resources as a service to users and programs of the system. An operating system performs basic tasks such as controlling and allocating memory, prioritizing system requests, controlling input/output devices, facilitating computer networking, and managing files.
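As a hedged illustration (not drawn from the article), the few lines of Python below show a user program relying on exactly these operating-system services: file management, resource accounting, and process identification. Only the Python standard library is used, and the calls are generic rather than tied to any particular operating system.

# Minimal sketch of an application leaning on services provided by the OS.
import os
import shutil
import tempfile

# File management: ask the OS to create, write, and remove a temporary file.
with tempfile.NamedTemporaryFile(mode="w", delete=False) as f:
    f.write("hello from user space\n")
    temp_path = f.name
print("created:", temp_path)
os.remove(temp_path)                     # the OS reclaims the storage

# Resource accounting: the OS reports processor count and free disk space.
print("logical CPUs:", os.cpu_count())
print("free disk (GiB):", shutil.disk_usage("/").free // 2**30)

# Process management: the OS assigns and reports a process identifier.
print("this process id:", os.getpid())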
Common contemporary desktop operating systems are Microsoft Windows, macOS, Linux, Solaris and FreeBSD. Windows, macOS, and Linux all have server and personal variants. With the exception of Microsoft Windows, the designs of each of them were inspired by or directly inherited from the Unix operating system.
Early personal computers used operating systems that supported command line interaction, using an alphanumeric display and keyboard. The user had to remember a large range of commands to, for example, open a file for editing or to move text from one place to another. Starting in the early 1960s, the advantages of a graphical user interface began to be explored, but widespread adoption required lower-cost graphical display equipment. By 1984, mass-market computer systems using graphical user interfaces were available; by the turn of the 21st century, text-mode operating systems were no longer a significant fraction of the personal computer market.
Applications
Generally, a computer user uses application software to carry out a specific task. System software supports applications and provides common services such as memory management, network connectivity and device drivers, all of which may be used by applications but are not directly of interest to the end user. A simplified analogy in the world of hardware would be the relationship of an electric light bulb (an application) to an electric power generation plant (a system): the power plant merely generates electricity, not itself of any real use until harnessed to an application like the electric light that performs a service that benefits the user.
Typical examples of software applications are word processors, spreadsheets, and media players. Multiple applications bundled together as a package are sometimes referred to as an application suite. Microsoft Office and LibreOffice, which bundle together a word processor, a spreadsheet, and several other discrete applications, are typical examples. The separate applications in a suite usually have a user interface that has some commonality making it easier for the user to learn and use each application. Often, they may have some capability to interact with each other in ways beneficial to the user; for example, a spreadsheet might be able to be embedded in a word processor document even though it had been created in the separate spreadsheet application.
End-user development tailors systems to meet the user's specific needs. User-written software includes spreadsheet templates, word processor macros, scientific simulations, graphics and animation scripts; even email filters are a kind of user software. Users create this software themselves and often overlook how important it is.
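For instance, a user-written email filter can be only a few lines long; the sketch below is a hypothetical example, with an invented folder layout and sender address, that moves saved messages from one sender into a separate folder.

# Hypothetical user-written email filter: file saved messages from a given
# sender into their own folder. Paths and the address are illustrative only.
import email
import pathlib
import shutil

INBOX = pathlib.Path("mail/inbox")
TARGET = pathlib.Path("mail/newsletters")
SENDER = "newsletters@example.com"       # assumed address for illustration

TARGET.mkdir(parents=True, exist_ok=True)
for msg_file in INBOX.glob("*.eml"):
    msg = email.message_from_bytes(msg_file.read_bytes())
    if SENDER in (msg.get("From") or ""):
        shutil.move(str(msg_file), str(TARGET / msg_file.name))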
Gaming
PC gaming is popular among the high-end PC market. According to an April 2018 market analysis by Newzoo, PC gaming was the third-largest gaming sector behind console and mobile gaming, with a 24% share of the entire market. The market for PC gaming continues to grow and was expected to generate $32.3 billion in revenue in 2021. PC gaming is at the forefront of competitive gaming, known as esports, with games such as League of Legends, Valorant, and Counter-Strike: Global Offensive leading an industry that was expected to surpass a billion dollars in revenue in 2019. According to a December 2023 market analysis by Visual Capitalist, the PC gaming sector was the second-largest category across all platforms as of 2022, valued at US$45 billion, having surpassed console market revenue by 2020.
There are multiple game distributors; players are able to purchase games in person at retail stores as well as digitally. Some large digital game distributors are Epic Games, Valve Corporation, Electronic Arts, and Ubisoft. Such distributors make many games purchasable and accessible to users. Although some distributors sell only games created by their own company, many games and franchises are available on multiple distributor platforms. Some multiplayer PC games are also cross-platform, allowing PC players to play with players on other platforms such as consoles. Some titles on distributor platforms act as emulators for older games that may not otherwise be supported by the player's current device, whether because they were locked to another platform or are no longer supported by the PC's operating system. A wide range of video game genres is available on each distributor platform, including first-person shooters, MMO games, and adventure games. Many games, frequently free-to-play games, offer microtransactions, which can enhance gameplay or personalize a player's characters. Some games, such as The Sims, allow players to purchase additional game packs in order to gain access to new gameplay.
Sales
Market share
In 2001, 125 million personal computers were shipped, in comparison to 48,000 in 1977. More than 500 million personal computers were in use in 2002, and one billion personal computers had been sold worldwide from the mid-1970s up to that time. Of the latter figure, 75% were professional or work related, while the rest were sold for personal or home use. About 81.5% of personal computers shipped had been desktop computers, 16.4% laptops and 2.1% servers. The United States had received 38.8% (394 million) of the computers shipped, Europe 25%, and 11.7% had gone to the Asia-Pacific region, the fastest-growing market as of 2002. The second billion was expected to be sold by 2008. Almost half of all households in Western Europe had a personal computer, and a computer could be found in 40% of homes in the United Kingdom, compared with only 13% in 1985.
The global personal computer shipments were 350.9 million units in 2010, 308.3 million units in 2009, and 302.2 million units in 2008.
The shipments were 264 million units in the year 2007, according to iSuppli, up 11.2% from 239 million in 2006. In 2004, the global shipments were 183 million units, an 11.6% increase over 2003. In 2003, 152.6 million computers were shipped, at an estimated value of $175 billion. In 2002, 136.7 million PCs were shipped, at an estimated value of $175 billion. In 2000, 140.2 million personal computers were shipped, at an estimated value of $226 billion. Worldwide shipments of personal computers surpassed the 100-million mark in 1999, growing to 113.5 million units from 93.3 million units in 1998. In 1999, Asia had 14.1 million units shipped.
As of June 2008, the number of personal computers in use worldwide hit one billion, while another billion is expected to be reached by 2014. Mature markets like the United States, Western Europe and Japan accounted for 58% of the worldwide installed PCs. The emerging markets were expected to double their installed PCs by 2012 and to take 70% of the second billion PCs. About 180 million computers (16% of the existing installed base) were expected to be replaced and 35 million to be dumped into landfill in 2008. The whole installed base grew 12% annually.
Based on International Data Corporation (IDC) data for Q2 2011, China surpassed the United States in PC shipments for the first time, with 18.5 million and 17.7 million units shipped respectively. This trend reflects the rise of emerging markets as well as the relative stagnation of mature regions.
In the developed world, vendors traditionally kept adding functions to maintain high prices for personal computers. However, since the introduction of the One Laptop per Child foundation and its low-cost XO-1 laptop, the computing industry has also begun to pursue lower prices. Although introduced only one year earlier, 14 million netbooks were sold in 2008. Besides the regular computer manufacturers, companies making especially rugged versions of computers have sprung up, offering alternatives for people operating their machines in extreme weather or environments.
In 2011, the consulting firm Deloitte predicted that smartphones and tablet computers would surpass PCs in sales (as has happened since 2012). As of 2013, worldwide sales of PCs had begun to fall as many consumers moved to tablets and smartphones. Sales of 90.3 million units in the fourth quarter of 2012 represented a 4.9% decline from sales in the fourth quarter of 2011. Global PC sales fell sharply in the first quarter of 2013, according to IDC data. The 14% year-over-year decline was the largest on record since the firm began tracking in 1994, and double what analysts had been expecting. The decline of Q2 2013 PC shipments marked the fifth straight quarter of falling sales. "This is horrific news for PCs," remarked an analyst. "It's all about mobile computing now. We have definitely reached the tipping point." Data from Gartner showed a similar decline for the same time period. China's Lenovo Group bucked the general trend, as strong sales to first-time buyers in the developing world allowed the company's sales to stay flat overall. Windows 8, which was designed to look similar to tablet and smartphone software, was cited as a contributing factor in the decline of new PC sales. "Unfortunately, it seems clear that the Windows 8 launch not only didn't provide a positive boost to the PC market, but appears to have slowed the market," said IDC Vice President Bob O'Donnell.
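As a quick back-of-the-envelope check of the quoted decline (an arithmetic illustration, not a figure reported by the sources), 90.3 million units representing a 4.9% fall implies a fourth-quarter 2011 volume of roughly 90.3 / (1 - 0.049) ≈ 95.0 million units.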
In August 2013, Credit Suisse published research findings that attributed around 75% of the operating profit share of the PC industry to Microsoft (operating system) and Intel (semiconductors). According to IDC, PC shipments dropped by 9.8% in 2013, the greatest drop ever, in line with consumers' shift toward mobile devices.
In the second quarter of 2018, PC sales grew for the first time since the first quarter of 2012. According to research firm Gartner, the growth mainly came from the business market while the consumer market experienced decline.
In 2020, as a result of the COVID-19 pandemic, with more people working and learning from home, PC sales grew by 26.1% compared to the previous year, according to IDC. According to Canalys, 2020 saw the highest growth rate for the PC market since 2011.
Average selling price
Selling prices of personal computers steadily declined due to lower costs of production and manufacture, while the capabilities of computers increased. In 1975, an Altair kit sold for around only , but required customers to solder components into circuit boards; peripherals required to interact with the system in alphanumeric form instead of blinking lights would add another , and the resultant system was of use only to hobbyists.
At their introduction in 1981, the price of the Osborne 1 and its competitor Kaypro was considered an attractive price point; these systems had text-only displays and only floppy disks for storage. By 1982, Michael Dell observed that a personal computer system selling at retail for about was made of components that cost the dealer about ; typical gross margin on a computer unit was around . The total value of personal computer purchases in the US in 1983 was about , comparable to total sales of pet food. By late 1998, the average selling price of personal computer systems in the United States had dropped below .
For Microsoft Windows systems, the average selling price (ASP) showed a decline in 2008/2009, possibly due to low-cost netbooks, drawing for desktop computers and $689 for laptops at U.S. retail in August 2008. In 2009, ASP had further fallen to for desktops and to for notebooks by January and to in February. According to research firm NPD, the average selling price of all Windows portable PCs has fallen from in October 2008 to in October 2009.
Environmental impact
External costs of environmental impact are not fully included in the selling price of personal computers.
Personal computers have become a large contributor to the 50 million tons of discarded electronic waste generated annually, according to the United Nations Environment Programme. To address the electronic waste issue affecting developing countries and the environment, extended producer responsibility (EPR) acts have been implemented in various countries and states. In the absence of comprehensive national legislation or regulation on the export and import of electronic waste, the Silicon Valley Toxics Coalition and BAN (Basel Action Network) teamed up with electronic recyclers in the US and Canada to create an e-steward program for the orderly disposal of electronic waste. Some organizations oppose EPR regulation, and claim that manufacturers naturally move toward reduced material and energy use.
| Technology | Computer hardware | null |
18457910 | https://en.wikipedia.org/wiki/Mollusc%20shell | Mollusc shell | The mollusc (or mollusk) shell is typically a calcareous exoskeleton which encloses, supports and protects the soft parts of an animal in the phylum Mollusca, which includes snails, clams, tusk shells, and several other classes. Not all shelled molluscs live in the sea; many live on the land and in freshwater.
The ancestral mollusc is thought to have had a shell, but this has subsequently been lost or reduced in some families, such as the squid, octopus, and some smaller groups such as the caudofoveata and solenogastres. Today, over 100,000 living species bear a shell; there is some dispute as to whether these shell-bearing molluscs form a monophyletic group (conchifera) or whether shell-less molluscs are interleaved into their family tree.
Malacology, the scientific study of molluscs as living organisms, has a branch devoted to the study of shells, and this is called conchology—although these terms used to be, and to a minor extent still are, used interchangeably, even by scientists (this is more common in Europe).
Within some species of molluscs, there is often a wide degree of variation in the exact shape, pattern, ornamentation, and color of the shell.
Formation
A mollusc shell is formed, repaired and maintained by a part of the anatomy called the mantle. Any injuries to or abnormal conditions of the mantle are usually reflected in the shape and form and even color of the shell. When the animal encounters harsh conditions that limit its food supply, or otherwise cause it to become dormant for a while, the mantle often ceases to produce the shell substance. When conditions improve again and the mantle resumes its task, a "growth line" is produced.
The mantle edge secretes a shell which has two components. The organic constituent is mainly made up of polysaccharides and glycoproteins; its composition may vary widely: some molluscs employ a wide range of chitin-control genes to create their matrix, whereas others express just one, suggesting that the role of chitin in the shell framework is highly variable; it may even be absent in monoplacophora. This organic framework controls the formation of calcium carbonate crystals (never phosphate, with the questionable exception of Cobcrephora), and dictates when and where crystals start and stop growing, and how fast they expand; it even controls the polymorph of the crystal deposited, controlling positioning and elongation of crystals and preventing their growth where appropriate.
The shell formation requires certain biological machinery. The shell is deposited within a small compartment, the extrapallial space, which is sealed from the environment by the periostracum, a leathery outer layer around the rim of the shell, where growth occurs. This caps off the extrapallial space, which is bounded on its other surfaces by the existing shell and the mantle. The periostracum acts as a framework from which the outer layer of carbonate can be suspended, but also, in sealing the compartment, allows the accumulation of ions in concentrations sufficient for crystallization to occur. The accumulation of ions is driven by ion pumps packed within the calcifying epithelium. Calcium ions are obtained from the organism's environment through the gills, gut and epithelium, transported by the haemolymph ("blood") to the calcifying epithelium, and stored as granules within or in-between cells ready to be dissolved and pumped into the extrapallial space when they are required. The organic matrix forms the scaffold that directs crystallization, and the deposition and rate of crystals is also controlled by hormones produced by the mollusc. Because the extrapallial space is supersaturated, the matrix could be thought of as impeding, rather than encouraging, carbonate deposition; although it does act as a nucleating point for the crystals and controls their shape, orientation and polymorph, it also terminates their growth once they reach the necessary size. Nucleation is endoepithelial in Neopilina and Nautilus, but exoepithelial in the bivalves and gastropods.
The formation of the shell involves a number of genes and transcription factors. On the whole, the transcription factors and signalling genes are deeply conserved, but the proteins in the secretome are highly derived and rapidly evolving. engrailed serves to demark the edge of the shell field; dpp controls the shape of the shell, and Hox1 and Hox4 have been implicated in the onset of mineralization. In gastropod embryos, Hox1 is expressed where the shell is being accreted; however no association has been observed between Hox genes and cephalopod shell formation. Perlucin increases the rate at which calcium carbonate precipitates to form a shell when in saturated seawater; this protein is from the same group of proteins (C-type lectins) as those responsible for the formation of eggshell and pancreatic stone crystals, but the role of C-type lectins in mineralization is unclear. Perlucin operates in association with Perlustrin, a smaller relative of lustrin A, a protein responsible for the elasticity of organic layers that makes nacre so resistant to cracking. Lustrin A bears remarkable structural similarity to the proteins involved in mineralization in diatoms – even though diatoms use silica, not calcite, to form their tests!
Development
The shell-secreting area is differentiated very early in embryonic development. An area of the ectoderm thickens, then invaginates to become a "shell gland". The shape of this gland is tied to the form of the adult shell; in gastropods, it is a simple pit, whereas in bivalves, it forms a groove which will eventually become the hinge line between the two shells, where they are connected by a ligament. The gland subsequently evaginates in molluscs that produce an external shell. Whilst invaginated, a periostracum - which will form a scaffold for the developing shell - is formed around the opening of the invagination, allowing the deposition of the shell when the gland is everted. A wide range of enzymes are expressed during the formation of the shell, including carbonic anhydrase, alkaline phosphatase, and DOPA-oxidase (tyrosinase)/peroxidase.
The form of the molluscan shell is constrained by the organism's ecology. In molluscs whose ecology changes from the larval to adult form, the morphology of the shell also undergoes a pronounced modification at metamorphosis. The larval shell may have a completely different mineralogy to the adult conch, perhaps formed from amorphous calcite as opposed to an aragonite adult conch.
In those shelled molluscs that have indeterminate growth, the shell grows steadily over the lifetime of the mollusc by the addition of calcium carbonate to the leading edge or opening. Thus the shell gradually becomes longer and wider, in an increasing spiral shape, to better accommodate the growing animal inside. The shell thickens as it grows, so that it stays proportionately strong for its size.
Secondary loss
The loss of a shell in the adult form of some gastropods is achieved by the discarding of the larval shell; in other gastropods and in cephalopods, the shell is lost or demineralized by the resorption of its carbonate component by the mantle tissue.
Shell proteins
Hundreds of soluble and insoluble proteins control shell formation. They are secreted into the extrapallial space by the mantle, which also secretes the glycoproteins, proteoglycans, polysaccharides and chitin that make up the organic shell matrix. Insoluble proteins tend to be thought of as playing a more important/major role in crystallization control. The organic matrix of shells tends to consist of β-chitin and silk fibroin. Perlucin encourages carbonate deposition, and is found at the interface of the chitinous and aragonitic layer in some shells. An acidic shell matrix appears to be essential to shell formation, in the cephalopods at least; the matrix in the non-mineralized squid gladius is basic.
In oysters and potentially most molluscs, the nacreous layer has an organic framework of the protein MSI60, which has a structure a little like spider silk and forms sheets; the prismatic layer uses MSI31 to construct its framework. This too forms beta-pleated sheets. Since acidic amino acids, such as aspartic acid and glutamic acid, are important mediators of biomineralization, shell proteins tend to be rich in these amino acids. Aspartic acid, which can make up up to 50% of shell framework proteins, is most abundant in calcitic layers, and also heavily present in aragonitic layers. Proteins with high proportions of glutamic acid are usually associated with amorphous calcium carbonate.
The soluble component of the shell matrix acts to inhibit crystallization when in its soluble form, but when it attaches to an insoluble substrate, it permits the nucleation of crystals. By switching from a dissolved to an attached form and back again, the proteins can produce bursts of growth, producing the brick-wall structure of the shell.
It may be possible to use shell protein information in gastropod systematics, e.g. to discriminate species level diversity, but methods need further development.
Chemistry
The formation of a shell in molluscs appears to be related to the secretion of ammonia, which originates from urea. The presence of an ammonium ion raises the pH of the extrapallial fluid, favouring the deposition of calcium carbonate. This mechanism has been proposed not only for molluscs, but also for other unrelated mineralizing lineages.
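Written out as standard aqueous equilibria (added here for clarity; the source does not give the equations explicitly), the proposed mechanism is that urea hydrolysis releases ammonia, ammonia consumes protons and raises the pH, and the carbonate so favoured precipitates with calcium:

CO(NH2)2 + H2O → 2 NH3 + CO2
NH3 + H2O ⇌ NH4+ + OH−
HCO3− ⇌ CO3^2− + H+ (shifted toward carbonate as the pH rises)
Ca2+ + CO3^2− → CaCO3 (s)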
Structure
The calcium carbonate layers in a shell are generally of two types: an outer, chalk-like prismatic layer and an inner pearly, lamellar or nacreous layer. The layers usually incorporate a substance called conchiolin, often in order to help bind the calcium carbonate crystals together. Conchiolin is composed largely of quinone-tanned proteins.
The periostracum and prismatic layer are secreted by a marginal band of cells, so that the shell grows at its outer edge. Conversely, the nacreous layer is derived from the main surface of the mantle.
Some shells contain pigments which are incorporated into the structure. This is what accounts for the striking colors and patterns that can be seen in some species of seashells, and the shells of some tropical land snails. These shell pigments sometimes include compounds such as pyrroles and porphyrins.
Shells are almost always composed of polymorphs of calcium carbonate - either calcite or aragonite. In many cases, such as the shells of many of the marine gastropods, different layers of the shell are composed of calcite and aragonite. In a few species which dwell near hydrothermal vents, iron sulfide is used to construct the shell. Phosphate is never utilised by molluscs, with the exception of Cobcrephora, whose molluscan affinity is uncertain.
Shells are composite materials of calcium carbonate (found either as calcite or aragonite) and organic macromolecules (mainly proteins and polysaccharides). Shells can have numerous ultrastructural motifs, the most common being crossed-lamellar (aragonite), prismatic (aragonite or calcite), homogeneous (aragonite), foliated (aragonite) and nacre (aragonite). Although not the most common, nacre is the most studied type of layer.
Size
In most shelled molluscs, the shell is large enough for all of the soft parts to be retracted inside when necessary, for protection from predation or from desiccation. However, there are many species of gastropod mollusc in which the shell is somewhat reduced or considerably reduced, such that it offers some degree of protection only to the visceral mass, but is not large enough to allow the retraction of the other soft parts. This is particularly common in the opisthobranchs and in some of the pulmonates, for example in the semi-slugs.
Some gastropods have no shell at all, or only an internal shell or internal calcareous granules, and these species are often known as slugs. Semi-slugs are pulmonate slugs with a greatly reduced external shell which is in some cases partly covered by the mantle.
Shape
The shape of the molluscan shell is controlled both by transcription factors (such as engrailed and decapentaplegic) and by developmental rate. The simplification of a shell form is thought to be relatively easily evolved, and many gastropod lineages have independently lost the complex coiled shape. However, re-gaining the coiling requires many morphological modifications and is much rarer. Despite this, it can still be accomplished; it is known from one lineage that was uncoiled for at least 20 million years, before modifying its developmental timing to restore the coiled morphology.
In bivalves at least, the shape does change through growth, but the pattern of growth is constant. At each point around the aperture of the shell, the rate of growth remains constant. This results in different areas growing at different rates, and thus a coiling of the shell and a change in its shape - its convexity, and the shape of the opening - in a predictable and consistent fashion.
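A minimal numerical sketch (an assumption-laden toy model in the spirit of classic coiling simulations, not a method described in the article) reproduces this behaviour: if every point on the aperture is advanced each increment by an amount proportional to its distance from the coiling axis, so that each point has its own fixed relative growth rate, the aperture keeps a constant shape while its margin traces a logarithmic spiral. All parameter values below are illustrative assumptions.

# Toy model of accretionary shell growth: each aperture point rotates about
# the coiling axis and expands by a fixed factor per growth increment, so the
# outline keeps its shape and traces a logarithmic spiral.
import math

N_POINTS = 8          # points sampled around the aperture
STEPS = 50            # growth increments
ROTATION = 0.15       # radians the aperture rotates per increment (assumed)
EXPANSION = 1.04      # whorl expansion factor per increment (assumed)

# Start with a small circular aperture offset from the coiling axis.
aperture = [(1.0 + 0.2 * math.cos(2 * math.pi * k / N_POINTS),
             0.2 * math.sin(2 * math.pi * k / N_POINTS))
            for k in range(N_POINTS)]

outline = [list(aperture)]
for _ in range(STEPS):
    new_ap = []
    for (x, y) in outline[-1]:
        # rotate about the axis, then scale: a constant relative growth rate
        xr = x * math.cos(ROTATION) - y * math.sin(ROTATION)
        yr = x * math.sin(ROTATION) + y * math.cos(ROTATION)
        new_ap.append((EXPANSION * xr, EXPANSION * yr))
    outline.append(new_ap)

# Any aperture point's distance from the axis grows as EXPANSION**step,
# i.e. a logarithmic spiral; print the first point's radius every 10 steps.
for step in range(0, STEPS + 1, 10):
    x, y = outline[step][0]
    print(step, round(math.hypot(x, y), 3))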
The shape of the shell has an environmental as well as a genetic component; clones of gastropods can exhibit different shell morphologies. Indeed, intra-species variation can be many times larger than inter-species variation.
A number of terms are used to describe molluscan shell shape; in the univalved molluscs, endogastric shells coil backwards (away from the head), whereas exogastric shells coil forwards; the equivalent terms in bivalved molluscs are opisthogyrate and prosogyrate respectively.
Nacre
Nacre, commonly known as mother of pearl, forms the inner layer of the shell structure in some groups of gastropod and bivalve molluscs, mostly in the more ancient families such as top snails (Trochidae), and pearl oysters (Pteriidae). Like the other calcareous layers of the shell, the nacre is created by the epithelial cells (formed by the germ layer ectoderm) of the mantle tissue.
However, nacre does not seem to represent a modification of other shell types, as it uses a distinct set of proteins.
Evolution
The fossil record shows that all molluscan classes evolved some 500 million years ago from a shelled ancestor looking something like a modern monoplacophoran, and that modifications of the shell form ultimately led to the formation of new classes and lifestyles. However, a growing body of molecular and biological data indicate that at least certain shell features have evolved many times, independently. The nacreous layer of shells is a complex structure, but rather than being difficult to evolve, it has in fact arisen many times convergently. The genes used to control its formation vary greatly between taxa: under 10% of the (non-housekeeping) genes expressed in the shells that produce gastropod nacre are also found in the equivalent shells of bivalves: and most of these shared genes are also found in mineralizing organs in the deuterostome lineage. The independent origins of this trait are further supported by crystallographic differences between clades: the orientation of the axes of the deposited aragonite 'bricks' that make up the nacreous layer is different in each of the monoplacophora, gastropods and bivalves.
Mollusc shells (especially those formed by marine species) are very durable and outlast the otherwise soft-bodied animals that produce them by a very long time (sometimes thousands of years even without being fossilized). Most shells of marine molluscs fossilize rather easily, and fossil mollusc shells date all the way back to the Cambrian period. Large amounts of shell sometimes form sediment, and over a geological time span can become compressed into limestone deposits.
Most of the fossil record of molluscs consists of their shells, since the shell is often the only mineralised part of a mollusc (however, see also Aptychus and operculum). The shells are usually preserved as calcium carbonate – usually any aragonite is pseudomorphed with calcite. Aragonite can be protected from recrystallization if water is kept away by carbonaceous material, but this did not accumulate in sufficient quantity until the Carboniferous; consequently aragonite older than the Carboniferous is practically unknown, but the original crystal structure can sometimes be deduced in fortunate circumstances, such as if an alga closely encrusts the surface of a shell, or if a phosphatic mould quickly forms during diagenesis.
The shell-less aplacophora have a chitinous cuticle that has been likened to the shell framework; it has been suggested that tanning of this cuticle, in conjunction with the expression of additional proteins, could have set the evolutionary stage for the secretion of a calcareous shell in an aplacophoran-like ancestral mollusc.
The molluscan shell has been internalized in a number of lineages, including the coleoid cephalopods and many gastropod lineages. Detorsion of gastropods results in an internal shell, and can be triggered by relatively minor developmental modifications such as those induced by exposure to high platinum concentrations.
Pattern formation
The pattern formation processes in mollusc shells have been modeled successfully using one-dimensional reaction–diffusion systems, in particular the Gierer-Meinhardt system which leans heavily on the Turing model.
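The sketch below is a minimal one-dimensional Gierer-Meinhardt activator-inhibitor simulation; the grid size, time step, and rate constants are illustrative assumptions rather than values from any published shell-pigmentation model, and a mild saturation term is included to keep the activator spikes bounded. Starting from small random perturbations, the activator concentration along the growing shell edge develops a regularly spaced set of peaks of the kind used to explain pigmentation stripes.

# Minimal 1-D Gierer-Meinhardt activator(a)-inhibitor(h) simulation on a ring,
# integrated with explicit Euler finite differences. Parameter values are
# illustrative assumptions only.
import numpy as np

N = 200                 # grid cells along the growing shell edge
DT, STEPS = 0.01, 10000
DA, DH = 0.5, 20.0      # diffusion coefficients: inhibitor spreads much farther
RHO = 1.0               # production strength
MU_A, MU_H = 1.0, 1.2   # decay rates (inhibitor must decay faster than activator)
KAPPA = 0.01            # saturation of activator autocatalysis

rng = np.random.default_rng(0)
a = 1.0 + 0.01 * rng.random(N)      # activator with small random perturbation
h = np.ones(N)                      # inhibitor

def lap(u):
    # periodic 1-D Laplacian with unit cell spacing
    return np.roll(u, 1) - 2.0 * u + np.roll(u, -1)

for _ in range(STEPS):
    # self-enhancing activator production, slowed by the long-range inhibitor
    auto = RHO * a * a / (h * (1.0 + KAPPA * a * a))
    a_new = a + DT * (auto - MU_A * a + DA * lap(a))
    h_new = h + DT * (RHO * a * a - MU_H * h + DH * lap(h))
    a, h = a_new, h_new

# Report the spaced activator peaks that have formed along the edge.
is_peak = (a > np.roll(a, 1)) & (a > np.roll(a, -1)) & (a > 1.5 * a.mean())
print("activator range:", round(float(a.min()), 2), "to", round(float(a.max()), 2))
print("number of activator peaks:", int(is_peak.sum()))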
Varieties
Monoplacophora
The nacreous layer of monoplacophoran shells appears to have undergone some modification. Whilst normal nacre, and indeed part of the nacreous layer of one monoplacophoran species (Veleropilina zografi), consists of "brick-like" crystals of aragonite, in monoplacophora these bricks are more like layered sheets. The c-axis is perpendicular to the shell wall, and the a-axis parallel to the growth direction. This foliated aragonite is presumed to have evolved from the nacreous layer, with which it has historically been confused, but represents a novelty within the molluscs.
Chitons
Shells of chitons are made up of eight overlapping calcareous valves, surrounded by a girdle.
Gastropods
In some marine genera, during the course of normal growth the animal undergoes periodic resting stages where the shell does not increase in overall size, but a greatly thickened and strengthened lip is produced instead. When these structures are formed repeatedly with normal growth between the stages, evidence of this pattern of growth is visible on the outside of the shell, and these unusual thickened vertical areas are called varices, singular "varix". Varices are typical in some marine gastropod families, including the Bursidae, Muricidae, and Ranellidae.
Finally, gastropods with a determinate growth pattern may create a single and terminal lip structure when approaching maturity, after which growth ceases. These include the cowries (Cypraeidae) and helmet shells (Cassidae), both with in-turned lips, the true conchs (Strombidae) that develop flaring lips, and many land snails that develop tooth structures or constricted apertures upon reaching full size.
Cephalopods
Nautiluses are the only extant cephalopods which have an external shell. Extinct cephalopods with external shells include other nautiloids and the subclass Ammonoidea. Cuttlefish, squid, spirula, vampire squid, and cirrate octopuses have small internal shells. Females of the octopus genus Argonauta secrete a specialised paper-thin eggcase in which they partially reside, and this is popularly regarded as a "shell", although it is not attached to the body of the animal.
Bivalves
The shell of the Bivalvia is composed of two parts, two valves which are hinged together and joined by a ligament.
Scaphopods
The shell of many of the scaphopods ("tusk shells") resembles a miniature elephant's tusk in overall shape, except that it is hollow, and is open at both ends.
Damage to shells in collections
As a structure made primarily of calcium carbonate, mollusc shells are vulnerable to attack by acidic fumes. This can become a problem when shells are in storage or on display and are in the proximity of non-archival materials, see Byne's disease.
| Biology and health sciences | Skeletal system | Biology |
4679310 | https://en.wikipedia.org/wiki/Crowned%20pigeon | Crowned pigeon | The crowned pigeons (Goura) are a genus of birds in the family Columbidae. It contains four large species of pigeon that are endemic to the island of New Guinea and a few surrounding islands. The species are extremely similar to each other in appearance, and occupy different regions of New Guinea. The genus was introduced by the English naturalist James Francis Stephens in 1819.
They forage on the forest floor eating fallen fruit, seeds and snails. The males and females are almost identical, but during courtship the male will coo and bow for the female. Both parents incubate one egg for 28 to 30 days and the chick takes another 30 days to fledge. The life span can be over 20 years.
Systematics and evolution
The genus Goura was introduced by the English naturalist James Francis Stephens in 1819. The type species is the western crowned pigeon. The word Goura comes from the New Guinea aboriginal name for crowned pigeons.
The genus contains four species:
Western crowned pigeon (Goura cristata)
Scheepmaker's crowned pigeon (Goura scheepmakeri)
Sclater's crowned pigeon (Goura sclaterii)
Victoria crowned pigeon (Goura victoria)
Scheepmaker's crowned pigeon and Sclater's crowned pigeon were previously considered as conspecific with the English name "southern crowned-pigeon".
A molecular phylogenetic study published in 2018 found that the four species in the genus formed two pairs: the western crowned pigeon was sister to Sclater's crowned pigeon while Scheepmaker's crowned pigeon was sister to the Victoria crowned pigeon.
| Biology and health sciences | Columbimorphae | Animals |
449669 | https://en.wikipedia.org/wiki/Silver%20iodide | Silver iodide | Silver iodide is an inorganic compound with the formula AgI. The compound is a bright yellow solid, but samples almost always contain impurities of metallic silver that give a grey colouration. The silver contamination arises because some samples of AgI can be highly photosensitive. This property is exploited in silver-based photography. Silver iodide is also used as an antiseptic and in cloud seeding.
Structure
The structure adopted by silver iodide is temperature dependent:
Below 420 K, the β phase of AgI, with the wurtzite structure, is most stable. This phase is encountered in nature as the mineral iodargyrite.
Above 420 K, the α phase becomes more stable. This motif is a body-centered cubic structure which has the silver centers distributed randomly between 6 octahedral, 12 tetrahedral and 24 trigonal sites. At this temperature, Ag+ ions can move rapidly through the solid, allowing fast ion conduction. The transition between the β and α forms represents the melting of the silver (cation) sublattice. The entropy of fusion for α-AgI is approximately half that for sodium chloride (a typical ionic solid). This can be rationalized by considering the AgI crystalline lattice to have already "partly melted" in the transition between α and β polymorphs.
A metastable γ phase also exists below 420 K with the zinc blende structure.
Preparation and properties
Silver iodide is prepared by reaction of an iodide solution (e.g., potassium iodide) with a solution of silver ions (e.g., silver nitrate). A yellowish solid quickly precipitates. The solid is a mixture of the two principal phases. Dissolution of the AgI in hydroiodic acid, followed by dilution with water, precipitates β-AgI. Alternatively, dissolution of AgI in a solution of concentrated silver nitrate followed by dilution affords α-AgI. Unless the preparation is conducted in dark conditions, the solid darkens rapidly, the light causing the reduction of ionic silver to metallic silver. The photosensitivity varies with sample purity.
Cloud seeding
The crystalline structure of β-AgI is similar to that of ice, allowing it to induce freezing by the process known as heterogeneous nucleation. Approximately 50,000 kg are used for cloud seeding annually, each seeding experiment consuming 10–50 grams (see also Project Stormfury and Operation Popeye).
Safety
Extreme exposure can lead to argyria, characterized by localized discolouration of body tissue.
| Physical sciences | Halide salts | Chemistry |
449738 | https://en.wikipedia.org/wiki/Fixed%20point%20%28mathematics%29 | Fixed point (mathematics) | In mathematics, a fixed point (sometimes shortened to fixpoint), also known as an invariant point, is a value that does not change under a given transformation. Specifically, for functions, a fixed point is an element that is mapped to itself by the function. Any set of fixed points of a transformation is also an invariant set.
Fixed point of a function
Formally, c is a fixed point of a function f if c belongs to both the domain and the codomain of f, and f(c) = c.
In particular, f cannot have any fixed point if its domain is disjoint from its codomain.
If f is defined on the real numbers, it corresponds, in graphical terms, to a curve in the Euclidean plane, and each fixed point corresponds to an intersection of the curve with the line y = x.
For example, if f is defined on the real numbers by f(x) = x^2 − 3x + 4, then 2 is a fixed point of f, because f(2) = 2.
Not all functions have fixed points: for example, f(x) = x + 1 has no fixed points because x + 1 is never equal to x for any real number.
Fixed point iteration
In numerical analysis, fixed-point iteration is a method of computing fixed points of a function. Specifically, given a function f with the same domain and codomain, and a point x_0 in the domain of f, the fixed-point iteration is x_{n+1} = f(x_n) for n = 0, 1, 2, ..., which gives rise to the sequence x_0, f(x_0), f(f(x_0)), ... of iterated function applications, which is hoped to converge to a point x_fix. If f is continuous, then one can prove that the obtained x_fix is a fixed point of f.
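As a rough illustration of the procedure just described, the sketch below iterates x_{n+1} = f(x_n) for f = cos, which is a contraction on [0, 1]; the tolerance, iteration cap, and starting point are arbitrary choices made for the example.

```python
import math

def fixed_point_iteration(f, x0, tol=1e-10, max_iter=1000):
    """Iterate x_{n+1} = f(x_n) until two successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("fixed-point iteration did not converge")

# cos is a contraction on [0, 1], so the iteration converges to its unique
# fixed point there (the Dottie number, approximately 0.739085).
print(fixed_point_iteration(math.cos, 1.0))
```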
The notions of attracting fixed points, repelling fixed points, and periodic points are defined with respect to fixed-point iteration.
Fixed-point theorems
A fixed-point theorem is a result saying that at least one fixed point exists, under some general condition.
For example, the Banach fixed-point theorem (1922) gives a general criterion guaranteeing that, if it is satisfied, fixed-point iteration will always converge to a fixed point.
The Brouwer fixed-point theorem (1911) says that any continuous function from the closed unit ball in n-dimensional Euclidean space to itself must have a fixed point, but it doesn't describe how to find the fixed point.
The Lefschetz fixed-point theorem (and the Nielsen fixed-point theorem) from algebraic topology give a way to count fixed points.
Fixed point of a group action
In algebra, for a group G acting on a set X with a group action ∗, x in X is said to be a fixed point of g if g ∗ x = x.
The fixed-point subgroup of an automorphism f of a group G is the subgroup of G: G^f = {g ∈ G | f(g) = g}.
Similarly, the fixed-point subring of an automorphism f of a ring R is the subring of the fixed points of f, that is, R^f = {r ∈ R | f(r) = r}.
In Galois theory, the set of the fixed points of a set of field automorphisms is a field called the fixed field of the set of automorphisms.
Topological fixed point property
A topological space X is said to have the fixed point property (FPP) if for any continuous function f : X → X there exists x in X such that f(x) = x.
The FPP is a topological invariant, i.e., it is preserved by any homeomorphism. The FPP is also preserved by any retraction.
According to the Brouwer fixed-point theorem, every compact and convex subset of a Euclidean space has the FPP. Compactness alone does not imply the FPP, and convexity is not even a topological property, so it makes sense to ask how to topologically characterize the FPP. In 1932 Borsuk asked whether compactness together with contractibility could be a necessary and sufficient condition for the FPP to hold. The problem was open for 20 years until the conjecture was disproved by Kinoshita, who found an example of a compact contractible space without the FPP.
Fixed points of partial orders
In domain theory, the notion and terminology of fixed points is generalized to a partial order. Let ≤ be a partial order over a set X and let f: X → X be a function over X. Then a prefixed point (also spelled pre-fixed point, sometimes shortened to prefixpoint or pre-fixpoint) of f is any p such that f(p) ≤ p. Analogously, a postfixed point of f is any p such that p ≤ f(p). The opposite usage occasionally appears. Malkis justifies the definition presented here as follows: "since f is before the inequality sign in the term f(x) ≤ x, such x is called a prefix point." A fixed point is a point that is both a prefixpoint and a postfixpoint. Prefixpoints and postfixpoints have applications in theoretical computer science.
Least fixed point
In order theory, the least fixed point of a function from a partially ordered set (poset) to itself is the fixed point which is less than each other fixed point, according to the order of the poset. A function need not have a least fixed point, but if it does then the least fixed point is unique.
One way to express the Knaster–Tarski theorem is to say that a monotone function on a complete lattice has a least fixed point that coincides with its least prefixpoint (and similarly its greatest fixed point coincides with its greatest postfixpoint).
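As a small illustration of a least fixed point on a finite lattice, the sketch below iterates a monotone function from the bottom element (Kleene-style iteration). The graph, node names, and function are invented for the example, and the ascending chain is guaranteed to stabilise only because the lattice (the powerset of the node set, ordered by inclusion) is finite.

```python
def least_fixed_point(f, bottom):
    """Iterate bottom, f(bottom), f(f(bottom)), ... until the value stops changing.
    This reaches the least fixed point when f is monotone and the chain stabilises."""
    x = bottom
    while True:
        nxt = f(x)
        if nxt == x:
            return x
        x = nxt

# Example: the set of nodes reachable from 'a' in a small directed graph is the
# least fixed point of the monotone 'step' function on the powerset lattice.
edges = {"a": {"b"}, "b": {"c"}, "c": set(), "d": {"a"}}

def step(reached):
    return frozenset(reached) | {m for n in reached for m in edges[n]}

print(least_fixed_point(step, frozenset({"a"})))   # frozenset({'a', 'b', 'c'})
```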
Fixed-point combinator
In combinatory logic for computer science, a fixed-point combinator is a higher-order function that returns a fixed point of its argument function, if one exists. Formally, if the function f has one or more fixed points, then fix f = f(fix f).
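The sketch below shows one way such a combinator can be written in a strictly evaluated language (the Z combinator, an eta-expanded variant of the Y combinator); it is an illustrative example only, and the helper names are invented.

```python
# Z combinator: a fixed-point combinator that works under strict evaluation.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial defined without explicit self-reference: Z returns a fixed point of
# the functional 'step', i.e. a function fact satisfying fact = step(fact).
step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)
factorial = Z(step)
print(factorial(5))   # 120
```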
Fixed-point logics
In mathematical logic, fixed-point logics are extensions of classical predicate logic that have been introduced to express recursion. Their development has been motivated by descriptive complexity theory and their relationship to database query languages, in particular to Datalog.
Applications
In many fields, equilibria or stability are fundamental concepts that can be described in terms of fixed points. Some examples follow.
In projective geometry, a fixed point of a projectivity has been called a double point.
In economics, a Nash equilibrium of a game is a fixed point of the game's best response correspondence. John Nash exploited the Kakutani fixed-point theorem for his seminal paper that won him the Nobel prize in economics.
In physics, more precisely in the theory of phase transitions, linearization near an unstable fixed point has led to Wilson's Nobel prize-winning work inventing the renormalization group, and to the mathematical explanation of the term "critical phenomenon."
Programming language compilers use fixed point computations for program analysis, for example in data-flow analysis, which is often required for code optimization. They are also the core concept used by the generic program analysis method abstract interpretation.
In type theory, the fixed-point combinator allows definition of recursive functions in the untyped lambda calculus.
The vector of PageRank values of all web pages is the fixed point of a linear transformation derived from the World Wide Web's link structure.
The stationary distribution of a Markov chain is the fixed point of the one step transition probability function; a short numerical sketch follows at the end of this list.
Fixed points are used to find formulas for iterated functions.
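Following up on the Markov-chain item above, the sketch below finds a stationary distribution by repeatedly applying a transition matrix until the distribution stops changing, i.e. until a fixed point of the one-step transition map is reached. The matrix values are invented for illustration.

```python
import numpy as np

# Illustrative 3-state transition matrix (each row sums to 1); values are made up.
P = np.array([[0.90, 0.05, 0.05],
              [0.10, 0.80, 0.10],
              [0.20, 0.30, 0.50]])

pi = np.full(3, 1.0 / 3.0)        # start from the uniform distribution
for _ in range(1000):
    pi = pi @ P                   # one step of the chain

# pi is (numerically) unchanged by a further step, i.e. a fixed point of pi -> pi P.
print(pi, np.allclose(pi, pi @ P))
```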
| Mathematics | Functions: General | null |
449744 | https://en.wikipedia.org/wiki/Storm%20surge | Storm surge | A storm surge, storm flood, tidal surge, or storm tide is a coastal flood or tsunami-like phenomenon of rising water commonly associated with low-pressure weather systems, such as cyclones. It is measured as the rise in water level above the normal tidal level, and does not include waves.
The main meteorological factor contributing to a storm surge is high-speed wind pushing water towards the coast over a long fetch. Other factors affecting storm surge severity include the shallowness and orientation of the water body in the storm path, the timing of tides, and the atmospheric pressure drop due to the storm.
As extreme weather becomes more intense and the sea level rises due to climate change, storm surges are expected to cause more risk to coastal populations. Communities and governments can adapt by building hard infrastructure, like surge barriers, soft infrastructure, like coastal dunes or mangroves, improving coastal construction practices and building social strategies such as early warning, education and evacuation plans.
Mechanics
At least five processes can be involved in altering tide levels during storms.
Direct wind effect
Wind stresses cause a phenomenon referred to as wind setup, which is the tendency for water levels to increase at the downwind shore and to decrease at the upwind shore. Intuitively, this is caused by the storm blowing the water toward one side of the basin in the direction of its winds. Strong surface winds cause surface currents at a 45° angle to the wind direction, by an effect known as the Ekman spiral. Because the Ekman spiral effects spread vertically through the water, the effect is inversely proportional to depth. The surge will be driven into bays in the same way as the astronomical tide.
Atmospheric pressure effect
The pressure effects of a tropical cyclone will cause the water level in the open ocean to rise in regions of low atmospheric pressure and fall in regions of high atmospheric pressure. The rising water level will counteract the low atmospheric pressure such that the total pressure at some plane beneath the water surface remains constant. This effect is estimated at roughly a 1 cm (10 mm) increase in sea level for every millibar (hPa) drop in atmospheric pressure. For example, a major storm with a 100-millibar pressure drop would be expected to have a water level rise of about 1 m (3.3 ft) from the pressure effect.
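A back-of-the-envelope illustration of this rule of thumb follows; the roughly 1 cm per hectopascal figure is the approximation stated above, and the function name and default value are arbitrary choices for the example.

```python
def pressure_surge_m(pressure_drop_hpa, cm_per_hpa=1.0):
    """Approximate sea-level rise (in metres) from the pressure effect alone,
    using the ~1 cm per hPa rule of thumb; wind and wave effects are ignored."""
    return pressure_drop_hpa * cm_per_hpa / 100.0

print(pressure_surge_m(100))   # a 100 hPa pressure drop -> roughly 1.0 m of rise
```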
Effect of the Earth's rotation
The Earth's rotation causes the Coriolis effect, which bends currents to the right in the Northern Hemisphere and to the left in the Southern Hemisphere. When this bend brings the currents into more perpendicular contact with the shore, it can amplify the surge, and when it bends the current away from the shore it has the effect of lessening the surge.
Effect of waves
The effect of waves, while directly powered by the wind, is distinct from a storm's wind-powered currents. Powerful wind whips up large, strong waves in the direction of its movement. Although these surface waves are responsible for very little water transport in open water, they may be responsible for significant transport near the shore. When waves are breaking on a line more or less parallel to the beach, they carry considerable water shoreward. As they break, the water moving toward the shore has considerable momentum and may run up a sloping beach to an elevation above the mean water line, which may exceed twice the wave height before breaking.
Rainfall effect
The rainfall effect is experienced predominantly in estuaries. Hurricanes may dump copious rainfall in 24 hours over large areas, with even higher rainfall densities in localized areas. As a result, surface runoff can quickly flood streams and rivers. This can increase the water level near the head of tidal estuaries as storm-driven waters surging in from the ocean meet rainfall flowing downstream into the estuary.
Sea depth and topography
In addition to the above processes, storm surge and wave heights on shore are also affected by the flow of water over the underlying topography, i.e. the shape and depth of the ocean floor and coastal area. A narrow shelf, with deep water relatively close to the shoreline, tends to produce a lower surge but higher and more powerful waves. A wide shelf, with shallower water, tends to produce a higher storm surge with relatively smaller waves.
For example, in Palm Beach on the southeast coast of Florida, the sea floor is relatively steep and deep, reaching substantial depths only a short distance offshore; storm surge there is not as great, but the waves are larger than on the west coast of Florida. Conversely, on the Gulf side of Florida, the edge of the Floridian Plateau can lie far offshore, and Florida Bay, lying between the Florida Keys and the mainland, is very shallow. These shallow areas are subject to higher storm surges with smaller waves. Other shallow areas include much of the Gulf of Mexico coast and the Bay of Bengal.
The difference is due to how much room the storm surge has to disperse into. In deeper water, there is more volume, and a surge can be dispersed down and away from the hurricane. On a shallow, gently sloping shelf, the surge has less room to disperse and is driven ashore by the wind forces of the hurricane.
The topography of the land surface is another important element in storm surge extent. Areas where the land lies less than a few meters above sea level are at particular risk from storm surge inundation.
Storm size
The size of the storm also affects the surge height; this is due to the storm's area not being proportional to its perimeter. If a storm doubles in diameter, its perimeter also doubles, but its area quadruples. As there is proportionally less perimeter for the surge to dissipate to, the surge height ends up being higher.
Extratropical storms
Similar to tropical cyclones, extratropical cyclones cause an offshore rise of water. However, unlike most tropical cyclone storm surges, extratropical cyclones can cause higher water levels across a large area for longer periods of time, depending on the system.
In North America, extratropical storm surges may occur on the Pacific and Alaska coasts, and north of 31°N on the Atlantic Coast. Coasts with sea ice may experience an "ice tsunami" causing significant damage inland. Extratropical storm surges may be possible further south for the Gulf coast mostly during the wintertime, when extratropical cyclones affect the coast, such as in the 1993 Storm of the Century.
November 9–13, 2009, marked a significant extratropical storm surge event on the United States east coast when the remnants of Hurricane Ida developed into a nor'easter off the southeast U.S. coast. During the event, winds from the east were present along the northern periphery of the low-pressure center for a number of days, forcing water into locations such as Chesapeake Bay. Water levels rose significantly and remained well above normal in numerous locations throughout the Chesapeake for a number of days as water continually built up inside the estuary from the onshore winds and freshwater rains flowing into the bay. In many locations, water levels fell just short of records.
Measuring surge
Surge can be measured directly at coastal tidal stations as the difference between the forecast tide and the observed rise of water. Another method of measuring surge is by the deployment of pressure transducers along the coastline just ahead of an approaching tropical cyclone. This was first tested for Hurricane Rita in 2005. These types of sensors can be placed in locations that will be submerged and can accurately measure the height of water above them.
After surge from a cyclone has receded, teams of surveyors map high-water marks (HWM) on land, in a rigorous and detailed process that includes photographs and written descriptions of the marks. HWMs denote the location and elevation of floodwaters from a storm event. When HWMs are analyzed, if the various components of the water height can be broken out so that the portion attributable to surge can be identified, then that mark can be classified as storm surge. Otherwise, it is classified as storm tide. HWMs on land are referenced to a vertical datum (a reference coordinate system). During the evaluation, HWMs are divided into four categories based on the confidence in the mark; in the U.S., only HWMs evaluated as "excellent" are used by the National Hurricane Center in the post-storm analysis of the surge.
Two different measures are used for storm tide and storm surge measurements. Storm tide is measured using a geodetic vertical datum (NGVD 29 or NAVD 88). Since storm surge is defined as the rise of water beyond what would be expected by the normal movement caused by tides, storm surge is measured using tidal predictions, with the assumption that the tide prediction is well-known and only slowly varying in the region subject to the surge. Since tides are a localized phenomenon, storm surge can only be measured in relationship to a nearby tidal station. Tidal benchmark information at a station provides a translation from the geodetic vertical datum to mean sea level (MSL) at that location, then subtracting the tidal prediction yields a surge height above the normal water height.
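As a concrete illustration of that subtraction, the sketch below computes surge from hypothetical hourly readings at a single tide station, with both series referenced to the same vertical datum; all numbers are invented for the example.

```python
# Hypothetical hourly water levels in metres at one tide station, same datum.
observed  = [1.10, 1.45, 1.90, 2.60, 3.10, 2.80]   # measured water level
predicted = [1.00, 1.20, 1.35, 1.40, 1.30, 1.10]   # astronomical tide forecast

surge = [obs - pred for obs, pred in zip(observed, predicted)]
print(max(surge))   # peak storm surge above the predicted tide (1.8 m here)
```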
SLOSH
The U.S. National Hurricane Center forecasts storm surge using the SLOSH model, which is an abbreviation for Sea, Lake and Overland Surges from Hurricanes. The model is accurate to within 20 percent. SLOSH inputs include the central pressure of a tropical cyclone, storm size, the cyclone's forward motion, its track, and maximum sustained winds. Local topography, bay and river orientation, depth of the sea bottom, astronomical tides, as well as other physical features, are taken into account in a predefined grid referred to as a SLOSH basin. Overlapping SLOSH basins are defined for the southern and eastern coastline of the continental U.S. Some storm simulations use more than one SLOSH basin; for instance, Hurricane Katrina SLOSH model runs used both the Lake Pontchartrain / New Orleans basin, and the Mississippi Sound basin, for the northern Gulf of Mexico landfall. The final output from the model run will display the maximum envelope of water, or MEOW, that occurred at each location.
To allow for track or forecast uncertainties, usually several model runs with varying input parameters are generated to create a map of MOMs or Maximum of Maximums. For hurricane evacuation studies, a family of storms with representative tracks for the region, and varying intensity, eye diameter, and speed are modeled to produce worst-case water heights for any tropical cyclone occurrence. The results of these studies are typically generated from several thousand SLOSH runs. These studies have been completed by the United States Army Corps of Engineers, under contract to the Federal Emergency Management Agency (FEMA), for several states and are available on their Hurricane Evacuation Studies (HES) website. They include coastal county maps, shaded to identify the minimum category of hurricane that will result in flooding, in each area of the county.
Impacts
Storm surge is responsible for significant property damage and loss of life as part of cyclones. Storm surge both destroys built infrastructure, like roads, and undermines foundations and building structures.
Unexpected flooding in estuaries and coastal areas can catch populations unprepared, causing loss of life. The deadliest storm surge on record was the 1970 Bhola cyclone.
Additionally, storm surge can cause or transform human-utilized land through other processes, hurting soil fertility, increasing saltwater intrusion, hurting wildlife habitat, and spreading chemical or other contaminants from human storage.
Mitigation
Although meteorological surveys provide alerts about hurricanes or severe storms, in areas where the risk of coastal flooding is particularly high there are specific storm surge warnings. These have been implemented, for instance, in the Netherlands, Spain, the United States, and the United Kingdom. Similarly, educating coastal communities and developing local evacuation plans can reduce the relative impact on people.
A prophylactic method introduced after the North Sea flood of 1953 is the construction of dams and storm-surge barriers (flood barriers). They are open and allow free passage, but close when the land is under threat of a storm surge. Major storm surge barriers are the Oosterscheldekering and Maeslantkering in the Netherlands, which are part of the Delta Works project; the Thames Barrier protecting London; and the Saint Petersburg Dam in Russia.
Another modern development (in use in the Netherlands) is the creation of housing communities at the edges of wetlands with floating structures, restrained in position by vertical pylons. Such wetlands can then be used to accommodate runoff and surges without causing damage to the structures while also protecting conventional structures at somewhat higher low-lying elevations, provided that dikes prevent major surge intrusion.
Other soft adaptation methods can include changing structures so that they are elevated to avoid flooding directly, or increasing natural protections like mangroves or dunes.
For mainland areas, storm surge is more of a threat when the storm strikes land from seaward, rather than approaching from landward.
Reverse storm surge
Water can also be sucked away from shore prior to a storm surge. This was the case on the western Florida coast in 2017, just before Hurricane Irma made landfall, uncovering land usually underwater. This phenomenon is known as a reverse storm surge, or a negative storm surge.
Historic storm surges
The deadliest storm surge on record was the 1970 Bhola cyclone, which killed up to 500,000 people in the area of the Bay of Bengal. The low-lying coast of the Bay of Bengal is particularly vulnerable to surges caused by tropical cyclones. The deadliest storm surge in the twenty-first century was caused by Cyclone Nargis, which killed more than 138,000 people in Myanmar in May 2008. The next deadliest in this century was caused by Typhoon Haiyan (Yolanda), which killed more than 6,000 people in the central Philippines in 2013 and resulted in economic losses estimated at US$14 billion.
The 1900 Galveston hurricane, a Category 4 hurricane that struck Galveston, Texas, drove a devastating surge ashore; between 6,000 and 12,000 people died, making it the deadliest natural disaster ever to strike the United States.
The highest storm tide noted in historical accounts was produced by the 1899 Cyclone Mahina, estimated at almost 13 m (43 ft) at Bathurst Bay, Australia, but research published in 2000 concluded that the majority of this likely was wave run-up because of the steep coastal topography. However, much of this storm surge was likely due to Mahina's extreme intensity, as computer modeling required an intensity equal to the lowest pressure recorded in the storm to reproduce the recorded storm surge. In the United States, one of the greatest recorded storm surges was generated by Hurricane Katrina on August 29, 2005, which produced a storm surge of more than 8 m (26 ft) in southern Mississippi, with the highest surge measured at Pass Christian. Another record storm surge occurred in this same area from Hurricane Camille in 1969, whose peak storm tide was also recorded at Pass Christian. A large storm surge also occurred in New York City during Hurricane Sandy in October 2012.
| Physical sciences | Storms | Earth science |
449843 | https://en.wikipedia.org/wiki/Hypericum | Hypericum | Hypericum is a genus of flowering plants in the family Hypericaceae (formerly considered a subfamily of Clusiaceae). The genus has a nearly worldwide distribution, missing only from tropical lowlands, deserts and polar regions. Many Hypericum species are regarded as invasive species and noxious weeds. All members of the genus may be referred to as St. John's wort, and some are known as goatweed. The white or pink flowered marsh St. John's worts of North America and eastern Asia are generally accepted as belonging to the separate genus Triadenum Raf.
Hypericum is unusual for a genus of its size because a worldwide taxonomic monograph was produced for it by Norman Robson (working at the Natural History Museum, London). Robson recognizes 36 sections within Hypericum.
Description
Hypericum species are quite variable in habit, occurring as trees, shrubs, annuals, and perennials. Trees in the sense of single stemmed woody plants are rare, as most woody species have multiple stems arising from a single base. Shrubs have erect or spreading stems but never root from nodes that touch the ground. However, perennial herbs tend to root from these horizontal nodes, especially those that occur in wet habitats. Annual herbs tend to have taproots with a developed system of secondary hair roots. Many species of Hypericum are completely glabrous, others have simple uniseriate hairs, and some species have long, fine hairs.
Two types of glands form the characteristic punctiform patterns of Hypericum, "dark glands" and "pale glands". Dark glands consist of clusters of cells with a distinct black to reddish color. Their hue is indicative of a presence of naphthodianthrone, either hypericin or pseudohypericin, or both. These glands occur in about two-thirds of Hypericum sections and are usually restricted to certain organs. When these glands are crushed, the naphthodianthrones give a red stain. Paracelsus called the red secretions "Johannes-blut" in the 16th century, linking the plant to the martyr St. John and giving rise to the English and German common names of "St. John's wort". The pale glands, forming the pellucid dots, are each a schizogenous intracellular space lined with flattened cells that secrete oils and phloroglucinol derivates, including hyperforin. The distribution of these hypericin glands dissuades generalist herbivores from feeding on the plants. When generalist insects feed on Hypericum perforatum, 30-100% more naphthodianthrones are produced, repelling the insects.
The four thin ridges of tissue along the stems are closely related to the opposite-decussate leaves of Hypericum. The ridges can be minor, just being called "ridges", or prominent, being called "wings". Terete, two-lined, and six-lined stems can occur occasionally. When a species has a tree or shrub habit, the internodes become mostly terete with age, though some trace of lines can still be detected in mature plants. The number of lines is an important distinguishing characteristic; for example, H. perforatum and Hypericum maculatum are easily confused save for H. perforatum having two lines and H. maculatum having four. The pale and dark glands are present on stems of various species, and other various species have stems without any glands. In section Hypericum, the glands are only present on stem lines, and in other sections, including Origanifolia and Hirtella, the glands are distributed across the stems.
Nearly all leaves of Hypericum species are arranged opposite and decussate, an exception being section Coridium in which whorls of three to four leaves occur. The leaves lack stipules and can be sessile or shortly petiolar, though long petioles exist in sections Adenosepalum and Hypericum. Basal articulation can be present, in which case leaves are deciduous above the articulation, or absent, in which case the leaves are persistent. Some species in sections Campylosporus and Brathys have an auricle-like, reflexed leaf base, whereas true auricles only exist in sections Drosocarpium, Thasia, and Crossophyllum. Laminar venation is highly variable, being dichotomous to pinnate to densely reticulate. Leaves are typically ovoid to elongate to linear in shape. Leaves are typically shorter than the internodes. Pale or dark glands can be present on or near the leaf margin and on the main leaf surface.
Typically there are four or five sepals, though in section Myriandra there are rarely three. When five sepals are present they are quincuncial, and when four sepals are present they are opposite and decussate. Sepals can be equal or unequal. Sepals can be united at their base, as seen in sections Hirtella, Taeniocarpium, and Arthrophyllum. The margins are variable, having marginal glands, teeth, or hairs. The presence or absence of dark glands on the sepals is a useful distinguishing characteristic.
Almost all Hypericum petals are yellow, though a range of color exists from a pale lemony hue to a deep orangish-yellow. Exceptions include the white or pinkish petals of Hypericum albiflorum var. albiflorum and H. geminiflorum. Many species have petals that are lined or tinged with red, including the deep crimson petals of H. capitatum var. capitatum. Petal lengths can be equal or unequal. The petals are mostly asymmetrical except those of sections Adenotrias and Elodes. In those two sections, sterile bodies have developed between the stamen fascicles, working as lodicules to spread the petals of the pseudotubular flower, a specialized pollination mechanism. Nearly all species have glands on their petals; only section Adenotrias has completely eglandular petals. It has been hypothesized that the intensity of red on the petals is correlated with the hypericin content of the glands, but other pigments including skyrin derivatives can create a red color.
Hypericum flowers have four or five fascicles that have, in total, five to two hundred stamens. The fascicles can be free or fused in various ways, often into three apparent fascicles. In sections Myriandra, Brathys, and some of Trigynobrathys, the stamens form a ring. Though stamens are usually persistent, some are deciduous. The stamens have an anther gland on the connective tissue, varying in color from amber to black.
The ovaries are three or five-merous, occasionally two-merous, with a corresponding number of free or united styles. Developing seeds are borne on axile or parietal placentae, with at least two ovules per placenta. Hypericum fruits are dissimilar to most of Hypericaceae, being capsular and dehisce from the apex. The capsule can be dry or remain fleshy when mature. The capsules have elongate or punctate glands on their surface that create various shapes and patterns. These glands are typically pale amber, though in section Drosocarpium the glands are reddish-black. Extractions of these glands in certain species yielded phloroglucinol and terpenoid derivatives, suggesting a connection between these glands and the pale glands of vegetative tissue. Seeds of Hypericum species are small and range in color from a yellowish brown to dark purplish brown. The seeds are cylindric to ellipsoid and may have narrow wings. In some seeds, a basal ridge may be present, and rarely in section Adenotrias an apical caruncle is present which attracts ants to disperse seeds. Some species have highly specific germination and survival condition requirements. For example, H. lloydii is susceptible to a fungal infection as a seedling if conditions are too moist, whereas other species including H. chapmanii can grow underwater.
Taxonomy
There are over 490 species in the genus. The name hypericum derives from hypereikos (variants: hypereikon and hyperikon), i.e. the Greek name for Hypericum crispum and Hypericum revolutum, itself possibly meaning "above pictures", for its use over shrines to repel evil spirits, though some have translated it as "above the heath".
Sections
Hypericum is broken up into 36 sections, each with its own subsections and species. They include:
Adenosepalum
Adenotrias
Androsaemum
Arthrophyllum
Ascyreia
Brathys
Bupleuroides
Campylopus
Concinna
Coridium
Crossophyllum
Drosocarpium
Elodeoida
Graveolentia
Heterophylla
Hirtella
Humifusoideum
Hypericum
Inodora
Monanthama
Myriandra
Oligostema
Olympia
Origanifolia
Psorophytum
Roscyna
Sampsonia
Santomasia
Taeniocarpium
Takasagoya
Triadenoides
Trigynobrathys
Tripentas
Umbraculoides
Webbia
Ecology
H. perforatum is an invasive species and noxious weed in farmland and gardens in the humid and sub-humid temperate zones of several continents. It is considered poisonous to livestock.
Part of the invasive success of Hypericum species is due to the absence of natural pests. The beetles Chrysolina quadrigemina, Chrysolina hyperici and the St. John's-wort root borer (Agrilus hyperici) feed on common St. John's-wort (H. perforatum) plants and have been used for biocontrol where the plant has become an invasive weed. Hypericum species are the only known food plants of the caterpillar of the treble-bar, a species of moth. Other Lepidoptera species whose larvae sometimes feed on Hypericum include the common emerald, the engrailed (recorded on imperforate St. John's-wort, H. maculatum), the grey pug and the setaceous Hebrew character. A leaf beetle, Paria sellata, feeds on the foliage of Hypericum adpressum, while ant species Formica montana and F. subsericea decorate their nests with its bright yellow petals. A small, reddish-brown weevil, Anthonomus rutilus, breeds in the inflorescences of Hypericum kalmianum and H. swinkianum, the larvae developing within the fruit capsules.
Traditional medicine and adverse effects
Common St. John's-wort (H. perforatum) has long been used in traditional medicine as an extract to treat depression. H. perforatum is the most commonly used species – especially in Europe – as an herbal substitute for prescription drugs to treat depression, and is also sold as a dietary supplement. One meta-analysis found that St John's wort had similar efficacy and safety as prescription drugs, such as selective serotonin reuptake inhibitors, for treating mild-to-moderate depression.
There is evidence that combining St. John’s wort with prescription antidepressants may cause adverse effects, such as a life-threatening increase of serotonin, the brain chemical targeted by some drugs used for depression. Symptoms may include agitation, diarrhea, high blood pressure, and hallucinations. Taking St. John’s wort may interfere with and reduce the efficacy of prescription drugs used to treat depression.
St. John's wort interacts with hormonal contraceptives, reducing their effectiveness and increasing the risk of unplanned pregnancy.
Ornamental plants
Some species are used as ornamental plants as many have large, showy flowers. Species found in cultivation include:
H. aegypticum
H. androsaemum
H. balearicum
H. bellum
H. calycinum
H. elodes
H. forrestii
H. kalmianum
H. kouytchense
H. olympicum
H. perforatum
Numerous hybrids and cultivars have been developed for use in horticulture. The following have gained the Royal Horticultural Society's Award of Garden Merit:
H. × moserianum (H. calycinum × H. patulum)
'Hidcote'
'Rowallane'
Most species of Hypericum are prone to thrips, scale, anthracnose, rust, and leaf spots. They are also eaten or infected by aphids, whiteflies, and Spodoptera littoralis.
Fossil record
The oldest fossil species is †Hypericum antiquum from the Eocene of Siberia. Fossil seeds from the early Miocene of †Hypericum septestum have been found in the Czech part of the Zittau Basin. Many fossil seeds of †Hypericum holyi have been described from middle Miocene strata of the Fasterholt area near Silkeborg in Central Jutland, Denmark.
| Biology and health sciences | Malpighiales | null |
449934 | https://en.wikipedia.org/wiki/Dairy%20farming | Dairy farming | Dairy farming is a class of agriculture for the long-term production of milk, which is processed (either on the farm or at a dairy plant, either of which may be called a dairy) for the eventual sale of a dairy product. Dairy farming has a history that goes back to the early Neolithic era, around the seventh millennium BC, in many regions of Europe and Africa. Before the 20th century, milking was done by hand on small farms. Beginning in the early 20th century, milking was done in large scale dairy farms with innovations including rotary parlors, the milking pipeline, and automatic milking systems that were commercially developed in the early 1990s.
Milk preservation methods have improved starting with the arrival of refrigeration technology in the late 19th century, which included direct expansion refrigeration and the plate heat exchanger. These cooling methods allowed dairy farms to preserve milk by reducing spoiling due to bacterial growth and humidity.
Worldwide, leading dairy industries in many countries including India, the United States, China, and New Zealand serve as important producers, exporters, and importers of milk. Since the late 20th century, there has generally been an increase in total milk production worldwide, with around 827,884,000 tonnes of milk being produced in 2017 according to the FAO.
There has been substantial concern over the amount of waste output created by dairy industries, seen through manure disposal and air pollution caused by methane gas. The industry's role in agricultural greenhouse gas emissions has also been noted to implicate environmental consequences. Various measures have been put in place in order to control the amount of phosphorus excreted by dairy livestock. The usage of rBST has also been controversial. Dairy farming in general has been criticized by animal welfare activists due to the health issues imposed upon dairy cows through intensive animal farming.
Common types
Although any mammal can produce milk, commercial dairy farms are typically one-species enterprises. In developed countries, dairy farms typically consist of high producing dairy cows. Other species used in commercial dairy farming include goats, sheep, water buffaloes, and camels. In Italy, donkey dairies are growing in popularity to produce an alternative milk source for human infants.
History
While cattle were domesticated as early as 12,000 years ago as a food source and as beasts of burden, the earliest evidence of using domesticated cows for dairy production is from the seventh millennium BC – the early Neolithic era – in northwestern Anatolia. Dairy farming developed elsewhere in the world in subsequent centuries: the sixth millennium BC in eastern Europe, the fifth millennium BC in Africa, and the fourth millennium BC in Britain and Northern Europe.
In the last century or so larger farms specialising in dairy alone have emerged. Large scale dairy farming is only viable where either a large amount of milk is required for production of more durable dairy products such as cheese, butter, etc. or there is a substantial market of people with money to buy milk, but no cows of their own. In the 1800s von Thünen argued that there was about a 100-mile radius surrounding a city where such fresh milk supply was economically viable.
Hand milking
Centralized dairy farming as we understand it primarily developed around villages and cities, where residents were unable to have cows of their own due to a lack of grazing land. Near the town, farmers could make some extra money on the side by having additional animals and selling the milk in town. The dairy farmers would fill barrels with milk in the morning and bring it to market on a wagon. Until the late 19th century, the milking of the cow was done by hand. In the United States, several large dairy operations existed in some northeastern states and in the west, that involved as many as several hundred cows, but an individual milker could not be expected to milk more than a dozen cows a day. Smaller operations predominated.
For most herds, milking took place indoors twice a day, in a barn with the cattle tied by the neck with ropes or held in place by stanchions. Feeding could occur simultaneously with milking in the barn, although most dairy cattle were pastured during the day between milkings. Such examples of this method of dairy farming are difficult to locate, but some are preserved as a historic site for a glimpse into the days gone by. One such instance that is open for this is at Point Reyes National Seashore.
Dairy farming has been part of agriculture for thousands of years. Historically it was one part of small, diverse farms, and dedicated dairy farms emerged as the best way to meet the growing demand for milk.
Vacuum bucket milking
The first milking machines were an extension of the traditional milking pail. The early milker device fit on top of a regular milk pail and sat on the floor under the cow. Following each cow being milked, the bucket would be dumped into a holding tank. These were introduced in the early 20th century.
This developed into the Surge hanging milker. Prior to milking a cow, a large wide leather strap called a surcingle was put around the cow, across the cow's lower back. The milker device and collection tank hung underneath the cow from the strap. This innovation allowed the cow to move around naturally during the milking process rather than having to stand perfectly still over a bucket on the floor.
Milking pipeline
The next innovation in automatic milking was the milk pipeline, introduced in the late 20th century. This uses a permanent milk-return pipe and a second vacuum pipe that encircles the barn or milking parlor above the rows of cows, with quick-seal entry ports above each cow. By eliminating the need for the milk container, the milking device shrank in size and weight to the point where it could hang under the cow, held up only by the sucking force of the milker nipples on the cow's udder. The milk is pulled up into the milk-return pipe by the vacuum system, and then flows by gravity to the milkhouse vacuum-breaker that puts the milk in the storage tank. The pipeline system greatly reduced the physical labor of milking since the farmer no longer needed to carry around huge heavy buckets of milk from each cow.
The pipeline allowed barn length to keep increasing and expanding, but after a point farmers started to milk the cows in large groups, filling the barn with one-half to one-third of the herd, milking the animals, and then emptying and refilling the barn. As herd sizes continued to increase, this evolved into the more efficient milking parlor.
Milking parlors
Innovation in milking focused on mechanizing the milking parlor (known in Australia and New Zealand as the 'cowshed') to maximize the number of cows per operator which streamlined the milking process to permit cows to be milked as if on an assembly line, and to reduce physical stresses on the farmer by putting the cows on a platform slightly above the person milking the cows to eliminate having to constantly bend over. Many older and smaller farms still have tie-stall or stanchion barns, but worldwide a majority of commercial farms have parlors.
Herringbone and parallel parlors
In herringbone and parallel parlors, the milker generally milks one row at a time. The milker will move a row of cows from the holding yard into the milking parlor, and milk each cow in that row. Once all of the milking machines have been removed from the milked row, the milker releases the cows to their feed. A new group of cows is then loaded into the now vacant side and the process repeats until all cows are milked. Depending on the size of the milking parlor, which normally is the bottleneck, these rows of cows can range from four to sixty at a time. The benefits of a herringbone parlour are easy maintenance, durability, stability, and improved safety for animals and humans when compared to tie stalls. The first herringbone shed is thought to have been built in 1952 by a Gordonton farmer.
Rotary parlors
In rotary parlors, the cows are loaded one at a time onto the parlor as the whole thing rotates in a circle. One milker stands near the entry to the parlor and pre-dips the teats on the udder to help prevent bacteria from entering. The next milker puts the machine on the cow to begin milking. By the time the platform has completed almost a full rotation, the cow is done milking and the unit will come off automatically. The last milker will post-dip her teats to protect them before entering back into the pen. Once this process is done, the cow will back out of the parlor and return to the barn. Rotary cowsheds, as they are called in New Zealand, started in the 1980s but are expensive compared to herringbone cowsheds – the older New Zealand norm.
Automatic milker take-off
It can be harmful to an animal for it to be over-milked past the point where the udder has stopped releasing milk. Consequently, the milking process involves not just applying the milker, but also monitoring the process to determine when the animal has been milked out and the milker should be removed. While parlor operations allowed a farmer to milk many more animals much more quickly, it also increased the number of animals to be monitored simultaneously by the farmer. The automatic take-off system was developed to remove the milker from the cow when the milk flow reaches a preset level, relieving the farmer of the duties of carefully watching over 20 or more animals being milked at the same time.
Fully automated robotic milking
In the 1980s and 1990s, robotic milking systems were developed and introduced (principally in the EU). Thousands of these systems are now in routine operation. In these systems the cow has a high degree of autonomy to choose her time of milking freely during the day (some alternatives may apply, depending on cow-traffic solution used at a farm level). These systems are generally limited to intensively managed systems although research continues to match them to the requirements of grazing cattle and to develop sensors to detect animal health and fertility automatically. Every time the cow enters the milking unit she is fed concentrates and her collar is scanned to record production data.
History of milk preservation methods
Cool temperature has been the main method by which milk freshness has been extended. When windmills and well pumps were invented, one of their first uses on the farm, besides providing water for animals themselves, was for cooling milk, to extend its storage life, until it would be transported to the town market.
The naturally cold underground water would be continuously pumped into a cooling tub or vat. Tall, ten-gallon metal containers filled with freshly obtained milk, which is naturally warm, were placed in this cooling bath. This method of milk cooling was popular before the arrival of electricity and refrigeration.
Refrigeration
When refrigeration first arrived on farms, the equipment was initially used to cool cans of milk, which were filled by hand milking. These cans were placed into a cooled water bath to remove heat and keep them cool until they could be transported to collection facilities. As more automated methods were developed for harvesting milk, hand milking was replaced and, as a result, the milk can was replaced by a bulk milk cooler. 'Ice banks' were the first type of bulk milk cooler. This was a double-wall vessel with evaporator coils and water located between the walls at the bottom and sides of the tank. A small refrigeration compressor was used to remove heat from the evaporator coils. Ice eventually builds up around the coils, until it reaches a thickness of about three inches surrounding each pipe, and the cooling system shuts off. When the milking operation starts, only the milk agitator and the water circulation pump, which flows water across the ice and the steel walls of the tank, are needed to reduce the incoming milk to a temperature below 5 °C.
This cooling method worked well for smaller dairies; however, it was fairly inefficient and was unable to meet the increasingly higher cooling demand of larger milking parlors. In the mid-1950s, direct expansion refrigeration was first applied directly to the bulk milk cooler. This type of cooling utilizes an evaporator built directly into the inner wall of the storage tank to remove heat from the milk. Direct expansion is able to cool milk at a much faster rate than early ice bank type coolers and is still the primary method for bulk tank cooling today on small to medium-sized operations.
Another device which has contributed significantly to milk quality is the plate heat exchanger (PHE). This device utilizes a number of specially designed stainless steel plates with small spaces between them. Milk is passed between every other set of plates with water being passed between the balance of the plates to remove heat from the milk. This method of cooling can remove large amounts of heat from the milk in a very short time, thus drastically slowing bacteria growth and thereby improving milk quality. Ground water is the most common source of cooling medium for this device. Dairy cows consume approximately 3 gallons of water for every gallon of milk production and prefer to drink slightly warm water as opposed to cold ground water. For this reason, PHE's can result in drastically improved milk quality, reduced operating costs for the dairymen by reducing the refrigeration load on his bulk milk cooler, and increased milk production by supplying the cows with a source of fresh warm water.
Plate heat exchangers have also evolved as a result of the increase of dairy farm herd sizes in the United States. As a dairyman increases the size of his herd, he must also increase the capacity of his milking parlor in order to harvest the additional milk. This increase in parlor sizes has resulted in tremendous increases in milk throughput and cooling demand. Today's larger farms produce milk at a rate which direct expansion refrigeration systems on bulk milk coolers cannot cool in a timely manner. PHEs are typically utilized in this instance to rapidly cool the milk to the desired temperature (or close to it) before it reaches the bulk milk tank. Typically, ground water is still utilized to provide the initial stage of cooling. A second (and sometimes third) section of the PHE is added to remove the remaining heat with a mixture of chilled pure water and propylene glycol. These chiller systems can be made to incorporate large evaporator surface areas and high chilled-water flow rates to cool high flow rates of milk.
Milking operation
Milking machines are held in place automatically by a vacuum system that lowers the air pressure inside the unit below the ambient air pressure. The vacuum is also used to lift milk vertically through small diameter hoses, into the receiving can. A milk lift pump draws the milk from the receiving can through large diameter stainless steel piping, through the plate cooler, then into a refrigerated bulk tank.
Milk is extracted from the cow's udder by flexible rubber sheaths known as liners or inflations that are surrounded by a rigid air chamber. A pulsating flow of ambient air and vacuum is applied to the inflation's air chamber during the milking process. When ambient air is allowed to enter the chamber, the vacuum inside the inflation causes the inflation to collapse around the cow's teat, squeezing the milk out of teat in a similar fashion as a baby calf's mouth massaging the teat. When the vacuum is reapplied in the chamber the flexible rubber inflation relaxes and opens up, preparing for the next squeezing cycle.
It takes the average cow three to five minutes to give her milk. Some cows are faster or slower. Slow-milking cows may take up to fifteen minutes to let down all their milk. Though milking speed is not related to the quality of milk produced by the cow, it does impact the management of the milking process. Because most milkers milk cattle in groups, the milker can only process a group of cows at the speed of the slowest-milking cow. For this reason, many farmers will group slow-milking cows so as not to stress the faster milking cows.
The extracted milk passes through a strainer and plate heat exchangers before entering the tank, where it can be stored safely at a low temperature for a few days. At pre-arranged times, a milk truck arrives and pumps the milk from the tank for transport to a dairy factory where it will be pasteurized and processed into many products. The frequency of pickup depends on the production and storage capacity of the dairy; large dairies will have milk pickups once per day.
Management of the herd
The dairy industry is a constantly evolving business. Management practices change with new technology and regulations that move the industry toward increased economic and environmental sustainability. Management strategies can also loosely be divided into intensive and extensive systems. Extensive systems operate based on a low input and low output philosophy, where intensive systems adopt a high input high output philosophy. These philosophies as well as available technologies, local regulations, and environmental conditions manifest in different management of nutrition, housing, health, reproduction and waste.
Most modern dairy farms divide the animals into different management units depending on their age, nutritional needs, reproductive status, and milk production status. The group of cows that are currently lactating, the milking herd, is often managed most intensively to make sure their diet and environmental conditions are conducive to producing as much high quality milk as possible. On some farms the milking herd is further divided into milking strings, which are groups of animals with different nutritional needs. The segment of the adult herd that are in the resting period before giving birth to their next calf are called dry cows because they are not being milked. All female animals that have yet to give birth to their first calf are called heifers. Some of them will grow up to take the place of older animals in the milking herd and thus are sometimes generally referred to as the replacement herd. The others, as well as most male calves, are considered surplus dairy calves and are slaughtered for meat, such as veal or dairy beef, or killed on farm.
Housing systems
Dairy cattle housing systems vary greatly throughout the world depending on the climate, dairy size, and feeding strategies. Housing must provide access to feed and water and protection from relevant environmental conditions. One issue for housed cattle is temperature extremes. Heat stress can decrease fertility and milk production in cattle. Providing shade is a very common method for reducing heat stress. Barns may also incorporate fans or tunnel ventilation into the barn structure. Overly cold conditions, while rarely deadly for cattle, cause increases in maintenance energy requirements and thus increased feed intake and decreased milk production. During the winter months, when temperatures are low enough, dairy cattle are often kept inside barns that are warmed by their collective body heat.
Feed provision is also an important feature of dairy housing. Pasture-based dairies are a more extensive option, where cows are turned out to graze on pasture when the weather permits; the diet must often be supplemented when poor pasture conditions persist. Free-stall barns and open lots are intensive housing options where feed is brought to the cattle at all times of year. Free-stall barns are designed to allow the cows freedom to choose when they feed, rest, drink, or stand. They can be either fully enclosed or open-air barns, again depending on the climate. The resting areas, called free stalls, are divided beds lined with anything from mattresses to sand. In the lanes between rows of stalls, the floor is often made of grooved concrete. Most barns open onto uncovered corrals, which the cattle are free to use as the weather allows. Open lots are dirt lots with constructed shade structures and a concrete pad where feed is delivered.
Milking systems
Life on a dairy farm revolves around the milking parlor. Each lactating cow will visit the parlor at least twice a day to be milked. A remarkable amount of engineering has gone into designing milking parlors and milking machines. Efficiency is crucial; every second saved while milking a single cow adds up to hours over the whole herd.
Milking machines
Milking is now performed almost exclusively by machine, though human technicians are still essential on most facilities. The most common milking machine is called a cluster milker. This milker consists of four metal cups, one per teat, each lined with rubber or silicone. The cluster is attached to both a milk collection system and a pulsating vacuum system. When the vacuum is on, it pulls air from between the outer metal cup and the liner, drawing milk out of the teat. When the vacuum turns off, it gives the teat an opportunity to refill with milk. In most milking systems, a milking technician must attach the cluster to each cow, but the machine senses when the cow has been fully milked and the cluster drops off automatically.
Milking routine
Every time a cow enters the parlor, several things need to happen to ensure milk quality and cow health. First, the cow's udder must be cleaned and disinfected to prevent both milk contamination and udder infections. Then the milking technician must check each teat for signs of infection by observing the first stream of milk. During this process, called stripping the teat, the milking technician looks for any discoloration or chunkiness that would indicate mastitis, an infection of the cow's mammary gland. Milk from a cow with mastitis cannot enter the human milk supply, so farmers must be careful that infected milk does not mix with the milk from healthy cows and that the cow gets the necessary treatment. If the cow passes the mastitis inspection, the milking technician will attach the milking cluster. The cluster will run until the cow is fully milked and then drop off. The milk travels immediately through a cooling system and then into a large cooled storage tank, where it will stay until picked up by a refrigerated milk truck. Before the cow is released from the milking stall, her teats are disinfected one last time to prevent infection.
Nutritional management
Feed is by far one of the largest expenses for dairy producers, whether it is provided by the land the cattle graze or by crops that are grown or purchased. Pasture-based dairy producers invest much time and effort into maintaining their pastures, and thus the feed for their cattle; pasture management techniques such as rotational grazing are common in dairy production. Many large dairies that deliver feed to their cattle have a dedicated nutritionist who is responsible for formulating diets with animal health, milk production, and cost efficiency in mind. For maximum productivity, diets must be formulated differently depending on the growth rate, milk production, and reproductive status of each animal.
Cattle are classified as ruminants (suborder Ruminantia of the order Artiodactyla), as they are able to acquire nutrients from even low-quality plant-based food, thanks mainly to their symbiotic relationship with the microbes that ferment it in a chamber of their stomachs called the rumen. The rumen is, in effect, a micro-ecosystem within each dairy cow. For optimal digestion, the environment of the rumen must be kept ideal for the microbes. In this sense, the job of a ruminant nutritionist is to feed the microbes, not the cow.
The nutritional requirements of cattle are usually divided into maintenance requirements, which depend on the cow's weight; and milk production requirements, which in turn depend on the volume of milk the cow is producing. The nutritional contents of each available feed are used to formulate a diet that meets all nutritional needs in the most cost effective way. Notably, cattle must be fed a diet high in fiber to maintain a proper environment for the rumen microbes. Farmers typically grow their own forage for their cattle. Crops grown may include corn, alfalfa, timothy, wheat, oats, sorghum and clover. These plants are often processed after harvest to preserve or improve nutrient value and prevent spoiling. Corn, alfalfa, wheat, oats, and sorghum crops are often anaerobically fermented to create silage. Many crops such as alfalfa, timothy, oats, and clover are allowed to dry in the field after cutting before being baled into hay.
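The split described above amounts to a simple additive model: total requirement equals maintenance (a function of body weight) plus production (a function of milk output). The following is a minimal sketch of that idea only; the coefficient values are hypothetical placeholders for illustration, not published feeding standards.

# Illustrative sketch only: the coefficients below are hypothetical placeholders,
# not published feeding standards.
def daily_energy_requirement(body_weight_kg: float, milk_kg: float,
                             maint_per_kg_bw: float = 0.08,
                             prod_per_kg_milk: float = 5.0) -> float:
    """Total requirement = maintenance (scales with body weight)
    plus production (scales with milk output)."""
    maintenance = maint_per_kg_bw * body_weight_kg
    production = prod_per_kg_milk * milk_kg
    return maintenance + production

# Example: a hypothetical 650 kg cow producing 30 kg of milk per day.
requirement = daily_energy_requirement(650, 30)
print(f"Estimated daily requirement: {requirement:.0f} (arbitrary energy units)")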
To increase the energy density of their diet, cattle are commonly fed cereal grains. In many areas of the world, dairy rations also commonly include byproducts from other agricultural sectors. For example, in California cattle are commonly fed almond hulls and cotton seed. Feeding of byproducts can reduce the environmental impact of other agricultural sectors by keeping these materials out of landfills.
To meet all of their nutritional requirements, cows must eat their entire ration. Unfortunately, much like humans, cattle have their favorite foods. To keep cattle from selectively eating the most desirable parts of the diet, most producers feed a total mixed ration (TMR). In this system all the components of the feed are well mixed in a mixing truck before being delivered to the cattle. Different TMRs are often prepared for groups of cows with different nutritional requirements.
Reproductive management
Female calves born on a dairy farm will typically be raised as replacement stock to take the place of older cows that are no longer sufficiently productive. The life of a dairy cow is a cycle of pregnancy and lactation starting at puberty. The timing of these events is very important to the production capacity of the dairy. A cow will not produce milk until she has given birth to a calf. Consequently, timing of the first breeding as well as all the subsequent breeding is important for maintaining milk production levels.
Puberty and first breeding
Most dairy producers aim for a replacement heifer to give birth to her first calf, and thus join the milking herd, on her second birthday. As the cow's gestation period is a little over 9 months, this means the cow must be inseminated by the age of 15 months. Because the breeding process is inefficient, most producers aim to first breed their heifers between 12 and 14 months of age. Before a heifer can be bred she must reach sexual maturity and attain the proper body condition to successfully bear a calf. Puberty in cattle depends largely on weight, among other factors. Holstein heifers reach puberty at an average body weight between 550 and 650 lbs. Smaller breeds of cattle, such as Jerseys, usually reach puberty earlier and at a lighter weight. Under typical nutritional conditions, Holstein heifers will reach puberty at the age of 9–10 months. Proper body condition for breeding is also largely judged by weight. At about 800 lbs, Holstein heifers will normally be able to carry a healthy calf and give birth with relative ease. In this way, the heifers will be able to give birth and join the milking herd before their second birthday.
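The timeline above is straightforward arithmetic: working back from the target first-calving age using the gestation length gives the latest insemination age, and allowing for failed inseminations gives the earlier target window. A small sketch using only the figures quoted in this section:

# Working back from the target first-calving age using the figures quoted above.
TARGET_CALVING_AGE_MONTHS = 24      # calve by the second birthday
GESTATION_MONTHS = 9.3              # "a little over 9 months"

latest_breeding_age = TARGET_CALVING_AGE_MONTHS - GESTATION_MONTHS  # about 15 months
first_breeding_window = (12, 14)    # months of age, allowing for repeat inseminations

print(f"Latest insemination age: about {latest_breeding_age:.0f} months")
print(f"Typical first-breeding window: {first_breeding_window[0]}-{first_breeding_window[1]} months")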
Estrous cycle
Puberty coincides with the beginning of estrous cycles. Estrous cycles are the recurring hormonal and physiological changes that occur within the bodies of most mammalian females that lead to ovulation and the development of a suitable environment for embryonic and fetal growth. The cow is considered polyestrous, which means that she will continue to undergo regular estrous cycles until death unless the cycle is interrupted by a pregnancy.
In cows, a complete estrous cycle lasts 21 days. Most commonly, dairy producers discuss the estrous cycle as beginning when the cow is receptive to breeding. This short phase lasting only about a day is also known as estrus or colloquially, heat. The cow will often exhibit several behavioral changes during this phase including increased activity and vocalizations. Most importantly, during estrus she will stand still when mounted by another cow or bull.
Mating and pregnancy
In the United States, artificial insemination (AI) is a very important reproductive tool used on dairy facilities. AI is the process by which sperm is deliberately delivered by dairy managers or veterinarians into the cow's uterus. Bulls "donate" semen at a stud farm, and there is never any physical contact between the cow and the bull when this method is used.
This method of insemination quickly gained popularity among dairy producers for several reasons. Dairy bulls are notoriously dangerous to keep on the average dairy facility. AI also makes it possible to speed the genetic improvement of the dairy herd, because every dairy farmer has access to sperm from genetically superior sires. Additionally, AI has been shown to reduce the spread of venereal diseases within the herd that would ultimately lead to fertility problems. Many producers also find it to be more economical than keeping a bull. On the other hand, AI does require more intensive reproductive management of the herd, as well as more time and expertise: detection of estrus becomes reliant on human observation in the absence of bulls, it takes considerable expertise to properly inseminate a cow, and high-quality sperm is valuable. Ultimately, because dairy production was already a management-intensive industry, the disadvantages are dwarfed by the advantages of AI for many dairy producers.
The majority of cows carry a single calf. Pregnancy lasts an average of 280 to 285 days, or a little less than nine and a half months.
Lactation management
After the birth of a calf the cow begins to lactate. Lactation will normally continue for as long as the cow is milked but production will steadily decline. Dairy farmers are extremely familiar with the pattern of milk production and carefully time the cow's next breeding to maximize milk production. The pattern of lactation and pregnancy is known as the lactation cycle.
For a period of 20 days post parturition the cow is called a fresh cow. Milk production quickly increases during this phase but milk composition is also significantly different from the rest of the cycle. This first milk, called colostrum, is rich in fats, protein, and also maternal immune cells. This colostrum is not usually commercially sold, but is extremely important for early calf nutrition. Perhaps most importantly, it conveys passive immunity to the calf before its immune system is fully developed.
The next 30 to 60 days of the lactation cycle are characterized by peak milk production levels. The amount of milk produced per day during this period varies considerably by breed and by individual cow depending on her body condition, genetics, health, and nutrition. During this period the body condition of the cow will suffer, because the cow will draw on her body stores to maintain such high milk production. The cow's food intake will also increase. After peak lactation, the cow's milk production levels will slowly decline for the rest of the lactation cycle. The producer will often breed the cow soon after she leaves peak production. For a while, the cow's food intake will remain high before it too begins a decline to pre-lactation levels. After peak milk production her body condition will also steadily recover.
Producers will typically continue to milk the cow until she is two months away from parturition, at which point they will dry her off. Giving the cow a break during the final stages of pregnancy allows her mammary gland to regress and re-develop, her body condition to recover, and the calf to develop normally. Decreased body condition in the cow means she will not be as productive in subsequent lactation cycles. Decreased health in the newborn calf will negatively impact the quality of the replacement herd. There is also evidence that increased rates of mammary cell proliferation occur during the dry period, which is essential to maintaining high production levels in subsequent lactation cycles.
Concerns
Animal waste from large cattle dairies
As measured in phosphorus, the waste output of 5,000 cows roughly equals a municipality of 70,000 people. In the U.S., dairy operations with more than 1,000 cows meet the EPA definition of a CAFO (Concentrated Animal Feeding Operation), and are subject to EPA regulations. For example, in the San Joaquin Valley of California a number of dairies have been established on a very large scale. Each dairy consists of several modern milking parlor set-ups operated as a single enterprise. Each milking parlor is surrounded by a set of 3 or 4 loafing barns housing 1,500 or 2,000 cattle. Some of the larger dairies have planned 10 or more series of loafing barns and milking parlors in this arrangement, so that the total operation may include as many as 15,000 or 20,000 cows. The milking process for these dairies is similar to a smaller dairy with a single milking parlor but repeated several times. The size and concentration of cattle creates major environmental issues associated with manure handling and disposal, which requires substantial areas of cropland (a ratio of 5 or 6 cows to the acre, or several thousand acres for dairies of this size) for manure spreading and dispersion, or several-acre methane digesters. Air pollution from methane gas associated with manure management also is a major concern. As a result, proposals to develop dairies of this size can be controversial and provoke substantial opposition from environmentalists including the Sierra Club and local activists.
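The cropland figure quoted above follows directly from the stated stocking ratio. A back-of-the-envelope check, using only the numbers given in this paragraph:

# Back-of-the-envelope check of the cropland figure quoted above.
herd_sizes = (15_000, 20_000)        # cows on the largest operations described
cows_per_acre = (5, 6)               # manure-spreading ratio quoted above

for cows in herd_sizes:
    fewest_acres = cows / max(cows_per_acre)   # at 6 cows per acre
    most_acres = cows / min(cows_per_acre)     # at 5 cows per acre
    print(f"{cows:,} cows need roughly {fewest_acres:,.0f}-{most_acres:,.0f} acres for manure spreading")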
The potential impact of large dairies was demonstrated when a massive manure spill occurred on a 5,000-cow dairy in Upstate New York, contaminating a stretch of the Black River and killing 375,000 fish. On 10 August 2005, a manure storage lagoon collapsed, releasing manure into the Black River. Subsequently, the New York Department of Environmental Conservation imposed a settlement package of $2.2 million on the dairy.
When properly managed, dairy and other livestock waste, due to its nutrient content (N, P, K), makes an excellent fertilizer promoting crop growth, increasing soil organic matter, and improving overall soil fertility and tilth characteristics. Most dairy farms in the United States are required to develop nutrient management plans for their farms, to help balance the flow of nutrients and reduce the risks of environmental pollution. These plans encourage producers to monitor all nutrients coming onto the farm as feed, forage, animals, fertilizer, etc. and all nutrients exiting the farm as product, crop, animals, manure, etc. For example, a precision approach to animal feeding results in less overfeeding of nutrients and a subsequent decrease in environmental excretion of nutrients, such as phosphorus. In recent years, nutritionists have realized that requirements for phosphorus are much lower than previously thought. These changes have allowed dairy producers to reduce the amount of phosphorus being fed to their cows with a reduction in environmental pollution.
Use of hormones
It is possible to maintain higher milk production by supplementing cows with a growth hormone known as recombinant bovine somatotropin (rBST), but this is controversial due to its effects on animal and possibly human health. The European Union, Japan, Australia, New Zealand and Canada have banned its use due to these concerns.
In the US, however, no such prohibition exists, but rBST use has become rare on dairy farms because most dairy processors, if not all, will not accept milk produced with it. The U.S. Food and Drug Administration states that no "significant difference" has been found between milk from treated and non-treated cows, but based on consumer concerns several milk purchasers and resellers have elected not to purchase milk produced with rBST.
Animal welfare
The practice of dairy production in a factory farm environment has been criticized by animal welfare activists. Some of the ethical complaints regarding dairy production cited include how often the dairy cattle must remain pregnant, the separation of calves from their mothers, how dairy cattle are housed and environmental concerns regarding dairy production.
The production of milk requires that the cow be in lactation, which is a result of the cow having given birth to a calf. The cycle of insemination, pregnancy, parturition, and lactation is followed by a "dry" period of about two months, or forty-five to fifty days, before calving, which allows udder tissue to regenerate. A dry period that falls outside this time frame can result in decreased milk production in subsequent lactation.
An important part of the dairy industry is the removal of calves from their mother's milk after the three days of colostrum the calf needs, allowing the milk produced thereafter to be collected. On some dairies, in order for this to take place, the calves are fed milk replacer, a substitute for the whole milk produced by the cow. Milk replacer is generally a powder, which comes in large bags, is added to precise amounts of water, and is then fed to the calf via bucket, bottle or automated feeder.
Milk replacers are classified into three categories: protein source, protein/fat (energy) levels, and medication or additives (e.g. vitamins and minerals). Proteins for the milk replacer come from different sources: the more favorable and more expensive all-milk proteins (e.g. whey protein, a by-product of the cheese industry) and alternative proteins including soy, animal plasma and wheat gluten. The ideal levels for fat and protein in milk replacer are 10–28% and 18–30%, respectively.
The higher the energy levels (fat and protein), the less starter feed (feed which is given to young animals) the animal will consume. Weaning can take place when a calf is consuming at least two pounds of starter feed a day and has been on starter for at least three weeks.
Milk replacer has climbed in cost to US$15–20 a bag in recent years, so early weaning is economically crucial to effective calf management.
Common ailments affecting dairy cows include infectious disease (e.g. mastitis, endometritis and digital dermatitis), metabolic disease (e.g. milk fever and ketosis) and injuries caused by their environment (e.g. hoof and hock lesions).
Lameness is commonly considered one of the most significant animal welfare issues for dairy cattle, and is best defined as any abnormality that causes an animal to change its gait. It can be caused by a number of sources, including infections of the hoof tissue (e.g. fungal infections that cause dermatitis) and physical damage causing bruising or lesions (e.g. ulcers or hemorrhage of the hoof).
Housing and management features common in modern dairy farms (such as concrete barn floors, limited access to pasture and suboptimal bed-stall design) have been identified as contributing risk factors to infections and injuries.
Greenhouse gas emissions
Milk production is estimated to have been responsible for 18% of agricultural greenhouse gas emissions in 2014.
Market
Worldwide
There is a great deal of variation in the pattern of dairy production worldwide. Many countries which are large producers consume most of this internally, while others (in particular New Zealand), export a large percentage of their production. Internal consumption is often in the form of liquid milk, while the bulk of international trade is in processed dairy products such as milk powder.
The milking of cows was traditionally a labor-intensive operation and still is in less developed countries. Small farms need several people to milk and care for only a few dozen cows, though for many farms these employees have traditionally been the children of the farm family, giving rise to the term "family farm".
Advances in technology have mostly led to the radical redefinition of "family farms" in industrialized countries such as Australia, New Zealand, and the United States. With farms of hundreds of cows producing large volumes of milk, the larger and more efficient dairy farms are better able to weather severe changes in milk price and operate profitably, while "traditional" family farms generally do not have the equity or income of larger-scale farms. The common public perception of large corporate farms supplanting smaller ones is generally a misconception, as many small family farms expand to take advantage of economies of scale, and incorporate the business to limit the legal liabilities of the owners and simplify such things as tax management. The transition from family farms to farms with employed staff who carry out the day-to-day management of the herd's animals has changed the farmer's duties and role on the farm. New questions have arisen concerning how the development of bigger farms places greater demands on strategies focused on financial control, leadership, and personnel issues.
Before large-scale mechanization arrived in the 1950s, keeping a dozen milk cows for the sale of milk was profitable. Now most dairies must have more than one hundred cows being milked at a time in order to be profitable, with other cows and heifers waiting to be "freshened" to join the milking herd. In New Zealand, the average herd size increased from 113 cows in the 1975–76 season to 435 cows in the 2018–19 season.
Worldwide, the largest cow milk producer is the United States, the largest cow milk exporter is New Zealand, and the largest importer is China. The European Union, with its present 27 member countries, produced the most milk of any politico-economic union in 2013 (96.8% of it cow milk).
Supply management
The Canadian dairy industry is one of four sectors under the supply management system, a national agricultural policy framework that coordinates supply and demand through production and import controls and pricing mechanisms designed to prevent shortages and surpluses, to ensure farmers a fair rate of return and to ensure Canadian consumers access to a high-quality, stable, and secure supply of these sensitive products. The milk supply management system is a "federated provincial policy" with four governing agencies, organizations and committees: the Canadian Dairy Commission, the Canadian Milk Supply Management Committee (CMSMC), regional milk pools, and provincial milk marketing boards. The Canadian Dairy Commission (CDC), established in 1966 and composed mostly of dairy farmers, administers the dairy supply management system for Canada's 12,000 dairy farms, and the federal government is involved in supply management through the CDC in the administration of imports and exports. The CMSMC was introduced in 1970 as the body responsible for monitoring milk production rates and setting the national Market Sharing Quota (MSQ) for industrial raw milk. The supply management system was authorized in 1972 through the Farm Products Agencies Act. Supply management ensures consistent pricing of milk for farmers, without fluctuation in the market; prices are based on the demand for milk throughout the country and on how much is being produced. To start a new farm or to increase production, a producer must buy additional share in the supply management system, known as "quota", and must keep production at or below the amount of quota held. Each province in Canada has its own cap on quota based on demand in its market, and there is a national cap known as the total quota per month. In 2016, the total butterfat produced per month was 28,395,848 kg.
World Milk Production
United States
In the United States, the top five dairy states are, in order of total milk production: California, Wisconsin, New York, Idaho, and Texas. Dairy farming is also an important industry in Florida, Minnesota, Ohio and Vermont. There are 40,000 dairy farms in the United States.
Pennsylvania has 8,500 farms with 555,000 dairy cows. Milk produced in Pennsylvania yields an annual revenue of about US$1.5 billion.
Milk prices collapsed in 2009. Senator Bernie Sanders accused Dean Foods of controlling 40% of the country's milk market. He has requested the United States Department of Justice to pursue an anti-trust investigation. Dean Foods says it buys 15% of the country's raw milk. In 2011, a federal judge approved a settlement of $30 million to 9,000 farmers in the Northeast.
Herd size in the US varies from around 1,200 cows on the West Coast and in the Southwest, where large farms are commonplace, to roughly 50 in the Midwest and Northeast, where the land base is a significant limiting factor on herd size. The average herd size in the U.S. is about one hundred cows per farm, but the midpoint size is 900 cows, with 49% of all cows residing on farms of 1,000 or more cows.
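The gap between the 100-cow average and the 900-cow midpoint reflects two different summaries of the same distribution: the average is taken over farms, while the midpoint is weighted by cows (the herd size at which half of all cows live on farms that size or larger). The sketch below illustrates the effect with made-up numbers, not actual US herd data.

# Made-up numbers showing how a farm-level average can sit far below the
# cow-weighted midpoint when a few large farms hold most of the cows.
herds = [40] * 90 + [2000] * 10         # 90 small farms, 10 very large farms

farm_average = sum(herds) / len(herds)  # simple mean over farms

# Cow-weighted midpoint: herd size at which half of all cows are on farms
# of that size or larger.
total_cows = sum(herds)
running, midpoint = 0, None
for size in sorted(herds, reverse=True):
    running += size
    if running >= total_cows / 2:
        midpoint = size
        break

print(f"Average herd size (per farm): {farm_average:.0f} cows")
print(f"Cow-weighted midpoint herd size: {midpoint} cows")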
European Union
Climate change and milk yields
| Technology | Agriculture_2 | null |
450023 | https://en.wikipedia.org/wiki/Ginseng | Ginseng | Ginseng () is the root of plants in the genus Panax, such as Korean ginseng (P. ginseng), South China ginseng (P. notoginseng), and American ginseng (P. quinquefolius), characterized by the presence of ginsenosides and gintonin. Ginseng is common in the cuisines and medicines of China and Korea.
Ginseng has been used in traditional medicine over centuries, though modern clinical research is inconclusive about its medical effectiveness. There is no substantial evidence that ginseng is effective for treating any medical condition and it has not been approved by the US Food and Drug Administration (FDA) to treat or prevent a disease or to provide a health benefit. Although ginseng is sold as a dietary supplement, inconsistent manufacturing practices for supplements have led to analyses of some ginseng products contaminated with toxic metals or unrelated filler compounds, and its excessive use may have adverse effects or untoward interactions with prescription drugs.
History
One of the first written texts covering the use of ginseng as a medicinal herb was the Shen Nong Pharmacopoeia, written in China in 196 AD. In his Compendium of Materia Medica herbal of 1596, Li Shizhen described ginseng as a "superior tonic". However, the herb was not used as a "cure-all" medicine, but more specifically as a tonic for patients with chronic illnesses and those who were convalescing.
Control over ginseng fields in China and Korea became an issue in the 16th century.
In folk belief
In Chinese folk tales from the northeastern regions, ginseng is said to transform into children, often depicted with skyward-reaching braids, sometimes tied with red ribbons, and occasionally dressed in bellybands. In these stories, a ginseng child will typically enter a house to play with another child. However, if the adults tie a red ribbon around the child's feet, the child vanishes. When they follow the ribbon, they find it tied to a blade of grass, and upon digging, they uncover a ginseng root.
Ginseng species
Ginseng plants belong only to the genus Panax. Cultivated species include Panax ginseng (Korean ginseng), Panax notoginseng (South China ginseng), Panax pseudoginseng (Himalayan ginseng), Panax quinquefolius (American ginseng), Panax trifolius (Dwarf ginseng), and Panax vietnamensis (Vietnamese ginseng). Ginseng is found in cooler climates – Korean Peninsula, Northeast China, Russian Far East, Canada and the United States, although some species grow in warm regions – South China ginseng being native to Southwest China and Vietnam. Panax vietnamensis (Vietnamese ginseng) is the southernmost Panax species known.
Wild and cultivated ginseng
Wild ginseng
Wild ginseng grows naturally in mountains and is hand-picked by gatherers known as simmani. The wild ginseng plant is almost extinct in China and endangered globally. This is due to high demand for the product in recent years, which has led to wild plants being harvested faster than they can grow and reproduce (a wild ginseng plant can take years to reach maturity). Wild ginseng can be processed into red or white ginseng. Wild American ginseng has long been used by Native Americans for medicine. Since the mid-1700s, it has been harvested for international trade. Wild American ginseng can be harvested in 19 states and the Appalachian Mountains but has restrictions for exporting.
Cultivated ginseng
Cultivated ginseng is less expensive than the rarely available wild ginseng.
Another form of cultivated ginseng is planted on mountains by humans and allowed to grow like wild ginseng.
Ginseng processing
Ginseng seed normally does not germinate until the second spring following the harvest of berries in autumn. The seeds must first be subjected to a long period of storage in a moist medium with a warm/cold treatment, a process known as stratification.
Fresh ginseng
Fresh ginseng, also called "green ginseng", is the non-dried raw product. Its use is limited by availability.
White ginseng
White ginseng is fresh ginseng that has been peeled and dried without being heated, reducing the water content to 12% or less. Drying in the sun bleaches the root to a yellowish-white color.
Red ginseng
Red ginseng is ginseng that has been peeled, heated through steaming at standard boiling temperature, and then dried or sun-dried, which gives it a reddish color. Red ginseng is less vulnerable to decay than white ginseng. It is frequently marinated in an herbal brew, which results in the root becoming extremely brittle.
Production
Commercial ginseng is sold in over 35 countries, with China as the largest consumer. In 2013, global sales of ginseng exceeded $2 billion, of which half was produced by South Korea. In the early 21st century, 99% of the world's 80,000 tons of ginseng was produced in just four countries: China (44,749 tons), South Korea (27,480 tons), Canada (6,486 tons), and the United States (1,054 tons). All ginseng produced in South Korea is Korean ginseng (P. ginseng), while ginseng produced in China includes P. ginseng and South China ginseng (P. notoginseng). Ginseng produced in Canada and the United States is mostly American ginseng (P. quinquefolius).
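The country figures quoted above can be checked against the stated 80,000-ton world total; a quick computation of each share, using only the numbers in this paragraph:

# Quick check of the production shares implied by the figures quoted above.
world_total_tons = 80_000
production_tons = {
    "China": 44_749,
    "South Korea": 27_480,
    "Canada": 6_486,
    "United States": 1_054,
}

for country, tons in production_tons.items():
    print(f"{country}: {tons / world_total_tons:.1%}")

four_country_share = sum(production_tons.values()) / world_total_tons
print(f"Four-country share: {four_country_share:.1%}")  # roughly 99%, as stated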
Uses
Ginseng may be included in energy drinks or herbal teas in small amounts or sold as a dietary supplement.
Food or beverage
The root is most often available in dried form, either whole or sliced. In Korean cuisine, ginseng is used in various banchan (side dishes) and guk (soups), as well as in tea and alcoholic beverages. Ginseng-infused tea and liquor, known as insam cha (literally "ginseng tea") and insam-ju ("ginseng liquor"), are consumed. Ginseng leaves are also used to prepare foods and beverages: they are used in Asian soups, steamed with chicken or combined with ginger, dates, and pork, or eaten fresh.
Traditional medicine and phytochemicals
Although ginseng has been used in traditional medicine for centuries, there is no good evidence it causes any improvement of health or lowers the risk of any disease. Clinical research indicates there are no confirmed effects on memory, fatigue, menopause symptoms, and insulin response in people with mild diabetes. A 2021 review indicated that ginseng had "only trivial effects on erectile function or satisfaction with intercourse compared to placebo".
Although the roots are used in traditional Chinese medicine, the leaves and stems contain larger quantities of phytochemicals than the roots and are easier to harvest. The constituents include steroid saponins known as ginsenosides, as well as polyacetylenes, polysaccharides, peptidoglycans, and polyphenols, among other compounds. A preparation of ginsenosides from the leaves and stems is an approved over-the-counter medication in China; its indication is written in traditional Chinese medicine terms.
FDA warning letters
As of 2019, the United States FDA and Federal Trade Commission have issued numerous warning letters to manufacturers of ginseng dietary supplements for making false claims of health or anti-disease benefits, stating that the "products are not generally recognized as safe and effective for the referenced uses" and are illegal as unauthorized "new drugs" under federal law.
Safety and side effects
Ginseng supplements are not subjected to the same pre-market approval process in the US by the Food and Drug Administration (FDA) as pharmaceutical drugs. FDA mandates that manufacturers must ensure the safety of their ginseng supplements before marketing, without the necessity to substantiate the safety and efficacy of these supplements in a pre-market scenario. Ginseng supplements can be complex, often containing multiple constituents. It is not uncommon to observe discrepancies between the ingredients listed on the product label and the actual components or their quantities present in the supplement. While manufacturers can employ independent organizations to authenticate the quality of a product or its ingredients, such verification does not equate to a certification of the product's safety or effectiveness. These independent quality checks primarily focus on the integrity of the product in terms of its composition and do not extend to safety evaluations or purported clinical efficacy.
Ginseng generally has a good safety profile and the incidence of adverse effects is minor when used over the short term. The FDA has classified ginseng as "generally recognized as safe" (GRAS), indicating its general tolerability in adult populations.
The risk of interactions between ginseng and prescription medications is believed to be low, but ginseng may have adverse effects when used with blood thinners. Ginseng interacts with certain blood thinner medications, such as warfarin, leading to decreased blood levels of these drugs. Ginseng can also potentiate the effects of sedative medications. Concerns exist when ginseng is used over a longer term, potentially causing side effects such as skin rashes, headaches, insomnia, and digestive problems. The long-term use of ginseng may result in nervousness, anxiety, diarrhea, confusion, depression, or feelings of depersonalization, nausea, and fluctuations in blood pressure (including hypertension). There have been reports of gynecomastia and breast pain associated with ginseng use. Other side effects include breast pain and vaginal bleeding. As of 2023, there is a lack of data regarding the safety and efficacy of ginseng in lactating mothers and infants. Given its potential estrogenic activity and the absence of safety data during lactation, ginseng is not recommended for use during breastfeeding. Ginseng also has adverse drug reactions with phenelzine, and a potential interaction has been reported with imatinib, resulting in hepatotoxicity, and with lamotrigine.
Overdose
The common ginsengs (P. ginseng and P. quinquefolia) are generally considered to be relatively safe even in large amounts. One of the most common and characteristic symptoms of an acute overdose of P. ginseng is bleeding. Symptoms of mild overdose may include dry mouth and lips, excitation, fidgeting, irritability, tremor, palpitations, blurred vision, headache, insomnia, increased body temperature, increased blood pressure, edema, decreased appetite, dizziness, itching, eczema, early morning diarrhea, bleeding, and fatigue.
Symptoms of severe overdose with P. ginseng may include nausea, vomiting, irritability, restlessness, urinary and bowel incontinence, fever, increased blood pressure, increased respiration, decreased sensitivity and reaction to light, decreased heart rate, cyanotic (blue) facial complexion, red facial complexion, seizures, convulsions, and delirium.
Terminology and etymology
The English word "ginseng" comes from the Teochew Chinese (; where this transliteration is in Pe̍h-ōe-jī). The first character (pinyin rén; or ) means "person" and the second character (; ) means "plant root" in a forked shape.
The Korean loanword insam comes from the cultivated ginseng (), which is less expensive than wild ginseng.
The botanical genus name Panax, meaning "all-healing" in Greek, shares the same origin as "panacea" and was applied to this genus because Carl Linnaeus was aware of its wide use in Chinese medicine as a muscle relaxant.
Other plants sometimes called ginseng
True ginseng plants belong only to the genus Panax. Several other plants are sometimes referred to as ginseng, but they are from a different genus or even family. Siberian ginseng is in the same family, but not genus, as true ginseng. The active compounds in Siberian ginseng are eleutherosides, not ginsenosides. Instead of a fleshy root, Siberian ginseng has a woody root.
Angelica sinensis (female ginseng, dong quai)
Codonopsis pilosula (poor man's ginseng, dangshen)
Eleutherococcus senticosus (Siberian ginseng)
Gynostemma pentaphyllum (five-leaf ginseng, jiaogulan)
Kaempferia parviflora (Thai ginseng, krachai dum)
Lepidium meyenii (Peruvian ginseng, maca)
Oplopanax horridus (Alaskan ginseng)
Pfaffia paniculata (Brazilian ginseng, suma)
Pseudostellaria heterophylla (Prince ginseng)
Schisandra chinensis (five-flavoured berry)
Trichopus zeylanicus (Kerala ginseng)
Withania somnifera (Indian ginseng, ashwagandha)
Eurycoma longifolia (Malaysian ginseng, tongkat ali)
| Biology and health sciences | Apiales | null |
450175 | https://en.wikipedia.org/wiki/Zoraptera | Zoraptera | The insect order Zoraptera, commonly known as angel insects, contains small and soft bodied insects with two forms: winged with wings sheddable as in termites, dark and with eyes (compound) and ocelli (simple); or wingless, pale and without eyes or ocelli. They have a characteristic nine-segmented beaded (moniliform) antenna. They have mouthparts adapted for chewing and are mostly found under bark, in dry wood or in leaf litter.
Description
The name Zoraptera, given by Filippo Silvestri in 1913, is a misnomer and potentially misleading: "zor" is Greek for pure and "aptera" means wingless. "Pure wingless" clearly does not fit the winged alate forms, which were discovered several years after the wingless forms had been described.
The members of this order are small insects that resemble termites in appearance and in their gregarious behavior. They are short and swollen in appearance. They belong to the hemimetabolous insects. They possess mandibulate biting mouthparts, short cerci (usually of one segment only), and short antennae with nine segments. The abdomen has eleven segments. The maxillary palps have five segments and the labial palps three; in both, the most distal segment is enlarged. They have six Malpighian tubules, and their abdominal ganglia are fused into two separate ganglionic complexes. Immature nymphs resemble small adults. Each species shows polymorphism. Most individuals are of the apterous form or "morph", with no wings, no eyes, and little or no pigmentation. A few females and even fewer males are of the alate form, with relatively large membranous wings that can be shed at a basal fracture line. Alates also have compound eyes and ocelli, and more pigmentation. This polymorphism is already apparent as two forms of nymphs. The wings are paddle shaped, have simple venation, and can be shed spontaneously. Under good conditions the blind and wingless form predominates, but if the surroundings become too harsh, the insects produce offspring that develop into winged adults with eyes. These winged offspring are then able to disperse and establish new colonies in areas with more resources. Once established, future generations are once again born blind and wingless.
Systematics
Phylogeny
The phylogenetic relationship of the order remains controversial and elusive. At present the best-supported position, based on morphological traits, recognizes the Zoraptera as polyneopterous insects related to the webspinners of the order Embioptera. However, molecular analysis of 18S ribosomal DNA supports a close relationship with the superorder Dictyoptera.
The following cladogram, based on the molecular phylogeny of Wipfler et al. 2019, places Zoraptera as the sister group of Dermaptera (earwigs); Zoraptera and Dermaptera together form the sister group of the remaining Polyneoptera:
Classification
The Zoraptera are currently divided into two families, four subfamilies, nine genera and a total of 51 species, some of which have not yet been described. Eleven extinct species were known as of 2017, many of them from Burmese amber.
Family Zorotypidae
Subfamily Zorotypinae
Zorotypus — 7 spp.
Usazoros — 1 sp.
Subfamily Spermozorinae
Spermozoros — 6 spp.
Family Spiralizoridae
Subfamily Latinozorinae
Latinozoros — 3 spp.
Subfamily Spiralizorinae
Spiralizoros — 12 spp.
Centrozoros (=Meridozoros; Floridazoros) — 8 spp.
Cordezoros — 1 sp.
Scapulizoros — 1 sp.
Brazilozoros — 3 spp.
Incertae sedis
The following nine species are considered Zoraptera incertae sedis:
Zorotypus congensis – Congo (Dem.Rep.)
Zorotypus javanicus – Indonesia (Java)
Zorotypus juninensis (considered a synonym of Centrozoros hamiltoni) – Peru
Zorotypus lawrencei New, 1995 – Christmas Island
Zorotypus leleupi – Ecuador (Galapagos Islands)
Zorotypus longicercatus – Jamaica
Zorotypus newi (=Formosozoros newi; in actuality an immature earwig) – Taiwan
Zorotypus sechellensis – Seychelles
Zorotypus swezeyi – United States (Hawaii)
Extinct taxa
Zorotypus Silvestri, 1913
Subgenus Zorotypus Silvestri, 1913
Zorotypus (Zorotypus) absonus Engel, 2008 – Dominican amber, Dominican Republic (Miocene)
Zorotypus (Zorotypus) denticulatus Yin, Cai, & Huang, 2018 – Burmese amber, Myanmar (Cretaceous)
Zorotypus (Zorotypus) dilaticeps Yin, Cai, Huang, & Engel, 2018 – Kachin amber, Myanmar (Cretaceous)
Zorotypus (Zorotypus) goeleti Engel & Grimaldi, 2002 – Dominican amber, Dominican Republic (Miocene)
Zorotypus (Zorotypus) mnemosyne Engel, 2008 – Dominican amber, Dominican Republic (Miocene)
Subgenus Octozoros Engel, 2003
Zorotypus (Octozoros) acanthothorax Engel & Grimaldi, 2002 – Kachin amber, Myanmar (Cretaceous)
Zorotypus (Octozoros) nascimbenei Engel & Grimaldi, 2002 – Kachin amber, Myanmar (Cretaceous)
Zorotypus (Octozoros) cenomanianus Yin, Cai, & Huang, 2018 – Kachin amber, Myanmar (Cretaceous)
Zorotypus (Octozoros) hudai Kaddumi, 2005 – Jordanian amber, Jordan (Cretaceous)
Zorotypus cretatus Engel & Grimaldi, 2002 – Kachin amber, Myanmar (Cretaceous)
Zorotypus oligophleps Liu, Zhang, Cai & Li, 2018
Zorotypus robustus Liu, Zhang, Cai & Li, 2018
Xenozorotypus Engel & Grimaldi, 2002
Xenozorotypus burmiticus Engel & Grimaldi, 2002 – Kachin amber, Myanmar (Cretaceous)
Behavior and ecology
Zorapterans live in small colonies beneath rotting wood; they lack mouthparts able to tunnel into wood, and feed instead on fungal spores and detritus. These insects can also hunt smaller arthropods such as mites and collembolans. Much of their time is spent grooming themselves or others.
Centrozoros gurneyi lives in colonies which range in size from a few dozen to several hundred individuals, but most often number about 30 individuals. The males are slightly larger than the females, and they fight for dominance.
When two colonies of Usazoros hubbardi are brought together experimentally, there is no difference in behavior towards members of the new colony. Therefore, colonies in the wild might merge easily. Winged forms are rare. The males in most colonies establish a linear dominance hierarchy in which age or duration of colony membership is the prime factor determining dominance. Males appearing later in colonies are at the bottom of the hierarchy, regardless of their body size. By continually attacking other males, the dominant male monopolizes a harem of females. The members of this harem stay clumped together. There is a high correlation between rank and reproductive success of the males.
Latinozoros barberi lack such a dominance structure but display complex courtship behavior including nuptial feeding. The males possess a cephalic gland that opens in the middle of their head. During courtship they secrete a fluid from this gland and offer it to the female. Acceptance of this droplet by the female acts as behavioral releaser and immediately leads to copulation.
In Spermozoros impolitus, copulation does not occur; fertilization is accomplished instead by transfer of a spermatophore from the male to the female. This spermatophore contains a single giant sperm cell, which unravels to about the same length as the female herself. It is thought that this large sperm cell prevents fertilization by other males by physically blocking the female's genital tract.
Effects on ecosystem
Zorapterans are thought to provide some important services to ecosystems. By consuming detritus, such as dead arthropods, they assist in decomposition and nutrient cycling.
| Biology and health sciences | Insects: General | Animals |
450257 | https://en.wikipedia.org/wiki/Urban%20agriculture | Urban agriculture | Urban agriculture refers to various practices of cultivating, processing, and distributing food in urban areas. The term also covers animal husbandry, aquaculture, beekeeping, and horticulture carried out in an urban context. Urban agriculture is distinguished from peri-urban agriculture, which takes place in rural areas at the edge of suburbs.
Urban agriculture can appear at varying levels of economic and social development. It can involve a movement of organic growers, "foodies" and "locavores", who seek to form social networks founded on a shared ethos of nature and community holism. These networks can develop by way of formal institutional support, becoming integrated into local town planning as a "transition town" movement for sustainable urban development. For others, food security, nutrition, and income generation are key motivations for the practice. In either case, the more direct access to fresh vegetable, fruit, and meat products that may be realised through urban agriculture can improve food security and food safety while decreasing food miles, leading to lower greenhouse gas emissions, thereby contributing to climate change mitigation.
History
Some of the first evidence of urban agriculture comes from early Mesopotamian cultures, where farmers set aside small plots of land for farming within the city's walls (around 3500 BC). In Persia's semi-desert towns, oases were fed through aqueducts carrying mountain water to support intensive food production, nurtured by wastes from the communities. The Hanging Gardens of Babylon are another famous, if potentially legendary, regional example. In China, Xi'an has been continuously inhabited since at least 5000 BC, and its citizens have engaged in urban agriculture to varying degrees at different points of its history. At the Incas' Machu Picchu, water was conserved and reused as part of the stepped architecture of the city, and vegetable beds were designed to gather sun in order to prolong the growing season. Elsewhere in the Americas, well-documented examples of pre-Columbian Amerindian urban agriculture include the Aztecs' lake-based chinampas, which were crucial to population growth in the cities of the Valley of Mexico; Cahokia's maize-based economy on the Mississippi River near present-day St. Louis; and the thriving mesa agricultural plots of the cliff-based Pueblo cultures, such as Mesa Verde in today's Four Corners region, among others.
The idea of supplemental food production beyond rural farming operations and distant imports is not new. It was used during war and depression times when food shortage issues arose, as well as during times of relative abundance. Allotment gardens emerged in Germany in the early 19th century as a response to poverty and food insecurity.
In the context of the US, urban agriculture as a widely recognized practice took root in response to the 1893–1897 economic depression in Detroit. In 1894, Mayor Hazen S. Pingree called on outlying citizens of a depression-struck Detroit to lend their properties to the city government ahead of the winter season. The Detroit government would in turn develop these lots as makeshift potato gardens, nicknamed Pingree's Potato Patches after the mayor, as potatoes were weather resistant and easy to grow. He intended for these gardens to produce income and food and to boost independence during times of hardship. The Detroit project was successful enough that other US cities adopted similar urban agriculture practices. By 1906, the United States Department of Agriculture estimated that over 75,000 schools alone managed urban agriculture programs to provide children and their families with fresh produce. However, it would not be until the First World War that US urban agriculture spread widely.
During World War One, food production became a major national security concern for several countries, including the US. President Woodrow Wilson called upon all American citizens to utilize any available open space for growing food, seeing this as a way to pull the country out of a potentially damaging situation of food insecurity. The National War Garden Committee under the American Forestry Association organized campaigns with patriotic messages such as "Sow the Seeds of Victory", with the aim of reducing domestic pressure on food production. In so doing, primary agricultural industries could focus on shipping rations to troops in Europe. So-called victory gardens sprouted during World War One (and were revived during World War Two) in the US, as well as in Canada and the United Kingdom. By 1919, American victory gardens numbered 5 million plots country-wide, and over 500 million pounds of produce was harvested. So efficient were the American urban agriculture programs that surplus foodstuffs were shipped to war-ravaged European nations, in addition to American military forces.
A very similar practice came into use during the Great Depression, providing purpose, work, and food to people who would otherwise have had none during such harsh times. These efforts helped raise spirits and boost economic growth. Over $2.8 million worth of food was produced from the subsistence gardens during the Depression. Public and government support for victory gardens waned during the interwar period, with most American sites becoming repurposed for various economic development initiatives.
By World War II, the War/Food Administration set up a National Victory Garden Program that set out to systematically establish functioning agriculture within cities. Indeed, these new victory gardens became the "first line of defense for the country". Once more, the government supported and encouraged Victory Gardens as a means of national security: domestic pressure on major agricultural industries would be relieved to further augment the war economy. With this new plan in action, as many as 5.5 million Americans took part in the victory garden movement and over nine million pounds of fruit and vegetables were grown a year, accounting for 44% of US-grown produce throughout that time. In the post-war period, the US government gradually stopped assisting urban agriculture programs, partially due to the lack of need of war supplies and partially due to the US fully embracing industrialized food systems.
By the 1950s and 1960s, urban agriculture was more focused on grassroots initiatives spearheaded by politicized social movements, including the African-American Civil Rights. These groups benefited from a great number of vacant lots, left behind during a period of post-war urban to suburban migration. Despite these efforts, vacant lots as a whole became seen as blighted, decaying areas. Some American cities, such as Syracuse, NY once more supported urban agriculture programs, not for food security but to make these vacant lots more appealing. Social and environmental justice groups, such as New York City based Green Guerillas, Seattle-based P-Patch, Boston-based Urban Gardeners, and Philadelphia-based Philadelphia Green, continued to shape American urban agricultural practices during the 1970s. These groups - and many others - reinvigorated interest in urban agriculture, aimed not only at community development but also combating environmental crises.
American urban agriculture initiatives during the 1980s built upon the previous decade's focus on community engagement. A natural evolution was for sites of urban agriculture to enter everyday community roles and consequently require more funding than grassroots movements could muster. The US government created an Urban Garden Program, which funded programs in twenty-eight cities that in turn produced roughly twenty-one million dollars' worth of produce. Though some sites of urban agriculture were repurposed for other economic development, the overall trend of the 1980s was an expansion of the practice. The 1990s continued this growth of urban agriculture sites in the US while also expanding their purposes. A result of this broadening was the division of urban agriculture practitioners based on motivations, organizational structure, and a host of other operational concerns.
Throughout the 2000s, 2010s, and 2020s, urban agriculture sites and usages of these sites have continued to grow. Groups managing some sites focus on the economic security and cultural preservation of immigrants, such as the Hmong American communities in various US states. Other groups incorporate urban agriculture programs as part of wider social justice missions, such as those in the city of Wilmington, Delaware. Still others seek to use urban agriculture as a means of combating community-scale food insecurity, as part of wider goals of rewilding cities and human diets, among a multitude of other uses. Much attention has been placed on the practice of urban agriculture in connection to food movements such as alternative food networks, sustainable food networks, and local food movements. Alternative food networks seek to redefine food production, distribution, and consumption by considering the sociocultural elements of local communities and economies. Sustainable food networks are a related concept, but focus more on ecological concerns. Local food networks focus more on the political responses to globalization or concerns with the environmental impacts of industrialized food transportation.
Main types
There is no overarching term for agricultural plots in urban areas. Gardens and farms, while not easy to define, are the two main types. According to the USDA, a farm is "any place from which $1,000 or more of agricultural products were produced and sold." In Europe, the term "city farm" is used to include both gardens and farms. Size does not matter: any plot where produce is grown, whether a personal garden or a larger site, can be considered an urban farm.
Gardens
Many communities make community gardening accessible to the public, providing space for citizens to cultivate plants for food, recreation and education. In many cities, small plots of land and also rooftops are used by community members for gardening. Community gardens give citizens the opportunity to learn about horticulture through trial and error and to gain a better understanding of the process of producing food and other plants, while also helping to feed people in need from the community; the gardens serve as both a learning experience and a means of assistance. A community gardening program that is well established is Seattle's P-Patch. The grassroots permaculture movement has been hugely influential in the renaissance of urban agriculture throughout the world. During the 1960s a number of community gardens were established in the United Kingdom, influenced by the community garden movement in the United States. Bristol's Severn Project was established in 2010 for £2,500 and provides 34 tons of produce per year, employing people from disadvantaged backgrounds.
Farms
The first form of urban farming occurs when family farms keep their land as the city grows around it. City farms, or urban farms, are agricultural plots in urban areas where people work with animals and plants to produce food. They are usually community-run gardens seeking to improve community relationships and offer an awareness of agriculture and farming to people who live in urbanized areas. Despite the name, urban farming does not have to take place in a dense urban core; it can occur in the backyard of a house or on the rooftop of an apartment building. City farms are important sources of food security for many communities around the globe, and they vary in size from small plots in private yards to larger farms that occupy a number of acres. In 1996, a United Nations report estimated that over 800 million people worldwide grow food and raise livestock in cities. Although some city farms have paid employees, most rely heavily on volunteer labour, and some are run by volunteers alone. Other city farms operate as partnerships with local authorities.
An early city farm was set up in 1972 in Kentish Town, London. It combines farm animals with gardening space, an addition inspired by children's farms in the Netherlands. Other city farms followed across London and the United Kingdom. In Australia, several city farms exist in various capital cities. In Melbourne, the Collingwood Children's Farm was established in 1979 on the Abbotsford Precinct Heritage Farmlands (the APHF), the oldest continually farmed land in Victoria, farmed since 1838.
In 2010, New York City saw the building and opening of the world's largest privately owned and operated rooftop farm, followed by an even larger location in 2012. Both were a result of municipal programs such as The Green Roof Tax Abatement Program and Green Infrastructure Grant Program.
The Philippines has numerous urban farms and other types of UA sites throughout the country. Notable initiatives and organizations include the Philippine Urban Agriculture Network (PUAN), Gawad Kalinga, and the national government's Urban Agriculture Program. Quezon City, the most populous city in the country, began a "Joy of Farming" program in the 2010s, which to date has implemented more than 160 urban farms in backyards, daycare centres, churches, and communal spaces, as well as purpose-built demonstration farms. In Metro Manila's Taguig, a 300-square-metre urban farm built in 2020 was seen as a key contributor to the city's resilience during the Covid-19 pandemic. Fishermen based in Cebu initiated a hybrid farm-to-table scheme, connecting isolated farming communities to both national and international markets.
In Singapore, hydroponic rooftop farms, which also rely on vertical farming, are appearing. The goal behind these is to rejuvenate areas and workforces that have so far been marginalized, while at the same time growing and harvesting high-quality, pesticide-free produce. Because Singapore imports more than 90% of its food, urban agriculture is seen as essential to national security, a fact underscored during the Covid-19 pandemic.
On the island of Taiwan, local city governments sometimes support urban agriculture as a means of sustainable ecology, such as Taipei's Garden City Initiative. Originally intended to be small-scale, the program was met with unanticipated enthusiasm: more than 200 schools were approved for its trial phase, against a planned 22. While certain challenges to implementation and management remain, the program covers multiple types of urban agricultural sites governed by a variety of independent actors and institutions.
Like many countries experiencing rapid urban population growth, China is exploring urban agriculture as a means of feeding its growing population, for example in Shanghai and the megalopolis of the Pearl River Delta. Despite widespread cultural, architectural, economic, governmental, and social challenges to implementation, urban agriculture is noted as having numerous positive transformative effects for China, ranging from local food security to national economic security.
Aquaponics systems
Aquaponics is a closed-loop farming technique that combines aquaculture and hydroponics to create a self-sustaining ecosystem. In this mutually beneficial relationship, fish waste serves as a natural fertilizer for the plants, while the plants filter and purify the water for the fish. The system not only minimizes water usage but also eliminates the need for chemical fertilizers, making it an eco-friendly and resource-efficient method of food production.
The origins of aquaponics can be traced back to the ancient Aztecs in Mexico, who practiced a form of this method by cultivating crops on floating rafts in nutrient-rich waters. In modern times, researchers like Dr. Mark McMurtry and Dr. James Rakocy further developed and popularized aquaponics during the 1970s and 1980s.
In practice, fish are raised in a tank, and their waste releases ammonia. Beneficial bacteria then convert the ammonia into nitrites and nitrates, which serve as essential nutrients for the plants. As the plants take up these nutrients, they cleanse the water, which is recirculated back to the fish tank, completing the sustainable loop.
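The loop described above can be sketched as a simple mass balance. The following Python snippet is an illustrative toy model only; the rate constants, stocking density, and plant-uptake cap are hypothetical values chosen to show the feedback structure (fish to ammonia, bacteria to nitrate, plants back to clean water), not measured parameters of any real system.

```python
# Illustrative sketch only: a toy mass-balance model of the aquaponic nitrogen loop
# described above. All rate constants and quantities are hypothetical placeholders.

def simulate_aquaponics(days=30, fish_kg=10.0):
    ammonia = 0.0   # mg/L of ammonia-nitrogen in the tank
    nitrate = 5.0   # mg/L of nitrate-nitrogen available to plants
    history = []
    for day in range(days):
        excretion = 0.03 * fish_kg          # ammonia added per day (hypothetical rate)
        nitrified = 0.8 * ammonia           # fraction converted by bacteria each day
        uptake = min(0.5 * nitrate, 4.0)    # plant uptake, capped by growth capacity
        ammonia += excretion - nitrified
        nitrate += nitrified - uptake
        history.append((day, round(ammonia, 2), round(nitrate, 2)))
    return history

if __name__ == "__main__":
    for day, nh3, no3 in simulate_aquaponics():
        print(f"day {day:2d}: ammonia {nh3} mg/L, nitrate {no3} mg/L")
```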
Vertical farming
Vertical farming has emerged as a solution for sustainable urban agriculture, enabling crops to be cultivated in vertically stacked layers or inclined surfaces, within controlled indoor environments. This approach maximizes space utilization and facilitates year-round cultivation, making it an ideal choice for densely populated urban areas with limited land availability.
The concept of vertical farming dates back to the early 20th century, but its recent popularity has surged due to the challenges posed by urbanization and the growing demand for sustainable food production. Vertical farms have gained significant traction globally as they offer solutions to overcome the limitations associated with traditional agriculture.
In practice, vertical farms employ advanced techniques such as hydroponics or aeroponics, allowing plants to grow without soil by using nutrient-rich water or air instead. By utilizing vertical space, these farms achieve higher crop yields per square foot compared to conventional farming methods. The integration of artificial lighting and sophisticated climate control systems ensures optimal conditions for crop growth throughout the year.
Singapore stands at the forefront of the vertical farming movement, embracing this technology-driven agriculture to address its limited land availability and secure food sustainability. As a densely populated city-state, Singapore's adoption of vertical farming shows how innovative approaches to agriculture can tackle the challenges of urban living while promoting sustainable food production. The obstacles that vertical farming must overcome include training and indoor-farming expertise, commercial feasibility, and resistance from urban residents and politicians.
Indoor farms
Indoor farming is a method that involves cultivating plants indoors, free from the constraints of traditional agriculture such as weather fluctuations and limited land availability.
The concept of indoor farming emerged as a solution to the challenges faced by conventional farming methods. With unpredictable weather patterns and urbanization taking up valuable arable land, indoor farming offers a sustainable alternative.
In practice, indoor farms utilize advanced techniques like hydroponics, aeroponics, or aquaponics to cultivate plants. These systems provide a soil-less environment, ensuring efficient use of resources and optimal plant growth. Climate control systems play a crucial role in maintaining the right conditions for crops, regulating temperature, humidity, and lighting. Artificial lighting, often powered by energy-efficient LED technology, ensures that plants receive the right light spectrum for photosynthesis, resulting in healthy and abundant harvests. The ultimate goal of such developments is to provide superior agricultural products that meet urban consumers' safety requirements.
Perspectives
Resource and economic
The Urban Agriculture Network has defined urban agriculture as:
An industry that produces, processes, and markets food, fuel, and other outputs, largely in response to the daily demand of consumers within a town, city, or metropolis, on many types of privately and publicly held land and water bodies found throughout intra-urban and peri-urban areas. Typically urban agriculture applies intensive production methods, frequently using and reusing natural resources and urban wastes, to yield a diverse array of land-, water-, and air-based fauna and flora, contributing to the food security, health, livelihood, and environment of the individual, household, and community.
With rising urbanization, food resources in urban areas are less accessible than in rural areas. This disproportionately affects the poorest communities, and the lack of food access and increased risk of malnutrition have been linked to socioeconomic inequities. Economic barriers to food access are linked to capitalist market structures and lead to "socioeconomic inequities in food choices", "less... healthful foods", and phenomena such as food deserts. Additionally, racialized systems of governance of urban poor communities facilitate the increasing prominence of issues such as unemployment, poverty, and limited access to health, educational, and social resources, including a community's access to healthy food.
Today, some cities have a great deal of vacant land due to urban sprawl and home foreclosures, and this land could be used to address food insecurity. One study of Cleveland estimated that the city could meet up to 100% of its fresh produce needs on such land, preventing up to $115 million in annual economic leakage. Similarly, the rooftop space of New York City could provide roughly twice the area needed to supply the city's green vegetable consumption. Space could be optimized further through hydroponic or indoor factory production of food, and growing gardens within cities would also cut down on food waste. Funding such projects would require financial capital in the form of private enterprise or government support.
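As a rough illustration of the kind of arithmetic behind such estimates, the sketch below computes what share of a city's fresh-produce demand a given area of vacant land could meet. The population, vacant area, yield, and per-capita consumption figures are hypothetical placeholders for illustration, not the values used in the Cleveland or New York studies.

```python
# Hypothetical back-of-the-envelope estimate of how much of a city's fresh-produce
# demand could be met from vacant land. All inputs are illustrative placeholders.

def produce_self_sufficiency(population, vacant_hectares,
                             yield_kg_per_hectare=20_000,
                             consumption_kg_per_person=120):
    """Return the fraction of annual fresh-produce demand covered (can exceed 1)."""
    supply = vacant_hectares * yield_kg_per_hectare   # kg grown per year
    demand = population * consumption_kg_per_person   # kg eaten per year
    return supply / demand

if __name__ == "__main__":
    share = produce_self_sufficiency(population=380_000, vacant_hectares=1_500)
    print(f"Estimated share of fresh-produce demand met: {share:.0%}")
```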
Environmental
The Council for Agricultural Science and Technology (CAST) defines urban agriculture to include aspects of environmental health, remediation, and recreation:
Urban agriculture is a complex system encompassing a spectrum of interests, from a traditional core of activities associated with the production, processing, marketing, distribution, and consumption, to a multiplicity of other benefits and services that are less widely acknowledged and documented. These include recreation and leisure; economic vitality and business entrepreneurship, individual health and well-being; community health and well being; landscape beautification; and environmental restoration and remediation.
Modern planning and design initiatives are often more responsive to this model of urban agriculture because they fit within the current scope of sustainable design. The definition allows for a multitude of interpretations across cultures and time. Frequently it is tied to policy decisions to build sustainable cities.
Urban farms also provide unique opportunities for individuals, especially those living in cities, to engage actively with ecological citizenship. By reconnecting people with food production and nature, urban community gardening teaches the skills necessary to participate in a democratic society: decisions about running the farm must be made at the group level, and the most effective results are achieved when community residents take on active roles in the farm.
Food security
Access to nutritious food, both economically and geographically, is another perspective in the effort to locate food and livestock production in cities. The tremendous influx of the world population to urban areas has increased the need for fresh and safe food. The Community Food Security Coalition (CFSC) defines food security as:
All persons in a community having access to culturally acceptable, nutritionally adequate food through local, non-emergency sources at all times.
Areas facing food security issues have limited choices, often relying on highly processed fast food or convenience store foods that are high in calories and low in nutrients, which may lead to elevated rates of diet-related illnesses such as diabetes. These problems have given rise to the concept of food justice, which Alkon and Norgaard (2009, p. 289) explain "places access to healthy, affordable, culturally appropriate food in the contexts of institutional racism, racial formation, and racialized geographies... Food justice serves as a theoretical and political bridge between scholarship and activism on sustainable agriculture, food insecurity, and environmental justice."
Some systematic reviews have already explored urban agriculture's contribution to food security and to other determinants of health outcomes.
Urban agriculture is part of a larger discussion of the need for alternative agricultural paradigms to address food insecurity, inaccessibility of fresh foods, and unjust practices on multiple levels of the food system; and this discussion has been led by different actors, including food-insecure individuals, farm workers, educators and academics, policymakers, social movements, organizations, and marginalized people globally.
The issue of food security is accompanied by the related movements of food justice and food sovereignty. These movements incorporate urban agriculture in how they address food-resources of a community. Food sovereignty, in addition to promoting food access, also seeks to address the power dynamics and political economy of food; it accounts for the embedded power structures of the food system, ownership of production, and decision-making on multiple levels (i.e. growing, processing, and distribution): Under this framework, representative decision-making and responsiveness to the community are core features.
Agroecological
Agroecology is a scientific framework, movement, and applied practice of agricultural management systems that seeks to achieve food sovereignty within food systems. In contrast to the dominant model of agriculture, agroecology emphasizes the importance of soil health by fostering connections between the diverse biotic and abiotic factors present. It prioritizes farmer and consumer well-being, traditional knowledge revival, and democratized learning systems. Transdisciplinarity and diversity of knowledge is a central theme to agroecology, so many urban agroecology initiatives address topics of social justice, gender empowerment, ecological sustainability, indigenous sovereignty, and public participation in addition to promoting food access. For example, agroecology has been integral to social movements surrounding public demand for sustainably grown food free from pesticides and other chemicals.
Under an agroecological framework, urban agriculture alleviates much more than simply food insecurity by also encouraging discourse about all facets of community wellness from physical and mental health to community connectedness. It has the potential to play a role as a "public space, as an economic development strategy, and as a community-organizing tool" while alleviating food insecurity.
Impact
In general, urban and peri-urban agriculture (UPA) contributes to food availability, particularly of fresh produce, provides employment and income, and can contribute to the food security and nutrition of urban dwellers.
Economic
Urban and peri-urban agriculture (UPA) expands the economic base of the city through the production, processing, packaging, and marketing of consumable products. This results in an increase in entrepreneurial activities and the creation of jobs, as well as reducing food costs and improving quality. UPA provides employment, income, and access to food for urban populations, which helps to relieve chronic and emergency food insecurity. Chronic food insecurity refers to less affordable food and growing urban poverty, while emergency food insecurity relates to breakdowns in the chain of food distribution. UPA plays an important role in making food more affordable and in providing emergency supplies of food. Research into market values for produce grown in urban gardens has attributed to a community garden plot a median yield value of between approximately $200 and $500 (US, adjusted for inflation).
Social
Urban agriculture can have a large impact on the social and emotional well-being of individuals. UA can have an overall positive effect on community health, which in turn shapes individuals' social and emotional well-being. Urban gardens are often places that facilitate positive social interaction, which also contributes to overall well-being. Urban agriculture sites have been noted as generally lowering crime rates in the neighborhoods where they are located. Many gardens facilitate the improvement of social networks within their communities, and for many neighborhoods gardens provide a "symbolic focus" that leads to increased neighborhood pride. Urban agriculture also increases community participation through diagnostic workshops and commissions organized around vegetable gardens, activities which can involve hundreds of people.
When individuals come together around UA, physical activity levels often increase, which can raise serotonin levels in a way comparable to exercising at a gym. Walking or biking to the gardens further increases physical activity and adds the benefits of being outdoors.
UPA can be seen as a means of improving the livelihood of people living in and around cities. Taking part in such practices is seen mostly as an informal activity, but in many cities where inadequate, unreliable, and irregular access to food is a recurring problem, urban agriculture has been a positive response to tackling food concerns. Due to the food security that comes with UA, feelings of independence and empowerment often arise. The ability to produce and grow food for oneself has also been reported to improve levels of self-esteem or of self-efficacy. Households and small communities take advantage of vacant land and contribute not only to their household food needs but also the needs of their resident city. The CFSC states that:
Community and residential gardening, as well as small-scale farming, save household food dollars. They promote nutrition and free cash for non-garden foods and other items. For example, raising chickens on an urban farm can provide fresh eggs for as little as $0.44 per dozen.
This allows families to generate larger incomes by selling to local grocers or at local outdoor markets while supplying their household with fresh, nutritious products. The recent popularity of farmers' markets has allowed for even larger incomes.
Some community urban farms can be quite efficient and help women find work, as women are in some cases marginalized from employment in the formal economy. Studies have shown that participation by women is associated with higher production rates, producing enough for household consumption while supplying more for market sale.
As most UA activities are conducted on vacant municipal land, concerns have been raised about the allocation of land and property rights. The IDRC and the FAO have published the Guidelines for Municipal Policymaking on Urban Agriculture, and are working with municipal governments to create successful policy measures that can be incorporated in urban planning.
Over a third of US households, roughly 42 million, participate in food gardening. Participation in farming by millennials also increased by 63% between 2008 and 2013, and the number of US households participating in community gardening tripled from 1 to 3 million in that time frame. Urban agriculture provides unique opportunities to bring diverse communities together, and it gives health care providers opportunities to interact with their patients, making each community garden a hub that reflects its community.
Energy efficiency
The current industrial agriculture system is responsible for high energy costs for the transportation of foodstuffs. According to a study by Rich Pirog, associate director of the Leopold Center for Sustainable Agriculture at Iowa State University, the average conventional produce item travels a long distance from farm to plate, consuming substantial fossil fuel along the way when shipped by tractor-trailer. The energy used to transport food is decreased when urban agriculture can provide cities with locally grown food. Pirog found that the traditional, non-local food distribution system used 4 to 17 times more fuel and emitted 5 to 17 times more carbon dioxide than local and regional transport.
Similarly, a study by Marc Xuereb and Region of Waterloo Public Health estimated that switching to locally grown food could save transport-related emissions equivalent to nearly 50,000 metric tons of carbon dioxide, or the equivalent of taking 16,191 cars off the road.
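The "cars off the road" comparison is a simple unit conversion. The sketch below shows the arithmetic: dividing the two figures cited above yields the per-car annual emission factor implied by the study, which can then be reused for other savings. The 10,000-tonne example at the end is purely hypothetical.

```python
# Minimal sketch of the unit conversion behind "equivalent cars off the road".
# The two constants are the figures cited above; the example saving is hypothetical.

SAVED_TONNES_CO2 = 50_000     # transport-related emissions saved (from the study above)
CARS_EQUIVALENT = 16_191      # cars-off-the-road equivalence (from the study above)

per_car = SAVED_TONNES_CO2 / CARS_EQUIVALENT
print(f"Implied emissions per car: {per_car:.2f} tonnes CO2 per year")

def cars_equivalent(saved_tonnes, tonnes_per_car=per_car):
    """Express any emissions saving as an equivalent number of cars removed."""
    return saved_tonnes / tonnes_per_car

print(f"A 10,000 t CO2 saving ~ {cars_equivalent(10_000):.0f} cars")
```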
In theory, growers save money, but most home systems run on household power and water supplies, so costs can vary depending on when and how the plants are irrigated.
Carbon footprint
As mentioned above, the energy-efficient nature of urban agriculture can reduce each city's carbon footprint by reducing the amount of transport required to deliver goods to the consumer. Growing areas can also act as carbon sinks, offsetting some of the carbon accumulation that is innate to urban areas, where pavement and buildings outnumber plants. Plants absorb atmospheric carbon dioxide (CO2) and release breathable oxygen (O2) through photosynthesis. Carbon sequestration can be further improved by combining agricultural techniques that increase removal of CO2 from the atmosphere and prevent its release during harvest time. However, this process relies heavily on the types of plants selected and the methodology of farming. Specifically, choosing plants that do not lose their leaves and remain green all year can increase the farm's ability to sequester carbon.
Reduction in ozone and particulate matter
The reduction in ozone and other particulate matter can benefit human health. Reducing these particulates and ozone gases could reduce mortality rates in urban areas and improve the health of those living in cities.
A 2011 article found that a rooftop containing 2000 m2 of uncut grass has the potential to remove up to 4000 kg of particulate matter and that one square meter of green roof is sufficient to offset the annual particulate matter emissions of a car.
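The figures cited above imply a simple per-square-metre removal rate, sketched below. This is only arithmetic on the cited numbers; the linear scaling to a smaller roof is an assumption made for illustration.

```python
# Arithmetic on the figures cited above: per-square-metre particulate removal implied
# by a 2,000 m2 uncut-grass roof removing up to 4,000 kg of particulate matter.

roof_area_m2 = 2_000
pm_removed_kg = 4_000

removal_per_m2 = pm_removed_kg / roof_area_m2
print(f"Implied removal: {removal_per_m2:.1f} kg of particulate matter per m2")

# Scaling to another roof size (illustrative only; assumes the rate scales linearly):
my_roof_m2 = 150
print(f"A {my_roof_m2} m2 roof could remove roughly {my_roof_m2 * removal_per_m2:.0f} kg")
```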
Soil decontamination
Vacant urban lots are often victims of illegal dumping of hazardous chemicals and other wastes. They are also liable to accumulate standing water and "grey water", which can be dangerous to public health, especially if left stagnant for long periods. The implementation of urban agriculture in these vacant lots can be a cost-effective method for removing these chemicals. In the process known as phytoremediation, plants and their associated microorganisms are selected for their ability to degrade, absorb, convert to an inert form, and remove toxins from the soil. Several classes of chemicals can be targeted for removal, including heavy metals (e.g. mercury and lead), inorganic compounds (e.g. arsenic and uranium), and organic compounds (e.g. petroleum and chlorinated compounds such as PCBs).
Phytoremediation is an environmentally friendly, cost-effective, and energy-efficient measure to reduce pollution, costing only about $5–$40 per ton of soil decontaminated. Implementing this process also reduces the amount of soil that must be disposed of in a hazardous waste landfill.
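To illustrate what the cited $5–$40 per ton range means for a single lot, the sketch below converts a plot size and treatment depth into a cost range. The lot dimensions, treatment depth, and soil bulk density are assumptions chosen for illustration, not figures from the sources above.

```python
# Illustrative cost estimate for phytoremediation using the $5-$40 per ton range above.
# Lot size, treatment depth, and soil bulk density are hypothetical assumptions.

lot_area_m2 = 500            # assumed vacant-lot footprint
depth_m = 0.3                # assumed depth of contaminated topsoil to treat
bulk_density_t_per_m3 = 1.4  # assumed soil bulk density (tonnes per cubic metre)

soil_tons = lot_area_m2 * depth_m * bulk_density_t_per_m3
low, high = 5 * soil_tons, 40 * soil_tons
print(f"~{soil_tons:.0f} tons of soil -> estimated cost ${low:,.0f} to ${high:,.0f}")
```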
Urban agriculture as a method of mediating chemical pollution can be effective in preventing the spread of these chemicals into the surrounding environment. Other methods of remediation often disturb the soil and force the chemicals it contains into the air or water. Plants can be used both to remove chemicals and to hold the soil in place, preventing erosion of contaminated soil and so decreasing the spread of pollutants and the hazard presented by these lots.
One way of identifying soil contamination is to use already well-established plants as bioindicators of soil health. Using well-studied plants is important because a substantial body of work has already tested them in various conditions, so responses can be verified with certainty. Such plants are also valuable because they are genetically identical to crop cultivars, as opposed to natural variants of the same species. Urban soil has typically had its topsoil stripped away, leaving soil with low aeration, porosity, and drainage. Typical measures of soil health are microbial biomass and activity, enzymes, soil organic matter (SOM), total nitrogen, available nutrients, porosity, aggregate stability, and compaction. A newer measurement is active carbon (AC), the most usable portion of the total organic carbon (TOC) in the soil, which contributes greatly to the functionality of the soil food web. Using common, well-studied crops as bioindicators can effectively test the quality of an urban farming plot before planting begins.
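As a loose illustration of how such screening metrics might be combined, the sketch below reports active carbon as a fraction of total organic carbon and flags a plot on two indicators. The function, its inputs, and both cut-off values are hypothetical placeholders, not agronomic standards from the sources above.

```python
# Illustrative sketch: summarising a plot's soil-health screening results, including
# active carbon (AC) as a fraction of total organic carbon (TOC). Both thresholds
# are hypothetical placeholders, not published agronomic standards.

def screen_plot(total_organic_carbon_pct, active_carbon_pct, aggregate_stability_pct):
    ac_fraction = active_carbon_pct / total_organic_carbon_pct
    flags = []
    if ac_fraction < 0.05:            # hypothetical cut-off for a sluggish soil food web
        flags.append("low active-carbon fraction")
    if aggregate_stability_pct < 30:  # hypothetical cut-off for poor soil structure
        flags.append("poor aggregate stability")
    return ac_fraction, flags

ac_frac, warnings = screen_plot(total_organic_carbon_pct=2.1,
                                active_carbon_pct=0.08,
                                aggregate_stability_pct=25)
print(f"AC/TOC = {ac_frac:.2%}; warnings: {warnings or 'none'}")
```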
Noise pollution
Large amounts of noise pollution not only lead to lower property values and high frustration; they can also damage human hearing and health. The study "Noise exposure and public health" found that exposure to continual noise is a public health problem, with documented effects including "hearing impairment, hypertension and ischemic heart disease, annoyance, sleep disturbance, and decreased school performance." Since most roofs and vacant lots consist of hard, flat surfaces that reflect sound waves instead of absorbing them, adding plants that can absorb these waves has the potential to reduce noise pollution substantially.
Nutrition and quality of food
Daily intake of a variety of fruits and vegetables is linked to a decreased risk of chronic diseases including diabetes, heart disease, and cancer. Urban agriculture is associated with increased consumption of fruits and vegetables which decreases risk for disease and can be a cost-effective way to provide citizens with quality, fresh produce in urban settings.
Produce from urban gardens can be perceived as more flavorful and desirable than store-bought produce, which may also lead to wider acceptance and higher intake. A Flint, Michigan study found that those participating in community gardens consumed fruits and vegetables 1.4 times more per day and were 3.5 times more likely to consume fruits or vegetables at least five times daily. Garden-based education can also yield nutritional benefits in children: an Idaho study reported a positive association between school gardens and increased intake of fruit, vegetables, vitamin A, vitamin C, and fiber among sixth graders. Harvesting fruits and vegetables initiates the enzymatic process of nutrient degradation, which is especially detrimental to water-soluble vitamins such as ascorbic acid and thiamin. Blanching produce in order to freeze or can it reduces nutrient content slightly, but not nearly as much as time spent in storage does. Harvesting produce from one's own community garden cuts storage times back significantly.
Urban agriculture also provides quality nutrition for low-income households. Studies show that every $1 invested in a community garden yields $6 worth of vegetables if labor is not considered a factor in investment. Many urban gardens reduce the strain on food banks and other emergency food providers by donating shares of their harvest and providing fresh produce in areas that otherwise might be food deserts. The supplemental nutrition program Women, Infants and Children (WIC) as well as the Supplemental Nutrition Assistance Program (SNAP) have partnered with several urban gardens nationwide to improve the accessibility to produce in exchange for a few hours of volunteer gardening work.
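The $1-to-$6 figure cited above is a simple return ratio. The sketch below shows the arithmetic for one season's investment, excluding labor as the cited studies do; the seasonal spending amount is a hypothetical example value.

```python
# Arithmetic behind the cited "$1 invested -> $6 of vegetables" ratio (labor excluded).
# The seasonal investment amount is a hypothetical example value.

RETURN_RATIO = 6.0          # dollars of vegetables per dollar invested (from studies above)
season_investment = 120.0   # hypothetical spend on seeds, compost, water, and tools

value_of_harvest = season_investment * RETURN_RATIO
net_benefit = value_of_harvest - season_investment
print(f"${season_investment:.0f} invested -> ~${value_of_harvest:.0f} of vegetables "
      f"(net ~${net_benefit:.0f}, labor not counted)")
```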
Urban farming has been shown to improve health outcomes. Gardeners consume twice as much fruit and vegetables as non-gardeners, and levels of physical activity are also positively associated with urban farming. These results arise partly indirectly and are supported by the social involvement that comes with being a member of a community farm. This social involvement helps raise the aesthetic appeal of the neighborhood, boosting the motivation and efficacy of the community as a whole, and this increased efficacy has been shown to increase neighborhood attachment. The positive health outcomes of urban farming can therefore be explained in part by the interpersonal and social factors that boost health. Focusing on improving aesthetics and community relationships, and not only on plant yield, is the best way to maximize the positive effect of urban farms on a neighborhood.
Economy of scale
Using high-density urban farming, such as vertical farms or stacked greenhouses, many environmental benefits can be achieved on a citywide scale that would otherwise be impossible. These systems not only provide food but can also produce potable water from wastewater and recycle organic waste back into energy and nutrients. At the same time, they can reduce food-related transportation to a minimum while providing fresh food for large communities in almost any climate.
Health inequalities and food justice
A 2009 report by the USDA determined that "evidence is both abundant and robust enough for us to conclude that Americans living in low-income and minority areas tend to have poor access to healthy food", and that the "structural inequalities" in these neighborhoods "contribute to inequalities in diet and diet-related outcomes". These diet-related outcomes, including obesity and diabetes, have become epidemic in low-income urban environments in the United States. Although the definition and methods for determining "food deserts" have varied, studies indicate that, at least in the United States, there are racial disparities in the food environment. Thus, using the definition of environment as the place where people live, work, play and pray, food disparities become an issue of environmental justice. This is especially true in American inner cities, where a history of racist practices has contributed to the development of food deserts in the low-income, minority areas of the urban core. The issue of inequality is so integral to the issues of food access and health that the Growing Food & Justice for All Initiative was founded with the mission of "dismantling racism" as an integral part of creating food security.
Not only can urban agriculture provide healthy, fresh food options, but also can contribute to a sense of community, aesthetic improvement, crime reduction, minority empowerment and autonomy, and even preserve culture through the use of farming methods and heirloom seeds preserved from areas of origin.
Environmental justice
Urban agriculture may advance environmental justice and food justice for communities living in food deserts. First, urban agriculture may reduce racial and class disparities in access to healthy food. When urban agriculture leads to locally grown fresh produce being sold at affordable prices in food deserts, healthy food is no longer available only to those who live in wealthy areas, leading to greater equity between rich and poor neighborhoods.
Improved access to food through urban agriculture can also help alleviate psychosocial stresses in poor communities. Community members engaged in urban agriculture improve local knowledge about healthy ways to fulfill dietary needs. Urban agriculture can also better the mental health of community members. Buying and selling quality products to local producers and consumers allows community members to support one another, which may reduce stress. Thus, urban agriculture can help improve conditions in poor communities, where residents experience higher levels of stress due to a perceived lack of control over the quality of their lives.
Urban agriculture may improve the livability and built environment of communities that lack supermarkets and other infrastructure due to high unemployment caused by deindustrialization. Urban farmers who follow sustainable agricultural methods can not only help to build local food system infrastructure, but can also contribute to improving local air, water, and soil quality. Urban farming serves as one type of green space in urban areas, and it has a positive impact on air quality in the surrounding area: a case study conducted on a rooftop farm found that the PM2.5 concentration in the urban farming area was 7–33% lower than in surrounding parts of the city without green space. When agricultural products are produced locally within the community, they do not need to be transported, which reduces emissions and other pollutants that contribute to high rates of asthma in lower socioeconomic areas. Sustainable urban agriculture can also promote worker protection and consumer rights. For example, communities in New York City, Illinois, and Richmond, Virginia, have demonstrated improvements to their local environments through urban agricultural practices.
However, urban agriculture can also present urban growers with health risks if the soil used for urban farming is contaminated. Lead contamination is particularly common, with hazardous levels of lead found in soil in many United States cities. High lead levels in soil originate from sources including flaking lead paint which was widely used before being banned in the 1970s, vehicle exhaust, and atmospheric deposition. Without proper education on the risks of urban farming and safe practices, urban consumers of urban agricultural produce may face additional health-related issues.
Implementation
Creating a community-based infrastructure for urban agriculture means establishing local systems to grow and process food and transfer it from farmer to consumer.
To facilitate food production, cities have established community-based farming projects. Some projects collectively tend community farms on common land, much like the eighteenth-century Boston Common. One such community farm is the Collingwood Children's Farm in Melbourne, Australia. Other community garden projects use the allotment garden model, in which gardeners care for individual plots in a larger gardening area, often sharing a tool shed and other amenities. Seattle's P-Patch Gardens use this model, as did the South Central Farm in Los Angeles and the Food Roof Farm in St. Louis. Independent urban gardeners also grow food in individual yards and on roofs. Garden sharing projects seek to pair producers with land, typically residential yard space. Roof gardens allow urban dwellers to maintain green spaces in the city without having to set aside a tract of undeveloped land, and rooftop farms allow otherwise unused industrial roof space to be used productively, creating work and profit. Projects around the world seek to enable cities to become 'continuous productive landscapes' by cultivating vacant urban land and temporary or permanent kitchen gardens.
Food processing on a community level has been accommodated by centralizing resources in community tool sheds and processing facilities for farmers to share. The Garden Resource Program Collaborative based in Detroit has clustered tool banks: different areas of the city have tool banks where resources such as tools, compost, mulch, tomato stakes, seeds, and education can be shared and distributed to the gardeners in that cluster. Detroit's Garden Resource Program Collaborative also strengthens its gardening community by providing members with access to transplants; education on gardening, policy, and food issues; and connections between gardeners through workgroups, potlucks, tours, field trips, and cluster workdays. In Brazil, "Cities Without Hunger" has generated a public policy for the reconstruction of abandoned areas with food production and has improved the green areas of the community.
Farmers' markets, such as the farmers' market in Los Angeles, provide common ground where farmers can sell their products to consumers. Large cities tend to open their farmers' markets on the weekends and on one day in the middle of the week; for example, the farmers' market of Boulevard Richard-Lenoir in Paris, France, is open on Sundays and Thursdays. However, to create consumer reliance on urban agriculture and to establish local food production as a sustainable career for farmers, markets would have to be open regularly. The Los Angeles Farmers' Market, for example, is open seven days a week and has linked several local grocers together to provide different food products. The market's central location in downtown Los Angeles provides an ideal point of interaction between a diverse group of sellers and their consumers.
Benefits
The benefits of UA for cities that implement this practice are numerous. Cities' transformation from food consumers to generators of agricultural products contributes to sustainability, improved health, and poverty alleviation.
UA creates circular energy loops in which food is consumed in the same place it is produced, and waste is not exported to the peripheral rural areas.
Wastewater and organic solid waste can be transformed into resources for growing agriculture products: the former can be used for irrigation, the latter as fertilizer.
Vacant urban areas can be used for agriculture production instead of sitting unused.
Using wastewater for irrigation improves water management and increases the availability of fresh water for drinking and household consumption.
UPA can help to preserve bioregional ecologies from being transformed into cropland.
Urban agriculture saves energy (e.g., energy consumed in transporting food from rural to urban areas).
Local food production also allows savings in transportation costs, storage, and product loss, which results in a reduction in food costs.
UA improves the quality of the urban environment through greening and, thus, reduces pollution.
Urban agriculture also makes the city a healthier place to live by improving the quality of the environment.
UPA is a very effective tool for fighting hunger and malnutrition since it facilitates access to food for an impoverished sector of the urban population.
A large part of urban agriculture involves the urban poor. In developing countries, the majority of urban agricultural production is for self-consumption, with surpluses sold in the market. According to the FAO (Food and Agriculture Organization of the United Nations), poor urban consumers spend 60–80% of their income on food, making them vulnerable to higher food prices.
UPA provides food and creates savings in household consumable expenditures, thus increasing the amount of income allocated to other uses.
UPA surpluses can be sold in local markets, generating more income for the urban poor.
Community centers and gardens educate the community to see agriculture as an integral part of urban life. The Florida House Institute for Sustainable Development in Sarasota, Florida serves as a public community and education center where innovators with sustainable, energy-saving ideas can implement and test them. Community centers like Florida House integrate agriculture into the urban lifestyle by providing centralized urban areas to learn about urban agriculture and food production.
Urban farms also are a proven effective educational tool to teach kids about healthy eating and meaningful physical activity.
Trade-offs
Space is at a premium in cities and is accordingly expensive and difficult to secure.
The utilization of untreated wastewater for urban agricultural irrigation can facilitate the spread of waterborne diseases among the human population.
Although studies have demonstrated improved air quality in urban areas related to the proliferation of urban gardens, it has also been shown that increasing urban pollution (related specifically to a sharp rise in the number of automobiles on the road) has led to an increase in insect pests, which consume plants produced by urban agriculture. Changes to the physical structure of the plants themselves, which have been correlated with increased levels of air pollution, are believed to increase the plants' palatability to insect pests. Reduced yields within urban gardens decrease the amount of food available for human consumption.
Studies indicate that the nutritional quality of wheat suffers when urban wheat plants are exposed to high nitrogen dioxide and sulfur dioxide concentrations. This problem is particularly acute in the developing world, where outdoor concentrations of sulfur dioxide are high and large percentages of the population rely upon urban agriculture as a primary source of food. These studies have implications for the nutritional quality of other staple crops that are grown in urban settings.
Agricultural activities on land that is contaminated (with such metals as lead) pose potential risks to human health. These risks are associated both with working directly on contaminated land and with consuming food that was grown in contaminated soil.
Municipal greening policy goals can pose conflicts. For example, policies promoting urban tree canopy are not sympathetic to vegetable gardening because of the deep shade cast by trees. However, some municipalities like Portland, Oregon, and Davenport, Iowa are encouraging the implementation of fruit-bearing trees (as street trees or as park orchards) to meet both greening and food production goals.
| Technology | Agriculture_2 | null |
450636 | https://en.wikipedia.org/wiki/Quartzite | Quartzite | Quartzite is a hard, non-foliated metamorphic rock which was originally pure quartz sandstone. Sandstone is converted into quartzite through heating and pressure usually related to tectonic compression within orogenic belts. Pure quartzite is usually white to grey, though quartzites often occur in various shades of pink and red due to varying amounts of hematite. Other colors, such as yellow, green, blue and orange, are due to other minerals.
The term quartzite is also sometimes used for very hard but unmetamorphosed sandstones that are composed of quartz grains thoroughly cemented with additional quartz. Such sedimentary rock has come to be described as orthoquartzite to distinguish it from metamorphic quartzite, which is sometimes called metaquartzite to emphasize its metamorphic origins.
Quartzite is very resistant to chemical weathering and often forms ridges and resistant hilltops. The nearly pure silica content of the rock provides little material for soil; therefore, the quartzite ridges are often bare or covered only with a very thin layer of soil and little (if any) vegetation. Some quartzites contain just enough weather-susceptible nutrient-bearing minerals such as carbonates and chlorite to form a loamy, fairly fertile though shallow and stony soil.
Quartzite has been used since prehistoric times for stone tools. It is presently used for decorative dimension stone, as crushed stone in highway construction, and as a source of silica for production of silicon and silicon compounds.
Characteristics and origin
Quartzite is a very hard rock composed predominantly of an interlocking mosaic of quartz crystals. The grainy, sandpaper-like surface is glassy in appearance. Minor amounts of former cementing materials, iron oxide, silica, carbonate and clay, often migrate during recrystallization, causing streaks and lenses to form within the quartzite. To be classified as a quartzite by the British Geological Survey, a metamorphic rock must contain at least 80% quartz by volume.
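A minimal sketch of the BGS threshold mentioned above, assuming the only inputs are a modal quartz percentage and whether the rock is metamorphic; the helper function and sample names are illustrative, not part of any published classification software.

```python
# Illustrative helper encoding the British Geological Survey criterion cited above:
# a metamorphic rock with at least 80% quartz by volume is classified as quartzite.

def is_quartzite(quartz_volume_percent: float, is_metamorphic: bool = True) -> bool:
    """Return True if the rock meets the BGS quartzite criterion described above."""
    return is_metamorphic and quartz_volume_percent >= 80.0

for sample, pct in [("ridge sample A", 93.5), ("ridge sample B", 72.0)]:
    print(sample, "-> quartzite" if is_quartzite(pct) else "-> not quartzite")
```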
Quartzite is commonly regarded as metamorphic in origin. When sandstone is subjected to the great heat and pressure associated with regional metamorphism, the individual quartz grains recrystallize along with the former cementing material. Most or all of the original texture and sedimentary structures of the sandstone are erased by the metamorphism. The recrystallized quartz grains are roughly equal in size, forming what is called a granoblastic texture, and they also show signs of metamorphic annealing, in which the grains become coarser and acquire a more polygonal texture. The grains are so tightly interlocked that when the rock is broken, it fractures through the grains to form an irregular or conchoidal fracture.
Geologists had recognized by 1941 that some rocks show the macroscopic characteristics of quartzite, even though they have not undergone metamorphism at high pressure and temperature. These rocks have been subject only to the much lower temperatures and pressures associated with diagenesis of sedimentary rock, but diagenesis has cemented the rock so thoroughly that microscopic examination is necessary to distinguish it from metamorphic quartzite. The term orthoquartzite is used to distinguish such sedimentary rock from metaquartzite produced by metamorphism. By extension, the term orthoquartzite has occasionally been more generally applied to any quartz-cemented quartz arenite. Orthoquartzite (in the narrow sense) is often 99% SiO2 with only very minor amounts of iron oxide and trace resistant minerals such as zircon, rutile and magnetite. Although few fossils are normally present, the original texture and sedimentary structures are preserved.
The typical distinction between a true orthoquartzite and an ordinary quartz sandstone is that an orthoquartzite is so highly cemented that it will fracture across grains, not around them. This is a distinction that can be recognized in the field. In turn, the distinction between an orthoquartzite and a metaquartzite is the onset of recrystallization of existing grains. The dividing line may be placed at the point where strained quartz grains begin to be replaced by new, unstrained, small quartz grains, producing a mortar texture that can be identified in thin sections under a polarizing microscope. With increasing grade of metamorphism, further recrystallization produces foam texture, characterized by polygonal grains meeting at triple junctions, and then porphyroblastic texture, characterized by coarse, irregular grains, including some larger grains (porphyroblasts).
Occurrence
In the United States, formations of quartzite can be found in some parts of Pennsylvania, the Washington DC area, eastern South Dakota, Central Texas, southwest Minnesota, Devil's Lake State Park in the Baraboo Range in Wisconsin, the Wasatch Range in Utah, near Salt Lake City, Utah and as resistant ridges in the Appalachians and other mountain regions. Quartzite is also found in the Morenci Copper Mine in Arizona. The town of Quartzsite in western Arizona derives its name from the quartzites in the nearby mountains in both Arizona and Southeastern California. A glassy vitreous quartzite has been described from the Belt Supergroup in the Coeur d’Alene district of northern Idaho.
In Canada, the La Cloche Mountains in Ontario are composed primarily of white quartzite. Vast areas of Nova Scotia are underlain by quartzite.
Paleoproterozoic quartzite-rhyolite successions are common in the Precambrian basement rock of western North America. The quartzites in these successions are interpreted as sedimentary beds deposited atop older greenstone belts. The quartzite-rhyolite successions may record the formation of back-arc basins along the margin of Laurentia, the ancient core of North America, between episodes of mountain building during the assembly of the continent. The quartzites are often nearly pure quartz, which is puzzling for sediments which must have eroded from igneous rock. Their purity may reflect unusual conditions of chemical weathering, at a time when the Earth's atmosphere was beginning to be oxygenated.
In Ireland areas of quartzite are found across the west and northwest, with Errigal in County Donegal as the most prominent outcrop. A good example of a quartzite area is on the Corraun Peninsula in County Mayo, which has a very thin layer of Irish Atlantic Bog covering it.
In the United Kingdom, a craggy ridge of quartzite called the Stiperstones (early Ordovician – Arenig Epoch, 500 Ma) runs parallel with the Pontesford-Linley fault, 6 km north-west of the Long Mynd in south Shropshire. Also to be found in England are the Cambrian "Wrekin quartzite" (in Shropshire), and the Cambrian "Hartshill quartzite" (Nuneaton area). In Wales, Holyhead Mountain and most of Holy island off Anglesey sport excellent Precambrian quartzite crags and cliffs. In the Scottish Highlands, several mountains (e.g. Foinaven, Arkle) composed of Cambrian quartzite can be found in the far north-west Moine Thrust Belt running in a narrow band from Loch Eriboll in a south-westerly direction to Skye.
In continental Europe, various regionally isolated quartzite deposits exist at surface level in a belt from the Rhenish Massif and the German Central Highlands into the western Czech Republic, for example in the Taunus and Harz mountains. In Poland, quartzite deposits occur at surface level in the Świętokrzyskie Mountains. In Norway, deposits are quarried near Austertana, one of the largest quartzite quarries in the world by annual output, and at Mårnes near Sandhornøya. Deposits are also quarried in Kragerø Municipality, and several other deposits are known but not actively quarried.
The highest mountain in Mozambique, Monte Binga (2436 m), as well as the rest of the surrounding Chimanimani Plateau are composed of very hard, pale grey, Precambrian quartzite. Quartzite is also mined in Brazil for use in kitchen countertops.
Uses
Quartzite is a decorative stone and may be used to cover walls, as roofing tiles, as flooring, and stairsteps. Its use for countertops in kitchens is expanding rapidly. It is harder and more resistant to stains than granite. Crushed quartzite is sometimes used in road construction. High purity quartzite is used to produce ferrosilicon, industrial silica sand, silicon and silicon carbide.
During the Paleolithic, quartzite was used, along with flint, quartz, and other lithic raw materials, for making stone tools. Prehistoric humans in the southeastern United States often made mortars out of quartzite stones.
Safety
As quartzite is a form of silica, it is a possible cause for concern in various workplaces. Cutting, grinding, chipping, sanding, drilling, and polishing natural and manufactured stone products can release hazardous levels of very small, crystalline silica dust particles into the air that workers breathe. Crystalline silica of respirable size is a recognized human carcinogen and may lead to other diseases of the lungs such as silicosis and pulmonary fibrosis.
Etymology
The term quartzite is derived from .
Gallery
| Physical sciences | Metamorphic rocks | Earth science |
450703 | https://en.wikipedia.org/wiki/Display%20device | Display device | A display device is an output device for presentation of information in visual or tactile form (the latter used for example in tactile electronic displays for blind people). When the input information that is supplied has an electrical signal the display is called an electronic display.
Common applications for electronic visual displays are television sets or computer monitors.
Types of electronic displays
In use
These are the technologies used to create the various displays in use today.
Mechanical types
Ticker tape (historical)
Split-flap display (or simply flap display)
Flip-disc display (or flip-dot display)
Vane display
Rollsign
Tactile electronic displays are usually intended for the blind. They use electro-mechanical parts to dynamically update a tactile image (usually of text) so that the image may be felt by the fingers.
Optacon, using metal rods instead of light in order to convey images to blind people by tactile sensation.
| Technology | Media and communication: Basics | null |
450749 | https://en.wikipedia.org/wiki/Sea%20breeze | Sea breeze | A sea breeze or onshore breeze is any wind that blows from a large body of water toward or onto a landmass. By contrast, a land breeze or offshore breeze is any wind that blows from a landmass toward or onto a large body of water. Sea breezes and land breezes are both important factors in coastal regions' prevailing winds.
Sea breeze and land breeze develop due to differences in air pressure created by the differing heat capacities of water and dry land. As such, sea breezes and land breezes are more localised than prevailing winds. Since land heats up much faster than water under solar radiation, a sea breeze is a common occurrence along coasts after sunrise. On the other hand, dry land also cools faster than water without solar radiation, so the wind instead flows from the land towards the sea when the sea breeze dissipates after sunset.
The land breeze at nighttime is usually shallower than the sea breeze in daytime. Unlike the daytime sea breeze, which is driven by convection, the nighttime land breeze is driven by convergence.
The term offshore wind refers to any wind over open water, which is related to but not synonymous with offshore breeze.
Causes
Sea breeze
The sea has a greater heat capacity than land, so the surface of the sea warms up more slowly than the surface of the land. As the temperature of the land surface rises, the land heats the air above it by convection. The hypsometric equation states that the thickness of an atmospheric layer between two pressure levels is proportional to its mean temperature, so hydrostatic pressure decreases less rapidly with altitude in the warmer column over the land. Because the air above the coast then has a relatively higher pressure aloft, it starts moving towards the sea at high altitude, which creates a compensating return flow near the ground from sea to land. The strength of the sea breeze is directly proportional to the temperature difference between the land and the sea. If a sufficiently strong offshore wind is present and opposing the direction of a possible sea breeze, the sea breeze is not likely to develop.
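The reasoning above can be made explicit with the hypsometric equation, which relates the thickness of a layer between two pressure levels to its mean temperature; the form below is the standard textbook expression, with symbols defined afterwards.

$$
z_2 - z_1 = \frac{R_d\,\bar{T}_v}{g}\,\ln\!\left(\frac{p_1}{p_2}\right)
$$

Here $z_2 - z_1$ is the thickness of the layer between the lower pressure level $p_1$ and the upper pressure level $p_2$, $R_d$ is the specific gas constant for dry air, $\bar{T}_v$ is the layer-mean virtual temperature, and $g$ is the gravitational acceleration. Because $\bar{T}_v$ is higher over the heated land, the same pressure drop spans a thicker layer there, so at a fixed height aloft the pressure over land exceeds that over the sea, driving the upper-level flow seaward and the compensating sea breeze at the surface.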
Land breeze
At night, the land cools off faster than the ocean due to differences in their heat capacity, which forces the dying of the daytime sea breeze as the temperature of the land approaches that of the ocean. If the land becomes cooler than the adjacent sea surface temperature, the air pressure over the water will be lower than that of the land, setting up a land breeze blowing from the land to the sea, as long as the environmental surface wind pattern is not strong enough to oppose it.
Effects
Sea breeze
A sea-breeze front is a weather front created by a sea breeze, also known as a convergence zone. The cold air from the sea meets the warmer air from the land and creates a boundary like a shallow cold front. When powerful, this front creates cumulus clouds, and if the air is humid and unstable, the front can sometimes trigger thunderstorms. If the flow aloft is aligned with the direction of the sea breeze, places experiencing the sea-breeze frontal passage will have benign, or fair, weather for the remainder of the day. At the front, warm air continues to flow upward and cold air continually moves in to replace it, and so the front moves progressively inland. Its speed depends on whether it is assisted or hampered by the prevailing wind and on the strength of the thermal contrast between land and sea. At night, the sea breeze usually changes to a land breeze, due to a reversal of the same mechanisms.
Sea breezes in Florida
Thunderstorms caused by powerful sea breeze fronts frequently occur in Florida, a peninsula bounded on the east and west by the Atlantic Ocean and the Gulf of Mexico, respectively. During the wet season, which typically lasts from June through September or October, winds from almost any direction blow in off the water, making Florida the place most often struck by lightning in the United States and one of the most lightning-prone places on Earth. These storms can also produce significant hail due to the tremendous updrafts they cause in the atmosphere, especially when the upper atmosphere is cooler, such as during the spring or fall.
On calm summer afternoons with little prevailing wind, sea breezes from both coasts may collide in the middle, creating especially severe storms down the center of the state. These thunderstorms can drift towards either the west or east coast depending on the relative strengths of the sea breezes, and sometimes survive to move out over the water at night, creating spectacular cloud-to-cloud lightning shows for hours after sunset. Due to its large size Lake Okeechobee may also contribute to this activity by creating its own lake breeze which collides with the east and west coast sea breezes.
In Cuba similar sea breeze collisions with the northern and southern coasts sometimes lead to storms.
Sea breezes in Southeast Australia
In the southeast Australian states of New South Wales and Victoria, an intense sea breeze called the southerly buster causes an abrupt, squally southerly wind change with strong gusts in coastal cities from Sydney in New South Wales south to Mallacoota and Melbourne in Victoria. Approaching from the southeast, usually on a hot day, it brings cool, often severe weather and a dramatic temperature drop, ultimately replacing and relieving the prior hot conditions. Marking the boundary between hot and cool air masses, the southerly buster is sometimes accompanied by a roll cloud perpendicular to the coast.
The southerly buster is caused by the interaction of a shallow cold front with the Great Dividing Range, the blocking mountain range that runs parallel to the coast, as the cool air becomes trapped against the ranges. The mountains create a channelling effect as the southerly gale-force winds move along the New South Wales coast, and frictional contrasts between the mainland and the ocean decouple the flow. Temperature changes can be dramatic, with sharp falls often occurring in a few minutes.
Land breeze
A land breeze, consisting of cool air flowing from the land, pushes the warmer air upwards over the sea. If there is sufficient moisture and instability available, the land breeze can cause showers, or even thunderstorms, over the water. Overnight thunderstorm development offshore due to the land breeze can be a good predictor of the activity on land the following day, as long as no changes to the weather pattern are expected over the following 12–24 hours; this is mainly because the land breeze is weaker than the sea breeze. The land breeze dies once the land warms up again the next morning.
Utilisation
Wind farms are often situated near a coast to take advantage of the normal daily fluctuations of wind speed resulting from sea and land breezes. While many onshore and offshore wind farms do not rely on these winds, a nearshore wind farm is a type of offshore wind farm located in shallow coastal waters to take advantage of both sea and land breezes. For practical reasons, other offshore wind farms are situated further out to sea and rely on prevailing winds rather than sea breezes.
| Physical sciences | Winds | Earth science |
451321 | https://en.wikipedia.org/wiki/Rhode%20Island%20Red | Rhode Island Red | The Rhode Island Red is an American breed of domestic chicken. It is the state bird of Rhode Island. It was developed there and in Massachusetts in the late nineteenth century, by cross-breeding birds of Oriental origin such as the Malay with brown Leghorn birds from Italy. It was a dual-purpose breed, raised both for meat and for eggs; modern strains have been bred for their egg-laying abilities. The traditional non-industrial strains of the Rhode Island Red are listed as "watch" (medium conservation priority, between "recovering" and "threatened") by The Livestock Conservancy. It is a separate breed to the Rhode Island White.
History
The Rhode Island Red was bred in Rhode Island and Massachusetts in the second half of the nineteenth century, by selective breeding of birds of Oriental origin such as the Cochin, Java, Malay and Shanghai with brown Leghorn birds from Italy. The characteristic deep red plumage derived from the Malay. The State of Rhode Island celebrated the centenary of the breed in 1954, when the Rhode Island Red Monument was raised at the William Tripp farm, in Little Compton, Rhode Island.
The name of the breed is ascribed either to Isaac Champlin Wilbour of Little Compton at an unknown date, or to a Mr. Jenny of the Southern Massachusetts Poultry Association in 1879 or 1880. In 1891 Nathaniel Borden Aldrich exhibited some as "Golden Buffs" in Rhode Island and in Philadelphia; they were first exhibited under the present name in 1895. They were previously also known as "John Macomber fowls" or "Tripp fowls."
The first breed standard was drawn up in 1898, and was approved by the American Rhode Island Red Club in Boston in 1901; the single-comb variety was admitted to the Standard of Perfection of the American Poultry Association in 1904, and the rose-comb in 1906.
Characteristics
The color of the plumage of the traditional Rhode Island red ranges from a lustrous deep red to almost black; the tail is mostly black. The comb may be either single or rose-comb; it is vivid red, as are the earlobes and wattles. The beak is a reddish horn color, the eyes are reddish bay, and the feet and legs are yellow, often with some red on the toes and sides of the shanks. Industrial strains may be smaller and paler in color than the old-type breed.
Use
The Rhode Island Red was developed as a dual-purpose breed, to provide both meat and eggs. Since about 1940, it has been selectively bred predominantly for egg-laying qualities, and the modern industrial Rhode Island Red is a layer breed. Rhode Island Reds have been used in the creation of many modern hybrid breeds.
The traditional dual-purpose "old-type" Rhode Island Red lays brown eggs and yields rich-flavored meat. It is included in the Ark of Taste of the Slow Food Foundation.
| Biology and health sciences | Chickens | Animals |
451687 | https://en.wikipedia.org/wiki/Ski%20lift | Ski lift | A ski lift is a mechanism for transporting skiers up a hill. Ski lifts are typically a paid service at ski resorts. The first ski lift was built in 1908 by German Robert Winterhalder in Schollach/Eisenbach, Hochschwarzwald.
Types
Aerial lifts transport skiers while suspended off the ground. Aerial lifts are often bicable ropeways, the "bi-" prefix indicating that separate cables perform the two different functions of carrying and pulling.
Aerial tramways
Chairlifts and detachable chairlifts
Funifors
Funitels
Gondola lifts
Hybrid lifts
Surface lifts, including T-bars, magic carpets, and rope tows.
Cable railways, including funiculars
Helicopters are used for heliskiing and snowcats for snowcat skiing: backcountry skiing or snowboarding accessed by a snowcat or helicopter instead of a lift, or by hiking. Cat skiing costs less than half as much as heliskiing; it is more expensive than a lift ticket but easier than ski touring. Cat skiing is guided. Skiing at select extreme resorts, such as Silverton Mountain, is also guided, even when skiing just off the lift.
Locations
Ski lifts are built in many parts of the world.
Extreme locations of outdoor ski lifts:
The northernmost is near Tromsø, Norway
The southernmost is near Ushuaia, Argentina
The closest to the equator in the northern hemisphere is near Liang, China
The closest to the equator in the southern hemisphere is near Mahlasela, Lesotho
| Technology | Rail and cable transport | null |